Why should a non-commutative operation even be called “multiplication”?











As per my knowledge and what was taught in school,

$a\times b$ is $a$ times $b$, or equivalently $b$ times $a$.

Obviously this is commutative, since $a$ times $b$ and $b$ times $a$ are the same thing. On the other hand, there are multiplications, such as vector multiplication and matrix multiplication, that are not commutative.

What does multiplication mean in general, for these? Or should they even be called multiplication?

Tags: number-theory, terminology, noncommutative-algebra, binary-operations
asked by mathaholic











• Much of the time it's tradition. But if you have an operation which is associative and distributes over addition (whatever "addition" means), then it's definitely worthy of the name "multiplication". – Arthur

• According to inc.com/bill-murphy-jr/…, kids can learn the material from grade 1 better if they start grade 1 at the age of 8. Maybe by starting so late, they could avoid having damaging past knowledge and be introduced to the concept of a natural number as a finite ordinal, and completely avoid being taught any concept of cardinal numbers until they're ready for it later. That's because it's much harder to learn to add by creating two groups, each of which has a certain number of objects, and then – Timothy

• counting the objects in the union, than directly from the inductive definition of natural number addition. Also, they should be taught only the bijective unary numeral system in grade 1 and how to add and multiply using it. They should also be guided on how to write a formal proof in a certain weak system of pure number theory and taught how to master that skill. They should later be tested on their ability to write a proof that natural number addition is associative and commutative and that natural number multiplication is also associative and commutative and distributes over addition, and – Timothy

• should not see the proof before they figure it out themselves. They should later be given an inductive definition of the decimal notation of each natural number that just says: to take the successor, you increase the last digit by 1 if it's not 9, or if it is 9, change it to 0 and change the string before the last digit to the one that represents the next natural number; they should then be left to figure out on their own a shortcut for adding or multiplying natural numbers given in decimal notation and writing the answer in decimal notation. That way, when they're taught the definition of matrix multiplication, they – Timothy

• @Timothy: If you're going to write four paragraphs in a sequence of comments you should post an answer instead. – Rahul
7 Answers

















Answer by badjohn (24 votes):













Terms in mathematics do not necessarily have absolute, universal definitions. The context is very important. It is common for a term to have similar but not identical meanings in multiple contexts, but it is also common for the meaning to differ considerably. It can even differ from author to author in the same context. To be sure of the meaning, you need to check the author's definition.

It might be tempting to demand that multiplication be commutative and that another term be used when it isn't, but that would break some nice patterns, such as the progression from the real numbers to the complex numbers to the quaternions.

In day to day life, multiplication is commutative, but only because day to day life deals only with real numbers. As you go deeper into maths, you will need to unlearn this assumption. It is very frequent that something called multiplication is not commutative.
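
To make the quaternion example concrete, here is a minimal Python sketch (the helper qmul and the (w, x, y, z) tuple convention are my own illustrative choices, not anything from the answer). It shows the classic failure of commutativity, $ij = k$ but $ji = -k$:

    # Quaternion product on tuples (w, x, y, z), just enough to see non-commutativity.
    def qmul(p, q):
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    i, j = (0, 1, 0, 0), (0, 0, 1, 0)
    print(qmul(i, j))  # (0, 0, 0, 1)  ->  k
    print(qmul(j, i))  # (0, 0, 0, -1) -> -k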






• (+1) It's a good answer, but my main reason for upvoting is the first two sentences. – José Carlos Santos

• @JoséCarlosSantos Thanks. I considered adding some examples, e.g. whether a ring has a multiplicative identity, and two very different meanings of "field". However, I suspected that these might not help. – badjohn

• Matrices are probably a more commonly encountered example of non-commutative multiplication than quaternions. – Henning Makholm

• @HenningMakholm, yes, and it seems that the OP first encountered non-commutative multiplication with matrices. I mentioned quaternions because the progression $\mathbb{R}$ to $\mathbb{C}$ to $\mathbb{H}$ was neater. – badjohn

• @Broman The "it" in that quote referred to maths in day to day life. Very few people use complex numbers in day to day life. Various structures in addition to the reals have commutative multiplication but are rarely encountered outside mathematics (or serious science). $\mathbb{Z}_n$ is also commutative but not often used in day to day life. There are 12 and 24 hour clocks, but multiplying two times is not common. – badjohn




















Answer by Asaf Karagila (6 votes):













In school you might also have learned that multiplication and addition are defined for "numbers", but vectors, matrices, and functions are not numbers, so why should they be allowed to be added or multiplied to begin with?

It turns out that sounds and letters form words which we use in context to convey information, usually in a concise manner.

In school, for example, most of your mathematical education is about the real numbers, maybe a bit about the complex numbers. Maybe you even learned to differentiate a function.

But did it ever occur to you that there are functions from $\Bbb R$ to itself such that for every $a<b$, the image of the function on $(a,b)$ is the entire set of real numbers?

If all the functions you dealt with in school were differentiable (or at least differentiable almost everywhere), why is that even a function? What does it even mean for something to be a function?

Well. These are questions that mathematicians dealt with a long time ago, and they decided that we should stick to definitions. So in mathematics we have explicit definitions, and we give them names so we don't have to repeat the definition each time. The phrase "Let $H$ be a Hilbert space over $\Bbb C$", for example, packs into those eight words an immense amount of knowledge that usually takes a long time to learn.

Sometimes, out of convenience, and out of the sheer love of generalizations, mathematicians take a word which has a "common meaning" and decide that it is good enough to be used in a different context and to mean something else. Germ, stalk, filter, sheaf, quiver, graph: these are all words that take a cue from the natural sense of the word and give it an explicit definition in context.

(And I haven't even talked about things which have little to no relation to their real-world meaning, e.g. a mouse in set theory.)

Multiplication is a word we use in context, and the context is usually an associative binary operation on some set. This set can consist of functions, matrices, sets, etc. If we require the operation to have a neutral element and admit inverses, we have a group; if we adjoin another operation which is associative, commutative, and admits inverses, and posit some distributivity laws, we get a ring; or a semi-ring; and so on and so forth.

But it is convenient to talk about multiplication, because it's a word, and in most cases the pedagogical development helps us grasp why in generalizations we might want to omit commutativity from this operation.
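
As a small illustration of "an associative binary operation on some set" that need not be commutative, here is a sketch using composition of functions (the names compose, f, and g are mine, chosen only for the example):

    # Composition of functions is an associative "product" with an identity,
    # but it is not commutative.
    def compose(f, g):
        return lambda x: f(g(x))

    f = lambda x: x + 1
    g = lambda x: 2 * x

    print(compose(f, g)(3))  # f(g(3)) = 7
    print(compose(g, f)(3))  # g(f(3)) = 8, so the two orders differ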






Answer (3 votes):













Terminology for mathematical structures is often built off of, and analogized to, terminology for "normal" mathematical structures such as integers, rational numbers, and real numbers. For vectors over real numbers, we already have addition defined for the coordinates. Applying this operation and adding components termwise results in a meaningful operation, and the natural terminology is to refer to that as simply "addition". Multiplying termwise results in an operation that isn't as meaningful (for one thing, this operation, unlike termwise addition, depends on the coordinate system). The cross product, on the other hand, is a meaningful operation, and it interacts with termwise addition in a manner similar to how real multiplication interacts with real addition. For instance, $(a+b)\times c = a\times c + b\times c$ (the distributive property).
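
A quick numeric spot-check of that distributivity claim (a sketch only; the vectors are arbitrary values picked for illustration):

    import numpy as np

    a, b, c = np.array([1., 2., 3.]), np.array([-1., 0., 4.]), np.array([2., 5., -2.])

    # The cross product distributes over termwise addition...
    print(np.allclose(np.cross(a + b, c), np.cross(a, c) + np.cross(b, c)))  # True
    # ...but it is not commutative (it is anti-commutative).
    print(np.allclose(np.cross(a, c), np.cross(c, a)))  # False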



For matrices, we again have termwise addition as a meaningful operation. Matrices represent linear operators, and the definition of linearity includes many of the properties of multiplication, such as distributivity: $A(u+v) = A(u) + A(v)$. Thus, it's natural to treat application of a linear operator as "multiplying" a vector by a matrix, and from there it's natural to define matrix multiplication as composition of the linear operators: $(AB)(v) = A(B(v))$.
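
A minimal numeric sketch of that last identity (illustrative matrices only): the matrix product agrees with composition of the corresponding linear maps, while the product itself is not commutative.

    import numpy as np

    A = np.array([[1., 2.], [0., 1.]])
    B = np.array([[0., -1.], [1., 0.]])
    v = np.array([3., 4.])

    # (AB)v equals A(Bv): matrix multiplication is composition of linear maps.
    print(np.allclose((A @ B) @ v, A @ (B @ v)))  # True
    # But AB and BA are generally different matrices.
    print(np.allclose(A @ B, B @ A))              # False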






• +1, indeed the common feature of operations called "multiplication" seems to be that they are binary operations that distribute (on both sides) over "addition". – Henning Makholm

• @HenningMakholm: I don't think that's quite the whole truth. For example, people talk about group "multiplication" even though there's only one operation in that setting. What does seem true is that there are lots of interesting examples of noncommutative operations which distribute over commutative operations, and few (or no?) examples of commutative operations which distribute over noncommutative operations, so the tendency is to analogize commutative things with addition and noncommutative things with multiplication. – Micah

• @Micah: Hmm, perhaps I'm projecting here, because the word "multiply" for arbitrary group operations has never sat well with me. (I'm cool with multiplicative notation, of course, and speaking about "product" and "factors" doesn't sound half as jarring as "multiply" either.) – Henning Makholm

• @HenningMakholm: Eh, fair enough. After I wrote my comment it occurred to me that you may be historically correct: if you're Galois and all the groups you like to think about consist of field automorphisms, group multiplication really does distribute over addition... – Micah


















Answer (1 vote):













The mathematical concept underneath the question is that of an operation.

In a very general setting, if $X$ is a set, then an operation on $X$ is just a function

$$X\times X\to X$$

usually denoted with multiplicative notation like $\cdot$ or $*$. This means that an operation takes two elements of $X$ and gives as a result an element of $X$ (exactly as when you take two numbers, say $3$ and $5$, and the result is $5*3=15$).

Examples of operations are the usual operations on real numbers: addition, subtraction, division (defined on the non-zero reals), multiplication, exponentiation.



Now, if you want to use operations in mathematics, you usually require properties that are useful in calculations. Here are the most common properties that an operation can have.

$1)$ Associativity. This means that $(a*b)*c=a*(b*c)$, which allows you to omit parentheses and write $a*b*c$. The usual addition and multiplication on real numbers are associative. Subtraction, division, and exponentiation are not. For example:
$$(5-2)-2=3-2=1\neq 5=5-(2-2)$$
$$(9/3)/3=3/3=1\neq 9=9/1=9/(3/3)$$
$$2^{(3^2)}=2^9=512\neq 64=8^2=(2^3)^2$$
A famous example of a non-associative multiplication often used in mathematics is that of the Cayley octonions.
A useful example of an associative operation is the composition of functions. Let $A$ be any set and $X$ be the set of all functions from $A$ to $A$, i.e. $X=\{f:A\to A\}$. The composition of two functions $f,g$ is the function $f*g$ defined by $(f*g)(a)=f(g(a))$. Clearly $(f*(g*h))(a)=f(g(h(a)))=((f*g)*h)(a)$. (A small numeric check follows below.)
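
Here is a minimal sketch of the associativity failures listed above, together with the associativity of composition (the names compose, f, g, h are my own, purely illustrative):

    # Subtraction and exponentiation are not associative; composition is.
    print((5 - 2) - 2, 5 - (2 - 2))          # 1 5
    print(2 ** (3 ** 2), (2 ** 3) ** 2)      # 512 64

    compose = lambda f, g: (lambda x: f(g(x)))
    f, g, h = (lambda x: x + 1), (lambda x: 2 * x), (lambda x: x ** 2)
    print(compose(f, compose(g, h))(3), compose(compose(f, g), h)(3))  # 19 19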



$2)$ Commutativity. This means that $a*b=b*a$. The usual sum and product of real numbers are commutative. The matrix product is not commutative, the composition of functions is not commutative in general (a rotation composed with a translation is not the same as the translation first and then the rotation; a small sketch of this follows below), and the vector (cross) product is not commutative. Non-commutativity of the composition of operators is at the basis of the Heisenberg uncertainty principle. Hamilton's quaternions are a useful structure used in mathematics; they have a multiplication which is associative but not commutative.
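
A sketch of the rotation-versus-translation remark (the specific maps, a quarter turn about the origin and a shift by $(1,0)$, are my own illustrative choices):

    import numpy as np

    def rotate(p):      # rotate a point 90 degrees about the origin
        return np.array([-p[1], p[0]])

    def translate(p):   # shift a point by (1, 0)
        return p + np.array([1.0, 0.0])

    p = np.array([1.0, 0.0])
    print(rotate(translate(p)))  # [0. 2.]
    print(translate(rotate(p)))  # [1. 1.]  -> the two orders give different points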



$3)$ Existence of a neutral element. This means that there is an element $e$ in $X$ such that $x*e=e*x=x$ for any $x$ in $X$. For addition the neutral element is $0$; for multiplication it is $1$. If you consider $X$ to be the set of even integers, then the usual multiplication is well-defined on $X$, but the neutral element does not exist in $X$ (it would be $1$, which is not in $X$).



$4)$ Existence of inverses. In case there is a neutral element $e\in X$, this means that for any $x\in X$ there exists $y$ such that $xy=yx=e$. Usually $y$ is denoted by $x^{-1}$. The inverse for the usual sum is $-x$; the inverse for the usual multiplication is $1/x$ (which exists only for non-zero elements). In the realm of matrices, there are many matrices that have no inverse, for instance the matrix
$\begin{pmatrix} 1 & 1\\ 1 & 1\end{pmatrix}$. (A quick numeric check follows below.)
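
A quick check that this particular matrix really has no inverse (a sketch using NumPy, nothing more):

    import numpy as np

    M = np.array([[1.0, 1.0], [1.0, 1.0]])
    print(np.linalg.det(M))  # 0.0, so M cannot be inverted
    try:
        np.linalg.inv(M)
    except np.linalg.LinAlgError as err:
        print("not invertible:", err)  # "Singular matrix"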



Such properties are important because they make an operation user-friendly. For example: is it true that if $a,b\neq 0$ then $a*b\neq 0$? This seems kind of obvious, but it depends on the properties of the operation. For example,
$\begin{pmatrix} 1 & 0\\ 3 & 0\end{pmatrix}\begin{pmatrix} 0 & 0\\ 1 & 2\end{pmatrix}=\begin{pmatrix} 0 & 0\\ 0 & 0\end{pmatrix}$, but both
$\begin{pmatrix} 1 & 0\\ 3 & 0\end{pmatrix}$ and $\begin{pmatrix} 0 & 0\\ 1 & 2\end{pmatrix}$ are different from zero. (The same product is checked numerically below.)
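
The same pair of matrices, checked numerically (illustration only):

    import numpy as np

    # Two non-zero matrices whose product is the zero matrix.
    P = np.array([[1, 0], [3, 0]])
    Q = np.array([[0, 0], [1, 2]])
    print(P @ Q)  # [[0 0]
                  #  [0 0]]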



Concluding, I would say that when a mathematician hears the word multiplication, they immediately think of an associative operation, usually (but not always) with a neutral element, and sometimes commutative.






• Another example of a non-associative multiplication is IEEE floating-point numbers, where intermediate results can underflow to zero or not, depending on the order of evaluation: small * small * big can be zero if the first two factors underflow, or roughly small if the last two factors multiply to about 1. (A numeric sketch follows after these comments.) – Martin Bonner

• Not to mention the cross product in $\mathbb R^3$. – Henning Makholm

• @MartinBonner: An interesting example, in that it is a non-associative approximation to an associative operation. I see in Wikipedia, incidentally, that it may be subnormal rather than zero, though I do not see exactly when. – PJTraill
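
A minimal sketch of the floating-point non-associativity mentioned in the comment above (the particular magnitudes 1e-200 and 1e200 are my own illustrative choices):

    # Grouping matters for floating-point products when an intermediate result underflows.
    small, big = 1e-200, 1e200
    print((small * small) * big)  # 0.0     -- small*small underflows to 0
    print(small * (small * big))  # 1e-200  -- small*big is 1.0, so nothing is lost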


















Answer (1 vote):














"$a*b$ is $a$ times $b$ or $b$ times $a$"

That is a theorem, not a definition, thank you very much. Of course the easiest proof is geometric: the former is an $a \times b$ rectangle, and if you rotate it you get a $b \times a$ rectangle. Proving it from axioms is a bit trickier, because you have to answer the question: if commutativity of multiplication is not an axiom, then what do we prove it from? The answer is mathematical induction, the idea that if something works "and so on" through all the numbers, then it works for all numbers; but it only applies in the integers, because other number systems don't really have an "and so on". (A sketch of the usual inductive setup follows below.)
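
For reference, the standard inductive setup being alluded to looks like this (a sketch in conventional notation, not the answer's own): multiplication on the natural numbers is defined by recursion,

$$m \cdot 0 = 0, \qquad m \cdot (n+1) = m \cdot n + m,$$

and one then proves by induction on $n$ (after some auxiliary lemmas about addition) that $m \cdot n = n \cdot m$ for all natural numbers $m, n$.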



So once you talk about multiplication outside of the integers, you lose all these good assurances that multiplication is commutative. In my opinion, that's really it: multiplication makes sense in a lot of places, and many of those places don't have a strong reason for it to be commutative.






Answer (0 votes):













It ought to be mentioned that multiplication in general is an action of a collection of transformations on a collection of objects, and multiplication of elements of a group is in fact a special case. This might have even been inherent in the original notion of multiplication; consider:

$3$ times as many chickens is not the same as "chicken times as many $3$s". (The latter makes no sense.)

Symbolically, if we let "$C$" stand for one chicken, and denote "$k$ times of $X$" by "$k·X$", then we have

$3·(k·C) = (3k)·C$ but not $3·(k·C) = (k·C)·3$ (since "$(k·C)·3$" does not make sense).

What is happening here? Even in many natural languages, numbers have been used in non-commutative syntax (in English, "three chickens" is correct while "chickens three" is wrong), because the multiplier and the multiplied are of different types (one is a countable noun phrase while the other is like a determiner).

In this special case of natural numbers, they actually act as a semi-ring on countable nouns:




$0·x = \text{nothing}$.

$1·x = x$.

$(km)·x = k·(m·x)$ for any natural numbers $k,m$ and countable noun $x$.

$(k+m)·x = (k·x)\text{ and }(m·x)$ for any natural numbers $k,m$ and countable noun $x$.

$k·(x\text{ and }y) = (k·x)\text{ and }(k·y)$ for any natural number $k$ and countable nouns $x,y$.




(English requires some details like pluralization and articles, but mathematically the multiplication is as stated.)

Observe that multiplication between natural numbers is commutative, but multiplication of natural numbers to countable nouns is not. So for example we have:

$3·(k·C) = (3k)·C = (k3)·C = k·(3·C)$.

And in mathematics you see the same kind of scalar multiplication in vector spaces, where again the term "scalar" (related to "scale") is not accidental. Again, multiplication of scalings to vectors is not commutative, but multiplication between scalings (which is simply composition) is commutative.
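
A tiny numeric sketch of the scalar-action point (arbitrary illustrative values): the scalars commute with each other through the action, even though "scalar times vector" is an action of one kind of object on another rather than a product of like objects.

    import numpy as np

    k, v = 3.0, np.array([1.0, -2.0, 0.5])
    print(np.allclose(2.0 * (k * v), (2.0 * k) * v))  # True
    print(np.allclose((2.0 * k) * v, k * (2.0 * v)))  # True: the scalars 2 and k commute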



More generally, even the transformations that act on the collection of objects may not have commutative composition. For example, operators on a collection $S$, namely functions from $S$ to $S$, act on $S$ via function application:

$(f∘g)(x) = f(g(x))$ for any $f,g : S \to S$ and $x \in S$.

And certainly composition of functions from $S$ to $S$ is not commutative. But maybe the "multiplication" name still stuck, as it did for numbers and scalings. That is why square matrix multiplication is defined precisely so that it is equivalent to their composition when acting on the vectors.






• I have chicken/6 of the number 3. :-) – Asaf Karagila


















Answer (0 votes):













Commutativity is not a fundamental part of multiplication. It is just a consequence of how it works in certain situations.

Here is an analogy. When you multiply two positive integers, $a$ and $b$, you can define multiplication as repeated addition. If we consider $a*b$, that means that we will repeatedly add $a$, and we will repeat this $b$ times. This is how multiplication is taught in the early years. (A tiny sketch of this definition follows below.)
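
A minimal sketch of that repeated-addition definition (the function name mul is mine; it only makes sense for a non-negative count b):

    def mul(a, b):
        """Multiply by repeated addition; only meaningful when b is a non-negative count."""
        assert b >= 0, "cannot repeat an addition a negative number of times"
        total = 0
        for _ in range(b):
            total += a
        return total

    print(mul(4, 3))  # 12
    # mul(4, -3) fails: the definition says nothing about repeating something -3 times.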



Now have a look at what happens when we use this on negative integers. Using the above definition, 4 * (-3) would mean that we repeatedly add the number 4. Nothing strange so far. But we will repeat this -3 times. Hold on, how can we do something a negative number of times? Why should we call this multiplication when it is not a repeated addition?

My point here is that when we transitioned from just the positive integers to include the negative integers, we lost something that we thought was fundamental. But it was never fundamental.

Let's have a look at matrix multiplication. Multiplication of real numbers can be seen as a special case of matrix multiplication, namely the case where the size of the matrices is 1x1. If you have the 1x1 matrices [a] and [b] and perform matrix multiplication between them, you get the matrix [ab].
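
And a one-line check of that special case (illustrative values): for 1x1 matrices the matrix product reduces to ordinary multiplication, and in this special case it is commutative again.

    import numpy as np

    a, b = np.array([[3.0]]), np.array([[5.0]])
    print(a @ b)                      # [[15.]]
    print(np.allclose(a @ b, b @ a))  # True: the 1x1 case is commutative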






        7 Answers
        7






        active

        oldest

        votes








        7 Answers
        7






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes








        up vote
        24
        down vote













        Terms in mathematics do not necessarily have absolute, universal definitions. The context is very important. It is common for a term to have similar but not identical meanings in multiple contexts but it is also common for the meaning to differ considerably. It can even differ from author to author in the same context. To be sure of the meaning, you need to check the author's definition.



        It might be tempting to demand that multiplication be commutative and another term be used when it isn't but that would break some nice patterns such as the real numbers, to the complex numbers, to the quaternions.



        In day to day life, multiplication is commutative but only because it deals only with real numbers. As you go deeper into maths, you will need to unlearn this assumption. It is very frequent that something called multiplication is not commutative.






        share|cite|improve this answer



















        • 1




          (+1) It's a good answer, but my main reason for upvoting are the first two sentences.
          – José Carlos Santos
          2 days ago










        • @JoséCarlosSantos Thanks. I considered adding some examples e.g. whether a ring has a multiplicative identity and two very different meanings of field. However, I suspected that these might not help.
          – badjohn
          2 days ago






        • 3




          Matrices are probably a more commonly encountered example of non-commutative multiplication than quaternions.
          – Henning Makholm
          2 days ago










        • @HenningMakholm, Yes and it seems that the OP first encountered non-commutative multiplication with matrices. I mentioned quaternions because the progression $mathbb{R}$ to $mathbb{C}$ to $mathbb{H}$ was neater.
          – badjohn
          2 days ago










        • @Broman The "it" in that quote referred to maths in day to day life. Very few people use complex numbers in day to day life. Various structures in addition to the reals have commutative multiplication but are rarely encountered outside mathematics (or serious science). $mathbb{Z}_n$ is also commutative but not often used in day to day life. There are 12 and 24 hour clocks but multiplying two times is not common.
          – badjohn
          2 days ago

















        up vote
        24
        down vote













        Terms in mathematics do not necessarily have absolute, universal definitions. The context is very important. It is common for a term to have similar but not identical meanings in multiple contexts but it is also common for the meaning to differ considerably. It can even differ from author to author in the same context. To be sure of the meaning, you need to check the author's definition.



        It might be tempting to demand that multiplication be commutative and another term be used when it isn't but that would break some nice patterns such as the real numbers, to the complex numbers, to the quaternions.



        In day to day life, multiplication is commutative but only because it deals only with real numbers. As you go deeper into maths, you will need to unlearn this assumption. It is very frequent that something called multiplication is not commutative.






        share|cite|improve this answer



















        • 1




          (+1) It's a good answer, but my main reason for upvoting are the first two sentences.
          – José Carlos Santos
          2 days ago










        • @JoséCarlosSantos Thanks. I considered adding some examples e.g. whether a ring has a multiplicative identity and two very different meanings of field. However, I suspected that these might not help.
          – badjohn
          2 days ago






        • 3




          Matrices are probably a more commonly encountered example of non-commutative multiplication than quaternions.
          – Henning Makholm
          2 days ago










        • @HenningMakholm, Yes and it seems that the OP first encountered non-commutative multiplication with matrices. I mentioned quaternions because the progression $mathbb{R}$ to $mathbb{C}$ to $mathbb{H}$ was neater.
          – badjohn
          2 days ago










        • @Broman The "it" in that quote referred to maths in day to day life. Very few people use complex numbers in day to day life. Various structures in addition to the reals have commutative multiplication but are rarely encountered outside mathematics (or serious science). $mathbb{Z}_n$ is also commutative but not often used in day to day life. There are 12 and 24 hour clocks but multiplying two times is not common.
          – badjohn
          2 days ago















        up vote
        24
        down vote










        up vote
        24
        down vote









        Terms in mathematics do not necessarily have absolute, universal definitions. The context is very important. It is common for a term to have similar but not identical meanings in multiple contexts but it is also common for the meaning to differ considerably. It can even differ from author to author in the same context. To be sure of the meaning, you need to check the author's definition.



        It might be tempting to demand that multiplication be commutative and another term be used when it isn't but that would break some nice patterns such as the real numbers, to the complex numbers, to the quaternions.



        In day to day life, multiplication is commutative but only because it deals only with real numbers. As you go deeper into maths, you will need to unlearn this assumption. It is very frequent that something called multiplication is not commutative.






        share|cite|improve this answer














        Terms in mathematics do not necessarily have absolute, universal definitions. The context is very important. It is common for a term to have similar but not identical meanings in multiple contexts but it is also common for the meaning to differ considerably. It can even differ from author to author in the same context. To be sure of the meaning, you need to check the author's definition.



        It might be tempting to demand that multiplication be commutative and another term be used when it isn't but that would break some nice patterns such as the real numbers, to the complex numbers, to the quaternions.



        In day to day life, multiplication is commutative but only because it deals only with real numbers. As you go deeper into maths, you will need to unlearn this assumption. It is very frequent that something called multiplication is not commutative.







        share|cite|improve this answer














        share|cite|improve this answer



        share|cite|improve this answer








        edited 2 days ago

























        answered 2 days ago









        badjohn

        4,1471620




        4,1471620








        • 1




          (+1) It's a good answer, but my main reason for upvoting are the first two sentences.
          – José Carlos Santos
          2 days ago










        • @JoséCarlosSantos Thanks. I considered adding some examples e.g. whether a ring has a multiplicative identity and two very different meanings of field. However, I suspected that these might not help.
          – badjohn
          2 days ago






        • 3




          Matrices are probably a more commonly encountered example of non-commutative multiplication than quaternions.
          – Henning Makholm
          2 days ago










        • @HenningMakholm, Yes and it seems that the OP first encountered non-commutative multiplication with matrices. I mentioned quaternions because the progression $mathbb{R}$ to $mathbb{C}$ to $mathbb{H}$ was neater.
          – badjohn
          2 days ago










        • @Broman The "it" in that quote referred to maths in day to day life. Very few people use complex numbers in day to day life. Various structures in addition to the reals have commutative multiplication but are rarely encountered outside mathematics (or serious science). $mathbb{Z}_n$ is also commutative but not often used in day to day life. There are 12 and 24 hour clocks but multiplying two times is not common.
          – badjohn
          2 days ago
















        • 1




          (+1) It's a good answer, but my main reason for upvoting are the first two sentences.
          – José Carlos Santos
          2 days ago










        • @JoséCarlosSantos Thanks. I considered adding some examples e.g. whether a ring has a multiplicative identity and two very different meanings of field. However, I suspected that these might not help.
          – badjohn
          2 days ago






        • 3




          Matrices are probably a more commonly encountered example of non-commutative multiplication than quaternions.
          – Henning Makholm
          2 days ago










        • @HenningMakholm, Yes and it seems that the OP first encountered non-commutative multiplication with matrices. I mentioned quaternions because the progression $mathbb{R}$ to $mathbb{C}$ to $mathbb{H}$ was neater.
          – badjohn
          2 days ago










        • @Broman The "it" in that quote referred to maths in day to day life. Very few people use complex numbers in day to day life. Various structures in addition to the reals have commutative multiplication but are rarely encountered outside mathematics (or serious science). $mathbb{Z}_n$ is also commutative but not often used in day to day life. There are 12 and 24 hour clocks but multiplying two times is not common.
          – badjohn
          2 days ago










        1




        1




        (+1) It's a good answer, but my main reason for upvoting are the first two sentences.
        – José Carlos Santos
        2 days ago




        (+1) It's a good answer, but my main reason for upvoting are the first two sentences.
        – José Carlos Santos
        2 days ago












        @JoséCarlosSantos Thanks. I considered adding some examples e.g. whether a ring has a multiplicative identity and two very different meanings of field. However, I suspected that these might not help.
        – badjohn
        2 days ago




        @JoséCarlosSantos Thanks. I considered adding some examples e.g. whether a ring has a multiplicative identity and two very different meanings of field. However, I suspected that these might not help.
        – badjohn
        2 days ago




        3




        3




        Matrices are probably a more commonly encountered example of non-commutative multiplication than quaternions.
        – Henning Makholm
        2 days ago




        Matrices are probably a more commonly encountered example of non-commutative multiplication than quaternions.
        – Henning Makholm
        2 days ago












        @HenningMakholm, Yes and it seems that the OP first encountered non-commutative multiplication with matrices. I mentioned quaternions because the progression $mathbb{R}$ to $mathbb{C}$ to $mathbb{H}$ was neater.
        – badjohn
        2 days ago




        @HenningMakholm, Yes and it seems that the OP first encountered non-commutative multiplication with matrices. I mentioned quaternions because the progression $mathbb{R}$ to $mathbb{C}$ to $mathbb{H}$ was neater.
        – badjohn
        2 days ago












        @Broman The "it" in that quote referred to maths in day to day life. Very few people use complex numbers in day to day life. Various structures in addition to the reals have commutative multiplication but are rarely encountered outside mathematics (or serious science). $mathbb{Z}_n$ is also commutative but not often used in day to day life. There are 12 and 24 hour clocks but multiplying two times is not common.
        – badjohn
        2 days ago






        @Broman The "it" in that quote referred to maths in day to day life. Very few people use complex numbers in day to day life. Various structures in addition to the reals have commutative multiplication but are rarely encountered outside mathematics (or serious science). $mathbb{Z}_n$ is also commutative but not often used in day to day life. There are 12 and 24 hour clocks but multiplying two times is not common.
        – badjohn
        2 days ago












        up vote
        6
        down vote













        In school you might have also learned that multiplication or addition are defined for "numbers", but vectors, matrices, and functions are not numbers, why should they be allowed to be added or multiplied to begin with?



        It turns out that sounds and letters form words which we use in context to convey information, usually in a concise manner.





        In school, for example, most of your mathematical education is about the real numbers, maybe a bit about the complex numbers. Maybe you even learned to derive a function.



        But did it ever occur to you that there are functions from $Bbb R$ to itself such that for every $a<b$, the image of the function on $(a,b)$ is the entire set of real numbers?



        If all functions you dealt with in school were differentiable (or at least almost everywhere), why is that even a function? What does it even mean for something to be a function?



        Well. These are questions that mathematicians dealt with a long time ago, and decided that we should stick to definitions. So in mathematics we have explicit definitions, and we give them names so we don't have to repeat the definition each time. The phrase "Let $H$ be a Hilbert space over $Bbb C$" packs in those eight words an immense amount of knowledge, that usually takes a long time to learn, for example.



        Sometimes, out of convenience, and out of the sheer love of generalizations, mathematicians take a word which has a "common meaning", and decide that it is good enough to be used in a different context and to mean something else. Germ, stalk, filter, sheaf, quiver, graph, are all words that take a cue from the natural sense of the word, and give it an explicit definition in context.



        (And I haven't even talked about things which have little to no relation to their real world meaning, e.g. a mouse in set theory.)



        Multiplication is a word we use in context, and the context is usually an associative binary operation on some set. This set can be of functions, matrices, sets, etc. If we require the operation to have a neutral element and admit inverses, we have a group; if we adjoin another operator which is also associative, admits inverses, and commutative and posit some distributivity laws, we get a ring; or a semi-ring; or so on and so forth.



        But it is convenient to talk about multiplication, because it's a word, and in most cases the pedagogical development helps us grasp why in generalizations we might want to omit commutativity from this operation.






        share|cite|improve this answer



























          up vote
          6
          down vote













          In school you might have also learned that multiplication or addition are defined for "numbers", but vectors, matrices, and functions are not numbers, why should they be allowed to be added or multiplied to begin with?



          It turns out that sounds and letters form words which we use in context to convey information, usually in a concise manner.





          In school, for example, most of your mathematical education is about the real numbers, maybe a bit about the complex numbers. Maybe you even learned to derive a function.



          But did it ever occur to you that there are functions from $Bbb R$ to itself such that for every $a<b$, the image of the function on $(a,b)$ is the entire set of real numbers?



          If all functions you dealt with in school were differentiable (or at least almost everywhere), why is that even a function? What does it even mean for something to be a function?



          Well. These are questions that mathematicians dealt with a long time ago, and decided that we should stick to definitions. So in mathematics we have explicit definitions, and we give them names so we don't have to repeat the definition each time. The phrase "Let $H$ be a Hilbert space over $Bbb C$" packs in those eight words an immense amount of knowledge, that usually takes a long time to learn, for example.



          Sometimes, out of convenience, and out of the sheer love of generalizations, mathematicians take a word which has a "common meaning", and decide that it is good enough to be used in a different context and to mean something else. Germ, stalk, filter, sheaf, quiver, graph, are all words that take a cue from the natural sense of the word, and give it an explicit definition in context.



          (And I haven't even talked about things which have little to no relation to their real world meaning, e.g. a mouse in set theory.)



          Multiplication is a word we use in context, and the context is usually an associative binary operation on some set. This set can be of functions, matrices, sets, etc. If we require the operation to have a neutral element and admit inverses, we have a group; if we adjoin another operator which is also associative, admits inverses, and commutative and posit some distributivity laws, we get a ring; or a semi-ring; or so on and so forth.



          But it is convenient to talk about multiplication, because it's a word, and in most cases the pedagogical development helps us grasp why in generalizations we might want to omit commutativity from this operation.






          share|cite|improve this answer

























            up vote
            6
            down vote










            up vote
            6
            down vote









            In school you might have also learned that multiplication or addition are defined for "numbers", but vectors, matrices, and functions are not numbers, why should they be allowed to be added or multiplied to begin with?



            It turns out that sounds and letters form words which we use in context to convey information, usually in a concise manner.





            In school, for example, most of your mathematical education is about the real numbers, maybe a bit about the complex numbers. Maybe you even learned to derive a function.



            But did it ever occur to you that there are functions from $Bbb R$ to itself such that for every $a<b$, the image of the function on $(a,b)$ is the entire set of real numbers?



            If all functions you dealt with in school were differentiable (or at least almost everywhere), why is that even a function? What does it even mean for something to be a function?



            Well. These are questions that mathematicians dealt with a long time ago, and decided that we should stick to definitions. So in mathematics we have explicit definitions, and we give them names so we don't have to repeat the definition each time. The phrase "Let $H$ be a Hilbert space over $Bbb C$" packs in those eight words an immense amount of knowledge, that usually takes a long time to learn, for example.



            Sometimes, out of convenience, and out of the sheer love of generalizations, mathematicians take a word which has a "common meaning", and decide that it is good enough to be used in a different context and to mean something else. Germ, stalk, filter, sheaf, quiver, graph, are all words that take a cue from the natural sense of the word, and give it an explicit definition in context.



            (And I haven't even talked about things which have little to no relation to their real world meaning, e.g. a mouse in set theory.)



            Multiplication is a word we use in context, and the context is usually an associative binary operation on some set. This set can be of functions, matrices, sets, etc. If we require the operation to have a neutral element and admit inverses, we have a group; if we adjoin another operator which is also associative, admits inverses, and commutative and posit some distributivity laws, we get a ring; or a semi-ring; or so on and so forth.



            But it is convenient to talk about multiplication, because it's a word, and in most cases the pedagogical development helps us grasp why in generalizations we might want to omit commutativity from this operation.






            share|cite|improve this answer







































            answered 2 days ago









            Asaf Karagila























                up vote
                3
                down vote













                Terminology for mathematical structures is often built off of, and analogized to, terminology for "normal" mathematical structures such as integers, rational numbers, and real numbers. For vectors over the real numbers, we already have addition defined for the coordinates. Applying this operation and adding components termwise results in a meaningful operation, and the natural terminology is to refer to that simply as "addition". Multiplying termwise results in an operation that isn't as meaningful (for one thing, that operation, unlike termwise addition, depends on the coordinate system). The cross product, on the other hand, is a meaningful operation, and it interacts with termwise addition in a manner similar to how real multiplication interacts with real addition. For instance, $(a+b)\times c = a\times c + b\times c$ (distributive property).
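                As a quick numerical check of that distributive law (an illustrative sketch, assuming NumPy is available; the particular vectors are arbitrary):

                    import numpy as np

                    a = np.array([1.0, 2.0, 3.0])
                    b = np.array([-4.0, 0.5, 2.0])
                    c = np.array([0.0, 1.0, -1.0])

                    # The cross product distributes over termwise addition:
                    # (a + b) x c == a x c + b x c
                    assert np.allclose(np.cross(a + b, c), np.cross(a, c) + np.cross(b, c))

                    # ... but it is not commutative: a x b == -(b x a)
                    assert np.allclose(np.cross(a, b), -np.cross(b, a))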



                For matrices, we again have termwise addition as a meaningful operation. Matrices represent linear operators, and the definition of linearity includes many of the properties of multiplication, such as distribution: $A(u+v) = A(u) + A(v)$. Thus, it's natural to treat application of a linear operator as "multiplying" a vector by a matrix, and from there it's natural to define matrix multiplication as composition of the linear operators: $(A*B)(v) = A(B(v))$.
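                A small sketch of those two identities in NumPy (illustrative only; the particular matrices and vectors are arbitrary):

                    import numpy as np

                    A = np.array([[1.0, 2.0],
                                  [0.0, 1.0]])
                    B = np.array([[0.0, -1.0],
                                  [1.0,  0.0]])
                    u = np.array([-1.0, 2.0])
                    v = np.array([3.0, 4.0])

                    # Linearity of the map v -> A v:  A(u + v) == A(u) + A(v)
                    assert np.allclose(A @ (u + v), A @ u + A @ v)

                    # Applying the product matrix equals composing the two linear maps:
                    # (A*B)(v) == A(B(v))
                    assert np.allclose((A @ B) @ v, A @ (B @ v))

                    # Composition depends on the order, so the product is not commutative:
                    assert not np.allclose(A @ B, B @ A)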






                share|cite|improve this answer

















                • 2




                  +1, indeed the common feature of operations called "multiplication" seems to be that they are binary operations that distribute (on both sides) over "addition".
                  – Henning Makholm
                  2 days ago










                • @HenningMakholm: I don't think that's quite the whole truth. For example, people talk about group "multiplication" even though there's only one operation in that setting. What does seem true is that there are lots of interesting examples of noncommutative operations which distribute over commutative operations and few (or no?) examples of commutative operations which distribute over noncommutative operations, so the tendency is to analogize commutative things with addition and noncommutative things with multiplication.
                  – Micah
                  2 days ago










                • @Micah: Hmm, perhaps I'm projecting here, because the word "multiply" about arbitrary group operations has never sat well with me. (I'm cool with multiplicative notation, of course, and speaking about "product" and "factors" doesn't sound half as jarring as "multiply" either.)
                  – Henning Makholm
                  2 days ago












                • @HenningMakholm: Eh, fair enough. After I wrote my comment it occurred to me that you may be historically correct: if you're Galois and all the groups you like to think about consist of field automorphisms, group multiplication really does distribute over addition...
                  – Micah
                  2 days ago















                answered 2 days ago









                Acccumulation











                up vote
                1
                down vote













                The mathematical concept underlying the question is that of an operation.



                In a very general setting, if $X$ is a set, then an operation on $X$ is just a function



                $$X\times X\to X$$



                usually denoted with multiplicative notation like $\cdot$ or $*$. This means that an operation takes two elements of $X$ and gives as a result an element of $X$ (exactly as when you take two numbers, say $3$ and $5$, and the result is $5*3=15$).



                Examples of operations are the usual operations on real numbers: addition, subtraction, division (defined on the non-zero reals), multiplication, exponentiation.



                Now, if you want to use operations in mathematics, you usually require properties that are useful in calculations. Here are the most common properties that an operation can have.



                $1)$ Associativity. This means that $(a*b)*c=a*(b*c)$, which allows you to omit parentheses and write $a*b*c$. The usual addition and multiplication on real numbers are associative. Subtraction, division, and exponentiation are not. For example:
                $$(5-2)-2=3-2=1\neq 5=5-(2-2)$$
                $$(9/3)/3=3/3=1\neq 9=9/1=9/(3/3)$$
                $$2^{(3^2)}=2^9=512\neq 64=8^2=(2^3)^2$$
                A famous example of a non-associative multiplication often used in mathematics is that of the Cayley octonions.
                A useful example of an associative operation is the composition of functions. Let $A$ be any set and let $X$ be the set of all functions from $A$ to $A$, i.e. $X=\{f:A\to A\}$. The composition of two functions $f,g$ is the function $f*g$ defined by $f*g(a)=f(g(a))$. Clearly $f*(g*h)(a)=f(g(h(a)))=(f*g)*h(a)$.
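                These failures (and the success for composition) are easy to check numerically; a minimal Python sketch using the same numbers:

                    # Subtraction, division and exponentiation are not associative:
                    assert (5 - 2) - 2 != 5 - (2 - 2)        # 1 != 5
                    assert (9 / 3) / 3 != 9 / (3 / 3)        # 1 != 9
                    assert (2 ** 3) ** 2 != 2 ** (3 ** 2)    # 64 != 512

                    # Function composition is associative:
                    def compose(f, g):
                        return lambda a: f(g(a))

                    f = lambda a: a + 1
                    g = lambda a: 2 * a
                    h = lambda a: a ** 2

                    for a in range(-5, 6):
                        assert compose(compose(f, g), h)(a) == compose(f, compose(g, h))(a)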



                $2)$ Commutativity. This means that $a*b=b*a$. The usual sum and product are commutative. The matrix product is not commutative, the composition of functions is not commutative in general (a rotation composed with a translation is not the same as the translation first and then the rotation), and the vector (cross) product is not commutative. Non-commutativity of the composition of operators lies at the basis of the Heisenberg uncertainty principle. The Hamilton quaternions are a useful structure used in mathematics; they have a multiplication which is associative but not commutative.
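                The rotation/translation example can be checked directly (an illustrative Python sketch; the specific rotation and shift are arbitrary):

                    def rotate90(p):
                        # Rotate the plane by 90 degrees about the origin.
                        x, y = p
                        return (-y, x)

                    def shift_right(p):
                        # Translate by one unit along the x-axis.
                        x, y = p
                        return (x + 1, y)

                    p = (1.0, 2.0)

                    # "Translate, then rotate" differs from "rotate, then translate":
                    assert rotate90(shift_right(p)) != shift_right(rotate90(p))   # (-2.0, 2.0) vs (-1.0, 1.0)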



                $3)$ Existence of a neutral element. This means that there is an element $e$ in $X$ such that $x*e=e*x=x$ for any $x$ in $X$. For addition the neutral element is $0$; for multiplication it is $1$. If you take $X$ to be the set of even integers, then the usual multiplication is well-defined on $X$, but the neutral element does not exist in $X$ (it would be $1$, which is not in $X$).



                $4)$ Existence of inverses. In case there is a neutral element $e\in X$, this means that for any $x\in X$ there exists $y$ such that $xy=yx=e$. Usually $y$ is denoted by $x^{-1}$. The inverse for the usual sum is $-x$; the inverse for the usual multiplication is $1/x$ (which exists only for non-zero elements). In the realm of matrices, there are many matrices that have no inverse, for instance the matrix
                $\begin{pmatrix} 1 & 1\\ 1 & 1\end{pmatrix}$.
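                A quick check that this particular matrix has no inverse (a sketch assuming NumPy is available):

                    import numpy as np

                    M = np.array([[1.0, 1.0],
                                  [1.0, 1.0]])

                    # A square matrix is invertible exactly when its determinant is non-zero.
                    print(np.linalg.det(M))    # 0.0, so M has no inverse

                    try:
                        np.linalg.inv(M)
                    except np.linalg.LinAlgError:
                        print("M is singular: no inverse exists")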



                Such properties are important because they make an operation user-friendly. For example: is it true that if $a,b\neq 0$ then $a*b\neq 0$? This seems kind of obvious, but it depends on the properties of the operation. For example
                $$\begin{pmatrix} 1 & 0\\ 3 & 0\end{pmatrix}\begin{pmatrix} 0 & 0\\ 1 & 2\end{pmatrix}=\begin{pmatrix} 0 & 0\\ 0 & 0\end{pmatrix}$$
                but both $\begin{pmatrix} 1 & 0\\ 3 & 0\end{pmatrix}$ and $\begin{pmatrix} 0 & 0\\ 1 & 2\end{pmatrix}$ are different from zero.
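                That matrix computation can be verified directly (sketch assuming NumPy):

                    import numpy as np

                    A = np.array([[1, 0],
                                  [3, 0]])
                    B = np.array([[0, 0],
                                  [1, 2]])

                    # Neither factor is the zero matrix, yet their product is:
                    assert A.any() and B.any()
                    assert not (A @ B).any()    # A @ B is the 2x2 zero matrix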



                Concluding, I would say that when a mathematician hears the word multiplication, they immediately think of an associative operation, usually (but not always) with a neutral element, and sometimes commutative.






                share|cite|improve this answer





















                • Another example of non-associative multiplication is IEEE floating point numbers (where intermediate results can underflow to zero or not, depending on the order of evaluation: small * small * big can be zero if the product of the first two factors underflows, or small if the last two factors multiply to 1.)
                  – Martin Bonner
                  2 days ago










                • Not to mention the cross product in $\mathbb R^3$.
                  – Henning Makholm
                  2 days ago










                • @MartinBonner: An interesting example, in that it is a non-associative approximation to an associative operation. I see in Wikipedia, incidentally, that it may be subnormal rather than zero, though I do not see exactly when.
                  – PJTraill
                  2 days ago















                answered 2 days ago









                user126154











                up vote
                1
                down vote














                $a*b$ is $a$ times $b$ or $b$ times $a$




                That is a theorem, not a definition, thank you very much. Of course the easiest proof is geometric: the former is an $a \times b$ rectangle, and if you rotate it you get a $b \times a$ rectangle. Proving it from axioms is a bit trickier, because you have to answer the question: if commutativity of multiplication is not taken as an axiom, then what do we prove it from? The answer is some combination of mathematical induction, the idea that if something holds for a number and keeps holding "and so on" from each number to the next, then it holds for all numbers; but that only applies in the integers, because other number systems don't really have an "and so on."



                So once you talk about multiplication outside of the integers, you lose all these good assurances that multiplication is commutative. In my opinion, that's really it. Multiplication makes sense in a lot of places, and many of those places don't have strong reasons for it to be commutative.
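                In a proof assistant, the commutativity of natural-number multiplication is indeed stated and proved as a theorem; a minimal sketch in Lean 4, assuming the core lemma Nat.mul_comm (which is itself proved by induction):

                    -- Commutativity of multiplication on Nat is a theorem, not an axiom.
                    example (a b : Nat) : a * b = b * a :=
                      Nat.mul_comm a b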






                share|cite|improve this answer

























                    answered yesterday









                    djechlin























                        up vote
                        0
                        down vote













                        It ought to be mentioned that multiplication in general is an action of a collection of transformations on a collection of objects, and multiplication of elements of a group is in fact a special case. This might have even been inherent in the original notion of multiplication; consider:




                        $3$ times as many chickens is not the same as "chicken times as many $3$s". (The latter makes no sense.)




                        Symbolically, if we let "$C$" stand for one chicken, and denote "$k$ times of $X$" by "$k·X$", then we have




                        $3·(k·C) = (3k)·C$ but not $3·(k·C) = (k·C)·3$ (since "$(k·C)·3$" does not make sense).




                        What is happening here? Even in many natural languages, numbers have been used in non-commutative syntax (in English, "three chickens" is correct while "chickens three" is wrong), because the multiplier and the multiplied are of different types (one is a countable noun phrase while the other is like a determiner).



                        In this special case of natural numbers, they actually act as a semi-ring on countable nouns (a toy computational check follows the laws below):




                        $0·x = \text{nothing}$.



                        $1·x = x$.



                        $(km)·x = k·(m·x)$ for any natural numbers $k,m$ and countable noun $x$.



                        $(k+m)·x = (k·x)\text{ and }(m·x)$ for any natural numbers $k,m$ and countable noun $x$.



                        $k·(x\text{ and }y) = (k·x)\text{ and }(k·y)$ for any natural number $k$ and countable nouns $x,y$.




                        (English requires some details like pluralization and articles, but mathematically the multiplication is as stated.)
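                        Modelling "a countable noun phrase" as a multiset of nouns, those laws can be checked mechanically (an illustrative Python sketch; scale and both are hypothetical helper names, not the answer's notation):

                            from collections import Counter

                            def scale(k: int, x: Counter) -> Counter:
                                # "k times of x": k copies of every noun occurring in the phrase x.
                                return Counter({noun: k * n for noun, n in x.items() if k * n})

                            def both(x: Counter, y: Counter) -> Counter:
                                # "x and y": put the two phrases together.
                                return x + y

                            chicken = Counter({"chicken": 1})
                            egg = Counter({"egg": 1})
                            k, m = 3, 4

                            assert scale(0, chicken) == Counter()                          # 0·x = nothing
                            assert scale(1, chicken) == chicken                            # 1·x = x
                            assert scale(k * m, chicken) == scale(k, scale(m, chicken))    # (km)·x = k·(m·x)
                            assert scale(k + m, chicken) == both(scale(k, chicken), scale(m, chicken))
                            assert scale(k, both(chicken, egg)) == both(scale(k, chicken), scale(k, egg))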



                        Observe that multiplication between natural numbers is commutative, but multiplication of natural numbers to countable nouns is not. So for example we have:




                        $3·(k·C) = (3k)·C = (k3)·C = k·(3·C)$.




                        And in mathematics you see the same kind of scalar multiplication in vector spaces, where again the term "scalar" (related to "scale") is not accidental. Again, multiplication of scalings to vectors is not commutative, but multiplication between scalings (which is simply composition) is commutative.



                        More generally, even the transformations that act on the collection of objects may not have commutative composition. For example, operators on a collection $S$, namely functions from $S$ to $S$, act on $S$ via function application:




                        $(f∘g)(x) = f(g(x))$ for any $f,g : S \to S$ and $x \in S$.




                        And certainly composition of functions from $S$ to $S$ is not commutative. But maybe the "multiplication" name stuck around anyway, as it did for numbers and scalings. That is why square matrix multiplication is defined precisely so that it is equivalent to composition when the matrices act on the vectors.
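                        A small Python sketch of operators acting on a set by application, with composition as the "multiplication" (illustrative only; compose is a hypothetical helper, not a library function):

                            def compose(f, g):
                                # (f o g)(x) = f(g(x)) -- the "multiplication" of operators.
                                return lambda x: f(g(x))

                            double = lambda x: 2 * x
                            incr = lambda x: x + 1

                            x = 5

                            # Acting on x: (f o g)(x) is f applied to g(x).
                            assert compose(double, incr)(x) == double(incr(x)) == 12

                            # The composition is not commutative:
                            assert compose(double, incr)(x) != compose(incr, double)(x)   # 12 != 11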






                        share|cite|improve this answer





















                        • I have chicken/6 of the number 3. :-)
                          – Asaf Karagila
                          21 hours ago















                        answered yesterday









                        user21820













                        • I have chicken/6 of the number 3. :-)
                          – Asaf Karagila
                          21 hours ago




























                        up vote
                        0
                        down vote













Commutativity is not a fundamental part of multiplication; it is just a consequence of how multiplication happens to work in certain settings.



Here is an analogy. When you multiply two positive integers a and b, you can define multiplication as repeated addition: a*b means that we repeatedly add a, and we repeat b times. This is how multiplication is taught in the early years.



Now look at what happens when we apply this to negative integers. Using the above definition, 4 * (-3) would mean that we repeatedly add the number 4. Nothing strange so far. But we are supposed to repeat this -3 times. Hold on: how can we do something a negative number of times? Why should we call this multiplication when it is no longer a repeated addition?



My point is that when we extended from the positive integers to include the negative integers, we lost something we had thought was fundamental. But it never was.
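A minimal Python sketch of this "repeated addition" recipe (the function name `times` is just for illustration) makes the gap explicit: the definition covers non-negative multipliers and simply says nothing about negative ones, so negative multipliers need a separate convention.

```python
# Sketch of the "repeated addition" picture of a * b: add a to itself b times.
# It works when b is a non-negative count, but the recipe says nothing about
# what "repeat -3 times" should mean (e.g. one convention: 4 * (-3) := -(4 * 3)).
def times(a: int, b: int) -> int:
    if b < 0:
        raise ValueError("'repeat b times' is undefined for negative b")
    total = 0
    for _ in range(b):
        total += a
    return total

assert times(4, 3) == 12      # 4 + 4 + 4
# times(4, -3) raises ValueError: the naive definition does not extend.
```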



Now let's look at matrix multiplication. Multiplication of real numbers can be seen as the special case of matrix multiplication in which the matrices are 1x1: multiplying the 1x1 matrices [a] and [b] gives the matrix [ab]. In that special case the product commutes, but for larger matrices it generally does not; once again, a property we were used to turns out to belong to the special case rather than to multiplication itself.
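A small NumPy sketch of this observation (the matrices are arbitrarily chosen): the 1x1 products commute, while 2x2 products generally do not.

```python
# Sketch: real-number multiplication as the 1x1 case of matrix multiplication.
# The 1x1 case commutes; the same operation on larger matrices generally does not.
import numpy as np

a, b = np.array([[2.0]]), np.array([[5.0]])
assert np.array_equal(a @ b, b @ a)          # [[10.]] either way

P = np.array([[1, 1],
              [0, 1]])
Q = np.array([[1, 0],
              [1, 1]])
print(np.array_equal(P @ Q, Q @ P))          # False
```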









































                            answered yesterday









                            Broman

















