How is it that matrix multiplication represents linear transforms?
























When multiplying two square matrices, the first row of the product ends up being a combination of the first column of one matrix and the first row of the other. Aren't we geometrically missing information?



Edit: What I mean is that to compute each composition of transformations there is a certain algorithm which leads to the correct result.
I find it hard to ignore the way it is built up: why is it that, to find the new image of i-hat (as shown in the picture), we multiply its first component by the x-components (first column) of the second transform, and its second component by the y-components? At first I thought we were just "saving" the motion/displacement exerted upon the first transform's i-hat components, and that it didn't really matter in which component we saved it because it would end up being a single vector, a combination of all the components. But I then discovered that the order followed was the unique path to the correct composition. All the answers I've seen regarding this question state that this arises from the "natural properties" of composition, but the question remains: why? There must be a geometrical explanation for why we map the given vectors into the images of certain components and not otherwise.



I am sorry if I am being unclear, or if this is just a question born of misconceptions about matrix multiplication, but it has been stopping me from going any further in linear algebra, so I'd appreciate it if someone told me what is really going on.
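The rule being asked about can be checked numerically. The following plain-Python sketch (my own illustration, not part of the original question) verifies that multiplying matrices agrees with composing the transformations as functions, and that the first column of the product is exactly the image of i-hat pushed through both transforms:

```python
# Sketch: matrices are lists of rows; a matrix *is* a linear map.

def matvec(M, v):
    # Apply the linear map M to vector v: output component j is the
    # dot product of row j of M with v.
    return [sum(M[j][i] * v[i] for i in range(len(v))) for j in range(len(M))]

def matmul(A, B):
    # Column i of A*B is A applied to column i of B (B's image of the
    # i-th basis vector, pushed through A).
    n = len(A)
    cols = [matvec(A, [B[j][i] for j in range(n)]) for i in range(n)]
    return [[cols[i][j] for i in range(n)] for j in range(n)]

A = [[0, -1], [1, 0]]   # rotate 90 degrees counterclockwise
B = [[1, 1], [0, 1]]    # shear

# Composing the maps and multiplying the matrices give the same result:
v = [3, 4]
assert matvec(A, matvec(B, v)) == matvec(matmul(A, B), v)

# The first column of A*B is the image of i-hat = (1, 0) under B-then-A:
AB = matmul(A, B)
i_hat = [1, 0]
assert [AB[0][0], AB[1][0]] == matvec(A, matvec(B, i_hat))
```

Changing the order in which components are combined breaks both assertions, which is one way to see that the textbook rule is forced.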

































  • Matrices themselves represent linear transformations and essentially tell one how a coordinate system changes. Matrix multiplication represents composition of linear transformations (they are functions, after all), and hence the product of two matrices represents two consecutive coordinate changes done at once. Here is a link to a neat video that might help with a geometric viewpoint on matrix multiplication: youtu.be/XkY2DOUCWMU
    – user328442
    Dec 24 '18 at 0:19










  • Welcome to Stack Exchange. I can't tell from your question what you really want to know. Please edit the question. Perhaps show us the product of two square matrices and tell us just what "geometrically missing information" you have in mind. Do that with an edit to the question, not in a comment, and use MathJax: math.meta.stackexchange.com/questions/5020/…
    – Ethan Bolker
    Dec 24 '18 at 0:40






  • Missing something in terms of what? What would you like to know that you feel you don't have enough information about?
    – Joel Pereira
    Dec 24 '18 at 0:50



































linear-algebra matrices linear-transformations






edited Dec 24 '18 at 5:20







houda el fezzak

















asked Dec 24 '18 at 0:12









houda el fezzak

294


























2 Answers
























1












1. Let $E=\{e_1,...,e_n\}$ be any vector-space basis for $\Bbb R^n.$ For every $v\in\Bbb R^n$ there is a unique sequence $v_1,...,v_n$ of numbers such that $v=v_1e_1+...+v_ne_n,$ and $v_j$ is called "the $j$-th co-ordinate of $v$ with respect to the basis $E$".

Note that the $j$-th co-ordinate of a sum of $k$ vectors is the sum of the individual vectors' $j$-th co-ordinates. That is, $(\,v(1)+...+v(k)\,)_j=v(1)_j\,+...+\,v(k)_j\;.$ And of course $(rv)_j=r\cdot v_j$ for any $r\in\Bbb R.$

2. A linear map $f:\Bbb R^n\to\Bbb R^n$ is uniquely and completely determined by $f(e_1),...,f(e_n),$ because the co-ordinates $v_1,...,v_n$ are uniquely determined by a given $v,$ and $$f(v)=f(v_1e_1+...+v_ne_n)=f(v_1e_1)+...+f(v_ne_n)=v_1f(e_1)+...+v_nf(e_n).$$

3. Let $f_{j,i}$ be the $j$-th co-ordinate (with respect to $E$) of $f(e_i).$ For any $v\in\Bbb R^n$ and any $j,$ the $j$-th co-ordinate of $f(v)$ is $$(f(v))_j=(\,v_1f(e_1)\,+...+\,v_nf(e_n)\,)_j=(v_1f(e_1))_j\,+...+\,(v_nf(e_n))_j=v_1f_{j,1}\,+...+\,v_nf_{j,n}\;.$$

The array $(f_{j,i})_{1\leq i\leq n,\,1\leq j\leq n}$ is called "the matrix representation of $f$ with respect to $E$".

Remark: Any $v\in\Bbb R^n$ is a sequence $(r_1,...,r_n)$ of numbers, and one choice for the basis $E$ is, for each $i$, to let $e_i$ be the sequence $(e_{i,1},...,e_{i,n})$ in which $e_{i,i}=1$ and $e_{i,j}=0$ when $j\ne i$. With respect to this basis, if $v=(r_1,...,r_n)$ then $v_j=r_j.$

– DanielWainfleet (answered Dec 24 '18 at 6:51, edited Dec 24 '18 at 7:04)
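The answer's construction of $f_{j,i}$ can be carried out concretely. A small plain-Python sketch (my own, with a made-up linear map $f$; the names mirror the answer's notation) builds the matrix representation from the images of the standard basis vectors and checks the displayed formula for $(f(v))_j$:

```python
n = 3

def f(v):
    # Some linear map R^3 -> R^3, written without any matrix in sight.
    x, y, z = v
    return [2*x + y, y - z, x + 3*z]

# The standard basis e_i from the Remark: e_{i,i} = 1, zero elsewhere.
E = [[1 if j == i else 0 for j in range(n)] for i in range(n)]

# f_{j,i} = j-th coordinate of f(e_i); stored with j indexing rows,
# so F is exactly "the matrix representation of f with respect to E".
F = [[f(E[i])[j] for i in range(n)] for j in range(n)]

v = [5, -2, 7]
# (f(v))_j should equal v_1*f_{j,1} + ... + v_n*f_{j,n}:
direct = f(v)
via_matrix = [sum(v[i] * F[j][i] for i in range(n)) for j in range(n)]
assert direct == via_matrix
```

The assertion holds for every $v$, which is the whole point: once linearity pins down $f$ by its values on a basis, applying $f$ *is* the row-times-column computation.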



























    0












Matrix multiplication is feeding one linear change of variables into another. See my answer at "Why, historically, do we multiply matrices as we do?".

– KCd (answered Dec 24 '18 at 5:39)
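To see the substitution view concretely, here is a small sketch (my own example with made-up coefficients, not KCd's) that feeds one linear change of variables into another and collects coefficients; the collected coefficients are exactly the entries of the matrix product:

```python
# First change of variables:   x = 1*s + 2*t,   y = 3*s + 4*t
B = [[1, 2], [3, 4]]
# Second change of variables:  u = 5*x + 6*y,   w = 7*x + 8*y
A = [[5, 6], [7, 8]]

# Substitute the first into the second, e.g.
#   u = 5*(1*s + 2*t) + 6*(3*s + 4*t) = (5*1 + 6*3)*s + (5*2 + 6*4)*t.
# Collecting the coefficient of each of s, t is a row-times-column sum:
C = [[sum(A[j][k] * B[k][i] for k in range(2)) for i in range(2)]
     for j in range(2)]

# So u = 23*s + 34*t and w = 31*s + 46*t, i.e. C is the matrix product A*B.
assert C == [[23, 34], [31, 46]]
```

The `sum` over `k` is precisely the "first row times first column" step that the question finds mysterious: it is just collecting the coefficient of one old variable after substitution.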























































