Linear Algebra - How to find all $2\times3$ matrices $A$ that satisfy the equation $A\times[1;1;1]^T=[0;0]^T$?
I wrote out $A$ as a matrix looking like this:
$$\begin{bmatrix}a_1&a_2&a_3\\a_4&a_5&a_6\end{bmatrix}$$
And then, using the equation $A\times[1;1;1]^T=[0;0]^T$, I came to this system of equations:

$$\begin{cases}
a_1+a_2+a_3=0\\
a_4+a_5+a_6=0
\end{cases}$$

So basically $A$ could be a matrix like this:

$$\begin{bmatrix}0&1&-1\\-3&7&-4\end{bmatrix}$$

or this:

$$\begin{bmatrix}-10&6&4\\-10&-5&15\end{bmatrix}$$

etc.

So I am able to find examples of such $A$'s, but I don't know how to find all of these $A$'s.

I also need to find the dimension and a basis for the linear space that these matrices create - the dimension is $6$, but what is the basis?
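The example matrices can be sanity-checked numerically; here is a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# The two example matrices from the question.
A1 = np.array([[0, 1, -1],
               [-3, 7, -4]])
A2 = np.array([[-10, 6, 4],
               [-10, -5, 15]])

v = np.array([1, 1, 1])

# Each row sums to zero, so A @ v is the zero vector.
print(A1 @ v)  # [0 0]
print(A2 @ v)  # [0 0]
```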
matrices
asked Nov 24 at 13:05 by agromek
          5 Answers
Here are some hints to get you going in the right direction:

To find a basis, you need to find some examples (you've got those!) that are linearly independent and that span the subspace. One way to arrange for linear independence is to pick particularly easy examples. In $\Bbb R^3$, we choose $\pmatrix{1\\0\\0}, \pmatrix{0\\1\\0}, \pmatrix{0\\0\\1}$ because it's really easy, with all those zeroes, to see that the three are linearly independent.

Can you, first of all, find some elements of your subspace (let's call it $S$) whose entries are mostly zeroes? (All zeroes is of no use: the zero vector is not part of any set of linearly independent vectors!)

Second, you need to find the dimension, right? Well, the dimension of the space $M$ of $2\times3$ matrices is six, and there are some matrices in $M$ that are not in $S$, like
$$
\pmatrix{1 & 3 & 1 \\ 0 & 1 & 0}.
$$

So the dimension of $S$ must be less than six. Suppose you could find a linearly independent set of vectors in $S$ that had four elements. Then you'd know that the dimension of $S$ was at least four (because the dimension is the size of the largest possible set of linearly independent vectors). Could it be five? Sure ... maybe there's one more independent vector to be found. Could it be three? No way.

So what you want to do is look for large sets of linearly independent vectors (i.e., matrices in $S$). Perhaps your work on the "mostly zero" matrices in $S$ will help you here.
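One way to act on these hints is to flatten each candidate matrix into a $6$-vector and let a rank computation certify linear independence. A minimal sketch, assuming NumPy is available (the four "mostly zero" matrices below are one possible choice, not prescribed by the answer):

```python
import numpy as np

# Four "mostly zero" matrices whose rows each sum to zero,
# i.e. candidate basis elements of S.
candidates = [
    np.array([[1, 0, -1], [0, 0, 0]]),
    np.array([[1, -1, 0], [0, 0, 0]]),
    np.array([[0, 0, 0], [1, 0, -1]]),
    np.array([[0, 0, 0], [1, -1, 0]]),
]

# Flatten each 2x3 matrix into a 6-vector and stack them as rows.
stacked = np.array([m.ravel() for m in candidates])

# Rank 4 means the four matrices are linearly independent.
print(np.linalg.matrix_rank(stacked))  # 4
```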
answered Nov 24 at 13:13 by John Hughes
• I started with writing out the basis of $M$, which is $$\left\{\begin{bmatrix}1&0&0\\0&0&0\end{bmatrix},\begin{bmatrix}0&1&0\\0&0&0\end{bmatrix},\begin{bmatrix}0&0&1\\0&0&0\end{bmatrix},\begin{bmatrix}0&0&0\\1&0&0\end{bmatrix},\begin{bmatrix}0&0&0\\0&1&0\end{bmatrix},\begin{bmatrix}0&0&0\\0&0&1\end{bmatrix}\right\}$$ but I don't know if it will help me in any way...
  – agromek
  Nov 24 at 13:43










• Well, that's a good start. You now understand that "vectors" in $M$ are matrices, which seems weird at first, but you get used to it. Now can you write down any element of $S$ that has lots of zeroes in it? Look at your first matrix: is it in $S$? If not, what minimal alteration could you make to have it end up in $S$?
  – John Hughes
  Nov 24 at 14:02










• Ok, I got it! $$\left\{\begin{bmatrix}1&0&-1\\0&0&0\end{bmatrix},\begin{bmatrix}-1&1&0\\0&0&0\end{bmatrix},\begin{bmatrix}0&0&0\\1&-1&0\end{bmatrix},\begin{bmatrix}0&0&0\\-1&0&1\end{bmatrix}\right\}$$ Thank you so much!!!
  – agromek
  Nov 24 at 14:16
Notice that:

$$\begin{cases}
a_1+a_2+a_3=0\\
a_4+a_5+a_6=0
\end{cases}$$

is a system of $2$ equations in $6$ variables. We can introduce $6-2 = 4$ real parameters, $x$, $y$, $z$ and $w$, such that:

$$\begin{cases}
a_1 = -x-y\\
a_2 = x\\
a_3 = y\\
a_4 = -z-w\\
a_5 = z\\
a_6 = w
\end{cases}$$

Therefore, all matrices $A$ which satisfy your requirements are of the form:

$$\begin{bmatrix}-x-y&x&y\\-z-w&z&w\end{bmatrix},$$

for given parameters $x$, $y$, $z$ and $w$.
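The parametrization above can be checked for an arbitrary choice of the four parameters; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Any choice of the parameters x, y, z, w gives a valid A.
x, y, z, w = rng.standard_normal(4)
A = np.array([[-x - y, x, y],
              [-z - w, z, w]])

# Both rows sum to zero, so A maps (1,1,1)^T to (0,0)^T.
print(np.allclose(A @ np.ones(3), 0))  # True
```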
answered Nov 24 at 13:12 by the_candyman
          • Thank you so much!
            – agromek
            Nov 24 at 13:17
I also have a second part to this task that I don't really understand.

Let $V$ be the space that all of the $A$ matrices create. Now I'm looking for all matrices $B$ that satisfy the equation $B\times[1;1;1]^T=[6;6]^T$, and I need to show the set of all $B$'s as a translation of the space $V$ by a six-dimensional vector $b$.

How would this vector $b$ look?

I know that all $B$ matrices are of the form:

$$\begin{bmatrix}a&b&6-a-b\\c&d&6-c-d\end{bmatrix}$$

but I don't think that this gives me anything...
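The stated form can be verified directly for any choice of $a, b, c, d$; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(2)

# Any a, b, c, d give a matrix of the stated form.
a, b, c, d = rng.standard_normal(4)
B = np.array([[a, b, 6 - a - b],
              [c, d, 6 - c - d]])

# Each row sums to 6, so B maps (1,1,1)^T to (6,6)^T.
print(np.allclose(B @ np.ones(3), [6, 6]))  # True
```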
The row vectors of $A$ must be orthogonal to $(1,1,1)^t$. The orthogonal complement of this vector is spanned by $(1,0,-1)^t$ and $(1,-1,0)^t$, for example. Now take as the row vectors of $A$ any two linear combinations of these vectors.

Explicitly, the first row is a linear combination of both vectors, say $x(1,0,-1)^t+y(1,-1,0)^t=(x+y,-y,-x)^t$, as is the second row, say $u(1,0,-1)^t+v(1,-1,0)^t=(u+v,-u,-v)^t$, similar to the_candyman's answer.

You may write explicitly
$$\begin{pmatrix}x&y\\ u&v\end{pmatrix}
\begin{pmatrix}1&0&-1\\ 1&-1&0\end{pmatrix}$$
for an element of the kernel.
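The product above can be verified symbolically; a minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# Coefficient matrix times the two row vectors spanning (1,1,1)^perp.
C = sp.Matrix([[x, y], [u, v]])
R = sp.Matrix([[1, 0, -1],
               [1, -1, 0]])
A = C * R

# Every such A sends (1,1,1)^T to the zero vector.
print(A * sp.Matrix([1, 1, 1]))  # Matrix([[0], [0]])
```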
To answer your second part: let $T\colon M(2,3)\to\mathbb R^2$ be defined by $T(A):=A(1,1,1)^t$; that is, $T$ is linear and maps a $(2\times3)$-matrix to a $2$-dimensional vector, namely the sum of its columns. From this point of view, the first task was to find the kernel of $T$. (This was given in my first answer.) Explicitly, the kernel of $T$ consists of all matrices for which the sum of the column vectors is $(0,0)^t$.

In your second question you are asked to solve a linear system, namely $T(A)=(6,6)^t$. As you know, all solutions of a linear system are obtained by adding to a particular solution an element of the kernel. Now a particular solution is any matrix for which the sum of the column vectors is $(6,6)^t$. The general solution is then given by
$$\begin{pmatrix}x&y\\ u&v\end{pmatrix}
\begin{pmatrix}1&0&-1\\ 1&-1&0\end{pmatrix}
+\begin{pmatrix}6&0&0\\ 6&0&0\end{pmatrix}.$$

Does that give you something?
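The general solution above can be checked numerically for random coefficients; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random coefficients for the kernel part of the general solution.
x, y, u, v = rng.standard_normal(4)
kernel_part = np.array([[x, y], [u, v]]) @ np.array([[1, 0, -1],
                                                     [1, -1, 0]])
particular = np.array([[6, 0, 0],
                       [6, 0, 0]])

# Kernel part contributes (0,0); the particular solution contributes (6,6).
B = kernel_part + particular
print(np.allclose(B @ np.ones(3), [6, 6]))  # True
```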
• Ok, I thought about it for a bit and I think I understand, but I'm not really sure - so for example an element of the kernel would be: $$\begin{bmatrix}1&0&-1\\0&0&0\end{bmatrix}$$ and a particular solution to $B\times[1;1;1]^T=[6;6]^T$ would be: $$\begin{bmatrix}1&2&3\\5&0&1\end{bmatrix}$$ And that gives me: $$\begin{bmatrix}1&0&-1\\0&0&0\end{bmatrix} + \begin{bmatrix}1&2&3\\5&0&1\end{bmatrix}=\begin{bmatrix}2&2&2\\5&0&1\end{bmatrix}$$ But how do I write a general formula for all the $B$'s?
  – agromek
  Nov 24 at 16:57










• Is it: $$\alpha\times\begin{bmatrix}1&0&-1\\0&0&0\end{bmatrix}+\beta\times\begin{bmatrix}-1&1&0\\0&0&0\end{bmatrix}+\gamma\times\begin{bmatrix}0&0&0\\1&-1&0\end{bmatrix}+\delta\times\begin{bmatrix}0&0&0\\-1&0&1\end{bmatrix}+\begin{bmatrix}1&2&3\\5&0&1\end{bmatrix}?$$ $\alpha,\beta,\gamma,\delta\in\mathbb{R}$
  – agromek
  Nov 24 at 17:03












• Yes, exactly. You may choose $\begin{pmatrix}6&0&0\\ 6&0&0\end{pmatrix}$ instead for a particular solution.
  – Michael Hoppe
  Nov 24 at 19:34












              • Well, it's right, but why did you change signs in the third and fourth summand? See my edited answer, please.
                – Michael Hoppe
                Nov 24 at 19:54










• I found the basis of $V$ in a different way, and that's how it came out - does it make a significant difference if I change the signs?
  – agromek
  Nov 24 at 20:44











              Your Answer





              StackExchange.ifUsing("editor", function () {
              return StackExchange.using("mathjaxEditing", function () {
              StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
              StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
              });
              });
              }, "mathjax-editing");

              StackExchange.ready(function() {
              var channelOptions = {
              tags: "".split(" "),
              id: "69"
              };
              initTagRenderer("".split(" "), "".split(" "), channelOptions);

              StackExchange.using("externalEditor", function() {
              // Have to fire editor after snippets, if snippets enabled
              if (StackExchange.settings.snippets.snippetsEnabled) {
              StackExchange.using("snippets", function() {
              createEditor();
              });
              }
              else {
              createEditor();
              }
              });

              function createEditor() {
              StackExchange.prepareEditor({
              heartbeatType: 'answer',
              autoActivateHeartbeat: false,
              convertImagesToLinks: true,
              noModals: true,
              showLowRepImageUploadWarning: true,
              reputationToPostImages: 10,
              bindNavPrevention: true,
              postfix: "",
              imageUploader: {
              brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
              contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
              allowUrls: true
              },
              noCode: true, onDemand: true,
              discardSelector: ".discard-answer"
              ,immediatelyShowMarkdownHelp:true
              });


              }
              });














              draft saved

              draft discarded


















              StackExchange.ready(
              function () {
              StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3011530%2flinear-algebra-how-to-find-all-2-times3-matrices-a-that-satisfy-the-equati%23new-answer', 'question_page');
              }
              );

              Post as a guest















              Required, but never shown

























              5 Answers
              5






              active

              oldest

              votes








              5 Answers
              5






              active

              oldest

              votes









              active

              oldest

              votes






              active

              oldest

              votes









              0














              Here are some hints to get you going in the right direction:



              To find a basis, you need to find some examples (you've got those!) that are linearly independent, and that span the subspace. One way to arrange for linear independence is to pick particularly easy examples. In $Bbb R^3$, we choose $pmatrix{1\0\0}, pmatrix{0\1\0}, pmatrix{0\0\1}$ because it's really easy, with all those zeroes, to see that the three are linearly independent.



              Can you, first of all, find some elements of your subspace (let's call it $S$) whose entries are mostly zeroes? (All zeroes is of no use: the zero vector is not part of any set of linearly independent vector!)



              Second, you need to find the dimension, right? Well, the dimension of the space $M$ of $2 times 3$ matrices is six, and there are some matrices in $M$ that are not in $S$, like
              $$
              pmatrix{1 & 3 & 1 \ 0 & 1 & 0}.
              $$

              So the dimension of $S$ must be less than six. Suppose you could find a linearly independent set of vectors in $S$ that had four elements. Then you'd know that the dimension of $S$ was at least four (because the dimension is the size of the largest possible set of linearly independent vectors). Could it be five? Sure ... maybe these one more independent vector to be found. Could it be three? No way.



              So what you want to do is look for large sets of linearly independent vectors (i.e., matrices in $S$). Perhaps your work on the "mostly zero" matrices in $S$ will help you here.






              share|cite|improve this answer





















              • I started with writing out the basis of $M$ which is $${ begin{bmatrix} 1&0&0\0&0&0end{bmatrix},begin{bmatrix} 0&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&1\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&0&0end{bmatrix}, begin{bmatrix} 0&0&0\0&1&0end{bmatrix}, begin{bmatrix} 0&0&0\0&0&1end{bmatrix}}$$ but I don't know if it will help me in any way...
                – agromek
                Nov 24 at 13:43










              • Well, that's a good start. You now understand that "vectors" in $M$ are matrices, which seems weird at first, but you get used to it. Now can you write down any element of $S$ that has lots of zeroes in it? Look at your first matrix: is it in $S$? If not, what minimal alternation could you make to have it end up in $S$?
                – John Hughes
                Nov 24 at 14:02










              • Ok, I got it! $${ begin{bmatrix} 1&0&-1\0&0&0end{bmatrix},begin{bmatrix} -1&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&-1&0end{bmatrix}, begin{bmatrix} 0&0&0\-1&0&1end{bmatrix}}$$ Thank you so much!!!
                – agromek
                Nov 24 at 14:16
















              0














              Here are some hints to get you going in the right direction:



              To find a basis, you need to find some examples (you've got those!) that are linearly independent, and that span the subspace. One way to arrange for linear independence is to pick particularly easy examples. In $Bbb R^3$, we choose $pmatrix{1\0\0}, pmatrix{0\1\0}, pmatrix{0\0\1}$ because it's really easy, with all those zeroes, to see that the three are linearly independent.



              Can you, first of all, find some elements of your subspace (let's call it $S$) whose entries are mostly zeroes? (All zeroes is of no use: the zero vector is not part of any set of linearly independent vector!)



              Second, you need to find the dimension, right? Well, the dimension of the space $M$ of $2 times 3$ matrices is six, and there are some matrices in $M$ that are not in $S$, like
              $$
              pmatrix{1 & 3 & 1 \ 0 & 1 & 0}.
              $$

              So the dimension of $S$ must be less than six. Suppose you could find a linearly independent set of vectors in $S$ that had four elements. Then you'd know that the dimension of $S$ was at least four (because the dimension is the size of the largest possible set of linearly independent vectors). Could it be five? Sure ... maybe these one more independent vector to be found. Could it be three? No way.



              So what you want to do is look for large sets of linearly independent vectors (i.e., matrices in $S$). Perhaps your work on the "mostly zero" matrices in $S$ will help you here.






              share|cite|improve this answer





















              • I started with writing out the basis of $M$ which is $${ begin{bmatrix} 1&0&0\0&0&0end{bmatrix},begin{bmatrix} 0&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&1\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&0&0end{bmatrix}, begin{bmatrix} 0&0&0\0&1&0end{bmatrix}, begin{bmatrix} 0&0&0\0&0&1end{bmatrix}}$$ but I don't know if it will help me in any way...
                – agromek
                Nov 24 at 13:43










              • Well, that's a good start. You now understand that "vectors" in $M$ are matrices, which seems weird at first, but you get used to it. Now can you write down any element of $S$ that has lots of zeroes in it? Look at your first matrix: is it in $S$? If not, what minimal alternation could you make to have it end up in $S$?
                – John Hughes
                Nov 24 at 14:02










              • Ok, I got it! $${ begin{bmatrix} 1&0&-1\0&0&0end{bmatrix},begin{bmatrix} -1&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&-1&0end{bmatrix}, begin{bmatrix} 0&0&0\-1&0&1end{bmatrix}}$$ Thank you so much!!!
                – agromek
                Nov 24 at 14:16














              0












              0








              0






              Here are some hints to get you going in the right direction:



              To find a basis, you need to find some examples (you've got those!) that are linearly independent, and that span the subspace. One way to arrange for linear independence is to pick particularly easy examples. In $Bbb R^3$, we choose $pmatrix{1\0\0}, pmatrix{0\1\0}, pmatrix{0\0\1}$ because it's really easy, with all those zeroes, to see that the three are linearly independent.



              Can you, first of all, find some elements of your subspace (let's call it $S$) whose entries are mostly zeroes? (All zeroes is of no use: the zero vector is not part of any set of linearly independent vector!)



              Second, you need to find the dimension, right? Well, the dimension of the space $M$ of $2 times 3$ matrices is six, and there are some matrices in $M$ that are not in $S$, like
              $$
              pmatrix{1 & 3 & 1 \ 0 & 1 & 0}.
              $$

              So the dimension of $S$ must be less than six. Suppose you could find a linearly independent set of vectors in $S$ that had four elements. Then you'd know that the dimension of $S$ was at least four (because the dimension is the size of the largest possible set of linearly independent vectors). Could it be five? Sure ... maybe these one more independent vector to be found. Could it be three? No way.



              So what you want to do is look for large sets of linearly independent vectors (i.e., matrices in $S$). Perhaps your work on the "mostly zero" matrices in $S$ will help you here.






              share|cite|improve this answer












              Here are some hints to get you going in the right direction:



              To find a basis, you need to find some examples (you've got those!) that are linearly independent, and that span the subspace. One way to arrange for linear independence is to pick particularly easy examples. In $Bbb R^3$, we choose $pmatrix{1\0\0}, pmatrix{0\1\0}, pmatrix{0\0\1}$ because it's really easy, with all those zeroes, to see that the three are linearly independent.



              Can you, first of all, find some elements of your subspace (let's call it $S$) whose entries are mostly zeroes? (All zeroes is of no use: the zero vector is not part of any set of linearly independent vector!)



              Second, you need to find the dimension, right? Well, the dimension of the space $M$ of $2 times 3$ matrices is six, and there are some matrices in $M$ that are not in $S$, like
              $$
              pmatrix{1 & 3 & 1 \ 0 & 1 & 0}.
              $$

              So the dimension of $S$ must be less than six. Suppose you could find a linearly independent set of vectors in $S$ that had four elements. Then you'd know that the dimension of $S$ was at least four (because the dimension is the size of the largest possible set of linearly independent vectors). Could it be five? Sure ... maybe these one more independent vector to be found. Could it be three? No way.



              So what you want to do is look for large sets of linearly independent vectors (i.e., matrices in $S$). Perhaps your work on the "mostly zero" matrices in $S$ will help you here.







              share|cite|improve this answer












              share|cite|improve this answer



              share|cite|improve this answer










              answered Nov 24 at 13:13









              John Hughes

              62.3k24090




              62.3k24090












              • I started with writing out the basis of $M$ which is $${ begin{bmatrix} 1&0&0\0&0&0end{bmatrix},begin{bmatrix} 0&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&1\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&0&0end{bmatrix}, begin{bmatrix} 0&0&0\0&1&0end{bmatrix}, begin{bmatrix} 0&0&0\0&0&1end{bmatrix}}$$ but I don't know if it will help me in any way...
                – agromek
                Nov 24 at 13:43










              • Well, that's a good start. You now understand that "vectors" in $M$ are matrices, which seems weird at first, but you get used to it. Now can you write down any element of $S$ that has lots of zeroes in it? Look at your first matrix: is it in $S$? If not, what minimal alternation could you make to have it end up in $S$?
                – John Hughes
                Nov 24 at 14:02










              • Ok, I got it! $${ begin{bmatrix} 1&0&-1\0&0&0end{bmatrix},begin{bmatrix} -1&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&-1&0end{bmatrix}, begin{bmatrix} 0&0&0\-1&0&1end{bmatrix}}$$ Thank you so much!!!
                – agromek
                Nov 24 at 14:16


















              • I started with writing out the basis of $M$ which is $${ begin{bmatrix} 1&0&0\0&0&0end{bmatrix},begin{bmatrix} 0&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&1\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&0&0end{bmatrix}, begin{bmatrix} 0&0&0\0&1&0end{bmatrix}, begin{bmatrix} 0&0&0\0&0&1end{bmatrix}}$$ but I don't know if it will help me in any way...
                – agromek
                Nov 24 at 13:43










              • Well, that's a good start. You now understand that "vectors" in $M$ are matrices, which seems weird at first, but you get used to it. Now can you write down any element of $S$ that has lots of zeroes in it? Look at your first matrix: is it in $S$? If not, what minimal alternation could you make to have it end up in $S$?
                – John Hughes
                Nov 24 at 14:02










              • Ok, I got it! $${ begin{bmatrix} 1&0&-1\0&0&0end{bmatrix},begin{bmatrix} -1&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&-1&0end{bmatrix}, begin{bmatrix} 0&0&0\-1&0&1end{bmatrix}}$$ Thank you so much!!!
                – agromek
                Nov 24 at 14:16
















              I started with writing out the basis of $M$ which is $${ begin{bmatrix} 1&0&0\0&0&0end{bmatrix},begin{bmatrix} 0&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&1\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&0&0end{bmatrix}, begin{bmatrix} 0&0&0\0&1&0end{bmatrix}, begin{bmatrix} 0&0&0\0&0&1end{bmatrix}}$$ but I don't know if it will help me in any way...
              – agromek
              Nov 24 at 13:43




              I started with writing out the basis of $M$ which is $${ begin{bmatrix} 1&0&0\0&0&0end{bmatrix},begin{bmatrix} 0&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&1\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&0&0end{bmatrix}, begin{bmatrix} 0&0&0\0&1&0end{bmatrix}, begin{bmatrix} 0&0&0\0&0&1end{bmatrix}}$$ but I don't know if it will help me in any way...
              – agromek
              Nov 24 at 13:43












              Well, that's a good start. You now understand that "vectors" in $M$ are matrices, which seems weird at first, but you get used to it. Now can you write down any element of $S$ that has lots of zeroes in it? Look at your first matrix: is it in $S$? If not, what minimal alternation could you make to have it end up in $S$?
              – John Hughes
              Nov 24 at 14:02




              Well, that's a good start. You now understand that "vectors" in $M$ are matrices, which seems weird at first, but you get used to it. Now can you write down any element of $S$ that has lots of zeroes in it? Look at your first matrix: is it in $S$? If not, what minimal alternation could you make to have it end up in $S$?
              – John Hughes
              Nov 24 at 14:02












              Ok, I got it! $${ begin{bmatrix} 1&0&-1\0&0&0end{bmatrix},begin{bmatrix} -1&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&-1&0end{bmatrix}, begin{bmatrix} 0&0&0\-1&0&1end{bmatrix}}$$ Thank you so much!!!
              – agromek
              Nov 24 at 14:16




              Ok, I got it! $${ begin{bmatrix} 1&0&-1\0&0&0end{bmatrix},begin{bmatrix} -1&1&0\0&0&0end{bmatrix}, begin{bmatrix} 0&0&0\1&-1&0end{bmatrix}, begin{bmatrix} 0&0&0\-1&0&1end{bmatrix}}$$ Thank you so much!!!
              – agromek
              Nov 24 at 14:16











              1














              Notice that:



              $$begin{cases}
              a_1+a_2+a_3=0\
              a_4+a_5+a_6=0
              end{cases}$$



              is a system of $2$ equations and $6$ variables. We can introduce $6-2 = 4$ real parameters, $x$, $y$, $z$ and $w$, such that:



              $$begin{cases}
              a_1 = -x-y\
              a_2 = x\
              a_3 = y\
              a_4 = -z-w\
              a_5 = z\
              a_6= w
              end{cases}.$$



              Therefore, all matrices $A$ which satisfy you requirements are in the form:



              $$begin{bmatrix}-x-y&x&y\-z-w&z&wend{bmatrix},$$



              for given parameters $x, y, z$ and $w$.






              share|cite|improve this answer





















              • Thank you so much!
                – agromek
                Nov 24 at 13:17
















              1














              Notice that:



              $$begin{cases}
              a_1+a_2+a_3=0\
              a_4+a_5+a_6=0
              end{cases}$$



              is a system of $2$ equations and $6$ variables. We can introduce $6-2 = 4$ real parameters, $x$, $y$, $z$ and $w$, such that:



              $$begin{cases}
              a_1 = -x-y\
              a_2 = x\
              a_3 = y\
              a_4 = -z-w\
              a_5 = z\
              a_6= w
              end{cases}.$$



              Therefore, all matrices $A$ which satisfy you requirements are in the form:



              $$begin{bmatrix}-x-y&x&y\-z-w&z&wend{bmatrix},$$



              for given parameters $x, y, z$ and $w$.






              share|cite|improve this answer





















              • Thank you so much!
                – agromek
                Nov 24 at 13:17














              1












              1








              1






              Notice that:



              $$begin{cases}
              a_1+a_2+a_3=0\
              a_4+a_5+a_6=0
              end{cases}$$



              is a system of $2$ equations and $6$ variables. We can introduce $6-2 = 4$ real parameters, $x$, $y$, $z$ and $w$, such that:



              $$begin{cases}
              a_1 = -x-y\
              a_2 = x\
              a_3 = y\
              a_4 = -z-w\
              a_5 = z\
              a_6= w
              end{cases}.$$



              Therefore, all matrices $A$ which satisfy you requirements are in the form:



              $$begin{bmatrix}-x-y&x&y\-z-w&z&wend{bmatrix},$$



              for given parameters $x, y, z$ and $w$.






              share|cite|improve this answer












              Notice that:



              $$begin{cases}
              a_1+a_2+a_3=0\
              a_4+a_5+a_6=0
              end{cases}$$



              is a system of $2$ equations and $6$ variables. We can introduce $6-2 = 4$ real parameters, $x$, $y$, $z$ and $w$, such that:



              $$begin{cases}
              a_1 = -x-y\
              a_2 = x\
              a_3 = y\
              a_4 = -z-w\
              a_5 = z\
              a_6= w
              end{cases}.$$



              Therefore, all matrices $A$ which satisfy you requirements are in the form:



              $$begin{bmatrix}-x-y&x&y\-z-w&z&wend{bmatrix},$$



              for given parameters $x, y, z$ and $w$.







              share|cite|improve this answer












              share|cite|improve this answer



              share|cite|improve this answer










              answered Nov 24 at 13:12









              the_candyman

              8,70822044




              8,70822044












              • Thank you so much!
                – agromek
                Nov 24 at 13:17


















I also have a second part to this task that I don't really understand.

Let $V$ be the space formed by all of the matrices $A$. Now I'm looking for all matrices $B$ that satisfy the equation $B\times[1;1;1]^T=[6;6]^T$, and I need to express the set of all such $B$ as a translation of the space $V$ by a six-dimensional vector $b$.

What would this vector $b$ look like?

I know that all matrices $B$ are of the form

$$\begin{bmatrix}a&b&6-a-b\\c&d&6-c-d\end{bmatrix},$$

but I don't think that this gives me anything...






                  answered Nov 24 at 15:16









                  agromek

The row vectors of $A$ must be orthogonal to $(1,1,1)^t$. The orthogonal complement of this vector is spanned by $(1,0,-1)^t$ and $(1,-1,0)^t$, for example. Now take as the row vectors of $A$ any two linear combinations of these vectors.

Explicitly, the first row is a linear combination of both vectors, say $x(1,0,-1)^t+y(1,-1,0)^t=(x+y,-y,-x)^t$, as is the second row, say $u(1,0,-1)^t+v(1,-1,0)^t=(u+v,-u,-v)^t$, similar to the_candyman's answer.

You may write explicitly
$$\begin{pmatrix}x&y\\ u&v\end{pmatrix}
\begin{pmatrix}1&0&-1\\ 1&-1&0\end{pmatrix}$$
for an element of the kernel.
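The factorization above can be checked numerically; this NumPy sketch (my addition, with arbitrarily chosen coefficients) multiplies a coefficient matrix by the two spanning rows and verifies the result lies in the kernel:

```python
import numpy as np

# Rows of R span the orthogonal complement of (1, 1, 1)^t.
R = np.array([[1, 0, -1],
              [1, -1, 0]], dtype=float)

# Arbitrary coefficient matrix (x, y; u, v).
C = np.array([[3.0, -2.0],
              [0.5, 4.0]])

A = C @ R  # a generic element of the kernel
print(A @ np.array([1.0, 1.0, 1.0]))  # -> [0. 0.]
```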






                          edited Nov 24 at 19:54

























                          answered Nov 24 at 15:46









                          Michael Hoppe

To answer your second part: Let $T\colon M(2,3)\to\mathbb R^2$ be defined by $T(A):=A(1,1,1)^t$; that is, $T$ is linear and maps a $(2\times3)$-matrix to a $2$-dimensional vector, namely the sum of its columns. From this point of view, the first task was to find the kernel of $T$. (This was given in my first answer.) Explicitly, the kernel of $T$ consists of all matrices for which the sum of the column vectors is $(0,0)^t$.

In your second question you are asked to solve a linear system, namely $T(B)=(6,6)^t$. As you know, all solutions of a linear system are obtained by adding an element of the kernel to a particular solution. Now a particular solution is any matrix for which the sum of the column vectors is $(6,6)^t$. The general solution is then given by
$$\begin{pmatrix}x&y\\ u&v\end{pmatrix}
\begin{pmatrix}1&0&-1\\ 1&-1&0\end{pmatrix}
+\begin{pmatrix}6&0&0\\ 6&0&0\end{pmatrix}.$$

Does that give you something?
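This "particular solution plus kernel" description can also be checked numerically; the following NumPy sketch (my addition, with arbitrarily chosen coefficients) verifies that the resulting matrix sends $(1,1,1)^T$ to $(6,6)^T$:

```python
import numpy as np

R = np.array([[1, 0, -1],
              [1, -1, 0]], dtype=float)   # rows span the complement of (1,1,1)^t
B0 = np.array([[6, 0, 0],
               [6, 0, 0]], dtype=float)   # a particular solution
C = np.array([[2.0, -1.0],
              [0.5, 3.0]])                # arbitrary coefficients (x, y; u, v)

B = C @ R + B0  # general solution of T(B) = (6, 6)^t
print(B @ np.array([1.0, 1.0, 1.0]))  # -> [6. 6.]
```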






• Ok, I thought about it for a bit and I think I understand, but I'm not really sure. So for example an element of the kernel would be $$\begin{bmatrix}1&0&-1\\0&0&0\end{bmatrix}$$ and a particular solution to $B\times[1;1;1]^T=[6;6]^T$ would be $$\begin{bmatrix}1&2&3\\5&0&1\end{bmatrix}.$$ And that gives me $$\begin{bmatrix}1&0&-1\\0&0&0\end{bmatrix} + \begin{bmatrix}1&2&3\\5&0&1\end{bmatrix}=\begin{bmatrix}2&2&2\\5&0&1\end{bmatrix}.$$ But how do I write a general formula for all the $B$'s?
  – agromek
  Nov 24 at 16:57










• is it: $$\alpha\times\begin{bmatrix}1&0&-1\\0&0&0\end{bmatrix}+\beta\times\begin{bmatrix}-1&1&0\\0&0&0\end{bmatrix}+\gamma\times\begin{bmatrix}0&0&0\\1&-1&0\end{bmatrix}+\delta\times\begin{bmatrix}0&0&0\\-1&0&1\end{bmatrix}+\begin{bmatrix}1&2&3\\5&0&1\end{bmatrix}?$$ $\alpha,\beta,\gamma,\delta\in\mathbb{R}$
  – agromek
  Nov 24 at 17:03












• Yes, exactly. You may choose $\begin{pmatrix}6&0&0\\ 6&0&0\end{pmatrix}$ instead as a particular solution.
  – Michael Hoppe
  Nov 24 at 19:34












                              • Well, it's right, but why did you change signs in the third and fourth summand? See my edited answer, please.
                                – Michael Hoppe
                                Nov 24 at 19:54










• I found the basis of $V$ in a different way and that's how it came out; does it make a significant difference if I change the signs?
  – agromek
  Nov 24 at 20:44
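The basis proposed in the comments above can be sanity-checked numerically; this NumPy sketch (my addition, with arbitrarily chosen coefficients $\alpha,\beta,\gamma,\delta$) verifies that each basis matrix lies in the kernel and that any combination plus the particular solution maps $(1,1,1)^T$ to $(6,6)^T$:

```python
import numpy as np

basis = [np.array(M, dtype=float) for M in (
    [[1, 0, -1], [0, 0, 0]],
    [[-1, 1, 0], [0, 0, 0]],
    [[0, 0, 0], [1, -1, 0]],
    [[0, 0, 0], [-1, 0, 1]],
)]
particular = np.array([[1, 2, 3], [5, 0, 1]], dtype=float)
ones = np.ones(3)

# Each basis matrix has zero row sums, so it lies in the kernel of T.
for M in basis:
    assert np.allclose(M @ ones, 0)

# Hence any combination, shifted by the particular solution, maps to (6,6)^t.
coeffs = [2.0, -1.0, 0.5, 3.0]  # arbitrary alpha, beta, gamma, delta
B = sum(c * M for c, M in zip(coeffs, basis)) + particular
print(B @ ones)  # -> [6. 6.]
```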
















                              edited Nov 24 at 19:57

























                              answered Nov 24 at 16:07









                              Michael Hoppe
