Why is the determinant the volume of a parallelepiped in any dimension?












For $n = 2$, I can see that the determinant of an $n \times n$ matrix is the area of the parallelogram spanned by its columns, by actually computing the area from the coordinates. But how can one easily see that this is true in any dimension?
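For concreteness, the $n = 2$ computation referred to here can be checked numerically; the following is a minimal sketch (NumPy and the example vectors are my own choices, not part of the question), comparing the base-times-height area with the $2 \times 2$ determinant.

```python
import numpy as np

# Two edge vectors of a parallelogram in the plane (arbitrary example values).
a = np.array([3.0, 1.0])
b = np.array([1.0, 2.0])

# Area "by coordinates": base times height, where the height is the component
# of b perpendicular to a.
b_perp = b - (a @ b) / (a @ a) * a
area = np.linalg.norm(a) * np.linalg.norm(b_perp)

# Determinant of the 2x2 matrix with a and b as its columns.
det = np.linalg.det(np.column_stack([a, b]))

print(area, det)   # both are 5.0 (up to floating-point error)
```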










Tags: geometry, matrices, determinant






asked Jun 23 '13 at 13:30 by ahala; edited Jun 23 '13 at 13:47 by Sujaan Kunalan
6 Answers






Answer by Hagen von Eitzen (score 44), answered Jun 23 '13 at 13:35:

If the column vectors are linearly dependent, both the determinant and the volume are zero. So assume linear independence. The determinant remains unchanged when adding multiples of one column to another. This corresponds to a skew translation of the parallelepiped, which does not affect its volume. By a finite sequence of such operations, you can transform your matrix to diagonal form, where the relation between determinant (= product of diagonal entries) and volume of a "rectangle" (= product of side lengths) is apparent.
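A minimal numerical sketch of this argument (NumPy and the example matrix are my own choices, not from the answer): a shear of one column by a multiple of another leaves the determinant unchanged, and column-reducing by such shears alone produces a triangular matrix (further shears would make it diagonal) whose diagonal product equals the determinant.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.linalg.det(A))                       # ~ 8.0

# A shear: add a multiple of column 0 to column 1 (a skew translation of the
# parallelepiped).  The determinant does not change.
B = A.copy()
B[:, 1] += 0.7 * B[:, 0]
print(np.linalg.det(B))                       # still ~ 8.0

# Column-reduce A by shears only; the result is (lower) triangular, and its
# determinant is the product of the diagonal entries.
T = A.copy()
n = T.shape[0]
for j in range(n):
    for k in range(j + 1, n):
        T[:, k] -= (T[j, k] / T[j, j]) * T[:, j]
print(np.prod(np.diag(T)), np.linalg.det(A))  # both ~ 8.0
```

(The loop above assumes the pivots $T_{jj}$ it meets are nonzero; for this example they are.)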






Comments:

– ahala (Jun 23 '13 at 15:34): Thanks. From a constructive point of view, do the conditions that a skew translation does not change the volume, plus multilinearity, fully determine the form of the determinant?
– Hagen von Eitzen (May 2 '14 at 11:58): Yes. In abstract language: the vector space of alternating $n$-forms is one-dimensional.
– Mitch (Oct 19 '18 at 19:52): I don't think the relation is apparent, even in two dimensions. The proof by picture in 2d takes some (pretty little) moving of equal areas around for the second shear.



















Answer by James (score 14), answered Nov 1 '13 at 2:50:

Here is the same argument as Muphrid's, perhaps written in an elementary way.



Apply Gram-Schmidt orthogonalization to $\{v_{1},\ldots,v_{n}\}$, so that
\begin{eqnarray*}
v_{1} & = & v_{1}\\
v_{2} & = & c_{12}v_{1}+v_{2}^{\perp}\\
v_{3} & = & c_{13}v_{1}+c_{23}v_{2}+v_{3}^{\perp}\\
 & \vdots
\end{eqnarray*}
where $v_{2}^{\perp}$ is orthogonal to $v_{1}$, and $v_{3}^{\perp}$ is orthogonal to $\operatorname{span}\left\{ v_{1},v_{2}\right\}$, etc.



Since the determinant is multilinear and antisymmetric,
\begin{eqnarray*}
\det\left(v_{1},v_{2},v_{3},\ldots,v_{n}\right) & = & \det\left(v_{1},c_{12}v_{1}+v_{2}^{\perp},c_{13}v_{1}+c_{23}v_{2}+v_{3}^{\perp},\ldots\right)\\
 & = & \det\left(v_{1},v_{2}^{\perp},v_{3}^{\perp},\ldots,v_{n}^{\perp}\right)\\
 & = & \mbox{signed volume}\left(v_{1},\ldots,v_{n}\right)
\end{eqnarray*}
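As a numerical cross-check of this argument (a sketch; NumPy and the random example vectors are my own choices), the product of the lengths of the Gram-Schmidt "heights" $\|v_{1}\|\,\|v_{2}^{\perp}\|\cdots\|v_{n}^{\perp}\|$ (base times height, iterated) agrees with the absolute value of the determinant of the matrix whose columns are the $v_i$:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((4, 4))         # columns v_1, ..., v_n (example data)

# Gram-Schmidt: subtract from each v_k its projection onto the directions
# found so far; the leftover v_k^perp is the "height" at step k.
vol = 1.0
basis = []                              # orthonormal directions found so far
for k in range(V.shape[1]):
    v_perp = V[:, k].copy()
    for q in basis:
        v_perp -= (q @ V[:, k]) * q
    vol *= np.linalg.norm(v_perp)
    basis.append(v_perp / np.linalg.norm(v_perp))

print(vol, abs(np.linalg.det(V)))       # the two numbers agree
```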



























Answer by Muphrid (score 12), answered Jun 23 '13 at 14:51 (edited Jun 23 '13 at 16:27):
In 2d, you calculate the area of a parallelogram spanned by two vectors using the cross product. In 3d, you calculate the volume of a parallelepiped using the triple scalar product. Both of these can be written in terms of a determinant, but it's probably not clear to you what the proper generalization is to higher dimensions.

That generalization is called the wedge product. Given $n$ vectors $v_1, v_2, \ldots, v_n$, the wedge product $v_1 \wedge v_2 \wedge \ldots \wedge v_n$ is called an $n$-vector, and it has as its magnitude the $n$-volume of that $n$-parallelepiped.

What is the relationship between the wedge product and the determinant? Quite simple, actually. There is a natural generalization of linear maps to work on $k$-vectors. Given a linear map $\underline T$ (which can be represented as a matrix), the action of that map on a $k$-vector is defined as

$$\underline T(v_1 \wedge v_2 \wedge \ldots \wedge v_k) \equiv \underline T(v_1) \wedge \underline T(v_2) \wedge \ldots \wedge \underline T(v_k)$$

When talking about $n$-vectors in an $n$-dimensional space, it's important to realize that the "vector space" of these $n$-vectors is in fact one-dimensional. That is, if you think about volume, there is only one such unit volume in a given space, and all other volumes are just scalar multiples of it. Hence, when we talk about the action of a linear map on an $n$-vector, we can see that

$$\underline T(v_1 \wedge v_2 \wedge \ldots \wedge v_n) = \alpha \, [v_1 \wedge v_2 \wedge \ldots \wedge v_n]$$

for some scalar $\alpha$. In fact, this is a coordinate-system-independent definition of the determinant!

When you build a matrix out of $n$ vectors $f_1, f_2, \ldots, f_n$ as the matrix's columns, what you're really doing is the following: you're saying that, if you have a basis $e_1, e_2, \ldots, e_n$, then you're defining a map $\underline T$ such that $\underline T(e_1) = f_1$, $\underline T(e_2) = f_2$, and so on. So when you input $e_1 \wedge e_2 \wedge \ldots \wedge e_n$, you get

$$\underline T(e_1 \wedge e_2 \wedge \ldots \wedge e_n) = (\det \underline T) \, e_1 \wedge e_2 \wedge \ldots \wedge e_n = f_1 \wedge f_2 \wedge \ldots \wedge f_n$$

This is how you can use a matrix determinant to calculate volumes: it's just an easy way of constructing something that automatically computes the wedge product.

Edit: here is how one can see that the wedge product accurately gives the volume of a parallelepiped. Any vector can be broken down into perpendicular and parallel parts with respect to another vector, to a plane, and so on (or to any $k$-vector). As such, if I have two vectors $a$ and $b$, then the wedge product $a \wedge b = a \wedge b_\perp$, where $b_\perp$ is effectively the height of the parallelogram. Similarly, if I construct a parallelepiped with a vector $c$, then the wedge product $a \wedge b \wedge c = (a \wedge b_\perp) \wedge c_\perp$, where $c_\perp$ lies entirely normal to $a \wedge b_\perp$. We can do this recursively for any $k$-vector, looking at orthogonal vectors instead, from which the volumes are much simpler to see.
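The one-dimensionality claim can be illustrated numerically (a sketch; NumPy and the random example data are my own choices, and no wedge products are formed explicitly, only the scale factor they imply): applying $\underline T$ to any $n$-vector rescales it by the same factor $\alpha$, namely $\det \underline T$, no matter which independent vectors $v_i$ are used to build it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
T = rng.standard_normal((n, n))        # a linear map (example data)

# Since the space of n-vectors is one-dimensional, T rescales every n-vector
# v_1 ^ ... ^ v_n by the same factor.  Numerically, the ratio
# det([T v_1 | ... | T v_n]) / det([v_1 | ... | v_n]) is independent of the
# chosen (independent) v_i, and it equals det(T).
for _ in range(3):
    V = rng.standard_normal((n, n))    # columns are v_1, ..., v_n
    print(np.linalg.det(T @ V) / np.linalg.det(V))   # same value each time
print(np.linalg.det(T))                # ... and that value is det(T)
```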






Comments:

– ahala (Jun 23 '13 at 15:39): Thanks. Actually my question arose exactly from learning about the wedge product. How can one see that the wedge product has as its magnitude the $n$-volume of that $n$-parallelepiped? I guess it is equivalent to asking it in terms of the determinant, as in my question.
– Muphrid (Jun 23 '13 at 16:27): I've added a section to this effect.



















Answer by jacopoviti (score 4), answered Oct 27 '16 at 17:48:
The determinant of a matrix $A$ is the unique function that satisfies:

1. $\det(A)=0$ when two columns are equal;

2. the determinant is linear in the columns;

3. if $A$ is the identity, $\det(A)=1$.

You can easily convince yourself that the oriented volume $\operatorname{vol}(v_1,v_2,\ldots,v_n)$ spanned by the vectors $v_1, v_2,\ldots, v_n$ is a function that satisfies exactly the same properties if we place the vectors as the columns of a matrix $A=(v_1,\ldots,v_n)$. Hence $\operatorname{vol}(v_1,v_2,\ldots,v_n)=\det(A)$.
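A quick numerical check of the three properties for the built-in determinant (a minimal sketch; NumPy and the random example matrix are my own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

# Property 1: the determinant vanishes when two columns are equal.
B = A.copy(); B[:, 2] = B[:, 0]
print(np.linalg.det(B))                          # ~ 0

# Property 2: linearity in each column (checked here for column 1).
x, y = rng.standard_normal(3), rng.standard_normal(3)
s, t = 2.0, -3.0
Axy = A.copy(); Axy[:, 1] = s * x + t * y
Ax  = A.copy(); Ax[:, 1]  = x
Ay  = A.copy(); Ay[:, 1]  = y
print(np.linalg.det(Axy), s * np.linalg.det(Ax) + t * np.linalg.det(Ay))  # equal

# Property 3: the identity matrix has determinant 1.
print(np.linalg.det(np.eye(3)))                  # 1.0
```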






Comments:

– ahala (Dec 12 '16 at 14:37): One has to be convinced why such a function is unique to reach the conclusion.
– jacopoviti (Dec 13 '16 at 14:59): True. You can prove that the determinant is unique by constructing it, via Gaussian elimination, as the signed product of all the pivots of the matrix.
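A sketch of the construction mentioned in the comment above (Gaussian elimination with row swaps, returning the signed product of the pivots; the helper name `det_by_elimination` is made up for illustration):

```python
import numpy as np

def det_by_elimination(A):
    """Determinant as the signed product of pivots from Gaussian elimination."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for j in range(n):
        # Partial pivoting: bring the largest entry of column j to the
        # diagonal; each row swap flips the sign of the determinant.
        p = j + np.argmax(np.abs(U[j:, j]))
        if U[p, j] == 0.0:
            return 0.0                   # singular matrix
        if p != j:
            U[[j, p]] = U[[p, j]]
            sign = -sign
        # Eliminate below the pivot; these row operations leave det unchanged.
        U[j + 1:] -= np.outer(U[j + 1:, j] / U[j, j], U[j])
    return sign * np.prod(np.diag(U))

A = np.random.default_rng(3).standard_normal((5, 5))
print(det_by_elimination(A), np.linalg.det(A))   # agree up to rounding
```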



















Answer by Shadi (score 1), answered Dec 11 '16 at 22:27:
The determinant involves a cross product of the first two vectors and a dot of the result with the third. The result of a cross product is a vector whose magnitude is the area of its null space; said simply, any plane in 3D is the null space of its normal, and the size of the plane is defined by the length of the normal. The volume is found by projecting this normal onto the third vector.






Comments:

– ahala (Dec 12 '16 at 14:39): This question is not about the 3D case; it is about the $n$D case.
– Shadi (Dec 26 '16 at 3:14): Yes, my bad. Assuming that in $n$D we get an $n \times n$ matrix: remove one row and then find the null space of the remaining $(n-1) \times n$ matrix, which is inevitably a vector perpendicular to that hyperplane. Dot this vector with the one removed, and I believe this amounts to the volume in $n$D.
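One way to make the idea in the last comment precise is the cofactor vector (a "generalized cross product" of $n-1$ vectors in $\mathbb R^n$): it is orthogonal to each of those vectors, and its dot product with the remaining one is the determinant. A minimal sketch (NumPy, random example data, and the cofactor formulation are my own choices, not necessarily the commenter's exact construction):

```python
import numpy as np

def generalized_cross(vectors):
    # "Cross product" of n-1 vectors in R^n, built from signed maximal minors
    # (cofactors).  It is orthogonal to every input vector, and its length is
    # the (n-1)-volume of the base they span.
    V = np.column_stack(vectors)                 # shape (n, n-1)
    n = V.shape[0]
    return np.array([(-1) ** (i + n + 1) * np.linalg.det(np.delete(V, i, axis=0))
                     for i in range(n)])

rng = np.random.default_rng(4)
n = 4
cols = [rng.standard_normal(n) for _ in range(n)]

normal = generalized_cross(cols[:-1])
print(normal @ cols[0])                          # ~ 0: normal to the base
print(normal @ cols[-1])                         # base volume times signed height ...
print(np.linalg.det(np.column_stack(cols)))      # ... which is the determinant
```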



















Answer (score 0):
You can also invoke the change-of-variable theorem in higher dimensions. An $n$-dimensional parallelepiped $\mathcal P=\mathcal P(a_1,\dots,a_n)$ in $\mathbb R^n$ (where the $a_i$ are independent vectors in $\mathbb R^n$) is the set of all $x$ such that
$$
x=c_1a_1+\dots+c_na_n,
$$
with $0\leq c_i\leq 1$. We can define the linear transformation $h(x)=A\cdot x$, where $A$ is the $n\times n$ matrix with the $a_i$ as its columns. This gives us $\mathcal P=h([0,1]^n)$. The volume of $h([0,1]^n)$ is equal to that of $h((0,1)^n)$ (those sets are equal modulo a set of measure zero), so we can apply the change-of-variable theorem:
$$
v(\mathcal P)=\int_{h((0,1)^n)}1=\int_{(0,1)^n}\vert\det Dh\vert=\vert\det A\vert.
$$
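This identity can be sanity-checked by crude Monte Carlo integration (a sketch; NumPy, the sample size, and the rejection-sampling setup are my own choices, not part of the answer): estimate the volume of $\mathcal P = h([0,1]^n)$ directly and compare it with $\vert\det A\vert$.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
n = 3
A = rng.standard_normal((n, n))              # columns a_1, ..., a_n (example data)

# P = h([0,1]^n) with h(x) = A x.  A point y lies in P exactly when A^{-1} y
# has all coordinates in [0, 1].

# Exact bounding box of P: its extreme coordinates occur at images of cube
# vertices.
corners = np.array([A @ np.array(c, float) for c in product((0, 1), repeat=n)])
lo, hi = corners.min(axis=0), corners.max(axis=0)

# Uniform samples in the bounding box; keep those whose preimage is in the cube.
y = lo + (hi - lo) * rng.random((200_000, n))
pre = np.linalg.solve(A, y.T).T
inside = np.all((pre >= 0) & (pre <= 1), axis=1)

vol_estimate = inside.mean() * np.prod(hi - lo)
print(vol_estimate, abs(np.linalg.det(A)))   # approximately equal (Monte Carlo error)
```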






Comments:

– Theorem (Aug 16 '18 at 14:56): Isn't the change of variables theorem based on what we are asked to prove?











            Your Answer





            StackExchange.ifUsing("editor", function () {
            return StackExchange.using("mathjaxEditing", function () {
            StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
            StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
            });
            });
            }, "mathjax-editing");

            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "69"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            noCode: true, onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f427528%2fwhy-determinant-is-volume-of-parallelepiped-in-any-dimensions%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            6 Answers
            6






            active

            oldest

            votes








            6 Answers
            6






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            44












            $begingroup$

            If the column vectors are linearly dependent, both the determinant and the volume are zero.
            So assume linear independence.
            The determinant remains unchanged when adding multiples of one column to another. This corresponds to a skew translation of the parallelepiped, which does not affect its volume.
            By a finite sequence of such operations, you can transform your matrix to diagonal form, where the relation between determinant (=product of diagonal entries) and volume of a "rectangle" (=product of side lengths) is apparent.






            share|cite|improve this answer











            $endgroup$









            • 1




              $begingroup$
              thanks. from an constructive view of point, can the skew translation does not change volume plus multilinear condition fully determine the form of determinant?
              $endgroup$
              – ahala
              Jun 23 '13 at 15:34






            • 1




              $begingroup$
              Yes. In abstract language: The vector space of alternating $n$-forms is one-dimensional
              $endgroup$
              – Hagen von Eitzen
              May 2 '14 at 11:58










            • $begingroup$
              I don't think the relation is apparent, even in two dimensions. The proof by picture in 2d takes some (pretty little) movement of equal areas around for the second shear.
              $endgroup$
              – Mitch
              Oct 19 '18 at 19:52
















            44












            $begingroup$

            If the column vectors are linearly dependent, both the determinant and the volume are zero.
            So assume linear independence.
            The determinant remains unchanged when adding multiples of one column to another. This corresponds to a skew translation of the parallelepiped, which does not affect its volume.
            By a finite sequence of such operations, you can transform your matrix to diagonal form, where the relation between determinant (=product of diagonal entries) and volume of a "rectangle" (=product of side lengths) is apparent.






            share|cite|improve this answer











            $endgroup$









            • 1




              $begingroup$
              thanks. from an constructive view of point, can the skew translation does not change volume plus multilinear condition fully determine the form of determinant?
              $endgroup$
              – ahala
              Jun 23 '13 at 15:34






            • 1




              $begingroup$
              Yes. In abstract language: The vector space of alternating $n$-forms is one-dimensional
              $endgroup$
              – Hagen von Eitzen
              May 2 '14 at 11:58










            • $begingroup$
              I don't think the relation is apparent, even in two dimensions. The proof by picture in 2d takes some (pretty little) movement of equal areas around for the second shear.
              $endgroup$
              – Mitch
              Oct 19 '18 at 19:52














            44












            44








            44





            $begingroup$

            If the column vectors are linearly dependent, both the determinant and the volume are zero.
            So assume linear independence.
            The determinant remains unchanged when adding multiples of one column to another. This corresponds to a skew translation of the parallelepiped, which does not affect its volume.
            By a finite sequence of such operations, you can transform your matrix to diagonal form, where the relation between determinant (=product of diagonal entries) and volume of a "rectangle" (=product of side lengths) is apparent.






            share|cite|improve this answer











            $endgroup$



            If the column vectors are linearly dependent, both the determinant and the volume are zero.
            So assume linear independence.
            The determinant remains unchanged when adding multiples of one column to another. This corresponds to a skew translation of the parallelepiped, which does not affect its volume.
            By a finite sequence of such operations, you can transform your matrix to diagonal form, where the relation between determinant (=product of diagonal entries) and volume of a "rectangle" (=product of side lengths) is apparent.







            share|cite|improve this answer














            share|cite|improve this answer



            share|cite|improve this answer








            edited Jun 23 '13 at 16:58









            Pedro Tamaroff

            96.4k10152296




            96.4k10152296










            answered Jun 23 '13 at 13:35









            Hagen von EitzenHagen von Eitzen

            277k21269496




            277k21269496








            • 1




              $begingroup$
              thanks. from an constructive view of point, can the skew translation does not change volume plus multilinear condition fully determine the form of determinant?
              $endgroup$
              – ahala
              Jun 23 '13 at 15:34






            • 1




              $begingroup$
              Yes. In abstract language: The vector space of alternating $n$-forms is one-dimensional
              $endgroup$
              – Hagen von Eitzen
              May 2 '14 at 11:58










            • $begingroup$
              I don't think the relation is apparent, even in two dimensions. The proof by picture in 2d takes some (pretty little) movement of equal areas around for the second shear.
              $endgroup$
              – Mitch
              Oct 19 '18 at 19:52














            • 1




              $begingroup$
              thanks. from an constructive view of point, can the skew translation does not change volume plus multilinear condition fully determine the form of determinant?
              $endgroup$
              – ahala
              Jun 23 '13 at 15:34






            • 1




              $begingroup$
              Yes. In abstract language: The vector space of alternating $n$-forms is one-dimensional
              $endgroup$
              – Hagen von Eitzen
              May 2 '14 at 11:58










            • $begingroup$
              I don't think the relation is apparent, even in two dimensions. The proof by picture in 2d takes some (pretty little) movement of equal areas around for the second shear.
              $endgroup$
              – Mitch
              Oct 19 '18 at 19:52








            1




            1




            $begingroup$
            thanks. from an constructive view of point, can the skew translation does not change volume plus multilinear condition fully determine the form of determinant?
            $endgroup$
            – ahala
            Jun 23 '13 at 15:34




            $begingroup$
            thanks. from an constructive view of point, can the skew translation does not change volume plus multilinear condition fully determine the form of determinant?
            $endgroup$
            – ahala
            Jun 23 '13 at 15:34




            1




            1




            $begingroup$
            Yes. In abstract language: The vector space of alternating $n$-forms is one-dimensional
            $endgroup$
            – Hagen von Eitzen
            May 2 '14 at 11:58




            $begingroup$
            Yes. In abstract language: The vector space of alternating $n$-forms is one-dimensional
            $endgroup$
            – Hagen von Eitzen
            May 2 '14 at 11:58












            $begingroup$
            I don't think the relation is apparent, even in two dimensions. The proof by picture in 2d takes some (pretty little) movement of equal areas around for the second shear.
            $endgroup$
            – Mitch
            Oct 19 '18 at 19:52




            $begingroup$
            I don't think the relation is apparent, even in two dimensions. The proof by picture in 2d takes some (pretty little) movement of equal areas around for the second shear.
            $endgroup$
            – Mitch
            Oct 19 '18 at 19:52











            14












            $begingroup$

            Here is the same argument as Muphrid's, perhaps written in an elementary way.



            Apply Gram-Schmidt orthogonalization to ${v_{1},ldots,v_{n}}$, so that
            begin{eqnarray*}
            v_{1} & = & v_{1}\
            v_{2} & = & c_{12}v_{1}+v_{2}^{perp}\
            v_{3} & = & c_{13}v_{1}+c_{23}v_{2}+v_{3}^{perp}\
            & vdots
            end{eqnarray*}
            where $v_{2}^{perp}$ is orthogonal to $v_{1}$; and $v_{3}^{perp}$
            is orthogonal to $spanleft{ v_{1},v_{2}right} $, etc.



            Since determinant is multilinear, anti-symmetric, then
            begin{eqnarray*}
            detleft(v_{1},v_{2},v_{3},ldots,v_{n}right) & = & detleft(v_{1},c_{12}v_{1}+v_{2}^{perp},c_{13}v_{1}+c_{23}v_{2}+v_{3}^{perp},ldotsright)\
            & = & detleft(v_{1},v_{2}^{perp},v_{3}^{perp},ldots,v_{n}^{perp}right)\
            & = & mbox{signed volume}left(v_{1},ldots,v_{n}right)
            end{eqnarray*}






            share|cite|improve this answer









            $endgroup$


















              14












              $begingroup$

              Here is the same argument as Muphrid's, perhaps written in an elementary way.



              Apply Gram-Schmidt orthogonalization to ${v_{1},ldots,v_{n}}$, so that
              begin{eqnarray*}
              v_{1} & = & v_{1}\
              v_{2} & = & c_{12}v_{1}+v_{2}^{perp}\
              v_{3} & = & c_{13}v_{1}+c_{23}v_{2}+v_{3}^{perp}\
              & vdots
              end{eqnarray*}
              where $v_{2}^{perp}$ is orthogonal to $v_{1}$; and $v_{3}^{perp}$
              is orthogonal to $spanleft{ v_{1},v_{2}right} $, etc.



              Since determinant is multilinear, anti-symmetric, then
              begin{eqnarray*}
              detleft(v_{1},v_{2},v_{3},ldots,v_{n}right) & = & detleft(v_{1},c_{12}v_{1}+v_{2}^{perp},c_{13}v_{1}+c_{23}v_{2}+v_{3}^{perp},ldotsright)\
              & = & detleft(v_{1},v_{2}^{perp},v_{3}^{perp},ldots,v_{n}^{perp}right)\
              & = & mbox{signed volume}left(v_{1},ldots,v_{n}right)
              end{eqnarray*}






              share|cite|improve this answer









              $endgroup$
















                14












                14








                14





                $begingroup$

                Here is the same argument as Muphrid's, perhaps written in an elementary way.



                Apply Gram-Schmidt orthogonalization to ${v_{1},ldots,v_{n}}$, so that
                begin{eqnarray*}
                v_{1} & = & v_{1}\
                v_{2} & = & c_{12}v_{1}+v_{2}^{perp}\
                v_{3} & = & c_{13}v_{1}+c_{23}v_{2}+v_{3}^{perp}\
                & vdots
                end{eqnarray*}
                where $v_{2}^{perp}$ is orthogonal to $v_{1}$; and $v_{3}^{perp}$
                is orthogonal to $spanleft{ v_{1},v_{2}right} $, etc.



                Since determinant is multilinear, anti-symmetric, then
                begin{eqnarray*}
                detleft(v_{1},v_{2},v_{3},ldots,v_{n}right) & = & detleft(v_{1},c_{12}v_{1}+v_{2}^{perp},c_{13}v_{1}+c_{23}v_{2}+v_{3}^{perp},ldotsright)\
                & = & detleft(v_{1},v_{2}^{perp},v_{3}^{perp},ldots,v_{n}^{perp}right)\
                & = & mbox{signed volume}left(v_{1},ldots,v_{n}right)
                end{eqnarray*}






                share|cite|improve this answer









                $endgroup$



                Here is the same argument as Muphrid's, perhaps written in an elementary way.



                Apply Gram-Schmidt orthogonalization to ${v_{1},ldots,v_{n}}$, so that
                begin{eqnarray*}
                v_{1} & = & v_{1}\
                v_{2} & = & c_{12}v_{1}+v_{2}^{perp}\
                v_{3} & = & c_{13}v_{1}+c_{23}v_{2}+v_{3}^{perp}\
                & vdots
                end{eqnarray*}
                where $v_{2}^{perp}$ is orthogonal to $v_{1}$; and $v_{3}^{perp}$
                is orthogonal to $spanleft{ v_{1},v_{2}right} $, etc.



                Since determinant is multilinear, anti-symmetric, then
                begin{eqnarray*}
                detleft(v_{1},v_{2},v_{3},ldots,v_{n}right) & = & detleft(v_{1},c_{12}v_{1}+v_{2}^{perp},c_{13}v_{1}+c_{23}v_{2}+v_{3}^{perp},ldotsright)\
                & = & detleft(v_{1},v_{2}^{perp},v_{3}^{perp},ldots,v_{n}^{perp}right)\
                & = & mbox{signed volume}left(v_{1},ldots,v_{n}right)
                end{eqnarray*}







                share|cite|improve this answer












                share|cite|improve this answer



                share|cite|improve this answer










                answered Nov 1 '13 at 2:50









                JamesJames

                1,10768




                1,10768























                    12












                    $begingroup$

                    In 2d, you calculate the area of a parallelogram spanned by two vectors using the cross product. In 3d, you calculate the volume of a parallelepiped using the triple scalar product. Both of these can be written in terms of a determinant, but it's probably not clear to you what the proper generalization is to higher dimensions.



                    That generalization is called the wedge product. Given $n$ vectors $v_1, v_2, ldots, v_n$, the wedge product $v_1 wedge v_2 wedge ldots wedge v_n$ is called an $n$-vector, and it has as its magnitude the $n$-volume of that $n$-parallelepiped.



                    What is the relationship between the wedge product and the determinant? Quite simple, actually. There is a natural generalization of linear maps to work on $k$-vectors. Given a linear map $underline T$ (which can be represented as a matrix), the action of that map on a $k$-vector is defined as



                    $$underline T(v_1 wedge v_2 wedge ldots wedge v_k) equiv underline T(v_1) wedge underline T(v_2) wedge ldots wedge underline T(v_k)$$



                    When talking about $n$-vectors in an $n$-dimensional space, it's important to realize that the "vector space" of these $n$-vectors is in fact one-dimensional. That is, if you think about volume, there is only one such unit volume in a given space, and all other volumes are just scalar multiples of it. Hence, when we talk about the action of a linear map on an $n$-vector, we can see that



                    $$underline T(v_1 wedge v_2 wedge ldots wedge v_n) = alpha [v_1 wedge v_2 wedge ldots wedge v_n]$$



                    for some scalar $alpha$. In fact, this is a coordinate system independent definition of the determinant!



                    When you build a matrix out of $n$ vectors $f_1, f_2, ldots, f_n$ as the matrix's columns, what you're really doing is the following: you're saying that, if you have a basis $e_1, e_2, ldots, e_n$, then you're defining a map $underline T$ such that $underline T(e_1) = f_1$, $underline T(e_2) = f_2$, and so on. So when you input $e_1 wedge e_2 wedge ldots wedge e_n$, you get



                    $$underline T(e_1 wedge e_2 wedge ldots wedge e_n) = (det underline T) e_1 wedge e_2 wedge ldots wedge e_n= f_1 wedge f_2 wedge ldots wedge f_n$$



                    This is how you can use a matrix determinant to calculate volumes: it's just an easy way of constructing something that automatically computes the wedge product.





                    Edit: how one can see that the wedge product accurately gives the volume of a parallelepiped. Any vector can be broken down into perpendicular and parallel parts with respect to another vector, to a plane, and so on (or to any $k$-vector). As such, if I have two vectors $a$ and $b$, then the wedge product $a wedge b = a wedge b_perp$, where $b_perp$ is effectively the height of the parallelogram. Similarly, if I construct a parallelepiped with a vector $c$, then the wedge product $a wedge b wedge c = (a wedge b_perp) wedge c_perp$, where $c_perp$ lies entirely normal to $a wedge b_perp$. So we can recursively do this for any $k$-vector, looking at orthogonal vectors instead, which is much simpler to see the volumes from.






                    share|cite|improve this answer











                    $endgroup$













                    • $begingroup$
                      Thanks. Actually my question raised exactly from learning about wedge product. How can one see that the wedge product has its magnitude the n-volume of that n-parallelepiped? I guess it is equivalent to ask in term of determinant as in my question.
                      $endgroup$
                      – ahala
                      Jun 23 '13 at 15:39












                    • $begingroup$
                      I've added a section to this effect.
                      $endgroup$
                      – Muphrid
                      Jun 23 '13 at 16:27
















                    12












                    $begingroup$

                    In 2d, you calculate the area of a parallelogram spanned by two vectors using the cross product. In 3d, you calculate the volume of a parallelepiped using the triple scalar product. Both of these can be written in terms of a determinant, but it's probably not clear to you what the proper generalization is to higher dimensions.



                    That generalization is called the wedge product. Given $n$ vectors $v_1, v_2, ldots, v_n$, the wedge product $v_1 wedge v_2 wedge ldots wedge v_n$ is called an $n$-vector, and it has as its magnitude the $n$-volume of that $n$-parallelepiped.



                    What is the relationship between the wedge product and the determinant? Quite simple, actually. There is a natural generalization of linear maps to work on $k$-vectors. Given a linear map $underline T$ (which can be represented as a matrix), the action of that map on a $k$-vector is defined as



                    $$underline T(v_1 wedge v_2 wedge ldots wedge v_k) equiv underline T(v_1) wedge underline T(v_2) wedge ldots wedge underline T(v_k)$$



                    When talking about $n$-vectors in an $n$-dimensional space, it's important to realize that the "vector space" of these $n$-vectors is in fact one-dimensional. That is, if you think about volume, there is only one such unit volume in a given space, and all other volumes are just scalar multiples of it. Hence, when we talk about the action of a linear map on an $n$-vector, we can see that



                    $$underline T(v_1 wedge v_2 wedge ldots wedge v_n) = alpha [v_1 wedge v_2 wedge ldots wedge v_n]$$



                    for some scalar $alpha$. In fact, this is a coordinate system independent definition of the determinant!



                    When you build a matrix out of $n$ vectors $f_1, f_2, ldots, f_n$ as the matrix's columns, what you're really doing is the following: you're saying that, if you have a basis $e_1, e_2, ldots, e_n$, then you're defining a map $underline T$ such that $underline T(e_1) = f_1$, $underline T(e_2) = f_2$, and so on. So when you input $e_1 wedge e_2 wedge ldots wedge e_n$, you get



                    $$underline T(e_1 wedge e_2 wedge ldots wedge e_n) = (det underline T) e_1 wedge e_2 wedge ldots wedge e_n= f_1 wedge f_2 wedge ldots wedge f_n$$



                    This is how you can use a matrix determinant to calculate volumes: it's just an easy way of constructing something that automatically computes the wedge product.





                    Edit: how one can see that the wedge product accurately gives the volume of a parallelepiped. Any vector can be broken down into perpendicular and parallel parts with respect to another vector, to a plane, and so on (or to any $k$-vector). As such, if I have two vectors $a$ and $b$, then the wedge product $a wedge b = a wedge b_perp$, where $b_perp$ is effectively the height of the parallelogram. Similarly, if I construct a parallelepiped with a vector $c$, then the wedge product $a wedge b wedge c = (a wedge b_perp) wedge c_perp$, where $c_perp$ lies entirely normal to $a wedge b_perp$. So we can recursively do this for any $k$-vector, looking at orthogonal vectors instead, which is much simpler to see the volumes from.






                    share|cite|improve this answer











                    $endgroup$













                    • $begingroup$
                      Thanks. Actually my question raised exactly from learning about wedge product. How can one see that the wedge product has its magnitude the n-volume of that n-parallelepiped? I guess it is equivalent to ask in term of determinant as in my question.
                      $endgroup$
                      – ahala
                      Jun 23 '13 at 15:39












                    • $begingroup$
                      I've added a section to this effect.
                      $endgroup$
                      – Muphrid
                      Jun 23 '13 at 16:27














                    12












                    12








                    12





                    $begingroup$

                    In 2d, you calculate the area of a parallelogram spanned by two vectors using the cross product. In 3d, you calculate the volume of a parallelepiped using the triple scalar product. Both of these can be written in terms of a determinant, but it's probably not clear to you what the proper generalization is to higher dimensions.



                    That generalization is called the wedge product. Given $n$ vectors $v_1, v_2, ldots, v_n$, the wedge product $v_1 wedge v_2 wedge ldots wedge v_n$ is called an $n$-vector, and it has as its magnitude the $n$-volume of that $n$-parallelepiped.



                    What is the relationship between the wedge product and the determinant? Quite simple, actually. There is a natural generalization of linear maps to work on $k$-vectors. Given a linear map $underline T$ (which can be represented as a matrix), the action of that map on a $k$-vector is defined as



                    $$underline T(v_1 wedge v_2 wedge ldots wedge v_k) equiv underline T(v_1) wedge underline T(v_2) wedge ldots wedge underline T(v_k)$$



                    When talking about $n$-vectors in an $n$-dimensional space, it's important to realize that the "vector space" of these $n$-vectors is in fact one-dimensional. That is, if you think about volume, there is only one such unit volume in a given space, and all other volumes are just scalar multiples of it. Hence, when we talk about the action of a linear map on an $n$-vector, we can see that



                    $$underline T(v_1 wedge v_2 wedge ldots wedge v_n) = alpha [v_1 wedge v_2 wedge ldots wedge v_n]$$



                    for some scalar $alpha$. In fact, this is a coordinate system independent definition of the determinant!



                    When you build a matrix out of $n$ vectors $f_1, f_2, ldots, f_n$ as the matrix's columns, what you're really doing is the following: you're saying that, if you have a basis $e_1, e_2, ldots, e_n$, then you're defining a map $underline T$ such that $underline T(e_1) = f_1$, $underline T(e_2) = f_2$, and so on. So when you input $e_1 wedge e_2 wedge ldots wedge e_n$, you get



                    $$underline T(e_1 wedge e_2 wedge ldots wedge e_n) = (det underline T) e_1 wedge e_2 wedge ldots wedge e_n= f_1 wedge f_2 wedge ldots wedge f_n$$



                    This is how you can use a matrix determinant to calculate volumes: it's just an easy way of constructing something that automatically computes the wedge product.





                    Edit: how one can see that the wedge product accurately gives the volume of a parallelepiped. Any vector can be broken down into perpendicular and parallel parts with respect to another vector, to a plane, and so on (or to any $k$-vector). As such, if I have two vectors $a$ and $b$, then the wedge product $a wedge b = a wedge b_perp$, where $b_perp$ is effectively the height of the parallelogram. Similarly, if I construct a parallelepiped with a vector $c$, then the wedge product $a wedge b wedge c = (a wedge b_perp) wedge c_perp$, where $c_perp$ lies entirely normal to $a wedge b_perp$. So we can recursively do this for any $k$-vector, looking at orthogonal vectors instead, which is much simpler to see the volumes from.






                    share|cite|improve this answer











                    $endgroup$



                    In 2d, you calculate the area of a parallelogram spanned by two vectors using the cross product. In 3d, you calculate the volume of a parallelepiped using the triple scalar product. Both of these can be written in terms of a determinant, but it's probably not clear to you what the proper generalization is to higher dimensions.



                    That generalization is called the wedge product. Given $n$ vectors $v_1, v_2, ldots, v_n$, the wedge product $v_1 wedge v_2 wedge ldots wedge v_n$ is called an $n$-vector, and it has as its magnitude the $n$-volume of that $n$-parallelepiped.



                    What is the relationship between the wedge product and the determinant? Quite simple, actually. There is a natural generalization of linear maps to work on $k$-vectors. Given a linear map $underline T$ (which can be represented as a matrix), the action of that map on a $k$-vector is defined as



                    $$underline T(v_1 wedge v_2 wedge ldots wedge v_k) equiv underline T(v_1) wedge underline T(v_2) wedge ldots wedge underline T(v_k)$$



                    When talking about $n$-vectors in an $n$-dimensional space, it's important to realize that the "vector space" of these $n$-vectors is in fact one-dimensional. That is, if you think about volume, there is only one such unit volume in a given space, and all other volumes are just scalar multiples of it. Hence, when we talk about the action of a linear map on an $n$-vector, we can see that



                    $$underline T(v_1 wedge v_2 wedge ldots wedge v_n) = alpha [v_1 wedge v_2 wedge ldots wedge v_n]$$



                    for some scalar $alpha$. In fact, this is a coordinate system independent definition of the determinant!



                    When you build a matrix out of $n$ vectors $f_1, f_2, ldots, f_n$ as the matrix's columns, what you're really doing is the following: you're saying that, if you have a basis $e_1, e_2, ldots, e_n$, then you're defining a map $underline T$ such that $underline T(e_1) = f_1$, $underline T(e_2) = f_2$, and so on. So when you input $e_1 wedge e_2 wedge ldots wedge e_n$, you get



                    $$underline T(e_1 wedge e_2 wedge ldots wedge e_n) = (det underline T) e_1 wedge e_2 wedge ldots wedge e_n= f_1 wedge f_2 wedge ldots wedge f_n$$



                    This is how you can use a matrix determinant to calculate volumes: it's just an easy way of constructing something that automatically computes the wedge product.





                    Edit: how one can see that the wedge product accurately gives the volume of a parallelepiped. Any vector can be broken down into perpendicular and parallel parts with respect to another vector, to a plane, and so on (or to any $k$-vector). As such, if I have two vectors $a$ and $b$, then the wedge product $a wedge b = a wedge b_perp$, where $b_perp$ is effectively the height of the parallelogram. Similarly, if I construct a parallelepiped with a vector $c$, then the wedge product $a wedge b wedge c = (a wedge b_perp) wedge c_perp$, where $c_perp$ lies entirely normal to $a wedge b_perp$. So we can recursively do this for any $k$-vector, looking at orthogonal vectors instead, which is much simpler to see the volumes from.







                    share|cite|improve this answer














                    share|cite|improve this answer



                    share|cite|improve this answer








                    edited Jun 23 '13 at 16:27

























                    answered Jun 23 '13 at 14:51









                    MuphridMuphrid

                    15.5k11541




                    15.5k11541












                    • $begingroup$
                      Thanks. Actually my question raised exactly from learning about wedge product. How can one see that the wedge product has its magnitude the n-volume of that n-parallelepiped? I guess it is equivalent to ask in term of determinant as in my question.
                      $endgroup$
                      – ahala
                      Jun 23 '13 at 15:39












                    • $begingroup$
                      I've added a section to this effect.
                      $endgroup$
                      – Muphrid
                      Jun 23 '13 at 16:27


















                    • $begingroup$
                      Thanks. Actually my question raised exactly from learning about wedge product. How can one see that the wedge product has its magnitude the n-volume of that n-parallelepiped? I guess it is equivalent to ask in term of determinant as in my question.
                      $endgroup$
                      – ahala
                      Jun 23 '13 at 15:39












                    • $begingroup$
                      I've added a section to this effect.
                      $endgroup$
                      – Muphrid
                      Jun 23 '13 at 16:27
















                    $begingroup$
                    Thanks. Actually my question raised exactly from learning about wedge product. How can one see that the wedge product has its magnitude the n-volume of that n-parallelepiped? I guess it is equivalent to ask in term of determinant as in my question.
                    $endgroup$
                    – ahala
                    Jun 23 '13 at 15:39






                    $begingroup$
                    Thanks. Actually my question raised exactly from learning about wedge product. How can one see that the wedge product has its magnitude the n-volume of that n-parallelepiped? I guess it is equivalent to ask in term of determinant as in my question.
                    $endgroup$
                    – ahala
                    Jun 23 '13 at 15:39














                    $begingroup$
                    I've added a section to this effect.
                    $endgroup$
                    – Muphrid
                    Jun 23 '13 at 16:27




                    $begingroup$
                    I've added a section to this effect.
                    $endgroup$
                    – Muphrid
                    Jun 23 '13 at 16:27











                    4












                    $begingroup$

                    The determinant of a matrix A is the unique function that satisfies:





                    1. $det(A)=0$ when two columns are equal

                    2. the determinant is linear in the columns

                    3. if A is the identity $det(A)=1$.


                    You can easily convince yourself that the oriented volume $operatorname{vol}(v_1,v_2,ldots,v_n)$ between $v_1, v_2,ldots, v_n$ vectors is a function that satisfies exactly the same properties if we place the vectors as the columns of a matrix $A=(v_1,ldots,v_n)$. Hence $operatorname{vol}(v_1,v_2,ldots,v_n)=det(A)$.






                    share|cite|improve this answer











                    $endgroup$









                    • 2




                      $begingroup$
                      one has to be convinced why such function is unique to reach the conclusion.
                      $endgroup$
                      – ahala
                      Dec 12 '16 at 14:37






                    • 1




                      $begingroup$
                      True. You can prove that the determinant is unique by constructing it using Gaussian elimination as the signed product of all the pivot of a matrix.
                      $endgroup$
                      – jacopoviti
                      Dec 13 '16 at 14:59
















                    4












                    $begingroup$

                    The determinant of a matrix A is the unique function that satisfies:





                    1. $det(A)=0$ when two columns are equal

                    2. the determinant is linear in the columns

                    3. if A is the identity $det(A)=1$.


                    You can easily convince yourself that the oriented volume $operatorname{vol}(v_1,v_2,ldots,v_n)$ between $v_1, v_2,ldots, v_n$ vectors is a function that satisfies exactly the same properties if we place the vectors as the columns of a matrix $A=(v_1,ldots,v_n)$. Hence $operatorname{vol}(v_1,v_2,ldots,v_n)=det(A)$.






                    share|cite|improve this answer











                    $endgroup$









                    • 2




                      $begingroup$
                      one has to be convinced why such function is unique to reach the conclusion.
                      $endgroup$
                      – ahala
                      Dec 12 '16 at 14:37






                    • 1




                      $begingroup$
                      True. You can prove that the determinant is unique by constructing it using Gaussian elimination as the signed product of all the pivot of a matrix.
                      $endgroup$
                      – jacopoviti
                      Dec 13 '16 at 14:59














                    4












                    4








                    4





                    $begingroup$

                    The determinant of a matrix A is the unique function that satisfies:





                    1. $det(A)=0$ when two columns are equal

                    2. the determinant is linear in the columns

                    3. if A is the identity $det(A)=1$.


                    You can easily convince yourself that the oriented volume $operatorname{vol}(v_1,v_2,ldots,v_n)$ between $v_1, v_2,ldots, v_n$ vectors is a function that satisfies exactly the same properties if we place the vectors as the columns of a matrix $A=(v_1,ldots,v_n)$. Hence $operatorname{vol}(v_1,v_2,ldots,v_n)=det(A)$.






                    share|cite|improve this answer











                    $endgroup$



                    The determinant of a matrix A is the unique function that satisfies:





                    1. $det(A)=0$ when two columns are equal

                    2. the determinant is linear in the columns

                    3. if A is the identity $det(A)=1$.


                    You can easily convince yourself that the oriented volume $operatorname{vol}(v_1,v_2,ldots,v_n)$ between $v_1, v_2,ldots, v_n$ vectors is a function that satisfies exactly the same properties if we place the vectors as the columns of a matrix $A=(v_1,ldots,v_n)$. Hence $operatorname{vol}(v_1,v_2,ldots,v_n)=det(A)$.







                    share|cite|improve this answer














                    share|cite|improve this answer



                    share|cite|improve this answer








                    edited Nov 30 '18 at 17:19









                    José Carlos Santos

                    153k22123225




                    153k22123225










                    answered Oct 27 '16 at 17:48









                    jacopovitijacopoviti

                    684




                    684








                    • 2




                      $begingroup$
                      one has to be convinced why such function is unique to reach the conclusion.
                      $endgroup$
                      – ahala
                      Dec 12 '16 at 14:37






                    • 1




                      $begingroup$
                      True. You can prove that the determinant is unique by constructing it using Gaussian elimination as the signed product of all the pivot of a matrix.
                      $endgroup$
                      – jacopoviti
                      Dec 13 '16 at 14:59














                    • 2




                      $begingroup$
                      one has to be convinced why such function is unique to reach the conclusion.
                      $endgroup$
                      – ahala
                      Dec 12 '16 at 14:37






                    • 1




                      $begingroup$
                      True. You can prove that the determinant is unique by constructing it using Gaussian elimination as the signed product of all the pivot of a matrix.
                      $endgroup$
                      – jacopoviti
                      Dec 13 '16 at 14:59








                    2




                    2




                    $begingroup$
                    one has to be convinced why such function is unique to reach the conclusion.
                    $endgroup$
                    – ahala
                    Dec 12 '16 at 14:37




                    $begingroup$
                    one has to be convinced why such function is unique to reach the conclusion.
                    $endgroup$
                    – ahala
                    Dec 12 '16 at 14:37




                    1




                    1




                    $begingroup$
                    True. You can prove that the determinant is unique by constructing it using Gaussian elimination as the signed product of all the pivot of a matrix.
                    $endgroup$
                    – jacopoviti
                    Dec 13 '16 at 14:59




                    $begingroup$
                    True. You can prove that the determinant is unique by constructing it using Gaussian elimination as the signed product of all the pivot of a matrix.
                    $endgroup$
                    – jacopoviti
                    Dec 13 '16 at 14:59











                    1












                    $begingroup$

                    Determinant involves a cross-product of the first two vectors and a dot of the result with the third. The result of a cross product is a vector whose magnitude is the area of its null space. Said simply, any plane in 3D is the null space of its normal.The size of the plane is defined by the length of the normal. The volume is found by projecting this normal onto the third vector.






                    share|cite|improve this answer









                    $endgroup$













                    answered Dec 11 '16 at 22:27









                    Shadi

                    111
















                    • $begingroup$
                      This question is not about the 3D case; it is about the general $n$D case.
                      $endgroup$
                      – ahala
                      Dec 12 '16 at 14:39










                    • $begingroup$
                      Yes, my bad. In nD we get an $n\times n$ matrix. Remove one row and find the null space of the remaining $(n-1)\times n$ matrix, which is spanned by a vector perpendicular to the hyperplane containing the other rows. Dotting the removed row with that unit normal gives the height; multiplied by the $(n-1)$-dimensional volume of the base, this gives the volume in nD (a numerical check is sketched below).
                      $endgroup$
                      – Shadi
                      Dec 26 '16 at 3:14
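
                    A short numerical check of the recipe in the comment above (an illustration added here, not part of the original comment; it assumes NumPy and a generic random matrix):

                        import numpy as np

                        rng = np.random.default_rng(0)
                        n = 5
                        A = rng.standard_normal((n, n))

                        B, r = A[:-1], A[-1]                           # base rows and the removed row
                        u = np.linalg.svd(B)[2][-1]                    # unit normal: spans the null space of B
                        base_volume = np.sqrt(np.linalg.det(B @ B.T))  # (n-1)-volume of the base (Gram determinant)
                        height = abs(r @ u)                            # projection of the removed row onto the normal

                        print(abs(np.linalg.det(A)), base_volume * height)  # the two values agree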





























                    0












                    $begingroup$

                    You can also invoke the change-of-variables theorem in higher dimensions. An $n$-dimensional parallelepiped $\mathcal P=\mathcal P(a_1,\dots,a_n)$ in $\mathbb R^n$ (where the $a_i$ are independent vectors in $\mathbb R^n$) is the set of all $x$ such that
                    $$
                    x=c_1a_1+\dots+c_na_n,
                    $$
                    with $0\leq c_i\leq 1$. Define the linear transformation $h(x)=A\cdot x$, where $A$ is the $n\times n$ matrix with the $a_i$ as its columns. This gives $\mathcal P=h([0,1]^n)$. The volume of $h([0,1]^n)$ equals that of $h((0,1)^n)$ (the two sets differ only by a set of measure zero), so we can apply the change-of-variables theorem:
                    $$
                    v(\mathcal P)=\int_{h((0,1)^n)}1=\int_{(0,1)^n}\vert\det Dh\vert=\vert\det A\vert.
                    $$
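
                    A rough Monte Carlo illustration of this identity (added here as a sketch, not part of the original answer; it assumes NumPy and uses a random $3\times 3$ matrix): sample points in an axis-aligned box containing $\mathcal P$ and count how many have preimage $A^{-1}x$ in $[0,1]^n$.

                        import numpy as np

                        rng = np.random.default_rng(1)
                        n = 3
                        A = rng.standard_normal((n, n))

                        # axis-aligned bounding box of P = A [0,1]^n: each coordinate of A c
                        # ranges between the sums of the negative and positive entries of that row
                        lo = np.minimum(A, 0).sum(axis=1)
                        hi = np.maximum(A, 0).sum(axis=1)
                        box_volume = np.prod(hi - lo)

                        N = 200_000
                        x = rng.uniform(lo, hi, size=(N, n))
                        c = np.linalg.solve(A, x.T).T                   # preimages under h
                        inside = np.all((c >= 0) & (c <= 1), axis=1)    # points that land in the unit cube

                        print(abs(np.linalg.det(A)), box_volume * inside.mean())  # roughly equal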






                    share|cite|improve this answer











                    $endgroup$









                    edited Apr 28 '18 at 13:32

























                    answered Apr 28 '18 at 12:49









                    Sha Vuklia

                    1,3651717












                    • 1




                      $begingroup$
                      Isn't the change of variables theorem based on what we are asked to prove?
                      $endgroup$
                      – Theorem
                      Aug 16 '18 at 14:56































