Definition of adjoint of a linear map












I am having a tough time understanding the adjoint of a linear map.
Consider a linear map between two vector spaces, $f:V\rightarrow W$, and let $f^*$ denote its adjoint.

  • According to this video https://www.youtube.com/watch?v=SjCs_HyYtSo (around time 5:50), the author explains that the adjoint of a linear map is a function from the dual of $W$ (denoted by $W^*$) to the dual of $V$ (denoted by $V^*$). So this implies $f^*:W^*\rightarrow V^*$.

  • On the other hand, in the pdf http://math.mit.edu/~trasched/18.700.f10/lect17-article.pdf, the adjoint of the linear map is defined as another linear map from $W$ to $V$. So this implies $f^*:W\rightarrow V$.

Can somebody clarify this discrepancy?










linear-algebra

asked May 3 '16 at 14:48
Coniferous

















  • Without looking at the links: the second one probably considers only Hilbert spaces. We have a concept of a Hilbert space adjoint between the spaces themselves, and the more general concept of an adjoint mapping the duals. The Hilbert space adjoint corresponds to the general adjoint under the Riesz anti-isomorphism between a Hilbert space and its dual.
    – Daniel Fischer
    May 3 '16 at 14:54
















2 Answers



















The adjoint of a linear map $T: \Bbb V \to \Bbb W$ between two vector spaces is given by the definition in the first source: It is the map $T^* : \Bbb W^* \to \Bbb V^*$ defined by
$$(T^*(\phi))(v) := \phi(T(v))$$ for all $\phi \in \Bbb W^*$ and $v \in \Bbb V$.
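For a concrete coordinate picture of this first definition, here is a minimal NumPy sketch, assuming functionals on $\Bbb R^m$ are written as row vectors, so that $T^*(\phi)$ is simply $\phi$ precomposed with $T$ (all names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 3, 4                     # dim V = 3, dim W = 4
T = rng.normal(size=(m, n))     # matrix of T : V -> W in the standard bases

phi = rng.normal(size=(1, m))   # a functional phi in W*, written as a row vector
v = rng.normal(size=(n, 1))     # a vector v in V, written as a column vector

# T*(phi) is the functional v |-> phi(T(v)); as a row vector it is phi @ T.
T_star_phi = phi @ T

# Check the defining identity (T*(phi))(v) = phi(T(v)).
lhs = (T_star_phi @ v).item()
rhs = (phi @ (T @ v)).item()
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```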



In the second source, $\Bbb V$ and $\Bbb W$ are inner product spaces. An inner product $\langle \,\cdot\, , \,\cdot\, \rangle$ on a vector space $\Bbb U$ defines an isomorphism $\Phi : \Bbb U \stackrel{\cong}{\to} \Bbb U^*$ by
$$(\Phi(u))(u') := \langle u, u' \rangle .$$ Thus, for any linear map $T: \Bbb V \to \Bbb W$ between inner product spaces, we can identify $\Bbb W^*$ with $\Bbb W$ and $\Bbb V^*$ with $\Bbb V$, and hence $T^*$ with a map $\Bbb W \to \Bbb V$. Unwinding the definitions shows that this map satisfies the identity $$\langle w, T v \rangle = \langle T^* w, v \rangle$$ in the second definition.



It is an instructive exercise to write out all of these objects in terms of their matrix representations with respect to some bases of $\Bbb V, \Bbb W$ (of course, this only makes sense when the vector spaces are finite-dimensional, but even in the infinite-dimensional case it is a useful mnemonic). In particular, if $\Bbb V, \Bbb W$ are finite-dimensional real vector spaces and we choose orthonormal bases of both spaces, one can show that the matrix representations of $T^*$ and $T$ are related by $[T^*] = [T]^{\top}$.
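As a quick numerical sanity check of that last claim, assuming the standard dot product on $\Bbb R^n$ and $\Bbb R^m$ (so the standard bases are orthonormal), the transpose of $[T]$ should satisfy the adjoint identity; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 3, 4
T = rng.normal(size=(m, n))   # matrix of T : R^n -> R^m in the standard (orthonormal) bases
T_star = T.T                  # claim: the adjoint is represented by the transpose

v = rng.normal(size=n)
w = rng.normal(size=m)

# <w, T v> should equal <T* w, v> for the standard dot product.
assert np.isclose(np.dot(w, T @ v), np.dot(T_star @ w, v))
print(np.dot(w, T @ v), np.dot(T_star @ w, v))
```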






edited Dec 24 '18 at 1:45
answered May 3 '16 at 15:23
Travis
























  • In $[T^*] = {}^t[T]$, does ${}^t[T]$ denote the transpose of the matrix representation of $T$?
    – Coniferous
    May 3 '16 at 18:54










  • Yes; usually I prefer the notation ${}^T$, but I thought it would look peculiar here given the use of $T$ for the transformation.
    – Travis
    May 3 '16 at 19:11






















If the vector spaces $V$ and $W$ have respective nondegenerate bilinear forms $B_V$ and $B_W$, a concept closely related to the transpose – the adjoint – may be defined:



If $f : V \to W$ is a linear map between vector spaces $V$ and $W$, we define $g$ as the adjoint of $f$ if $g : W \to V$ satisfies

$$B_V(v, g(w)) = B_W(f(v), w) \qquad \forall\, v \in V,\ w \in W.$$



These bilinear forms define an isomorphism between $V$ and $V^*$, and between $W$ and $W^*$, resulting in an isomorphism between the transpose and the adjoint of $f$. The matrix of the adjoint of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms. In this context, many authors use the term transpose to refer to the adjoint as defined here.



The adjoint allows us to consider whether $g : W \to V$ is equal to $f^{-1} : W \to V$. In particular, this allows the orthogonal group over a vector space $V$ with a quadratic form to be defined without reference to matrices (or their components) as the set of all linear maps $V \to V$ for which the adjoint equals the inverse.
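To see this in coordinates, one can represent the bilinear forms by invertible symmetric matrices $B_V, B_W$ and $f$ by a matrix $F$; the defining condition then forces the matrix of $g$ to be $B_V^{-1} F^{\top} B_W$, which collapses to the plain transpose exactly when $B_V$ and $B_W$ are identity matrices. A minimal NumPy sketch, assuming real vector spaces and illustrative names:

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 3, 4
F = rng.normal(size=(m, n))                     # matrix of f : V -> W

# Nondegenerate symmetric bilinear forms, built here as symmetric positive definite matrices.
A_V = rng.normal(size=(n, n)); B_V = A_V @ A_V.T + np.eye(n)
A_W = rng.normal(size=(m, m)); B_W = A_W @ A_W.T + np.eye(m)

# Matrix of the adjoint g forced by B_V(v, g(w)) = B_W(f(v), w).
G = np.linalg.inv(B_V) @ F.T @ B_W

v = rng.normal(size=n)
w = rng.normal(size=m)

lhs = v @ B_V @ (G @ w)                         # B_V(v, g(w))
rhs = (F @ v) @ B_W @ w                         # B_W(f(v), w)
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```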






answered May 3 '16 at 14:54
Rebellos





















