What's the difference between $\mathbb{R}^2$ and the complex plane?














I haven't taken any complex analysis course yet, but now I have this question that relates to it.

Let's look at a very simple example. Suppose $x, y$ and $z$ are Cartesian coordinates and we have a function $z = f(x,y) = \cos(x) + \sin(y)$. Now, instead, I replace the $\mathbb{R}^2$ plane of $x, y$ with the complex plane and define a new function, $z = \cos(t) + i\sin(t)$.

So, can anyone tell me some famous and fundamental differences between the complex plane and $\mathbb{R}^2$, using this example: some features $\mathbb{R}^2$ has but the complex plane doesn't, or the other way around? (Actually, I am trying to understand why electrical engineers always want to put signals into the complex numbers rather than $\mathbb{R}^2$, if a signal is affected by two components.)

Thanks for helping me out!


















  • In electrical engineering, we use complex numbers instead of points in the 2D plane for one basic reason: you can multiply and divide complex numbers, which you can't do with points. – Ataraxia, Jul 15 '13 at 21:02










  • But, can the multiplication and division still make sense in the context after you do so? – Cancan, Jul 15 '13 at 21:06






  • It seems to me that complex numbers are simply a handy way of writing this electrical engineering stuff, that's all; no electrical engineer understands anything about metric spaces, topological spaces, or fields, etc. That's all they want to do, after all. – user85461, Jul 16 '13 at 0:59






  • @Heinz: Re: "no electrical engineer understands anything about metric, topological spaces or fields etc.": Would you also say that no mathematician understands anything about electricity? – ruakh, Jul 16 '13 at 4:48






  • @Cancan it might be more appropriate to say that a "useful" and natural canonical product is defined in the complex plane, whereas in $\mathbb{R}^2$ such a useful product appears contrived, arbitrary, and unnatural. – Justin L., Jul 16 '13 at 7:25
















complex-analysis complex-numbers signal-processing






edited Jul 15 '13 at 22:01 by Zev Chonoles
asked Jul 15 '13 at 20:20 by Cancan








10 Answers


















$\mathbb{R}^2$ and $\mathbb{C}$ have the same cardinality, so there are (lots of) bijective maps from one to the other. In fact, there is one (or perhaps a few) that you might call "obvious" or "natural" bijections, e.g. $(a,b) \mapsto a+bi$. This is more than just a bijection:




  • $\mathbb{R}^2$ and $\mathbb{C}$ are also metric spaces (under the 'obvious' metrics), and this bijection is an isometry, so these spaces "look the same".

  • $\mathbb{R}^2$ and $\mathbb{C}$ are also groups under addition, and this bijection is a group homomorphism, so these spaces "have the same addition".

  • $\mathbb{R}$ is a subfield of $\mathbb{C}$ in a natural way, so we can consider $\mathbb{C}$ as an $\mathbb{R}$-vector space, where it becomes isomorphic to $\mathbb{R}^2$ (this is more or less the same statement as above).


Here are some differences:




  • Viewing $\mathbb{R}$ as a ring, $\mathbb{R}^2$ is actually a direct (Cartesian) product of $\mathbb{R}$ with itself. Direct products of rings in general come with a natural "product" multiplication, $(u,v)\cdot (x,y) = (ux, vy)$, and it is not usually the case that $(u,v)\cdot (x,y) = (ux-vy, uy+vx)$ makes sense or is interesting in general direct products of rings. The fact that it makes $\mathbb{R}^2$ look like $\mathbb{C}$ (in a way that preserves addition and the metric) is in some sense an accident. (Compare $\mathbb{Z}[\sqrt{3}]$ and $\mathbb{Z}^2$ in the same way.)

  • Differentiable functions $\mathbb{C}\to \mathbb{C}$ are not the same as differentiable functions $\mathbb{R}^2\to\mathbb{R}^2$. (The meaning of "differentiable" changes in a meaningful way with the base field. See complex analysis.) The same is true of linear functions. (The map $(a,b)\mapsto (a,-b)$, or $z\mapsto \overline{z}$, is $\mathbb{R}$-linear but not $\mathbb{C}$-linear.)
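The last point can be checked numerically; here is a minimal sketch using Python's built-in complex type, showing that conjugation respects real scalars but not complex ones:

```python
# Conjugation z -> conj(z) is R-linear but not C-linear.

def conj(z: complex) -> complex:
    return z.conjugate()

z, w = 1 + 2j, 3 - 1j

# R-linearity: additivity and real homogeneity hold.
assert conj(z + w) == conj(z) + conj(w)
assert conj(2.5 * z) == 2.5 * conj(z)

# C-homogeneity fails: conj(i*z) = -i*conj(z), not i*conj(z).
assert conj(1j * z) != 1j * conj(z)
assert conj(1j * z) == -1j * conj(z)
```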















  • Saying that $\mathbb{R}^2$ and $\mathbb{C}$ have the same cardinality isn't saying much; $[0,1]$ and $\mathbb{R}^\mathbb{N}$ have the same cardinality. – fhyve, Jul 15 '13 at 23:15






  • @fhyve Right - I guess I was just trying to hammer home the point that any structure that can be imposed on one can be imposed on the other (for obvious but somehow stupid reasons), but that that new structure isn't necessarily interesting. I can give $[0,1]$ a few topologies and algebraic operations at random, but (it seems to me that) this isn't interesting because the structures don't interact. Imposing the structure of $\mathbb{C}$ on $\mathbb{R}^2$ somehow isn't interesting in the same sort of way. You get two multiplications that never talk to each other, for example. – Billy, Jul 16 '13 at 0:43






  • @Heinz Almost all complex analysis derives from the fact that differentiable functions in $\mathbb{C}$ are different. Not really sure how that's "lame". – Emily, Jul 16 '13 at 4:15






  • Why is it obvious that two things which are defined differently are in fact different? – jwg, Jul 16 '13 at 10:06






  • @fhyve: true, cardinality isn't worth much. But they're not only isomorphic but homeomorphic, which is quite a bit more already. – leftaroundabout, Jul 16 '13 at 15:58




















The big difference between $\mathbb{R}^2$ and $\mathbb{C}$: differentiability.



In general, a function from $\mathbb{R}^n$ to itself is differentiable if there is a linear transformation $J$ such that the limit exists:



$$\lim_{\mathbf{h} \to 0} \frac{\mathbf{f}(\mathbf{x}+\mathbf{h})-\mathbf{f}(\mathbf{x})-\mathbf{J}\mathbf{h}}{|\mathbf{h}|} = 0$$



where $\mathbf{f}$, $\mathbf{x}$, and $\mathbf{h}$ are vector quantities.



In $\mathbb{C}$, we have a stronger notion of differentiability given by the Cauchy-Riemann equations:



$$\begin{align*}
f(x+iy) &\stackrel{\textrm{def}}{=} u(x,y)+iv(x,y) \\
u_x &= v_y, \\
u_y &= -v_x.
\end{align*}$$



These equations, if satisfied, do certainly give rise to such an invertible linear transformation as required; however, the definition of complex multiplication and division requires that these equations hold in order for the limit



$$\lim_{h \to 0} \frac{f(z+h)-f(z)-Jh}{h} = 0$$



to exist. Note the difference here: we divide by $h$, not by its modulus.





In essence, multiplication between elements of $\mathbb{R}^2$ is not generally defined (although we could, if we wanted to), nor is division (which we could also attempt to do, given how we define multiplication). Not having these things means that differentiability in $\mathbb{R}^2$ is a little more "topological" -- we're not overly concerned with where $\mathbf{h}$ is, just that it gets small, and that a non-singular linear transformation exists at the point of differentiation. This all stems from the generalization of the inverse function theorem, which can basically be approached completely topologically.



In $\mathbb{C}$, since we can divide by $h$, because we have a rigorous notion of multiplication and division, we want to ensure that the derivative exists independent of the path $h$ takes. If there is some trickery due to the path $h$ is taking, we can't wash it away with topology quite so easily.



In $\mathbb{R}^2$, the question of path independence is less obvious, and less severe. Path-independent (complex-differentiable) functions are analytic, and in the reals we can have differentiable functions that are not analytic. In $\mathbb{C}$, differentiability implies analyticity.





Example:



Consider $f(x+iy) = x^2-y^2+2ixy$. We have $u(x,y) = x^2-y^2$ and $v(x,y) = 2xy$. It is trivial to show that
$$u_x = 2x = v_y, \\
u_y = -2y = -v_x,$$
so this function is analytic. If we take this over the reals, we have $f_1 = x^2-y^2$ and $f_2 = 2xy$; then
$$J = \begin{pmatrix} 2x & -2y \\ 2y & 2x \end{pmatrix}.$$
Taking the determinant, we find $\det J = 4x^2+4y^2$, which is non-zero except at the origin.



By contrast, consider
$f(x+iy) = x^2+y^2-2ixy$. Then, with $u(x,y) = x^2+y^2$ and $v(x,y) = -2xy$,

$$u_x = 2x \neq -2x = v_y,$$

so the first Cauchy-Riemann equation fails (the second, $u_y = 2y = -v_x$, happens to hold, but both must be satisfied), and the function is not complex-differentiable.



However, $$J = \begin{pmatrix} 2x & 2y \\ -2y & -2x \end{pmatrix},$$ which is not everywhere singular, so we can certainly obtain a real-valued derivative of the function in $\mathbb{R}^2$.
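The path-dependence of the complex difference quotient can be seen numerically. A small sketch with Python's built-in complex type (the function names `f` and `g` are just labels for the two examples above): for $f(z)=z^2$ the quotient is the same whether $h$ approaches $0$ along the real or the imaginary axis, while for the second function it is not.

```python
# Difference quotient (f(z+h)-f(z))/h along two directions of approach.

def f(z: complex) -> complex:
    return z * z                      # x^2-y^2+2ixy: complex-differentiable

def g(z: complex) -> complex:
    x, y = z.real, z.imag
    return x*x + y*y - 2j*x*y         # not complex-differentiable

def dq(func, z, h):
    return (func(z + h) - func(z)) / h

z0 = 1 + 1j
h_real = 1e-6        # approach along the real axis
h_imag = 1e-6j       # approach along the imaginary axis

print(dq(f, z0, h_real), dq(f, z0, h_imag))  # both near 2*z0 = 2+2j
print(dq(g, z0, h_real), dq(g, z0, h_imag))  # direction-dependent
```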



















  • Could you please give me a concrete example to demonstrate this? – Cancan, Jul 15 '13 at 20:45










  • Sure, let me think one up. – Emily, Jul 15 '13 at 20:53






  • This is kind of an apples and oranges thing. The analogues of complex differentiable functions in real spaces are those with vanishing divergence and curl. Trying to use the limit definition confuses things with directional derivatives, which are quite different. – Muphrid, Jul 15 '13 at 20:54










  • The example was derived from Bak & Newman. – Emily, Jul 15 '13 at 21:07






  • @Muphrid Isn't it the whole point to show that these two things are different fruits? There are plenty of analogues between reals and complex numbers. The limit definition for complex numbers is effectively structurally the same as in single-variable reals. The notion of analyticity is effectively the same. The concept of a Taylor series is the same. Topology is extremely similar, what with the complex number concept that the modulus is also a norm. The one fundamental thing that is different is differentiation. – Emily, Jul 15 '13 at 21:10




















I'll explain this more from an electrical engineer's perspective (which I am) than a mathematician's perspective (which I'm not).



The complex plane has several useful properties which arise due to Euler's identity:



$$Ae^{i\theta}=A(\cos(\theta)+i\sin(\theta))$$



Unlike points in the real plane $\mathbb{R}^2$, complex numbers can be added, subtracted, multiplied, and divided. Multiplication and division have a useful meaning that comes about due to Euler's identity:



$$Ae^{i\theta_1}\cdot Be^{i\theta_2}=ABe^{i(\theta_1+\theta_2)}$$



$$Ae^{i\theta_1}/Be^{i\theta_2}=\frac{A}{B}e^{i(\theta_1-\theta_2)}$$



In other words, multiplying two numbers in the complex plane does two things: multiplies their absolute values, and adds together the angle that they make with the real number line. This makes calculating with phasors a simple matter of arithmetic.
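This phasor arithmetic can be sketched with Python's standard-library `cmath` module; the amplitudes and angles below are arbitrary example values:

```python
# Multiplying complex numbers multiplies magnitudes and adds phase angles.
import cmath
import math

A, theta1 = 2.0, math.pi / 6   # example phasor A*e^{i*theta1}
B, theta2 = 3.0, math.pi / 4   # example phasor B*e^{i*theta2}

p1 = cmath.rect(A, theta1)     # build the phasor from (magnitude, angle)
p2 = cmath.rect(B, theta2)

r, phi = cmath.polar(p1 * p2)  # back to (magnitude, angle)

assert math.isclose(r, A * B)                 # magnitudes multiply
assert math.isclose(phi, theta1 + theta2)     # angles add
```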



As others have stated, addition, subtraction, multiplication, and division can simply be defined likewise on $\mathbb{R}^2$, but it makes more sense to use the complex plane, because this is a property that comes about naturally due to the definition of imaginary numbers: $i^2=-1$.




























    The difference is that in the complex plane, you've got a multiplication $\mathbb{C}\times\mathbb{C}\to\mathbb{C}$ defined, which makes $\mathbb{C}$ into a field (which basically means that all the usual rules of arithmetic hold.)















    • @GitGud: As soon as you add(!) that multiplication to $\mathbb{R}^2$, you have $\mathbb{C}$. – celtschk, Jul 15 '13 at 20:28






    • @Oleg567 I'm not sure why you're talking about the dot product. Just because the word product is in the name of the operation it doesn't make it particularly relevant. Also the dot product isn't a function from $\Bbb R^2$ to $\Bbb R^2$. – Git Gud, Jul 15 '13 at 20:29






    • @GitGud: No. You can go on and define it yourself (and then arrive at $\mathbb{C}$), but you don't have it available. – celtschk, Jul 15 '13 at 20:33






    • @GitGud: It means it is part of the definition of the object under consideration. – celtschk, Jul 15 '13 at 20:35






    • If we understand, as usual, $\mathbb{R}^2$ as the $\mathbb{R}$-vector space of pairs of real numbers under element-wise addition and element-wise scalar multiplication, then the structure of $\mathbb{R}^2$ (I don't use "natural" here because in the end, it's all part of the definition) consists of: the set $\mathbb{R}^2$, the field $\mathbb{R}$ of real numbers (with all of its structure), the operation $+: \mathbb{R}^2\times\mathbb{R}^2\to\mathbb{R}^2$, and the operation $\cdot: \mathbb{R}\times\mathbb{R}^2\to\mathbb{R}^2$. – celtschk, Jul 15 '13 at 21:08




















    If $X = \mathbb{C}$ (a one-dimensional vector space over the scalar field $\mathbb{C}$), [its] balanced sets are $\mathbb{C}$, the empty set $\emptyset$, and every circular disc (open or closed) centered at $0$. If $X = \mathbb{R}^2$ (a two-dimensional vector space over the scalar field $\mathbb{R}$), there are many more balanced sets; any line segment with midpoint at $(0,0)$ will do. The point is that, in spite of the well-known and obvious identification of $\mathbb{C}$ with $\mathbb{R}^2$, these two are entirely different as far as their vector space structure is concerned.

    -W. Rudin (1973)



















    • But can I say $\mathbb{C}$ is a vector with one direction but $\mathbb{R}^2$ is a vector with 2 directions? – Cancan, Jul 15 '13 at 20:32










    • I'm not sure I agree with this. In my opinion the proper comparison would be comparing $\Bbb C/\Bbb R$ with $\Bbb R^2/\Bbb R$. – Git Gud, Jul 15 '13 at 20:32










    • @Cancan what do you mean when you say that $\mathbb{R}^2$ or $\mathbb{C}$ are "vectors"? – Squirtle, Jul 16 '13 at 2:43












    • @Cancan: Neither $\mathbb{C}$ nor $\mathbb{R}^2$ are vectors. But you can say that $\mathbb{C}$ is a $1$-dimensional $\mathbb{C}$-vector space, and that $\mathbb{R}^2$ is a $2$-dimensional $\mathbb{R}$-vector space. – Jesse Madnick, Jul 16 '13 at 3:39






















    The relationship between $\mathbb{C}$ and $\mathbb{R}^2$ becomes clearer using Clifford algebra.



    Clifford algebra admits a "geometric product" of vectors (and more than just two vectors). The so-called complex plane can instead be seen as the algebra of geometric products of two vectors.



    These objects--geometric products of two vectors--have special geometric significance, both in 2d and beyond. Each product of two vectors describes a pair of reflections, which in turn describes a rotation, specifying not only the unique plane of rotation but also the angle of rotation. This is at the heart of why complex numbers are so useful for rotations; the generalization of this property to 3d generates quaternions. For this reason, these objects are sometimes called spinors.



    On the 2d plane, for every vector $a$, there is an associated spinor $a e_1$, formed using the geometric product. It is this explicit correspondence that is used to convert vector algebra and calculus on the 2d plane to the algebra and calculus of spinors--of "complex numbers"--instead. Hence, much of the calculus that one associates with complex numbers is instead intrinsic to the structure of the 2d plane.



    For example, the residue theorem tells us about meromorphic functions' integrals; there is an equivalent vector analysis that tells us about integrals of vector functions whose divergences are delta functions. This involves using Stokes' theorem. There is a very tight relationship between holomorphic functions and vector fields with vanishing divergence and curl.



    For this reason, I regard much of the impulse to complexify problems on real vector spaces as inherently misguided. Often, but not always, there is simply no reason to do so. Many results of "complex analysis" have real equivalents, and glossing over them deprives students of powerful theorems that would be useful outside of 2d.




























      To augment Kendra Lynne's answer, what does it mean to say that signal analysis in $\mathbb{R}^2$ isn't as 'clean' as in $\mathbb{C}$?



      Fourier series are the decomposition of periodic functions into an infinite sum of 'modes' or single-frequency signals. If a function defined on $\mathbb{R}$ is periodic, say (to make the trigonometry easier) with period $2\pi$, we might as well just consider the piece whose domain is $(-\pi, \pi]$.



      If the function is real-valued, we can decompose it in two ways: as a sum of sines and cosines (and a constant):
      $$ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nx) + \sum_{n=1}^{\infty} b_n \sin(nx). $$
      There is a formula for the $a_k$ and the $b_k$. There is an asymmetry in that $k$ starts at $0$ for $a_k$ and at $1$ for $b_k$. There is a formula in terms of $\int_{-\pi}^{\pi} f(x) \cos(kx)\,dx$ for the $a_k$ and a similar formula for the $b_k$. We can write a formula for $a_0$ which has the same integral but with $\cos(0x) = 1$, but unfortunately we have to divide by 2 to make it consistent with the other formulae. $b_0$ would always be $0$ if it existed, and doesn't tell us anything about the function.



      Although we wanted to decompose our function into modes, we actually have two terms for each frequency (except the constant frequency). If we wanted to, say, differentiate the series term-by-term, we would have to use different rules to differentiate each term, depending on whether it's a sine or a cosine term, and the derivative of each term would be a different type of term, since sine goes to cosine and vice versa.



      We can also express the Fourier series as a single series of shifted cosine waves, by transforming
      $$ a_k \cos(kx) + b_k \sin(kx) = r_k \cos(kx + \theta_k). $$
      However, we have now lost the fact of expressing all functions as a sum of the same components. If we want to add two functions expressed like this, we have to separate the $r$ and $\theta$ back into $a$ and $b$, add, and transform back. We also still have a slight asymmetry - $r_k$ has a meaning but $\theta_0$ is always $0$.



      The same Fourier series using complex numbers is the following:
      $$ \sum_{n=-\infty}^{\infty} a_n e^{inx}. $$ This expresses a function $(-\pi, \pi] \rightarrow \mathbb{C}$. We can add two functions by adding their coefficients, and we can even work out the energy of a signal as a simple calculation (each component $e^{ikx}$ has the same energy). Differentiating or integrating term-by-term is easy, since we are within a constant of differentiating $e^x$. A real-valued function has $a_{-n} = \overline{a_n}$ for all $n$ (which is easy to check). $a_n$ all being real, $a_{2n}$ being zero for all $n$, or $a_n$ being zero for all $n < 0$ all express important and simple classes of periodic functions.



      We can also define $z = e^{ix}$, and now the Fourier series is actually a Laurent series:
      $$ \sum_{n=-\infty}^{\infty} a_n z^{n}. $$



      The Fourier series with $a_n = 0$ for all $n < 0$ is a Taylor series, and the one with $a_n$ all real is a Laurent series for a function $\mathbb{R} \rightarrow \mathbb{R}$. We are drawing a deep connection between the behavior of a complex function on the unit circle and its behavior on the real line - either of these is enough to specify the function uniquely, given a couple of quite general conditions.
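The conjugate-symmetry of the coefficients of a real signal can be checked with a short standard-library sketch. The helper `coeff` below (a name chosen here, not from the answer) approximates $a_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx$ by a Riemann sum, for an arbitrary example signal:

```python
# For a real-valued periodic signal, the complex Fourier coefficients
# satisfy a_{-n} = conjugate(a_n).
import cmath
import math

def coeff(f, n, samples=4000):
    """Approximate the n-th complex Fourier coefficient of f on (-pi, pi]."""
    total = 0j
    for k in range(samples):
        x = -math.pi + (k + 0.5) * (2 * math.pi / samples)  # midpoint rule
        total += f(x) * cmath.exp(-1j * n * x)
    return total / samples

f = lambda x: math.cos(x) + 0.5 * math.sin(2 * x)   # a real example signal

for n in (1, 2, 3):
    a_pos, a_neg = coeff(f, n), coeff(f, -n)
    assert abs(a_neg - a_pos.conjugate()) < 1e-6
```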



















      • Gorgeous! I'm EE but not in signals, +1 for the better explanation. – Kendra Lynne, Jul 16 '13 at 16:20










      • Thanks, not quite a direct answer to the question but worth going into, I thought. – jwg, Jul 17 '13 at 9:14




















      Since everyone is defining the space, I figured I could give an example of why we use it (relating to your "Electrical Engineering" reference). The $i$ itself is what makes using complex numbers/variables ideal for numerous applications. For one, note that:



      \begin{align*}
      i^1 &= \sqrt{-1}\\
      i^2 &= -1\\
      i^3 &= -i\\
      i^4 &= 1.
      \end{align*}

      In the complex (real-imaginary) plane, this corresponds to a rotation, which is easier to visualize and manipulate mathematically. These four powers "repeat" themselves, so for geometrical applications (versus real number manipulation), the math is more explicit.
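The four-step cycle can be confirmed directly with Python's built-in complex type, where each multiplication by $i$ is a 90-degree rotation in the plane:

```python
# Powers of i repeat with period 4.
i = 1j

assert i**1 == 1j
assert i**2 == -1
assert i**3 == -1j
assert i**4 == 1
assert i**5 == i**1   # the cycle repeats
```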



      One of the immediate applications in Electrical Engineering relates to Signal Analysis and Processing. For example, Euler's formula:
      $$
      re^{i\theta}=r\cos\theta +ir\sin\theta
      $$

      relates complex exponentials to trigonometric formulas. Many times, in audio applications, a signal needs to be decomposed into a series of sinusoidal functions because you need to know their individual amplitudes ($r$) and phase angles ($\theta$), maybe for filtering a specific frequency:



      Fourier Transform



      This means the signal is being moved from the time-domain, where (time, amplitude) = $(t,y)$, to the frequency domain, where (sinusoid magnitude, phase) = $(r,\theta)$. The Fourier Transform (denoted "FT" in the picture) does this, and uses Euler's formula to express the original signal as a sum of sinusoids of varying magnitude and phase angle. To do further signal analysis in the $\mathbb{R}^2$ domain isn't nearly as "clean" computationally.






      share|cite|improve this answer











      $endgroup$





















        2












        $begingroup$

My thought is this: $\mathbb{C}$ is not $\mathbb{R}^2$. However, $\mathbb{R}^2$ paired with the operation $(a,b) \star (c,d) = (ac-bd, ad+bc)$ provides a model of the complex numbers. However, there are others. For example, a colleague of mine insists that complex numbers are $2 \times 2$ matrices of the form:
$$ \left[ \begin{array}{cc} a & -b \\ b & a \end{array} \right] $$
but another insists, no, complex numbers have the form
$$ \left[ \begin{array}{cc} a & b \\ -b & a \end{array} \right] $$
but they both agree that complex multiplication and addition are mere matrix multiplication rules for a specific type of matrix. Another friend says, no, that's nonsense, you can't teach matrices to undergraduates; they'll never understand it. Maybe they'll calculate it, but they won't really understand. Students get algebra. We should model the complex numbers as the quotient of the polynomial ring $\mathbb{R}[x]$ by the ideal generated by $x^2+1$; in fact,
$$ \mathbb{C} = \mathbb{R}[x]/\langle x^2+1\rangle. $$
So, why is it that $\mathbb{C} = \mathbb{R}^2$ paired with the operation $\star$? It's because it is easily implemented by the rule $i^2=-1$ and proceeding normally. In other words, if you know how to do real algebra, then the rule $i^2=-1$ paired with those real algebra rules gets you fairly far, at least until you face the dangers of exponents. For example,
$$ -1 = \sqrt{-1} \sqrt{-1} = \sqrt{(-1)(-1)} = \sqrt{1} = 1. $$
Oops. Of course, this is easily remedied either by choosing a branch of the square root or working with sets of values as opposed to single-valued functions.
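Both the matrix model and the exponent pitfall can be checked directly; a small Python sketch (assuming NumPy for the matrix product):

```python
import cmath
import numpy as np

def as_matrix(z):
    """Model a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

# Matrix multiplication of the models agrees with complex multiplication.
z, w = 2 + 3j, -1 + 4j
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))

# The exponent pitfall: sqrt(a)sqrt(b) = sqrt(ab) fails for negative reals.
assert cmath.sqrt(-1) * cmath.sqrt(-1) == -1
assert cmath.sqrt((-1) * (-1)) == 1
```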



        All of this said, I like Rudin's answer for your question.






        share|cite|improve this answer











        $endgroup$













        • $begingroup$
May I understand it in this naive way: the main difference between $\mathbb{R}^2$ and $\mathbb{C}$, from the definitional point of view, is that their multiplications are defined differently? Because of this, many differences arise, such as the difference in differentiability, etc.
          $endgroup$
          – Cancan
          Jul 16 '13 at 8:23






        • 1




          $begingroup$
@Cancan yes, the choice of definition is merely that, a choice. However, the difference between real and complex differentiability is multifaceted and highly nontrivial. There really is no particular multiplication defined on $\mathbb{R}^2$; in fact, you could define several other multiplications on that point set. For example, $j^2=1$ leads to the hyperbolic numbers, or $\epsilon^2=0$ gives the null-numbers. There are all sorts of other things to do with $\mathbb{R}^2$.
          $endgroup$
          – James S. Cook
          Jul 16 '13 at 17:01



















        1












        $begingroup$

There are plenty of differences between the $\mathbb{R}^2$ plane and the $\mathbb{C}$ plane. Here I give you two interesting ones.



First, about branch points and branch lines. Suppose that we are given the function $w=z^{1/2}$. Suppose further that we allow $z$ to make a complete circuit around the origin, counterclockwise, starting from a point $A$ different from the origin. If $z=re^{i\theta}$, then $w=\sqrt{r}\,e^{i\theta/2}$.



At point $A$,
$\theta =\theta_1$, so $w=\sqrt{r}\,e^{i\theta_1/2}$.



After completing the circuit and returning to point $A$,

$\theta =\theta_1+2\pi$, so $w=\sqrt{r}\,e^{i(\theta_1+2\pi)/2}=-\sqrt{r}\,e^{i\theta_1/2}$.



The problem is that, if we consider $w$ as a function, we do not get the same value at the same point.
To fix this, we introduce Riemann surfaces. Imagine the whole $\mathbb{C}$ plane as two sheets superimposed on each other. On the sheets there is a line which indicates the real axis. Cut the two sheets simultaneously along the POSITIVE real axis. Imagine the lower edge of the bottom sheet joined to the upper edge of the top sheet.



We call the origin a branch point and the positive real axis the branch line in this case.



Now the surface is complete: when travelling the circuit, you start on the top sheet, and after one complete circuit you pass to the bottom sheet. Travelling around again, you return to the top sheet. In this way $\theta_1$ and $\theta_1+2\pi$ correspond to two different points (on the top and bottom sheets respectively), and give two different values.
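The jump that forces the two-sheeted construction can be seen numerically. Python's `cmath.sqrt` is the single-valued principal branch, with its cut on the negative real axis (the answer above cuts along the positive axis, but the effect is the same): approaching the cut from opposite sides gives values that differ by a sign.

```python
import cmath

eps = 1e-12
above = cmath.sqrt(complex(-1.0, +eps))   # just above the cut: near +i
below = cmath.sqrt(complex(-1.0, -eps))   # just below the cut: near -i

assert abs(above - 1j) < 1e-6
assert abs(below + 1j) < 1e-6   # the two one-sided limits differ by a factor of -1
```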



Another difference: in the $\mathbb{R}^2$ case, the existence of $f'(x)$ does not imply the existence of $f''(x)$. Try $f(x)=x^2$ if $x\ge0$ and $f(x)=-x^2$ when $x<0$. But in the $\mathbb{C}$ plane, if $f'(z)$ exists (we say $f$ is analytic), this guarantees that $f''(z)$, and thus every $f^{(n)}(z)$, exists. This comes from Cauchy's integral formula.
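A numerical sketch of that real counterexample (plain Python; the step sizes are chosen only for illustration):

```python
def f(x):
    """f(x) = x^2 for x >= 0 and -x^2 for x < 0: differentiable, with f'(x) = 2|x|."""
    return x * x if x >= 0 else -x * x

h = 1e-6   # step for approximating f'
H = 1e-3   # coarser step for probing f' itself

def fprime(x):
    """Central-difference approximation of f'."""
    return (f(x + h) - f(x - h)) / (2 * h)

# f' exists everywhere: f'(0) = 0 and f'(+-H) = 2|H| ...
assert abs(fprime(0.0)) < 1e-5
assert abs(fprime(H) - 2 * H) < 1e-6
assert abs(fprime(-H) - 2 * H) < 1e-6

# ... but f'' does not exist at 0: the slopes of f' from the two sides disagree.
slope_right = (fprime(H) - fprime(0.0)) / H    # close to +2
slope_left = (fprime(0.0) - fprime(-H)) / H    # close to -2
assert slope_right > 1.9 and slope_left < -1.9
```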



I am not going to give you the proof, but if you are interested you should first know the Cauchy-Riemann equations: $w=f(z)=f(x+yi)=u(x,y)+iv(x,y)$ is analytic iff it satisfies both $\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$. The proof comes directly from the definition of differentiation. Thus, once you have $u(x,y)$, you can find $v(x,y)$ from the above equations, making $f(z)$ analytic.






        share|cite|improve this answer











        $endgroup$














          10 Answers
          10















          59












          $begingroup$

$\mathbb{R}^2$ and $\mathbb{C}$ have the same cardinality, so there are (lots of) bijective maps from one to the other. In fact, there is one (or perhaps a few) that you might call "obvious" or "natural" bijections, e.g. $(a,b) \mapsto a+bi$. This is more than just a bijection:




• $\mathbb{R}^2$ and $\mathbb{C}$ are also metric spaces (under the 'obvious' metrics), and this bijection is an isometry, so these spaces "look the same".

• $\mathbb{R}^2$ and $\mathbb{C}$ are also groups under addition, and this bijection is a group homomorphism, so these spaces "have the same addition".

• $\mathbb{R}$ is a subfield of $\mathbb{C}$ in a natural way, so we can consider $\mathbb{C}$ as an $\mathbb{R}$-vector space, where it becomes isomorphic to $\mathbb{R}^2$ (this is more or less the same statement as above).


Here are some differences:




• Viewing $\mathbb{R}$ as a ring, $\mathbb{R}^2$ is actually a direct (Cartesian) product of $\mathbb{R}$ with itself. Direct products of rings in general come with a natural "product" multiplication, $(u,v)\cdot (x,y) = (ux, vy)$, and it is not usually the case that $(u,v)\cdot (x,y) = (ux-vy, uy+vx)$ makes sense or is interesting in general direct products of rings. The fact that it makes $\mathbb{R}^2$ look like $\mathbb{C}$ (in a way that preserves addition and the metric) is in some sense an accident. (Compare $\mathbb{Z}[\sqrt{3}]$ and $\mathbb{Z}^2$ in the same way.)

• Differentiable functions $\mathbb{C}\to \mathbb{C}$ are not the same as differentiable functions $\mathbb{R}^2\to\mathbb{R}^2$. (The meaning of "differentiable" changes in a meaningful way with the base field. See complex analysis.) The same is true of linear functions. (The map $(a,b)\mapsto (a,-b)$, or $z\mapsto \overline{z}$, is $\mathbb{R}$-linear but not $\mathbb{C}$-linear.)
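The last point is easy to see computationally; a short Python check that conjugation respects addition and real scaling but not scaling by $i$:

```python
def conj(z):
    return z.conjugate()

z, w = 2 + 3j, -1 + 4j

# R-linear: conjugation respects addition and scaling by reals.
assert conj(z + w) == conj(z) + conj(w)
assert conj(2.5 * z) == 2.5 * conj(z)

# ... but not C-linear: scaling by a complex number is not preserved.
c = 1j
assert conj(c * z) != c * conj(z)
assert conj(c * z) == c.conjugate() * conj(z)   # it is conjugate-linear instead
```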






          share|cite|improve this answer









          $endgroup$









          • 9




            $begingroup$
Saying that $\mathbb{R}^2$ and $\mathbb{C}$ have the same cardinality isn't saying much; $[0,\,1]$ and $\mathbb{R}^\mathbb{N}$ have the same cardinality.
            $endgroup$
            – fhyve
            Jul 15 '13 at 23:15






          • 5




            $begingroup$
@fhyve Right - I guess I was just trying to hammer home the point that any structure that can be imposed on one can be imposed on the other (for obvious but somehow stupid reasons), but that that new structure isn't necessarily interesting. I can give $[0,1]$ a few topologies and algebraic operations at random, but (it seems to me that) this isn't interesting because the structures don't interact. Imposing the structure of $\mathbb{C}$ on $\mathbb{R}^2$ somehow isn't interesting in the same sort of way. You get two multiplications that never talk to each other, for example.
            $endgroup$
            – Billy
            Jul 16 '13 at 0:43






          • 8




            $begingroup$
@Heinz Almost all complex analysis derives from the fact that differentiable functions in $\mathbb{C}$ are different. Not really sure how that's "lame".
            $endgroup$
            – Emily
            Jul 16 '13 at 4:15






          • 5




            $begingroup$
            Why is it obvious that two things which are defined differently are in fact different?
            $endgroup$
            – jwg
            Jul 16 '13 at 10:06






          • 3




            $begingroup$
            @fhyve: true, cardinality isn't worth much. But they're not only isomorphic but homeomorphic, which is quite a bit more already.
            $endgroup$
            – leftaroundabout
            Jul 16 '13 at 15:58
















answered Jul 15 '13 at 20:47 – Billy (3,552818)











          28












          $begingroup$

The big difference between $\mathbb{R}^2$ and $\mathbb{C}$: differentiability.



In general, a function from $\mathbb{R}^n$ to itself is differentiable if there is a linear transformation $J$ such that the limit exists:



$$\lim_{h \to 0} \frac{\mathbf{f}(\mathbf{x}+\mathbf{h})-\mathbf{f}(\mathbf{x})-\mathbf{J}\mathbf{h}}{\|\mathbf{h}\|} = 0$$



where $\mathbf{f}$, $\mathbf{x}$, and $\mathbf{h}$ are vector quantities.



In $\mathbb{C}$, we have a stronger notion of differentiability given by the Cauchy-Riemann equations:



$$\begin{align*}
f(x+iy) &\stackrel{\textrm{def}}{=} u(x,y)+iv(x,y) \\
u_x &= v_y, \\
u_y &= -v_x.
\end{align*}$$



          These equations, if satisfied, do certainly give rise to such an invertible linear transformation as required; however, the definition of complex multiplication and division requires that these equations hold in order for the limit



$$\lim_{h \to 0} \frac{f(z+h)-f(z)-Jh}{h} = 0$$



          to exist. Note the difference here: we divide by $h$, not by its modulus.





In essence, multiplication between elements of $\mathbb{R}^2$ is not generally defined (although we could, if we wanted to), nor is division (which we could also attempt to do, given how we define multiplication). Not having these things means that differentiability in $\mathbb{R}^2$ is a little more "topological" -- we're not overly concerned with where $\mathbf{h}$ is, just that it gets small, and that a non-singular linear transformation exists at the point of differentiation. This all stems from the generalization of the inverse function theorem, which can basically be approached completely topologically.



In $\mathbb{C}$, since we can divide by $h$, because we have a rigorous notion of multiplication and division, we want to ensure that the derivative exists independent of the path $h$ takes. If there is some trickery due to the path $h$ takes, we can't wash it away with topology quite so easily.



In $\mathbb{R}^2$, the question of path independence is less obvious, and less severe. In $\mathbb{C}$, differentiability implies analyticity: complex differentiable functions are analytic, whereas in the reals we can have differentiable functions that are not analytic.
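A concrete sketch of that path dependence (plain Python): for $f(z)=\bar z$, which is a perfectly smooth map of $\mathbb{R}^2$, the difference quotient approaches $+1$ along real $h$ and $-1$ along imaginary $h$, so the complex derivative exists nowhere.

```python
def f(z):
    return z.conjugate()   # f(z) = conj(z): smooth as a map of R^2

z0 = 1 + 1j
for h in (1e-3, 1e-6, 1e-9):
    along_real = (f(z0 + h) - f(z0)) / h              # approach with real h
    along_imag = (f(z0 + 1j * h) - f(z0)) / (1j * h)  # approach with imaginary h
    assert abs(along_real - 1) < 1e-6   # quotient -> +1
    assert abs(along_imag + 1) < 1e-6   # quotient -> -1: the limit is path-dependent
```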





          Example:



Consider $f(x+iy) = x^2-y^2+2ixy$. We have $u(x,y) = x^2-y^2$ and $v(x,y) = 2xy$. It is trivial to show that
$$u_x = 2x = v_y, \\
u_y = -2y = -v_x,$$
so this function is analytic. If we take this over the reals, we have $f_1 = x^2-y^2$ and $f_2 = 2xy$; then
$$J = \begin{pmatrix} 2x & -2y \\ 2y & 2x \end{pmatrix}.$$
Taking the determinant, we find $\det J = 4x^2+4y^2$, which is non-zero except at the origin.



By contrast, consider
$f(x+iy) = x^2+y^2-2ixy$. Then,



$$u_x = 2x \neq -2x = v_y,$$



so the first Cauchy-Riemann equation fails (except where $x=0$), and the function is not complex differentiable. (The second equation, $u_y = 2y = -v_x$, does hold.)



However, $$J = \begin{pmatrix} 2x & 2y \\ -2y & -2x \end{pmatrix},$$ which is not everywhere singular, so we can certainly obtain a real-valued derivative of the function in $\mathbb{R}^2$.
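Both examples can be checked symbolically (a sketch assuming SymPy is available): the Cauchy-Riemann equations hold for the first pair $(u,v)$ and fail for the second.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def satisfies_cauchy_riemann(u, v):
    """Check u_x = v_y and u_y = -v_x symbolically."""
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0 and
            sp.simplify(sp.diff(u, y) + sp.diff(v, x)) == 0)

# f(x+iy) = x^2 - y^2 + 2ixy  (this is z^2): analytic.
assert satisfies_cauchy_riemann(x**2 - y**2, 2*x*y)

# f(x+iy) = x^2 + y^2 - 2ixy: the first equation fails, so not analytic.
assert not satisfies_cauchy_riemann(x**2 + y**2, -2*x*y)
```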






          share|cite|improve this answer











          $endgroup$













          • $begingroup$
Could you please give me a concrete example to demonstrate this?
            $endgroup$
            – Cancan
            Jul 15 '13 at 20:45










          • $begingroup$
            Sure, let me think one up.
            $endgroup$
            – Emily
            Jul 15 '13 at 20:53






          • 2




            $begingroup$
            This is kind of an apples and oranges thing. The analogues of complex differentiable functions in real spaces are those with vanishing divergence and curl. Trying to use the limit definition confuses things with directional derivatives, which are quite different.
            $endgroup$
            – Muphrid
            Jul 15 '13 at 20:54










          • $begingroup$
            The example was derived from Bak & Newman
            $endgroup$
            – Emily
            Jul 15 '13 at 21:07






          • 2




            $begingroup$
            @Muphrid Isn't it the whole point to show that these two things are different fruits? There are plenty of analogues between reals and complex numbers. The limit definition for complex numbers is effectively structurally the same as in single-variable reals. The notion of analyticity is effectively the same. The concept of a Taylor series is the same. Topology is extremely similar, what with the complex number concept that the modulus is also a norm. The one fundamental thing that is different is differentiation.
            $endgroup$
            – Emily
            Jul 15 '13 at 21:10
















          answered Jul 15 '13 at 20:41, edited Jul 15 '13 at 21:07 – Emily

          12












          I'll explain this more from an electrical engineer's perspective (which I am) than a mathematician's perspective (which I'm not).

          The complex plane has several useful properties which arise from Euler's formula:

          $$Ae^{i\theta}=A(\cos(\theta)+i\sin(\theta))$$

          Unlike points in the real plane $\mathbb{R}^2$, complex numbers can be added, subtracted, multiplied, and divided. Multiplication and division have a useful meaning that comes about due to Euler's formula:

          $$Ae^{i\theta_1}\cdot Be^{i\theta_2}=ABe^{i(\theta_1+\theta_2)}$$

          $$Ae^{i\theta_1}/Be^{i\theta_2}=\frac{A}{B}e^{i(\theta_1-\theta_2)}$$

          In other words, multiplying two numbers in the complex plane does two things: it multiplies their absolute values, and it adds together the angles they make with the real axis. This makes calculating with phasors a simple matter of arithmetic.

          As others have stated, addition, subtraction, multiplication, and division could likewise be defined on $\mathbb{R}^2$, but it makes more sense to use the complex plane, because these properties come about naturally from the definition of imaginary numbers: $i^2=-1$.
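As a small illustration of the phasor arithmetic above, here is a sketch using Python's standard `cmath` module; the magnitudes and angles are arbitrary example values:

```python
import cmath

# Two phasors A e^{i θ1} and B e^{i θ2} (example values)
p1 = cmath.rect(2.0, cmath.pi / 6)  # 2 at 30 degrees
p2 = cmath.rect(3.0, cmath.pi / 3)  # 3 at 60 degrees

# Multiplying complex numbers multiplies magnitudes and adds angles
mag, ang = cmath.polar(p1 * p2)
print(mag)  # 6.0
print(ang)  # 1.5707963... (pi/2, i.e. 30 + 60 degrees)

# Dividing divides magnitudes and subtracts angles
mag2, ang2 = cmath.polar(p1 / p2)
print(mag2)  # 0.666... (2/3)
print(ang2)  # -0.5235987... (-pi/6)
```

Doing the same calculation with the rectangular components directly would require expanding products of sines and cosines by hand; the exponential form reduces it to arithmetic on magnitudes and angles.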


















            answered Jul 15 '13 at 21:23, edited Jul 15 '13 at 21:48 – Ataraxia

                  7












                  The difference is that in the complex plane, you've got a multiplication $\mathbb C\times\mathbb C\to\mathbb C$ defined, which makes $\mathbb C$ into a field (which basically means that all the usual rules of arithmetic hold).
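The point can be made concrete by writing that multiplication out on pairs: the rule below is exactly the product that turns $\mathbb R^2$ into $\mathbb C$ (a minimal sketch; `mul` is just an illustrative name):

```python
def mul(p, q):
    # (a, b) * (c, d) = (ac - bd, ad + bc): complex multiplication on R^2
    # pairs, i.e. the extra structure that turns the plane into the field C
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

z = mul((1.0, 2.0), (3.0, 4.0))
print(z)  # (-5.0, 10.0), matching (1+2j)*(3+4j) = -5+10j
```

With this one extra operation (and the division it induces), the plane acquires all the field structure that the answers above rely on.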









                  • @GitGud: As soon as you add(!) that multiplication to $\mathbb R^2$, you have $\mathbb C$.
                    – celtschk, Jul 15 '13 at 20:28

                  • @Oleg567 I'm not sure why you're talking about the dot product. Just because the word product is in the name of the operation, it doesn't make it particularly relevant. Also, the dot product isn't a function from $\Bbb R^2$ to $\Bbb R^2$.
                    – Git Gud, Jul 15 '13 at 20:29

                  • @GitGud: No. You can go on and define it yourself (and then arrive at $\mathbb C$), but you don't have it available.
                    – celtschk, Jul 15 '13 at 20:33

                  • @GitGud: It means it is part of the definition of the object under consideration.
                    – celtschk, Jul 15 '13 at 20:35

                  • If we understand, as usual, $\mathbb R^2$ as the $\mathbb R$-vector space of pairs of real numbers under element-wise addition and element-wise scalar multiplication, then the structure of $\mathbb R^2$ (I don't use "natural" here because in the end, it's all part of the definition) consists of: the set $\mathbb R^2$, the field $\mathbb R$ of real numbers (with all of its structure), the operation $+\colon \mathbb R^2\times\mathbb R^2\to\mathbb R^2$, and the operation $\cdot\colon \mathbb R\times\mathbb R^2\to\mathbb R^2$.
                    – celtschk, Jul 15 '13 at 21:08
















                  7












                  $begingroup$

                  The difference is that in the complex plane, you've got a multiplication $mathbb Ctimesmathbb Ctomathbb C$ defined, which makes $mathbb C$ into a field (which basically means that all the usual rules of arithmetics hold.)






                  share|cite|improve this answer









                  $endgroup$









                  • 5




                    $begingroup$
                    @GitGud: As soon as you add(!) that multiplication to $mathbb R^2$, you have $mathbb C$.
                    $endgroup$
                    – celtschk
                    Jul 15 '13 at 20:28






                  • 4




                    $begingroup$
                    @Oleg567 I'm not sure why you're talking about the dot product. Just because the word product is in the name of the operation it doesn't make it particularly relevant. Also the dot product isn't a function from $Bbb R^2$ to $Bbb R^2$.
                    $endgroup$
                    – Git Gud
                    Jul 15 '13 at 20:29






                  • 2




                    $begingroup$
                    @GitGud: No. You can go on and define it yourself (and then arrive at $mathbb C$), but you don't have it available.
                    $endgroup$
                    – celtschk
                    Jul 15 '13 at 20:33






                  • 3




                    $begingroup$
                    @GitGud: It means it is part of the definition of the object under consideration.
                    $endgroup$
                    – celtschk
                    Jul 15 '13 at 20:35






                  • 2




                    $begingroup$
                    If we understand, as usual, $mathbb R^2$ as the $mathbb R$-vector space of pairs of real number under element-wise addition and element-wise scalar multiplication, then the structure of $mathbb R^2$ (I don't use "natural" here because in the end, it's all part of the definition) consists of: The set $mathbb R^2$, the field $mathbb R$ of real numbers (with all of its structure), the operation $+: mathbb R^2timesmathbb R^2tomathbb R^2$, and the operation $cdot: mathbb Rtimesmathbb R^2tomathbb R^2$.
                    $endgroup$
                    – celtschk
                    Jul 15 '13 at 21:08














                  7












                  7








                  7





                  $begingroup$

                  The difference is that in the complex plane, you've got a multiplication $mathbb Ctimesmathbb Ctomathbb C$ defined, which makes $mathbb C$ into a field (which basically means that all the usual rules of arithmetics hold.)






                  share|cite|improve this answer









                  $endgroup$



                  The difference is that in the complex plane, you've got a multiplication $mathbb Ctimesmathbb Ctomathbb C$ defined, which makes $mathbb C$ into a field (which basically means that all the usual rules of arithmetics hold.)







                  share|cite|improve this answer












                  share|cite|improve this answer



                  share|cite|improve this answer










                  answered Jul 15 '13 at 20:22









                  celtschkceltschk

                  30.1k755101




                  30.1k755101








                  • 5




                    $begingroup$
                    @GitGud: As soon as you add(!) that multiplication to $mathbb R^2$, you have $mathbb C$.
                    $endgroup$
                    – celtschk
                    Jul 15 '13 at 20:28






                  • 4




                    $begingroup$
                    @Oleg567 I'm not sure why you're talking about the dot product. Just because the word product is in the name of the operation it doesn't make it particularly relevant. Also the dot product isn't a function from $Bbb R^2$ to $Bbb R^2$.
                    $endgroup$
                    – Git Gud
                    Jul 15 '13 at 20:29






                  • 2




                    $begingroup$
                    @GitGud: No. You can go on and define it yourself (and then arrive at $mathbb C$), but you don't have it available.
                    $endgroup$
                    – celtschk
                    Jul 15 '13 at 20:33






                  • 3




                    $begingroup$
                    @GitGud: It means it is part of the definition of the object under consideration.
                    $endgroup$
                    – celtschk
                    Jul 15 '13 at 20:35






                  • 2




                    $begingroup$
                    If we understand, as usual, $mathbb R^2$ as the $mathbb R$-vector space of pairs of real number under element-wise addition and element-wise scalar multiplication, then the structure of $mathbb R^2$ (I don't use "natural" here because in the end, it's all part of the definition) consists of: The set $mathbb R^2$, the field $mathbb R$ of real numbers (with all of its structure), the operation $+: mathbb R^2timesmathbb R^2tomathbb R^2$, and the operation $cdot: mathbb Rtimesmathbb R^2tomathbb R^2$.
                    $endgroup$
                    – celtschk
                    Jul 15 '13 at 21:08














                  • 5




                    $begingroup$
                    @GitGud: As soon as you add(!) that multiplication to $mathbb R^2$, you have $mathbb C$.
                    $endgroup$
                    – celtschk
                    Jul 15 '13 at 20:28






                  • 4




                    $begingroup$
                    @Oleg567 I'm not sure why you're talking about the dot product. Just because the word product is in the name of the operation it doesn't make it particularly relevant. Also the dot product isn't a function from $Bbb R^2$ to $Bbb R^2$.
                    $endgroup$
                    – Git Gud
                    Jul 15 '13 at 20:29






                  4












                  $begingroup$

                  If $X = \mathbb C$ (a one-dimensional vector space over the scalar field $\mathbb C$), [its] balanced sets are $\mathbb C$, the empty set $\emptyset$, and every circular disc (open or closed) centered at $0$. If $X = \mathbb R^2$ (a two-dimensional vector space over the scalar field $\mathbb R$), there are many more balanced sets; any line segment with midpoint at $(0,0)$ will do. The point is that, in spite of the well-known and obvious identification of $\mathbb C$ with $\mathbb R^2$, these two are entirely different as far as their vector space structure is concerned.



                  -W. Rudin (1973)
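Rudin's example can be checked concretely. A set $B$ is balanced when $\alpha B \subseteq B$ for every scalar with $|\alpha| \le 1$; the Python sketch below (an illustration I am adding, with an arbitrary tolerance `eps`) tests the segment from $(-1,0)$ to $(1,0)$ against real and complex scalars:

```python
# B = the segment {(t, 0) : -1 <= t <= 1}, a balanced subset of R^2 over R.
def in_segment(x, y, eps=1e-12):
    return abs(y) <= eps and -1.0 - eps <= x <= 1.0 + eps

# Over the scalar field R: every alpha with |alpha| <= 1 maps the
# endpoint (1, 0) back into the segment.
for alpha in (-1.0, -0.5, 0.0, 0.7, 1.0):
    assert in_segment(alpha * 1.0, alpha * 0.0)

# Over the scalar field C: alpha = i has |alpha| = 1, but it rotates (1, 0)
# (i.e. the number 1) to (0, 1), which leaves the segment entirely,
# so the same set is NOT balanced as a subset of C.
z = complex(0, 1) * complex(1, 0)
assert not in_segment(z.real, z.imag)
```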






                  share|cite|improve this answer









                  $endgroup$













                  • $begingroup$
                    But can I say $\mathbb{C}$ is a vector with one direction but $\mathbb{R}^2$ is a vector with 2 directions?
                    $endgroup$
                    – Cancan
                    Jul 15 '13 at 20:32










                  • $begingroup$
                    I'm not sure I agree with this. In my opinion the proper comparison would be comparing $\Bbb C/\Bbb R$ with $\Bbb R^2/\Bbb R$.
                    $endgroup$
                    – Git Gud
                    Jul 15 '13 at 20:32










                  • $begingroup$
                    @Cancan what do you mean when you say that $\mathbb{R}^2$ or $\mathbb{C}$ are "vectors"?
                    $endgroup$
                    – Squirtle
                    Jul 16 '13 at 2:43












                  • $begingroup$
                    @Cancan: Neither $\mathbb{C}$ nor $\mathbb{R}^2$ are vectors. But you can say that $\mathbb{C}$ is a $1$-dimensional $\mathbb{C}$-vector space, and that $\mathbb{R}^2$ is a $2$-dimensional $\mathbb{R}$-vector space.
                    $endgroup$
                    – Jesse Madnick
                    Jul 16 '13 at 3:39


















                  answered Jul 15 '13 at 20:29









                  Umberto P.

























                  3












                  $begingroup$

                  The relationship between $\mathbb C$ and $\mathbb R^2$ becomes clearer using Clifford algebra.

                  Clifford algebra admits a "geometric product" of vectors (and of more than just two vectors). The so-called complex plane can instead be seen as the algebra of geometric products of two vectors.

                  These objects--geometric products of two vectors--have special geometric significance, both in 2d and beyond. Each product of two vectors describes a pair of reflections, which in turn describes a rotation, specifying not only the unique plane of rotation but also the angle of rotation. This is at the heart of why complex numbers are so useful for rotations; the generalization of this property to 3d generates the quaternions. For this reason, these objects are sometimes called spinors.

                  On the 2d plane, for every vector $a$, there is an associated spinor $a e_1$, formed using the geometric product. It is this explicit correspondence that is used to convert vector algebra and calculus on the 2d plane to the algebra and calculus of spinors--of "complex numbers"--instead. Hence, much of the calculus that one associates with complex numbers is actually intrinsic to the structure of the 2d plane.

                  For example, the residue theorem tells us about integrals of meromorphic functions; there is an equivalent vector analysis that tells us about integrals of vector functions whose divergences are delta functions. This involves using Stokes' theorem. There is a very tight relationship between holomorphic functions and vector fields with vanishing divergence and curl.

                  For this reason, I regard much of the impulse to complexify problems on real vector spaces as inherently misguided. Often, but not always, there is simply no reason to do so. Many results of "complex analysis" have real equivalents, and glossing over them deprives students of powerful theorems that would be useful outside of 2d.
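A minimal numerical sketch of the correspondence described above (my own illustration, identifying the unit bivector $e_1 e_2$ with $i$):

```python
import cmath

def geo(u, v):
    """Scalar (dot) and bivector (wedge) parts of the geometric product uv
    of two vectors in the 2d plane."""
    return (u[0] * v[0] + u[1] * v[1],   # u . v  (symmetric part)
            u[0] * v[1] - u[1] * v[0])   # u ^ v  (antisymmetric part)

# Identifying the bivector e1e2 with i, the product uv is the complex number
# conj(z_u) * z_v, where z_w = w[0] + i*w[1]:
u, v = (1.0, 2.0), (3.0, -1.0)
s, b = geo(u, v)
assert complex(s, b) == complex(*u).conjugate() * complex(*v)

# Its polar form packages |u||v| together with the angle from u to v,
# which is why such products ("spinors") encode rotations:
magnitude, angle = cmath.polar(complex(s, b))
```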






                  share|cite|improve this answer









                  $endgroup$


















                      answered Jul 15 '13 at 20:50









                      Muphrid























                          3












                          $begingroup$

                          To augment Kendra Lynne's answer, what does it mean to say that signal analysis in $\mathbb{R}^2$ isn't as 'clean' as in $\mathbb{C}$?

                          Fourier series are the decomposition of periodic functions into an infinite sum of 'modes' or single-frequency signals. If a function defined on $\mathbb{R}$ is periodic, say (to make the trigonometry easier) that the period is $2\pi$, we might as well just consider the piece whose domain is $(-\pi, \pi]$.

                          If the function is real-valued, we can decompose it in two ways: as a sum of sines and cosines (and a constant):
                          $$ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nx) + \sum_{n=1}^{\infty} b_n \sin(nx) $$
                          There are formulas for the $a_k$ and the $b_k$. There is an asymmetry in that $k$ starts at $0$ for $a_k$ and at $1$ for $b_k$. There is a formula in terms of $\int_{-\pi}^{\pi} f(x) \cos(kx)\,dx$ for the $a_k$ and a similar formula for the $b_k$. We can write a formula for $a_0$ which has the same integral but with $\cos(0x) = 1$; unfortunately we have to divide by $2$ to make it consistent with the other formulae. $b_0$ would always be $0$ if it existed, and doesn't tell us anything about the function.

                          Although we wanted to decompose our function into modes, we actually have two terms for each frequency (except the constant frequency). If we wanted to, say, differentiate the series term by term, we would have to use different rules to differentiate each term, depending on whether it's a sine or a cosine term, and the derivative of each term would be a different type of term, since sine goes to cosine and vice versa.

                          We can also express the Fourier series as a single series of shifted cosine waves, by transforming
                          $$ a_k \cos(kx) + b_k \sin(kx) = r_k \cos(kx + \theta_k). $$
                          However, we have now lost the ability to express all functions as a sum of the same components. If we want to add two functions expressed like this, we have to separate the $r$ and $\theta$ back into $a$ and $b$, add, and transform back. We also still have a slight asymmetry: $r_k$ has a meaning but $\theta_0$ is always $0$.

                          The same Fourier series using complex numbers is the following:
                          $$ \sum_{n=-\infty}^{\infty} a_n e^{inx}. $$
                          This expresses a function $(-\pi, \pi] \rightarrow \mathbb{C}$. We can add two functions by adding their coefficients, and we can even work out the energy of a signal as a simple calculation (each component $e^{ikx}$ has the same energy). Differentiating or integrating term by term is easy, since we are within a constant of differentiating $e^x$. A real-valued function has $a_{-n} = \overline{a_n}$ for all $n$ (which is easy to check). The $a_n$ all being real, $a_{2n}$ being zero for all $n$, or $a_n$ being zero for all $n < 0$ each express important and simple classes of periodic functions.

                          We can also define $z = e^{ix}$, and now the Fourier series is actually a Laurent series:
                          $$ \sum_{n=-\infty}^{\infty} a_n z^{n}. $$

                          The Fourier series with $a_n = 0$ for all $n < 0$ is a Taylor series, and the one with $a_n$ all real is a Laurent series for a function $\mathbb{R} \rightarrow \mathbb{R}$. We are drawing a deep connection between the behavior of a complex function on the unit circle and its behavior on the real line: either of these is enough to specify the function uniquely, given a couple of quite general conditions.
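The symmetry claims above can be verified numerically. Below is a stdlib-only sketch (the test signal and sample count are my own arbitrary choices) that approximates $a_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\, e^{-inx}\,dx$ by a Riemann sum and checks $a_{-n} = \overline{a_n}$ for a real-valued signal:

```python
import cmath
import math

# A real-valued, 2*pi-periodic test signal (arbitrary choice for illustration):
def f(x):
    return 1.0 + math.cos(x) + 2.0 * math.sin(3 * x)

def coeff(n, samples=4096):
    """Approximate a_n = (1/2pi) * integral of f(x)*exp(-i*n*x) over (-pi, pi]
    with a midpoint Riemann sum."""
    total = 0j
    for k in range(samples):
        x = -math.pi + (k + 0.5) * (2 * math.pi / samples)
        total += f(x) * cmath.exp(-1j * n * x)
    return total / samples  # the dx and 1/(2*pi) factors combine to 1/samples

# Real-valued signal  <=>  a_{-n} is the complex conjugate of a_n:
for n in range(5):
    assert abs(coeff(-n) - coeff(n).conjugate()) < 1e-9

# The cosine contributes a_1 = a_{-1} = 1/2; the sine contributes a_3 = -i:
assert abs(coeff(1) - 0.5) < 1e-9
assert abs(coeff(3) - (-1j)) < 1e-9
```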






                          share|cite|improve this answer









                          $endgroup$













                          • $begingroup$
                            Gorgeous! I'm EE but not in signals, +1 for the better explanation.
                            $endgroup$
                            – Kendra Lynne
                            Jul 16 '13 at 16:20










                          • $begingroup$
                            Thanks, not quite a direct answer to the question but worth going into I thought.
                            $endgroup$
                            – jwg
                            Jul 17 '13 at 9:14
















                          3












                          $begingroup$

                          To augment Kendra Lynne's answer, what does it mean to say that signal analysis in $mathbb{R}^2$ isn't as 'clean' as in $mathbb{C}$?



                          Fourier series are the decomposition of periodic functions into an infinite sum of 'modes' or single-frequency signals. If a function defined on $mathbb{R}$ is periodic, say (to make the trigonometry easier) that the period is $2pi$, we might as well just consider the piece whose domain ins $(-pi, pi]$.



                          If the function is real-valued, we can decompose it in two ways: as a sum of sines and cosines (and a constant):
                          $$ f(x) = frac{a_0}{2} + sum_{n=1}^{infty} a_n cos(nx) + sum_{n=1}^{infty} b_n sin(nx)$$
                          There is a formula for the $a_k$ and the $b_k$. There is an asymmetry in that $k$ starts at $0$ for $a_k$ and at $1$ for $b_k$. There is a formula in terms of $int_{-pi}^{pi} f(x) cos(kx)dx$ for the $a_k$ and a similar formula for the $b_k$. We can write a formula for $a_0$ which has the same integral but with $cos(0x) = 0$, but unfortunately we have to divide by 2 to make it consistent with the other formulae. $b_0$ would always be $0$ if it existed, and doesn't tell us anything about the function.



                          Although we wanted to decompose our function into modes, we actually have two terms for each frequency (except the constant frequency). If we wanted to say, differentiate the series term-by-term, we would have to use different rules to differentiate each term, depending on whether it's a sine or a cosine term, and the derivative of each term would be a different type of term, since sine goes to cosine and vice versa.



                          We can also express the Fourier series as a single series of shifted cosine waves, by transforming
                          $$ a_k cos(kx) + b_k sin(kx) = r_k cos(kx + theta_k) .$$
                          However we now lost the fact of expressing all functions as a sum of the same components. If we want to add two functions expressed like this, we have to separate the $r$ and $theta$ back into $a$ and $b$, add, and transform back. We also still have a slight asymmetry - $r_k$ has a meaning but $theta_0$ is always $0$.



                          The same Fourier series using complex numbers is the following:
                          $$ sum_{n=-infty}^{infty} a_n e^{inx} .$$ This expresses a function $(-pi, pi] rightarrow mathbb{C}$. We can add two functions by adding their coefficients, we can even work out the energy of a signal as a simple calculation (each component $e^{ikx}$ has the same energy. Differentiating or integrating term-by-term is easy, since we are within a constant of differentiating $e^x$. A real-valued function has $a_n = a_{-n}$ for all $n$ (which is easy to check). $a_n$ all being real, $a_{2n}$ being zero for all $n$ or $a_{n}$ being zero for all $n < 0$ all express important and simple classes of periodic functions.



                          We can also define $z = e^{ix}$ and now the Fourier series is actually a Laurent series:
                          $$ sum_{n=-infty}^{infty} a_n z^{n} .$$



                          The Fourier series with $a_n = 0$ for all $n < 0$ is a Taylor series in $z$, and the one with all $a_n$ real is a Laurent series with real coefficients, i.e. one which maps the real line to itself. We are drawing a deep connection between the behavior of a complex function on the unit circle and its behavior on the real line - either one is enough to specify the function uniquely, given a couple of quite general conditions.
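To illustrate with an arbitrary example series (a sketch): taking $a_n = 2^{-n}$ for $n geq 0$ gives a Taylor series summing to $1/(1 - z/2)$, and evaluating it on the unit circle $z = e^{ix}$ gives the corresponding Fourier series.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 400)
z = np.exp(1j * x)                          # points on the unit circle

# Truncated Taylor series sum_{n>=0} (z/2)^n versus its closed form.
partial = sum((z / 2) ** n for n in range(60))
closed = 1.0 / (1.0 - z / 2)
print(np.max(np.abs(partial - closed)))     # tiny (the tail is about 2**-60)
```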















                          $endgroup$













                          • $begingroup$
                            Gorgeous! I'm EE but not in signals, +1 for the better explanation.
                            $endgroup$
                            – Kendra Lynne
                            Jul 16 '13 at 16:20










                          • $begingroup$
                            Thanks. Not quite a direct answer to the question, but worth going into, I thought.
                            $endgroup$
                            – jwg
                            Jul 17 '13 at 9:14














                          answered Jul 16 '13 at 10:51









                          jwg
























                          $begingroup$

                          Since everyone is defining the space, I figured I could give an example of why we use it (relating to your "Electrical Engineering" reference). The $i$ itself is what makes using complex numbers/variables ideal for numerous applications. For one, note that:



                          begin{align*}
                          i^1 &= sqrt{-1}\
                          i^2 &= -1\
                          i^3 &= -i\
                          i^4 &= 1.
                          end{align*}

                          In the complex (real-imaginary) plane, this corresponds to a rotation, which is easier to visualize and manipulate mathematically. These four powers "repeat" themselves, so for geometrical applications (versus real number manipulation), the math is more explicit.
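A quick sketch of this cycle and the rotation, using Python's built-in complex arithmetic:

```python
import cmath
import math

i = 1j
# The four powers cycle: i, -1, -i, 1.
assert i**2 == -1 and i**3 == -i and i**4 == 1

# Multiplying by i is a 90-degree counterclockwise rotation.
z = 3 + 4j
w = i * z                                   # -4 + 3j
turn = cmath.phase(w) - cmath.phase(z)
print(turn)                                 # approximately pi/2
```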



                          One of the immediate applications in Electrical Engineering relates to Signal Analysis and Processing. For example, Euler's formula:
                          $$
                          re^{itheta}=rcostheta +irsintheta
                          $$

                          relates complex exponentials to trigonometric formulas. Many times, in audio applications, a signal needs to be decomposed into a series of sinusoidal functions because you need to know their individual amplitudes ($r$) and phase angles ($theta$), maybe for filtering a specific frequency:



                          (figure: the Fourier Transform, decomposing a time-domain signal into its frequency-domain sinusoidal components)



                          This means the signal is being moved from the time domain, where (time, amplitude) = $(t,y)$, to the frequency domain, where (sinusoid magnitude, phase) = $(r,theta)$. The Fourier Transform (denoted "FT" in the picture) does this, using Euler's formula to express the original signal as a sum of sinusoids of varying magnitude and phase angle. Doing further signal analysis in the $mathbb{R}^2$ domain isn't nearly as "clean" computationally.
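A minimal sketch of this amplitude/phase extraction with a discrete Fourier transform (the two-component test signal and its frequencies are arbitrary choices):

```python
import numpy as np

# A signal with two sinusoidal components of known amplitude and phase.
N = 1024
t = 2 * np.pi * np.arange(N) / N
sig = 2.0 * np.cos(3 * t + 0.5) + 0.7 * np.cos(8 * t - 1.2)

spec = np.fft.fft(sig) / N
# For a real signal, component k has amplitude 2*|spec[k]| and phase angle(spec[k]).
r3, th3 = 2 * np.abs(spec[3]), np.angle(spec[3])
r8, th8 = 2 * np.abs(spec[8]), np.angle(spec[8])
print(r3, th3)   # approximately 2.0 and 0.5
print(r8, th8)   # approximately 0.7 and -1.2
```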

















                          $endgroup$


























                              edited Dec 23 '18 at 21:13









                              Glorfindel











                              answered Jul 15 '13 at 21:09









                              Kendra Lynne



































                                  $begingroup$

                                  My thought is this: $mathbb{C}$ is not $mathbb{R}^2$. However, $mathbb{R}^2$ paired with the operation $(a,b) star (c,d) = (ac-bd, ad+bc)$ provides a model of the complex numbers. But there are others. For example, a colleague of mine insists that complex numbers are $2 times 2$ matrices of the form:
                                  $$ left[ begin{array}{cc} a & -b \ b & a end{array} right] $$
                                  but another insists, no, complex numbers have the form
                                  $$ left[ begin{array}{cc} a & b \ -b & a end{array} right] $$
                                  but they both agree that complex multiplication and addition are merely matrix multiplication and addition for a specific type of matrix. Another friend says no, that's nonsense: you can't teach matrices to undergraduates; they'll never understand it. Maybe they'll calculate it, but they won't really understand. Students get algebra, so we should model the complex numbers as the quotient of the polynomial ring $mathbb{R}[x]$ by the ideal generated by $x^2+1$; in fact,
                                  $$ mathbb{C} = mathbb{R}[x]/langle x^2+1rangle$$
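A quick check that the matrix picture multiplies the same way as the complex numbers (a sketch with NumPy, using the first of the two matrix conventions above):

```python
import numpy as np

def as_matrix(z):
    # Represent a + bi as [[a, -b], [b, a]] (the first convention above).
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 2 + 3j, 1 - 4j
prod = as_matrix(z) @ as_matrix(w)
print(prod)                  # the matrix representing z*w = 14 - 5j
print(as_matrix(z * w))      # the same matrix
```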
                                  So why do we say $mathbb{C} = mathbb{R}^2$ paired with the operation $star$? Because it is easily implemented by the rule $i^2=-1$: if you know how to do real algebra, then that rule paired with the usual rules gets you fairly far, at least until you face the dangers of exponents. For example,
                                  $$ -1 = sqrt{-1} sqrt{-1} = sqrt{(-1)(-1)} = sqrt{1} = 1 $$
                                  oops. Of course, this is easily remedied either by choosing a branch of the square root or working with sets of values as opposed to single-valued functions.



                                  All of this said, I like Rudin's answer for your question.

















                                  $endgroup$













                                  • $begingroup$
                                    May I understand it in this simple way: the main difference between $mathbb{R}^2$ and $mathbb{C}$, from the definition point of view, is that their multiplication is defined differently? Because of this, many further differences arise, such as the difference in differentiability, etc.
                                    $endgroup$
                                    – Cancan
                                    Jul 16 '13 at 8:23






                                  • $begingroup$
                                    @Cancan Yes, the choice of definition is merely that: a choice. However, the difference between real and complex differentiability is multifaceted and highly nontrivial. There really is no single multiplication defined on $mathbb{R}^2$; in fact, you could define several other multiplications on that point set. For example, $j^2=1$ leads to the hyperbolic numbers, and $epsilon^2=0$ gives the null numbers (dual numbers). There are all sorts of other things to do with $mathbb{R}^2$.
                                    $endgroup$
                                    – James S. Cook
                                    Jul 16 '13 at 17:01
























                                  edited Jul 16 '13 at 3:09









                                  Pedro Tamaroff

                                  answered Jul 16 '13 at 3:05









James S. Cook

                                  • $begingroup$
May I understand it in this simple way: the main difference between $\mathbb{R}^2$ and $\mathbb{C}$, from the definitional point of view, is that their multiplications are defined differently? Because of this, many differences follow, such as the difference in differentiability, etc.
                                    $endgroup$
                                    – Cancan
                                    Jul 16 '13 at 8:23






                                    $begingroup$
@Cancan yes, the choice of definition is merely that, a choice. However, the difference between real and complex differentiability is multifaceted and highly nontrivial. There really is no particular multiplication defined on $\mathbb{R}^2$; in fact, you could define several other multiplications on that point set. For example, $j^2=1$ leads to the hyperbolic numbers, and $\epsilon^2=0$ gives the null numbers. There are all sorts of other things to do with $\mathbb{R}^2$.
                                    $endgroup$
                                    – James S. Cook
                                    Jul 16 '13 at 17:01


















                                  $begingroup$

There are plenty of differences between the $\mathbb{R}^2$ plane and the $\mathbb{C}$ plane. Here I give you two interesting ones.



First, about branch points and branch lines. Suppose we are given the function $w=z^{1/2}$. Suppose further that we allow $z$ to make a complete counterclockwise circuit around the origin, starting from a point $A$ different from the origin. If $z=re^{i\theta}$, then $w=\sqrt{r}\,e^{i\theta/2}$.



At the point $A$, $\theta = \theta_1$, so $w=\sqrt{r}\,e^{i\theta_1/2}$.

After completing the circuit and returning to $A$, $\theta = \theta_1+2\pi$, so $w=\sqrt{r}\,e^{i(\theta_1+2\pi)/2}=-\sqrt{r}\,e^{i\theta_1/2}$.



The problem is that if we consider $w$ as a function, we do not get the same value at the same point.
To fix this, we introduce Riemann surfaces. Imagine the whole $\mathbb{C}$ plane as two sheets superimposed on each other; on each sheet there is a line indicating the real axis. Cut both sheets simultaneously along the POSITIVE real axis, and imagine the lower edge of the bottom sheet joined to the upper edge of the top sheet.



We call the origin a branch point and the positive real axis the branch line in this case.



Now the surface is complete: travelling the circuit, you start on the top sheet, and after one complete circuit you pass to the bottom sheet; travelling once more, you return to the top sheet. Thus $\theta_1$ and $\theta_1+2\pi$ correspond to two different points (on the top and bottom sheets, respectively) and yield two different values.
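The bookkeeping the two sheets perform can be sketched numerically: track $\theta$ continuously instead of reducing it modulo $2\pi$. (A Python sketch; the helper name `sqrt_along_loop` is mine.)

```python
import math

def sqrt_along_loop(r=1.0, theta1=0.5, turns=0):
    # follow w = sqrt(r) * e^{i*theta/2} with theta increased continuously
    # (i.e., not reduced mod 2*pi -- this is "staying on the Riemann surface")
    theta = theta1 + 2 * math.pi * turns
    return complex(math.sqrt(r) * math.cos(theta / 2),
                   math.sqrt(r) * math.sin(theta / 2))

w0 = sqrt_along_loop(turns=0)   # value at A before the circuit (top sheet)
w1 = sqrt_along_loop(turns=1)   # back at A after one circuit: -w0 (bottom sheet)
w2 = sqrt_along_loop(turns=2)   # after two circuits: w0 again (top sheet)

assert abs(w1 + w0) < 1e-12 and abs(w2 - w0) < 1e-12
```

One loop flips the sign of $w$; two loops restore it, which is exactly the two-sheet picture.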



Another thing is that in the $\mathbb{R}^2$ case, the existence of $f'(x)$ does not imply the existence of $f''(x)$: consider $f(x)=x^2$ for $x\ge0$ and $f(x)=-x^2$ for $x<0$. But in the $\mathbb{C}$ plane, if $f'(z)$ exists (we say $f$ is analytic), this guarantees that $f''(z)$, and indeed every $f^{(n)}(z)$, exists. This follows from Cauchy's integral formula.



I am not going to give you the proof, but if you are interested you should first know the Cauchy-Riemann equations: $w=f(z)=f(x+yi)=u(x,y)+iv(x,y)$ is analytic iff it satisfies both $\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$. The proof comes directly from the definition of the derivative. Thus, once you have $u(x,y)$, you can find $v(x,y)$ from the above equations, making $f(z)$ analytic.
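The equations can be checked numerically for a concrete analytic function, say $f(z)=z^2$, whose real and imaginary parts are $u(x,y)=x^2-y^2$ and $v(x,y)=2xy$. (A Python sketch using central differences; the helper `partial` is mine, not a library function.)

```python
# Numerically verify the Cauchy-Riemann equations for f(z) = z^2,
# whose real and imaginary parts are u(x, y) = x^2 - y^2 and v(x, y) = 2xy.

def partial(g, x, y, wrt, h=1e-6):
    # central-difference approximation to a partial derivative of g at (x, y)
    if wrt == 'x':
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

u = lambda x, y: x * x - y * y
v = lambda x, y: 2 * x * y

x0, y0 = 1.3, -0.7
assert abs(partial(u, x0, y0, 'x') - partial(v, x0, y0, 'y')) < 1e-6  # u_x = v_y
assert abs(partial(u, x0, y0, 'y') + partial(v, x0, y0, 'x')) < 1e-6  # u_y = -v_x
```

Running the same check on a non-analytic map such as $f(x+yi)=x-yi$ (i.e., $\bar z$, with $u=x$, $v=-y$) makes the first equation fail, which is the quickest way to see that complex differentiability is a genuinely stronger condition.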

















                                  $endgroup$


















                                      edited Jul 16 '13 at 3:18

























                                      answered Jul 16 '13 at 2:49









Unem Chan






























