What definitions were crucial to further understanding?























Often the most difficult part of venturing into a field as a researcher is to come up with an appropriate definition. Sometimes definitions suggest themselves very naturally, as when you solve a problem and then ask, ‘What if I generalize this a bit?’



Other times they arise only after struggling with a subject and realizing you were looking at it from the wrong angle. An appropriate definition can then make all the difference, by reorganizing one's thinking and shedding light on the problems, somehow making them sharper and more focused.



I would like to collect evidence and instances of this idea. An answer should be a story of how someone came up with a good definition and how this was crucial to their understanding of a topic. If you talk about someone else, then ideally provide a reference.



(My interest in this is mainly psychological, namely, how the action of naming something somehow brings it into existence and organizes the world around it.)



Edit:



It was suggested that this question is a duplicate of (Examples of advance via good definitions), which asks about good definitions. It was pointed out there, and also here, that basically all notions that stood the test of time qualify as good definitions.



I was not asking for a collection of good definitions, although it seems many people construed it this way, but for a collection of stories showing how a proper definition actually changed the perception of a field.



A typical such story would have someone saying "Wait a moment! I should not be dealing with [concept A] at all! That's the wrong way to approach this. Instead I should define this other guy, [concept B], and then everything will make a lot more sense!"



I realize this is hard to make precise, so I understand if the question gets closed.






























  • 12




    There is the famous example of Grothendieck’s definition of a scheme.
    – Sam Hopkins
    yesterday






  • 4




    The definition of a site (Grothendieck realising that one only needs the notion of a covering to define cohomology) and sheaves on them leading to étale cohomology (and other cohomology theories).
    – TKe
    yesterday








  • 5




    I don't really know the history, but it seems to me the (re)definition of "continuous function" as "pullback of open is open" must have been what allowed for the definition of a topology.
    – benblumsmith
    yesterday






  • 6




    Seems like it would be harder to find examples of well-known definitions that weren't crucial to further understanding.
    – Nik Weaver
    yesterday






  • 4




    @Marcel: sure, that's why I qualified it with "well-known". Was the definition of an abstract group crucial to further understanding? The definition of a manifold? Banach space, topological space, probability measure? Well, of course.
    – Nik Weaver
    yesterday















soft-question ho.history-overview big-list definitions






community wiki





4 revs, 3 users 88%
Marcel










11 Answers






























A famous definition which led to a completely new point of view is the definition of a Schwartz distribution. It changed the understanding of what a "function" is, even among engineers.
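
(In modern terms, a distribution is a continuous linear functional on the space of smooth, compactly supported test functions; every locally integrable function $f$ then acts as the distribution $\varphi \mapsto \int f\varphi$, while objects like the Dirac delta, which are not functions at all, satisfy the same definition.)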



Actually, the definition of a function given by Dirichlet in the 19th century also clarified many things.

















































    In set theory, definitely the notion of a Woodin cardinal.



    First, it is not an entirely straightforward notion to guess. Significant large cardinals were up to that point defined as critical points of certain elementary embeddings. This is not the case here: Woodin cardinals need not be measurable. If $\kappa$ is Woodin, then $V_\kappa$ is a model of set theory where there is a proper class of strong cardinals. Woodinness requires more, namely, that these strong embeddings move some predicates correctly.



    Second, the definition turned out to identify a pivotal point in inner model theory: for notions weaker than Woodinness, the corresponding canonical inner models carry $\mathbf{\Delta}^1_3$ well-orderings of the reals. This is no longer true once we have Woodin cardinals. This is closely tied up with the complexity of the comparison process: given two set models that look like initial segments of canonical inner models, how hard is it to compare them to tell which one carries more information? For notions weaker than Woodinness, this process is carried out in an essentially linear fashion: disagreements between the models are witnessed by some measures, and repeatedly using these least measures to form ultrapowers eventually lines the models up: one of the iterates ends up as an initial segment of the other, and whichever one is longer comes from the model which originally had more information. With Woodin cardinals and beyond, this process is no longer enough. Instead, comparisons sometimes need to retrace steps, and rather than a linear iteration, at the end we have tree-like structures. Identifying these crucial differences allowed us to develop inner model theory beyond this point. This in turn has led to many results and to the identification of deep connections between large cardinals and descriptive set theory. Literally, thanks to the presence of Woodin cardinals, the set-theoretic landscape grew and transformed significantly.



    Third, Woodinness also turns out to be the notion needed to carry out certain forcing constructions. Some consistency results that were not expected now could be established, thanks to the identification of genuinely new forcing notions that use the Woodin cardinals in an essential way. Other constructions that were known from significant large cardinals were improved to their optimal form.



    The story of how the notion of Woodinness was identified is actually quite nice. Kunen had used huge cardinals to prove the consistency of the existence of saturated ideals on $\omega_1$. The embeddings witnessing hugeness end up lifting to the generic embeddings witnessing saturation in the appropriate forcing extension. In 1984, Foreman, Magidor and Shelah identified an entirely new way of finding models with saturated ideals. Their construction improved the large cardinal notion needed from hugeness to supercompactness. What is significant is that the generic embedding is no longer a lifting of an old genuine embedding. Indeed, their construction preserves $\omega_1$. As a consequence of their results, in May of the same year, Woodin showed that supercompactness implies that all projective sets of reals are Lebesgue measurable (and more). Conversations between Shelah and Woodin quickly led to the realization that much weaker cardinals than supercompact sufficed for this result. Through a series of refinements, the notions of what we now call Shelah and Woodin cardinals were identified, with the latter being of precisely the right strength: all this happened while developments in determinacy and inner model theory showed the deep connections between these fields, and the pivotal role that Woodin cardinals played in this connection. This all happened quite quickly: by the time the Shelah-Woodin paper appeared in print, the importance of Woodin cardinals was already recognized.




    MR1074499 (92m:03087). Shelah, Saharon; Woodin, Hugh. Large cardinals imply that every reasonably definable set of reals is Lebesgue measurable. Israel J. Math. 70 (1990), no. 3, 381–394.






























    • Those were some exciting times in set theory!
      – Asaf Karagila
      16 hours ago










    • This answer is so good it's giving me Woodiness.
      – Robert Frost
      2 hours ago

































    A good example comes from the definition of manifolds, although it's less of a single definition and more of an evolving notion.



    The background to modern differential geometry lies in trying to understand cases where the parallel postulate fails. However, trying to rigorously pin down what sort of spaces to study is pretty tricky. It's only tangentially related, but there's a famous paper by Lakatos called "Proofs and Refutations" about the Euler characteristic on surfaces which gives insight as to why defining spaces precisely is a non-trivial problem.



    Riemann had an intuitive notion of manifolds, and it's possible to see how his ideas evolved into the modern notion. It seems that Riemann's "Mannigfaltigkeit" had an element of "I know it when I see it." Similarly, Poincaré didn't have a modern definition of a manifold in Analysis Situs. The modern definition of a smooth manifold as a locally Euclidean space wasn't introduced until 1912, by Hermann Weyl. Even then, it wasn't clear until Whitney proved the embedding theorem that the intrinsic definition was equivalent to the definition via submanifolds of Euclidean space.



    That's still not the end of the story. If one tries to really pin down the definition, one realizes that there are at least three distinct categories of manifolds: smooth, piecewise-linear, and topological manifolds. The fact that these different types are not equivalent gives rise to a lot of research that is still continuing today.

















































      In the comments I offered the "classical" example of Grothendieck's definition of a scheme. But for a more modern example: Fomin and Zelevinsky's definition of a cluster algebra is at first sight a rather ungainly thing, but turned out to be exactly what was needed to unify certain patterns that appeared in various hitherto unrelated areas of mathematics like discrete integrable systems, the representation theory of associative algebras, Poisson/symplectic geometry, etc.

















































        My nominee is the definition of a topology. I don't know the history of the subject, but my impression is that this was the result of a collective effort. The definition with collections of subsets closed under finite intersections and arbitrary unions is not the kind of thing one would get up one morning and decide to consider. It seems to me this was an example of a category where the definition of morphisms was clear (continuous maps) but where finding the right notion of object was not obvious.



        Perhaps a sign of a good definition is that it leads to other good definitions. Those could arise in order to remedy some flaws of the original one, like Grothendieck's topos. They could also arise as particular cases of the original definition, like Schwartz's definition of the topology on $\mathcal{D}$ underlying the notion of distribution mentioned in Alexandre's answer.

















































          I would nominate the definition of probability theory in terms of measure theory, where a probability space is an abstract measure space, an event is a measurable subset, a random variable is a measurable function, and expectation is the Lebesgue integral. This is usually credited to a 1933 paper by Kolmogorov, though it seems that many of the ideas had previously been around. Here is a very interesting survey by Shafer and Vovk, entitled "The origins and legacy of Kolmogorov's Grundbegriffe."



          People had been thinking about probability for a long time before that, but these definitions made it possible to place everything on a rigorous footing and exploit many key results from measure theory. Shafer and Vovk say that it took probability from a mathematical "pastime" to become a respected branch of pure mathematics.



          (I am no expert historian so please feel free to edit with corrections, more background, further discussion, etc.)





























          • From the words of my probability professor: "there are only three letters in probability, $\Omega$, $\mathcal{A}$, and $\mathbb{P}$".
            – Najib Idrissi
            13 hours ago










          • @NajibIdrissi: There's also $X$ as in $X:\Omega\to\mathbb{R}$, and $\operatorname{E}$ as in $\operatorname{E}X = \int_\Omega X\,dP$.
            – Michael Hardy
            8 hours ago






          • 1




            I am convinced that Kolmogorov was not God's last prophet, and his definitions need to be kept in a certain limited context while others advance beyond them.
            – Michael Hardy
            8 hours ago































          I am surprised no one has mentioned Weierstrass's $\epsilon$-$\delta$ definition of limits. This enabled mathematicians to reason rigorously about convergence and eliminate numerous apparent inconsistencies.
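
          (For reference, in modern notation the definition reads: $\lim_{x\to a} f(x) = L$ means that for every $\epsilon > 0$ there is a $\delta > 0$ such that $0 < |x - a| < \delta$ implies $|f(x) - L| < \epsilon$; it quantifies only over numbers, with no appeal to motion or infinitesimals.)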


















































            My interest in this is mainly psychological, namely, how the action of naming something somehow brings it into existence and organizes the world around it.




            The power of naming...



            J.-M. Souriau wrote a book meant to take anyone from the crib to relativistic cosmology and quantum statistics — that he unfortunately (predictably?) never got to really finish, but the draft is online. He starts by saying that all babies are (of course) natural born mathematicians, and everyone’s “first and sensual” abstraction is when they name that recurring warm blob “mommy”.



            Surely that qualifies as “crucial to further understanding”; he argues that half of math is more of the same, and tells stories illustrating the power of naming numbers, points, the unknown $(x)$, maps, variables, determinants, groups, morphisms, cohomologies... all would qualify as answers. It’s not just about coming up with epoch-making definitions, either: teaching students to define and name variables in their problems is also what we do.



            We’ve all seen quotes to the effect that it’s all in the definitions. “Success stories” in that are largely the history of math, or at least one way to look at it. Specific recent ones that likely inspired the book (and made it to the MSC) would be symplectic manifold (1950), symplectic geometry (1953; earlier it was this), and moment(um) map (1967).



            (I agree with others that this is a near-duplicate, and will transfer my answer there if needed.)





























            • If you say that the draft is online, why not link to it, for the readers' convenience?
              – Alex M.
              yesterday










            • Oh. What’s wrong?? (Link added, sorry, I was in a rush.)
              – Francois Ziegler
              yesterday








            • 2




              I've heard Noam Chomsky theorize in the reverse direction: That the need to count inductively helped evolve recursive brain structures that were then repurposed for language.
              – Daniel R. Collins
              yesterday










            • This is perhaps restating Heraclitus Logos, a concept considered so important in its day it was made the opening passage of the Christian Bible.
              – Robert Frost
              1 hour ago










            • One might also reflect upon the critical role of words in annotating things, which is itself integral to understanding. Since once a concept is annotated we can begin to annotate its properties and then no longer need to hold the full concept in our mind, rather the salient parts thereof, simplifying the process of thinking about the same and making progress possible where the degree of complexity would otherwise be too great.
              – Robert Frost
              1 hour ago

































            Well, this is basically the same answer as the one by Alexandre Eremenko, but here goes: The particular form of the definition of derivative is crucial for partial differential equations. Using weaker notions than classical derivatives leads to a whole new theory of solutions. While this is (part of) what the answer about distributions is about, the particular case of weak derivatives due to Sobolev predates distributions and is of prime importance to the field of PDEs. Also, weak derivatives allow for a rich theory of function spaces (Banach and Hilbert ones).

















































              Euclid did not have real numbers. He could say that the length of one line segment goes into the length of another a certain number of times, or that the ratio of lengths of two segments was the same as that of two specified numbers ("numbers" were $2,3,4,5,\ldots$, and in particular $1$ was not a "number") or that the ratio of the length of the diagonal of a square to the side was not the ratio of any two numbers.



              There is a much celebrated essay by James Wilkinson titled The Perfidious Polynomial that I have not brought myself to read beyond the first couple of pages because of the profound stupidity of what Wilkinson claims is the history of the development of number systems. He states with a straight face that negative numbers were introduced for the purpose of solving polynomial equations that had no positive solutions, that rational numbers were introduced after that for the purpose of solving yet more equations, that irrational numbers were then introduced for solving yet more, and finally complex numbers for the purpose of solving quadratic equations that lacked real solutions. Did Wilkinson never hear that Euclid proved the statement above about the ratio of diagonal to side of a square, but Euclid had never heard of negative numbers, and did not even have anything that we would consider a system of numbers that includes positive rational and irrational numbers? That imaginary numbers were used to find real solutions of third-degree polynomials? That negative numbers were introduced by accountants for their own purposes? And that all of that obviously makes more sense and is more elegant and intellectually satisfying than the childish story he tells? In order to do what Wilkinson says was done, one would need certain definitions that were introduced only much later.

















































                I had found a reasonable definition of a topology for a set $X$ and a closure operator $\mathsf{cl}$ on $\mathsf{Pwr}\,X$ so that if $X$ is a finite-dimensional real vector space and $\mathsf{cl}$ is the convex hull operator, then the resulting topology is the usual topology: the convex set $O$ is open iff for every convex $D$ such that neither $D \cap O$ nor $D \smallsetminus O$ is empty, there is a nonempty convex set $C \subseteq D \smallsetminus O$ such that, for all convex $C' \subseteq C$, the set $O \cup C'$ is convex.



                Generalizing this to a partially ordered set requires something that looks like union. Least upper bounds can introduce extraneous elements, so I looked for an element-free description of union when working with sets. The version I used was: if $S$ is a set and $\mathcal{A}$ is a collection of subsets of $S$, then $\bigcup \mathcal{A} = S$ iff for all $T \subseteq S$, if $A \cap T = \varnothing$ for all $A \in \mathcal{A}$, then $T = \varnothing$.



                To adapt this definition to a partially ordered set, a least element, say $\bot$, will be convenient. Since a collection of elements need not have a greatest lower bound, think of $\wedge$ as a partial function, where $a \wedge t \neq \bot$ means either $\{ a, t \}$ does not have a greatest lower bound, or $\{ a, t \}$ has a greatest lower bound but that greatest lower bound is not $\bot$.



                The adapted definition allows one to use the collection of elements that satisfy it as something like a basis for a topology.





























                  11 Answers
                  up vote
                  24
                  down vote













A famous definition which led to a completely new point of view is the definition of Schwartz distributions. It changed the understanding of what a "function" is, even among engineers.



Actually, Dirichlet's definition of a function in the 19th century also clarified many things.






                      edited yesterday


























                      community wiki





                      2 revs
                      Alexandre Eremenko























                          up vote
                          16
                          down vote













                          In set theory, definitely the notion of a Woodin cardinal.



First, it is not an entirely straightforward notion to guess. Significant large cardinals were up to that point defined as critical points of certain elementary embeddings. This is not the case here: Woodin cardinals need not be measurable. If $\kappa$ is Woodin, then $V_\kappa$ is a model of set theory where there is a proper class of strong cardinals. Woodinness requires more, namely, that these strong embeddings move some predicates correctly.



Second, the definition turned out to identify a pivotal point in inner model theory: for notions weaker than Woodinness, the corresponding canonical inner models carry $\mathbf{\Delta}^1_3$ well-orderings of the reals. This is no longer true once we have Woodin cardinals. This is closely tied up with the complexity of the comparison process: given two set models that look like initial segments of canonical inner models, how hard is it to compare them to tell which one carries more information? For notions weaker than Woodinness, this process is carried out in an essentially linear fashion: disagreements between the models are witnessed by some measures, and repeatedly using these least measures to form ultrapowers eventually lines the models up: one of the iterates ends up as an initial segment of the other, and whichever one is longer comes from the model which originally had more information. With Woodin cardinals and beyond, this process is no longer enough. Instead, comparisons sometimes need to retrace steps, and rather than a linear iteration, at the end we have tree-like structures. Identifying these crucial differences allowed us to develop inner model theory beyond this point. This in turn has led to many results and to the identification of deep connections between large cardinals and descriptive set theory. Literally, thanks to the presence of Woodin cardinals, the set-theoretic landscape grew and transformed significantly.



                          Third, Woodinness also turns out to be the notion needed to carry out certain forcing constructions. Some consistency results that were not expected now could be established, thanks to the identification of genuinely new forcing notions that use the Woodin cardinals in an essential way. Other constructions that were known from significant large cardinals were improved to their optimal form.



The story of how the notion of Woodinness was identified is actually quite nice. Kunen had used huge cardinals to prove the consistency of the existence of saturated ideals on $\omega_1$. The embeddings witnessing hugeness end up lifting to the generic embeddings witnessing saturation in the appropriate forcing extension. In 1984, Foreman, Magidor and Shelah identified an entirely new way of finding models with saturated ideals. Their construction improved the large cardinal notion needed from hugeness to supercompactness. What is significant is that the generic embedding is no longer a lifting of an old genuine embedding. Indeed, their construction preserves $\omega_1$. As a consequence of their results, in May of the same year, Woodin showed that supercompactness implies that all projective sets of reals are Lebesgue measurable (and more). Conversations between Shelah and Woodin quickly led to the realization that much weaker cardinals than supercompact sufficed for this result. Through a series of refinements, the notions of what we now call Shelah and Woodin cardinals were identified, with the latter being of precisely the right strength: all this happened while developments in determinacy and inner model theory showed the deep connections between these fields, and the pivotal role that Woodin cardinals played in this connection. This all happened quite quickly: by the time the Shelah-Woodin paper appeared in print, the importance of Woodin cardinals was already recognized.




                          MR1074499 (92m:03087). Shelah, Saharon; Woodin, Hugh. Large cardinals imply that every reasonably definable set of reals is Lebesgue measurable. Israel J. Math. 70 (1990), no. 3, 381–394.







                          • Those were some exciting times in set theory!
                            – Asaf Karagila
                            16 hours ago










                          • This answer is so good it's giving me Woodiness.
                            – Robert Frost
                            2 hours ago

















                          edited yesterday


























                          community wiki





                          2 revs
                          Andrés E. Caicedo













                          up vote
                          16
                          down vote













                          A good example comes from the definition of manifolds, although it's less of a single definition and more of an evolving notion.



The background to modern differential geometry lies in trying to understand cases where the parallel postulate fails. However, rigorously pinning down what sort of spaces to study is pretty tricky. It's only tangentially related, but there's a famous paper by Lakatos called "Proofs and Refutations", about the Euler characteristic of surfaces, which gives insight into why defining spaces precisely is a non-trivial problem.



Riemann had an intuitive notion of manifolds, and it's possible to see how his ideas evolved into the modern notion. It seems that Riemann's "Mannigfaltigkeit" had an element of "I know it when I see it." Similarly, Poincaré didn't have a modern definition of a manifold in Analysis Situs. The modern definition of a smooth manifold as a locally Euclidean space wasn't introduced until 1912, by Hermann Weyl.
Even then, it wasn't clear until Whitney proved his embedding theorem that the intrinsic definition was equivalent to the definition via submanifolds of Euclidean space.



That's still not the end of the story. If one tries to really pin down the definition, one realizes that there are at least three distinct categories of manifolds: smooth, piecewise-linear, and topological. The fact that these different types are not equivalent gives rise to a lot of research that is still continuing today.






                          share|cite|improve this answer



























                            up vote
                            16
                            down vote













                            A good example comes from the definition of manifolds, although it's less of a single definition and more of an evolving notion.



                            The background to modern differential geometry lies in trying understand cases where the parallel postulate fails. However, trying to rigorously pin down what sort of spaces to study is pretty tricky. It's only tangentially related, but there's a famous paper by Lakatos called "Proofs and Refutations" about the Euler characteristic on surfaces which gives insight as to why defining spaces precisely is a non-trivial problem.



                            Riemann had an intuitive notion of manifolds, and it's possible to see how his ideas evolved into the modern notion. It seems that Riemann's "Mannigfaltigkeit" had an element of "I know it when I see it." Similarly, Poincare didn't have a modern definition for manifold in Analysis Situs. The modern definition of a smooth manifold as a locally-Euclidean space wasn't introduced until 1912 by Hermann Weyl.
                            Even then, it wasn't clear until the Whitney proved the embedding theorem that the intrinsic definition was equivalent to submanifolds of Euclidean space.



                            That's still not the end of the story. If one tries to really pin down the definition, one realizes that there are at least three distinct categories of manifolds; smooth, piece-wise linear, and topological manifolds. The fact that these different types are not equivalent gives rise to a lot of research that is still continuing today.






                            share|cite|improve this answer

























                              up vote
                              16
                              down vote










                              up vote
                              16
                              down vote









                              A good example comes from the definition of manifolds, although it's less of a single definition and more of an evolving notion.



                              The background to modern differential geometry lies in trying understand cases where the parallel postulate fails. However, trying to rigorously pin down what sort of spaces to study is pretty tricky. It's only tangentially related, but there's a famous paper by Lakatos called "Proofs and Refutations" about the Euler characteristic on surfaces which gives insight as to why defining spaces precisely is a non-trivial problem.



                              Riemann had an intuitive notion of manifolds, and it's possible to see how his ideas evolved into the modern notion. It seems that Riemann's "Mannigfaltigkeit" had an element of "I know it when I see it." Similarly, Poincare didn't have a modern definition for manifold in Analysis Situs. The modern definition of a smooth manifold as a locally-Euclidean space wasn't introduced until 1912 by Hermann Weyl.
                              Even then, it wasn't clear until the Whitney proved the embedding theorem that the intrinsic definition was equivalent to submanifolds of Euclidean space.



                              That's still not the end of the story. If one tries to really pin down the definition, one realizes that there are at least three distinct categories of manifolds; smooth, piece-wise linear, and topological manifolds. The fact that these different types are not equivalent gives rise to a lot of research that is still continuing today.






                              share|cite|improve this answer














                              A good example comes from the definition of manifolds, although it's less of a single definition and more of an evolving notion.



                              The background to modern differential geometry lies in trying understand cases where the parallel postulate fails. However, trying to rigorously pin down what sort of spaces to study is pretty tricky. It's only tangentially related, but there's a famous paper by Lakatos called "Proofs and Refutations" about the Euler characteristic on surfaces which gives insight as to why defining spaces precisely is a non-trivial problem.



                              Riemann had an intuitive notion of manifolds, and it's possible to see how his ideas evolved into the modern notion. It seems that Riemann's "Mannigfaltigkeit" had an element of "I know it when I see it." Similarly, Poincare didn't have a modern definition for manifold in Analysis Situs. The modern definition of a smooth manifold as a locally-Euclidean space wasn't introduced until 1912 by Hermann Weyl.
                              Even then, it wasn't clear until the Whitney proved the embedding theorem that the intrinsic definition was equivalent to submanifolds of Euclidean space.



                              That's still not the end of the story. If one tries to really pin down the definition, one realizes that there are at least three distinct categories of manifolds: smooth, piecewise-linear, and topological. The fact that these types are not equivalent has given rise to a great deal of research that continues today.







                              answered yesterday
                              community wiki
                              Gabe K

                                  up vote
                                  11
                                  down vote













                                  In the comments I offered the "classical" example of Grothendieck's definition of a scheme. But for a more modern example: Fomin and Zelevinsky's definition of a cluster algebra is at first sight a rather ungainly thing, but turned out to be exactly what was needed to unify certain patterns that appeared in various hitherto unrelated areas of mathematics like discrete integrable systems, the representation theory of associative algebras, Poisson/symplectic geometry, etc.






                                      answered yesterday
                                      community wiki
                                      Sam Hopkins

                                          up vote
                                          9
                                          down vote













                                          My nominee is the definition of a topology. I don't know the history of the subject, but my impression is that it was the result of a collective effort. The definition via collections of subsets closed under finite intersections and arbitrary unions is not the kind of thing one would get up one morning and decide to consider. It seems to me this was an example of a category where the definition of morphisms was clear (continuous maps) but where finding the right notion of object was not obvious.

                                          Perhaps a sign of a good definition is that it leads to other good definitions. Those could arise in order to remedy flaws of the original one, like Grothendieck's topos. They could also arise as particular cases of the original definition, like Schwartz's definition of the topology on $\mathcal{D}$ underlying the notion of distribution mentioned in Alexandre's answer.






                                              answered 16 hours ago
                                              community wiki
                                              Abdelmalek Abdesselam

                                                  up vote
                                                  8
                                                  down vote













                                                  I would nominate the definition of probability theory in terms of measure theory, where a probability space is an abstract measure space, an event is a measurable subset, a random variable is a measurable function, and expectation is the Lebesgue integral. This is usually credited to a 1933 paper by Kolmogorov, though it seems that many of the ideas had previously been around. Here is a very interesting survey by Shafer and Vovk, entitled "The origins and legacy of Kolmogorov's Grundbegriffe."



                                                  People had been thinking about probability for a long time before that, but these definitions made it possible to place everything on a rigorous footing and exploit many key results from measure theory. Shafer and Vovk say that it took probability from a mathematical "pastime" to become a respected branch of pure mathematics.



                                                  (I am no expert historian so please feel free to edit with corrections, more background, further discussion, etc.)






                                                  • In the words of my probability professor: "there are only three letters in probability, $\Omega$, $\mathcal{A}$, and $\mathbb{P}$".
                                                    – Najib Idrissi
                                                    13 hours ago










                                                  • @NajibIdrissi : There's also $X$ as in $X:\Omega\to\mathbb R$, and $\operatorname E$ as in $\operatorname E X = \int_\Omega X\, dP$.
                                                    – Michael Hardy
                                                    8 hours ago






                                                  • 1
                                                    I am convinced that Kolmogorov was not God's last prophet, and his definitions need to be kept in a certain limited context while others advance beyond them.
                                                    – Michael Hardy
                                                    8 hours ago















                                                  answered yesterday
                                                  community wiki
                                                  Nate Eldredge














                                                  up vote
                                                  7
                                                  down vote













                                                  I am surprised no one has mentioned Weierstrass's $\epsilon,\delta$ definition of limits. It enabled mathematicians to reason rigorously about convergence and to eliminate numerous apparent inconsistencies.






                                                      answered 11 hours ago
                                                      community wiki
                                                      Aryeh Kontorovich

                                                          up vote
                                                          5
                                                          down vote














                                                          My interest in this is mainly psychological, namely, how the action of naming something somehow brings it into existence and organizes the world around it.




                                                          The power of naming...



                                                          J.-M. Souriau wrote a book meant to take anyone from the crib to relativistic cosmology and quantum statistics — that he unfortunately (predictably?) never got to really finish, but the draft is online. He starts by saying that all babies are (of course) natural born mathematicians, and everyone’s “first and sensual” abstraction is when they name that recurring warm blob “mommy”.



                                                          Surely that qualifies as “crucial to further understanding”; he argues that half math is more of the same, and tells stories illustrating the power of naming numbers, points, the unknown $(x)$, maps, variables, determinants, groups, morphisms, cohomologies... all would qualify as answers. It’s not just about coming up with epoch-making definitions, either: teaching students to define and name variables in their problems is also what we do.



                                                          We’ve all seen quotes to the effect that it’s all in the definitions. “Success stories” in that are largely the history of math, or at least one way to look at it. Specific recent ones that likely inspired the book (and made it to the MSC) would be symplectic manifold (1950), symplectic geometry (1953; earlier it was this), and moment(um) map (1967).



                                                          (I agree with others that this is a near-duplicate, and will transfer my answer there if needed.)






                                                          • If you say that the draft is online, why not link to it, for the readers' convenience?
                                                            – Alex M.
                                                            yesterday










                                                          • Oh. What’s wrong?? (Link added, sorry, I was in a rush.)
                                                            – Francois Ziegler
                                                            yesterday








                                                          • 2




                                                            I've heard Noam Chomsky theorize in the reverse direction: That the need to count inductively helped evolve recursive brain structures that were then repurposed for language.
                                                            – Daniel R. Collins
                                                            yesterday










                                                          • This is perhaps restating Heraclitus's Logos, a concept considered so important in its day that it was made the opening passage of John's Gospel.
                                                            – Robert Frost
                                                            1 hour ago










                                                          • One might also reflect upon the critical role of words in annotating things, which is itself integral to understanding. Since once a concept is annotated we can begin to annotate its properties and then no longer need to hold the full concept in our mind, rather the salient parts thereof, simplifying the process of thinking about the same and making progress possible where the degree of complexity would otherwise be too great.
                                                            – Robert Frost
                                                            1 hour ago

















                                                          up vote
                                                          5
                                                          down vote














                                                          My interest in this is mainly psychological, namely, how the action of naming something somehow brings it into existence and organizes the world around it.




                                                          The power of naming...



                                                          J.-M. Souriau wrote a book meant to take anyone from the crib to relativistic cosmology and quantum statistics — that he unfortunately (predictably?) never got to really finish, but the draft is online. He starts by saying that all babies are (of course) natural born mathematicians, and everyone’s “first and sensual” abstraction is when they name that recurring warm blob “mommy”.



                                                          Surely that qualifies as “crucial to further understanding”; he argues that half math is more of the same, and tells stories illustrating the power of naming numbers, points, the unknown $(x)$, maps, variables, determinants, groups, morphisms, cohomologies... all would qualify as answers. It’s not just about coming up with epoch-making definitions, either: teaching students to define and name variables in their problems is also what we do.



                                                          We’ve all seen quotes to the effect that it’s all in the definitions. “Success stories” in that are largely the history of math, or at least one way to look at it. Specific recent ones that likely inspired the book (and made it to the MSC) would be symplectic manifold (1950), symplectic geometry (1953; earlier it was this), and moment(um) map (1967).



                                                          (I agree with others that this is a near-duplicate, and will transfer my answer there if needed.)






                                                          share|cite|improve this answer























                                                          • If you say that the draft is online, why not link to it, for the readers' convenience?
                                                            – Alex M.
                                                            yesterday










                                                          • Oh. What’s wrong?? (Link added, sorry, I was in a rush.)
                                                            – Francois Ziegler
                                                            yesterday








                                                          • 2




                                                            I've heard Noam Chomsky theorize in the reverse direction: That the need to count inductively helped evolve recursive brain structures that were then repurposed for language.
                                                            – Daniel R. Collins
                                                            yesterday










                                                          • This is perhaps restating Heraclitus Logos, a concept considered so important in its day it was made the opening passage of the Christian Bible.
                                                            – Robert Frost
                                                            1 hour ago










                                                          • One might also reflect upon the critical role of words in annotating things, which is itself integral to understanding. Once a concept is annotated we can begin to annotate its properties, and we then no longer need to hold the full concept in our mind but only its salient parts, simplifying the process of thinking about it and making progress possible where the degree of complexity would otherwise be too great.
                                                            – Robert Frost
                                                            1 hour ago















                                                          edited yesterday


























                                                          community wiki





                                                          4 revs
                                                          Francois Ziegler














                                                          up vote
                                                          4
                                                          down vote













                                                          Well, this is basically the same answer as the one by Alexandre Eremenko, but here goes: the particular form of the definition of the derivative is crucial for partial differential equations. Using notions weaker than the classical derivative leads to a whole new theory of solutions. While this is (part of) what the answer about distributions is about, the particular case of weak derivatives, due to Sobolev, predates distributions and is of prime importance in the field of PDEs. Weak derivatives also allow for a rich theory of function spaces (Banach and Hilbert ones).
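
For readers who want the precise notion behind this answer, Sobolev’s weak derivative can be stated as follows (a standard textbook formulation, not quoted from the answer itself):

```latex
% u \in L^1_{\mathrm{loc}}(\Omega), \Omega \subseteq \mathbb{R}^n open;
% v is the \alpha-th weak partial derivative of u, written v = D^\alpha u, if
\int_\Omega u \, D^\alpha \varphi \, dx
  \;=\; (-1)^{|\alpha|} \int_\Omega v \, \varphi \, dx
\qquad \text{for all test functions } \varphi \in C_c^\infty(\Omega).
```

For a classically differentiable $u$ this reduces to integration by parts, which is why the weak notion extends the classical one.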






                                                          share|cite|improve this answer



























                                                              answered yesterday


























                                                              community wiki





                                                              Dirk























                                                                  up vote
                                                                  1
                                                                  down vote













                                                                  Euclid did not have real numbers. He could say that the length of one line segment goes into the length of another a certain number of times, or that the ratio of the lengths of two segments was the same as that of two specified numbers (“numbers” were $2, 3, 4, 5, \ldots,$ and in particular $1$ was not a “number”), or that the ratio of the length of the diagonal of a square to its side was not the ratio of any two numbers.



                                                                  There is a much-celebrated essay by James Wilkinson titled The Perfidious Polynomial that I have not brought myself to read beyond the first couple of pages, because of the profound stupidity of what Wilkinson claims is the history of the development of number systems. He states with a straight face that negative numbers were introduced for the purpose of solving polynomial equations that had no positive solutions, that rational numbers were introduced after that for the purpose of solving yet more equations, that irrational numbers were then introduced for solving yet more, and finally complex numbers for the purpose of solving quadratic equations that lacked real solutions. Did Wilkinson never hear that Euclid proved the statement above about the ratio of the diagonal to the side of a square, but Euclid had never heard of negative numbers, and did not even have anything that we would consider a system of numbers that includes positive rational and irrational numbers? That imaginary numbers were used to find real solutions of third-degree polynomial equations? That negative numbers were introduced by accountants for their own purposes? And that all of that obviously makes more sense and is more elegant and intellectually satisfying than the childish story he tells? In order to do what Wilkinson says was done, one would need certain definitions that were introduced only much later.






                                                                  share|cite|improve this answer



























                                                                      answered 8 hours ago


























                                                                      community wiki





                                                                      Michael Hardy























                                                                          up vote
                                                                          -4
                                                                          down vote













                                                                          I had found a reasonable definition of a topology for a set $X$ and a closure operator $\mathsf{cl}$ on $\mathsf{Pwr}\,X$, such that if $X$ is a finite-dimensional real vector space and $\mathsf{cl}$ is the convex hull operator, then the resulting topology is the usual one: the convex set $O$ is open iff for every convex $D$ for which neither $D \cap O$ nor $D \smallsetminus O$ is empty, there is a nonempty convex set $C \subseteq D \smallsetminus O$ such that, for all convex $C' \subseteq C$, the set $O \cup C'$ is convex.



                                                                          Generalizing this to a partially ordered set requires something that looks like union. Least upper bounds can introduce extraneous elements, so I looked for an element-free description of union when working with sets. The version I used was: if $S$ is a set and $\mathcal{A}$ is a collection of subsets of $S$, then $\bigcup \mathcal{A} = S$ iff for all $T \subseteq S$, if $A \cap T = \varnothing$ for all $A \in \mathcal{A}$, then $T = \varnothing$.
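
The element-free characterization of union above can be checked by brute force on small finite examples. The following sketch (the helper names `subsets` and `union_is_all_of_S` are mine, not from the answer) verifies that it agrees with the ordinary union:

```python
from itertools import combinations

def subsets(s):
    """All subsets of the finite set s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def union_is_all_of_S(S, A):
    # Element-free characterization: the union of A equals S iff
    # every T ⊆ S that is disjoint from all members of A is empty.
    return all(not T
               for T in subsets(S)
               if all(not (a & T) for a in A))

S = frozenset({1, 2, 3})
families = [
    [frozenset({1}), frozenset({2, 3})],  # union = S
    [frozenset({1}), frozenset({2})],     # union ≠ S (misses 3)
]
for A in families:
    # Compare against the ordinary element-wise union.
    assert union_is_all_of_S(S, A) == (frozenset().union(*A) == S)
```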



                                                                          To adapt this definition to a partially ordered set, a least element, say $\bot$, will be convenient. Since a collection of elements need not have a greatest lower bound, think of $\wedge$ as a partial function, where $a \wedge t \neq \bot$ means either that $\{ a, t \}$ does not have a greatest lower bound, or that it has a greatest lower bound which is not $\bot$.



                                                                          The adapted definition allows one to use the collection of elements that satisfy it as something like a basis for a topology.






                                                                          share|cite|improve this answer



























                                                                            up vote
                                                                            -4
                                                                            down vote













                                                                            I had found a reasonable definition for a topology for a set $X$ and a closure operator $mathsf{cl}$ for $mathsf{Pwr} X$ so that if $X$ is a finite dimensional real vector space and $mathsf{cl}$ is the convex hull operator then resulting topology is the usual topology: The convex set $O$ is open iff for all convex $D$ satisfying neither $D cap O$ nor $D smallsetminus O$ is empty then there is a nonempty convex set $C subseteq D smallsetminus O$ satisfying, for all convex $C' subseteq C$ the set $O cup C'$ is convex.



                                                                            Generalizing this to a partially ordered set requires something that looks like union. Least upper bounds can introduce extraneous elements. I looked for an element free description of union when working with sets. The version I used was: If $S$ is a set and $mathcal{A}$ is a collection of subsets of $S$ then $cup mathcal{A} = S$ iff for all $T subseteq S$, if $A cap T = varnothing$ for all $A in mathcal{A}$ then $T = varnothing$.



                                                                            To adapt this definition to a partially ordered set a least element, say $bot$, will be convenient. Since a collection of elements need not have a greatest lower bound think of $wedge$ as a partial function. where $a wedge t neq bot$ means either ${ a, t } $ does not have a greatest lower bound or ${ a, t } $ has a greatest lower bound but that greatest lower bound is not $bot$.



                                                                            The adapted definition allows on to use the collection of elements that satisfy it as something like a basis for a topology






                                                                            share|cite|improve this answer

























                                                                              up vote
                                                                              -4
                                                                              down vote










                                                                              answered yesterday


























                                                                              community wiki





                                                                              Jay Kangel






























