Why should makefiles have an “install” target?











Coming from the world of C and C++, most build systems have an install target, notably Makefiles (where GNU recommends it, for example) or CMake. This target copies the runtime files (executables, libraries, ...) into the operating system (for example, into C:\Program Files on Windows).



This feels really hacky to me: installing programs is not the responsibility of the build system, but of the operating system / package manager. It also means the build system or build script must know how installed programs are organized: environment variables, registry entries, symlinks, permissions, and so on.



At best, a build system should have a release target that outputs an installable package (for example a .deb or .msi), and then kindly ask the operating system to install it. That would also let the user uninstall without having to type make uninstall.



So, my question: why do build systems usually recommend having an install target?










  • You're arguing that "make install" does not fall within the responsibility of a build system, but that the much more involved and platform-specific responsibility of creating an installable package does.
    – pmf, Nov 23 at 16:05

  • Anyway: sometimes you want to install an application that is not handled by the OS/package manager (because it has dependencies that would cause conflicts impossible to resolve using the package manager, etc.). make install usually installs under /usr/local (or even /opt), directories not handled by the "core OS/package-management system". No idea whether Windows has a similar convention, though.
    – Bakuriu, Nov 23 at 19:12

  • "This feels really hacky." Well, what did you expect from the world of C/C++? ;-)
    – Mason Wheeler, Nov 23 at 19:53

  • Note that make install makes no sense when we talk about cross-compiling.
    – Hagen von Eitzen, Nov 24 at 13:41

  • @HagenvonEitzen it does with DESTDIR.
    – Nax, Nov 24 at 14:25















Tags: build-system, cmake, make, install






asked Nov 23 at 13:40 by Synxis; edited Nov 24 at 11:54
6 Answers
Answer (score 23, answered Nov 23 at 13:59 by Jörg W Mittag)

Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.
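For context, the convention this answer refers to is small: the GNU Coding Standards describe an install target driven by a prefix variable and an optional DESTDIR staging directory. A minimal sketch (myprog is a placeholder program name; prefix and DESTDIR can be overridden on the make command line):

```make
# Minimal GNU-style install/uninstall targets (sketch; myprog is a placeholder)
prefix  = /usr/local
bindir  = $(prefix)/bin
INSTALL = install

all: myprog

install: myprog
	$(INSTALL) -d $(DESTDIR)$(bindir)
	$(INSTALL) -m 0755 myprog $(DESTDIR)$(bindir)/myprog

uninstall:
	rm -f $(DESTDIR)$(bindir)/myprog

.PHONY: all install uninstall
```

Running make install prefix="$HOME/.local" or make install DESTDIR=/tmp/stage overrides the defaults without editing the makefile, which is exactly what both end users and package managers rely on.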






  • I'm curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OSes come with a way of managing installed programs? For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages, for example).
    – Synxis, Nov 23 at 14:13

  • @Synxis BSD, Linux, Unix all use makefiles. Whether it's preferred to use them for installation, I don't know, but you often have that ability using make install.
    – Rob, Nov 23 at 14:33

  • In Debian at least it's preferred to use checkinstall over make install for two reasons: "You can easily remove the package with one step." and "You can install the resulting package upon multiple machines." As checkinstall builds a .deb and installs it, it uses the package manager...
    – Aaron Hall, Nov 23 at 17:01

  • @Synxis - There are several Linux distributions (often called source distros) where the package manager installs programs by downloading a tar file, decompressing it, then running make install.
    – slebetman, Nov 23 at 19:15

  • @AaronHall Correct me if I'm wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor its actions for package building.
    – cmaster, Nov 24 at 10:26


















Answer (score 5)

A makefile might have no install target at all, and more importantly, you can have programs which are not even supposed to be installable (e.g. because they should run from their build directory, or because they can run installed anywhere). The install target is just a convention for typical makefiles.



However, many programs require external resources to run (for example: fonts, databases, configuration files, etc.), and their executables often make assumptions about where those resources live. For example, your bash shell generally reads an initialization file from /etc/bash.bashrc, etc. These resources generally live in the file system (see hier(7) for conventions about the file hierarchy), and the default file paths are built into your executable.



Try running strings(1) on most executables of your system; you'll find out which file paths are baked into them.



BTW, for many GNU programs using autoconf, you can run make install DESTDIR=/tmp/destdir/ without being root. /tmp/destdir/ is then filled with the files that should later be packaged.
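The staged-install idea is easy to see without a real project. This sketch hand-copies one file the way a makefile's install target would, with DESTDIR pointing at a scratch directory (the paths and the hello script are illustrative placeholders):

```shell
# A staged install under DESTDIR: no root needed, and the staged tree can be
# fed to a packaging tool afterwards. This imitates what
# "make install DESTDIR=/tmp/destdir" does for a single installed file.
DESTDIR=/tmp/destdir
prefix=/usr/local

mkdir -p "$DESTDIR$prefix/bin"
printf '#!/bin/sh\necho hello\n' > "$DESTDIR$prefix/bin/hello"
chmod 755 "$DESTDIR$prefix/bin/hello"

# The staged tree mirrors the final layout, rooted at $DESTDIR:
find "$DESTDIR" -type f
```

Note that DESTDIR is purely an install-time relocation: the paths compiled into the program still refer to $prefix, which is why the tree can later be copied to / (or packed into a .deb/.rpm) and work unchanged.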



FWIW, I tend to believe that my bismon (GPLv3+ licensed) program (described in my bismon-chariot-doc.pdf report) cannot be "installed"; I am not sure I could prove that, and I cannot imagine how I could make that program installable.






  • DESTDIR or other prefixes are too often forgotten. As soon as external resources such as dynamic libraries are involved, it is not possible to build the software without knowing where it will be installed. It's also great for installing to non-standard locations, e.g. /opt or into $HOME. The only way to avoid different prefixes is to use containers, but that's of course a Linux-specific solution.
    – amon, Nov 23 at 14:50

  • I've seen more than one package that, if you tried DESTDIR=/tmp/destdir, would not work later when installed to the normal place, because DESTDIR was used in path generation.
    – Joshua, Nov 23 at 15:34

  • @amon: I'm not sure I would characterize containers as Linux-specific. Linux may be a common target platform for containerization, but some form of container technology exists in most modern operating systems.
    – Kevin, Nov 23 at 16:25

  • @Joshua It shouldn't; DESTDIR should only be relevant during the install step. You should be able to do ./configure --prefix="/opt/foo" && make && DESTDIR=/tmp/foo make install and relocate the package to /opt/foo without any issue.
    – Nax, Nov 24 at 14:27




















Answer (score 3)

There are several reasons which come to mind.




  • Many package-creation tools (the Debian build system, for example, and IIRC rpm as well) already expect the build script to "install" the program into some special subdirectory. So it is driven by backward compatibility, in both directions.

  • A user may want to install the software into a local space, such as the $HOME directory. Not all package managers support that.

  • There may still be environments which do not have packages.
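The $HOME case above is worth spelling out: with a per-user prefix, no root and no package manager are involved. With autotools this is "./configure --prefix=$HOME/.local && make install"; the sketch below imitates just the final copy step with a placeholder script (demo-tool is made up for illustration):

```shell
# Per-user install: files land under a prefix inside $HOME, writable
# without root and invisible to the system package manager.
PREFIX="$HOME/.local"
mkdir -p "$PREFIX/bin"
printf '#!/bin/sh\necho from-home-prefix\n' > "$PREFIX/bin/demo-tool"
chmod 755 "$PREFIX/bin/demo-tool"

# Run the freshly "installed" tool from the per-user prefix.
"$PREFIX/bin/demo-tool"
```

Many systems already put ~/.local/bin on PATH, which is why this prefix is a common choice for unprivileged installs.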






  • I reworded the question a bit; I meant program manager when I said that makefiles should create installable packages.
    – Synxis, Nov 23 at 14:15


















Answer (score 1)

One reason not mentioned: quite often you are not using the current version of the software, or you are using a modified version of it. Trying to create a custom package is not only more work, it can also conflict with currently created and distributed packages. In open source this happens a lot, especially when breaking changes are introduced in versions after the one you are using.



Let's say you're using the open source project FOO, which is currently on version 2.0.1, while you are on version 1.3.0. You don't want to use anything above that because version 2.0.0 is incompatible with what you are currently doing, but there is a single bug fix in 2.0.1 you desperately need. Having the make install option lets you install the modified 1.3.0 software without having to worry about creating a package and installing it on your system.






Answer (score 1)

    Linux distributions generally separate program maintenance from package maintenance. A build system that integrates package generation would force program maintainers to also perform package maintenance.



    This is usually a bad idea. Distributions have lots of infrastructure to verify internal consistency, provide binaries for multiple target platforms, perform small alterations to better integrate with the rest of the system and provide a consistent experience for users reporting bugs.



    To generate packages directly from a build system, you would have to either integrate or bypass all of this infrastructure. Integrating it would be a lot of work for questionable benefit, and bypassing it would give a worse user experience.



    This is one of the "top of the food chain" problems that are typical in multi-party systems. If you have multiple complex systems, there needs to be a clear hierarchy of which system is responsible for coordinating all others.



    In the case of software installation management, the package manager is this component, and it will run the package's build system, then take the output through a convenient interface ("files in a directory after an installation step"), generate a package and prepare it for upload to a repository.



    The package manager stands in the middle between the build system and the repository here, and is in the best position to integrate well with both.



    You may have noticed that only a few of the JavaScript packages available through npm are also available through apt; this is mainly because the JavaScript people decided that npm and its associated repository were going to be the top of their food chain, which made it close to impossible to ship these packages as Debian packages.



    With my Debian Developer hat on: if you release open source software, please leave the packaging to distribution maintainers. It saves both you and us a lot of work.






    • You've said nothing about why there's an install target, and it seems to me that most of what you've written would apply to it too...
      – curiousdannii, Nov 24 at 11:45

    • @curiousdannii There needs to be some interface between the build system and the package manager, and this happens to be the simplest one, so it won.
      – Simon Richter, Nov 25 at 13:06


















Answer (score 1)

    Well, application developers are the ones who know where each file should go. They could leave that in documentation, and have package maintainers read it and build a script for each package. Maybe the package maintainers will misinterpret the documentation and will have to debug the script until it works. This is inefficient. It's better for the application developers to write a script that properly installs the application they've written.



    They could write an install script with an arbitrary name, or make it part of some other script. However, with a standard install command, make install (a convention that predates package managers), it has become really easy to make packages. If you look at the PKGBUILD template for making Arch Linux packages, you can see that the function that actually packages simply does make DESTDIR="$pkgdir/" install. This probably works for the majority of packages, and for more still with a little modification. Thanks to make (and the autotools) being standard, packaging is really, really easy.
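    To make the PKGBUILD point concrete, here is a trimmed sketch of such a file (foo, its version, and the source layout are placeholders; real PKGBUILDs also declare source=, checksums, and more). The packaging step is just the project's own install target, redirected into $pkgdir:

    ```bash
    # Trimmed PKGBUILD sketch: the build system does the building,
    # and its install target fills $pkgdir for the package manager.
    pkgname=foo
    pkgver=1.0
    pkgrel=1
    arch=('x86_64')

    build() {
      cd "$srcdir/$pkgname-$pkgver"
      ./configure --prefix=/usr
      make
    }

    package() {
      cd "$srcdir/$pkgname-$pkgver"
      make DESTDIR="$pkgdir/" install
    }
    ```

    makepkg then archives whatever landed under $pkgdir into the final package, so any project with a DESTDIR-respecting install target is packageable almost for free.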






    share|improve this answer





















      Your Answer








      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "131"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      onDemand: false,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














      draft saved

      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsoftwareengineering.stackexchange.com%2fquestions%2f381924%2fwhy-should-makefiles-have-an-install-target%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown




















      StackExchange.ready(function () {
      $("#show-editor-button input, #show-editor-button button").click(function () {
      var showEditor = function() {
      $("#show-editor-button").hide();
      $("#post-form").removeClass("dno");
      StackExchange.editor.finallyInit();
      };

      var useFancy = $(this).data('confirm-use-fancy');
      if(useFancy == 'True') {
      var popupTitle = $(this).data('confirm-fancy-title');
      var popupBody = $(this).data('confirm-fancy-body');
      var popupAccept = $(this).data('confirm-fancy-accept-button');

      $(this).loadPopup({
      url: '/post/self-answer-popup',
      loaded: function(popup) {
      var pTitle = $(popup).find('h2');
      var pBody = $(popup).find('.popup-body');
      var pSubmit = $(popup).find('.popup-submit');

      pTitle.text(popupTitle);
      pBody.html(popupBody);
      pSubmit.val(popupAccept).click(showEditor);
      }
      })
      } else{
      var confirmText = $(this).data('confirm-text');
      if (confirmText ? confirm(confirmText) : true) {
      showEditor();
      }
      }
      });
      });






      6 Answers
      6






      active

      oldest

      votes








      6 Answers
      6






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      23
      down vote













      Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.






      share|improve this answer





















      • I'm curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OS come with a way of managing the installed programs ? For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages for examples)
        – Synxis
        Nov 23 at 14:13






      • 2




        @Synxis BSD, Linux, Unix all use makefiles. Whether it's preferred to use them for installation, I don't know, but you often have that ability using make install.
        – Rob
        Nov 23 at 14:33






      • 1




        In debian at least it's preferred to use checkinstall over make install for two reasons: "You can easily remove the package with one step." and "You can install the resulting package upon multiple machines." - as checkinstall builds a .deb and installs it, it uses the package manager...
        – Aaron Hall
        Nov 23 at 17:01








      • 1




        @Synxis - There are several linux distributions (often called source distros) where the package manager install programs by downloading a tar file, decompress it then run make install
        – slebetman
        Nov 23 at 19:15






      • 1




        @AaronHall Correct me if I'm wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor it's actions for package building.
        – cmaster
        Nov 24 at 10:26















      up vote
      23
      down vote













      Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.






      share|improve this answer





















      • I'm curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OS come with a way of managing the installed programs ? For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages for examples)
        – Synxis
        Nov 23 at 14:13






      • 2




        @Synxis BSD, Linux, Unix all use makefiles. Whether it's preferred to use them for installation, I don't know, but you often have that ability using make install.
        – Rob
        Nov 23 at 14:33






      • 1




        In debian at least it's preferred to use checkinstall over make install for two reasons: "You can easily remove the package with one step." and "You can install the resulting package upon multiple machines." - as checkinstall builds a .deb and installs it, it uses the package manager...
        – Aaron Hall
        Nov 23 at 17:01








      • 1




        @Synxis - There are several linux distributions (often called source distros) where the package manager install programs by downloading a tar file, decompress it then run make install
        – slebetman
        Nov 23 at 19:15






      • 1




        @AaronHall Correct me if I'm wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor it's actions for package building.
        – cmaster
        Nov 24 at 10:26













      up vote
      23
      down vote










      up vote
      23
      down vote









      Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.






      share|improve this answer












      Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.







      share|improve this answer












      share|improve this answer



      share|improve this answer










      answered Nov 23 at 13:59









      Jörg W Mittag

      66.8k14138220




      66.8k14138220












      • I'm curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OS come with a way of managing the installed programs ? For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages for examples)
        – Synxis
        Nov 23 at 14:13






      • 2




        @Synxis BSD, Linux, Unix all use makefiles. Whether it's preferred to use them for installation, I don't know, but you often have that ability using make install.
        – Rob
        Nov 23 at 14:33






      • 1




        In debian at least it's preferred to use checkinstall over make install for two reasons: "You can easily remove the package with one step." and "You can install the resulting package upon multiple machines." - as checkinstall builds a .deb and installs it, it uses the package manager...
        – Aaron Hall
        Nov 23 at 17:01








      • 1




        @Synxis - There are several linux distributions (often called source distros) where the package manager install programs by downloading a tar file, decompress it then run make install
        – slebetman
        Nov 23 at 19:15






      • 1




        @AaronHall Correct me if I'm wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor it's actions for package building.
        – cmaster
        Nov 24 at 10:26


















      • I'm curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OS come with a way of managing the installed programs ? For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages for examples)
        – Synxis
        Nov 23 at 14:13






      • 2




        @Synxis BSD, Linux, Unix all use makefiles. Whether it's preferred to use them for installation, I don't know, but you often have that ability using make install.
        – Rob
        Nov 23 at 14:33






      • 1




        In debian at least it's preferred to use checkinstall over make install for two reasons: "You can easily remove the package with one step." and "You can install the resulting package upon multiple machines." - as checkinstall builds a .deb and installs it, it uses the package manager...
        – Aaron Hall
        Nov 23 at 17:01








      • 1




        @Synxis - There are several linux distributions (often called source distros) where the package manager install programs by downloading a tar file, decompress it then run make install
        – slebetman
        Nov 23 at 19:15






      • 1




        @AaronHall Correct me if I'm wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor its actions for package building.
        – cmaster
        Nov 24 at 10:26




























      up vote
      5
      down vote













      A makefile might have no install target, and more importantly, some programs are not even meant to be installable (e.g. because they are supposed to run from their build directory, or because they can run from any location). The install target is just a convention for typical makefiles.



      However, many programs require external resources at run time (for example: fonts, databases, configuration files, etc.), and their executables often make assumptions about where those resources live. For example, your bash shell generally reads an initialization file from /etc/bash.bashrc. These resources usually sit in the file system (see hier(7) for conventions about the file hierarchy), and the default file paths are built into the executable.



      Try running strings(1) on the executables on your system; you'll find out which file paths are baked into them.



      BTW, for many GNU programs using autoconf, you can run make install DESTDIR=/tmp/destdir/ without being root. /tmp/destdir/ is then populated with the files that should later be packaged.
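      As a concrete sketch of that convention (not from the answer itself: the project name, paths, and file contents below are made up, and GNU make is assumed for .RECIPEPREFIX), the PREFIX/DESTDIR split can be exercised with a throwaway project:

```shell
# Toy demonstration of the PREFIX / DESTDIR split: PREFIX is the final
# location baked into the program at build time, DESTDIR only redirects
# the copy at install time. Requires GNU make (for .RECIPEPREFIX).
set -e
work=$(mktemp -d)
cd "$work"

cat > Makefile <<'EOF'
.RECIPEPREFIX = >
PREFIX ?= /usr/local

hello:
> printf '#!/bin/sh\necho "installed under $(PREFIX)"\n' > hello
> chmod +x hello

install: hello
> mkdir -p $(DESTDIR)$(PREFIX)/bin
> cp hello $(DESTDIR)$(PREFIX)/bin/hello

.PHONY: install
EOF

make PREFIX=/opt/foo                                # build with the final prefix baked in
make PREFIX=/opt/foo DESTDIR="$work/stage" install  # stage the install, no root needed

ls "$work/stage/opt/foo/bin/hello"                  # staged tree mirrors the final layout
```

      The staged tree under $work/stage is exactly what a packaging tool would archive; the program itself still believes it lives under /opt/foo.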



      FWIW, I tend to believe that my bismon (GPLv3+ licensed) program (described in my bismon-chariot-doc.pdf report) cannot be "installed"; I am not sure I could prove that, and I cannot imagine how I could make that program installable.






      share|improve this answer



















      • 2




        DESTDIR or other prefixes are too often forgotten. As soon as external resources such as dynamic libraries are involved, it is not possible to build the software without knowing where it will be installed. Also great for installing to non-standard locations, e.g. /opt or into $HOME. The only way to avoid different prefixes is to use containers, but that's of course a Linux-specific solution.
        – amon
        Nov 23 at 14:50






      • 2




        I've seen more than one package that, if you tried DESTDIR=/tmp/destdir, would not work later when installed to the normal place, because DESTDIR was used in path generation.
        – Joshua
        Nov 23 at 15:34










      • @amon: I'm not sure I would characterize containers as Linux-specific. Linux may be a common target platform for containerization, but some form of container technology exists in most modern operating systems.
        – Kevin
        Nov 23 at 16:25






      • 1




        @Joshua It shouldn't, DESTDIR should only be relevant during the install step. You should be able to do: ./configure --prefix="/opt/foo" && make && DESTDIR=/tmp/foo make install and be able to relocate the package to /opt/foo without any issue.
        – Nax
        Nov 24 at 14:27

























      edited Nov 23 at 16:35

























      answered Nov 23 at 14:27









      Basile Starynkevitch

      27.1k56098
















      up vote
      3
      down vote













      There are several reasons which come to mind.


      • Much package-creation software (the Debian build system, for example, and IIRC rpm as well) already expects the build script to "install" the program into some special subdirectory. So it is driven by backward compatibility, in both directions.

      • A user may want to install the software into a local space, such as the $HOME directory. Not all package managers support that.

      • There may still be environments which do not have packages.
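      For the second point, the GNU conventions make a per-user install a one-liner; a sketch (assuming a hypothetical autoconf-based project, only the directory layout below is actually executed):

```shell
# Per-user install layout under the GNU conventions: no root, no package
# manager involved. An autoconf project would typically be driven as:
#   ./configure --prefix="$HOME/.local" && make && make install
# which puts binaries in ~/.local/bin and man pages under ~/.local/share/man.
prefix="$HOME/.local"
mkdir -p "$prefix/bin" "$prefix/share/man/man1"  # the layout make install would fill
echo "$prefix/bin"
```

      Many distributions already put ~/.local/bin on PATH, so nothing else is needed after the install.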






      share|improve this answer























      • I reworded the question a bit; I meant program manager when I said that makefiles should create installable packages.
        – Synxis
        Nov 23 at 14:15



















      edited Nov 23 at 17:41









      Peter Mortensen

      1,11621114














      answered Nov 23 at 14:07









      max630

      1,120411


























      up vote
      1
      down vote













      One reason not mentioned: quite often you are not using the current version of the software, or you are using a modified version of it. Building a custom package is not only more work, it can also conflict with the packages that are actually created and distributed. In open source this happens a lot, especially when breaking changes are introduced in versions later than the one you are using.



      Say you're using the open source project FOO, which is currently on version 2.0.1, while you are on version 1.3.0. You don't want to move past that, because version 2.0.0 is incompatible with what you are currently doing, but there is a single bug fix in 2.0.1 you desperately need. Having the make install option lets you install the patched 1.3.0 software without having to worry about creating a package and installing it on your system.






      share|improve this answer






























          answered Nov 23 at 16:44









          Dom

          1696


























              up vote
              1
              down vote













              Linux distributions generally separate program maintenance from package maintenance. A build system that integrates package generation would force program maintainers to also perform package maintenance.



              This is usually a bad idea. Distributions have lots of infrastructure to verify internal consistency, provide binaries for multiple target platforms, perform small alterations to better integrate with the rest of the system and provide a consistent experience for users reporting bugs.



              To generate packages directly from a build system, you would have to either integrate or bypass all of this infrastructure. Integrating it would be a lot of work for questionable benefit, and bypassing it would give a worse user experience.



              This is one of the "top of the food chain" problems that are typical in multi-party systems. If you have multiple complex systems, there needs to be a clear hierarchy of which system is responsible for coordinating all others.



              In the case of software installation management, the package manager is this component, and it will run the package's build system, then take the output through a convenient interface ("files in a directory after an installation step"), generate a package and prepare it for upload to a repository.
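              That "files in a directory after an installation step" interface is simple to sketch (the package name and paths here are illustrative; real tools such as dpkg-deb add control metadata on top, but the staged tree is the same idea):

```shell
# What a package build does with an upstream makefile, in miniature:
# stage the install into an empty tree, then archive that tree.
set -e
pkgroot=$(mktemp -d)
# stand-in for:  make install DESTDIR="$pkgroot"
mkdir -p "$pkgroot/usr/local/bin"
printf '#!/bin/sh\necho demo\n' > "$pkgroot/usr/local/bin/demo"
chmod +x "$pkgroot/usr/local/bin/demo"
# the packaging tool then archives the staged tree (plus its metadata)
tar -C "$pkgroot" -cf "$pkgroot.tar" .
tar -tf "$pkgroot.tar" | grep 'usr/local/bin/demo'
```

              Because the interface is just "a directory tree", any build system with an install target can feed any packaging tool.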



              The package manager stands in the middle between the build system and the repository here, and is in the best position to integrate well with both.



              You may have noticed that only a few of the JavaScript packages available through npm are also available through apt. This is mainly because the JavaScript people decided that npm and its associated repository were going to be the top of their food chain, which made it close to impossible to ship these packages as Debian packages.



              With my Debian Developer hat on: if you release open source software, please leave the packaging to distribution maintainers. It saves both you and us a lot of work.






              share|improve this answer





















              • You've said nothing about why there's an install target, and it seems to me that most of what you've written would apply to it too...
                – curiousdannii
                Nov 24 at 11:45






              • 1




                @curiousdannii, there needs to be some interface between build system and package manager, and this happens to be the simplest one, so it won.
                – Simon Richter
                Nov 25 at 13:06















              up vote
              1
              down vote













              Linux distributions generally separate program maintenance from package maintenance. A build system that integrates package generation would force program maintainers to also perform package maintenance.



              This is usually a bad idea. Distributions have lots of infrastructure to verify internal consistency, provide binaries for multiple target platforms, perform small alterations to better integrate with the rest of the system and provide a consistent experience for users reporting bugs.



              To generate packages directly from a build system, you would have to either integrate or bypass all of this infrastructure. Integrating it would be a lot of work for questionable benefit, and bypassing it would give a worse user experience.



              This is one of the "top of the food chain" problems that are typical in multi-party systems. If you have multiple complex systems, there needs to be a clear hierarchy of which system is responsible for coordinating all others.



              In the case of software installation management, the package manager is this component, and it will run the package's build system, then take the output through a convenient interface ("files in a directory after an installation step"), generate a package and prepare it for upload to a repository.



              The package manager stands in the middle between the build system and the repository here, and is in the best position to integrate well with both.



              You may have noticed that there are only few of the JavaScript packages available through npm also available through apt — this is mainly because the JavaScript people decided that npm and the associated repository was going to be the top of their food chain, which made it close to impossible to ship these packages as Debian packages.



              With my Debian Developer hat on: if you release open source software, please leave the packaging to distribution maintainers. It saves both you and us a lot of work.






              share|improve this answer





















              • You've said nothing about why there's an install target, and it seems to me that most of what you've written would apply to it too...
                – curiousdannii
                Nov 24 at 11:45






              • 1




                @curiousdannii, there needs to be some interface between build system and package manager, and this happens to be the simplest one, so it won.
                – Simon Richter
                Nov 25 at 13:06













              up vote
              1
              down vote










              up vote
              1
              down vote









              Linux distributions generally separate program maintenance from package maintenance. A build system that integrates package generation would force program maintainers to also perform package maintenance.



              This is usually a bad idea. Distributions have lots of infrastructure to verify internal consistency, provide binaries for multiple target platforms, perform small alterations to better integrate with the rest of the system and provide a consistent experience for users reporting bugs.



              To generate packages directly from a build system, you would have to either integrate or bypass all of this infrastructure. Integrating it would be a lot of work for questionable benefit, and bypassing it would give a worse user experience.



              This is one of the "top of the food chain" problems that are typical in multi-party systems. If you have multiple complex systems, there needs to be a clear hierarchy of which system is responsible for coordinating all others.



              In the case of software installation management, the package manager is this component, and it will run the package's build system, then take the output through a convenient interface ("files in a directory after an installation step"), generate a package and prepare it for upload to a repository.



              The package manager stands in the middle between the build system and the repository here, and is in the best position to integrate well with both.



              You may have noticed that there are only few of the JavaScript packages available through npm also available through apt — this is mainly because the JavaScript people decided that npm and the associated repository was going to be the top of their food chain, which made it close to impossible to ship these packages as Debian packages.



              With my Debian Developer hat on: if you release open source software, please leave the packaging to distribution maintainers. It saves both you and us a lot of work.


















              answered Nov 23 at 17:52









              Simon Richter













              • You've said nothing about why there's an install target, and it seems to me that most of what you've written would apply to it too...
                – curiousdannii
                Nov 24 at 11:45






              • 1




                @curiousdannii, there needs to be some interface between build system and package manager, and this happens to be the simplest one, so it won.
                – Simon Richter
                Nov 25 at 13:06




























              up vote
              1
              down vote













              Well, application developers are the ones who know where each file should go. They could leave that in documentation and have package maintainers read it and write an install script for each package, but the maintainers may misinterpret the documentation and have to debug their scripts until they work. That is inefficient. It is better for the application developer, who knows the layout, to write the install script once.



              The developer could give that install script an arbitrary name, or fold it into some other script. But because there is a standard install command, make install (a convention that predates package managers), building packages has become very easy. If you look at the PKGBUILD template for making Arch Linux packages, you can see that the function that actually packages simply runs make DESTDIR="$pkgdir/" install. This works for the majority of packages, and for many more with small modifications. Thanks to make (and the autotools) being a standard, packaging is really, really easy.
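The convention the answer relies on looks roughly like this in a hand-written Makefile. This is a sketch under the GNU conventions (PREFIX as the logical install prefix, DESTDIR as a staging offset that a package manager prepends); `myprog` is a placeholder for whatever the build produces, and `install -D` assumes GNU coreutils.

```makefile
PREFIX ?= /usr/local
bindir ?= $(PREFIX)/bin

.PHONY: install uninstall

install:
	install -D -m 755 myprog $(DESTDIR)$(bindir)/myprog

uninstall:
	rm -f $(DESTDIR)$(bindir)/myprog
```

A distribution package build then runs `make DESTDIR="$pkgdir" install` and archives `$pkgdir`, exactly as the PKGBUILD template does; a user installing by hand simply omits DESTDIR.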









































                  answered Nov 24 at 2:22









                  JoL






























