Why should makefiles have an “install” target?











15 votes












Coming from the world of C and C++, most build systems have an install target, notably Makefiles (where it is recommended by GNU, for example) or CMake. This target copies the runtime files (executables, libraries, ...) into the operating system (for example, into C:\Program Files on Windows).



This feels really hacky, since in my view it is not the responsibility of the build system to install programs (that is actually the responsibility of the operating system / package manager). It also means the build system or build script must know the organization of installed programs: environment variables, registry variables, symlinks, permissions, etc.



At best, build systems should have a release target that outputs an installable package (for example a .deb or .msi), and then kindly ask the operating system to install that package. That would also allow the user to uninstall without having to type make uninstall.



So, my question: why do build systems usually recommend having an install target?
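For context, the convention under discussion usually looks something like this in a hand-written Makefile (a sketch; myprog is a placeholder, and the PREFIX/DESTDIR variables follow the usual GNU conventions):

```make
# Hypothetical program "myprog"; installation paths follow GNU conventions.
PREFIX ?= /usr/local

install: myprog
	install -d $(DESTDIR)$(PREFIX)/bin
	install -m 755 myprog $(DESTDIR)$(PREFIX)/bin/myprog

uninstall:
	rm -f $(DESTDIR)$(PREFIX)/bin/myprog

.PHONY: install uninstall
```

Typing make install with no variables set copies the binary into /usr/local/bin, which is exactly the behavior the question objects to.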










  • pmf (2 days ago, +7): You're arguing that "make install" does not fall within the responsibility of a build system, but that the much more involved and platform-specific responsibility of creating an installable package does.

  • Synxis (2 days ago): @pmf Well, I think modifying the system is well outside the build system's responsibilities... For example, there is a high risk of breaking things when a package is installed via make install but apt is not aware of it.

  • Bakuriu (2 days ago, +2): Anyway: sometimes you want to install an application that is not handled by the OS/package manager (because it has dependencies that would cause conflicts impossible to resolve using the package manager, etc.). make install usually installs under /usr/local (or even /opt), which are directories not handled by the "core OS/package management system". No idea whether Windows has a similar convention, though.

  • Mason Wheeler (2 days ago, +9): "This feels really hacky." Well, what did you expect from the world of C/C++? ;-)

  • Hagen von Eitzen (yesterday, +1): Note that make install makes no sense when we talk about cross-compiling.















build-system cmake make install

edited yesterday, asked 2 days ago by Synxis




6 Answers

















21 votes













Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.






answered 2 days ago by Jörg W Mittag
  • Synxis (2 days ago): I'm curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OSes come with a way of managing installed programs? For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages, for example).

  • Rob (2 days ago, +2): @Synxis BSD, Linux, and Unix all use makefiles. Whether it's preferred to use them for installation, I don't know, but you often have that ability using make install.

  • Aaron Hall (2 days ago, +1): In Debian at least it's preferred to use checkinstall over make install for two reasons: "You can easily remove the package with one step." and "You can install the resulting package upon multiple machines." As checkinstall builds a .deb and installs it, it uses the package manager...

  • slebetman (2 days ago, +1): @Synxis - There are several Linux distributions (often called source distros) where the package manager installs programs by downloading a tar file, decompressing it, then running make install.

  • cmaster (yesterday, +1): @AaronHall Correct me if I'm wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor its actions for package building.


















5 votes













A makefile might have no install target, and more importantly, you can have programs which are not even supposed to be installable (e.g. because they should run from their build directory, or because they can run installed anywhere). The install target is just a convention for typical Makefiles.

However, many programs require external resources at run time (for example: fonts, databases, configuration files, etc.), and their executables often make assumptions about these resources. For example, your bash shell generally reads an initialization file from /etc/bash.bashrc, etc. These resources generally live in the file system (see hier(7) for conventions about the file hierarchy), and the default file paths are built into your executable.

Try running strings(1) on most executables of your system. You'll find out which file paths are baked into them.

BTW, for many GNU programs using autoconf, you can run make install DESTDIR=/tmp/destdir/ without being root. Then /tmp/destdir/ is filled with the files that should later be packaged.
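That staging workflow can be exercised end-to-end with a toy project; this is a sketch, with hello.sh and all paths made up for illustration:

```shell
# Stage an install into a scratch root instead of the live system (no root needed).
set -e
src=$(mktemp -d)
cd "$src"
printf 'echo hello\n' > hello.sh

# Write a minimal Makefile with a conventional install target.
# (printf is used so the recipe lines get real tab characters.)
printf 'PREFIX ?= /usr/local\ninstall:\n\tmkdir -p $(DESTDIR)$(PREFIX)/bin\n\tcp hello.sh $(DESTDIR)$(PREFIX)/bin/hello\n' > Makefile

# Everything lands under the staging directory, ready to be packaged.
make install DESTDIR="$src/destdir"
ls "$src/destdir/usr/local/bin/hello"
```

A packaging tool can then archive the contents of the staging directory; nothing outside it is touched.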



FWIW, I tend to believe that my bismon (GPLv3+ licensed) program (described in my bismon-chariot-doc.pdf report) cannot be "installed"; I am not sure I can prove that, and I cannot imagine how I could make that program installable.

























  • amon (2 days ago, +2): DESTDIR or other prefixes are too often forgotten. As soon as external resources such as dynamic libraries are involved, it is not possible to build the software without knowing where it will be installed. Also great for installing to non-standard locations, e.g. /opt or into $HOME. The only way to avoid different prefixes is to use containers, but that's of course a Linux-specific solution.

  • Joshua (2 days ago, +1): I've seen more than one package that, if you tried DESTDIR=/tmp/destdir, would not work later when installed to the normal place, because DESTDIR was used in path generation.

  • Kevin (2 days ago): @amon: I'm not sure I would characterize containers as Linux-specific. Linux may be a common target platform for containerization, but some form of container technology exists in most modern operating systems.

  • Nax (yesterday): @Joshua It shouldn't; DESTDIR should only be relevant during the install step. You should be able to do ./configure --prefix="/opt/foo" && make && DESTDIR=/tmp/foo make install and then relocate the package to /opt/foo without any issue.




















2 votes













There are several reasons that come to mind.

  • Many package-creation tools (the Debian build system, for example, and IIRC rpm as well) already expect the build script to "install" the program into some special subdirectory. So it is driven by backward compatibility in both directions.

  • A user may want to install the software into a local space, such as the $HOME directory. Not all package managers support that.

  • There may still be environments which do not have packages.





























  • Synxis (2 days ago): I reworded the question a bit; I meant program manager when I said that makefiles should create installable packages.


















1 vote













One reason not mentioned: there are many times when you are not using the current version of the software, or are using a modified version of it. Trying to create a custom package is not only more work, it can also conflict with currently created and distributed packages. In open source this happens a lot, especially when breaking changes are introduced in versions later than the one you are using.

Let's say you're using the open source project FOO, which is currently on version 2.0.1, while you are using version 1.3.0. You don't want to use anything above that because version 2.0.0 is incompatible with what you are currently doing, but there is a single bug fix in 2.0.1 you desperately need. Having the make install option lets you install the modified 1.3.0 software without having to worry about creating a package and installing it on your system.


































1 vote













    Linux distributions generally separate program maintenance from package maintenance. A build system that integrates package generation would force program maintainers to also perform package maintenance.



    This is usually a bad idea. Distributions have lots of infrastructure to verify internal consistency, provide binaries for multiple target platforms, perform small alterations to better integrate with the rest of the system and provide a consistent experience for users reporting bugs.



    To generate packages directly from a build system, you would have to either integrate or bypass all of this infrastructure. Integrating it would be a lot of work for questionable benefit, and bypassing it would give a worse user experience.



    This is one of the "top of the food chain" problems that are typical in multi-party systems. If you have multiple complex systems, there needs to be a clear hierarchy of which system is responsible for coordinating all others.



    In the case of software installation management, the package manager is this component, and it will run the package's build system, then take the output through a convenient interface ("files in a directory after an installation step"), generate a package and prepare it for upload to a repository.



    The package manager stands in the middle between the build system and the repository here, and is in the best position to integrate well with both.



    You may have noticed that only a few of the JavaScript packages available through npm are also available through apt; this is mainly because the JavaScript people decided that npm and the associated repository were going to be the top of their food chain, which made it close to impossible to ship these packages as Debian packages.



    With my Debian Developer hat on: if you release open source software, please leave the packaging to distribution maintainers. It saves both you and us a lot of work.



























    • curiousdannii (yesterday): You've said nothing about why there's an install target, and it seems to me that most of what you've written would apply to it too...

    • Simon Richter (17 hours ago, +1): @curiousdannii, there needs to be some interface between the build system and the package manager, and this happens to be the simplest one, so it won.


















1 vote













    Well, application developers are the ones who know where each file should go. They could leave that in documentation, and have package maintainers read it and build a script for each package. Maybe the package maintainers will misinterpret the documentation and will have to debug the script until it works. This is inefficient. It's better for the application developer to write a script that properly installs the application they've written.

    They could write an install script with an arbitrary name, or make it part of some other script. However, because there is a standard install command, make install (a convention that predates package managers), it has become really easy to make packages. If you look at the PKGBUILD template for making Arch Linux packages, you can see that the function that actually packages simply does a make DESTDIR="$pkgdir/" install. This probably works for the majority of packages, and for more with a little modification. Thanks to make (and the autotools) being standard, packaging is really, really easy.
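As an illustration of that template, a minimal PKGBUILD's packaging step might look roughly like this (a sketch: foo, its version, and its tarball are placeholders, not a real project):

```shell
# Minimal PKGBUILD sketch (Arch Linux). All fields are illustrative.
pkgname=foo
pkgver=1.3.0
pkgrel=1
pkgdesc="Placeholder package"
arch=('x86_64')
source=("foo-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "foo-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  cd "foo-$pkgver"
  # The project's own install target does the work; DESTDIR redirects it
  # into makepkg's staging directory instead of the live system.
  make DESTDIR="$pkgdir/" install
}
```

makepkg then archives the staged tree under $pkgdir into a package that pacman can install and, later, cleanly remove.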














    – JoL (new contributor)



















      6 Answers
      6






      active

      oldest

      votes








      6 Answers
      6






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      21
      down vote













      Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.






      share|improve this answer





















      • I'm curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OS come with a way of managing the installed programs ? For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages for examples)
        – Synxis
        2 days ago






      • 2




        @Synxis BSD, Linux, Unix all use makefiles. Whether it's preferred to use them for installation, I don't know, but you often have that ability using make install.
        – Rob
        2 days ago






      • 1




        In debian at least it's preferred to use checkinstall over make install for two reasons: "You can easily remove the package with one step." and "You can install the resulting package upon multiple machines." - as checkinstall builds a .deb and installs it, it uses the package manager...
        – Aaron Hall
        2 days ago








      • 1




        @Synxis - There are several linux distributions (often called source distros) where the package manager install programs by downloading a tar file, decompress it then run make install
        – slebetman
        2 days ago






      • 1




        @AaronHall Correct me if I'm wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor it's actions for package building.
        – cmaster
        yesterday















      up vote
      21
      down vote













      Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.






      share|improve this answer





















      • I'm curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OS come with a way of managing the installed programs ? For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages for examples)
        – Synxis
        2 days ago






      • 2




        @Synxis BSD, Linux, Unix all use makefiles. Whether it's preferred to use them for installation, I don't know, but you often have that ability using make install.
        – Rob
        2 days ago






      • 1




        In debian at least it's preferred to use checkinstall over make install for two reasons: "You can easily remove the package with one step." and "You can install the resulting package upon multiple machines." - as checkinstall builds a .deb and installs it, it uses the package manager...
        – Aaron Hall
        2 days ago








      • 1




        @Synxis - There are several linux distributions (often called source distros) where the package manager install programs by downloading a tar file, decompress it then run make install
        – slebetman
        2 days ago






      • 1




        @AaronHall Correct me if I'm wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor it's actions for package building.
        – cmaster
        yesterday













      up vote
      21
      down vote










      up vote
      21
      down vote









      Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.






      share|improve this answer












      Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.







      share|improve this answer












      share|improve this answer



      share|improve this answer










      answered 2 days ago









      Jörg W Mittag

      66.7k14138220




      66.7k14138220












      • I'm curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OS come with a way of managing the installed programs ? For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages for examples)
        – Synxis
        2 days ago






      • 2




        @Synxis BSD, Linux, Unix all use makefiles. Whether it's preferred to use them for installation, I don't know, but you often have that ability using make install.
        – Rob
        2 days ago






      • 1




        In debian at least it's preferred to use checkinstall over make install for two reasons: "You can easily remove the package with one step." and "You can install the resulting package upon multiple machines." - as checkinstall builds a .deb and installs it, it uses the package manager...
        – Aaron Hall
        2 days ago








      • 1




        @Synxis - There are several Linux distributions (often called source distros) where the package manager installs programs by downloading a tar file, decompressing it, then running make install
        – slebetman
        2 days ago






      • 1




        @AaronHall Correct me if I'm wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor its actions for package building.
        – cmaster
        yesterday






























      up vote
      5
      down vote













      A makefile might have no install target, and more importantly, you can have programs which are not even supposed to be installable (e.g. because they should run from their build directory, or because they can run installed anywhere). The install target is just a convention for typical makefiles.



      However, many programs require external resources to run (for example: fonts, databases, configuration files, etc.), and their executables often make assumptions about these resources. For example, your bash shell generally reads an initialization file from /etc/bash.bashrc, etc. These resources generally live in the file system (see hier(7) for conventions about the file hierarchy), and the default file path is built into the executable.



      Try using strings(1) on most executables of your system. You'll find out which file paths are known to them.



      BTW, for many GNU programs using autoconf, you can run make install DESTDIR=/tmp/destdir/ without being root. Then /tmp/destdir/ is filled with the files that should later be packaged.



      FWIW, I tend to believe that my bismon (GPLv3+ licensed) program (described in my bismon-chariot-doc.pdf report) cannot be "installed"; I am not sure I could prove that, and I cannot imagine how I could make that program installable.
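The DESTDIR mechanism mentioned above can be sketched with a tiny, self-contained example. The project, file names and paths below are hypothetical, purely for illustration:

```shell
# A throwaway project whose install target honors PREFIX and DESTDIR.
mkdir -p /tmp/destdir-demo && cd /tmp/destdir-demo
printf '#!/bin/sh\necho hello\n' > hello.sh
printf 'PREFIX ?= /usr/local\n\ninstall:\n\tmkdir -p $(DESTDIR)$(PREFIX)/bin\n\tcp hello.sh $(DESTDIR)$(PREFIX)/bin/hello\n\tchmod 755 $(DESTDIR)$(PREFIX)/bin/hello\n' > Makefile

# Stage the installation under /tmp/destdir-demo/stage instead of /usr/local;
# no root needed, and the staged tree is what a packager would archive.
make install DESTDIR=/tmp/destdir-demo/stage
ls stage/usr/local/bin/hello
```

Package builders rely on exactly this convention: they run the install target with DESTDIR pointing into a scratch directory, then turn that staged tree into a .deb or .rpm.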






      share|improve this answer



















      • 2




        DESTDIR or other prefixes are too often forgotten. As soon as external resources such as dynamic libraries are involved, it is not possible to build the software without knowing where it will be installed. Also great for installing to non-standard locations, e.g. /opt or into $HOME. The only way to avoid different prefixes is to use containers, but that's of course a Linux-specific solution.
        – amon






      • 1




        I've seen more than one package where, if you tried DESTDIR=/tmp/destdir, it would not work later when installed to its normal place, because DESTDIR was used in path generation.
        – Joshua
        2 days ago










      • @amon: I'm not sure I would characterize containers as Linux-specific. Linux may be a common target platform for containerization, but some form of container technology exists in most modern operating systems.
        – Kevin
        2 days ago










      • @Joshua It shouldn't, DESTDIR should only be relevant during the install step. You should be able to do: ./configure --prefix="/opt/foo" && make && DESTDIR=/tmp/foo make install and be able to relocate the package to /opt/foo without any issue.
        – Nax
        yesterday

























      edited 2 days ago

























      answered 2 days ago









      Basile Starynkevitch

      27.1k56098
























      up vote
      2
      down vote













      There are several reasons which come to mind.




      • Many packaging tools (the Debian build system, for example, and IIRC rpm as well) already expect the build script to "install" the program into some special subdirectory. So it is driven by backward compatibility in both directions.

      • A user may want to install the software to a local location, such as under the $HOME directory. Not all package managers support that.

      • There may still be environments which have no packages at all.
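The $HOME case from the list above is just a matter of overriding the install prefix; a minimal hypothetical sketch (project and names are invented for illustration):

```shell
# Same idea as a system-wide 'make install', but aimed at the user's home
# directory, so no root access and no package manager is involved.
mkdir -p /tmp/home-demo && cd /tmp/home-demo
printf '#!/bin/sh\necho hi\n' > hi.sh
printf 'PREFIX ?= /usr/local\n\ninstall:\n\tmkdir -p $(DESTDIR)$(PREFIX)/bin\n\tcp hi.sh $(DESTDIR)$(PREFIX)/bin/hi\n\tchmod 755 $(DESTDIR)$(PREFIX)/bin/hi\n' > Makefile

# Override the prefix on the command line; everything lands under ~/.local.
make install PREFIX="$HOME/.local"
"$HOME/.local/bin/hi"
```

With autoconf-based projects the equivalent is typically `./configure --prefix="$HOME/.local" && make && make install`.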






      share|improve this answer























      • I reworded the question a bit, I meant program manager when I said that makefiles should create installable packages.
        – Synxis
        2 days ago























      edited 2 days ago









      Peter Mortensen

      1,11621114














      answered 2 days ago









      max630

      1,100411


























      up vote
      1
      down vote













      One reason not mentioned is that there are many times when you are not using the current version of the software, or are using a modified version of it. Creating a custom package is not only more work, it can also conflict with officially created and distributed packages. With open source code this happens a lot, especially when breaking changes are introduced in versions later than the one you are using.



      Let's say the open source project FOO is currently at version 2.0.1, but you are using version 1.3.0. You don't want to upgrade because version 2.0.0 is incompatible with what you are currently doing, but there is a single bug fix in 2.0.1 you desperately need. Having the make install option lets you build and install the patched 1.3.0 software without having to worry about creating a package and installing it on your system.






      share|improve this answer



































          answered 2 days ago









          Dom

          1696


























              up vote
              1
              down vote













              Linux distributions generally separate program maintenance from package maintenance. A build system that integrates package generation would force program maintainers to also perform package maintenance.



              This is usually a bad idea. Distributions have lots of infrastructure to verify internal consistency, provide binaries for multiple target platforms, perform small alterations to better integrate with the rest of the system and provide a consistent experience for users reporting bugs.



              To generate packages directly from a build system, you would have to either integrate or bypass all of this infrastructure. Integrating it would be a lot of work for questionable benefit, and bypassing it would give a worse user experience.



              This is one of the "top of the food chain" problems that are typical in multi-party systems. If you have multiple complex systems, there needs to be a clear hierarchy of which system is responsible for coordinating all others.



              In the case of software installation management, the package manager is this component, and it will run the package's build system, then take the output through a convenient interface ("files in a directory after an installation step"), generate a package and prepare it for upload to a repository.



              The package manager stands in the middle between the build system and the repository here, and is in the best position to integrate well with both.



              You may have noticed that only a few of the JavaScript packages available through npm are also available through apt; this is mainly because the JavaScript people decided that npm and the associated repository was going to be the top of their food chain, which made it close to impossible to ship these packages as Debian packages.



              With my Debian Developer hat on: if you release open source software, please leave the packaging to distribution maintainers. It saves both you and us a lot of work.






              share|improve this answer





















              • You've said nothing about why there's an install target, and it seems to me that most of what you've written would apply to it too...
                – curiousdannii
                yesterday






              • 1




                @curiousdannii, there needs to be some interface between build system and package manager, and this happens to be the simplest one, so it won.
                – Simon Richter
                17 hours ago















              up vote
              1
              down vote













              Linux distributions generally separate program maintenance from package maintenance. A build system that integrates package generation would force program maintainers to also perform package maintenance.



              This is usually a bad idea. Distributions have lots of infrastructure to verify internal consistency, provide binaries for multiple target platforms, perform small alterations to better integrate with the rest of the system and provide a consistent experience for users reporting bugs.



              To generate packages directly from a build system, you would have to either integrate or bypass all of this infrastructure. Integrating it would be a lot of work for questionable benefit, and bypassing it would give a worse user experience.



              This is one of the "top of the food chain" problems that are typical in multi-party systems. If you have multiple complex systems, there needs to be a clear hierarchy of which system is responsible for coordinating all others.



              In the case of software installation management, the package manager is this component, and it will run the package's build system, then take the output through a convenient interface ("files in a directory after an installation step"), generate a package and prepare it for upload to a repository.



              The package manager stands in the middle between the build system and the repository here, and is in the best position to integrate well with both.



              You may have noticed that there are only few of the JavaScript packages available through npm also available through apt — this is mainly because the JavaScript people decided that npm and the associated repository was going to be the top of their food chain, which made it close to impossible to ship these packages as Debian packages.



              With my Debian Developer hat on: if you release open source software, please leave the packaging to distribution maintainers. It saves both you and us a lot of work.






              share|improve this answer





















              • You've said nothing about why there's an install target, and it seems to me that most of what you've written would apply to it too...
                – curiousdannii
                yesterday






              • 1




                @curiousdannii, there needs to be some interface between build system and package manager, and this happens to be the simplest one, so it won.
                – Simon Richter
                17 hours ago













              answered 2 days ago by Simon Richter













              Well, application developers are the ones that know where each file should go. They could leave that in documentation, and have package maintainers read that and build a script for each package. Maybe the package maintainers will misinterpret the documentation and will have to debug the script until it works. This is inefficient. It's better for the application developer to write a script to properly install the application he's written.



              He could write an install script with an arbitrary name, or make it part of some other script's procedure. However, because there is a standard install command, make install (a convention that predates package managers), making packages has become really easy. If you look at the PKGBUILD template for making Arch Linux packages, you can see that the function that actually packages simply runs make DESTDIR="$pkgdir/" install. This works as-is for the majority of packages, and for many more with a little modification. Thanks to make (and the autotools) being standard, packaging is really easy.
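              For illustration, the relevant functions of such a PKGBUILD reduce to a few lines. This is a sketch of the packaging step only, not a complete PKGBUILD; the configure flags and source layout are assumptions:

```sh
# Fragment of an Arch Linux PKGBUILD; makepkg calls these functions.
# $pkgdir is the staging directory that becomes the package's file tree.
build() {
  cd "$srcdir/$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  cd "$srcdir/$pkgname-$pkgver"
  make DESTDIR="$pkgdir/" install
}
```

              The package manager never touches the live system here: the build system installs into the staging directory, and makepkg archives that directory into a package.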


















              answered 2 days ago by JoL