Install Ubuntu 18.04 desktop with RAID 1 and LVM on machine with UEFI BIOS











I have a machine with a UEFI BIOS. I want to install Ubuntu 18.04, desktop version, with RAID 1 (and LVM) so my system will continue to work even if one of the drives fails. I haven't found a HOWTO describing how to do that.
The desktop installer does not support RAID. The answer to this question almost works, but requires some GRUB shell/rescue USB disk and UEFI settings magic. Is anyone aware of a procedure that works without the magic parts?










Tags: 18.04 uefi raid lvm

asked Aug 16 at 20:46 by Niclas Börlin


          2 Answers









          Accepted answer (score 3)










          With some help from How to install Ubuntu server with UEFI and RAID1 + LVM, RAID set up in Ubuntu 18.04, and RAID support in Ubuntu 18.04 Desktop installer? and How to get rid of the "scanning for btrfs file systems" at start-up?, I managed to put together a working HOWTO using Linux commands only.



          In short




          1. Download the alternate server installer.

          2. Install with manual partitioning, EFI + RAID and LVM on RAID partition.

          3. Clone EFI partition from installed partition to the other drive.

          4. Install second EFI partition into UEFI boot chain.

          5. To avoid a lengthy wait during boot in case a drive breaks, remove the btrfs boot scripts.


          In detail



          1. Download the installer




          • Download the alternate server installer from http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/

          • Create a bootable CD or USB and boot the new machine from it.

          • Select Install Ubuntu Server.


          2. Install with manual partitioning




          • During install, at the Partition disks step, select Manual.

          • If the disks contain any partitions, remove them.


            • If any logical volumes are present on your drives, select Configure the Logical Volume Manager.


              • Choose Delete logical volume until all volumes have been deleted.

              • Choose Delete volume group until all volume groups have been deleted.



            • If any RAID device is present, select Configure software RAID.


              • Choose Delete MD device until all MD devices have been deleted.



            • Delete every partition on the physical drives by choosing them and selecting Delete the partition.



          • Create physical partitions


            • On each drive, create a 512MB partition (I've seen others use 128MB) at the beginning of the disk, Use as: EFI System Partition.

            • On each drive, create a second partition with 'max' size, Use as: Physical Volume for RAID.



          • Set up RAID


            • Select Configure software RAID.

            • Select Create MD device, type RAID1, 2 active disks, 0 spare disks, and select the /dev/sda2 and /dev/sdb2 devices.



          • Set up LVM


            • Select Configure the Logical Volume Manager.

            • Create volume group vg on the /dev/md0 device.

            • Create logical volumes, e.g.



              • swap at 16G


              • root at 35G


              • tmp at 10G


              • var at 5G


              • home at 200G





          • Set up how to use the logical partitions


            • For the swap partition, select Use as: swap.

            • For the other partitions, select Use as: ext4 with the proper mount points (/, /tmp, /var, /home, respectively).



          • Select Finish partitioning and write changes to disk.

          • Allow the installation program to finish and reboot.


          3. Inspect system





          • Check which EFI partition has been mounted. Most likely /dev/sda1.



            mount | grep boot




          • Check RAID status. Most likely it is synchronizing.



            cat /proc/mdstat




          4. Clone EFI partition



          The EFI bootloader should have been installed on /dev/sda1. As that partition is not mirrored via the RAID system, we need to clone it.



          sudo dd if=/dev/sda1 of=/dev/sdb1
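          Before the raw dd, it can be worth confirming that both ESPs are exactly the same size, since a raw clone onto a smaller partition would truncate it. A minimal sketch, assuming the /dev/sda1 and /dev/sdb1 names used above; the destructive part is shown commented out:

```shell
# Sketch only: /dev/sda1 and /dev/sdb1 are the ESP names assumed above.
same_size() {          # print "yes" if the two byte counts match, else "no"
    [ "$1" = "$2" ] && echo yes || echo no
}

# Run the clone only once the sizes match:
#   src_bytes=$(sudo blockdev --getsize64 /dev/sda1)
#   dst_bytes=$(sudo blockdev --getsize64 /dev/sdb1)
#   [ "$(same_size "$src_bytes" "$dst_bytes")" = yes ] && \
#       sudo dd if=/dev/sda1 of=/dev/sdb1 bs=1M
```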


          5. Insert second drive into boot chain



          This step may not be necessary, since if either drive dies, the system should boot from the (identical) EFI partitions. However, it seems prudent to ensure that we can boot from either disk.




          • Run efibootmgr -v and note the file name for the ubuntu boot entry. On my install it was \EFI\ubuntu\shimx64.efi.

          • Run sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'.

          • Now the system should boot even if either of the drives fails!
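          Re-running the efibootmgr command above adds a duplicate entry each time, so a small guard keeps the step idempotent. A sketch; the optional second argument (a saved copy of efibootmgr output, for testing without firmware access) is my addition:

```shell
# Sketch: $1 is a boot entry label; $2 optionally points at a saved
# copy of `efibootmgr` output instead of querying the firmware.
has_boot_entry() {
    { [ -n "${2:-}" ] && cat "$2" || efibootmgr; } | grep -q "$1"
}

# Only create the second entry if it is not already present:
#   has_boot_entry ubuntu2 || \
#       sudo efibootmgr -c -d /dev/sdb -p 1 -L ubuntu2 -l '\EFI\ubuntu\shimx64.efi'
```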


          7. Wait



          If you want to try removing/disabling a drive, you must first wait until the RAID synchronization has finished! Monitor the progress with cat /proc/mdstat. However, you may perform step 8 below while waiting.
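          The "wait until synchronized" condition can be checked mechanically: /proc/mdstat shows [UU] once both halves of the mirror are up. A sketch; the optional file argument is mine, for testing without a live array, and the failure drill is commented out because it is destructive:

```shell
# Sketch: print "ok" when every RAID-1 member is up ([UU] in mdstat),
# "degraded" otherwise. $1 optionally overrides /proc/mdstat for testing.
raid_healthy() {
    grep -q '\[UU\]' "${1:-/proc/mdstat}" && echo ok || echo degraded
}

# A failure drill, only once raid_healthy prints ok:
#   sudo mdadm /dev/md0 --fail   /dev/sdb2   # mark one half failed
#   sudo mdadm /dev/md0 --remove /dev/sdb2   # array now runs degraded on sda2
#   sudo mdadm /dev/md0 --add    /dev/sdb2   # re-add; resync begins
```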



          8. Remove BTRFS



          If one drive fails (after the synchronization is complete), the system will still boot. However, the boot sequence will spend a lot of time looking for btrfs file systems. To remove that unnecessary wait, run



          sudo apt-get purge btrfs-progs


          This should remove btrfs-progs, btrfs-tools, and ubuntu-server. The last package is just a metapackage, so if no further packages are listed for removal, you should be OK.



          9. Install the desktop version



          Run sudo apt install ubuntu-desktop to install the desktop version. After that, the synchronization is probably done and your system is configured and should survive a disk failure!



          10. Update EFI partition after grub-efi-amd64 update



          When the package grub-efi-amd64 is updated, the files on the EFI partition (mounted at /boot/efi) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that grub-efi-amd64 is about to be updated, so you don't have to check after every update.



          10.1 Find out clone source, quick way



          If you haven't rebooted after the update, use



          mount | grep boot


          to find out what EFI partition is mounted. That partition, typically /dev/sdb1, should be used as the clone source.



          10.2 Find out clone source, paranoid way



          Create mount points and mount both partitions:



          sudo mkdir /tmp/sda1 /tmp/sdb1
          sudo mount /dev/sda1 /tmp/sda1
          sudo mount /dev/sdb1 /tmp/sdb1


          Find timestamp of newest file in each tree



          sudo find /tmp/sda1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sda1
          sudo find /tmp/sdb1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sdb1


          Compare timestamps



          cat /tmp/newest.sd* | sort | tail -n 1 | perl -ne 'm,/tmp/(sd[ab]1)/, && print "/dev/$1 is newest.\n"'


          Should print /dev/sdb1 is newest (most likely) or /dev/sda1 is newest. That partition should be used as the clone source.



          Unmount the partitions before the cloning to avoid cache/partition inconsistency.



          sudo umount /tmp/sda1 /tmp/sdb1


          10.3 Clone



          If /dev/sdb1 was the clone source:



          sudo dd if=/dev/sdb1 of=/dev/sda1


          If /dev/sda1 was the clone source:



          sudo dd if=/dev/sda1 of=/dev/sdb1


          Done!
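          Steps 10.1–10.3 can be folded into one helper that maps the currently mounted ESP to its twin. A sketch under the same two-disk sda1/sdb1 assumption used throughout; the dd itself is commented out because it is destructive:

```shell
# Sketch: given one ESP device, print its mirror twin (sda1 <-> sdb1).
esp_sibling() {
    case "$1" in
        /dev/sda1) echo /dev/sdb1 ;;
        /dev/sdb1) echo /dev/sda1 ;;
        *) return 1 ;;           # unexpected device name
    esac
}

# After a grub-efi-amd64 update:
#   src=$(findmnt -n -o SOURCE /boot/efi)            # currently mounted ESP
#   sudo dd if="$src" of="$(esp_sibling "$src")" bs=1M
```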



          11. Virtual machine gotchas



          If you want to try this out in a virtual machine first, there are some caveats: apparently, the NVRAM that holds the UEFI information is remembered between reboots, but not between shutdown/restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from /dev/sda1 (use FS1: for /dev/sdb1):



          FS0:
          \EFI\ubuntu\grubx64.efi


          The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.





























          • How would you go about using LUKS, for an encrypted mirror set/RAID 1, avoiding encryption happening twice (ex. LUKS sitting under mdadm, so that IO happens twice, but encryption itself happens only once, this is actually not happening with some setups, such as those recommended for ZFS, where volumes are encrypted twice, once per device, effectively duplicating the cost of the encryption side of things). I haven't been able to find recent instructions on this setup.
            – soze
            Sep 18 at 3:40










            @soze, unfortunately I have no experience with encrypted Linux partitions. I would do some trial-and-error in a virtual machine to find out. NB: I added a section above about virtual machine gotchas.
            – Niclas Börlin
            Sep 18 at 7:52


















          Answer (score 0)













          RAID-1 + XFS + UEFI



          I was able to get about 99% of the way there with @Niclas Börlin's answer, thank you!



          I also drew help from the following answers:




          • Ubuntu 17.04 will not boot on UEFI system with XFS system partition

          • How to install Ubuntu server with UEFI and RAID1 + LVM


          Here are the ways I messed things up




          1. Having the BIOS in "Auto" mode, which allowed the USB key to be booted NOT in UEFI mode. This caused GRUB not to be installed correctly. I switched the mode to UEFI-only, rebooted, deleted all the logical volumes, RAID groups, and partitions, and started over. I had also tried to re-install GRUB on the EFI partitions, which only made things worse.

          2. Having the /boot partition on XFS. The GRUB 2 that comes with Ubuntu 18.04 LTS apparently does not handle this, although that is not documented anywhere. I created a separate ext4 /boot partition. Note that this still lives on the RAID-1 LVM volume, not on separate partitions like the EFI ones! Lots of older answers say this isn't possible, but it seems to be now. I ended up getting GRUB but with "unknown filesystem" errors (e.g. How to fix "error: unknown filesystem. grub rescue>"), which gave me the clue that XFS on /boot was a no-go.

          3. Somewhere in the middle of that I ended up with GRUB installed but a blank grub prompt and no grub menu (e.g. https://help.ubuntu.com/community/Grub2/Troubleshooting#Specific_Troubleshooting). This was due to /boot not being accessible.


          What worked for me



          Start with @Niclas Börlin's answer and change a few minor things.



          Partition Table



          I favor one large / partition, so this reflects that choice. The main change is an EXT4 /boot partition instead of an XFS one.



          sda/
          GPT 1M (auto-added)
          sda1 - EFI - 512M
          sda2 - MD0 - 3.5G

          sdb/
          GPT 1M (auto-added)
          sdb1 - EFI - 512M
          sdb2 - MD0 - 3.5G

          md0/
          vg/
          boot - 1G - EXT4 /boot
          swap - 16G - SWAP
          root - rest - XFS /


          After the completed install I was able to dd the contents of sda1 to sdb1 as detailed in the other answer. I also was able to add the second drive to the boot chain using efibootmgr as detailed.
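          The layout above can be approximated non-interactively. A rough sketch; disk names and sizes are assumptions taken from the table, and the destructive commands are shown commented out. The small lv_plan helper just prints the lvcreate calls so the volume plan can be reviewed before anything runs:

```shell
# Sketch (assumed names/sizes; all commented commands are destructive):
#   sudo parted -s /dev/sda mklabel gpt \
#        mkpart ESP fat32 1MiB 513MiB set 1 esp on \
#        mkpart raid 513MiB 100%
#   sudo parted -s /dev/sdb mklabel gpt \
#        mkpart ESP fat32 1MiB 513MiB set 1 esp on \
#        mkpart raid 513MiB 100%
#   sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
#   sudo pvcreate /dev/md0
#   sudo vgcreate vg /dev/md0

# Print one lvcreate command per "name:size" argument for review:
lv_plan() {
    for spec in "$@"; do
        echo "sudo lvcreate -L ${spec#*:} -n ${spec%%:*} vg"
    done
}
lv_plan boot:1G swap:16G     # root would take the rest via -l 100%FREE
```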






          share|improve this answer





















            Your Answer








            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "89"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2faskubuntu.com%2fquestions%2f1066028%2finstall-ubuntu-18-04-desktop-with-raid-1-and-lvm-on-machine-with-uefi-bios%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            2 Answers
            2






            active

            oldest

            votes








            2 Answers
            2






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes








            up vote
            3
            down vote



            accepted










            With some help from How to install Ubuntu server with UEFI and RAID1 + LVM, RAID set up in Ubuntu 18.04, and RAID support in Ubuntu 18.04 Desktop installer? and How to get rid of the "scanning for btrfs file systems" at start-up?, I managed to put together a working HOWTO using linux commands only.



            In short




            1. Download the alternate server installer.

            2. Install with manual partitioning, EFI + RAID and LVM on RAID partition.

            3. Clone EFI partition from installed partition to the other drive.

            4. Install second EFI partition into UEFI boot chain.

            5. To avoid a lengthy wait during boot in case a drive breaks, remove the btrfs boot scripts.


            In detail



            1. Download the installer




            • Download the alternate server installer from http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/

            • Create a bootable CD or USB and boot the new machine from it.

            • Select Install Ubuntu Server.


            2. Install with manual partitioning




            • During install, at the Partition disks step, select Manual.

            • If the disks contain any partitions, remove them.


              • If any logical volumes are present on your drives, select Configure the Logical Volume Manager.


                • Choose Delete logical volume until all volumes have been deleted.

                • Choose Delete volume group until all volume groups have been deleted.



              • If any RAID device is present, select Configure software RAID.


                • Choose Delete MD device until all MD devices have been deleted.



              • Delete every partition on the physical drives by choosing them and selecting Delete the partition.



            • Create physical partitions


              • On each drive, create a 512MB partition (I've seen others use 128MB) at the beginning of the disk, Use as: EFI System Partition.

              • On each drive, create a second partition with 'max' size, Use as: Physical Volume for RAID.



            • Set up RAID


              • Select Configure software RAID.

              • Select Create MD device, type RAID1, 2 active disks, 0 spare disks, and select the /dev/sda2 and /dev/sdb2 devices.



            • Set up LVM


              • Select Configure the Logical Volume Manager.

              • Create volume group vg on the /dev/md0 device.

              • Create logical volumes, e.g.



                • swap at 16G


                • root at 35G


                • tmp at 10G


                • var at 5G


                • home at 200G





            • Set up how to use the logical partitions


              • For the swap partition, select Use as: swap.

              • For the other partitions, select Use as: ext4 with the proper mount points (/, /tmp, /var, /home, respectively).



            • Select Finish partitioning and write changes to disk.

            • Allow the installation program to finish and reboot.


            3. Inspect system





            • Check which EFI partition has been mounted. Most likely /dev/sda1.



              mount | grep boot




            • Check RAID status. Most likely it is synchronizing.



              cat /proc/mdstat




            4. Clone EFI partition



            The EFI bootloaded should have been installed on /dev/sda1. As that partition is not mirrored via the RAID system, we need to clone it.



            sudo dd if=/dev/sda1 of=/dev/sdb1


            5. Insert second drive into boot chain



            This step may not be necessary, since if either drive dies, the system should boot from the (identical) EFI partitions. However, it seems prudent to ensure that we can boot from either disk.




            • Run efibootmgr -v and notice the file name for the ubuntu boot entry. On my install it was EFIubuntushimx64.efi.

            • Run sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l EFIubuntushimx64.efi.

            • Now the system should boot even if either of the drives fail!


            7. Wait



            If you want to try to remove/disable any drive, you must first wait until the RAID synchronization has finished! Monitor the progress with cat /proc/mdstat However, you may perform step 8 below while waiting.



            8. Remove BTRFS



            If one drive fails (after the synchronization is complete), the system will still boot. However, the boot sequence will spend a lot of time looking for btrfs file systems. To remove that unnecessary wait, run



            sudo apt-get purge btrfs-progs


            This should remove btrfs-progs, btrfs-tools and ubuntu-server. The last package is just a meta package, so if no more packages are listed for removal, you should be ok.



            9. Install the desktop version



            Run sudo apt install ubuntu-desktop to install the desktop version. After that, the synchronization is probably done and your system is configured and should survive a disk failure!



            10. Update EFI partition after grub-efi-amd64 update



            When the package grub-efi-amd64 is updated, the files on the EFI partition (mounted at /boot/efi) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that grub-efi-amd64 is about to be updated, so you don't have to check after every update.



            10.1 Find out clone source, quick way



            If you haven't rebooted after the update, use



            mount | grep boot


            to find out what EFI partition is mounted. That partition, typically /dev/sdb1, should be used as the clone source.



            10.2 Find out clone source, paranoid way



            Create mount points and mount both partitions:



            sudo mkdir /tmp/sda1 /tmp/sdb1
            sudo mount /dev/sda1 /tmp/sda1
            sudo mount /dev/sdb1 /tmp/sdb1


            Find timestamp of newest file in each tree



            sudo find /tmp/sda1 -type f -printf '%T+ %pn' | sort | tail -n 1 > /tmp/newest.sda1
            sudo find /tmp/sdb1 -type f -printf '%T+ %pn' | sort | tail -n 1 > /tmp/newest.sdb1


            Compare timestamps



            cat /tmp/newest.sd* | sort | tail -n 1 | perl -ne 'm,/tmp/(sd[ab]1)/, && print "/dev/$1 is newest.n"'


            Should print /dev/sdb1 is newest (most likely) or /dev/sda1 is newest. That partition should be used as the clone source.



            Unmount the partitions before the cloning to avoid cache/partition inconsistency.



            sudo umount /tmp/sda1 /tmp/sdb1


            10.3 Clone



            If /dev/sdb1 was the clone source:



            sudo dd if=/dev/sdb1 of=/dev/sda1


            If /dev/sda1 was the clone source:



            sudo dd if=/dev/sda1 of=/dev/sdb1


            Done!



            11. Virtual machine gotchas



            If you want to try this out in a virtual machine first, there are some caveats: Apparently, the NVRAM that holds the UEFI information is remembered between reboots, but not between shutdown-restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from /dev/sda1 (use FS1: for /dev/sdb1):



            FS0:
            EFIubuntugrubx64.efi


            The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.






            share|improve this answer























            • How would you go about using LUKS, for an encrypted mirror set/RAID 1, avoiding encryption happening twice (ex. LUKS sitting under mdadm, so that IO happens twice, but encryption itself happens only once, this is actually not happening with some setups, such as those recommended for ZFS, where volumes are encrypted twice, once per device, effectively duplicating the cost of the encryption side of things). I haven't been able to find recent instructions on this setup.
              – soze
              Sep 18 at 3:40






            • 1




              @soze, unfortunately I have no experience with encrypted Linux partitions. I would do some trial-and-error in a virtual machine to find out. NB: I added a section above about virtual machine gotchas.
              – Niclas Börlin
              Sep 18 at 7:52















            up vote
            3
            down vote



            accepted










            With some help from How to install Ubuntu server with UEFI and RAID1 + LVM, RAID set up in Ubuntu 18.04, and RAID support in Ubuntu 18.04 Desktop installer? and How to get rid of the "scanning for btrfs file systems" at start-up?, I managed to put together a working HOWTO using linux commands only.



            In short




            1. Download the alternate server installer.

            2. Install with manual partitioning, EFI + RAID and LVM on RAID partition.

            3. Clone EFI partition from installed partition to the other drive.

            4. Install second EFI partition into UEFI boot chain.

            5. To avoid a lengthy wait during boot in case a drive breaks, remove the btrfs boot scripts.


            In detail



            1. Download the installer




            • Download the alternate server installer from http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/

            • Create a bootable CD or USB and boot the new machine from it.

            • Select Install Ubuntu Server.


            2. Install with manual partitioning




            • During install, at the Partition disks step, select Manual.

            • If the disks contain any partitions, remove them.


              • If any logical volumes are present on your drives, select Configure the Logical Volume Manager.


                • Choose Delete logical volume until all volumes have been deleted.

                • Choose Delete volume group until all volume groups have been deleted.



              • If any RAID device is present, select Configure software RAID.


                • Choose Delete MD device until all MD devices have been deleted.



              • Delete every partition on the physical drives by choosing them and selecting Delete the partition.



            • Create physical partitions


              • On each drive, create a 512MB partition (I've seen others use 128MB) at the beginning of the disk, Use as: EFI System Partition.

              • On each drive, create a second partition with 'max' size, Use as: Physical Volume for RAID.



            • Set up RAID


              • Select Configure software RAID.

              • Select Create MD device, type RAID1, 2 active disks, 0 spare disks, and select the /dev/sda2 and /dev/sdb2 devices.



            • Set up LVM


              • Select Configure the Logical Volume Manager.

              • Create volume group vg on the /dev/md0 device.

              • Create logical volumes, e.g.



                • swap at 16G


                • root at 35G


                • tmp at 10G


                • var at 5G


                • home at 200G





            • Set up how to use the logical partitions


              • For the swap partition, select Use as: swap.

              • For the other partitions, select Use as: ext4 with the proper mount points (/, /tmp, /var, /home, respectively).



            • Select Finish partitioning and write changes to disk.

            • Allow the installation program to finish and reboot.


            3. Inspect system





            • Check which EFI partition has been mounted. Most likely /dev/sda1.



              mount | grep boot




            • Check RAID status. Most likely it is synchronizing.



              cat /proc/mdstat




            4. Clone EFI partition



            The EFI bootloaded should have been installed on /dev/sda1. As that partition is not mirrored via the RAID system, we need to clone it.



            sudo dd if=/dev/sda1 of=/dev/sdb1


            5. Insert second drive into boot chain



            This step may not be necessary, since if either drive dies, the system should boot from the (identical) EFI partitions. However, it seems prudent to ensure that we can boot from either disk.




            • Run efibootmgr -v and notice the file name for the ubuntu boot entry. On my install it was EFIubuntushimx64.efi.

            • Run sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l EFIubuntushimx64.efi.

            • Now the system should boot even if either of the drives fail!


            7. Wait



            If you want to try to remove/disable any drive, you must first wait until the RAID synchronization has finished! Monitor the progress with cat /proc/mdstat However, you may perform step 8 below while waiting.



            8. Remove BTRFS



            If one drive fails (after the synchronization is complete), the system will still boot. However, the boot sequence will spend a lot of time looking for btrfs file systems. To remove that unnecessary wait, run



            sudo apt-get purge btrfs-progs


            This should remove btrfs-progs, btrfs-tools and ubuntu-server. The last package is just a meta package, so if no more packages are listed for removal, you should be ok.



            9. Install the desktop version



            Run sudo apt install ubuntu-desktop to install the desktop version. After that, the synchronization is probably done and your system is configured and should survive a disk failure!



            10. Update EFI partition after grub-efi-amd64 update



            up vote
            3
            down vote



            accepted













            With some help from How to install Ubuntu server with UEFI and RAID1 + LVM, RAID set up in Ubuntu 18.04, RAID support in Ubuntu 18.04 Desktop installer?, and How to get rid of the "scanning for btrfs file systems" at start-up?, I managed to put together a working HOWTO using Linux commands only.



            In short




            1. Download the alternate server installer.

            2. Install with manual partitioning, EFI + RAID and LVM on RAID partition.

            3. Clone EFI partition from installed partition to the other drive.

            4. Install second EFI partition into UEFI boot chain.

            5. To avoid a lengthy wait during boot in case a drive breaks, remove the btrfs boot scripts.


            In detail



            1. Download the installer




            • Download the alternate server installer from http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/

            • Create a bootable CD or USB and boot the new machine from it.

            • Select Install Ubuntu Server.


            2. Install with manual partitioning




            • During install, at the Partition disks step, select Manual.

            • If the disks contain any partitions, remove them.


              • If any logical volumes are present on your drives, select Configure the Logical Volume Manager.


                • Choose Delete logical volume until all volumes have been deleted.

                • Choose Delete volume group until all volume groups have been deleted.



              • If any RAID device is present, select Configure software RAID.


                • Choose Delete MD device until all MD devices have been deleted.



              • Delete every partition on the physical drives by choosing them and selecting Delete the partition.



            • Create physical partitions


              • On each drive, create a 512MB partition (I've seen others use 128MB) at the beginning of the disk, Use as: EFI System Partition.

              • On each drive, create a second partition with 'max' size, Use as: Physical Volume for RAID.



            • Set up RAID


              • Select Configure software RAID.

              • Select Create MD device, type RAID1, 2 active disks, 0 spare disks, and select the /dev/sda2 and /dev/sdb2 devices.



            • Set up LVM


              • Select Configure the Logical Volume Manager.

              • Create volume group vg on the /dev/md0 device.

              • Create logical volumes, e.g.



                • swap at 16G


                • root at 35G


                • tmp at 10G


                • var at 5G


                • home at 200G





            • Set up how to use the logical volumes


              • For the swap volume, select Use as: swap.

              • For the other volumes, select Use as: ext4 with the proper mount points (/, /tmp, /var, /home, respectively).



            • Select Finish partitioning and write changes to disk.

            • Allow the installation program to finish and reboot.


            3. Inspect system





            • Check which EFI partition has been mounted. Most likely /dev/sda1.



              mount | grep boot




            • Check RAID status. Most likely it is synchronizing.



              cat /proc/mdstat
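            If you'd rather extract the resync progress programmatically, a short pipeline does it. The /proc/mdstat sample below is fabricated for illustration (array name, block counts, and speed are made up) so the pipeline can be tried anywhere; on the live system, read /proc/mdstat itself.

```shell
# Pull the resync progress out of /proc/mdstat. The here-doc sample below
# is fabricated so this can run on any machine; on the real system, read
# /proc/mdstat directly instead of $sample.
sample='Personalities : [raid1]
md0 : active raid1 sdb2[1] sda2[0]
      976630464 blocks super 1.2 [2/2] [UU]
      [==>..................]  resync = 12.6% (123279936/976630464) finish=81.9min speed=173344K/sec'

# grep -o keeps only the matching fragment of the resync line
progress=$(printf '%s\n' "$sample" | grep -o 'resync = [0-9.]*%')
echo "$progress"   # prints: resync = 12.6%
```

            On the live system, watch -n 10 cat /proc/mdstat gives a self-updating view of the same information.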




            4. Clone EFI partition



            The EFI bootloader should have been installed on /dev/sda1. As that partition is not mirrored via the RAID system, we need to clone it.



            sudo dd if=/dev/sda1 of=/dev/sdb1
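            If you want to rehearse the clone-and-verify sequence before touching real partitions, the same commands work on scratch files. The file names here are throwaways created by mktemp; on the real machine, substitute /dev/sda1 and /dev/sdb1 (with both ESPs unmounted).

```shell
# Rehearsal of the clone step on scratch files instead of real partitions.
src=$(mktemp) dst=$(mktemp)
head -c 65536 /dev/urandom > "$src"   # stand-in for the source ESP contents

dd if="$src" of="$dst" status=none    # the clone itself
cmp "$src" "$dst" && echo "clone verified"   # prints: clone verified
```

            cmp also works directly on the block devices afterwards: sudo cmp /dev/sda1 /dev/sdb1 should report no differences once the clone has completed.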


            5. Insert second drive into boot chain



            This step may not be necessary, since if either drive dies, the system should boot from the (identical) EFI partitions. However, it seems prudent to ensure that we can boot from either disk.




            • Run efibootmgr -v and notice the file name for the ubuntu boot entry. On my install it was \EFI\ubuntu\shimx64.efi.

            • Run sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi' (quote the loader path so the shell does not eat the backslashes).

            • Now the system should boot even if either of the drives fails!
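            If you'd rather script the lookup of the loader path than eyeball it, something like the following works. The efibootmgr output below is a fabricated sample (the HD(...) device-path fields are invented); on the real machine, pipe the output of efibootmgr -v through the same sed instead.

```shell
# Extract the loader path from (a fabricated sample of) efibootmgr -v
# output. On a real machine, replace $sample with `efibootmgr -v` output.
sample='BootCurrent: 0000
BootOrder: 0000,0001
Boot0000* ubuntu  HD(1,GPT,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)
Boot0001* ubuntu2  HD(1,GPT,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)'

# Keep only what is between File( and the closing ), first entry only.
loader=$(printf '%s\n' "$sample" | sed -n 's/.*File(\(.*\))$/\1/p' | head -n 1)
printf '%s\n' "$loader"   # prints: \EFI\ubuntu\shimx64.efi
```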


            7. Wait



            If you want to try to remove/disable any drive, you must first wait until the RAID synchronization has finished! Monitor the progress with cat /proc/mdstat. However, you may perform step 8 below while waiting.



            8. Remove BTRFS



            If one drive fails (after the synchronization is complete), the system will still boot. However, the boot sequence will spend a lot of time looking for btrfs file systems. To remove that unnecessary wait, run



            sudo apt-get purge btrfs-progs


            This should remove btrfs-progs, btrfs-tools and ubuntu-server. The last package is just a meta package, so if no more packages are listed for removal, you should be ok.



            9. Install the desktop version



            Run sudo apt install ubuntu-desktop to install the desktop version. After that, the synchronization is probably done and your system is configured and should survive a disk failure!



            10. Update EFI partition after grub-efi-amd64 update



            When the package grub-efi-amd64 is updated, the files on the EFI partition (mounted at /boot/efi) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that grub-efi-amd64 is about to be updated, so you don't have to check after every update.



            10.1 Find out clone source, quick way



            If you haven't rebooted after the update, use



            mount | grep boot


            to find out what EFI partition is mounted. That partition, typically /dev/sdb1, should be used as the clone source.



            10.2 Find out clone source, paranoid way



            Create mount points and mount both partitions:



            sudo mkdir /tmp/sda1 /tmp/sdb1
            sudo mount /dev/sda1 /tmp/sda1
            sudo mount /dev/sdb1 /tmp/sdb1


            Find timestamp of newest file in each tree



            sudo find /tmp/sda1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sda1
            sudo find /tmp/sdb1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sdb1


            Compare timestamps



            cat /tmp/newest.sd* | sort | tail -n 1 | perl -ne 'm,/tmp/(sd[ab]1)/, && print "/dev/$1 is newest.\n"'


            This should print /dev/sdb1 is newest (most likely) or /dev/sda1 is newest. That partition should be used as the clone source.



            Unmount the partitions before the cloning to avoid cache/partition inconsistency.



            sudo umount /tmp/sda1 /tmp/sdb1
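            The find | sort | tail pipeline above can be sanity-checked on two throwaway directory trees before pointing it at the real ESPs; the file name and dates below are invented for the demo.

```shell
# Rehearse the newest-file comparison on two temporary trees.
a=$(mktemp -d) b=$(mktemp -d)
touch -d '2018-01-01' "$a/grubx64.efi"
touch -d '2018-06-01' "$b/grubx64.efi"   # pretend this tree got the update

# Newest file per tree, then newest overall, exactly as in the HOWTO.
newest_a=$(find "$a" -type f -printf '%T+ %p\n' | sort | tail -n 1)
newest_b=$(find "$b" -type f -printf '%T+ %p\n' | sort | tail -n 1)
winner=$(printf '%s\n%s\n' "$newest_a" "$newest_b" | sort | tail -n 1)
echo "$winner"   # the line for the tree holding the newer file
```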


            10.3 Clone



            If /dev/sdb1 was the clone source:



            sudo dd if=/dev/sdb1 of=/dev/sda1


            If /dev/sda1 was the clone source:



            sudo dd if=/dev/sda1 of=/dev/sdb1


            Done!



            11. Virtual machine gotchas



            If you want to try this out in a virtual machine first, there are some caveats: Apparently, the NVRAM that holds the UEFI information is remembered between reboots, but not between shutdown-restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from /dev/sda1 (use FS1: for /dev/sdb1):



            FS0:
            \EFI\ubuntu\grubx64.efi


            The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.




















            edited Sep 21 at 5:29

























            answered Aug 16 at 21:32









            Niclas Börlin

            8961616
















            • How would you go about using LUKS, for an encrypted mirror set/RAID 1, avoiding encryption happening twice (ex. LUKS sitting under mdadm, so that IO happens twice, but encryption itself happens only once, this is actually not happening with some setups, such as those recommended for ZFS, where volumes are encrypted twice, once per device, effectively duplicating the cost of the encryption side of things). I haven't been able to find recent instructions on this setup.
              – soze
              Sep 18 at 3:40






            • 1




              @soze, unfortunately I have no experience with encrypted Linux partitions. I would do some trial-and-error in a virtual machine to find out. NB: I added a section above about virtual machine gotchas.
              – Niclas Börlin
              Sep 18 at 7:52






























            up vote
            0
            down vote













            RAID-1 + XFS + UEFI



            I was able to get about 99% of the way there with @Niclas Börlin's answer, thank you!



            I also drew help from the following answers:




            • Ubuntu 17.04 will not boot on UEFI system with XFS system partition

            • How to install Ubuntu server with UEFI and RAID1 + LVM


            Here are the ways I messed things up




            1. Having the BIOS in "Auto" mode, which allowed the USB key to boot NOT in UEFI mode. This caused GRUB not to be installed correctly. I switched the mode to UEFI-only, rebooted, deleted all the logical volumes, RAID groups, and partitions, and started over. I had also tried to re-install GRUB on the EFI partitions, which only made things worse.

            2. Having the /boot partition on XFS. The GRUB 2 that comes with Ubuntu 18.04 LTS apparently does not handle this, although that is not documented anywhere. I created a separate ext4 /boot partition. Note that this is still on the RAID-1 LVM volume, not on separate partitions like the EFI ones! Lots of older answers say this isn't possible, but it seems to be now. I ended up getting GRUB but with unknown file system errors (e.g. How to fix "error: unknown filesystem. grub rescue>"), which gave me the clue that XFS on /boot was a no-go.

            3. Somewhere in the middle of that, I ended up with GRUB installed but only a blank grub prompt, no GRUB menu (e.g. https://help.ubuntu.com/community/Grub2/Troubleshooting#Specific_Troubleshooting). This was due to /boot not being accessible.


            What worked for me



            Start with @Niclas Börlin's answer and change a few minor things.



            Partition Table



            I favor one large / partition, so this reflects that choice. The main change is an ext4 /boot partition instead of an XFS one.



            sda/
            GPT 1M (auto-added)
            sda1 - EFI - 512M
            sda2 - MD0 - 3.5G

            sdb/
            GPT 1M (auto-added)
            sdb1 - EFI - 512M
            sdb2 - MD0 - 3.5G

            md0/
            vg/
            boot - 1G - EXT4 /boot
            swap - 16G - SWAP
            root - rest - XFS /


            After the completed install I was able to dd the contents of sda1 to sdb1 as detailed in the other answer. I also was able to add the second drive to the boot chain using efibootmgr as detailed.































                answered Oct 24 at 6:00









                maxslug

                262



