PXE boot of 18.04 ISO
Previously, I've set up PXE booting of the Ubuntu LiveCDs by extracting the ISO to an NFS mount and copying vmlinuz.efi and initrd.gz from casper to the tftpboot directory with some iPXE scripting magic.
This worked flawlessly for 16.04, 16.10, and 17.10 (Artful).
With 18.04, I first find that vmlinuz.efi no longer exists in casper, though vmlinuz does, so I try again after renaming things accordingly...
It still doesn't complete booting: I end up in "emergency mode". Typing journalctl -xb (as suggested by the emergency-mode prompt) and browsing the log leads to the following:
Unit sys-fs-fuse-connections has begun starting up.
ubuntu systemd[1]: Failed to set up mount unit: Device or resource busy
ubuntu systemd[1]: Failed to set up mount unit: Device or resource busy
sys-kernel-config.mount: Mount process finished, but there is no mount.
sys-kernel-config.mount: Failed with result 'protocol'.
Failed to mount Kernel Configuration File System.
Help!
Added 2018-04-30:
Script used to extract the ISO for the PXE/NFS mount (TARGET is set to the image name, e.g. bionic):
set -e
# Look for bionic.iso as the ISO I am going to extract.
TARGET=invalid.iso
[ -f bionic.iso ] && TARGET=bionic
echo TARGET=$TARGET
# Clean out any previous copies
sudo rm -rf /var/nfs/$TARGET/*
sudo rm -rf /tmp/$TARGET
# Mount the ISO on a temporary directory
mkdir /tmp/$TARGET
sudo mount -o loop ~/$TARGET.iso /tmp/$TARGET
# Recreate the NFS directory and copy the ISO contents into it
sudo rm -rf /var/nfs/$TARGET
sudo mkdir /var/nfs/$TARGET
sudo rsync -avH /tmp/$TARGET/ /var/nfs/$TARGET
# I've not had luck with iPXE changing filesystems to find
# vmlinuz, vmlinuz.efi, or initrd.gz... so I copy those files
# specifically to the tftp directory structure so the boot loader
# can load them.
sudo rm -rf /var/lib/tftpboot/$TARGET
sudo mkdir /var/lib/tftpboot/$TARGET
sudo cp /tmp/$TARGET/casper/vmlinuz* /var/lib/tftpboot/$TARGET/.
sudo cp /tmp/$TARGET/casper/initrd.lz /var/lib/tftpboot/$TARGET/.
# Cleanup: unmount the ISO and remove the temp directory
sudo umount /tmp/$TARGET/
sudo rm -rf /tmp/$TARGET/
echo Done.
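For the NFS boot to work, the extracted tree also has to be exported over NFS. A minimal sketch of the export, assuming the /var/nfs/bionic path used above and a 192.168.1.0/24 client subnet (both are placeholders; adjust to your environment):
# /etc/exports
/var/nfs/bionic 192.168.1.0/24(ro,async,no_subtree_check,no_root_squash)
Then re-read the exports with sudo exportfs -ra.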
Tags: boot, mount, uefi, 18.04, pxe
asked Apr 28 at 1:05 by Joe Marley (edited Apr 30 at 14:00)
Was this a "clean" install, meaning the drive the kernel is on was freshly formatted? Or is it alongside/on top of another OS? – Jonathan, Apr 28 at 1:56
The target machines in question have no hard drive, and are loading the 18.04 desktop LiveCD via network boot. There is no previous configuration. Imagine a group of machines that, instead of using USB keys or CDs to boot the live CD, boot it over the network with iPXE. – Joe Marley, Apr 29 at 12:18
3 Answers
Accepted answer (score 6)
I worked around this issue in iPXE by following the advice of "Woodrow Shen" over at the Launchpad bug tracker.
Basically, I adapted our old entry for Ubuntu 16.04.3:
:deployUbuntu-x64-16.04.3
set server_ip 123.123.123.123
set nfs_path /opt/nfs-exports/ubuntu-x64-16.04.3
kernel nfs://$server_ip$nfs_path/casper/vmlinuz.efi || read void
initrd nfs://$server_ip$nfs_path/casper/initrd.lz || read void
imgargs vmlinuz.efi initrd=initrd.lz root=/dev/nfs boot=casper netboot=nfs nfsroot=$server_ip:$nfs_path ip=dhcp splash quiet -- || read void
boot || read void
so that it looks like this for Ubuntu 18.04:
:deployUbuntu-x64-18.04
set server_ip 123.123.123.123
set nfs_path /opt/nfs-exports/ubuntu-x64-18.04
kernel nfs://$server_ip$nfs_path/casper/vmlinuz || read void
initrd nfs://$server_ip$nfs_path/casper/initrd.lz || read void
imgargs vmlinuz initrd=initrd.lz root=/dev/nfs boot=casper netboot=nfs nfsroot=$server_ip:$nfs_path ip=dhcp splash quiet toram -- || read void
boot || read void
Note the following changes:
- rename vmlinuz.efi to vmlinuz on lines 4 and 6
- add the toram option to line 6
- obviously, change the nfs_path to match the location of the newly extracted ISO
Note that, as pointed out on Launchpad, the toram option requires additional RAM; in my testing, I needed to ensure my virtual machines had 4 GB of RAM allocated.
This also works for both our EFI and legacy BIOS systems.
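For context, :deployUbuntu-x64-18.04 is just an iPXE label to jump to; a hypothetical menu wrapper that selects it might look like the following (menu text and label names are illustrative, not part of the original answer):
#!ipxe
# present a menu and jump to whichever label was chosen
menu PXE boot menu
item deployUbuntu-x64-18.04 Ubuntu 18.04 live (NFS)
item shell iPXE shell
choose target && goto ${target} || goto shell
:shell
shell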
answered May 30 at 9:50 by DrGecko
Thank you DrGecko - the toram option worked for me with Mint 19! – Brian Sidebotham, Aug 1 at 11:40
This also works for Lubuntu 18.04.1 (LTS), which is exactly what I was needing. Thank you! – Joe Marley, Aug 9 at 16:35
After the weekend, I found a reported bug describing my exact symptoms (and providing an interactive workaround).
https://bugs.launchpad.net/ubuntu/+source/casper/+bug/1755863
Apparently I'll be waiting on 18.04.1. At least I now know I'm not (entirely) crazy!
answered Apr 30 at 14:10 by Joe Marley
For Ubuntu 14.04 and 16.04, I simply loop-mounted the full server DVD ISO so it was accessible via a web server, and set up PXE boot in the usual way (copied the kernel and initrd to the TFTP directory, set the DHCP next-server option, created the PXE menu, etc.).
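As a rough sketch of that 14.04/16.04-era setup (the ISO name, web root, and TFTP path here are illustrative, not taken from the answer):
# loop-mount the server ISO under the web root so the installer can fetch packages over HTTP
sudo mkdir -p /var/www/html/ubuntu-16.04
sudo mount -o loop,ro ubuntu-16.04-server-amd64.iso /var/www/html/ubuntu-16.04
# copy the netboot kernel and initrd where the TFTP daemon can serve them
sudo cp /var/www/html/ubuntu-16.04/install/netboot/ubuntu-installer/amd64/linux /var/lib/tftpboot/
sudo cp /var/www/html/ubuntu-16.04/install/netboot/ubuntu-installer/amd64/initrd.gz /var/lib/tftpboot/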
We have a kickstart process to fully automate the deployment of nodes.
This simply doesn't work with 18.04: there is no kernel in the install directory, and no install/netboot/ubuntu-installer/amd64 directory at all! So I tried the kernel and initrd from the casper directory, but that's useless too. I then grabbed the netinstall ISO and used the kernel and initrd from that. It actually fires up the text installer, but it insists the mirror is missing a file, even though the log from my HTTP server isn't showing any 404s!
Overall then, I feel the Ubuntu 18.04 server ISO is a retrograde step for people wanting to do automated installs.
I also tried adding this to the kickstart:
preseed live-installer/net-image string http://myreposerver/ubuntu-18.04-live-server-amd64/casper/filesystem.squashfs
which is somewhat like what I had to do to make Ubuntu 14.04 PXE boot automatable.
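For reference, a kickstart like that is usually handed to the installer on the kernel command line at PXE time; a pxelinux entry for the pre-18.04 setup might look roughly like this (server name and paths are placeholders):
LABEL ubuntu-auto
  KERNEL ubuntu-installer/amd64/linux
  APPEND initrd=ubuntu-installer/amd64/initrd.gz ks=http://myreposerver/ks.cfg
Ubuntu's debian-installer picks the kickstart up from the ks= parameter.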
answered May 2 at 10:44 by Paul M (edited May 2 at 10:54)