-rw-r--r--  .gitignore       12
-rw-r--r--  README.md        75
-rwxr-xr-x  mkosi          1979
-rw-r--r--  mkosi.default    22
-rwxr-xr-x  setup.py          5
5 files changed, 1641 insertions, 452 deletions
diff --git a/.gitignore b/.gitignore
index a241340..2bc1175 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,14 +1,18 @@
-/SHA256SUM
-/SHA256SUM.gpg
+*.cache-pre-dev
+*.cache-pre-inst
+/.mkosi-*
+/SHA256SUMS
+/SHA256SUMS.gpg
/__pycache__
-/mkosi.egg-info
/build
/dist
/image
/image.raw
/image.raw.xz
+/image.roothash
/image.tar.xz
/mkosi.build
-/mkosi.default
+/mkosi.cache
+/mkosi.egg-info
/mkosi.extra
/mkosi.nspawn
diff --git a/README.md b/README.md
index 41c146b..c959f49 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
# mkosi - Create legacy-free OS images
-A fancy wrapper around `dnf --installroot`, `debootstrap` and
-`pacstrap`, that may generate disk images with a number of
+A fancy wrapper around `dnf --installroot`, `debootstrap`,
+`pacstrap` and `zypper` that may generate disk images with a number of
bells and whistles.
# Supported output formats
@@ -12,6 +12,8 @@ The following output formats are supported:
* Raw *GPT* disk image, with btrfs as root (*raw_btrfs*)
+* Raw *GPT* disk image, with squashfs as read-only root (*raw_squashfs*)
+
* Plain directory, containing the *OS* tree (*directory*)
* btrfs subvolume, with separate subvolumes for `/var`, `/home`,
@@ -28,6 +30,12 @@ options are available:
* Separate partitions for `/srv` and `/home` may be added in
+* The root, /srv and /home partitions may optionally be encrypted with
+ LUKS.
+
+* A dm-verity partition may be added that provides runtime integrity
+ data for the root partition.
+
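The dm-verity integrity data mentioned above is generated by `veritysetup format`, which prints a root hash that later identifies the hash tree. A minimal sketch of extracting that hash from the tool's textual output (the exact output format is an assumption here, and real parsing may need to be more robust):

```python
def parse_verity_root_hash(output):
    # scan `veritysetup format` output for the "Root hash:" line
    # (assumed output format of the veritysetup CLI)
    for line in output.splitlines():
        if line.startswith("Root hash:"):
            return line.split(":", 1)[1].strip()
    return None
```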
# Compatibility
Generated images are *legacy-free*. This means only *GPT* disk
@@ -35,9 +43,6 @@ labels (and no *MBR* disk labels) are supported, and only
systemd based images may be generated. Moreover, for bootable
images only *EFI* systems are supported (not plain *MBR/BIOS*).
-Currently, the *EFI* boot loader does not support *SecureBoot*,
-and hence cannot generate signed *SecureBoot* images.
-
All generated *GPT* disk images may be booted in a local
container directly with:
@@ -67,7 +72,7 @@ systemd-nspawn -bD image
# Other features
-* Optionally, create an *SHA256SUM* checksum file for the result,
+* Optionally, create an *SHA256SUMS* checksum file for the result,
possibly even signed via gpg.
* Optionally, place a specific `.nspawn` settings file along
@@ -105,15 +110,18 @@ following *OS*es.
* *Arch Linux* (incomplete)
+* *openSUSE*
+
In theory, any distribution may be used on the host for
building images containing any other distribution, as long as
the necessary tools are available. Specifically, any distro
that packages `debootstrap` may be used to build *Debian* or
*Ubuntu* images. Any distro that packages `dnf` may be used to
build *Fedora* images. Any distro that packages `pacstrap` may
-be used to build *Arch Linux* images.
+be used to build *Arch Linux* images. Any distro that packages
+`zypper` may be used to build *openSUSE* images.
-Currently, *Fedora* packages all three tools.
+Currently, *Fedora* packages the first three tools.
# Files
@@ -132,6 +140,9 @@ they exist in the local directory:
hence little more than a way to make sure simply typing
`mkosi` without further parameters in your *source* tree is
enough to get the right image of your choice set up.
+ Additionally, if a `mkosi.default.d` directory exists, each file in it
+ is loaded in the same manner, adding to or overriding the values
+ specified in `mkosi.default`.
* `mkosi.extra` may be a directory. If this exists all files
contained in it are copied over the directory tree of the
@@ -158,12 +169,39 @@ they exist in the local directory:
build script copied in. However, this time the contents of
`$DESTDIR` is added into the image.
+* `mkosi.postinst` may be an executable script. If it exists it is
+ invoked as last step of preparing an image, from within the image
+ context. It is called once for the *development* image (if this is
+ enabled, see above) with the "build" command line parameter, right
+ before invoking the build script. It is called a second time for the
+ *final* image with the "final" command line parameter, right before
+ the image is considered complete. This script may be used to alter
+ the images without any restrictions, after all software packages and
+ built sources have been installed. Note that this script is executed
+ directly in the image context with the final root directory in
+ place, without any `$SRCDIR`/`$DESTDIR` setup.
+
* `mkosi.nspawn` may be an nspawn settings file. If this exists
it will be copied into the same place as the output image
file. This is useful since nspawn looks for settings files
next to image files it boots, for additional container
runtime settings.
+* `mkosi.cache` may be a directory. If so, it is automatically used as
+ package download cache, in order to speed up repeated runs of the tool.
+
+* `mkosi.passphrase` may be a passphrase file to use when LUKS
+ encryption is selected. It should contain the passphrase literally,
+ and not end in a newline character (i.e. in the same format as
+ cryptsetup and /etc/crypttab expect the passphrase files). The file
+ must have an access mode of 0600 or less. If this file does not
+ exist and encryption is requested the user is queried instead.
+
+* `mkosi.secure-boot.crt` and `mkosi.secure-boot.key` may contain an
+ X509 certificate and PEM private key to use when UEFI SecureBoot
+ support is enabled. All EFI binaries included in the image's ESP are
+ signed with this key, as a late step in the build process.
+
All these files are optional.
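The 0600-or-stricter requirement on `mkosi.passphrase` described above can be enforced with a few lines of `stat`; a hedged sketch of such a check (not mkosi's actual implementation):

```python
import os
import stat
import sys

def check_passphrase_mode(path):
    # reject passphrase files readable or writable by group/others
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        sys.exit("Permissions of '{}' are too open ({:o}), expected 0600 or stricter.".format(path, mode))
    return mode
```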
Note that the location of all these files may also be
@@ -192,7 +230,7 @@ Create and run a *Fedora* image into a plain directory:
```bash
# mkosi -d fedora -t directory -o quux
-# systemd-nspawn -b quux
+# systemd-nspawn -b -D quux
```
Create a compressed tar ball `image.raw.xz` and add a checksum
@@ -223,7 +261,7 @@ BuildPackages=make gcc libcurl-devel
EOF
# cat > mkosi.build <<EOF
#!/bin/sh
-cd $SRCDIR <<EOF
+cd $SRCDIR
./autogen.sh
./configure --prefix=/usr
make -j `nproc`
@@ -239,8 +277,21 @@ EOF
mkosi is packaged for various distributions: Debian, Ubuntu, Arch (in AUR), Fedora.
It is usually easiest to use the distribution package.
-When not using distribution packages, for example, on *Fedora* you need:
+When not using distribution packages make sure to install the
+necessary dependencies. For example, on *Fedora* you need:
```bash
-dnf install python3 debootstrap arch-install-scripts xz btrfs-progs dosfstools edk2-ovmf
+dnf install arch-install-scripts btrfs-progs debootstrap dosfstools edk2-ovmf squashfs-tools gnupg python3 tar veritysetup xz
+```
+
+Note that the minimum required Python version is 3.5.
+
+If SecureBoot signing is to be used, the "sbsign" tool needs to be
+installed as well. It is currently not packaged in the official Fedora
+repositories, but is available from a COPR repository:
+
+```bash
+
+dnf copr enable msekleta/sbsigntool
+dnf install sbsigntool
```
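Dependency checks like the `dnf install` line above can be automated on any host distribution; a small sketch using `shutil.which` (the tool list is illustrative, not mkosi's authoritative set):

```python
import shutil

# illustrative subset of the host tools mkosi may invoke
TOOLS = ["sfdisk", "mkfs.ext4", "mkfs.fat", "mksquashfs", "veritysetup", "xz"]

def missing_tools(tools=TOOLS):
    # return the tools that cannot be found on $PATH
    return [t for t in tools if shutil.which(t) is None]
```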
diff --git a/mkosi b/mkosi
index 1c74c5f..3529b53 100755
--- a/mkosi
+++ b/mkosi
@@ -5,60 +5,90 @@ import configparser
import contextlib
import ctypes, ctypes.util
import crypt
+import getpass
import hashlib
import os
import platform
import shutil
+import stat
import subprocess
import sys
import tempfile
import time
+import urllib.request
import uuid
+
from enum import Enum
__version__ = '1'
+if sys.version_info < (3, 5):
+ sys.exit("Sorry, we need at least Python 3.5.")
+
# TODO
-# - squashfs root
# - volatile images
-# - make debian/ubuntu images bootable
+# - make ubuntu images bootable
# - work on device nodes
# - allow passing env vars
-# - rework cache management to use mkosi.cache by default in the project dir
+
+def die(message, status=1):
+ assert status >= 1 and status < 128
+ sys.stderr.write(message + "\n")
+ sys.exit(status)
class OutputFormat(Enum):
raw_gpt = 1
raw_btrfs = 2
- directory = 3
- subvolume = 4
- tar = 5
+ raw_squashfs = 3
+ directory = 4
+ subvolume = 5
+ tar = 6
class Distribution(Enum):
fedora = 1
debian = 2
ubuntu = 3
arch = 4
-
-GPT_ROOT_X86 = uuid.UUID("44479540f29741b29af7d131d5f0458a")
-GPT_ROOT_X86_64 = uuid.UUID("4f68bce3e8cd4db196e7fbcaf984b709")
-GPT_ROOT_ARM = uuid.UUID("69dad7102ce44e3cb16c21a1d49abed3")
-GPT_ROOT_ARM_64 = uuid.UUID("b921b0451df041c3af444c6f280d3fae")
-GPT_ROOT_IA64 = uuid.UUID("993d8d3df80e4225855a9daf8ed7ea97")
-GPT_ESP = uuid.UUID("c12a7328f81f11d2ba4b00a0c93ec93b")
-GPT_SWAP = uuid.UUID("0657fd6da4ab43c484e50933c84b4f4f")
-GPT_HOME = uuid.UUID("933ac7e12eb44f13b8440e14e2aef915")
-GPT_SRV = uuid.UUID("3b8f842520e04f3b907f1a25a76f98e8")
+ opensuse = 5
+
+GPT_ROOT_X86 = uuid.UUID("44479540f29741b29af7d131d5f0458a")
+GPT_ROOT_X86_64 = uuid.UUID("4f68bce3e8cd4db196e7fbcaf984b709")
+GPT_ROOT_ARM = uuid.UUID("69dad7102ce44e3cb16c21a1d49abed3")
+GPT_ROOT_ARM_64 = uuid.UUID("b921b0451df041c3af444c6f280d3fae")
+GPT_ROOT_IA64 = uuid.UUID("993d8d3df80e4225855a9daf8ed7ea97")
+GPT_ESP = uuid.UUID("c12a7328f81f11d2ba4b00a0c93ec93b")
+GPT_SWAP = uuid.UUID("0657fd6da4ab43c484e50933c84b4f4f")
+GPT_HOME = uuid.UUID("933ac7e12eb44f13b8440e14e2aef915")
+GPT_SRV = uuid.UUID("3b8f842520e04f3b907f1a25a76f98e8")
+GPT_ROOT_X86_VERITY = uuid.UUID("d13c5d3bb5d1422ab29f9454fdc89d76")
+GPT_ROOT_X86_64_VERITY = uuid.UUID("2c7357edebd246d9aec123d437ec2bf5")
+GPT_ROOT_ARM_VERITY = uuid.UUID("7386cdf2203c47a9a498f2ecce45a2d6")
+GPT_ROOT_ARM_64_VERITY = uuid.UUID("df3300ced69f4c92978c9bfb0f38d820")
+GPT_ROOT_IA64_VERITY = uuid.UUID("86ed10d5b60745bb8957d350f23d0571")
if platform.machine() == "x86_64":
GPT_ROOT_NATIVE = GPT_ROOT_X86_64
+ GPT_ROOT_NATIVE_VERITY = GPT_ROOT_X86_64_VERITY
elif platform.machine() == "aarch64":
GPT_ROOT_NATIVE = GPT_ROOT_ARM_64
+ GPT_ROOT_NATIVE_VERITY = GPT_ROOT_ARM_64_VERITY
else:
- sys.stderr.write("Don't known the %s architecture.\n" % platform.machine())
- sys.exit(1)
+ die("Don't know the %s architecture." % platform.machine())
CLONE_NEWNS = 0x00020000
+FEDORA_KEYS_MAP = {
+ "23": "34EC9CBA",
+ "24": "81B46521",
+ "25": "FDB19C98",
+ "26": "64DAB85D",
+}
+
+# 1 MB at the beginning of the disk for the GPT disk label, and
+# another MB at the end (this is actually more than needed).
+GPT_HEADER_SIZE = 1024*1024
+GPT_FOOTER_SIZE = 1024*1024
+
def unshare(flags):
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
@@ -66,19 +96,37 @@ def unshare(flags):
e = ctypes.get_errno()
raise OSError(e, os.strerror(e))
-def init_namespace(args):
- print_step("Detaching namespace...")
-
- args.original_umask = os.umask(0o000)
- unshare(CLONE_NEWNS)
+def format_bytes(bytes):
+ if bytes >= 1024*1024*1024:
+ return "{:0.1f}G".format(bytes / 1024**3)
+ if bytes >= 1024*1024:
+ return "{:0.1f}M".format(bytes / 1024**2)
+ if bytes >= 1024:
+ return "{:0.1f}K".format(bytes / 1024)
- subprocess.run(["mount", "--make-rslave", "/"], check=True)
+ return "{}B".format(bytes)
- print_step("Detaching namespace complete.")
+def roundup512(x):
+ return (x + 511) & ~511
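The two helpers added above are self-contained and can be exercised standalone; reproduced here with illustrative comments:

```python
def format_bytes(n):
    # human-readable size with one decimal place, binary units
    if n >= 1024**3:
        return "{:0.1f}G".format(n / 1024**3)
    if n >= 1024**2:
        return "{:0.1f}M".format(n / 1024**2)
    if n >= 1024:
        return "{:0.1f}K".format(n / 1024)
    return "{}B".format(n)

def roundup512(x):
    # round up to the next 512-byte sector boundary; works because 512
    # is 2**9, so adding 511 and masking the low 9 bits clears the rest
    return (x + 511) & ~511
```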
def print_step(text):
sys.stderr.write("‣ \033[0;1;39m" + text + "\033[0m\n")
+@contextlib.contextmanager
+def complete_step(text, text2=None):
+ print_step(text + '...')
+ args = []
+ yield args
+ if text2 is None:
+ text2 = text + ' complete'
+ print_step(text2.format(*args) + '.')
+
+@complete_step('Detaching namespace')
+def init_namespace(args):
+ args.original_umask = os.umask(0o000)
+ unshare(CLONE_NEWNS)
+ subprocess.run(["mount", "--make-rslave", "/"], check=True)
+
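The `complete_step` helper introduced in this hunk prints a start message, lets the body append values, and interpolates them into the completion message; a standalone sketch of the same pattern:

```python
import contextlib
import sys

@contextlib.contextmanager
def complete_step(text, text2=None):
    # announce the step, collect values from the body, report completion
    print(text + '...', file=sys.stderr)
    results = []
    yield results
    if text2 is None:
        text2 = text + ' complete'
    print(text2.format(*results) + '.', file=sys.stderr)

with complete_step('Attaching image file', 'Attached image file as {}') as out:
    out.append('/dev/loop0')  # hypothetical loop device
```

Because `contextlib.contextmanager` results also work as decorators, the same helper covers the `@complete_step('Detaching namespace')` usage seen above.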
def setup_workspace(args):
print_step("Setting up temporary workspace.")
if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
@@ -91,18 +139,20 @@ def setup_workspace(args):
def btrfs_subvol_create(path, mode=0o755):
m = os.umask(~mode & 0o7777)
- subprocess.run(["btrfs", "subvol", "create", path], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=True)
+ subprocess.run(["btrfs", "subvol", "create", path], check=True)
os.umask(m)
-def btrfs_subvol_delete(path, mode=0o755):
+def btrfs_subvol_delete(path):
subprocess.run(["btrfs", "subvol", "delete", path], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=True)
def btrfs_subvol_make_ro(path, b=True):
- subprocess.run(["btrfs", "property", "set", path, "ro", "true" if b else "false"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=True)
+ subprocess.run(["btrfs", "property", "set", path, "ro", "true" if b else "false"], check=True)
def image_size(args):
- size = args.root_size
+ size = GPT_HEADER_SIZE + GPT_FOOTER_SIZE
+ if args.root_size is not None:
+ size += args.root_size
if args.home_size is not None:
size += args.home_size
if args.srv_size is not None:
@@ -111,33 +161,35 @@ def image_size(args):
size += args.esp_size
if args.swap_size is not None:
size += args.swap_size
+ if args.verity_size is not None:
+ size += args.verity_size
return size
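The sizing logic above reserves one megabyte at each end of the disk for the GPT label and its backup; a condensed sketch of the same computation with a simplified signature:

```python
GPT_HEADER_SIZE = 1024 * 1024  # space for the GPT label at the start
GPT_FOOTER_SIZE = 1024 * 1024  # space for the backup label at the end

def image_size(**partitions):
    # total = GPT overhead plus every partition that was requested
    size = GPT_HEADER_SIZE + GPT_FOOTER_SIZE
    for value in partitions.values():
        if value is not None:
            size += value
    return size
```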
-def create_image(args, workspace):
- if not args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
- return None
+def disable_cow(path):
+ """Disable copy-on-write if applicable on filesystem"""
- print_step("Creating partition table...")
+ subprocess.run(["chattr", "+C", path], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=False)
- f = tempfile.NamedTemporaryFile(dir = os.path.dirname(args.output), prefix='.mkosi-')
- subprocess.run(["chattr", "+C", f.name], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=True)
- f.truncate(image_size(args))
+def determine_partition_table(args):
pn = 1
table = "label: gpt\n"
+ run_sfdisk = False
if args.bootable:
- table += 'size={}, type={}, name="ESP System Partition"\n'.format(str(int(args.esp_size / 512)), GPT_ESP)
+ table += 'size={}, type={}, name="ESP System Partition"\n'.format(args.esp_size // 512, GPT_ESP)
args.esp_partno = pn
pn += 1
+ run_sfdisk = True
else:
args.esp_partno = None
if args.swap_size is not None:
- table += 'size={}, type={}, name="Swap Partition"\n'.format(str(int(args.swap_size / 512)), GPT_SWAP)
+ table += 'size={}, type={}, name="Swap Partition"\n'.format(args.swap_size // 512, GPT_SWAP)
args.swap_partno = pn
pn += 1
+ run_sfdisk = True
else:
args.swap_partno = None
@@ -146,118 +198,310 @@ def create_image(args, workspace):
if args.output_format != OutputFormat.raw_btrfs:
if args.home_size is not None:
- table += 'size={}, type={}, name="Home Partition"\n'.format(str(int(args.home_size / 512)), GPT_HOME)
+ table += 'size={}, type={}, name="Home Partition"\n'.format(args.home_size // 512, GPT_HOME)
args.home_partno = pn
pn += 1
+ run_sfdisk = True
if args.srv_size is not None:
- table += 'size={}, type={}, name="Server Data Partition"\n'.format(str(int(args.srv_size / 512)), GPT_SRV)
+ table += 'size={}, type={}, name="Server Data Partition"\n'.format(args.srv_size // 512, GPT_SRV)
args.srv_partno = pn
pn += 1
+ run_sfdisk = True
- table += 'type={}, name="Root Partition"\n'.format(GPT_ROOT_NATIVE)
+ if args.output_format != OutputFormat.raw_squashfs:
+ table += 'type={}, attrs={}, name="Root Partition"\n'.format(GPT_ROOT_NATIVE, "GUID:60" if args.read_only and args.output_format != OutputFormat.raw_btrfs else "")
+ run_sfdisk = True
args.root_partno = pn
-
pn += 1
- subprocess.run(["sfdisk", "--color=never", f.name], input=table.encode("utf-8"), check=True)
- subprocess.run(["sync"])
+ if args.verity:
+ args.verity_partno = pn
+ pn += 1
+ else:
+ args.verity_partno = None
+
+ return table, run_sfdisk
+
+
+def create_image(args, workspace, for_cache):
+ if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+ return None
- print_step("Created partition table as " + f.name + ".")
+ with complete_step('Creating partition table',
+ 'Created partition table as {.name}') as output:
+
+ f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix='.mkosi-', delete=not for_cache)
+ output.append(f)
+ disable_cow(f.name)
+ f.truncate(image_size(args))
+
+ table, run_sfdisk = determine_partition_table(args)
+
+ if run_sfdisk:
+ subprocess.run(["sfdisk", "--color=never", f.name], input=table.encode("utf-8"), check=True)
+ subprocess.run(["sync"])
+
+ args.ran_sfdisk = run_sfdisk
return f
+def reuse_cache_image(args, workspace, run_build_script, for_cache):
+
+ if not args.incremental:
+ return None, False
+ if for_cache:
+ return None, False
+ if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
+ return None, False
+
+ fname = args.cache_pre_dev if run_build_script else args.cache_pre_inst
+ if fname is None:
+ return None, False
+
+ with complete_step('Basing off cached image ' + fname,
+ 'Copied cached image as {.name}') as output:
+
+ try:
+ source = open(fname, "rb")
+ except FileNotFoundError:
+ return None, False
+
+ with source:
+ f = tempfile.NamedTemporaryFile(dir = os.path.dirname(args.output), prefix='.mkosi-')
+ output.append(f)
+ disable_cow(f.name)
+ shutil.copyfileobj(source, f)
+
+ table, run_sfdisk = determine_partition_table(args)
+ args.ran_sfdisk = run_sfdisk
+
+ return f, True
+
@contextlib.contextmanager
def attach_image_loopback(args, raw):
if raw is None:
yield None
return
- print_step("Attaching image file...")
- c = subprocess.run(["losetup", "--find", "--show", "--partscan", raw.name],
- stdout=subprocess.PIPE, check=True)
- loopdev = c.stdout.decode("utf-8").strip()
- print_step("Attached image file as " + loopdev + ".")
+ with complete_step('Attaching image file',
+ 'Attached image file as {}') as output:
+ c = subprocess.run(["losetup", "--find", "--show", "--partscan", raw.name],
+ stdout=subprocess.PIPE, check=True)
+ loopdev = c.stdout.decode("utf-8").strip()
+ output.append(loopdev)
try:
yield loopdev
finally:
- print_step("Detaching image file...");
- subprocess.run(["losetup", "--detach", loopdev], check=True)
- print_step("Detaching image file completed.");
+ with complete_step('Detaching image file'):
+ subprocess.run(["losetup", "--detach", loopdev], check=True)
def partition(loopdev, partno):
+ if partno is None:
+ return None
+
return loopdev + "p" + str(partno)
-def prepare_swap(args, loopdev):
+def prepare_swap(args, loopdev, cached):
if loopdev is None:
return
-
+ if cached:
+ return
if args.swap_partno is None:
return
- print_step("Formatting swap partition...");
-
- subprocess.run(["mkswap", "-Lswap", partition(loopdev, args.swap_partno)], check=True)
-
- print_step("Formatting swap partition completed.");
+ with complete_step('Formatting swap partition'):
+ subprocess.run(["mkswap", "-Lswap", partition(loopdev, args.swap_partno)],
+ check=True)
-def prepare_esp(args, loopdev):
+def prepare_esp(args, loopdev, cached):
if loopdev is None:
return
+ if cached:
+ return
if args.esp_partno is None:
return
- print_step("Formatting ESP partition...");
+ with complete_step('Formatting ESP partition'):
+ subprocess.run(["mkfs.fat", "-nEFI", "-F32", partition(loopdev, args.esp_partno)],
+ check=True)
- subprocess.run(["mkfs.fat", "-nEFI", "-F32", partition(loopdev, args.esp_partno)], check=True)
+def mkfs_ext4(label, mount, dev):
+ subprocess.run(["mkfs.ext4", "-L", label, "-M", mount, dev], check=True)
- print_step("Formatting ESP partition completed.");
+def mkfs_btrfs(label, dev):
+ subprocess.run(["mkfs.btrfs", "-L", label, "-d", "single", "-m", "single", dev], check=True)
-def mkfs_ext4(label, mount, loopdev, partno):
- subprocess.run(["mkfs.ext4", "-L", label, "-M", mount, partition(loopdev, partno)], check=True)
+def luks_format(dev, passphrase):
-def prepare_root(args, loopdev):
- if loopdev is None:
+ if passphrase['type'] == 'stdin':
+ passphrase = (passphrase['content'] + "\n").encode("utf-8")
+ subprocess.run(["cryptsetup", "luksFormat", "--batch-mode", dev], input=passphrase, check=True)
+ else:
+ assert passphrase['type'] == 'file'
+ subprocess.run(["cryptsetup", "luksFormat", "--batch-mode", dev, passphrase['content']], check=True)
+
+def luks_open(dev, passphrase):
+
+ name = str(uuid.uuid4())
+
+ if passphrase['type'] == 'stdin':
+ passphrase = (passphrase['content'] + "\n").encode("utf-8")
+ subprocess.run(["cryptsetup", "open", "--type", "luks", dev, name], input=passphrase, check=True)
+ else:
+ assert passphrase['type'] == 'file'
+ subprocess.run(["cryptsetup", "--key-file", passphrase['content'], "open", "--type", "luks", dev, name], check=True)
+
+ return os.path.join("/dev/mapper", name)
+
+def luks_close(dev, text):
+ if dev is None:
+ return
+
+ with complete_step(text):
+ subprocess.run(["cryptsetup", "close", dev], check=True)
+
+def luks_format_root(args, loopdev, run_build_script, cached, inserting_squashfs=False):
+
+ if args.encrypt != "all":
return
if args.root_partno is None:
return
+ if args.output_format == OutputFormat.raw_squashfs and not inserting_squashfs:
+ return
+ if run_build_script:
+ return
+ if cached:
+ return
- print_step("Formatting root partition...");
+ with complete_step("LUKS formatting root partition"):
+ luks_format(partition(loopdev, args.root_partno), args.passphrase)
- if args.output_format == OutputFormat.raw_btrfs:
- subprocess.run(["mkfs.btrfs", "-Lroot", partition(loopdev, args.root_partno)], check=True)
- else:
- mkfs_ext4("root", "/", loopdev, args.root_partno)
+def luks_format_home(args, loopdev, run_build_script, cached):
- print_step("Formatting root partition completed.");
+ if args.encrypt is None:
+ return
+ if args.home_partno is None:
+ return
+ if run_build_script:
+ return
+ if cached:
+ return
-def prepare_home(args, loopdev):
- if loopdev is None:
+ with complete_step("LUKS formatting home partition"):
+ luks_format(partition(loopdev, args.home_partno), args.passphrase)
+
+def luks_format_srv(args, loopdev, run_build_script, cached):
+
+ if args.encrypt is None:
+ return
+ if args.srv_partno is None:
return
+ if run_build_script:
+ return
+ if cached:
+ return
+
+ with complete_step("LUKS formatting server data partition"):
+ luks_format(partition(loopdev, args.srv_partno), args.passphrase)
+
+def luks_setup_root(args, loopdev, run_build_script, inserting_squashfs=False):
+
+ if args.encrypt != "all":
+ return None
+ if args.root_partno is None:
+ return None
+ if args.output_format == OutputFormat.raw_squashfs and not inserting_squashfs:
+ return None
+ if run_build_script:
+ return None
+
+ with complete_step("Opening LUKS root partition"):
+ return luks_open(partition(loopdev, args.root_partno), args.passphrase)
+
+def luks_setup_home(args, loopdev, run_build_script):
+
+ if args.encrypt is None:
+ return None
if args.home_partno is None:
+ return None
+ if run_build_script:
+ return None
+
+ with complete_step("Opening LUKS home partition"):
+ return luks_open(partition(loopdev, args.home_partno), args.passphrase)
+
+def luks_setup_srv(args, loopdev, run_build_script):
+
+ if args.encrypt is None:
+ return None
+ if args.srv_partno is None:
+ return None
+ if run_build_script:
+ return None
+
+ with complete_step("Opening LUKS server data partition"):
+ return luks_open(partition(loopdev, args.srv_partno), args.passphrase)
+
+@contextlib.contextmanager
+def luks_setup_all(args, loopdev, run_build_script):
+
+ if args.output_format in (OutputFormat.directory, OutputFormat.subvolume, OutputFormat.tar):
+ yield (None, None, None)
return
- print_step("Formatting home partition...");
+ try:
+ root = luks_setup_root(args, loopdev, run_build_script)
+ try:
+ home = luks_setup_home(args, loopdev, run_build_script)
+ try:
+ srv = luks_setup_srv(args, loopdev, run_build_script)
+
+ yield (partition(loopdev, args.root_partno) if root is None else root, \
+ partition(loopdev, args.home_partno) if home is None else home, \
+ partition(loopdev, args.srv_partno) if srv is None else srv)
+ finally:
+ luks_close(srv, "Closing LUKS server data partition")
+ finally:
+ luks_close(home, "Closing LUKS home partition")
+ finally:
+ luks_close(root, "Closing LUKS root partition")
- mkfs_ext4("home", "/home", loopdev, args.home_partno)
+def prepare_root(args, dev, cached):
+ if dev is None:
+ return
+ if args.output_format == OutputFormat.raw_squashfs:
+ return
+ if cached:
+ return
- print_step("Formatting home partition completed.");
+ with complete_step('Formatting root partition'):
+ if args.output_format == OutputFormat.raw_btrfs:
+ mkfs_btrfs("root", dev)
+ else:
+ mkfs_ext4("root", "/", dev)
-def prepare_srv(args, loopdev):
- if loopdev is None:
+def prepare_home(args, dev, cached):
+ if dev is None:
return
- if args.srv_partno is None:
+ if cached:
return
- print_step("Formatting server data partition...");
+ with complete_step('Formatting home partition'):
+ mkfs_ext4("home", "/home", dev)
- mkfs_ext4("srv", "/srv", loopdev, args.srv_partno)
+def prepare_srv(args, dev, cached):
+ if dev is None:
+ return
+ if cached:
+ return
- print_step("Formatted server data partition.");
+ with complete_step('Formatting server data partition'):
+ mkfs_ext4("srv", "/srv", dev)
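The `passphrase` argument threaded through the LUKS helpers above is a small dict with two shapes, depending on whether the user typed it or provided `mkosi.passphrase`; a sketch (the values shown are hypothetical):

```python
# passphrase typed interactively, piped to cryptsetup on stdin
stdin_passphrase = {'type': 'stdin', 'content': 'hunter2'}

# passphrase stored in mkosi.passphrase, handed over via --key-file
file_passphrase = {'type': 'file', 'content': 'mkosi.passphrase'}

def stdin_payload(passphrase):
    # the stdin variant is newline-terminated before being fed to cryptsetup
    assert passphrase['type'] == 'stdin'
    return (passphrase['content'] + "\n").encode("utf-8")
```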
-def mount_loop(args, loopdev, partno, where):
+def mount_loop(args, dev, where, read_only=False):
os.makedirs(where, 0o755, True)
options = "-odiscard"
@@ -265,74 +509,99 @@ def mount_loop(args, loopdev, partno, where):
if args.compress and args.output_format == OutputFormat.raw_btrfs:
options += ",compress"
- subprocess.run(["mount", "-n", partition(loopdev, partno), where, options], check=True)
+ if read_only:
+ options += ",ro"
+
+ subprocess.run(["mount", "-n", dev, where, options], check=True)
def mount_bind(what, where):
os.makedirs(where, 0o755, True)
subprocess.run(["mount", "--bind", what, where], check=True)
+def mount_tmpfs(where):
+ os.makedirs(where, 0o755, True)
+ subprocess.run(["mount", "tmpfs", "-t", "tmpfs", where], check=True)
+
@contextlib.contextmanager
-def mount_image(args, workspace, loopdev):
+def mount_image(args, workspace, loopdev, root_dev, home_dev, srv_dev, root_read_only=False):
if loopdev is None:
yield None
return
- print_step("Mounting image...");
+ with complete_step('Mounting image'):
+ root = os.path.join(workspace, "root")
- root = os.path.join(workspace, "root")
- mount_loop(args, loopdev, args.root_partno, root)
+ if args.output_format != OutputFormat.raw_squashfs:
+ mount_loop(args, root_dev, root, root_read_only)
- if args.home_partno is not None:
- mount_loop(args, loopdev, args.home_partno, os.path.join(root, "home"))
+ if home_dev is not None:
+ mount_loop(args, home_dev, os.path.join(root, "home"))
- if args.srv_partno is not None:
- mount_loop(args, loopdev, args.srv_partno, os.path.join(root, "srv"))
+ if srv_dev is not None:
+ mount_loop(args, srv_dev, os.path.join(root, "srv"))
- if args.esp_partno is not None:
- mount_loop(args, loopdev, args.esp_partno, os.path.join(root, "boot/efi"))
+ if args.esp_partno is not None:
+ mount_loop(args, partition(loopdev, args.esp_partno), os.path.join(root, "efi"))
- if args.distribution == Distribution.fedora:
- mount_bind("/proc", os.path.join(root, "proc"))
- mount_bind("/dev", os.path.join(root, "dev"))
- mount_bind("/sys", os.path.join(root, "sys"))
+ # Make sure /tmp and /run are not part of the image
+ mount_tmpfs(os.path.join(root, "run"))
+ mount_tmpfs(os.path.join(root, "tmp"))
- print_step("Mounting image completed.");
try:
yield
finally:
- print_step("Unmounting image...");
+ with complete_step('Unmounting image'):
- umount(os.path.join(root, "home"))
- umount(os.path.join(root, "srv"))
- umount(os.path.join(root, "boot/efi"))
- umount(os.path.join(root, "proc"))
- umount(os.path.join(root, "sys"))
- umount(os.path.join(root, "dev"))
- umount(os.path.join(root, "var/cache/dnf"))
- umount(os.path.join(root, "var/cache/apt/archives"))
- umount(os.path.join(root))
+ for d in ("home", "srv", "efi", "var/cache/dnf", "var/cache/apt/archives", "var/cache/pacman/pkg", "var/cache/zypp/packages", "run", "tmp"):
+ umount(os.path.join(root, d))
- print_step("Unmounting image completed.");
+ umount(root)
+@contextlib.contextmanager
+def mount_api_vfs(args, workspace):
+ paths = ('/proc', '/dev', '/sys')
+ root = os.path.join(workspace, "root")
+
+ with complete_step('Mounting API VFS'):
+ for d in paths:
+ mount_bind(d, root + d)
+ try:
+ yield
+ finally:
+ with complete_step('Unmounting API VFS'):
+ for d in paths:
+ umount(root + d)
+
+@contextlib.contextmanager
def mount_cache(args, workspace):
- if not args.distribution in (Distribution.fedora, Distribution.debian, Distribution.ubuntu):
- return
if args.cache_path is None:
+ yield
return
# We can't do this in mount_image() yet, as /var itself might have to be created as a subvolume first
- if args.distribution == Distribution.fedora:
- mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/dnf"))
- elif args.distribution in (Distribution.debian, Distribution.ubuntu):
- mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/apt/archives"))
+ with complete_step('Mounting Package Cache'):
+ if args.distribution == Distribution.fedora:
+ mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/dnf"))
+ elif args.distribution in (Distribution.debian, Distribution.ubuntu):
+ mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/apt/archives"))
+ elif args.distribution == Distribution.arch:
+ mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/pacman/pkg"))
+ elif args.distribution == Distribution.opensuse:
+ mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/zypp/packages"))
+ try:
+ yield
+ finally:
+ with complete_step('Unmounting Package Cache'):
+ for d in ("var/cache/dnf", "var/cache/apt/archives", "var/cache/pacman/pkg", "var/cache/zypp/packages"):
+ umount(os.path.join(workspace, "root", d))
def umount(where):
# Ignore failures and error messages
subprocess.run(["umount", "-n", where], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
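`mount_api_vfs` above follows a common try/finally pattern for paired mount/umount operations; a hedged sketch of that pattern with the command runner injectable for testing (not mkosi's exact code):

```python
import contextlib
import os
import subprocess

@contextlib.contextmanager
def bind_mounts(root, paths=('/proc', '/dev', '/sys'), run=subprocess.run):
    # bind the host's API virtual filesystems into the image root and
    # guarantee they are unmounted again, even if the body raises
    for p in paths:
        os.makedirs(root + p, 0o755, exist_ok=True)
        run(["mount", "--bind", p, root + p], check=True)
    try:
        yield
    finally:
        for p in reversed(paths):
            run(["umount", "-n", root + p])
```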
-def prepare_tree(args, workspace):
- print_step("Setting up basic OS tree...");
+@complete_step('Setting up basic OS tree')
+def prepare_tree(args, workspace, run_build_script, cached):
if args.output_format == OutputFormat.subvolume:
btrfs_subvol_create(os.path.join(workspace, "root"))
@@ -343,6 +612,10 @@ def prepare_tree(args, workspace):
pass
if args.output_format in (OutputFormat.subvolume, OutputFormat.raw_btrfs):
+
+ if cached and args.output_format is OutputFormat.raw_btrfs:
+ return
+
btrfs_subvol_create(os.path.join(workspace, "root", "home"))
btrfs_subvol_create(os.path.join(workspace, "root", "srv"))
btrfs_subvol_create(os.path.join(workspace, "root", "var"))
@@ -350,22 +623,26 @@ def prepare_tree(args, workspace):
os.mkdir(os.path.join(workspace, "root", "var/lib"))
btrfs_subvol_create(os.path.join(workspace, "root", "var/lib/machines"), 0o700)
+ if cached:
+ return
+
if args.bootable:
# We need an initialized machine ID for the boot logic to work
- mid = uuid.uuid4().hex
os.mkdir(os.path.join(workspace, "root", "etc"), 0o755)
- open(os.path.join(workspace, "root", "etc/machine-id"), "w").write(mid + "\n")
-
- # For now, let's stay compatible with traditional Linux ESP mounts
- os.mkdir(os.path.join(workspace, "root", "boot/efi/EFI"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "boot/efi/EFI/BOOT"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "boot/efi/EFI/systemd"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "boot/efi/loader"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "boot/efi/loader/entries"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "boot/efi", mid), 0o700)
-
+ open(os.path.join(workspace, "root", "etc/machine-id"), "w").write(args.machine_id + "\n")
+
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI/BOOT"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI/Linux"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI/systemd"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/loader"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/loader/entries"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi", args.machine_id), 0o700)
+
+ os.mkdir(os.path.join(workspace, "root", "boot"), 0o700)
+ os.symlink("../efi", os.path.join(workspace, "root", "boot/efi"))
os.symlink("efi/loader", os.path.join(workspace, "root", "boot/loader"))
- os.symlink("efi/" + mid, os.path.join(workspace, "root", "boot", mid))
+ os.symlink("efi/" + args.machine_id, os.path.join(workspace, "root", "boot", args.machine_id))
os.mkdir(os.path.join(workspace, "root", "etc/kernel"), 0o755)
@@ -373,7 +650,9 @@ def prepare_tree(args, workspace):
cmdline.write(args.kernel_commandline)
cmdline.write("\n")
- print_step("Setting up basic OS tree completed.");
+ if run_build_script:
+ os.mkdir(os.path.join(workspace, "root", "root"), 0o750)
+ os.mkdir(os.path.join(workspace, "root", "root/dest"), 0o755)
def patch_file(filepath, line_rewriter):
temp_new_filepath = filepath + ".tmp.new"
@@ -408,35 +687,76 @@ Type=ether
DHCP=yes
""")
-def run_workspace_command(workspace, *cmd, network=False):
+def run_workspace_command(args, workspace, *cmd, network=False, env={}):
+
cmdline = ["systemd-nspawn",
- '--quiet',
- "--directory", os.path.join(workspace, "root"),
- "--as-pid2",
- "--register=no"]
+ '--quiet',
+ "--directory=" + os.path.join(workspace, "root"),
+ "--uuid=" + args.machine_id,
+ "--as-pid2",
+ "--register=no",
+ "--bind=" + var_tmp(workspace) + ":/var/tmp" ]
+
if not network:
cmdline += ["--private-network"]
+ cmdline += [ "--setenv={}={}".format(k,v) for k,v in env.items() ]
+
cmdline += ['--', *cmd]
subprocess.run(cmdline, check=True)
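The `env` dict added to `run_workspace_command` above is turned into `systemd-nspawn` `--setenv` options by the list comprehension. Isolated as a sketch (`setenv_flags` is a hypothetical helper name, not part of the patch):

```python
def setenv_flags(env):
    # Format each environment variable as a systemd-nspawn --setenv option,
    # mirroring the comprehension used in run_workspace_command.
    return ["--setenv={}={}".format(k, v) for k, v in env.items()]
```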
+def check_if_url_exists(url):
+ req = urllib.request.Request(url, method="HEAD")
+ try:
+ if urllib.request.urlopen(req):
+ return True
+    except urllib.error.URLError:
+ return False
+
+def disable_kernel_install(args, workspace):
+
+ # Let's disable the automatic kernel installation done by the
+    # kernel RPMs. After all, we want to build our own unified kernels
+ # that include the root hash in the kernel command line and can be
+ # signed as a single EFI executable. Since the root hash is only
+ # known when the root file system is finalized we turn off any
+ # kernel installation beforehand.
+
+ if not args.bootable:
+ return
+
+ for d in ("etc", "etc/kernel", "etc/kernel/install.d"):
+ try:
+ os.mkdir(os.path.join(workspace, "root", d), 0o755)
+ except FileExistsError:
+ pass
+
+ for f in ("50-dracut.install", "51-dracut-rescue.install", "90-loaderentry.install"):
+ os.symlink("/dev/null", os.path.join(workspace, "root", "etc/kernel/install.d", f))
+
+@complete_step('Installing Fedora')
def install_fedora(args, workspace, run_build_script):
- print_step("Installing Fedora...")
+
+ disable_kernel_install(args, workspace)
gpg_key = "/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-%s-x86_64" % args.release
if os.path.exists(gpg_key):
gpg_key = "file://%s" % gpg_key
else:
- gpg_key = "https://getfedora.org/static/81B46521.txt"
+ gpg_key = "https://getfedora.org/static/%s.txt" % FEDORA_KEYS_MAP[args.release]
if args.mirror:
- release_url = "baseurl={.mirror}/releases/{.release}/Everything/x86_64/os/".format(args)
- updates_url = "baseurl={.mirror}/updates/{.release}/x86_64/".format(args)
+ baseurl = "{args.mirror}/releases/{args.release}/Everything/x86_64/os/".format(args=args)
+ if not check_if_url_exists("%s/media.repo" % baseurl):
+ baseurl = "{args.mirror}/development/{args.release}/Everything/x86_64/os/".format(args=args)
+
+ release_url = "baseurl=%s" % baseurl
+ updates_url = "baseurl={args.mirror}/updates/{args.release}/x86_64/".format(args=args)
else:
release_url = ("metalink=https://mirrors.fedoraproject.org/metalink?" +
- "repo=fedora-{.release}&arch=x86_64".format(args))
+ "repo=fedora-{args.release}&arch=x86_64".format(args=args))
updates_url = ("metalink=https://mirrors.fedoraproject.org/metalink?" +
- "repo=updates-released-f{.release}&arch=x86_64".format(args))
+ "repo=updates-released-f{args.release}&arch=x86_64".format(args=args))
with open(os.path.join(workspace, "dnf.conf"), "w") as f:
f.write("""\
@@ -456,6 +776,10 @@ gpgkey={gpg_key}
gpg_key=gpg_key,
release_url=release_url,
updates_url=updates_url))
+ if args.repositories:
+ repos = ["--enablerepo=" + repo for repo in args.repositories]
+ else:
+ repos = ["--enablerepo=fedora", "--enablerepo=updates"]
root = os.path.join(workspace, "root")
cmdline = ["dnf",
@@ -466,8 +790,7 @@ gpgkey={gpg_key}
"--releasever=" + args.release,
"--installroot=" + root,
"--disablerepo=*",
- "--enablerepo=fedora",
- "--enablerepo=updates",
+ *repos,
"--setopt=keepcache=1",
"--setopt=install_weak_deps=0"]
@@ -488,28 +811,41 @@ gpgkey={gpg_key}
cmdline.extend(args.build_packages)
if args.bootable:
- cmdline.extend(["kernel", "systemd-udev"])
- os.makedirs(os.path.join(root, 'efi'), exist_ok=True)
+ cmdline.extend(["kernel", "systemd-udev", "binutils"])
- subprocess.run(cmdline, check=True)
+ # Temporary hack: dracut only adds crypto support to the initrd, if the cryptsetup binary is installed
+ if args.encrypt or args.verity:
+ cmdline.append("cryptsetup")
+
+ if args.output_format == OutputFormat.raw_gpt:
+ cmdline.append("e2fsprogs")
+
+ if args.output_format == OutputFormat.raw_btrfs:
+ cmdline.append("btrfs-progs")
- print_step("Installing Fedora completed.")
+ with mount_api_vfs(args, workspace):
+ subprocess.run(cmdline, check=True)
def install_debian_or_ubuntu(args, workspace, run_build_script, mirror):
+ if args.repositories:
+ components = ','.join(args.repositories)
+ else:
+ components = 'main'
cmdline = ["debootstrap",
"--verbose",
+ "--merged-usr",
"--variant=minbase",
"--include=systemd-sysv",
"--exclude=sysv-rc,initscripts,startpar,lsb-base,insserv",
+ "--components=" + components,
args.release,
workspace + "/root",
mirror]
if args.bootable and args.output_format == OutputFormat.raw_btrfs:
- cmdline[3] += ",btrfs-tools"
+ cmdline[4] += ",btrfs-tools"
subprocess.run(cmdline, check=True)
-
# Debootstrap is not smart enough to deal correctly with alternative dependencies
# Installing libpam-systemd via debootstrap results in systemd-shim being installed
# Therefore, prefer to install via apt from inside the container
@@ -543,29 +879,36 @@ def install_debian_or_ubuntu(args, workspace, run_build_script, mirror):
f.write("#!/bin/sh\n")
f.write("exit 101")
os.chmod(policyrcd, 0o755)
+ if not args.with_docs:
+        # Create dpkg.cfg to ignore documentation
+ dpkg_conf = os.path.join(workspace, "root/etc/dpkg/dpkg.cfg.d/01_nodoc")
+ with open(dpkg_conf, "w") as f:
+ f.writelines([
+ 'path-exclude /usr/share/locale/*\n',
+ 'path-exclude /usr/share/doc/*\n',
+ 'path-exclude /usr/share/man/*\n',
+ 'path-exclude /usr/share/groff/*\n',
+ 'path-exclude /usr/share/info/*\n',
+ 'path-exclude /usr/share/lintian/*\n',
+ 'path-exclude /usr/share/linda/*\n',
+ ])
+
cmdline = ["/usr/bin/apt-get", "--assume-yes", "--no-install-recommends", "install"] + extra_packages
- run_workspace_command(workspace, network=True, *cmdline)
+ run_workspace_command(args, workspace, network=True, env={'DEBIAN_FRONTEND': 'noninteractive', 'DEBCONF_NONINTERACTIVE_SEEN': 'true'}, *cmdline)
os.unlink(policyrcd)
+@complete_step('Installing Debian')
def install_debian(args, workspace, run_build_script):
- print_step("Installing Debian...")
-
install_debian_or_ubuntu(args, workspace, run_build_script, args.mirror)
- print_step("Installing Debian completed.")
-
+@complete_step('Installing Ubuntu')
def install_ubuntu(args, workspace, run_build_script):
- print_step("Installing Ubuntu...")
-
install_debian_or_ubuntu(args, workspace, run_build_script, args.mirror)
- print_step("Installing Ubuntu completed.")
-
+@complete_step('Installing Arch Linux')
def install_arch(args, workspace, run_build_script):
if args.release is not None:
- sys.stderr.write("Distribution release specification is not supported for ArchLinux, ignoring.")
-
- print_step("Installing ArchLinux...")
+ sys.stderr.write("Distribution release specification is not supported for Arch Linux, ignoring.\n")
keyring = "archlinux"
@@ -575,7 +918,6 @@ def install_arch(args, workspace, run_build_script):
subprocess.run(["pacman-key", "--nocolor", "--init"], check=True)
subprocess.run(["pacman-key", "--nocolor", "--populate", keyring], check=True)
-
if platform.machine() == "aarch64":
server = "Server = {}/$arch/$repo".format(args.mirror)
else:
@@ -584,9 +926,12 @@ def install_arch(args, workspace, run_build_script):
with open(os.path.join(workspace, "pacman.conf"), "w") as f:
f.write("""\
[options]
+LogFile = /dev/null
HookDir = /no_hook/
HoldPkg = pacman glibc
Architecture = auto
+UseSyslog
+Color
CheckSpace
SigLevel = Required DatabaseOptional
@@ -611,6 +956,8 @@ SigLevel = Required DatabaseOptional
"e2fsprogs",
"jfsutils",
"lvm2",
+ "man-db",
+ "man-pages",
"mdadm",
"netctl",
"pcmciautils",
@@ -634,7 +981,6 @@ SigLevel = Required DatabaseOptional
cmdline = ["pacstrap",
"-C", os.path.join(workspace, "pacman.conf"),
- "-c",
"-d",
workspace + "/root"] + \
list(packages)
@@ -643,20 +989,139 @@ SigLevel = Required DatabaseOptional
enable_networkd(workspace)
- print_step("Installing ArchLinux complete.")
+@complete_step('Installing openSUSE')
+def install_opensuse(args, workspace, run_build_script):
+
+ root = os.path.join(workspace, "root")
+ release = args.release.strip('"')
+
+ #
+ # If the release looks like a timestamp, it's Tumbleweed.
+ # 13.x is legacy (14.x won't ever appear). For anything else,
+ # let's default to Leap.
+ #
+ if release.isdigit() or release == "tumbleweed":
+ release_url = "{}/tumbleweed/repo/oss/".format(args.mirror)
+ updates_url = "{}/update/tumbleweed/".format(args.mirror)
+ elif release.startswith("13."):
+ release_url = "{}/distribution/{}/repo/oss/".format(args.mirror, release)
+ updates_url = "{}/update/{}/".format(args.mirror, release)
+ else:
+ release_url = "{}/distribution/leap/{}/repo/oss/".format(args.mirror, release)
+ updates_url = "{}/update/leap/{}/oss/".format(args.mirror, release)
+
+ #
+    # Configure the repositories: we need to enable package caching
+ # here to make sure that the package cache stays populated after
+ # "zypper install".
+ #
+ subprocess.run(["zypper", "--root", root, "addrepo", "-ck", release_url, "Main"], check=True)
+ subprocess.run(["zypper", "--root", root, "addrepo", "-ck", updates_url, "Updates"], check=True)
+
+ if not args.with_docs:
+ with open(os.path.join(root, "etc/zypp/zypp.conf"), "w") as f:
+ f.write("rpm.install.excludedocs = yes\n")
+
+    # The common part of the install command.
+ cmdline = ["zypper", "--root", root, "--gpg-auto-import-keys",
+ "install", "-y", "--no-recommends"]
+ #
+ # Install the "minimal" package set.
+ #
+ subprocess.run(cmdline + ["-t", "pattern", "minimal_base"], check=True)
+
+ #
+ # Now install the additional packages if necessary.
+ #
+ extra_packages = []
+
+ if args.bootable:
+ extra_packages += ["kernel-default"]
+
+ if args.encrypt:
+ extra_packages += ["device-mapper"]
+
+ if args.output_format in (OutputFormat.subvolume, OutputFormat.raw_btrfs):
+ extra_packages += ["btrfsprogs"]
+
+ if args.packages:
+ extra_packages += args.packages
+
+ if run_build_script and args.build_packages is not None:
+ extra_packages += args.build_packages
+
+ if extra_packages:
+ subprocess.run(cmdline + extra_packages, check=True)
+
+ #
+    # Disable the package caching in the image that was enabled
+    # earlier to populate the package cache.
+ #
+ subprocess.run(["zypper", "--root", root, "modifyrepo", "-K", "Main"], check=True)
+ subprocess.run(["zypper", "--root", root, "modifyrepo", "-K", "Updates"], check=True)
+
+ #
+    # Tune dracut configuration: openSUSE ships an old version of
+    # dracut, which probably explains why we need these hacks.
+ #
+ if args.bootable:
+ os.makedirs(os.path.join(root, "etc/dracut.conf.d"), exist_ok=True)
+
+ with open(os.path.join(root, "etc/dracut.conf.d/99-mkosi.conf"), "w") as f:
+ f.write("hostonly=no\n")
+
+ # dracut from openSUSE is missing upstream commit 016613c774baf.
+ with open(os.path.join(root, "etc/kernel/cmdline"), "w") as cmdline:
+ cmdline.write(args.kernel_commandline + " root=/dev/gpt-auto-root\n")
+
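The release-detection branching at the top of `install_opensuse` can be sketched as a standalone helper; `opensuse_repo_urls` is a hypothetical name, while the URL patterns are taken from the hunk above:

```python
def opensuse_repo_urls(mirror, release):
    # Mirror install_opensuse's branching: a numeric release (or the
    # literal "tumbleweed") selects Tumbleweed, "13.x" selects the legacy
    # layout, and anything else defaults to Leap.
    release = release.strip('"')
    if release.isdigit() or release == "tumbleweed":
        return ("{}/tumbleweed/repo/oss/".format(mirror),
                "{}/update/tumbleweed/".format(mirror))
    if release.startswith("13."):
        return ("{}/distribution/{}/repo/oss/".format(mirror, release),
                "{}/update/{}/".format(mirror, release))
    return ("{}/distribution/leap/{}/repo/oss/".format(mirror, release),
            "{}/update/leap/{}/oss/".format(mirror, release))
```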
+def install_distribution(args, workspace, run_build_script, cached):
+
+ if cached:
+ return
-def install_distribution(args, workspace, run_build_script):
install = {
Distribution.fedora : install_fedora,
Distribution.debian : install_debian,
Distribution.ubuntu : install_ubuntu,
Distribution.arch : install_arch,
+ Distribution.opensuse : install_opensuse,
}
install[args.distribution](args, workspace, run_build_script)
-def set_root_password(args, workspace):
+def reset_machine_id(args, workspace, run_build_script, for_cache):
+ """Make /etc/machine-id an empty file.
+
+    This way, on the next boot it is either initialized and committed
+    (if /etc is writable), or the image runs with a transient machine ID
+    that changes on each boot (if the image is read-only).
+ """
+
+ if run_build_script:
+ return
+ if for_cache:
+ return
+
+ with complete_step('Resetting machine ID'):
+ machine_id = os.path.join(workspace, 'root', 'etc/machine-id')
+ os.unlink(machine_id)
+ open(machine_id, "w+b").close()
+ dbus_machine_id = os.path.join(workspace, 'root', 'var/lib/dbus/machine-id')
+ try:
+ os.unlink(dbus_machine_id)
+ except FileNotFoundError:
+ pass
+ else:
+ os.symlink('../../../etc/machine-id', dbus_machine_id)
+
+def set_root_password(args, workspace, run_build_script, for_cache):
"Set the root account password, or just delete it so it's easy to log in"
+
+ if run_build_script:
+ return
+ if for_cache:
+ return
+
if args.password == '':
print_step("Deleting root password...")
jj = lambda line: (':'.join(['root', ''] + line.split(':')[2:])
@@ -669,40 +1134,66 @@ def set_root_password(args, workspace):
if line.startswith('root:') else line)
patch_file(os.path.join(workspace, 'root', 'etc/shadow'), jj)
+def run_postinst_script(args, workspace, run_build_script, for_cache):
+
+ if args.postinst_script is None:
+ return
+ if for_cache:
+ return
+
+ with complete_step('Running post installation script'):
+
+ # We copy the postinst script into the build tree. We'd prefer
+ # mounting it into the tree, but for that we'd need a good
+ # place to mount it to. But if we create that we might as well
+ # just copy the file anyway.
+
+ shutil.copy2(args.postinst_script,
+ os.path.join(workspace, "root", "root/postinst"))
+
+ run_workspace_command(args, workspace, "/root/postinst", "build" if run_build_script else "final", network=args.with_network)
+ os.unlink(os.path.join(workspace, "root", "root/postinst"))
+
def install_boot_loader_arch(args, workspace):
patch_file(os.path.join(workspace, "root", "etc/mkinitcpio.conf"),
lambda line: "HOOKS=\"systemd modconf block filesystems fsck\"\n" if line.startswith("HOOKS=") else line)
kernel_version = next(filter(lambda x: x[0].isdigit(), os.listdir(os.path.join(workspace, "root", "lib/modules"))))
- run_workspace_command(workspace,
+ run_workspace_command(args, workspace,
"/usr/bin/kernel-install", "add", kernel_version, "/boot/vmlinuz-linux")
def install_boot_loader_debian(args, workspace):
kernel_version = next(filter(lambda x: x[0].isdigit(), os.listdir(os.path.join(workspace, "root", "lib/modules"))))
- run_workspace_command(workspace,
+ run_workspace_command(args, workspace,
"/usr/bin/kernel-install", "add", kernel_version, "/boot/vmlinuz-" + kernel_version)
-def install_boot_loader(args, workspace):
+def install_boot_loader_opensuse(args, workspace):
+ install_boot_loader_debian(args, workspace)
+
+def install_boot_loader(args, workspace, cached):
if not args.bootable:
return
- print_step("Installing boot loader...")
+ if cached:
+ return
- shutil.copyfile(os.path.join(workspace, "root", "usr/lib/systemd/boot/efi/systemd-bootx64.efi"),
- os.path.join(workspace, "root", "boot/efi/EFI/systemd/systemd-bootx64.efi"))
+ with complete_step("Installing boot loader"):
+ shutil.copyfile(os.path.join(workspace, "root", "usr/lib/systemd/boot/efi/systemd-bootx64.efi"),
+ os.path.join(workspace, "root", "boot/efi/EFI/systemd/systemd-bootx64.efi"))
- shutil.copyfile(os.path.join(workspace, "root", "usr/lib/systemd/boot/efi/systemd-bootx64.efi"),
- os.path.join(workspace, "root", "boot/efi/EFI/BOOT/bootx64.efi"))
+ shutil.copyfile(os.path.join(workspace, "root", "usr/lib/systemd/boot/efi/systemd-bootx64.efi"),
+ os.path.join(workspace, "root", "boot/efi/EFI/BOOT/bootx64.efi"))
- if args.distribution == Distribution.arch:
- install_boot_loader_arch(args, workspace)
+ if args.distribution == Distribution.arch:
+ install_boot_loader_arch(args, workspace)
- if args.distribution == Distribution.debian:
- install_boot_loader_debian(args, workspace)
+ if args.distribution == Distribution.debian:
+ install_boot_loader_debian(args, workspace)
- print_step("Installing boot loader completed.")
+ if args.distribution == Distribution.opensuse:
+ install_boot_loader_opensuse(args, workspace)
def enumerate_and_copy(source, dest, suffix = ""):
for entry in os.scandir(source + suffix):
@@ -723,110 +1214,368 @@ def enumerate_and_copy(source, dest, suffix = ""):
shutil.copystat(entry.path, dest_path, follow_symlinks=False)
-def install_extra_trees(args, workspace):
+def install_extra_trees(args, workspace, for_cache):
if args.extra_trees is None:
return
- print_step("Copying in extra file trees...")
-
- for d in args.extra_trees:
- enumerate_and_copy(d, os.path.join(workspace, "root"))
+ if for_cache:
+ return
- print_step("Copying in extra file trees completed.")
+ with complete_step('Copying in extra file trees'):
+ for d in args.extra_trees:
+ enumerate_and_copy(d, os.path.join(workspace, "root"))
-def git_files_ignore():
- "Creates a function to be used as a ignore callable argument for copytree"
- c = subprocess.run(['git', 'ls-files', '-z', '--others', '--cached',
- '--exclude-standard', '--exclude', '/.mkosi-*'],
+def copy_git_files(src, dest, *, git_files):
+ what_files = ['--exclude-standard', '--cached']
+ if git_files == 'others':
+ what_files += ['--others']
+ c = subprocess.run(['git', 'ls-files', '-z'] + what_files,
stdout=subprocess.PIPE,
universal_newlines=False,
check=True)
- files = {x.decode("utf-8") for x in c.stdout.split(b'\0')}
+ files = {x.decode("utf-8") for x in c.stdout.rstrip(b'\0').split(b'\0')}
+
del c
- def ignore(src, names):
- return [name for name in names
- if (os.path.relpath(os.path.join(src, name)) not in files
- and not os.path.isdir(os.path.join(src, name)))]
- return ignore
+ for path in files:
+ src_path = os.path.join(src, path)
+ dest_path = os.path.join(dest, path)
-def install_build_src(args, workspace, run_build_script):
+ directory = os.path.dirname(dest_path)
+ os.makedirs(directory, exist_ok=True)
+
+ shutil.copy2(src_path, dest_path, follow_symlinks=False)
+
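The `rstrip` before `split` in `copy_git_files` matters because `git ls-files -z` NUL-terminates every entry, including the last one; without stripping the trailing NUL the set would gain a spurious empty string. A minimal sketch of that decode step (`parse_nul_list` is a hypothetical name):

```python
def parse_nul_list(out: bytes):
    # git ls-files -z terminates every entry with NUL, including the last,
    # so strip the trailing NUL before splitting to avoid an empty entry.
    return {x.decode("utf-8") for x in out.rstrip(b"\0").split(b"\0")}
```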
+def install_build_src(args, workspace, run_build_script, for_cache):
if not run_build_script:
return
+ if for_cache:
+ return
if args.build_script is None:
return
- print_step("Copying in build script and sources...")
-
- shutil.copy(args.build_script, os.path.join(workspace, "root", "root", os.path.basename(args.build_script)))
+ with complete_step('Copying in build script and sources'):
+ shutil.copy(args.build_script,
+ os.path.join(workspace, "root", "root", os.path.basename(args.build_script)))
- if args.build_sources is not None:
- target = os.path.join(workspace, "root", "root/src")
- use_git = args.use_git_files
- if use_git is None:
- use_git = os.path.exists('.git')
-
- if use_git:
- ignore = git_files_ignore()
- else:
- ignore = shutil.ignore_patterns('.mkosi-*', '.git')
- shutil.copytree(args.build_sources, target, symlinks=True, ignore=ignore)
+ if args.build_sources is not None:
+ target = os.path.join(workspace, "root", "root/src")
+ use_git = args.use_git_files
+ if use_git is None:
+ use_git = os.path.exists('.git')
- print_step("Copying in build script and sources completed.")
+ if use_git:
+ copy_git_files(args.build_sources, target, git_files=args.git_files)
+ else:
+ ignore = shutil.ignore_patterns('.git')
+ shutil.copytree(args.build_sources, target, symlinks=True, ignore=ignore)
-def install_build_dest(args, workspace, run_build_script):
+def install_build_dest(args, workspace, run_build_script, for_cache):
if run_build_script:
return
+ if for_cache:
+ return
if args.build_script is None:
return
- print_step("Copying in build tree...")
-
- enumerate_and_copy(os.path.join(workspace, "dest"), os.path.join(workspace, "root"))
+ with complete_step('Copying in build tree'):
+ enumerate_and_copy(os.path.join(workspace, "dest"), os.path.join(workspace, "root"))
- print_step("Copying in build tree completed.")
-
-def make_read_only(args, workspace):
+def make_read_only(args, workspace, for_cache):
if not args.read_only:
return
-
- if not args.output_format in (OutputFormat.raw_btrfs, OutputFormat.subvolume):
+ if for_cache:
return
- print_step("Marking root subvolume read-only...")
+ if args.output_format not in (OutputFormat.raw_btrfs, OutputFormat.subvolume):
+ return
- btrfs_subvol_make_ro(os.path.join(workspace, "root"))
+ with complete_step('Marking root subvolume read-only'):
+ btrfs_subvol_make_ro(os.path.join(workspace, "root"))
- print_step("Marking root subvolume read-only completed.")
+def make_tar(args, workspace, run_build_script, for_cache):
-def make_tar(args, workspace):
+ if run_build_script:
+ return None
if args.output_format != OutputFormat.tar:
return None
+ if for_cache:
+ return None
- print_step("Creating archive...")
+ with complete_step('Creating archive'):
+ f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-")
+ subprocess.run(["tar", "-C", os.path.join(workspace, "root"),
+ "-c", "-J", "--xattrs", "--xattrs-include=*", "."],
+ stdout=f, check=True)
- f = tempfile.NamedTemporaryFile(dir = os.path.dirname(args.output), prefix=".mkosi-")
- subprocess.run(["tar", "-C", os.path.join(workspace, "root"), "-c", "-J", "--xattrs", "--xattrs-include=*", "."], stdout=f, check=True)
+ return f
+
+def make_squashfs(args, workspace, for_cache):
+ if args.output_format != OutputFormat.raw_squashfs:
+ return None
+ if for_cache:
+ return None
- print_step("Creating archive completed.")
+ with complete_step('Creating squashfs file system'):
+ f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-squashfs")
+ subprocess.run(["mksquashfs", os.path.join(workspace, "root"), f.name, "-comp", "lz4", "-noappend"],
+ check=True)
return f
+def read_partition_table(loopdev):
+
+ table = []
+ last_sector = 0
+
+ c = subprocess.run(["sfdisk", "--dump", loopdev], stdout=subprocess.PIPE, check=True)
+
+ in_body = False
+ for line in c.stdout.decode("utf-8").split('\n'):
+ stripped = line.strip()
+
+ if stripped == "": # empty line is where the body begins
+ in_body = True
+ continue
+ if not in_body:
+ continue
+
+ table.append(stripped)
+
+ name, rest = stripped.split(":", 1)
+ fields = rest.split(",")
+
+ start = None
+ size = None
+
+ for field in fields:
+ f = field.strip()
+
+ if f.startswith("start="):
+ start = int(f[6:])
+ if f.startswith("size="):
+ size = int(f[5:])
+
+ if start is not None and size is not None:
+ end = start + size
+ if end > last_sector:
+ last_sector = end
+
+ return table, last_sector * 512
+
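The body-parsing logic of `read_partition_table` can be exercised in isolation; the sketch below reuses the same `start=`/`size=` field scanning and the 512-byte-sector assumption (`last_sector_bytes` is a hypothetical name):

```python
def last_sector_bytes(dump: str):
    # Parse the body of `sfdisk --dump` output (the lines after the first
    # empty line) and return the byte offset just past the last partition,
    # assuming 512-byte sectors as read_partition_table does.
    last = 0
    in_body = False
    for line in dump.split("\n"):
        stripped = line.strip()
        if stripped == "":          # an empty line separates header and body
            in_body = True
            continue
        if not in_body:
            continue
        _, rest = stripped.split(":", 1)
        start = size = None
        for field in rest.split(","):
            f = field.strip()
            if f.startswith("start="):
                start = int(f[6:])
            elif f.startswith("size="):
                size = int(f[5:])
        if start is not None and size is not None:
            last = max(last, start + size)
    return last * 512
```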
+def insert_partition(args, workspace, raw, loopdev, partno, blob, name, type_uuid, uuid = None):
+
+ if args.ran_sfdisk:
+ old_table, last_partition_sector = read_partition_table(loopdev)
+ else:
+ # No partition table yet? Then let's fake one...
+ old_table = []
+ last_partition_sector = GPT_HEADER_SIZE
+
+ blob_size = roundup512(os.stat(blob.name).st_size)
+ luks_extra = 2*1024*1024 if args.encrypt == "all" else 0
+ new_size = last_partition_sector + blob_size + luks_extra + GPT_FOOTER_SIZE
+
+ print_step("Resizing disk image to {}...".format(format_bytes(new_size)))
+
+ os.truncate(raw.name, new_size)
+ subprocess.run(["losetup", "--set-capacity", loopdev], check=True)
+
+ print_step("Inserting partition of {}...".format(format_bytes(blob_size)))
+
+ table = "label: gpt\n"
+
+ for t in old_table:
+ table += t + "\n"
+
+ if uuid is not None:
+ table += "uuid=" + str(uuid) + ", "
+
+ table += 'size={}, type={}, attrs=GUID:60, name="{}"\n'.format((blob_size + luks_extra) // 512, type_uuid, name)
+
+ print(table)
+
+ subprocess.run(["sfdisk", "--color=never", loopdev], input=table.encode("utf-8"), check=True)
+ subprocess.run(["sync"])
+
+ print_step("Writing partition...")
+
+ if args.root_partno == partno:
+ luks_format_root(args, loopdev, False, True)
+ dev = luks_setup_root(args, loopdev, False, True)
+ else:
+ dev = None
+
+ try:
+ subprocess.run(["dd", "if=" + blob.name, "of=" + (dev if dev is not None else partition(loopdev, partno))], check=True)
+ finally:
+ luks_close(dev, "Closing LUKS root partition")
+
+ args.ran_sfdisk = True
+
+ return blob_size
+
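`insert_partition` grows the image by the 512-byte-rounded blob size plus the LUKS overhead and the GPT footer. `roundup512` is defined elsewhere in the script; a plausible bitmask implementation, shown here as an assumption rather than the patch's actual code, is:

```python
def roundup512(x):
    # Round x up to the next multiple of 512: adding 511 and clearing the
    # low 9 bits (assumed implementation; the real helper lives earlier in
    # the mkosi script).
    return (x + 511) & ~511
```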
+def insert_squashfs(args, workspace, raw, loopdev, squashfs, for_cache):
+ if args.output_format != OutputFormat.raw_squashfs:
+ return
+ if for_cache:
+ return
+
+ with complete_step('Inserting squashfs root partition'):
+ args.root_size = insert_partition(args, workspace, raw, loopdev, args.root_partno, squashfs,
+ "Root Partition", GPT_ROOT_NATIVE)
+
+def make_verity(args, workspace, dev, run_build_script, for_cache):
+
+ if run_build_script or not args.verity:
+ return None, None
+ if for_cache:
+ return None, None
+
+ with complete_step('Generating verity hashes'):
+ f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-")
+ c = subprocess.run(["veritysetup", "format", dev, f.name],
+ stdout=subprocess.PIPE, check=True)
+
+ for line in c.stdout.decode("utf-8").split('\n'):
+ if line.startswith("Root hash:"):
+ root_hash = line[10:].strip()
+ return f, root_hash
+
+ raise ValueError('Root hash not found')
+
+def insert_verity(args, workspace, raw, loopdev, verity, root_hash, for_cache):
+
+ if verity is None:
+ return
+ if for_cache:
+ return
+
+    # Use the final 128 bits of the root hash as the partition UUID of the verity partition
+ u = uuid.UUID(root_hash[-32:])
+
+ with complete_step('Inserting verity partition'):
+ insert_partition(args, workspace, raw, loopdev, args.verity_partno, verity,
+ "Verity Partition", GPT_ROOT_NATIVE_VERITY, u)
+
+def patch_root_uuid(args, loopdev, root_hash, for_cache):
+
+ if root_hash is None:
+ return
+ if for_cache:
+ return
+
+    # Use the first 128 bits of the root hash as the partition UUID of the root partition
+ u = uuid.UUID(root_hash[:32])
+
+ with complete_step('Patching root partition UUID'):
+ subprocess.run(["sfdisk", "--part-uuid", loopdev, str(args.root_partno), str(u)],
+ check=True)
+
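The two UUID derivations in `insert_verity` and `patch_root_uuid` are complementary halves of the same 64-character SHA256 hex digest. Sketched together (`verity_uuids` is a hypothetical name):

```python
import uuid

def verity_uuids(root_hash: str):
    # The root partition UUID comes from the first 128 bits of the verity
    # root hash, the verity partition UUID from the last 128 bits; a
    # SHA256 hex digest is 64 characters, so each half is 32 characters.
    return uuid.UUID(root_hash[:32]), uuid.UUID(root_hash[-32:])
```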
+def install_unified_kernel(args, workspace, run_build_script, for_cache, root_hash):
+
+    # Iterate through all kernel versions included in the image,
+    # generate a combined kernel+initrd+cmdline+osrelease EFI file
+    # for each, and place them in the /EFI/Linux directory of the
+ # ESP. sd-boot iterates through them and shows them in the
+ # menu. These "unified" single-file images have the benefit that
+ # they can be signed like normal EFI binaries, and can encode
+ # everything necessary to boot a specific root device, including
+ # the root hash.
+
+ if not args.bootable:
+ return
+ if for_cache:
+ return
+
+ if args.distribution != Distribution.fedora:
+ return
+
+ with complete_step("Generating combined kernel + initrd boot file"):
+
+ cmdline = args.kernel_commandline
+ if root_hash is not None:
+ cmdline += " roothash=" + root_hash
+
+ for kver in os.scandir(os.path.join(workspace, "root", "usr/lib/modules")):
+ if not kver.is_dir():
+ continue
+
+ boot_binary = "/efi/EFI/Linux/linux-" + kver.name
+ if root_hash is not None:
+ boot_binary += "-" + root_hash
+ boot_binary += ".efi"
+
+ dracut = ["/usr/bin/dracut",
+ "-v",
+ "--no-hostonly",
+ "--uefi",
+ "--kver", kver.name,
+ "--kernel-cmdline", cmdline ]
+
+ # Temporary fix until dracut includes these in the image anyway
+ dracut += ("-i",) + ("/usr/lib/systemd/system/systemd-volatile-root.service",)*2 + \
+ ("-i",) + ("/usr/lib/systemd/systemd-volatile-root",)*2 + \
+ ("-i",) + ("/usr/lib/systemd/systemd-veritysetup",)*2 + \
+ ("-i",) + ("/usr/lib/systemd/system-generators/systemd-veritysetup-generator",)*2
+
+ if args.output_format == OutputFormat.raw_squashfs:
+ dracut += [ '--add-drivers', 'squashfs' ]
+
+ dracut += [ boot_binary ]
+
+            run_workspace_command(args, workspace, *dracut)
+
+def secure_boot_sign(args, workspace, run_build_script, for_cache):
+
+ if run_build_script:
+ return
+ if not args.bootable:
+ return
+ if not args.secure_boot:
+ return
+ if for_cache:
+ return
+
+ for path, dirnames, filenames in os.walk(os.path.join(workspace, "root", "efi")):
+ for i in filenames:
+ if not i.endswith(".efi") and not i.endswith(".EFI"):
+ continue
+
+ with complete_step("Signing EFI binary {} in ESP".format(i)):
+ p = os.path.join(path, i)
+
+ subprocess.run(["sbsign",
+ "--key", args.secure_boot_key,
+ "--cert", args.secure_boot_certificate,
+ "--output", p + ".signed",
+ p], check=True)
+
+ os.rename(p + ".signed", p)
+
def xz_output(args, raw):
- if not args.output_format in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt):
+ if args.output_format not in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt, OutputFormat.raw_squashfs):
return raw
if not args.xz:
return raw
- print_step("Compressing image file...")
+ with complete_step('Compressing image file'):
+ f = tempfile.NamedTemporaryFile(prefix=".mkosi-", dir=os.path.dirname(args.output))
+ subprocess.run(["xz", "-c", raw.name], stdout=f, check=True)
+
+ return f
- f = tempfile.NamedTemporaryFile(dir = os.path.dirname(args.output), prefix=".mkosi-")
- subprocess.run(["xz", "-c", raw.name], stdout=f, check=True)
+def write_root_hash_file(args, root_hash):
+ if root_hash is None:
+ return None
- print_step("Compressing image file complete.")
+ with complete_step('Writing .roothash file'):
+ f = tempfile.NamedTemporaryFile(mode='w+b', prefix='.mkosi',
+ dir=os.path.dirname(args.output_root_hash_file))
+ f.write((root_hash + "\n").encode())
return f
@@ -834,22 +1583,17 @@ def copy_nspawn_settings(args):
if args.nspawn_settings is None:
return None
- print_step("Copying nspawn settings file...")
+ with complete_step('Copying nspawn settings file'):
+ f = tempfile.NamedTemporaryFile(mode="w+b", prefix=".mkosi-",
+ dir=os.path.dirname(args.output_nspawn_settings))
- f = tempfile.NamedTemporaryFile(mode = "w+b", dir = os.path.dirname(args.output_nspawn_settings), prefix=".mkosi-")
+ with open(args.nspawn_settings, "rb") as c:
+ f.write(c.read())
- with open(args.nspawn_settings, "rb") as c:
- bs = 65536
- buf = c.read(bs)
- while len(buf) > 0:
- f.write(buf)
- buf = c.read(bs)
-
- print_step("Copying nspawn settings file completed.")
return f
def hash_file(of, sf, fname):
- bs = 65536
+ bs = 16*1024**2
h = hashlib.sha256()
sf.seek(0)
@@ -860,25 +1604,26 @@ def hash_file(of, sf, fname):
of.write(h.hexdigest() + " *" + fname + "\n")
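`hash_file`'s chunked digest loop, with the buffer enlarged to 16 MiB in this patch, can be sketched on its own (`hash_stream` is a hypothetical name):

```python
import hashlib
import io  # used in the usage example below

def hash_stream(sf, bs=16 * 1024**2):
    # Compute a SHA256 digest over a seekable file object in bs-sized
    # chunks, as hash_file does before writing the "<digest> *<name>" line.
    h = hashlib.sha256()
    sf.seek(0)
    while True:
        buf = sf.read(bs)
        if not buf:
            break
        h.update(buf)
    return h.hexdigest()
```

For example, `hash_stream(io.BytesIO(b"abc"))` matches `hashlib.sha256(b"abc").hexdigest()`.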
-def calculate_sha256sum(args, raw, tar, nspawn_settings):
+def calculate_sha256sum(args, raw, tar, root_hash_file, nspawn_settings):
if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
return None
if not args.checksum:
return None
- print_step("Calculating SHA256SUM...")
-
- f = tempfile.NamedTemporaryFile(mode="w+", dir=os.path.dirname(args.output_checksum), prefix=".mkosi-", encoding="utf-8")
+ with complete_step('Calculating SHA256SUMS'):
+ f = tempfile.NamedTemporaryFile(mode="w+", prefix=".mkosi-", encoding="utf-8",
+ dir=os.path.dirname(args.output_checksum))
- if raw is not None:
- hash_file(f, raw, os.path.basename(args.output))
- if tar is not None:
- hash_file(f, tar, os.path.basename(args.output))
- if nspawn_settings is not None:
- hash_file(f, nspawn_settings, os.path.basename(args.output_nspawn_settings))
+ if raw is not None:
+ hash_file(f, raw, os.path.basename(args.output))
+ if tar is not None:
+ hash_file(f, tar, os.path.basename(args.output))
+ if root_hash_file is not None:
+ hash_file(f, root_hash_file, os.path.basename(args.output_root_hash_file))
+ if nspawn_settings is not None:
+ hash_file(f, nspawn_settings, os.path.basename(args.output_nspawn_settings))
- print_step("Calculating SHA256SUM complete.")
return f
def calculate_signature(args, checksum):
@@ -888,78 +1633,81 @@ def calculate_signature(args, checksum):
if checksum is None:
return None
- print_step("Signing SHA256SUM...")
+ with complete_step('Signing SHA256SUMS'):
+ f = tempfile.NamedTemporaryFile(mode="wb", prefix=".mkosi-",
+ dir=os.path.dirname(args.output_signature))
- f = tempfile.NamedTemporaryFile(mode="wb", prefix=".mkosi-", dir=os.path.dirname(args.output_signature))
+ cmdline = ["gpg", "--detach-sign"]
- cmdline = ["gpg", "--detach-sign"]
+ if args.key is not None:
+ cmdline += ["--default-key", args.key]
- if args.key is not None:
- cmdline.extend(["--default-key", args.key])
+ checksum.seek(0)
+ subprocess.run(cmdline, stdin=checksum, stdout=f, check=True)
- checksum.seek(0)
- subprocess.run(cmdline, stdin=checksum, stdout=f, check=True)
+ return f
- print_step("Signing SHA256SUM complete.")
+def save_cache(args, workspace, raw, cache_path):
- return f
+ if cache_path is None:
+ return
-def link_output(args, workspace, raw, tar):
- print_step("Linking image file...")
+    with complete_step('Installing cache copy',
+                       'Successfully installed cache copy ' + cache_path):
- if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
- os.rename(os.path.join(workspace, "root"), args.output)
- elif args.output_format in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt):
- os.chmod(raw, 0o666 & ~args.original_umask)
- os.link(raw, args.output)
- else:
- os.chmod(raw, 0o666 & ~args.original_umask)
- os.link(tar, args.output)
+ if args.output_format in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt):
+ os.chmod(raw, 0o666 & ~args.original_umask)
+ shutil.move(raw, cache_path)
+ else:
+ shutil.move(os.path.join(workspace, "root"), cache_path)
- print_step("Successfully linked " + args.output + ".")
+def link_output(args, workspace, raw, tar):
+ with complete_step('Linking image file',
+ 'Successfully linked ' + args.output):
+ if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
+ os.rename(os.path.join(workspace, "root"), args.output)
+ elif args.output_format in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt, OutputFormat.raw_squashfs):
+ os.chmod(raw, 0o666 & ~args.original_umask)
+ os.link(raw, args.output)
+ else:
+ os.chmod(tar, 0o666 & ~args.original_umask)
+ os.link(tar, args.output)
def link_output_nspawn_settings(args, path):
if path is None:
return
- print_step("Linking nspawn settings file...")
-
- os.chmod(path, 0o666 & ~args.original_umask)
- os.link(path, args.output_nspawn_settings)
-
- print_step("Successfully linked " + args.output_nspawn_settings + ".")
+ with complete_step('Linking nspawn settings file',
+ 'Successfully linked ' + args.output_nspawn_settings):
+ os.chmod(path, 0o666 & ~args.original_umask)
+ os.link(path, args.output_nspawn_settings)
def link_output_checksum(args, checksum):
if checksum is None:
return
- print_step("Linking SHA256SUM file...")
+ with complete_step('Linking SHA256SUMS file',
+ 'Successfully linked ' + args.output_checksum):
+ os.chmod(checksum, 0o666 & ~args.original_umask)
+ os.link(checksum, args.output_checksum)
- os.chmod(checksum, 0o666 & ~args.original_umask)
- os.link(checksum, args.output_checksum)
+def link_output_root_hash_file(args, root_hash_file):
+ if root_hash_file is None:
+ return
- print_step("Successfully linked " + args.output_checksum + ".")
+ with complete_step('Linking .roothash file',
+ 'Successfully linked ' + args.output_root_hash_file):
+ os.chmod(root_hash_file, 0o666 & ~args.original_umask)
+ os.link(root_hash_file, args.output_root_hash_file)
def link_output_signature(args, signature):
if signature is None:
return
- print_step("Linking SHA256SUM.gpg file...")
-
- os.chmod(signature, 0o666 & ~args.original_umask)
- os.link(signature, args.output_signature)
-
- print_step("Successfully linked " + args.output_signature + ".")
-
-def format_bytes(bytes):
- if bytes >= 1024*1024*1024:
- return "{:0.1f}G".format(bytes / 1024**3)
- if bytes >= 1024*1024:
- return "{:0.1f}M".format(bytes / 1024**2)
- if bytes >= 1024:
- return "{:0.1f}K".format(bytes / 1024)
-
- return "{}B".format(bytes)
+ with complete_step('Linking SHA256SUMS.gpg file',
+ 'Successfully linked ' + args.output_signature):
+ os.chmod(signature, 0o666 & ~args.original_umask)
+ os.link(signature, args.output_signature)
def dir_size(path):
sum = 0
@@ -983,19 +1731,16 @@ def print_output_size(args):
print_step("Resulting image size is " + format_bytes(st.st_size) + ", consumes " + format_bytes(st.st_blocks * 512) + ".")
def setup_cache(args):
- if not args.distribution in (Distribution.fedora, Distribution.debian, Distribution.ubuntu):
- return None
-
- print_step("Setting up package cache...")
-
- if args.cache_path is None:
- d = tempfile.TemporaryDirectory(dir=os.path.dirname(args.output), prefix=".mkosi-")
- args.cache_path = d.name
- else:
- os.makedirs(args.cache_path, 0o700, True)
- d = None
+ with complete_step('Setting up package cache',
+ 'Setting up package cache {} complete') as output:
+ if args.cache_path is None:
+ d = tempfile.TemporaryDirectory(dir=os.path.dirname(args.output), prefix=".mkosi-")
+ args.cache_path = d.name
+ else:
+ os.makedirs(args.cache_path, 0o755, exist_ok=True)
+ d = None
+ output.append(args.cache_path)
- print_step("Setting up package cache " + args.cache_path + " completed.")
return d
class PackageAction(argparse.Action):
@@ -1012,44 +1757,56 @@ def parse_args():
group = parser.add_argument_group("Commands")
group.add_argument("verb", choices=("build", "clean", "help", "summary"), nargs='?', default="build", help='Operation to execute')
group.add_argument('-h', '--help', action='help', help="Show this help")
+ group.add_argument('--version', action='version', version='%(prog)s ' + __version__)
group = parser.add_argument_group("Distribution")
group.add_argument('-d', "--distribution", choices=Distribution.__members__, help='Distribution to install')
group.add_argument('-r', "--release", help='Distribution release to install')
group.add_argument('-m', "--mirror", help='Distribution mirror to use')
+ group.add_argument("--repositories", action=PackageAction, dest='repositories', help='Repositories to use', metavar='REPOS')
group = parser.add_argument_group("Output")
group.add_argument('-t', "--format", dest='output_format', choices=OutputFormat.__members__, help='Output Format')
group.add_argument('-o', "--output", help='Output image path', metavar='PATH')
- group.add_argument('-f', "--force", action='store_true', help='Remove existing image file before operation')
+    group.add_argument('-f', "--force", action='count', dest='force_count', default=0, help='Remove existing image file before operation (specify twice to also remove the incremental cache)')
group.add_argument('-b', "--bootable", type=parse_boolean, nargs='?', const=True,
- help='Make image bootable on EFI (only raw_gpt, raw_btrfs)')
- group.add_argument("--read-only", action='store_true', help='Make root volume read-only (only raw_btrfs, subvolume)')
+ help='Make image bootable on EFI (only raw_gpt, raw_btrfs, raw_squashfs)')
+ group.add_argument("--secure-boot", action='store_true', help='Sign the resulting kernel/initrd image for UEFI SecureBoot')
+ group.add_argument("--secure-boot-key", help="UEFI SecureBoot private key in PEM format", metavar='PATH')
+ group.add_argument("--secure-boot-certificate", help="UEFI SecureBoot certificate in X509 format", metavar='PATH')
+    group.add_argument("--read-only", action='store_true', help='Make root volume read-only (only raw_gpt, raw_btrfs, subvolume, implied on raw_squashfs)')
+    group.add_argument("--encrypt", choices=("all", "data"), help='Encrypt everything except the ESP ("all"), or everything except the ESP and root ("data")')
+ group.add_argument("--verity", action='store_true', help='Add integrity partition (implies --read-only)')
group.add_argument("--compress", action='store_true', help='Enable compression in file system (only raw_btrfs, subvolume)')
- group.add_argument("--xz", action='store_true', help='Compress resulting image with xz (only raw_gpt, raw_btrfs, implied on tar)')
+ group.add_argument("--xz", action='store_true', help='Compress resulting image with xz (only raw_gpt, raw_btrfs, raw_squashfs, implied on tar)')
+ group.add_argument('-i', "--incremental", action='store_true', help='Make use of and generate intermediary cache images')
group = parser.add_argument_group("Packages")
group.add_argument('-p', "--package", action=PackageAction, dest='packages', help='Add an additional package to the OS image', metavar='PACKAGE')
group.add_argument("--with-docs", action='store_true', help='Install documentation (only fedora)')
- group.add_argument("--cache", dest='cache_path', help='Package cache path (only fedora, debian, ubuntu)', metavar='PATH')
+ group.add_argument("--cache", dest='cache_path', help='Package cache path', metavar='PATH')
group.add_argument("--extra-tree", action='append', dest='extra_trees', help='Copy an extra tree on top of image', metavar='PATH')
group.add_argument("--build-script", help='Build script to run inside image', metavar='PATH')
group.add_argument("--build-sources", help='Path for sources to build', metavar='PATH')
group.add_argument("--build-package", action=PackageAction, dest='build_packages', help='Additional packages needed for build script', metavar='PACKAGE')
+ group.add_argument("--postinst-script", help='Post installation script to run inside image', metavar='PATH')
group.add_argument('--use-git-files', type=parse_boolean,
help='Ignore any files that git itself ignores (default: guess)')
+ group.add_argument('--git-files', choices=('cached', 'others'),
+ help='Whether to include untracked files (default: others)')
+ group.add_argument("--with-network", action='store_true', help='Run build and postinst scripts with network access (instead of private network)')
group.add_argument("--settings", dest='nspawn_settings', help='Add in .nspawn settings file', metavar='PATH')
group = parser.add_argument_group("Partitions")
group.add_argument("--root-size", help='Set size of root partition (only raw_gpt, raw_btrfs)', metavar='BYTES')
- group.add_argument("--esp-size", help='Set size of EFI system partition (only raw_gpt, raw_btrfs)', metavar='BYTES')
- group.add_argument("--swap-size", help='Set size of swap partition (only raw_gpt, raw_btrfs)', metavar='BYTES')
- group.add_argument("--home-size", help='Set size of /home partition (only raw_gpt)', metavar='BYTES')
- group.add_argument("--srv-size", help='Set size of /srv partition (only raw_gpt)', metavar='BYTES')
-
- group = parser.add_argument_group("Validation (only raw_gpt, raw_btrfs, tar)")
- group.add_argument("--checksum", action='store_true', help='Write SHA256SUM file')
- group.add_argument("--sign", action='store_true', help='Write and sign SHA256SUM file')
+ group.add_argument("--esp-size", help='Set size of EFI system partition (only raw_gpt, raw_btrfs, raw_squashfs)', metavar='BYTES')
+ group.add_argument("--swap-size", help='Set size of swap partition (only raw_gpt, raw_btrfs, raw_squashfs)', metavar='BYTES')
+ group.add_argument("--home-size", help='Set size of /home partition (only raw_gpt, raw_squashfs)', metavar='BYTES')
+ group.add_argument("--srv-size", help='Set size of /srv partition (only raw_gpt, raw_squashfs)', metavar='BYTES')
+
+ group = parser.add_argument_group("Validation (only raw_gpt, raw_btrfs, raw_squashfs, tar)")
+ group.add_argument("--checksum", action='store_true', help='Write SHA256SUMS file')
+ group.add_argument("--sign", action='store_true', help='Write and sign SHA256SUMS file')
group.add_argument("--key", help='GPG key to use for signing')
group.add_argument("--password", help='Set the root password')
@@ -1132,25 +1889,44 @@ def unlink_output(args):
if not args.force and args.verb != "clean":
return
- unlink_try_hard(args.output)
+ with complete_step('Removing output files'):
+ unlink_try_hard(args.output)
- if args.checksum:
- unlink_try_hard(args.output_checksum)
+ if args.checksum:
+ unlink_try_hard(args.output_checksum)
- if args.sign:
- unlink_try_hard(args.output_signature)
+ if args.verity:
+ unlink_try_hard(args.output_root_hash_file)
- if args.nspawn_settings is not None:
- unlink_try_hard(args.output_nspawn_settings)
+ if args.sign:
+ unlink_try_hard(args.output_signature)
+
+ if args.nspawn_settings is not None:
+ unlink_try_hard(args.output_nspawn_settings)
+
+    # We remove the cache if the user either passed --force twice, or invoked "clean" with --force given once
+ if args.verb == "clean":
+ remove_cache = args.force_count > 0
+ else:
+ remove_cache = args.force_count > 1
+
+ if remove_cache:
+ with complete_step('Removing cache files'):
+ if args.cache_pre_dev is not None:
+ unlink_try_hard(args.cache_pre_dev)
+
+ if args.cache_pre_inst is not None:
+ unlink_try_hard(args.cache_pre_inst)
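The cache-removal rule above can be isolated into a tiny predicate (the helper name is ours, for illustration only): "clean" removes the cache with a single `--force`, while other verbs require it twice (`-ff`).

```python
def should_remove_cache(verb, force_count):
    # "clean" removes cached trees with one --force; any other verb
    # (e.g. "build") only removes them when --force is given twice.
    if verb == "clean":
        return force_count > 0
    return force_count > 1

print(should_remove_cache("clean", 1), should_remove_cache("build", 1))  # True False
```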
def parse_boolean(s):
+ "Parse 1/true/yes as true and 0/false/no as false"
if s in {"1", "true", "yes"}:
return True
if s in {"0", "false", "no"}:
return False
- raise ValueError("invalid literal for bool(): {!r}".format(s))
+ raise ValueError("Invalid literal for bool(): {!r}".format(s))
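Pulled out of the diff for illustration, the boolean parser above accepts exactly three truthy and three falsy literals; anything else raises. A standalone copy with identical logic and no mkosi dependencies:

```python
def parse_boolean(s):
    "Parse 1/true/yes as true and 0/false/no as false"
    if s in {"1", "true", "yes"}:
        return True
    if s in {"0", "false", "no"}:
        return False
    raise ValueError("Invalid literal for bool(): {!r}".format(s))

print(parse_boolean("yes"), parse_boolean("0"))  # True False
```

Note that the matching is deliberately strict: "Yes" or "TRUE" are rejected, which keeps mkosi.default files unambiguous.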
def process_setting(args, section, key, value):
if section == "Distribution":
@@ -1160,6 +1936,12 @@ def process_setting(args, section, key, value):
elif key == "Release":
if args.release is None:
args.release = value
+ elif key == "Repositories":
+            list_value = value if isinstance(value, list) else value.split()
+ if args.repositories is None:
+ args.repositories = list_value
+ else:
+ args.repositories.extend(list_value)
elif key is None:
return True
else:
@@ -1175,11 +1957,31 @@ def process_setting(args, section, key, value):
if not args.force:
args.force = parse_boolean(value)
elif key == "Bootable":
- if not args.bootable:
+ if args.bootable is None:
args.bootable = parse_boolean(value)
+ elif key == "KernelCommandLine":
+ if args.kernel_commandline is None:
+ args.kernel_commandline = value
+ elif key == "SecureBoot":
+ if not args.secure_boot:
+ args.secure_boot = parse_boolean(value)
+ elif key == "SecureBootKey":
+ if args.secure_boot_key is None:
+ args.secure_boot_key = value
+ elif key == "SecureBootCertificate":
+ if args.secure_boot_certificate is None:
+ args.secure_boot_certificate = value
elif key == "ReadOnly":
if not args.read_only:
args.read_only = parse_boolean(value)
+ elif key == "Encrypt":
+ if args.encrypt is None:
+ if value not in ("all", "data"):
+                    raise ValueError("Invalid encryption setting: " + value)
+ args.encrypt = value
+ elif key == "Verity":
+ if not args.verity:
+ args.verity = parse_boolean(value)
elif key == "Compress":
if not args.compress:
args.compress = parse_boolean(value)
@@ -1192,10 +1994,11 @@ def process_setting(args, section, key, value):
return False
elif section == "Packages":
if key == "Packages":
+            list_value = value if isinstance(value, list) else value.split()
if args.packages is None:
- args.packages = value.split()
+ args.packages = list_value
else:
- args.packages.extend(value.split())
+ args.packages.extend(list_value)
elif key == "WithDocs":
if not args.with_docs:
args.with_docs = parse_boolean(value)
@@ -1203,23 +2006,31 @@ def process_setting(args, section, key, value):
if args.cache_path is None:
args.cache_path = value
elif key == "ExtraTrees":
+            list_value = value if isinstance(value, list) else value.split()
if args.extra_trees is None:
- args.extra_trees = value.split()
+ args.extra_trees = list_value
else:
- args.extra_trees.extend(value.split())
+ args.extra_trees.extend(list_value)
elif key == "BuildScript":
- if args.build_script is not None:
+ if args.build_script is None:
args.build_script = value
elif key == "BuildSources":
- if args.build_sources is not None:
+ if args.build_sources is None:
args.build_sources = value
elif key == "BuildPackages":
+            list_value = value if isinstance(value, list) else value.split()
if args.build_packages is None:
- args.build_packages = value.split()
+ args.build_packages = list_value
else:
- args.build_packages.extend(value.split())
+ args.build_packages.extend(list_value)
+ elif key == "PostInstallationScript":
+ if args.postinst_script is None:
+ args.postinst_script = value
+ elif key == "WithNetwork":
+ if not args.with_network:
+ args.with_network = parse_boolean(value)
elif key == "NSpawnSettings":
- if args.nspawn_settings is not None:
+ if args.nspawn_settings is None:
args.nspawn_settings = value
elif key is None:
return True
@@ -1267,9 +2078,7 @@ def process_setting(args, section, key, value):
return True
-def load_defaults(args):
- fname = "mkosi.default" if args.default_path is None else args.default_path
-
+def load_defaults_file(fname, options):
try:
f = open(fname, "r")
except FileNotFoundError:
@@ -1279,13 +2088,44 @@ def load_defaults(args):
config.optionxform = str
config.read_file(f)
+    # This args instance is used only for validating the parsed settings
+ args = parse_args()
+
for section in config.sections():
if not process_setting(args, section, None, None):
sys.stderr.write("Unknown section in {}, ignoring: [{}]\n".format(fname, section))
-
+ continue
+ if section not in options:
+ options[section] = {}
for key in config[section]:
if not process_setting(args, section, key, config[section][key]):
sys.stderr.write("Unknown key in section [{}] in {}, ignoring: {}=\n".format(section, fname, key))
+ continue
+ if section == "Packages" and key in ["Packages", "ExtraTrees", "BuildPackages"]:
+ if key in options[section]:
+ options[section][key].extend(config[section][key].split())
+ else:
+ options[section][key] = config[section][key].split()
+ else:
+ options[section][key] = config[section][key]
+ return options
+
+def load_defaults(args):
+ fname = "mkosi.default" if args.default_path is None else args.default_path
+
+ config = {}
+ load_defaults_file(fname, config)
+
+ defaults_dir = fname + '.d'
+ if os.path.isdir(defaults_dir):
+ for defaults_file in sorted(os.listdir(defaults_dir)):
+ defaults_path = os.path.join(defaults_dir, defaults_file)
+ if os.path.isfile(defaults_path):
+ load_defaults_file(defaults_path, config)
+
+ for section in config.keys():
+ for key in config[section]:
+ process_setting(args, section, key, config[section][key])
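The drop-in discovery order used by `load_defaults()` can be sketched standalone (the helper name `defaults_files` is ours): the main defaults file is read first, then every regular file in `<fname>.d/` in `sorted()` filename order, so later settings such as `10-extra` are processed after `00-base`.

```python
import os
import tempfile

def defaults_files(fname):
    # Main defaults file first, then the ".d" drop-ins in sorted order.
    paths = [fname]
    d = fname + ".d"
    if os.path.isdir(d):
        for entry in sorted(os.listdir(d)):
            p = os.path.join(d, entry)
            if os.path.isfile(p):
                paths.append(p)
    return paths
```

With a tree containing `mkosi.default` plus `mkosi.default.d/00-base` and `mkosi.default.d/10-extra`, this yields the three paths in exactly that order.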
def find_nspawn_settings(args):
if args.nspawn_settings is not None:
@@ -1301,6 +2141,14 @@ def find_extra(args):
else:
args.extra_trees.append("mkosi.extra")
+def find_cache(args):
+
+ if args.cache_path is not None:
+ return
+
+ if os.path.exists("mkosi.cache/"):
+ args.cache_path = "mkosi.cache/" + args.distribution.name + "~" + args.release
+
def find_build_script(args):
if args.build_script is not None:
return
@@ -1314,7 +2162,49 @@ def find_build_sources(args):
args.build_sources = os.getcwd()
-def build_nspawn_settings_path(path):
+def find_postinst_script(args):
+ if args.postinst_script is not None:
+ return
+
+ if os.path.exists("mkosi.postinst"):
+ args.postinst_script = "mkosi.postinst"
+
+def find_passphrase(args):
+
+ if args.encrypt is None:
+ args.passphrase = None
+ return
+
+ try:
+ passphrase_mode = os.stat('mkosi.passphrase').st_mode & (stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
+ if (passphrase_mode & stat.S_IRWXU > 0o600) or (passphrase_mode & (stat.S_IRWXG | stat.S_IRWXO) > 0):
+            die("Permissions {} of 'mkosi.passphrase' are too open. When creating passphrase files please make sure to choose an access mode that restricts access to the owner only. Aborting.\n".format(oct(passphrase_mode)))
+
+ args.passphrase = { 'type': 'file', 'content': 'mkosi.passphrase' }
+
+ except FileNotFoundError:
+ while True:
+ passphrase = getpass.getpass("Please enter passphrase: ")
+ passphrase_confirmation = getpass.getpass("Passphrase confirmation: ")
+ if passphrase == passphrase_confirmation:
+ args.passphrase = { 'type': 'stdin', 'content': passphrase }
+ break
+
+ sys.stderr.write("Passphrase doesn't match confirmation. Please try again.\n")
+
+def find_secure_boot(args):
+ if not args.secure_boot:
+ return
+
+ if args.secure_boot_key is None:
+ if os.path.exists("mkosi.secure-boot.key"):
+ args.secure_boot_key = "mkosi.secure-boot.key"
+
+ if args.secure_boot_certificate is None:
+ if os.path.exists("mkosi.secure-boot.crt"):
+ args.secure_boot_certificate = "mkosi.secure-boot.crt"
+
+def strip_suffixes(path):
t = path
while True:
if t.endswith(".xz"):
@@ -1326,7 +2216,13 @@ def build_nspawn_settings_path(path):
else:
break
- return t + ".nspawn"
+ return t
+
+def build_nspawn_settings_path(path):
+ return strip_suffixes(path) + ".nspawn"
+
+def build_root_hash_file_path(path):
+ return strip_suffixes(path) + ".roothash"
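The middle of `strip_suffixes()` falls outside this hunk; here is a sketch of the whole helper. The ".raw" and ".tar" branches are assumptions, inferred from the output names `image.raw.xz` / `image.tar.xz` used elsewhere in the patch:

```python
def strip_suffixes(path):
    # Repeatedly peel compression/archive suffixes so that both
    # "image.raw.xz" and "image.tar.xz" reduce to "image".
    t = path
    while True:
        if t.endswith(".xz"):
            t = t[:-3]
        elif t.endswith(".raw"):
            t = t[:-4]
        elif t.endswith(".tar"):
            t = t[:-4]
        else:
            break
    return t

def build_root_hash_file_path(path):
    return strip_suffixes(path) + ".roothash"

print(build_root_hash_file_path("image.raw.xz"))  # image.roothash
```

This is why the .gitignore in this commit gains `/image.roothash`: the verity root hash file always sits next to the image, under the stripped base name.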
def load_args():
args = parse_args()
@@ -1339,6 +2235,11 @@ def load_args():
find_extra(args)
find_build_script(args)
find_build_sources(args)
+ find_postinst_script(args)
+ find_passphrase(args)
+ find_secure_boot(args)
+
+ args.force = args.force_count > 0
if args.output_format is None:
args.output_format = OutputFormat.raw_gpt
@@ -1358,16 +2259,19 @@ def load_args():
args.release = r
if args.distribution is None:
- sys.stderr.write("Couldn't detect distribution.\n")
- sys.exit(1)
+ die("Couldn't detect distribution.")
if args.release is None:
if args.distribution == Distribution.fedora:
- args.release = "24"
+ args.release = "25"
elif args.distribution == Distribution.debian:
args.release = "unstable"
elif args.distribution == Distribution.ubuntu:
args.release = "yakkety"
+ elif args.distribution == Distribution.opensuse:
+ args.release = "tumbleweed"
+
+ find_cache(args)
if args.mirror is None:
if args.distribution == Distribution.fedora:
@@ -1376,25 +2280,37 @@ def load_args():
args.mirror = "http://httpredir.debian.org/debian"
elif args.distribution == Distribution.ubuntu:
args.mirror = "http://archive.ubuntu.com/ubuntu"
+ if platform.machine() == "aarch64":
+ args.mirror = "http://ports.ubuntu.com/"
elif args.distribution == Distribution.arch:
args.mirror = "https://mirrors.kernel.org/archlinux"
if platform.machine() == "aarch64":
args.mirror = "http://mirror.archlinuxarm.org"
+ elif args.distribution == Distribution.opensuse:
+ args.mirror = "https://download.opensuse.org"
if args.bootable:
- if args.distribution not in (Distribution.fedora, Distribution.arch, Distribution.debian):
- sys.stderr.write("Bootable images are currently supported only on Debian, Fedora and ArchLinux.\n")
- sys.exit(1)
+ if args.distribution == Distribution.ubuntu:
+ die("Bootable images are currently not supported on Ubuntu.")
+
+ if args.output_format in (OutputFormat.directory, OutputFormat.subvolume, OutputFormat.tar):
+ die("Directory, subvolume and tar images cannot be booted.")
+
+ if args.encrypt is not None:
+ if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+ die("Encryption is only supported for raw gpt, btrfs or squashfs images.")
- if not args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
- sys.stderr.write("Directory, subvolume and tar images cannot be booted.\n")
- sys.exit(1)
+ if args.encrypt == "data" and args.output_format == OutputFormat.raw_btrfs:
+ die("'data' encryption mode not supported on btrfs, use 'all' instead.")
+
+ if args.encrypt == "all" and args.verity:
+ die("'all' encryption mode may not be combined with Verity.")
if args.sign:
args.checksum = True
if args.output is None:
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
if args.xz:
args.output = "image.raw.xz"
else:
@@ -1404,16 +2320,32 @@ def load_args():
else:
args.output = "image"
+ if args.incremental or args.verb == "clean":
+ args.cache_pre_dev = args.output + ".cache-pre-dev"
+ args.cache_pre_inst = args.output + ".cache-pre-inst"
+ else:
+ args.cache_pre_dev = None
+ args.cache_pre_inst = None
+
args.output = os.path.abspath(args.output)
if args.output_format == OutputFormat.tar:
args.xz = True
+ if args.output_format == OutputFormat.raw_squashfs:
+ args.read_only = True
+ args.compress = True
+ args.root_size = None
+
+ if args.verity:
+ args.read_only = True
+ args.output_root_hash_file = build_root_hash_file_path(args.output)
+
if args.checksum:
- args.output_checksum = os.path.join(os.path.dirname(args.output), "SHA256SUM")
+ args.output_checksum = os.path.join(os.path.dirname(args.output), "SHA256SUMS")
if args.sign:
- args.output_signature = os.path.join(os.path.dirname(args.output), "SHA256SUM.gpg")
+ args.output_signature = os.path.join(os.path.dirname(args.output), "SHA256SUMS.gpg")
if args.nspawn_settings is not None:
args.nspawn_settings = os.path.abspath(args.nspawn_settings)
@@ -1425,6 +2357,9 @@ def load_args():
if args.build_sources is not None:
args.build_sources = os.path.abspath(args.build_sources)
+ if args.postinst_script is not None:
+ args.postinst_script = os.path.abspath(args.postinst_script)
+
if args.extra_trees is not None:
for i in range(len(args.extra_trees)):
args.extra_trees[i] = os.path.abspath(args.extra_trees[i])
@@ -1441,23 +2376,38 @@ def load_args():
if args.bootable and args.esp_size is None:
args.esp_size = 256*1024*1024
+ args.verity_size = None
+
if args.bootable and args.kernel_commandline is None:
args.kernel_commandline = "rhgb quiet selinux=0 audit=0 rw"
+ if args.secure_boot_key is not None:
+ args.secure_boot_key = os.path.abspath(args.secure_boot_key)
+
+ if args.secure_boot_certificate is not None:
+ args.secure_boot_certificate = os.path.abspath(args.secure_boot_certificate)
+
+ if args.secure_boot:
+ if args.secure_boot_key is None:
+ die("UEFI SecureBoot enabled, but couldn't find private key. (Consider placing it in mkosi.secure-boot.key?)")
+
+ if args.secure_boot_certificate is None:
+ die("UEFI SecureBoot enabled, but couldn't find certificate. (Consider placing it in mkosi.secure-boot.crt?)")
+
return args
def check_output(args):
for f in (args.output,
args.output_checksum if args.checksum else None,
args.output_signature if args.sign else None,
- args.output_nspawn_settings if args.nspawn_settings is not None else None):
+ args.output_nspawn_settings if args.nspawn_settings is not None else None,
+ args.output_root_hash_file if args.verity else None):
if f is None:
continue
if os.path.exists(f):
- sys.stderr.write("Output file " + f + " exists already. (Consider invocation with --force.)\n")
- sys.exit(1)
+ die("Output file " + f + " exists already. (Consider invocation with --force.)")
def yes_no(b):
return "yes" if b else "no"
@@ -1468,9 +2418,18 @@ def format_bytes_or_disabled(sz):
return format_bytes(sz)
+def format_bytes_or_auto(sz):
+ if sz is None:
+ return "(automatic)"
+
+ return format_bytes(sz)
+
def none_to_na(s):
return "n/a" if s is None else s
+def none_to_no(s):
+ return "no" if s is None else s
+
def none_to_none(s):
return "none" if s is None else s
@@ -1493,128 +2452,269 @@ def print_summary(args):
sys.stderr.write(" Output Checksum: " + none_to_na(args.output_checksum if args.checksum else None) + "\n")
sys.stderr.write(" Output Signature: " + none_to_na(args.output_signature if args.sign else None) + "\n")
sys.stderr.write("Output nspawn Settings: " + none_to_na(args.output_nspawn_settings if args.nspawn_settings is not None else None) + "\n")
+ sys.stderr.write(" Incremental: " + yes_no(args.incremental) + "\n")
- if args.output_format in (OutputFormat.raw_btrfs, OutputFormat.subvolume):
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs, OutputFormat.subvolume):
sys.stderr.write(" Read-only: " + yes_no(args.read_only) + "\n")
+ if args.output_format in (OutputFormat.raw_btrfs, OutputFormat.subvolume):
sys.stderr.write(" FS Compression: " + yes_no(args.compress) + "\n")
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.tar):
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs, OutputFormat.tar):
sys.stderr.write(" XZ Compression: " + yes_no(args.xz) + "\n")
+ sys.stderr.write(" Encryption: " + none_to_no(args.encrypt) + "\n")
+ sys.stderr.write(" Verity: " + yes_no(args.verity) + "\n")
+
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+ sys.stderr.write(" Bootable: " + yes_no(args.bootable) + "\n")
+
+ if args.bootable:
+ sys.stderr.write(" Kernel Command Line: " + args.kernel_commandline + "\n")
+ sys.stderr.write(" UEFI SecureBoot: " + yes_no(args.secure_boot) + "\n")
+
+ if args.secure_boot:
+ sys.stderr.write(" UEFI SecureBoot Key: " + args.secure_boot_key + "\n")
+ sys.stderr.write(" UEFI SecureBoot Cert.: " + args.secure_boot_certificate + "\n")
+
sys.stderr.write("\nPACKAGES:\n")
sys.stderr.write(" Packages: " + line_join_list(args.packages) + "\n")
if args.distribution == Distribution.fedora:
sys.stderr.write(" With Documentation: " + yes_no(args.with_docs) + "\n")
- if args.distribution in (Distribution.fedora, Distribution.debian, Distribution.ubuntu):
- sys.stderr.write(" Package Cache: " + none_to_none(args.cache_path) + "\n")
+ sys.stderr.write(" Package Cache: " + none_to_none(args.cache_path) + "\n")
sys.stderr.write(" Extra Trees: " + line_join_list(args.extra_trees) + "\n")
sys.stderr.write(" Build Script: " + none_to_none(args.build_script) + "\n")
sys.stderr.write(" Build Sources: " + none_to_none(args.build_sources) + "\n")
sys.stderr.write(" Build Packages: " + line_join_list(args.build_packages) + "\n")
+ sys.stderr.write(" Post Inst. Script: " + none_to_none(args.postinst_script) + "\n")
+ sys.stderr.write(" Scripts with network: " + yes_no(args.with_network) + "\n")
sys.stderr.write(" nspawn Settings: " + none_to_none(args.nspawn_settings) + "\n")
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
sys.stderr.write("\nPARTITIONS:\n")
- sys.stderr.write(" Bootable: " + yes_no(args.bootable) + "\n")
- sys.stderr.write(" Root Partition: " + format_bytes(args.root_size) + "\n")
+ sys.stderr.write(" Root Partition: " + format_bytes_or_auto(args.root_size) + "\n")
sys.stderr.write(" Swap Partition: " + format_bytes_or_disabled(args.swap_size) + "\n")
sys.stderr.write(" ESP: " + format_bytes_or_disabled(args.esp_size) + "\n")
sys.stderr.write(" /home Partition: " + format_bytes_or_disabled(args.home_size) + "\n")
sys.stderr.write(" /srv Partition: " + format_bytes_or_disabled(args.srv_size) + "\n")
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.tar):
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs, OutputFormat.tar):
sys.stderr.write("\nVALIDATION:\n")
sys.stderr.write(" Checksum: " + yes_no(args.checksum) + "\n")
sys.stderr.write(" Sign: " + yes_no(args.sign) + "\n")
sys.stderr.write(" GPG Key: " + ("default" if args.key is None else args.key) + "\n")
sys.stderr.write(" Password: " + ("default" if args.password is None else args.password) + "\n")
-def build_image(args, workspace, run_build_script):
+def reuse_cache_tree(args, workspace, run_build_script, for_cache, cached):
+ """If there's a cached version of this tree around, use it and
+ initialize our new root directly from it. Returns a boolean indicating
+ whether we are now operating on a cached version or not."""
+
+ if cached:
+ return True
+
+ if not args.incremental:
+ return False
+ if for_cache:
+ return False
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
+ return False
+
+ fname = args.cache_pre_dev if run_build_script else args.cache_pre_inst
+ if fname is None:
+ return False
+
+ with complete_step('Copying in cached tree ' + fname):
+ try:
+ enumerate_and_copy(fname, os.path.join(workspace, "root"))
+ except FileNotFoundError:
+ return False
+
+ return True
+
+def build_image(args, workspace, run_build_script, for_cache=False):
+
# If there's no build script set, there's no point in executing
- # the build script iteration. Let's quite early.
+ # the build script iteration. Let's quit early.
if args.build_script is None and run_build_script:
- return (None, None)
+ return None, None, None
- tar = None
+ raw, cached = reuse_cache_image(args, workspace.name, run_build_script, for_cache)
+ if not cached:
+ raw = create_image(args, workspace.name, for_cache)
- raw = create_image(args, workspace.name)
with attach_image_loopback(args, raw) as loopdev:
- prepare_swap(args, loopdev)
- prepare_esp(args, loopdev)
- prepare_root(args, loopdev)
- prepare_home(args, loopdev)
- prepare_srv(args, loopdev)
-
- with mount_image(args, workspace.name, loopdev):
- prepare_tree(args, workspace.name)
- mount_cache(args, workspace.name)
- install_distribution(args, workspace.name, run_build_script)
- install_boot_loader(args, workspace.name)
- install_extra_trees(args, workspace.name)
- install_build_src(args, workspace.name, run_build_script)
- install_build_dest(args, workspace.name, run_build_script)
-
- if not run_build_script:
- set_root_password(args, workspace.name)
- make_read_only(args, workspace.name)
- tar = make_tar(args, workspace.name)
-
- return raw, tar
+
+ prepare_swap(args, loopdev, cached)
+ prepare_esp(args, loopdev, cached)
+
+ luks_format_root(args, loopdev, run_build_script, cached)
+ luks_format_home(args, loopdev, run_build_script, cached)
+ luks_format_srv(args, loopdev, run_build_script, cached)
+
+ with luks_setup_all(args, loopdev, run_build_script) as (encrypted_root, encrypted_home, encrypted_srv):
+
+ prepare_root(args, encrypted_root, cached)
+ prepare_home(args, encrypted_home, cached)
+ prepare_srv(args, encrypted_srv, cached)
+
+ with mount_image(args, workspace.name, loopdev, encrypted_root, encrypted_home, encrypted_srv):
+ prepare_tree(args, workspace.name, run_build_script, cached)
+
+ with mount_cache(args, workspace.name):
+ cached = reuse_cache_tree(args, workspace.name, run_build_script, for_cache, cached)
+ install_distribution(args, workspace.name, run_build_script, cached)
+ install_boot_loader(args, workspace.name, cached)
+
+ install_extra_trees(args, workspace.name, for_cache)
+ install_build_src(args, workspace.name, run_build_script, for_cache)
+ install_build_dest(args, workspace.name, run_build_script, for_cache)
+ set_root_password(args, workspace.name, run_build_script, for_cache)
+ run_postinst_script(args, workspace.name, run_build_script, for_cache)
+
+ reset_machine_id(args, workspace.name, run_build_script, for_cache)
+ make_read_only(args, workspace.name, for_cache)
+
+ squashfs = make_squashfs(args, workspace.name, for_cache)
+ insert_squashfs(args, workspace.name, raw, loopdev, squashfs, for_cache)
+
+ verity, root_hash = make_verity(args, workspace.name, encrypted_root, run_build_script, for_cache)
+ patch_root_uuid(args, loopdev, root_hash, for_cache)
+ insert_verity(args, workspace.name, raw, loopdev, verity, root_hash, for_cache)
+
+ # This time we mount read-only, as we already generated
+ # the verity data, and hence really shouldn't modify the
+ # image anymore.
+ with mount_image(args, workspace.name, loopdev, encrypted_root, encrypted_home, encrypted_srv, root_read_only=True):
+ install_unified_kernel(args, workspace.name, run_build_script, for_cache, root_hash)
+ secure_boot_sign(args, workspace.name, run_build_script, for_cache)
+
+ tar = make_tar(args, workspace.name, run_build_script, for_cache)
+
+ return raw, tar, root_hash
+
+def var_tmp(workspace):
+
+ var_tmp = os.path.join(workspace, "var-tmp")
+ try:
+ os.mkdir(var_tmp)
+ except FileExistsError:
+ pass
+
+ return var_tmp
def run_build_script(args, workspace, raw):
if args.build_script is None:
return
- print_step("Running build script...")
+ with complete_step('Running build script'):
+ dest = os.path.join(workspace, "dest")
+ os.mkdir(dest, 0o755)
+
+ target = "--directory=" + os.path.join(workspace, "root") if raw is None else "--image=" + raw.name
+
+ cmdline = ["systemd-nspawn",
+ '--quiet',
+ target,
+ "--uuid=" + args.machine_id,
+ "--as-pid2",
+ "--register=no",
+ "--bind", dest + ":/root/dest",
+ "--bind=" + var_tmp(workspace) + ":/var/tmp",
+ "--setenv=WITH_DOCS=" + ("1" if args.with_docs else "0"),
+ "--setenv=DESTDIR=/root/dest"]
+
+ if args.build_sources is not None:
+ cmdline.append("--setenv=SRCDIR=/root/src")
+ cmdline.append("--chdir=/root/src")
+
+ if args.read_only:
+ cmdline.append("--overlay=+/root/src::/root/src")
+ else:
+ cmdline.append("--chdir=/root")
- dest = os.path.join(workspace, "dest")
- os.mkdir(dest, 0o755)
+ if not args.with_network:
+ cmdline.append("--private-network")
- cmdline = ["systemd-nspawn",
- '--quiet',
- "--directory=" + os.path.join(workspace, "root") if raw is None else "--image=" + raw.name,
- "--as-pid2",
- "--private-network",
- "--register=no",
- "--bind", dest + ":/root/dest",
- "--setenv=WITH_DOCS=" + ("1" if args.with_docs else "0"),
- "--setenv=DESTDIR=/root/dest"]
+ cmdline.append("/root/" + os.path.basename(args.build_script))
+ subprocess.run(cmdline, check=True)
- if args.build_sources is not None:
- cmdline.append("--setenv=SRCDIR=/root/src")
- cmdline.append("--chdir=/root/src")
+def need_cache_images(args):
+
+ if not args.incremental:
+ return False
+
+ if args.force_count > 1:
+ return True
+
+ return not os.path.exists(args.cache_pre_dev) or not os.path.exists(args.cache_pre_inst)
+
+def remove_artifacts(args, workspace, raw, tar, run_build_script, for_cache=False):
+
+ if for_cache:
+ what = "cache build"
+ elif run_build_script:
+ what = "development build"
else:
- cmdline.append("--chdir=/root")
+ return
- cmdline.append("/root/" + os.path.basename(args.build_script))
+ if raw is not None:
+ with complete_step("Removing disk image from " + what):
+ del raw
- print(cmdline)
- subprocess.run(cmdline, check=True)
+ if tar is not None:
+ with complete_step("Removing tar image from " + what):
+ del tar
- print_step("Running build script completed.")
+ with complete_step("Removing artifacts from " + what):
+ unlink_try_hard(os.path.join(workspace, "root"))
+ unlink_try_hard(os.path.join(workspace, "var-tmp"))
def build_stuff(args):
+
+    # Let's define a fixed machine ID for all our build-time
+    # runs. We'll strip it off the final image, but some build-time
+    # tools (dracut...) want a fixed one, hence provide one, and
+    # always the same one.
+ args.machine_id = uuid.uuid4().hex
+
cache = setup_cache(args)
workspace = setup_workspace(args)
- # Run the image builder twice, once for running the build script and once for the final build
- raw, tar = build_image(args, workspace, run_build_script=True)
+    # If caching is requested, then make sure we have cache images around that we can make use of
+ if need_cache_images(args):
- run_build_script(args, workspace.name, raw)
+ # Generate the cache version of the build image, and store it as "cache-pre-dev"
+ raw, tar, root_hash = build_image(args, workspace, run_build_script=True, for_cache=True)
+ save_cache(args,
+ workspace.name,
+ raw.name if raw is not None else None,
+ args.cache_pre_dev)
- if raw is not None:
- del raw
+ remove_artifacts(args, workspace.name, raw, tar, run_build_script=True)
- if tar is not None:
- del tar
+    # Generate the cache version of the install image, and store it as "cache-pre-inst"
+ raw, tar, root_hash = build_image(args, workspace, run_build_script=False, for_cache=True)
+ save_cache(args,
+ workspace.name,
+ raw.name if raw is not None else None,
+ args.cache_pre_inst)
+ remove_artifacts(args, workspace.name, raw, tar, run_build_script=False)
+
+    # Run the image builder for the first (development) stage in preparation for the build script
+ raw, tar, root_hash = build_image(args, workspace, run_build_script=True)
- raw, tar = build_image(args, workspace, run_build_script=False)
+ run_build_script(args, workspace.name, raw)
+ remove_artifacts(args, workspace.name, raw, tar, run_build_script=True)
+
+ # Run the image builder for the second (final) stage
+ raw, tar, root_hash = build_image(args, workspace, run_build_script=False)
raw = xz_output(args, raw)
+ root_hash_file = write_root_hash_file(args, root_hash)
settings = copy_nspawn_settings(args)
- checksum = calculate_sha256sum(args, raw, tar, settings)
+ checksum = calculate_sha256sum(args, raw, tar, root_hash_file, settings)
signature = calculate_signature(args, checksum)
link_output(args,
@@ -1622,6 +2722,8 @@ def build_stuff(args):
raw.name if raw is not None else None,
tar.name if tar is not None else None)
+ link_output_root_hash_file(args, root_hash_file.name if root_hash_file is not None else None)
+
link_output_checksum(args,
checksum.name if checksum is not None else None)
@@ -1631,15 +2733,19 @@ def build_stuff(args):
link_output_nspawn_settings(args,
settings.name if settings is not None else None)
+ if root_hash is not None:
+ print_step("Root hash is {}.".format(root_hash))
+
+def check_root():
+ if os.getuid() != 0:
+ die("Must be invoked as root.")
+
def main():
args = load_args()
- if os.getuid() != 0:
- sys.stderr.write("Must be invoked as root.\n")
- sys.exit(1)
-
if args.verb in ("build", "clean"):
+ check_root()
unlink_output(args)
if args.verb == "build":
@@ -1649,6 +2755,7 @@ def main():
print_summary(args)
if args.verb == "build":
+ check_root()
init_namespace(args)
build_stuff(args)
print_output_size(args)
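The incremental-cache decision driving the hunks above is compact enough to restate standalone. The following is a simplified, hypothetical re-statement of the `need_cache_images` logic from the patch, not mkosi's actual code:

```python
import os

def need_cache_images(incremental, force_count, cache_pre_dev, cache_pre_inst):
    # Cache images are (re)built only in incremental mode, and then
    # either when --force was given more than once or when one of the
    # two cached trees is missing on disk.
    if not incremental:
        return False
    if force_count > 1:
        return True
    return not os.path.exists(cache_pre_dev) or not os.path.exists(cache_pre_inst)

print(need_cache_images(False, 0, "dev.cache", "inst.cache"))       # not incremental
print(need_cache_images(True, 2, "dev.cache", "inst.cache"))        # forced rebuild
print(need_cache_images(True, 0, "/no/such/dev", "/no/such/inst"))  # caches missing
```

Note the asymmetry with `build_stuff`: this predicate only decides whether the two cache passes (`for_cache=True`) run at all; the two real build passes always follow.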
diff --git a/mkosi.default b/mkosi.default
new file mode 100644
index 0000000..6edd6a5
--- /dev/null
+++ b/mkosi.default
@@ -0,0 +1,22 @@
+# Let's build an image that is just good enough to build new mkosi images again
+
+[Distribution]
+Distribution=fedora
+Release=25
+
+[Output]
+Format=raw_squashfs
+Bootable=yes
+
+[Packages]
+Packages=
+ arch-install-scripts
+ btrfs-progs
+ debootstrap
+ dnf
+ dosfstools
+ git
+ gnupg
+ squashfs-tools
+ tar
+ veritysetup
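The new `mkosi.default` above uses plain INI syntax, so it can be inspected with any INI reader. A minimal sketch using Python's stock `configparser` on a trimmed copy of the file (mkosi's own parsing may differ in details):

```python
import configparser

# A trimmed copy of the settings file above; the multi-line
# Packages= value parses as one indented continuation block.
SAMPLE = """\
[Distribution]
Distribution=fedora
Release=25

[Output]
Format=raw_squashfs
Bootable=yes

[Packages]
Packages=
  dnf
  tar
"""

cp = configparser.ConfigParser()
cp.read_string(SAMPLE)
print(cp["Distribution"]["Release"])       # 25
print(cp["Output"]["Format"])              # raw_squashfs
print(cp["Packages"]["Packages"].split())  # ['dnf', 'tar']
```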
diff --git a/setup.py b/setup.py
index 8e41779..2830a0c 100755
--- a/setup.py
+++ b/setup.py
@@ -1,5 +1,10 @@
#!/usr/bin/python3
+import sys
+
+if sys.version_info < (3, 5):
+ sys.exit("Sorry, we need at least Python 3.5.")
+
from setuptools import setup
setup(
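The `setup.py` guard works because `sys.version_info` compares like a tuple, lexicographically. A quick illustration of the comparison semantics it relies on (not part of the patch):

```python
# The guard in setup.py relies on lexicographic tuple comparison:
# (major, minor) pairs compare element by element.
print((3, 4) < (3, 5))   # True  -> such an interpreter would exit
print((3, 6) < (3, 5))   # False -> new enough, setup proceeds
print((2, 7) < (3, 5))   # True  -> Python 2 is rejected as well
```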