author    Felipe Sateler <fsateler@debian.org>    2019-02-03 17:32:41 -0300
committer Felipe Sateler <fsateler@debian.org>    2019-02-03 17:32:41 -0300
commit    2e218703225835565700bce3f1d06349b21e9d20 (patch)
tree      3a4028ff4ead55538163bb680d75fb5481c98b59
parent    40001c93497d8a7ddd4b33e31e0ed21f2694635b (diff)
New upstream version 4+20190203
-rw-r--r--               .gitignore                  3
-rw-r--r--               README.md                 382
-rw-r--r--               TODO.md                    21
-rwxr-xr-x               ci/semaphore.sh            11
-rwxr-xr-x               mkosi                    2562
l--------- [-rw-r--r--]  mkosi.default              24
-rw-r--r--               mkosi.files/mkosi.fedora   24
-rw-r--r--               mkosi.files/mkosi.ubuntu   13
-rw-r--r--               mkosi.md                 1114
l---------               mkosi.py                    1
-rw-r--r--               setup.cfg                   5
-rwxr-xr-x               setup.py                    7
12 files changed, 2921 insertions, 1246 deletions
diff --git a/.gitignore b/.gitignore
index 2bc1175..5ea1485 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,5 +1,6 @@
*.cache-pre-dev
*.cache-pre-inst
+.mypy_cache/
/.mkosi-*
/SHA256SUMS
/SHA256SUMS.gpg
@@ -16,3 +17,5 @@
/mkosi.egg-info
/mkosi.extra
/mkosi.nspawn
+/mkosi.rootpw
+.mypy_cache/
diff --git a/README.md b/README.md
index 30d20f7..ed944ef 100644
--- a/README.md
+++ b/README.md
@@ -4,387 +4,9 @@ A fancy wrapper around `dnf --installroot`, `debootstrap`,
`pacstrap` and `zypper` that may generate disk images with a number of
bells and whistles.
-# Supported output formats
+For a longer description and a list of available features and options, see the
+[man page](mkosi.md).
-The following output formats are supported:
-
-* Raw *GPT* disk image, with ext4 as root (*raw_gpt*)
-
-* Raw *GPT* disk image, with btrfs as root (*raw_btrfs*)
-
-* Raw *GPT* disk image, with squashfs as read-only root (*raw_squashfs*)
-
-* Plain directory, containing the *OS* tree (*directory*)
-
-* btrfs subvolume, with separate subvolumes for `/var`, `/home`,
- `/srv`, `/var/tmp` (*subvolume*)
-
-* Tarball (*tar*)
-
-When a *GPT* disk image is created, the following additional
-options are available:
-
-* A swap partition may be added in
-
-* The image may be made bootable on *EFI* systems
-
-* Separate partitions for `/srv` and `/home` may be added in
-
-* The root, /srv and /home partitions may optionally be encrypted with
- LUKS.
-
-* A dm-verity partition may be added in that adds runtime integrity
- data for the root partition
-
-# Compatibility
-
-Generated images are *legacy-free*. This means only *GPT* disk
-labels (and no *MBR* disk labels) are supported, and only
-systemd based images may be generated. Moreover, for bootable
-images only *EFI* systems are supported (not plain *MBR/BIOS*).
-
-All generated *GPT* disk images may be booted in a local
-container directly with:
-
-```bash
-systemd-nspawn -bi image.raw
-```
-
-Additionally, bootable *GPT* disk images (as created with the
-`--bootable` flag) work when booted directly by *EFI* systems, for
-example in *KVM* via:
-
-```bash
-qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=image.raw
-```
-
-*EFI* bootable *GPT* images are larger than plain *GPT* images, as
-they additionally carry an *EFI* system partition containing a
-boot loader, as well as a kernel, kernel modules, udev and
-more.
-
-All directory or btrfs subvolume images may be booted directly
-with:
-
-```bash
-systemd-nspawn -bD image
-```
-
-# Other features
-
-* Optionally, create an *SHA256SUMS* checksum file for the result,
- possibly even signed via gpg.
-
-* Optionally, place a specific `.nspawn` settings file along
- with the result.
-
-* Optionally, build a local project's *source* tree in the image
- and add the result to the generated image (see below).
-
-* Optionally, share *RPM*/*DEB* package cache between multiple runs,
- in order to optimize build speeds.
-
-* Optionally, the resulting image may be compressed with *XZ*.
-
-* Optionally, btrfs' read-only flag for the root subvolume may be
- set.
-
-* Optionally, btrfs' compression may be enabled for all
- created subvolumes.
-
-* By default, images are created without the files marked as
-  documentation in the packages, on distributions where the
- package manager supports this. Use the `--with-docs` flag to
- build an image with docs added.
-
-# Supported distributions
-
-Images may be created containing installations of the
-following *OS*es.
-
-* *Fedora*
-
-* *Debian*
-
-* *Ubuntu*
-
-* *Arch Linux*
-
-* *openSUSE*
-
-* *Mageia*
-
-* *CentOS*
-
-* *Clear Linux*
-
-In theory, any distribution may be used on the host for building
-images containing any other distribution, as long as the necessary
-tools are available. Specifically, any distro that packages
-`debootstrap` may be used to build *Debian* or *Ubuntu* images. Any
-distro that packages `dnf` may be used to build *Fedora* or *Mageia*
-images. Any distro that packages `pacstrap` may be used to build *Arch
-Linux* images. Any distro that packages `zypper` may be used to build
-*openSUSE* images. Any distro that packages `yum` (or the newer
-replacement `dnf`) may be used to build *CentOS* images.
-
-Currently, *Fedora* packages all relevant tools as of Fedora 26.
-
-# Files
-
-To make it easy to build images for development versions of
-your projects, mkosi can read configuration data from the
-local directory, under the assumption that it is invoked from
-a *source* tree. Specifically, the following files are used if
-they exist in the local directory:
-
-* `mkosi.default` may be used to configure mkosi's image
- building process. For example, you may configure the
- distribution to use (`fedora`, `ubuntu`, `debian`, `archlinux`,
- `opensuse`, `mageia`) for the image, or additional
- distribution packages to install. Note that all options encoded
- in this configuration file may also be set on the command line,
- and this file is hence little more than a way to make sure simply
- typing `mkosi` without further parameters in your *source* tree is
- enough to get the right image of your choice set up.
-  Additionally, if a `mkosi.default.d` directory exists, each file in it
-  is loaded in the same manner, adding to or overriding the values specified in
- `mkosi.default`. Command-line arguments, as shown in the help
- description, have to be included in a configuration block (e.g.
- "[Packages]") corresponding to the argument group (e.g. "Packages"),
- and the argument gets converted as follows: "--with-network" becomes
- "WithNetwork=yes".
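  For instance, a drop-in encoding that very conversion could look like
  this (the file name `mkosi.default.d/10-network.conf` is only
  illustrative):

  ```ini
  [Packages]
  WithNetwork=yes
  ```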
-
-* `mkosi.extra/` or `mkosi.extra.tar` may be a directory or an archive,
-  respectively. If either exists, all files contained in it are copied over the
-  directory tree of the image after the *OS* has been installed. This may be
-  used to add additional files to an image, on top of what the distribution
-  includes in its packages. When using a directory, file ownership is not
-  preserved: all files copied will be owned by root. To preserve ownership,
-  use a tar archive.
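  For example, to preserve ownership you can create the archive yourself
  (a sketch; `extra-tree/` is a hypothetical staging directory):

  ```bash
  tar -C extra-tree -cf mkosi.extra.tar .
  ```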
-
-* `mkosi.skeleton/` or `mkosi.skeleton.tar` may be a directory or an archive,
-  respectively, and they work in the same way as
-  `mkosi.extra/`/`mkosi.extra.tar`. However, the files are copied before
-  anything else, so as to provide a skeleton tree for the *OS*. This allows
-  changing the package manager and creating files that need to be there
-  before anything is installed. When using a directory, file ownership is not
-  preserved: all files copied will be owned by root. To preserve ownership,
-  use a tar archive.
-
-* `mkosi.build` may be an executable script. If it exists the image
- will be built twice: the first iteration will be the *development*
- image, the second iteration will be the *final* image. The
- *development* image is used to build the project in the current
- working directory (the *source* tree). For that the whole directory
- is copied into the image, along with the mkosi.build build
- script. The script is then invoked inside the image (via
- `systemd-nspawn`), with `$SRCDIR` pointing to the *source*
- tree. `$DESTDIR` points to a directory where the script should place
-  any files it generates that should end up in the *final*
-  image. Note that `make`/`automake`-based build systems generally
-  honour `$DESTDIR`, thus making it very natural to build *source*
-  trees from the build script. After the *development* image has been built
-  and the build script has run inside of it, the image is removed again. After
-  that the *final* image is built, without any *source* tree or build
- script copied in. However, this time the contents of `$DESTDIR` are
- added into the image.
-
- When the source tree is copied into the *build* image, all files are
- copied, except for `mkosi.builddir/`, `mkosi.cache/` and
- `mkosi.output/`. That said, `.gitignore` is respected if the source
- tree is a `git` checkout. If multiple different images shall be
- built from the same source tree it's essential to exclude their
- output files from this copy operation, as otherwise a version of an
- image built earlier might be included in a later build, which is
- usually not intended. An alternative to excluding these built images
- via `.gitignore` entries is making use of the `mkosi.output/`
- directory (see below), which is an easy way to exclude all build
- artifacts.
-
-* `mkosi.postinst` may be an executable script. If it exists it is
- invoked as last step of preparing an image, from within the image
-  context. It is called once for the *development* image (if this is
-  enabled, see above) with the "build" command line parameter, right
- before invoking the build script. It is called a second time for the
- *final* image with the "final" command line parameter, right before
- the image is considered complete. This script may be used to alter
- the images without any restrictions, after all software packages and
- built sources have been installed. Note that this script is executed
- directly in the image context with the final root directory in
- place, without any `$SRCDIR`/`$DESTDIR` setup.
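  A minimal sketch of such a script, branching on the parameter described
  above (the enabled service is only an illustration):

  ```bash
  #!/bin/sh
  set -e
  # $1 is "build" for the development image and "final" for the final image
  if [ "$1" = "final" ]; then
      systemctl enable sshd.service
  fi
  ```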
-
-* `mkosi.nspawn` may be an nspawn settings file. If this exists
- it will be copied into the same place as the output image
- file. This is useful since nspawn looks for settings files
- next to image files it boots, for additional container
- runtime settings.
-
-* `mkosi.cache/` may be a directory. If so, it is automatically used as
- package download cache, in order to speed repeated runs of the tool.
-
-* `mkosi.builddir/` may be a directory. If so, it is automatically
- used as out-of-tree build directory, if the build commands in the
- `mkosi.build` script support it. Specifically, this directory will
-  be mounted into the build container, and the `$BUILDDIR`
- environment variable will be set to it when the build script is
- invoked. The build script may then use this directory as build
- directory, for automake-style or ninja-style out-of-tree
- builds. This speeds up builds considerably, in particular when
- `mkosi` is used in incremental mode (`-i`): not only the disk images
- but also the build tree is reused between subsequent
-  invocations. Note that if this directory does not exist, the
-  `$BUILDDIR` environment variable is not set, and it is up to the build
-  script to decide whether to do an in-tree or an out-of-tree build,
-  and which build directory to use.
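  A build script might honour `$BUILDDIR` along these lines (a sketch for
  an autotools-style project):

  ```bash
  #!/bin/sh
  set -e
  if [ -n "$BUILDDIR" ]; then
      # mkosi.builddir/ exists: do an out-of-tree build that is reused
      # across incremental runs
      cd "$BUILDDIR"
      "$SRCDIR/configure" --prefix=/usr
  else
      cd "$SRCDIR"
      ./configure --prefix=/usr
  fi
  make -j "$(nproc)"
  make install
  ```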
-
-* `mkosi.rootpw` may be a file containing the password for the root
- user of the image to set. The password may optionally be followed by
- a newline character which is implicitly removed. The file must have
-  an access mode of 0600 or less. If this file does not exist, the
- distribution's default root password is set (which usually means
- access to the root user is blocked).
-
-* `mkosi.passphrase` may be a passphrase file to use when LUKS
- encryption is selected. It should contain the passphrase literally,
- and not end in a newline character (i.e. in the same format as
- cryptsetup and /etc/crypttab expect the passphrase files). The file
- must have an access mode of 0600 or less. If this file does not
-  exist and encryption is requested, the user is queried instead.
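  For instance, both files might be created like this (placeholder
  secrets; `printf` avoids the trailing newline that `mkosi.passphrase`
  must not contain):

  ```bash
  # echo 'root-password' > mkosi.rootpw
  # printf '%s' 'luks-passphrase' > mkosi.passphrase
  # chmod 0600 mkosi.rootpw mkosi.passphrase
  ```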
-
-* `mkosi.secure-boot.crt` and `mkosi.secure-boot.key` may contain an
- X509 certificate and PEM private key to use when UEFI SecureBoot
- support is enabled. All EFI binaries included in the image's ESP are
- signed with this key, as a late step in the build process.
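  For testing, a self-signed pair can be generated with `openssl`, for
  example (the subject name is arbitrary):

  ```bash
  openssl req -new -x509 -newkey rsa:2048 -nodes -days 365 \
      -subj '/CN=mkosi Secure Boot/' \
      -keyout mkosi.secure-boot.key -out mkosi.secure-boot.crt
  ```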
-
-* `mkosi.output/` may be a directory. If it exists, and the image
- output path is not configured (i.e. no `--output=` setting
- specified), or configured to a filename (i.e. a path containing no
-  `/` character), all build artifacts (that is: the image itself, the
- root hash file in case Verity is used, the checksum and its
- signature if that's enabled, and the nspawn settings file if there
- is any) are placed in this directory. Note that this directory is
- not used if the image output path contains at least one slash, and
- has no effect in that case. This setting is particularly useful if
- multiple different images shall be built from the same working
-  directory, as otherwise the build result of a preceding run might
- be copied into a build image as part of the source tree (see above).
-
-All these files are optional.
-
-Note that the location of all these files may also be
-configured during invocation via command line switches, and as
-settings in `mkosi.default`, in case the default settings are
-not acceptable for a project.
-
-# Examples
-
-Create and run a raw *GPT* image with *ext4*, as `image.raw`:
-
-```bash
-# mkosi
-# systemd-nspawn -b -i image.raw
-```
-
-Create and run a bootable btrfs *GPT* image, as `foobar.raw`:
-
-```bash
-# mkosi -t raw_btrfs --bootable -o foobar.raw
-# systemd-nspawn -b -i foobar.raw
-# qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=foobar.raw
-```
-
-Create and run a *Fedora* image into a plain directory:
-
-```bash
-# mkosi -d fedora -t directory -o quux
-# systemd-nspawn -b -D quux
-```
-
-Create a compressed image `image.raw.xz` and add a checksum file, and
-install *SSH* into it:
-
-```bash
-# mkosi -d fedora -t raw_squashfs --checksum --xz --package=openssh-clients
-```
-
-Inside the source directory of an `automake`-based project,
-configure *mkosi* so that simply invoking `mkosi` without any
-parameters builds an *OS* image containing a built version of
-the project in its current state:
-
-```bash
-# cat > mkosi.default <<EOF
-[Distribution]
-Distribution=fedora
-Release=24
-
-[Output]
-Format=raw_btrfs
-Bootable=yes
-
-[Packages]
-Packages=openssh-clients httpd
-BuildPackages=make gcc libcurl-devel
-EOF
-# cat > mkosi.build <<EOF
-#!/bin/sh
-cd $SRCDIR
-./autogen.sh
-./configure --prefix=/usr
-make -j `nproc`
-make install
-EOF
-# chmod +x mkosi.build
-# mkosi
-# systemd-nspawn -bi image.raw
-```
-
-To create a *Fedora* image with a custom hostname:
-```bash
-# mkosi -d fedora --hostname image
-```
-
-Alternatively, the hostname may be set in the configuration file:
-```bash
-# cat mkosi.default
-...
-[Output]
-Hostname=image
-...
-```
-
-# Requirements
-
-mkosi is packaged for various distributions: Debian, Ubuntu, Arch (in AUR), Fedora.
-It is usually easiest to use the distribution package.
-
-The current version requires systemd 233 (or rather, its systemd-nspawn).
-
-When not using distribution packages make sure to install the
-necessary dependencies. For example, on *Fedora* you need:
-
-```bash
-dnf install arch-install-scripts btrfs-progs debootstrap dosfstools edk2-ovmf squashfs-tools gnupg python3 tar veritysetup xz zypper
-```
-
-On Debian/Ubuntu it might be necessary to install the `ubuntu-keyring`,
-`ubuntu-archive-keyring` and/or `debian-archive-keyring` packages explicitly,
-in addition to `debootstrap`, depending on what kind of distribution images
-you want to build. `debootstrap` on Debian only pulls in the Debian keyring
-on its own, and the version on Ubuntu only pulls in the Ubuntu one.
-
-Note that the minimum required Python version is 3.5.
-
-If SecureBoot signing is to be used, the "sbsign" tool needs to be
-installed as well. It is currently not packaged in Fedora itself, but
-is available from a COPR repository:
-
-```bash
-dnf copr enable msekleta/sbsigntool
-dnf install sbsigntool
-```
# References
* [Primary mkosi git repository on GitHub](https://github.com/systemd/mkosi/)
diff --git a/TODO.md b/TODO.md
new file mode 100644
index 0000000..1f5aba8
--- /dev/null
+++ b/TODO.md
@@ -0,0 +1,21 @@
+# TODO
+
+* volatile images
+
+* work on device nodes
+
+* allow passing env vars
+
+* mkosi --all (for building everything in mkosi.files/)
+
+* --architecture= is chaos: we need to define a clear vocabulary of
+ architectures that can deal with the different names of
+ architectures in debian, fedora and uname.
+
+* squashfs root images with /home and /srv on ext4
+
+* optionally output the root partition (+ verity) and the unified
+ kernel image as additional artifacts, so that they can be used in
+ automatic updating schemes (i.e. take an old image that is currently
+ in use, add a root partition with the new root image (+ verity), and
+  drop the new kernel into the ESP, and an update is complete.)
diff --git a/ci/semaphore.sh b/ci/semaphore.sh
new file mode 100755
index 0000000..759b908
--- /dev/null
+++ b/ci/semaphore.sh
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+set -ex
+
+sudo add-apt-repository --yes ppa:jonathonf/python-3.6
+sudo apt --yes update
+sudo apt --yes install python3.6 debootstrap systemd-container squashfs-tools
+
+sudo python3.6 ./mkosi --default ./mkosi.files/mkosi.ubuntu
+
+test -f ubuntu.raw
diff --git a/mkosi b/mkosi
index b8afd04..d990403 100755
--- a/mkosi
+++ b/mkosi
@@ -3,62 +3,157 @@
# SPDX-License-Identifier: LGPL-2.1+
import argparse
+import collections
import configparser
import contextlib
+import copy
import crypt
-import ctypes, ctypes.util
+import ctypes
+import ctypes.util
+import enum
import errno
import fcntl
import getpass
import glob
import hashlib
import os
-import pathlib
import platform
+import re
+import shlex
import shutil
import stat
import string
+import subprocess
import sys
import tempfile
import urllib.request
import uuid
+from subprocess import DEVNULL, PIPE
+from typing import (
+ IO,
+ Any,
+ BinaryIO,
+ Callable,
+ Dict,
+ Generator,
+ Iterable,
+ List,
+ NamedTuple,
+ NoReturn,
+ Optional,
+ Sequence,
+ Set,
+ TextIO,
+ Tuple,
+ TypeVar,
+ Union,
+ cast,
+)
-try:
- import argcomplete
-except ImportError:
- pass
+__version__ = '4'
-from enum import Enum
-from subprocess import run, DEVNULL, PIPE
+if sys.version_info < (3, 6):
+ sys.exit("Sorry, we need at least Python 3.6.")
-__version__ = '4'
+# This global should be initialized after parsing arguments
+arg_debug = ()
-if sys.version_info < (3, 5):
- sys.exit("Sorry, we need at least Python 3.5.")
-# TODO
-# - volatile images
-# - make ubuntu images bootable
-# - work on device nodes
-# - allow passing env vars
+def run(cmdline: List[str], execvp: bool = False, **kwargs: Any) -> subprocess.CompletedProcess:
+ if 'run' in arg_debug:
+ sys.stderr.write('+ ' + ' '.join(shlex.quote(x) for x in cmdline) + '\n')
+ if execvp:
+ assert not kwargs
+ os.execvp(cmdline[0], cmdline)
+ else:
+ return subprocess.run(cmdline, **kwargs)
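A sketch of the debug echoing above (assuming `arg_debug` is filled in
from a command line option during argument parsing):

```python
arg_debug = ('run',)          # as if debug output for "run" had been requested
run(['echo', 'hello world'])  # stderr: + echo 'hello world'
```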
+
-def die(message, status=1):
+def die(message: str, status: int = 1) -> NoReturn:
assert status >= 1 and status < 128
sys.stderr.write(message + "\n")
sys.exit(status)
-def warn(message, *args, **kwargs):
+
+def warn(message: str, *args: Any, **kwargs: Any) -> None:
sys.stderr.write('WARNING: ' + message.format(*args, **kwargs) + '\n')
-class OutputFormat(Enum):
- raw_gpt = 1
- raw_btrfs = 2
- raw_squashfs = 3
- directory = 4
- subvolume = 5
- tar = 6
-class Distribution(Enum):
+class CommandLineArguments(argparse.Namespace):
+ """Type-hinted storage for command line arguments."""
+
+ swap_partno: Optional[int] = None
+ esp_partno: Optional[int] = None
+
+
+class SourceFileTransfer(enum.Enum):
+ copy_all = "copy-all"
+ copy_git_cached = "copy-git-cached"
+ copy_git_others = "copy-git-others"
+ mount = "mount"
+
+ def __str__(self):
+ return self.value
+
+ @classmethod
+ def doc(cls):
+ return {cls.copy_all: "normal file copy",
+ cls.copy_git_cached: "use git-ls-files --cached, ignoring any file that git itself ignores",
+ cls.copy_git_others: "use git-ls-files --others, ignoring any file that git itself ignores",
+ cls.mount: "bind mount source files into the build image"}
+
+
+class OutputFormat(enum.Enum):
+ directory = enum.auto()
+ subvolume = enum.auto()
+ tar = enum.auto()
+
+ gpt_ext4 = enum.auto()
+ gpt_xfs = enum.auto()
+ gpt_btrfs = enum.auto()
+ gpt_squashfs = enum.auto()
+
+ plain_squashfs = enum.auto()
+
+ # Kept for backwards compatibility
+ raw_ext4 = raw_gpt = gpt_ext4
+ raw_xfs = gpt_xfs
+ raw_btrfs = gpt_btrfs
+ raw_squashfs = gpt_squashfs
+
+ def __repr__(self) -> str:
+ """Return the member name without the class name"""
+ return self.name
+
+ def __str__(self) -> str:
+ """Return the member name without the class name"""
+ return self.name
+
+ @classmethod
+ def from_string(cls, name: str) -> 'OutputFormat':
+ """A convenience method to be used with argparse"""
+ try:
+ return cls[name]
+ except KeyError:
+            # this lets argparse generate a proper error message
+ return name # type: ignore
+
+ def is_disk_rw(self) -> bool:
+        "Output format is a disk image with a partition table and a writable filesystem"
+ return self in (OutputFormat.gpt_ext4,
+ OutputFormat.gpt_xfs,
+ OutputFormat.gpt_btrfs)
+
+ def is_disk(self) -> bool:
+ "Output format is a disk image with a partition table"
+ return self.is_disk_rw() or self == OutputFormat.gpt_squashfs
+
+ def is_squashfs(self) -> bool:
+ "The output format contains a squashfs partition"
+ return self in {OutputFormat.gpt_squashfs, OutputFormat.plain_squashfs}
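Since `from_string()` returns an unknown name unchanged, it pairs
naturally with argparse's `choices=` validation; a plausible wiring (the
option name is illustrative):

```python
parser = argparse.ArgumentParser()
parser.add_argument('--format', type=OutputFormat.from_string,
                    choices=list(OutputFormat))
# An unknown value comes back as a plain string, fails the choices
# check, and argparse reports its usual "invalid choice" error.
```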
+
+
+class Distribution(enum.Enum):
fedora = 1
debian = 2
ubuntu = 3
@@ -68,29 +163,31 @@ class Distribution(Enum):
centos = 7
clear = 8
-GPT_ROOT_X86 = uuid.UUID("44479540f29741b29af7d131d5f0458a")
-GPT_ROOT_X86_64 = uuid.UUID("4f68bce3e8cd4db196e7fbcaf984b709")
-GPT_ROOT_ARM = uuid.UUID("69dad7102ce44e3cb16c21a1d49abed3")
-GPT_ROOT_ARM_64 = uuid.UUID("b921b0451df041c3af444c6f280d3fae")
-GPT_ROOT_IA64 = uuid.UUID("993d8d3df80e4225855a9daf8ed7ea97")
-GPT_ESP = uuid.UUID("c12a7328f81f11d2ba4b00a0c93ec93b")
-GPT_SWAP = uuid.UUID("0657fd6da4ab43c484e50933c84b4f4f")
-GPT_HOME = uuid.UUID("933ac7e12eb44f13b8440e14e2aef915")
-GPT_SRV = uuid.UUID("3b8f842520e04f3b907f1a25a76f98e8")
-GPT_ROOT_X86_VERITY = uuid.UUID("d13c5d3bb5d1422ab29f9454fdc89d76")
-GPT_ROOT_X86_64_VERITY = uuid.UUID("2c7357edebd246d9aec123d437ec2bf5")
-GPT_ROOT_ARM_VERITY = uuid.UUID("7386cdf2203c47a9a498f2ecce45a2d6")
-GPT_ROOT_ARM_64_VERITY = uuid.UUID("df3300ced69f4c92978c9bfb0f38d820")
-GPT_ROOT_IA64_VERITY = uuid.UUID("86ed10d5b60745bb8957d350f23d0571")
-
-if platform.machine() == "x86_64":
- GPT_ROOT_NATIVE = GPT_ROOT_X86_64
- GPT_ROOT_NATIVE_VERITY = GPT_ROOT_X86_64_VERITY
-elif platform.machine() == "aarch64":
- GPT_ROOT_NATIVE = GPT_ROOT_ARM_64
- GPT_ROOT_NATIVE_VERITY = GPT_ROOT_ARM_64_VERITY
-else:
- die("Don't know the %s architecture." % platform.machine())
+
+GPT_ROOT_X86 = uuid.UUID("44479540f29741b29af7d131d5f0458a") # NOQA: E221
+GPT_ROOT_X86_64 = uuid.UUID("4f68bce3e8cd4db196e7fbcaf984b709") # NOQA: E221
+GPT_ROOT_ARM = uuid.UUID("69dad7102ce44e3cb16c21a1d49abed3") # NOQA: E221
+GPT_ROOT_ARM_64 = uuid.UUID("b921b0451df041c3af444c6f280d3fae") # NOQA: E221
+GPT_ROOT_IA64 = uuid.UUID("993d8d3df80e4225855a9daf8ed7ea97") # NOQA: E221
+GPT_ESP = uuid.UUID("c12a7328f81f11d2ba4b00a0c93ec93b") # NOQA: E221
+GPT_BIOS = uuid.UUID("2168614864496e6f744e656564454649") # NOQA: E221
+GPT_SWAP = uuid.UUID("0657fd6da4ab43c484e50933c84b4f4f") # NOQA: E221
+GPT_HOME = uuid.UUID("933ac7e12eb44f13b8440e14e2aef915") # NOQA: E221
+GPT_SRV = uuid.UUID("3b8f842520e04f3b907f1a25a76f98e8") # NOQA: E221
+GPT_ROOT_X86_VERITY = uuid.UUID("d13c5d3bb5d1422ab29f9454fdc89d76") # NOQA: E221
+GPT_ROOT_X86_64_VERITY = uuid.UUID("2c7357edebd246d9aec123d437ec2bf5") # NOQA: E221
+GPT_ROOT_ARM_VERITY = uuid.UUID("7386cdf2203c47a9a498f2ecce45a2d6") # NOQA: E221
+GPT_ROOT_ARM_64_VERITY = uuid.UUID("df3300ced69f4c92978c9bfb0f38d820") # NOQA: E221
+GPT_ROOT_IA64_VERITY = uuid.UUID("86ed10d5b60745bb8957d350f23d0571") # NOQA: E221
+
+# This is a non-formatted partition used to store the second stage
+# part of the bootloader, because it doesn't necessarily fit in the
+# space available after the MBR. 1MiB is more than enough for our usage and there's
+# little reason for customization since it only stores the bootloader and
+# not user-owned configuration files or kernels. See
+# https://en.wikipedia.org/wiki/BIOS_boot_partition
+# and https://www.gnu.org/software/grub/manual/grub/html_node/BIOS-installation.html
+BIOS_PARTITION_SIZE = 1024 * 1024
CLONE_NEWNS = 0x00020000
@@ -101,6 +198,8 @@ FEDORA_KEYS_MAP = {
'26': '64DAB85D',
'27': 'F5282EE4',
'28': '9DB62FB1',
+ '29': '429476B4',
+ '30': 'CFC659B9',
}
# 1 MB at the beginning of the disk for the GPT disk label, and
@@ -108,30 +207,64 @@ FEDORA_KEYS_MAP = {
GPT_HEADER_SIZE = 1024*1024
GPT_FOOTER_SIZE = 1024*1024
-def unshare(flags):
- libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
+
+class GPTRootTypePair(NamedTuple):
+ root: uuid.UUID
+ verity: uuid.UUID
+
+
+def gpt_root_native(arch: str) -> GPTRootTypePair:
+ """The tag for the native GPT root partition for the given architecture
+
+ Returns a tuple of two tags: for the root partition and for the
+ matching verity partition.
+ """
+ if arch is None:
+ arch = platform.machine()
+ if arch == 'x86_64':
+ return GPTRootTypePair(GPT_ROOT_X86_64, GPT_ROOT_X86_64_VERITY)
+ elif arch == 'aarch64':
+ return GPTRootTypePair(GPT_ROOT_ARM_64, GPT_ROOT_ARM_64_VERITY)
+ else:
+ die(f'Unknown architecture {arch}.')
+
+
+def unshare(flags: int) -> None:
+ libc_name = ctypes.util.find_library("c")
+ if libc_name is None:
+ die("Could not find libc")
+ libc = ctypes.CDLL(libc_name, use_errno=True)
if libc.unshare(ctypes.c_int(flags)) != 0:
e = ctypes.get_errno()
raise OSError(e, os.strerror(e))
-def format_bytes(bytes):
+
+def format_bytes(bytes: int) -> str:
if bytes >= 1024*1024*1024:
- return "{:0.1f}G".format(bytes / 1024**3)
+ return f'{bytes/1024**3 :0.1f}G'
if bytes >= 1024*1024:
- return "{:0.1f}M".format(bytes / 1024**2)
+ return f'{bytes/1024**2 :0.1f}M'
if bytes >= 1024:
- return "{:0.1f}K".format(bytes / 1024)
+ return f'{bytes/1024 :0.1f}K'
- return "{}B".format(bytes)
+ return f'{bytes}B'
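A few sample values of the helper above, for reference:

```python
assert format_bytes(100) == '100B'
assert format_bytes(2048) == '2.0K'
assert format_bytes(3 * 1024**2) == '3.0M'
```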
-def roundup512(x):
+
+def roundup512(x: int) -> int:
return (x + 511) & ~511
-def print_step(text):
+
+def print_step(text: str) -> None:
sys.stderr.write("‣ \033[0;1;39m" + text + "\033[0m\n")
-def mkdir_last(path, mode=0o777):
+
+def print_running_cmd(cmdline: Iterable[str]) -> None:
+ sys.stderr.write("‣ \033[0;1;39mRunning command:\033[0m\n")
+ sys.stderr.write(" ".join(shlex.quote(x) for x in cmdline) + "\n")
+
+
+def mkdir_last(path: str, mode: int = 0o777) -> str:
"""Create directory path
Only the final component will be created, so this is different than mkdirs().
@@ -143,40 +276,48 @@ def mkdir_last(path, mode=0o777):
raise
return path
-_IOC_NRBITS = 8
-_IOC_TYPEBITS = 8
-_IOC_SIZEBITS = 14
-_IOC_DIRBITS = 2
-_IOC_NRSHIFT = 0
-_IOC_TYPESHIFT = _IOC_NRSHIFT + _IOC_NRBITS
-_IOC_SIZESHIFT = _IOC_TYPESHIFT + _IOC_TYPEBITS
-_IOC_DIRSHIFT = _IOC_SIZESHIFT + _IOC_SIZEBITS
+_IOC_NRBITS = 8 # NOQA: E221,E222
+_IOC_TYPEBITS = 8 # NOQA: E221,E222
+_IOC_SIZEBITS = 14 # NOQA: E221,E222
+_IOC_DIRBITS = 2 # NOQA: E221,E222
+
+_IOC_NRSHIFT = 0 # NOQA: E221
+_IOC_TYPESHIFT = _IOC_NRSHIFT + _IOC_NRBITS # NOQA: E221
+_IOC_SIZESHIFT = _IOC_TYPESHIFT + _IOC_TYPEBITS # NOQA: E221
+_IOC_DIRSHIFT = _IOC_SIZESHIFT + _IOC_SIZEBITS # NOQA: E221
+
+_IOC_NONE = 0 # NOQA: E221
+_IOC_WRITE = 1 # NOQA: E221
+_IOC_READ = 2 # NOQA: E221
+
+
+def _IOC(dir: int, type: int, nr: int, argtype: str) -> int:
+ size = {'int': 4, 'size_t': 8}[argtype]
+ return dir << _IOC_DIRSHIFT | type << _IOC_TYPESHIFT | nr << _IOC_NRSHIFT | size << _IOC_SIZESHIFT
-_IOC_NONE = 0
-_IOC_WRITE = 1
-_IOC_READ = 2
-def _IOC(dir, type, nr, argtype):
- size = {'int':4, 'size_t':8}[argtype]
- return dir<<_IOC_DIRSHIFT | type<<_IOC_TYPESHIFT | nr<<_IOC_NRSHIFT | size<<_IOC_SIZESHIFT
-def _IOW(type, nr, size):
+def _IOW(type: int, nr: int, size: str) -> int:
return _IOC(_IOC_WRITE, type, nr, size)
+
FICLONE = _IOW(0x94, 9, 'int')
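Putting the macro arithmetic together (this should match the kernel's
own definition of `FICLONE`): `_IOC_WRITE` lands in bits 30-31, the
4-byte `int` size in bits 16-29, the type `0x94` in bits 8-15 and the
number 9 in bits 0-7:

```python
expected = (1 << 30) | (4 << 16) | (0x94 << 8) | 9  # == 0x40049409
assert FICLONE == expected
```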
+
@contextlib.contextmanager
-def open_close(path, flags, mode=0o664):
+def open_close(path: str, flags: int, mode: int = 0o664) -> Generator[int, None, None]:
fd = os.open(path, flags | os.O_CLOEXEC, mode)
try:
yield fd
finally:
os.close(fd)
-def _reflink(oldfd, newfd):
+
+def _reflink(oldfd: int, newfd: int) -> None:
fcntl.ioctl(newfd, FICLONE, oldfd)
-def copy_fd(oldfd, newfd):
+
+def copy_fd(oldfd: int, newfd: int) -> None:
try:
_reflink(oldfd, newfd)
except OSError as e:
@@ -185,7 +326,8 @@ def copy_fd(oldfd, newfd):
shutil.copyfileobj(open(oldfd, 'rb', closefd=False),
open(newfd, 'wb', closefd=False))
-def copy_file_object(oldobject, newobject):
+
+def copy_file_object(oldobject: BinaryIO, newobject: BinaryIO) -> None:
try:
_reflink(oldobject.fileno(), newobject.fileno())
except OSError as e:
@@ -193,11 +335,13 @@ def copy_file_object(oldobject, newobject):
raise
shutil.copyfileobj(oldobject, newobject)
-def copy_symlink(oldpath, newpath):
+
+def copy_symlink(oldpath: str, newpath: str) -> None:
src = os.readlink(oldpath)
os.symlink(src, newpath)
-def copy_file(oldpath, newpath):
+
+def copy_file(oldpath: str, newpath: str) -> None:
if os.path.islink(oldpath):
copy_symlink(oldpath, newpath)
return
@@ -206,25 +350,24 @@ def copy_file(oldpath, newpath):
st = os.stat(oldfd)
try:
- with open_close(newpath, os.O_WRONLY|os.O_CREAT|os.O_EXCL, st.st_mode) as newfd:
+ with open_close(newpath, os.O_WRONLY | os.O_CREAT | os.O_EXCL, st.st_mode) as newfd:
copy_fd(oldfd, newfd)
except FileExistsError:
os.unlink(newpath)
- with open_close(newpath, os.O_WRONLY|os.O_CREAT, st.st_mode) as newfd:
+ with open_close(newpath, os.O_WRONLY | os.O_CREAT, st.st_mode) as newfd:
copy_fd(oldfd, newfd)
shutil.copystat(oldpath, newpath, follow_symlinks=False)
-def symlink_f(target, path):
+
+def symlink_f(target: str, path: str) -> None:
try:
os.symlink(target, path)
except FileExistsError:
os.unlink(path)
os.symlink(target, path)
-def copy(oldpath, newpath):
- if not isinstance(newpath, pathlib.Path):
- newpath = pathlib.Path(newpath)
+def copy_path(oldpath: str, newpath: str) -> None:
try:
mkdir_last(newpath)
except FileExistsError:
@@ -233,15 +376,15 @@ def copy(oldpath, newpath):
mkdir_last(newpath)
for entry in os.scandir(oldpath):
- newentry = newpath / entry.name
+ newentry = os.path.join(newpath, entry.name)
if entry.is_dir(follow_symlinks=False):
- copy(entry.path, newentry)
+ copy_path(entry.path, newentry)
elif entry.is_symlink():
target = os.readlink(entry.path)
symlink_f(target, newentry)
shutil.copystat(entry.path, newentry, follow_symlinks=False)
else:
- st = entry.stat(follow_symlinks=False)
+ st = entry.stat(follow_symlinks=False) # type: ignore # mypy 0.641 doesn't know about follow_symlinks
if stat.S_ISREG(st.st_mode):
copy_file(entry.path, newentry)
else:
@@ -249,22 +392,30 @@ def copy(oldpath, newpath):
continue
shutil.copystat(oldpath, newpath, follow_symlinks=True)
+
@contextlib.contextmanager
-def complete_step(text, text2=None):
+def complete_step(text: str, text2: Optional[str] = None) -> Generator[List[Any], None, None]:
print_step(text + '...')
- args = []
+ args: List[Any] = []
yield args
if text2 is None:
text2 = text + ' complete'
print_step(text2.format(*args) + '.')
-@complete_step('Detaching namespace')
-def init_namespace(args):
+
+# https://github.com/python/mypy/issues/1317
+C = TypeVar('C', bound=Callable)
+completestep = cast(Callable[[str], Callable[[C], C]], complete_step)
+
+
+@completestep('Detaching namespace')
+def init_namespace(args: CommandLineArguments) -> None:
args.original_umask = os.umask(0o000)
unshare(CLONE_NEWNS)
run(["mount", "--make-rslave", "/"], check=True)
-def setup_workspace(args):
+
+def setup_workspace(args: CommandLineArguments) -> tempfile.TemporaryDirectory:
print_step("Setting up temporary workspace.")
if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
d = tempfile.TemporaryDirectory(dir=os.path.dirname(args.output), prefix='.mkosi-')
@@ -274,12 +425,14 @@ def setup_workspace(args):
print_step("Temporary workspace in " + d.name + " is now set up.")
return d
-def btrfs_subvol_create(path, mode=0o755):
+
+def btrfs_subvol_create(path: str, mode: int = 0o755) -> None:
m = os.umask(~mode & 0o7777)
run(["btrfs", "subvol", "create", path], check=True)
os.umask(m)
-def btrfs_subvol_delete(path):
+
+def btrfs_subvol_delete(path: str) -> None:
# Extract the path of the subvolume relative to the filesystem
c = run(["btrfs", "subvol", "show", path],
stdout=PIPE, stderr=DEVNULL, universal_newlines=True, check=True)
@@ -301,10 +454,12 @@ def btrfs_subvol_delete(path):
# Delete the subvolume now that all its descendants have been deleted
run(["btrfs", "subvol", "delete", path], stdout=DEVNULL, stderr=DEVNULL, check=True)
-def btrfs_subvol_make_ro(path, b=True):
+
+def btrfs_subvol_make_ro(path: str, b: bool = True) -> None:
run(["btrfs", "property", "set", path, "ro", "true" if b else "false"], check=True)
-def image_size(args):
+
+def image_size(args: CommandLineArguments) -> int:
size = GPT_HEADER_SIZE + GPT_FOOTER_SIZE
if args.root_size is not None:
@@ -314,7 +469,10 @@ def image_size(args):
if args.srv_size is not None:
size += args.srv_size
if args.bootable:
- size += args.esp_size
+ if "uefi" in args.boot_protocols:
+ size += args.esp_size
+ if "bios" in args.boot_protocols:
+ size += BIOS_PARTITION_SIZE
if args.swap_size is not None:
size += args.swap_size
if args.verity_size is not None:
@@ -322,27 +480,35 @@ def image_size(args):
return size
-def disable_cow(path):
+
+def disable_cow(path: str) -> None:
"""Disable copy-on-write if applicable on filesystem"""
run(["chattr", "+C", path], stdout=DEVNULL, stderr=DEVNULL, check=False)
-def determine_partition_table(args):
+def determine_partition_table(args: CommandLineArguments) -> Tuple[str, bool]:
pn = 1
table = "label: gpt\n"
run_sfdisk = False
+ args.esp_partno = None
+ args.bios_partno = None
if args.bootable:
- table += 'size={}, type={}, name="ESP System Partition"\n'.format(args.esp_size // 512, GPT_ESP)
- args.esp_partno = pn
- pn += 1
+ if "uefi" in args.boot_protocols:
+ table += f'size={args.esp_size // 512}, type={GPT_ESP}, name="ESP System Partition"\n'
+ args.esp_partno = pn
+ pn += 1
+
+ if "bios" in args.boot_protocols:
+ table += f'size={BIOS_PARTITION_SIZE // 512}, type={GPT_BIOS}, name="BIOS Boot Partition"\n'
+ args.bios_partno = pn
+ pn += 1
+
run_sfdisk = True
- else:
- args.esp_partno = None
if args.swap_size is not None:
- table += 'size={}, type={}, name="Swap Partition"\n'.format(args.swap_size // 512, GPT_SWAP)
+ table += f'size={args.swap_size // 512}, type={GPT_SWAP}, name="Swap Partition"\n'
args.swap_partno = pn
pn += 1
run_sfdisk = True
@@ -352,21 +518,23 @@ def determine_partition_table(args):
args.home_partno = None
args.srv_partno = None
- if args.output_format != OutputFormat.raw_btrfs:
+ if args.output_format != OutputFormat.gpt_btrfs:
if args.home_size is not None:
- table += 'size={}, type={}, name="Home Partition"\n'.format(args.home_size // 512, GPT_HOME)
+ table += f'size={args.home_size // 512}, type={GPT_HOME}, name="Home Partition"\n'
args.home_partno = pn
pn += 1
run_sfdisk = True
if args.srv_size is not None:
- table += 'size={}, type={}, name="Server Data Partition"\n'.format(args.srv_size // 512, GPT_SRV)
+ table += f'size={args.srv_size // 512}, type={GPT_SRV}, name="Server Data Partition"\n'
args.srv_partno = pn
pn += 1
run_sfdisk = True
- if args.output_format != OutputFormat.raw_squashfs:
- table += 'type={}, attrs={}, name="Root Partition"\n'.format(GPT_ROOT_NATIVE, "GUID:60" if args.read_only and args.output_format != OutputFormat.raw_btrfs else "")
+ if args.output_format != OutputFormat.gpt_squashfs:
+ table += 'type={}, attrs={}, name="Root Partition"\n'.format(
+ gpt_root_native(args.architecture).root,
+ "GUID:60" if args.read_only and args.output_format != OutputFormat.gpt_btrfs else "")
run_sfdisk = True
args.root_partno = pn
@@ -381,14 +549,15 @@ def determine_partition_table(args):
return table, run_sfdisk
-def create_image(args, workspace, for_cache):
- if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+def create_image(args: CommandLineArguments, workspace: str, for_cache: bool) -> Optional[BinaryIO]:
+ if not args.output_format.is_disk():
return None
with complete_step('Creating partition table',
'Created partition table as {.name}') as output:
- f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix='.mkosi-', delete=not for_cache)
+ f: BinaryIO = cast(BinaryIO, tempfile.NamedTemporaryFile(prefix='.mkosi-', delete=not for_cache,
+ dir=os.path.dirname(args.output)))
output.append(f)
disable_cow(f.name)
f.truncate(image_size(args))
@@ -403,11 +572,14 @@ def create_image(args, workspace, for_cache):
return f
-def reuse_cache_image(args, workspace, run_build_script, for_cache):
+def reuse_cache_image(args: CommandLineArguments,
+ workspace: str,
+ run_build_script: bool,
+ for_cache: bool) -> Tuple[Optional[BinaryIO], bool]:
if not args.incremental:
return None, False
- if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
+ if not args.output_format.is_disk_rw():
return None, False
fname = args.cache_pre_dev if run_build_script else args.cache_pre_inst
@@ -431,7 +603,8 @@ def reuse_cache_image(args, workspace, run_build_script, for_cache):
return None, False
with source:
- f = tempfile.NamedTemporaryFile(dir = os.path.dirname(args.output), prefix='.mkosi-')
+ f: BinaryIO = cast(BinaryIO, tempfile.NamedTemporaryFile(prefix='.mkosi-',
+ dir=os.path.dirname(args.output)))
output.append(f)
# So on one hand we want CoW off, since this stuff will
@@ -450,8 +623,9 @@ def reuse_cache_image(args, workspace, run_build_script, for_cache):
return f, True
+
@contextlib.contextmanager
-def attach_image_loopback(args, raw):
+def attach_image_loopback(args: CommandLineArguments, raw: Optional[BinaryIO]) -> Generator[Optional[str], None, None]:
if raw is None:
yield None
return
@@ -469,13 +643,19 @@ def attach_image_loopback(args, raw):
with complete_step('Detaching image file'):
run(["losetup", "--detach", loopdev], check=True)
-def partition(loopdev, partno):
+
+def optional_partition(loopdev: str, partno: Optional[int]) -> Optional[str]:
if partno is None:
return None
+ return partition(loopdev, partno)
+
+
+def partition(loopdev: str, partno: int) -> str:
return loopdev + "p" + str(partno)
-def prepare_swap(args, loopdev, cached):
+
+def prepare_swap(args: CommandLineArguments, loopdev: Optional[str], cached: bool) -> None:
if loopdev is None:
return
if cached:
@@ -486,7 +666,8 @@ def prepare_swap(args, loopdev, cached):
with complete_step('Formatting swap partition'):
run(["mkswap", "-Lswap", partition(loopdev, args.swap_partno)], check=True)
-def prepare_esp(args, loopdev, cached):
+
+def prepare_esp(args: CommandLineArguments, loopdev: Optional[str], cached: bool) -> None:
if loopdev is None:
return
if cached:
@@ -497,48 +678,68 @@ def prepare_esp(args, loopdev, cached):
with complete_step('Formatting ESP partition'):
run(["mkfs.fat", "-nEFI", "-F32", partition(loopdev, args.esp_partno)], check=True)
-def mkfs_ext4(label, mount, dev):
+
+def mkfs_ext4(label: str, mount: str, dev: str) -> None:
run(["mkfs.ext4", "-L", label, "-M", mount, dev], check=True)
-def mkfs_btrfs(label, dev):
+
+def mkfs_xfs(label: str, dev: str) -> None:
+ run(["mkfs.xfs", "-n", "ftype=1", "-L", label, dev], check=True)
+
+
+def mkfs_btrfs(label: str, dev: str) -> None:
run(["mkfs.btrfs", "-L", label, "-d", "single", "-m", "single", dev], check=True)
-def luks_format(dev, passphrase):
+def mkfs_generic(args: CommandLineArguments, label: str, mount: str, dev: str) -> None:
+ if args.output_format == OutputFormat.gpt_btrfs:
+ mkfs_btrfs(label, dev)
+ elif args.output_format == OutputFormat.gpt_xfs:
+ mkfs_xfs(label, dev)
+ else:
+ mkfs_ext4(label, mount, dev)
+
+
+def luks_format(dev: str, passphrase: Dict[str, str]) -> None:
if passphrase['type'] == 'stdin':
- passphrase = (passphrase['content'] + "\n").encode("utf-8")
- run(["cryptsetup", "luksFormat", "--batch-mode", dev], input=passphrase, check=True)
+ passphrase_content = (passphrase['content'] + "\n").encode("utf-8")
+ run(["cryptsetup", "luksFormat", "--batch-mode", dev], input=passphrase_content, check=True)
else:
assert passphrase['type'] == 'file'
run(["cryptsetup", "luksFormat", "--batch-mode", dev, passphrase['content']], check=True)
-def luks_open(dev, passphrase):
+def luks_open(dev: str, passphrase: Dict[str, str]) -> str:
name = str(uuid.uuid4())
if passphrase['type'] == 'stdin':
- passphrase = (passphrase['content'] + "\n").encode("utf-8")
- run(["cryptsetup", "open", "--type", "luks", dev, name], input=passphrase, check=True)
+ passphrase_content = (passphrase['content'] + "\n").encode("utf-8")
+ run(["cryptsetup", "open", "--type", "luks", dev, name], input=passphrase_content, check=True)
else:
assert passphrase['type'] == 'file'
run(["cryptsetup", "--key-file", passphrase['content'], "open", "--type", "luks", dev, name], check=True)
return os.path.join("/dev/mapper", name)
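For reference, the `passphrase` argument consumed by these helpers takes
one of two shapes, as the branches above show:

```python
passphrase = {'type': 'stdin', 'content': 'the literal passphrase'}
# or, pointing at a passphrase file such as mkosi.passphrase:
passphrase = {'type': 'file', 'content': '/path/to/passphrase.file'}
```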
-def luks_close(dev, text):
+
+def luks_close(dev: Optional[str], text: str) -> None:
if dev is None:
return
with complete_step(text):
run(["cryptsetup", "close", dev], check=True)
-def luks_format_root(args, loopdev, run_build_script, cached, inserting_squashfs=False):
+def luks_format_root(args: CommandLineArguments,
+ loopdev: str,
+ run_build_script: bool,
+ cached: bool,
+ inserting_squashfs: bool = False) -> None:
if args.encrypt != "all":
return
if args.root_partno is None:
return
- if args.output_format == OutputFormat.raw_squashfs and not inserting_squashfs:
+ if args.output_format == OutputFormat.gpt_squashfs and not inserting_squashfs:
return
if run_build_script:
return
@@ -548,8 +749,8 @@ def luks_format_root(args, loopdev, run_build_script, cached, inserting_squashfs
with complete_step("LUKS formatting root partition"):
luks_format(partition(loopdev, args.root_partno), args.passphrase)
-def luks_format_home(args, loopdev, run_build_script, cached):
+def luks_format_home(args: CommandLineArguments, loopdev: str, run_build_script: bool, cached: bool) -> None:
if args.encrypt is None:
return
if args.home_partno is None:
@@ -562,8 +763,8 @@ def luks_format_home(args, loopdev, run_build_script, cached):
with complete_step("LUKS formatting home partition"):
luks_format(partition(loopdev, args.home_partno), args.passphrase)
-def luks_format_srv(args, loopdev, run_build_script, cached):
+def luks_format_srv(args: CommandLineArguments, loopdev: str, run_build_script: bool, cached: bool) -> None:
if args.encrypt is None:
return
if args.srv_partno is None:
@@ -576,13 +777,16 @@ def luks_format_srv(args, loopdev, run_build_script, cached):
with complete_step("LUKS formatting server data partition"):
luks_format(partition(loopdev, args.srv_partno), args.passphrase)
-def luks_setup_root(args, loopdev, run_build_script, inserting_squashfs=False):
+def luks_setup_root(args: CommandLineArguments,
+ loopdev: str,
+ run_build_script: bool,
+ inserting_squashfs: bool = False) -> Optional[str]:
if args.encrypt != "all":
return None
if args.root_partno is None:
return None
- if args.output_format == OutputFormat.raw_squashfs and not inserting_squashfs:
+ if args.output_format == OutputFormat.gpt_squashfs and not inserting_squashfs:
return None
if run_build_script:
return None
@@ -590,8 +794,8 @@ def luks_setup_root(args, loopdev, run_build_script, inserting_squashfs=False):
with complete_step("Opening LUKS root partition"):
return luks_open(partition(loopdev, args.root_partno), args.passphrase)
-def luks_setup_home(args, loopdev, run_build_script):
+def luks_setup_home(args: CommandLineArguments, loopdev: str, run_build_script: bool) -> Optional[str]:
if args.encrypt is None:
return None
if args.home_partno is None:
@@ -602,8 +806,8 @@ def luks_setup_home(args, loopdev, run_build_script):
with complete_step("Opening LUKS home partition"):
return luks_open(partition(loopdev, args.home_partno), args.passphrase)
-def luks_setup_srv(args, loopdev, run_build_script):
+def luks_setup_srv(args: CommandLineArguments, loopdev: str, run_build_script: bool) -> Optional[str]:
if args.encrypt is None:
return None
if args.srv_partno is None:
@@ -614,12 +818,18 @@ def luks_setup_srv(args, loopdev, run_build_script):
with complete_step("Opening LUKS server data partition"):
return luks_open(partition(loopdev, args.srv_partno), args.passphrase)
-@contextlib.contextmanager
-def luks_setup_all(args, loopdev, run_build_script):
- if args.output_format in (OutputFormat.directory, OutputFormat.subvolume, OutputFormat.tar):
+@contextlib.contextmanager
+def luks_setup_all(args: CommandLineArguments,
+ loopdev: Optional[str],
+ run_build_script: bool) -> Generator[Tuple[Optional[str],
+ Optional[str],
+ Optional[str]],
+ None, None]:
+ if not args.output_format.is_disk():
yield (None, None, None)
return
+ assert loopdev is not None
try:
root = luks_setup_root(args, loopdev, run_build_script)
@@ -628,9 +838,9 @@ def luks_setup_all(args, loopdev, run_build_script):
try:
srv = luks_setup_srv(args, loopdev, run_build_script)
- yield (partition(loopdev, args.root_partno) if root is None else root,
- partition(loopdev, args.home_partno) if home is None else home,
- partition(loopdev, args.srv_partno) if srv is None else srv)
+ yield (optional_partition(loopdev, args.root_partno) if root is None else root,
+ optional_partition(loopdev, args.home_partno) if home is None else home,
+ optional_partition(loopdev, args.srv_partno) if srv is None else srv)
finally:
luks_close(srv, "Closing LUKS server data partition")
finally:
@@ -638,70 +848,84 @@ def luks_setup_all(args, loopdev, run_build_script):
finally:
luks_close(root, "Closing LUKS root partition")
-def prepare_root(args, dev, cached):
+
+def prepare_root(args: CommandLineArguments, dev: Optional[str], cached: bool) -> None:
if dev is None:
return
- if args.output_format == OutputFormat.raw_squashfs:
+ if args.output_format == OutputFormat.gpt_squashfs:
return
if cached:
return
with complete_step('Formatting root partition'):
- if args.output_format == OutputFormat.raw_btrfs:
- mkfs_btrfs("root", dev)
- else:
- mkfs_ext4("root", "/", dev)
+ mkfs_generic(args, "root", "/", dev)
+
-def prepare_home(args, dev, cached):
+def prepare_home(args: CommandLineArguments, dev: Optional[str], cached: bool) -> None:
if dev is None:
return
if cached:
return
with complete_step('Formatting home partition'):
- mkfs_ext4("home", "/home", dev)
+ mkfs_generic(args, "home", "/home", dev)
-def prepare_srv(args, dev, cached):
+
+def prepare_srv(args: CommandLineArguments, dev: Optional[str], cached: bool) -> None:
if dev is None:
return
if cached:
return
with complete_step('Formatting server data partition'):
- mkfs_ext4("srv", "/srv", dev)
+ mkfs_generic(args, "srv", "/srv", dev)
+
-def mount_loop(args, dev, where, read_only=False):
+def mount_loop(args: CommandLineArguments, dev: str, where: str, read_only: bool = False) -> None:
os.makedirs(where, 0o755, True)
options = "-odiscard"
- if args.compress and args.output_format == OutputFormat.raw_btrfs:
- options += ",compress"
+ if args.compress and args.output_format == OutputFormat.gpt_btrfs:
+ if isinstance(args.compress, bool):
+ options += ",compress"
+ else:
+ options += f",compress={args.compress}"
if read_only:
options += ",ro"
run(["mount", "-n", dev, where, options], check=True)
-def mount_bind(what, where):
+
+def mount_bind(what: str, where: str) -> None:
os.makedirs(what, 0o755, True)
os.makedirs(where, 0o755, True)
run(["mount", "--bind", what, where], check=True)
-def mount_tmpfs(where):
+
+def mount_tmpfs(where: str) -> None:
os.makedirs(where, 0o755, True)
run(["mount", "tmpfs", "-t", "tmpfs", where], check=True)
+
@contextlib.contextmanager
-def mount_image(args, workspace, loopdev, root_dev, home_dev, srv_dev, root_read_only=False):
+def mount_image(args: CommandLineArguments,
+ workspace: str,
+ loopdev: Optional[str],
+ root_dev: Optional[str],
+ home_dev: Optional[str],
+ srv_dev: Optional[str],
+ root_read_only: bool = False) -> Generator[None, None, None]:
if loopdev is None:
yield None
return
+ assert root_dev is not None
with complete_step('Mounting image'):
root = os.path.join(workspace, "root")
- if args.output_format != OutputFormat.raw_squashfs:
+ if args.output_format != OutputFormat.gpt_squashfs:
mount_loop(args, root_dev, root, root_read_only)
if home_dev is not None:
@@ -721,14 +945,11 @@ def mount_image(args, workspace, loopdev, root_dev, home_dev, srv_dev, root_read
yield
finally:
with complete_step('Unmounting image'):
-
- for d in ("home", "srv", "efi", "run", "tmp"):
- umount(os.path.join(root, d))
-
umount(root)
-@complete_step("Assigning hostname")
-def install_etc_hostname(args, workspace):
+
+@completestep("Assigning hostname")
+def install_etc_hostname(args: CommandLineArguments, workspace: str) -> None:
etc_hostname = os.path.join(workspace, "root", "etc/hostname")
# Always unlink first, so that we don't get in trouble due to a
@@ -743,8 +964,9 @@ def install_etc_hostname(args, workspace):
if args.hostname:
open(etc_hostname, "w").write(args.hostname + "\n")
+
@contextlib.contextmanager
-def mount_api_vfs(args, workspace):
+def mount_api_vfs(args: CommandLineArguments, workspace: str) -> Generator[None, None, None]:
paths = ('/proc', '/dev', '/sys')
root = os.path.join(workspace, "root")
@@ -758,9 +980,9 @@ def mount_api_vfs(args, workspace):
for d in paths:
umount(root + d)
-@contextlib.contextmanager
-def mount_cache(args, workspace):
+@contextlib.contextmanager
+def mount_cache(args: CommandLineArguments, workspace: str) -> Generator[None, None, None]:
if args.cache_path is None:
yield
return
@@ -770,7 +992,9 @@ def mount_cache(args, workspace):
if args.distribution in (Distribution.fedora, Distribution.mageia):
mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/dnf"))
elif args.distribution == Distribution.centos:
- # We mount both the YUM and the DNF cache in this case, as YUM might just be redirected to DNF even if we invoke the former
+ # We mount both the YUM and the DNF cache in this case, as
+ # YUM might just be redirected to DNF even if we invoke
+ # the former
mount_bind(os.path.join(args.cache_path, "yum"), os.path.join(workspace, "root", "var/cache/yum"))
mount_bind(os.path.join(args.cache_path, "dnf"), os.path.join(workspace, "root", "var/cache/dnf"))
elif args.distribution in (Distribution.debian, Distribution.ubuntu):
@@ -783,24 +1007,25 @@ def mount_cache(args, workspace):
yield
finally:
with complete_step('Unmounting Package Cache'):
- for d in ("var/cache/dnf", "var/cache/yum", "var/cache/apt/archives", "var/cache/pacman/pkg", "var/cache/zypp/packages"):
+ for d in ("var/cache/dnf", "var/cache/yum", "var/cache/apt/archives", "var/cache/pacman/pkg", "var/cache/zypp/packages"): # NOQA: E501
umount(os.path.join(workspace, "root", d))
-def umount(where):
+
+def umount(where: str) -> None:
# Ignore failures and error messages
- run(["umount", "-n", where], stdout=DEVNULL, stderr=DEVNULL)
+ run(["umount", "--recursive", "-n", where], stdout=DEVNULL, stderr=DEVNULL)
-@complete_step('Setting up basic OS tree')
-def prepare_tree(args, workspace, run_build_script, cached):
+@completestep('Setting up basic OS tree')
+def prepare_tree(args: CommandLineArguments, workspace: str, run_build_script: bool, cached: bool) -> None:
if args.output_format == OutputFormat.subvolume:
btrfs_subvol_create(os.path.join(workspace, "root"))
else:
- mkdir_last(os.path.join(workspace, "root"))
+ mkdir_last(os.path.join(workspace, "root"), 0o755)
- if args.output_format in (OutputFormat.subvolume, OutputFormat.raw_btrfs):
+ if args.output_format in (OutputFormat.subvolume, OutputFormat.gpt_btrfs):
- if cached and args.output_format is OutputFormat.raw_btrfs:
+ if cached and args.output_format is OutputFormat.gpt_btrfs:
return
btrfs_subvol_create(os.path.join(workspace, "root", "home"))
@@ -816,25 +1041,29 @@ def prepare_tree(args, workspace, run_build_script, cached):
if args.bootable:
# We need an initialized machine ID for the boot logic to work
os.mkdir(os.path.join(workspace, "root", "etc"), 0o755)
- open(os.path.join(workspace, "root", "etc/machine-id"), "w").write(args.machine_id + "\n")
-
- os.mkdir(os.path.join(workspace, "root", "efi/EFI"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "efi/EFI/BOOT"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "efi/EFI/Linux"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "efi/EFI/systemd"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "efi/loader"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "efi/loader/entries"), 0o700)
- os.mkdir(os.path.join(workspace, "root", "efi", args.machine_id), 0o700)
+ with open(os.path.join(workspace, "root", "etc/machine-id"), "w") as f:
+ f.write(args.machine_id)
+ f.write("\n")
os.mkdir(os.path.join(workspace, "root", "boot"), 0o700)
- os.symlink("../efi", os.path.join(workspace, "root", "boot/efi"))
- os.symlink("efi/loader", os.path.join(workspace, "root", "boot/loader"))
- os.symlink("efi/" + args.machine_id, os.path.join(workspace, "root", "boot", args.machine_id))
+
+ if args.esp_partno is not None:
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI/BOOT"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI/Linux"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI/systemd"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/loader"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/loader/entries"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi", args.machine_id), 0o700)
+
+ os.symlink("../efi", os.path.join(workspace, "root", "boot/efi"))
+ os.symlink("efi/loader", os.path.join(workspace, "root", "boot/loader"))
+ os.symlink("efi/" + args.machine_id, os.path.join(workspace, "root", "boot", args.machine_id))
os.mkdir(os.path.join(workspace, "root", "etc/kernel"), 0o755)
with open(os.path.join(workspace, "root", "etc/kernel/cmdline"), "w") as cmdline:
- cmdline.write(args.kernel_commandline)
+ cmdline.write(args.kernel_command_line)
cmdline.write("\n")
if run_build_script:
@@ -844,7 +1073,8 @@ def prepare_tree(args, workspace, run_build_script, cached):
if args.build_dir is not None:
os.mkdir(os.path.join(workspace, "root", "root/build"), 0o755)
-def patch_file(filepath, line_rewriter):
+
+def patch_file(filepath: str, line_rewriter: Callable[[str], str]) -> None:
temp_new_filepath = filepath + ".tmp.new"
with open(filepath, "r") as old:
@@ -856,7 +1086,8 @@ def patch_file(filepath, line_rewriter):
os.remove(filepath)
shutil.move(temp_new_filepath, filepath)
-def enable_networkd(workspace):
+
+def enable_networkd(workspace: str) -> None:
run(["systemctl",
"--root", os.path.join(workspace, "root"),
"enable", "systemd-networkd", "systemd-resolved"],
@@ -874,14 +1105,20 @@ Type=ether
DHCP=yes
""")
-def enable_networkmanager(workspace):
+
+def enable_networkmanager(workspace: str) -> None:
run(["systemctl",
"--root", os.path.join(workspace, "root"),
"enable", "NetworkManager"],
check=True)
-def run_workspace_command(args, workspace, *cmd, network=False, env={}, nspawn_params=[]):
+def run_workspace_command(args: CommandLineArguments,
+ workspace: str,
+ *cmd: str,
+ network: bool = False,
+ env: Dict[str, str] = {},
+ nspawn_params: List[str] = []) -> None:
cmdline = ["systemd-nspawn",
'--quiet',
"--directory=" + os.path.join(workspace, "root"),
@@ -889,7 +1126,8 @@ def run_workspace_command(args, workspace, *cmd, network=False, env={}, nspawn_p
"--machine=mkosi-" + uuid.uuid4().hex,
"--as-pid2",
"--register=no",
- "--bind=" + var_tmp(workspace) + ":/var/tmp" ]
+ "--bind=" + var_tmp(workspace) + ":/var/tmp",
+ "--setenv=SYSTEMD_OFFLINE=1"]
if network:
# If we're using the host network namespace, use the same resolver
@@ -897,7 +1135,7 @@ def run_workspace_command(args, workspace, *cmd, network=False, env={}, nspawn_p
else:
cmdline += ["--private-network"]
- cmdline += [ "--setenv={}={}".format(k,v) for k,v in env.items() ]
+ cmdline += [f'--setenv={k}={v}' for k, v in env.items()]
if nspawn_params:
cmdline += nspawn_params
@@ -905,29 +1143,33 @@ def run_workspace_command(args, workspace, *cmd, network=False, env={}, nspawn_p
cmdline += ['--', *cmd]
run(cmdline, check=True)
-def check_if_url_exists(url):
+
+def check_if_url_exists(url: str) -> bool:
req = urllib.request.Request(url, method="HEAD")
try:
if urllib.request.urlopen(req):
return True
- except:
+ return False
+ except: # NOQA: E722
return False
-def disable_kernel_install(args, workspace):
+
+def disable_kernel_install(args: CommandLineArguments, workspace: str) -> List[str]:
# Let's disable the automatic kernel installation done by the
     # kernel RPMs. After all, we want to build our own unified kernels
# that include the root hash in the kernel command line and can be
# signed as a single EFI executable. Since the root hash is only
# known when the root file system is finalized we turn off any
# kernel installation beforehand.
-
- if not args.bootable:
+ #
+    # For BIOS mode we cannot use unified kernel images, so do not mask the units
+ if not args.bootable or args.bios_partno is not None:
return []
for d in ("etc", "etc/kernel", "etc/kernel/install.d"):
mkdir_last(os.path.join(workspace, "root", d), 0o755)
- masked = []
+ masked: List[str] = []
for f in ("50-dracut.install", "51-dracut-rescue.install", "90-loaderentry.install"):
path = os.path.join(workspace, "root", "etc/kernel/install.d", f)
@@ -936,7 +1178,8 @@ def disable_kernel_install(args, workspace):
return masked
-def reenable_kernel_install(args, workspace, masked):
+
+def reenable_kernel_install(args: CommandLineArguments, workspace: str, masked: List[str]) -> None:
# Undo disable_kernel_install() so the final image can be used
# with scripts installing a kernel following the Bootloader Spec
@@ -946,10 +1189,96 @@ def reenable_kernel_install(args, workspace, masked):
for f in masked:
os.unlink(f)
-def invoke_dnf(args, workspace, repositories, base_packages, boot_packages, config_file):
+def make_rpm_list(args: argparse.Namespace, packages: List[str]) -> List[str]:
+ packages = list(packages) # make a copy
+
+ if args.bootable:
+        # Temporary hack: dracut only adds crypto support to the initrd if the cryptsetup binary is installed
+ if args.encrypt or args.verity:
+ packages += ['cryptsetup']
+
+ if args.output_format == OutputFormat.gpt_ext4:
+ packages += ['e2fsprogs']
+
+ if args.output_format == OutputFormat.gpt_xfs:
+ packages += ['xfsprogs']
+
+ if args.output_format == OutputFormat.gpt_btrfs:
+ packages += ['btrfs-progs']
+
+ if args.bios_partno:
+ packages += ["grub2-pc"]
+
+ return packages
+
+
+def clean_dnf_metadata(root: str) -> None:
+ """Removes dnf metadata iff /bin/dnf is not present in the image
+
+ If dnf is not installed, there doesn't seem to be much use in
+ keeping the dnf metadata, since it's not usable from within the
+ image anyway.
+ """
+ dnf_path = root + '/bin/dnf'
+ keep_dnf_data = os.access(dnf_path, os.F_OK, follow_symlinks=False)
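+    # (follow_symlinks=False: even a dangling /bin/dnf symlink counts as
+    # "dnf present", so we err on the side of keeping the metadata)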
+
+ if not keep_dnf_data:
+ print_step('Cleaning dnf metadata...')
+ remove_glob(root + '/var/lib/dnf',
+ root + '/var/log/dnf.*',
+ root + '/var/log/hawkey.*',
+ root + '/var/cache/dnf')
+
+
+def clean_yum_metadata(root: str) -> None:
+ """Removes yum metadata iff /bin/yum is not present in the image"""
+ yum_path = root + '/bin/yum'
+ keep_yum_data = os.access(yum_path, os.F_OK, follow_symlinks=False)
+
+ if not keep_yum_data:
+ print_step('Cleaning yum metadata...')
+ remove_glob(root + '/var/lib/yum',
+ root + '/var/log/yum.*',
+ root + '/var/cache/yum')
+
+
+def clean_rpm_metadata(root: str) -> None:
+ """Removes rpm metadata iff /bin/rpm is not present in the image"""
+ rpm_path = root + '/bin/rpm'
+ keep_rpm_data = os.access(rpm_path, os.F_OK, follow_symlinks=False)
+
+ if not keep_rpm_data:
+ print_step('Cleaning rpm metadata...')
+ remove_glob(root + '/var/lib/rpm')
+
+
+def clean_package_manager_metadata(workspace: str) -> None:
+ """Clean up package manager metadata
+
+ Try them all regardless of the distro: metadata is only removed if the
+ package manager is present in the image.
+ """
+
+ root = os.path.join(workspace, "root")
+
+    # we try them all: metadata will only be touched if any of them are in the
+ # final image
+ clean_dnf_metadata(root)
+ clean_yum_metadata(root)
+ clean_rpm_metadata(root)
+ # FIXME: implement cleanup for other package managers
+
+
+def invoke_dnf(args: CommandLineArguments,
+ workspace: str,
+ repositories: List[str],
+ packages: List[str],
+ config_file: str) -> None:
repos = ["--enablerepo=" + repo for repo in repositories]
+ packages = make_rpm_list(args, packages)
+
root = os.path.join(workspace, "root")
cmdline = ["dnf",
"-y",
@@ -963,38 +1292,20 @@ def invoke_dnf(args, workspace, repositories, base_packages, boot_packages, conf
"--setopt=keepcache=1",
"--setopt=install_weak_deps=0"]
- # Turn off docs, but not during the development build, as dnf currently has problems with that
- if not args.with_docs and not run_build_script:
- cmdline.append("--setopt=tsflags=nodocs")
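+    # --forcearch lets dnf install packages for a foreign architecture;
+    # actually running such an image then needs e.g. qemu binfmt support.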
+ if args.architecture is not None:
+ cmdline += [f'--forcearch={args.architecture}']
- cmdline.extend([
- "install",
- *base_packages
- ])
-
- cmdline.extend(args.packages)
-
- if run_build_script:
- cmdline.extend(args.build_packages)
-
- if args.bootable:
- cmdline.extend(boot_packages)
-
- # Temporary hack: dracut only adds crypto support to the initrd, if the cryptsetup binary is installed
- if args.encrypt or args.verity:
- cmdline.append("cryptsetup")
-
- if args.output_format == OutputFormat.raw_gpt:
- cmdline.append("e2fsprogs")
+ if not args.with_docs:
+ cmdline += ['--nodocs']
- if args.output_format == OutputFormat.raw_btrfs:
- cmdline.append("btrfs-progs")
+ cmdline += ['install', *packages]
with mount_api_vfs(args, workspace):
run(cmdline, check=True)
-@complete_step('Installing Clear Linux')
-def install_clear(args, workspace, run_build_script):
+
+@complete_step('Installing Clear Linux')
+def install_clear(args: CommandLineArguments, workspace: str, run_build_script: bool) -> None:
if args.release == "latest":
release = "clear"
else:
@@ -1021,7 +1332,7 @@ ensure that you have openssl program in your system.
""")
raise FileNotFoundError("Couldn't find swupd-extract")
- print("Using {}".format(swupd_extract))
+ print(f'Using {swupd_extract}')
run([swupd_extract,
'-output', root,
@@ -1044,37 +1355,45 @@ ensure that you have openssl program in your system.
if args.password == "":
args.password = None
-@complete_step('Installing Fedora')
-def install_fedora(args, workspace, run_build_script):
+
+@complete_step('Installing Fedora')
+def install_fedora(args: CommandLineArguments, workspace: str, run_build_script: bool) -> None:
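+    # "rawhide" alone maps to the newest release we know about;
+    # "rawhide-<version>" lets the user state the version explicitly.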
if args.release == 'rawhide':
last = sorted(FEDORA_KEYS_MAP)[-1]
- die('Use numerical release for Fedora, not "rawhide"\n' +
- '(rawhide was {} when this mkosi version was released)'.format(last))
+ warn(f'Assuming rawhide is version {last} — ' +
+ 'You may specify otherwise with --release=rawhide-<version>')
+ args.releasever = last
+ elif args.release.startswith('rawhide-'):
+ args.release, args.releasever = args.release.split('-')
+ sys.stderr.write(f'Fedora rawhide — release version: {args.releasever}\n')
+ else:
+ args.releasever = args.release
masked = disable_kernel_install(args, workspace)
- gpg_key = "/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-%s-x86_64" % args.release
+ arch = args.architecture or platform.machine()
+ gpg_key = f"/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-{args.releasever}-{arch}"
if os.path.exists(gpg_key):
- gpg_key = "file://%s" % gpg_key
+ gpg_key = f"file://{gpg_key}"
else:
- gpg_key = "https://getfedora.org/static/%s.txt" % FEDORA_KEYS_MAP[args.release]
+ gpg_key = "https://getfedora.org/static/{}.txt".format(FEDORA_KEYS_MAP[args.releasever])
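+    # The $basearch below is expanded by dnf itself, so the repository
+    # URLs stay correct when --architecture selects a foreign arch.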
if args.mirror:
- baseurl = "{args.mirror}/releases/{args.release}/Everything/x86_64/os/".format(args=args)
- if not check_if_url_exists("%s/media.repo" % baseurl):
- baseurl = "{args.mirror}/development/{args.release}/Everything/x86_64/os/".format(args=args)
+ baseurl = f"{args.mirror}/releases/{args.release}/Everything/$basearch/os/"
+ if not check_if_url_exists(f"{baseurl}/media.repo"):
+ baseurl = f"{args.mirror}/development/{args.release}/Everything/$basearch/os/"
- release_url = "baseurl=%s" % baseurl
- updates_url = "baseurl={args.mirror}/updates/{args.release}/x86_64/".format(args=args)
+ release_url = f"baseurl={baseurl}"
+ updates_url = f"baseurl={args.mirror}/updates/{args.release}/$basearch/"
else:
- release_url = ("metalink=https://mirrors.fedoraproject.org/metalink?" +
- "repo=fedora-{args.release}&arch=x86_64".format(args=args))
- updates_url = ("metalink=https://mirrors.fedoraproject.org/metalink?" +
- "repo=updates-released-f{args.release}&arch=x86_64".format(args=args))
+ release_url = (f"metalink=https://mirrors.fedoraproject.org/metalink?" +
+ f"repo=fedora-{args.release}&arch=$basearch")
+ updates_url = (f"metalink=https://mirrors.fedoraproject.org/metalink?" +
+ f"repo=updates-released-f{args.release}&arch=$basearch")
config_file = os.path.join(workspace, "dnf.conf")
with open(config_file, "w") as f:
- f.write("""\
+ f.write(f"""\
[main]
gpgcheck=1
@@ -1087,43 +1406,48 @@ gpgkey={gpg_key}
name=Fedora {args.release} - updates
{updates_url}
gpgkey={gpg_key}
-""".format(args=args,
- gpg_key=gpg_key,
- release_url=release_url,
- updates_url=updates_url))
+""")
+ packages = ['fedora-release', 'glibc-minimal-langpack']
+ packages += args.packages or []
+ if args.bootable:
+ packages += ['kernel-core', 'systemd-udev', 'binutils']
+ if run_build_script:
+ packages += args.build_packages or []
invoke_dnf(args, workspace,
- args.repositories if args.repositories else ["fedora", "updates"],
- ["systemd", "fedora-release", "passwd"],
- ["kernel", "systemd-udev", "binutils"],
+ args.repositories or ["fedora", "updates"],
+ packages,
config_file)
+ with open(os.path.join(workspace, 'root', 'etc/locale.conf'), 'w') as f:
+ f.write('LANG=C.UTF-8\n')
+
reenable_kernel_install(args, workspace, masked)
-@complete_step('Installing Mageia')
-def install_mageia(args, workspace, run_build_script):
+@complete_step('Installing Mageia')
+def install_mageia(args: CommandLineArguments, workspace: str, run_build_script: bool) -> None:
masked = disable_kernel_install(args, workspace)
     # Mageia does not (yet) have its RPM GPG key on the web
gpg_key = '/etc/pki/rpm-gpg/RPM-GPG-KEY-Mageia'
if os.path.exists(gpg_key):
- gpg_key = "file://%s" % gpg_key
+ gpg_key = f'file://{gpg_key}'
# else:
-# gpg_key = "https://getfedora.org/static/%s.txt" % FEDORA_KEYS_MAP[args.release]
+# gpg_key = "https://getfedora.org/static/{}.txt".format(FEDORA_KEYS_MAP[args.releasever])
if args.mirror:
- baseurl = "{args.mirror}/distrib/{args.release}/x86_64/media/core/".format(args=args)
- release_url = "baseurl=%s/release/" % baseurl
- updates_url = "baseurl=%s/updates/" % baseurl
+ baseurl = f"{args.mirror}/distrib/{args.release}/x86_64/media/core/"
+ release_url = f"baseurl={baseurl}/release/"
+ updates_url = f"baseurl={baseurl}/updates/"
else:
- baseurl = "https://www.mageia.org/mirrorlist/?release={args.release}&arch=x86_64&section=core".format(args=args)
- release_url = "mirrorlist=%s&repo=release" % baseurl
- updates_url = "mirrorlist=%s&repo=updates" % baseurl
+ baseurl = f"https://www.mageia.org/mirrorlist/?release={args.release}&arch=x86_64&section=core"
+ release_url = f"mirrorlist={baseurl}&repo=release"
+ updates_url = f"mirrorlist={baseurl}&repo=updates"
config_file = os.path.join(workspace, "dnf.conf")
with open(config_file, "w") as f:
- f.write("""\
+ f.write(f"""\
[main]
gpgcheck=1
@@ -1136,23 +1460,28 @@ gpgkey={gpg_key}
name=Mageia {args.release} Core Updates
{updates_url}
gpgkey={gpg_key}
-""".format(args=args,
- gpg_key=gpg_key,
- release_url=release_url,
- updates_url=updates_url))
+""")
+ packages = ["basesystem-minimal"]
+ if args.bootable:
+ packages += ["kernel-server-latest", "binutils"]
invoke_dnf(args, workspace,
args.repositories if args.repositories else ["mageia", "updates"],
- ["basesystem-minimal"],
- ["kernel-server-latest", "binutils"],
+ packages,
config_file)
reenable_kernel_install(args, workspace, masked)
-def invoke_yum(args, workspace, repositories, base_packages, boot_packages, config_file):
+def invoke_yum(args: CommandLineArguments,
+ workspace: str,
+ repositories: List[str],
+ packages: List[str],
+ config_file: str) -> None:
repos = ["--enablerepo=" + repo for repo in repositories]
+ packages = make_rpm_list(args, packages)
+
root = os.path.join(workspace, "root")
cmdline = ["yum",
"-y",
@@ -1163,64 +1492,49 @@ def invoke_yum(args, workspace, repositories, base_packages, boot_packages, conf
*repos,
"--setopt=keepcache=1"]
- # Turn off docs, but not during the development build, as dnf currently has problems with that
- if not args.with_docs and not run_build_script:
- cmdline.append("--setopt=tsflags=nodocs")
-
- cmdline.extend([
- "install",
- *base_packages
- ])
-
- cmdline.extend(args.packages)
-
- if run_build_script:
- cmdline.extend(args.build_packages)
-
- if args.bootable:
- cmdline.extend(boot_packages)
-
- # Temporary hack: dracut only adds crypto support to the initrd, if the cryptsetup binary is installed
- if args.encrypt or args.verity:
- cmdline.append("cryptsetup")
+ if args.architecture is not None:
+ cmdline += [f'--forcearch={args.architecture}']
- if args.output_format == OutputFormat.raw_gpt:
- cmdline.append("e2fsprogs")
+ if not args.with_docs:
+ cmdline.append("--setopt=tsflags=nodocs")
- if args.output_format == OutputFormat.raw_btrfs:
- cmdline.append("btrfs-progs")
+ cmdline += ['install', *packages]
with mount_api_vfs(args, workspace):
run(cmdline, check=True)
-def invoke_dnf_or_yum(args, workspace, repositories, base_packages, boot_packages, config_file):
+def invoke_dnf_or_yum(args: CommandLineArguments,
+ workspace: str,
+ repositories: List[str],
+ packages: List[str],
+ config_file: str) -> None:
if shutil.which("dnf") is None:
- invoke_yum(args, workspace, repositories, base_packages, boot_packages, config_file)
+ invoke_yum(args, workspace, repositories, packages, config_file)
else:
- invoke_dnf(args, workspace, repositories, base_packages, boot_packages, config_file)
+ invoke_dnf(args, workspace, repositories, packages, config_file)
-@complete_step('Installing CentOS')
-def install_centos(args, workspace, run_build_script):
+@complete_step('Installing CentOS')
+def install_centos(args: CommandLineArguments, workspace: str, run_build_script: bool) -> None:
masked = disable_kernel_install(args, workspace)
- gpg_key = "/etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-%s" % args.release
+ gpg_key = f"/etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-{args.release}"
if os.path.exists(gpg_key):
- gpg_key = "file://%s" % gpg_key
+ gpg_key = f'file://{gpg_key}'
else:
- gpg_key = "https://www.centos.org/keys/RPM-GPG-KEY-CentOS-%s" % args.release
+ gpg_key = f'https://www.centos.org/keys/RPM-GPG-KEY-CentOS-{args.release}'
if args.mirror:
- release_url = "baseurl={args.mirror}/centos/{args.release}/os/x86_64".format(args=args)
- updates_url = "baseurl={args.mirror}/cenots/{args.release}/updates/x86_64/".format(args=args)
+ release_url = f"baseurl={args.mirror}/centos/{args.release}/os/x86_64"
+ updates_url = f"baseurl={args.mirror}/centos/{args.release}/updates/x86_64/"
else:
- release_url = "mirrorlist=http://mirrorlist.centos.org/?release={args.release}&arch=x86_64&repo=os".format(args=args)
- updates_url = "mirrorlist=http://mirrorlist.centos.org/?release={args.release}&arch=x86_64&repo=updates".format(args=args)
+ release_url = f"mirrorlist=http://mirrorlist.centos.org/?release={args.release}&arch=x86_64&repo=os"
+ updates_url = f"mirrorlist=http://mirrorlist.centos.org/?release={args.release}&arch=x86_64&repo=updates"
config_file = os.path.join(workspace, "yum.conf")
with open(config_file, "w") as f:
- f.write("""\
+ f.write(f"""\
[main]
gpgcheck=1
@@ -1233,35 +1547,50 @@ gpgkey={gpg_key}
name=CentOS-{args.release} - Updates
{updates_url}
gpgkey={gpg_key}
-""".format(args=args,
- gpg_key=gpg_key,
- release_url=release_url,
- updates_url=updates_url))
+""")
+ packages = ['centos-release']
+ packages += args.packages or []
+ if args.bootable:
+ packages += ["kernel", "systemd-udev", "binutils"]
invoke_dnf_or_yum(args, workspace,
- args.repositories if args.repositories else ["base", "updates"],
- ["systemd", "centos-release", "passwd"],
- ["kernel", "systemd-udev", "binutils"],
+ args.repositories or ["base", "updates"],
+ packages,
config_file)
reenable_kernel_install(args, workspace, masked)
-def install_debian_or_ubuntu(args, workspace, run_build_script, mirror):
- if args.repositories:
- components = ','.join(args.repositories)
- else:
- components = 'main'
+
+def debootstrap_knows_arg(arg: str) -> bool:
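+    # Older debootstrap versions lack some options; probe by running
+    # "debootstrap <arg>" and checking for an "invalid option" complaint.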
+ return bytes("invalid option", "UTF-8") not in run(["debootstrap", arg], stdout=PIPE).stdout
+
+def install_debian_or_ubuntu(args: CommandLineArguments,
+ workspace: str,
+ *,
+ run_build_script: bool,
+ mirror: str) -> None:
+ repos = args.repositories if args.repositories else ["main"]
+ # Ubuntu needs the 'universe' repo to install 'dracut'
+ if args.distribution == Distribution.ubuntu and args.bootable and 'universe' not in repos:
+ repos.append('universe')
+
cmdline = ["debootstrap",
"--verbose",
- "--merged-usr",
"--variant=minbase",
"--include=systemd-sysv",
"--exclude=sysv-rc,initscripts,startpar,lsb-base,insserv",
- "--components=" + components,
- args.release,
- workspace + "/root",
- mirror]
- if args.bootable and args.output_format == OutputFormat.raw_btrfs:
+ "--components=" + ','.join(repos)]
+
+ # Let's use --merged-usr and --no-check-valid-until only if debootstrap knows it
+ for arg in ["--merged-usr", "--no-check-valid-until"]:
+ if debootstrap_knows_arg(arg):
+ cmdline += [arg]
+
+ cmdline += [args.release,
+ workspace + "/root",
+ mirror]
+
+ if args.bootable and args.output_format == OutputFormat.gpt_btrfs:
cmdline[4] += ",btrfs-tools"
run(cmdline, check=True)
@@ -1269,7 +1598,7 @@ def install_debian_or_ubuntu(args, workspace, run_build_script, mirror):
# Debootstrap is not smart enough to deal correctly with alternative dependencies
# Installing libpam-systemd via debootstrap results in systemd-shim being installed
# Therefore, prefer to install via apt from inside the container
- extra_packages = [ 'dbus', 'libpam-systemd']
+ extra_packages = ['dbus', 'libpam-systemd']
# Also install extra packages via the secondary APT run, because it is smarter and
# can deal better with any conflicts
@@ -1284,69 +1613,97 @@ def install_debian_or_ubuntu(args, workspace, run_build_script, mirror):
f.write("hostonly=no")
if args.bootable:
- extra_packages += ["linux-image-amd64", "dracut"]
+ extra_packages += ["dracut"]
+ if args.distribution == Distribution.ubuntu:
+ extra_packages += ["linux-generic"]
+ else:
+ extra_packages += ["linux-image-amd64"]
+
+ if args.bios_partno:
+ extra_packages += ["grub-pc"]
+
+ # Debian policy is to start daemons by default.
+    # The policy-rc.d script can be used to choose which ones to start
+ # Let's install one that denies all daemon startups
+ # See https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
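+    # (exit status 101 means "action forbidden by policy")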
+    # Note: despite residing in /usr/sbin, this file is not shipped by
+    # the OS and is instead meant to be managed by the admin.
+ policyrcd = os.path.join(workspace, "root/usr/sbin/policy-rc.d")
+ with open(policyrcd, "w") as f:
+ f.write("#!/bin/sh\n")
+ f.write("exit 101")
+ os.chmod(policyrcd, 0o755)
+ dracut_bug_comment = [
+ '# Work around "Failed to find module \'crc32c\'" dracut issue\n',
+ '# See also:\n',
+ '# - https://github.com/antonio-petricca/buddy-linux/issues/2#issuecomment-404505527\n',
+ '# - https://bugs.launchpad.net/ubuntu/+source/dracut/+bug/1781143\n',
+ ]
+ dracut_bug_conf = os.path.join(workspace, "root/etc/dpkg/dpkg.cfg.d/01_no_dracut_10-debian")
+ with open(dracut_bug_conf, "w") as f:
+ f.writelines(dracut_bug_comment + ['path-exclude /etc/dracut.conf.d/10-debian.conf\n'])
+
+ doc_paths = [
+ '/usr/share/locale',
+ '/usr/share/doc',
+ '/usr/share/man',
+ '/usr/share/groff',
+ '/usr/share/info',
+ '/usr/share/lintian',
+ '/usr/share/linda',
+ ]
+ if not args.with_docs:
+ # Remove documentation installed by debootstrap
+ cmdline = ["/bin/rm", "-rf"] + doc_paths
+ run_workspace_command(args, workspace, *cmdline)
+ # Create dpkg.cfg to ignore documentation on new packages
+ dpkg_conf = os.path.join(workspace, "root/etc/dpkg/dpkg.cfg.d/01_nodoc")
+ with open(dpkg_conf, "w") as f:
+ f.writelines(f'path-exclude {d}/*\n' for d in doc_paths)
+
+ cmdline = ["/usr/bin/apt-get", "--assume-yes", "--no-install-recommends", "install"] + extra_packages
+ env = {
+ 'DEBIAN_FRONTEND': 'noninteractive',
+ 'DEBCONF_NONINTERACTIVE_SEEN': 'true',
+ }
+    run_workspace_command(args, workspace, *cmdline, network=True, env=env)
+ os.unlink(policyrcd)
- if extra_packages:
- # Debian policy is to start daemons by default.
- # The policy-rc.d script can be used choose which ones to start
- # Let's install one that denies all daemon startups
- # See https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
- # Note: despite writing in /usr/sbin, this file is not shipped by the OS
- # and instead should be managed by the admin.
- policyrcd = os.path.join(workspace, "root/usr/sbin/policy-rc.d")
- with open(policyrcd, "w") as f:
- f.write("#!/bin/sh\n")
- f.write("exit 101")
- os.chmod(policyrcd, 0o755)
- if not args.with_docs:
- # Create dpkg.cfg to ingore documentation
- dpkg_conf = os.path.join(workspace, "root/etc/dpkg/dpkg.cfg.d/01_nodoc")
- with open(dpkg_conf, "w") as f:
- f.writelines([
- 'path-exclude /usr/share/locale/*\n',
- 'path-exclude /usr/share/doc/*\n',
- 'path-exclude /usr/share/man/*\n',
- 'path-exclude /usr/share/groff/*\n',
- 'path-exclude /usr/share/info/*\n',
- 'path-exclude /usr/share/lintian/*\n',
- 'path-exclude /usr/share/linda/*\n',
- ])
-
- cmdline = ["/usr/bin/apt-get", "--assume-yes", "--no-install-recommends", "install"] + extra_packages
- run_workspace_command(args, workspace, network=True, env={'DEBIAN_FRONTEND': 'noninteractive', 'DEBCONF_NONINTERACTIVE_SEEN': 'true'}, *cmdline)
- os.unlink(policyrcd)
-
-@complete_step('Installing Debian')
-def install_debian(args, workspace, run_build_script):
- install_debian_or_ubuntu(args, workspace, run_build_script, args.mirror)
-
-@complete_step('Installing Ubuntu')
-def install_ubuntu(args, workspace, run_build_script):
- install_debian_or_ubuntu(args, workspace, run_build_script, args.mirror)
-
-@complete_step('Installing Arch Linux')
-def install_arch(args, workspace, run_build_script):
- if args.release is not None:
- sys.stderr.write("Distribution release specification is not supported for Arch Linux, ignoring.\n")
- keyring = "archlinux"
+@complete_step('Installing Debian')
+def install_debian(args: CommandLineArguments, workspace: str, run_build_script: bool) -> None:
+ install_debian_or_ubuntu(args, workspace, run_build_script=run_build_script, mirror=args.mirror)
- if platform.machine() == "aarch64":
- keyring += "arm"
- run(["pacman-key", "--nocolor", "--init"], check=True)
- run(["pacman-key", "--nocolor", "--populate", keyring], check=True)
+@complete_step('Installing Ubuntu')
+def install_ubuntu(args: CommandLineArguments, workspace: str, run_build_script: bool) -> None:
+ install_debian_or_ubuntu(args, workspace, run_build_script=run_build_script, mirror=args.mirror)
+
+
+@complete_step('Installing Arch Linux')
+def install_arch(args: CommandLineArguments, workspace: str, run_build_script: bool) -> None:
+ if args.release is not None:
+ sys.stderr.write("Distribution release specification is not supported for Arch Linux, ignoring.\n")
if platform.machine() == "aarch64":
- server = "Server = {}/$arch/$repo".format(args.mirror)
+ server = f"Server = {args.mirror}/$arch/$repo"
else:
- server = "Server = {}/$repo/os/$arch".format(args.mirror)
+ server = f"Server = {args.mirror}/$repo/os/$arch"
- with open(os.path.join(workspace, "pacman.conf"), "w") as f:
- f.write("""\
+ root = os.path.join(workspace, "root")
+ # Create base layout for pacman and pacman-key
+ os.makedirs(os.path.join(root, "var/lib/pacman"), 0o755, exist_ok=True)
+ os.makedirs(os.path.join(root, "etc/pacman.d/gnupg"), 0o755, exist_ok=True)
+
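+    # Point pacman's paths into the image tree so the host's pacman
+    # databases and keyring are never touched.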
+ pacman_conf = os.path.join(workspace, "pacman.conf")
+ with open(pacman_conf, "w") as f:
+ f.write(f"""\
[options]
+RootDir = {root}
LogFile = /dev/null
-HookDir = /no_hook/
+CacheDir = {root}/var/cache/pacman/pkg/
+GPGDir = {root}/etc/pacman.d/gnupg/
+HookDir = {root}/etc/pacman.d/hooks/
HoldPkg = pacman glibc
Architecture = auto
UseSyslog
@@ -1362,63 +1719,115 @@ SigLevel = Required DatabaseOptional
[community]
{server}
-""".format(args=args, server=server))
+""")
+
+ def run_pacman(args: List[str], **kwargs: Any) -> subprocess.CompletedProcess:
+ cmdline = [
+ "pacman",
+ "--noconfirm",
+ "--color", "never",
+ "--config", pacman_conf,
+ ]
+ return run(cmdline + args, **kwargs, check=True)
+
+ def run_pacman_key(args: List[str]) -> subprocess.CompletedProcess:
+ cmdline = [
+ "pacman-key",
+ "--nocolor",
+ "--config", pacman_conf,
+ ]
+ return run(cmdline + args, check=True)
+
+ def run_pacstrap(packages: Set[str]) -> None:
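+        # -d: allow installing into a plain directory, -G: do not copy the
+        # host's pacman keyring, -M: do not copy the host's mirrorlist.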
+ cmdline = ["pacstrap", "-C", pacman_conf, "-dGM", root]
+ run(cmdline + list(packages), check=True)
- run(["pacman", "--color", "never", "--config", os.path.join(workspace, "pacman.conf"), "-Sy"], check=True)
- # determine base packages list from base metapackage
- c = run(["pacman", "--color", "never", "--config", os.path.join(workspace, "pacman.conf"), "-Sg", "base"], stdout=PIPE, universal_newlines=True, check=True)
+ keyring = "archlinux"
+ if platform.machine() == "aarch64":
+ keyring += "arm"
+ run_pacman_key(["--init"])
+ run_pacman_key(["--populate", keyring])
+
+ run_pacman(["-Sy"])
+ # determine base packages list from base group
+ c = run_pacman(["-Sqg", "base"], stdout=PIPE, universal_newlines=True)
packages = set(c.stdout.split())
- packages.remove("base")
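+    # Prune packages that make little sense in a stripped-down image;
+    # several are re-added below when the chosen output format,
+    # encryption or bootability actually needs them.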
+ packages -= {
+ "cryptsetup",
+ "device-mapper",
+ "dhcpcd",
+ "e2fsprogs",
+ "jfsutils",
+ "linux",
+ "lvm2",
+ "man-db",
+ "man-pages",
+ "mdadm",
+ "netctl",
+ "reiserfsprogs",
+ "xfsprogs",
+ }
- official_kernel_packages = [
+ official_kernel_packages = {
"linux",
"linux-lts",
"linux-hardened",
- "linux-zen"
- ]
+ "linux-zen",
+ }
- kernel_packages = {"linux"}
- if args.packages:
- kernel_packages = set.intersection(set(args.packages), set(official_kernel_packages))
- # prefer user-specified packages over implicit base kernel
- if kernel_packages and "linux" not in kernel_packages:
- packages.remove("linux")
- if len(kernel_packages) > 1:
- warn('More than one kernel will be installed: {}', ' '.join(kernel_packages))
-
- packages -= {"device-mapper",
- "dhcpcd",
- "e2fsprogs",
- "jfsutils",
- "lvm2",
- "man-db",
- "man-pages",
- "mdadm",
- "netctl",
- "pcmciautils",
- "reiserfsprogs",
- "xfsprogs"}
+ kernel_packages = official_kernel_packages.intersection(args.packages)
+ if len(kernel_packages) > 1:
+ warn('More than one kernel will be installed: {}', ' '.join(kernel_packages))
if args.bootable:
- if args.output_format == OutputFormat.raw_gpt:
+ if args.output_format == OutputFormat.gpt_ext4:
packages.add("e2fsprogs")
- elif args.output_format == OutputFormat.raw_btrfs:
+ elif args.output_format == OutputFormat.gpt_btrfs:
packages.add("btrfs-progs")
- else:
- packages -= kernel_packages
+ elif args.output_format == OutputFormat.gpt_xfs:
+ packages.add("xfsprogs")
+ if args.encrypt:
+ packages.add("cryptsetup")
+ packages.add("device-mapper")
+ if not kernel_packages:
+ # No user-specified kernel
+ kernel_packages.add("linux")
+ if args.bios_partno:
+ packages.add("grub")
+
+ packages.add("mkinitcpio")
+
+ # Set up system with packages from the base group
+ run_pacstrap(packages)
- packages |= set(args.packages)
+ if args.bootable:
+ # Patch mkinitcpio configuration so:
+ # 1) we remove autodetect and
+ # 2) we add the modules needed for encrypt.
+ def jj(line: str) -> str:
+ if line.startswith("HOOKS="):
+ if args.encrypt == "all":
+ return 'HOOKS="systemd modconf block sd-encrypt filesystems keyboard fsck"\n'
+ else:
+ return 'HOOKS="systemd modconf block filesystems fsck"\n'
+ return line
+ patch_file(os.path.join(workspace, "root", "etc/mkinitcpio.conf"), jj)
- if run_build_script:
- packages |= set(args.build_packages)
+ # Install the user-specified packages and kernel
+ packages = set(args.packages)
+ if args.bootable:
+ packages |= kernel_packages
- cmdline = ["pacstrap",
- "-C", os.path.join(workspace, "pacman.conf"),
- "-d",
- workspace + "/root",
- *packages]
+ if run_build_script:
+ packages.update(args.build_packages)
+ # Remove already installed packages
+ c = run_pacman(['-Qq'], stdout=PIPE, universal_newlines=True)
+ packages.difference_update(c.stdout.split())
+ if packages:
+ run_pacstrap(packages)
- run(cmdline, check=True)
+ # Kill the gpg-agent used by pacman and pacman-key
+ run(['gpg-connect-agent', '--homedir', os.path.join(root, 'etc/pacman.d/gnupg'), 'KILLAGENT', '/bye'])
if "networkmanager" in args.packages:
enable_networkmanager(workspace)
@@ -1433,9 +1842,12 @@ SigLevel = Required DatabaseOptional
with open(os.path.join(workspace, 'root', 'etc/locale.conf'), 'w') as f:
f.write('LANG=en_US.UTF-8\n')
-@complete_step('Installing openSUSE')
-def install_opensuse(args, workspace, run_build_script):
+    # At this point, no process should be left running, so kill them
+ run(["fuser", "-c", root, "--kill"])
+
+@complete_step('Installing openSUSE')
+def install_opensuse(args: CommandLineArguments, workspace: str, run_build_script: bool) -> None:
root = os.path.join(workspace, "root")
release = args.release.strip('"')
@@ -1445,14 +1857,14 @@ def install_opensuse(args, workspace, run_build_script):
# let's default to Leap.
#
if release.isdigit() or release == "tumbleweed":
- release_url = "{}/tumbleweed/repo/oss/".format(args.mirror)
- updates_url = "{}/update/tumbleweed/".format(args.mirror)
+ release_url = f"{args.mirror}/tumbleweed/repo/oss/"
+ updates_url = f"{args.mirror}/update/tumbleweed/"
elif release.startswith("13."):
- release_url = "{}/distribution/{}/repo/oss/".format(args.mirror, release)
- updates_url = "{}/update/{}/".format(args.mirror, release)
+ release_url = f"{args.mirror}/distribution/{release}/repo/oss/"
+ updates_url = f"{args.mirror}/update/{release}/"
else:
- release_url = "{}/distribution/leap/{}/repo/oss/".format(args.mirror, release)
- updates_url = "{}/update/leap/{}/oss/".format(args.mirror, release)
+ release_url = f"{args.mirror}/distribution/leap/{release}/repo/oss/"
+ updates_url = f"{args.mirror}/update/leap/{release}/oss/"
#
# Configure the repositories: we need to enable packages caching
@@ -1477,7 +1889,7 @@ def install_opensuse(args, workspace, run_build_script):
#
# Now install the additional packages if necessary.
#
- extra_packages = []
+ extra_packages: List[str] = []
if args.bootable:
extra_packages += ["kernel-default"]
@@ -1485,7 +1897,7 @@ def install_opensuse(args, workspace, run_build_script):
if args.encrypt:
extra_packages += ["device-mapper"]
- if args.output_format in (OutputFormat.subvolume, OutputFormat.raw_btrfs):
+ if args.output_format in (OutputFormat.subvolume, OutputFormat.gpt_btrfs):
extra_packages += ["btrfsprogs"]
extra_packages.extend(args.packages)
@@ -1514,28 +1926,33 @@ def install_opensuse(args, workspace, run_build_script):
f.write("hostonly=no\n")
# dracut from openSUSE is missing upstream commit 016613c774baf.
- with open(os.path.join(root, "etc/kernel/cmdline"), "w") as cmdline:
- cmdline.write(args.kernel_commandline + " root=/dev/gpt-auto-root\n")
+ with open(os.path.join(root, "etc/kernel/cmdline"), "w") as cmdlinefile:
+ cmdlinefile.write(args.kernel_command_line + " root=/dev/gpt-auto-root\n")
-def install_distribution(args, workspace, run_build_script, cached):
+def install_distribution(args: CommandLineArguments,
+ workspace: str,
+ *,
+ run_build_script: bool,
+ cached: bool) -> None:
if cached:
return
- install = {
- Distribution.fedora : install_fedora,
- Distribution.centos : install_centos,
- Distribution.mageia : install_mageia,
- Distribution.debian : install_debian,
- Distribution.ubuntu : install_ubuntu,
- Distribution.arch : install_arch,
- Distribution.opensuse : install_opensuse,
- Distribution.clear : install_clear,
+ install: Dict[Distribution, Callable[[CommandLineArguments, str, bool], None]] = {
+ Distribution.fedora: install_fedora,
+ Distribution.centos: install_centos,
+ Distribution.mageia: install_mageia,
+ Distribution.debian: install_debian,
+ Distribution.ubuntu: install_ubuntu,
+ Distribution.arch: install_arch,
+ Distribution.opensuse: install_opensuse,
+ Distribution.clear: install_clear,
}
install[args.distribution](args, workspace, run_build_script)
-def reset_machine_id(args, workspace, run_build_script, for_cache):
+
+def reset_machine_id(args: CommandLineArguments, workspace: str, run_build_script: bool, for_cache: bool) -> None:
"""Make /etc/machine-id an empty file.
This way, on the next boot is either initialized and committed (if /etc is
@@ -1563,7 +1980,19 @@ def reset_machine_id(args, workspace, run_build_script, for_cache):
else:
os.symlink('../../../etc/machine-id', dbus_machine_id)
-def set_root_password(args, workspace, run_build_script, for_cache):
+
+def reset_random_seed(args: CommandLineArguments, workspace: str) -> None:
+ """Remove random seed file, so that it is initialized on first boot"""
+
+ with complete_step('Removing random seed'):
+ random_seed = os.path.join(workspace, 'root', 'var/lib/systemd/random-seed')
+ try:
+ os.unlink(random_seed)
+ except FileNotFoundError:
+ pass
+
+
+def set_root_password(args: CommandLineArguments, workspace: str, run_build_script: bool, for_cache: bool) -> None:
"Set the root account password, or just delete it so it's easy to log in"
if run_build_script:
@@ -1572,25 +2001,32 @@ def set_root_password(args, workspace, run_build_script, for_cache):
return
if args.password == '':
- print_step("Deleting root password...")
- jj = lambda line: (':'.join(['root', ''] + line.split(':')[2:])
- if line.startswith('root:') else line)
- patch_file(os.path.join(workspace, 'root', 'etc/passwd'), jj)
+ with complete_step("Deleting root password"):
+ def jj(line: str) -> str:
+ if line.startswith('root:'):
+ return ':'.join(['root', ''] + line.split(':')[2:])
+ return line
+ patch_file(os.path.join(workspace, 'root', 'etc/passwd'), jj)
elif args.password:
- print_step("Setting root password...")
- password = crypt.crypt(args.password, crypt.mksalt(crypt.METHOD_SHA512))
- jj = lambda line: (':'.join(['root', password] + line.split(':')[2:])
- if line.startswith('root:') else line)
- patch_file(os.path.join(workspace, 'root', 'etc/shadow'), jj)
+ with complete_step("Setting root password"):
+ password = crypt.crypt(args.password, crypt.mksalt(crypt.METHOD_SHA512))
+
+ def jj(line: str) -> str:
+ if line.startswith('root:'):
+ return ':'.join(['root', password] + line.split(':')[2:])
+ return line
+ patch_file(os.path.join(workspace, 'root', 'etc/shadow'), jj)
-def run_postinst_script(args, workspace, run_build_script, for_cache):
+def run_postinst_script(args: CommandLineArguments, workspace: str, run_build_script: bool, for_cache: bool) -> None:
if args.postinst_script is None:
return
if for_cache:
return
- with complete_step('Running post installation script'):
+ verb = "build" if run_build_script else "final"
+
+ with complete_step('Running postinstall script'):
# We copy the postinst script into the build tree. We'd prefer
# mounting it into the tree, but for that we'd need a good
@@ -1600,85 +2036,164 @@ def run_postinst_script(args, workspace, run_build_script, for_cache):
shutil.copy2(args.postinst_script,
os.path.join(workspace, "root", "root/postinst"))
- run_workspace_command(args, workspace, "/root/postinst", "build" if run_build_script else "final", network=args.with_network)
+ run_workspace_command(args, workspace, "/root/postinst", verb, network=args.with_network)
os.unlink(os.path.join(workspace, "root", "root/postinst"))
-def find_kernel_file(workspace_root, pattern):
+
+def run_finalize_script(args: CommandLineArguments, workspace: str, *, verb: str) -> None:
+ if args.finalize_script is None:
+ return
+
+ with complete_step('Running finalize script'):
+ buildroot = workspace + '/root'
+ env = collections.ChainMap({'BUILDROOT': buildroot}, os.environ)
+ run([args.finalize_script, verb], env=env, check=True)
+
+
+def find_kernel_file(workspace_root: str, pattern: str) -> Optional[str]:
# Look for the vmlinuz file in the workspace
workspace_pattern = os.path.join(workspace_root, pattern.lstrip('/'))
kernel_files = sorted(glob.glob(workspace_pattern))
kernel_file = kernel_files[0]
- # The path the kernel-install script expects is within the workspace reference as it is run from within the container
+ # The path the kernel-install script expects is within the
+ # workspace reference as it is run from within the container
if kernel_file.startswith(workspace_root):
kernel_file = kernel_file[len(workspace_root):]
else:
- sys.stderr.write('Error, kernel file %s cannot be used as it is not in the workspace\n' % kernel_file)
- return
+ sys.stderr.write(f'Error, kernel file {kernel_file} cannot be used as it is not in the workspace\n')
+ return None
if len(kernel_files) > 1:
warn('More than one kernel file found, will use {}', kernel_file)
return kernel_file
-def install_boot_loader_arch(args, workspace):
- patch_file(os.path.join(workspace, "root", "etc/mkinitcpio.conf"),
- lambda line: "HOOKS=\"systemd modconf block sd-encrypt filesystems keyboard fsck\"\n" if line.startswith("HOOKS=") and args.encrypt == "all" else
- "HOOKS=\"systemd modconf block filesystems fsck\"\n" if line.startswith("HOOKS=") else
- line)
- workspace_root = os.path.join(workspace, "root")
- kernel_version = next(filter(lambda x: x[0].isdigit(), os.listdir(os.path.join(workspace_root, "lib/modules"))))
- run_workspace_command(args, workspace, "/usr/bin/kernel-install", "add", kernel_version, find_kernel_file(workspace_root, "/boot/vmlinuz-*"))
+def install_grub(args: CommandLineArguments, workspace: str, loopdev: str, grub: str) -> None:
+ if args.bios_partno is None:
+ return
+
+ grub_cmdline = f'GRUB_CMDLINE_LINUX="{args.kernel_command_line}"\n'
+ os.makedirs(os.path.join(workspace, "root", "etc/default"), exist_ok=True, mode=0o755)
+ if not os.path.exists(os.path.join(workspace, "root", "etc/default/grub")):
+ with open(os.path.join(workspace, "root", "etc/default/grub"), "w+") as f:
+ f.write(grub_cmdline)
+ else:
+ def jj(line: str) -> str:
+ if line.startswith("GRUB_CMDLINE_LINUX="):
+ return grub_cmdline
+ return line
+ patch_file(os.path.join(workspace, "root", "etc/default/grub"), jj)
+
+ nspawn_params = [
+ "--bind-ro=/dev",
+ "--property=DeviceAllow=" + loopdev,
+ ]
+ if args.root_partno is not None:
+ nspawn_params += ["--property=DeviceAllow=" + partition(loopdev, args.root_partno)]
+
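+    # grub-install and grub-mkconfig run inside the image and write to
+    # the loop device directly, hence the DeviceAllow= grants above.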
+ run_workspace_command(
+ args, workspace, f"{grub}-install",
+ "--modules=ext2 part_gpt", "--target=i386-pc",
+ loopdev, nspawn_params=nspawn_params)
+
+ run_workspace_command(
+ args, workspace, f"{grub}-mkconfig",
+ f"--output=/boot/{grub}/grub.cfg",
+ nspawn_params=nspawn_params)
+
+
+def install_boot_loader_fedora(args: CommandLineArguments, workspace: str, loopdev: str) -> None:
+ install_grub(args, workspace, loopdev, "grub2")
+
+
+def install_boot_loader_arch(args: CommandLineArguments, workspace: str, loopdev: str) -> None:
+ if "uefi" in args.boot_protocols:
+ # add loader entries and copy kernel/initrd under that entry
+ workspace_root = os.path.join(workspace, "root")
+ kernel_version = next(filter(lambda x: x[0].isdigit(),
+ os.listdir(os.path.join(workspace_root, "lib/modules"))))
+ kernel_file = find_kernel_file(workspace_root, "/boot/vmlinuz-*")
+ if kernel_file is not None:
+ run_workspace_command(args, workspace, "/usr/bin/kernel-install", "add", kernel_version, kernel_file)
+
+ if "bios" in args.boot_protocols:
+ install_grub(args, workspace, loopdev, "grub")
-def install_boot_loader_debian(args, workspace):
- kernel_version = next(filter(lambda x: x[0].isdigit(), os.listdir(os.path.join(workspace, "root", "lib/modules"))))
- run_workspace_command(args, workspace,
- "/usr/bin/kernel-install", "add", kernel_version, "/boot/vmlinuz-" + kernel_version)
+def install_boot_loader_debian(args: CommandLineArguments, workspace: str, loopdev: str) -> None:
+ if "uefi" in args.boot_protocols:
+ kernel_version = next(filter(lambda x: x[0].isdigit(), os.listdir(os.path.join(workspace, "root", "lib/modules"))))
-def install_boot_loader_opensuse(args, workspace):
- install_boot_loader_debian(args, workspace)
+ run_workspace_command(args, workspace,
+ "/usr/bin/kernel-install", "add", kernel_version, "/boot/vmlinuz-" + kernel_version)
-def install_boot_loader_clear(args, workspace, loopdev):
+ if "bios" in args.boot_protocols:
+ install_grub(args, workspace, loopdev, "grub")
+
+
+def install_boot_loader_ubuntu(args: CommandLineArguments, workspace: str, loopdev: str) -> None:
+ install_boot_loader_debian(args, workspace, loopdev)
+
+
+def install_boot_loader_opensuse(args: CommandLineArguments, workspace: str, loopdev: str) -> None:
+ install_boot_loader_debian(args, workspace, loopdev)
+
+
+def install_boot_loader_clear(args: CommandLineArguments, workspace: str, loopdev: str) -> None:
nspawn_params = [
# clr-boot-manager uses blkid in the device backing "/" to
# figure out uuid and related parameters.
"--bind-ro=/dev",
- "--property=DeviceAllow=" + loopdev,
- "--property=DeviceAllow=" + partition(loopdev, args.esp_partno),
- "--property=DeviceAllow=" + partition(loopdev, args.root_partno),
# clr-boot-manager compiled in Clear Linux will assume EFI
# partition is mounted in "/boot".
"--bind=" + os.path.join(workspace, "root/efi") + ":/boot",
]
+ if loopdev is not None:
+ nspawn_params += ["--property=DeviceAllow=" + loopdev]
+ if args.esp_partno is not None:
+ nspawn_params += ["--property=DeviceAllow=" + partition(loopdev, args.esp_partno)]
+ if args.root_partno is not None:
+ nspawn_params += ["--property=DeviceAllow=" + partition(loopdev, args.root_partno)]
+
run_workspace_command(args, workspace, "/usr/bin/clr-boot-manager", "update", "-i", nspawn_params=nspawn_params)
-def install_boot_loader(args, workspace, loopdev, cached):
+
+def install_boot_loader(args: CommandLineArguments, workspace: str, loopdev: Optional[str], cached: bool) -> None:
if not args.bootable:
return
+ assert loopdev is not None
if cached:
return
with complete_step("Installing boot loader"):
- shutil.copyfile(os.path.join(workspace, "root", "usr/lib/systemd/boot/efi/systemd-bootx64.efi"),
- os.path.join(workspace, "root", "boot/efi/EFI/systemd/systemd-bootx64.efi"))
+ if args.esp_partno:
+ shutil.copyfile(os.path.join(workspace, "root", "usr/lib/systemd/boot/efi/systemd-bootx64.efi"),
+ os.path.join(workspace, "root", "boot/efi/EFI/systemd/systemd-bootx64.efi"))
- shutil.copyfile(os.path.join(workspace, "root", "usr/lib/systemd/boot/efi/systemd-bootx64.efi"),
- os.path.join(workspace, "root", "boot/efi/EFI/BOOT/bootx64.efi"))
+ shutil.copyfile(os.path.join(workspace, "root", "usr/lib/systemd/boot/efi/systemd-bootx64.efi"),
+ os.path.join(workspace, "root", "boot/efi/EFI/BOOT/bootx64.efi"))
+
+ if args.distribution == Distribution.fedora:
+ install_boot_loader_fedora(args, workspace, loopdev)
if args.distribution == Distribution.arch:
- install_boot_loader_arch(args, workspace)
+ install_boot_loader_arch(args, workspace, loopdev)
if args.distribution == Distribution.debian:
- install_boot_loader_debian(args, workspace)
+ install_boot_loader_debian(args, workspace, loopdev)
+
+ if args.distribution == Distribution.ubuntu:
+ install_boot_loader_ubuntu(args, workspace, loopdev)
if args.distribution == Distribution.opensuse:
- install_boot_loader_opensuse(args, workspace)
+ install_boot_loader_opensuse(args, workspace, loopdev)
if args.distribution == Distribution.clear:
install_boot_loader_clear(args, workspace, loopdev)
-def install_extra_trees(args, workspace, for_cache):
+
+def install_extra_trees(args: CommandLineArguments, workspace: str, for_cache: bool) -> None:
if not args.extra_trees:
return
@@ -1688,24 +2203,26 @@ def install_extra_trees(args, workspace, for_cache):
with complete_step('Copying in extra file trees'):
for d in args.extra_trees:
if os.path.isdir(d):
- copy(d, os.path.join(workspace, "root"))
+ copy_path(d, os.path.join(workspace, "root"))
else:
shutil.unpack_archive(d, os.path.join(workspace, "root"))
-def install_skeleton_trees(args, workspace, for_cache):
+
+def install_skeleton_trees(args: CommandLineArguments, workspace: str, for_cache: bool) -> None:
if not args.skeleton_trees:
return
with complete_step('Copying in skeleton file trees'):
for d in args.skeleton_trees:
if os.path.isdir(d):
- copy(d, os.path.join(workspace, "root"))
+ copy_path(d, os.path.join(workspace, "root"))
else:
shutil.unpack_archive(d, os.path.join(workspace, "root"))
-def copy_git_files(src, dest, *, git_files):
+
+def copy_git_files(src: str, dest: str, *, source_file_transfer: SourceFileTransfer) -> None:
what_files = ['--exclude-standard', '--cached']
- if git_files == 'others':
+ if source_file_transfer == SourceFileTransfer.copy_git_others:
what_files += ['--others', '--exclude=.mkosi-*']
c = run(['git', '-C', src, 'ls-files', '-z'] + what_files,
@@ -1744,7 +2261,8 @@ def copy_git_files(src, dest, *, git_files):
copy_file(src_path, dest_path)
-def install_build_src(args, workspace, run_build_script, for_cache):
+
+def install_build_src(args: CommandLineArguments, workspace: str, run_build_script: bool, for_cache: bool) -> None:
if not run_build_script:
return
if for_cache:
@@ -1759,23 +2277,25 @@ def install_build_src(args, workspace, run_build_script, for_cache):
if args.build_sources is not None:
target = os.path.join(workspace, "root", "root/src")
- use_git = args.use_git_files
- if use_git is None:
- use_git = os.path.exists('.git') or os.path.exists(os.path.join(args.build_sources, '.git'))
- if use_git:
- copy_git_files(args.build_sources, target, git_files=args.git_files)
- else:
+ source_file_transfer = args.source_file_transfer
+ if source_file_transfer is None and (os.path.exists('.git') or os.path.exists(os.path.join(args.build_sources, '.git'))):
+ source_file_transfer = SourceFileTransfer.copy_git_cached
+
+ if source_file_transfer in (SourceFileTransfer.copy_git_others, SourceFileTransfer.copy_git_cached):
+ copy_git_files(args.build_sources, target, source_file_transfer=source_file_transfer)
+ elif source_file_transfer == SourceFileTransfer.copy_all:
ignore = shutil.ignore_patterns('.git',
'.mkosi-*',
'*.cache-pre-dev',
'*.cache-pre-inst',
- os.path.basename(args.output_dir)+"/" if args.output_dir else "mkosi.output/",
- os.path.basename(args.cache_path)+"/" if args.cache_path else "mkosi.cache/",
- os.path.basename(args.build_dir)+"/" if args.build_dir else "mkosi.builddir/")
+ os.path.basename(args.output_dir)+"/" if args.output_dir else "mkosi.output/", # NOQA: E501
+ os.path.basename(args.cache_path)+"/" if args.cache_path else "mkosi.cache/", # NOQA: E501
+ os.path.basename(args.build_dir)+"/" if args.build_dir else "mkosi.builddir/") # NOQA: E501
shutil.copytree(args.build_sources, target, symlinks=True, ignore=ignore)
-def install_build_dest(args, workspace, run_build_script, for_cache):
+
+def install_build_dest(args: CommandLineArguments, workspace: str, run_build_script: bool, for_cache: bool) -> None:
if run_build_script:
return
if for_cache:
@@ -1785,22 +2305,26 @@ def install_build_dest(args, workspace, run_build_script, for_cache):
return
with complete_step('Copying in build tree'):
- copy(os.path.join(workspace, "dest"), os.path.join(workspace, "root"))
+ copy_path(os.path.join(workspace, "dest"), os.path.join(workspace, "root"))
+
-def make_read_only(args, workspace, for_cache):
+def make_read_only(args: CommandLineArguments, workspace: str, for_cache: bool) -> None:
if not args.read_only:
return
if for_cache:
return
- if args.output_format not in (OutputFormat.raw_btrfs, OutputFormat.subvolume):
+ if args.output_format not in (OutputFormat.gpt_btrfs, OutputFormat.subvolume):
return
with complete_step('Marking root subvolume read-only'):
btrfs_subvol_make_ro(os.path.join(workspace, "root"))
-def make_tar(args, workspace, run_build_script, for_cache):
+def make_tar(args: CommandLineArguments,
+ workspace: str,
+ run_build_script: bool,
+ for_cache: bool) -> Optional[BinaryIO]:
if run_build_script:
return None
if args.output_format != OutputFormat.tar:
@@ -1809,28 +2333,38 @@ def make_tar(args, workspace, run_build_script, for_cache):
return None
with complete_step('Creating archive'):
- f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-")
+ f: BinaryIO = cast(BinaryIO, tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-"))
run(["tar", "-C", os.path.join(workspace, "root"),
"-c", "-J", "--xattrs", "--xattrs-include=*", "."],
stdout=f, check=True)
return f
-def make_squashfs(args, workspace, for_cache):
- if args.output_format != OutputFormat.raw_squashfs:
+
+def make_squashfs(args: CommandLineArguments, workspace: str, for_cache: bool) -> Optional[BinaryIO]:
+ if not args.output_format.is_squashfs():
return None
if for_cache:
return None
+ command = args.mksquashfs_tool[0] if args.mksquashfs_tool else 'mksquashfs'
+ comp_args = (args.mksquashfs_tool[1:] if args.mksquashfs_tool and args.mksquashfs_tool[1:]
+ else ['-noappend'])
+
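+    # args.compress is either True (let mksquashfs pick its default
+    # compressor) or a compressor name passed via -comp; the assert below
+    # documents that False is not expected for squashfs output.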
+ if args.compress is not True:
+ assert args.compress is not False
+ comp_args += ['-comp', args.compress]
+
with complete_step('Creating squashfs file system'):
- f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-squashfs")
- run(["mksquashfs", os.path.join(workspace, "root"), f.name, "-comp", "lz4", "-noappend"],
+ f: BinaryIO = cast(BinaryIO, tempfile.NamedTemporaryFile(prefix=".mkosi-squashfs",
+ dir=os.path.dirname(args.output)))
+ run([command, os.path.join(workspace, "root"), f.name, *comp_args],
check=True)
return f
-def read_partition_table(loopdev):
+def read_partition_table(loopdev: str) -> Tuple[List[str], int]:
table = []
last_sector = 0
@@ -1840,7 +2374,7 @@ def read_partition_table(loopdev):
for line in c.stdout.decode("utf-8").split('\n'):
stripped = line.strip()
- if stripped == "": # empty line is where the body begins
+ if stripped == "": # empty line is where the body begins
in_body = True
continue
if not in_body:
@@ -1869,8 +2403,16 @@ def read_partition_table(loopdev):
return table, last_sector * 512
-def insert_partition(args, workspace, raw, loopdev, partno, blob, name, type_uuid, uuid = None):
+def insert_partition(args: CommandLineArguments,
+ workspace: str,
+ raw: BinaryIO,
+ loopdev: str,
+ partno: int,
+ blob: BinaryIO,
+ name: str,
+ type_uuid: uuid.UUID,
+ uuid: Optional[uuid.UUID] = None) -> int:
if args.ran_sfdisk:
old_table, last_partition_sector = read_partition_table(loopdev)
else:
@@ -1882,12 +2424,12 @@ def insert_partition(args, workspace, raw, loopdev, partno, blob, name, type_uui
luks_extra = 2*1024*1024 if args.encrypt == "all" else 0
new_size = last_partition_sector + blob_size + luks_extra + GPT_FOOTER_SIZE
- print_step("Resizing disk image to {}...".format(format_bytes(new_size)))
+ print_step(f'Resizing disk image to {format_bytes(new_size)}...')
os.truncate(raw.name, new_size)
run(["losetup", "--set-capacity", loopdev], check=True)
- print_step("Inserting partition of {}...".format(format_bytes(blob_size)))
+ print_step(f'Inserting partition of {format_bytes(blob_size)}...')
table = "label: gpt\n"
@@ -1897,7 +2439,8 @@ def insert_partition(args, workspace, raw, loopdev, partno, blob, name, type_uui
if uuid is not None:
table += "uuid=" + str(uuid) + ", "
- table += 'size={}, type={}, attrs=GUID:60, name="{}"\n'.format((blob_size + luks_extra) // 512, type_uuid, name)
+    n_sectors = (blob_size + luks_extra) // 512
+    table += f'size={n_sectors}, type={type_uuid}, attrs=GUID:60, name="{name}"\n'
print(table)
@@ -1921,25 +2464,39 @@ def insert_partition(args, workspace, raw, loopdev, partno, blob, name, type_uui
return blob_size
-def insert_squashfs(args, workspace, raw, loopdev, squashfs, for_cache):
- if args.output_format != OutputFormat.raw_squashfs:
+
+def insert_squashfs(args: CommandLineArguments,
+ workspace: str,
+ raw: Optional[BinaryIO],
+ loopdev: Optional[str],
+ squashfs: Optional[BinaryIO],
+ for_cache: bool) -> None:
+ if args.output_format != OutputFormat.gpt_squashfs:
return
if for_cache:
return
+ assert raw is not None
+ assert loopdev is not None
+ assert squashfs is not None
with complete_step('Inserting squashfs root partition'):
args.root_size = insert_partition(args, workspace, raw, loopdev, args.root_partno, squashfs,
- "Root Partition", GPT_ROOT_NATIVE)
+ "Root Partition", gpt_root_native(args.architecture).root)
-def make_verity(args, workspace, dev, run_build_script, for_cache):
+def make_verity(args: CommandLineArguments,
+ workspace: str,
+ dev: Optional[str],
+ run_build_script: bool,
+ for_cache: bool) -> Tuple[Optional[BinaryIO], Optional[str]]:
if run_build_script or not args.verity:
return None, None
if for_cache:
return None, None
+ assert dev is not None
with complete_step('Generating verity hashes'):
- f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-")
+ f: BinaryIO = cast(BinaryIO, tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-"))
c = run(["veritysetup", "format", dev, f.name], stdout=PIPE, check=True)
for line in c.stdout.decode("utf-8").split('\n'):
@@ -1949,24 +2506,38 @@ def make_verity(args, workspace, dev, run_build_script, for_cache):
raise ValueError('Root hash not found')
-def insert_verity(args, workspace, raw, loopdev, verity, root_hash, for_cache):
+def insert_verity(args: CommandLineArguments,
+ workspace: str,
+ raw: Optional[BinaryIO],
+ loopdev: Optional[str],
+ verity: Optional[BinaryIO],
+ root_hash: Optional[str],
+ for_cache: bool) -> None:
if verity is None:
return
if for_cache:
return
+ assert loopdev is not None
+ assert raw is not None
+ assert root_hash is not None
# Use the final 128 bit of the root hash as partition UUID of the verity partition
u = uuid.UUID(root_hash[-32:])
with complete_step('Inserting verity partition'):
insert_partition(args, workspace, raw, loopdev, args.verity_partno, verity,
- "Verity Partition", GPT_ROOT_NATIVE_VERITY, u)
+ "Verity Partition", gpt_root_native(args.architecture).verity, u)
-def patch_root_uuid(args, loopdev, root_hash, for_cache):
+def patch_root_uuid(args: CommandLineArguments,
+ loopdev: Optional[str],
+ root_hash: Optional[str],
+ for_cache: bool) -> None:
if root_hash is None:
return
+ assert loopdev is not None
+
if for_cache:
return
@@ -1977,8 +2548,12 @@ def patch_root_uuid(args, loopdev, root_hash, for_cache):
run(["sfdisk", "--part-uuid", loopdev, str(args.root_partno), str(u)],
check=True)
-def install_unified_kernel(args, workspace, run_build_script, for_cache, root_hash):
+def install_unified_kernel(args: CommandLineArguments,
+ workspace: str,
+ run_build_script: bool,
+ for_cache: bool,
+ root_hash: Optional[str]) -> None:
# Iterates through all kernel versions included in the image and
# generates a combined kernel+initrd+cmdline+osrelease EFI file
# from it and places it in the /EFI/Linux directory of the
@@ -1988,7 +2563,7 @@ def install_unified_kernel(args, workspace, run_build_script, for_cache, root_ha
# everything necessary to boot a specific root device, including
# the root hash.
- if not args.bootable:
+ if not args.bootable or args.esp_partno is None:
return
if for_cache:
return
@@ -2008,7 +2583,7 @@ def install_unified_kernel(args, workspace, run_build_script, for_cache, root_ha
with complete_step("Generating combined kernel + initrd boot file"):
- cmdline = args.kernel_commandline
+ cmdline = args.kernel_command_line
if root_hash is not None:
cmdline += " roothash=" + root_hash
@@ -2034,17 +2609,17 @@ def install_unified_kernel(args, workspace, run_build_script, for_cache, root_ha
("-i",) + ("/usr/lib/systemd/systemd-veritysetup",)*2 + \
("-i",) + ("/usr/lib/systemd/system-generators/systemd-veritysetup-generator",)*2
- if args.output_format == OutputFormat.raw_squashfs:
- dracut += [ '--add-drivers', 'squashfs' ]
+ if args.output_format == OutputFormat.gpt_squashfs:
+ dracut += ['--add-drivers', 'squashfs']
- dracut += [ '--add', 'qemu' ]
+ dracut += ['--add', 'qemu']
- dracut += [ boot_binary ]
+ dracut += [boot_binary]
- run_workspace_command(args, workspace, *dracut);
+ run_workspace_command(args, workspace, *dracut)
-def secure_boot_sign(args, workspace, run_build_script, for_cache):
+def secure_boot_sign(args: CommandLineArguments, workspace: str, run_build_script: bool, for_cache: bool) -> None:
if run_build_script:
return
if not args.bootable:
@@ -2059,7 +2634,7 @@ def secure_boot_sign(args, workspace, run_build_script, for_cache):
if not i.endswith(".efi") and not i.endswith(".EFI"):
continue
- with complete_step("Signing EFI binary {} in ESP".format(i)):
+ with complete_step(f'Signing EFI binary {i} in ESP'):
p = os.path.join(path, i)
run(["sbsign",
@@ -2071,44 +2646,66 @@ def secure_boot_sign(args, workspace, run_build_script, for_cache):
os.rename(p + ".signed", p)
-def xz_output(args, raw):
- if args.output_format not in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt, OutputFormat.raw_squashfs):
+
+def xz_output(args: CommandLineArguments, raw: Optional[BinaryIO]) -> Optional[BinaryIO]:
+ if not args.output_format.is_disk():
return raw
+ assert raw is not None
if not args.xz:
return raw
+ xz_binary = "pxz" if shutil.which("pxz") else "xz"
+
with complete_step('Compressing image file'):
- f = tempfile.NamedTemporaryFile(prefix=".mkosi-", dir=os.path.dirname(args.output))
- run(["xz", "-c", raw.name], stdout=f, check=True)
+ f: BinaryIO = cast(BinaryIO, tempfile.NamedTemporaryFile(prefix=".mkosi-", dir=os.path.dirname(args.output)))
+ run([xz_binary, "-c", raw.name], stdout=f, check=True)
return f
-def write_root_hash_file(args, root_hash):
+
+def qcow2_output(args: CommandLineArguments, raw: Optional[BinaryIO]) -> Optional[BinaryIO]:
+ if not args.output_format.is_disk():
+ return raw
+ assert raw is not None
+
+ if not args.qcow2:
+ return raw
+
+ with complete_step('Converting image file to qcow2'):
+ f: BinaryIO = cast(BinaryIO, tempfile.NamedTemporaryFile(prefix=".mkosi-", dir=os.path.dirname(args.output)))
+ run(["qemu-img", "convert", "-fraw", "-Oqcow2", raw.name, f.name], check=True)
+
+ return f
+
+
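
Note the ordering downstream in build_stuff(): qcow2_output() runs before xz_output(), so a build with both --qcow2 and --xz converts the raw image first and compresses the conversion result, yielding image.qcow2.xz. A standalone sketch of the equivalent pipeline (file names hypothetical):

```python
import subprocess

# Convert the raw disk image to qcow2, then xz-compress the result.
subprocess.run(["qemu-img", "convert", "-f", "raw", "-O", "qcow2",
                "image.raw", "image.qcow2"], check=True)
with open("image.qcow2.xz", "wb") as out:
    subprocess.run(["xz", "-c", "image.qcow2"], stdout=out, check=True)
```
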
+def write_root_hash_file(args: CommandLineArguments, root_hash: Optional[str]) -> Optional[BinaryIO]:
if root_hash is None:
return None
with complete_step('Writing .roothash file'):
- f = tempfile.NamedTemporaryFile(mode='w+b', prefix='.mkosi',
- dir=os.path.dirname(args.output_root_hash_file))
+ f: BinaryIO = cast(BinaryIO, tempfile.NamedTemporaryFile(mode='w+b', prefix='.mkosi',
+ dir=os.path.dirname(args.output_root_hash_file)))
f.write((root_hash + "\n").encode())
return f
-def copy_nspawn_settings(args):
+
+def copy_nspawn_settings(args: CommandLineArguments) -> Optional[BinaryIO]:
if args.nspawn_settings is None:
return None
with complete_step('Copying nspawn settings file'):
- f = tempfile.NamedTemporaryFile(mode="w+b", prefix=".mkosi-",
- dir=os.path.dirname(args.output_nspawn_settings))
+ f: BinaryIO = cast(BinaryIO, tempfile.NamedTemporaryFile(mode="w+b", prefix=".mkosi-",
+ dir=os.path.dirname(args.output_nspawn_settings)))
with open(args.nspawn_settings, "rb") as c:
f.write(c.read())
return f
-def hash_file(of, sf, fname):
+
+def hash_file(of: TextIO, sf: BinaryIO, fname: str) -> None:
bs = 16*1024**2
h = hashlib.sha256()
@@ -2120,7 +2717,12 @@ def hash_file(of, sf, fname):
of.write(h.hexdigest() + " *" + fname + "\n")
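
The read loop of hash_file() falls outside the hunk; a sketch of the chunked pattern it implements (the loop body is assumed, but the 16 MiB block size and the sha256sum-style "&lt;digest&gt; *&lt;name&gt;" output line are visible above):

```python
import hashlib
from typing import BinaryIO, TextIO

def hash_file(of: TextIO, sf: BinaryIO, fname: str) -> None:
    bs = 16 * 1024**2       # hash in 16 MiB blocks to bound memory use
    h = hashlib.sha256()
    sf.seek(0)
    while True:
        buf = sf.read(bs)
        if not buf:
            break
        h.update(buf)
    # " *" marks binary mode in sha256sum's checksum-file format.
    of.write(h.hexdigest() + " *" + fname + "\n")
```
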
-def calculate_sha256sum(args, raw, tar, root_hash_file, nspawn_settings):
+
+def calculate_sha256sum(args: CommandLineArguments,
+ raw: Optional[BinaryIO],
+ tar: Optional[BinaryIO],
+ root_hash_file: Optional[BinaryIO],
+ nspawn_settings: Optional[BinaryIO]) -> Optional[TextIO]:
if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
return None
@@ -2128,8 +2730,8 @@ def calculate_sha256sum(args, raw, tar, root_hash_file, nspawn_settings):
return None
with complete_step('Calculating SHA256SUMS'):
- f = tempfile.NamedTemporaryFile(mode="w+", prefix=".mkosi-", encoding="utf-8",
- dir=os.path.dirname(args.output_checksum))
+ f: TextIO = cast(TextIO, tempfile.NamedTemporaryFile(mode="w+", prefix=".mkosi-", encoding="utf-8",
+ dir=os.path.dirname(args.output_checksum)))
if raw is not None:
hash_file(f, raw, os.path.basename(args.output))
@@ -2142,7 +2744,8 @@ def calculate_sha256sum(args, raw, tar, root_hash_file, nspawn_settings):
return f
-def calculate_signature(args, checksum):
+
+def calculate_signature(args: CommandLineArguments, checksum: Optional[IO[Any]]) -> Optional[BinaryIO]:
if not args.sign:
return None
@@ -2150,8 +2753,8 @@ def calculate_signature(args, checksum):
return None
with complete_step('Signing SHA256SUMS'):
- f = tempfile.NamedTemporaryFile(mode="wb", prefix=".mkosi-",
- dir=os.path.dirname(args.output_signature))
+ f: BinaryIO = cast(BinaryIO, tempfile.NamedTemporaryFile(mode="wb", prefix=".mkosi-",
+ dir=os.path.dirname(args.output_signature)))
cmdline = ["gpg", "--detach-sign"]
@@ -2163,94 +2766,113 @@ def calculate_signature(args, checksum):
return f
-def calculate_bmap(args, raw):
+
+def calculate_bmap(args: CommandLineArguments, raw: Optional[BinaryIO]) -> Optional[TextIO]:
if not args.bmap:
return None
- if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
+ if not args.output_format.is_disk_rw():
return None
+ assert raw is not None
with complete_step('Creating BMAP file'):
- f = tempfile.NamedTemporaryFile(mode="w+", prefix=".mkosi-", encoding="utf-8",
- dir=os.path.dirname(args.output_bmap))
+ f: TextIO = cast(TextIO, tempfile.NamedTemporaryFile(mode="w+", prefix=".mkosi-", encoding="utf-8",
+ dir=os.path.dirname(args.output_bmap)))
cmdline = ["bmaptool", "create", raw.name]
run(cmdline, stdout=f, check=True)
return f
-def save_cache(args, workspace, raw, cache_path):
+def save_cache(args: CommandLineArguments, workspace: str, raw: Optional[str], cache_path: Optional[str]) -> None:
if cache_path is None or raw is None:
return
with complete_step('Installing cache copy ',
'Successfully installed cache copy ' + cache_path):
- if args.output_format in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt):
+ if args.output_format.is_disk_rw():
os.chmod(raw, 0o666 & ~args.original_umask)
shutil.move(raw, cache_path)
else:
shutil.move(os.path.join(workspace, "root"), cache_path)
-def link_output(args, workspace, raw, tar):
+
+def _link_output(args: CommandLineArguments, oldpath: str, newpath: str) -> None:
+ os.chmod(oldpath, 0o666 & ~args.original_umask)
+ os.link(oldpath, newpath)
+ if args.no_chown:
+ return
+
+ sudo_uid = os.getenv("SUDO_UID")
+ sudo_gid = os.getenv("SUDO_GID")
+ if not (sudo_uid and sudo_gid):
+ return
+
+ sudo_user = os.getenv("SUDO_USER", default=sudo_uid)
+ with complete_step(f"Changing ownership of output file {newpath} to user {sudo_user} (acquired from sudo)",
+ f"Successfully changed ownership of {newpath}"):
+ os.chown(newpath, int(sudo_uid), int(sudo_gid))
+
+
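
SUDO_UID and SUDO_GID are set by sudo to the invoking user's IDs, which is what lets _link_output() hand the finished artifacts back to that user instead of leaving them root-owned. A minimal illustration (path hypothetical):

```python
import os

sudo_uid = os.getenv("SUDO_UID")
sudo_gid = os.getenv("SUDO_GID")
if sudo_uid and sudo_gid:
    # Give the output back to whoever invoked `sudo mkosi`.
    os.chown("image.raw", int(sudo_uid), int(sudo_gid))
```
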
+def link_output(args: CommandLineArguments, workspace: str, artifact: Optional[BinaryIO]) -> None:
with complete_step('Linking image file',
'Successfully linked ' + args.output):
if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
+ assert artifact is None
os.rename(os.path.join(workspace, "root"), args.output)
- elif args.output_format in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt, OutputFormat.raw_squashfs):
- os.chmod(raw, 0o666 & ~args.original_umask)
- os.link(raw, args.output)
- else:
- os.chmod(tar, 0o666 & ~args.original_umask)
- os.link(tar, args.output)
+ elif args.output_format.is_disk() or args.output_format in (OutputFormat.plain_squashfs, OutputFormat.tar):
+ assert artifact is not None
+ _link_output(args, artifact.name, args.output)
-def link_output_nspawn_settings(args, path):
+
+def link_output_nspawn_settings(args: CommandLineArguments, path: Optional[str]) -> None:
if path is None:
return
with complete_step('Linking nspawn settings file',
'Successfully linked ' + args.output_nspawn_settings):
- os.chmod(path, 0o666 & ~args.original_umask)
- os.link(path, args.output_nspawn_settings)
+ _link_output(args, path, args.output_nspawn_settings)
+
-def link_output_checksum(args, checksum):
+def link_output_checksum(args: CommandLineArguments, checksum: Optional[str]) -> None:
if checksum is None:
return
with complete_step('Linking SHA256SUMS file',
'Successfully linked ' + args.output_checksum):
- os.chmod(checksum, 0o666 & ~args.original_umask)
- os.link(checksum, args.output_checksum)
+ _link_output(args, checksum, args.output_checksum)
-def link_output_root_hash_file(args, root_hash_file):
+
+def link_output_root_hash_file(args: CommandLineArguments, root_hash_file: Optional[str]) -> None:
if root_hash_file is None:
return
with complete_step('Linking .roothash file',
'Successfully linked ' + args.output_root_hash_file):
- os.chmod(root_hash_file, 0o666 & ~args.original_umask)
- os.link(root_hash_file, args.output_root_hash_file)
+ _link_output(args, root_hash_file, args.output_root_hash_file)
+
-def link_output_signature(args, signature):
+def link_output_signature(args: CommandLineArguments, signature: Optional[str]) -> None:
if signature is None:
return
with complete_step('Linking SHA256SUMS.gpg file',
'Successfully linked ' + args.output_signature):
- os.chmod(signature, 0o666 & ~args.original_umask)
- os.link(signature, args.output_signature)
+ _link_output(args, signature, args.output_signature)
+
-def link_output_bmap(args, bmap):
+def link_output_bmap(args: CommandLineArguments, bmap: Optional[str]) -> None:
if bmap is None:
return
with complete_step('Linking .bmap file',
'Successfully linked ' + args.output_bmap):
- os.chmod(bmap, 0o666 & ~args.original_umask)
- os.link(bmap, args.output_bmap)
+ _link_output(args, bmap, args.output_bmap)
-def dir_size(path):
+
+def dir_size(path: str) -> int:
sum = 0
for entry in os.scandir(path):
if entry.is_symlink():
@@ -2264,14 +2886,17 @@ def dir_size(path):
sum += dir_size(entry.path)
return sum
-def print_output_size(args):
+
+def print_output_size(args: CommandLineArguments) -> None:
if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
print_step("Resulting image size is " + format_bytes(dir_size(args.output)) + ".")
else:
st = os.stat(args.output)
- print_step("Resulting image size is " + format_bytes(st.st_size) + ", consumes " + format_bytes(st.st_blocks * 512) + ".")
+ print_step("Resulting image size is " + format_bytes(st.st_size) + ", consumes " + format_bytes(st.st_blocks * 512) + ".") # NOQA: E501
-def setup_package_cache(args):
+
+def setup_package_cache(args: CommandLineArguments) -> Optional[tempfile.TemporaryDirectory]:
+ d: Optional[tempfile.TemporaryDirectory] = None
with complete_step('Setting up package cache',
'Setting up package cache {} complete') as output:
if args.cache_path is None:
@@ -2279,30 +2904,58 @@ def setup_package_cache(args):
args.cache_path = d.name
else:
os.makedirs(args.cache_path, 0o755, exist_ok=True)
- d = None
output.append(args.cache_path)
return d
+
class ListAction(argparse.Action):
- def __call__(self, parser, namespace, values, option_string=None):
- l = getattr(namespace, self.dest)
- if l is None:
- l = []
- l.extend(values.split(self.delimiter))
- setattr(namespace, self.dest, l)
+ delimiter: str
+
+ def __init__(self, *args: Any, choices: Optional[Iterable[Any]] = None, **kwargs: Any) -> None:
+ self.list_choices = choices
+ super().__init__(*args, **kwargs)
+
+ def __call__(self, # These type-hints are copied from argparse.pyi
+ parser: argparse.ArgumentParser,
+ namespace: argparse.Namespace,
+ values: Union[str, Sequence[Any], None],
+ option_string: Optional[str] = None) -> None:
+ assert isinstance(values, str)
+ ary = getattr(namespace, self.dest)
+ if ary is None:
+ ary = []
+ new = values.split(self.delimiter)
+ for x in new:
+ if self.list_choices is not None and x not in self.list_choices:
+ raise ValueError(f'Unknown value {x!r}')
+ ary.append(x)
+ setattr(namespace, self.dest, ary)
+
class CommaDelimitedListAction(ListAction):
delimiter = ","
+
class ColonDelimitedListAction(ListAction):
delimiter = ":"
-def parse_args():
+
+COMPRESSION_ALGORITHMS = 'zlib', 'lzo', 'zstd', 'lz4', 'xz'
+
+
+def parse_compression(value: str) -> Union[str, bool]:
+ if value in COMPRESSION_ALGORITHMS:
+ return value
+ return parse_boolean(value)
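
parse_compression() lets a single --compress option carry either a named filesystem compression algorithm or a plain boolean:

```python
parse_compression("zstd")  # -> 'zstd'  (named algorithm, passed through)
parse_compression("yes")   # -> True    (enable with the filesystem default)
parse_compression("no")    # -> False
parse_compression("gzip")  # -> ValueError: neither a known algorithm
                           #    nor a boolean literal
```
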
+
+
+def parse_args() -> CommandLineArguments:
parser = argparse.ArgumentParser(description='Build Legacy-Free OS Images', add_help=False)
group = parser.add_argument_group("Commands")
- group.add_argument("verb", choices=("build", "clean", "help", "summary", "shell", "boot", "qemu"), nargs='?', default="build", help='Operation to execute')
+ group.add_argument("verb", choices=("build", "clean", "help", "summary", "shell", "boot", "qemu"), nargs='?',
+ default="build", help='Operation to execute')
group.add_argument("cmdline", nargs=argparse.REMAINDER, help="The command line to use for 'shell', 'boot', 'qemu'")
group.add_argument('-h', '--help', action='help', help="Show this help")
group.add_argument('--version', action='version', version='%(prog)s ' + __version__)
@@ -2311,81 +2964,127 @@ def parse_args():
group.add_argument('-d', "--distribution", choices=Distribution.__members__, help='Distribution to install')
group.add_argument('-r', "--release", help='Distribution release to install')
group.add_argument('-m', "--mirror", help='Distribution mirror to use')
- group.add_argument("--repositories", action=CommaDelimitedListAction, dest='repositories', help='Repositories to use', metavar='REPOS')
+ group.add_argument("--repositories", action=CommaDelimitedListAction, dest='repositories',
+ help='Repositories to use', metavar='REPOS')
+ group.add_argument('--architecture', help='Override the architecture of installation')
group = parser.add_argument_group("Output")
- group.add_argument('-t', "--format", dest='output_format', choices=OutputFormat.__members__, help='Output Format')
+ group.add_argument('-t', "--format", dest='output_format', choices=OutputFormat, type=OutputFormat.from_string,
+ help='Output Format')
group.add_argument('-o', "--output", help='Output image path', metavar='PATH')
group.add_argument('-O', "--output-dir", help='Output root directory', metavar='DIR')
- group.add_argument('-f', "--force", action='count', dest='force_count', default=0, help='Remove existing image file before operation')
+ group.add_argument('-f', "--force", action='count', dest='force_count', default=0,
+ help='Remove existing image file before operation')
group.add_argument('-b', "--bootable", type=parse_boolean, nargs='?', const=True,
- help='Make image bootable on EFI (only raw_gpt, raw_btrfs, raw_squashfs)')
- group.add_argument("--secure-boot", action='store_true', help='Sign the resulting kernel/initrd image for UEFI SecureBoot')
+ help='Make image bootable on EFI (only gpt_ext4, gpt_xfs, gpt_btrfs, gpt_squashfs)')
+ group.add_argument("--boot-protocols", action=CommaDelimitedListAction,
+ help="Boot protocols to use on a bootable image", metavar="PROTOCOLS", default=[])
+ group.add_argument("--kernel-command-line", help='Set the kernel command line (only bootable images)')
+ group.add_argument("--kernel-commandline", dest='kernel_command_line', help=argparse.SUPPRESS) # Compatibility option
+ group.add_argument("--secure-boot", action='store_true',
+ help='Sign the resulting kernel/initrd image for UEFI SecureBoot')
group.add_argument("--secure-boot-key", help="UEFI SecureBoot private key in PEM format", metavar='PATH')
group.add_argument("--secure-boot-certificate", help="UEFI SecureBoot certificate in X509 format", metavar='PATH')
- group.add_argument("--read-only", action='store_true', help='Make root volume read-only (only raw_gpt, raw_btrfs, subvolume, implied on raw_squashs)')
- group.add_argument("--encrypt", choices=("all", "data"), help='Encrypt everything except: ESP ("all") or ESP and root ("data")')
+ group.add_argument("--read-only", action='store_true',
+ help='Make root volume read-only (only gpt_ext4, gpt_xfs, gpt_btrfs, subvolume, implied with gpt_squashfs and plain_squashfs)')
+ group.add_argument("--encrypt", choices=("all", "data"),
+ help='Encrypt everything except: ESP ("all") or ESP and root ("data")')
group.add_argument("--verity", action='store_true', help='Add integrity partition (implies --read-only)')
- group.add_argument("--compress", action='store_true', help='Enable compression in file system (only raw_btrfs, subvolume)')
- group.add_argument("--xz", action='store_true', help='Compress resulting image with xz (only raw_gpt, raw_btrfs, raw_squashfs, implied on tar)')
- group.add_argument('-i', "--incremental", action='store_true', help='Make use of and generate intermediary cache images')
+ group.add_argument("--compress", type=parse_compression,
+ help='Enable compression in file system (only gpt_btrfs, subvolume, gpt_squashfs, plain_squashfs)')
+ group.add_argument('--mksquashfs', dest='mksquashfs_tool', type=str.split,
+ help='Script to call instead of mksquashfs')
+ group.add_argument("--xz", action='store_true',
+ help='Compress resulting image with xz (only gpt_ext4, gpt_xfs, gpt_btrfs, gpt_squashfs, implied on tar)') # NOQA: E501
+ group.add_argument("--qcow2", action='store_true',
+ help='Convert resulting image to qcow2 (only gpt_ext4, gpt_xfs, gpt_btrfs, gpt_squashfs)')
+ group.add_argument("--hostname", help="Set hostname")
+ group.add_argument('--no-chown', action='store_true',
+ help='When running with sudo, disable reassignment of ownership of the generated files to the original user') # NOQA: E501
+ group.add_argument('-i', "--incremental", action='store_true',
+ help='Make use of and generate intermediary cache images')
group = parser.add_argument_group("Packages")
- group.add_argument('-p', "--package", action=CommaDelimitedListAction, dest='packages', default=[], help='Add an additional package to the OS image', metavar='PACKAGE')
- group.add_argument("--with-docs", action='store_true', help='Install documentation (only Fedora, CentOS and Mageia)')
- group.add_argument('-T', "--without-tests", action='store_false', dest='with_tests', default=True, help='Do not run tests as part of build script, if supported')
+ group.add_argument('-p', "--package", action=CommaDelimitedListAction, dest='packages', default=[],
+ help='Add an additional package to the OS image', metavar='PACKAGE')
+ group.add_argument("--with-docs", action='store_true', default=None,
+ help='Install documentation')
+ group.add_argument('-T', "--without-tests", action='store_false', dest='with_tests', default=True,
+ help='Do not run tests as part of build script, if supported')
group.add_argument("--cache", dest='cache_path', help='Package cache path', metavar='PATH')
- group.add_argument("--extra-tree", action='append', dest='extra_trees', default=[], help='Copy an extra tree on top of image', metavar='PATH')
- group.add_argument("--skeleton-tree", action='append', dest='skeleton_trees', default=[], help='Use a skeleton tree to bootstrap the image before installing anything', metavar='PATH')
+ group.add_argument("--extra-tree", action='append', dest='extra_trees', default=[],
+ help='Copy an extra tree on top of image', metavar='PATH')
+ group.add_argument("--skeleton-tree", action='append', dest='skeleton_trees', default=[],
+ help='Use a skeleton tree to bootstrap the image before installing anything', metavar='PATH')
group.add_argument("--build-script", help='Build script to run inside image', metavar='PATH')
group.add_argument("--build-sources", help='Path for sources to build', metavar='PATH')
group.add_argument("--build-dir", help='Path to use as persistent build directory', metavar='PATH')
- group.add_argument("--build-package", action=CommaDelimitedListAction, dest='build_packages', default=[], help='Additional packages needed for build script', metavar='PACKAGE')
- group.add_argument("--postinst-script", help='Post installation script to run inside image', metavar='PATH')
- group.add_argument('--use-git-files', type=parse_boolean,
- help='Ignore any files that git itself ignores (default: guess)')
- group.add_argument('--git-files', choices=('cached', 'others'),
- help='Whether to include untracked files (default: others)')
- group.add_argument("--with-network", action='store_true', help='Run build and postinst scripts with network access (instead of private network)')
- group.add_argument("--settings", dest='nspawn_settings', help='Add in .spawn settings file', metavar='PATH')
+ group.add_argument("--build-package", action=CommaDelimitedListAction, dest='build_packages', default=[],
+ help='Additional packages needed for build script', metavar='PACKAGE')
+ group.add_argument("--postinst-script", help='Postinstall script to run inside image', metavar='PATH')
+ group.add_argument("--finalize-script", help='Postinstall script to run outside image', metavar='PATH')
+ group.add_argument("--source-file-transfer", type=SourceFileTransfer, choices=list(SourceFileTransfer), default=None,
+                       help="Method used to copy build sources to the build image. " +
+ "; ".join([f"'{k}': {v}" for k, v in SourceFileTransfer.doc().items()]) + " (default: copy-git-cached if in a git repository, otherwise copy-all)")
+ group.add_argument("--with-network", action='store_true', default=None,
+ help='Run build and postinst scripts with network access (instead of private network)')
+ group.add_argument("--settings", dest='nspawn_settings', help='Add in .nspawn settings file', metavar='PATH')
group = parser.add_argument_group("Partitions")
- group.add_argument("--root-size", help='Set size of root partition (only raw_gpt, raw_btrfs)', metavar='BYTES')
- group.add_argument("--esp-size", help='Set size of EFI system partition (only raw_gpt, raw_btrfs, raw_squashfs)', metavar='BYTES')
- group.add_argument("--swap-size", help='Set size of swap partition (only raw_gpt, raw_btrfs, raw_squashfs)', metavar='BYTES')
- group.add_argument("--home-size", help='Set size of /home partition (only raw_gpt, raw_squashfs)', metavar='BYTES')
- group.add_argument("--srv-size", help='Set size of /srv partition (only raw_gpt, raw_squashfs)', metavar='BYTES')
-
- group = parser.add_argument_group("Validation (only raw_gpt, raw_btrfs, raw_squashfs, tar)")
+ group.add_argument("--root-size",
+ help='Set size of root partition (only gpt_ext4, gpt_xfs, gpt_btrfs)', metavar='BYTES')
+ group.add_argument("--esp-size",
+ help='Set size of EFI system partition (only gpt_ext4, gpt_xfs, gpt_btrfs, gpt_squashfs)', metavar='BYTES') # NOQA: E501
+ group.add_argument("--swap-size",
+ help='Set size of swap partition (only gpt_ext4, gpt_xfs, gpt_btrfs, gpt_squashfs)', metavar='BYTES') # NOQA: E501
+ group.add_argument("--home-size",
+ help='Set size of /home partition (only gpt_ext4, gpt_xfs, gpt_squashfs)', metavar='BYTES')
+ group.add_argument("--srv-size",
+ help='Set size of /srv partition (only gpt_ext4, gpt_xfs, gpt_squashfs)', metavar='BYTES')
+
+ group = parser.add_argument_group("Validation (only gpt_ext4, gpt_xfs, gpt_btrfs, gpt_squashfs, tar)")
group.add_argument("--checksum", action='store_true', help='Write SHA256SUMS file')
group.add_argument("--sign", action='store_true', help='Write and sign SHA256SUMS file')
group.add_argument("--key", help='GPG key to use for signing')
- group.add_argument("--bmap", action='store_true', help='Write block map file (.bmap) for bmaptool usage (only raw_gpt, raw_btrfs)')
+ group.add_argument("--bmap", action='store_true',
+ help='Write block map file (.bmap) for bmaptool usage (only gpt_ext4, gpt_btrfs)')
group.add_argument("--password", help='Set the root password')
group = parser.add_argument_group("Host configuration")
- group.add_argument("--extra-search-paths", action=ColonDelimitedListAction, default=[], help="List of colon-separated paths to look for programs before looking in PATH")
+ group.add_argument("--extra-search-path", dest='extra_search_paths', action=ColonDelimitedListAction, default=[],
+ help="List of colon-separated paths to look for programs before looking in PATH")
+ group.add_argument("--extra-search-paths", dest='extra_search_paths', action=ColonDelimitedListAction, help=argparse.SUPPRESS) # Compatibility option
group = parser.add_argument_group("Additional Configuration")
group.add_argument('-C', "--directory", help='Change to specified directory before doing anything', metavar='PATH')
group.add_argument("--default", dest='default_path', help='Read configuration data from file', metavar='PATH')
- group.add_argument("--kernel-commandline", help='Set the kernel command line (only bootable images)')
- group.add_argument("--hostname", help="Set hostname")
+ group.add_argument('-a', "--all", action='store_true', dest='all', default=False, help='Build all settings files in mkosi.files/')
+ group.add_argument("--all-directory", dest='all_directory', help='Specify path to directory to read settings files from', metavar='PATH')
+ group.add_argument('--debug', action=CommaDelimitedListAction, default=[],
+ help='Turn on debugging output', metavar='SELECTOR',
+ choices=('run',))
try:
+ import argcomplete # type: ignore
argcomplete.autocomplete(parser)
- except NameError:
+ except ImportError:
pass
- args = parser.parse_args()
+ args = cast(CommandLineArguments, parser.parse_args(namespace=CommandLineArguments()))
if args.verb == "help":
parser.print_help()
sys.exit(0)
+ if args.all and args.default_path:
+ die("--all and --default= may not be combined.")
+
+ args_find_path(args, 'all_directory', "mkosi.files/")
+
return args
-def parse_bytes(bytes):
+
+def parse_bytes(bytes: Optional[str]) -> Optional[int]:
if bytes is None:
return bytes
@@ -2410,7 +3109,8 @@ def parse_bytes(bytes):
return result
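
Only the head and tail of parse_bytes() appear in the diff. Judging by its callers (--root-size and friends take BYTES with optional size suffixes), its behavior is roughly the following; the exact suffix set and validation are assumptions here:

```python
parse_bytes(None)    # -> None: size left for later defaulting
parse_bytes("1024")  # -> 1024
parse_bytes("512M")  # -> 536870912, assuming K/M/G suffix handling
```
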
-def detect_distribution():
+
+def detect_distribution() -> Tuple[Optional[Distribution], Optional[str]]:
try:
f = open("/etc/os-release")
except IOError:
@@ -2421,44 +3121,75 @@ def detect_distribution():
id = None
version_id = None
+ version_codename = None
+ extracted_codename = None
for ln in f:
if ln.startswith("ID="):
id = ln[3:].strip()
if ln.startswith("VERSION_ID="):
version_id = ln[11:].strip()
+ if ln.startswith("VERSION_CODENAME="):
+ version_codename = ln[17:].strip()
+ if ln.startswith("VERSION="):
+ # extract Debian release codename
+ version_str = ln[8:].strip()
+ debian_codename_re = r'\((.*?)\)'
+
+ codename_list = re.findall(debian_codename_re, version_str)
+ if len(codename_list) == 1:
+ extracted_codename = codename_list[0]
if id == "clear-linux-os":
id = "clear"
- d = Distribution.__members__.get(id, None)
+ d: Optional[Distribution] = None
+ if id is not None:
+ d = Distribution.__members__.get(id, None)
+
+ if d == Distribution.debian and (version_codename or extracted_codename):
+ # debootstrap needs release codenames, not version numbers
+ if version_codename:
+ version_id = version_codename
+ else:
+ version_id = extracted_codename
+
return d, version_id
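
The codename fallback exists because debootstrap wants a release codename, while Debian's /etc/os-release may carry it only parenthesized inside VERSION=. A worked example of the regex above:

```python
import re

version_str = "9 (stretch)"  # typical Debian VERSION= payload, quotes stripped
codename_list = re.findall(r'\((.*?)\)', version_str)
print(codename_list)  # ['stretch'] -> version_id becomes 'stretch'
```
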
-def unlink_try_hard(path):
+
+def unlink_try_hard(path: str) -> None:
try:
os.unlink(path)
- except:
+ except: # NOQA: E722
pass
try:
btrfs_subvol_delete(path)
- except:
+ except: # NOQA: E722
pass
try:
shutil.rmtree(path)
- except:
+ except: # NOQA: E722
pass
-def empty_directory(path):
+def remove_glob(*patterns: str) -> None:
+ pathgen = (glob.glob(pattern) for pattern in patterns)
+ paths: Set[str] = set(sum(pathgen, [])) # uniquify
+ for path in paths:
+ unlink_try_hard(path)
+
+
+def empty_directory(path: str) -> None:
try:
for f in os.listdir(path):
unlink_try_hard(os.path.join(path, f))
except FileNotFoundError:
pass
-def unlink_output(args):
+
+def unlink_output(args: CommandLineArguments) -> None:
if not args.force and args.verb != "clean":
return
@@ -2510,7 +3241,8 @@ def unlink_output(args):
with complete_step('Clearing out package cache'):
empty_directory(args.cache_path)
-def parse_boolean(s):
+
+def parse_boolean(s: str) -> bool:
"Parse 1/true/yes as true and 0/false/no as false"
if s in {"1", "true", "yes"}:
return True
@@ -2518,9 +3250,10 @@ def parse_boolean(s):
if s in {"0", "false", "no"}:
return False
- raise ValueError("Invalid literal for bool(): {!r}".format(s))
+ raise ValueError(f'Invalid literal for bool(): {s!r}')
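
For reference, the accepted spellings:

```python
parse_boolean("1")      # -> True   (likewise "true" and "yes")
parse_boolean("false")  # -> False  (likewise "0" and "no")
parse_boolean("maybe")  # -> ValueError: Invalid literal for bool(): 'maybe'
```
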
-def process_setting(args, section, key, value):
+
+def process_setting(args: CommandLineArguments, section: str, key: Optional[str], value: Any) -> bool:
if section == "Distribution":
if key == "Distribution":
if args.distribution is None:
@@ -2537,6 +3270,9 @@ def process_setting(args, section, key, value):
elif key == "Mirror":
if args.mirror is None:
args.mirror = value
+ elif key == 'Architecture':
+ if args.architecture is None:
+ args.architecture = value
elif key is None:
return True
else:
@@ -2544,7 +3280,7 @@ def process_setting(args, section, key, value):
elif section == "Output":
if key == "Format":
if args.output_format is None:
- args.output_format = value
+ args.output_format = OutputFormat[value]
elif key == "Output":
if args.output is None:
args.output = value
@@ -2552,16 +3288,19 @@ def process_setting(args, section, key, value):
if args.output_dir is None:
args.output_dir = value
elif key == "Force":
- if not args.force:
+ if args.force is None:
args.force = parse_boolean(value)
elif key == "Bootable":
if args.bootable is None:
args.bootable = parse_boolean(value)
+ elif key == "BootProtocols":
+ if not args.boot_protocols:
+ args.boot_protocols = value if type(value) == list else value.split()
elif key == "KernelCommandLine":
- if args.kernel_commandline is None:
- args.kernel_commandline = value
+ if args.kernel_command_line is None:
+ args.kernel_command_line = value
elif key == "SecureBoot":
- if not args.secure_boot:
+ if args.secure_boot is None:
args.secure_boot = parse_boolean(value)
elif key == "SecureBootKey":
if args.secure_boot_key is None:
@@ -2570,22 +3309,28 @@ def process_setting(args, section, key, value):
if args.secure_boot_certificate is None:
args.secure_boot_certificate = value
elif key == "ReadOnly":
- if not args.read_only:
+ if args.read_only is None:
args.read_only = parse_boolean(value)
elif key == "Encrypt":
if args.encrypt is None:
if value not in ("all", "data"):
- raise ValueError("Invalid encryption setting: "+ value)
+ raise ValueError("Invalid encryption setting: " + value)
args.encrypt = value
elif key == "Verity":
- if not args.verity:
+ if args.verity is None:
args.verity = parse_boolean(value)
elif key == "Compress":
- if not args.compress:
- args.compress = parse_boolean(value)
+ if args.compress is None:
+ args.compress = parse_compression(value)
+ elif key == 'Mksquashfs':
+ if args.mksquashfs_tool is None:
+ args.mksquashfs_tool = value.split()
elif key == "XZ":
- if not args.xz:
+ if args.xz is None:
args.xz = parse_boolean(value)
+ elif key == "QCow2":
+ if args.qcow2 is None:
+ args.qcow2 = parse_boolean(value)
elif key == "Hostname":
if not args.hostname:
args.hostname = value
@@ -2598,10 +3343,10 @@ def process_setting(args, section, key, value):
list_value = value if type(value) == list else value.split()
args.packages.extend(list_value)
elif key == "WithDocs":
- if not args.with_docs:
+ if args.with_docs is None:
args.with_docs = parse_boolean(value)
elif key == "WithTests":
- if not args.with_tests:
+ if args.with_tests is None:
args.with_tests = parse_boolean(value)
elif key == "Cache":
if args.cache_path is None:
@@ -2618,17 +3363,26 @@ def process_setting(args, section, key, value):
elif key == "BuildSources":
if args.build_sources is None:
args.build_sources = value
+ elif key == "SourceFileTransfer":
+ if args.source_file_transfer is None:
+ try:
+ args.source_file_transfer = SourceFileTransfer(value)
+ except ValueError:
+ raise ValueError(f"Invalid source file transfer setting: {value}")
elif key == "BuildDirectory":
if args.build_dir is None:
args.build_dir = value
elif key == "BuildPackages":
list_value = value if type(value) == list else value.split()
args.build_packages.extend(list_value)
- elif key == "PostInstallationScript":
+ elif key in {"PostinstallScript", "PostInstallationScript"}:
if args.postinst_script is None:
args.postinst_script = value
+ elif key == "FinalizeScript":
+ if args.finalize_script is None:
+ args.finalize_script = value
elif key == "WithNetwork":
- if not args.with_network:
+ if args.with_network is None:
args.with_network = parse_boolean(value)
elif key == "NSpawnSettings":
if args.nspawn_settings is None:
@@ -2659,15 +3413,16 @@ def process_setting(args, section, key, value):
return False
elif section == "Validation":
if key == "CheckSum":
- if not args.checksum:
+ if args.checksum is None:
args.checksum = parse_boolean(value)
elif key == "Sign":
- if not args.sign:
+ if args.sign is None:
args.sign = parse_boolean(value)
elif key == "Key":
if args.key is None:
args.key = value
elif key == "Bmap":
+ if args.bmap is None:
args.bmap = parse_boolean(value)
elif key == "Password":
if args.password is None:
@@ -2686,14 +3441,15 @@ def process_setting(args, section, key, value):
return True
-def load_defaults_file(fname, options):
+
+def load_defaults_file(fname: str, options: Dict[str, Dict[str, Any]]) -> Optional[Dict[str, Dict[str, Any]]]:
try:
f = open(fname)
except FileNotFoundError:
- return
+ return None
config = configparser.ConfigParser(delimiters='=')
- config.optionxform = str
+ config.optionxform = str # type: ignore
config.read_file(f)
# this is used only for validation
@@ -2701,13 +3457,13 @@ def load_defaults_file(fname, options):
for section in config.sections():
if not process_setting(args, section, None, None):
- sys.stderr.write("Unknown section in {}, ignoring: [{}]\n".format(fname, section))
+ sys.stderr.write(f"Unknown section in {fname!r}, ignoring: [{section}]\n")
continue
if section not in options:
options[section] = {}
for key in config[section]:
if not process_setting(args, section, key, config[section][key]):
- sys.stderr.write("Unknown key in section [{}] in {}, ignoring: {}=\n".format(section, fname, key))
+ sys.stderr.write(f'Unknown key in section [{section}] in {fname!r}, ignoring: {key}=\n')
continue
if section == "Packages" and key in ["Packages", "ExtraTrees", "BuildPackages"]:
if key in options[section]:
@@ -2718,10 +3474,11 @@ def load_defaults_file(fname, options):
options[section][key] = config[section][key]
return options
-def load_defaults(args):
+
+def load_defaults(args: CommandLineArguments) -> None:
fname = "mkosi.default" if args.default_path is None else args.default_path
- config = {}
+ config: Dict[str, Dict[str, str]] = {}
load_defaults_file(fname, config)
defaults_dir = fname + '.d'
@@ -2735,27 +3492,51 @@ def load_defaults(args):
for key in config[section]:
process_setting(args, section, key, config[section][key])
-def find_nspawn_settings(args):
+
+def find_nspawn_settings(args: CommandLineArguments) -> None:
if args.nspawn_settings is not None:
return
if os.path.exists("mkosi.nspawn"):
args.nspawn_settings = "mkosi.nspawn"
-def find_extra(args):
+
+def find_extra(args: CommandLineArguments) -> None:
+
+ if len(args.extra_trees) > 0:
+ return
+
if os.path.isdir("mkosi.extra"):
args.extra_trees.append("mkosi.extra")
if os.path.isfile("mkosi.extra.tar"):
args.extra_trees.append("mkosi.extra.tar")
-def find_skeleton(args):
+
+def find_skeleton(args: CommandLineArguments) -> None:
+
+ if len(args.skeleton_trees) > 0:
+ return
+
if os.path.isdir("mkosi.skeleton"):
args.skeleton_trees.append("mkosi.skeleton")
if os.path.isfile("mkosi.skeleton.tar"):
args.skeleton_trees.append("mkosi.skeleton.tar")
-def find_cache(args):
+def args_find_path(args: CommandLineArguments,
+ name: str,
+ path: str,
+ *,
+ type: Callable[[str], Any] = lambda x: x) -> None:
+ if getattr(args, name) is not None:
+ return
+ if os.path.exists(path):
+ path = os.path.abspath(path)
+ path = type(path)
+ setattr(args, name, path)
+
+
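
args_find_path() folds the removed find_build_script()/find_build_dir()/... helpers into a single "use the conventional file if present and not already set" primitive; the keyword-only type callable post-processes the absolute path. Two calls taken from load_args() below:

```python
# Use ./mkosi.build as the build script unless --build-script was given:
args_find_path(args, 'build_script', "mkosi.build")

# Wrap the discovered tool path in a one-element argv list:
args_find_path(args, 'mksquashfs_tool', "mkosi.mksquashfs-tool",
               type=lambda x: [x])
```
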
+def find_cache(args: CommandLineArguments) -> None:
if args.cache_path is not None:
return
@@ -2767,49 +3548,16 @@ def find_cache(args):
if args.distribution != Distribution.clear and args.release is not None:
args.cache_path += "~" + args.release
-def find_build_script(args):
- if args.build_script is not None:
- return
-
- if os.path.exists("mkosi.build"):
- args.build_script = "mkosi.build"
-
-def find_build_sources(args):
- if args.build_sources is not None:
- return
-
- args.build_sources = os.getcwd()
-def find_build_dir(args):
- if args.build_dir is not None:
- return
-
- if os.path.exists("mkosi.builddir/"):
- args.build_dir = "mkosi.builddir"
-
-def find_postinst_script(args):
- if args.postinst_script is not None:
- return
-
- if os.path.exists("mkosi.postinst"):
- args.postinst_script = "mkosi.postinst"
-
-def find_output_dir(args):
- if args.output_dir is not None:
- return
-
- if os.path.exists("mkosi.output/"):
- args.output_dir = "mkosi.output"
-
-def require_private_file(name, description):
+def require_private_file(name: str, description: str) -> None:
mode = os.stat(name).st_mode & 0o777
if mode & 0o007:
warn("Permissions of '{}' of '{}' are too open.\n" +
"When creating {} files use an access mode that restricts access to the owner only.",
name, oct(mode), description)
-def find_passphrase(args):
+def find_passphrase(args: CommandLineArguments) -> None:
if args.encrypt is None:
args.passphrase = None
return
@@ -2817,20 +3565,20 @@ def find_passphrase(args):
try:
require_private_file('mkosi.passphrase', 'passphrase')
- args.passphrase = { 'type': 'file', 'content': 'mkosi.passphrase' }
+ args.passphrase = {'type': 'file', 'content': 'mkosi.passphrase'}
except FileNotFoundError:
while True:
passphrase = getpass.getpass("Please enter passphrase: ")
passphrase_confirmation = getpass.getpass("Passphrase confirmation: ")
if passphrase == passphrase_confirmation:
- args.passphrase = { 'type': 'stdin', 'content': passphrase }
+ args.passphrase = {'type': 'stdin', 'content': passphrase}
break
sys.stderr.write("Passphrase doesn't match confirmation. Please try again.\n")
-def find_password(args):
+def find_password(args: CommandLineArguments) -> None:
if args.password is not None:
return
@@ -2843,7 +3591,8 @@ def find_password(args):
except FileNotFoundError:
pass
-def find_secure_boot(args):
+
+def find_secure_boot(args: CommandLineArguments) -> None:
if not args.secure_boot:
return
@@ -2855,7 +3604,8 @@ def find_secure_boot(args):
if os.path.exists("mkosi.secure-boot.crt"):
args.secure_boot_certificate = "mkosi.secure-boot.crt"
-def strip_suffixes(path):
+
+def strip_suffixes(path: str) -> str:
t = path
while True:
if t.endswith(".xz"):
@@ -2864,32 +3614,39 @@ def strip_suffixes(path):
t = t[:-4]
elif t.endswith(".tar"):
t = t[:-4]
+ elif t.endswith(".qcow2"):
+ t = t[:-6]
else:
break
return t
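
strip_suffixes() peels recognized extensions repeatedly, so companion files (.nspawn, .roothash) land next to the image whatever combination of conversion and compression was used (the .raw branch sits in the elided part of the hunk):

```python
strip_suffixes("image.raw")       # -> 'image'
strip_suffixes("image.qcow2.xz")  # -> 'image' (strips .xz, then .qcow2)
strip_suffixes("image.tar.xz")    # -> 'image'
# build_nspawn_settings_path("image.qcow2.xz") -> 'image.nspawn'
```
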
-def build_nspawn_settings_path(path):
+
+def build_nspawn_settings_path(path: str) -> str:
return strip_suffixes(path) + ".nspawn"
-def build_root_hash_file_path(path):
+
+def build_root_hash_file_path(path: str) -> str:
return strip_suffixes(path) + ".roothash"
-def load_args():
- args = parse_args()
- if args.directory is not None:
- os.chdir(args.directory)
+def load_args(args) -> CommandLineArguments:
+ global arg_debug
+ arg_debug = args.debug
load_defaults(args)
- find_nspawn_settings(args)
+
+ args_find_path(args, 'nspawn_settings', "mkosi.nspawn")
+ args_find_path(args, 'build_script', "mkosi.build")
+ args_find_path(args, 'build_sources', ".")
+ args_find_path(args, 'build_dir', "mkosi.builddir/")
+ args_find_path(args, 'postinst_script', "mkosi.postinst")
+ args_find_path(args, 'finalize_script', "mkosi.finalize")
+ args_find_path(args, 'output_dir', "mkosi.output/")
+ args_find_path(args, 'mksquashfs_tool', "mkosi.mksquashfs-tool", type=lambda x: [x])
+
find_extra(args)
find_skeleton(args)
- find_build_script(args)
- find_build_sources(args)
- find_build_dir(args)
- find_postinst_script(args)
- find_output_dir(args)
find_password(args)
find_passphrase(args)
find_secure_boot(args)
@@ -2902,9 +3659,7 @@ def load_args():
args.force = args.force_count > 0
if args.output_format is None:
- args.output_format = OutputFormat.raw_gpt
- else:
- args.output_format = OutputFormat[args.output_format]
+ args.output_format = OutputFormat.gpt_ext4
if args.distribution is not None:
args.distribution = Distribution[args.distribution]
@@ -2923,10 +3678,10 @@ def load_args():
if args.release is None:
if args.distribution == Distribution.fedora:
- args.release = "25"
- if args.distribution == Distribution.centos:
+ args.release = "29"
+ elif args.distribution == Distribution.centos:
args.release = "7"
- if args.distribution == Distribution.mageia:
+ elif args.distribution == Distribution.mageia:
args.release = "6"
elif args.distribution == Distribution.debian:
args.release = "unstable"
@@ -2940,9 +3695,7 @@ def load_args():
find_cache(args)
if args.mirror is None:
- if args.distribution == Distribution.fedora:
- args.mirror = None
- if args.distribution == Distribution.centos:
+ if args.distribution in (Distribution.fedora, Distribution.centos):
args.mirror = None
elif args.distribution == Distribution.debian:
args.mirror = "http://deb.debian.org/debian"
@@ -2958,17 +3711,24 @@ def load_args():
args.mirror = "http://download.opensuse.org"
if args.bootable:
- if args.distribution == Distribution.ubuntu:
- die("Bootable images are currently not supported on Ubuntu.")
-
if args.output_format in (OutputFormat.directory, OutputFormat.subvolume, OutputFormat.tar):
die("Directory, subvolume and tar images cannot be booted.")
+ if not args.boot_protocols:
+ args.boot_protocols = ["uefi"]
+ if not {"uefi", "bios"}.issuperset(args.boot_protocols):
+ die("Not a valid boot protocol")
+ if "bios" in args.boot_protocols and args.distribution not in (Distribution.fedora,
+ Distribution.arch,
+ Distribution.debian,
+ Distribution.ubuntu):
+ die(f"bios boot not implemented yet for {args.distribution}")
+
if args.encrypt is not None:
- if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
- die("Encryption is only supported for raw gpt, btrfs or squashfs images.")
+ if not args.output_format.is_disk():
+ die("Encryption is only supported for disk images.")
- if args.encrypt == "data" and args.output_format == OutputFormat.raw_btrfs:
+ if args.encrypt == "data" and args.output_format == OutputFormat.gpt_btrfs:
die("'data' encryption mode not supported on btrfs, use 'all' instead.")
if args.encrypt == "all" and args.verity:
@@ -2978,11 +3738,10 @@ def load_args():
args.checksum = True
if args.output is None:
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
- if args.xz:
- args.output = "image.raw.xz"
- else:
- args.output = "image.raw"
+ if args.output_format.is_disk():
+ args.output = ('image' +
+ ('.qcow2' if args.qcow2 else '.raw') +
+ ('.xz' if args.xz else ''))
elif args.output_format == OutputFormat.tar:
args.output = "image.tar.xz"
else:
@@ -3008,10 +3767,13 @@ def load_args():
if args.output_format == OutputFormat.tar:
args.xz = True
- if args.output_format == OutputFormat.raw_squashfs:
+ if args.output_format.is_squashfs():
args.read_only = True
- args.compress = True
args.root_size = None
+ if args.compress is False:
+ die('Cannot disable compression with squashfs')
+ if args.compress is None:
+ args.compress = True
if args.verity:
args.read_only = True
@@ -3042,6 +3804,9 @@ def load_args():
if args.postinst_script is not None:
args.postinst_script = os.path.abspath(args.postinst_script)
+ if args.finalize_script is not None:
+ args.finalize_script = os.path.abspath(args.finalize_script)
+
if args.cache_path is not None:
args.cache_path = os.path.abspath(args.cache_path)
@@ -3059,16 +3824,19 @@ def load_args():
args.esp_size = parse_bytes(args.esp_size)
args.swap_size = parse_bytes(args.swap_size)
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs) and args.root_size is None:
+ if args.output_format in (OutputFormat.gpt_ext4, OutputFormat.gpt_btrfs) and args.root_size is None:
args.root_size = 1024*1024*1024
+ if args.output_format == OutputFormat.gpt_xfs and args.root_size is None:
+ args.root_size = 1300*1024*1024
+
if args.bootable and args.esp_size is None:
args.esp_size = 256*1024*1024
args.verity_size = None
- if args.bootable and args.kernel_commandline is None:
- args.kernel_commandline = "rhgb quiet selinux=0 audit=0 rw"
+ if args.bootable and args.kernel_command_line is None:
+ args.kernel_command_line = "rhgb quiet selinux=0 audit=0 rw"
if args.secure_boot_key is not None:
args.secure_boot_key = os.path.abspath(args.secure_boot_key)
@@ -3078,10 +3846,10 @@ def load_args():
if args.secure_boot:
if args.secure_boot_key is None:
- die("UEFI SecureBoot enabled, but couldn't find private key. (Consider placing it in mkosi.secure-boot.key?)")
+ die("UEFI SecureBoot enabled, but couldn't find private key. (Consider placing it in mkosi.secure-boot.key?)") # NOQA: E501
if args.secure_boot_certificate is None:
- die("UEFI SecureBoot enabled, but couldn't find certificate. (Consider placing it in mkosi.secure-boot.crt?)")
+ die("UEFI SecureBoot enabled, but couldn't find certificate. (Consider placing it in mkosi.secure-boot.crt?)") # NOQA: E501
if args.verb in ("shell", "boot", "qemu"):
if args.output_format == OutputFormat.tar:
@@ -3089,13 +3857,18 @@ def load_args():
if args.xz:
die("Sorry, can't acquire shell in or boot an XZ compressed image.")
+ if args.verb in ("shell", "boot"):
+ if args.qcow2:
+ die("Sorry, can't acquire shell in or boot a qcow2 image.")
+
if args.verb == "qemu":
- if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
- die("Sorry, can't boot non-raw images with qemu.")
+ if not args.output_format.is_disk():
+ die("Sorry, can't boot non-disk images with qemu.")
return args
-def check_output(args):
+
+def check_output(args: CommandLineArguments) -> None:
for f in (args.output,
args.output_checksum if args.checksum else None,
args.output_signature if args.sign else None,
@@ -3109,41 +3882,51 @@ def check_output(args):
if os.path.exists(f):
die("Output file " + f + " exists already. (Consider invocation with --force.)")
-def yes_no(b):
+
+def yes_no(b: bool) -> str:
return "yes" if b else "no"
-def format_bytes_or_disabled(sz):
+
+def format_bytes_or_disabled(sz: Optional[int]) -> str:
if sz is None:
return "(disabled)"
return format_bytes(sz)
-def format_bytes_or_auto(sz):
+
+def format_bytes_or_auto(sz: Optional[int]) -> str:
if sz is None:
return "(automatic)"
return format_bytes(sz)
-def none_to_na(s):
+
+def none_to_na(s: Optional[str]) -> str:
return "n/a" if s is None else s
-def none_to_no(s):
+
+def none_to_no(s: Optional[str]) -> str:
return "no" if s is None else s
-def none_to_none(s):
- return "none" if s is None else s
-def line_join_list(l):
+def none_to_none(o: Optional[object]) -> str:
+ return "none" if o is None else str(o)
+
+
+def line_join_list(ary: List[str]) -> str:
- if not l:
+ if not ary:
return "none"
- return "\n ".join(l)
+ return "\n ".join(ary)
-def print_summary(args):
+
+def print_summary(args: CommandLineArguments) -> None:
sys.stderr.write("DISTRIBUTION:\n")
sys.stderr.write(" Distribution: " + args.distribution.name + "\n")
sys.stderr.write(" Release: " + none_to_na(args.release) + "\n")
+ if args.architecture:
+ sys.stderr.write(" Architecture: " + args.architecture + "\n")
if args.mirror is not None:
sys.stderr.write(" Mirror: " + args.mirror + "\n")
sys.stderr.write("\nOUTPUT:\n")
@@ -3156,31 +3939,36 @@ def print_summary(args):
sys.stderr.write(" Output Checksum: " + none_to_na(args.output_checksum if args.checksum else None) + "\n")
sys.stderr.write(" Output Signature: " + none_to_na(args.output_signature if args.sign else None) + "\n")
sys.stderr.write(" Output Bmap: " + none_to_na(args.output_bmap if args.bmap else None) + "\n")
- sys.stderr.write("Output nspawn Settings: " + none_to_na(args.output_nspawn_settings if args.nspawn_settings is not None else None) + "\n")
+ sys.stderr.write("Output nspawn Settings: " + none_to_na(args.output_nspawn_settings if args.nspawn_settings is not None else None) + "\n") # NOQA: E501
sys.stderr.write(" Incremental: " + yes_no(args.incremental) + "\n")
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs, OutputFormat.subvolume):
- sys.stderr.write(" Read-only: " + yes_no(args.read_only) + "\n")
- if args.output_format in (OutputFormat.raw_btrfs, OutputFormat.subvolume):
- sys.stderr.write(" FS Compression: " + yes_no(args.compress) + "\n")
+ sys.stderr.write(" Read-only: " + yes_no(args.read_only) + "\n")
+ detail = ' ({})'.format(args.compress) if args.compress and not isinstance(args.compress, bool) else ''
+ sys.stderr.write(" FS Compression: " + yes_no(args.compress) + detail + "\n")
+
+ sys.stderr.write(" XZ Compression: " + yes_no(args.xz) + "\n")
+ if args.mksquashfs_tool:
+ sys.stderr.write(" Mksquashfs tool: " + ' '.join(args.mksquashfs_tool) + "\n")
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs, OutputFormat.tar):
- sys.stderr.write(" XZ Compression: " + yes_no(args.xz) + "\n")
+ if args.output_format.is_disk():
+ sys.stderr.write(" QCow2: " + yes_no(args.qcow2) + "\n")
sys.stderr.write(" Encryption: " + none_to_no(args.encrypt) + "\n")
sys.stderr.write(" Verity: " + yes_no(args.verity) + "\n")
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+ if args.output_format.is_disk():
sys.stderr.write(" Bootable: " + yes_no(args.bootable) + "\n")
if args.bootable:
- sys.stderr.write(" Kernel Command Line: " + args.kernel_commandline + "\n")
+ sys.stderr.write(" Kernel Command Line: " + args.kernel_command_line + "\n")
sys.stderr.write(" UEFI SecureBoot: " + yes_no(args.secure_boot) + "\n")
if args.secure_boot:
sys.stderr.write(" UEFI SecureBoot Key: " + args.secure_boot_key + "\n")
sys.stderr.write(" UEFI SecureBoot Cert.: " + args.secure_boot_certificate + "\n")
+ sys.stderr.write(" Boot Protocols: " + line_join_list(args.boot_protocols) + "\n")
+
sys.stderr.write("\nPACKAGES:\n")
sys.stderr.write(" Packages: " + line_join_list(args.packages) + "\n")
@@ -3196,21 +3984,25 @@ def print_summary(args):
sys.stderr.write(" Run tests: " + yes_no(args.with_tests) + "\n")
sys.stderr.write(" Build Sources: " + none_to_none(args.build_sources) + "\n")
+ sys.stderr.write(" Source File Transfer: " + none_to_none(args.source_file_transfer) + "\n")
sys.stderr.write(" Build Directory: " + none_to_none(args.build_dir) + "\n")
sys.stderr.write(" Build Packages: " + line_join_list(args.build_packages) + "\n")
- sys.stderr.write(" Post Inst. Script: " + none_to_none(args.postinst_script) + "\n")
+ sys.stderr.write(" Postinstall Script: " + none_to_none(args.postinst_script) + "\n")
+ sys.stderr.write(" Finalize Script: " + none_to_none(args.finalize_script) + "\n")
sys.stderr.write(" Scripts with network: " + yes_no(args.with_network) + "\n")
sys.stderr.write(" nspawn Settings: " + none_to_none(args.nspawn_settings) + "\n")
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+ if args.output_format.is_disk():
sys.stderr.write("\nPARTITIONS:\n")
sys.stderr.write(" Root Partition: " + format_bytes_or_auto(args.root_size) + "\n")
sys.stderr.write(" Swap Partition: " + format_bytes_or_disabled(args.swap_size) + "\n")
- sys.stderr.write(" ESP: " + format_bytes_or_disabled(args.esp_size) + "\n")
+ if "uefi" in args.boot_protocols:
+ sys.stderr.write(" ESP: " + format_bytes_or_disabled(args.esp_size) + "\n")
+ if "bios" in args.boot_protocols:
+ sys.stderr.write(" BIOS: " + format_bytes_or_disabled(BIOS_PARTITION_SIZE) + "\n")
sys.stderr.write(" /home Partition: " + format_bytes_or_disabled(args.home_size) + "\n")
sys.stderr.write(" /srv Partition: " + format_bytes_or_disabled(args.srv_size) + "\n")
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs, OutputFormat.tar):
sys.stderr.write("\nVALIDATION:\n")
sys.stderr.write(" Checksum: " + yes_no(args.checksum) + "\n")
sys.stderr.write(" Sign: " + yes_no(args.sign) + "\n")
@@ -3220,7 +4012,12 @@ def print_summary(args):
sys.stderr.write("\nHOST CONFIGURATION:\n")
sys.stderr.write(" Extra search paths: " + line_join_list(args.extra_search_paths) + "\n")
-def reuse_cache_tree(args, workspace, run_build_script, for_cache, cached):
+
+def reuse_cache_tree(args: CommandLineArguments,
+ workspace: str,
+ run_build_script: bool,
+ for_cache: bool,
+ cached: bool) -> bool:
"""If there's a cached version of this tree around, use it and
initialize our new root directly from it. Returns a boolean indicating
whether we are now operating on a cached version or not."""
@@ -3232,7 +4029,7 @@ def reuse_cache_tree(args, workspace, run_build_script, for_cache, cached):
return False
if for_cache:
return False
- if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
+ if args.output_format.is_disk_rw():
return False
fname = args.cache_pre_dev if run_build_script else args.cache_pre_inst
@@ -3241,34 +4038,40 @@ def reuse_cache_tree(args, workspace, run_build_script, for_cache, cached):
with complete_step('Copying in cached tree ' + fname):
try:
- copy(fname, os.path.join(workspace, "root"))
+ copy_path(fname, os.path.join(workspace, "root"))
except FileNotFoundError:
return False
return True
-def make_output_dir(args):
+
+def make_output_dir(args: CommandLineArguments) -> None:
"""Create the output directory if set and not existing yet"""
if args.output_dir is None:
return
mkdir_last(args.output_dir, 0o755)
-def make_build_dir(args):
+
+def make_build_dir(args: CommandLineArguments) -> None:
"""Create the build directory if set and not existing yet"""
if args.build_dir is None:
return
mkdir_last(args.build_dir, 0o755)
-def build_image(args, workspace, run_build_script, for_cache=False):
+def build_image(args: CommandLineArguments,
+ workspace: tempfile.TemporaryDirectory,
+ *,
+ run_build_script: bool,
+ for_cache: bool = False,
+ cleanup: bool = False) -> Tuple[Optional[BinaryIO], Optional[BinaryIO], Optional[str]]:
# If there's no build script set, there's no point in executing
# the build script iteration. Let's quit early.
if args.build_script is None and run_build_script:
return None, None, None
- make_output_dir(args)
make_build_dir(args)
raw, cached = reuse_cache_image(args, workspace.name, run_build_script, for_cache)
@@ -3284,9 +4087,10 @@ def build_image(args, workspace, run_build_script, for_cache=False):
prepare_swap(args, loopdev, cached)
prepare_esp(args, loopdev, cached)
- luks_format_root(args, loopdev, run_build_script, cached)
- luks_format_home(args, loopdev, run_build_script, cached)
- luks_format_srv(args, loopdev, run_build_script, cached)
+ if loopdev is not None:
+ luks_format_root(args, loopdev, run_build_script, cached)
+ luks_format_home(args, loopdev, run_build_script, cached)
+ luks_format_srv(args, loopdev, run_build_script, cached)
with luks_setup_all(args, loopdev, run_build_script) as (encrypted_root, encrypted_home, encrypted_srv):
@@ -3300,7 +4104,8 @@ def build_image(args, workspace, run_build_script, for_cache=False):
with mount_cache(args, workspace.name):
cached = reuse_cache_tree(args, workspace.name, run_build_script, for_cache, cached)
install_skeleton_trees(args, workspace.name, for_cache)
- install_distribution(args, workspace.name, run_build_script, cached)
+ install_distribution(args, workspace.name,
+ run_build_script=run_build_script, cached=cached)
install_etc_hostname(args, workspace.name)
install_boot_loader(args, workspace.name, loopdev, cached)
install_extra_trees(args, workspace.name, for_cache)
@@ -3309,7 +4114,10 @@ def build_image(args, workspace, run_build_script, for_cache=False):
set_root_password(args, workspace.name, run_build_script, for_cache)
run_postinst_script(args, workspace.name, run_build_script, for_cache)
+ if cleanup:
+ clean_package_manager_metadata(workspace.name)
reset_machine_id(args, workspace.name, run_build_script, for_cache)
+ reset_random_seed(args, workspace.name)
make_read_only(args, workspace.name, for_cache)
squashfs = make_squashfs(args, workspace.name, for_cache)
@@ -3322,18 +4130,25 @@ def build_image(args, workspace, run_build_script, for_cache=False):
# This time we mount read-only, as we already generated
# the verity data, and hence really shouldn't modify the
# image anymore.
- with mount_image(args, workspace.name, loopdev, encrypted_root, encrypted_home, encrypted_srv, root_read_only=True):
+ with mount_image(args, workspace.name, loopdev,
+ encrypted_root, encrypted_home, encrypted_srv, root_read_only=True):
install_unified_kernel(args, workspace.name, run_build_script, for_cache, root_hash)
secure_boot_sign(args, workspace.name, run_build_script, for_cache)
tar = make_tar(args, workspace.name, run_build_script, for_cache)
- return raw, tar, root_hash
+ return raw or squashfs, tar, root_hash
-def var_tmp(workspace):
+
+def var_tmp(workspace: str) -> str:
return mkdir_last(os.path.join(workspace, "var-tmp"))
-def run_build_script(args, workspace, raw):
+
+def one_zero(b: bool) -> str:
+ return "1" if b else "0"
+
+
+def run_build_script(args: CommandLineArguments, workspace: str, raw: Optional[BinaryIO]) -> None:
if args.build_script is None:
return
@@ -3352,13 +4167,16 @@ def run_build_script(args, workspace, raw):
"--register=no",
"--bind", dest + ":/root/dest",
"--bind=" + var_tmp(workspace) + ":/var/tmp",
- "--setenv=WITH_DOCS=" + ("1" if args.with_docs else "0"),
- "--setenv=WITH_TESTS=" + ("1" if args.with_tests else "0"),
+ "--setenv=WITH_DOCS=" + one_zero(args.with_docs),
+ "--setenv=WITH_TESTS=" + one_zero(args.with_tests),
+ "--setenv=WITH_NETWORK=" + one_zero(args.with_network),
"--setenv=DESTDIR=/root/dest"]
if args.build_sources is not None:
cmdline.append("--setenv=SRCDIR=/root/src")
cmdline.append("--chdir=/root/src")
+ if args.source_file_transfer == SourceFileTransfer.mount:
+ cmdline.append("--bind=" + args.build_sources + ":/root/src")
if args.read_only:
cmdline.append("--overlay=+/root/src::/root/src")
@@ -3378,8 +4196,8 @@ def run_build_script(args, workspace, raw):
cmdline.append("/root/" + os.path.basename(args.build_script))
run(cmdline, check=True)
-def need_cache_images(args):
+def need_cache_images(args: CommandLineArguments) -> bool:
if not args.incremental:
return False
@@ -3388,8 +4206,13 @@ def need_cache_images(args):
return not os.path.exists(args.cache_pre_dev) or not os.path.exists(args.cache_pre_inst)
-def remove_artifacts(args, workspace, raw, tar, run_build_script, for_cache=False):
+def remove_artifacts(args: CommandLineArguments,
+ workspace: str,
+ raw: Optional[BinaryIO],
+ tar: Optional[BinaryIO],
+ run_build_script: bool,
+ for_cache: bool = False) -> None:
if for_cache:
what = "cache build"
elif run_build_script:
@@ -3409,14 +4232,15 @@ def remove_artifacts(args, workspace, raw, tar, run_build_script, for_cache=Fals
unlink_try_hard(os.path.join(workspace, "root"))
unlink_try_hard(os.path.join(workspace, "var-tmp"))
-def build_stuff(args):
+def build_stuff(args: CommandLineArguments) -> None:
# Let's define a fixed machine ID for all our build-time
# runs. We'll strip it off the final image, but some build-time
# tools (dracut...) want a fixed one, hence provide one, and
# always the same
args.machine_id = uuid.uuid4().hex
+ make_output_dir(args)
setup_package_cache(args)
workspace = setup_workspace(args)
@@ -3443,6 +4267,8 @@ def build_stuff(args):
args.cache_pre_inst)
remove_artifacts(args, workspace.name, raw, tar, run_build_script=False)
+ run_finalize_script(args, workspace.name, verb='build')
+
if args.build_script:
        # Run the image builder for the first (development) stage in preparation for the build script
raw, tar, root_hash = build_image(args, workspace, run_build_script=True)
@@ -3450,9 +4276,12 @@ def build_stuff(args):
run_build_script(args, workspace.name, raw)
remove_artifacts(args, workspace.name, raw, tar, run_build_script=True)
+ run_finalize_script(args, workspace.name, verb='final')
+
# Run the image builder for the second (final) stage
- raw, tar, root_hash = build_image(args, workspace, run_build_script=False)
+ raw, tar, root_hash = build_image(args, workspace, run_build_script=False, cleanup=True)
+ raw = qcow2_output(args, raw)
raw = xz_output(args, raw)
root_hash_file = write_root_hash_file(args, root_hash)
settings = copy_nspawn_settings(args)
@@ -3460,10 +4289,7 @@ def build_stuff(args):
signature = calculate_signature(args, checksum)
bmap = calculate_bmap(args, raw)
- link_output(args,
- workspace.name,
- raw.name if raw is not None else None,
- tar.name if tar is not None else None)
+ link_output(args, workspace.name, raw or tar)
link_output_root_hash_file(args, root_hash_file.name if root_hash_file is not None else None)
@@ -3480,65 +4306,96 @@ def build_stuff(args):
settings.name if settings is not None else None)
if root_hash is not None:
- print_step("Root hash is {}.".format(root_hash))
+ print_step(f'Root hash is {root_hash}.')
-def check_root():
+
+def check_root() -> None:
if os.getuid() != 0:
die("Must be invoked as root.")
-def run_shell(args):
- target = "--directory=" + args.output if args.output_format in (OutputFormat.directory, OutputFormat.subvolume) else "--image=" + args.output
+
+def check_native(args: CommandLineArguments) -> None:
+ if args.architecture is not None and args.architecture != platform.machine() and args.build_script:
+ die('Cannot (currently) override the architecture and run build commands')
+
+
+def run_shell(args: CommandLineArguments) -> None:
+ if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
+ target = "--directory=" + args.output
+ else:
+ target = "--image=" + args.output
cmdline = ["systemd-nspawn",
target]
+ if args.read_only:
+ cmdline += ('--read-only',)
+
+ # If we copied in a .nspawn file, make sure it's actually honoured
+ if args.nspawn_settings is not None:
+ cmdline += ('--settings=trusted',)
+
if args.verb == "boot":
cmdline += ('--boot',)
if args.cmdline:
cmdline += ('--', *args.cmdline)
- os.execvp(cmdline[0], cmdline)
+ run(cmdline, execvp=True)
-def run_qemu(args):
+def run_qemu(args: CommandLineArguments) -> None:
# Look for the right qemu command line to use
- ARCH_BINARIES = { 'x86_64' : 'qemu-system-x86_64',
- 'i386' : 'qemu-system-i386'}
+ cmdlines: List[List[str]] = []
+ ARCH_BINARIES = {'x86_64': 'qemu-system-x86_64',
+ 'i386': 'qemu-system-i386'}
arch_binary = ARCH_BINARIES.get(platform.machine(), None)
- for cmdline in ([arch_binary, '-machine', 'accel=kvm'],
- ['qemu', '-machine', 'accel=kvm'],
- ['qemu-kvm']):
-
- if cmdline[0] and shutil.which(cmdline[0]):
- break
+ if arch_binary is not None:
+ cmdlines += [[arch_binary, '-machine', 'accel=kvm']]
+ cmdlines += [
+ ['qemu', '-machine', 'accel=kvm'],
+ ['qemu-kvm'],
+ ]
+ for cmdline in cmdlines:
+ if shutil.which(cmdline[0]) is not None:
+ break
else:
die("Couldn't find QEMU/KVM binary")
- # Look for UEFI firmware blob
- FIRMWARE_LOCATIONS = [
- '/usr/share/edk2/ovmf/OVMF_CODE.fd',
- '/usr/share/qemu/OVMF_CODE.fd',
- ]
+ # UEFI firmware blobs are found in a variety of locations,
+ # depending on distribution and package.
+ FIRMWARE_LOCATIONS = []
+ # First, we look in paths that contain the architecture –
+ # if they exist, they’re almost certainly correct.
if platform.machine() == 'x86_64':
FIRMWARE_LOCATIONS.append('/usr/share/ovmf/ovmf_code_x64.bin')
+ FIRMWARE_LOCATIONS.append('/usr/share/ovmf/x64/OVMF_CODE.fd')
elif platform.machine() == 'i386':
FIRMWARE_LOCATIONS.append('/usr/share/ovmf/ovmf_code_ia32.bin')
+ FIRMWARE_LOCATIONS.append('/usr/share/edk2/ovmf-ia32/OVMF_CODE.fd')
+ # After that, we try some generic paths and hope that if they exist,
+ # they’ll correspond to the current architecture, thanks to the package manager.
+ FIRMWARE_LOCATIONS.append('/usr/share/edk2/ovmf/OVMF_CODE.fd')
+ FIRMWARE_LOCATIONS.append('/usr/share/qemu/OVMF_CODE.fd')
+
for firmware in FIRMWARE_LOCATIONS:
if os.path.exists(firmware):
break
else:
die("Couldn't find OVMF UEFI firmware blob.")
- cmdline += [ "-bios", firmware,
- "-smp", "2",
- "-m", "1024",
- "-drive", "format=raw,file=" + args.output,
- *args.cmdline ]
+ cmdline += ["-smp", "2",
+ "-m", "1024",
+ "-drive", "if=pflash,format=raw,readonly,file=" + firmware,
+ "-drive", "format=" + ("qcow2" if args.qcow2 else "raw") + ",file=" + args.output,
+ *args.cmdline]
- os.execvp(cmdline[0], cmdline)
+ print_running_cmd(cmdline)
-def expand_paths(paths):
+ run(cmdline, execvp=True)
+
+
+def expand_paths(paths: List[str]) -> List[str]:
if not paths:
return []
@@ -3547,7 +4404,7 @@ def expand_paths(paths):
# paths in their home when using mkosi via sudo.
sudo_user = os.getenv("SUDO_USER")
if sudo_user and "SUDO_HOME" not in environ:
- environ["SUDO_HOME"] = os.path.expanduser("~{}".format(sudo_user))
+ environ["SUDO_HOME"] = os.path.expanduser(f'~{sudo_user}')
# No os.path.expandvars because it treats unset variables as empty.
expanded = []
@@ -3560,7 +4417,8 @@ def expand_paths(paths):
pass
return expanded
-def prepend_to_environ_path(paths):
+
+def prepend_to_environ_path(paths: List[str]) -> None:
if not paths:
return
@@ -3572,8 +4430,11 @@ def prepend_to_environ_path(paths):
else:
os.environ["PATH"] = new_path + ":" + original_path
-def main():
- args = load_args()
+
+def run_verb(args):
+ load_args(args)
+
+ prepend_to_environ_path(args.extra_search_paths)
if args.verb in ("build", "clean", "shell", "boot", "qemu"):
check_root()
@@ -3587,10 +4448,9 @@ def main():
if args.verb == "summary" or needs_build:
print_summary(args)
- prepend_to_environ_path(args.extra_search_paths)
-
if needs_build:
check_root()
+ check_native(args)
init_namespace(args)
build_stuff(args)
print_output_size(args)
@@ -3601,5 +4461,27 @@ def main():
if args.verb == "qemu":
run_qemu(args)
+
+def main() -> None:
+ args = parse_args()
+
+ if args.directory is not None:
+ os.chdir(args.directory)
+
+ if args.all:
+ for f in os.scandir(args.all_directory):
+
+ if not f.name.startswith("mkosi."):
+ continue
+
+ a = copy.deepcopy(args)
+ a.default_path = f.path
+
+ with complete_step('Processing ' + f.path):
+ run_verb(a)
+ else:
+ run_verb(args)
+
+
if __name__ == "__main__":
main()
diff --git a/mkosi.default b/mkosi.default
index 9e23a17..aa75737 100644..120000
--- a/mkosi.default
+++ b/mkosi.default
@@ -1,23 +1 @@
-# SPDX-License-Identifier: LGPL-2.1+
-# Let's build an image that is just good enough to build new mkosi images again
-
-[Distribution]
-Distribution=fedora
-Release=27
-
-[Output]
-Format=raw_squashfs
-Bootable=yes
-
-[Packages]
-Packages=
- arch-install-scripts
- btrfs-progs
- debootstrap
- dnf
- dosfstools
- git
- gnupg
- squashfs-tools
- tar
- veritysetup
+mkosi.files/mkosi.fedora
\ No newline at end of file
diff --git a/mkosi.files/mkosi.fedora b/mkosi.files/mkosi.fedora
new file mode 100644
index 0000000..6bc292f
--- /dev/null
+++ b/mkosi.files/mkosi.fedora
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: LGPL-2.1+
+# Let's build an image that is just good enough to build new mkosi images again
+
+[Distribution]
+Distribution=fedora
+Release=29
+
+[Output]
+Format=gpt_ext4
+Bootable=yes
+Output=fedora.raw
+
+[Packages]
+Packages=
+ arch-install-scripts
+ btrfs-progs
+ debootstrap
+ dnf
+ dosfstools
+ git-core
+ gnupg
+ squashfs-tools
+ tar
+ veritysetup
diff --git a/mkosi.files/mkosi.ubuntu b/mkosi.files/mkosi.ubuntu
new file mode 100644
index 0000000..98f33f9
--- /dev/null
+++ b/mkosi.files/mkosi.ubuntu
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: LGPL-2.1+
+
+[Distribution]
+Distribution=ubuntu
+Release=xenial
+
+[Output]
+Format=gpt_ext4
+Output=ubuntu.raw
+
+[Packages]
+Packages=
+ tzdata
diff --git a/mkosi.md b/mkosi.md
new file mode 100644
index 0000000..2b56a58
--- /dev/null
+++ b/mkosi.md
@@ -0,0 +1,1114 @@
+% mkosi(1)
+% The mkosi Authors
+% 2016-
+
+# NAME
+
+mkosi — Build Legacy-Free OS Images
+
+# SYNOPSIS
+
+`mkosi [options…] build`
+
+`mkosi [options…] clean`
+
+`mkosi [options…] summary`
+
+`mkosi [options…] shell [command line…]`
+
+`mkosi [options…] boot [command line…]`
+
+`mkosi [options…] qemu`
+
+# DESCRIPTION
+
+`mkosi` is a tool for easily building legacy-free OS images. It's a
+fancy wrapper around `dnf --installroot`, `debootstrap`, `pacstrap`
+and `zypper` that may generate disk images with a number of bells and
+whistles.
+
+## Supported output formats
+
+The following output formats are supported:
+
+* Raw *GPT* disk image, with ext4 as root (*gpt_ext4*)
+
+* Raw *GPT* disk image, with xfs as root (*gpt_xfs*)
+
+* Raw *GPT* disk image, with btrfs as root (*gpt_btrfs*)
+
+* Raw *GPT* disk image, with squashfs as read-only root (*gpt_squashfs*)
+
+* Plain squashfs image, without partition table, as read-only root
+ (*plain_squashfs*)
+
+* Plain directory, containing the *OS* tree (*directory*)
+
+* btrfs subvolume, with separate subvolumes for `/var`, `/home`,
+ `/srv`, `/var/tmp` (*subvolume*)
+
+* Tarball (*tar*)
+
+When a *GPT* disk image is created, the following additional
+options are available:
+
+* A swap partition may be added in
+
+* The image may be made bootable on *EFI* and *BIOS* systems
+
+* Separate partitions for `/srv` and `/home` may be added in
+
+* The root, /srv and /home partitions may optionally be encrypted with
+ LUKS.
+
+* A dm-verity partition may be added in that adds runtime integrity
+ data for the root partition
+
+## Other features
+
+* Optionally, create an *SHA256SUMS* checksum file for the result,
+ possibly even signed via `gpg`.
+
+* Optionally, place a specific `.nspawn` settings file along
+ with the result.
+
+* Optionally, build a local project's *source* tree in the image
+ and add the result to the generated image (see below).
+
+* Optionally, share *RPM*/*DEB* package cache between multiple runs,
+ in order to optimize build speeds.
+
+* Optionally, the resulting image may be compressed with *XZ*.
+
+* Optionally, the resulting image may be converted into a *QCOW2* file
+ suitable for `qemu` storage.
+
+* Optionally, btrfs' read-only flag for the root subvolume may be
+ set.
+
+* Optionally, btrfs' compression may be enabled for all
+ created subvolumes.
+
+* By default images are created without all files marked as
+ documentation in the packages, on distributions where the
+ package manager supports this. Use the `--with-docs` flag to
+ build an image with docs added.
+
+## Command Line Verbs
+
+The following command line verbs are known:
+
+`build`
+
+: This builds the image, based on the settings passed in on the
+ command line or read from a `mkosi.default` file, see below. This
+ verb is the default if no verb is explicitly specified. This command
+ must be executed as `root`.
+
+`clean`
+
+: Remove build artifacts generated on a previous build. If combined
+ with `-f`, also removes incremental build cache images. If `-f` is
+ specified twice, also removes any package cache.
+
+`summary`
+
+: Outputs a human-readable summary of all options used for building an
+ image. This will parse the command line and `mkosi.default` file as it
+ would do on `build`, but only output what it is configured for and not
+  actually build anything.
+
+`shell`
+
+: This builds the image if it is not built yet, and then invokes
+ `systemd-nspawn` to acquire an interactive shell prompt in it. If
+ this verb is used an optional command line may be specified which is
+ then invoked in place of the shell in the container. Combine this
+ with `-f` in order to rebuild the image unconditionally before
+ acquiring the shell, see below. This command must be executed as
+ `root`.
+
+`boot`
+
+: Similar to `shell` but boots the image up using `systemd-nspawn`. If
+ this verb is used an optional command line may be specified which is
+ passed as "kernel command line" to the init system in the image.
+
+`qemu`
+
+: Similar to `boot` but uses `qemu` to boot up the image, i.e. VM
+  virtualization is used instead of container virtualization. This verb is
+ only supported on images that contain a boot loader, i.e. those
+ built with `--bootable` (see below). This command must be executed
+ as `root` unless the image already exists and `-f` is not specified.
+
+`help`
+
+: This verb is equivalent to the `--help` switch documented below: it
+ shows a brief usage explanation.
+
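+As a quick orientation, here is a hypothetical session combining
+several of the verbs above (all invocations are illustrative):
+
+```bash
+# mkosi summary     # show the configuration that would be used
+# mkosi build       # build the image ("build" is the default verb)
+# mkosi -f shell    # rebuild unconditionally, then open a shell in it
+# mkosi boot        # boot the image with systemd-nspawn
+# mkosi -f clean    # remove build artifacts and incremental cache images
+```
+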
+## Command Line Parameters
+
+The following command line parameters are understood. Note that many
+of these parameters can also be set in the `mkosi.default` file, for
+details see the table below.
+
+`--distribution=`, `-d`
+
+: The distribution to install in the image. Takes one of the following
+ arguments: `fedora`, `debian`, `ubuntu`, `arch`, `opensuse`,
+ `mageia`, `centos`, `clear`. If not specified, defaults to the
+ distribution of the host.
+
+`--release=`, `-r`
+
+: The release of the distribution to install in the image. The precise
+ syntax of the argument this takes depends on the distribution used,
+ and is either a numeric string (in case of Fedora, CentOS, …,
+ e.g. `29`), or a distribution version name (in case of Debian,
+  Ubuntu, …, e.g. `artful`). If neither this option nor
+ `--distribution=` is specified, defaults to the distribution version
+ of the host. If the distribution is specified, defaults to a recent
+ version of it.
+
+`--mirror=`, `-m`
+
+: The mirror to use for downloading the distribution packages. Expects
+ a mirror URL as argument.
+
+`--repositories=`
+
+: Additional package repositories to use during installation. Expects
+ one or more URLs as argument, separated by commas. This option may
+ be used multiple times, in which case the list of repositories to
+ use is combined.
+
+`--architecture=`
+
+: The architecture to build the image for. Note that this currently
+ only works for architectures compatible with the host's
+ architecture.
+
+`--format=`, `-t`
+
+: The image format type to generate. One of `directory` (for
+ generating OS images inside a local directory), `subvolume`
+ (similar, but as a btrfs subvolume), `tar` (similar, but a tarball
+ of the image is generated), `gpt_ext4` (a block device image with an
+ ext4 file system inside a GPT partition table), `gpt_xfs`
+ (similar, but with an xfs file system), `gpt_btrfs` (similar, but
+  with a btrfs file system), `gpt_squashfs` (similar, but with a
+ squashfs file system), `plain_squashfs` (a plain squashfs file
+ system without a partition table).
+
+`--output=`, `-o`
+
+: Path for the output image file to generate. Takes a relative or
+ absolute path where the generated image will be placed. If neither
+ this option nor `--output-dir=` is used (see below), the image is
+  generated under the name `image`, with its name suffixed with an
+ appropriate file suffix (e.g. `image.raw.xz` in case `gpt_ext4` is
+ used in combination with `--xz`).
+
+`--output-dir=`, `-O`
+
+: Path to a directory where to place all generated artifacts (i.e. the
+ `SHA256SUMS` file and similar). If this is not specified and a
+ directory `mkosi.output/` exists in the local directory it is
+ automatically used for this purpose. If this is not specified and
+ such a directory does not exist, all output artifacts are placed
+ adjacent to the output image file.
+
+`--force`, `-f`
+
+: Replace the output file if it already exists, when building an
+ image. By default when building an image and an output artifact
+ already exists `mkosi` will refuse operation. Specify `-f` to delete
+ all build artifacts from a previous run before re-building the
+ image. If incremental builds are enabled (see below), specifying
+ this option twice will ensure the intermediary cache files are
+ removed, too, before the re-build is initiated. If a package cache
+ is used (see below), specifying this option thrice will ensure the
+ package cache is removed too, before the re-build is initiated. For
+ the `clean` operation `-f` has a slightly different effect: by
+ default the verb will only remove build artifacts from a previous
+ run, when specified once the incremental cache files are deleted
+ too, and when specified twice the package cache is also removed.
+
+`--bootable`, `-b`
+
+: Generate a bootable image. By default this will generate an image
+ bootable on UEFI systems. Use `--boot-protocols=` to select support
+ for a different boot protocol.
+
+`--boot-protocols=`
+
+: Pick one or more boot protocols to support when generating a
+ bootable image, as enabled with `--bootable` above. Takes a
+ comma-separated list of `uefi` or `bios`. May be specified more than
+ once in which case the specified lists are merged. If `uefi` is
+ specified the `sd-boot` UEFI boot loader is used, if `bios` is
+ specified the GNU Grub boot loader is used.
+
+`--kernel-command-line=`
+
+: Use the specified kernel command line when building bootable
+ images.
+
+`--secure-boot`
+
+: Sign the resulting kernel/initrd image for UEFI SecureBoot.
+
+`--secure-boot-key=`
+
+: Path to the PEM file containing the secret key for signing the
+ UEFI kernel image, if `--secure-boot` is used.
+
+`--secure-boot-certificate=`
+
+: Path to the X.509 file containing the certificate for the signed
+ UEFI kernel image, if `--secure-boot` is used.
+
+`--read-only`
+
+: Make root file system read-only. Only applies to `gpt_ext4`,
+  `gpt_xfs`, `gpt_btrfs`, `subvolume` output formats, and implied on
+ `gpt_squashfs` and `plain_squashfs`.
+
+`--encrypt=`
+
+: Encrypt all partitions in the file system or just the root file
+ system. Takes either `all` or `data` as argument. If `all` the root,
+ `/home` and `/srv` file systems will be encrypted using
+ dm-crypt/LUKS (with its default settings). If `data` the root file
+ system will be left unencrypted, but `/home` and `/srv` will be
+ encrypted. The passphrase to use is read from the `mkosi.passphrase`
+ file in the current working directory (see below). Note that the
+ UEFI System Partition (ESP) containing the boot loader and kernel to
+ boot is never encrypted since it needs to be accessible by the
+ firmware.
+
+`--verity`
+
+: Add an "Verity" integrity partition to the image. If enabled, the
+ root partition is protected with `dm-verity` against off-line
+ modification, the verification data is placed in an additional GPT
+ partition. Implies `--read-only`.
+
+`--compress=`
+
+: Compress the generated file systems. Only applies to `gpt_btrfs`,
+ `subvolume`, `gpt_squashfs`, `plain_squashfs`. Takes one of `zlib`,
+ `lzo`, `zstd`, `lz4`, `xz` or a boolean value as argument. If the
+ latter is used compression is enabled/disabled and the default
+ algorithm is used. In case of the `squashfs` output formats
+ compression is implied, however this option may be used to select
+ the algorithm.
+
+`--mksquashfs=`
+
+: Set the path to the `mksquashfs` executable to use. This is useful
+  in case the parameters for the tool shall be augmented: the
+  executable may be replaced by a script that invokes the real tool
+  with the desired parameters.
+
+`--xz`
+
+: Compress the resulting image with `xz`. This only applies to
+ `gpt_ext4`, `gpt_xfs`, `gpt_btrfs`, `gpt_squashfs` and is implied
+ for `tar`. Note that when applied to the block device image types
+ this means the image cannot be started directly but needs to be
+ decompressed first. This also means that the `shell`, `boot`, `qemu`
+ verbs are not available when this option is used.
+
+`--qcow2`
+
+: Encode the resulting image as QEMU QCOW2 image. This only applies to
+ `gpt_ext4`, `gpt_xfs`, `gpt_btrfs`, `gpt_squashfs`. QCOW2 images can
+ be read natively by `qemu`, but not by the Linux kernel. This means
+ the `shell` and `boot` verbs are not available when this option is
+ used, however `qemu` will work.
+
+`--hostname=`
+
+: Set the image's hostname to the specified name.
+
+`--no-chown`
+
+: By default, if `mkosi` is run inside a `sudo` environment all
+ generated artifacts have their UNIX user/group ownership changed to
+ the user which invoked `sudo`. With this option this may be turned
+ off and all generated files are owned by `root`.
+
+`--incremental`, `-i`
+
+: Enable incremental build mode. This only applies if the two-phase
+ `mkosi.build` build script logic is used. In this mode, a copy of
+ the OS image is created immediately after all OS packages are
+ unpacked but before the `mkosi.build` script is invoked in the
+  development container. Similarly, a copy of the final image is created
+ immediately before the build artifacts from the `mkosi.build` script
+ are copied in. On subsequent invocations of `mkosi` with the `-i`
+ switch these cached images may be used to skip the OS package
+ unpacking, thus drastically speeding up repetitive build times. Note
+ that when this is used and a pair of cached incremental images
+ exists they are not automatically regenerated, even if options such
+ as `--packages=` are modified. In order to force rebuilding of these
+  cached images, combine `-i` with `-ff`, which ensures the cached
+ images are removed first, and then re-created.
+
+`--package=`, `-p`
+
+: Install the specified distribution packages (i.e. RPM, DEB, …) in
+ the image. Takes a comma separated list of packages. This option may
+ be used multiple times in which case the specified package list is
+  combined. Packages specified this way will be installed both in the
+ development and the final image (see below). Use `--build-package=`
+  (see below) to specify packages that shall only be installed in the
+  development image used for building, but that shall not appear in the
+ final image.
+
+`--with-docs`
+
+: Include documentation in the image built. By default, if the
+  underlying distribution package manager supports it, documentation is
+  not included in the image built. The `$WITH_DOCS` environment
+ variable passed to the `mkosi.build` script indicates whether this
+ option was used or not, see below.
+
+`--without-tests`, `-T`
+
+: If set the `$WITH_TESTS` environment variable is set to `0` when the
+ `mkosi.build` script is invoked. This is supposed to be used by the
+ build script to bypass any unit or integration tests that are
+ normally run during the source build process. Note that this option
+ has no effect unless the `mkosi.build` build script honors it.
+
+`--cache=`
+
+: Takes a path to a directory to use as package cache for the
+ distribution package manager used. If this option is not used, but a
+ `mkosi.cache/` directory is found in the local directory it is
+ automatically used for this purpose (also see below). The directory
+ configured this way is mounted into both the development and the
+ final image while the package manager is running.
+
+`--extra-tree=`
+
+: Takes a path to a directory to copy on top of the OS tree the
+ package manager generated. Use this to override any default
+ configuration files shipped with the distribution. If this option is
+ not used, but the `mkosi.extra/` directory is found in the local
+ directory it is automatically used for this purpose (also see
+ below). Instead of a directory a `tar` file may be specified too. In
+ this case it is unpacked into the OS tree before the package manager
+ is invoked. This mode of operation allows setting permissions and
+ file ownership explicitly, in particular for projects stored in a
+ version control system such as `git` which does retain full file
+ ownership and access mode metadata for committed files. If a tar file
+ `mkosi.extra.tar` is found in the local directory it automatically
+ used for this purpose.
+
+`--skeleton-tree=`
+
+: Takes a path to a directory to copy into the OS tree before invoking
+ the package manager. Use this to insert files and directories into
+ the OS tree before the package manager installs any packages. If
+ this option is not used, but the `mkosi.skeleton/` directory is
+ found in the local directory it is automatically used for this
+ purpose (also see below). As with the extra tree logic above,
+ instead of a directory a `tar` file may be used too, and
+ `mkosi.skeleton.tar` is automatically used.
+
+`--build-script=`
+
+: Takes a path to an executable that is used as build script for this
+ image. If this option is used the build process will be two-phased
+ instead of single-phased (see below). The specified script is copied
+  onto the development image and executed inside a `systemd-nspawn`
+  container environment. If this option is not used, but a
+  `mkosi.build` file is found in the local directory, it is automatically
+ used for this purpose (also see below).
+
+`--build-sources=`
+
+: Takes a path of a source tree to copy into the development image, if
+ a build script is used. This only applies if a build script is used,
+ and defaults to the local directory. Use `--source-file-transfer=`
+ to configure how the files are transferred from the host to the
+ container image.
+
+`--build-dir=`
+
+: Takes a path of a directory to use as build directory for build
+ systems that support out-of-tree builds (such as Meson). The
+ directory used this way is shared between repeated builds, and
+ allows the build system to reuse artifacts (such as object files,
+ executable, …) generated on previous invocations. This directory is
+ mounted into the development image when the build script is
+ invoked. The build script can find the path to this directory in the
+ `$BUILDDIR` environment variable. If this option is not specified,
+ but a directory `mkosi.builddir/` exists in the local directory it
+ is automatically used for this purpose (also see below).
+
+`--build-package=`
+
+: Similar to `--package=`, but configures packages to install only in
+ the first phase of the build, into the development image. This
+ option should be used to list packages containing header files,
+ compilers, build systems, linkers and other build tools the
+ `mkosi.build` script requires to operate. Note that packages listed
+ here are only included in the image created during the first phase
+  of the build, and are absent in the final image. Use `--package=` to
+ list packages that shall be included in both.
+
+`--postinst-script=`
+
+: Takes a path to an executable that is invoked inside the final image
+ right after copying in the build artifacts generated in the first
+ phase of the build. This script is invoked inside a `systemd-nspawn`
+ container environment, and thus does not have access to host
+ resources. If this option is not used, but an executable
+ `mkosi.postinst` is found in the local directory, it is
+ automatically used for this purpose (also see below).
+
+`--finalize-script=`
+
+: Takes a path to an executable that is invoked outside the final
+ image right after copying in the build artifacts generated in the
+ first phase of the build, and after having executed the
+ `mkosi.postinst` script (see above). This script is invoked directly
+ in the host environment, and hence has full access to the host's
+ resources. If this option is not used, but an executable
+ `mkosi.finalize` is found in the local directory, it is
+ automatically used for this purpose (also see below).
+
+`--source-file-transfer=`
+
+: Configures how the source file tree (as configured with
+ `--build-sources=`) is transferred into the container image
+ during the first phase of the build. Takes one of `copy-all` (to
+ copy all files from the source tree), `copy-git-cached` (to copy
+ only those files `git-ls-files --cached` lists), `copy-git-others`
+ (to copy only those files `git-ls-files --others` lists), `mount` to
+ bind mount the source tree directly. Defaults to `copy-git-cached`
+ if a `git` source tree is detected, otherwise `copy-all`.
+
+`--with-network`
+
+: Enables network connectivity while the build script `mkosi.build` is
+ invoked. By default, the build script runs with networking turned
+ off. The `$WITH_NETWORK` environment variable is passed to the
+ `mkosi.build` build script indicating whether the build is done with
+ or without this option.
+
+`--settings=`
+
+: Specifies a `.nspawn` settings file for `systemd-nspawn` to use in
+ the `boot` and `shell` verbs, and to place next to the generated
+ image file. This is useful to configure the `systemd-nspawn`
+ environment when the image is run. If this setting is not used but
+  an `mkosi.nspawn` file is found in the local directory it is
+ automatically used for this purpose (also see below).
+
+`--root-size=`
+
+: Takes a size in bytes for the root file system. The specified
+ numeric value may be suffixed with `K`, `M`, `G` to indicate kilo-,
+ mega- and gigabytes (all to the base of 1024). This applies to
+ output formats `gpt_ext4`, `gpt_xfs`, `gpt_btrfs`. Defaults to 1G,
+ except for `gpt_xfs` where it defaults to 1.3G.
+
+`--esp-size=`
+
+: Similar, and configures the size of the UEFI System Partition
+ (ESP). This is only relevant if the `--bootable` option is used to
+ generate a bootable image. Defaults to 256M.
+
+`--swap-size=`
+
+: Similar, and configures the size of a swap partition on the
+ image. If omitted no swap partition is created.
+
+`--home-size=`
+
+: Similar, and configures the size of the `/home` partition. If
+ omitted no separate `/home` partition is created.
+
+`--srv-size=`
+
+: Similar, and configures the size of the `/srv` partition. If
+ omitted no separate `/srv` partition is created.
+
+`--checksum`
+
+: Generate a `SHA256SUMS` file of all generated artifacts after the
+ build is complete.
+
+`--sign`
+
+: Sign the generated `SHA256SUMS` using `gpg` after completion.
+
+`--key=`
+
+: Select the `gpg` key to use for signing `SHA256SUMS`. This key
+ is required to exist in the `gpg` keyring already.
+
+`--bmap`
+
+: Generate a `bmap` file for usage with `bmaptool` from the generated
+ image file.
+
+`--password=`
+
+: Set the password of the `root` user. By default the `root` account
+ is locked. If this option is not used but a file `mkosi.rootpw` exists
+ in the local directory the root password is automatically read from it.
+
+`--extra-search-paths=`
+
+: List of colon-separated paths to look for tools in, before using the
+ regular `$PATH` search path.
+
+`--directory=`, `-C`
+
+: Takes a path to a directory. `mkosi` switches to this directory
+ before doing anything. Note that the various `mkosi.*` files are
+ searched for only after changing to this directory, hence using this
+ option is an effective way to build a project located in a specific
+ directory.
+
+`--default=`
+
+: Loads additional settings from the specified settings file. Most
+ command line options may also be configured in a settings file. See
+ the table below to see which command line options match which
+ settings file option. If this option is not used, but a file
+ `mkosi.default` is found in the local directory it is automatically
+ used for this purpose. If a setting is configured both on the
+ command line and in the settings file, the command line generally
+ wins, except for options taking lists in which case both lists are
+ combined.
+
+`--all`, `-a`
+
+: Iterate through all files `mkosi.*` in the `mkosi.files/`
+ subdirectory, and build each as if `--default=mkosi.files/mkosi.…`
+ was invoked. This is a quick way to build a large number of images
+ in one go. Any additional specified command line arguments override
+ the relevant options in all files processed this way.
+
+`--all-directory=`
+
+: If specified, overrides the directory the `--all` logic described
+ above looks for settings files in. If unspecified, defaults to
+ `mkosi.files/` in the current working directory (see above).
+
+`--version`
+
+: Show package version.
+
+`--help`, `-h`
+
+: Show brief usage information.
+
+## Command Line Parameters and their Settings File Counterparts
+
+Most command line parameters may also be placed in an `mkosi.default`
+settings file (or any other file `--default=` is used on). The
+following table shows which command line parameters correspond with
+which settings file options.
+
+| Command Line Parameter | `mkosi.default` section | `mkosi.default` setting |
+|------------------------------|-------------------------|---------------------------|
+| `--distribution=`, `-d` | `[Distribution]` | `Distribution=` |
+| `--release=`, `-r` | `[Distribution]` | `Release=` |
+| `--repositories=` | `[Distribution]` | `Repositories=` |
+| `--mirror=`, `-m` | `[Distribution]` | `Mirror=` |
+| `--architecture=` | `[Distribution]` | `Architecture=` |
+| `--format=`, `-t` | `[Output]` | `Format=` |
+| `--output=`, `-o` | `[Output]` | `Output=` |
+| `--output-dir=`, `-O` | `[Output]` | `OutputDirectory=` |
+| `--force`, `-f` | `[Output]` | `Force=` |
+| `--bootable`, `-b` | `[Output]` | `Bootable=` |
+| `--boot-protocols=` | `[Output]` | `BootProtocols=` |
+| `--kernel-command-line=` | `[Output]` | `KernelCommandLine=` |
+| `--secure-boot` | `[Output]` | `SecureBoot=` |
+| `--secure-boot-key=` | `[Output]` | `SecureBootKey=` |
+| `--secure-boot-certificate=` | `[Output]` | `SecureBootCertificate=` |
+| `--read-only` | `[Output]` | `ReadOnly=` |
+| `--encrypt=` | `[Output]` | `Encrypt=` |
+| `--verity`                   | `[Output]`              | `Verity=`                 |
+| `--compress=` | `[Output]` | `Compress=` |
+| `--mksquashfs=` | `[Output]` | `Mksquashfs=` |
+| `--xz` | `[Output]` | `XZ=` |
+| `--qcow2` | `[Output]` | `QCow2=` |
+| `--hostname=` | `[Output]` | `Hostname=` |
+| `--package=` | `[Packages]` | `Packages=` |
+| `--with-docs` | `[Packages]` | `WithDocs=` |
+| `--without-tests`, `-T` | `[Packages]` | `WithTests=` |
+| `--cache=` | `[Packages]` | `Cache=` |
+| `--extra-tree=` | `[Packages]` | `ExtraTrees=` |
+| `--skeleton-tree=` | `[Packages]` | `SkeletonTrees=` |
+| `--build-script=` | `[Packages]` | `BuildScript=` |
+| `--build-sources=` | `[Packages]` | `BuildSources=` |
+| `--source-file-transfer=` | `[Packages]` | `SourceFileTransfer=` |
+| `--build-dir=`               | `[Packages]`            | `BuildDirectory=`         |
+| `--build-package=`           | `[Packages]`            | `BuildPackages=`          |
+| `--postinst-script=` | `[Packages]` | `PostInstallationScript=` |
+| `--finalize-script=` | `[Packages]` | `FinalizeScript=` |
+| `--with-network` | `[Packages]` | `WithNetwork=` |
+| `--settings=` | `[Packages]` | `NSpawnSettings=` |
+| `--root-size=` | `[Partitions]` | `RootSize=` |
+| `--esp-size=` | `[Partitions]` | `ESPSize=` |
+| `--swap-size=` | `[Partitions]` | `SwapSize=` |
+| `--home-size=` | `[Partitions]` | `HomeSize=` |
+| `--srv-size=` | `[Partitions]` | `SrvSize=` |
+| `--checksum` | `[Validation]` | `CheckSum=` |
+| `--sign` | `[Validation]` | `Sign=` |
+| `--key=` | `[Validation]` | `Key=` |
+| `--bmap` | `[Validation]` | `BMap=` |
+| `--password=` | `[Validation]` | `Password=` |
+| `--extra-search-paths=` | `[Host]` | `ExtraSearchPaths=` |
+
+Command line options that take no argument are not suffixed with a `=`
+in their long version in the table above. In the `mkosi.default` file
+they are modeled as boolean options that take either `1`, `yes`,
+`true` for enabling, and `0`, `no`, `false` for disabling.
+
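+As an example, a command line such as `mkosi --bootable --with-network
+--package=git` could equivalently be expressed in a settings file like
+this (a sketch based on the table above; the package selection is
+illustrative):
+
+```bash
+# cat mkosi.default
+[Output]
+Bootable=yes
+
+[Packages]
+Packages=git
+WithNetwork=yes
+```
+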
+## Supported distributions
+
+Images may be created containing installations of the
+following *OS*es.
+
+* *Fedora*
+
+* *Debian*
+
+* *Ubuntu*
+
+* *Arch Linux*
+
+* *openSUSE*
+
+* *Mageia*
+
+* *CentOS*
+
+* *Clear Linux*
+
+In theory, any distribution may be used on the host for building
+images containing any other distribution, as long as the necessary
+tools are available. Specifically, any distribution that packages
+`debootstrap` may be used to build *Debian* or *Ubuntu* images. Any
+distribution that packages `dnf` may be used to build *Fedora* or
+*Mageia* images. Any distro that packages `pacstrap` may be used to
+build *Arch Linux* images. Any distribution that packages `zypper` may
+be used to build *openSUSE* images. Any distribution that packages
+`yum` (or the newer replacement `dnf`) may be used to build *CentOS*
+images.
+
+Currently, *Fedora* packages all relevant tools as of Fedora 28.
+
+## Compatibility
+
+Generated images are *legacy-free*. This means only *GPT* disk labels
+(and no *MBR* disk labels) are supported, and only systemd based
+images may be generated.
+
+All generated *GPT* disk images may be booted in a local
+container directly with:
+
+```bash
+systemd-nspawn -bi image.raw
+```
+
+Additionally, bootable *GPT* disk images (as created with the
+`--bootable` flag) work when booted directly by *EFI* and *BIOS*
+systems, for example in *KVM* via:
+
+```bash
+qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=image.raw
+```
+
+*EFI* bootable *GPT* images are larger than plain *GPT* images, as
+they additionally carry an *EFI* system partition containing a
+boot loader, as well as a kernel, kernel modules, udev and
+more.
+
+All directory or btrfs subvolume images may be booted directly
+with:
+
+```bash
+systemd-nspawn -bD image
+```
+
+# FILES
+
+To make it easy to build images for development versions of your
+projects, mkosi can read configuration data from the local directory,
+under the assumption that it is invoked from a *source*
+tree. Specifically, the following files are used if they exist in the
+local directory:
+
+* `mkosi.default` may be used to configure mkosi's image building
+ process. For example, you may configure the distribution to use
+ (`fedora`, `ubuntu`, `debian`, `arch`, `opensuse`, `mageia`) for the
+ image, or additional distribution packages to install. Note that all
+ options encoded in this configuration file may also be set on the
+ command line, and this file is hence little more than a way to make
+ sure simply typing `mkosi` without further parameters in your
+ *source* tree is enough to get the right image of your choice set
+  up. Additionally, if a `mkosi.default.d` directory exists, each file
+ in it is loaded in the same manner adding/overriding the values
+ specified in `mkosi.default`. The file format is inspired by Windows
+ `.ini` files and supports multi-line assignments: any line with
+ initial whitespace is considered a continuation line of the line
+ before. Command-line arguments, as shown in the help description,
+ have to be included in a configuration block (e.g. "[Packages]")
+ corresponding to the argument group (e.g. "Packages"), and the
+ argument gets converted as follows: "--with-network" becomes
+ "WithNetwork=yes". For further details see the table above.
+
+* `mkosi.extra/` or `mkosi.extra.tar` may be respectively a directory
+  or archive. If either exists, all files contained in it are copied over
+ the directory tree of the image after the *OS* was installed. This
+ may be used to add in additional files to an image, on top of what
+ the distribution includes in its packages. When using a directory
+ file ownership is not preserved: all files copied will be owned by
+ root. To preserve ownership use a tar archive.
+
+* `mkosi.skeleton/` or `mkosi.skeleton.tar` may be respectively a
+ directory or archive, and they work in the same way as
+  `mkosi.extra/`/`mkosi.extra.tar`. However, the files are copied
+  before anything else, so as to provide a skeleton tree for the
+  OS. This allows changing the package manager and creating files
+  that need to be there before anything is installed. When using a
+  directory, file
+ ownership is not preserved: all files copied will be owned by
+ root. To preserve ownership use a tar archive.
+
+* `mkosi.build` may be an executable script. If it exists the image
+ will be built twice: the first iteration will be the *development*
+ image, the second iteration will be the *final* image. The
+ *development* image is used to build the project in the current
+ working directory (the *source* tree). For that the whole directory
+  is copied into the image, along with the `mkosi.build` build
+ script. The script is then invoked inside the image (via
+ `systemd-nspawn`), with `$SRCDIR` pointing to the *source*
+ tree. `$DESTDIR` points to a directory where the script should place
+  any files it generates that shall end up in the *final*
+ image. Note that `make`/`automake`/`meson` based build systems
+ generally honor `$DESTDIR`, thus making it very natural to build
+ *source* trees from the build script. After the *development* image
+ was built and the build script ran inside of it, it is removed
+ again. After that the *final* image is built, without any *source*
+ tree or build script copied in. However, this time the contents of
+ `$DESTDIR` are added into the image.
+
+ When the source tree is copied into the *build* image, all files are
+ copied, except for `mkosi.builddir/`, `mkosi.cache/` and
+ `mkosi.output/`. That said, `.gitignore` is respected if the source
+ tree is a `git` checkout. If multiple different images shall be
+ built from the same source tree it's essential to exclude their
+ output files from this copy operation, as otherwise a version of an
+ image built earlier might be included in a later build, which is
+ usually not intended. An alternative to excluding these built images
+ via `.gitignore` entries is making use of the `mkosi.output/`
+ directory (see below), which is an easy way to exclude all build
+ artifacts.
+
+* `mkosi.postinst` may be an executable script. If it exists it is
+ invoked as the penultimate step of preparing an image, from within
+  the image context. It is called once for the *development* image (if
+ this is enabled, see above) with the "build" command line parameter,
+ right before invoking the build script. It is called a second time
+ for the *final* image with the "final" command line parameter, right
+ before the image is considered complete. This script may be used to
+ alter the images without any restrictions, after all software
+ packages and built sources have been installed. Note that this
+ script is executed directly in the image context with the final root
+ directory in place, without any `$SRCDIR`/`$DESTDIR` setup.
+
+* `mkosi.finalize` may be an executable script. If it exists it is
+ invoked as last step of preparing an image, from the host system.
+  It is called once for the *development* image (if this is enabled,
+ see above) with the "build" command line parameter, as the last step
+ before invoking the build script, after the `mkosi.postinst` script
+ is invoked. It is called the second time with the "final" command
+ line parameter as the last step before the image is considered
+ complete. The environment variable `$BUILDROOT` points to the root
+ directory of the installation image. Additional verbs may be added
+ in the future, the script should be prepared for that. This script
+ may be used to alter the images without any restrictions, after all
+ software packages and built sources have been installed. This script
+ is more flexible than `mkosi.postinst` in two regards: it has access
+ to the host file system so it's easier to copy in additional files
+ or to modify the image based on external configuration, and the
+  script is run on the host, so it can be used without emulation even
+  if the image has a foreign architecture.
+
+* `mkosi.mksquashfs-tool` may be an executable script. If it exists, it
+  is called instead of `mksquashfs`.
+
+* `mkosi.nspawn` may be an nspawn settings file. If this exists it
+ will be copied into the same place as the output image file. This is
+ useful since nspawn looks for settings files next to image files it
+ boots, for additional container runtime settings.
+
+* `mkosi.cache/` may be a directory. If so, it is automatically used as
+  package download cache, in order to speed up repeated runs of the tool.
+
+* `mkosi.builddir/` may be a directory. If so, it is automatically
+ used as out-of-tree build directory, if the build commands in the
+ `mkosi.build` script support it. Specifically, this directory will
+ be mounted into the build container, and the `$BUILDDIR` environment
+ variable will be set to it when the build script is invoked. The
+ build script may then use this directory as build directory, for
+ automake-style or ninja-style out-of-tree builds. This speeds up
+ builds considerably, in particular when `mkosi` is used in
+ incremental mode (`-i`): not only the disk images but also the build
+ tree is reused between subsequent invocations. Note that if this
+ directory does not exist the `$BUILDDIR` environment variable is not
+  set, and it is up to the build script to decide whether to do an in-tree
+ or an out-of-tree build, and which build directory to use.
+
+* `mkosi.rootpw` may be a file containing the password for the root
+ user of the image to set. The password may optionally be followed by
+ a newline character which is implicitly removed. The file must have
+ an access mode of 0600 or less. If this file does not exist the
+ distribution's default root password is set (which usually means
+ access to the root user is blocked).
+
+* `mkosi.passphrase` may be a passphrase file to use when LUKS
+ encryption is selected. It should contain the passphrase literally,
+ and not end in a newline character (i.e. in the same format as
+ cryptsetup and /etc/crypttab expect the passphrase files). The file
+ must have an access mode of 0600 or less. If this file does not
+ exist and encryption is requested the user is queried instead.
+
+* `mkosi.secure-boot.crt` and `mkosi.secure-boot.key` may contain an
+ X.509 certificate and PEM private key to use when UEFI SecureBoot
+ support is enabled. All EFI binaries included in the image's ESP are
+ signed with this key, as a late step in the build process.
+
+* `mkosi.output/` may be a directory. If it exists, and the image
+ output path is not configured (i.e. no `--output=` setting
+ specified), or configured to a filename (i.e. a path containing no
+ `/` character) all build artifacts (that is: the image itself, the
+ root hash file in case Verity is used, the checksum and its
+ signature if that's enabled, and the nspawn settings file if there
+ is any) are placed in this directory. Note that this directory is
+ not used if the image output path contains at least one slash, and
+ has no effect in that case. This setting is particularly useful if
+ multiple different images shall be built from the same working
+ directory, as otherwise the build result of a preceding run might be
+ copied into a build image as part of the source tree (see above).
+
+All these files are optional.
+
+Note that the location of all these files may also be configured
+during invocation via command line switches, and as settings in
+`mkosi.default`, in case the default settings are not acceptable for a
+project.
+
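+For instance, some of these optional files might be prepared in a
+project directory as follows (a sketch; the password is a placeholder):
+
+```bash
+# mkdir mkosi.output                 # collect all build artifacts here
+# printf 'secret' > mkosi.rootpw     # root password for the image
+# chmod 0600 mkosi.rootpw            # required access mode, 0600 or less
+```
+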
+# BUILD PHASES
+
+If no build script `mkosi.build` (see above) is used the build
+consists of a single phase only: the final image is generated as the
+combination of `mkosi.skeleton/` (see above), the unpacked
+distribution packages and `mkosi.extra/`.
+
+If a build script `mkosi.build` is used the build consists of two
+phases: in the first `development` phase an image that includes the
+necessary build tools (i.e. the combination of `Packages=` and
+`BuildPackages=` is installed) is generated as the combination of
+`mkosi.skeleton/` and unpacked distribution packages. Into this image
+the source tree is copied and `mkosi.build` executed. The artifacts
+the `mkosi.build` script generates are saved. Then, the second `final` phase
+starts: an image that excludes the build tools (i.e. only `Packages=`
+is installed, `BuildPackages=` is not) is generated. This time the
+build artifacts saved from the first phase are copied in, and
+`mkosi.extra` copied on top, thus generating the final image.
+
+The two-phased approach ensures that the source tree is built in a
+clean and comprehensive environment, while at the same time the final
+image remains minimal and contains only those packages necessary at
+runtime, avoiding those needed only at build-time.
+
+Note that only the package cache `mkosi.cache/` (see below) is shared
+between the two phases. The distribution package manager is executed
+exactly once in each phase, always starting from a directory tree that
+is populated with `mkosi.skeleton` but nothing else.
+
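+For instance, a configuration along these lines makes compilers
+available to `mkosi.build` during the first phase while keeping them
+out of the final image (the package names are illustrative):
+
+```bash
+# cat mkosi.default
+[Packages]
+Packages=httpd
+BuildPackages=make gcc
+```
+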
+# CACHING
+
+`mkosi` supports three different caches for speeding up repetitive
+re-building of images. Specifically:
+
+1. The package cache of the distribution package manager may be cached
+ between builds. This is configured with the `--cache=` option or
+ the `mkosi.cache/` directory. This form of caching relies on the
+ distribution's package manager, and caches distribution packages
+ (RPM, DEB, …) after they are downloaded, but before they are
+ unpacked.
+
+2. If an `mkosi.build` script is used, by enabling incremental build
+ mode with `--incremental` (see above) a cached copy of the
+ development and final images can be made immediately before the
+ build sources are copied in (for the development image) or the
+ artifacts generated by `mkosi.build` are copied in (in case of the
+ final image). This form of caching allows bypassing the
+ time-consuming package unpacking step of the distribution package
+ managers, but is only effective if the list of packages to use
+   remains stable while the build sources and scripts change
+ regularly. Note that this cache requires manual flushing: whenever
+ the package list is modified the cached images need to be
+ explicitly removed before the next re-build, using the `-f` switch.
+
+3. Finally, between multiple builds the build artifact directory may
+ be shared, using the `mkosi.builddir/` directory. This directory
+ allows build systems such as Meson to reuse already compiled
+   sources from a previous build, thus speeding up the build process
+ of the `mkosi.build` build script.
+
+The package cache (i.e. the first item above) is unconditionally
+useful. The latter two caches only apply to uses of `mkosi` with a
+source tree and build script. When all three are enabled together
+turn-around times for complete image builds are minimal, as only
+changed source files need to be recompiled: rebuilding an OS image will
+be almost as quick as building the source tree alone.
+
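+All three caches may be enabled together simply by creating the
+respective directories and building in incremental mode (a sketch):
+
+```bash
+# mkdir mkosi.cache mkosi.builddir   # picked up automatically
+# mkosi -i                           # build with incremental mode
+```
+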
+# ENVIRONMENT VARIABLES
+
+The build script `mkosi.build` receives the following environment
+variables:
+
+* `$SRCDIR` contains the path to the sources to build.
+
+* `$DESTDIR` is a directory into which any artifacts generated by the
+ build script shall be placed.
+
+* `$BUILDDIR` is only defined if `mkosi.builddir/` exists and points to the
+ build directory to use. This is useful for all build systems that
+ support out-of-tree builds to reuse already built artifacts from
+ previous runs.
+
+* `$WITH_DOCS` is either `0` or `1` depending on whether a build
+ without or with installed documentation was requested (see
+ `--with-docs` above). The build script should suppress installation
+ of any package documentation to `$DESTDIR` in case `$WITH_DOCS` is
+ set to `0`.
+
+* `$WITH_TESTS` is either `0` or `1` depending on whether a build
+ without or with running the test suite was requested (see
+ `--without-tests` above). The build script should avoid running any
+ unit or integration tests in case `$WITH_TESTS` is `0`.
+
+* `$WITH_NETWORK` is either `0` or `1` depending on whether a build
+ without or with networking is being executed (see `--with-network`
+ above). The build script should avoid any network communication in
+ case `$WITH_NETWORK` is `0`.
+
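+A `mkosi.build` script might consume these variables along the
+following lines (a hedged sketch for a make-based project; the
+configure options and paths are placeholders):
+
+```bash
+#!/bin/sh
+set -e
+cd "$SRCDIR"
+./configure --prefix=/usr
+make -j "$(nproc)"
+# Run the test suite only if not invoked with --without-tests
+if [ "$WITH_TESTS" = "1" ]; then
+    make check
+fi
+# Install the build artifacts destined for the final image
+make install DESTDIR="$DESTDIR"
+# Drop documentation unless --with-docs was given
+if [ "$WITH_DOCS" = "0" ]; then
+    rm -rf "$DESTDIR/usr/share/doc"
+fi
+```
+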
+# EXAMPLES
+
+Create and run a raw *GPT* image with *ext4*, as `image.raw`:
+
+```bash
+# mkosi
+# systemd-nspawn -b -i image.raw
+```
+
+Create and run a bootable btrfs *GPT* image, as `foobar.raw`:
+
+```bash
+# mkosi -t gpt_btrfs --bootable -o foobar.raw
+# systemd-nspawn -b -i foobar.raw
+# qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=foobar.raw
+```
+
+Create and run a *Fedora* image into a plain directory:
+
+```bash
+# mkosi -d fedora -t directory -o quux
+# systemd-nspawn -b -D quux
+```
+
+Create a compressed image `image.raw.xz` with a checksum file, and
+install the *SSH* client into it:
+
+```bash
+# mkosi -d fedora -t gpt_squashfs --checksum --xz --package=openssh-clients
+```
+
+Inside the source directory of an `automake`-based project,
+configure *mkosi* so that simply invoking `mkosi` without any
+parameters builds an *OS* image containing a built version of
+the project in its current state:
+
+```bash
+# cat > mkosi.default <<EOF
+[Distribution]
+Distribution=fedora
+Release=24
+
+[Output]
+Format=gpt_btrfs
+Bootable=yes
+
+[Packages]
+Packages=openssh-clients httpd
+BuildPackages=make gcc libcurl-devel
+EOF
+# cat > mkosi.build <<'EOF'
+#!/bin/sh
+cd $SRCDIR
+./autogen.sh
+./configure --prefix=/usr
+make -j `nproc`
+make install
+EOF
+# chmod +x mkosi.build
+# mkosi
+# systemd-nspawn -bi image.raw
+```
+
+To create a *Fedora* image with the hostname `image` set:
+```bash
+# mkosi -d fedora --hostname=image
+```
+
+Alternatively, the hostname may be set in the configuration file:
+```bash
+# cat mkosi.default
+...
+[Output]
+Hostname=image
+...
+```
+
+# REQUIREMENTS
+
+mkosi is packaged for various distributions: Debian, Ubuntu, Arch (in AUR), Fedora.
+It is usually easiest to use the distribution package.
+
+The current version requires systemd 233 (or more precisely, the systemd-nspawn shipped with it).
+
+When not using distribution packages make sure to install the
+necessary dependencies. For example, on *Fedora* you need:
+
+```bash
+dnf install arch-install-scripts btrfs-progs debootstrap dosfstools edk2-ovmf e2fsprogs squashfs-tools gnupg python3 tar veritysetup xfsprogs xz zypper sbsigntools
+```
+
+On Debian/Ubuntu it might be necessary to install the `ubuntu-keyring`,
+`ubuntu-archive-keyring` and/or `debian-archive-keyring` packages explicitly,
+in addition to `debootstrap`, depending on what kind of distribution images
+you want to build. `debootstrap` on Debian only pulls in the Debian keyring
+on its own, and the version on Ubuntu only pulls in the Ubuntu one.
+
+Note that the minimum required Python version is 3.6.
+
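+When working from a git checkout instead, something like the following
+should suffice once the dependencies above are installed (an
+illustrative sequence):
+
+```bash
+git clone https://github.com/systemd/mkosi/
+cd mkosi
+sudo python3 ./mkosi --help
+```
+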
+# REFERENCES
+* [Primary mkosi git repository on GitHub](https://github.com/systemd/mkosi/)
+* [mkosi — A Tool for Generating OS Images](http://0pointer.net/blog/mkosi-a-tool-for-generating-os-images.html) introductory blog post by Lennart Poettering
+* [The mkosi OS generation tool](https://lwn.net/Articles/726655/) story on LWN
+
+# SEE ALSO
+`systemd-nspawn(1)`, `dnf(8)`, `debootstrap(8)`
diff --git a/mkosi.py b/mkosi.py
new file mode 120000
index 0000000..b5f44fa
--- /dev/null
+++ b/mkosi.py
@@ -0,0 +1 @@
+mkosi
\ No newline at end of file
diff --git a/setup.cfg b/setup.cfg
new file mode 100644
index 0000000..add850c
--- /dev/null
+++ b/setup.cfg
@@ -0,0 +1,5 @@
+[flake8]
+max-line-length = 119
+[isort]
+multi_line_output = 3
+include_trailing_comma = True
diff --git a/setup.py b/setup.py
index 482e01a..27e95f8 100755
--- a/setup.py
+++ b/setup.py
@@ -3,11 +3,12 @@
import sys
-if sys.version_info < (3, 5):
- sys.exit("Sorry, we need at least Python 3.5.")
-
from setuptools import setup
+if sys.version_info < (3, 6):
+ sys.exit("Sorry, we need at least Python 3.6.")
+
+
setup(
name="mkosi",
version="4",