-rw-r--r--  README.md    65
-rwxr-xr-x  mkosi       256
2 files changed, 247 insertions(+), 74 deletions(-)
diff --git a/README.md b/README.md
index c959f49..242cb58 100644
--- a/README.md
+++ b/README.md
@@ -112,16 +112,18 @@ following *OS*es.
* *openSUSE*
+* *Mageia*
+
In theory, any distribution may be used on the host for
building images containing any other distribution, as long as
the necessary tools are available. Specifically, any distro
that packages `debootstrap` may be used to build *Debian* or
*Ubuntu* images. Any distro that packages `dnf` may be used to
-build *Fedora* images. Any distro that packages `pacstrap` may
-be used to build *Arch Linux* images. Any distro that packages
-`zypper` may be used to build *openSUSE* images.
+build *Fedora* or *Mageia* images. Any distro that packages
+`pacstrap` may be used to build *Arch Linux* images. Any distro
+that packages `zypper` may be used to build *openSUSE* images.
-Currently, *Fedora* packages the first three tools.
+Currently, *Fedora* packages all four tools as of Fedora 26.
# Files
@@ -133,18 +135,18 @@ they exist in the local directory:
* `mkosi.default` may be used to configure mkosi's image
building process. For example, you may configure the
- distribution to use (`fedora`, `ubuntu`, `debian`, `archlinux`) for
- the image, or additional distribution packages to
- install. Note that all options encoded in this configuration
- file may also be set on the command line, and this file is
- hence little more than a way to make sure simply typing
- `mkosi` without further parameters in your *source* tree is
+ distribution to use (`fedora`, `ubuntu`, `debian`, `archlinux`,
+ `opensuse`, `mageia`) for the image, or additional
+ distribution packages to install. Note that all options encoded
+ in this configuration file may also be set on the command line,
+ and this file is hence little more than a way to make sure simply
+ typing `mkosi` without further parameters in your *source* tree is
enough to get the right image of your choice set up.
Additionally, if a `mkosi.default.d` directory exists, each file in it
is loaded in the same manner, adding to or overriding the values
specified in `mkosi.default`.
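For reference, a minimal `mkosi.default` might look as follows (the
values are illustrative; any of these options may equally be given on
the command line):
```ini
[Distribution]
Distribution=fedora
Release=25

[Output]
Format=raw_gpt
Bootable=yes

[Packages]
Packages=openssh-clients
```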
-* `mkosi.extra` may be a directory. If this exists all files
+* `mkosi.extra/` may be a directory. If this exists all files
contained in it are copied over the directory tree of the
image after the *OS* was installed. This may be used to add in
additional files to an image, on top of what the
@@ -187,9 +189,24 @@ they exist in the local directory:
next to image files it boots, for additional container
runtime settings.
-* `mkosi.cache` may be a directory. If so, it is automatically used as
+* `mkosi.cache/` may be a directory. If so, it is automatically used as
package download cache, in order to speed repeated runs of the tool.
+* `mkosi.builddir/` may be a directory. If so, it is automatically
+ used as out-of-tree build directory, if the build commands in the
+ `mkosi.build` script support it. Specifically, this directory will
+  be mounted into the build container, and the `$BUILDDIR`
+ environment variable will be set to it when the build script is
+ invoked. The build script may then use this directory as build
+ directory, for automake-style or ninja-style out-of-tree
+ builds. This speeds up builds considerably, in particular when
+ `mkosi` is used in incremental mode (`-i`): not only the disk images
+ but also the build tree is reused between subsequent
+ invocations. Note that if this directory does not exist the
+ `$BUILDDIR` environment variable is not set, and it is up to build
+  script to decide whether to do an in-tree or an out-of-tree build,
+ and which build directory to use.
+
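A `mkosi.build` script that supports both modes can simply guard on the
variable; a minimal sketch (the directory names are illustrative):
```bash
#!/bin/sh
# Use the persistent out-of-tree build directory if mkosi provided one,
# otherwise fall back to building in-tree.
if [ -n "${BUILDDIR:-}" ]; then
    builddir="$BUILDDIR"
else
    builddir="$PWD/build"
fi
mkdir -p "$builddir"
echo "building in $builddir"
```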
* `mkosi.passphrase` may be a passphrase file to use when LUKS
encryption is selected. It should contain the passphrase literally,
and not end in a newline character (i.e. in the same format as
@@ -233,11 +250,11 @@ Create and run a *Fedora* image into a plain directory:
# systemd-nspawn -b -D quux
```
-Create a compressed tar ball `image.raw.xz` and add a checksum
-file, and install *SSH* into it:
+Create a compressed image `image.raw.xz` and add a checksum file, and
+install *SSH* into it:
```bash
-# mkosi -d fedora -t tar --checksum --compress --package=openssh-clients
+# mkosi -d fedora -t raw_squashfs --checksum --xz --package=openssh-clients
```
Inside the source directory of an `automake`-based project,
@@ -272,16 +289,32 @@ EOF
# systemd-nspawn -bi image.raw
```
+To create a *Fedora* image with a custom hostname:
+```bash
+# mkosi -d fedora --hostname image
+```
+
+Alternatively, the hostname may be set in the configuration file:
+```bash
+# cat mkosi.default
+...
+[Output]
+Hostname=image
+...
+```
+
# Requirements
mkosi is packaged for various distributions: Debian, Ubuntu, Arch (in AUR), Fedora.
It is usually easiest to use the distribution package.
+The current version requires systemd 233 (more specifically, its systemd-nspawn).
+
When not using distribution packages make sure to install the
necessary dependencies. For example, on *Fedora* you need:
```bash
-dnf install arch-install-scripts btrfs-progs debootstrap dosfstools edk2-ovmf squashfs-tools gnupg python3 tar veritysetup xz
+dnf install arch-install-scripts btrfs-progs debootstrap dosfstools edk2-ovmf squashfs-tools gnupg python3 tar veritysetup xz zypper
```
Note that the minimum required Python version is 3.5.
diff --git a/mkosi b/mkosi
index aa812e0..cc09c8f 100755
--- a/mkosi
+++ b/mkosi
@@ -1,4 +1,5 @@
#!/usr/bin/python3
+# PYTHON_ARGCOMPLETE_OK
import argparse
import configparser
@@ -18,6 +19,11 @@ import time
import urllib.request
import uuid
+try:
+    import argcomplete
+except ImportError:
+    pass
+
from enum import Enum
__version__ = '3'
@@ -50,6 +56,7 @@ class Distribution(Enum):
ubuntu = 3
arch = 4
opensuse = 5
+ mageia = 6
GPT_ROOT_X86 = uuid.UUID("44479540f29741b29af7d131d5f0458a")
GPT_ROOT_X86_64 = uuid.UUID("4f68bce3e8cd4db196e7fbcaf984b709")
@@ -552,11 +559,25 @@ def mount_image(args, workspace, loopdev, root_dev, home_dev, srv_dev, root_read
finally:
with complete_step('Unmounting image'):
- for d in ("home", "srv", "efi", "var/cache/dnf", "var/cache/apt/archives", "var/cache/pacman/pkg", "var/cache/zypp/packages", "run", "tmp"):
+ for d in ("home", "srv", "efi", "run", "tmp"):
umount(os.path.join(root, d))
umount(root)
+@complete_step("Assigning hostname")
+def assign_hostname(args, workspace):
+    root = os.path.join(workspace, "root")
+    hostname_path = os.path.join(root, "etc/hostname")
+
+    if os.path.isfile(hostname_path):
+        os.remove(hostname_path)
+
+    if args.hostname:
+        if os.path.islink(hostname_path):
+            os.remove(hostname_path)
+        with open(hostname_path, "w") as f:
+            f.write("{}\n".format(args.hostname))
+
@contextlib.contextmanager
def mount_api_vfs(args, workspace):
paths = ('/proc', '/dev', '/sys')
@@ -581,7 +602,7 @@ def mount_cache(args, workspace):
# We can't do this in mount_image() yet, as /var itself might have to be created as a subvolume first
with complete_step('Mounting Package Cache'):
- if args.distribution == Distribution.fedora:
+ if args.distribution in (Distribution.fedora, Distribution.mageia):
mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/dnf"))
elif args.distribution in (Distribution.debian, Distribution.ubuntu):
mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/apt/archives"))
@@ -666,6 +687,13 @@ def patch_file(filepath, line_rewriter):
os.remove(filepath)
shutil.move(temp_new_filepath, filepath)
+def fix_hosts_line_in_nsswitch(line):
+    if line.startswith("hosts:"):
+        sources = line.split(" ")
+        if 'resolve' not in sources:
+            return " ".join(["resolve" if w == "dns" else w for w in sources])
+    return line
+
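The rewrite above can be exercised standalone; a quick sketch of its
behavior:
```python
# Copy of the hosts-line rewriter added above: "dns" is replaced by
# "resolve" only when "resolve" is not already listed as a source.
def fix_hosts_line_in_nsswitch(line):
    if line.startswith("hosts:"):
        sources = line.split(" ")
        if 'resolve' not in sources:
            return " ".join(["resolve" if w == "dns" else w for w in sources])
    return line

print(fix_hosts_line_in_nsswitch("hosts: files dns myhostname"))
# → hosts: files resolve myhostname
print(fix_hosts_line_in_nsswitch("hosts: files resolve dns"))
# → hosts: files resolve dns (unchanged)
```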
def enable_networkd(workspace):
subprocess.run(["systemctl",
"--root", os.path.join(workspace, "root"),
@@ -675,8 +703,7 @@ def enable_networkd(workspace):
os.remove(os.path.join(workspace, "root", "etc/resolv.conf"))
os.symlink("../usr/lib/systemd/resolv.conf", os.path.join(workspace, "root", "etc/resolv.conf"))
- patch_file(os.path.join(workspace, "root", "etc/nsswitch.conf"),
- lambda line: " ".join(["resolve" if w == "dns" else w for w in line.split(" ")]) if line.startswith("hosts:") else line)
+ patch_file(os.path.join(workspace, "root", "etc/nsswitch.conf"), fix_hosts_line_in_nsswitch)
with open(os.path.join(workspace, "root", "etc/systemd/network/all-ethernet.network"), "w") as f:
f.write("""\
@@ -693,6 +720,7 @@ def run_workspace_command(args, workspace, *cmd, network=False, env={}):
'--quiet',
"--directory=" + os.path.join(workspace, "root"),
"--uuid=" + args.machine_id,
+ "--machine=mkosi-" + uuid.uuid4().hex,
"--as-pid2",
"--register=no",
"--bind=" + var_tmp(workspace) + ":/var/tmp" ]
@@ -734,6 +762,54 @@ def disable_kernel_install(args, workspace):
for f in ("50-dracut.install", "51-dracut-rescue.install", "90-loaderentry.install"):
os.symlink("/dev/null", os.path.join(workspace, "root", "etc/kernel/install.d", f))
+def invoke_dnf(args, workspace, repositories, base_packages, boot_packages, run_build_script=False):
+    repos = ["--enablerepo=" + repo for repo in repositories]
+
+    root = os.path.join(workspace, "root")
+    cmdline = ["dnf",
+               "-y",
+               "--config=" + os.path.join(workspace, "dnf.conf"),
+               "--best",
+               "--allowerasing",
+               "--releasever=" + args.release,
+               "--installroot=" + root,
+               "--disablerepo=*",
+               *repos,
+               "--setopt=keepcache=1",
+               "--setopt=install_weak_deps=0"]
+
+    # Turn off docs, but not during the development build, as dnf currently has problems with that
+    if not args.with_docs and not run_build_script:
+        cmdline.append("--setopt=tsflags=nodocs")
+
+    cmdline.extend([
+        "install",
+        *base_packages
+    ])
+
+    if args.packages is not None:
+        cmdline.extend(args.packages)
+
+    if run_build_script and args.build_packages is not None:
+        cmdline.extend(args.build_packages)
+
+    if args.bootable:
+        cmdline.extend(boot_packages)
+
+    # Temporary hack: dracut only adds crypto support to the initrd if the cryptsetup binary is installed
+    if args.encrypt or args.verity:
+        cmdline.append("cryptsetup")
+
+    if args.output_format == OutputFormat.raw_gpt:
+        cmdline.append("e2fsprogs")
+
+    if args.output_format == OutputFormat.raw_btrfs:
+        cmdline.append("btrfs-progs")
+
+    with mount_api_vfs(args, workspace):
+        subprocess.run(cmdline, check=True)
+
@complete_step('Installing Fedora')
def install_fedora(args, workspace, run_build_script):
@@ -776,55 +852,56 @@ gpgkey={gpg_key}
gpg_key=gpg_key,
release_url=release_url,
updates_url=updates_url))
- if args.repositories:
- repos = ["--enablerepo=" + repo for repo in args.repositories]
- else:
- repos = ["--enablerepo=fedora", "--enablerepo=updates"]
- root = os.path.join(workspace, "root")
- cmdline = ["dnf",
- "-y",
- "--config=" + os.path.join(workspace, "dnf.conf"),
- "--best",
- "--allowerasing",
- "--releasever=" + args.release,
- "--installroot=" + root,
- "--disablerepo=*",
- *repos,
- "--setopt=keepcache=1",
- "--setopt=install_weak_deps=0"]
+    invoke_dnf(args, workspace,
+               args.repositories if args.repositories else ["fedora", "updates"],
+               ["systemd", "fedora-release", "passwd"],
+               ["kernel", "systemd-udev", "binutils"],
+               run_build_script)
- # Turn off docs, but not during the development build, as dnf currently has problems with that
- if not args.with_docs and not run_build_script:
- cmdline.append("--setopt=tsflags=nodocs")
+@complete_step('Installing Mageia')
+def install_mageia(args, workspace, run_build_script):
- cmdline.extend([
- "install",
- "systemd",
- "fedora-release",
- "passwd"])
-
- if args.packages is not None:
- cmdline.extend(args.packages)
+ disable_kernel_install(args, workspace)
- if run_build_script and args.build_packages is not None:
- cmdline.extend(args.build_packages)
+    # Mageia does not (yet) publish its RPM GPG key on the web
+    gpg_key = '/etc/pki/rpm-gpg/RPM-GPG-KEY-Mageia'
+    if os.path.exists(gpg_key):
+        gpg_key = "file://%s" % gpg_key
- if args.bootable:
- cmdline.extend(["kernel", "systemd-udev", "binutils"])
+    if args.mirror:
+        baseurl = "{args.mirror}/distrib/{args.release}/x86_64/media/core/".format(args=args)
+        release_url = "baseurl=%s/release/" % baseurl
+        updates_url = "baseurl=%s/updates/" % baseurl
+    else:
+        baseurl = "https://www.mageia.org/mirrorlist/?release={args.release}&arch=x86_64&section=core".format(args=args)
+        release_url = "mirrorlist=%s&repo=release" % baseurl
+        updates_url = "mirrorlist=%s&repo=updates" % baseurl
- # Temporary hack: dracut only adds crypto support to the initrd, if the cryptsetup binary is installed
- if args.encrypt or args.verity:
- cmdline.append("cryptsetup")
+ with open(os.path.join(workspace, "dnf.conf"), "w") as f:
+ f.write("""\
+[main]
+gpgcheck=1
- if args.output_format == OutputFormat.raw_gpt:
- cmdline.append("e2fsprogs")
+[mageia]
+name=Mageia {args.release} Core Release
+{release_url}
+gpgkey={gpg_key}
- if args.output_format == OutputFormat.raw_btrfs:
- cmdline.append("btrfs-progs")
+[updates]
+name=Mageia {args.release} Core Updates
+{updates_url}
+gpgkey={gpg_key}
+""".format(args=args,
+ gpg_key=gpg_key,
+ release_url=release_url,
+ updates_url=updates_url))
- with mount_api_vfs(args, workspace):
- subprocess.run(cmdline, check=True)
+    invoke_dnf(args, workspace,
+               args.repositories if args.repositories else ["mageia", "updates"],
+               ["basesystem-minimal"],
+               ["kernel-server-latest", "binutils"],
+               run_build_script)
def install_debian_or_ubuntu(args, workspace, run_build_script, mirror):
if args.repositories:
@@ -1081,6 +1158,7 @@ def install_distribution(args, workspace, run_build_script, cached):
install = {
Distribution.fedora : install_fedora,
+ Distribution.mageia : install_mageia,
Distribution.debian : install_debian,
Distribution.ubuntu : install_ubuntu,
Distribution.arch : install_arch,
@@ -1088,6 +1166,7 @@ def install_distribution(args, workspace, run_build_script, cached):
}
install[args.distribution](args, workspace, run_build_script)
+ assign_hostname(args, workspace)
def reset_machine_id(args, workspace, run_build_script, for_cache):
"""Make /etc/machine-id an empty file.
@@ -1226,15 +1305,28 @@ def install_extra_trees(args, workspace, for_cache):
enumerate_and_copy(d, os.path.join(workspace, "root"))
def copy_git_files(src, dest, *, git_files):
- what_files = ['--exclude-standard', '--cached']
+ subprocess.run(['git', 'clone', '--depth=1', '--recursive', '--shallow-submodules', src, dest],
+ check=True)
+
+ what_files = ['--exclude-standard', '--modified']
if git_files == 'others':
- what_files += ['--others']
- c = subprocess.run(['git', 'ls-files', '-z'] + what_files,
+ what_files += ['--others', '--exclude=.mkosi-*']
+
+ # everything that's modified from the tree
+ c = subprocess.run(['git', '-C', src, 'ls-files', '-z'] + what_files,
stdout=subprocess.PIPE,
universal_newlines=False,
check=True)
files = {x.decode("utf-8") for x in c.stdout.rstrip(b'\0').split(b'\0')}
+ # everything that's modified and about to be committed
+ c = subprocess.run(['git', '-C', src, 'diff', '--cached', '--name-only', '-z'],
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ check=True)
+ files |= {x.decode("utf-8") for x in c.stdout.rstrip(b'\0').split(b'\0')}
+ files.discard('')
+
del c
for path in files:
@@ -1263,12 +1355,12 @@ def install_build_src(args, workspace, run_build_script, for_cache):
target = os.path.join(workspace, "root", "root/src")
use_git = args.use_git_files
if use_git is None:
- use_git = os.path.exists('.git')
+ use_git = os.path.exists('.git') or os.path.exists(os.path.join(args.build_sources, '.git'))
if use_git:
copy_git_files(args.build_sources, target, git_files=args.git_files)
else:
- ignore = shutil.ignore_patterns('.git')
+ ignore = shutil.ignore_patterns('.git', '.mkosi-*')
shutil.copytree(args.build_sources, target, symlinks=True, ignore=ignore)
def install_build_dest(args, workspace, run_build_script, for_cache):
@@ -1490,7 +1582,7 @@ def install_unified_kernel(args, workspace, run_build_script, for_cache, root_ha
if for_cache:
return
- if args.distribution != Distribution.fedora:
+ if args.distribution not in (Distribution.fedora, Distribution.mageia):
return
with complete_step("Generating combined kernel + initrd boot file"):
@@ -1783,11 +1875,12 @@ def parse_args():
group = parser.add_argument_group("Packages")
group.add_argument('-p', "--package", action=PackageAction, dest='packages', help='Add an additional package to the OS image', metavar='PACKAGE')
- group.add_argument("--with-docs", action='store_true', help='Install documentation (only fedora)')
+ group.add_argument("--with-docs", action='store_true', help='Install documentation (only Fedora and Mageia)')
group.add_argument("--cache", dest='cache_path', help='Package cache path', metavar='PATH')
group.add_argument("--extra-tree", action='append', dest='extra_trees', help='Copy an extra tree on top of image', metavar='PATH')
group.add_argument("--build-script", help='Build script to run inside image', metavar='PATH')
group.add_argument("--build-sources", help='Path for sources to build', metavar='PATH')
+ group.add_argument("--build-dir", help='Path to use as persistent build directory', metavar='PATH')
group.add_argument("--build-package", action=PackageAction, dest='build_packages', help='Additional packages needed for build script', metavar='PACKAGE')
group.add_argument("--postinst-script", help='Post installation script to run inside image', metavar='PATH')
group.add_argument('--use-git-files', type=parse_boolean,
@@ -1814,6 +1907,12 @@ def parse_args():
group.add_argument('-C', "--directory", help='Change to specified directory before doing anything', metavar='PATH')
group.add_argument("--default", dest='default_path', help='Read configuration data from file', metavar='PATH')
group.add_argument("--kernel-commandline", help='Set the kernel command line (only bootable images)')
+ group.add_argument("--hostname", help="Set hostname")
+
+ try:
+ argcomplete.autocomplete(parser)
+ except NameError:
+ pass
args = parser.parse_args()
@@ -1885,6 +1984,11 @@ def unlink_try_hard(path):
except:
pass
+def empty_directory(path):
+    for f in os.listdir(path):
+        unlink_try_hard(os.path.join(path, f))
+
def unlink_output(args):
if not args.force and args.verb != "clean":
return
@@ -1911,12 +2015,18 @@ def unlink_output(args):
remove_cache = args.force_count > 1
if remove_cache:
- with complete_step('Removing cache files'):
- if args.cache_pre_dev is not None:
- unlink_try_hard(args.cache_pre_dev)
- if args.cache_pre_inst is not None:
- unlink_try_hard(args.cache_pre_inst)
+ if args.cache_pre_dev is not None or args.cache_pre_inst is not None:
+ with complete_step('Removing incremental cache files'):
+ if args.cache_pre_dev is not None:
+ unlink_try_hard(args.cache_pre_dev)
+
+ if args.cache_pre_inst is not None:
+ unlink_try_hard(args.cache_pre_inst)
+
+ if args.build_dir is not None:
+ with complete_step('Clearing out build directory'):
+ empty_directory(args.build_dir)
def parse_boolean(s):
"Parse 1/true/yes as true and 0/false/no as false"
@@ -1942,6 +2052,9 @@ def process_setting(args, section, key, value):
args.repositories = list_value
else:
args.repositories.extend(list_value)
+ elif key == "Mirror":
+ if args.mirror is None:
+ args.mirror = value
elif key is None:
return True
else:
@@ -1988,6 +2101,9 @@ def process_setting(args, section, key, value):
elif key == "XZ":
if not args.xz:
args.xz = parse_boolean(value)
+ elif key == "Hostname":
+ if not args.hostname:
+ args.hostname = value
elif key is None:
return True
else:
@@ -2017,6 +2133,9 @@ def process_setting(args, section, key, value):
elif key == "BuildSources":
if args.build_sources is None:
args.build_sources = value
+ elif key == "BuildDirectory":
+ if args.build_dir is None:
+ args.build_dir = value
elif key == "BuildPackages":
list_value = value if type(value) == list else value.split()
if args.build_packages is None:
@@ -2162,6 +2281,13 @@ def find_build_sources(args):
args.build_sources = os.getcwd()
+def find_build_dir(args):
+ if args.build_dir is not None:
+ return
+
+ if os.path.exists("mkosi.builddir/"):
+ args.build_dir = "mkosi.builddir"
+
def find_postinst_script(args):
if args.postinst_script is not None:
return
@@ -2235,6 +2361,7 @@ def load_args():
find_extra(args)
find_build_script(args)
find_build_sources(args)
+ find_build_dir(args)
find_postinst_script(args)
find_passphrase(args)
find_secure_boot(args)
@@ -2264,6 +2391,8 @@ def load_args():
if args.release is None:
if args.distribution == Distribution.fedora:
args.release = "25"
+ if args.distribution == Distribution.mageia:
+ args.release = "6"
elif args.distribution == Distribution.debian:
args.release = "unstable"
elif args.distribution == Distribution.ubuntu:
@@ -2277,7 +2406,7 @@ def load_args():
if args.distribution == Distribution.fedora:
args.mirror = None
elif args.distribution == Distribution.debian:
- args.mirror = "http://httpredir.debian.org/debian"
+ args.mirror = "http://deb.debian.org/debian"
elif args.distribution == Distribution.ubuntu:
args.mirror = "http://archive.ubuntu.com/ubuntu"
if platform.machine() == "aarch64":
@@ -2357,6 +2486,9 @@ def load_args():
if args.build_sources is not None:
args.build_sources = os.path.abspath(args.build_sources)
+ if args.build_dir is not None:
+ args.build_dir = os.path.abspath(args.build_dir)
+
if args.postinst_script is not None:
args.postinst_script = os.path.abspath(args.postinst_script)
@@ -2447,6 +2579,8 @@ def print_summary(args):
if args.mirror is not None:
sys.stderr.write(" Mirror: " + args.mirror + "\n")
sys.stderr.write("\nOUTPUT:\n")
+ if args.hostname:
+ sys.stderr.write(" Hostname: " + args.hostname + "\n")
sys.stderr.write(" Output Format: " + args.output_format.name + "\n")
sys.stderr.write(" Output: " + args.output + "\n")
sys.stderr.write(" Output Checksum: " + none_to_na(args.output_checksum if args.checksum else None) + "\n")
@@ -2479,13 +2613,14 @@ def print_summary(args):
sys.stderr.write("\nPACKAGES:\n")
sys.stderr.write(" Packages: " + line_join_list(args.packages) + "\n")
- if args.distribution == Distribution.fedora:
+ if args.distribution in (Distribution.fedora, Distribution.mageia):
sys.stderr.write(" With Documentation: " + yes_no(args.with_docs) + "\n")
sys.stderr.write(" Package Cache: " + none_to_none(args.cache_path) + "\n")
sys.stderr.write(" Extra Trees: " + line_join_list(args.extra_trees) + "\n")
sys.stderr.write(" Build Script: " + none_to_none(args.build_script) + "\n")
sys.stderr.write(" Build Sources: " + none_to_none(args.build_sources) + "\n")
+ sys.stderr.write(" Build Directory: " + none_to_none(args.build_dir) + "\n")
sys.stderr.write(" Build Packages: " + line_join_list(args.build_packages) + "\n")
sys.stderr.write(" Post Inst. Script: " + none_to_none(args.postinst_script) + "\n")
sys.stderr.write(" Scripts with network: " + yes_no(args.with_network) + "\n")
@@ -2618,6 +2753,7 @@ def run_build_script(args, workspace, raw):
'--quiet',
target,
"--uuid=" + args.machine_id,
+ "--machine=mkosi-" + uuid.uuid4().hex,
"--as-pid2",
"--register=no",
"--bind", dest + ":/root/dest",
@@ -2634,6 +2770,10 @@ def run_build_script(args, workspace, raw):
else:
cmdline.append("--chdir=/root")
+ if args.build_dir is not None:
+ cmdline.append("--setenv=BUILDDIR=/root/build")
+ cmdline.append("--bind=" + args.build_dir + ":/root/build")
+
if not args.with_network:
cmdline.append("--private-network")