-rw-r--r--  .gitignore         18
-rw-r--r--  LICENSE           502
-rw-r--r--  MANIFEST.in         1
l---------  README              1
-rw-r--r--  README.md         330
-rwxr-xr-x  do-a-release.sh    15
-rwxr-xr-x  mkosi            2904
-rw-r--r--  mkosi.default      22
-rwxr-xr-x  setup.py           19
9 files changed, 3812 insertions(+), 0 deletions(-)
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..2bc1175
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,18 @@
+*.cache-pre-dev
+*.cache-pre-inst
+/.mkosi-*
+/SHA256SUMS
+/SHA256SUMS.gpg
+/__pycache__
+/build
+/dist
+/image
+/image.raw
+/image.raw.xz
+/image.roothash
+/image.tar.xz
+/mkosi.build
+/mkosi.cache
+/mkosi.egg-info
+/mkosi.extra
+/mkosi.nspawn
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..4362b49
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,502 @@
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 2.1, February 1999
+
+ Copyright (C) 1991, 1999 Free Software Foundation, Inc.
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+[This is the first released version of the Lesser GPL. It also counts
+ as the successor of the GNU Library Public License, version 2, hence
+ the version number 2.1.]
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+Licenses are intended to guarantee your freedom to share and change
+free software--to make sure the software is free for all its users.
+
+ This license, the Lesser General Public License, applies to some
+specially designated software packages--typically libraries--of the
+Free Software Foundation and other authors who decide to use it. You
+can use it too, but we suggest you first think carefully about whether
+this license or the ordinary General Public License is the better
+strategy to use in any particular case, based on the explanations below.
+
+ When we speak of free software, we are referring to freedom of use,
+not price. Our General Public Licenses are designed to make sure that
+you have the freedom to distribute copies of free software (and charge
+for this service if you wish); that you receive source code or can get
+it if you want it; that you can change the software and use pieces of
+it in new free programs; and that you are informed that you can do
+these things.
+
+ To protect your rights, we need to make restrictions that forbid
+distributors to deny you these rights or to ask you to surrender these
+rights. These restrictions translate to certain responsibilities for
+you if you distribute copies of the library or if you modify it.
+
+ For example, if you distribute copies of the library, whether gratis
+or for a fee, you must give the recipients all the rights that we gave
+you. You must make sure that they, too, receive or can get the source
+code. If you link other code with the library, you must provide
+complete object files to the recipients, so that they can relink them
+with the library after making changes to the library and recompiling
+it. And you must show them these terms so they know their rights.
+
+ We protect your rights with a two-step method: (1) we copyright the
+library, and (2) we offer you this license, which gives you legal
+permission to copy, distribute and/or modify the library.
+
+ To protect each distributor, we want to make it very clear that
+there is no warranty for the free library. Also, if the library is
+modified by someone else and passed on, the recipients should know
+that what they have is not the original version, so that the original
+author's reputation will not be affected by problems that might be
+introduced by others.
+
+ Finally, software patents pose a constant threat to the existence of
+any free program. We wish to make sure that a company cannot
+effectively restrict the users of a free program by obtaining a
+restrictive license from a patent holder. Therefore, we insist that
+any patent license obtained for a version of the library must be
+consistent with the full freedom of use specified in this license.
+
+ Most GNU software, including some libraries, is covered by the
+ordinary GNU General Public License. This license, the GNU Lesser
+General Public License, applies to certain designated libraries, and
+is quite different from the ordinary General Public License. We use
+this license for certain libraries in order to permit linking those
+libraries into non-free programs.
+
+ When a program is linked with a library, whether statically or using
+a shared library, the combination of the two is legally speaking a
+combined work, a derivative of the original library. The ordinary
+General Public License therefore permits such linking only if the
+entire combination fits its criteria of freedom. The Lesser General
+Public License permits more lax criteria for linking other code with
+the library.
+
+ We call this license the "Lesser" General Public License because it
+does Less to protect the user's freedom than the ordinary General
+Public License. It also provides other free software developers Less
+of an advantage over competing non-free programs. These disadvantages
+are the reason we use the ordinary General Public License for many
+libraries. However, the Lesser license provides advantages in certain
+special circumstances.
+
+ For example, on rare occasions, there may be a special need to
+encourage the widest possible use of a certain library, so that it becomes
+a de-facto standard. To achieve this, non-free programs must be
+allowed to use the library. A more frequent case is that a free
+library does the same job as widely used non-free libraries. In this
+case, there is little to gain by limiting the free library to free
+software only, so we use the Lesser General Public License.
+
+ In other cases, permission to use a particular library in non-free
+programs enables a greater number of people to use a large body of
+free software. For example, permission to use the GNU C Library in
+non-free programs enables many more people to use the whole GNU
+operating system, as well as its variant, the GNU/Linux operating
+system.
+
+ Although the Lesser General Public License is Less protective of the
+users' freedom, it does ensure that the user of a program that is
+linked with the Library has the freedom and the wherewithal to run
+that program using a modified version of the Library.
+
+ The precise terms and conditions for copying, distribution and
+modification follow. Pay close attention to the difference between a
+"work based on the library" and a "work that uses the library". The
+former contains code derived from the library, whereas the latter must
+be combined with the library in order to run.
+
+ GNU LESSER GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License Agreement applies to any software library or other
+program which contains a notice placed by the copyright holder or
+other authorized party saying it may be distributed under the terms of
+this Lesser General Public License (also called "this License").
+Each licensee is addressed as "you".
+
+ A "library" means a collection of software functions and/or data
+prepared so as to be conveniently linked with application programs
+(which use some of those functions and data) to form executables.
+
+ The "Library", below, refers to any such software library or work
+which has been distributed under these terms. A "work based on the
+Library" means either the Library or any derivative work under
+copyright law: that is to say, a work containing the Library or a
+portion of it, either verbatim or with modifications and/or translated
+straightforwardly into another language. (Hereinafter, translation is
+included without limitation in the term "modification".)
+
+ "Source code" for a work means the preferred form of the work for
+making modifications to it. For a library, complete source code means
+all the source code for all modules it contains, plus any associated
+interface definition files, plus the scripts used to control compilation
+and installation of the library.
+
+ Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running a program using the Library is not restricted, and output from
+such a program is covered only if its contents constitute a work based
+on the Library (independent of the use of the Library in a tool for
+writing it). Whether that is true depends on what the Library does
+and what the program that uses the Library does.
+
+ 1. You may copy and distribute verbatim copies of the Library's
+complete source code as you receive it, in any medium, provided that
+you conspicuously and appropriately publish on each copy an
+appropriate copyright notice and disclaimer of warranty; keep intact
+all the notices that refer to this License and to the absence of any
+warranty; and distribute a copy of this License along with the
+Library.
+
+ You may charge a fee for the physical act of transferring a copy,
+and you may at your option offer warranty protection in exchange for a
+fee.
+
+ 2. You may modify your copy or copies of the Library or any portion
+of it, thus forming a work based on the Library, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) The modified work must itself be a software library.
+
+ b) You must cause the files modified to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ c) You must cause the whole of the work to be licensed at no
+ charge to all third parties under the terms of this License.
+
+ d) If a facility in the modified Library refers to a function or a
+ table of data to be supplied by an application program that uses
+ the facility, other than as an argument passed when the facility
+ is invoked, then you must make a good faith effort to ensure that,
+ in the event an application does not supply such function or
+ table, the facility still operates, and performs whatever part of
+ its purpose remains meaningful.
+
+ (For example, a function in a library to compute square roots has
+ a purpose that is entirely well-defined independent of the
+ application. Therefore, Subsection 2d requires that any
+ application-supplied function or table used by this function must
+ be optional: if the application does not supply it, the square
+ root function must still compute square roots.)
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Library,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Library, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote
+it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Library.
+
+In addition, mere aggregation of another work not based on the Library
+with the Library (or with a work based on the Library) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may opt to apply the terms of the ordinary GNU General Public
+License instead of this License to a given copy of the Library. To do
+this, you must alter all the notices that refer to this License, so
+that they refer to the ordinary GNU General Public License, version 2,
+instead of to this License. (If a newer version than version 2 of the
+ordinary GNU General Public License has appeared, then you can specify
+that version instead if you wish.) Do not make any other change in
+these notices.
+
+ Once this change is made in a given copy, it is irreversible for
+that copy, so the ordinary GNU General Public License applies to all
+subsequent copies and derivative works made from that copy.
+
+ This option is useful when you wish to copy part of the code of
+the Library into a program that is not a library.
+
+ 4. You may copy and distribute the Library (or a portion or
+derivative of it, under Section 2) in object code or executable form
+under the terms of Sections 1 and 2 above provided that you accompany
+it with the complete corresponding machine-readable source code, which
+must be distributed under the terms of Sections 1 and 2 above on a
+medium customarily used for software interchange.
+
+ If distribution of object code is made by offering access to copy
+from a designated place, then offering equivalent access to copy the
+source code from the same place satisfies the requirement to
+distribute the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+ 5. A program that contains no derivative of any portion of the
+Library, but is designed to work with the Library by being compiled or
+linked with it, is called a "work that uses the Library". Such a
+work, in isolation, is not a derivative work of the Library, and
+therefore falls outside the scope of this License.
+
+ However, linking a "work that uses the Library" with the Library
+creates an executable that is a derivative of the Library (because it
+contains portions of the Library), rather than a "work that uses the
+library". The executable is therefore covered by this License.
+Section 6 states terms for distribution of such executables.
+
+ When a "work that uses the Library" uses material from a header file
+that is part of the Library, the object code for the work may be a
+derivative work of the Library even though the source code is not.
+Whether this is true is especially significant if the work can be
+linked without the Library, or if the work is itself a library. The
+threshold for this to be true is not precisely defined by law.
+
+ If such an object file uses only numerical parameters, data
+structure layouts and accessors, and small macros and small inline
+functions (ten lines or less in length), then the use of the object
+file is unrestricted, regardless of whether it is legally a derivative
+work. (Executables containing this object code plus portions of the
+Library will still fall under Section 6.)
+
+ Otherwise, if the work is a derivative of the Library, you may
+distribute the object code for the work under the terms of Section 6.
+Any executables containing that work also fall under Section 6,
+whether or not they are linked directly with the Library itself.
+
+ 6. As an exception to the Sections above, you may also combine or
+link a "work that uses the Library" with the Library to produce a
+work containing portions of the Library, and distribute that work
+under terms of your choice, provided that the terms permit
+modification of the work for the customer's own use and reverse
+engineering for debugging such modifications.
+
+ You must give prominent notice with each copy of the work that the
+Library is used in it and that the Library and its use are covered by
+this License. You must supply a copy of this License. If the work
+during execution displays copyright notices, you must include the
+copyright notice for the Library among them, as well as a reference
+directing the user to the copy of this License. Also, you must do one
+of these things:
+
+ a) Accompany the work with the complete corresponding
+ machine-readable source code for the Library including whatever
+ changes were used in the work (which must be distributed under
+ Sections 1 and 2 above); and, if the work is an executable linked
+ with the Library, with the complete machine-readable "work that
+ uses the Library", as object code and/or source code, so that the
+ user can modify the Library and then relink to produce a modified
+ executable containing the modified Library. (It is understood
+ that the user who changes the contents of definitions files in the
+ Library will not necessarily be able to recompile the application
+ to use the modified definitions.)
+
+ b) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (1) uses at run time a
+ copy of the library already present on the user's computer system,
+ rather than copying library functions into the executable, and (2)
+ will operate properly with a modified version of the library, if
+ the user installs one, as long as the modified version is
+ interface-compatible with the version that the work was made with.
+
+ c) Accompany the work with a written offer, valid for at
+ least three years, to give the same user the materials
+ specified in Subsection 6a, above, for a charge no more
+ than the cost of performing this distribution.
+
+ d) If distribution of the work is made by offering access to copy
+ from a designated place, offer equivalent access to copy the above
+ specified materials from the same place.
+
+ e) Verify that the user has already received a copy of these
+ materials or that you have already sent this user a copy.
+
+ For an executable, the required form of the "work that uses the
+Library" must include any data and utility programs needed for
+reproducing the executable from it. However, as a special exception,
+the materials to be distributed need not include anything that is
+normally distributed (in either source or binary form) with the major
+components (compiler, kernel, and so on) of the operating system on
+which the executable runs, unless that component itself accompanies
+the executable.
+
+ It may happen that this requirement contradicts the license
+restrictions of other proprietary libraries that do not normally
+accompany the operating system. Such a contradiction means you cannot
+use both them and the Library together in an executable that you
+distribute.
+
+ 7. You may place library facilities that are a work based on the
+Library side-by-side in a single library together with other library
+facilities not covered by this License, and distribute such a combined
+library, provided that the separate distribution of the work based on
+the Library and of the other library facilities is otherwise
+permitted, and provided that you do these two things:
+
+ a) Accompany the combined library with a copy of the same work
+ based on the Library, uncombined with any other library
+ facilities. This must be distributed under the terms of the
+ Sections above.
+
+ b) Give prominent notice with the combined library of the fact
+ that part of it is a work based on the Library, and explaining
+ where to find the accompanying uncombined form of the same work.
+
+ 8. You may not copy, modify, sublicense, link with, or distribute
+the Library except as expressly provided under this License. Any
+attempt otherwise to copy, modify, sublicense, link with, or
+distribute the Library is void, and will automatically terminate your
+rights under this License. However, parties who have received copies,
+or rights, from you under this License will not have their licenses
+terminated so long as such parties remain in full compliance.
+
+ 9. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Library or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Library (or any work based on the
+Library), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Library or works based on it.
+
+ 10. Each time you redistribute the Library (or any work based on the
+Library), the recipient automatically receives a license from the
+original licensor to copy, distribute, link with or modify the Library
+subject to these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties with
+this License.
+
+ 11. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Library at all. For example, if a patent
+license would not permit royalty-free redistribution of the Library by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Library.
+
+If any portion of this section is held invalid or unenforceable under any
+particular circumstance, the balance of the section is intended to apply,
+and the section as a whole is intended to apply in other circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+ 12. If the distribution and/or use of the Library is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Library under this License may add
+an explicit geographical distribution limitation excluding those countries,
+so that distribution is permitted only in or among countries not thus
+excluded. In such case, this License incorporates the limitation as if
+written in the body of this License.
+
+ 13. The Free Software Foundation may publish revised and/or new
+versions of the Lesser General Public License from time to time.
+Such new versions will be similar in spirit to the present version,
+but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Library
+specifies a version number of this License which applies to it and
+"any later version", you have the option of following the terms and
+conditions either of that version or of any later version published by
+the Free Software Foundation. If the Library does not specify a
+license version number, you may choose any version ever published by
+the Free Software Foundation.
+
+ 14. If you wish to incorporate parts of the Library into other free
+programs whose distribution conditions are incompatible with these,
+write to the author to ask for permission. For software which is
+copyrighted by the Free Software Foundation, write to the Free
+Software Foundation; we sometimes make exceptions for this. Our
+decision will be guided by the two goals of preserving the free status
+of all derivatives of our free software and of promoting the sharing
+and reuse of software generally.
+
+ NO WARRANTY
+
+ 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
+WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
+EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
+OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
+KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
+LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
+THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
+WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
+AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
+FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
+CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
+LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
+RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
+FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
+SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
+DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Libraries
+
+ If you develop a new library, and you want it to be of the greatest
+possible use to the public, we recommend making it free software that
+everyone can redistribute and change. You can do so by permitting
+redistribution under these terms (or, alternatively, under the terms of the
+ordinary General Public License).
+
+ To apply these terms, attach the following notices to the library. It is
+safest to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least the
+"copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the library's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+Also add information on how to contact you by electronic and paper mail.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the library, if
+necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the
+ library `Frob' (a library for tweaking knobs) written by James Random Hacker.
+
+ <signature of Ty Coon>, 1 April 1990
+ Ty Coon, President of Vice
+
+That's all there is to it!
diff --git a/MANIFEST.in b/MANIFEST.in
new file mode 100644
index 0000000..1aba38f
--- /dev/null
+++ b/MANIFEST.in
@@ -0,0 +1 @@
+include LICENSE
diff --git a/README b/README
new file mode 120000
index 0000000..42061c0
--- /dev/null
+++ b/README
@@ -0,0 +1 @@
+README.md
\ No newline at end of file
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..242cb58
--- /dev/null
+++ b/README.md
@@ -0,0 +1,330 @@
+# mkosi - Create legacy-free OS images
+
+A fancy wrapper around `dnf --installroot`, `debootstrap`,
+`pacstrap` and `zypper` that may generate disk images with a number of
+bells and whistles.
+
+# Supported output formats
+
+The following output formats are supported:
+
+* Raw *GPT* disk image, with ext4 as root (*raw_gpt*)
+
+* Raw *GPT* disk image, with btrfs as root (*raw_btrfs*)
+
+* Raw *GPT* disk image, with squashfs as read-only root (*raw_squashfs*)
+
+* Plain directory, containing the *OS* tree (*directory*)
+
+* btrfs subvolume, with separate subvolumes for `/var`, `/home`,
+ `/srv`, `/var/tmp` (*subvolume*)
+
+* Tarball (*tar*)
+
+When a *GPT* disk image is created, the following additional
+options are available:
+
+* A swap partition may be added in
+
+* The image may be made bootable on *EFI* systems
+
+* Separate partitions for `/srv` and `/home` may be added in
+
+* The root, `/srv` and `/home` partitions may optionally be encrypted
+  with LUKS.
+
+* A dm-verity partition may be added in that adds runtime integrity
+ data for the root partition
+
+# Compatibility
+
+Generated images are *legacy-free*. This means only *GPT* disk
+labels (and no *MBR* disk labels) are supported, and only
+systemd-based images may be generated. Moreover, for bootable
+images only *EFI* systems are supported (not plain *MBR/BIOS*).
+
+All generated *GPT* disk images may be booted in a local
+container directly with:
+
+```bash
+systemd-nspawn -bi image.raw
+```
+
+Additionally, bootable *GPT* disk images (as created with the
+`--bootable` flag) work when booted directly by *EFI* systems, for
+example in *KVM* via:
+
+```bash
+qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=image.raw
+```
+
+*EFI* bootable *GPT* images are larger than plain *GPT* images, as
+they additionally carry an *EFI* system partition containing a
+boot loader, as well as a kernel, kernel modules, udev and
+more.
+
+All directory or btrfs subvolume images may be booted directly
+with:
+
+```bash
+systemd-nspawn -bD image
+```
+
+# Other features
+
+* Optionally, create an *SHA256SUMS* checksum file for the result,
+ possibly even signed via gpg.
+
+* Optionally, place a specific `.nspawn` settings file along
+ with the result.
+
+* Optionally, build a local project's *source* tree in the image
+ and add the result to the generated image (see below).
+
+* Optionally, share *RPM*/*DEB* package cache between multiple runs,
+ in order to optimize build speeds.
+
+* Optionally, the resulting image may be compressed with *XZ*.
+
+* Optionally, btrfs' read-only flag for the root subvolume may be
+ set.
+
+* Optionally, btrfs' compression may be enabled for all
+ created subvolumes.
+
+* By default, images are created without the files marked as
+  documentation in the packages, on distributions where the
+  package manager supports this. Use the `--with-docs` flag to
+  build an image with the documentation included.
+
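+For example, several of these options can be combined in a single
+invocation. A sketch, using only the flags shown in this document:
+
+```bash
+# mkosi --checksum --xz --with-docs -i
+```
+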
+# Supported distributions
+
+Images may be created containing installations of the
+following *OS*es.
+
+* *Fedora*
+
+* *Debian*
+
+* *Ubuntu*
+
+* *Arch Linux* (incomplete)
+
+* *openSUSE*
+
+* *Mageia*
+
+In theory, any distribution may be used on the host for
+building images containing any other distribution, as long as
+the necessary tools are available. Specifically, any distro
+that packages `debootstrap` may be used to build *Debian* or
+*Ubuntu* images. Any distro that packages `dnf` may be used to
+build *Fedora* or *Mageia* images. Any distro that packages
+`pacstrap` may be used to build *Arch Linux* images. Any distro
+that packages `zypper` may be used to build *openSUSE* images.
+
+As of Fedora 26, *Fedora* packages all four of these tools.
+
+# Files
+
+To make it easy to build images for development versions of
+your projects, mkosi can read configuration data from the
+local directory, under the assumption that it is invoked from
+a *source* tree. Specifically, the following files are used if
+they exist in the local directory:
+
+* `mkosi.default` may be used to configure mkosi's image
+ building process. For example, you may configure the
+ distribution to use (`fedora`, `ubuntu`, `debian`, `archlinux`,
+ `opensuse`, `mageia`) for the image, or additional
+ distribution packages to install. Note that all options encoded
+ in this configuration file may also be set on the command line,
+ and this file is hence little more than a way to make sure simply
+ typing `mkosi` without further parameters in your *source* tree is
+ enough to get the right image of your choice set up.
+  Additionally, if a `mkosi.default.d` directory exists, each file in it
+  is loaded in the same manner, adding to or overriding the values
+  specified in `mkosi.default`.
+
+* `mkosi.extra/` may be a directory. If it exists, all files contained
+  in it are copied over the directory tree of the image after the *OS*
+  has been installed. This may be used to add additional files to an
+  image, on top of what the distribution includes in its packages.
+
+* `mkosi.build` may be an executable script. If it exists, the
+  image will be built twice: the first iteration will be the
+  *development* image, the second iteration will be the
+  *final* image. The *development* image is used to build the
+  project in the current working directory (the *source*
+  tree). For that, the whole directory is copied into the
+  image, along with the mkosi.build build script. The script
+  is then invoked inside the image (via `systemd-nspawn`), with
+  `$SRCDIR` pointing to the *source* tree. `$DESTDIR` points to a
+  directory where the script should place any files it generates
+  that should end up in the *final* image. Note that
+  `make`/`automake`-based build systems generally honour `$DESTDIR`,
+  thus making it very natural to build *source* trees from the
+  build script. After the *development* image has been built and the
+  build script has run inside of it, the image is removed again. After
+  that the *final* image is built, without any *source* tree or
+  build script copied in. However, this time the contents of
+  `$DESTDIR` are added into the image.
+
+* `mkosi.postinst` may be an executable script. If it exists, it is
+  invoked as the last step of preparing an image, from within the image
+  context. It is called once for the *development* image (if that is
+  enabled, see above) with the "build" command line parameter, right
+  before invoking the build script, and a second time for the
+  *final* image with the "final" command line parameter, right before
+  the image is considered complete (see the sketch at the end of this
+  section). This script may be used to alter the images without any
+  restrictions, after all software packages and built sources have been
+  installed. Note that this script is executed directly in the image
+  context with the final root directory in place, without any
+  `$SRCDIR`/`$DESTDIR` setup.
+
+* `mkosi.nspawn` may be an nspawn settings file. If this exists
+ it will be copied into the same place as the output image
+ file. This is useful since nspawn looks for settings files
+ next to image files it boots, for additional container
+ runtime settings.
+
+* `mkosi.cache/` may be a directory. If so, it is automatically used as
+  the package download cache, in order to speed up repeated runs of the
+  tool.
+
+* `mkosi.builddir/` may be a directory. If so, it is automatically
+  used as the out-of-tree build directory, if the build commands in the
+  `mkosi.build` script support it. Specifically, this directory will
+  be mounted into the build container, and the `$BUILDDIR`
+  environment variable will be set to it when the build script is
+  invoked. The build script may then use this directory as its build
+  directory, for automake-style or ninja-style out-of-tree
+  builds. This speeds up builds considerably, in particular when
+  `mkosi` is used in incremental mode (`-i`): not only the disk images
+  but also the build tree is reused between subsequent
+  invocations. Note that if this directory does not exist the
+  `$BUILDDIR` environment variable is not set, and it is up to the
+  build script to decide whether to do an in-tree or an out-of-tree
+  build, and which build directory to use.
+
+* `mkosi.passphrase` may be a passphrase file to use when LUKS
+ encryption is selected. It should contain the passphrase literally,
+ and not end in a newline character (i.e. in the same format as
+ cryptsetup and /etc/crypttab expect the passphrase files). The file
+ must have an access mode of 0600 or less. If this file does not
+ exist and encryption is requested the user is queried instead.
+
+* `mkosi.secure-boot.crt` and `mkosi.secure-boot.key` may contain an
+  X.509 certificate and a PEM private key to use when UEFI SecureBoot
+  support is enabled. All EFI binaries included in the image's ESP are
+  signed with this key, as a late step in the build process.
+
+All these files are optional.
+
+Note that the location of all these files may also be
+configured during invocation via command line switches, and as
+settings in `mkosi.default`, in case the default settings are
+not acceptable for a project.
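+
+As an example, a minimal sketch of a `mkosi.postinst` script that
+distinguishes the two invocations described above (the commands in each
+branch are placeholders):
+
+```bash
+#!/bin/sh
+case "$1" in
+    build)
+        # Customize the *development* image, e.g. enable extra debug tools.
+        ;;
+    final)
+        # Customize the *final* image, e.g. remove caches and temporary files.
+        ;;
+esac
+```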
+
+# Examples
+
+Create and run a raw *GPT* image with *ext4*, as `image.raw`:
+
+```bash
+# mkosi
+# systemd-nspawn -b -i image.raw
+```
+
+Create and run a bootable btrfs *GPT* image, as `foobar.raw`:
+
+```bash
+# mkosi -t raw_btrfs --bootable -o foobar.raw
+# systemd-nspawn -b -i foobar.raw
+# qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=foobar.raw
+```
+
+Create and run a *Fedora* image into a plain directory:
+
+```bash
+# mkosi -d fedora -t directory -o quux
+# systemd-nspawn -b -D quux
+```
+
+Create a compressed image `image.raw.xz` and add a checksum file, and
+install *SSH* into it:
+
+```bash
+# mkosi -d fedora -t raw_squashfs --checksum --xz --package=openssh-clients
+```
+
+Inside the source directory of an `automake`-based project,
+configure *mkosi* so that simply invoking `mkosi` without any
+parameters builds an *OS* image containing a built version of
+the project in its current state:
+
+```bash
+# cat > mkosi.default <<EOF
+[Distribution]
+Distribution=fedora
+Release=24
+
+[Output]
+Format=raw_btrfs
+Bootable=yes
+
+[Packages]
+Packages=openssh-clients httpd
+BuildPackages=make gcc libcurl-devel
+EOF
+# cat > mkosi.build <<EOF
+#!/bin/sh
+cd $SRCDIR
+./autogen.sh
+./configure --prefix=/usr
+make -j `nproc`
+make install
+EOF
+# chmod +x mkosi.build
+# mkosi
+# systemd-nspawn -bi image.raw
+```
+
+To create a *Fedora* image with a custom hostname:
+```bash
+# mkosi -d fedora --hostname image
+```
+
+Alternatively, the hostname may be set in the configuration file:
+```bash
+# cat mkosi.default
+...
+[Output]
+Hostname=image
+...
+```
+
+# Requirements
+
+mkosi is packaged for various distributions: Debian, Ubuntu, Arch (in AUR), Fedora.
+It is usually easiest to use the distribution package.
+
+The current version requires systemd 233 (or, more precisely, the systemd-nspawn from it).
+
+When not using distribution packages make sure to install the
+necessary dependencies. For example, on *Fedora* you need:
+
+```bash
+dnf install arch-install-scripts btrfs-progs debootstrap dosfstools edk2-ovmf squashfs-tools gnupg python3 tar veritysetup xz zypper
+```
+
+Note that the minimum required Python version is 3.5.
+
+If SecureBoot signing is to be used, the "sbsign" tool needs to be
+installed as well. It is currently not available in Fedora proper, but
+can be installed from a COPR repository:
+
+```bash
+dnf copr enable msekleta/sbsigntool
+dnf install sbsigntool
+```
diff --git a/do-a-release.sh b/do-a-release.sh
new file mode 100755
index 0000000..7817b4b
--- /dev/null
+++ b/do-a-release.sh
@@ -0,0 +1,15 @@
+#!/bin/sh
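+# Bump the version number in setup.py and the mkosi script, commit the change and tag the release.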
+
+if [ x"$1" == x ] ; then
+ echo "Version number not specified."
+ exit 1
+fi
+
+sed -i -e 's/version=".*",/version="'"$1"'",/' setup.py
+sed -i -e "s/__version__ = '.*'/__version__ = '$1'/" mkosi
+
+git add -p setup.py mkosi
+
+git commit -m "bump version numbers for v$1"
+
+git tag -s "v$1" -m "mkosi $1"
diff --git a/mkosi b/mkosi
new file mode 100755
index 0000000..cc09c8f
--- /dev/null
+++ b/mkosi
@@ -0,0 +1,2904 @@
+#!/usr/bin/python3
+# PYTHON_ARGCOMPLETE_OK
+
+import argparse
+import configparser
+import contextlib
+import ctypes
+import ctypes.util
+import crypt
+import getpass
+import hashlib
+import os
+import platform
+import shutil
+import stat
+import subprocess
+import sys
+import tempfile
+import time
+import urllib.request
+import uuid
+
+try:
+ import argcomplete
+except ImportError:
+ pass
+
+from enum import Enum
+
+__version__ = '3'
+
+if sys.version_info < (3, 5):
+ sys.exit("Sorry, we need at least Python 3.5.")
+
+# TODO
+# - volatile images
+# - make ubuntu images bootable
+# - work on device nodes
+# - allow passing env vars
+
+def die(message, status=1):
+    assert 1 <= status < 128
+    sys.stderr.write(message + "\n")
+    sys.exit(status)
+
+class OutputFormat(Enum):
+ raw_gpt = 1
+ raw_btrfs = 2
+ raw_squashfs = 3
+ directory = 4
+ subvolume = 5
+ tar = 6
+
+class Distribution(Enum):
+ fedora = 1
+ debian = 2
+ ubuntu = 3
+ arch = 4
+ opensuse = 5
+ mageia = 6
+
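+# Well-known GPT partition type UUIDs (see the UEFI specification and
+# systemd's Discoverable Partitions Specification).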
+GPT_ROOT_X86 = uuid.UUID("44479540f29741b29af7d131d5f0458a")
+GPT_ROOT_X86_64 = uuid.UUID("4f68bce3e8cd4db196e7fbcaf984b709")
+GPT_ROOT_ARM = uuid.UUID("69dad7102ce44e3cb16c21a1d49abed3")
+GPT_ROOT_ARM_64 = uuid.UUID("b921b0451df041c3af444c6f280d3fae")
+GPT_ROOT_IA64 = uuid.UUID("993d8d3df80e4225855a9daf8ed7ea97")
+GPT_ESP = uuid.UUID("c12a7328f81f11d2ba4b00a0c93ec93b")
+GPT_SWAP = uuid.UUID("0657fd6da4ab43c484e50933c84b4f4f")
+GPT_HOME = uuid.UUID("933ac7e12eb44f13b8440e14e2aef915")
+GPT_SRV = uuid.UUID("3b8f842520e04f3b907f1a25a76f98e8")
+GPT_ROOT_X86_VERITY = uuid.UUID("d13c5d3bb5d1422ab29f9454fdc89d76")
+GPT_ROOT_X86_64_VERITY = uuid.UUID("2c7357edebd246d9aec123d437ec2bf5")
+GPT_ROOT_ARM_VERITY = uuid.UUID("7386cdf2203c47a9a498f2ecce45a2d6")
+GPT_ROOT_ARM_64_VERITY = uuid.UUID("df3300ced69f4c92978c9bfb0f38d820")
+GPT_ROOT_IA64_VERITY = uuid.UUID("86ed10d5b60745bb8957d350f23d0571")
+
+if platform.machine() == "x86_64":
+ GPT_ROOT_NATIVE = GPT_ROOT_X86_64
+ GPT_ROOT_NATIVE_VERITY = GPT_ROOT_X86_64_VERITY
+elif platform.machine() == "aarch64":
+ GPT_ROOT_NATIVE = GPT_ROOT_ARM_64
+ GPT_ROOT_NATIVE_VERITY = GPT_ROOT_ARM_64_VERITY
+else:
+ die("Don't know the %s architecture." % platform.machine())
+
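+# unshare(2) flag: create a new mount namespace (value from <linux/sched.h>).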
+CLONE_NEWNS = 0x00020000
+
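+# Short IDs of the GPG keys the supported Fedora releases are signed with.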
+FEDORA_KEYS_MAP = {
+ "23": "34EC9CBA",
+ "24": "81B46521",
+ "25": "FDB19C98",
+ "26": "64DAB85D",
+}
+
+# 1 MB at the beginning of the disk for the GPT disk label, and
+# another MB at the end (this is actually more than needed.)
+GPT_HEADER_SIZE = 1024*1024
+GPT_FOOTER_SIZE = 1024*1024
+
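+# Thin ctypes wrapper for the unshare(2) syscall, which Python 3.5's stdlib does not expose.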
+def unshare(flags):
+ libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
+
+ if libc.unshare(ctypes.c_int(flags)) != 0:
+ e = ctypes.get_errno()
+ raise OSError(e, os.strerror(e))
+
+def format_bytes(num_bytes):
+    if num_bytes >= 1024*1024*1024:
+        return "{:0.1f}G".format(num_bytes / 1024**3)
+    if num_bytes >= 1024*1024:
+        return "{:0.1f}M".format(num_bytes / 1024**2)
+    if num_bytes >= 1024:
+        return "{:0.1f}K".format(num_bytes / 1024)
+
+    return "{}B".format(num_bytes)
+
+def roundup512(x):
+ return (x + 511) & ~511
+
+def print_step(text):
+ sys.stderr.write("‣ \033[0;1;39m" + text + "\033[0m\n")
+
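+# Context manager that prints "<text>..." on entry and "<text> complete." on
+# exit; the body may append values that text2 is then formatted with.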
+@contextlib.contextmanager
+def complete_step(text, text2=None):
+ print_step(text + '...')
+ args = []
+ yield args
+ if text2 is None:
+ text2 = text + ' complete'
+ print_step(text2.format(*args) + '.')
+
+@complete_step('Detaching namespace')
+def init_namespace(args):
+ args.original_umask = os.umask(0o000)
+ unshare(CLONE_NEWNS)
+ subprocess.run(["mount", "--make-rslave", "/"], check=True)
+
+def setup_workspace(args):
+ print_step("Setting up temporary workspace.")
+ if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
+ d = tempfile.TemporaryDirectory(dir=os.path.dirname(args.output), prefix='.mkosi-')
+ else:
+ d = tempfile.TemporaryDirectory(dir='/var/tmp', prefix='mkosi-')
+
+ print_step("Temporary workspace in " + d.name + " is now set up.")
+ return d
+
+def btrfs_subvol_create(path, mode=0o755):
+ m = os.umask(~mode & 0o7777)
+ subprocess.run(["btrfs", "subvol", "create", path], check=True)
+ os.umask(m)
+
+def btrfs_subvol_delete(path):
+ subprocess.run(["btrfs", "subvol", "delete", path], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=True)
+
+def btrfs_subvol_make_ro(path, b=True):
+ subprocess.run(["btrfs", "property", "set", path, "ro", "true" if b else "false"], check=True)
+
+def image_size(args):
+ size = GPT_HEADER_SIZE + GPT_FOOTER_SIZE
+
+ if args.root_size is not None:
+ size += args.root_size
+ if args.home_size is not None:
+ size += args.home_size
+ if args.srv_size is not None:
+ size += args.srv_size
+ if args.bootable:
+ size += args.esp_size
+ if args.swap_size is not None:
+ size += args.swap_size
+ if args.verity_size is not None:
+ size += args.verity_size
+
+ return size
+
+def disable_cow(path):
+ """Disable copy-on-write if applicable on filesystem"""
+
+ subprocess.run(["chattr", "+C", path], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=False)
+
+def determine_partition_table(args):
+
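+    # Build an input script for sfdisk(8); partition sizes are given in 512-byte sectors.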
+ pn = 1
+ table = "label: gpt\n"
+ run_sfdisk = False
+
+ if args.bootable:
+ table += 'size={}, type={}, name="ESP System Partition"\n'.format(args.esp_size // 512, GPT_ESP)
+ args.esp_partno = pn
+ pn += 1
+ run_sfdisk = True
+ else:
+ args.esp_partno = None
+
+ if args.swap_size is not None:
+ table += 'size={}, type={}, name="Swap Partition"\n'.format(args.swap_size // 512, GPT_SWAP)
+ args.swap_partno = pn
+ pn += 1
+ run_sfdisk = True
+ else:
+ args.swap_partno = None
+
+ args.home_partno = None
+ args.srv_partno = None
+
+ if args.output_format != OutputFormat.raw_btrfs:
+ if args.home_size is not None:
+ table += 'size={}, type={}, name="Home Partition"\n'.format(args.home_size // 512, GPT_HOME)
+ args.home_partno = pn
+ pn += 1
+ run_sfdisk = True
+
+ if args.srv_size is not None:
+ table += 'size={}, type={}, name="Server Data Partition"\n'.format(args.srv_size // 512, GPT_SRV)
+ args.srv_partno = pn
+ pn += 1
+ run_sfdisk = True
+
+ if args.output_format != OutputFormat.raw_squashfs:
+ table += 'type={}, attrs={}, name="Root Partition"\n'.format(GPT_ROOT_NATIVE, "GUID:60" if args.read_only and args.output_format != OutputFormat.raw_btrfs else "")
+ run_sfdisk = True
+
+ args.root_partno = pn
+ pn += 1
+
+ if args.verity:
+ args.verity_partno = pn
+ pn += 1
+ else:
+ args.verity_partno = None
+
+ return table, run_sfdisk
+
+
+def create_image(args, workspace, for_cache):
+ if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+ return None
+
+ with complete_step('Creating partition table',
+ 'Created partition table as {.name}') as output:
+
+ f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix='.mkosi-', delete=not for_cache)
+ output.append(f)
+ disable_cow(f.name)
+ f.truncate(image_size(args))
+
+ table, run_sfdisk = determine_partition_table(args)
+
+ if run_sfdisk:
+ subprocess.run(["sfdisk", "--color=never", f.name], input=table.encode("utf-8"), check=True)
+ subprocess.run(["sync"])
+
+ args.ran_sfdisk = run_sfdisk
+
+ return f
+
+def reuse_cache_image(args, workspace, run_build_script, for_cache):
+
+ if not args.incremental:
+ return None, False
+ if for_cache:
+ return None, False
+ if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
+ return None, False
+
+ fname = args.cache_pre_dev if run_build_script else args.cache_pre_inst
+ if fname is None:
+ return None, False
+
+    # Open the cache file before entering complete_step(), since returning
+    # from inside the step would leave its completion message without the
+    # argument it formats.
+    try:
+        source = open(fname, "rb")
+    except FileNotFoundError:
+        return None, False
+
+    with complete_step('Basing off cached image ' + fname,
+                       'Copied cached image as {.name}') as output:
+
+        with source:
+            f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix='.mkosi-')
+            output.append(f)
+            disable_cow(f.name)
+            shutil.copyfileobj(source, f)
+
+        table, run_sfdisk = determine_partition_table(args)
+        args.ran_sfdisk = run_sfdisk
+
+    return f, True
+
+@contextlib.contextmanager
+def attach_image_loopback(args, raw):
+ if raw is None:
+ yield None
+ return
+
+ with complete_step('Attaching image file',
+ 'Attached image file as {}') as output:
+ c = subprocess.run(["losetup", "--find", "--show", "--partscan", raw.name],
+ stdout=subprocess.PIPE, check=True)
+ loopdev = c.stdout.decode("utf-8").strip()
+ output.append(loopdev)
+
+ try:
+ yield loopdev
+ finally:
+ with complete_step('Detaching image file'):
+ subprocess.run(["losetup", "--detach", loopdev], check=True)
+
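+# Return the device node for the given partition of a loop device, e.g. /dev/loop0p1.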
+def partition(loopdev, partno):
+ if partno is None:
+ return None
+
+ return loopdev + "p" + str(partno)
+
+def prepare_swap(args, loopdev, cached):
+ if loopdev is None:
+ return
+ if cached:
+ return
+ if args.swap_partno is None:
+ return
+
+ with complete_step('Formatting swap partition'):
+ subprocess.run(["mkswap", "-Lswap", partition(loopdev, args.swap_partno)],
+ check=True)
+
+def prepare_esp(args, loopdev, cached):
+ if loopdev is None:
+ return
+ if cached:
+ return
+ if args.esp_partno is None:
+ return
+
+ with complete_step('Formatting ESP partition'):
+ subprocess.run(["mkfs.fat", "-nEFI", "-F32", partition(loopdev, args.esp_partno)],
+ check=True)
+
+def mkfs_ext4(label, mount, dev):
+ subprocess.run(["mkfs.ext4", "-L", label, "-M", mount, dev], check=True)
+
+def mkfs_btrfs(label, dev):
+ subprocess.run(["mkfs.btrfs", "-L", label, "-d", "single", "-m", "single", dev], check=True)
+
+def luks_format(dev, passphrase):
+
+ if passphrase['type'] == 'stdin':
+ passphrase = (passphrase['content'] + "\n").encode("utf-8")
+ subprocess.run(["cryptsetup", "luksFormat", "--batch-mode", dev], input=passphrase, check=True)
+ else:
+ assert passphrase['type'] == 'file'
+ subprocess.run(["cryptsetup", "luksFormat", "--batch-mode", dev, passphrase['content']], check=True)
+
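+# Open the LUKS volume under a random /dev/mapper name and return that path.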
+def luks_open(dev, passphrase):
+
+ name = str(uuid.uuid4())
+
+ if passphrase['type'] == 'stdin':
+ passphrase = (passphrase['content'] + "\n").encode("utf-8")
+ subprocess.run(["cryptsetup", "open", "--type", "luks", dev, name], input=passphrase, check=True)
+ else:
+ assert passphrase['type'] == 'file'
+ subprocess.run(["cryptsetup", "--key-file", passphrase['content'], "open", "--type", "luks", dev, name], check=True)
+
+ return os.path.join("/dev/mapper", name)
+
+def luks_close(dev, text):
+ if dev is None:
+ return
+
+ with complete_step(text):
+ subprocess.run(["cryptsetup", "close", dev], check=True)
+
+def luks_format_root(args, loopdev, run_build_script, cached, inserting_squashfs=False):
+
+ if args.encrypt != "all":
+ return
+ if args.root_partno is None:
+ return
+ if args.output_format == OutputFormat.raw_squashfs and not inserting_squashfs:
+ return
+ if run_build_script:
+ return
+ if cached:
+ return
+
+ with complete_step("LUKS formatting root partition"):
+ luks_format(partition(loopdev, args.root_partno), args.passphrase)
+
+def luks_format_home(args, loopdev, run_build_script, cached):
+
+ if args.encrypt is None:
+ return
+ if args.home_partno is None:
+ return
+ if run_build_script:
+ return
+ if cached:
+ return
+
+ with complete_step("LUKS formatting home partition"):
+ luks_format(partition(loopdev, args.home_partno), args.passphrase)
+
+def luks_format_srv(args, loopdev, run_build_script, cached):
+
+ if args.encrypt is None:
+ return
+ if args.srv_partno is None:
+ return
+ if run_build_script:
+ return
+ if cached:
+ return
+
+ with complete_step("LUKS formatting server data partition"):
+ luks_format(partition(loopdev, args.srv_partno), args.passphrase)
+
+def luks_setup_root(args, loopdev, run_build_script, inserting_squashfs=False):
+
+ if args.encrypt != "all":
+ return None
+ if args.root_partno is None:
+ return None
+ if args.output_format == OutputFormat.raw_squashfs and not inserting_squashfs:
+ return None
+ if run_build_script:
+ return None
+
+ with complete_step("Opening LUKS root partition"):
+ return luks_open(partition(loopdev, args.root_partno), args.passphrase)
+
+def luks_setup_home(args, loopdev, run_build_script):
+
+ if args.encrypt is None:
+ return None
+ if args.home_partno is None:
+ return None
+ if run_build_script:
+ return None
+
+ with complete_step("Opening LUKS home partition"):
+ return luks_open(partition(loopdev, args.home_partno), args.passphrase)
+
+def luks_setup_srv(args, loopdev, run_build_script):
+
+ if args.encrypt is None:
+ return None
+ if args.srv_partno is None:
+ return None
+ if run_build_script:
+ return None
+
+ with complete_step("Opening LUKS server data partition"):
+ return luks_open(partition(loopdev, args.srv_partno), args.passphrase)
+
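+# Open all LUKS partitions that were requested and yield the (root, home, srv)
+# device paths to use, falling back to the plain partitions where encryption
+# is not enabled; the LUKS volumes are closed again on exit.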
+@contextlib.contextmanager
+def luks_setup_all(args, loopdev, run_build_script):
+
+ if args.output_format in (OutputFormat.directory, OutputFormat.subvolume, OutputFormat.tar):
+ yield (None, None, None)
+ return
+
+ try:
+ root = luks_setup_root(args, loopdev, run_build_script)
+ try:
+ home = luks_setup_home(args, loopdev, run_build_script)
+ try:
+ srv = luks_setup_srv(args, loopdev, run_build_script)
+
+ yield (partition(loopdev, args.root_partno) if root is None else root, \
+ partition(loopdev, args.home_partno) if home is None else home, \
+ partition(loopdev, args.srv_partno) if srv is None else srv)
+ finally:
+ luks_close(srv, "Closing LUKS server data partition")
+ finally:
+ luks_close(home, "Closing LUKS home partition")
+ finally:
+ luks_close(root, "Closing LUKS root partition")
+
+def prepare_root(args, dev, cached):
+ if dev is None:
+ return
+ if args.output_format == OutputFormat.raw_squashfs:
+ return
+ if cached:
+ return
+
+ with complete_step('Formatting root partition'):
+ if args.output_format == OutputFormat.raw_btrfs:
+ mkfs_btrfs("root", dev)
+ else:
+ mkfs_ext4("root", "/", dev)
+
+def prepare_home(args, dev, cached):
+ if dev is None:
+ return
+ if cached:
+ return
+
+ with complete_step('Formatting home partition'):
+ mkfs_ext4("home", "/home", dev)
+
+def prepare_srv(args, dev, cached):
+ if dev is None:
+ return
+ if cached:
+ return
+
+ with complete_step('Formatting server data partition'):
+ mkfs_ext4("srv", "/srv", dev)
+
+def mount_loop(args, dev, where, read_only=False):
+ os.makedirs(where, 0o755, True)
+
+ options = "-odiscard"
+
+ if args.compress and args.output_format == OutputFormat.raw_btrfs:
+ options += ",compress"
+
+ if read_only:
+ options += ",ro"
+
+ subprocess.run(["mount", "-n", dev, where, options], check=True)
+
+def mount_bind(what, where):
+ os.makedirs(where, 0o755, True)
+ subprocess.run(["mount", "--bind", what, where], check=True)
+
+def mount_tmpfs(where):
+ os.makedirs(where, 0o755, True)
+ subprocess.run(["mount", "tmpfs", "-t", "tmpfs", where], check=True)
+
+@contextlib.contextmanager
+def mount_image(args, workspace, loopdev, root_dev, home_dev, srv_dev, root_read_only=False):
+ if loopdev is None:
+ yield None
+ return
+
+ with complete_step('Mounting image'):
+ root = os.path.join(workspace, "root")
+
+ if args.output_format != OutputFormat.raw_squashfs:
+ mount_loop(args, root_dev, root, root_read_only)
+
+ if home_dev is not None:
+ mount_loop(args, home_dev, os.path.join(root, "home"))
+
+ if srv_dev is not None:
+ mount_loop(args, srv_dev, os.path.join(root, "srv"))
+
+ if args.esp_partno is not None:
+ mount_loop(args, partition(loopdev, args.esp_partno), os.path.join(root, "efi"))
+
+ # Make sure /tmp and /run are not part of the image
+ mount_tmpfs(os.path.join(root, "run"))
+ mount_tmpfs(os.path.join(root, "tmp"))
+
+ try:
+ yield
+ finally:
+ with complete_step('Unmounting image'):
+
+ for d in ("home", "srv", "efi", "run", "tmp"):
+ umount(os.path.join(root, d))
+
+ umount(root)
+
+@complete_step("Assigning hostname")
+def assign_hostname(args, workspace):
+ root = os.path.join(workspace, "root")
+ hostname_path = os.path.join(root, "etc/hostname")
+
+ if os.path.isfile(hostname_path):
+ os.remove(hostname_path)
+
+ if args.hostname:
+ if os.path.islink(hostname_path) or os.path.isfile(hostname_path):
+ os.remove(hostname_path)
+ with open(hostname_path, "w+") as f:
+ f.write("{}\n".format(args.hostname))
+
+@contextlib.contextmanager
+def mount_api_vfs(args, workspace):
+ paths = ('/proc', '/dev', '/sys')
+ root = os.path.join(workspace, "root")
+
+ with complete_step('Mounting API VFS'):
+ for d in paths:
+ mount_bind(d, root + d)
+ try:
+ yield
+ finally:
+ with complete_step('Unmounting API VFS'):
+ for d in paths:
+ umount(root + d)
+
+@contextlib.contextmanager
+def mount_cache(args, workspace):
+
+ if args.cache_path is None:
+ yield
+ return
+
+ # We can't do this in mount_image() yet, as /var itself might have to be created as a subvolume first
+ with complete_step('Mounting Package Cache'):
+ if args.distribution in (Distribution.fedora, Distribution.mageia):
+ mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/dnf"))
+ elif args.distribution in (Distribution.debian, Distribution.ubuntu):
+ mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/apt/archives"))
+ elif args.distribution == Distribution.arch:
+ mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/pacman/pkg"))
+ elif args.distribution == Distribution.opensuse:
+ mount_bind(args.cache_path, os.path.join(workspace, "root", "var/cache/zypp/packages"))
+ try:
+ yield
+ finally:
+ with complete_step('Unmounting Package Cache'):
+ for d in ("var/cache/dnf", "var/cache/apt/archives", "var/cache/pacman/pkg", "var/cache/zypp/packages"):
+ umount(os.path.join(workspace, "root", d))
+
+def umount(where):
+ # Ignore failures and error messages
+ subprocess.run(["umount", "-n", where], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
+
+@complete_step('Setting up basic OS tree')
+def prepare_tree(args, workspace, run_build_script, cached):
+
+ if args.output_format == OutputFormat.subvolume:
+ btrfs_subvol_create(os.path.join(workspace, "root"))
+ else:
+ try:
+ os.mkdir(os.path.join(workspace, "root"))
+ except FileExistsError:
+ pass
+
+ if args.output_format in (OutputFormat.subvolume, OutputFormat.raw_btrfs):
+
+ if cached and args.output_format is OutputFormat.raw_btrfs:
+ return
+
+ btrfs_subvol_create(os.path.join(workspace, "root", "home"))
+ btrfs_subvol_create(os.path.join(workspace, "root", "srv"))
+ btrfs_subvol_create(os.path.join(workspace, "root", "var"))
+ btrfs_subvol_create(os.path.join(workspace, "root", "var/tmp"), 0o1777)
+ os.mkdir(os.path.join(workspace, "root", "var/lib"))
+ btrfs_subvol_create(os.path.join(workspace, "root", "var/lib/machines"), 0o700)
+
+ if cached:
+ return
+
+ if args.bootable:
+ # We need an initialized machine ID for the boot logic to work
+ os.mkdir(os.path.join(workspace, "root", "etc"), 0o755)
+        with open(os.path.join(workspace, "root", "etc/machine-id"), "w") as f:
+            f.write(args.machine_id + "\n")
+
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI/BOOT"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI/Linux"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/EFI/systemd"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/loader"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi/loader/entries"), 0o700)
+ os.mkdir(os.path.join(workspace, "root", "efi", args.machine_id), 0o700)
+
+ os.mkdir(os.path.join(workspace, "root", "boot"), 0o700)
+ os.symlink("../efi", os.path.join(workspace, "root", "boot/efi"))
+ os.symlink("efi/loader", os.path.join(workspace, "root", "boot/loader"))
+ os.symlink("efi/" + args.machine_id, os.path.join(workspace, "root", "boot", args.machine_id))
+
+ os.mkdir(os.path.join(workspace, "root", "etc/kernel"), 0o755)
+
+ with open(os.path.join(workspace, "root", "etc/kernel/cmdline"), "w") as cmdline:
+ cmdline.write(args.kernel_commandline)
+ cmdline.write("\n")
+
+ if run_build_script:
+ os.mkdir(os.path.join(workspace, "root", "root"), 0o750)
+ os.mkdir(os.path.join(workspace, "root", "root/dest"), 0o755)
+
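+# Rewrite a file in place, line by line, through the given rewriter
+# function, preserving the original file's permissions and timestamps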
+def patch_file(filepath, line_rewriter):
+ temp_new_filepath = filepath + ".tmp.new"
+
+ with open(filepath, "r") as old:
+ with open(temp_new_filepath, "w") as new:
+ for line in old:
+ new.write(line_rewriter(line))
+
+ shutil.copystat(filepath, temp_new_filepath)
+ os.remove(filepath)
+ shutil.move(temp_new_filepath, filepath)
+
+def fix_hosts_line_in_nsswitch(line):
+    # Make nss-resolve take the place of the glibc "dns" source
+    if line.startswith("hosts:"):
+        # str.split() also strips the trailing newline, so that "dns" is
+        # matched even as the last word of the line; re-add the newline
+        sources = line.split()
+        if 'resolve' not in sources:
+            return " ".join("resolve" if w == "dns" else w for w in sources) + "\n"
+    return line
+
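+# Enable systemd-networkd/-resolved, hook resolved into /etc/resolv.conf
+# and nsswitch.conf, and configure DHCP on all ethernet links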
+def enable_networkd(workspace):
+ subprocess.run(["systemctl",
+ "--root", os.path.join(workspace, "root"),
+ "enable", "systemd-networkd", "systemd-resolved"],
+ check=True)
+
+ os.remove(os.path.join(workspace, "root", "etc/resolv.conf"))
+ os.symlink("../usr/lib/systemd/resolv.conf", os.path.join(workspace, "root", "etc/resolv.conf"))
+
+ patch_file(os.path.join(workspace, "root", "etc/nsswitch.conf"), fix_hosts_line_in_nsswitch)
+
+ with open(os.path.join(workspace, "root", "etc/systemd/network/all-ethernet.network"), "w") as f:
+ f.write("""\
+[Match]
+Type=ether
+
+[Network]
+DHCP=yes
+""")
+
+def run_workspace_command(args, workspace, *cmd, network=False, env=None):
+
+ cmdline = ["systemd-nspawn",
+ '--quiet',
+ "--directory=" + os.path.join(workspace, "root"),
+ "--uuid=" + args.machine_id,
+ "--machine=mkosi-" + uuid.uuid4().hex,
+ "--as-pid2",
+ "--register=no",
+ "--bind=" + var_tmp(workspace) + ":/var/tmp" ]
+
+ if not network:
+ cmdline += ["--private-network"]
+
+ cmdline += [ "--setenv={}={}".format(k,v) for k,v in env.items() ]
+
+ cmdline += ['--', *cmd]
+ subprocess.run(cmdline, check=True)
+
+def check_if_url_exists(url):
+    req = urllib.request.Request(url, method="HEAD")
+    try:
+        urllib.request.urlopen(req)
+        return True
+    except Exception:
+        # Treat any network or HTTP failure as "does not exist"
+        return False
+
+def disable_kernel_install(args, workspace):
+
+    # Let's disable the automatic kernel installation done by the
+    # kernel RPMs. After all, we want to build our own unified kernels
+    # that include the root hash in the kernel command line and can be
+    # signed as a single EFI executable. Since the root hash is only
+    # known once the root file system is finalized, we turn off any
+    # kernel installation beforehand.
+
+ if not args.bootable:
+ return
+
+ for d in ("etc", "etc/kernel", "etc/kernel/install.d"):
+ try:
+ os.mkdir(os.path.join(workspace, "root", d), 0o755)
+ except FileExistsError:
+ pass
+
+ for f in ("50-dracut.install", "51-dracut-rescue.install", "90-loaderentry.install"):
+ os.symlink("/dev/null", os.path.join(workspace, "root", "etc/kernel/install.d", f))
+
+def invoke_dnf(args, workspace, run_build_script, repositories, base_packages, boot_packages):
+
+ repos = ["--enablerepo=" + repo for repo in repositories]
+
+ root = os.path.join(workspace, "root")
+ cmdline = ["dnf",
+ "-y",
+ "--config=" + os.path.join(workspace, "dnf.conf"),
+ "--best",
+ "--allowerasing",
+ "--releasever=" + args.release,
+ "--installroot=" + root,
+ "--disablerepo=*",
+ *repos,
+ "--setopt=keepcache=1",
+ "--setopt=install_weak_deps=0"]
+
+ # Turn off docs, but not during the development build, as dnf currently has problems with that
+ if not args.with_docs and not run_build_script:
+ cmdline.append("--setopt=tsflags=nodocs")
+
+ cmdline.extend([
+ "install",
+ *base_packages
+ ])
+
+ if args.packages is not None:
+ cmdline.extend(args.packages)
+
+ if run_build_script and args.build_packages is not None:
+ cmdline.extend(args.build_packages)
+
+ if args.bootable:
+ cmdline.extend(boot_packages)
+
+ # Temporary hack: dracut only adds crypto support to the initrd, if the cryptsetup binary is installed
+ if args.encrypt or args.verity:
+ cmdline.append("cryptsetup")
+
+ if args.output_format == OutputFormat.raw_gpt:
+ cmdline.append("e2fsprogs")
+
+ if args.output_format == OutputFormat.raw_btrfs:
+ cmdline.append("btrfs-progs")
+
+ with mount_api_vfs(args, workspace):
+ subprocess.run(cmdline, check=True)
+
+@complete_step('Installing Fedora')
+def install_fedora(args, workspace, run_build_script):
+
+ disable_kernel_install(args, workspace)
+
+ gpg_key = "/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-%s-x86_64" % args.release
+ if os.path.exists(gpg_key):
+ gpg_key = "file://%s" % gpg_key
+ else:
+ gpg_key = "https://getfedora.org/static/%s.txt" % FEDORA_KEYS_MAP[args.release]
+
+ if args.mirror:
+ baseurl = "{args.mirror}/releases/{args.release}/Everything/x86_64/os/".format(args=args)
+ if not check_if_url_exists("%s/media.repo" % baseurl):
+ baseurl = "{args.mirror}/development/{args.release}/Everything/x86_64/os/".format(args=args)
+
+ release_url = "baseurl=%s" % baseurl
+ updates_url = "baseurl={args.mirror}/updates/{args.release}/x86_64/".format(args=args)
+ else:
+ release_url = ("metalink=https://mirrors.fedoraproject.org/metalink?" +
+ "repo=fedora-{args.release}&arch=x86_64".format(args=args))
+ updates_url = ("metalink=https://mirrors.fedoraproject.org/metalink?" +
+ "repo=updates-released-f{args.release}&arch=x86_64".format(args=args))
+
+ with open(os.path.join(workspace, "dnf.conf"), "w") as f:
+ f.write("""\
+[main]
+gpgcheck=1
+
+[fedora]
+name=Fedora {args.release} - base
+{release_url}
+gpgkey={gpg_key}
+
+[updates]
+name=Fedora {args.release} - updates
+{updates_url}
+gpgkey={gpg_key}
+""".format(args=args,
+ gpg_key=gpg_key,
+ release_url=release_url,
+ updates_url=updates_url))
+
+    invoke_dnf(args, workspace, run_build_script,
+ args.repositories if args.repositories else ["fedora", "updates"],
+ ["systemd", "fedora-release", "passwd"],
+ ["kernel", "systemd-udev", "binutils"])
+
+@complete_step('Installing Mageia')
+def install_mageia(args, workspace, run_build_script):
+
+ disable_kernel_install(args, workspace)
+
+    # Mageia does not (yet) publish its RPM GPG key on the web, so we
+    # can only use it if it is already available locally
+    gpg_key = '/etc/pki/rpm-gpg/RPM-GPG-KEY-Mageia'
+    if os.path.exists(gpg_key):
+        gpg_key = "file://%s" % gpg_key
+
+ if args.mirror:
+ baseurl = "{args.mirror}/distrib/{args.release}/x86_64/media/core/".format(args=args)
+ release_url = "baseurl=%s/release/" % baseurl
+ updates_url = "baseurl=%s/updates/" % baseurl
+ else:
+ baseurl = "https://www.mageia.org/mirrorlist/?release={args.release}&arch=x86_64&section=core".format(args=args)
+ release_url = "mirrorlist=%s&repo=release" % baseurl
+ updates_url = "mirrorlist=%s&repo=updates" % baseurl
+
+ with open(os.path.join(workspace, "dnf.conf"), "w") as f:
+ f.write("""\
+[main]
+gpgcheck=1
+
+[mageia]
+name=Mageia {args.release} Core Release
+{release_url}
+gpgkey={gpg_key}
+
+[updates]
+name=Mageia {args.release} Core Updates
+{updates_url}
+gpgkey={gpg_key}
+""".format(args=args,
+ gpg_key=gpg_key,
+ release_url=release_url,
+ updates_url=updates_url))
+
+    invoke_dnf(args, workspace, run_build_script,
+ args.repositories if args.repositories else ["mageia", "updates"],
+ ["basesystem-minimal"],
+ ["kernel-server-latest", "binutils"])
+
+def install_debian_or_ubuntu(args, workspace, run_build_script, mirror):
+ if args.repositories:
+ components = ','.join(args.repositories)
+ else:
+ components = 'main'
+ cmdline = ["debootstrap",
+ "--verbose",
+ "--merged-usr",
+ "--variant=minbase",
+ "--include=systemd-sysv",
+ "--exclude=sysv-rc,initscripts,startpar,lsb-base,insserv",
+ "--components=" + components,
+ args.release,
+ workspace + "/root",
+ mirror]
+    if args.bootable and args.output_format == OutputFormat.raw_btrfs:
+        # Extend the "--include=" entry, which sits at index 4 of cmdline above
+        cmdline[4] += ",btrfs-tools"
+
+ subprocess.run(cmdline, check=True)
+
+    # Debootstrap is not smart enough to deal correctly with alternative dependencies:
+    # installing libpam-systemd via debootstrap results in systemd-shim being installed.
+    # Therefore, prefer to install it via apt from inside the container.
+ extra_packages = [ 'dbus', 'libpam-systemd']
+
+ # Also install extra packages via the secondary APT run, because it is smarter and
+ # can deal better with any conflicts
+ if args.packages is not None:
+ extra_packages += args.packages
+
+ if run_build_script and args.build_packages is not None:
+ extra_packages += args.build_packages
+
+    # Work around Debian bug #835628
+    os.makedirs(os.path.join(workspace, "root/etc/dracut.conf.d"), exist_ok=True)
+    with open(os.path.join(workspace, "root/etc/dracut.conf.d/99-generic.conf"), "w") as f:
+        f.write("hostonly=no\n")
+
+ if args.bootable:
+ extra_packages += ["linux-image-amd64", "dracut"]
+
+ if extra_packages:
+        # Debian policy is to start daemons by default.
+        # The policy-rc.d script can be used to choose which ones to start.
+        # Let's install one that denies all daemon startups.
+        # See https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
+        # Note: despite living in /usr/sbin, this file is not shipped by the OS
+        # and instead should be managed by the admin.
+        policyrcd = os.path.join(workspace, "root/usr/sbin/policy-rc.d")
+        with open(policyrcd, "w") as f:
+            f.write("#!/bin/sh\n")
+            f.write("exit 101\n")
+ os.chmod(policyrcd, 0o755)
+ if not args.with_docs:
+            # Create dpkg.cfg to ignore documentation
+ dpkg_conf = os.path.join(workspace, "root/etc/dpkg/dpkg.cfg.d/01_nodoc")
+ with open(dpkg_conf, "w") as f:
+ f.writelines([
+ 'path-exclude /usr/share/locale/*\n',
+ 'path-exclude /usr/share/doc/*\n',
+ 'path-exclude /usr/share/man/*\n',
+ 'path-exclude /usr/share/groff/*\n',
+ 'path-exclude /usr/share/info/*\n',
+ 'path-exclude /usr/share/lintian/*\n',
+ 'path-exclude /usr/share/linda/*\n',
+ ])
+
+ cmdline = ["/usr/bin/apt-get", "--assume-yes", "--no-install-recommends", "install"] + extra_packages
+ run_workspace_command(args, workspace, network=True, env={'DEBIAN_FRONTEND': 'noninteractive', 'DEBCONF_NONINTERACTIVE_SEEN': 'true'}, *cmdline)
+ os.unlink(policyrcd)
+
+@complete_step('Installing Debian')
+def install_debian(args, workspace, run_build_script):
+ install_debian_or_ubuntu(args, workspace, run_build_script, args.mirror)
+
+@complete_step('Installing Ubuntu')
+def install_ubuntu(args, workspace, run_build_script):
+ install_debian_or_ubuntu(args, workspace, run_build_script, args.mirror)
+
+@complete_step('Installing Arch Linux')
+def install_arch(args, workspace, run_build_script):
+ if args.release is not None:
+ sys.stderr.write("Distribution release specification is not supported for Arch Linux, ignoring.\n")
+
+ keyring = "archlinux"
+
+ if platform.machine() == "aarch64":
+ keyring += "arm"
+
+ subprocess.run(["pacman-key", "--nocolor", "--init"], check=True)
+ subprocess.run(["pacman-key", "--nocolor", "--populate", keyring], check=True)
+
+ if platform.machine() == "aarch64":
+ server = "Server = {}/$arch/$repo".format(args.mirror)
+ else:
+ server = "Server = {}/$repo/os/$arch".format(args.mirror)
+
+ with open(os.path.join(workspace, "pacman.conf"), "w") as f:
+ f.write("""\
+[options]
+LogFile = /dev/null
+HookDir = /no_hook/
+HoldPkg = pacman glibc
+Architecture = auto
+UseSyslog
+Color
+CheckSpace
+SigLevel = Required DatabaseOptional
+
+[core]
+{server}
+
+[extra]
+{server}
+
+[community]
+{server}
+""".format(args=args, server=server))
+
+ subprocess.run(["pacman", "--color", "never", "--config", os.path.join(workspace, "pacman.conf"), "-Sy"], check=True)
+ c = subprocess.run(["pacman", "--color", "never", "--config", os.path.join(workspace, "pacman.conf"), "-Sg", "base"], stdout=subprocess.PIPE, universal_newlines=True, check=True)
+ packages = set(c.stdout.split())
+ packages.remove("base")
+
+ packages -= {"cryptsetup",
+ "device-mapper",
+ "dhcpcd",
+ "e2fsprogs",
+ "jfsutils",
+ "lvm2",
+ "man-db",
+ "man-pages",
+ "mdadm",
+ "netctl",
+ "pcmciautils",
+ "reiserfsprogs",
+ "xfsprogs"}
+
+ if args.bootable:
+ if args.output_format == OutputFormat.raw_gpt:
+ packages.add("e2fsprogs")
+ elif args.output_format == OutputFormat.raw_btrfs:
+ packages.add("btrfs-progs")
+ else:
+ if "linux" in packages:
+ packages.remove("linux")
+
+ if args.packages is not None:
+ packages |= set(args.packages)
+
+ if run_build_script and args.build_packages is not None:
+ packages |= set(args.build_packages)
+
+ cmdline = ["pacstrap",
+ "-C", os.path.join(workspace, "pacman.conf"),
+ "-d",
+ workspace + "/root"] + \
+ list(packages)
+
+ subprocess.run(cmdline, check=True)
+
+ enable_networkd(workspace)
+
+@complete_step('Installing openSUSE')
+def install_opensuse(args, workspace, run_build_script):
+
+ root = os.path.join(workspace, "root")
+ release = args.release.strip('"')
+
+ #
+ # If the release looks like a timestamp, it's Tumbleweed.
+ # 13.x is legacy (14.x won't ever appear). For anything else,
+ # let's default to Leap.
+ #
+ if release.isdigit() or release == "tumbleweed":
+ release_url = "{}/tumbleweed/repo/oss/".format(args.mirror)
+ updates_url = "{}/update/tumbleweed/".format(args.mirror)
+ elif release.startswith("13."):
+ release_url = "{}/distribution/{}/repo/oss/".format(args.mirror, release)
+ updates_url = "{}/update/{}/".format(args.mirror, release)
+ else:
+ release_url = "{}/distribution/leap/{}/repo/oss/".format(args.mirror, release)
+ updates_url = "{}/update/leap/{}/oss/".format(args.mirror, release)
+
+ #
+ # Configure the repositories: we need to enable packages caching
+ # here to make sure that the package cache stays populated after
+ # "zypper install".
+ #
+ subprocess.run(["zypper", "--root", root, "addrepo", "-ck", release_url, "Main"], check=True)
+ subprocess.run(["zypper", "--root", root, "addrepo", "-ck", updates_url, "Updates"], check=True)
+
+ if not args.with_docs:
+ with open(os.path.join(root, "etc/zypp/zypp.conf"), "w") as f:
+ f.write("rpm.install.excludedocs = yes\n")
+
+    # The common part of the install command.
+ cmdline = ["zypper", "--root", root, "--gpg-auto-import-keys",
+ "install", "-y", "--no-recommends"]
+ #
+ # Install the "minimal" package set.
+ #
+ subprocess.run(cmdline + ["-t", "pattern", "minimal_base"], check=True)
+
+ #
+ # Now install the additional packages if necessary.
+ #
+ extra_packages = []
+
+ if args.bootable:
+ extra_packages += ["kernel-default"]
+
+ if args.encrypt:
+ extra_packages += ["device-mapper"]
+
+ if args.output_format in (OutputFormat.subvolume, OutputFormat.raw_btrfs):
+ extra_packages += ["btrfsprogs"]
+
+ if args.packages:
+ extra_packages += args.packages
+
+ if run_build_script and args.build_packages is not None:
+ extra_packages += args.build_packages
+
+ if extra_packages:
+ subprocess.run(cmdline + extra_packages, check=True)
+
+ #
+ # Disable packages caching in the image that was enabled
+ # previously to populate the package cache.
+ #
+ subprocess.run(["zypper", "--root", root, "modifyrepo", "-K", "Main"], check=True)
+ subprocess.run(["zypper", "--root", root, "modifyrepo", "-K", "Updates"], check=True)
+
+ #
+    # Tune the dracut configuration: openSUSE ships an old version of
+    # dracut, which probably explains why we need these hacks.
+ #
+ if args.bootable:
+ os.makedirs(os.path.join(root, "etc/dracut.conf.d"), exist_ok=True)
+
+ with open(os.path.join(root, "etc/dracut.conf.d/99-mkosi.conf"), "w") as f:
+ f.write("hostonly=no\n")
+
+ # dracut from openSUSE is missing upstream commit 016613c774baf.
+ with open(os.path.join(root, "etc/kernel/cmdline"), "w") as cmdline:
+ cmdline.write(args.kernel_commandline + " root=/dev/gpt-auto-root\n")
+
+def install_distribution(args, workspace, run_build_script, cached):
+
+ if cached:
+ return
+
+ install = {
+ Distribution.fedora : install_fedora,
+ Distribution.mageia : install_mageia,
+ Distribution.debian : install_debian,
+ Distribution.ubuntu : install_ubuntu,
+ Distribution.arch : install_arch,
+ Distribution.opensuse : install_opensuse,
+ }
+
+ install[args.distribution](args, workspace, run_build_script)
+ assign_hostname(args, workspace)
+
+def reset_machine_id(args, workspace, run_build_script, for_cache):
+ """Make /etc/machine-id an empty file.
+
+    This way, on the next boot the machine ID is either initialized and
+    committed (if /etc is writable), or the image runs with a transient
+    machine ID that changes on each boot (if the image is read-only).
+ """
+
+ if run_build_script:
+ return
+ if for_cache:
+ return
+
+ with complete_step('Resetting machine ID'):
+ machine_id = os.path.join(workspace, 'root', 'etc/machine-id')
+        try:
+            os.unlink(machine_id)
+        except FileNotFoundError:
+            # The file might not exist, e.g. in non-bootable images where
+            # we never wrote one ourselves
+            pass
+        open(machine_id, "w+b").close()
+ dbus_machine_id = os.path.join(workspace, 'root', 'var/lib/dbus/machine-id')
+ try:
+ os.unlink(dbus_machine_id)
+ except FileNotFoundError:
+ pass
+ else:
+ os.symlink('../../../etc/machine-id', dbus_machine_id)
+
+def set_root_password(args, workspace, run_build_script, for_cache):
+ "Set the root account password, or just delete it so it's easy to log in"
+
+ if run_build_script:
+ return
+ if for_cache:
+ return
+
+ if args.password == '':
+ print_step("Deleting root password...")
+ jj = lambda line: (':'.join(['root', ''] + line.split(':')[2:])
+ if line.startswith('root:') else line)
+ patch_file(os.path.join(workspace, 'root', 'etc/passwd'), jj)
+ elif args.password:
+ print_step("Setting root password...")
+ password = crypt.crypt(args.password, crypt.mksalt(crypt.METHOD_SHA512))
+ jj = lambda line: (':'.join(['root', password] + line.split(':')[2:])
+ if line.startswith('root:') else line)
+ patch_file(os.path.join(workspace, 'root', 'etc/shadow'), jj)
+
+def run_postinst_script(args, workspace, run_build_script, for_cache):
+
+ if args.postinst_script is None:
+ return
+ if for_cache:
+ return
+
+ with complete_step('Running post installation script'):
+
+ # We copy the postinst script into the build tree. We'd prefer
+ # mounting it into the tree, but for that we'd need a good
+ # place to mount it to. But if we create that we might as well
+ # just copy the file anyway.
+
+ shutil.copy2(args.postinst_script,
+ os.path.join(workspace, "root", "root/postinst"))
+
+ run_workspace_command(args, workspace, "/root/postinst", "build" if run_build_script else "final", network=args.with_network)
+ os.unlink(os.path.join(workspace, "root", "root/postinst"))
+
+def install_boot_loader_arch(args, workspace):
+ patch_file(os.path.join(workspace, "root", "etc/mkinitcpio.conf"),
+ lambda line: "HOOKS=\"systemd modconf block filesystems fsck\"\n" if line.startswith("HOOKS=") else line)
+
+ kernel_version = next(filter(lambda x: x[0].isdigit(), os.listdir(os.path.join(workspace, "root", "lib/modules"))))
+
+ run_workspace_command(args, workspace,
+ "/usr/bin/kernel-install", "add", kernel_version, "/boot/vmlinuz-linux")
+
+def install_boot_loader_debian(args, workspace):
+ kernel_version = next(filter(lambda x: x[0].isdigit(), os.listdir(os.path.join(workspace, "root", "lib/modules"))))
+
+ run_workspace_command(args, workspace,
+ "/usr/bin/kernel-install", "add", kernel_version, "/boot/vmlinuz-" + kernel_version)
+
+def install_boot_loader_opensuse(args, workspace):
+ install_boot_loader_debian(args, workspace)
+
+def install_boot_loader(args, workspace, cached):
+ if not args.bootable:
+ return
+
+ if cached:
+ return
+
+ with complete_step("Installing boot loader"):
+ shutil.copyfile(os.path.join(workspace, "root", "usr/lib/systemd/boot/efi/systemd-bootx64.efi"),
+ os.path.join(workspace, "root", "boot/efi/EFI/systemd/systemd-bootx64.efi"))
+
+ shutil.copyfile(os.path.join(workspace, "root", "usr/lib/systemd/boot/efi/systemd-bootx64.efi"),
+ os.path.join(workspace, "root", "boot/efi/EFI/BOOT/bootx64.efi"))
+
+ if args.distribution == Distribution.arch:
+ install_boot_loader_arch(args, workspace)
+
+ if args.distribution == Distribution.debian:
+ install_boot_loader_debian(args, workspace)
+
+ if args.distribution == Distribution.opensuse:
+ install_boot_loader_opensuse(args, workspace)
+
+def enumerate_and_copy(source, dest, suffix = ""):
+ for entry in os.scandir(source + suffix):
+ dest_path = dest + suffix + "/" + entry.name
+
+ if entry.is_dir():
+ os.makedirs(dest_path,
+ mode=entry.stat(follow_symlinks=False).st_mode & 0o7777,
+ exist_ok=True)
+ enumerate_and_copy(source, dest, suffix + "/" + entry.name)
+ else:
+            try:
+                os.unlink(dest_path)
+            except OSError:
+                pass
+
+ shutil.copy(entry.path, dest_path, follow_symlinks=False)
+
+ shutil.copystat(entry.path, dest_path, follow_symlinks=False)
+
+def install_extra_trees(args, workspace, for_cache):
+ if args.extra_trees is None:
+ return
+
+ if for_cache:
+ return
+
+ with complete_step('Copying in extra file trees'):
+ for d in args.extra_trees:
+ enumerate_and_copy(d, os.path.join(workspace, "root"))
+
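+# Copy a git work tree: do a shallow clone of what is committed, then
+# overlay everything that is modified, staged or (optionally) untracked,
+# so the build sees the current state of the checkout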
+def copy_git_files(src, dest, *, git_files):
+ subprocess.run(['git', 'clone', '--depth=1', '--recursive', '--shallow-submodules', src, dest],
+ check=True)
+
+ what_files = ['--exclude-standard', '--modified']
+ if git_files == 'others':
+ what_files += ['--others', '--exclude=.mkosi-*']
+
+ # everything that's modified from the tree
+ c = subprocess.run(['git', '-C', src, 'ls-files', '-z'] + what_files,
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ check=True)
+ files = {x.decode("utf-8") for x in c.stdout.rstrip(b'\0').split(b'\0')}
+
+ # everything that's modified and about to be committed
+ c = subprocess.run(['git', '-C', src, 'diff', '--cached', '--name-only', '-z'],
+ stdout=subprocess.PIPE,
+ universal_newlines=False,
+ check=True)
+ files |= {x.decode("utf-8") for x in c.stdout.rstrip(b'\0').split(b'\0')}
+ files.discard('')
+
+ del c
+
+ for path in files:
+ src_path = os.path.join(src, path)
+ dest_path = os.path.join(dest, path)
+
+ directory = os.path.dirname(dest_path)
+ os.makedirs(directory, exist_ok=True)
+
+ shutil.copy2(src_path, dest_path, follow_symlinks=False)
+
+def install_build_src(args, workspace, run_build_script, for_cache):
+ if not run_build_script:
+ return
+ if for_cache:
+ return
+
+ if args.build_script is None:
+ return
+
+ with complete_step('Copying in build script and sources'):
+ shutil.copy(args.build_script,
+ os.path.join(workspace, "root", "root", os.path.basename(args.build_script)))
+
+ if args.build_sources is not None:
+ target = os.path.join(workspace, "root", "root/src")
+ use_git = args.use_git_files
+ if use_git is None:
+ use_git = os.path.exists('.git') or os.path.exists(os.path.join(args.build_sources, '.git'))
+
+ if use_git:
+ copy_git_files(args.build_sources, target, git_files=args.git_files)
+ else:
+ ignore = shutil.ignore_patterns('.git', '.mkosi-*')
+ shutil.copytree(args.build_sources, target, symlinks=True, ignore=ignore)
+
+def install_build_dest(args, workspace, run_build_script, for_cache):
+ if run_build_script:
+ return
+ if for_cache:
+ return
+
+ if args.build_script is None:
+ return
+
+ with complete_step('Copying in build tree'):
+ enumerate_and_copy(os.path.join(workspace, "dest"), os.path.join(workspace, "root"))
+
+def make_read_only(args, workspace, for_cache):
+ if not args.read_only:
+ return
+ if for_cache:
+ return
+
+ if args.output_format not in (OutputFormat.raw_btrfs, OutputFormat.subvolume):
+ return
+
+ with complete_step('Marking root subvolume read-only'):
+ btrfs_subvol_make_ro(os.path.join(workspace, "root"))
+
+def make_tar(args, workspace, run_build_script, for_cache):
+
+ if run_build_script:
+ return None
+ if args.output_format != OutputFormat.tar:
+ return None
+ if for_cache:
+ return None
+
+ with complete_step('Creating archive'):
+ f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-")
+ subprocess.run(["tar", "-C", os.path.join(workspace, "root"),
+ "-c", "-J", "--xattrs", "--xattrs-include=*", "."],
+ stdout=f, check=True)
+
+ return f
+
+def make_squashfs(args, workspace, for_cache):
+ if args.output_format != OutputFormat.raw_squashfs:
+ return None
+ if for_cache:
+ return None
+
+ with complete_step('Creating squashfs file system'):
+ f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-squashfs")
+ subprocess.run(["mksquashfs", os.path.join(workspace, "root"), f.name, "-comp", "lz4", "-noappend"],
+ check=True)
+
+ return f
+
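+# Parse the output of "sfdisk --dump" into the list of partition lines
+# and the byte offset of the first free sector after the last partition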
+def read_partition_table(loopdev):
+
+ table = []
+ last_sector = 0
+
+ c = subprocess.run(["sfdisk", "--dump", loopdev], stdout=subprocess.PIPE, check=True)
+
+ in_body = False
+ for line in c.stdout.decode("utf-8").split('\n'):
+ stripped = line.strip()
+
+ if stripped == "": # empty line is where the body begins
+ in_body = True
+ continue
+ if not in_body:
+ continue
+
+ table.append(stripped)
+
+ name, rest = stripped.split(":", 1)
+ fields = rest.split(",")
+
+ start = None
+ size = None
+
+ for field in fields:
+ f = field.strip()
+
+ if f.startswith("start="):
+ start = int(f[6:])
+ if f.startswith("size="):
+ size = int(f[5:])
+
+ if start is not None and size is not None:
+ end = start + size
+ if end > last_sector:
+ last_sector = end
+
+ return table, last_sector * 512
+
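+# Append a new partition holding the given blob: grow the image file,
+# add a GPT entry via sfdisk and dd the blob into the new partition
+# (via dm-crypt if the root is to be encrypted); returns the blob size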
+def insert_partition(args, workspace, raw, loopdev, partno, blob, name, type_uuid, part_uuid=None):
+
+ if args.ran_sfdisk:
+ old_table, last_partition_sector = read_partition_table(loopdev)
+ else:
+ # No partition table yet? Then let's fake one...
+ old_table = []
+ last_partition_sector = GPT_HEADER_SIZE
+
+ blob_size = roundup512(os.stat(blob.name).st_size)
+ luks_extra = 2*1024*1024 if args.encrypt == "all" else 0
+ new_size = last_partition_sector + blob_size + luks_extra + GPT_FOOTER_SIZE
+
+ print_step("Resizing disk image to {}...".format(format_bytes(new_size)))
+
+ os.truncate(raw.name, new_size)
+ subprocess.run(["losetup", "--set-capacity", loopdev], check=True)
+
+ print_step("Inserting partition of {}...".format(format_bytes(blob_size)))
+
+ table = "label: gpt\n"
+
+ for t in old_table:
+ table += t + "\n"
+
+    if part_uuid is not None:
+        table += "uuid=" + str(part_uuid) + ", "
+
+ table += 'size={}, type={}, attrs=GUID:60, name="{}"\n'.format((blob_size + luks_extra) // 512, type_uuid, name)
+
+ print(table)
+
+ subprocess.run(["sfdisk", "--color=never", loopdev], input=table.encode("utf-8"), check=True)
+ subprocess.run(["sync"])
+
+ print_step("Writing partition...")
+
+ if args.root_partno == partno:
+ luks_format_root(args, loopdev, False, True)
+ dev = luks_setup_root(args, loopdev, False, True)
+ else:
+ dev = None
+
+ try:
+ subprocess.run(["dd", "if=" + blob.name, "of=" + (dev if dev is not None else partition(loopdev, partno))], check=True)
+ finally:
+ luks_close(dev, "Closing LUKS root partition")
+
+ args.ran_sfdisk = True
+
+ return blob_size
+
+def insert_squashfs(args, workspace, raw, loopdev, squashfs, for_cache):
+ if args.output_format != OutputFormat.raw_squashfs:
+ return
+ if for_cache:
+ return
+
+ with complete_step('Inserting squashfs root partition'):
+ args.root_size = insert_partition(args, workspace, raw, loopdev, args.root_partno, squashfs,
+ "Root Partition", GPT_ROOT_NATIVE)
+
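+# Generate dm-verity hash data for the root device; returns the hash
+# tree as a temporary file plus the root hash that later ends up in
+# the kernel command line and the partition UUIDs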
+def make_verity(args, workspace, dev, run_build_script, for_cache):
+
+ if run_build_script or not args.verity:
+ return None, None
+ if for_cache:
+ return None, None
+
+ with complete_step('Generating verity hashes'):
+ f = tempfile.NamedTemporaryFile(dir=os.path.dirname(args.output), prefix=".mkosi-")
+ c = subprocess.run(["veritysetup", "format", dev, f.name],
+ stdout=subprocess.PIPE, check=True)
+
+ for line in c.stdout.decode("utf-8").split('\n'):
+ if line.startswith("Root hash:"):
+ root_hash = line[10:].strip()
+ return f, root_hash
+
+ raise ValueError('Root hash not found')
+
+def insert_verity(args, workspace, raw, loopdev, verity, root_hash, for_cache):
+
+ if verity is None:
+ return
+ if for_cache:
+ return
+
+    # Use the final 128 bits of the root hash as the partition UUID of the verity partition
+ u = uuid.UUID(root_hash[-32:])
+
+ with complete_step('Inserting verity partition'):
+ insert_partition(args, workspace, raw, loopdev, args.verity_partno, verity,
+ "Verity Partition", GPT_ROOT_NATIVE_VERITY, u)
+
+def patch_root_uuid(args, loopdev, root_hash, for_cache):
+
+ if root_hash is None:
+ return
+ if for_cache:
+ return
+
+    # Use the first 128 bits of the root hash as the partition UUID of the root partition
+ u = uuid.UUID(root_hash[:32])
+
+ with complete_step('Patching root partition UUID'):
+ subprocess.run(["sfdisk", "--part-uuid", loopdev, str(args.root_partno), str(u)],
+ check=True)
+
+def install_unified_kernel(args, workspace, run_build_script, for_cache, root_hash):
+
+    # Iterates through all kernel versions included in the image and
+    # generates a combined kernel+initrd+cmdline+osrelease EFI file
+    # for each, placing it in the /EFI/Linux directory of the
+ # ESP. sd-boot iterates through them and shows them in the
+ # menu. These "unified" single-file images have the benefit that
+ # they can be signed like normal EFI binaries, and can encode
+ # everything necessary to boot a specific root device, including
+ # the root hash.
+
+ if not args.bootable:
+ return
+ if for_cache:
+ return
+
+ if args.distribution not in (Distribution.fedora, Distribution.mageia):
+ return
+
+ with complete_step("Generating combined kernel + initrd boot file"):
+
+ cmdline = args.kernel_commandline
+ if root_hash is not None:
+ cmdline += " roothash=" + root_hash
+
+ for kver in os.scandir(os.path.join(workspace, "root", "usr/lib/modules")):
+ if not kver.is_dir():
+ continue
+
+ boot_binary = "/efi/EFI/Linux/linux-" + kver.name
+ if root_hash is not None:
+ boot_binary += "-" + root_hash
+ boot_binary += ".efi"
+
+ dracut = ["/usr/bin/dracut",
+ "-v",
+ "--no-hostonly",
+ "--uefi",
+ "--kver", kver.name,
+ "--kernel-cmdline", cmdline ]
+
+ # Temporary fix until dracut includes these in the image anyway
+ dracut += ("-i",) + ("/usr/lib/systemd/system/systemd-volatile-root.service",)*2 + \
+ ("-i",) + ("/usr/lib/systemd/systemd-volatile-root",)*2 + \
+ ("-i",) + ("/usr/lib/systemd/systemd-veritysetup",)*2 + \
+ ("-i",) + ("/usr/lib/systemd/system-generators/systemd-veritysetup-generator",)*2
+
+ if args.output_format == OutputFormat.raw_squashfs:
+ dracut += [ '--add-drivers', 'squashfs' ]
+
+ dracut += [ boot_binary ]
+
+        run_workspace_command(args, workspace, *dracut)
+
+def secure_boot_sign(args, workspace, run_build_script, for_cache):
+
+ if run_build_script:
+ return
+ if not args.bootable:
+ return
+ if not args.secure_boot:
+ return
+ if for_cache:
+ return
+
+ for path, dirnames, filenames in os.walk(os.path.join(workspace, "root", "efi")):
+ for i in filenames:
+ if not i.endswith(".efi") and not i.endswith(".EFI"):
+ continue
+
+ with complete_step("Signing EFI binary {} in ESP".format(i)):
+ p = os.path.join(path, i)
+
+ subprocess.run(["sbsign",
+ "--key", args.secure_boot_key,
+ "--cert", args.secure_boot_certificate,
+ "--output", p + ".signed",
+ p], check=True)
+
+ os.rename(p + ".signed", p)
+
+def xz_output(args, raw):
+ if args.output_format not in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt, OutputFormat.raw_squashfs):
+ return raw
+
+ if not args.xz:
+ return raw
+
+ with complete_step('Compressing image file'):
+ f = tempfile.NamedTemporaryFile(prefix=".mkosi-", dir=os.path.dirname(args.output))
+ subprocess.run(["xz", "-c", raw.name], stdout=f, check=True)
+
+ return f
+
+def write_root_hash_file(args, root_hash):
+ if root_hash is None:
+ return None
+
+ with complete_step('Writing .roothash file'):
+        f = tempfile.NamedTemporaryFile(mode='w+b', prefix='.mkosi-',
+ dir=os.path.dirname(args.output_root_hash_file))
+ f.write((root_hash + "\n").encode())
+
+ return f
+
+def copy_nspawn_settings(args):
+ if args.nspawn_settings is None:
+ return None
+
+ with complete_step('Copying nspawn settings file'):
+ f = tempfile.NamedTemporaryFile(mode="w+b", prefix=".mkosi-",
+ dir=os.path.dirname(args.output_nspawn_settings))
+
+ with open(args.nspawn_settings, "rb") as c:
+ f.write(c.read())
+
+ return f
+
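+# Append a sha256sum(1)-compatible line for the given file object to
+# the checksum file, reading in 16 MiB chunks to bound memory use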
+def hash_file(of, sf, fname):
+ bs = 16*1024**2
+ h = hashlib.sha256()
+
+ sf.seek(0)
+ buf = sf.read(bs)
+ while len(buf) > 0:
+ h.update(buf)
+ buf = sf.read(bs)
+
+ of.write(h.hexdigest() + " *" + fname + "\n")
+
+def calculate_sha256sum(args, raw, tar, root_hash_file, nspawn_settings):
+ if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
+ return None
+
+ if not args.checksum:
+ return None
+
+ with complete_step('Calculating SHA256SUMS'):
+ f = tempfile.NamedTemporaryFile(mode="w+", prefix=".mkosi-", encoding="utf-8",
+ dir=os.path.dirname(args.output_checksum))
+
+ if raw is not None:
+ hash_file(f, raw, os.path.basename(args.output))
+ if tar is not None:
+ hash_file(f, tar, os.path.basename(args.output))
+ if root_hash_file is not None:
+ hash_file(f, root_hash_file, os.path.basename(args.output_root_hash_file))
+ if nspawn_settings is not None:
+ hash_file(f, nspawn_settings, os.path.basename(args.output_nspawn_settings))
+
+ return f
+
+def calculate_signature(args, checksum):
+ if not args.sign:
+ return None
+
+ if checksum is None:
+ return None
+
+ with complete_step('Signing SHA256SUMS'):
+ f = tempfile.NamedTemporaryFile(mode="wb", prefix=".mkosi-",
+ dir=os.path.dirname(args.output_signature))
+
+ cmdline = ["gpg", "--detach-sign"]
+
+ if args.key is not None:
+ cmdline += ["--default-key", args.key]
+
+ checksum.seek(0)
+ subprocess.run(cmdline, stdin=checksum, stdout=f, check=True)
+
+ return f
+
+def save_cache(args, workspace, raw, cache_path):
+
+ if cache_path is None:
+ return
+
+ with complete_step('Installing cache copy ',
+ 'Successfully installed cache copy ' + cache_path):
+
+ if args.output_format in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt):
+ os.chmod(raw, 0o666 & ~args.original_umask)
+ shutil.move(raw, cache_path)
+ else:
+ shutil.move(os.path.join(workspace, "root"), cache_path)
+
+def link_output(args, workspace, raw, tar):
+ with complete_step('Linking image file',
+ 'Successfully linked ' + args.output):
+ if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
+ os.rename(os.path.join(workspace, "root"), args.output)
+ elif args.output_format in (OutputFormat.raw_btrfs, OutputFormat.raw_gpt, OutputFormat.raw_squashfs):
+ os.chmod(raw, 0o666 & ~args.original_umask)
+ os.link(raw, args.output)
+ else:
+ os.chmod(tar, 0o666 & ~args.original_umask)
+ os.link(tar, args.output)
+
+def link_output_nspawn_settings(args, path):
+ if path is None:
+ return
+
+ with complete_step('Linking nspawn settings file',
+ 'Successfully linked ' + args.output_nspawn_settings):
+ os.chmod(path, 0o666 & ~args.original_umask)
+ os.link(path, args.output_nspawn_settings)
+
+def link_output_checksum(args, checksum):
+ if checksum is None:
+ return
+
+ with complete_step('Linking SHA256SUMS file',
+ 'Successfully linked ' + args.output_checksum):
+ os.chmod(checksum, 0o666 & ~args.original_umask)
+ os.link(checksum, args.output_checksum)
+
+def link_output_root_hash_file(args, root_hash_file):
+ if root_hash_file is None:
+ return
+
+ with complete_step('Linking .roothash file',
+ 'Successfully linked ' + args.output_root_hash_file):
+ os.chmod(root_hash_file, 0o666 & ~args.original_umask)
+ os.link(root_hash_file, args.output_root_hash_file)
+
+def link_output_signature(args, signature):
+ if signature is None:
+ return
+
+ with complete_step('Linking SHA256SUMS.gpg file',
+ 'Successfully linked ' + args.output_signature):
+ os.chmod(signature, 0o666 & ~args.original_umask)
+ os.link(signature, args.output_signature)
+
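+# Recursively compute the disk usage of a directory tree, based on
+# allocated 512-byte blocks rather than apparent file sizes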
+def dir_size(path):
+    total = 0
+    for entry in os.scandir(path):
+        if entry.is_symlink():
+            # We can ignore symlinks because they either point into our tree,
+            # in which case we'll include the size of target directory anyway,
+            # or outside, in which case we don't need to.
+            continue
+        elif entry.is_file():
+            total += entry.stat().st_blocks * 512
+        elif entry.is_dir():
+            total += dir_size(entry.path)
+    return total
+
+def print_output_size(args):
+ if args.output_format in (OutputFormat.directory, OutputFormat.subvolume):
+ print_step("Resulting image size is " + format_bytes(dir_size(args.output)) + ".")
+ else:
+ st = os.stat(args.output)
+ print_step("Resulting image size is " + format_bytes(st.st_size) + ", consumes " + format_bytes(st.st_blocks * 512) + ".")
+
+def setup_cache(args):
+ with complete_step('Setting up package cache',
+ 'Setting up package cache {} complete') as output:
+ if args.cache_path is None:
+ d = tempfile.TemporaryDirectory(dir=os.path.dirname(args.output), prefix=".mkosi-")
+ args.cache_path = d.name
+ else:
+ os.makedirs(args.cache_path, 0o755, exist_ok=True)
+ d = None
+ output.append(args.cache_path)
+
+ return d
+
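+# argparse action that accumulates comma-separated values across
+# repeated uses of an option, e.g. "-p vim,git -p gcc" yields
+# ['vim', 'git', 'gcc']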
+class PackageAction(argparse.Action):
+ def __call__(self, parser, namespace, values, option_string=None):
+ l = getattr(namespace, self.dest)
+ if l is None:
+ l = []
+ l.extend(values.split(","))
+ setattr(namespace, self.dest, l)
+
+def parse_args():
+ parser = argparse.ArgumentParser(description='Build Legacy-Free OS Images', add_help=False)
+
+ group = parser.add_argument_group("Commands")
+ group.add_argument("verb", choices=("build", "clean", "help", "summary"), nargs='?', default="build", help='Operation to execute')
+ group.add_argument('-h', '--help', action='help', help="Show this help")
+ group.add_argument('--version', action='version', version='%(prog)s ' + __version__)
+
+ group = parser.add_argument_group("Distribution")
+ group.add_argument('-d', "--distribution", choices=Distribution.__members__, help='Distribution to install')
+ group.add_argument('-r', "--release", help='Distribution release to install')
+ group.add_argument('-m', "--mirror", help='Distribution mirror to use')
+ group.add_argument("--repositories", action=PackageAction, dest='repositories', help='Repositories to use', metavar='REPOS')
+
+ group = parser.add_argument_group("Output")
+ group.add_argument('-t', "--format", dest='output_format', choices=OutputFormat.__members__, help='Output Format')
+ group.add_argument('-o', "--output", help='Output image path', metavar='PATH')
+ group.add_argument('-f', "--force", action='count', dest='force_count', default=0, help='Remove existing image file before operation')
+ group.add_argument('-b', "--bootable", type=parse_boolean, nargs='?', const=True,
+ help='Make image bootable on EFI (only raw_gpt, raw_btrfs, raw_squashfs)')
+ group.add_argument("--secure-boot", action='store_true', help='Sign the resulting kernel/initrd image for UEFI SecureBoot')
+ group.add_argument("--secure-boot-key", help="UEFI SecureBoot private key in PEM format", metavar='PATH')
+ group.add_argument("--secure-boot-certificate", help="UEFI SecureBoot certificate in X509 format", metavar='PATH')
+ group.add_argument("--read-only", action='store_true', help='Make root volume read-only (only raw_gpt, raw_btrfs, subvolume, implied on raw_squashs)')
+ group.add_argument("--encrypt", choices=("all", "data"), help='Encrypt everything except: ESP ("all") or ESP and root ("data")')
+ group.add_argument("--verity", action='store_true', help='Add integrity partition (implies --read-only)')
+ group.add_argument("--compress", action='store_true', help='Enable compression in file system (only raw_btrfs, subvolume)')
+ group.add_argument("--xz", action='store_true', help='Compress resulting image with xz (only raw_gpt, raw_btrfs, raw_squashfs, implied on tar)')
+ group.add_argument('-i', "--incremental", action='store_true', help='Make use of and generate intermediary cache images')
+
+ group = parser.add_argument_group("Packages")
+ group.add_argument('-p', "--package", action=PackageAction, dest='packages', help='Add an additional package to the OS image', metavar='PACKAGE')
+ group.add_argument("--with-docs", action='store_true', help='Install documentation (only Fedora and Mageia)')
+ group.add_argument("--cache", dest='cache_path', help='Package cache path', metavar='PATH')
+ group.add_argument("--extra-tree", action='append', dest='extra_trees', help='Copy an extra tree on top of image', metavar='PATH')
+ group.add_argument("--build-script", help='Build script to run inside image', metavar='PATH')
+ group.add_argument("--build-sources", help='Path for sources to build', metavar='PATH')
+ group.add_argument("--build-dir", help='Path to use as persistent build directory', metavar='PATH')
+ group.add_argument("--build-package", action=PackageAction, dest='build_packages', help='Additional packages needed for build script', metavar='PACKAGE')
+ group.add_argument("--postinst-script", help='Post installation script to run inside image', metavar='PATH')
+ group.add_argument('--use-git-files', type=parse_boolean,
+ help='Ignore any files that git itself ignores (default: guess)')
+ group.add_argument('--git-files', choices=('cached', 'others'),
+ help='Whether to include untracked files (default: others)')
+ group.add_argument("--with-network", action='store_true', help='Run build and postinst scripts with network access (instead of private network)')
+ group.add_argument("--settings", dest='nspawn_settings', help='Add in .spawn settings file', metavar='PATH')
+
+ group = parser.add_argument_group("Partitions")
+ group.add_argument("--root-size", help='Set size of root partition (only raw_gpt, raw_btrfs)', metavar='BYTES')
+ group.add_argument("--esp-size", help='Set size of EFI system partition (only raw_gpt, raw_btrfs, raw_squashfs)', metavar='BYTES')
+ group.add_argument("--swap-size", help='Set size of swap partition (only raw_gpt, raw_btrfs, raw_squashfs)', metavar='BYTES')
+ group.add_argument("--home-size", help='Set size of /home partition (only raw_gpt, raw_squashfs)', metavar='BYTES')
+ group.add_argument("--srv-size", help='Set size of /srv partition (only raw_gpt, raw_squashfs)', metavar='BYTES')
+
+ group = parser.add_argument_group("Validation (only raw_gpt, raw_btrfs, raw_squashfs, tar)")
+ group.add_argument("--checksum", action='store_true', help='Write SHA256SUMS file')
+ group.add_argument("--sign", action='store_true', help='Write and sign SHA256SUMS file')
+ group.add_argument("--key", help='GPG key to use for signing')
+ group.add_argument("--password", help='Set the root password')
+
+ group = parser.add_argument_group("Additional Configuration")
+ group.add_argument('-C', "--directory", help='Change to specified directory before doing anything', metavar='PATH')
+ group.add_argument("--default", dest='default_path', help='Read configuration data from file', metavar='PATH')
+ group.add_argument("--kernel-commandline", help='Set the kernel command line (only bootable images)')
+ group.add_argument("--hostname", help="Set hostname")
+
+ try:
+ argcomplete.autocomplete(parser)
+ except NameError:
+ pass
+
+ args = parser.parse_args()
+
+ if args.verb == "help":
+ parser.print_help()
+ sys.exit(0)
+
+ return args
+
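+# Parse a size with an optional G/M/K (binary) suffix into bytes,
+# e.g. parse_bytes("3G") == 3 * 1024**3; the result must be a positive
+# multiple of 512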
+def parse_bytes(bytes_string):
+    if bytes_string is None:
+        return None
+
+    if bytes_string.endswith('G'):
+        factor = 1024**3
+    elif bytes_string.endswith('M'):
+        factor = 1024**2
+    elif bytes_string.endswith('K'):
+        factor = 1024
+    else:
+        factor = 1
+
+    if factor > 1:
+        bytes_string = bytes_string[:-1]
+
+    result = int(bytes_string) * factor
+    if result <= 0:
+        raise ValueError("Size out of range")
+
+    if result % 512 != 0:
+        raise ValueError("Size not a multiple of 512")
+
+    return result
+
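+# Derive the distribution and release to build from the host's
+# os-release ID= and VERSION_ID= fields, if available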
+def detect_distribution():
+    try:
+        f = open("/etc/os-release")
+    except IOError:
+        try:
+            f = open("/usr/lib/os-release")
+        except IOError:
+            return None, None
+
+    dist_id = None
+    version_id = None
+
+    with f:
+        for ln in f:
+            if ln.startswith("ID="):
+                dist_id = ln[3:].strip()
+            if ln.startswith("VERSION_ID="):
+                version_id = ln[11:].strip()
+
+    d = Distribution.__members__.get(dist_id, None)
+    return d, version_id
+
+def unlink_try_hard(path):
+    try:
+        os.unlink(path)
+    except OSError:
+        pass
+
+    try:
+        btrfs_subvol_delete(path)
+    except Exception:
+        pass
+
+    try:
+        shutil.rmtree(path)
+    except OSError:
+        pass
+
+def empty_directory(path):
+
+ for f in os.listdir(path):
+ unlink_try_hard(os.path.join(path, f))
+
+def unlink_output(args):
+ if not args.force and args.verb != "clean":
+ return
+
+ with complete_step('Removing output files'):
+ unlink_try_hard(args.output)
+
+ if args.checksum:
+ unlink_try_hard(args.output_checksum)
+
+ if args.verity:
+ unlink_try_hard(args.output_root_hash_file)
+
+ if args.sign:
+ unlink_try_hard(args.output_signature)
+
+ if args.nspawn_settings is not None:
+ unlink_try_hard(args.output_nspawn_settings)
+
+    # We remove the cache if the user either passed --force twice, or
+    # invoked "clean" with --force passed once
+ if args.verb == "clean":
+ remove_cache = args.force_count > 0
+ else:
+ remove_cache = args.force_count > 1
+
+ if remove_cache:
+
+ if args.cache_pre_dev is not None or args.cache_pre_inst is not None:
+ with complete_step('Removing incremental cache files'):
+ if args.cache_pre_dev is not None:
+ unlink_try_hard(args.cache_pre_dev)
+
+ if args.cache_pre_inst is not None:
+ unlink_try_hard(args.cache_pre_inst)
+
+ if args.build_dir is not None:
+ with complete_step('Clearing out build directory'):
+ empty_directory(args.build_dir)
+
+def parse_boolean(s):
+ "Parse 1/true/yes as true and 0/false/no as false"
+ if s in {"1", "true", "yes"}:
+ return True
+
+ if s in {"0", "false", "no"}:
+ return False
+
+ raise ValueError("Invalid literal for bool(): {!r}".format(s))
+
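+# Apply a single key from a mkosi.default section to args, without
+# overriding anything already set on the command line; a key of None
+# merely probes whether the section name is known. Returns False for
+# unknown sections or keys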
+def process_setting(args, section, key, value):
+ if section == "Distribution":
+ if key == "Distribution":
+ if args.distribution is None:
+ args.distribution = value
+ elif key == "Release":
+ if args.release is None:
+ args.release = value
+ elif key == "Repositories":
+            list_value = value if isinstance(value, list) else value.split()
+ if args.repositories is None:
+ args.repositories = list_value
+ else:
+ args.repositories.extend(list_value)
+ elif key == "Mirror":
+ if args.mirror is None:
+ args.mirror = value
+ elif key is None:
+ return True
+ else:
+ return False
+ elif section == "Output":
+ if key == "Format":
+ if args.output_format is None:
+ args.output_format = value
+ elif key == "Output":
+ if args.output is None:
+ args.output = value
+ elif key == "Force":
+ if not args.force:
+ args.force = parse_boolean(value)
+ elif key == "Bootable":
+ if args.bootable is None:
+ args.bootable = parse_boolean(value)
+ elif key == "KernelCommandLine":
+ if args.kernel_commandline is None:
+ args.kernel_commandline = value
+ elif key == "SecureBoot":
+ if not args.secure_boot:
+ args.secure_boot = parse_boolean(value)
+ elif key == "SecureBootKey":
+ if args.secure_boot_key is None:
+ args.secure_boot_key = value
+ elif key == "SecureBootCertificate":
+ if args.secure_boot_certificate is None:
+ args.secure_boot_certificate = value
+ elif key == "ReadOnly":
+ if not args.read_only:
+ args.read_only = parse_boolean(value)
+ elif key == "Encrypt":
+ if args.encrypt is None:
+ if value not in ("all", "data"):
+ raise ValueError("Invalid encryption setting: "+ value)
+ args.encrypt = value
+ elif key == "Verity":
+ if not args.verity:
+ args.verity = parse_boolean(value)
+ elif key == "Compress":
+ if not args.compress:
+ args.compress = parse_boolean(value)
+ elif key == "XZ":
+ if not args.xz:
+ args.xz = parse_boolean(value)
+ elif key == "Hostname":
+ if not args.hostname:
+ args.hostname = value
+ elif key is None:
+ return True
+ else:
+ return False
+ elif section == "Packages":
+ if key == "Packages":
+            list_value = value if isinstance(value, list) else value.split()
+ if args.packages is None:
+ args.packages = list_value
+ else:
+ args.packages.extend(list_value)
+ elif key == "WithDocs":
+ if not args.with_docs:
+ args.with_docs = parse_boolean(value)
+ elif key == "Cache":
+ if args.cache_path is None:
+ args.cache_path = value
+ elif key == "ExtraTrees":
+            list_value = value if isinstance(value, list) else value.split()
+ if args.extra_trees is None:
+ args.extra_trees = list_value
+ else:
+ args.extra_trees.extend(list_value)
+ elif key == "BuildScript":
+ if args.build_script is None:
+ args.build_script = value
+ elif key == "BuildSources":
+ if args.build_sources is None:
+ args.build_sources = value
+ elif key == "BuildDirectory":
+ if args.build_dir is None:
+ args.build_dir = value
+ elif key == "BuildPackages":
+            list_value = value if isinstance(value, list) else value.split()
+ if args.build_packages is None:
+ args.build_packages = list_value
+ else:
+ args.build_packages.extend(list_value)
+ elif key == "PostInstallationScript":
+ if args.postinst_script is None:
+ args.postinst_script = value
+ elif key == "WithNetwork":
+ if not args.with_network:
+ args.with_network = parse_boolean(value)
+ elif key == "NSpawnSettings":
+ if args.nspawn_settings is None:
+ args.nspawn_settings = value
+ elif key is None:
+ return True
+ else:
+ return False
+ elif section == "Partitions":
+ if key == "RootSize":
+ if args.root_size is None:
+ args.root_size = value
+ elif key == "ESPSize":
+ if args.esp_size is None:
+ args.esp_size = value
+ elif key == "SwapSize":
+ if args.swap_size is None:
+ args.swap_size = value
+ elif key == "HomeSize":
+ if args.home_size is None:
+ args.home_size = value
+ elif key == "SrvSize":
+ if args.srv_size is None:
+ args.srv_size = value
+ elif key is None:
+ return True
+ else:
+ return False
+ elif section == "Validation":
+ if key == "CheckSum":
+ if not args.checksum:
+ args.checksum = parse_boolean(value)
+ elif key == "Sign":
+ if not args.sign:
+ args.sign = parse_boolean(value)
+ elif key == "Key":
+ if args.key is None:
+ args.key = value
+ elif key == "Password":
+ if args.password is None:
+ args.password = value
+ elif key is None:
+ return True
+ else:
+ return False
+ else:
+ return False
+
+ return True
+
+def load_defaults_file(fname, options):
+    try:
+        f = open(fname, "r")
+    except FileNotFoundError:
+        return
+
+    config = configparser.ConfigParser(delimiters='=')
+    config.optionxform = str
+    with f:
+        config.read_file(f)
+
+ # this is used only for validation
+ args = parse_args()
+
+ for section in config.sections():
+ if not process_setting(args, section, None, None):
+ sys.stderr.write("Unknown section in {}, ignoring: [{}]\n".format(fname, section))
+ continue
+ if section not in options:
+ options[section] = {}
+ for key in config[section]:
+ if not process_setting(args, section, key, config[section][key]):
+ sys.stderr.write("Unknown key in section [{}] in {}, ignoring: {}=\n".format(section, fname, key))
+ continue
+ if section == "Packages" and key in ["Packages", "ExtraTrees", "BuildPackages"]:
+ if key in options[section]:
+ options[section][key].extend(config[section][key].split())
+ else:
+ options[section][key] = config[section][key].split()
+ else:
+ options[section][key] = config[section][key]
+ return options
+
+def load_defaults(args):
+ fname = "mkosi.default" if args.default_path is None else args.default_path
+
+ config = {}
+ load_defaults_file(fname, config)
+
+ defaults_dir = fname + '.d'
+ if os.path.isdir(defaults_dir):
+ for defaults_file in sorted(os.listdir(defaults_dir)):
+ defaults_path = os.path.join(defaults_dir, defaults_file)
+ if os.path.isfile(defaults_path):
+ load_defaults_file(defaults_path, config)
+
+ for section in config.keys():
+ for key in config[section]:
+ process_setting(args, section, key, config[section][key])
+
+def find_nspawn_settings(args):
+ if args.nspawn_settings is not None:
+ return
+
+ if os.path.exists("mkosi.nspawn"):
+ args.nspawn_settings = "mkosi.nspawn"
+
+def find_extra(args):
+ if os.path.exists("mkosi.extra"):
+ if args.extra_trees is None:
+ args.extra_trees = ["mkosi.extra"]
+ else:
+ args.extra_trees.append("mkosi.extra")
+
+def find_cache(args):
+
+ if args.cache_path is not None:
+ return
+
+ if os.path.exists("mkosi.cache/"):
+ args.cache_path = "mkosi.cache/" + args.distribution.name + "~" + args.release
+
+def find_build_script(args):
+ if args.build_script is not None:
+ return
+
+ if os.path.exists("mkosi.build"):
+ args.build_script = "mkosi.build"
+
+def find_build_sources(args):
+ if args.build_sources is not None:
+ return
+
+ args.build_sources = os.getcwd()
+
+def find_build_dir(args):
+ if args.build_dir is not None:
+ return
+
+ if os.path.exists("mkosi.builddir/"):
+ args.build_dir = "mkosi.builddir"
+
+def find_postinst_script(args):
+ if args.postinst_script is not None:
+ return
+
+ if os.path.exists("mkosi.postinst"):
+ args.postinst_script = "mkosi.postinst"
+
+def find_passphrase(args):
+
+ if args.encrypt is None:
+ args.passphrase = None
+ return
+
+ try:
+ passphrase_mode = os.stat('mkosi.passphrase').st_mode & (stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
+ if (passphrase_mode & stat.S_IRWXU > 0o600) or (passphrase_mode & (stat.S_IRWXG | stat.S_IRWXO) > 0):
+ die("Permissions of 'mkosi.passphrase' of '{}' are too open. When creating passphrase files please make sure to choose an access mode that restricts access to the owner only. Aborting.\n".format(oct(passphrase_mode)))
+
+ args.passphrase = { 'type': 'file', 'content': 'mkosi.passphrase' }
+
+ except FileNotFoundError:
+ while True:
+ passphrase = getpass.getpass("Please enter passphrase: ")
+ passphrase_confirmation = getpass.getpass("Passphrase confirmation: ")
+ if passphrase == passphrase_confirmation:
+ args.passphrase = { 'type': 'stdin', 'content': passphrase }
+ break
+
+ sys.stderr.write("Passphrase doesn't match confirmation. Please try again.\n")
+
+def find_secure_boot(args):
+ if not args.secure_boot:
+ return
+
+ if args.secure_boot_key is None:
+ if os.path.exists("mkosi.secure-boot.key"):
+ args.secure_boot_key = "mkosi.secure-boot.key"
+
+ if args.secure_boot_certificate is None:
+ if os.path.exists("mkosi.secure-boot.crt"):
+ args.secure_boot_certificate = "mkosi.secure-boot.crt"
+
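+# Strip any trailing .xz/.raw/.tar suffixes from an output path, e.g.
+# "image.raw.xz" becomes "image"; used to derive the .nspawn and
+# .roothash file names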
+def strip_suffixes(path):
+ t = path
+ while True:
+ if t.endswith(".xz"):
+ t = t[:-3]
+ elif t.endswith(".raw"):
+ t = t[:-4]
+ elif t.endswith(".tar"):
+ t = t[:-4]
+ else:
+ break
+
+ return t
+
+def build_nspawn_settings_path(path):
+ return strip_suffixes(path) + ".nspawn"
+
+def build_root_hash_file_path(path):
+ return strip_suffixes(path) + ".roothash"
+
+def load_args():
+ args = parse_args()
+
+ if args.directory is not None:
+ os.chdir(args.directory)
+
+ load_defaults(args)
+ find_nspawn_settings(args)
+ find_extra(args)
+ find_build_script(args)
+ find_build_sources(args)
+ find_build_dir(args)
+ find_postinst_script(args)
+ find_passphrase(args)
+ find_secure_boot(args)
+
+ args.force = args.force_count > 0
+
+ if args.output_format is None:
+ args.output_format = OutputFormat.raw_gpt
+ else:
+ args.output_format = OutputFormat[args.output_format]
+
+ if args.distribution is not None:
+ args.distribution = Distribution[args.distribution]
+
+ if args.distribution is None or args.release is None:
+ d, r = detect_distribution()
+
+ if args.distribution is None:
+ args.distribution = d
+
+ if args.distribution == d and args.release is None:
+ args.release = r
+
+ if args.distribution is None:
+ die("Couldn't detect distribution.")
+
+    if args.release is None:
+        if args.distribution == Distribution.fedora:
+            args.release = "25"
+        elif args.distribution == Distribution.mageia:
+            args.release = "6"
+        elif args.distribution == Distribution.debian:
+            args.release = "unstable"
+        elif args.distribution == Distribution.ubuntu:
+            args.release = "yakkety"
+        elif args.distribution == Distribution.opensuse:
+            args.release = "tumbleweed"
+
+ find_cache(args)
+
+ if args.mirror is None:
+ if args.distribution == Distribution.fedora:
+ args.mirror = None
+ elif args.distribution == Distribution.debian:
+ args.mirror = "http://deb.debian.org/debian"
+ elif args.distribution == Distribution.ubuntu:
+ args.mirror = "http://archive.ubuntu.com/ubuntu"
+ if platform.machine() == "aarch64":
+ args.mirror = "http://ports.ubuntu.com/"
+ elif args.distribution == Distribution.arch:
+ args.mirror = "https://mirrors.kernel.org/archlinux"
+ if platform.machine() == "aarch64":
+ args.mirror = "http://mirror.archlinuxarm.org"
+ elif args.distribution == Distribution.opensuse:
+ args.mirror = "https://download.opensuse.org"
+
+ if args.bootable:
+ if args.distribution == Distribution.ubuntu:
+ die("Bootable images are currently not supported on Ubuntu.")
+
+ if args.output_format in (OutputFormat.directory, OutputFormat.subvolume, OutputFormat.tar):
+ die("Directory, subvolume and tar images cannot be booted.")
+
+ if args.encrypt is not None:
+ if args.output_format not in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+ die("Encryption is only supported for raw gpt, btrfs or squashfs images.")
+
+ if args.encrypt == "data" and args.output_format == OutputFormat.raw_btrfs:
+ die("'data' encryption mode not supported on btrfs, use 'all' instead.")
+
+ if args.encrypt == "all" and args.verity:
+ die("'all' encryption mode may not be combined with Verity.")
+
+ if args.sign:
+ args.checksum = True
+
+ if args.output is None:
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+ if args.xz:
+ args.output = "image.raw.xz"
+ else:
+ args.output = "image.raw"
+ elif args.output_format == OutputFormat.tar:
+ args.output = "image.tar.xz"
+ else:
+ args.output = "image"
+
+ if args.incremental or args.verb == "clean":
+ args.cache_pre_dev = args.output + ".cache-pre-dev"
+ args.cache_pre_inst = args.output + ".cache-pre-inst"
+ else:
+ args.cache_pre_dev = None
+ args.cache_pre_inst = None
+
+ args.output = os.path.abspath(args.output)
+
+ if args.output_format == OutputFormat.tar:
+ args.xz = True
+
+ if args.output_format == OutputFormat.raw_squashfs:
+ args.read_only = True
+ args.compress = True
+ args.root_size = None
+
+ if args.verity:
+ args.read_only = True
+ args.output_root_hash_file = build_root_hash_file_path(args.output)
+
+ if args.checksum:
+ args.output_checksum = os.path.join(os.path.dirname(args.output), "SHA256SUMS")
+
+ if args.sign:
+ args.output_signature = os.path.join(os.path.dirname(args.output), "SHA256SUMS.gpg")
+
+ if args.nspawn_settings is not None:
+ args.nspawn_settings = os.path.abspath(args.nspawn_settings)
+ args.output_nspawn_settings = build_nspawn_settings_path(args.output)
+
+ if args.build_script is not None:
+ args.build_script = os.path.abspath(args.build_script)
+
+ if args.build_sources is not None:
+ args.build_sources = os.path.abspath(args.build_sources)
+
+ if args.build_dir is not None:
+ args.build_dir = os.path.abspath(args.build_dir)
+
+ if args.postinst_script is not None:
+ args.postinst_script = os.path.abspath(args.postinst_script)
+
+ if args.extra_trees is not None:
+ for i in range(len(args.extra_trees)):
+ args.extra_trees[i] = os.path.abspath(args.extra_trees[i])
+
+ args.root_size = parse_bytes(args.root_size)
+ args.home_size = parse_bytes(args.home_size)
+ args.srv_size = parse_bytes(args.srv_size)
+ args.esp_size = parse_bytes(args.esp_size)
+ args.swap_size = parse_bytes(args.swap_size)
+
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs) and args.root_size is None:
+ args.root_size = 1024*1024*1024
+
+ if args.bootable and args.esp_size is None:
+ args.esp_size = 256*1024*1024
+
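+    # The size of the verity partition is only known once the hash tree
+    # has been generated, hence leave it unset for now.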
+ args.verity_size = None
+
+ if args.bootable and args.kernel_commandline is None:
+ args.kernel_commandline = "rhgb quiet selinux=0 audit=0 rw"
+
+ if args.secure_boot_key is not None:
+ args.secure_boot_key = os.path.abspath(args.secure_boot_key)
+
+ if args.secure_boot_certificate is not None:
+ args.secure_boot_certificate = os.path.abspath(args.secure_boot_certificate)
+
+ if args.secure_boot:
+ if args.secure_boot_key is None:
+ die("UEFI SecureBoot enabled, but couldn't find private key. (Consider placing it in mkosi.secure-boot.key?)")
+
+ if args.secure_boot_certificate is None:
+ die("UEFI SecureBoot enabled, but couldn't find certificate. (Consider placing it in mkosi.secure-boot.crt?)")
+
+ return args
+
+def check_output(args):
+ for f in (args.output,
+ args.output_checksum if args.checksum else None,
+ args.output_signature if args.sign else None,
+ args.output_nspawn_settings if args.nspawn_settings is not None else None,
+ args.output_root_hash_file if args.verity else None):
+
+ if f is None:
+ continue
+
+ if os.path.exists(f):
+ die("Output file " + f + " exists already. (Consider invocation with --force.)")
+
+def yes_no(b):
+ return "yes" if b else "no"
+
+def format_bytes_or_disabled(sz):
+ if sz is None:
+ return "(disabled)"
+
+ return format_bytes(sz)
+
+def format_bytes_or_auto(sz):
+ if sz is None:
+ return "(automatic)"
+
+ return format_bytes(sz)
+
+def none_to_na(s):
+ return "n/a" if s is None else s
+
+def none_to_no(s):
+ return "no" if s is None else s
+
+def none_to_none(s):
+ return "none" if s is None else s
+
+def line_join_list(l):
+
+ if l is None:
+ return "none"
+
+ return "\n ".join(l)
+
+def print_summary(args):
+ sys.stderr.write("DISTRIBUTION:\n")
+ sys.stderr.write(" Distribution: " + args.distribution.name + "\n")
+ sys.stderr.write(" Release: " + none_to_na(args.release) + "\n")
+ if args.mirror is not None:
+ sys.stderr.write(" Mirror: " + args.mirror + "\n")
+ sys.stderr.write("\nOUTPUT:\n")
+ if args.hostname:
+ sys.stderr.write(" Hostname: " + args.hostname + "\n")
+ sys.stderr.write(" Output Format: " + args.output_format.name + "\n")
+ sys.stderr.write(" Output: " + args.output + "\n")
+ sys.stderr.write(" Output Checksum: " + none_to_na(args.output_checksum if args.checksum else None) + "\n")
+ sys.stderr.write(" Output Signature: " + none_to_na(args.output_signature if args.sign else None) + "\n")
+ sys.stderr.write("Output nspawn Settings: " + none_to_na(args.output_nspawn_settings if args.nspawn_settings is not None else None) + "\n")
+ sys.stderr.write(" Incremental: " + yes_no(args.incremental) + "\n")
+
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs, OutputFormat.subvolume):
+ sys.stderr.write(" Read-only: " + yes_no(args.read_only) + "\n")
+ if args.output_format in (OutputFormat.raw_btrfs, OutputFormat.subvolume):
+ sys.stderr.write(" FS Compression: " + yes_no(args.compress) + "\n")
+
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs, OutputFormat.tar):
+ sys.stderr.write(" XZ Compression: " + yes_no(args.xz) + "\n")
+
+ sys.stderr.write(" Encryption: " + none_to_no(args.encrypt) + "\n")
+ sys.stderr.write(" Verity: " + yes_no(args.verity) + "\n")
+
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+ sys.stderr.write(" Bootable: " + yes_no(args.bootable) + "\n")
+
+ if args.bootable:
+ sys.stderr.write(" Kernel Command Line: " + args.kernel_commandline + "\n")
+ sys.stderr.write(" UEFI SecureBoot: " + yes_no(args.secure_boot) + "\n")
+
+ if args.secure_boot:
+ sys.stderr.write(" UEFI SecureBoot Key: " + args.secure_boot_key + "\n")
+ sys.stderr.write(" UEFI SecureBoot Cert.: " + args.secure_boot_certificate + "\n")
+
+ sys.stderr.write("\nPACKAGES:\n")
+ sys.stderr.write(" Packages: " + line_join_list(args.packages) + "\n")
+
+ if args.distribution in (Distribution.fedora, Distribution.mageia):
+ sys.stderr.write(" With Documentation: " + yes_no(args.with_docs) + "\n")
+
+ sys.stderr.write(" Package Cache: " + none_to_none(args.cache_path) + "\n")
+ sys.stderr.write(" Extra Trees: " + line_join_list(args.extra_trees) + "\n")
+ sys.stderr.write(" Build Script: " + none_to_none(args.build_script) + "\n")
+ sys.stderr.write(" Build Sources: " + none_to_none(args.build_sources) + "\n")
+ sys.stderr.write(" Build Directory: " + none_to_none(args.build_dir) + "\n")
+ sys.stderr.write(" Build Packages: " + line_join_list(args.build_packages) + "\n")
+ sys.stderr.write(" Post Inst. Script: " + none_to_none(args.postinst_script) + "\n")
+ sys.stderr.write(" Scripts with network: " + yes_no(args.with_network) + "\n")
+ sys.stderr.write(" nspawn Settings: " + none_to_none(args.nspawn_settings) + "\n")
+
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs):
+ sys.stderr.write("\nPARTITIONS:\n")
+ sys.stderr.write(" Root Partition: " + format_bytes_or_auto(args.root_size) + "\n")
+ sys.stderr.write(" Swap Partition: " + format_bytes_or_disabled(args.swap_size) + "\n")
+ sys.stderr.write(" ESP: " + format_bytes_or_disabled(args.esp_size) + "\n")
+ sys.stderr.write(" /home Partition: " + format_bytes_or_disabled(args.home_size) + "\n")
+ sys.stderr.write(" /srv Partition: " + format_bytes_or_disabled(args.srv_size) + "\n")
+
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs, OutputFormat.raw_squashfs, OutputFormat.tar):
+ sys.stderr.write("\nVALIDATION:\n")
+ sys.stderr.write(" Checksum: " + yes_no(args.checksum) + "\n")
+ sys.stderr.write(" Sign: " + yes_no(args.sign) + "\n")
+ sys.stderr.write(" GPG Key: " + ("default" if args.key is None else args.key) + "\n")
+ sys.stderr.write(" Password: " + ("default" if args.password is None else args.password) + "\n")
+
+def reuse_cache_tree(args, workspace, run_build_script, for_cache, cached):
+ """If there's a cached version of this tree around, use it and
+ initialize our new root directly from it. Returns a boolean indicating
+ whether we are now operating on a cached version or not."""
+
+ if cached:
+ return True
+
+ if not args.incremental:
+ return False
+ if for_cache:
+ return False
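+    # For raw disk formats the cache is reused as a whole image instead,
+    # see reuse_cache_image().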
+ if args.output_format in (OutputFormat.raw_gpt, OutputFormat.raw_btrfs):
+ return False
+
+ fname = args.cache_pre_dev if run_build_script else args.cache_pre_inst
+ if fname is None:
+ return False
+
+ with complete_step('Copying in cached tree ' + fname):
+ try:
+ enumerate_and_copy(fname, os.path.join(workspace, "root"))
+ except FileNotFoundError:
+ return False
+
+ return True
+
+def build_image(args, workspace, run_build_script, for_cache=False):
+
+ # If there's no build script set, there's no point in executing
+ # the build script iteration. Let's quit early.
+ if args.build_script is None and run_build_script:
+ return None, None, None
+
+ raw, cached = reuse_cache_image(args, workspace.name, run_build_script, for_cache)
+ if not cached:
+ raw = create_image(args, workspace.name, for_cache)
+
+ with attach_image_loopback(args, raw) as loopdev:
+
+ prepare_swap(args, loopdev, cached)
+ prepare_esp(args, loopdev, cached)
+
+ luks_format_root(args, loopdev, run_build_script, cached)
+ luks_format_home(args, loopdev, run_build_script, cached)
+ luks_format_srv(args, loopdev, run_build_script, cached)
+
+ with luks_setup_all(args, loopdev, run_build_script) as (encrypted_root, encrypted_home, encrypted_srv):
+
+ prepare_root(args, encrypted_root, cached)
+ prepare_home(args, encrypted_home, cached)
+ prepare_srv(args, encrypted_srv, cached)
+
+ with mount_image(args, workspace.name, loopdev, encrypted_root, encrypted_home, encrypted_srv):
+ prepare_tree(args, workspace.name, run_build_script, cached)
+
+ with mount_cache(args, workspace.name):
+ cached = reuse_cache_tree(args, workspace.name, run_build_script, for_cache, cached)
+ install_distribution(args, workspace.name, run_build_script, cached)
+ install_boot_loader(args, workspace.name, cached)
+
+ install_extra_trees(args, workspace.name, for_cache)
+ install_build_src(args, workspace.name, run_build_script, for_cache)
+ install_build_dest(args, workspace.name, run_build_script, for_cache)
+ set_root_password(args, workspace.name, run_build_script, for_cache)
+ run_postinst_script(args, workspace.name, run_build_script, for_cache)
+
+ reset_machine_id(args, workspace.name, run_build_script, for_cache)
+ make_read_only(args, workspace.name, for_cache)
+
+ squashfs = make_squashfs(args, workspace.name, for_cache)
+ insert_squashfs(args, workspace.name, raw, loopdev, squashfs, for_cache)
+
+ verity, root_hash = make_verity(args, workspace.name, encrypted_root, run_build_script, for_cache)
+ patch_root_uuid(args, loopdev, root_hash, for_cache)
+ insert_verity(args, workspace.name, raw, loopdev, verity, root_hash, for_cache)
+
+ # This time we mount read-only, as we already generated
+ # the verity data, and hence really shouldn't modify the
+ # image anymore.
+ with mount_image(args, workspace.name, loopdev, encrypted_root, encrypted_home, encrypted_srv, root_read_only=True):
+ install_unified_kernel(args, workspace.name, run_build_script, for_cache, root_hash)
+ secure_boot_sign(args, workspace.name, run_build_script, for_cache)
+
+ tar = make_tar(args, workspace.name, run_build_script, for_cache)
+
+ return raw, tar, root_hash
+
+def var_tmp(workspace):
+
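+    # Create (or reuse) a var-tmp directory in the workspace; it gets
+    # bind-mounted as /var/tmp into the build container (see
+    # run_build_script() below).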
+ var_tmp = os.path.join(workspace, "var-tmp")
+ try:
+ os.mkdir(var_tmp)
+ except FileExistsError:
+ pass
+
+ return var_tmp
+
+def run_build_script(args, workspace, raw):
+ if args.build_script is None:
+ return
+
+ with complete_step('Running build script'):
+ dest = os.path.join(workspace, "dest")
+ os.mkdir(dest, 0o755)
+
+ target = "--directory=" + os.path.join(workspace, "root") if raw is None else "--image=" + raw.name
+
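+        # The build script runs inside a container and talks to mkosi
+        # through environment variables: $DESTDIR is where it is expected
+        # to install its result, $SRCDIR points at the sources, $BUILDDIR
+        # at the out-of-tree build directory (if configured), and
+        # $WITH_DOCS indicates whether documentation should be included.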
+ cmdline = ["systemd-nspawn",
+ '--quiet',
+ target,
+ "--uuid=" + args.machine_id,
+ "--machine=mkosi-" + uuid.uuid4().hex,
+ "--as-pid2",
+ "--register=no",
+ "--bind", dest + ":/root/dest",
+ "--bind=" + var_tmp(workspace) + ":/var/tmp",
+ "--setenv=WITH_DOCS=" + ("1" if args.with_docs else "0"),
+ "--setenv=DESTDIR=/root/dest"]
+
+ if args.build_sources is not None:
+ cmdline.append("--setenv=SRCDIR=/root/src")
+ cmdline.append("--chdir=/root/src")
+
+ if args.read_only:
+ cmdline.append("--overlay=+/root/src::/root/src")
+ else:
+ cmdline.append("--chdir=/root")
+
+ if args.build_dir is not None:
+ cmdline.append("--setenv=BUILDDIR=/root/build")
+ cmdline.append("--bind=" + args.build_dir + ":/root/build")
+
+ if not args.with_network:
+ cmdline.append("--private-network")
+
+ cmdline.append("/root/" + os.path.basename(args.build_script))
+ subprocess.run(cmdline, check=True)
+
+def need_cache_images(args):
+
+ if not args.incremental:
+ return False
+
+ if args.force_count > 1:
+ return True
+
+ return not os.path.exists(args.cache_pre_dev) or not os.path.exists(args.cache_pre_inst)
+
+def remove_artifacts(args, workspace, raw, tar, run_build_script, for_cache=False):
+
+ if for_cache:
+ what = "cache build"
+ elif run_build_script:
+ what = "development build"
+ else:
+ return
+
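+    # raw and tar are (named) temporary file objects, hence dropping the
+    # last reference is enough to delete the underlying file.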
+ if raw is not None:
+ with complete_step("Removing disk image from " + what):
+ del raw
+
+ if tar is not None:
+ with complete_step("Removing tar image from " + what):
+ del tar
+
+ with complete_step("Removing artifacts from " + what):
+ unlink_try_hard(os.path.join(workspace, "root"))
+ unlink_try_hard(os.path.join(workspace, "var-tmp"))
+
+def build_stuff(args):
+
+    # Let's define a fixed machine ID for all our build-time runs.
+    # We'll strip it off the final image, but some build-time tools
+    # (dracut...) want a fixed one, hence provide one, and always the
+    # same one across all stages of this invocation.
+ args.machine_id = uuid.uuid4().hex
+
+ cache = setup_cache(args)
+ workspace = setup_workspace(args)
+
+    # If caching is requested, make sure we have cache images around that we can make use of
+ if need_cache_images(args):
+
+ # Generate the cache version of the build image, and store it as "cache-pre-dev"
+ raw, tar, root_hash = build_image(args, workspace, run_build_script=True, for_cache=True)
+ save_cache(args,
+ workspace.name,
+ raw.name if raw is not None else None,
+ args.cache_pre_dev)
+
+ remove_artifacts(args, workspace.name, raw, tar, run_build_script=True)
+
+        # Generate the cache version of the final image, and store it as "cache-pre-inst"
+ raw, tar, root_hash = build_image(args, workspace, run_build_script=False, for_cache=True)
+ save_cache(args,
+ workspace.name,
+ raw.name if raw is not None else None,
+ args.cache_pre_inst)
+ remove_artifacts(args, workspace.name, raw, tar, run_build_script=False)
+
+    # Run the image builder for the first (development) stage in preparation for the build script
+ raw, tar, root_hash = build_image(args, workspace, run_build_script=True)
+
+ run_build_script(args, workspace.name, raw)
+ remove_artifacts(args, workspace.name, raw, tar, run_build_script=True)
+
+ # Run the image builder for the second (final) stage
+ raw, tar, root_hash = build_image(args, workspace, run_build_script=False)
+
+ raw = xz_output(args, raw)
+ root_hash_file = write_root_hash_file(args, root_hash)
+ settings = copy_nspawn_settings(args)
+ checksum = calculate_sha256sum(args, raw, tar, root_hash_file, settings)
+ signature = calculate_signature(args, checksum)
+
+ link_output(args,
+ workspace.name,
+ raw.name if raw is not None else None,
+ tar.name if tar is not None else None)
+
+ link_output_root_hash_file(args, root_hash_file.name if root_hash_file is not None else None)
+
+ link_output_checksum(args,
+ checksum.name if checksum is not None else None)
+
+ link_output_signature(args,
+ signature.name if signature is not None else None)
+
+ link_output_nspawn_settings(args,
+ settings.name if settings is not None else None)
+
+ if root_hash is not None:
+ print_step("Root hash is {}.".format(root_hash))
+
+def check_root():
+ if os.getuid() != 0:
+ die("Must be invoked as root.")
+
+
+def main():
+ args = load_args()
+
+ if args.verb in ("build", "clean"):
+ check_root()
+ unlink_output(args)
+
+ if args.verb == "build":
+ check_output(args)
+
+ if args.verb in ("build", "summary"):
+ print_summary(args)
+
+ if args.verb == "build":
+ check_root()
+ init_namespace(args)
+ build_stuff(args)
+ print_output_size(args)
+
+if __name__ == "__main__":
+ main()
diff --git a/mkosi.default b/mkosi.default
new file mode 100644
index 0000000..6edd6a5
--- /dev/null
+++ b/mkosi.default
@@ -0,0 +1,22 @@
+# Let's build an image that is just good enough to build new mkosi images again
+
+[Distribution]
+Distribution=fedora
+Release=25
+
+[Output]
+Format=raw_squashfs
+Bootable=yes
+
+[Packages]
+Packages=
+ arch-install-scripts
+ btrfs-progs
+ debootstrap
+ dnf
+ dosfstools
+ git
+ gnupg
+ squashfs-tools
+ tar
+ veritysetup
diff --git a/setup.py b/setup.py
new file mode 100755
index 0000000..e0489e1
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,19 @@
+#!/usr/bin/python3
+
+import sys
+
+if sys.version_info < (3, 5):
+ sys.exit("Sorry, we need at least Python 3.5.")
+
+from setuptools import setup
+
+setup(
+ name="mkosi",
+ version="3",
+ description="Create legacy-free OS images",
+ url="https://github.com/systemd/mkosi",
+ maintainer="mkosi contributors",
+ maintainer_email="systemd-devel@lists.freedesktop.org",
+ license="LGPLv2+",
+ scripts=["mkosi"],
+)