author     Russ Allbery <rra@cpan.org>  2020-12-25 10:26:14 -0800
committer  Russ Allbery <rra@cpan.org>  2020-12-25 10:26:14 -0800
commit     1b652ddecd468e9d78d1b1300ae1f631744950e8 (patch)
tree       f6be1e736e85b03406680d37a1cd0f5d155025e9
parent     e30220a31200f636a091fccfbe9db6d2afda25d2 (diff)
Overhaul supplemental section handling
Support a test.override metadata key that overrides the Testing section in README and README.md files entirely, except for the note about Lancaster Consensus environment variables.

Move readme.sections to just sections, and if it defined a testing section, move that to test.override.

Add some additional markup to the Markdown version of building instructions for packages that use Kerberos and Autoconf.
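To make the new layout concrete, here is a minimal sketch of a docknot.yaml fragment after conversion, adapted from the c-tap-harness test data updated in this commit; the section title and body text are illustrative only:

```yaml
# Illustrative fragment only: supplemental sections now live at the top
# level (previously under readme.sections), and a former "Testing"
# section becomes test.override, which replaces the generated Testing
# section apart from the Lancaster Consensus note.
sections:
  - title: Using the Harness
    body: |
      Supplemental prose appended to the generated README files.

test:
  override: |
    This package comes with a test suite, which you can run after
    building with "make check".
```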
-rw-r--r--  Changes  15
-rw-r--r--  lib/App/DocKnot/Update.pm  21
-rw-r--r--  share/schema/docknot.yaml  27
-rw-r--r--  share/templates/readme-md.tmpl  28
-rw-r--r--  share/templates/readme.tmpl  19
-rw-r--r--  t/data/generate/c-tap-harness/docknot.yaml  272
-rw-r--r--  t/data/generate/c-tap-harness/output/readme  4
-rw-r--r--  t/data/generate/c-tap-harness/output/readme-md  4
-rw-r--r--  t/data/generate/control-archive/docknot.yaml  462
-rw-r--r--  t/data/generate/pam-krb5/docknot.yaml  735
-rw-r--r--  t/data/generate/pam-krb5/output/readme-md  8
-rw-r--r--  t/data/generate/remctl/docknot.yaml  73
-rw-r--r--  t/data/generate/remctl/output/readme-md  8
-rw-r--r--  t/data/generate/rra-c-util/docknot.yaml  335
-rw-r--r--  t/data/generate/rra-c-util/output/readme  41
-rw-r--r--  t/data/generate/rra-c-util/output/readme-md  47
-rw-r--r--  t/data/generate/wallet/docknot.yaml  44
-rw-r--r--  t/data/generate/wallet/output/readme-md  8
-rw-r--r--  t/data/update/c-tap-harness/docknot.yaml  261
-rw-r--r--  t/data/update/control-archive/docknot.yaml  431
-rw-r--r--  t/data/update/pam-krb5/docknot.yaml  689
-rw-r--r--  t/data/update/remctl/docknot.yaml  69
-rw-r--r--  t/data/update/rra-c-util/docknot.yaml  363
23 files changed, 1951 insertions, 2013 deletions
diff --git a/Changes b/Changes
index d082412..d8481b9 100644
--- a/Changes
+++ b/Changes
@@ -9,9 +9,15 @@ DocKnot 4.00 (unreleased)
The new metadata format is checked against a schema when read.
DocKnot now depends on YAML::XS and Kwalify.
- Move bootstrap metadata to build.bootstrap and packaging to
- distribution.packaging now that everything can be specified in a
- single YAML file.
+ Support a test.override metadata key that overrides the Testing
+ section in README and README.md files entirely, except for the note
+ about Lancaster Consensus environment variables.
+
+ Move bootstrap metadata to build.bootstrap, packaging to
+ distribution.packaging, and readme.sections to sections to clean up
+ some old issues with the schema now that there's an upgrade process.
+ If readme.sections defined a testing section, move that to
+ test.override.
Drop support for the support.cpan metadata key, since the CPAN RT
instance is going away. For packages with support.cpan set, if
@@ -30,6 +36,9 @@ DocKnot 4.00 (unreleased)
paragraphs can still use four spaces because they are wrapped in
markup lines.)
+ Add some additional markup to the Markdown version of building
+ instructions for packages that use Kerberos and Autoconf.
+
DocKnot 3.05 (2020-08-09)
Change the heuristic for when to refrain from wrapping output
diff --git a/lib/App/DocKnot/Update.pm b/lib/App/DocKnot/Update.pm
index 91e3e24..58f168a 100644
--- a/lib/App/DocKnot/Update.pm
+++ b/lib/App/DocKnot/Update.pm
@@ -101,8 +101,8 @@ sub _config_from_json {
# Load supplemental README sections. readme.sections will contain a list
# of sections to add to the README file.
- for my $section ($data_ref->{readme}{sections}->@*) {
- my $title = $section->{title};
+ for my $section_ref ($data_ref->{readme}{sections}->@*) {
+ my $title = $section_ref->{title};
# The file containing the section data will match the title, converted
# to lowercase and with spaces changed to dashes.
@@ -110,7 +110,7 @@ sub _config_from_json {
$file =~ tr{ }{-};
# Load the section content.
- $section->{body} = $self->_load_metadata('sections', $file);
+ $section_ref->{body} = $self->_load_metadata('sections', $file);
}
# If there are no supplemental README sections, remove that data element.
@@ -218,6 +218,21 @@ sub update {
delete $data_ref->{packaging};
}
+ # Move readme.sections to sections. If there was a testing override, move
+ # it to test.override and delete it from sections.
+ if (defined($data_ref->{readme})) {
+ $data_ref->{sections} = $data_ref->{readme}{sections};
+ delete $data_ref->{readme};
+ for my $section_ref ($data_ref->{sections}->@*) {
+ if (lc($section_ref->{title}) eq 'testing') {
+ $data_ref->{test}{override} = $section_ref->{body};
+ last;
+ }
+ }
+ $data_ref->{sections}
+ = [grep { lc($_->{title}) ne 'testing' } $data_ref->{sections}->@*];
+ }
+
# support.cpan is obsolete. If vcs.github is set and support.github is
# not, use it as support.github.
if (defined($data_ref->{support}{cpan})) {
diff --git a/share/schema/docknot.yaml b/share/schema/docknot.yaml
index 946d773..0c0df0f 100644
--- a/share/schema/docknot.yaml
+++ b/share/schema/docknot.yaml
@@ -173,23 +173,20 @@ mapping:
type: text
work:
type: text
- readme:
- type: map
- mapping:
- sections:
- type: seq
- sequence:
- - type: map
- mapping:
- body:
- type: text
- required: true
- title:
- type: text
- required: true
requirements:
type: text
required: true
+ sections:
+ type: seq
+ sequence:
+ - type: map
+ mapping:
+ body:
+ type: text
+ required: true
+ title:
+ type: text
+ required: true
support:
type: map
mapping:
@@ -213,6 +210,8 @@ mapping:
test:
type: map
mapping:
+ override:
+ type: text
prefix:
type: text
suffix:
diff --git a/share/templates/readme-md.tmpl b/share/templates/readme-md.tmpl
index b8ac056..e30649f 100644
--- a/share/templates/readme-md.tmpl
+++ b/share/templates/readme-md.tmpl
@@ -117,11 +117,11 @@ you need to specify a different Kerberos installation root via
You can also individually set the paths to the include directory and the
library directory with `--with-krb5-include` and `--with-krb5-lib`. You
-may need to do this if Autoconf can't figure out whether to use lib,
-lib32, or lib64 on your platform.
+may need to do this if Autoconf can't figure out whether to use `lib`,
+`lib32`, or `lib64` on your platform.
-To not use krb5-config and force library probing even if there is a
-krb5-config script on your path, set PATH_KRB5_CONFIG to a nonexistent
+To not use `krb5-config` and force library probing even if there is a
+`krb5-config` script on your path, set `PATH_KRB5_CONFIG` to a nonexistent
path:
```
@@ -155,9 +155,11 @@ that make shared library migrations more difficult. If none of the above
made any sense to you, don't bother with this flag.
[% END %][% IF build.suffix %]
[% build.suffix %]
-[% END %][% END %][% IF !readme.testing && (build.type == 'Module::Build' || build.type == 'ExtUtils::MakeMaker' || build.type == 'Autoconf') %]
+[% END %][% END %][% IF test.override || build.type == 'Module::Build' || build.type == 'ExtUtils::MakeMaker' || build.type == 'Autoconf' %]
## Testing
-[% IF test.prefix %]
+[% IF test.override %]
+[% test.override %]
+[% ELSE %][% IF test.prefix %]
[% test.prefix %]
[% ELSE %]
[% name %] comes with a test suite, which you can run after building with:
@@ -196,7 +198,7 @@ Do this instead of running the test program directly since it will ensure
that necessary environment variables are set up.
[% END %][% IF test.suffix %]
[% test.suffix %]
-[% END %][% IF build.lancaster %]
+[% END %][% END %][% IF build.lancaster %]
To enable tests that don't detect functionality problems but are used to
sanity-check the release, set the environment variable `RELEASE_TESTING`
to a true value. To enable tests that may be sensitive to the local
@@ -204,19 +206,11 @@ environment or that produce a lot of false positives without uncovering
many problems, set the environment variable `AUTHOR_TESTING` to a true
value.
[% END %][% END %]
-[% FOREACH section IN readme.sections %]## [% section.title %]
+[% FOREACH section IN sections %]## [% section.title %]
[% section.body %]
-[% IF section.title == 'Testing' && build.lancaster %]
-To enable tests that don't detect functionality problems but are used to
-sanity-check the release, set the environment variable `RELEASE_TESTING`
-to a true value. To enable tests that may be sensitive to the local
-environment or that produce a lot of false positives without uncovering
-many problems, set the environment variable `AUTHOR_TESTING` to a true
-value.
-
-[% END %][% END %]## Support
+[% END %]## Support
The [[% name %] web page]([% support.web %]) will always have the current
version of this package, the current documentation, and pointers to any
diff --git a/share/templates/readme.tmpl b/share/templates/readme.tmpl
index 2decc6a..1acb3f6 100644
--- a/share/templates/readme.tmpl
+++ b/share/templates/readme.tmpl
@@ -137,8 +137,11 @@ BUILDING
If none of the above made any sense to you, don't bother with this flag.
[% END %][% IF build.suffix %]
[% indent(to_text(build.suffix), 2) %]
-[% END %][% END %][% IF !readme.testing && (build.type == 'Module::Build' || build.type == 'ExtUtils::MakeMaker' || build.type == 'Autoconf') %]
+[% END %][% END %][% IF test.override || build.type == 'Module::Build' || build.type == 'ExtUtils::MakeMaker' || build.type == 'Autoconf' %]
TESTING
+[% IF test.override %]
+[% indent(to_text(test.override), 2) %]
+[% ELSE %]
[% IF test.prefix %]
[% indent(to_text(test.prefix), 2) %]
[% ELSE %]
@@ -167,7 +170,7 @@ TESTING
ensure that necessary environment variables are set up.
[% END %][% IF test.suffix %]
[% indent(to_text(test.suffix), 2) %]
-[% END %][% IF build.lancaster %]
+[% END %][% END %][% IF build.lancaster %]
To enable tests that don't detect functionality problems but are used to
sanity-check the release, set the environment variable RELEASE_TESTING
to a true value. To enable tests that may be sensitive to the local
@@ -175,19 +178,11 @@ TESTING
many problems, set the environment variable AUTHOR_TESTING to a true
value.
[% END %][% END %]
-[% FOREACH section IN readme.sections %][% section.title FILTER upper %]
+[% FOREACH section IN sections %][% section.title FILTER upper %]
[% indent(to_text(section.body), 2) %]
-[% IF section.title == 'Testing' && build.lancaster %]
- To enable tests that don't detect functionality problems but are used to
- sanity-check the release, set the environment variable RELEASE_TESTING
- to a true value. To enable tests that may be sensitive to the local
- environment or that produce a lot of false positives without uncovering
- many problems, set the environment variable AUTHOR_TESTING to a true
- value.
-
-[% END %][% END %]SUPPORT
+[% END %]SUPPORT
The [% name %] web page at:
diff --git a/t/data/generate/c-tap-harness/docknot.yaml b/t/data/generate/c-tap-harness/docknot.yaml
index d16c14e..8a74885 100644
--- a/t/data/generate/c-tap-harness/docknot.yaml
+++ b/t/data/generate/c-tap-harness/docknot.yaml
@@ -112,144 +112,6 @@ description: |
files, creating temporary files, reporting output from external programs
running in the background, and similar common problems.
-readme:
- sections:
- - title: Testing
- body: |
- C TAP Harness comes with a comprehensive test suite, which you can
- run after building with:
-
- ```
- make check
- ```
-
- If a test fails, you can run a single test with verbose output via:
-
- ```
- ./runtests -b `pwd`/tests -s `pwd`/tests -o <name-of-test>
- ```
-
- Do this instead of running the test program directly since it will
- ensure that necessary environment variables are set up. You may
- need to change the `-s` option argument if you build with a
- separate build directory from the source directory.
- - title: Using the Harness
- body: |
- While there is an install target that installs runtests in the
- default binary directory (`/usr/local/bin` by default) and
- installs the man pages, one normally wouldn't install anything
- from this package. Instead, the code is intended to be copied
- into your package and refreshed from the latest release of C TAP
- Harness for each release.
-
- You can obviously copy the code and integrate it however works
- best for your package and your build system. Here's how I do it
- for my packages as an example:
-
- * Create a tests directory and copy tests/runtests.c into it.
- Create a `tests/tap` subdirectory and copy the portions of the
- TAP library (from `tests/tap`) that I need for that package into
- it. The TAP library is designed to let you drop in additional
- source and header files for additional utility functions that
- are useful in your package.
-
- * Add code to my top-level `Makefile.am` (I always use a
- non-recursive Makefile with `subdir-objects` set) to build
- `runtests` and the test library:
-
- ```make
- check_PROGRAMS = tests/runtests
- tests_runtests_CPPFLAGS = -DC_TAP_SOURCE='"$(abs_top_srcdir)/tests"' \
- -DC_TAP_BUILD='"$(abs_top_builddir)/tests"'
- check_LIBRARIES = tests/tap/libtap.a
- tests_tap_libtap_a_CPPFLAGS = -I$(abs_top_srcdir)/tests
- tests_tap_libtap_a_SOURCES = tests/tap/basic.c tests/tap/basic.h \
- tests/tap/float.c tests/tap/float.h tests/tap/macros.h
- ```
-
- Omit `float.c` and `float.h` from the last line if your package
- doesn't need the `is_double` function. Building the build and
- source directories into runtests will let `tests/runtests -o
- <test>` work for users without requiring that they set any other
- variables, even if they're doing an out-of-source build.
-
- Add additional source files and headers that should go into the
- TAP library if you added extra utility functions for your
- package.
-
- * Add code to `Makefile.am` to run the test suite:
-
- ```make
- check-local: $(check_PROGRAMS)
- cd tests && ./runtests -l $(abs_top_srcdir)/tests/TESTS
- ```
-
- See the `Makefile.am` in this package for an example.
-
- * List the test programs in the `tests/TESTS` file. This should
- have the name of the test executable with the trailing "-t" or
- ".t" (you can use either extension as you prefer) omitted.
-
- Test programs must be executable.
-
- For any test programs that need to be compiled, add build rules
- for them in `Makefile.am`, similar to:
-
- ```make
- tests_libtap_c_basic_LDADD = tests/tap/libtap.a
- ```
-
- and add them to `check_PROGRAMS`. If you include the `float.c`
- add-on in your libtap library, you will need to add `-lm` to the
- `_LDADD` setting for all test programs linked against it.
-
- A more complex example from the remctl package that needs
- additional libraries:
-
- ```make
- tests_client_open_t_LDFLAGS = $(GSSAPI_LDFLAGS)
- tests_client_open_t_LDADD = client/libremctl.la tests/tap/libtap.a \
- util/libutil.la $(GSSAPI_LIBS)
- ```
-
- If the test program doesn't need to be compiled, add it to
- `EXTRA_DIST` so that it will be included in the distribution.
-
- * If you have test programs written in shell, copy
- `tests/tap/libtap.sh` the tap subdirectory of your tests
- directory and add it to `EXTRA_DIST`. Shell programs should
- start with:
-
- ```sh
- . "${C_TAP_SOURCE}/tap/libtap.sh"
- ```
-
- and can then use the functions defined in the library.
-
- * Optionally copy `docs/writing-tests` into your package
- somewhere, such as `tests/README`, as instructions to
- contributors on how to write tests for this framework.
-
- If you have configuration files that the user must create to
- enable some of the tests, conventionally they go into
- `tests/config`.
-
- If you have data files that your test cases use, conventionally
- they go into `tests/data`. You can then find the data directory
- relative to the `C_TAP_SOURCE` environment variable (set by
- `runtests`) in your test program. If you have data that's
- compiled or generated by Autoconf, it will be relative to the
- `BUILD` environment variable. Don't forget to add test data to
- `EXTRA_DIST` as necessary.
-
- For more TAP library add-ons, generally ones that rely on
- additional portability code not shipped in this package or with
- narrower uses, see [the rra-c-util
- package](https://www.eyrie.org/~eagle/software/rra-c-util/).
- There are several additional TAP library add-ons in the
- `tests/tap` directory in that package. It's also an example of
- how to use this test harness in another package.
-
requirements: |
C TAP Harness requires a C compiler to build. Any ISO C89 or later C
compiler on a system supporting the Single UNIX Specification, version 3
@@ -267,3 +129,137 @@ requirements: |
All are available on CPAN. Those tests will be skipped if the modules are
not available.
+
+sections:
+ - title: Using the Harness
+ body: |
+ While there is an install target that installs runtests in the
+ default binary directory (`/usr/local/bin` by default) and installs
+ the man pages, one normally wouldn't install anything from this
+ package. Instead, the code is intended to be copied into your
+ package and refreshed from the latest release of C TAP Harness for
+ each release.
+
+ You can obviously copy the code and integrate it however works best
+ for your package and your build system. Here's how I do it for my
+ packages as an example:
+
+ * Create a tests directory and copy tests/runtests.c into it.
+ Create a `tests/tap` subdirectory and copy the portions of the TAP
+ library (from `tests/tap`) that I need for that package into it.
+ The TAP library is designed to let you drop in additional source
+ and header files for additional utility functions that are useful
+ in your package.
+
+ * Add code to my top-level `Makefile.am` (I always use a
+ non-recursive Makefile with `subdir-objects` set) to build
+ `runtests` and the test library:
+
+ ```make
+ check_PROGRAMS = tests/runtests
+ tests_runtests_CPPFLAGS = -DC_TAP_SOURCE='"$(abs_top_srcdir)/tests"' \
+ -DC_TAP_BUILD='"$(abs_top_builddir)/tests"'
+ check_LIBRARIES = tests/tap/libtap.a
+ tests_tap_libtap_a_CPPFLAGS = -I$(abs_top_srcdir)/tests
+ tests_tap_libtap_a_SOURCES = tests/tap/basic.c tests/tap/basic.h \
+ tests/tap/float.c tests/tap/float.h tests/tap/macros.h
+ ```
+
+ Omit `float.c` and `float.h` from the last line if your package
+ doesn't need the `is_double` function. Building the build and
+ source directories into runtests will let `tests/runtests -o
+ <test>` work for users without requiring that they set any other
+ variables, even if they're doing an out-of-source build.
+
+ Add additional source files and headers that should go into the
+ TAP library if you added extra utility functions for your package.
+
+ * Add code to `Makefile.am` to run the test suite:
+
+ ```make
+ check-local: $(check_PROGRAMS)
+ cd tests && ./runtests -l $(abs_top_srcdir)/tests/TESTS
+ ```
+
+ See the `Makefile.am` in this package for an example.
+
+ * List the test programs in the `tests/TESTS` file. This should
+ have the name of the test executable with the trailing "-t" or
+ ".t" (you can use either extension as you prefer) omitted.
+
+ Test programs must be executable.
+
+ For any test programs that need to be compiled, add build rules
+ for them in `Makefile.am`, similar to:
+
+ ```make
+ tests_libtap_c_basic_LDADD = tests/tap/libtap.a
+ ```
+
+ and add them to `check_PROGRAMS`. If you include the `float.c`
+ add-on in your libtap library, you will need to add `-lm` to the
+ `_LDADD` setting for all test programs linked against it.
+
+ A more complex example from the remctl package that needs
+ additional libraries:
+
+ ```make
+ tests_client_open_t_LDFLAGS = $(GSSAPI_LDFLAGS)
+ tests_client_open_t_LDADD = client/libremctl.la tests/tap/libtap.a \
+ util/libutil.la $(GSSAPI_LIBS)
+ ```
+
+ If the test program doesn't need to be compiled, add it to
+ `EXTRA_DIST` so that it will be included in the distribution.
+
+ * If you have test programs written in shell, copy
+ `tests/tap/libtap.sh` the tap subdirectory of your tests directory
+ and add it to `EXTRA_DIST`. Shell programs should start with:
+
+ ```sh
+ . "${C_TAP_SOURCE}/tap/libtap.sh"
+ ```
+
+ and can then use the functions defined in the library.
+
+ * Optionally copy `docs/writing-tests` into your package somewhere,
+ such as `tests/README`, as instructions to contributors on how to
+ write tests for this framework.
+
+ If you have configuration files that the user must create to enable
+ some of the tests, conventionally they go into `tests/config`.
+
+ If you have data files that your test cases use, conventionally they
+ go into `tests/data`. You can then find the data directory relative
+ to the `C_TAP_SOURCE` environment variable (set by `runtests`) in
+ your test program. If you have data that's compiled or generated by
+ Autoconf, it will be relative to the `BUILD` environment variable.
+ Don't forget to add test data to `EXTRA_DIST` as necessary.
+
+ For more TAP library add-ons, generally ones that rely on additional
+ portability code not shipped in this package or with narrower uses,
+ see [the rra-c-util
+ package](https://www.eyrie.org/~eagle/software/rra-c-util/). There
+ are several additional TAP library add-ons in the `tests/tap`
+ directory in that package. It's also an example of how to use this
+ test harness in another package.
+
+test:
+ override: |
+ C TAP Harness comes with a test suite, which you can run after
+ building with:
+
+ ```
+ make check
+ ```
+
+ If a test fails, you can run a single test with verbose output via:
+
+ ```
+ ./runtests -b `pwd`/tests -s `pwd`/tests -o <name-of-test>
+ ```
+
+ Do this instead of running the test program directly since it will
+ ensure that necessary environment variables are set up. You may need
+ to change the `-s` option argument if you build with a separate build
+ directory from the source directory.
diff --git a/t/data/generate/c-tap-harness/output/readme b/t/data/generate/c-tap-harness/output/readme
index 181201e..b8f69a4 100644
--- a/t/data/generate/c-tap-harness/output/readme
+++ b/t/data/generate/c-tap-harness/output/readme
@@ -98,8 +98,8 @@ BUILDING
TESTING
- C TAP Harness comes with a comprehensive test suite, which you can run
- after building with:
+ C TAP Harness comes with a test suite, which you can run after building
+ with:
make check
diff --git a/t/data/generate/c-tap-harness/output/readme-md b/t/data/generate/c-tap-harness/output/readme-md
index 5a482cd..4c4323d 100644
--- a/t/data/generate/c-tap-harness/output/readme-md
+++ b/t/data/generate/c-tap-harness/output/readme-md
@@ -96,8 +96,8 @@ on using the harness below.
## Testing
-C TAP Harness comes with a comprehensive test suite, which you can run
-after building with:
+C TAP Harness comes with a test suite, which you can run after building
+with:
```
make check
diff --git a/t/data/generate/control-archive/docknot.yaml b/t/data/generate/control-archive/docknot.yaml
index 586f4ea..54c7af7 100644
--- a/t/data/generate/control-archive/docknot.yaml
+++ b/t/data/generate/control-archive/docknot.yaml
@@ -111,236 +111,232 @@ requirements: |
like tinyleaf is suitable for this (I wrote tinyleaf, available as part of
[INN](https://www.eyrie.org/~eagle/software/inn/), for this purpose).
-readme:
- sections:
- - title: Versioning
- body: |
- This package uses a three-part version number. The first number
- will be incremented for major changes, major new functionality,
- incompatible changes to the configuration format (more than just
- adding new keys), or similar disruptive changes. For lesser
- changes, the second number will be incremented for any change to
- the code or functioning of the software. A change to the third
- part of the version number indicates a release with changes only
- to the configuration, PGP keys, and documentation files.
- - title: Layout
- body: |
- The configuration data is in one file per hierarchy in the
- `config` directory. Each file has the format specified in FORMAT
- and is designed to be readable by INN's new configuration parser
- in case this can be further automated down the road. The
- `config/special` directory contains overrides, raw `control.ctl`
- fragments that should be used for particular hierarchies instead
- of automatically-generated entries (usually for special comments).
- Eventually, the format should be extended to handle as many of
- these cases as possible.
-
- The `keys` directory contains the PGP public keys for every
- hierarchy that has one. The user IDs on these keys must match the
- signer expected by the configuration data for the corresponding
- hierarchy.
-
- The `forms` directory contains the basic file structure for the
- three generated files.
-
- The `scripts` directory contains all the software that generates
- the configuration and documentation files, processes control
- messages, updates the database, creates the newsgroup lists, and
- generates reports. Most scripts in that directory have POD
- documentation included at the end of the script, viewable by
- running perldoc on the script.
-
- The `templates` directory contains templates for the
- `control-summary` script. These are the templates I use myself.
- Other installations should customize them.
-
- The `docs` directory contains the extra documentation files that
- are distributed from ftp.isc.org in the control message archive
- and newsgroup list directories, plus the DocKnot metadata for this
- package.
- - title: Installation
- body: |
- This software is set up to run from `/srv/control`. To use a
- different location, edit the paths at the beginning of each of the
- scripts in the `scripts` directory to use different paths. By
- default, copying all the files from the distribution into a
- `/srv/control` directory is almost all that's needed. An install
- rule is provided to do this. To install the software, run:
-
- ```sh
- make install
- ```
-
- You will need write access to `/srv/control` or permission to
- create it.
-
- `process-control` and `generate-files` need a GnuPG keyring
- containing all of the honored hierarchy keys. To generate this
- keyring, run `make install` or:
-
- ```sh
- mkdir keyring
- gpg --homedir=keyring --allow-non-selfsigned-uid --import keys/*
- ```
-
- from the top level of this distribution. `process-control` also
- expects a `control.ctl` file in `/srv/control/control.ctl`, which
- can be generated from the files included here (after creating the
- keyring as described above) by running `make install` or:
-
- ```sh
- scripts/generate-files
- ```
-
- Both of these are done automatically as part of `make install`.
- process-control expects `/srv/control/archive` to exist and
- archives control messages there. It expects `/srv/control/tmp` to
- exist and uses it for temporary files for GnuPG control message
- verification.
-
- To process incoming control messages, you need to run
- `process-control` on each message. `process-control` expects to
- receive, on standard input, lines consisting of a path to a file,
- a space, and a message ID. This input format is designed to work
- with the tinyleaf server that comes with INN 2.5 and later, but it
- should also work as a channel feed from pre-storage-API versions
- of INN (1.x). It will not work without modification via a channel
- feed from a current version of INN, since it doesn't understand
- the storage API and doesn't know how to retrieve articles by
- tokens. This could be easily added; I just haven't needed it.
-
- If you're using tinyleaf, here is the setup process:
-
- 1. Create a directory that tinyleaf will use to store incoming
- articles temporarily, the archive directory, and the logs
- directory and install the software:
-
- ```sh
- make install
- ```
-
- 2. Run tinyleaf on some port, configuring it to use that directory
- and to run process-control. A typical tinyleaf command line
- would be:
-
- ```sh
- tinyleaf /srv/control/spool /srv/control/scripts/process-control
- ```
-
- I run tinyleaf using systemd, but any inetd implementation
- should work equally well.
-
- 3. Set up a news feed to the system running tinyleaf that sends
- control messages of interest. You should be careful not to
- send cancel control messages or you'll get a ton of junk in
- your logs. The INN newsfeeds entry I use is:
-
- ```
- isc-control:control,control.*,!control.cancel:Tf,Wnm:
- ```
-
- combined with nntpsend to send the articles.
-
- That should be all there is to it. Watch the logs directory to
- see what happens for incoming messages.
-
- `scripts/process-control` just maintains a database file. To
- export that data in a format that's useful for other software, run
- `scripts/export-control`. This expects a `/srv/control/export`
- directory into which it stores active and newsgroups files, a copy
- of the `control.ctl` file, and all of the logs in a `LOGS`
- subdirectory. This export directory can then be made available on
- the web, copied to another system, or whatever else is
- appropriate. Generally, `scripts/export-control` should be run
- periodically from cron.
-
- Reports can be generated using `scripts/control-summary`. This
- script needs configuration before running; see the top of the
- script and its included POD documentation. There is a sample
- template in the `templates` directory, and `scripts/weekly-report`
- shows a sample cron job for sending out a regular report.
- - title: Bootstrapping
- body: |
- This package is intended to provide all of the tools,
- configuration, and information required to duplicate the
- ftp.isc.org control message archive and newsgroup list service if
- you so desire. To set up a similar service based on that service,
- however, you will also want to bootstrap from the existing data.
- Here is the procedure for that:
-
- 1. Be sure that you're starting from the latest software and set
- of configuration files. I will generally try to make a new
- release after committing a batch of changes, but I may not make
- a new release after every change. See the sections below for
- information about the Git repository in which this package is
- maintained. You can always clone that repository to get the
- latest configuration (and then merge or cherry-pick changes
- from my repository into your repository as you desire).
-
- 2. Download the current newsgroup list from:
-
- ftp://ftp.isc.org/pub/usenet/CONFIG/newsgroups.bz2
-
- and then bootstrap the database from it:
-
- ```sh
- bzip2 -dc newsgroups.bz2 | scripts/update-control bulkload
- ```
-
- 3. If you want the log information so that your reports will
- include changes made in the ftp.isc.org archive before you
- created your own, copy the contents of
- ftp://ftp.isc.org/pub/usenet/CONFIG/LOGS/ into
- `/srv/control/logs`.
-
- 4. If you want to start with the existing control message
- repository, download the contents of
- ftp://ftp.isc.org/pub/usenet/control/ into
- `/srv/control/archive`. You can do this using a recursive
- download tool that understands FTP, such as wget, but please
- use the options that add delays and don't hammer the server to
- death.
-
- After finishing those steps, you will have a copy of the
- ftp.isc.org archive and can start processing control messages,
- possibly with different configuration choices. You can generate
- the files that are found in ftp://ftp.isc.org/pub/usenet/CONFIG/
- by running `scripts/export-control` as described above.
- - title: Maintenance
- body: |
- To add a new hierarchy, add a configuration fragment in the
- `config` directory named after the hierarchy, following the format
- of the existing files, and run `scripts/generate-files` to create
- a new `control.ctl` file. See the documentation in
- `scripts/generate-files` for details about the supported
- configuration keys.
-
- If the hierarchy uses PGP-signed control messages, also put the
- PGP key into the `keys` directory in a file named after the
- hierarchy. Then, run:
-
- ```sh
- gpg --homedir=keyring --import keys/<hierarchy>
- ```
-
- to add the new key to the working keyring.
-
- The first user ID on the key must match the signer expected by the
- configuration data for the corresponding hierarchy. If a
- hierarchy administrator sets that up wrong (usually by putting
- additional key IDs on the key), this can be corrected by importing
- the key into a keyring with GnuPG, using `gpg --edit-key` to
- remove the offending user ID, and exporting the key again with
- `gpg --export --ascii`.
-
- When adding a new hierarchy, it's often useful to bootstrap the
- newsgroup list by importing the current checkgroups. To do this,
- obtain the checkgroups as a text file (containing only the groups
- without any news headers) and run:
-
- ```sh
- scripts/update-control checkgroups <hierarchy> < <checkgroups>
- ```
-
- where <hierarchy> is the hierarchy the checkgroups is for and
- <checkgroups> is the path to the checkgroups file.
+sections:
+ - title: Versioning
+ body: |
+ This package uses a three-part version number. The first number
+ will be incremented for major changes, major new functionality,
+ incompatible changes to the configuration format (more than just
+ adding new keys), or similar disruptive changes. For lesser
+ changes, the second number will be incremented for any change to the
+ code or functioning of the software. A change to the third part of
+ the version number indicates a release with changes only to the
+ configuration, PGP keys, and documentation files.
+ - title: Layout
+ body: |
+ The configuration data is in one file per hierarchy in the `config`
+ directory. Each file has the format specified in FORMAT and is
+ designed to be readable by INN's new configuration parser in case
+ this can be further automated down the road. The `config/special`
+ directory contains overrides, raw `control.ctl` fragments that
+ should be used for particular hierarchies instead of
+ automatically-generated entries (usually for special comments).
+ Eventually, the format should be extended to handle as many of these
+ cases as possible.
+
+ The `keys` directory contains the PGP public keys for every
+ hierarchy that has one. The user IDs on these keys must match the
+ signer expected by the configuration data for the corresponding
+ hierarchy.
+
+ The `forms` directory contains the basic file structure for the
+ three generated files.
+
+ The `scripts` directory contains all the software that generates the
+ configuration and documentation files, processes control messages,
+ updates the database, creates the newsgroup lists, and generates
+ reports. Most scripts in that directory have POD documentation
+ included at the end of the script, viewable by running perldoc on
+ the script.
+
+ The `templates` directory contains templates for the
+ `control-summary` script. These are the templates I use myself.
+ Other installations should customize them.
+
+ The `docs` directory contains the extra documentation files that are
+ distributed from ftp.isc.org in the control message archive and
+ newsgroup list directories, plus the DocKnot metadata for this
+ package.
+ - title: Installation
+ body: |
+ This software is set up to run from `/srv/control`. To use a
+ different location, edit the paths at the beginning of each of the
+ scripts in the `scripts` directory to use different paths. By
+ default, copying all the files from the distribution into a
+ `/srv/control` directory is almost all that's needed. An install
+ rule is provided to do this. To install the software, run:
+
+ ```sh
+ make install
+ ```
+
+ You will need write access to `/srv/control` or permission to create
+ it.
+
+ `process-control` and `generate-files` need a GnuPG keyring
+ containing all of the honored hierarchy keys. To generate this
+ keyring, run `make install` or:
+
+ ```sh
+ mkdir keyring
+ gpg --homedir=keyring --allow-non-selfsigned-uid --import keys/*
+ ```
+
+ from the top level of this distribution. `process-control` also
+ expects a `control.ctl` file in `/srv/control/control.ctl`, which
+ can be generated from the files included here (after creating the
+ keyring as described above) by running `make install` or:
+
+ ```sh
+ scripts/generate-files
+ ```
+
+ Both of these are done automatically as part of `make install`.
+ process-control expects `/srv/control/archive` to exist and archives
+ control messages there. It expects `/srv/control/tmp` to exist and
+ uses it for temporary files for GnuPG control message verification.
+
+ To process incoming control messages, you need to run
+ `process-control` on each message. `process-control` expects to
+ receive, on standard input, lines consisting of a path to a file, a
+ space, and a message ID. This input format is designed to work with
+ the tinyleaf server that comes with INN 2.5 and later, but it should
+ also work as a channel feed from pre-storage-API versions of INN
+ (1.x). It will not work without modification via a channel feed
+ from a current version of INN, since it doesn't understand the
+ storage API and doesn't know how to retrieve articles by tokens.
+ This could be easily added; I just haven't needed it.
+
+ If you're using tinyleaf, here is the setup process:
+
+ 1. Create a directory that tinyleaf will use to store incoming
+ articles temporarily, the archive directory, and the logs
+ directory and install the software:
+
+ ```sh
+ make install
+ ```
+
+ 2. Run tinyleaf on some port, configuring it to use that directory
+ and to run process-control. A typical tinyleaf command line
+ would be:
+
+ ```sh
+ tinyleaf /srv/control/spool /srv/control/scripts/process-control
+ ```
+
+ I run tinyleaf using systemd, but any inetd implementation should
+ work equally well.
+
+ 3. Set up a news feed to the system running tinyleaf that sends
+ control messages of interest. You should be careful not to send
+ cancel control messages or you'll get a ton of junk in your logs.
+ The INN newsfeeds entry I use is:
+
+ ```
+ isc-control:control,control.*,!control.cancel:Tf,Wnm:
+ ```
+
+ combined with nntpsend to send the articles.
+
+ That should be all there is to it. Watch the logs directory to see
+ what happens for incoming messages.
+
+ `scripts/process-control` just maintains a database file. To export
+ that data in a format that's useful for other software, run
+ `scripts/export-control`. This expects a `/srv/control/export`
+ directory into which it stores active and newsgroups files, a copy
+ of the `control.ctl` file, and all of the logs in a `LOGS`
+ subdirectory. This export directory can then be made available on
+ the web, copied to another system, or whatever else is appropriate.
+ Generally, `scripts/export-control` should be run periodically from
+ cron.
+
+ Reports can be generated using `scripts/control-summary`. This
+ script needs configuration before running; see the top of the script
+ and its included POD documentation. There is a sample template in
+ the `templates` directory, and `scripts/weekly-report` shows a
+ sample cron job for sending out a regular report.
+ - title: Bootstrapping
+ body: |
+ This package is intended to provide all of the tools, configuration,
+ and information required to duplicate the ftp.isc.org control
+ message archive and newsgroup list service if you so desire. To set
+ up a similar service based on that service, however, you will also
+ want to bootstrap from the existing data. Here is the procedure for
+ that:
+
+ 1. Be sure that you're starting from the latest software and set of
+ configuration files. I will generally try to make a new release
+ after committing a batch of changes, but I may not make a new
+ release after every change. See the sections below for
+ information about the Git repository in which this package is
+ maintained. You can always clone that repository to get the
+ latest configuration (and then merge or cherry-pick changes from
+ my repository into your repository as you desire).
+
+ 2. Download the current newsgroup list from:
+
+ ftp://ftp.isc.org/pub/usenet/CONFIG/newsgroups.bz2
+
+ and then bootstrap the database from it:
+
+ ```sh
+ bzip2 -dc newsgroups.bz2 | scripts/update-control bulkload
+ ```
+
+ 3. If you want the log information so that your reports will include
+ changes made in the ftp.isc.org archive before you created your
+ own, copy the contents of
+ ftp://ftp.isc.org/pub/usenet/CONFIG/LOGS/ into
+ `/srv/control/logs`.
+
+ 4. If you want to start with the existing control message
+ repository, download the contents of
+ ftp://ftp.isc.org/pub/usenet/control/ into
+ `/srv/control/archive`. You can do this using a recursive
+ download tool that understands FTP, such as wget, but please use
+ the options that add delays and don't hammer the server to death.
+
+ After finishing those steps, you will have a copy of the ftp.isc.org
+ archive and can start processing control messages, possibly with
+ different configuration choices. You can generate the files that
+ are found in ftp://ftp.isc.org/pub/usenet/CONFIG/ by running
+ `scripts/export-control` as described above.
+ - title: Maintenance
+ body: |
+ To add a new hierarchy, add a configuration fragment in the `config`
+ directory named after the hierarchy, following the format of the
+ existing files, and run `scripts/generate-files` to create a new
+ `control.ctl` file. See the documentation in
+ `scripts/generate-files` for details about the supported
+ configuration keys.
+
+ If the hierarchy uses PGP-signed control messages, also put the PGP
+ key into the `keys` directory in a file named after the hierarchy.
+ Then, run:
+
+ ```sh
+ gpg --homedir=keyring --import keys/<hierarchy>
+ ```
+
+ to add the new key to the working keyring.
+
+ The first user ID on the key must match the signer expected by the
+ configuration data for the corresponding hierarchy. If a hierarchy
+ administrator sets that up wrong (usually by putting additional key
+ IDs on the key), this can be corrected by importing the key into a
+ keyring with GnuPG, using `gpg --edit-key` to remove the offending
+ user ID, and exporting the key again with `gpg --export --ascii`.
+
+ When adding a new hierarchy, it's often useful to bootstrap the
+ newsgroup list by importing the current checkgroups. To do this,
+ obtain the checkgroups as a text file (containing only the groups
+ without any news headers) and run:
+
+ ```sh
+ scripts/update-control checkgroups <hierarchy> < <checkgroups>
+ ```
+
+ where <hierarchy> is the hierarchy the checkgroups is for and
+ <checkgroups> is the path to the checkgroups file.
diff --git a/t/data/generate/pam-krb5/docknot.yaml b/t/data/generate/pam-krb5/docknot.yaml
index 15373d8..a741190 100644
--- a/t/data/generate/pam-krb5/docknot.yaml
+++ b/t/data/generate/pam-krb5/docknot.yaml
@@ -109,380 +109,367 @@ description: |
Sourceforge PAM module that you're missing in this module, please let me
know.
-readme:
- sections:
- - title: Configuring
- body: |
- Just installing the module does not enable it or change anything
- about your system authentication configuration. To use the module
- for all system authentication on Debian systems, put something
- like:
-
- ```
- auth sufficient pam_krb5.so minimum_uid=1000
- auth required pam_unix.so try_first_pass nullok_secure
- ```
-
- in `/etc/pam.d/common-auth`, something like:
-
- ```
- session optional pam_krb5.so minimum_uid=1000
- session required pam_unix.so
- ```
-
- in `/etc/pam.d/common-session`, and something like:
-
- ```
- account required pam_krb5.so minimum_uid=1000
- account required pam_unix.so
- ```
-
- in `/etc/pam.d/common-account`. The `minimum_uid` setting tells
- the PAM module to pass on any users with a UID lower than 1000,
- thereby bypassing Kerberos authentication for the root account and
- any system accounts. You normally want to do this since
- otherwise, if the network is down, the Kerberos authentication can
- time out and make it difficult to log in as root and fix matters.
- This also avoids problems with Kerberos principals that happen to
- match system accounts accidentally getting access to those
- accounts.
-
- Be sure to include the module in the session group as well as the
- auth group. Without the session entry, the user's ticket cache
- will not be created properly for ssh logins (among possibly
- others).
-
- If your users should normally all use Kerberos passwords
- exclusively, putting something like:
-
- ```
- password sufficient pam_krb5.so minimum_uid=1000
- password required pam_unix.so try_first_pass obscure md5
- ```
-
- in `/etc/pam.d/common-password` will change users' passwords in
- Kerberos by default and then only fall back on Unix if that
- doesn't work. (You can make this tighter by using the more
- complex new-style PAM configuration.) If you instead want to
- synchronize local and Kerberos passwords and change them both at
- the same time, you can do something like:
-
- ```
- password required pam_unix.so obscure sha512
- password required pam_krb5.so use_authtok minimum_uid=1000
- ```
-
- If you have multiple environments that you want to synchronize and
- you don't want password changes to continue if the Kerberos
- password change fails, use the `clear_on_fail` option. For
- example:
-
- ```
- password required pam_krb5.so clear_on_fail minimum_uid=1000
- password required pam_unix.so use_authtok obscure sha512
- password required pam_smbpass.so use_authtok
- ```
-
- In this case, if `pam_krb5` cannot change the password (due to
- password strength rules on the KDC, for example), it will clear
- the stored password (because of the `clear_on_fail` option), and
- since `pam_unix` and `pam_smbpass` are both configured with
- `use_authtok`, they will both fail. `clear_on_fail` is not the
- default because it would interfere with the more common pattern of
- falling back to local passwords if the user doesn't exist in
- Kerberos.
-
- If you use a more complex configuration with the Linux PAM `[]`
- syntax for the session and account groups, note that `pam_krb5`
- returns a status of ignore, not success, if the user didn't log on
- with Kerberos. You may need to handle that explicitly with
- `ignore=ignore` in your action list.
-
- There are many, many other possibilities. See the Linux PAM
- documentation for all the configuration options.
-
- On Red Hat systems, modify `/etc/pam.d/system-auth` instead, which
- contains all of the configuration for the different stacks.
-
- You can also use pam-krb5 only for specific services. In that
- case, modify the files in `/etc/pam.d` for that particular service
- to use `pam_krb5.so` for authentication. For services that are
- using passwords over TLS to authenticate users, you may want to
- use the `ignore_k5login` and `no_ccache` options to the
- authenticate module. `.k5login` authorization is only meaningful
- for local accounts and ticket caches are usually (although not
- always) only useful for interactive sessions.
-
- Configuring the module for Solaris is both simpler and less
- flexible, since Solaris (at least Solaris 8 and 9, which are the
- last versions of Solaris with which this module was extensively
- tested) use a single `/etc/pam.conf` file that contains
- configuration for all programs. For console login on Solaris, try
- something like:
-
- ```
- login auth sufficient /usr/local/lib/security/pam_krb5.so minimum_uid=100
- login auth required /usr/lib/security/pam_unix_auth.so.1 use_first_pass
- login account required /usr/local/lib/security/pam_krb5.so minimum_uid=100
- login account required /usr/lib/security/pam_unix_account.so.1
- login session required /usr/local/lib/security/pam_krb5.so retain_after_close minimum_uid=100
- login session required /usr/lib/security/pam_unix_session.so.1
- ```
-
- A similar configuration could be used for other services, such as
- ssh. See the pam.conf(5) man page for more information. When
- using this module with Solaris login (at least on Solaris 8 and
- 9), you will probably also need to add `retain_after_close` to the
- PAM configuration to avoid having the user's credentials deleted
- before they are logged in.
-
- The Solaris Kerberos library reportedly does not support prompting
- for a password change of an expired account during authentication.
- Supporting password change for expired accounts on Solaris with
- native Kerberos may therefore require setting the `defer_pwchange`
- or `force_pwchange` option for selected login applications. See
- the description and warnings about that option in the pam_krb5(5)
- man page.
-
- Some configuration options may be put in the `krb5.conf` file used
- by your Kerberos libraries (usually `/etc/krb5.conf` or
- `/usr/local/etc/krb5.conf`) instead or in addition to the PAM
- configuration. See the man page for more details.
-
- The Kerberos library, via pam-krb5, will prompt the user to change
- their password if their password is expired, but when using
- OpenSSH, this will only work when
- `ChallengeResponseAuthentication` is enabled. Unless this option
- is enabled, OpenSSH doesn't pass PAM messages to the user and can
- only respond to a simple password prompt.
-
- If you are using MIT Kerberos, be aware that users whose passwords
- are expired will not be prompted to change their password unless
- the KDC configuration for your realm in `[realms]` in `krb5.conf`
- contains a `master_kdc` setting or, if using DNS SRV records, you
- have a DNS entry for `_kerberos-master` as well as `_kerberos`.
- - title: Debugging
- body: |
- The first step when debugging any problems with this module is to
- add `debug` to the PAM options for the module (either in the PAM
- configuration or in `krb5.conf`). This will significantly
- increase the logging from the module and should provide a trace of
- exactly what failed and any available error information.
-
- Many Kerberos authentication problems are due to configuration
- issues in `krb5.conf`. If pam-krb5 doesn't work, first check that
- `kinit` works on the same system. That will test your basic
- Kerberos configuration. If the system has a keytab file installed
- that's readable by the process doing authentication via PAM, make
- sure that the keytab is current and contains a key for
- `host/<system>` where <system> is the fully-qualified hostname.
- pam-krb5 prevents KDC spoofing by checking the user's credentials
- when possible, but this means that if a keytab is present it must
- be correct or authentication will fail. You can check the keytab
- with `klist -k` and `kinit -k`.
-
- Be sure that all libraries and modules, including PAM modules,
- loaded by a program use the same Kerberos libraries. Sometimes
- programs that use PAM, such as current versions of OpenSSH, also
- link against Kerberos directly. If your sshd is linked against
- one set of Kerberos libraries and pam-krb5 is linked against a
- different set of Kerberos libraries, this will often cause
- problems (such as segmentation faults, bus errors, assertions, or
- other strange behavior). Similar issues apply to the com_err
- library or any other library used by both modules and shared
- libraries and by the application that loads them. If your OS
- ships Kerberos libraries, it's usually best if possible to build
- all Kerberos software on the system against those libraries.
- - title: Implementation Notes
- body: |
- The normal sequence of actions taken for a user login is:
-
- ```
- pam_authenticate
- pam_setcred(PAM_ESTABLISH_CRED)
- pam_open_session
- pam_acct_mgmt
- ```
-
- and then at logout:
-
- ```
- pam_close_session
- ```
-
- followed by closing the open PAM session. The corresponding
- `pam_sm_*` functions in this module are called when an application
- calls those public interface functions. Not all applications call
- all of those functions, or in particularly that order, although
- `pam_authenticate` is always first and has to be.
-
- When `pam_authenticate` is called, pam-krb5 creates a temporary
- ticket cache in `/tmp` and sets the PAM environment variable
- `PAM_KRB5CCNAME` to point to it. This ticket cache will be
- automatically destroyed when the PAM session is closed and is
- there only to pass the initial credentials to the call to
- `pam_setcred`. The module would use a memory cache, but memory
- caches will only work if the application preserves the PAM
- environment between the calls to `pam_authenticate` and
- `pam_setcred`. Most do, but OpenSSH notoriously does not and
- calls `pam_authenticate` in a subprocess, so this method is used
- to pass the tickets to the `pam_setcred` call in a different
- process.
-
- `pam_authenticate` does a complete authentication, including
- checking the resulting TGT by obtaining a service ticket for the
- local host if possible, but this requires read access to the
- system keytab. If the keytab doesn't exist, can't be read, or
- doesn't include the appropriate credentials, the default is to
- accept the authentication. This can be controlled by setting
- `verify_ap_req_nofail` to true in `[libdefaults]` in
- `/etc/krb5.conf`. `pam_authenticate` also does a basic
- authorization check, by default calling `krb5_kuserok` (which uses
- `~/.k5login` if available and falls back to checking that the
- principal corresponds to the account name). This can be
- customized with several options documented in the pam_krb5(5) man
- page.
-
- pam-krb5 treats `pam_open_session` and
- `pam_setcred(PAM_ESTABLISH_CRED)` as synonymous, as some
- applications call one and some call the other. Both copy the
- initial credentials from the temporary cache into a permanent
- cache for this session and set `KRB5CCNAME` in the environment.
- It will remember when the credential cache has been established
- and then avoid doing any duplicate work afterwards, since some
- applications call `pam_setcred` or `pam_open_session` multiple
- times (most notably X.Org 7 and earlier xdm, which also throws
- away the module settings the last time it calls them).
-
- `pam_acct_mgmt` finds the ticket cache, reads it in to obtain the
- authenticated principal, and then does is another authorization
- check against `.k5login` or the local account name as described
- above.
-
- After the call to `pam_setcred` or `pam_open_session`, the ticket
- cache will be destroyed whenever the calling application either
- destroys the PAM environment or calls `pam_close_session`, which
- it should do on user logout.
-
- The normal sequence of events when refreshing a ticket cache (such
- as inside a screensaver) is:
-
- ```
- pam_authenticate
- pam_setcred(PAM_REINITIALIZE_CRED)
- pam_acct_mgmt
- ```
-
- (`PAM_REFRESH_CRED` may be used instead.) Authentication proceeds
- as above. At the `pam_setcred` stage, rather than creating a new
- ticket cache, the module instead finds the current ticket cache
- (from the `KRB5CCNAME` environment variable or the default ticket
- cache location from the Kerberos library) and then reinitializes
- it with the credentials from the temporary `pam_authenticate`
- ticket cache. When refreshing a ticket cache, the application
- should not open a session. Calling `pam_acct_mgmt` is optional;
- pam-krb5 doesn't do anything different when it's called in this
- case.
-
- If `pam_authenticate` apparently didn't succeed, or if an account
- was configured to be ignored via `ignore_root` or `minimum_uid`,
- `pam_setcred` (and therefore `pam_open_session`) and
- `pam_acct_mgmt` return `PAM_IGNORE`, which tells the PAM library
- to proceed as if that module wasn't listed in the PAM
- configuration at all. `pam_authenticate`, however, returns
- failure in the ignored user case by default, since otherwise a
- configuration using `ignore_root` with pam-krb5 as the only PAM
- module would allow anyone to log in as root without a password.
- There doesn't appear to be a case where returning `PAM_IGNORE`
- instead would improve the module's behavior, but if you know of a
- case, please let me know.
-
- By default, `pam_authenticate` intentionally does not follow the
- PAM standard for handling expired accounts and instead returns
- failure from `pam_authenticate` unless the Kerberos libraries are
- able to change the account password during authentication. Too
- many applications either do not call `pam_acct_mgmt` or ignore its
- exit status. The fully correct PAM behavior (returning success
- from `pam_authenticate` and `PAM_NEW_AUTHTOK_REQD` from
- `pam_acct_mgmt`) can be enabled with the `defer_pwchange` option.
-
- The `defer_pwchange` option is unfortunately somewhat tricky to
- implement. In this case, the calling sequence is:
-
- ```
- pam_authenticate
- pam_acct_mgmt
- pam_chauthtok
- pam_setcred
- pam_open_session
- ```
-
- During the first `pam_authenticate`, we can't obtain credentials
- and therefore a ticket cache since the password is expired. But
- `pam_authenticate` isn't called again after `pam_chauthtok`, so
- `pam_chauthtok` has to create a ticket cache. We however don't
- want it to do this for the normal password change (`passwd`) case.
-
- What we do is set a flag in our PAM data structure saying that
- we're processing an expired password, and `pam_chauthtok`, if it
- sees that flag, redoes the authentication with password prompting
- disabled after it finishes changing the password.
-
- Unfortunately, when handling password changes this way,
- `pam_chauthtok` will always have to prompt the user for their
- current password again even though they just typed it. This is
- because the saved authentication tokens are cleared after
- `pam_authenticate` returns, for security reasons. We could hack
- around this by saving the password in our PAM data structure, but
- this would let the application gain access to it (exactly what the
- clearing is intended to prevent) and breaks a PAM library
- guarantee. We could also work around this by having
- `pam_authenticate` get the `kadmin/changepw` authenticator in the
- expired password case and store it for `pam_chauthtok`, but it
- doesn't seem worth the hassle.
- - title: History and Acknowledgements
- body: |
- Originally written by
- Frank Cusack <fcusack@fcusack.com>, with the following
- acknowledgement:
-
- > Thanks to Naomaru Itoi <itoi@eecs.umich.edu>, Curtis King
- > <curtis.king@cul.ca>, and Derrick Brashear
- > <shadow@dementia.org>, all of whom have written and made
- > available Kerberos 4/5 modules. Although no code in this module
- > is directly from these author's modules, (except the
- > get_user_info() routine in support.c; derived from whichever of
- > these authors originally wrote the first module the other 2
- > copied from), it was extremely helpful to look over their code
- > which aided in my design.
-
- The module was then patched for the FreeBSD ports collection with
- additional modifications by unknown maintainers and then was
- modified by Joel Kociolek <joko@logidee.com> to be usable with
- Debian GNU/Linux.
-
- It was packaged by Sam Hartman as the Kerberos v5 PAM module for
- Debian and improved and modified by him and later by Russ Allbery
- to fix bugs and add additional features. It was then adopted by
- Andres Salomon, who added support for refreshing credentials.
-
- The current distribution is maintained by Russ Allbery, who also
- added support for reading configuration from `krb5.conf`, added
- many features for compatibility with the Sourceforge module,
- commented and standardized the formatting of the code, and
- overhauled the documentation.
-
- Thanks to Douglas E. Engert for the initial implementation of
- PKINIT support. I have since modified and reworked it
- extensively, so any bugs or compilation problems are my fault.
-
- Thanks to Markus Moeller for lots of debugging and multiple
- patches and suggestions for improved portability.
-
- Thanks to Booker Bense for the implementation of the
- `alt_auth_map` option.
-
- Thanks to Sam Hartman for the FAST support implementation.
+sections:
+ - title: Configuring
+ body: |
+ Just installing the module does not enable it or change anything
+ about your system authentication configuration. To use the module
+ for all system authentication on Debian systems, put something like:
+
+ ```
+ auth sufficient pam_krb5.so minimum_uid=1000
+ auth required pam_unix.so try_first_pass nullok_secure
+ ```
+
+ in `/etc/pam.d/common-auth`, something like:
+
+ ```
+ session optional pam_krb5.so minimum_uid=1000
+ session required pam_unix.so
+ ```
+
+ in `/etc/pam.d/common-session`, and something like:
+
+ ```
+ account required pam_krb5.so minimum_uid=1000
+ account required pam_unix.so
+ ```
+
+ in `/etc/pam.d/common-account`. The `minimum_uid` setting tells the
+ PAM module to pass on any users with a UID lower than 1000, thereby
+ bypassing Kerberos authentication for the root account and any
+ system accounts. You normally want to do this since otherwise, if
+ the network is down, the Kerberos authentication can time out and
+ make it difficult to log in as root and fix matters. This also
+ avoids problems with Kerberos principals that happen to match system
+ accounts accidentally getting access to those accounts.
+
+ Be sure to include the module in the session group as well as the
+ auth group. Without the session entry, the user's ticket cache will
+ not be created properly for ssh logins (among possibly others).
+
+ If your users should normally all use Kerberos passwords
+ exclusively, putting something like:
+
+ ```
+ password sufficient pam_krb5.so minimum_uid=1000
+ password required pam_unix.so try_first_pass obscure md5
+ ```
+
+ in `/etc/pam.d/common-password` will change users' passwords in
+ Kerberos by default and then only fall back on Unix if that doesn't
+ work. (You can make this tighter by using the more complex
+ new-style PAM configuration.) If you instead want to synchronize
+ local and Kerberos passwords and change them both at the same time,
+ you can do something like:
+
+ ```
+ password required pam_unix.so obscure sha512
+ password required pam_krb5.so use_authtok minimum_uid=1000
+ ```
+
+ If you have multiple environments that you want to synchronize and
+ you don't want password changes to continue if the Kerberos password
+ change fails, use the `clear_on_fail` option. For example:
+
+ ```
+ password required pam_krb5.so clear_on_fail minimum_uid=1000
+ password required pam_unix.so use_authtok obscure sha512
+ password required pam_smbpass.so use_authtok
+ ```
+
+ In this case, if `pam_krb5` cannot change the password (due to
+ password strength rules on the KDC, for example), it will clear the
+ stored password (because of the `clear_on_fail` option), and since
+ `pam_unix` and `pam_smbpass` are both configured with `use_authtok`,
+ they will both fail. `clear_on_fail` is not the default because it
+ would interfere with the more common pattern of falling back to
+ local passwords if the user doesn't exist in Kerberos.
+
+ If you use a more complex configuration with the Linux PAM `[]`
+ syntax for the session and account groups, note that `pam_krb5`
+ returns a status of ignore, not success, if the user didn't log on
+ with Kerberos. You may need to handle that explicitly with
+ `ignore=ignore` in your action list.
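+
+      As an illustrative sketch only (the action values are placeholders
+      to adapt to your own policy, not a recommendation), an account
+      stack using the bracket syntax might look like:
+
+      ```
+      account [success=ok ignore=ignore default=bad] pam_krb5.so minimum_uid=1000
+      account required pam_unix.so
+      ```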
+
+ There are many, many other possibilities. See the Linux PAM
+ documentation for all the configuration options.
+
+ On Red Hat systems, modify `/etc/pam.d/system-auth` instead, which
+ contains all of the configuration for the different stacks.
+
+ You can also use pam-krb5 only for specific services. In that case,
+ modify the files in `/etc/pam.d` for that particular service to use
+ `pam_krb5.so` for authentication. For services that are using
+ passwords over TLS to authenticate users, you may want to use the
+ `ignore_k5login` and `no_ccache` options to the authenticate module.
+ `.k5login` authorization is only meaningful for local accounts and
+ ticket caches are usually (although not always) only useful for
+ interactive sessions.
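+
+      For example, a hypothetical `/etc/pam.d/imap` for a service that
+      verifies passwords over TLS might contain something like (the
+      service name is only an example):
+
+      ```
+      auth sufficient pam_krb5.so ignore_k5login no_ccache minimum_uid=1000
+      auth required pam_unix.so try_first_pass
+      ```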
+
+ Configuring the module for Solaris is both simpler and less
+ flexible, since Solaris (at least Solaris 8 and 9, which are the
+ last versions of Solaris with which this module was extensively
+      tested) uses a single `/etc/pam.conf` file that contains
+ configuration for all programs. For console login on Solaris, try
+ something like:
+
+ ```
+ login auth sufficient /usr/local/lib/security/pam_krb5.so minimum_uid=100
+ login auth required /usr/lib/security/pam_unix_auth.so.1 use_first_pass
+ login account required /usr/local/lib/security/pam_krb5.so minimum_uid=100
+ login account required /usr/lib/security/pam_unix_account.so.1
+ login session required /usr/local/lib/security/pam_krb5.so retain_after_close minimum_uid=100
+ login session required /usr/lib/security/pam_unix_session.so.1
+ ```
+
+ A similar configuration could be used for other services, such as
+ ssh. See the pam.conf(5) man page for more information. When using
+ this module with Solaris login (at least on Solaris 8 and 9), you
+ will probably also need to add `retain_after_close` to the PAM
+ configuration to avoid having the user's credentials deleted before
+ they are logged in.
+
+ The Solaris Kerberos library reportedly does not support prompting
+ for a password change of an expired account during authentication.
+ Supporting password change for expired accounts on Solaris with
+ native Kerberos may therefore require setting the `defer_pwchange`
+ or `force_pwchange` option for selected login applications. See the
+ description and warnings about that option in the pam_krb5(5) man
+ page.
+
+ Some configuration options may be put in the `krb5.conf` file used
+ by your Kerberos libraries (usually `/etc/krb5.conf` or
+      `/usr/local/etc/krb5.conf`) instead of or in addition to the PAM
+ configuration. See the man page for more details.
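+
+      For instance, a sketch of setting a couple of options in the
+      `[appdefaults]` section of `krb5.conf` (see the pam_krb5(5) man
+      page for which options may be set this way):
+
+      ```
+      [appdefaults]
+          pam = {
+              minimum_uid = 1000
+              ignore_root = true
+          }
+      ```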
+
+ The Kerberos library, via pam-krb5, will prompt the user to change
+ their password if their password is expired, but when using OpenSSH,
+ this will only work when `ChallengeResponseAuthentication` is
+ enabled. Unless this option is enabled, OpenSSH doesn't pass PAM
+ messages to the user and can only respond to a simple password
+ prompt.
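+
+      A minimal `sshd_config` fragment for this (assuming sshd should
+      also use PAM) would be something like:
+
+      ```
+      ChallengeResponseAuthentication yes
+      UsePAM yes
+      ```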
+
+ If you are using MIT Kerberos, be aware that users whose passwords
+ are expired will not be prompted to change their password unless the
+ KDC configuration for your realm in `[realms]` in `krb5.conf`
+ contains a `master_kdc` setting or, if using DNS SRV records, you
+ have a DNS entry for `_kerberos-master` as well as `_kerberos`.
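+
+      For example, with a hypothetical realm and KDC host name, the
+      `[realms]` entry might look like:
+
+      ```
+      [realms]
+          EXAMPLE.COM = {
+              kdc = kerberos.example.com
+              master_kdc = kerberos.example.com
+          }
+      ```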
+ - title: Debugging
+ body: |
+ The first step when debugging any problems with this module is to
+ add `debug` to the PAM options for the module (either in the PAM
+ configuration or in `krb5.conf`). This will significantly increase
+ the logging from the module and should provide a trace of exactly
+ what failed and any available error information.
+
+ Many Kerberos authentication problems are due to configuration
+ issues in `krb5.conf`. If pam-krb5 doesn't work, first check that
+ `kinit` works on the same system. That will test your basic
+ Kerberos configuration. If the system has a keytab file installed
+ that's readable by the process doing authentication via PAM, make
+ sure that the keytab is current and contains a key for
+ `host/<system>` where <system> is the fully-qualified hostname.
+ pam-krb5 prevents KDC spoofing by checking the user's credentials
+ when possible, but this means that if a keytab is present it must be
+ correct or authentication will fail. You can check the keytab with
+ `klist -k` and `kinit -k`.
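+
+      For example (the principal here is only a placeholder; use the
+      fully-qualified name of your host):
+
+      ```
+      klist -k
+      kinit -k host/server.example.com
+      ```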
+
+ Be sure that all libraries and modules, including PAM modules,
+ loaded by a program use the same Kerberos libraries. Sometimes
+ programs that use PAM, such as current versions of OpenSSH, also
+ link against Kerberos directly. If your sshd is linked against one
+ set of Kerberos libraries and pam-krb5 is linked against a different
+ set of Kerberos libraries, this will often cause problems (such as
+ segmentation faults, bus errors, assertions, or other strange
+ behavior). Similar issues apply to the com_err library or any other
+ library used by both modules and shared libraries and by the
+ application that loads them. If your OS ships Kerberos libraries,
+ it's usually best if possible to build all Kerberos software on the
+ system against those libraries.
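+
+      One rough way to check for such a mismatch on Linux (the module
+      path varies by distribution and is only an example) is to compare
+      the Kerberos libraries each object is linked against:
+
+      ```
+      ldd /usr/sbin/sshd | grep -i krb5
+      ldd /lib/security/pam_krb5.so | grep -i krb5
+      ```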
+ - title: Implementation Notes
+ body: |
+ The normal sequence of actions taken for a user login is:
+
+ ```
+ pam_authenticate
+ pam_setcred(PAM_ESTABLISH_CRED)
+ pam_open_session
+ pam_acct_mgmt
+ ```
+
+ and then at logout:
+
+ ```
+ pam_close_session
+ ```
+
+ followed by closing the open PAM session. The corresponding
+ `pam_sm_*` functions in this module are called when an application
+ calls those public interface functions. Not all applications call
+      all of those functions, or necessarily in that order, although
+ `pam_authenticate` is always first and has to be.
+
+ When `pam_authenticate` is called, pam-krb5 creates a temporary
+ ticket cache in `/tmp` and sets the PAM environment variable
+ `PAM_KRB5CCNAME` to point to it. This ticket cache will be
+ automatically destroyed when the PAM session is closed and is there
+ only to pass the initial credentials to the call to `pam_setcred`.
+ The module would use a memory cache, but memory caches will only
+ work if the application preserves the PAM environment between the
+ calls to `pam_authenticate` and `pam_setcred`. Most do, but OpenSSH
+ notoriously does not and calls `pam_authenticate` in a subprocess,
+ so this method is used to pass the tickets to the `pam_setcred` call
+ in a different process.
+
+ `pam_authenticate` does a complete authentication, including
+ checking the resulting TGT by obtaining a service ticket for the
+ local host if possible, but this requires read access to the system
+ keytab. If the keytab doesn't exist, can't be read, or doesn't
+ include the appropriate credentials, the default is to accept the
+ authentication. This can be controlled by setting
+ `verify_ap_req_nofail` to true in `[libdefaults]` in
+ `/etc/krb5.conf`. `pam_authenticate` also does a basic
+ authorization check, by default calling `krb5_kuserok` (which uses
+ `~/.k5login` if available and falls back to checking that the
+ principal corresponds to the account name). This can be customized
+ with several options documented in the pam_krb5(5) man page.
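+
+      For example, to make authentication fail when the TGT cannot be
+      verified against a keytab, set:
+
+      ```
+      [libdefaults]
+          verify_ap_req_nofail = true
+      ```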
+
+ pam-krb5 treats `pam_open_session` and
+ `pam_setcred(PAM_ESTABLISH_CRED)` as synonymous, as some
+ applications call one and some call the other. Both copy the
+ initial credentials from the temporary cache into a permanent cache
+ for this session and set `KRB5CCNAME` in the environment. It will
+ remember when the credential cache has been established and then
+ avoid doing any duplicate work afterwards, since some applications
+ call `pam_setcred` or `pam_open_session` multiple times (most
+ notably X.Org 7 and earlier xdm, which also throws away the module
+ settings the last time it calls them).
+
+ `pam_acct_mgmt` finds the ticket cache, reads it in to obtain the
+      authenticated principal, and then does another authorization
+ check against `.k5login` or the local account name as described
+ above.
+
+ After the call to `pam_setcred` or `pam_open_session`, the ticket
+ cache will be destroyed whenever the calling application either
+ destroys the PAM environment or calls `pam_close_session`, which it
+ should do on user logout.
+
+ The normal sequence of events when refreshing a ticket cache (such
+ as inside a screensaver) is:
+
+ ```
+ pam_authenticate
+ pam_setcred(PAM_REINITIALIZE_CRED)
+ pam_acct_mgmt
+ ```
+
+ (`PAM_REFRESH_CRED` may be used instead.) Authentication proceeds
+ as above. At the `pam_setcred` stage, rather than creating a new
+ ticket cache, the module instead finds the current ticket cache
+ (from the `KRB5CCNAME` environment variable or the default ticket
+ cache location from the Kerberos library) and then reinitializes it
+ with the credentials from the temporary `pam_authenticate` ticket
+ cache. When refreshing a ticket cache, the application should not
+ open a session. Calling `pam_acct_mgmt` is optional; pam-krb5
+ doesn't do anything different when it's called in this case.
+
+ If `pam_authenticate` apparently didn't succeed, or if an account
+ was configured to be ignored via `ignore_root` or `minimum_uid`,
+ `pam_setcred` (and therefore `pam_open_session`) and `pam_acct_mgmt`
+ return `PAM_IGNORE`, which tells the PAM library to proceed as if
+ that module wasn't listed in the PAM configuration at all.
+ `pam_authenticate`, however, returns failure in the ignored user
+ case by default, since otherwise a configuration using `ignore_root`
+ with pam-krb5 as the only PAM module would allow anyone to log in as
+ root without a password. There doesn't appear to be a case where
+ returning `PAM_IGNORE` instead would improve the module's behavior,
+ but if you know of a case, please let me know.
+
+ By default, `pam_authenticate` intentionally does not follow the PAM
+ standard for handling expired accounts and instead returns failure
+ from `pam_authenticate` unless the Kerberos libraries are able to
+ change the account password during authentication. Too many
+ applications either do not call `pam_acct_mgmt` or ignore its exit
+ status. The fully correct PAM behavior (returning success from
+ `pam_authenticate` and `PAM_NEW_AUTHTOK_REQD` from `pam_acct_mgmt`)
+ can be enabled with the `defer_pwchange` option.
+
+ The `defer_pwchange` option is unfortunately somewhat tricky to
+ implement. In this case, the calling sequence is:
+
+ ```
+ pam_authenticate
+ pam_acct_mgmt
+ pam_chauthtok
+ pam_setcred
+ pam_open_session
+ ```
+
+ During the first `pam_authenticate`, we can't obtain credentials and
+ therefore a ticket cache since the password is expired. But
+ `pam_authenticate` isn't called again after `pam_chauthtok`, so
+ `pam_chauthtok` has to create a ticket cache. We however don't want
+ it to do this for the normal password change (`passwd`) case.
+
+ What we do is set a flag in our PAM data structure saying that we're
+ processing an expired password, and `pam_chauthtok`, if it sees that
+ flag, redoes the authentication with password prompting disabled
+ after it finishes changing the password.
+
+ Unfortunately, when handling password changes this way,
+ `pam_chauthtok` will always have to prompt the user for their
+ current password again even though they just typed it. This is
+ because the saved authentication tokens are cleared after
+ `pam_authenticate` returns, for security reasons. We could hack
+ around this by saving the password in our PAM data structure, but
+ this would let the application gain access to it (exactly what the
+ clearing is intended to prevent) and breaks a PAM library guarantee.
+ We could also work around this by having `pam_authenticate` get the
+ `kadmin/changepw` authenticator in the expired password case and
+ store it for `pam_chauthtok`, but it doesn't seem worth the hassle.
+ - title: History and Acknowledgements
+ body: |
+ Originally written by Frank Cusack <fcusack@fcusack.com>, with the
+ following acknowledgement:
+
+ > Thanks to Naomaru Itoi <itoi@eecs.umich.edu>, Curtis King
+ > <curtis.king@cul.ca>, and Derrick Brashear <shadow@dementia.org>,
+ > all of whom have written and made available Kerberos 4/5 modules.
+ > Although no code in this module is directly from these author's
+ > modules, (except the get_user_info() routine in support.c; derived
+ > from whichever of these authors originally wrote the first module
+ > the other 2 copied from), it was extremely helpful to look over
+ > their code which aided in my design.
+
+ The module was then patched for the FreeBSD ports collection with
+ additional modifications by unknown maintainers and then was
+ modified by Joel Kociolek <joko@logidee.com> to be usable with
+ Debian GNU/Linux.
+
+ It was packaged by Sam Hartman as the Kerberos v5 PAM module for
+ Debian and improved and modified by him and later by Russ Allbery to
+ fix bugs and add additional features. It was then adopted by Andres
+ Salomon, who added support for refreshing credentials.
+
+ The current distribution is maintained by Russ Allbery, who also
+ added support for reading configuration from `krb5.conf`, added many
+ features for compatibility with the Sourceforge module, commented
+ and standardized the formatting of the code, and overhauled the
+ documentation.
+
+ Thanks to Douglas E. Engert for the initial implementation of PKINIT
+ support. I have since modified and reworked it extensively, so any
+ bugs or compilation problems are my fault.
+
+ Thanks to Markus Moeller for lots of debugging and multiple patches
+ and suggestions for improved portability.
+
+ Thanks to Booker Bense for the implementation of the `alt_auth_map`
+ option.
+
+ Thanks to Sam Hartman for the FAST support implementation.
requirements: |
Either MIT Kerberos (or Kerberos implementations based on it) or Heimdal
diff --git a/t/data/generate/pam-krb5/output/readme-md b/t/data/generate/pam-krb5/output/readme-md
index 44f9c36..9834f04 100644
--- a/t/data/generate/pam-krb5/output/readme-md
+++ b/t/data/generate/pam-krb5/output/readme-md
@@ -140,11 +140,11 @@ you need to specify a different Kerberos installation root via
You can also individually set the paths to the include directory and the
library directory with `--with-krb5-include` and `--with-krb5-lib`. You
-may need to do this if Autoconf can't figure out whether to use lib,
-lib32, or lib64 on your platform.
+may need to do this if Autoconf can't figure out whether to use `lib`,
+`lib32`, or `lib64` on your platform.
-To not use krb5-config and force library probing even if there is a
-krb5-config script on your path, set PATH_KRB5_CONFIG to a nonexistent
+To not use `krb5-config` and force library probing even if there is a
+`krb5-config` script on your path, set `PATH_KRB5_CONFIG` to a nonexistent
path:
```
diff --git a/t/data/generate/remctl/docknot.yaml b/t/data/generate/remctl/docknot.yaml
index 7a9d1d4..bf459f2 100644
--- a/t/data/generate/remctl/docknot.yaml
+++ b/t/data/generate/remctl/docknot.yaml
@@ -258,43 +258,42 @@ description: |
Also present, as `docs/design.html`, is the original design document (now
somewhat out of date).
-readme:
- sections:
- - title: Building on Windows
- body: |
- (These instructions are not tested by the author and are now
- dated. Updated instructions via a pull request, issue, or email
- are very welcome.)
-
- First, install the Microsoft Windows SDK for Windows Vista if you
- have not already. This is a free download from Microsoft for
- users of "Genuine Microsoft Windows." The `vcvars32.bat`
- environment provided by Visual Studio may work as an alternative,
- but has not been tested.
-
- Next, install the [MIT Kerberos for Windows
- SDK](https://web.mit.edu/kerberos/www/dist/index.html). remctl
- has been tested with version 3.2.1 but should hopefully work with
- later versions.
-
- Then, follow these steps:
-
- 1. Run the `InitEnv.cmd` script included with the Windows SDK with
- parameters `"/xp /release"`.
-
- 2. Run the `configure.bat` script, giving it as an argument the
- location of the Kerberos for Windows SDK. For example, if you
- installed the KfW SDK in `"c:\KfW SDK"`, you should run:
-
- ```
- configure "c:\KfW SDK"
- ```
-
- 3. Run `nmake` to start compiling. You can ignore the warnings.
-
- If all goes well, you will have `remctl.exe` and `remctl.dll`.
- The latter is a shared library used by the client program. It
- exports the same interface as the UNIX libremctl library.
+sections:
+ - title: Building on Windows
+ body: |
+ (These instructions are not tested by the author and are now dated.
+ Updated instructions via a pull request, issue, or email are very
+ welcome.)
+
+ First, install the Microsoft Windows SDK for Windows Vista if you
+ have not already. This is a free download from Microsoft for users
+ of "Genuine Microsoft Windows." The `vcvars32.bat` environment
+ provided by Visual Studio may work as an alternative, but has not
+ been tested.
+
+ Next, install the [MIT Kerberos for Windows
+ SDK](https://web.mit.edu/kerberos/www/dist/index.html). remctl has
+ been tested with version 3.2.1 but should hopefully work with later
+ versions.
+
+ Then, follow these steps:
+
+ 1. Run the `InitEnv.cmd` script included with the Windows SDK with
+ parameters `"/xp /release"`.
+
+ 2. Run the `configure.bat` script, giving it as an argument the
+ location of the Kerberos for Windows SDK. For example, if you
+ installed the KfW SDK in `"c:\KfW SDK"`, you should run:
+
+ ```
+ configure "c:\KfW SDK"
+ ```
+
+ 3. Run `nmake` to start compiling. You can ignore the warnings.
+
+ If all goes well, you will have `remctl.exe` and `remctl.dll`. The
+ latter is a shared library used by the client program. It exports
+ the same interface as the UNIX libremctl library.
requirements: |
The remctld server and the standard client are written in C and require a
diff --git a/t/data/generate/remctl/output/readme-md b/t/data/generate/remctl/output/readme-md
index b242a97..20b5e37 100644
--- a/t/data/generate/remctl/output/readme-md
+++ b/t/data/generate/remctl/output/readme-md
@@ -246,11 +246,11 @@ you need to specify a different Kerberos installation root via
You can also individually set the paths to the include directory and the
library directory with `--with-krb5-include` and `--with-krb5-lib`. You
-may need to do this if Autoconf can't figure out whether to use lib,
-lib32, or lib64 on your platform.
+may need to do this if Autoconf can't figure out whether to use `lib`,
+`lib32`, or `lib64` on your platform.
-To not use krb5-config and force library probing even if there is a
-krb5-config script on your path, set PATH_KRB5_CONFIG to a nonexistent
+To not use `krb5-config` and force library probing even if there is a
+`krb5-config` script on your path, set `PATH_KRB5_CONFIG` to a nonexistent
path:
```
diff --git a/t/data/generate/rra-c-util/docknot.yaml b/t/data/generate/rra-c-util/docknot.yaml
index ad5abf5..a4ea326 100644
--- a/t/data/generate/rra-c-util/docknot.yaml
+++ b/t/data/generate/rra-c-util/docknot.yaml
@@ -17,8 +17,17 @@ build:
autoconf: '2.64'
automake: '1.11'
autotools: true
+ gssapi: true
+ install: false
+ kerberos: true
lancaster: true
manpages: true
+ middle: |
+ Pass `--enable-kafs` to configure to attempt to build kafs support,
+ which will use either an existing libkafs or libkopenafs library or
+ build the kafs replacement included in this package. You can also add
+ `--without-libkafs` to force the use of the internal kafs replacement.
+ type: Autoconf
distribution:
section: devel
tarname: rra-c-util
@@ -104,203 +113,117 @@ description: |
pulled from [C TAP
Harness](https://www.eyrie.org/~eagle/software/c-tap-harness/) instead.
-readme:
- sections:
- - title: Building
- body: |
- You can build rra-c-util with:
-
- ```
- ./configure
- make
- ```
-
- Pass `--enable-kafs` to configure to attempt to build kafs
- support, which will use either an existing libkafs or libkopenafs
- library or build the kafs replacement included in this package.
- You can also add `--without-libkafs` to force the use of the
- internal kafs replacement.
-
- Pass `--enable-silent-rules` to configure for a quieter build
- (similar to the Linux kernel). Use `make warnings` instead of
- make to build with full GCC compiler warnings (requires a
- relatively current version of GCC).
-
- Normally, configure will use `krb5-config` to determine the flags
- to use to compile with your Kerberos libraries. If `krb5-config`
- isn't found, it will look for the standard Kerberos libraries in
- locations already searched by your compiler. If the the
- `krb5-config` script first in your path is not the one
- corresponding to the Kerberos libraries you want to use or if your
- Kerberos libraries and includes aren't in a location searched by
- default by your compiler, you need to specify a different Kerberos
- installation root via `--with-krb5=PATH`. For example:
-
- ```
- ./configure --with-krb5=/usr/pubsw
- ```
-
- You can also individually set the paths to the include directory
- and the library directory with `--with-krb5-include` and
- `--with-krb5-lib`. You may need to do this if Autoconf can't
- figure out whether to use `lib`, `lib32`, or `lib64` on your
- platform.
-
- To specify a particular `krb5-config` script to use, either set
- the `PATH_KRB5_CONFIG` environment variable or pass it to
- configure like:
-
- ```
- ./configure PATH_KRB5_CONFIG=/path/to/krb5-config
- ```
-
- To not use `krb5-config` and force library probing even if there
- is a `krb5-config` script on your path, set `PATH_KRB5_CONFIG` to
- a nonexistent path:
-
- ```
- ./configure PATH_KRB5_CONFIG=/nonexistent
- ```
-
- `krb5-config` is not used and library probing is always done if
- either `--with-krb5-include` or `--with-krb5-lib` are given.
-
- GSS-API libraries are found the same way: with `krb5-config` by
- default if it is found, and a `--with-gssapi=PATH` flag to specify
- the installation root. `PATH_KRB5_CONFIG` is similarly used to
- find krb5-config for the GSS-API libraries, and
- `--with-gssapi-include` and `--with-gssapi-lib` can be used to
- specify the exact paths, overriding any `krb5-config` results.
- - title: Testing
- body: |
- rra-c-util comes with an extensive test suite, which you can run after
- building with:
-
- ```
- make check
- ```
-
- If a test fails, you can run a single test with verbose output via:
-
- ```
- tests/runtests -o <name-of-test>
- ```
-
- Do this instead of running the test program directly since it will
- ensure that necessary environment variables are set up.
- - title: Using This Code
- body: |
- While there is an install target, it's present only because
- Automake provides it automatically. Its use is not recommended.
- Instead, the code in this package is intended to be copied into
- your package and refreshed from the latest release of rra-c-util
- for each release.
-
- You can obviously copy the code and integrate it however works
- best for your package and your build system. Here's how I do it
- for my packages as an example:
-
- * Create a portable directory and copy `macros.h`, `system.h`,
- `stdbool.h`, and `dummy.c` along with whatever additional
- functions that your package uses that may not be present on all
- systems. If you use much of the `util` directory (see below),
- you'll need `asprintf.c`, `reallocarray.c`, and `snprintf.c` at
- least. If you use `util/network.c`, you'll also need
- `getaddrinfo.c`, `getaddrinfo.h`, `getnameinfo.c`,
- `getnameinfo.h`, `inet_*.c`, and `socket.h`. You'll need
- `winsock.c` for networking portability to Windows.
-
- * Copy the necessary portions of `configure.ac` from this package
- into your package. `configure.ac` is commented to try to give
- you a guide for what you need to copy over. You will also need
- to make an `m4` subdirectory, add the code to `configure.ac` to
- load Autoconf macros from `m4`, and copy over `m4/snprintf.m4`
- and possibly `m4/socket.m4` and `m4/inet-ntoa.m4`.
-
- * Copy the code from `Makefile.am` for building `libportable.a`
- into your package and be sure to link your package binaries with
- `libportable.a`. If you include this code in a shared library,
- you'll need to build `libportable.la` instead; see the Automake
- manual for the differences. You'll need to change `LIBRARIES`
- to `LTLIBRARIES` and `LIBOBJS` to `LTLIBOBJS` in addition to
- renaming the targets.
-
- * Create a `util` directory and copy over the portions of the
- utility library that you want. You will probably need
- `messages.[ch]` and `xmalloc.[ch]` if you copy anything over at
- all, since most of the rest of the library uses those. You will
- also need `m4/vamacros.m4` if you use `messages.[ch]`.
-
- * Copy the code from `Makefile.am` for building `libutil.a` into
- your package and be sure to link your package binaries with
- `libutil.a`. As with `libportable.a`, if you want to use the
- utility functions in a shared library, you'll need to instead
- build `libutil.la` and change some of the Automake variables.
-
- * If your package uses a TAP-based test suite written in C,
- consider using the additional TAP utility functions in
- `tests/tap` (specifically `messages.*`, `process.*`, and
- `string.*`).
-
- * If you're using the Kerberos portability code, copy over
- `portable/krb5.h`, `portable/krb5-extra.c`, `m4/krb5.m4`,
- `m4/lib-depends.m4`, `m4/lib-pathname.m4`, and optionally
- `util/messages-krb5.[ch]`. You'll also need the relevant
- fragments of `configure.ac`. You may want to remove some things
- from `krb5.h` and `krb5-extra.c` the corresponding configure
- checks if your code doesn't need all of those functions. If you
- need `krb5_get_renewed_creds`, also copy over `krb5-renew.c`.
- Don't forget to add `$(KRB5_CPPFLAGS)` to `CPPFLAGS` for
- `libportable` and possibly `libutil`, and if you're building a
- shared library, also add `$(KRB5_LDFLAGS)` to `LDFLAGS` and
- `$(KRB5_LIBS)` to `LIBADD` for those libraries.
-
- For a Kerberos-enabled test suite, also consider copying the
- `kerberos.*` libraries in `tests/tap` for a Kerberos-enabled
- test suite. If you want to use `kerberos_generate_conf` from
- `tests/tap/kerberos.c`, also copy over
- `tests/data/generate-krb5-conf`.
-
- * For testing that requires making Kerberos administrative
- changes, consider copying over the `kadmin.*` libraries in
- `tests/tap`.
-
- * For testing packages that use remctl, see the
- `tests/tap/remctl.c` and `tests/tap/remctl.h` files for C tests
- and `tests/tap/remctl.sh` for shell scripts.
-
- * If you're using the kafs portability code, copy over the `kafs`
- directory, `m4/kafs.m4`, `m4/lib-pathname.m4`,
- `portable/k_haspag.c`, the code to build kafs from
- `Makefile.am`, and the relevant fragments of `configure.ac`.
-
- * If you're using the PAM portability code, copy over
- `pam-util/*`, `portable/pam*`, `m4/pam-const.m4`, and the
- relevant fragments of `configure.ac`.
-
- * Copy over any other Autoconf macros that you want to use in your
- package from the m4 directory.
-
- * Copy over any generic tests from `tests/docs` and `tests/perl`
- that are appropriate for your package. If you use any of these,
- also copy over the `tests/tap/perl` directory and
- `tests/data/perl.conf` (and customize the latter for your
- package).
-
- * If the package embeds a Perl module, copy over any tests from
- the `perl/t` directory that are applicable. This can provide
- generic testing of the embedded Perl module using Perl's own
- test infrastructure. If you use any of these, also copy over
- the `perl/t/data/perl.conf` file and customize it for your
- package. You will need to arrange for `perl/t/data` to contain
- copies of the `perlcriticrc` and `perltidyrc` files, either by
- making copies of the files from `tests/data` or by using make to
- copy them.
-
- I also copy over all the relevant tests from the `tests` directory
- and the build machinery for them from `Makefile.am` so that the
- portability and utility layer are tested along with the rest of
- the package. The test driver should come from C TAP Harness.
+sections:
+ - title: Using This Code
+ body: |
+ While there is an install target, it's present only because Automake
+ provides it automatically. Its use is not recommended. Instead,
+ the code in this package is intended to be copied into your package
+ and refreshed from the latest release of rra-c-util for each
+ release.
+
+ You can obviously copy the code and integrate it however works best
+ for your package and your build system. Here's how I do it for my
+ packages as an example:
+
+ * Create a portable directory and copy `macros.h`, `system.h`,
+ `stdbool.h`, and `dummy.c` along with whatever additional
+ functions that your package uses that may not be present on all
+ systems. If you use much of the `util` directory (see below),
+ you'll need `asprintf.c`, `reallocarray.c`, and `snprintf.c` at
+ least. If you use `util/network.c`, you'll also need
+ `getaddrinfo.c`, `getaddrinfo.h`, `getnameinfo.c`,
+ `getnameinfo.h`, `inet_*.c`, and `socket.h`. You'll need
+ `winsock.c` for networking portability to Windows.
+
+ * Copy the necessary portions of `configure.ac` from this package
+ into your package. `configure.ac` is commented to try to give you
+ a guide for what you need to copy over. You will also need to
+ make an `m4` subdirectory, add the code to `configure.ac` to load
+ Autoconf macros from `m4`, and copy over `m4/snprintf.m4` and
+ possibly `m4/socket.m4` and `m4/inet-ntoa.m4`.
+
+ * Copy the code from `Makefile.am` for building `libportable.a` into
+ your package and be sure to link your package binaries with
+ `libportable.a`. If you include this code in a shared library,
+ you'll need to build `libportable.la` instead; see the Automake
+ manual for the differences. You'll need to change `LIBRARIES` to
+ `LTLIBRARIES` and `LIBOBJS` to `LTLIBOBJS` in addition to renaming
+        the targets. (A rough Automake sketch of these fragments appears
+        after this list.)
+
+ * Create a `util` directory and copy over the portions of the
+ utility library that you want. You will probably need
+ `messages.[ch]` and `xmalloc.[ch]` if you copy anything over at
+ all, since most of the rest of the library uses those. You will
+ also need `m4/vamacros.m4` if you use `messages.[ch]`.
+
+ * Copy the code from `Makefile.am` for building `libutil.a` into
+ your package and be sure to link your package binaries with
+ `libutil.a`. As with `libportable.a`, if you want to use the
+ utility functions in a shared library, you'll need to instead
+ build `libutil.la` and change some of the Automake variables.
+
+ * If your package uses a TAP-based test suite written in C, consider
+ using the additional TAP utility functions in `tests/tap`
+ (specifically `messages.*`, `process.*`, and `string.*`).
+
+ * If you're using the Kerberos portability code, copy over
+ `portable/krb5.h`, `portable/krb5-extra.c`, `m4/krb5.m4`,
+ `m4/lib-depends.m4`, `m4/lib-pathname.m4`, and optionally
+ `util/messages-krb5.[ch]`. You'll also need the relevant
+ fragments of `configure.ac`. You may want to remove some things
+        from `krb5.h` and `krb5-extra.c` and the corresponding configure
+ checks if your code doesn't need all of those functions. If you
+ need `krb5_get_renewed_creds`, also copy over `krb5-renew.c`.
+ Don't forget to add `$(KRB5_CPPFLAGS)` to `CPPFLAGS` for
+ `libportable` and possibly `libutil`, and if you're building a
+ shared library, also add `$(KRB5_LDFLAGS)` to `LDFLAGS` and
+ `$(KRB5_LIBS)` to `LIBADD` for those libraries.
+
+ For a Kerberos-enabled test suite, also consider copying the
+ `kerberos.*` libraries in `tests/tap` for a Kerberos-enabled test
+ suite. If you want to use `kerberos_generate_conf` from
+ `tests/tap/kerberos.c`, also copy over
+ `tests/data/generate-krb5-conf`.
+
+ * For testing that requires making Kerberos administrative changes,
+ consider copying over the `kadmin.*` libraries in `tests/tap`.
+
+ * For testing packages that use remctl, see the `tests/tap/remctl.c`
+ and `tests/tap/remctl.h` files for C tests and
+ `tests/tap/remctl.sh` for shell scripts.
+
+ * If you're using the kafs portability code, copy over the `kafs`
+ directory, `m4/kafs.m4`, `m4/lib-pathname.m4`,
+ `portable/k_haspag.c`, the code to build kafs from `Makefile.am`,
+ and the relevant fragments of `configure.ac`.
+
+ * If you're using the PAM portability code, copy over `pam-util/*`,
+ `portable/pam*`, `m4/pam-const.m4`, and the relevant fragments of
+ `configure.ac`.
+
+ * Copy over any other Autoconf macros that you want to use in your
+ package from the m4 directory.
+
+ * Copy over any generic tests from `tests/docs` and `tests/perl`
+ that are appropriate for your package. If you use any of these,
+ also copy over the `tests/tap/perl` directory and
+ `tests/data/perl.conf` (and customize the latter for your
+ package).
+
+ * If the package embeds a Perl module, copy over any tests from the
+ `perl/t` directory that are applicable. This can provide generic
+ testing of the embedded Perl module using Perl's own test
+ infrastructure. If you use any of these, also copy over the
+ `perl/t/data/perl.conf` file and customize it for your package.
+ You will need to arrange for `perl/t/data` to contain copies of
+ the `perlcriticrc` and `perltidyrc` files, either by making copies
+ of the files from `tests/data` or by using make to copy them.
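+
+      As a concrete illustration of the `Makefile.am` fragments mentioned
+      above, a rough non-recursive Automake sketch might look like the
+      following (the file lists are illustrative only, not rra-c-util's
+      actual build rules):
+
+      ```make
+      noinst_LIBRARIES = portable/libportable.a util/libutil.a
+      portable_libportable_a_SOURCES = portable/dummy.c portable/macros.h \
+              portable/stdbool.h portable/system.h
+      portable_libportable_a_LIBADD = $(LIBOBJS)
+      util_libutil_a_SOURCES = util/messages.c util/messages.h \
+              util/xmalloc.c util/xmalloc.h
+      ```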
+
+ I also copy over all the relevant tests from the `tests` directory
+ and the build machinery for them from `Makefile.am` so that the
+ portability and utility layer are tested along with the rest of the
+ package. The test driver should come from C TAP Harness.
requirements: |
Everything requires a C compiler to build and expects an ISO C89 or later
@@ -339,3 +262,21 @@ requirements: |
All are available on CPAN. Those tests will be skipped if the modules are
not available.
+
+test:
+ override: |
+ rra-c-util comes with an extensive test suite, which you can run after
+ building with:
+
+ ```
+ make check
+ ```
+
+ If a test fails, you can run a single test with verbose output via:
+
+ ```
+ tests/runtests -o <name-of-test>
+ ```
+
+ Do this instead of running the test program directly since it will
+ ensure that necessary environment variables are set up.
diff --git a/t/data/generate/rra-c-util/output/readme b/t/data/generate/rra-c-util/output/readme
index 35cdcbb..7184396 100644
--- a/t/data/generate/rra-c-util/output/readme
+++ b/t/data/generate/rra-c-util/output/readme
@@ -101,28 +101,35 @@ REQUIREMENTS
BUILDING
- You can build rra-c-util with:
+ You can build rra-c-util with the standard commands:
./configure
make
+ If you are building from a Git clone, first run ./bootstrap in the
+ source directory to generate the build files. Building outside of the
+ source directory is also supported, if you wish, by creating an empty
+ directory and then running configure with the correct relative path.
+
Pass --enable-kafs to configure to attempt to build kafs support, which
will use either an existing libkafs or libkopenafs library or build the
kafs replacement included in this package. You can also add
--without-libkafs to force the use of the internal kafs replacement.
- Pass --enable-silent-rules to configure for a quieter build (similar to
- the Linux kernel). Use make warnings instead of make to build with full
- GCC compiler warnings (requires a relatively current version of GCC).
-
Normally, configure will use krb5-config to determine the flags to use
- to compile with your Kerberos libraries. If krb5-config isn't found, it
- will look for the standard Kerberos libraries in locations already
- searched by your compiler. If the the krb5-config script first in your
- path is not the one corresponding to the Kerberos libraries you want to
- use or if your Kerberos libraries and includes aren't in a location
- searched by default by your compiler, you need to specify a different
- Kerberos installation root via --with-krb5=PATH. For example:
+ to compile with your Kerberos libraries. To specify a particular
+ krb5-config script to use, either set the PATH_KRB5_CONFIG environment
+ variable or pass it to configure like:
+
+ ./configure PATH_KRB5_CONFIG=/path/to/krb5-config
+
+ If krb5-config isn't found, configure will look for the standard
+ Kerberos libraries in locations already searched by your compiler. If
+  the krb5-config script first in your path is not the one
+ corresponding to the Kerberos libraries you want to use, or if your
+ Kerberos libraries and includes aren't in a location searched by default
+ by your compiler, you need to specify a different Kerberos installation
+ root via --with-krb5=PATH. For example:
./configure --with-krb5=/usr/pubsw
@@ -131,11 +138,6 @@ BUILDING
need to do this if Autoconf can't figure out whether to use lib, lib32,
or lib64 on your platform.
- To specify a particular krb5-config script to use, either set the
- PATH_KRB5_CONFIG environment variable or pass it to configure like:
-
- ./configure PATH_KRB5_CONFIG=/path/to/krb5-config
-
To not use krb5-config and force library probing even if there is a
krb5-config script on your path, set PATH_KRB5_CONFIG to a nonexistent
path:
@@ -151,6 +153,11 @@ BUILDING
GSS-API libraries, and --with-gssapi-include and --with-gssapi-lib can
be used to specify the exact paths, overriding any krb5-config results.
+ Pass --enable-silent-rules to configure for a quieter build (similar to
+ the Linux kernel). Use make warnings instead of make to build with full
+ compiler warnings (requires either GCC or Clang and may require a
+ relatively current version of the compiler).
+
TESTING
rra-c-util comes with an extensive test suite, which you can run after
diff --git a/t/data/generate/rra-c-util/output/readme-md b/t/data/generate/rra-c-util/output/readme-md
index de3f21e..294fac4 100644
--- a/t/data/generate/rra-c-util/output/readme-md
+++ b/t/data/generate/rra-c-util/output/readme-md
@@ -96,30 +96,39 @@ fresh Git checkout.
## Building
-You can build rra-c-util with:
+You can build rra-c-util with the standard commands:
```
./configure
make
```
+If you are building from a Git clone, first run `./bootstrap` in the
+source directory to generate the build files. Building outside of the
+source directory is also supported, if you wish, by creating an empty
+directory and then running configure with the correct relative path.
+
Pass `--enable-kafs` to configure to attempt to build kafs support, which
will use either an existing libkafs or libkopenafs library or build the
kafs replacement included in this package. You can also add
`--without-libkafs` to force the use of the internal kafs replacement.
-Pass `--enable-silent-rules` to configure for a quieter build (similar to
-the Linux kernel). Use `make warnings` instead of make to build with full
-GCC compiler warnings (requires a relatively current version of GCC).
-
Normally, configure will use `krb5-config` to determine the flags to use
-to compile with your Kerberos libraries. If `krb5-config` isn't found, it
-will look for the standard Kerberos libraries in locations already
-searched by your compiler. If the the `krb5-config` script first in your
-path is not the one corresponding to the Kerberos libraries you want to
-use or if your Kerberos libraries and includes aren't in a location
-searched by default by your compiler, you need to specify a different
-Kerberos installation root via `--with-krb5=PATH`. For example:
+to compile with your Kerberos libraries. To specify a particular
+`krb5-config` script to use, either set the `PATH_KRB5_CONFIG` environment
+variable or pass it to configure like:
+
+```
+ ./configure PATH_KRB5_CONFIG=/path/to/krb5-config
+```
+
+If `krb5-config` isn't found, configure will look for the standard
+Kerberos libraries in locations already searched by your compiler. If the
+`krb5-config` script first in your path is not the one corresponding
+to the Kerberos libraries you want to use, or if your Kerberos libraries
+and includes aren't in a location searched by default by your compiler,
+you need to specify a different Kerberos installation root via
+`--with-krb5=PATH`. For example:
```
./configure --with-krb5=/usr/pubsw
@@ -130,13 +139,6 @@ library directory with `--with-krb5-include` and `--with-krb5-lib`. You
may need to do this if Autoconf can't figure out whether to use `lib`,
`lib32`, or `lib64` on your platform.
-To specify a particular `krb5-config` script to use, either set the
-`PATH_KRB5_CONFIG` environment variable or pass it to configure like:
-
-```
- ./configure PATH_KRB5_CONFIG=/path/to/krb5-config
-```
-
To not use `krb5-config` and force library probing even if there is a
`krb5-config` script on your path, set `PATH_KRB5_CONFIG` to a nonexistent
path:
@@ -150,10 +152,15 @@ path:
GSS-API libraries are found the same way: with `krb5-config` by default if
it is found, and a `--with-gssapi=PATH` flag to specify the installation
-root. `PATH_KRB5_CONFIG` is similarly used to find krb5-config for the
+root. `PATH_KRB5_CONFIG` is similarly used to find `krb5-config` for the
GSS-API libraries, and `--with-gssapi-include` and `--with-gssapi-lib` can
be used to specify the exact paths, overriding any `krb5-config` results.
+Pass `--enable-silent-rules` to configure for a quieter build (similar to
+the Linux kernel). Use `make warnings` instead of `make` to build with
+full compiler warnings (requires either GCC or Clang and may require a
+relatively current version of the compiler).
+
## Testing
rra-c-util comes with an extensive test suite, which you can run after
diff --git a/t/data/generate/wallet/docknot.yaml b/t/data/generate/wallet/docknot.yaml
index c60b053..6b531da 100644
--- a/t/data/generate/wallet/docknot.yaml
+++ b/t/data/generate/wallet/docknot.yaml
@@ -232,29 +232,6 @@ description: |
Kerberos keytab object for MIT Kerberos. (Heimdal doesn't require any
special support.)
-readme:
- sections:
- - title: Configuration
- body: |
- Before setting up the wallet server, review the Wallet::Config
- documentation (with man Wallet::Config or perldoc Wallet::Config).
- There are many customization options, some of which must be set.
- You may also need to create a Kerberos keytab for the keytab
- object backend and give it appropriate ACLs, and set up
- `keytab-backend` and its `remctld` configuration on your KDC if
- you want unchanging flag support.
-
- For the basic setup and configuration of the wallet server, see
- the file `docs/setup` in the source distribution. You will need
- to set up a database on the server (unless you're using SQLite),
- initialize the database, install `remctld` and the wallet Perl
- modules, and set up `remctld` to run the `wallet-backend` program.
-
- The wallet client supports reading configuration settings from the
- system `krb5.conf` file. For more information, see the
- CONFIGURATION section of the wallet client man page (`man
- wallet`).
-
requirements: |
The wallet client requires the C
[remctl](https://www.eyrie.org/~eagle/software/remctl/) client library and
@@ -316,6 +293,27 @@ requirements: |
The NetDB ACL verifier (only of interest at sites using NetDB to manage
DNS) requires the Net::Remctl Perl module.
+sections:
+ - title: Configuration
+ body: |
+ Before setting up the wallet server, review the Wallet::Config
+      documentation (with `man Wallet::Config` or `perldoc Wallet::Config`).
+ There are many customization options, some of which must be set.
+ You may also need to create a Kerberos keytab for the keytab object
+ backend and give it appropriate ACLs, and set up `keytab-backend`
+ and its `remctld` configuration on your KDC if you want unchanging
+ flag support.
+
+ For the basic setup and configuration of the wallet server, see the
+ file `docs/setup` in the source distribution. You will need to set
+ up a database on the server (unless you're using SQLite), initialize
+ the database, install `remctld` and the wallet Perl modules, and set
+ up `remctld` to run the `wallet-backend` program.
+
+ The wallet client supports reading configuration settings from the
+ system `krb5.conf` file. For more information, see the
+ CONFIGURATION section of the wallet client man page (`man wallet`).
+
test:
prefix: |
The wallet comes with a comprehensive test suite, but it requires some
diff --git a/t/data/generate/wallet/output/readme-md b/t/data/generate/wallet/output/readme-md
index 4146425..1ec3e6e 100644
--- a/t/data/generate/wallet/output/readme-md
+++ b/t/data/generate/wallet/output/readme-md
@@ -196,11 +196,11 @@ you need to specify a different Kerberos installation root via
You can also individually set the paths to the include directory and the
library directory with `--with-krb5-include` and `--with-krb5-lib`. You
-may need to do this if Autoconf can't figure out whether to use lib,
-lib32, or lib64 on your platform.
+may need to do this if Autoconf can't figure out whether to use `lib`,
+`lib32`, or `lib64` on your platform.
-To not use krb5-config and force library probing even if there is a
-krb5-config script on your path, set PATH_KRB5_CONFIG to a nonexistent
+To not use `krb5-config` and force library probing even if there is a
+`krb5-config` script on your path, set `PATH_KRB5_CONFIG` to a nonexistent
path:
```
diff --git a/t/data/update/c-tap-harness/docknot.yaml b/t/data/update/c-tap-harness/docknot.yaml
index 88337dc..ea434bb 100644
--- a/t/data/update/c-tap-harness/docknot.yaml
+++ b/t/data/update/c-tap-harness/docknot.yaml
@@ -94,137 +94,6 @@ license:
name: Expat
maintainer: Russ Allbery <eagle@eyrie.org>
name: C TAP Harness
-readme:
- sections:
- - body: |
- C TAP Harness comes with a comprehensive test suite, which you can run
- after building with:
-
- ```
- make check
- ```
-
- If a test fails, you can run a single test with verbose output via:
-
- ```
- ./runtests -b `pwd`/tests -s `pwd`/tests -o <name-of-test>
- ```
-
- Do this instead of running the test program directly since it will ensure
- that necessary environment variables are set up. You may need to change
- the `-s` option argument if you build with a separate build directory from
- the source directory.
- title: Testing
- - body: |
- While there is an install target that installs runtests in the default
- binary directory (`/usr/local/bin` by default) and installs the man pages,
- one normally wouldn't install anything from this package. Instead, the
- code is intended to be copied into your package and refreshed from the
- latest release of C TAP Harness for each release.
-
- You can obviously copy the code and integrate it however works best for
- your package and your build system. Here's how I do it for my packages
- as an example:
-
- * Create a tests directory and copy tests/runtests.c into it. Create a
- `tests/tap` subdirectory and copy the portions of the TAP library (from
- `tests/tap`) that I need for that package into it. The TAP library is
- designed to let you drop in additional source and header files for
- additional utility functions that are useful in your package.
-
- * Add code to my top-level `Makefile.am` (I always use a non-recursive
- Makefile with `subdir-objects` set) to build `runtests` and the test
- library:
-
- ```make
- check_PROGRAMS = tests/runtests
- tests_runtests_CPPFLAGS = -DC_TAP_SOURCE='"$(abs_top_srcdir)/tests"' \
- -DC_TAP_BUILD='"$(abs_top_builddir)/tests"'
- check_LIBRARIES = tests/tap/libtap.a
- tests_tap_libtap_a_CPPFLAGS = -I$(abs_top_srcdir)/tests
- tests_tap_libtap_a_SOURCES = tests/tap/basic.c tests/tap/basic.h \
- tests/tap/float.c tests/tap/float.h tests/tap/macros.h
- ```
-
- Omit `float.c` and `float.h` from the last line if your package doesn't
- need the `is_double` function. Building the build and source
- directories into runtests will let `tests/runtests -o <test>` work for
- users without requiring that they set any other variables, even if
- they're doing an out-of-source build.
-
- Add additional source files and headers that should go into the TAP
- library if you added extra utility functions for your package.
-
- * Add code to `Makefile.am` to run the test suite:
-
- ```make
- check-local: $(check_PROGRAMS)
- cd tests && ./runtests -l $(abs_top_srcdir)/tests/TESTS
- ```
-
- See the `Makefile.am` in this package for an example.
-
- * List the test programs in the `tests/TESTS` file. This should have the
- name of the test executable with the trailing "-t" or ".t" (you can use
- either extension as you prefer) omitted.
-
- Test programs must be executable.
-
- For any test programs that need to be compiled, add build rules for
- them in `Makefile.am`, similar to:
-
- ```make
- tests_libtap_c_basic_LDADD = tests/tap/libtap.a
- ```
-
- and add them to `check_PROGRAMS`. If you include the `float.c` add-on
- in your libtap library, you will need to add `-lm` to the `_LDADD`
- setting for all test programs linked against it.
-
- A more complex example from the remctl package that needs additional
- libraries:
-
- ```make
- tests_client_open_t_LDFLAGS = $(GSSAPI_LDFLAGS)
- tests_client_open_t_LDADD = client/libremctl.la tests/tap/libtap.a \
- util/libutil.la $(GSSAPI_LIBS)
- ```
-
- If the test program doesn't need to be compiled, add it to `EXTRA_DIST`
- so that it will be included in the distribution.
-
- * If you have test programs written in shell, copy `tests/tap/libtap.sh`
- the tap subdirectory of your tests directory and add it to `EXTRA_DIST`.
- Shell programs should start with:
-
- ```sh
- . "${C_TAP_SOURCE}/tap/libtap.sh"
- ```
-
- and can then use the functions defined in the library.
-
- * Optionally copy `docs/writing-tests` into your package somewhere, such
- as `tests/README`, as instructions to contributors on how to write tests
- for this framework.
-
- If you have configuration files that the user must create to enable some
- of the tests, conventionally they go into `tests/config`.
-
- If you have data files that your test cases use, conventionally they go
- into `tests/data`. You can then find the data directory relative to the
- `C_TAP_SOURCE` environment variable (set by `runtests`) in your test
- program. If you have data that's compiled or generated by Autoconf, it
- will be relative to the `BUILD` environment variable. Don't forget to add
- test data to `EXTRA_DIST` as necessary.
-
- For more TAP library add-ons, generally ones that rely on additional
- portability code not shipped in this package or with narrower uses, see
- [the rra-c-util
- package](https://www.eyrie.org/~eagle/software/rra-c-util/). There are
- several additional TAP library add-ons in the `tests/tap` directory in
- that package. It's also an example of how to use this test harness in
- another package.
- title: Using the Harness
requirements: |
C TAP Harness requires a C compiler to build. Any ISO C89 or later C
compiler on a system supporting the Single UNIX Specification, version 3
@@ -242,11 +111,141 @@ requirements: |
All are available on CPAN. Those tests will be skipped if the modules are
not available.
+sections:
+- body: |
+  While there is an install target that installs runtests in the configured
+  binary directory (`/usr/local/bin` by default) and installs the man pages,
+ one normally wouldn't install anything from this package. Instead, the
+ code is intended to be copied into your package and refreshed from the
+ latest release of C TAP Harness for each release.
+
+ You can obviously copy the code and integrate it however works best for
+ your package and your build system. Here's how I do it for my packages
+ as an example:
+
+ * Create a tests directory and copy tests/runtests.c into it. Create a
+ `tests/tap` subdirectory and copy the portions of the TAP library (from
+ `tests/tap`) that I need for that package into it. The TAP library is
+ designed to let you drop in additional source and header files for
+ additional utility functions that are useful in your package.
+
+ * Add code to my top-level `Makefile.am` (I always use a non-recursive
+ Makefile with `subdir-objects` set) to build `runtests` and the test
+ library:
+
+ ```make
+ check_PROGRAMS = tests/runtests
+ tests_runtests_CPPFLAGS = -DC_TAP_SOURCE='"$(abs_top_srcdir)/tests"' \
+ -DC_TAP_BUILD='"$(abs_top_builddir)/tests"'
+ check_LIBRARIES = tests/tap/libtap.a
+ tests_tap_libtap_a_CPPFLAGS = -I$(abs_top_srcdir)/tests
+ tests_tap_libtap_a_SOURCES = tests/tap/basic.c tests/tap/basic.h \
+ tests/tap/float.c tests/tap/float.h tests/tap/macros.h
+ ```
+
+ Omit `float.c` and `float.h` from the last line if your package doesn't
+ need the `is_double` function. Building the build and source
+ directories into runtests will let `tests/runtests -o <test>` work for
+ users without requiring that they set any other variables, even if
+ they're doing an out-of-source build.
+
+ Add additional source files and headers that should go into the TAP
+ library if you added extra utility functions for your package.
+
+ * Add code to `Makefile.am` to run the test suite:
+
+ ```make
+ check-local: $(check_PROGRAMS)
+ cd tests && ./runtests -l $(abs_top_srcdir)/tests/TESTS
+ ```
+
+ See the `Makefile.am` in this package for an example.
+
+ * List the test programs in the `tests/TESTS` file. This should have the
+ name of the test executable with the trailing "-t" or ".t" (you can use
+ either extension as you prefer) omitted.
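+
+    For example, if a package has compiled tests `tests/util/buffer-t` and
+    `tests/portable/asprintf-t` plus a shell test `tests/docs/pod-t`
+    (hypothetical names), its `tests/TESTS` file would contain:
+
+    ```
+    docs/pod
+    portable/asprintf
+    util/buffer
+    ```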
+
+ Test programs must be executable.
+
+ For any test programs that need to be compiled, add build rules for
+ them in `Makefile.am`, similar to:
+
+ ```make
+ tests_libtap_c_basic_LDADD = tests/tap/libtap.a
+ ```
+
+ and add them to `check_PROGRAMS`. If you include the `float.c` add-on
+ in your libtap library, you will need to add `-lm` to the `_LDADD`
+ setting for all test programs linked against it.
+
+ A more complex example from the remctl package that needs additional
+ libraries:
+
+ ```make
+ tests_client_open_t_LDFLAGS = $(GSSAPI_LDFLAGS)
+ tests_client_open_t_LDADD = client/libremctl.la tests/tap/libtap.a \
+ util/libutil.la $(GSSAPI_LIBS)
+ ```
+
+ If the test program doesn't need to be compiled, add it to `EXTRA_DIST`
+ so that it will be included in the distribution.
+
+  * If you have test programs written in shell, copy `tests/tap/libtap.sh`
+    to the tap subdirectory of your tests directory and add it to
+    `EXTRA_DIST`.
+ Shell programs should start with:
+
+ ```sh
+ . "${C_TAP_SOURCE}/tap/libtap.sh"
+ ```
+
+ and can then use the functions defined in the library.
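+
+    For example, a minimal shell test might look like the following sketch
+    (hypothetical; it assumes the `plan` and `ok` helpers from `libtap.sh`,
+    with `ok` taking a description followed by a command to run):
+
+    ```sh
+    #!/bin/sh
+
+    . "${C_TAP_SOURCE}/tap/libtap.sh"
+
+    plan 2
+    ok 'true succeeds' true
+    ok 'source directory exists' test -d "${C_TAP_SOURCE}"
+    ```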
+
+ * Optionally copy `docs/writing-tests` into your package somewhere, such
+ as `tests/README`, as instructions to contributors on how to write tests
+ for this framework.
+
+ If you have configuration files that the user must create to enable some
+ of the tests, conventionally they go into `tests/config`.
+
+ If you have data files that your test cases use, conventionally they go
+ into `tests/data`. You can then find the data directory relative to the
+ `C_TAP_SOURCE` environment variable (set by `runtests`) in your test
+ program. If you have data that's compiled or generated by Autoconf, it
+ will be relative to the `BUILD` environment variable. Don't forget to add
+ test data to `EXTRA_DIST` as necessary.
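+
+  As an illustration, a test program can construct such a path itself.  This
+  is only a minimal sketch: the `test_data_path` name and the `data/` layout
+  are hypothetical, and the TAP library may already provide a similar helper:
+
+  ```c
+  #include <stdio.h>
+  #include <stdlib.h>
+  #include <string.h>
+
+  /* Return a newly allocated path to a data file under $C_TAP_SOURCE. */
+  static char *
+  test_data_path(const char *name)
+  {
+      const char *source = getenv("C_TAP_SOURCE");
+      size_t length;
+      char *path;
+
+      if (source == NULL)
+          source = ".";
+      length = strlen(source) + strlen("/data/") + strlen(name) + 1;
+      path = malloc(length);
+      if (path != NULL)
+          snprintf(path, length, "%s/data/%s", source, name);
+      return path;
+  }
+  ```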
+
+ For more TAP library add-ons, generally ones that rely on additional
+ portability code not shipped in this package or with narrower uses, see
+ [the rra-c-util
+ package](https://www.eyrie.org/~eagle/software/rra-c-util/). There are
+ several additional TAP library add-ons in the `tests/tap` directory in
+ that package. It's also an example of how to use this test harness in
+ another package.
+ title: Using the Harness
support:
email: eagle@eyrie.org
github: rra/c-tap-harness
web: https://www.eyrie.org/~eagle/software/c-tap-harness/
synopsis: C harness for running TAP-compliant tests
+test:
+ override: |
+ C TAP Harness comes with a comprehensive test suite, which you can run
+ after building with:
+
+ ```
+ make check
+ ```
+
+ If a test fails, you can run a single test with verbose output via:
+
+ ```
+ ./runtests -b `pwd`/tests -s `pwd`/tests -o <name-of-test>
+ ```
+
+ Do this instead of running the test program directly since it will ensure
+ that necessary environment variables are set up. You may need to change
+ the `-s` option argument if you build with a separate build directory from
+ the source directory.
vcs:
browse: https://git.eyrie.org/?p=devel/c-tap-harness.git
github: rra/c-tap-harness
diff --git a/t/data/update/control-archive/docknot.yaml b/t/data/update/control-archive/docknot.yaml
index 4508720..7c3e590 100644
--- a/t/data/update/control-archive/docknot.yaml
+++ b/t/data/update/control-archive/docknot.yaml
@@ -74,222 +74,6 @@ quote:
Usenet is like a herd of performing elephants with diarrhea — massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it.
-readme:
- sections:
- - body: |
- This package uses a three-part version number. The first number will be
- incremented for major changes, major new functionality, incompatible
- changes to the configuration format (more than just adding new keys), or
- similar disruptive changes. For lesser changes, the second number will be
- incremented for any change to the code or functioning of the software. A
- change to the third part of the version number indicates a release with
- changes only to the configuration, PGP keys, and documentation files.
- title: Versioning
- - body: |
- The configuration data is in one file per hierarchy in the `config`
- directory. Each file has the format specified in FORMAT and is designed
- to be readable by INN's new configuration parser in case this can be
- further automated down the road. The `config/special` directory contains
- overrides, raw `control.ctl` fragments that should be used for particular
- hierarchies instead of automatically-generated entries (usually for
- special comments). Eventually, the format should be extended to handle as
- many of these cases as possible.
-
- The `keys` directory contains the PGP public keys for every hierarchy that
- has one. The user IDs on these keys must match the signer expected by the
- configuration data for the corresponding hierarchy.
-
- The `forms` directory contains the basic file structure for the three
- generated files.
-
- The `scripts` directory contains all the software that generates the
- configuration and documentation files, processes control messages, updates
- the database, creates the newsgroup lists, and generates reports. Most
- scripts in that directory have POD documentation included at the end of
- the script, viewable by running perldoc on the script.
-
- The `templates` directory contains templates for the `control-summary`
- script. These are the templates I use myself. Other installations should
- customize them.
-
- The `docs` directory contains the extra documentation files that are
- distributed from ftp.isc.org in the control message archive and newsgroup
- list directories, plus the DocKnot metadata for this package.
- title: Layout
- - body: |
- This software is set up to run from `/srv/control`. To use a different
- location, edit the paths at the beginning of each of the scripts in the
- `scripts` directory to use different paths. By default, copying all the
- files from the distribution into a `/srv/control` directory is almost all
- that's needed. An install rule is provided to do this. To install the
- software, run:
-
- ```sh
- make install
- ```
-
- You will need write access to `/srv/control` or permission to create it.
-
- `process-control` and `generate-files` need a GnuPG keyring containing all
- of the honored hierarchy keys. To generate this keyring, run `make
- install` or:
-
- ```sh
- mkdir keyring
- gpg --homedir=keyring --allow-non-selfsigned-uid --import keys/*
- ```
-
- from the top level of this distribution. `process-control` also expects a
- `control.ctl` file in `/srv/control/control.ctl`, which can be generated
- from the files included here (after creating the keyring as described
- above) by running `make install` or:
-
- ```sh
- scripts/generate-files
- ```
-
- Both of these are done automatically as part of `make install`.
- process-control expects `/srv/control/archive` to exist and archives
- control messages there. It expects `/srv/control/tmp` to exist and uses
- it for temporary files for GnuPG control message verification.
-
- To process incoming control messages, you need to run `process-control` on
- each message. `process-control` expects to receive, on standard input,
- lines consisting of a path to a file, a space, and a message ID. This
- input format is designed to work with the tinyleaf server that comes with
- INN 2.5 and later, but it should also work as a channel feed from
- pre-storage-API versions of INN (1.x). It will not work without
- modification via a channel feed from a current version of INN, since it
- doesn't understand the storage API and doesn't know how to retrieve
- articles by tokens. This could be easily added; I just haven't needed it.
-
- If you're using tinyleaf, here is the setup process:
-
- 1. Create a directory that tinyleaf will use to store incoming articles
- temporarily, the archive directory, and the logs directory and
- install the software:
-
- ```sh
- make install
- ```
-
- 2. Run tinyleaf on some port, configuring it to use that directory and
- to run process-control. A typical tinyleaf command line would be:
-
- ```sh
- tinyleaf /srv/control/spool /srv/control/scripts/process-control
- ```
-
- I run tinyleaf using systemd, but any inetd implementation should work
- equally well.
-
- 3. Set up a news feed to the system running tinyleaf that sends control
- messages of interest. You should be careful not to send cancel control
- messages or you'll get a ton of junk in your logs. The INN newsfeeds
- entry I use is:
-
- ```
- isc-control:control,control.*,!control.cancel:Tf,Wnm:
- ```
-
- combined with nntpsend to send the articles.
-
- That should be all there is to it. Watch the logs directory to see what
- happens for incoming messages.
-
- `scripts/process-control` just maintains a database file. To export that
- data in a format that's useful for other software, run
- `scripts/export-control`. This expects a `/srv/control/export` directory
- into which it stores active and newsgroups files, a copy of the
- `control.ctl` file, and all of the logs in a `LOGS` subdirectory. This
- export directory can then be made available on the web, copied to another
- system, or whatever else is appropriate. Generally,
- `scripts/export-control` should be run periodically from cron.
-
- Reports can be generated using `scripts/control-summary`. This script
- needs configuration before running; see the top of the script and its
- included POD documentation. There is a sample template in the `templates`
- directory, and `scripts/weekly-report` shows a sample cron job for sending
- out a regular report.
- title: Installation
- - body: |
- This package is intended to provide all of the tools, configuration, and
- information required to duplicate the ftp.isc.org control message archive
- and newsgroup list service if you so desire. To set up a similar service
- based on that service, however, you will also want to bootstrap from the
- existing data. Here is the procedure for that:
-
- 1. Be sure that you're starting from the latest software and set of
- configuration files. I will generally try to make a new release after
- committing a batch of changes, but I may not make a new release after
- every change. See the sections below for information about the Git
- repository in which this package is maintained. You can always clone
- that repository to get the latest configuration (and then merge or
- cherry-pick changes from my repository into your repository as you
- desire).
-
- 2. Download the current newsgroup list from:
-
- ftp://ftp.isc.org/pub/usenet/CONFIG/newsgroups.bz2
-
- and then bootstrap the database from it:
-
- ```sh
- bzip2 -dc newsgroups.bz2 | scripts/update-control bulkload
- ```
-
- 3. If you want the log information so that your reports will include
- changes made in the ftp.isc.org archive before you created your own,
- copy the contents of ftp://ftp.isc.org/pub/usenet/CONFIG/LOGS/ into
- `/srv/control/logs`.
-
- 4. If you want to start with the existing control message repository,
- download the contents of ftp://ftp.isc.org/pub/usenet/control/ into
- `/srv/control/archive`. You can do this using a recursive download
- tool that understands FTP, such as wget, but please use the options
- that add delays and don't hammer the server to death.
-
- After finishing those steps, you will have a copy of the ftp.isc.org
- archive and can start processing control messages, possibly with different
- configuration choices. You can generate the files that are found in
- ftp://ftp.isc.org/pub/usenet/CONFIG/ by running `scripts/export-control`
- as described above.
- title: Bootstrapping
- - body: |
- To add a new hierarchy, add a configuration fragment in the `config`
- directory named after the hierarchy, following the format of the existing
- files, and run `scripts/generate-files` to create a new `control.ctl`
- file. See the documentation in `scripts/generate-files` for details about
- the supported configuration keys.
-
- If the hierarchy uses PGP-signed control messages, also put the PGP key
- into the `keys` directory in a file named after the hierarchy. Then, run:
-
- ```sh
- gpg --homedir=keyring --import keys/<hierarchy>
- ```
-
- to add the new key to the working keyring.
-
- The first user ID on the key must match the signer expected by the
- configuration data for the corresponding hierarchy. If a hierarchy
- administrator sets that up wrong (usually by putting additional key IDs on
- the key), this can be corrected by importing the key into a keyring with
- GnuPG, using `gpg --edit-key` to remove the offending user ID, and
- exporting the key again with `gpg --export --ascii`.
-
- When adding a new hierarchy, it's often useful to bootstrap the newsgroup
- list by importing the current checkgroups. To do this, obtain the
- checkgroups as a text file (containing only the groups without any news
- headers) and run:
-
- ```sh
- scripts/update-control checkgroups <hierarchy> < <checkgroups>
- ```
-
- where <hierarchy> is the hierarchy the checkgroups is for and
- <checkgroups> is the path to the checkgroups file.
- title: Maintenance
requirements: |
Perl 5.6 or later plus the following additional Perl modules are required:
@@ -307,6 +91,221 @@ requirements: |
server or some other source of control messages. A minimalist news server
like tinyleaf is suitable for this (I wrote tinyleaf, available as part of
[INN](https://www.eyrie.org/~eagle/software/inn/), for this purpose).
+sections:
+- body: |
+ This package uses a three-part version number. The first number will be
+ incremented for major changes, major new functionality, incompatible
+ changes to the configuration format (more than just adding new keys), or
+ similar disruptive changes. For lesser changes, the second number will be
+ incremented for any change to the code or functioning of the software. A
+ change to the third part of the version number indicates a release with
+ changes only to the configuration, PGP keys, and documentation files.
+ title: Versioning
+- body: |
+ The configuration data is in one file per hierarchy in the `config`
+ directory. Each file has the format specified in FORMAT and is designed
+ to be readable by INN's new configuration parser in case this can be
+ further automated down the road. The `config/special` directory contains
+ overrides, raw `control.ctl` fragments that should be used for particular
+ hierarchies instead of automatically-generated entries (usually for
+ special comments). Eventually, the format should be extended to handle as
+ many of these cases as possible.
+
+ The `keys` directory contains the PGP public keys for every hierarchy that
+ has one. The user IDs on these keys must match the signer expected by the
+ configuration data for the corresponding hierarchy.
+
+ The `forms` directory contains the basic file structure for the three
+ generated files.
+
+ The `scripts` directory contains all the software that generates the
+ configuration and documentation files, processes control messages, updates
+ the database, creates the newsgroup lists, and generates reports. Most
+ scripts in that directory have POD documentation included at the end of
+ the script, viewable by running perldoc on the script.
+
+ The `templates` directory contains templates for the `control-summary`
+ script. These are the templates I use myself. Other installations should
+ customize them.
+
+ The `docs` directory contains the extra documentation files that are
+ distributed from ftp.isc.org in the control message archive and newsgroup
+ list directories, plus the DocKnot metadata for this package.
+ title: Layout
+- body: |
+ This software is set up to run from `/srv/control`. To use a different
+ location, edit the paths at the beginning of each of the scripts in the
+ `scripts` directory to use different paths. By default, copying all the
+ files from the distribution into a `/srv/control` directory is almost all
+ that's needed. An install rule is provided to do this. To install the
+ software, run:
+
+ ```sh
+ make install
+ ```
+
+ You will need write access to `/srv/control` or permission to create it.
+
+ `process-control` and `generate-files` need a GnuPG keyring containing all
+ of the honored hierarchy keys. To generate this keyring, run `make
+ install` or:
+
+ ```sh
+ mkdir keyring
+ gpg --homedir=keyring --allow-non-selfsigned-uid --import keys/*
+ ```
+
+ from the top level of this distribution. `process-control` also expects a
+ `control.ctl` file in `/srv/control/control.ctl`, which can be generated
+ from the files included here (after creating the keyring as described
+ above) by running `make install` or:
+
+ ```sh
+ scripts/generate-files
+ ```
+
+ Both of these are done automatically as part of `make install`.
+ process-control expects `/srv/control/archive` to exist and archives
+ control messages there. It expects `/srv/control/tmp` to exist and uses
+ it for temporary files for GnuPG control message verification.
+
+ To process incoming control messages, you need to run `process-control` on
+ each message. `process-control` expects to receive, on standard input,
+ lines consisting of a path to a file, a space, and a message ID. This
+ input format is designed to work with the tinyleaf server that comes with
+ INN 2.5 and later, but it should also work as a channel feed from
+ pre-storage-API versions of INN (1.x). It will not work without
+ modification via a channel feed from a current version of INN, since it
+ doesn't understand the storage API and doesn't know how to retrieve
+ articles by tokens. This could be easily added; I just haven't needed it.
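+
+  For example, a single line of input might look like this (the spool path
+  and message ID are purely illustrative):
+
+  ```
+  /srv/control/spool/1842 <newgroup.20201225@example.com>
+  ```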
+
+ If you're using tinyleaf, here is the setup process:
+
+ 1. Create a directory that tinyleaf will use to store incoming articles
+ temporarily, the archive directory, and the logs directory and
+ install the software:
+
+ ```sh
+ make install
+ ```
+
+ 2. Run tinyleaf on some port, configuring it to use that directory and
+ to run process-control. A typical tinyleaf command line would be:
+
+ ```sh
+ tinyleaf /srv/control/spool /srv/control/scripts/process-control
+ ```
+
+ I run tinyleaf using systemd, but any inetd implementation should work
+ equally well.
+
+ 3. Set up a news feed to the system running tinyleaf that sends control
+ messages of interest. You should be careful not to send cancel control
+ messages or you'll get a ton of junk in your logs. The INN newsfeeds
+ entry I use is:
+
+ ```
+ isc-control:control,control.*,!control.cancel:Tf,Wnm:
+ ```
+
+ combined with nntpsend to send the articles.
+
+ That should be all there is to it. Watch the logs directory to see what
+ happens for incoming messages.
+
+ `scripts/process-control` just maintains a database file. To export that
+ data in a format that's useful for other software, run
+ `scripts/export-control`. This expects a `/srv/control/export` directory
+ into which it stores active and newsgroups files, a copy of the
+ `control.ctl` file, and all of the logs in a `LOGS` subdirectory. This
+ export directory can then be made available on the web, copied to another
+ system, or whatever else is appropriate. Generally,
+ `scripts/export-control` should be run periodically from cron.
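+
+  For example, a crontab entry along these lines (the schedule is
+  illustrative) would refresh the export every four hours:
+
+  ```
+  17 */4 * * * /srv/control/scripts/export-control
+  ```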
+
+ Reports can be generated using `scripts/control-summary`. This script
+ needs configuration before running; see the top of the script and its
+ included POD documentation. There is a sample template in the `templates`
+ directory, and `scripts/weekly-report` shows a sample cron job for sending
+ out a regular report.
+ title: Installation
+- body: |
+ This package is intended to provide all of the tools, configuration, and
+ information required to duplicate the ftp.isc.org control message archive
+ and newsgroup list service if you so desire. To set up a similar service
+ based on that service, however, you will also want to bootstrap from the
+ existing data. Here is the procedure for that:
+
+ 1. Be sure that you're starting from the latest software and set of
+ configuration files. I will generally try to make a new release after
+ committing a batch of changes, but I may not make a new release after
+ every change. See the sections below for information about the Git
+ repository in which this package is maintained. You can always clone
+ that repository to get the latest configuration (and then merge or
+ cherry-pick changes from my repository into your repository as you
+ desire).
+
+ 2. Download the current newsgroup list from:
+
+ ftp://ftp.isc.org/pub/usenet/CONFIG/newsgroups.bz2
+
+ and then bootstrap the database from it:
+
+ ```sh
+ bzip2 -dc newsgroups.bz2 | scripts/update-control bulkload
+ ```
+
+ 3. If you want the log information so that your reports will include
+ changes made in the ftp.isc.org archive before you created your own,
+ copy the contents of ftp://ftp.isc.org/pub/usenet/CONFIG/LOGS/ into
+ `/srv/control/logs`.
+
+ 4. If you want to start with the existing control message repository,
+ download the contents of ftp://ftp.isc.org/pub/usenet/control/ into
+ `/srv/control/archive`. You can do this using a recursive download
+ tool that understands FTP, such as wget, but please use the options
+ that add delays and don't hammer the server to death.
+
+ After finishing those steps, you will have a copy of the ftp.isc.org
+ archive and can start processing control messages, possibly with different
+ configuration choices. You can generate the files that are found in
+ ftp://ftp.isc.org/pub/usenet/CONFIG/ by running `scripts/export-control`
+ as described above.
+ title: Bootstrapping
+- body: |
+ To add a new hierarchy, add a configuration fragment in the `config`
+ directory named after the hierarchy, following the format of the existing
+ files, and run `scripts/generate-files` to create a new `control.ctl`
+ file. See the documentation in `scripts/generate-files` for details about
+ the supported configuration keys.
+
+ If the hierarchy uses PGP-signed control messages, also put the PGP key
+ into the `keys` directory in a file named after the hierarchy. Then, run:
+
+ ```sh
+ gpg --homedir=keyring --import keys/<hierarchy>
+ ```
+
+ to add the new key to the working keyring.
+
+ The first user ID on the key must match the signer expected by the
+ configuration data for the corresponding hierarchy. If a hierarchy
+  administrator sets that up wrong (usually by putting additional user IDs on
+ the key), this can be corrected by importing the key into a keyring with
+ GnuPG, using `gpg --edit-key` to remove the offending user ID, and
+ exporting the key again with `gpg --export --ascii`.
+
+ When adding a new hierarchy, it's often useful to bootstrap the newsgroup
+ list by importing the current checkgroups. To do this, obtain the
+ checkgroups as a text file (containing only the groups without any news
+ headers) and run:
+
+ ```sh
+ scripts/update-control checkgroups <hierarchy> < <checkgroups>
+ ```
+
+ where <hierarchy> is the hierarchy the checkgroups is for and
+ <checkgroups> is the path to the checkgroups file.
+ title: Maintenance
support:
email: eagle@eyrie.org
extra: |
diff --git a/t/data/update/pam-krb5/docknot.yaml b/t/data/update/pam-krb5/docknot.yaml
index 80f2478..23b0d9c 100644
--- a/t/data/update/pam-krb5/docknot.yaml
+++ b/t/data/update/pam-krb5/docknot.yaml
@@ -91,351 +91,6 @@ quote:
of a tepid change for the somewhat better," explained one source.
title: '"Look, ma, no hands!"'
work: Salon
-readme:
- sections:
- - body: |
- Just installing the module does not enable it or change anything about
- your system authentication configuration. To use the module for all
- system authentication on Debian systems, put something like:
-
- ```
- auth sufficient pam_krb5.so minimum_uid=1000
- auth required pam_unix.so try_first_pass nullok_secure
- ```
-
- in `/etc/pam.d/common-auth`, something like:
-
- ```
- session optional pam_krb5.so minimum_uid=1000
- session required pam_unix.so
- ```
-
- in `/etc/pam.d/common-session`, and something like:
-
- ```
- account required pam_krb5.so minimum_uid=1000
- account required pam_unix.so
- ```
-
- in `/etc/pam.d/common-account`. The `minimum_uid` setting tells the PAM
- module to pass on any users with a UID lower than 1000, thereby bypassing
- Kerberos authentication for the root account and any system accounts. You
- normally want to do this since otherwise, if the network is down, the
- Kerberos authentication can time out and make it difficult to log in as
- root and fix matters. This also avoids problems with Kerberos principals
- that happen to match system accounts accidentally getting access to those
- accounts.
-
- Be sure to include the module in the session group as well as the auth
- group. Without the session entry, the user's ticket cache will not be
- created properly for ssh logins (among possibly others).
-
- If your users should normally all use Kerberos passwords exclusively,
- putting something like:
-
- ```
- password sufficient pam_krb5.so minimum_uid=1000
- password required pam_unix.so try_first_pass obscure md5
- ```
-
- in `/etc/pam.d/common-password` will change users' passwords in Kerberos
- by default and then only fall back on Unix if that doesn't work. (You can
- make this tighter by using the more complex new-style PAM configuration.)
- If you instead want to synchronize local and Kerberos passwords and change
- them both at the same time, you can do something like:
-
- ```
- password required pam_unix.so obscure sha512
- password required pam_krb5.so use_authtok minimum_uid=1000
- ```
-
- If you have multiple environments that you want to synchronize and you
- don't want password changes to continue if the Kerberos password change
- fails, use the `clear_on_fail` option. For example:
-
- ```
- password required pam_krb5.so clear_on_fail minimum_uid=1000
- password required pam_unix.so use_authtok obscure sha512
- password required pam_smbpass.so use_authtok
- ```
-
- In this case, if `pam_krb5` cannot change the password (due to password
- strength rules on the KDC, for example), it will clear the stored password
- (because of the `clear_on_fail` option), and since `pam_unix` and
- `pam_smbpass` are both configured with `use_authtok`, they will both fail.
- `clear_on_fail` is not the default because it would interfere with the
- more common pattern of falling back to local passwords if the user doesn't
- exist in Kerberos.
-
- If you use a more complex configuration with the Linux PAM `[]` syntax for
- the session and account groups, note that `pam_krb5` returns a status of
- ignore, not success, if the user didn't log on with Kerberos. You may
- need to handle that explicitly with `ignore=ignore` in your action list.
-
- There are many, many other possibilities. See the Linux PAM documentation
- for all the configuration options.
-
- On Red Hat systems, modify `/etc/pam.d/system-auth` instead, which
- contains all of the configuration for the different stacks.
-
- You can also use pam-krb5 only for specific services. In that case,
- modify the files in `/etc/pam.d` for that particular service to use
- `pam_krb5.so` for authentication. For services that are using passwords
- over TLS to authenticate users, you may want to use the `ignore_k5login`
- and `no_ccache` options to the authenticate module. `.k5login`
- authorization is only meaningful for local accounts and ticket caches are
- usually (although not always) only useful for interactive sessions.
-
- Configuring the module for Solaris is both simpler and less flexible,
- since Solaris (at least Solaris 8 and 9, which are the last versions of
- Solaris with which this module was extensively tested) use a single
- `/etc/pam.conf` file that contains configuration for all programs. For
- console login on Solaris, try something like:
-
- ```
- login auth sufficient /usr/local/lib/security/pam_krb5.so minimum_uid=100
- login auth required /usr/lib/security/pam_unix_auth.so.1 use_first_pass
- login account required /usr/local/lib/security/pam_krb5.so minimum_uid=100
- login account required /usr/lib/security/pam_unix_account.so.1
- login session required /usr/local/lib/security/pam_krb5.so retain_after_close minimum_uid=100
- login session required /usr/lib/security/pam_unix_session.so.1
- ```
-
- A similar configuration could be used for other services, such as ssh.
- See the pam.conf(5) man page for more information. When using this module
- with Solaris login (at least on Solaris 8 and 9), you will probably also
- need to add `retain_after_close` to the PAM configuration to avoid having
- the user's credentials deleted before they are logged in.
-
- The Solaris Kerberos library reportedly does not support prompting for a
- password change of an expired account during authentication. Supporting
- password change for expired accounts on Solaris with native Kerberos may
- therefore require setting the `defer_pwchange` or `force_pwchange` option
- for selected login applications. See the description and warnings about
- that option in the pam_krb5(5) man page.
-
- Some configuration options may be put in the `krb5.conf` file used by your
- Kerberos libraries (usually `/etc/krb5.conf` or
- `/usr/local/etc/krb5.conf`) instead or in addition to the PAM
- configuration. See the man page for more details.
-
- The Kerberos library, via pam-krb5, will prompt the user to change their
- password if their password is expired, but when using OpenSSH, this will
- only work when `ChallengeResponseAuthentication` is enabled. Unless this
- option is enabled, OpenSSH doesn't pass PAM messages to the user and can
- only respond to a simple password prompt.
-
- If you are using MIT Kerberos, be aware that users whose passwords are
- expired will not be prompted to change their password unless the KDC
- configuration for your realm in `[realms]` in `krb5.conf` contains a
- `master_kdc` setting or, if using DNS SRV records, you have a DNS entry
- for `_kerberos-master` as well as `_kerberos`.
- title: Configuring
- - body: |
- The first step when debugging any problems with this module is to add
- `debug` to the PAM options for the module (either in the PAM configuration
- or in `krb5.conf`). This will significantly increase the logging from the
- module and should provide a trace of exactly what failed and any available
- error information.
-
- Many Kerberos authentication problems are due to configuration issues in
- `krb5.conf`. If pam-krb5 doesn't work, first check that `kinit` works on
- the same system. That will test your basic Kerberos configuration. If
- the system has a keytab file installed that's readable by the process
- doing authentication via PAM, make sure that the keytab is current and
- contains a key for `host/<system>` where <system> is the fully-qualified
- hostname. pam-krb5 prevents KDC spoofing by checking the user's
- credentials when possible, but this means that if a keytab is present it
- must be correct or authentication will fail. You can check the keytab
- with `klist -k` and `kinit -k`.
-
- Be sure that all libraries and modules, including PAM modules, loaded by a
- program use the same Kerberos libraries. Sometimes programs that use PAM,
- such as current versions of OpenSSH, also link against Kerberos directly.
- If your sshd is linked against one set of Kerberos libraries and pam-krb5
- is linked against a different set of Kerberos libraries, this will often
- cause problems (such as segmentation faults, bus errors, assertions, or
- other strange behavior). Similar issues apply to the com_err library or
- any other library used by both modules and shared libraries and by the
- application that loads them. If your OS ships Kerberos libraries, it's
- usually best if possible to build all Kerberos software on the system
- against those libraries.
- title: Debugging
- - body: |
- The normal sequence of actions taken for a user login is:
-
- ```
- pam_authenticate
- pam_setcred(PAM_ESTABLISH_CRED)
- pam_open_session
- pam_acct_mgmt
- ```
-
- and then at logout:
-
- ```
- pam_close_session
- ```
-
- followed by closing the open PAM session. The corresponding `pam_sm_*`
- functions in this module are called when an application calls those public
- interface functions. Not all applications call all of those functions, or
- in particularly that order, although `pam_authenticate` is always first
- and has to be.
-
- When `pam_authenticate` is called, pam-krb5 creates a temporary ticket
- cache in `/tmp` and sets the PAM environment variable `PAM_KRB5CCNAME` to
- point to it. This ticket cache will be automatically destroyed when the
- PAM session is closed and is there only to pass the initial credentials to
- the call to `pam_setcred`. The module would use a memory cache, but
- memory caches will only work if the application preserves the PAM
- environment between the calls to `pam_authenticate` and `pam_setcred`.
- Most do, but OpenSSH notoriously does not and calls `pam_authenticate` in
- a subprocess, so this method is used to pass the tickets to the
- `pam_setcred` call in a different process.
-
- `pam_authenticate` does a complete authentication, including checking the
- resulting TGT by obtaining a service ticket for the local host if
- possible, but this requires read access to the system keytab. If the
- keytab doesn't exist, can't be read, or doesn't include the appropriate
- credentials, the default is to accept the authentication. This can be
- controlled by setting `verify_ap_req_nofail` to true in `[libdefaults]` in
- `/etc/krb5.conf`. `pam_authenticate` also does a basic authorization
- check, by default calling `krb5_kuserok` (which uses `~/.k5login` if
- available and falls back to checking that the principal corresponds to the
- account name). This can be customized with several options documented in
- the pam_krb5(5) man page.
-
- pam-krb5 treats `pam_open_session` and `pam_setcred(PAM_ESTABLISH_CRED)`
- as synonymous, as some applications call one and some call the other.
- Both copy the initial credentials from the temporary cache into a
- permanent cache for this session and set `KRB5CCNAME` in the environment.
- It will remember when the credential cache has been established and then
- avoid doing any duplicate work afterwards, since some applications call
- `pam_setcred` or `pam_open_session` multiple times (most notably X.Org 7
- and earlier xdm, which also throws away the module settings the last time
- it calls them).
-
- `pam_acct_mgmt` finds the ticket cache, reads it in to obtain the
- authenticated principal, and then does is another authorization check
- against `.k5login` or the local account name as described above.
-
- After the call to `pam_setcred` or `pam_open_session`, the ticket cache
- will be destroyed whenever the calling application either destroys the PAM
- environment or calls `pam_close_session`, which it should do on user
- logout.
-
- The normal sequence of events when refreshing a ticket cache (such as
- inside a screensaver) is:
-
- ```
- pam_authenticate
- pam_setcred(PAM_REINITIALIZE_CRED)
- pam_acct_mgmt
- ```
-
- (`PAM_REFRESH_CRED` may be used instead.) Authentication proceeds as
- above. At the `pam_setcred` stage, rather than creating a new ticket
- cache, the module instead finds the current ticket cache (from the
- `KRB5CCNAME` environment variable or the default ticket cache location
- from the Kerberos library) and then reinitializes it with the credentials
- from the temporary `pam_authenticate` ticket cache. When refreshing a
- ticket cache, the application should not open a session. Calling
- `pam_acct_mgmt` is optional; pam-krb5 doesn't do anything different when
- it's called in this case.
-
- If `pam_authenticate` apparently didn't succeed, or if an account was
- configured to be ignored via `ignore_root` or `minimum_uid`, `pam_setcred`
- (and therefore `pam_open_session`) and `pam_acct_mgmt` return
- `PAM_IGNORE`, which tells the PAM library to proceed as if that module
- wasn't listed in the PAM configuration at all. `pam_authenticate`,
- however, returns failure in the ignored user case by default, since
- otherwise a configuration using `ignore_root` with pam-krb5 as the only
- PAM module would allow anyone to log in as root without a password. There
- doesn't appear to be a case where returning `PAM_IGNORE` instead would
- improve the module's behavior, but if you know of a case, please let me
- know.
-
- By default, `pam_authenticate` intentionally does not follow the PAM
- standard for handling expired accounts and instead returns failure from
- `pam_authenticate` unless the Kerberos libraries are able to change the
- account password during authentication. Too many applications either do
- not call `pam_acct_mgmt` or ignore its exit status. The fully correct PAM
- behavior (returning success from `pam_authenticate` and
- `PAM_NEW_AUTHTOK_REQD` from `pam_acct_mgmt`) can be enabled with the
- `defer_pwchange` option.
-
- The `defer_pwchange` option is unfortunately somewhat tricky to implement.
- In this case, the calling sequence is:
-
- ```
- pam_authenticate
- pam_acct_mgmt
- pam_chauthtok
- pam_setcred
- pam_open_session
- ```
-
- During the first `pam_authenticate`, we can't obtain credentials and
- therefore a ticket cache since the password is expired. But
- `pam_authenticate` isn't called again after `pam_chauthtok`, so
- `pam_chauthtok` has to create a ticket cache. We however don't want it to
- do this for the normal password change (`passwd`) case.
-
- What we do is set a flag in our PAM data structure saying that we're
- processing an expired password, and `pam_chauthtok`, if it sees that flag,
- redoes the authentication with password prompting disabled after it
- finishes changing the password.
-
- Unfortunately, when handling password changes this way, `pam_chauthtok`
- will always have to prompt the user for their current password again even
- though they just typed it. This is because the saved authentication
- tokens are cleared after `pam_authenticate` returns, for security reasons.
- We could hack around this by saving the password in our PAM data
- structure, but this would let the application gain access to it (exactly
- what the clearing is intended to prevent) and breaks a PAM library
- guarantee. We could also work around this by having `pam_authenticate`
- get the `kadmin/changepw` authenticator in the expired password case and
- store it for `pam_chauthtok`, but it doesn't seem worth the hassle.
- title: Implementation Notes
- - body: |
- Originally written by Frank Cusack <fcusack@fcusack.com>, with the
- following acknowledgement:
-
- > Thanks to Naomaru Itoi <itoi@eecs.umich.edu>, Curtis King
- > <curtis.king@cul.ca>, and Derrick Brashear <shadow@dementia.org>, all of
- > whom have written and made available Kerberos 4/5 modules. Although no
- > code in this module is directly from these author's modules, (except the
- > get_user_info() routine in support.c; derived from whichever of these
- > authors originally wrote the first module the other 2 copied from), it
- > was extremely helpful to look over their code which aided in my design.
-
- The module was then patched for the FreeBSD ports collection with
- additional modifications by unknown maintainers and then was modified by
- Joel Kociolek <joko@logidee.com> to be usable with Debian GNU/Linux.
-
- It was packaged by Sam Hartman as the Kerberos v5 PAM module for Debian
- and improved and modified by him and later by Russ Allbery to fix bugs and
- add additional features. It was then adopted by Andres Salomon, who added
- support for refreshing credentials.
-
- The current distribution is maintained by Russ Allbery, who also added
- support for reading configuration from `krb5.conf`, added many features
- for compatibility with the Sourceforge module, commented and standardized
- the formatting of the code, and overhauled the documentation.
-
- Thanks to Douglas E. Engert for the initial implementation of PKINIT
- support. I have since modified and reworked it extensively, so any bugs
- or compilation problems are my fault.
-
- Thanks to Markus Moeller for lots of debugging and multiple patches and
- suggestions for improved portability.
-
- Thanks to Booker Bense for the implementation of the `alt_auth_map`
- option.
-
- Thanks to Sam Hartman for the FAST support implementation.
- title: History and Acknowledgements
requirements: |
Either MIT Kerberos (or Kerberos implementations based on it) or Heimdal
are supported.  MIT Kerberos 1.3 or later may be required; this module has
@@ -472,6 +127,350 @@ requirements: |
solution to this problem is to upgrade OpenSSH. I'm not sure exactly when
this problem was fixed, but at the very least OpenSSH 4.3 and later do not
exhibit it.
+sections:
+- body: |
+ Just installing the module does not enable it or change anything about
+ your system authentication configuration. To use the module for all
+ system authentication on Debian systems, put something like:
+
+ ```
+ auth sufficient pam_krb5.so minimum_uid=1000
+ auth required pam_unix.so try_first_pass nullok_secure
+ ```
+
+ in `/etc/pam.d/common-auth`, something like:
+
+ ```
+ session optional pam_krb5.so minimum_uid=1000
+ session required pam_unix.so
+ ```
+
+ in `/etc/pam.d/common-session`, and something like:
+
+ ```
+ account required pam_krb5.so minimum_uid=1000
+ account required pam_unix.so
+ ```
+
+ in `/etc/pam.d/common-account`. The `minimum_uid` setting tells the PAM
+ module to pass on any users with a UID lower than 1000, thereby bypassing
+ Kerberos authentication for the root account and any system accounts. You
+ normally want to do this since otherwise, if the network is down, the
+ Kerberos authentication can time out and make it difficult to log in as
+ root and fix matters. This also avoids problems with Kerberos principals
+ that happen to match system accounts accidentally getting access to those
+ accounts.
+
+ Be sure to include the module in the session group as well as the auth
+ group. Without the session entry, the user's ticket cache will not be
+ created properly for ssh logins (among possibly others).
+
+ If your users should normally all use Kerberos passwords exclusively,
+ putting something like:
+
+ ```
+ password sufficient pam_krb5.so minimum_uid=1000
+ password required pam_unix.so try_first_pass obscure md5
+ ```
+
+ in `/etc/pam.d/common-password` will change users' passwords in Kerberos
+ by default and then only fall back on Unix if that doesn't work. (You can
+ make this tighter by using the more complex new-style PAM configuration.)
+ If you instead want to synchronize local and Kerberos passwords and change
+ them both at the same time, you can do something like:
+
+ ```
+ password required pam_unix.so obscure sha512
+ password required pam_krb5.so use_authtok minimum_uid=1000
+ ```
+
+ If you have multiple environments that you want to synchronize and you
+ don't want password changes to continue if the Kerberos password change
+ fails, use the `clear_on_fail` option. For example:
+
+ ```
+ password required pam_krb5.so clear_on_fail minimum_uid=1000
+ password required pam_unix.so use_authtok obscure sha512
+ password required pam_smbpass.so use_authtok
+ ```
+
+ In this case, if `pam_krb5` cannot change the password (due to password
+ strength rules on the KDC, for example), it will clear the stored password
+ (because of the `clear_on_fail` option), and since `pam_unix` and
+ `pam_smbpass` are both configured with `use_authtok`, they will both fail.
+ `clear_on_fail` is not the default because it would interfere with the
+ more common pattern of falling back to local passwords if the user doesn't
+ exist in Kerberos.
+
+ If you use a more complex configuration with the Linux PAM `[]` syntax for
+ the session and account groups, note that `pam_krb5` returns a status of
+ ignore, not success, if the user didn't log on with Kerberos. You may
+ need to handle that explicitly with `ignore=ignore` in your action list.
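+
+  For example, an account stack using that syntax might look like this
+  (illustrative only; adjust the actions to your local policy):
+
+  ```
+  account [success=ok ignore=ignore default=bad] pam_krb5.so minimum_uid=1000
+  account required pam_unix.so
+  ```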
+
+ There are many, many other possibilities. See the Linux PAM documentation
+ for all the configuration options.
+
+ On Red Hat systems, modify `/etc/pam.d/system-auth` instead, which
+ contains all of the configuration for the different stacks.
+
+ You can also use pam-krb5 only for specific services. In that case,
+ modify the files in `/etc/pam.d` for that particular service to use
+ `pam_krb5.so` for authentication. For services that are using passwords
+ over TLS to authenticate users, you may want to use the `ignore_k5login`
+ and `no_ccache` options to the authenticate module. `.k5login`
+ authorization is only meaningful for local accounts and ticket caches are
+ usually (although not always) only useful for interactive sessions.
+
+ Configuring the module for Solaris is both simpler and less flexible,
+ since Solaris (at least Solaris 8 and 9, which are the last versions of
+  Solaris with which this module was extensively tested) uses a single
+ `/etc/pam.conf` file that contains configuration for all programs. For
+ console login on Solaris, try something like:
+
+ ```
+ login auth sufficient /usr/local/lib/security/pam_krb5.so minimum_uid=100
+ login auth required /usr/lib/security/pam_unix_auth.so.1 use_first_pass
+ login account required /usr/local/lib/security/pam_krb5.so minimum_uid=100
+ login account required /usr/lib/security/pam_unix_account.so.1
+ login session required /usr/local/lib/security/pam_krb5.so retain_after_close minimum_uid=100
+ login session required /usr/lib/security/pam_unix_session.so.1
+ ```
+
+ A similar configuration could be used for other services, such as ssh.
+ See the pam.conf(5) man page for more information. When using this module
+ with Solaris login (at least on Solaris 8 and 9), you will probably also
+ need to add `retain_after_close` to the PAM configuration to avoid having
+ the user's credentials deleted before they are logged in.
+
+ The Solaris Kerberos library reportedly does not support prompting for a
+ password change of an expired account during authentication. Supporting
+ password change for expired accounts on Solaris with native Kerberos may
+ therefore require setting the `defer_pwchange` or `force_pwchange` option
+ for selected login applications. See the description and warnings about
+ that option in the pam_krb5(5) man page.
+
+ Some configuration options may be put in the `krb5.conf` file used by your
+ Kerberos libraries (usually `/etc/krb5.conf` or
+  `/usr/local/etc/krb5.conf`) instead of or in addition to the PAM
+ configuration. See the man page for more details.
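+
+  For example, a hypothetical `krb5.conf` fragment setting a few options in
+  its `[appdefaults]` section might look like:
+
+  ```
+  [appdefaults]
+      pam = {
+          minimum_uid = 1000
+          ignore_root = true
+      }
+  ```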
+
+ The Kerberos library, via pam-krb5, will prompt the user to change their
+ password if their password is expired, but when using OpenSSH, this will
+ only work when `ChallengeResponseAuthentication` is enabled. Unless this
+ option is enabled, OpenSSH doesn't pass PAM messages to the user and can
+ only respond to a simple password prompt.
+
+ If you are using MIT Kerberos, be aware that users whose passwords are
+ expired will not be prompted to change their password unless the KDC
+ configuration for your realm in `[realms]` in `krb5.conf` contains a
+ `master_kdc` setting or, if using DNS SRV records, you have a DNS entry
+ for `_kerberos-master` as well as `_kerberos`.
+ title: Configuring
+- body: |
+ The first step when debugging any problems with this module is to add
+ `debug` to the PAM options for the module (either in the PAM configuration
+ or in `krb5.conf`). This will significantly increase the logging from the
+ module and should provide a trace of exactly what failed and any available
+ error information.
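+
+  For example, in the PAM configuration (mirroring the earlier `auth` line):
+
+  ```
+  auth sufficient pam_krb5.so minimum_uid=1000 debug
+  ```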
+
+ Many Kerberos authentication problems are due to configuration issues in
+ `krb5.conf`. If pam-krb5 doesn't work, first check that `kinit` works on
+ the same system. That will test your basic Kerberos configuration. If
+ the system has a keytab file installed that's readable by the process
+ doing authentication via PAM, make sure that the keytab is current and
+ contains a key for `host/<system>` where <system> is the fully-qualified
+ hostname. pam-krb5 prevents KDC spoofing by checking the user's
+ credentials when possible, but this means that if a keytab is present it
+ must be correct or authentication will fail. You can check the keytab
+ with `klist -k` and `kinit -k`.
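+
+  For example (the keytab path and principal shown are illustrative):
+
+  ```
+  klist -k /etc/krb5.keytab
+  kinit -k host/$(hostname -f)
+  ```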
+
+ Be sure that all libraries and modules, including PAM modules, loaded by a
+ program use the same Kerberos libraries. Sometimes programs that use PAM,
+ such as current versions of OpenSSH, also link against Kerberos directly.
+ If your sshd is linked against one set of Kerberos libraries and pam-krb5
+ is linked against a different set of Kerberos libraries, this will often
+ cause problems (such as segmentation faults, bus errors, assertions, or
+ other strange behavior). Similar issues apply to the com_err library or
+ any other library used by both modules and shared libraries and by the
+ application that loads them. If your OS ships Kerberos libraries, it's
+ usually best if possible to build all Kerberos software on the system
+ against those libraries.
+ title: Debugging
+- body: |
+ The normal sequence of actions taken for a user login is:
+
+ ```
+ pam_authenticate
+ pam_setcred(PAM_ESTABLISH_CRED)
+ pam_open_session
+ pam_acct_mgmt
+ ```
+
+ and then at logout:
+
+ ```
+ pam_close_session
+ ```
+
+ followed by closing the open PAM session. The corresponding `pam_sm_*`
+ functions in this module are called when an application calls those public
+ interface functions. Not all applications call all of those functions, or
+  necessarily in that order, although `pam_authenticate` is always first
+ and has to be.
+
+ When `pam_authenticate` is called, pam-krb5 creates a temporary ticket
+ cache in `/tmp` and sets the PAM environment variable `PAM_KRB5CCNAME` to
+ point to it. This ticket cache will be automatically destroyed when the
+ PAM session is closed and is there only to pass the initial credentials to
+ the call to `pam_setcred`. The module would use a memory cache, but
+ memory caches will only work if the application preserves the PAM
+ environment between the calls to `pam_authenticate` and `pam_setcred`.
+ Most do, but OpenSSH notoriously does not and calls `pam_authenticate` in
+ a subprocess, so this method is used to pass the tickets to the
+ `pam_setcred` call in a different process.
+
+ `pam_authenticate` does a complete authentication, including checking the
+ resulting TGT by obtaining a service ticket for the local host if
+ possible, but this requires read access to the system keytab. If the
+ keytab doesn't exist, can't be read, or doesn't include the appropriate
+ credentials, the default is to accept the authentication. This can be
+ controlled by setting `verify_ap_req_nofail` to true in `[libdefaults]` in
+ `/etc/krb5.conf`. `pam_authenticate` also does a basic authorization
+ check, by default calling `krb5_kuserok` (which uses `~/.k5login` if
+ available and falls back to checking that the principal corresponds to the
+ account name). This can be customized with several options documented in
+ the pam_krb5(5) man page.
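+
+  For example, to make a missing or unreadable keytab a fatal error, one
+  could add to `/etc/krb5.conf`:
+
+  ```
+  [libdefaults]
+      verify_ap_req_nofail = true
+  ```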
+
+ pam-krb5 treats `pam_open_session` and `pam_setcred(PAM_ESTABLISH_CRED)`
+ as synonymous, as some applications call one and some call the other.
+ Both copy the initial credentials from the temporary cache into a
+ permanent cache for this session and set `KRB5CCNAME` in the environment.
+ It will remember when the credential cache has been established and then
+ avoid doing any duplicate work afterwards, since some applications call
+ `pam_setcred` or `pam_open_session` multiple times (most notably X.Org 7
+ and earlier xdm, which also throws away the module settings the last time
+ it calls them).
+
+ `pam_acct_mgmt` finds the ticket cache, reads it in to obtain the
+  authenticated principal, and then does another authorization check
+ against `.k5login` or the local account name as described above.
+
+ After the call to `pam_setcred` or `pam_open_session`, the ticket cache
+ will be destroyed whenever the calling application either destroys the PAM
+ environment or calls `pam_close_session`, which it should do on user
+ logout.
+
+ The normal sequence of events when refreshing a ticket cache (such as
+ inside a screensaver) is:
+
+ ```
+ pam_authenticate
+ pam_setcred(PAM_REINITIALIZE_CRED)
+ pam_acct_mgmt
+ ```
+
+ (`PAM_REFRESH_CRED` may be used instead.) Authentication proceeds as
+ above. At the `pam_setcred` stage, rather than creating a new ticket
+ cache, the module instead finds the current ticket cache (from the
+ `KRB5CCNAME` environment variable or the default ticket cache location
+ from the Kerberos library) and then reinitializes it with the credentials
+ from the temporary `pam_authenticate` ticket cache. When refreshing a
+ ticket cache, the application should not open a session. Calling
+ `pam_acct_mgmt` is optional; pam-krb5 doesn't do anything different when
+ it's called in this case.
+
+ If `pam_authenticate` apparently didn't succeed, or if an account was
+ configured to be ignored via `ignore_root` or `minimum_uid`, `pam_setcred`
+ (and therefore `pam_open_session`) and `pam_acct_mgmt` return
+ `PAM_IGNORE`, which tells the PAM library to proceed as if that module
+ wasn't listed in the PAM configuration at all. `pam_authenticate`,
+ however, returns failure in the ignored user case by default, since
+ otherwise a configuration using `ignore_root` with pam-krb5 as the only
+ PAM module would allow anyone to log in as root without a password. There
+ doesn't appear to be a case where returning `PAM_IGNORE` instead would
+ improve the module's behavior, but if you know of a case, please let me
+ know.
+
+ By default, `pam_authenticate` intentionally does not follow the PAM
+ standard for handling expired accounts and instead returns failure from
+ `pam_authenticate` unless the Kerberos libraries are able to change the
+ account password during authentication. Too many applications either do
+ not call `pam_acct_mgmt` or ignore its exit status. The fully correct PAM
+ behavior (returning success from `pam_authenticate` and
+ `PAM_NEW_AUTHTOK_REQD` from `pam_acct_mgmt`) can be enabled with the
+ `defer_pwchange` option.
+
+ The `defer_pwchange` option is unfortunately somewhat tricky to implement.
+ In this case, the calling sequence is:
+
+ ```
+ pam_authenticate
+ pam_acct_mgmt
+ pam_chauthtok
+ pam_setcred
+ pam_open_session
+ ```
+
+ During the first `pam_authenticate`, we can't obtain credentials and
+ therefore a ticket cache since the password is expired. But
+ `pam_authenticate` isn't called again after `pam_chauthtok`, so
+ `pam_chauthtok` has to create a ticket cache. We however don't want it to
+ do this for the normal password change (`passwd`) case.
+
+ What we do is set a flag in our PAM data structure saying that we're
+ processing an expired password, and `pam_chauthtok`, if it sees that flag,
+ redoes the authentication with password prompting disabled after it
+ finishes changing the password.
+
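+    From the calling application's side, the expired-password flow with
+    `defer_pwchange` enabled looks roughly like the following sketch (error
+    handling elided; `PAM_CHANGE_EXPIRED_AUTHTOK` is the standard flag for
+    changing only expired authentication tokens):
+
+    ```
+    /* Sketch only; pamh comes from an earlier pam_start(). */
+    status = pam_authenticate(pamh, 0);
+    if (status == PAM_SUCCESS) {
+        status = pam_acct_mgmt(pamh, 0);
+        if (status == PAM_NEW_AUTHTOK_REQD)
+            status = pam_chauthtok(pamh, PAM_CHANGE_EXPIRED_AUTHTOK);
+    }
+    if (status == PAM_SUCCESS) {
+        status = pam_setcred(pamh, PAM_ESTABLISH_CRED);
+        if (status == PAM_SUCCESS)
+            status = pam_open_session(pamh, 0);
+    }
+    ```
+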
+ Unfortunately, when handling password changes this way, `pam_chauthtok`
+ will always have to prompt the user for their current password again even
+ though they just typed it. This is because the saved authentication
+ tokens are cleared after `pam_authenticate` returns, for security reasons.
+ We could hack around this by saving the password in our PAM data
+ structure, but this would let the application gain access to it (exactly
+ what the clearing is intended to prevent) and breaks a PAM library
+ guarantee. We could also work around this by having `pam_authenticate`
+ get the `kadmin/changepw` authenticator in the expired password case and
+ store it for `pam_chauthtok`, but it doesn't seem worth the hassle.
+ title: Implementation Notes
+- body: |
+ Originally written by Frank Cusack <fcusack@fcusack.com>, with the
+ following acknowledgement:
+
+ > Thanks to Naomaru Itoi <itoi@eecs.umich.edu>, Curtis King
+ > <curtis.king@cul.ca>, and Derrick Brashear <shadow@dementia.org>, all of
+ > whom have written and made available Kerberos 4/5 modules. Although no
+ > code in this module is directly from these author's modules, (except the
+ > get_user_info() routine in support.c; derived from whichever of these
+ > authors originally wrote the first module the other 2 copied from), it
+ > was extremely helpful to look over their code which aided in my design.
+
+ The module was then patched for the FreeBSD ports collection with
+ additional modifications by unknown maintainers and then was modified by
+ Joel Kociolek <joko@logidee.com> to be usable with Debian GNU/Linux.
+
+ It was packaged by Sam Hartman as the Kerberos v5 PAM module for Debian
+ and improved and modified by him and later by Russ Allbery to fix bugs and
+ add additional features. It was then adopted by Andres Salomon, who added
+ support for refreshing credentials.
+
+ The current distribution is maintained by Russ Allbery, who also added
+ support for reading configuration from `krb5.conf`, added many features
+ for compatibility with the Sourceforge module, commented and standardized
+ the formatting of the code, and overhauled the documentation.
+
+ Thanks to Douglas E. Engert for the initial implementation of PKINIT
+ support. I have since modified and reworked it extensively, so any bugs
+ or compilation problems are my fault.
+
+ Thanks to Markus Moeller for lots of debugging and multiple patches and
+ suggestions for improved portability.
+
+ Thanks to Booker Bense for the implementation of the `alt_auth_map`
+ option.
+
+ Thanks to Sam Hartman for the FAST support implementation.
+ title: History and Acknowledgements
support:
email: eagle@eyrie.org
github: rra/pam-krb5
diff --git a/t/data/update/remctl/docknot.yaml b/t/data/update/remctl/docknot.yaml
index 3fa1582..a4dee66 100644
--- a/t/data/update/remctl/docknot.yaml
+++ b/t/data/update/remctl/docknot.yaml
@@ -237,41 +237,6 @@ quote:
author: Peter Marshall
text: |
Small deeds done are better than great deeds planned.
-readme:
- sections:
- - body: |
- (These instructions are not tested by the author and are now dated.
- Updated instructions via a pull request, issue, or email are very
- welcome.)
-
- First, install the Microsoft Windows SDK for Windows Vista if you have not
- already. This is a free download from Microsoft for users of "Genuine
- Microsoft Windows." The `vcvars32.bat` environment provided by Visual
- Studio may work as an alternative, but has not been tested.
-
- Next, install the [MIT Kerberos for Windows
- SDK](https://web.mit.edu/kerberos/www/dist/index.html). remctl has been
- tested with version 3.2.1 but should hopefully work with later versions.
-
- Then, follow these steps:
-
- 1. Run the `InitEnv.cmd` script included with the Windows SDK with
- parameters `"/xp /release"`.
-
- 2. Run the `configure.bat` script, giving it as an argument the location
- of the Kerberos for Windows SDK. For example, if you installed the KfW
- SDK in `"c:\KfW SDK"`, you should run:
-
- ```
- configure "c:\KfW SDK"
- ```
-
- 3. Run `nmake` to start compiling. You can ignore the warnings.
-
- If all goes well, you will have `remctl.exe` and `remctl.dll`. The latter
- is a shared library used by the client program. It exports the same
- interface as the UNIX libremctl library.
- title: Building on Windows
requirements: |
The remctld server and the standard client are written in C and require a
C compiler and GSS-API libraries to build. Both will build against either
@@ -325,6 +290,40 @@ requirements: |
currently requires the Sun Java JDK (1.4.2, 5, or 6) or OpenJDK 6 or
later. A considerably better Java client implementation is available on
the `java` branch in the Git repository but has not yet been merged.
+sections:
+- body: |
+ (These instructions are not tested by the author and are now dated.
+ Updated instructions via a pull request, issue, or email are very
+ welcome.)
+
+ First, install the Microsoft Windows SDK for Windows Vista if you have not
+ already. This is a free download from Microsoft for users of "Genuine
+ Microsoft Windows." The `vcvars32.bat` environment provided by Visual
+ Studio may work as an alternative, but has not been tested.
+
+ Next, install the [MIT Kerberos for Windows
+ SDK](https://web.mit.edu/kerberos/www/dist/index.html). remctl has been
+ tested with version 3.2.1 but should hopefully work with later versions.
+
+ Then, follow these steps:
+
+ 1. Run the `InitEnv.cmd` script included with the Windows SDK with
+ parameters `"/xp /release"`.
+
+ 2. Run the `configure.bat` script, giving it as an argument the location
+ of the Kerberos for Windows SDK. For example, if you installed the KfW
+ SDK in `"c:\KfW SDK"`, you should run:
+
+ ```
+ configure "c:\KfW SDK"
+ ```
+
+ 3. Run `nmake` to start compiling. You can ignore the warnings.
+
+ If all goes well, you will have `remctl.exe` and `remctl.dll`. The latter
+ is a shared library used by the client program. It exports the same
+ interface as the UNIX libremctl library.
+ title: Building on Windows
support:
email: eagle@eyrie.org
github: rra/remctl
diff --git a/t/data/update/rra-c-util/docknot.yaml b/t/data/update/rra-c-util/docknot.yaml
index ced18a0..1d9f4f9 100644
--- a/t/data/update/rra-c-util/docknot.yaml
+++ b/t/data/update/rra-c-util/docknot.yaml
@@ -86,188 +86,6 @@ quote:
Greenspun's Tenth Rule of Programming: any sufficiently complicated C or
Fortran program contains an ad hoc informally-specified bug-ridden slow
implementation of half of Common Lisp.
-readme:
- sections:
- - body: |
- You can build rra-c-util with:
-
- ```
- ./configure
- make
- ```
-
- Pass `--enable-kafs` to configure to attempt to build kafs support, which
- will use either an existing libkafs or libkopenafs library or build the
- kafs replacement included in this package. You can also add
- `--without-libkafs` to force the use of the internal kafs replacement.
-
- Pass `--enable-silent-rules` to configure for a quieter build (similar to
- the Linux kernel). Use `make warnings` instead of make to build with full
- GCC compiler warnings (requires a relatively current version of GCC).
-
- Normally, configure will use `krb5-config` to determine the flags to use
- to compile with your Kerberos libraries. If `krb5-config` isn't found, it
- will look for the standard Kerberos libraries in locations already
-      searched by your compiler. If the `krb5-config` script first in your
- path is not the one corresponding to the Kerberos libraries you want to
- use or if your Kerberos libraries and includes aren't in a location
- searched by default by your compiler, you need to specify a different
- Kerberos installation root via `--with-krb5=PATH`. For example:
-
- ```
- ./configure --with-krb5=/usr/pubsw
- ```
-
- You can also individually set the paths to the include directory and the
- library directory with `--with-krb5-include` and `--with-krb5-lib`. You
- may need to do this if Autoconf can't figure out whether to use `lib`,
- `lib32`, or `lib64` on your platform.
-
- To specify a particular `krb5-config` script to use, either set the
- `PATH_KRB5_CONFIG` environment variable or pass it to configure like:
-
- ```
- ./configure PATH_KRB5_CONFIG=/path/to/krb5-config
- ```
-
- To not use `krb5-config` and force library probing even if there is a
- `krb5-config` script on your path, set `PATH_KRB5_CONFIG` to a nonexistent
- path:
-
- ```
- ./configure PATH_KRB5_CONFIG=/nonexistent
- ```
-
- `krb5-config` is not used and library probing is always done if either
-      `--with-krb5-include` or `--with-krb5-lib` is given.
-
- GSS-API libraries are found the same way: with `krb5-config` by default if
- it is found, and a `--with-gssapi=PATH` flag to specify the installation
- root. `PATH_KRB5_CONFIG` is similarly used to find krb5-config for the
- GSS-API libraries, and `--with-gssapi-include` and `--with-gssapi-lib` can
- be used to specify the exact paths, overriding any `krb5-config` results.
- title: Building
- - body: |
- rra-c-util comes with an extensive test suite, which you can run after
- building with:
-
- ```
- make check
- ```
-
- If a test fails, you can run a single test with verbose output via:
-
- ```
- tests/runtests -o <name-of-test>
- ```
-
- Do this instead of running the test program directly since it will ensure
- that necessary environment variables are set up.
- title: Testing
- - body: |
- While there is an install target, it's present only because Automake
- provides it automatically. Its use is not recommended. Instead, the code
- in this package is intended to be copied into your package and refreshed
- from the latest release of rra-c-util for each release.
-
- You can obviously copy the code and integrate it however works best for
- your package and your build system. Here's how I do it for my packages as
- an example:
-
- * Create a portable directory and copy `macros.h`, `system.h`,
-        `stdbool.h`, and `dummy.c` along with whatever additional functions
- your package uses that may not be present on all systems. If you use
- much of the `util` directory (see below), you'll need `asprintf.c`,
- `reallocarray.c`, and `snprintf.c` at least. If you use
- `util/network.c`, you'll also need `getaddrinfo.c`, `getaddrinfo.h`,
- `getnameinfo.c`, `getnameinfo.h`, `inet_*.c`, and `socket.h`. You'll
- need `winsock.c` for networking portability to Windows.
-
- * Copy the necessary portions of `configure.ac` from this package into
- your package. `configure.ac` is commented to try to give you a guide
- for what you need to copy over. You will also need to make an `m4`
- subdirectory, add the code to `configure.ac` to load Autoconf macros
- from `m4`, and copy over `m4/snprintf.m4` and possibly `m4/socket.m4`
- and `m4/inet-ntoa.m4`.
-
- * Copy the code from `Makefile.am` for building `libportable.a` into your
- package and be sure to link your package binaries with `libportable.a`.
- If you include this code in a shared library, you'll need to build
- `libportable.la` instead; see the Automake manual for the differences.
- You'll need to change `LIBRARIES` to `LTLIBRARIES` and `LIBOBJS` to
- `LTLIBOBJS` in addition to renaming the targets.
-
- * Create a `util` directory and copy over the portions of the utility
- library that you want. You will probably need `messages.[ch]` and
- `xmalloc.[ch]` if you copy anything over at all, since most of the rest
- of the library uses those. You will also need `m4/vamacros.m4` if you
- use `messages.[ch]`.
-
- * Copy the code from `Makefile.am` for building `libutil.a` into your
- package and be sure to link your package binaries with `libutil.a`. As
- with `libportable.a`, if you want to use the utility functions in a
- shared library, you'll need to instead build `libutil.la` and change
- some of the Automake variables.
-
- * If your package uses a TAP-based test suite written in C, consider using
- the additional TAP utility functions in `tests/tap` (specifically
- `messages.*`, `process.*`, and `string.*`).
-
- * If you're using the Kerberos portability code, copy over
- `portable/krb5.h`, `portable/krb5-extra.c`, `m4/krb5.m4`,
- `m4/lib-depends.m4`, `m4/lib-pathname.m4`, and optionally
- `util/messages-krb5.[ch]`. You'll also need the relevant fragments of
- `configure.ac`. You may want to remove some things from `krb5.h` and
-        `krb5-extra.c` and the corresponding configure checks if your code doesn't
- need all of those functions. If you need `krb5_get_renewed_creds`, also
- copy over `krb5-renew.c`. Don't forget to add `$(KRB5_CPPFLAGS)` to
- `CPPFLAGS` for `libportable` and possibly `libutil`, and if you're
- building a shared library, also add `$(KRB5_LDFLAGS)` to `LDFLAGS` and
- `$(KRB5_LIBS)` to `LIBADD` for those libraries.
-
-        For a Kerberos-enabled test suite, also consider copying the
-        `kerberos.*` libraries in `tests/tap`.
- If you want to use `kerberos_generate_conf` from `tests/tap/kerberos.c`,
- also copy over `tests/data/generate-krb5-conf`.
-
- * For testing that requires making Kerberos administrative changes,
- consider copying over the `kadmin.*` libraries in `tests/tap`.
-
- * For testing packages that use remctl, see the `tests/tap/remctl.c` and
- `tests/tap/remctl.h` files for C tests and `tests/tap/remctl.sh` for
- shell scripts.
-
- * If you're using the kafs portability code, copy over the `kafs`
- directory, `m4/kafs.m4`, `m4/lib-pathname.m4`, `portable/k_haspag.c`,
- the code to build kafs from `Makefile.am`, and the relevant fragments of
- `configure.ac`.
-
- * If you're using the PAM portability code, copy over `pam-util/*`,
- `portable/pam*`, `m4/pam-const.m4`, and the relevant fragments of
- `configure.ac`.
-
- * Copy over any other Autoconf macros that you want to use in your
- package from the m4 directory.
-
- * Copy over any generic tests from `tests/docs` and `tests/perl` that are
- appropriate for your package. If you use any of these, also copy over
- the `tests/tap/perl` directory and `tests/data/perl.conf` (and customize
- the latter for your package).
-
- * If the package embeds a Perl module, copy over any tests from the
- `perl/t` directory that are applicable. This can provide generic
- testing of the embedded Perl module using Perl's own test
- infrastructure. If you use any of these, also copy over the
- `perl/t/data/perl.conf` file and customize it for your package. You
- will need to arrange for `perl/t/data` to contain copies of the
- `perlcriticrc` and `perltidyrc` files, either by making copies of the
- files from `tests/data` or by using make to copy them.
-
- I also copy over all the relevant tests from the `tests` directory and the
- build machinery for them from `Makefile.am` so that the portability and
- utility layer are tested along with the rest of the package. The test
- driver should come from C TAP Harness.
- title: Using This Code
requirements: |
Everything requires a C compiler to build and expects an ISO C89 or later
C compiler and libraries. Presence of strdup is also assumed, which is
@@ -305,11 +123,192 @@ requirements: |
All are available on CPAN. Those tests will be skipped if the modules are
not available.
+sections:
+- body: |
+ You can build rra-c-util with:
+
+ ```
+ ./configure
+ make
+ ```
+
+ Pass `--enable-kafs` to configure to attempt to build kafs support, which
+ will use either an existing libkafs or libkopenafs library or build the
+ kafs replacement included in this package. You can also add
+ `--without-libkafs` to force the use of the internal kafs replacement.
+
+ Pass `--enable-silent-rules` to configure for a quieter build (similar to
+ the Linux kernel). Use `make warnings` instead of make to build with full
+ GCC compiler warnings (requires a relatively current version of GCC).
+
+ Normally, configure will use `krb5-config` to determine the flags to use
+ to compile with your Kerberos libraries. If `krb5-config` isn't found, it
+ will look for the standard Kerberos libraries in locations already
+    searched by your compiler. If the `krb5-config` script first in your
+ path is not the one corresponding to the Kerberos libraries you want to
+ use or if your Kerberos libraries and includes aren't in a location
+ searched by default by your compiler, you need to specify a different
+ Kerberos installation root via `--with-krb5=PATH`. For example:
+
+ ```
+ ./configure --with-krb5=/usr/pubsw
+ ```
+
+ You can also individually set the paths to the include directory and the
+ library directory with `--with-krb5-include` and `--with-krb5-lib`. You
+ may need to do this if Autoconf can't figure out whether to use `lib`,
+ `lib32`, or `lib64` on your platform.
+
+ To specify a particular `krb5-config` script to use, either set the
+ `PATH_KRB5_CONFIG` environment variable or pass it to configure like:
+
+ ```
+ ./configure PATH_KRB5_CONFIG=/path/to/krb5-config
+ ```
+
+ To not use `krb5-config` and force library probing even if there is a
+ `krb5-config` script on your path, set `PATH_KRB5_CONFIG` to a nonexistent
+ path:
+
+ ```
+ ./configure PATH_KRB5_CONFIG=/nonexistent
+ ```
+
+ `krb5-config` is not used and library probing is always done if either
+    `--with-krb5-include` or `--with-krb5-lib` is given.
+
+ GSS-API libraries are found the same way: with `krb5-config` by default if
+ it is found, and a `--with-gssapi=PATH` flag to specify the installation
+ root. `PATH_KRB5_CONFIG` is similarly used to find krb5-config for the
+ GSS-API libraries, and `--with-gssapi-include` and `--with-gssapi-lib` can
+ be used to specify the exact paths, overriding any `krb5-config` results.
+ title: Building
+- body: |
+ While there is an install target, it's present only because Automake
+ provides it automatically. Its use is not recommended. Instead, the code
+ in this package is intended to be copied into your package and refreshed
+ from the latest release of rra-c-util for each release.
+
+ You can obviously copy the code and integrate it however works best for
+ your package and your build system. Here's how I do it for my packages as
+ an example:
+
+ * Create a portable directory and copy `macros.h`, `system.h`,
+      `stdbool.h`, and `dummy.c` along with whatever additional functions
+ your package uses that may not be present on all systems. If you use
+ much of the `util` directory (see below), you'll need `asprintf.c`,
+ `reallocarray.c`, and `snprintf.c` at least. If you use
+ `util/network.c`, you'll also need `getaddrinfo.c`, `getaddrinfo.h`,
+ `getnameinfo.c`, `getnameinfo.h`, `inet_*.c`, and `socket.h`. You'll
+ need `winsock.c` for networking portability to Windows.
+
+ * Copy the necessary portions of `configure.ac` from this package into
+ your package. `configure.ac` is commented to try to give you a guide
+ for what you need to copy over. You will also need to make an `m4`
+ subdirectory, add the code to `configure.ac` to load Autoconf macros
+ from `m4`, and copy over `m4/snprintf.m4` and possibly `m4/socket.m4`
+ and `m4/inet-ntoa.m4`.
+
+ * Copy the code from `Makefile.am` for building `libportable.a` into your
+ package and be sure to link your package binaries with `libportable.a`.
+ If you include this code in a shared library, you'll need to build
+ `libportable.la` instead; see the Automake manual for the differences.
+ You'll need to change `LIBRARIES` to `LTLIBRARIES` and `LIBOBJS` to
+ `LTLIBOBJS` in addition to renaming the targets.
+
+ * Create a `util` directory and copy over the portions of the utility
+ library that you want. You will probably need `messages.[ch]` and
+ `xmalloc.[ch]` if you copy anything over at all, since most of the rest
+ of the library uses those. You will also need `m4/vamacros.m4` if you
+ use `messages.[ch]`.
+
+ * Copy the code from `Makefile.am` for building `libutil.a` into your
+ package and be sure to link your package binaries with `libutil.a`. As
+ with `libportable.a`, if you want to use the utility functions in a
+ shared library, you'll need to instead build `libutil.la` and change
+ some of the Automake variables.
+
+ * If your package uses a TAP-based test suite written in C, consider using
+ the additional TAP utility functions in `tests/tap` (specifically
+ `messages.*`, `process.*`, and `string.*`).
+
+ * If you're using the Kerberos portability code, copy over
+ `portable/krb5.h`, `portable/krb5-extra.c`, `m4/krb5.m4`,
+ `m4/lib-depends.m4`, `m4/lib-pathname.m4`, and optionally
+ `util/messages-krb5.[ch]`. You'll also need the relevant fragments of
+ `configure.ac`. You may want to remove some things from `krb5.h` and
+      `krb5-extra.c` and the corresponding configure checks if your code doesn't
+ need all of those functions. If you need `krb5_get_renewed_creds`, also
+ copy over `krb5-renew.c`. Don't forget to add `$(KRB5_CPPFLAGS)` to
+ `CPPFLAGS` for `libportable` and possibly `libutil`, and if you're
+ building a shared library, also add `$(KRB5_LDFLAGS)` to `LDFLAGS` and
+ `$(KRB5_LIBS)` to `LIBADD` for those libraries.
+
+      For a Kerberos-enabled test suite, also consider copying the
+      `kerberos.*` libraries in `tests/tap`.
+ If you want to use `kerberos_generate_conf` from `tests/tap/kerberos.c`,
+ also copy over `tests/data/generate-krb5-conf`.
+
+ * For testing that requires making Kerberos administrative changes,
+ consider copying over the `kadmin.*` libraries in `tests/tap`.
+
+ * For testing packages that use remctl, see the `tests/tap/remctl.c` and
+ `tests/tap/remctl.h` files for C tests and `tests/tap/remctl.sh` for
+ shell scripts.
+
+ * If you're using the kafs portability code, copy over the `kafs`
+ directory, `m4/kafs.m4`, `m4/lib-pathname.m4`, `portable/k_haspag.c`,
+ the code to build kafs from `Makefile.am`, and the relevant fragments of
+ `configure.ac`.
+
+ * If you're using the PAM portability code, copy over `pam-util/*`,
+ `portable/pam*`, `m4/pam-const.m4`, and the relevant fragments of
+ `configure.ac`.
+
+ * Copy over any other Autoconf macros that you want to use in your
+ package from the m4 directory.
+
+ * Copy over any generic tests from `tests/docs` and `tests/perl` that are
+ appropriate for your package. If you use any of these, also copy over
+ the `tests/tap/perl` directory and `tests/data/perl.conf` (and customize
+ the latter for your package).
+
+ * If the package embeds a Perl module, copy over any tests from the
+ `perl/t` directory that are applicable. This can provide generic
+ testing of the embedded Perl module using Perl's own test
+ infrastructure. If you use any of these, also copy over the
+ `perl/t/data/perl.conf` file and customize it for your package. You
+ will need to arrange for `perl/t/data` to contain copies of the
+ `perlcriticrc` and `perltidyrc` files, either by making copies of the
+ files from `tests/data` or by using make to copy them.
+
+ I also copy over all the relevant tests from the `tests` directory and the
+ build machinery for them from `Makefile.am` so that the portability and
+ utility layer are tested along with the rest of the package. The test
+ driver should come from C TAP Harness.
+ title: Using This Code
support:
email: eagle@eyrie.org
github: rra/rra-c-util
web: https://www.eyrie.org/~eagle/software/rra-c-util/
synopsis: Russ Allbery's utility libraries for C
+test:
+ override: |
+ rra-c-util comes with an extensive test suite, which you can run after
+ building with:
+
+ ```
+ make check
+ ```
+
+ If a test fails, you can run a single test with verbose output via:
+
+ ```
+ tests/runtests -o <name-of-test>
+ ```
+
+ Do this instead of running the test program directly since it will ensure
+ that necessary environment variables are set up.
vcs:
browse: https://git.eyrie.org/?p=devel/rra-c-util.git
github: rra/rra-c-util