author     Lennart Poettering <lennart@poettering.net>  2018-04-27 14:09:31 +0200
committer  Sven Eden <yamakuzure@gmx.net>                2018-08-24 16:47:08 +0200
commit     94062cd7c9680c5e9870f4352fcd5f0db2e51dfd (patch)
tree       5485ca50514dcf5f3efa5e73cfaf1c3be771afb7 /src/basic/mempool.c
parent     338c3c3619a265a1928184d9e8dfdf5219537b66 (diff)
tree-wide: be more careful with the type of array sizes
Previously we were a bit sloppy with the index and size types of arrays; we'd regularly use "unsigned". While I don't think this ever resulted in real issues, I think we should be more careful there and follow a stricter regime: unless there's a strong reason not to, size_t should be used for array sizes and indexes. Any allocations we do will ultimately use size_t anyway, and converting back and forth between "unsigned" and size_t will always be a source of problems.

Note that on 32-bit machines "unsigned" and size_t are equivalent, and on 64-bit machines our arrays shouldn't grow that large anyway; if they do, we have a problem. We usually have protections against overly large allocations, but not so much against overflows, hence let's add them. So yeah, it's a story of the current code already being "good enough", but I think some extra type hygiene is better.

This patch tries to be comprehensive, but it probably isn't and I missed a few cases. But I guess we can cover those later as we notice them. Among smaller fixes, this changes:

1. strv_length()'s return type becomes size_t
2. the unit file changes array size becomes size_t
3. DNS answer and query array sizes become size_t

Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=76745
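As a side note, here is a minimal standalone sketch (not part of the patch; the names and values are made up for illustration) of the overflow hazard the message alludes to: on an LP64 target, size arithmetic carried out in "unsigned" wraps at 2^32 before it ever reaches an allocator, while the same arithmetic done in size_t keeps the full value.

#include <stddef.h>
#include <stdio.h>

int main(void) {
        /* Hypothetical tile counts, chosen only to make the wrap visible. */
        unsigned n_tiles_u = 3000000000U;       /* fits in 32-bit unsigned */
        unsigned tile_size_u = 8;

        /* With both operands "unsigned", the product is computed in 32 bits
         * and wraps before it could ever reach e.g. a malloc() argument. */
        size_t wrapped = n_tiles_u * tile_size_u;

        /* Doing the same arithmetic in size_t keeps the full 64-bit value
         * on LP64 targets. */
        size_t full = (size_t) n_tiles_u * tile_size_u;

        printf("unsigned arithmetic: %zu bytes\n", wrapped);
        printf("size_t arithmetic:   %zu bytes\n", full);
        return 0;
}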
Diffstat (limited to 'src/basic/mempool.c')
-rw-r--r--  src/basic/mempool.c | 9
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/src/basic/mempool.c b/src/basic/mempool.c
index de04215ee..4be4a3d38 100644
--- a/src/basic/mempool.c
+++ b/src/basic/mempool.c
@@ -15,12 +15,12 @@
 struct pool {
         struct pool *next;
-        unsigned n_tiles;
-        unsigned n_used;
+        size_t n_tiles;
+        size_t n_used;
 };

 void* mempool_alloc_tile(struct mempool *mp) {
-        unsigned i;
+        size_t i;

         /* When a tile is released we add it to the list and simply
          * place the next pointer at its offset 0. */
@@ -38,8 +38,7 @@ void* mempool_alloc_tile(struct mempool *mp) {
         if (_unlikely_(!mp->first_pool) ||
             _unlikely_(mp->first_pool->n_used >= mp->first_pool->n_tiles)) {
-                unsigned n;
-                size_t size;
+                size_t size, n;
                 struct pool *p;

                 n = mp->first_pool ? mp->first_pool->n_tiles : 0;
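For readers unfamiliar with the pattern referenced by the "place the next pointer at its offset 0" comment above, the following is a simplified, hypothetical sketch of such an intrusive freelist; the struct and function names are illustrative and do not match the actual elogind definitions. A freed tile is pushed onto a singly linked list by storing the list head in the tile's own first bytes, so no extra bookkeeping memory is needed (tiles just have to be at least sizeof(void*) large).

#include <stddef.h>

/* Illustrative pool that recycles fixed-size tiles via an intrusive list. */
struct demo_pool {
        void *freelist;          /* most recently freed tile, or NULL */
};

static void demo_free_tile(struct demo_pool *p, void *tile) {
        /* Store the current list head at offset 0 of the tile ... */
        *(void**) tile = p->freelist;
        /* ... and make the freed tile the new head. */
        p->freelist = tile;
}

static void *demo_alloc_tile(struct demo_pool *p) {
        void *tile = p->freelist;

        if (tile)
                /* Pop: the next pointer lives at offset 0 of the tile. */
                p->freelist = *(void**) tile;

        return tile;             /* NULL: the caller must carve a fresh tile */
}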