|
The btree key cache now works for snapshot btrees, where doing a lookup
requires searching through keys in older snapshots - we can finally
re-enable it for inodes.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
This switches btree_key_cache_fill() to use a btree iterator, not a
btree path, so that it can search for keys in previous snapshots.
We also add another iterator flag, BTREE_ITER_KEY_CACHE_FILL, to avoid
recursion back into the key cache.
This will allow us to re-enable the key cache for inodes in the next
patch.
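As a rough, self-contained sketch of the shape this takes (hypothetical
names, not the actual bcachefs API): the fill path does its lookup through
a full iterator, and a flag keeps that lookup from recursing back into the
cache being filled:

#include <stdbool.h>
#include <stdio.h>

#define ITER_CACHED          (1u << 0)  /* answer from the key cache if possible */
#define ITER_KEY_CACHE_FILL  (1u << 1)  /* lookup is on behalf of a cache fill */

static int btree_lookup(unsigned flags, unsigned key);

static bool cache_lookup(unsigned key, int *val)
{
    (void) key;
    (void) val;
    return false;                       /* model a cache miss */
}

static int key_cache_fill(unsigned key)
{
    /*
     * Do the lookup with a real iterator, flagged so it can't recurse
     * back here; the real iterator is also what lets the search see
     * keys in older snapshots.
     */
    return btree_lookup(ITER_KEY_CACHE_FILL, key);
}

static int btree_lookup(unsigned flags, unsigned key)
{
    int val;

    if ((flags & ITER_CACHED) && !(flags & ITER_KEY_CACHE_FILL)) {
        if (cache_lookup(key, &val))
            return val;
        return key_cache_fill(key);
    }

    return (int) key * 2;               /* stand-in for the real btree search */
}

int main(void)
{
    printf("%d\n", btree_lookup(ITER_CACHED, 21));
    return 0;
}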
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
There was no reason for this to be a separate helper - we always want
the relock call that btree_trans_peek_key_cache() did.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
When we start using the key cache for inodes again, it'll be possible
for bch2_btree_path_peek_slot() to return a key in a different snapshot
with a key cache path.
This isn't what we want when triggers are checking what they're
overwriting, so introduce a new helper for the commit path.
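A self-contained sketch of the behaviour such a helper would have, with
made-up types standing in for the real ones: report a key only when its
position, including the snapshot field, matches the search position
exactly, and otherwise report it as deleted:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* illustrative stand-ins, not the real bcachefs types */
struct bpos_model { uint64_t inode, offset; uint32_t snapshot; };
struct bkey_model { struct bpos_model p; bool deleted; };

static bool pos_eq(struct bpos_model a, struct bpos_model b)
{
    return a.inode == b.inode && a.offset == b.offset &&
           a.snapshot == b.snapshot;
}

/* what a key cache path might hand back: possibly an ancestor-snapshot key */
static struct bkey_model peek_slot(struct bpos_model search)
{
    struct bkey_model k = { .p = search, .deleted = false };

    k.p.snapshot = 1;                   /* pretend the hit was in snapshot 1 */
    return k;
}

/* commit-path variant: exact position or nothing */
static struct bkey_model peek_slot_exact(struct bpos_model search)
{
    struct bkey_model k = peek_slot(search);

    if (!pos_eq(k.p, search))
        k = (struct bkey_model) { .p = search, .deleted = true };
    return k;
}

int main(void)
{
    struct bpos_model search = { .inode = 4096, .offset = 0, .snapshot = 7 };
    struct bkey_model k = peek_slot_exact(search);

    printf("deleted=%d snapshot=%u\n", k.deleted, k.p.snapshot);
    return 0;
}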
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
We were returning a pointer to a variable on the stack - oops.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Don't print out opts= if no options have been specified.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
|
|
This patch changes how the LRU index works:
Instead of using KEY_TYPE_lru where the bucket the lru entry points to
is part of the value, this switches to KEY_TYPE_set and encoding the
bucket we refer to in the low bits of the key.
This means that we no longer have to check for collisions when inserting
LRU entries. We'll be making use of this in the next patch, which adds
a btree write buffer - a pure write buffer for btree updates, where
updates are appended to a simple array and then periodically sorted and
batch inserted.
This is a new on disk format version, and a forced upgrade.
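As a rough illustration of the encoding idea (the field width and helper
names are assumptions for the example, not the on-disk format), packing the
bucket into the low bits of the key's offset looks like:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define LRU_BUCKET_BITS 20              /* assumed width, for the example only */

/* time in the high bits, bucket in the low bits: no two buckets collide */
static uint64_t lru_offset(uint64_t time, uint64_t bucket)
{
    assert(bucket < (1ULL << LRU_BUCKET_BITS));
    return (time << LRU_BUCKET_BITS) | bucket;
}

static uint64_t lru_offset_time(uint64_t offset)
{
    return offset >> LRU_BUCKET_BITS;
}

static uint64_t lru_offset_bucket(uint64_t offset)
{
    return offset & ((1ULL << LRU_BUCKET_BITS) - 1);
}

int main(void)
{
    uint64_t o = lru_offset(123456, 42);

    printf("time=%llu bucket=%llu\n",
           (unsigned long long) lru_offset_time(o),
           (unsigned long long) lru_offset_bucket(o));
    return 0;
}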
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
We weren't guarding against the alloc key having an invalid data type.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
BCH_ALLOC_V4_BACKPOINTERS_START() may be unset on older filesystems, and
alloc_v4_backpointers() needs to take this into account - because it
wasn't, __bch2_alloc_to_v4_mut() was zeroing out all fields, instead of
only new ones.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
This improves the nocow lock table so that hash table entries have
multiple locks, and locks specify which bucket they're for - i.e. we can
now resolve hash collisions.
This is important because the allocator has to skip buckets that are
locked in the nocow lock table, and previously hash collisions would
cause it to spuriously skip unlocked buckets.
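A self-contained model of the idea, with sizes and names invented for the
example: each hash table entry holds several locks, and every lock records
which bucket it covers, so a hash collision no longer makes an unrelated
bucket look locked:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NOCOW_TABLE_SIZE    64
#define LOCKS_PER_ENTRY     4

struct nocow_lock {
    uint64_t bucket;                    /* which bucket this lock covers */
    int      count;                     /* 0 means the slot is free */
};

struct nocow_lock_table {
    struct nocow_lock entries[NOCOW_TABLE_SIZE][LOCKS_PER_ENTRY];
};

static unsigned nocow_hash(uint64_t bucket)
{
    return (unsigned) (bucket * 0x9E3779B97F4A7C15ULL >> 58) % NOCOW_TABLE_SIZE;
}

static bool nocow_bucket_locked(struct nocow_lock_table *t, uint64_t bucket)
{
    struct nocow_lock *e = t->entries[nocow_hash(bucket)];

    for (int i = 0; i < LOCKS_PER_ENTRY; i++)
        if (e[i].count && e[i].bucket == bucket)
            return true;
    return false;
}

static bool nocow_bucket_lock(struct nocow_lock_table *t, uint64_t bucket)
{
    struct nocow_lock *e = t->entries[nocow_hash(bucket)];

    for (int i = 0; i < LOCKS_PER_ENTRY; i++)
        if (!e[i].count || e[i].bucket == bucket) {
            e[i].bucket = bucket;
            e[i].count++;
            return true;
        }
    return false;                       /* all slots in this entry are in use */
}

int main(void)
{
    static struct nocow_lock_table t;

    nocow_bucket_lock(&t, 100);
    printf("100 locked: %d, 101 locked: %d\n",
           nocow_bucket_locked(&t, 100), nocow_bucket_locked(&t, 101));
    return 0;
}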
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
While a btree transaction is running, we hold a SRCU read lock on the
btree key cache that prevents btree key cache keys from being freed -
this is so that relock() operations won't access freed memory.
The downside of this is that long running btree transactions prevent
memory from being freed from the key cache. This adds a check in
bch2_trans_begin() - if the transaction has been running longer than 1
second, drop and retake the SRCU read lock and zero out pointers to
unlock key cache paths.
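A userspace model of the policy (a pthread rwlock stands in for SRCU, and
all names are illustrative): if more than a second has passed, forget the
pointers that were only valid under the old read lock, then drop and retake
it:

#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_rwlock_t key_cache_lock = PTHREAD_RWLOCK_INITIALIZER;

struct trans_model {
    time_t  srcu_lock_time;
    void   *cached_key;                 /* only valid while the lock is held */
};

static void trans_begin(struct trans_model *trans)
{
    if (time(NULL) - trans->srcu_lock_time > 1) {
        /* give the key cache a chance to free memory */
        trans->cached_key = NULL;
        pthread_rwlock_unlock(&key_cache_lock);
        pthread_rwlock_rdlock(&key_cache_lock);
        trans->srcu_lock_time = time(NULL);
    }
}

int main(void)
{
    struct trans_model trans = { 0 };

    pthread_rwlock_rdlock(&key_cache_lock);
    trans.srcu_lock_time = time(NULL);

    trans_begin(&trans);                /* would re-lock once a second has passed */
    printf("cached_key=%p\n", trans.cached_key);

    pthread_rwlock_unlock(&key_cache_lock);
    return 0;
}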
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
If it so happens that we crash while dirty, meaning we don't have the
superblock clean section, and we erroneously mark a journal entry we
wrote as blacklisted, we won't be able to recover.
This patch fixes this by adding a fallback: if we've got no superblock
clean section, and no non-ignored journal entries, we try the most
recent ignored journal entry.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
This patch
- Adds a mechanism for queuing up journal entries prior to the journal
being started, which will be used for early journal log messages
- Adds bch2_fs_log_msg() and improves bch2_trans_log_msg(), which now
take format strings. bch2_fs_log_msg() can be used before or after
the journal has been started, and will use the appropriate mechanism.
- Deletes the now obsolete bch2_journal_log_msg()
- And adds more log messages to the recovery path - messages for
journal/filesystem started, journal entries being blacklisted, and
journal replay starting/finishing.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
We're using bkey_init() which already does the memset(), no need to do
it twice.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
These errors are expected during the shutdown process, and shouldn't be
printed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
It's possible to do btree updates before going RW by adding them to the
list of updates for journal replay to do, but this is limited by what
fits in RAM. This patch switches the second alloc info phase to run
after going RW - btree_gc has already ensured the alloc btree itself is
correct - and tweaks the allocation path to deal with the potential
small inconsistencies.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
We weren't resetting filesystem & device usage when restarting gc, which
was spotted when free bucket counters overflowed - whoops.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
More error code cleanup, for better error messages and debuggability.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
More error code improvements - this gets us more useful error messages.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Previously, we were journalling extra_journal_entries (which is used for
new btree roots, and irreversibly mutates system state) before calling
bch2_trans_fs_usage_apply(), which can fail - whoops.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
data_update_init allocates several resources, but we forget to clean
these up when it fails.
Signed-off-by: Daniel Hill <daniel@gluo.nz>
|
|
Signed-off-by: Daniel Hill <daniel@gluo.nz>
|
|
Signed-off-by: Daniel Hill <daniel@gluo.nz>
|
|
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
bch2_btree_iter_peek_upto() in snapshots mode may need to keep a
btree_path for the insert position, not just the position of the key
we're returning. The code was incorrectly assuming this would be in the
same btree node - we were missing a bch2_btree_path_traverse() call.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
|
|
bch2_journal_keys_peek_upto() was comparing against btree_id & level
incorrectly - fix this by using __journal_key_cmp().
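Roughly, the comparison has to order by btree id, then level, then
position - position alone isn't enough. A small self-contained sketch with
illustrative field names:

#include <stdint.h>
#include <stdio.h>

struct journal_key_model {
    unsigned btree_id;
    unsigned level;
    uint64_t pos;
};

static int cmp_u64(uint64_t a, uint64_t b)
{
    return (a > b) - (a < b);
}

static int journal_key_cmp(const struct journal_key_model *l,
                           const struct journal_key_model *r)
{
    int ret = cmp_u64(l->btree_id, r->btree_id);

    if (!ret)
        ret = cmp_u64(l->level, r->level);
    if (!ret)
        ret = cmp_u64(l->pos, r->pos);
    return ret;
}

int main(void)
{
    struct journal_key_model a = { 1, 0, 100 }, b = { 2, 0, 50 };

    /* a sorts before b even though a.pos > b.pos */
    printf("%d\n", journal_key_cmp(&a, &b));
    return 0;
}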
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Replace with standard bcachefs-private error codes.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
The format for bch_alloc_v4 allows for adding new fields, but
bch2_alloc_to_v4_mut() was buggy - it was allocating a buffer sized
based on the current key, not including new fields being added.
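A minimal illustration of the sizing rule (the struct layout and names are
made up for the example): size the destination for the current format, copy
only the bytes the source actually has, and leave new fields zeroed:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct alloc_v4_model {
    uint64_t journal_seq;
    uint32_t flags;
    uint32_t backpointers_start;        /* newer field: absent in old keys */
};

static struct alloc_v4_model *alloc_to_mut(const void *src, size_t src_bytes)
{
    /* size for the current format, never for src_bytes alone */
    struct alloc_v4_model *dst = calloc(1, sizeof(*dst));

    if (dst && src_bytes)
        memcpy(dst, src, src_bytes < sizeof(*dst) ? src_bytes : sizeof(*dst));
    return dst;                         /* new fields are left zeroed */
}

int main(void)
{
    uint64_t old_val = 42;              /* an "old", shorter value */
    struct alloc_v4_model *a = alloc_to_mut(&old_val, sizeof(old_val));

    if (!a)
        return 1;
    printf("seq=%llu backpointers_start=%u\n",
           (unsigned long long) a->journal_seq, a->backpointers_start);
    free(a);
    return 0;
}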
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
- Fix rare error path memory leaks
- Use new bch2_bkey_alloc() helper
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Old inode formats don't have all the fields of the current inode format:
when unpacking inodes in the current format we can thus skip zeroing out
the destination buffer, but that doesn't work for the old formats.
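A small self-contained sketch of the rule, with an invented struct and
version numbers: zero the destination only when unpacking a format old
enough to be missing fields:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct inode_unpacked_model {
    uint64_t size;
    uint64_t sectors;
    uint32_t nlink;
    uint32_t new_field;                 /* not present in older packed formats */
};

static void inode_unpack(struct inode_unpacked_model *dst,
                         unsigned packed_version,
                         uint64_t size, uint64_t sectors, uint32_t nlink)
{
    if (packed_version < 2)
        memset(dst, 0, sizeof(*dst));   /* old format: clear newer fields */

    /* the current format sets every field, so it needs no memset */
    dst->size    = size;
    dst->sectors = sectors;
    dst->nlink   = nlink;
    if (packed_version >= 2)
        dst->new_field = 1;
}

int main(void)
{
    struct inode_unpacked_model inode;

    inode_unpack(&inode, 1, 4096, 8, 1);
    printf("size=%llu new_field=%u\n",
           (unsigned long long) inode.size, inode.new_field);
    return 0;
}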
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Expand some bitfields so we can keep adding more btrees.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
|
|
To improve mount times, add a btree for just bucket gens, 256 of them
per key: this means we'll have to scan drastically less metadata at
startup.
This adds
- trigger for keeping it in sync with the alloc btree
- initialization code, for filesystems from previous versions
- new path for reading bucket gens
- new fsck code
And a new on disk format version.
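As a rough sketch of the bucket-to-key mapping this implies (the struct is
illustrative, not the on-disk layout): a bucket's gen lives in the key at
bucket / 256, at index bucket % 256:

#include <stdint.h>
#include <stdio.h>

#define GENS_PER_KEY 256

struct bucket_gens_key_model {
    uint64_t offset;                    /* key position: bucket / GENS_PER_KEY */
    uint8_t  gens[GENS_PER_KEY];
};

static uint64_t bucket_gens_key_offset(uint64_t bucket)
{
    return bucket / GENS_PER_KEY;
}

static unsigned bucket_gens_index(uint64_t bucket)
{
    return bucket % GENS_PER_KEY;
}

int main(void)
{
    static struct bucket_gens_key_model k = { .offset = 3 };
    uint64_t bucket = 3 * GENS_PER_KEY + 17;

    k.gens[bucket_gens_index(bucket)] = 9;
    printf("key offset=%llu gen=%u\n",
           (unsigned long long) bucket_gens_key_offset(bucket),
           k.gens[bucket_gens_index(bucket)]);
    return 0;
}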
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Add a new bcachefs-specific magic number for the superblock, instead of
continuing to use the old bcache magic number.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
This fixes a (harmless) broken invariant in __bch2_btree_path_set_pos():
iterators to interior nodes should point to the first non-whiteout.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
This just cleans up and simplifies the code that decides where to resume
writing in the journal - when the code was originally written we weren't
saving the precise location of every journal write found.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
On startup, we need to ensure the first journal entry written is a flush
write: after a clean shutdown we generally don't read the journal, which
means we might be overwriting whatever was there previously, and there
must always be at least one flush entry in the journal or recovery will
fail.
Found by fstests generic/388.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
|
|
This tweaks the recovery and journal paths so that we don't error out
before we need to: the list_journal command should work, even if we
wouldn't be able to replay successfully.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
|
|
This factors out a new helper from bch2_dev_freespace_init(),
bch2_get_key_or_hole(), and uses it in bch2_check_alloc_info(): we're
now able to process holes in the alloc btree as ranges, instead of one
bucket at a time.
This will improve fsck performance on new filesystems, or filesystems
where not every bucket has been used yet.
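A self-contained model of the helper's contract, with stand-in names and a
flat array in place of the btree: each call returns either the key at the
current position or a synthesized hole covering the whole run of missing
positions up to the next key:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct key_or_hole {
    bool     is_hole;
    uint64_t start, end;                /* [start, end): one key, or a run of holes */
};

/* positions that have keys, sorted ascending; stands in for the btree */
static const uint64_t keys[] = { 2, 3, 10 };
static const unsigned nr_keys = sizeof(keys) / sizeof(keys[0]);

static bool get_key_or_hole(uint64_t pos, uint64_t end, struct key_or_hole *ret)
{
    if (pos >= end)
        return false;

    for (unsigned i = 0; i < nr_keys; i++) {
        if (keys[i] == pos) {
            *ret = (struct key_or_hole) { false, pos, pos + 1 };
            return true;
        }
        if (keys[i] > pos) {
            uint64_t hole_end = keys[i] < end ? keys[i] : end;

            *ret = (struct key_or_hole) { true, pos, hole_end };
            return true;
        }
    }
    *ret = (struct key_or_hole) { true, pos, end };
    return true;
}

int main(void)
{
    struct key_or_hole k;

    for (uint64_t pos = 0; get_key_or_hole(pos, 16, &k); pos = k.end)
        printf("%s [%llu, %llu)\n", k.is_hole ? "hole" : "key ",
               (unsigned long long) k.start, (unsigned long long) k.end);
    return 0;
}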
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
When we started stashing the key being overwritten in
btree_insert_entry, this introduced a typical iterator invalidation
problem, triggered by btree node splits or resorts.
Previously, we dealt with this by unconditionally re-validating those
stashed pointers in the transaction commit path. This patch gets rid of
that by doing it only when needed, in bch2_trans_node_add() or
bch2_trans_node_reinit_iter().
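A simplified, self-contained model of the strategy (stand-in structures,
not the real ones): stashed pointers into a node's storage are refreshed
from the hook that runs when the node's contents move, instead of on every
commit:

#include <stdio.h>
#include <stdlib.h>

struct node_model {
    int     *keys;                      /* storage that moves on "split/resort" */
    unsigned nr;
};

struct insert_entry_model {
    unsigned idx;                       /* which key we're overwriting */
    int     *old_k;                     /* stashed pointer into node->keys */
};

/* the model's equivalent of bch2_trans_node_reinit_iter() */
static void trans_node_reinit(struct insert_entry_model *i, struct node_model *b)
{
    i->old_k = &b->keys[i->idx];        /* re-point at the new storage */
}

int main(void)
{
    struct node_model b = { .keys = calloc(4, sizeof(int)), .nr = 4 };
    struct insert_entry_model i = { .idx = 2, .old_k = NULL };
    int *resized;

    if (!b.keys)
        return 1;
    i.old_k = &b.keys[i.idx];

    /* a resort/split moves the node's contents ... */
    resized = realloc(b.keys, 8 * sizeof(int));
    if (!resized)
        return 1;
    b.keys = resized;
    b.nr = 8;

    /* ... so stashed pointers are refreshed here, not at commit time */
    trans_node_reinit(&i, &b);
    printf("old_k now points at %d\n", *i.old_k);
    free(b.keys);
    return 0;
}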
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
This makes bch2_dev_freespace_init() much faster: instead of processing
every bucket on the device one at a time, we handle ranges of missing
keys all at once: the freespace btree is an extents style btree, so we
only have to insert one freespace key for every range of missing keys
in the alloc btree.
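A tiny model of why this is faster (the key layout shown is illustrative
only): a whole run of missing alloc keys becomes a single extents-style
freespace key, rather than one key per bucket:

#include <stdint.h>
#include <stdio.h>

struct freespace_key_model { uint64_t start, size; };

/* one insert for an entire hole in the alloc btree */
static struct freespace_key_model freespace_key_for_hole(uint64_t start,
                                                         uint64_t end)
{
    return (struct freespace_key_model) { .start = start, .size = end - start };
}

int main(void)
{
    /* a device where buckets [1024, 1048576) have never been used */
    struct freespace_key_model k = freespace_key_for_hole(1024, 1 << 20);

    printf("1 insert covers %llu buckets (instead of %llu per-bucket inserts)\n",
           (unsigned long long) k.size, (unsigned long long) k.size);
    return 0;
}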
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
It's important that in BTREE_ITER_FILTER_SNAPSHOTS mode we always use
peek_upto() and provide an end for the interval we're searching for -
otherwise, when we hit the end of the inode, the next inode may be in a
different subvolume and not have any keys in the current snapshot, and
we'd iterate over arbitrarily many keys before returning one.
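A self-contained illustration of the failure mode, with a simplified
visibility rule and made-up names: a snapshot-filtered scan skips invisible
keys, so without an explicit end it keeps walking past the inode the caller
cares about:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct key_model { uint64_t inode, offset; uint32_t snapshot; };

static const struct key_model keys[] = {
    { 100, 0, 7 }, { 100, 8, 7 },
    /* keys for later inodes, none visible in snapshot 7 */
    { 101, 0, 9 }, { 102, 0, 9 }, { 103, 0, 9 },
};
static const unsigned nr_keys = sizeof(keys) / sizeof(keys[0]);

static bool visible_in_snapshot(const struct key_model *k, uint32_t snapshot)
{
    return k->snapshot == snapshot;     /* simplified visibility rule */
}

/* peek the next visible key at or after (inode, offset), up to end_inode */
static const struct key_model *peek_upto(uint64_t inode, uint64_t offset,
                                         uint32_t snapshot, uint64_t end_inode,
                                         unsigned *keys_scanned)
{
    for (unsigned i = 0; i < nr_keys; i++) {
        const struct key_model *k = &keys[i];

        if (k->inode < inode || (k->inode == inode && k->offset < offset))
            continue;
        if (k->inode > end_inode)       /* the bound stops the scan */
            break;
        (*keys_scanned)++;
        if (visible_in_snapshot(k, snapshot))
            return k;
    }
    return NULL;
}

int main(void)
{
    unsigned bounded = 0, unbounded = 0;

    peek_upto(100, 16, 7, 100, &bounded);           /* stops at end of inode 100 */
    peek_upto(100, 16, 7, UINT64_MAX, &unbounded);  /* keeps scanning */
    printf("keys scanned: bounded=%u unbounded=%u\n", bounded, unbounded);
    return 0;
}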
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|