2023-05-15  interleave unpack  [bkey_unpack]  (Kent Overstreet)
2023-05-14  __bch2_bkey_unpack_key(): avoid unaligned access  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-14  bcachefs: New and improved __bch2_bkey_unpack_key()  (Kent Overstreet)
This implements a new and improved __bch2_bkey_unpack_key(), as suggested by Eric Biggers; we use a preprocessing step to compute byte indexes and masks, and then fetch each packed field with a simple fetch and mask, which are now able to run in parallel. This should provide roughly similar performance to the dynamically generated bkey unpack functions dropped by the previous patch. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
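A minimal sketch of the fetch-and-mask idea described above (the struct and helper names here are illustrative, not the actual bcachefs code):

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical per-field decode info, computed once per bkey_format: */
    struct unpack_field {
            uint16_t byte_offset;   /* where the field's containing word starts */
            uint8_t  shift;         /* bit offset within that word */
            uint64_t mask;          /* (1 << field_bits) - 1 */
            uint64_t field_offset;  /* format offset added back after masking */
    };

    /* Each field becomes a plain load + shift + mask, so the loads for
     * different fields have no dependencies on each other and can run
     * in parallel: */
    static uint64_t unpack_one(const uint8_t *packed, const struct unpack_field *f)
    {
            uint64_t word;

            memcpy(&word, packed + f->byte_offset, sizeof(word)); /* no unaligned access */
            return ((word >> f->shift) & f->mask) + f->field_offset;
    }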
2023-05-14  bcachefs: bkey_format_processed  (Kent Overstreet)
This patch makes no functional changes; we're just introducing a new type to be used for a new and improved __bch2_bkey_unpack_key(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-14  bcachefs: Drop compiled bkey unpack  (Hristo Venev)
It uses vmalloc_exec, which will be removed. Signed-off-by: Hristo Venev <hristo@venev.name> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-14  bcachefs: Delete an incorrect bch2_trans_unlock()  (Kent Overstreet)
This deletes a bch2_trans_unlock() call from __bch2_move_data(). It was redundant; bch2_move_extent() has the correct unlock call. It was also buggy, because when move_extent calls bch2_extent_drop_ptrs() we don't want the transaction to be unlocked yet - this fixes a btree_iter.c assertion. Fixes https://github.com/koverstreet/bcachefs/issues/511. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-13  bcachefs: Use memcpy_u64s_small() for copying keys  (Kent Overstreet)
Small performance optimization; an open coded loop is better than rep ; movsq for small copies. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
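For illustration, the kind of open-coded copy this refers to (a sketch, not the actual bcachefs memcpy_u64s_small()):

    #include <stdint.h>
    #include <stddef.h>

    /* Copy u64s 64-bit words; for small counts a simple loop avoids the
     * startup overhead of rep; movsq. */
    static inline void copy_u64s_small(uint64_t *dst, const uint64_t *src, size_t u64s)
    {
            while (u64s--)
                    *dst++ = *src++;
    }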
2023-05-13  fixup! bcachefs: bkey_ops.min_val_size  (Kent Overstreet)
2023-05-13  Update issue templates  (Daniel Hill)
2023-05-13  fs/aio: obey min_nr when doing wakeups  (Kent Overstreet)
I've been observing workloads where IPIs due to wakeups in aio_complete() are ~15% of total CPU time in the profile. Most of those wakeups are unnecessary when completion batching is in use in io_getevents(). This plumbs min_nr through via the wait entry, so that aio_complete() can avoid doing unnecessary wakeups. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: linux-aio@kvack.org Cc: linux-fsdevel@vger.kernel.org
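A rough sketch of the batching idea (the struct and function names are hypothetical, not the actual fs/aio.c changes):

    /* Each waiter records how many events it wants alongside its wait
     * entry; the completion side only issues the wakeup (and the IPI it
     * may cause) once that threshold can be satisfied. */
    struct batched_waiter {
            unsigned long min_nr;   /* events requested via io_getevents() */
            /* embedded wait queue entry in the real implementation */
    };

    static int waiter_ready(const struct batched_waiter *w, unsigned long avail)
    {
            return avail >= w->min_nr;
    }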
2023-05-13  fs/aio: Use kmap_local() instead of kmap()  (Kent Overstreet)
Originally, we used kmap() instead of kmap_atomic() for reading events out of the completion ringbuffer because we're using copy_to_user(), which can fault. Now that kmap_local() is a thing, use that instead. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev> Cc: Benjamin LaHaise <bcrl@kvack.org> Cc: linux-aio@kvack.org Cc: linux-fsdevel@vger.kernel.org
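A minimal sketch of the kmap_local_page() pattern (ring indexing and function names here are illustrative, not the actual fs/aio.c code):

    #include <linux/highmem.h>
    #include <linux/uaccess.h>

    /* Map a ring page just long enough to copy an event to userspace.
     * Unlike kmap_atomic(), kmap_local_page() sections allow page faults,
     * so copy_to_user() is fine here. */
    static int copy_event_to_user(struct page *ring_page, unsigned int offset,
                                  void __user *uevent, size_t ev_size)
    {
            void *kaddr = kmap_local_page(ring_page);
            int ret = copy_to_user(uevent, kaddr + offset, ev_size) ? -EFAULT : 0;

            kunmap_local(kaddr);
            return ret;
    }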
2023-05-13  bcachefs: add counters for failed shrinker reclaim  (Daniel Hill)
These counters should help us debug OOM issues. Signed-off-by: Daniel Hill <daniel@gluo.nz>
2023-05-13  bcachefs: shrinker.to_text() methods  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-13  mm: Centralize & improve oom reporting in show_mem.c  (Kent Overstreet)
This patch:
- Changes show_mem() to always report on slab usage
- Instead of reporting on all slabs, we only report on top 10 slabs, and in sorted order
- Also reports on shrinkers, with the new shrinkers_to_text().

Shrinkers need to be included in OOM/allocation failure reporting because they're responsible for memory reclaim - if a shrinker isn't giving up its memory, we need to know which one and why. More OOM reporting can be moved to show_mem.c and improved, this patch is only a start.

New example output on OOM/memory allocation failure:

00177 Mem-Info:
00177 active_anon:13706 inactive_anon:32266 isolated_anon:16
00177 active_file:1653 inactive_file:1822 isolated_file:0
00177 unevictable:0 dirty:0 writeback:0
00177 slab_reclaimable:6242 slab_unreclaimable:11168
00177 mapped:3824 shmem:3 pagetables:1266 bounce:0
00177 kernel_misc_reclaimable:0
00177 free:4362 free_pcp:35 free_cma:0
00177 Node 0 active_anon:54824kB inactive_anon:129064kB active_file:6612kB inactive_file:7288kB unevictable:0kB isolated(anon):64kB isolated(file):0kB mapped:15296kB dirty:0kB writeback:0kB shmem:12kB writeback_tmp:0kB kernel_stack:3392kB pagetables:5064kB all_unreclaimable? no
00177 DMA free:2232kB boost:0kB min:88kB low:108kB high:128kB reserved_highatomic:0KB active_anon:2924kB inactive_anon:6596kB active_file:428kB inactive_file:384kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
00177 lowmem_reserve[]: 0 426 426 426
00177 DMA32 free:15092kB boost:5836kB min:8432kB low:9080kB high:9728kB reserved_highatomic:0KB active_anon:52196kB inactive_anon:122392kB active_file:6176kB inactive_file:7068kB unevictable:0kB writepending:0kB present:507760kB managed:441816kB mlocked:0kB bounce:0kB free_pcp:72kB local_pcp:0kB free_cma:0kB
00177 lowmem_reserve[]: 0 0 0 0
00177 DMA: 284*4kB (UM) 53*8kB (UM) 21*16kB (U) 11*32kB (U) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2248kB
00177 DMA32: 2765*4kB (UME) 375*8kB (UME) 57*16kB (UM) 5*32kB (U) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 15132kB
00177 4656 total pagecache pages
00177 1031 pages in swap cache
00177 Swap cache stats: add 6572399, delete 6572173, find 488603/3286476
00177 Free swap = 509112kB
00177 Total swap = 2097148kB
00177 130938 pages RAM
00177 0 pages HighMem/MovableOnly
00177 16644 pages reserved
00177 Unreclaimable slab info:
00177 9p-fcall-cache total: 8.25 MiB active: 8.25 MiB
00177 kernfs_node_cache total: 2.15 MiB active: 2.15 MiB
00177 kmalloc-64 total: 2.08 MiB active: 2.07 MiB
00177 task_struct total: 1.95 MiB active: 1.95 MiB
00177 kmalloc-4k total: 1.50 MiB active: 1.50 MiB
00177 signal_cache total: 1.34 MiB active: 1.34 MiB
00177 kmalloc-2k total: 1.16 MiB active: 1.16 MiB
00177 bch_inode_info total: 1.02 MiB active: 922 KiB
00177 perf_event total: 1.02 MiB active: 1.02 MiB
00177 biovec-max total: 992 KiB active: 960 KiB
00177 Shrinkers:
00177 super_cache_scan: objects: 127
00177 super_cache_scan: objects: 106
00177 jbd2_journal_shrink_scan: objects: 32
00177 ext4_es_scan: objects: 32
00177 bch2_btree_cache_scan: objects: 8
00177 nr nodes: 24
00177 nr dirty: 0
00177 cannibalize lock: 0000000000000000
00177
00177 super_cache_scan: objects: 8
00177 super_cache_scan: objects: 1

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-13  mm: Move lib/show_mem.c to mm/  (Kent Overstreet)
show_mem.c is really mm specific, and the next patch in the series is going to require mm/slab.h, so let's move it before doing more work on it. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-05-13  mm: Count requests to free & nr freed per shrinker  (Kent Overstreet)
The next step in this patch series for improving debugging of shrinker related issues: keep counts of the number of objects we request to free vs. the number actually freed, and print them in shrinker_to_text(). Shrinkers won't necessarily free all objects requested for a variety of reasons, but if the two counts are wildly different something is likely amiss. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-05-13  mm: Add a .to_text() method for shrinkers  (Kent Overstreet)
This adds a new callback method to shrinkers which they can use to describe anything relevant to memory reclaim about their internal state, for example object dirtiness. This uses the new printbufs to output to heap allocated strings, so that the .to_text() methods can be used both for messages logged to the console and for sysfs/debugfs. This patch also adds shrinkers_to_text(), which reports on the top 10 shrinkers - by object count - in sorted order, to be used in OOM reporting. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
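As a sketch, a filesystem's .to_text() callback might look roughly like this (the callback signature and the prt_printf() helper are assumptions based on the description above, not the exact interface added by the patch):

    struct my_cache {
            struct shrinker shrinker;
            size_t          nr_cached;
            size_t          nr_dirty;
    };

    /* Report reclaim-relevant internal state, e.g. how many objects are
     * cached and how many are dirty (and thus not immediately freeable): */
    static void my_cache_shrinker_to_text(struct printbuf *out, struct shrinker *shrink)
    {
            struct my_cache *c = container_of(shrink, struct my_cache, shrinker);

            prt_printf(out, "nr cached:\t%zu\n", c->nr_cached);
            prt_printf(out, "nr dirty:\t%zu\n", c->nr_dirty);
    }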
2023-05-13  seq_buf: seq_buf_human_readable_u64()  (Kent Overstreet)
This adds a seq_buf wrapper for string_get_size(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
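A sketch of what such a wrapper might look like (buffer size and exact prototype are assumptions; only the fact that it wraps string_get_size() comes from the message):

    #include <linux/seq_buf.h>
    #include <linux/string_helpers.h>

    /* Append a human-readable byte count ("1.02 MiB", ...) to a seq_buf: */
    static void seq_buf_human_readable_u64(struct seq_buf *s, u64 v)
    {
            char buf[32];

            string_get_size(v, 1, STRING_UNITS_2, buf, sizeof(buf));
            seq_buf_printf(s, "%s", buf);
    }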
2023-05-13  xfs: add nodataio mount option to skip all data I/O  (Brian Foster)
When mounted with nodataio, add the NOSUBMIT iomap flag to all data mappings passed into the iomap layer. This causes iomap to skip all data I/O submission and thus facilitates metadata only performance testing. For experimental use only. Only tested insofar as fsstress runs for a few minutes without blowing up. Signed-off-by: Brian Foster <bfoster@redhat.com>
2023-05-13  iomap: add nosubmit flag to skip data I/O on iomap mapping  (Brian Foster)
Implement a quick and dirty hack to skip data I/O submission on a specified mapping. The iomap layer will still perform every step up through constructing the bio as if it will be submitted, but instead invokes completion on the bio directly from submit context. The purpose of this is to facilitate filesystem metadata performance testing without the overhead of actual data I/O. Note that this may be dangerous in current form in that folios are not explicitly zeroed where they otherwise wouldn't be, so whatever previous data exists in a folio prior to being added to a read bio is mapped into pagecache for the file. Signed-off-by: Brian Foster <bfoster@redhat.com>
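Conceptually, the submission path would do something like the following (flag and variable names assumed for illustration, not the actual iomap diff):

    /* In the iomap submit path: if the mapping is marked no-submit,
     * complete the fully constructed bio immediately instead of sending
     * it to the block layer. */
    if (iomap_flags & IOMAP_F_NOSUBMIT)     /* assumed flag name */
            bio_endio(bio);                 /* complete from submit context */
    else
            submit_bio(bio);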
2023-05-13  vfs: inode cache conversion to hash-bl  (Dave Chinner)
Because scalability of the global inode_hash_lock really, really sucks.

32-way concurrent create on a couple of different filesystems before:

- 52.13% 0.04% [kernel] [k] ext4_create
   - 52.09% ext4_create
      - 41.03% __ext4_new_inode
         - 29.92% insert_inode_locked
            - 25.35% _raw_spin_lock
               - do_raw_spin_lock
                  - 24.97% __pv_queued_spin_lock_slowpath

- 72.33% 0.02% [kernel] [k] do_filp_open
   - 72.31% do_filp_open
      - 72.28% path_openat
         - 57.03% bch2_create
            - 56.46% __bch2_create
               - 40.43% inode_insert5
                  - 36.07% _raw_spin_lock
                     - do_raw_spin_lock
                        35.86% __pv_queued_spin_lock_slowpath
                  4.02% find_inode

Convert the inode hash table to an RCU-aware hash-bl table just like the dentry cache. Note that we need to store a pointer to the hlist_bl_head the inode has been added to in the inode so that when it comes to unhash the inode we know what list to lock. We need to do this because the hash value that is used to hash the inode is generated from the inode itself - filesystems can provide this themselves so we have to either store the hash or the head pointer in the inode to be able to find the right list head for removal...

Same workload after:

Signed-off-by: Dave Chinner <dchinner@redhat.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: linux-fsdevel@vger.kernel.org
2023-05-13  hlist-bl: add hlist_bl_fake()  (Dave Chinner)
In preparation for switching the VFS inode cache over to hlist_bl lists, we need to be able to fake a list node that looks like it is hashed, for correct operation of filesystems that don't directly use the VFS inode cache. Signed-off-by: Dave Chinner <dchinner@redhat.com>
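Presumably mirroring the existing hlist_add_fake()/hlist_fake() helpers, the new functions would look roughly like this (a sketch, not the verified patch contents):

    #include <linux/list_bl.h>

    /* Make a node look hashed without putting it on any list: point its
     * pprev at its own next pointer, as hlist_add_fake() does for plain
     * hlists; hlist_bl_fake() then detects that state. */
    static inline void hlist_bl_add_fake(struct hlist_bl_node *n)
    {
            n->pprev = &n->next;
    }

    static inline bool hlist_bl_fake(struct hlist_bl_node *n)
    {
            return n->pprev == &n->next;
    }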
2023-05-13  vfs: factor out inode hash head calculation  (Dave Chinner)
In preparation for changing the inode hash table implementation. Signed-off-by: Dave Chinner <dchinner@redhat.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: linux-fsdevel@vger.kernel.org
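The factored-out helper presumably looks something like this sketch (names follow fs/inode.c conventions but are assumptions here):

    /* One place that turns (sb, hashval) into a hash chain head, so a
     * later patch can change the table representation behind it: */
    static struct hlist_head *i_hash_head(struct super_block *sb,
                                          unsigned long hashval)
    {
            return inode_hashtable + hash(sb, hashval);
    }

    /* Callers such as __insert_inode_hash() and find_inode() would then
     * use this helper instead of open-coding the calculation. */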
2023-05-13  Increase MAX_LOCK_DEPTH, bcachefs BTREE_ITER_MAX (do not upstream)  (Kent Overstreet)
2023-05-13  bcachefs: Fix check_overlapping_extents()  (Kent Overstreet)
An error check had a flipped conditional - whoops. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-13  bcachefs: Replace a BUG_ON() with fatal error  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-13  bcachefs: Delete some dead code in bch2_replicas_gc_end()  (Kent Overstreet)
bch2_replicas_gc_(start|end) is now only used for journal replicas entries, which don't have bucket sector counts - so this code is entirely dead and can be deleted. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-13  bcachefs: mark journal replicas before journal write submission  (Brian Foster)
The journal write submission path marks the associated replica entries for journal data in journal_write_done(), which is just after journal write bio submission. This creates a small window where journal entries might have been written out, but the associated replica is not yet marked, so recovery does not know that the associated device contains journal data. Move the replica marking a bit earlier in the write path such that recovery is guaranteed to recognize that the device contains journal data in the event of a crash. Signed-off-by: Brian Foster <bfoster@redhat.com>
2023-05-13  bcachefs: Improved comment for bch2_replicas_gc2()  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-13  bcachefs: Fix quotas + snapshots  (Kent Overstreet)
Now that we can reliably designate and find the master subvolume out of a tree of snapshots, we can finally make quotas work with snapshots: That is - quotas will now _ignore_ snapshot subvolumes, and only be in effect for the master (non snapshot) subvolume. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-13  bcachefs: Add otime, parent to bch_subvolume  (Kent Overstreet)
Add two new fields to bch_subvolume:
- otime: creation time
- parent: For snapshots, this is the id of the subvolume the snapshot was created from

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-13  bcachefs: BTREE_ID_snapshot_tree  (Kent Overstreet)
This adds a new btree which gets us a persistent per-snapshot-tree identifier.
- BTREE_ID_snapshot_trees
- KEY_TYPE_snapshot_tree
- bch_snapshot now has a field that points to a snapshot_tree

This is going to be used to designate one snapshot ID/subvolume out of a given tree of snapshots as the "main" subvolume, so that we can do quota accounting in that subvolume and not the rest. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
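A rough sketch of the new on-disk value this might introduce (field names and widths are assumptions based on the description, not the verified on-disk format):

    /* One entry per tree of snapshots: which subvolume is the "main"
     * (master) one, and which snapshot id is the root of the tree. */
    struct bch_snapshot_tree {
            struct bch_val  v;
            __le32          master_subvol;
            __le32          root_snapshot;
    };

    /* bch_snapshot would correspondingly gain a field (e.g. "tree")
     * pointing back at its snapshot_tree entry. */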
2023-05-12  bcachefs: bch2_bkey_get_empty_slot()  (Kent Overstreet)
Add a new helper for allocating a new slot in a btree. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: bch2_bkey_make_mut() now calls bch2_trans_update()  (Kent Overstreet)
It's safe to call bch2_trans_update with a k/v pair where the value hasn't been filled out, as long as the key part has been and the value is filled out by transaction commit time. This patch folds the bch2_trans_update() call into bch2_bkey_make_mut(), eliminating a bit of boilerplate. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
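Roughly, the calling pattern changes like so (argument lists simplified and assumed, not the exact signatures); the same fold applies to the bch2_bkey_get_mut() and bch2_bkey_alloc() changes below:

    /* before: every caller did both steps */
    new = bch2_bkey_make_mut(trans, k);
    if (IS_ERR(new))
            return PTR_ERR(new);
    /* ... modify new ... */
    ret = bch2_trans_update(trans, &iter, new, 0);

    /* after: the helper queues the update itself */
    new = bch2_bkey_make_mut(trans, &iter, k);
    if (IS_ERR(new))
            return PTR_ERR(new);
    /* ... modify new; the value just has to be filled in before commit ... */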
2023-05-12  bcachefs: bch2_bkey_get_mut() now calls bch2_trans_update()  (Kent Overstreet)
It's safe to call bch2_trans_update with a k/v pair where the value hasn't been filled out, as long as the key part has been and the value is filled out by transaction commit time. This patch folds the bch2_trans_update() call into bch2_bkey_get_mut(), eliminating a bit of boilerplate. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: bch2_bkey_alloc() now calls bch2_trans_update()  (Kent Overstreet)
It's safe to call bch2_trans_update with a k/v pair where the value hasn't been filled out, as long as the key part has been and the value is filled out by transaction commit time. This patch folds the bch2_trans_update() call into bch2_bkey_alloc(), eliminating a bit of boilerplate. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: bch2_bkey_get_mut() improvements  (Kent Overstreet)
- bch2_bkey_get_mut() now handles types increasing in size, allocating a buffer for the type's current size when necessary
- bch2_bkey_make_mut_typed()
- bch2_bkey_get_mut() now initializes the iterator, like bch2_bkey_get_iter()

Also, refactor so that most of the code is in functions - now macros are only used for wrappers. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: Move bch2_bkey_make_mut() to btree_update.h  (Kent Overstreet)
It's for doing updates - this is where it belongs, and the next patches will be changing these helpers to use items from btree_update.h. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: bch2_bkey_get_iter() helpers  (Kent Overstreet)
Introduce new helpers for a common pattern: bch2_trans_iter_init(); bch2_btree_iter_peek_slot();
- bch2_bkey_get_iter_type() returns -ENOENT if it doesn't find a key of the correct type
- bch2_bkey_get_val_typed() copies the val out of the btree to a (typically stack allocated) variable; it handles the case where the value in the btree is smaller than the current version of the type, zeroing out the remainder.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
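For illustration, the open-coded pattern being replaced looks something like this (btree and key type chosen arbitrarily, with trans and pos in scope; the exact new helper signatures are not spelled out in the message):

    struct btree_iter iter;
    struct bkey_s_c k;
    int ret;

    /* before: init the iterator, peek the slot, then check the type */
    bch2_trans_iter_init(trans, &iter, BTREE_ID_subvolumes, pos, 0);
    k = bch2_btree_iter_peek_slot(&iter);
    ret = bkey_err(k);
    if (ret)
            goto err;
    if (k.k->type != KEY_TYPE_subvolume) {
            ret = -ENOENT;
            goto err;
    }

    /* after: a single bch2_bkey_get_iter_*() call does the init, lookup
     * and type check, returning -ENOENT on a type mismatch. */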
2023-05-12  bcachefs: bkey_ops.min_val_size  (Kent Overstreet)
This adds a new field to bkey_ops for the minimum size of the value, which standardizes that check and also enforces the new rule (previously done somewhat ad-hoc) that we can extend value types by adding new fields on to the end. To make that work we do _not_ initialize min_val_size with sizeof, instead we initialize it to the size of the first version of those values. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
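As a minimal sketch of where the new field lives (the ops table is reduced to just this field for illustration):

    /* A value type's ops pin the size of the *first* on-disk version of
     * its value, not sizeof() the current struct, so later versions may
     * grow by appending fields: */
    const struct bkey_ops bch2_example_ops = {
            .min_val_size = 8,
    };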
2023-05-12  bcachefs: Converting to typed bkeys is now allowed for err, null ptrs  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: Btree iterator, update flags no longer conflict  (Kent Overstreet)
Change btree_update_flags to start after the last btree iterator flag, so that we can pass both in the same flags argument. This is needed for the upcoming bch2_bkey_get_mut() helper. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: remove unused key cache coherency flag  (Brian Foster)
Signed-off-by: Brian Foster <bfoster@redhat.com>
2023-05-12  bcachefs: fix accounting corruption race between reclaim and dev add  (Brian Foster)
When a device is removed from a bcachefs volume, the associated content is removed from the various btrees. The alloc tree uses the key cache, so when keys are removed the deletes exist in cache for a period of time until reclaim comes along and flushes outstanding updates.

When a device is re-added to the bcachefs volume, the add process re-adds some of these previously deleted keys. When marking device superblock locations on device add, the keys will likely refer to some of the same alloc keys that were just removed. The memory triggers for these key updates are responsible for further updates, such as bch2_mark_alloc() calling into bch2_dev_usage_update() to update per-device usage accounting. When a new key is added to key cache, the trans update path also flushes the key to the backing btree for coherency reasons for tree walks.

With all of this context, if a device is removed and re-added quickly enough such that some key deletes from the remove are still pending a key cache flush, the trans update path can view this as addition of a new key because the old key in the insert entry refers to a deleted key. However the deleted cached key has not been filled by absence of a btree key, but rather refers to an explicit deletion of an existing key that occurred during device removal. The trans update path adds a new update to flush the key and tags the original (cached) update to skip running the memory triggers. This results in running triggers on the non-cached update instead, which in turn will perform accounting updates based on incoherent values. For example, bch2_dev_usage_update() subtracts the old alloc key dirty sector count in the non-cached btree key from the newly initialized (i.e. zeroed) per device counters, leading to underflow and accounting corruption.

There are at least a few ways to avoid this problem, the simplest of which may be to run triggers against the cached update rather than the non-cached update. If the key only needs to be flushed when the key is not present in the tree, however, then this still performs an unnecessary update. We could potentially use the cached key dirty state to determine whether the delete is a dirty, cached update vs. a clean cache fill, but this may require transmitting key cache dirty state across layers, which adds complexity and seems to be of limited value. Instead, update flush_new_cached_update() to handle this by simply checking for the key in the btree and only perform the flush when a backing key is not present. Signed-off-by: Brian Foster <bfoster@redhat.com>
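A rough sketch of the shape of that fix (hedged; helper names are borrowed from the bcachefs btree code, but this is not the literal diff):

    /* In flush_new_cached_update(): look up the backing btree key first,
     * and only emit the extra non-cached update when nothing is there. */
    struct bkey k;

    bch2_btree_path_peek_slot(btree_path, &k);
    if (!bkey_deleted(&k))
            return 0;       /* backing key exists: skip the flush and let
                             * the cached update (and its triggers) stand */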
2023-05-12  bcachefs: Mark bch2_copygc() noinline  (Kent Overstreet)
This works around a "stack frame too large" error. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: Delete obsolete btree ptr check  (Kent Overstreet)
This patch deletes a .key_invalid check for btree pointers that only applies to _very_ old on disk format versions, and potentially complicates the upgrade process. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: Always run topology error when CONFIG_BCACHEFS_DEBUG=y  (Kent Overstreet)
Improved test coverage. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: Fix a userspace build error  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: Make sure hash info gets initialized in fsck  (Kent Overstreet)
We had some bugs with setting/using first_this_inode in the inode walker in the dirents/xattr code. This patch changes to not clear first_this_inode until after initializing the new hash info. Also, we fix an error message to not print on transaction restart, and add a comment to related fsck error code. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-05-12  bcachefs: Kill bch2_verify_bucket_evacuated()  (Kent Overstreet)
With backpointers, it's now impossible for bch2_evacuate_bucket() to be completely reliable: it can race with an extent being partially overwritten or split, which needs a new write buffer flush for the backpointer to be seen. This shouldn't be a real issue in practice; the previous patch added a new tracepoint so we'll be able to see more easily if it is. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>