| | | |
|---|---|---|
| author | Mateusz Guzik <mjguzik@gmail.com> | 2025-03-17 17:07:07 +0100 |
| committer | Christian Brauner <brauner@kernel.org> | 2025-03-18 15:34:27 +0100 |
| commit | eb7e453a83007d019d718c6b3666a1c082b676b0 (patch) | |
| tree | 282367d2ae1a0cdf5d825fc9714d94c54677100e | |
| parent | 008a746a01e221b05932fd4561233ef35fa791cc (diff) | |
fs: drop the lock trip around I_NEW wake up in evict()
The unhashed-state check in __wait_on_freeing_inode() is performed with
->i_lock held, and remove_inode_hash() takes the same lock. This makes the
extra lock acquire in evict() completely spurious: every potential sleeper
either dropped the lock before remove_inode_hash() acquired it, or found
the inode unhashed and aborted.

Note there is no trickery here: the usual cost of both sides taking the
lock is still paid; it just stops being paid twice.
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Link: https://lore.kernel.org/r/20250317160707.1694135-1-mjguzik@gmail.com
Signed-off-by: Christian Brauner <brauner@kernel.org>
-rw-r--r-- | fs/inode.c | 19
1 file changed, 6 insertions, 13 deletions
```diff
diff --git a/fs/inode.c b/fs/inode.c
index 10121fc7b87e..4c3be44838a5 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -816,23 +816,16 @@ static void evict(struct inode *inode)
 	/*
 	 * Wake up waiters in __wait_on_freeing_inode().
 	 *
-	 * Lockless hash lookup may end up finding the inode before we removed
-	 * it above, but only lock it *after* we are done with the wakeup below.
-	 * In this case the potential waiter cannot safely block.
+	 * It is an invariant that any thread we need to wake up is already
+	 * accounted for before remove_inode_hash() acquires ->i_lock -- both
+	 * sides take the lock and sleep is aborted if the inode is found
+	 * unhashed. Thus either the sleeper wins and goes off CPU, or removal
+	 * wins and the sleeper aborts after testing with the lock.
 	 *
-	 * The inode being unhashed after the call to remove_inode_hash() is
-	 * used as an indicator whether blocking on it is safe.
+	 * This also means we don't need any fences for the call below.
 	 */
-	spin_lock(&inode->i_lock);
-	/*
-	 * Pairs with the barrier in prepare_to_wait_event() to make sure
-	 * ___wait_var_event() either sees the bit cleared or
-	 * waitqueue_active() check in wake_up_var() sees the waiter.
-	 */
-	smp_mb__after_spinlock();
 	inode_wake_up_bit(inode, __I_NEW);
 	BUG_ON(inode->i_state != (I_FREEING | I_CLEAR));
-	spin_unlock(&inode->i_lock);
 
 	destroy_inode(inode);
 }
```
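For readers less used to this pattern, here is a minimal user-space sketch of the ordering argument the new comment relies on. It is a hypothetical pthreads analogy, not kernel code: the mutex stands in for ->i_lock, the condition variable for the __I_NEW waitqueue, and the booleans for hash membership and the I_FREEING teardown; all identifiers in it are invented for illustration.

```c
/*
 * Hypothetical user-space analogy (pthreads), not kernel code. It only
 * models the argument from the commit message: the flags a sleeper tests
 * are flipped under the shared lock, so the wakeup issued after the
 * unlock needs no second lock trip.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_inode {
	pthread_mutex_t lock;   /* plays the role of ->i_lock */
	pthread_cond_t freed;   /* plays the role of the __I_NEW waitqueue */
	bool hashed;            /* plays the role of hash membership */
	bool being_freed;       /* plays the role of the I_FREEING teardown */
};

/* Lookup side, loosely analogous to __wait_on_freeing_inode(). */
static void *waiter(void *arg)
{
	struct fake_inode *ino = arg;

	pthread_mutex_lock(&ino->lock);
	if (!ino->hashed) {
		/* Removal already won: abort instead of blocking. */
		pthread_mutex_unlock(&ino->lock);
		return NULL;
	}
	while (ino->being_freed)
		/* Atomically drops the lock while going to sleep. */
		pthread_cond_wait(&ino->freed, &ino->lock);
	pthread_mutex_unlock(&ino->lock);
	return NULL;
}

/* Eviction side, loosely analogous to remove_inode_hash() + the wakeup. */
static void *evictor(void *arg)
{
	struct fake_inode *ino = arg;

	pthread_mutex_lock(&ino->lock);
	ino->hashed = false;       /* later lookups see this and abort */
	ino->being_freed = false;  /* teardown done as far as waiters care */
	pthread_mutex_unlock(&ino->lock);

	/*
	 * No second lock trip around the wakeup: anyone who could sleep
	 * checked the flags under the lock before we flipped them (and is
	 * therefore already blocked on the condition variable), and anyone
	 * arriving later aborts on !hashed.
	 */
	pthread_cond_broadcast(&ino->freed);
	return NULL;
}

int main(void)
{
	struct fake_inode ino = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.freed = PTHREAD_COND_INITIALIZER,
		.hashed = true,
		.being_freed = true,
	};
	pthread_t w, e;

	pthread_create(&w, NULL, waiter, &ino);
	pthread_create(&e, NULL, evictor, &ino);
	pthread_join(w, NULL);
	pthread_join(e, NULL);
	puts("waiter returned; evictor never retook the lock for the wakeup");
	return 0;
}
```

The kernel uses a wait-var bit and inode_wake_up_bit() rather than a condition variable, but the shape of the argument is the same: the state transition the sleepers test happens under the shared lock, so the wakeup issued after the unlock needs neither a second lock trip nor extra fences.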