2025-04-26  generic/556: support bcachefs  [HEAD, master]  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2025-02-24  tests/generic/702: fix for bcachefs  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2025-02-24  disable more tests on bcachefs  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-10-17  tests/generic/299: run fio verify with 512b blocksize  (Kent Overstreet)
This works around a fio bug, where verify breaks badly in the presence of short writes misaligned w.r.t. verify blocksize. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
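A minimal fio job sketch of the workaround (job name, path, and size are illustrative, not the actual generic/299 configuration):

    [verify-512b]
    filename=/mnt/test/fio-verify-file   ; illustrative path
    rw=randwrite
    bs=512                               ; keep the write blocksize equal to the verify blocksize
    size=64m
    verify=md5
    verify_fatal=1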
2024-10-12  generic/271,278: fix for bcachefs  (Kent Overstreet)
When we fail writes, bcachefs might go read-only if a btree node write happens to fail, so we also have to check for -EROFS. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
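A hedged sketch of what "also check for -EROFS" can look like in a test's error handling (variable names are illustrative, not the actual generic/271 hunk):

    # accept either errno as the expected failure once bcachefs has gone read-only
    if grep -qE "No space left on device|Read-only file system" $tmp.write_err; then
        echo "write failed as expected"
    fi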
2024-10-12  generic/171: fix for bcachefs  (Kent Overstreet)
On bcachefs, we don't know precisely how much disk space will be used until after it's written, so in-memory reservations include a fudge factor; we need a second _fill_fs if we really want the filesystem to be full. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
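A rough sketch of the two-pass fill; plain dd is shown for illustration, the test itself uses the _fill_fs helper:

    # first pass stops early because bcachefs reservations overestimate space usage
    dd if=/dev/zero of=$SCRATCH_MNT/fill.1 bs=1M oflag=direct 2>/dev/null || true
    # second pass consumes the slack left by the reservation fudge factor
    dd if=/dev/zero of=$SCRATCH_MNT/fill.2 bs=1M oflag=direct 2>/dev/null || true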
2024-07-11  tests/generic/704: don't run on bcachefs  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-11  generic/674: allow size to grow after dedup once  (Kent Overstreet)
bcachefs has to allocate the reflink btree on first use, and btree nodes are 256k by default. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-11  generic/729: disable on bcachefs  (Kent Overstreet)
bcachefs doesn't allow DIO with the buffer mapped from the same file, so this test fails in an uninteresting way. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-11  Add ktest style markers for test starting and finishing  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-11  Make bash shebang work on NixOS  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
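The portable form of the shebang, resolving bash through PATH instead of assuming /bin/bash exists (it doesn't on NixOS):

    #!/usr/bin/env bash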
2024-07-11  generic/050: tweak for bcachefs  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-11  generic/275: Reserve more space on bcachefs  (Kent Overstreet)
bcachefs btree nodes default to 256k, therefore we need to reserve more than 256k of space to ensure we can write. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
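A sketch of the kind of conditional reservation this implies (sizes and file name illustrative, not the exact generic/275 change):

    if [ "$FSTYP" = "bcachefs" ]; then
        reserve_kb=1024    # comfortably above the 256k default btree node size
    else
        reserve_kb=256
    fi
    $XFS_IO_PROG -f -c "pwrite 0 $((reserve_kb * 1024))" $SCRATCH_MNT/reserved >> $seqres.full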
2024-07-11  Ensure fuse filesystems unmount correctly  (Kent Overstreet)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-11  fixup! Disable a few tests on bcachefs  (Kent Overstreet)
2024-07-11  Dump seqres.full on test failure  (Kent Overstreet)
In ktest, we try to keep all essential information on test failure in a single log file - dumping seqres.full to stdout will end up in that log file. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
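A minimal sketch of the dump-on-failure behaviour; where exactly this hooks into the check script is an assumption:

    # on failure, mirror the verbose per-test log onto stdout so ktest captures it
    if [ -s "$seqres.full" ]; then
        echo "--- $seqres.full ---"
        cat "$seqres.full"
    fi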
2024-07-11  Make sure to pass -t $FSTYP to mount  (Kent Overstreet)
When running a kernel that supports other unrelated filesystems we can get spurious errors without this. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
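The shape of the fix, using the usual fstests variables:

    # pass the type explicitly so the kernel doesn't probe unrelated filesystem drivers
    mount -t $FSTYP $TEST_DEV $TEST_DIR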
2024-07-11  Disable a few tests on bcachefs  (Kent Overstreet)
bcachefs extents are generally smaller than on other filesystems, because checksum granularity is the same as extent size, and that breaks assumptions made by a few tests. Ideally we'd like to get these working on bcachefs, but for now we're just disabling them. generic/647 also doesn't really apply to bcachefs: bcachefs has explicit locking between dio and buffered/mmapped IO, and the test then fails because those IOs aren't done concurrently.
2024-07-11  generic/103: increase reserved bytes for bcachefs  (Dan Robertson)
The bcachefs btree nodes are quite large. If we only reserve 512 bytes we hit an intermittent failure where the fallocate that is intended to fill the available space triggers a btree node split and the extent update is interrupted. The retry of the extent update will fail because the new amount of available space is less than that of the request. Signed-off-by: Dan Robertson <dan@dlrobertson.com>
2024-07-11  Add bcachefs/001 for pagecache_add lock deadlock torture test  (Kent Overstreet)
2024-07-11  generic/{455,457,482}: make dmlogwrites tests work on bcachefs  (Kent Overstreet)
bcachefs has log-structured btree nodes in addition to a regular journal, which means that unless we replay to markers in the log in the same order they happened, and are careful to avoid writing in between replaying to different events, we need to wipe and start fresh each time. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2024-07-11  check: Add -f: failfast mode  (Kent Overstreet)
This adds a new flag to check which exits immediately after the first test failure, so as to leave test/scratch devices untouched and make it easier to debug rare test failures. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
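Example invocation of the new flag:

    # stop at the first failure, leaving the test/scratch devices as they were
    ./check -f -g auto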
2024-07-11  generic/042: pass -t to mount  (Kent Overstreet)
When running a kernel with all filesystems enabled, we sometimes get strange mount errors if we don't specify the filesystem type. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2024-06-07  generic: test Btrfs fsync vs. size-extending prealloc write crash  [tag: v2024.06.09]  (Omar Sandoval)
This is a regression test for a Btrfs bug, but there's nothing Btrfs-specific about it. Since it's a race, we just try to make the race happen in a loop and pass if it doesn't crash after all of our attempts. Signed-off-by: Omar Sandoval <osandov@fb.com> Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
2024-06-07  generic/077: ignore errors that occur while accessing the filler files  (Luis Henriques (SUSE))
When looking for data to fill the filesystem with, errors accessing files may occur. This will cause the test to fail, as the output will show lines such as:

    du: cannot read directory '/usr/etc/sudoers.d': Permission denied

Ignoring these errors should be safe, so simply redirecting the stderr of 'du' to $seqres.full fixes it. Unfortunately, this exposed a different issue: the truncation of the $seqres.full file while copying files into the filesystem. This patch also fixes that. Signed-off-by: "Luis Henriques (SUSE)" <luis.henriques@linux.dev> Reviewed-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
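A sketch of the redirection; the directory being scanned is illustrative:

    # permission errors from unreadable files go into the full log, not the golden output
    du -a /usr 2>> $seqres.full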
2024-06-07  fuzzy: test other dquot ids  (Darrick J. Wong)
Signed-off-by: "Darrick J. Wong" <djwong@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Zorro Lang <zlang@kernel.org>
2024-06-07  fuzzy: allow FUZZ_REWRITE_DURATION to control fsstress runtime when fuzzing  (Darrick J. Wong)
For each iteration of the fuzz test loop, we try to correct the problem, and then we run fsstress on the (allegedly corrected) filesystem to check that subsequent use of the filesystem won't crash the kernel or panic. Now that fsstress has a --duration switch, let's add a new config variable that people can set to constrain the amount of time that a fuzz test run takes. Signed-off-by: "Darrick J. Wong" <djwong@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Zorro Lang <zlang@kernel.org>
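A local.config sketch using the new variable; the value format is an assumption (whatever fsstress --duration accepts):

    # cap each post-fuzz fsstress rewrite pass
    export FUZZ_REWRITE_DURATION=5m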
2024-06-07  fuzzy: mask off a few more inode fields from the fuzz tests  (Darrick J. Wong)
XFS doesn't do any validation for filestreams, so don't waste time fuzzing that. Exclude the bigtime flag, since we already have inode timestamps on the no-fuzz list. Exclude the warning counters, since they're defunct now. Signed-off-by: "Darrick J. Wong" <djwong@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Zorro Lang <zlang@kernel.org>
2024-06-07  btrfs/280: run defrag after creating file to get expected extent layout  (Filipe Manana)
The test writes a 128M file and expects to end up with 1024 extents, each with a size of 128K, which is the maximum size for compressed extents. Generally this is what happens, but it's possible for writeback to kick in while creating the file (due to memory pressure, something calling sync in parallel, etc.), which may result in more, smaller extents being created, making the test fail since its golden output expects exactly 1024 extents with a size of 128K each. So to work around this, run defrag after creating the file, which will ensure we get only 128K extents in the file. Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Disseldorp <ddiss@suse.de> Reviewed-by: David Sterba <dsterba@suse.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
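A sketch of the workaround; the file path is illustrative and $BTRFS_UTIL_PROG is the usual fstests variable for the btrfs tool:

    # coalesce the file back into maximal 128K compressed extents before checking the layout
    $BTRFS_UTIL_PROG filesystem defragment "$SCRATCH_MNT/foobar" >> $seqres.full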
2024-06-07  btrfs: fix raid-stripe-tree tests with non-experimental btrfs-progs build  (Filipe Manana)
When running the raid-stripe-tree tests with a btrfs-progs version that was not configured with the experimental features, the tests fail because they expect the dump tree commands to output the stored and calculated checksum lines, which are enabled only for experimental builds. Also, these lines exist only starting with btrfs-progs v5.17 (more specifically since commit 1bb6fb896dfc ("btrfs-progs: btrfstune: experimental, new option to switch csums")). The tests fail like this on a non-experimental btrfs-progs build:

    $ ./check btrfs/304
    FSTYP         -- btrfs
    PLATFORM      -- Linux/x86_64 debian0 6.9.0-btrfs-next-160+ #1 SMP PREEMPT_DYNAMIC Tue May 28 12:00:03 WEST 2024
    MKFS_OPTIONS  -- /dev/sdc
    MOUNT_OPTIONS -- /dev/sdc /home/fdmanana/btrfs-tests/scratch_1

    btrfs/304 1s ... - output mismatch (see /home/fdmanana/git/hub/xfstests/results//btrfs/304.out.bad)
        --- tests/btrfs/304.out     2024-01-25 11:15:33.420769484 +0000
        +++ /home/fdmanana/git/hub/xfstests/results//btrfs/304.out.bad     2024-06-04 12:55:04.289903124 +0100
        @@ -8,8 +8,6 @@
         raid stripe tree key (RAID_STRIPE_TREE ROOT_ITEM 0)
         leaf XXXXXXXXX items X free space XXXXX generation X owner RAID_STRIPE_TREE
         leaf XXXXXXXXX flags 0x1(WRITTEN) backref revision 1
        -checksum stored <CHECKSUM>
        -checksum calced <CHECKSUM>
         fs uuid <UUID>
         chunk uuid <UUID>
        ...
        (Run 'diff -u /home/fdmanana/git/hub/xfstests/tests/btrfs/304.out /home/fdmanana/git/hub/xfstests/results//btrfs/304.out.bad' to see the entire diff)
    Ran: btrfs/304
    Failures: btrfs/304
    Failed 1 of 1 tests

So update _filter_stripe_tree() to remove the checksum lines, since we don't care about them, and change the golden output of the tests to not expect those lines. This way the tests work with both experimental and non-experimental btrfs-progs builds, as well as btrfs-progs versions below v5.17. Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
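The filter change amounts to dropping those two lines, roughly:

    # inside _filter_stripe_tree(): hide the experimental-only checksum lines
    sed -e '/^checksum stored /d' -e '/^checksum calced /d'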
2024-06-07  generic/747: redirect mkfs stderr to seqres.full  (Darrick J. Wong)
ext4 fails on this test with:

    --- /tmp/fstests/tests/generic/747.out     2024-05-13 06:05:59.727025928 -0700
    +++ /var/tmp/fstests/generic/747.out.bad   2024-05-21 18:34:51.836000000 -0700
    @@ -1,4 +1,5 @@
     QA output created by 747
    +mke2fs 1.47.2~WIP-2024-05-21 (21-May-2024)
     Starting fillup using direct IO
     Starting mixed write/delete test using direct IO
     Starting mixed write/delete test using buffered IO

The reason for this is that mke2fs annoyingly prints the program version to stderr, which messes up the golden output. Fix this by redirecting stderr like all the other tests do, even though this doesn't seem like a great solution... Signed-off-by: "Darrick J. Wong" <djwong@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Zorro Lang <zlang@kernel.org>
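The shape of the fix, hedged ($fs_img is a placeholder for whatever the test formats; the exact generic/747 invocation may differ):

    # send both streams to the full log so mke2fs's version banner stays out of the golden output
    $MKFS_PROG -t $FSTYP $fs_img >> $seqres.full 2>&1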
2024-06-07  xfs/008: use block size instead of the pagesize  (Pankaj Raghav)
The testcase expects a ratio of 1:3/4 for holes:filesize. This holds true when the blocksize is less than or equal to the pagesize and the total size of the file is calculated based on the pagesize; there is an implicit assumption that the blocksize will always be less than the pagesize. LBS support will enable bs > ps, where a minimum IO size is one block, which can be greater than a page. Adjust the size calculation to be based on the blocksize and not the pagesize. Signed-off-by: Pankaj Raghav <p.raghav@samsung.com> Reviewed-by: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
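A sketch of basing the size on the filesystem block size (the multiplier is illustrative):

    blksz=$(_get_block_size $SCRATCH_MNT)
    filesize=$((blksz * 2048))    # scale with the FS block size, not the page size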
2024-06-07  generic/436: round up bufsz to nearest filesystem blksz  (Pankaj Raghav)
SEEK_HOLE and SEEK_DATA work in filesystem block size granularity. So while filling up the buffer for test 13 - 16, round up the bufsz to the closest filesystem blksz. As we only allowed blocksizes lower than the pagesize, this was never an issue and it always aligned. Once we have blocksize > pagesize, this assumption will break. Fixes the test for LBS configuration. Signed-off-by: Pankaj Raghav <p.raghav@samsung.com> Reviewed-by: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Zorro Lang <zlang@kernel.org>
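The round-up itself is plain integer arithmetic:

    blksz=$(_get_block_size $TEST_DIR)
    bufsz=$(( (bufsz + blksz - 1) / blksz * blksz ))    # next multiple of the FS block size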
2024-06-07  xfs/161: adapt the test case for 64k FS blocksize  (Pankaj Raghav)
This test fails when xfs is formatted with a 64k filesystem block size*. It fails because the soft quota is not exceeded by the hardcoded 64k pwrite, and so the grace time is not set. Even though the soft quota is set to 12k for uid1, it is rounded up to the nearest blocksize:

    *** Report for user quotas on device /dev/sdb3
    Block grace time: 7days; Inode grace time: 7days
                            Block limits                File limits
    User            used    soft    hard  grace    used  soft  hard  grace
    ----------------------------------------------------------------------
    0         --       0       0       0      0       3     0     0      0
    1         --      64      64    1024      0       1     0     0      0
    2         --      64       0       0      0       1     0     0      0

Adapt the pwrite to do twice the FS block size, and set the soft limit to 1 FS block and the hard limit to 100 FS blocks. This also gets rid of hardcoded quota limit values.

* This happens even on a 64k pagesize system and is not related to the LBS effort.

Signed-off-by: Pankaj Raghav <p.raghav@samsung.com> Reviewed-by: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
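A hedged sketch of expressing the limits in filesystem blocks (user name and paths are illustrative, not the exact xfs/161 hunk):

    blksz=$(_get_block_size $SCRATCH_MNT)
    bsoft=$((blksz / 1024))          # 1 FS block, in KiB units for xfs_quota
    bhard=$((100 * blksz / 1024))    # 100 FS blocks
    $XFS_QUOTA_PROG -x -c "limit -u bsoft=${bsoft}k bhard=${bhard}k fsgqa" $SCRATCH_MNT
    # write two FS blocks so the soft limit is exceeded regardless of the block size
    $XFS_IO_PROG -f -c "pwrite 0 $((2 * blksz))" $SCRATCH_MNT/testfile >> $seqres.full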
2024-05-25  Remove richacl support  [tag: v2024.05.26]  (David Sterba)
There's no support for richacl in Linux and, based on the information in the links below, there never will be. Remove all related files and support code.

References:
- https://wiki.samba.org/index.php/NFS4_ACL_overview#Linux
- https://lwn.net/Articles/661357/ (article, 2015)
- https://lwn.net/Articles/661078/ (patches, 2015)
- https://github.com/andreas-gruenbacher/richacl/
- http://www.bestbits.at/richacl/ (no longer available)

Signed-off-by: David Sterba <dsterba@suse.com> Reviewed-by: David Disseldorp <ddiss@suse.de> Reviewed-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
2024-05-25  btrfs/741: add commit ID in _fixed_by_kernel_commit  (Anand Jain)
Now that the kernel patch is merged in v6.9, replace the placeholder with the actual commit ID. Signed-off-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
2024-05-25  _test_mkfs: Include external log device (if any) when creating fs on TEST_DEV  (Chandan Babu R)
Test execution fails when testing XFS with external log device and when RECREATE_TEST_DEV is set to true. This is because _test_mkfs() is invoked as part of recreating the filesystem on test device and this function does not include the external log device as part of the mkfs.xfs command line. _test_mount() invoked later fails since it passes an external log device to the mount syscall which the kernel does not expect to find. To fix this bug, this commit modifies _test_mkfs() to invoke _test_options() in order to compute the value of TEST_OPTIONS and includes the resulting value in the mkfs.xfs command line. Signed-off-by: Chandan Babu R <chandanbabu@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Zorro Lang <zlang@kernel.org>
2024-05-25  check: log kernel version in check.log  (Carlos Maiolino)
After collecting several xfstests runs, it's useful to keep track of which kernel a specific run happened on. Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com> Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Zorro Lang <zlang@kernel.org>
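Conceptually a one-liner; the exact spot in check and the log path are assumptions:

    # record which kernel this run was executed on
    echo "$(date) - kernel $(uname -r)" >> $RESULT_BASE/check.log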
2024-05-25  generic/733: add commit ID for btrfs  (Filipe Manana)
As of commit 5d6f0e9890ed ("btrfs: stop locking the source extent range during reflink"), btrfs now does reflink operations without locking the source file's range, allowing concurrent reads in the whole source file. So update the test to annotate that commit. Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Reviewed-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
2024-05-25  generic/742: require FIEMAP support  (Chen Hanxiao)
Some filesystems which don't support FIEMAP fail on this test, e.g.:

    FSTYP         -- nfs
    PLATFORM      -- xxxxxxxx
    MKFS_OPTIONS  -- xxxx-xxx-xxx:/mnt/xfstests/scratch/nfs-server
    MOUNT_OPTIONS -- -o vers=4.2 -o context=system_u:object_r:root_t:s0 xxxx-xxx-xxx:/mnt/xfstests/scratch/nfs-server /mnt/xfstests/scratch/nfs-client

    generic/742 [failed, exit status 1]- output mismatch (see /var/lib/xfstests/results//generic/742.out.bad)
        --- tests/generic/742.out   2024-05-12 10:48:02.502761852 -0400
        +++ /var/lib/xfstests/results//generic/742.out.bad  2024-05-12 21:10:48.412498322 -0400
        @@ -1,2 +1,3 @@
         QA output created by 742
         Silence is golden
        +fiemap-fault: fiemap failed 95: Operation not supported

So _notrun if FIEMAP isn't supported by $FSTYP. Signed-off-by: Chen Hanxiao <chenhx.fnst@fujitsu.com> Reviewed-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
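The usual way to express that requirement in fstests is via the xfs_io command probe, presumably something like:

    # _notrun if the filesystem (or xfs_io) lacks FIEMAP support
    _require_xfs_io_command "fiemap"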
2024-05-24  fstests: mkfs the scratch device if we have missing profiles  (Josef Bacik)
I have a btrfs config where I specifically exclude raid56 testing, and this resulted in btrfs/011 failing with an inconsistent file system. This happens because the last test we run does a btrfs device replace of the $SCRATCH_DEV, leaving it with no valid file system. We then skip the remaining profiles and exit, but then we go to check the device on $SCRATCH_DEV and it fails because there is no file system. Fix this to re-make the scratch device if we skip any of the raid profiles. This only happens in the case of some idiot user configuring their testing in a special way, in normal runs of this test we'll never re-make the fs. Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Anand Jain <anand.jain@oracle.com>
2024-05-24  fstests: btrfs/301: handle auto-removed qgroups  (Qu Wenruo)
There are always attempts to auto-remove empty qgroups after dropping a subvolume. For squota mode, not all qgroups can or should be dropped, as there are common cases where the dropped subvolume is still referred to by other snapshots. In that case, the numbers can only be freed when the last referencer is dropped. The latest kernel attempt only tries to drop empty qgroups for squota mode. But even with such a safe change, the test case still needs to handle auto-removed qgroups, by explicitly echoing "0", or the later calculation would break bash grammar. This patch adds extra handling for such removed qgroups, to be future-proof against changes in qgroup auto-removal behavior. Reviewed-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Anand Jain <anand.jain@oracle.com>
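A sketch of the defensive handling; the helper name is hypothetical:

    # a qgroup that was auto-removed yields an empty value; substitute 0
    # so the later arithmetic still parses
    usage=$(get_subvol_qgroup_usage "$subvolid")    # hypothetical helper
    echo "${usage:-0}"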
2024-05-24  btrfs/{140,141}: verify read-repair test data by md5sum  (Josef Bacik)
For validating that read repair works properly we corrupt one mirror and then read back the physical location after we do a direct or buffered read on the mounted file system and then unmount the file system. The golden output expects all a's, however with encryption this will obviously not be the case. However I still broke read repair, so these tests are quite valuable. Fix them to dump the on disk values to a temporary file and then md5sum the files, and then validate the md5sum to make sure the read repair worked properly. Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Anand Jain <anand.jain@oracle.com>
2024-05-24  generic/269: require no compression  (Josef Bacik)
This is meant to test ENOSPC, but we're dd'ing /dev/zero, which won't fill up anything with compression on. Additionally we're killing dd and then immediately trying to unmount. With compression we could have references to the inode being held by the async compression workers, so sometimes this will fail with EBUSY on the unmount. A better test would be to use slightly compressible data; use _ddt. Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Anand Jain <anand.jain@oracle.com> [ changed to use _ddt ]
2024-05-24  generic/027: require no compression  (Josef Bacik)
This test creates a small file and then a giant file, and then tries to create a bunch of small files in a loop to exercise ENOSPC. The problem is that with compression the giant file isn't actually giant, so it can make this test take forever. Simply disable it for compression. Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com> Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: Anand Jain <anand.jain@oracle.com>
2024-05-24  generic/352: require no compression  (Josef Bacik)
Our CI has been failing on this test for compression since 0fc226e7 ("fstests: generic/352 should accomodate other pwrite behaviors"). This is because we changed the size of the initial write down to 4k, and we write a repeatable pattern. With compression on btrfs this results in an inline extent, and when you reflink an inline extent it turns into a full copy instead of a reflink. This isn't a bug with compression; the test just isn't well aligned with how compression interacts with the allocation of space, so simply exclude it from running when compression is enabled. Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com> Reviewed-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Anand Jain <anand.jain@oracle.com>
2024-05-12  generic: add gc stress test  [tag: v2024.05.12]  (Hans Holmberg)
This test stresses garbage collection for file systems by first filling up a scratch mount to a specific usage point with files of random size, then doing overwrites in parallel with deletes to fragment the backing storage, forcing reclaim. Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com> Reviewed-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Zorro Lang <zlang@kernel.org>
2024-05-12  common/tracing: try /sys/kernel/tracing first  (Zorro Lang)
To avoid the dependency on debugfs, tracefs is now also mounted in another place: /sys/kernel/tracing. For compatibility, /sys/kernel/debug/tracing is still there. So change the _require_ftrace helper to try the new /sys/kernel/tracing path first, and fall back to the old one if it's not supported. xfs/499 uses ftrace, so call _require_ftrace in it. Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Zorro Lang <zlang@kernel.org>
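The fallback logic in _require_ftrace is essentially (variable name illustrative):

    if [ -d /sys/kernel/tracing ]; then
        FTRACE_ROOT=/sys/kernel/tracing          # preferred: no debugfs dependency
    elif [ -d /sys/kernel/debug/tracing ]; then
        FTRACE_ROOT=/sys/kernel/debug/tracing    # legacy compatibility path
    else
        _notrun "no tracefs support"
    fi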
2024-05-12  fstests: fix _require_debugfs and call it properly  (Zorro Lang)
The old _require_debugfs helper doesn't work anymore; fix it to check whether the system supports debugfs, and then call this helper in the tests which need $DEBUGFS_MNT. Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Zorro Lang <zlang@kernel.org>
2024-05-12  fstests: remove the rest of shared/  (David Sterba)
All tests from shared/ have been moved to generic/, remove the Makefile and the reference from the 'check' scripts. Signed-off-by: David Sterba <dsterba@suse.com> Acked-by: Darrick J. Wong <djwong@kernel.org> Reviewed-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Zorro Lang <zlang@kernel.org>