author     Killian De Volder <killian.de.volderc@megasoft.be>  2017-04-01 11:45:00 +0200
committer  Killian De Volder <killian.de.volderc@megasoft.be>  2017-04-01 11:45:00 +0200
commit     47a7d9c9a0d594d2a32ecad0196377aa8c654312 (patch)
tree       d5fee2e13fcc1fab920089d6aaa6250a099de94e
parent     f31ce6720250612a39a31e6796929d14869f26aa (diff)
Added 1 FAQ and info about bcache VS bcachefs.
-rw-r--r--  FAQ.mdwn          5
-rw-r--r--  FuturePlans.mdwn  20
2 files changed, 22 insertions, 3 deletions
diff --git a/FAQ.mdwn b/FAQ.mdwn
index 87d18a3..4a59df9 100644
--- a/FAQ.mdwn
+++ b/FAQ.mdwn
@@ -1,6 +1,11 @@
Frequently Asked Questions
+## How do bcachefs and bcache compare?
+bcachefs is a filesystem; bcache is a block caching system (see the setup sketch below).
+"bcache ain't perfect if you really hammer on it, but i know about those bugs and they're fixed in bcachefs :p"
+## Are there improvements in bcachefs internals vs bcache?
+Yes, there are significant improvements.
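+
+The practical difference shows up at setup time. A minimal sketch, with placeholder device names (/dev/sdb as a backing HDD, /dev/sdc as an SSD), not commands to run as-is:
+
+```sh
+# bcache: attach an SSD cache to an existing block device; the result is a new
+# block device (/dev/bcache0) that still needs a filesystem on top of it.
+make-bcache -C /dev/sdc -B /dev/sdb
+mkfs.ext4 /dev/bcache0
+
+# bcachefs: the filesystem manages the devices itself; nothing is layered on top.
+bcachefs format /dev/sdb
+mount -t bcachefs /dev/sdb /mnt
+```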
## Can I use bcache with an existing device, without reformatting?
diff --git a/FuturePlans.mdwn b/FuturePlans.mdwn
index 94ef581..6561d92 100644
--- a/FuturePlans.mdwn
+++ b/FuturePlans.mdwn
@@ -1,6 +1,20 @@
+The following future plans are no longer accurate.
+If you wish to use these features, take a look at bcachefs.
-Further off, there's plans to use Bcache's index to implement overcommited storage. If you're familiar with LVM, it works by allocating logical volumes in units of 4 mb extents; thus you can arbitrarily create and resize LVs. But when you create an LV you have to fully allocate its storage, regardless of whether it'll ever be written to. If you've ever managed servers with lots of random LVs, you've probably experienced first hand how much of a pain it is to keep track of how much free space you have, resize LVs when the filesystems fill up, etc. - not to mention the huge amount of space that typically gets wasted because you really don't want filesystems to fill up.
+Further off, there are plans to use Bcache's index to implement overcommitted storage.
+If you're familiar with LVM, it works by allocating logical volumes in units of 4 MB extents;
+thus you can arbitrarily create and resize LVs. But when you create an LV, you have to fully allocate its storage,
+regardless of whether it'll ever be written to. If you've ever managed servers with lots of random LVs,
+you've probably experienced firsthand how much of a pain it is to keep track of how much free space you have,
+resize LVs when the filesystems fill up, etc. - not to mention the huge amount of space that typically gets wasted because you really don't want filesystems to fill up.
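+
+LVM itself has since gained overcommit via thin provisioning, which illustrates the idea; a hedged sketch with a made-up volume group vg0:
+
+```sh
+# Classic LV: all 100G is allocated up front, whether or not it is ever written.
+lvcreate -L 100G -n classic vg0
+
+# Thin pool plus thin LV: a 1T volume backed by only 100G of real storage.
+lvcreate -L 100G -T vg0/pool
+lvcreate -V 1T -T vg0/pool -n thinvol
+```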
-But all the work has already been done in Bcache for allocating on demand, and maintaining the index while it's in use - and by using the same index for cached data and the volumes themselves, there will be approximately zero extra runtime overhead. You'll be able to create petabyte sized filesystems with a tiny amount of real storage, resize them arbitrarily, and be able to see exactly how much space you're using. Reading from newly created volumes also won't return old data; sectors that haven't previously been written to will return 0s. This was actually the primary motivation for this feature - for shared hosting, you don't want customers to be able to see other people's data.
+But all the work has already been done in Bcache for allocating on demand,
+and maintaining the index while it's in use - and by using the same index for cached data and the volumes themselves,
+there will be approximately zero extra runtime overhead. You'll be able to create petabyte-sized filesystems with a tiny amount of real storage,
+resize them arbitrarily, and be able to see exactly how much space you're using. Reading from newly created volumes also won't return old data;
+sectors that haven't previously been written to will return zeroes. This was actually the primary motivation for this feature - for shared hosting,
+you don't want customers to be able to see other people's data.
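+
+One way to see the zero-fill behaviour on any on-demand-allocated volume (reusing the hypothetical vg0/thinvol from the sketch above):
+
+```sh
+# Sectors that were never written read back as zeroes, not as someone else's stale data.
+dd if=/dev/vg0/thinvol bs=4096 count=1 2>/dev/null | hexdump -C
+```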
-There's also been quite a bit of interest in tiered storage. If you've got a truly large amount of storage, it may be beneficial to have a really large RAID60 of large 7200 rpm drives, and a smaller RAID10 of 15k rpm SAS drives. Nobody wants to manually manage what goes where - and keep track of what data gets accessed the most - so if you could use it as one large pool, and have data migrate between them automatically, so much the better. Once overcommited storage is implemented, tiered storage should actually be quite easy to add.
+There's also been quite a bit of interest in tiered storage. If you've got a truly large amount of storage, it may be beneficial to have a really large RAID60 of large 7200 rpm drives,
+and a smaller RAID10 of 15k rpm SAS drives. Nobody wants to manually manage what goes where - and keep track of what data gets accessed the most - so if you could use it as one large pool,
+and have data migrate between them automatically, so much the better. Once overcommitted storage is implemented, tiered storage should actually be quite easy to add.
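+
+Tiering did eventually land in bcachefs. Roughly, and hedged - the labels and target options below are illustrative, so check the bcachefs documentation for current syntax:
+
+```sh
+# Writes land on the SSD tier first; data migrates to the HDD tier in the background.
+bcachefs format \
+    --label=ssd.ssd1 /dev/sda \
+    --label=hdd.hdd1 /dev/sdb \
+    --foreground_target=ssd \
+    --background_target=hdd \
+    --promote_target=ssd
+```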