The bread and butter of the BTRFS filesystem on Debian

Using BTRFS on Debian 'Buster' (kernel 4.17, btrfs-progs v4.17)
This document is meant for users with some technical skills. If you are an absolute beginner, you are probably better off starting elsewhere.
The purpose of this document is to clarify some concepts and describe how BTRFS works in principle, as there seems to be a lot of misinformation, or simply a lack of knowledge, out there.

Note: the author of this document is just a regular user with some C coding experience and an interest in BTRFS. While I will try hard to avoid misinformation, there is a slight risk that I might in fact add to the misinformation that is already out there.

Note#2: As of writing (30 June 2018), this document is a work in progress and must be considered unfinished.
What is BTRFS:
BTRFS is a (mostly) self-healing copy-on-write (COW) filesystem for Linux with a lot of fancy features. It is still maturing, which means that some parts of it work great, some parts need a little bit of know-how, and some parts of the filesystem are experimental and only recommended for testing purposes.
So what can BTRFS do for you:
How does BTRFS work:
So let's get to work...:
The first thing to keep in mind is that you typically want your hard drives partitioned. BTRFS can run directly on the underlying device, but it is not always advisable or beneficial to do so. If you, for example, want to make your disk pool bootable (with redundancy), you may want to install GRUB on all of your disks so that if one disk is toast you will still be able to boot your system comfortably.
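A rough sketch of what that could look like (assuming BIOS booting on GPT disks; the device name and partition sizes are just examples, and the same steps would be repeated for every disk in the pool):

    # Hypothetical layout: a tiny bios_grub partition for GRUB's core image,
    # the rest of the disk reserved for BTRFS
    parted -s /dev/sdx mklabel gpt
    parted -s /dev/sdx mkpart grub 1MiB 3MiB
    parted -s /dev/sdx set 1 bios_grub on
    parted -s /dev/sdx mkpart data 3MiB 100%
    # Install the bootloader on this disk too, so the system can still boot if another disk dies
    grub-install /dev/sdx

With a layout like this, the BTRFS filesystem itself would then be created on the second partition of each disk (e.g. /dev/sdx2) rather than on the whole device.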

Basic usage:
How to create a filesystem:
    mkfs.btrfs /dev/sdx
    ...and if you want to use multiple storage devices...
    mkfs.btrfs /dev/sdx /dev/sdy /dev/sdz
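    If you already know which storage profiles you want (see the table further down), you can, for example, set them at creation time with the -d (data) and -m (metadata) options:
    mkfs.btrfs -d raid1 -m raid1 /dev/sdx /dev/sdy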

How to add a device to the filesystem:
    btrfs device add /dev/sdn /mountpoint
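    Note that adding a device does not automatically move any existing data onto it. If you want the existing data and metadata spread over the new device as well, you can run a balance afterwards (a sketch; a full balance rewrites everything and can take a long time):
    btrfs balance start --full-balance /mountpoint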

How to remove a device from the filesystem:
    btrfs device remove /dev/sdn /mountpoint
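    If a device has failed completely and is no longer present, you can (assuming a redundant profile such as RAID1) mount the filesystem degraded and remove the dead device by the keyword 'missing' instead of a device name, roughly like this:
    # mount using one of the surviving member devices
    mount -o degraded /dev/sdx /mountpoint
    # 'missing' refers to the device that is no longer present
    btrfs device remove missing /mountpoint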

How to switch between storage profiles online:
    btrfs balance start -dconvert=profile -mconvert=profile /mountpoint
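    For example, to convert both data and metadata to RAID1 on a filesystem with two or more devices (the conversion rewrites everything and can take a long time):
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mountpoint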

    The available storage profiles are described in the table below.

    Note: the new profile naming format (nCmSpP) first gives the number of copies that are made, then how those copies are stored:
    nC = number of copies
    mS = number of devices each copy is striped over (m = max, i.e. all available devices)
    pP = number of devices used for parity
    For example, RAID10 in the new format is 2CmS: two copies, striped over the maximum number of devices.


Old format  New format  A block of data/metadata is stored like so                              Technical description  Device failures allowed  Total storage utilization (%)
----------  ----------  ----------------------------------------------------------------------  ---------------------  -----------------------  -----------------------------------
SINGLE      1C          Only one copy on any device                                              No replicas             0                        100
DUP         2CD         Two copies on one storage device                                         One local replica       0                        50
RAID0       1CmS        One copy, striped over all storage devices                               Striping                0                        100
RAID1       2C          Two copies on different storage devices                                  1xReplica               1                        50
RAID10      2CmS        Two copies, striped over different storage devices                       1xReplica+1xStripe      1                        50
N/A         3C          Three copies on different storage devices                                2xReplicas              2                        33
N/A         4C          Four copies on different storage devices                                 3xReplicas              3                        25
RAID5       1CmS1P      One copy, striped over all but one storage device (used for parity)      1xStripe+1xParity       1                        ((num_devices-1)*100) / num_devices
RAID6       1CmS2P      One copy, striped over all but two storage devices (used for parity)     1xStripe+2xParity       2                        ((num_devices-2)*100) / num_devices
N/A         1CmS3P      One copy, striped over all but three storage devices (used for parity)   1xStripe+3xParity       3                        ((num_devices-3)*100) / num_devices


    NOTE: Because BTRFS stores small files directly in the metadata, don't be fooled into thinking that data=RAID6 protects all your files against a dual disk failure unless you also have the metadata stored in the same profile.
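    You can check which profiles are currently used for data, metadata and system chunks like this:
    btrfs filesystem df /mountpoint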

How to view filesystem allocation (Beware: this is NOT the same as usage):
    btrfs filesystem usage -T /mountpoint