If you delete a file (e.g. rm /path/to/file) on a filesystem like ext4, its inode and blocks will be released to the pool (marked as free), and there's a good chance that the contents of the file can be recovered.
There are tools like shred, srm and wipe that can be used to overwrite those blocks, but let's see what we can do without any extra tools.
Note: Don't rely on any of these tools or techniques to securely delete files on modern disks and filesystems (especially SSDs with wear leveling, and journaling/COW filesystems like ext4 and btrfs). Always encrypt your data, and make sure that you understand the encryption scheme you're using, and where and how you can leak unencrypted data.
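As a quick sanity check, you can at least see whether a drive reports itself as rotational (SSDs report 0 in the ROTA column); this is only a hint about wear leveling, not a guarantee:

lsblk -d -o NAME,ROTA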
We can't just do something like this:
cat /dev/zero > /path/to/file
First, because /dev/zero provides us with a continuous stream of bytes, the writing process would never stop.
Instead of /dev/zero, we could use /dev/null to replace the contents of the file:
cat /dev/null > /path/to/file
But there is a second, more important issue: those commands will truncate the file (and release its blocks), while we need to replace the existing blocks in place.
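You can watch the truncation happen with stat; on ext4, both the size and the block count drop to zero after the redirect (a quick sketch, exact numbers will vary):

stat -c "%s bytes, %b blocks" /path/to/file
cat /dev/null > /path/to/file
stat -c "%s bytes, %b blocks" /path/to/file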
dd (a part of the coreutils package) is a handy tool that we can use here. But again, if we just tried something like this:
dd if=/dev/zero of=/path/to/file
We'd have the same problem.
To solve the first issue, we need to tell dd how many blocks we want to write, and the block size. To find those numbers, we can use the stat command:
stat /path/to/file
For a simple, 2-byte file on ext4 with the default settings, you should see something like this:
File: test
Size: 2 Blocks: 8 IO Block: 4096 regular file
...
This means that our file has 8 512-byte blocks (stat always reports the block count in 512-byte units; my ext4 filesystem has a 4 KB block size, and this file is using a single allocation block: 8 * 512 = 4096 bytes).
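As a side note, on ext4 you can also inspect the actual allocation with filefrag from the e2fsprogs package (the output format varies between versions); for this file, it should show a single 4 KB extent:

filefrag -v /path/to/file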
And we can tell that to dd with:
dd if=/dev/zero of=/path/to/file bs=512 count=8
This will overwrite our file, but we still have the second issue: the file is still getting truncated, and its blocks are released to the pool. To solve that, we can use the conv=notrunc option to tell dd that we want to preserve the existing blocks:
dd if=/dev/zero of=/path/to/file bs=512 count=8 conv=notrunc
This should work as expected.
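One way to check the result is to compare the file against /dev/zero for its full length (a sketch using cmp from diffutils). Keep in mind that dd's writes can sit in the page cache for a while; adding conv=fsync to the dd command (or running sync afterwards) forces them out to disk:

cmp -n "$(stat -c %s /path/to/file)" /path/to/file /dev/zero && echo "all zeros"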
Instead of /dev/zero, we can also use /dev/urandom (slower, but it makes recovery even harder):
dd if=/dev/urandom of=/path/to/file bs=512 count=8 conv=notrunc
A more scripting-friendly approach would look like this:
# %b = number of allocated blocks, %B = size in bytes of each block reported by %b
read -r bs count < <(stat -c "%B %b" "$file")
dd if=/dev/zero of="$file" bs="$bs" count="$count" conv=notrunc
It wouldn't be hard to add support for multiple passes, but at that point, it probably makes more sense to use one of the tools made for this purpose.
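Still, for illustration, here's a minimal multi-pass sketch built on the same idea (the wipe_file name and the two-random-passes-plus-zero scheme are my own choices, not any standard):

wipe_file() {
    local file=$1 bs count
    read -r bs count < <(stat -c "%B %b" "$file")
    # two passes of random data, then a final pass of zeros
    for src in /dev/urandom /dev/urandom /dev/zero; do
        dd if="$src" of="$file" bs="$bs" count="$count" conv=notrunc,fsync status=none
    done
}

wipe_file /path/to/file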
shred
shred is also a part of coreutils, and you almost certainly already have it on your system.
shred -n 3 -z /path/to/file
This command will make three passes overwriting the blocks with random data, and then a final pass overwriting with zeros.
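If you also want the file unlinked after overwriting, shred can do that with -u (and -v prints progress); a usage example:

shred -v -n 3 -z -u /path/to/file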
srm
srm (short for 'secure remove') is another popular tool for this purpose.
By default, it will do the following:
- 1 pass with 0xff
- 5 random passes. /dev/urandom is used for a secure RNG if available.
- 27 passes with special values defined by Peter Gutmann.
- 5 random passes. /dev/urandom is used for a secure RNG if available.
- Rename the file to a random value
- Truncate the file
On Debian-based distributions, srm is a part of the secure-delete package:
apt install secure-delete
On Fedora and rpm-based distros, you can install it with:
dnf install srm
Example use:
srm -vz /path/to/file
(-v means 'verbose', and -z tells srm to zero the blocks after overwriting them with random data)
wipe
Another popular tool for this purpose is called wipe.
Installation:
apt install wipe
dnf install wipe
pacman -S wipe
Example use:
wipe -i /path/to/file
(-i means 'informational', i.e. verbose mode)
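wipe can also recurse into directories; assuming the -r (recursive) flag behaves as it does on my system:

wipe -ri /path/to/dir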
Note: This is a snapshot of the page from the BetterWays.dev wiki; you can find the latest (better formatted) version here: betterways.dev/linux-wiping-and-overwriting-file-blocks.