Manual recovery from backup without Databasus
Backing up is not only about data protection; it's also about data recovery. Databasus is designed to keep your backups recoverable even if the VPS running Databasus is deleted, you lose access to it, or the UI is unavailable for some reason. You don't need Databasus to recover a backup: backups are stored in standard formats, so there is no vendor lock-in.
What you need
To manually recover a backup, you need:
- Backup file from your storage (local storage, S3, Google Drive, etc.)
- Metadata file from the same storage. It is named the same as the backup file, but with the `.metadata` extension.
- Secret key from `./databasus-data/secret.key` (located in the same directory as the backup files, usually `/opt/databasus/`)
File structure
Each backup consists of two files stored in your storage (local or cloud):
- `{database-name}-{timestamp}-{backup-id}` - Encrypted and compressed backup data
- `{database-name}-{timestamp}-{backup-id}.metadata` - JSON file with encryption details
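If your storage directory contains many backups, you can list the backup/metadata pairs programmatically. A minimal sketch (the naming convention is taken from the pattern above; the directory path is your own):

```python
import os

def find_backup_pairs(storage_dir):
    """Return (backup_file, metadata_file) path pairs found in a storage directory."""
    pairs = []
    for name in sorted(os.listdir(storage_dir)):
        if name.endswith(".metadata"):
            # The backup file has the same name without the .metadata extension
            backup_path = os.path.join(storage_dir, name[: -len(".metadata")])
            if os.path.exists(backup_path):
                pairs.append((backup_path, os.path.join(storage_dir, name)))
    return pairs
```

Metadata files without a matching backup file are skipped, so orphaned entries don't end up in the list.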
The metadata file contains encryption salt and IV (nonce) in Base64 format:
```json
{
  "backupId": "550e8400-e29b-41d4-a716-446655440000",
  "encryptionSalt": "base64-encoded-salt",
  "encryptionIV": "base64-encoded-nonce",
  "encryption": "encrypted"
}
```

Decryption
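Before running the decryption script, you can sanity-check a metadata file and confirm its fields decode cleanly. A small sketch, using only the field names from the example above:

```python
import json
import base64

def check_metadata(path):
    """Load a .metadata file and report the fields the decryption step relies on."""
    with open(path, "r") as f:
        metadata = json.load(f)
    return {
        "backupId": metadata["backupId"],
        # Databasus stores the status in lowercase; compare case-insensitively
        "encrypted": metadata.get("encryption", "").upper() == "ENCRYPTED",
        "saltBytes": len(base64.b64decode(metadata["encryptionSalt"])),
        "ivBytes": len(base64.b64decode(metadata["encryptionIV"])),
    }
```

If `encrypted` comes back `False`, no decryption is needed and you can decompress/restore the backup file directly.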
Databasus uses AES-256-GCM encryption with PBKDF2 key derivation. Each backup has a unique encryption key derived from:
- Master key (from secret.key file)
- Backup ID
- Random salt (stored in metadata)
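The derivation step can be reproduced with Python's standard library alone: `hashlib.pbkdf2_hmac` with SHA-256 computes the same PBKDF2-HMAC-SHA256 result as the PyCryptodome call used in the full script. A sketch of just this step (parameter values taken from the script's constants):

```python
import hashlib

PBKDF2_ITERATIONS = 100000

def derive_backup_key(master_key: str, backup_id: str, salt: bytes) -> bytes:
    """Derive the per-backup AES-256 key from the master key, backup ID, and salt."""
    # Key material is the master key concatenated with the backup ID
    key_material = (master_key + backup_id).encode("utf-8")
    return hashlib.pbkdf2_hmac("sha256", key_material, salt, PBKDF2_ITERATIONS, dklen=32)
```

Because the backup ID and salt feed into the derivation, every backup gets a distinct key even though the master key is shared.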
Use this Python script to decrypt your backup:
```python
import json
import base64
import struct
import os
from Crypto.Cipher import AES
from Crypto.Protocol.KDF import PBKDF2
from Crypto.Hash import SHA256

# Constants from Databasus encryption
MAGIC_BYTES = b"PGRSUS01"
HEADER_LENGTH = 64
CHUNK_SIZE = 1024 * 1024
PBKDF2_ITERATIONS = 100000


def decrypt_backup(backup_file, metadata_file, master_key):
    """
    Decrypt a Databasus backup file using metadata and master key.

    Args:
        backup_file: Path to encrypted backup file
        metadata_file: Path to metadata JSON file
        master_key: Master key from ./databasus-data/secret.key
    """
    # Validate files exist
    if not os.path.exists(backup_file):
        print(f"Error: Backup file not found: {backup_file}")
        return
    if not os.path.exists(metadata_file):
        print(f"Error: Metadata file not found: {metadata_file}")
        return

    # Read metadata
    with open(metadata_file, "r") as f:
        metadata = json.load(f)

    # Check if file is encrypted (case-insensitive check)
    encryption_status = metadata.get("encryption", "").upper()
    if encryption_status != "ENCRYPTED":
        print(
            f"Error: Backup is not encrypted (encryption status: {metadata.get('encryption')})"
        )
        print("No decryption needed. You can decompress/restore the file directly.")
        return

    backup_id = metadata["backupId"]
    salt = base64.b64decode(metadata["encryptionSalt"])
    iv = base64.b64decode(metadata["encryptionIV"])

    # Generate output filename with decrypted_ prefix
    backup_dir = os.path.dirname(backup_file) or "."
    backup_name = os.path.basename(backup_file)
    output_file = os.path.join(backup_dir, f"decrypted_{backup_name}")

    # Derive encryption key using PBKDF2
    key_material = (master_key + backup_id).encode("utf-8")
    derived_key = PBKDF2(
        key_material, salt, dkLen=32, count=PBKDF2_ITERATIONS, hmac_hash_module=SHA256
    )

    try:
        with open(backup_file, "rb") as f_in, open(output_file, "wb") as f_out:
            # Read and validate header
            header = f_in.read(HEADER_LENGTH)

            # Validate magic bytes
            magic = header[:8]
            if magic != MAGIC_BYTES:
                raise ValueError(
                    f"Invalid magic bytes: expected {MAGIC_BYTES}, got {magic}"
                )

            # Decrypt chunks
            chunk_index = 0
            while True:
                # Read chunk length (4 bytes)
                length_bytes = f_in.read(4)
                if not length_bytes:
                    break
                chunk_length = struct.unpack(">I", length_bytes)[0]

                # Read encrypted chunk
                encrypted_chunk = f_in.read(chunk_length)
                if not encrypted_chunk:
                    break

                # Generate chunk nonce (base IV + chunk index)
                chunk_nonce = bytearray(iv)
                chunk_nonce[4:12] = struct.pack(">Q", chunk_index)

                # Create cipher for this chunk
                chunk_cipher = AES.new(derived_key, AES.MODE_GCM, nonce=bytes(chunk_nonce))

                # Decrypt chunk
                try:
                    decrypted_chunk = chunk_cipher.decrypt_and_verify(
                        encrypted_chunk[:-16],  # ciphertext
                        encrypted_chunk[-16:],  # auth tag
                    )
                except ValueError as e:
                    if "MAC check failed" in str(e):
                        print("\nError: Failed to decrypt backup (MAC check failed)")
                        print("This usually means:")
                        print("  - The master key is incorrect")
                        print("  - The backup file is corrupted")
                        print("  - The metadata doesn't match this backup file")
                        print(f"\nFailed at chunk {chunk_index}")
                        raise
                    raise

                # Write decrypted data
                f_out.write(decrypted_chunk)
                chunk_index += 1

        print(f"Successfully decrypted {chunk_index} chunks to {output_file}")
    except ValueError as e:
        # Clean up partial output file after files are closed
        if "MAC check failed" in str(e) and os.path.exists(output_file):
            os.remove(output_file)
        return


# Example usage:
if __name__ == "__main__":
    decrypt_backup(
        backup_file="./your-backup-file",  # <--- change this to your backup file
        metadata_file="./your-backup-file.metadata",  # <--- change this to your metadata file
        master_key="your-master-key-here",  # <--- change this to your master key
    )
```

Install required dependencies:
```shell
pip install pycryptodome
```

How to use the script:
- Save the script above to a file (e.g., `decrypt_backup.py`)
- Update the parameters in the example usage section at the bottom
- Run the script:

```shell
python decrypt_backup.py
```

The script will automatically create the output file with a `decrypted_` prefix. For example, if your backup file is `backup-id.dump`, the decrypted file will be `decrypted_backup-id.dump`.
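If you are unsure which database a decrypted file came from, its first bytes usually reveal the format. This sketch checks a few well-known magic numbers; the mapping to backup types is an assumption based on the restore sections below:

```python
def identify_dump(path):
    """Guess a decrypted dump's format from its leading magic bytes."""
    with open(path, "rb") as f:
        head = f.read(5)
    if head.startswith(b"PGDMP"):
        # pg_dump custom-format archive
        return "PostgreSQL custom-format dump (restore with pg_restore)"
    if head.startswith(b"\x28\xb5\x2f\xfd"):
        # zstd frame magic number
        return "zstd-compressed file (likely a MySQL/MariaDB SQL dump)"
    if head.startswith(b"\x1f\x8b"):
        # gzip magic number
        return "gzip-compressed file"
    return "unknown (may be a plain-text SQL dump)"
```

Run it on the `decrypted_` file before picking a restore procedure from the sections below.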
Restore to database
After decryption, restore using database-specific tools:
PostgreSQL
PostgreSQL backups use built-in compression and can be restored directly:
Local database:
```shell
# Restore to local database
pg_restore -d your_database decrypted-backup.dump
```

Remote database:

```shell
# Restore to remote database
pg_restore -h hostname -p 5432 -U username -d database_name decrypted-backup.dump
```

MySQL
MySQL backups are compressed with zstd level 5 and must be decompressed before restoring.
Step 1: Decompress the backup
Use the zstd command-line tool or any compatible decompression tool (7-Zip, PeaZip, WinRAR, etc.):
```shell
# Decompress with zstd command-line tool
zstd -d decrypted-backup.sql.zst -o decrypted-backup.sql
# Or use graphical tools like 7-Zip, PeaZip, or WinRAR
```

Step 2: Restore to database

Local database:

```shell
# Restore to local database
mysql your_database < decrypted-backup.sql
```

Remote database:

```shell
# Restore to remote database
mysql -h hostname -P 3306 -u username -p database_name < decrypted-backup.sql
```

MariaDB
MariaDB backups are compressed with zstd level 5 and must be decompressed before restoring.
Step 1: Decompress the backup
Use the zstd command-line tool or any compatible decompression tool (7-Zip, PeaZip, WinRAR, etc.):
```shell
# Decompress with zstd command-line tool
zstd -d decrypted-backup.sql.zst -o decrypted-backup.sql
# Or use graphical tools like 7-Zip, PeaZip, or WinRAR
```

Step 2: Restore to database

Local database:

```shell
# Restore to local database
mariadb your_database < decrypted-backup.sql
```

Remote database:

```shell
# Restore to remote database
mariadb -h hostname -P 3306 -u username -p database_name < decrypted-backup.sql
```

MongoDB
MongoDB backups use built-in gzip compression and can be restored directly:
Local database:
```shell
# Restore to local database
mongorestore --archive=decrypted-backup.archive --gzip --db your_database
```

Remote database:

```shell
# Restore to remote database
mongorestore --host hostname:27017 --username username --password password \
  --archive=decrypted-backup.archive --gzip --db database_name
```

What if I have issues?
If you encounter any problems during the recovery process:
- Ask AI for help. AI assistants like ChatGPT, Claude or Gemini are excellent at helping with compression tools and database restore procedures. Simply describe your issue and they can guide you through the process.
- Join our community. Our developers and community members can help with your particular case.