Introduction
There’s a really useful utility that I’ve used at a few places I’ve worked called Credstash. It’s a persistent, highly available key/value store meant for things like credentials. It’s very ergonomic to use and has both a Python API and a CLI, so storing and accessing credentials is straightforward. However, there is a big catch: it depends on DynamoDB! For those not familiar with the AWS ecosystem, DynamoDB is a managed AWS key/value database, similar to Cassandra.

The reason I built this tool is that I have a fairly extensive home compute lab, where I run a K8s cluster that deploys services, data pipelines, etc. I wanted something like Credstash, but I couldn’t find anything comparable, and I obviously didn’t want an AWS dependency for something like this, if for no other reason than the cost. Additionally, I use a cloud provider for bucket storage. Most of the time I’m accessing those objects from within my house, so I can’t rely on things like IAM roles for permissions and have to use API keys instead. I added several convenience features to accommodate this feature set, including S3-compatible client creation as well as database connection string and SQLAlchemy connection helpers to make my life easier.
How it works
When you run:
mattstash setup
It generates a KeePass database at ~/.credentials/mattstash.kdbx, so your credentials are encrypted at rest. NOTE: By default, a sidecar file is placed next to it at ~/.credentials/.mattstash.txt that contains the password to your database. You may either leave it there, if having the password next to the database is sufficiently secure for you, or you may remove that file and supply the password through an environment variable instead.
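If you go the environment-variable route, you can feed the password to the Python API yourself. A minimal sketch, assuming MattStash accepts the password keyword shown later in this post (the variable name MATTSTASH_PASSWORD is my own choice for this example, not a built-in):
import os
from mattstash import MattStash
# MATTSTASH_PASSWORD is a name chosen for this example; export it yourself, e.g.
#   export MATTSTASH_PASSWORD="$(cat ~/.credentials/.mattstash.txt)"
stash = MattStash(password=os.environ["MATTSTASH_PASSWORD"])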
Basic CLI Commands
The primary function of this tool is to serve as a key:value store. So running:
mattstash put "name-of-your-token" --value "123"
will create an entry named “name-of-your-token” with a value of 123, which can be retrieved with:
mattstash get "name-of-your-token" --show-password
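The same round trip works from Python. A minimal sketch, assuming a get method mirrors the CLI (the method name and return shape are my assumption; only put is shown later in this post):
from mattstash import MattStash

stash = MattStash()
stash.put("name-of-your-token", value="123")
# Assumption: get() mirrors the CLI's get and returns the stored entry.
secret = stash.get("name-of-your-token")
print(secret)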

Complex “Value” Store
In addition to being a simple key:value store, MattStash exposes all of the fields a KeePass entry supports (username, password, url, notes, and tags).
mattstash put "production-db" --fields \
--username "app_user" \
--password "secure_db_pass" \
--url "db.company.com:5432" \
--notes "Production PostgreSQL" \
--tag "production" \
--tag "database"
In the CLI, running a get on this entry will print all of the fields (except the password, which is masked by default unless you pass --show-password, as demonstrated above).
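From Python, the same entry should come back with those fields attached. A rough sketch, assuming the returned object exposes the KeePass fields as attributes (the attribute names are my guess at the shape, not confirmed API):
from mattstash import MattStash

stash = MattStash()
entry = stash.get("production-db")  # assumption: get() returns the full entry
# Assumption: fields are exposed as attributes mirroring the KeePass fields.
print(entry.username, entry.url)
print(entry.notes, entry.tags)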

Versioned Credential Storage
All values added with the “put” command are retained, using existing KeePass functionality with a bit of extra logic on top for 10-digit zero-padded version numbers, similar to Credstash. Truthfully, I never got versions working in Credstash, but I also never put much effort into it. I suspect it may have been misconfigured where I’ve worked, so there may be slight deviations in how this is handled; this MAY NOT be a drop-in replacement for Credstash.
mattstash put "api-key" --value "key-v1-initial"
mattstash put "api-key" --value "key-v2-updated"
mattstash put "api-key" --value "key-v3-rotated"
mattstash versions "api-key"
# Shows: api-key versions: 0000000001, 0000000002, 0000000003 (latest)
# Retrieve specific version
mattstash get "api-key" --version 1 --show-password
# Shows: api-key: key-v1-initial
# Get latest
mattstash get "api-key" --show-password
# Shows: api-key: key-v3-rotated
# Explicit version number
mattstash put "api-key" --value "key-v5-explicit" --version 5
mattstash versions "api-key"
# Shows the version list jumping to 0000000005 (latest)
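The version machinery is presumably reachable from Python as well. A sketch, assuming put and get accept a version keyword mirroring the CLI flag (an assumption on my part):
from mattstash import MattStash

stash = MattStash()
stash.put("api-key", value="key-v1-initial")
stash.put("api-key", value="key-v2-updated")
# Assumption: version= mirrors the CLI's --version flag.
old = stash.get("api-key", version=1)
latest = stash.get("api-key")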

S3 Usage
If you conform to an opinionated “put” structure for S3 credentials, you can use some built-ins to obtain a boto3 client directly, rather than retrieving the credentials and injecting them back into a boto3 call in your codebase.
mattstash put "aws-s3" --fields \
--username "AKIA..." \
--password "secret-access-key" \
--url "https://s3.amazonaws.com" \
--notes "AWS S3 production account"
# Test connection
mattstash s3-test "backup-storage" --bucket "daily-backups"
On the Python side, you can then do:
from mattstash import MattStash
# Initialize with defaults
stash = MattStash()
# Custom database path
stash = MattStash(path="/path/to/custom.kdbx", password="mypassword")
# Get S3 client
s3 = stash.get_s3_client("aws-s3")
# Use the client
s3.upload_file('local.txt', 'my-bucket', 'remote.txt')
# List buckets
buckets = s3.list_buckets()
NOTE: You are not forced to use the CLI to put credentials into the KeePass database. You can add them with a standard KeePass client, or via the Python API:
result = stash.put("api-token", value="sk-123456")
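For entries with more than a bare value, I’d expect put to accept the other KeePass fields as keyword arguments mirroring the CLI flags; a sketch under that assumption (the keyword names are guesses, not confirmed API):
result = stash.put(
    "production-db",
    username="app_user",        # assumed keyword, mirrors --username
    password="secure_db_pass",  # assumed keyword, mirrors --password
    url="db.company.com:5432",  # assumed keyword, mirrors --url
    notes="Production PostgreSQL",
    tags=["production", "database"],  # assumed keyword, mirrors --tag
)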
Database Usage
There is a similar workflow for databases:
mattstash put "production-db" --fields \
--username "app_user" \
--password "secure_db_pass" \
--url "db.company.com:5432" \
--notes "Production PostgreSQL" \
--tag "production" \
--tag "database"
Which will allow you to generate SQLAlchemy connection URLs:
import mattstash
from sqlalchemy import create_engine

# Generate a database URL with a masked password (for logging/display)
masked_db_url = mattstash.get_db_url(
    "production-db",
    database="myapp_production",
    driver="psycopg",  # modern psycopg3 driver
)
print(f"Database URL: {masked_db_url}")
# Output: "postgresql+psycopg://app_user:*****@db.company.com:5432/myapp_production"

# Generate an unmasked URL for the actual connection
postgres_url = mattstash.get_db_url(
    "production-db",
    database="myapp_production",
    driver="psycopg",  # or "psycopg2", "asyncpg"
    mask_password=False,  # needed for actual connections
)

# Create a SQLAlchemy engine with connection pooling
postgres_engine = create_engine(
    postgres_url,  # <- This is where the entry is injected.
    pool_size=10,
    max_overflow=20,
    pool_timeout=30,
    pool_recycle=3600,
    echo=False,  # set to True for SQL debugging
)
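From here the engine behaves like any other SQLAlchemy engine; a quick smoke test to confirm the credentials actually resolve:
from sqlalchemy import text

# Round-trip a trivial query to verify the pooled connection works
with postgres_engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())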