Local TLS Trust on Ubuntu 26.04: Caddy, Python, and Chrome in a Sovereign Dev Stack
A practical record of the problems we encountered establishing local TLS trust across two Ubuntu machines running Caddy, Python, and Chrome — and what is required to fix each layer correctly.
Why local TLS is not optional
RiskNodes is sovereign by design: code, data, and assessment records remain inside the client’s physical perimeter. For regulated industries — banking, defence, government — this is not product positioning, but a legal requirement.
Sovereign-first deployments mean local services. Local services need TLS. And local TLS, as it turns out, is considerably more involved than it appears. The system trust store, the browser, and Python’s HTTP libraries are three separate layers, and they do not share trust configuration. Getting them aligned requires specific steps at each layer.
This post records the problems we encountered whilst building and testing the RiskNodes development stack across two Ubuntu 26.04 machines — one running a Caddy reverse proxy and OIDC provider, the other running the application and test runner. The fixes range from Linux system fundamentals to Python runtime behaviour to Caddy PKI quirks.
Tested environment
- OS: Ubuntu 26.04 on both machines
- Reverse proxy and OIDC provider: Caddy (system install, running as the caddy user)
- Application runtime: Python 3.12 with httpx and truststore
- Browsers tested: Chrome on Linux
1. The Linux system trust store
On Debian and Ubuntu, trusted CA certificates are aggregated into a single file:
/etc/ssl/certs/ca-certificates.crt
To add a CA, place a PEM-format .crt file (containing exactly one certificate) in:
/usr/local/share/ca-certificates/
Then rebuild the bundle:
sudo update-ca-certificates
This creates symlinks in /etc/ssl/certs/ and appends the certificate to ca-certificates.crt. Tools that use system OpenSSL — curl, wget, and Python’s ssl module when pointed at the system bundle — will then trust the CA.
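A quick sanity check after rebuilding is to count the PEM blocks in the bundle before and after; the count should grow by one per installed CA. A minimal sketch (the helper name is ours):

```python
def count_certs(bundle="/etc/ssl/certs/ca-certificates.crt"):
    """Count PEM certificate blocks in a CA bundle file."""
    with open(bundle) as f:
        return f.read().count("-----BEGIN CERTIFICATE-----")

# Run once before and once after `sudo update-ca-certificates`;
# the difference tells you how many certificates were actually added.
```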
Gotcha: update-ca-certificates is idempotent on filename, not content
The tool tracks its state via symlinks. If a symlink named my-ca.pem already exists in /etc/ssl/certs/, the tool reports 0 added, 0 removed even if the underlying certificate has changed entirely. This caused several confusing failures when we regenerated Caddy’s root CA.
The fix is to delete stale symlinks before copying the new certificate:
sudo find /etc/ssl/certs -name 'my-ca*' -type l -delete
sudo cp /path/to/new-root.crt /usr/local/share/ca-certificates/my-ca.crt
sudo update-ca-certificates
# Should report: 1 added, 0 removed
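To detect this situation before it causes confusion, compare the installed symlink's content against the freshly generated certificate by hash. A hypothetical helper, assuming both paths are readable:

```python
import hashlib


def symlink_is_stale(installed, new_cert):
    """Return True if the installed certificate (possibly a symlink in
    /etc/ssl/certs) differs in content from the newly generated one."""
    def digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()
    return digest(installed) != digest(new_cert)
```

If this returns True for your CA's symlink, delete the stale link and rerun update-ca-certificates as shown above.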
Gotcha: grep cannot verify a certificate is present in the bundle
Certificate subjects are base64-encoded PEM data, not plain text. Searching ca-certificates.crt for a human-readable name will always return nothing, even if the certificate is present. Use openssl verify instead:
openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt /path/to/cert.crt
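For a programmatic check, the stdlib ssl module can load the bundle and expose decoded certificate subjects, which is exactly what grep cannot see. A sketch (the helper name is ours):

```python
import ssl


def ca_in_bundle(common_name, bundle="/etc/ssl/certs/ca-certificates.crt"):
    """Return True if a CA with the given commonName is present in the bundle."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile=bundle)
    for cert in ctx.get_ca_certs():
        # subject is a tuple of RDNs, each a tuple of (name, value) pairs
        subject = {k: v for rdn in cert.get("subject", ()) for k, v in rdn}
        if subject.get("commonName") == common_name:
            return True
    return False
```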
2. Chrome on Linux does not use the system bundle
This is the most surprising behaviour in the stack. Unlike macOS and Windows, Chrome on Linux does not read from /etc/ssl/certs/ca-certificates.crt. It maintains its own certificate database using NSS (Network Security Services), stored at:
~/.pki/nssdb
Updating the system bundle has no effect on Chrome. The certificate must be added to the NSS database separately using certutil:
sudo apt install libnss3-tools
# Initialise the database if it does not exist
mkdir -p ~/.pki/nssdb
certutil -d sql:~/.pki/nssdb -N --empty-password
# Remove any existing entry with the same name, then add the new certificate
certutil -d sql:~/.pki/nssdb -D -n "My Local CA" 2>/dev/null || true
certutil -d sql:~/.pki/nssdb -A -n "My Local CA" -t "CT,," \
-i /usr/local/share/ca-certificates/my-ca.crt
Chrome must be fully quit — not merely the window closed — and reopened for the change to take effect.
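To confirm the import worked without clicking through Chrome's settings, certutil's exit code can be checked from a script; it is non-zero when the nickname is absent. A sketch assuming certutil from libnss3-tools is on the PATH (the helper name is ours):

```python
import os
import subprocess


def nss_has_cert(nickname, db=None):
    """Return True if certutil finds `nickname` in the NSS database."""
    db = db or os.path.expanduser("~/.pki/nssdb")
    result = subprocess.run(
        ["certutil", "-d", f"sql:{db}", "-L", "-n", nickname],
        capture_output=True,  # suppress the certificate dump
    )
    return result.returncode == 0
```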
3. Python’s HTTP libraries do not use the system bundle by default
Python’s ssl module can be pointed at the system bundle, but httpx, like requests, ships its own CA certificates via certifi and does not consult the system bundle by default. Per-library environment overrides exist (for example REQUESTS_CA_BUNDLE for requests), but they are library-specific and easy to misconfigure.
Consequently, even after installing Caddy’s root CA into the system trust store, our Python test runner rejected TLS connections to local services.
Solution: truststore
The truststore package patches Python’s ssl.SSLContext at the process level so that all SSL connections — httpx, urllib, requests, aiohttp — use the OS trust store. No per-call-site changes are needed.
pip install truststore
import truststore
truststore.inject_into_ssl()
Call this as early as possible, before any SSL connections are made. In an application, the startup module is the right place. In a test suite, the top of conftest.py works well.
One important point: truststore delegates to the OS bundle — it does not add trust itself. If the system bundle is stale or missing the CA, Python connections will still fail. Both steps are required: update the system bundle with update-ca-certificates, and call truststore.inject_into_ssl() at runtime.
4. Caddy’s two-tier local PKI
When tls internal is configured, Caddy generates a local PKI consisting of:
- A root CA — long-lived, self-signed
- An intermediate CA — short-lived, signed by the root
- Leaf certificates for each domain — very short-lived, signed by the intermediate
On a system install on Ubuntu, these are stored under the caddy user’s home directory:
/var/lib/caddy/.local/share/caddy/pki/ # root and intermediate
/var/lib/caddy/.local/share/caddy/certificates/ # leaf certificates
Gotcha: the root and intermediate are cached separately
When the PKI is reset by deleting only pki/authorities/local, Caddy regenerates the root CA but the old intermediate survives in certificates/. Caddy continues to serve the old intermediate in TLS handshakes — which is now signed by a different root. The new root in the trust store cannot verify this chain, and connections fail with certificate errors that point to the wrong place.
Always wipe both directories together:
sudo systemctl stop caddy
sudo rm -rf /var/lib/caddy/.local/share/caddy/pki/
sudo rm -rf /var/lib/caddy/.local/share/caddy/certificates/
sudo systemctl start caddy
Gotcha: the default intermediate lifetime is seven days
Caddy’s intermediate CA expires after seven days by default. When it expires, Caddy renews it against the existing root — no trust changes are needed, and this is normally transparent. The problem arises if the PKI has been manually reset in the meantime: a short default lifetime means resets happen frequently, and each reset requires updating the trust stores on every machine in the network.
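To see how long the intermediate currently on disk has left, read its notAfter field. A sketch that shells out to the openssl CLI (assumed installed; the helper name is ours):

```python
import subprocess
from datetime import datetime, timezone


def cert_not_after(path):
    """Return the expiry time (UTC) of a PEM certificate via `openssl x509`."""
    out = subprocess.run(
        ["openssl", "x509", "-in", path, "-noout", "-enddate"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # Output looks like: notAfter=Apr 25 12:00:00 2026 GMT
    stamp = out.split("=", 1)[1]
    return datetime.strptime(stamp, "%b %d %H:%M:%S %Y %Z").replace(
        tzinfo=timezone.utc
    )
```

Point it at the intermediate under /var/lib/caddy/.local/share/caddy/pki/ to decide whether a renewal (or a lifetime change) is imminent.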
Configure a longer intermediate lifetime in the global options block of your Caddyfile:
{
pki {
ca local {
intermediate_lifetime 43800h
}
}
}
service.local {
tls internal
reverse_proxy 127.0.0.1:9999
}
Validate the configuration before reloading:
caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy
5. Every machine that makes TLS connections needs the CA trusted
This is straightforward once stated, but easy to overlook in a multi-machine setup. In our case:
- The test runner on a6 makes TLS connections to the OIDC provider on a9.
- The application server on a9 also makes outbound TLS connections to its own OIDC provider for token exchange and JWKS verification.
Both machines need the Caddy root CA in their system bundles and, where Chrome is used, in their NSS databases. Trusting the CA only on a6 left the backend server unable to verify TLS during the OIDC callback, producing opaque 500 errors that were not immediately traceable to the missing certificate.
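A quick smoke test, runnable on each machine, is to attempt a verified handshake with the stdlib ssl module, which (unlike certifi-based clients) does read the system bundle. A hypothetical helper:

```python
import socket
import ssl


def tls_ok(host, port=443):
    """Attempt a verified TLS handshake using the system trust store.
    Returns None on success, or the error message on failure."""
    ctx = ssl.create_default_context()  # loads the OS default CA bundle
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return None
    except (ssl.SSLError, OSError) as exc:
        return str(exc)
```

Run it against every local service hostname from every machine; a CERTIFICATE_VERIFY_FAILED message here is far easier to interpret than an opaque 500 from the OIDC callback.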
The practical summary
| Layer | Mechanism | Uses system bundle? | Required action |
|---|---|---|---|
| curl / openssl | System OpenSSL | Yes | update-ca-certificates |
| Python httpx | certifi (bundled) | No | truststore.inject_into_ssl() |
| Chrome | NSS (~/.pki/nssdb) | No | certutil -A |
| Caddy PKI reset | Caddy internal | — | Wipe both pki/ and certificates/ |
Relevance to the broader stack
This technical detail reflects the principle that motivates RiskNodes: trust must be explicitly established and continuously verified, not assumed.
A TLS certificate is a claim — that this service is who it says it is, and that its communications can be trusted. The mechanisms above exist to ensure that every part of the stack evaluates that claim against the same evidence. When they are out of alignment — when the system bundle is updated but Chrome’s NSS database is not, or when the intermediate CA has been renewed against a new root that a Python library has never heard of — the result is opaque failures and eroded confidence in the infrastructure.
For organisations building sovereign AI deployments — where the entire stack, from inference engine to governance tooling, must operate inside the perimeter — getting this alignment right is a prerequisite, not an afterthought. The alternative is a development environment that is unreliable enough to become an obstacle to the work it is meant to support.
The fixes documented here are not complicated. They are, however, easy to miss, and missing any one of them produces failures that look unrelated to the actual cause. We record them here because we could have saved several hours with a document like this.