Beau Barker edited this page Jul 28, 2025 · 47 revisions

How to implement secure JWT-based authentication in SuperStack.

An /rpc/login endpoint issues an access token, and both Caddy and PostgREST validate it. Validating in the API gateway means you can protect non-PostgREST endpoints — such as static files or custom services — using the same authentication.

1. JWT Secret

Generate a secret:

openssl rand -base64 32

📝 caddy-jwt requires the secret to be base64 encoded.
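As a quick illustrative sketch (Python standard library only, not part of the stack): 32 random bytes base64-encode to a 44-character string, and consumers configured for a base64 secret (such as PostgREST with PGRST_JWT_SECRET_IS_BASE64) decode it back to the raw key bytes before validating signatures.

```python
import base64
import secrets

# Roughly what `openssl rand -base64 32` produces: 32 random bytes,
# base64-encoded for use as JWT_SECRET.
raw = secrets.token_bytes(32)
jwt_secret = base64.b64encode(raw).decode("ascii")

# 32 bytes always encode to 44 base64 characters (including padding).
assert len(jwt_secret) == 44

# Consumers configured for a base64 secret decode it back to raw key bytes.
assert base64.b64decode(jwt_secret) == raw
```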

Put the secret in the environment file:

.env

JWT_SECRET=(your secret)

⚠️ The .env file is for development only. Never store real secrets in version control or production.

This secret will be used by Caddy, PostgREST and Postgres. Add it and other settings to the Compose file:

compose.yaml

caddy:
  environment:
    JWT_SECRET: ${JWT_SECRET:?}

postgrest:
  environment:
    PGRST_APP_SETTINGS_JWT_EXP: 3600 # PostgREST default is no expiry!
    PGRST_DB_SCHEMAS: public,auth # Add auth to your list of schemas
    PGRST_JWT_SECRET: ${JWT_SECRET:?}
    PGRST_JWT_SECRET_IS_BASE64: true

postgres:
  environment:
    JWT_SECRET: ${JWT_SECRET:?}

📝 Since auth is not the first schema listed in PGRST_DB_SCHEMAS, auth requests must include the HTTP header Content-Profile: auth.

2. Caddy

Install caddy-jwt

Build Caddy with caddy-jwt, a Caddy module that facilitates JWT authentication:

caddy/Dockerfile

FROM caddy:builder AS builder

RUN xcaddy build \
    --with github.com/ggicci/[email protected]

FROM caddy:latest

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

# Copy our Caddyfile into the image
COPY Caddyfile /etc/caddy/Caddyfile

Build the image:

docker compose build caddy

Caddyfile

Split your Caddyfile into two sections: Public (no auth) and JWT Protected (access token required):

caddy/Caddyfile

:80, :443

# --- Public ---

# PostgREST's public RPC endpoints
@auth {
  path /rpc/login /rpc/logout /rpc/refresh_token
}
handle @auth {
  reverse_proxy http://postgrest:3000
}

# PostgREST's OpenAPI endpoint
handle_path /rest/ {
  reverse_proxy http://postgrest:3000
}

# .. other public endpoints ..

# --- JWT Protected ---

route {

  jwtauth {
    sign_key {env.JWT_SECRET}
    sign_alg HS256
    from_cookies access_token
  }

  # Set the Authorization header from the Cookie header (for PostgREST)
  request_header Authorization "Bearer {cookie.access_token}"

  # Non-public PostgREST endpoints
  handle /rpc/* {
    reverse_proxy http://postgrest:3000
  }

  handle_path /rest/* {
    reverse_proxy http://postgrest:3000
  }

  # .. other private endpoints ..

}

Restart Caddy for the changes to take effect:

docker compose down caddy
docker compose up -d caddy

3. Postgres

Load the pgcrypto Extension

Add a migration script (migration filenames are just examples — choose your own naming convention):

postgres/migrations/01-extensions.sql

-- pgcrypto adds public.crypt and public.hmac, used by auth
create extension pgcrypto;

🏗 Create the Auth Schema

Create a new migration file:

postgres/migrations/02-create_auth_schema.sql

\set pgrst_jwt_secret '$JWT_SECRET'

-- Set the JWT secret in the database; even though it's already in the
-- JWT_SECRET env var, this also appears to be required
alter system set pgrst.jwt_secret = :'pgrst_jwt_secret';

begin;

-- Create auth schema and tables
create schema auth;

create table auth.user (
  username text primary key check (length(username) >= 3),
  password text not null check (length(password) < 512),
  role name not null check (length(role) < 512)
);

create table auth.refresh_token (
  id bigint generated always as identity primary key,
  created_at timestamp not null default now(),
  token text,
  username text
);

-- Enforce that roles exist in pg_roles
create function auth.check_role_exists() returns trigger
language plpgsql as $$
begin
  if not exists (select 1 from pg_roles where rolname = new.role) then
    raise foreign_key_violation using message = 'unknown database role: ' || new.role;
    return null;
  end if;
  return new;
end
$$;

create constraint trigger ensure_user_role_exists
after insert or update on auth.user
for each row execute procedure auth.check_role_exists();

-- Encrypt passwords on insert/update
create function auth.encrypt_pass() returns trigger
language plpgsql as $$
begin
  if tg_op = 'INSERT' or new.password <> old.password then
    new.password := crypt(new.password, gen_salt('bf'));
  end if;
  return new;
end
$$;

create trigger encrypt_pass
before insert or update on auth.user
for each row execute procedure auth.encrypt_pass();

create function auth.url_encode(data bytea) returns text
language sql immutable as $$
  select translate(encode(data, 'base64'), E'+/=\n', '-_');
$$;

create or replace function auth.sign_raw(
  payload json,
  secret_base64 text, -- must be base64-encoded
  algorithm text default 'HS256'
) returns text
language plpgsql immutable as $$
declare
  alg text;
  clean_secret bytea;
begin
  -- Determine algorithm
  alg := case algorithm
    when 'HS256' then 'sha256'
    when 'HS384' then 'sha384'
    when 'HS512' then 'sha512'
    else 'sha256'
  end;

  begin
    clean_secret := decode(secret_base64, 'base64');
  exception when others then
    raise exception 'Invalid base64-encoded secret';
  end;

  return (
    with
      header as (
        select auth.url_encode(convert_to('{"alg":"' || algorithm || '","typ":"JWT"}','utf8')) as data
      ),
      payload_enc as (
        select auth.url_encode(convert_to(payload::text,'utf8')) as data
      ),
      signables as (
        select header.data || '.' || payload_enc.data as data from header, payload_enc
      )
    select
      signables.data || '.' ||
      auth.url_encode(public.hmac(convert_to(signables.data,'utf8'), clean_secret, alg))
    from signables
  );
end;
$$;

-- Generate JWT access tokens
create function auth.generate_access_token(
  role_ text, user_ text, secret text
) returns text
language plpgsql as $$
declare
  access_token text;
begin
  select auth.sign_raw(row_to_json(r), secret)
  into access_token
  from (
    select
      role_ as role,
      user_ as username,
      user_ as sub,
      extract(epoch from now())::integer + 600 as exp
  ) r;
  return access_token;
end;
$$;

-- Login endpoint
create function auth.login(user_ text, pass text) returns void
language plpgsql security definer as $$
declare
  access_token text;
  headers text;
  refresh_token text;
  role_ name;
begin
  select role into role_
  from auth.user
  where username = user_
    and password = public.crypt(pass, password);

  if role_ is null then
    raise sqlstate 'PT401' using message = 'Invalid user or password';
  end if;

  select auth.generate_access_token(role_, user_, current_setting('pgrst.jwt_secret')) into access_token;

  refresh_token := public.gen_random_uuid();
  insert into auth.refresh_token (token, username) values (refresh_token, user_);

  headers := json_build_array(
    json_build_object('Set-Cookie', 'access_token=' || access_token || '; Path=/; HttpOnly;'),
    json_build_object('Set-Cookie', 'refresh_token=' || refresh_token || '; Path=/rpc/refresh_token; HttpOnly;')
  )::text;
  perform set_config('response.headers', headers, true);
end;
$$;

-- Logout endpoint
create function auth.logout() returns void
language plpgsql security definer as $$
declare headers text;
begin
  headers := '[' ||
    '{"Set-Cookie": "access_token=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT;"},' ||
    -- the refresh cookie's path must match the path it was set with, or browsers keep it
    '{"Set-Cookie": "refresh_token=; path=/rpc/refresh_token; expires=Thu, 01 Jan 1970 00:00:00 GMT;"}' ||
  ']';
  perform set_config('response.headers', headers, true);
end;
$$;

-- Refresh token endpoint
create function auth.refresh_token() returns void
language plpgsql security definer as $$
declare
  user_ text;
  access_token text;
  headers text;
  refresh_token_ text;
  role_ text;
begin
  refresh_token_ := current_setting('request.cookies', true)::json->>'refresh_token';

  select username into user_
  from auth.refresh_token
  where token = refresh_token_
    and created_at > now() - interval '30 days';

  if user_ is null then
    raise sqlstate 'PT401' using message = 'Invalid or expired refresh token';
  end if;

  select role into role_ from auth.user where username = user_;
  if role_ is null then
    raise sqlstate 'PT401' using message = 'Unknown user';
  end if;

  select auth.generate_access_token(role_, user_, current_setting('pgrst.jwt_secret')) into access_token;

  headers := '[{"Set-Cookie": "access_token=' || access_token || '; Path=/; HttpOnly;"}]';
  perform set_config('response.headers', headers, true);
end;
$$;

commit;
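The SQL signer above can be sanity-checked from outside the database: the same HS256 construction is reproducible with Python's standard library. This is a minimal illustrative sketch, not part of the stack; the all-zero secret is a made-up value. Note that JSON member order matters for signing: row_to_json preserves the SQL row's field order, and Python dicts preserve insertion order, so the claim order below mirrors auth.generate_access_token.

```python
import base64
import hashlib
import hmac
import json
import time

def url_encode(data: bytes) -> str:
    # Equivalent of auth.url_encode: base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def sign_raw(payload: dict, secret_base64: str, algorithm: str = "HS256") -> str:
    digest = {"HS256": hashlib.sha256,
              "HS384": hashlib.sha384,
              "HS512": hashlib.sha512}[algorithm]
    secret = base64.b64decode(secret_base64)  # the secret is stored base64-encoded
    header = url_encode(json.dumps({"alg": algorithm, "typ": "JWT"},
                                   separators=(",", ":")).encode())
    body = url_encode(json.dumps(payload, separators=(",", ":")).encode())
    signable = f"{header}.{body}"
    signature = url_encode(hmac.new(secret, signable.encode(), digest).digest())
    return f"{signable}.{signature}"

# Claims shaped like auth.generate_access_token's output (made-up secret)
secret = base64.b64encode(b"\x00" * 32).decode("ascii")
claims = {"role": "basic_subscriber", "username": "demo",
          "sub": "demo", "exp": int(time.time()) + 600}
token = sign_raw(claims, secret)
header_part, payload_part, signature_part = token.split(".")
```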

👮 Create Roles and Grant Permissions

Add a migration script for roles:

postgres/migrations/04-roles.sql

create role basic_subscriber;

Add another migration script for grants:

postgres/migrations/05-grants.sql

begin;

-- Anon can access the auth functions
grant usage on schema auth to anon;
grant execute on function auth.login(text, text) to anon;
grant execute on function auth.logout() to anon;
grant execute on function auth.refresh_token() to anon;

-- Setup initial permissions for basic_subscriber
grant basic_subscriber to authenticator;
-- Grant more privileges here
-- grant usage on schema api to basic_subscriber;
-- grant select, insert, update on api.customer to basic_subscriber;

commit;

▶️ Run the Migrations

bin/postgres migrate

All done.

✅ How to use it

This section will show how authentication works, using curl commands.

For testing purposes, seed a demo user.

Create a directory for seed data:

mkdir postgres/seed

Add a demo user:

postgres/seed/seed_demo.sql

insert into auth.user (username, password, role) values (
  'demo', 'demo', 'basic_subscriber'
);

Run the script:

bin/postgres psql < postgres/seed/seed_demo.sql

Login

Authenticate and retrieve tokens:

curl --show-headers -X POST \
  -H 'Content-Profile: auth' \
  -H 'Content-Type: application/json' \
  --data '{"user_": "demo", "pass": "demo"}' \
  http://localhost:8000/rpc/login

If successful, the response headers will include:

HTTP/1.1 204 No Content
Set-Cookie: access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYmFzaWNfc3Vic2NyaWJlciIsInVzZXJuYW1lIjoiZGVtbyIsImV4cCI6MTc1MTY3NDA5MX0.6SRT0g1BlqMAkNWxk5VuAIuCHuk03EtaOnjO5hoVtpM; Path=/; HttpOnly;
Set-Cookie: refresh_token=028caa10-d087-41d8-8d8c-62d60bb419b5; Path=/rpc/refresh_token; HttpOnly;

The access_token is used for authenticated requests. The refresh_token is used to request a new access token when it expires.

Browser behavior: The cookies are stored and sent on subsequent requests. This happens automatically.

PostgREST behavior: PostgREST itself does not read cookies, so we've configured Caddy to copy the access_token from the cookie into the Authorization header.
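That rewrite is simple enough to sketch. The snippet below (standard library only, with made-up token values) mirrors what the request_header directive in the Caddyfile does; it is illustrative, not part of the stack.

```python
from http.cookies import SimpleCookie

def cookie_to_authorization(cookie_header: str) -> str:
    # What `request_header Authorization "Bearer {cookie.access_token}"` does:
    # lift access_token out of the Cookie header into a Bearer token.
    jar = SimpleCookie()
    jar.load(cookie_header)
    return f"Bearer {jar['access_token'].value}"

auth_header = cookie_to_authorization("access_token=eyJhbGciOi.example.token; theme=dark")
```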

Make Authenticated Requests

curl --show-headers -X GET \
  -H "Cookie: access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYmFzaWNfc3Vic2NyaWJlciIsInVzZXJuYW1lIjoiZGVtbyIsImV4cCI6MTc1MTY3NDA5MX0.6SRT0g1BlqMAkNWxk5VuAIuCHuk03EtaOnjO5hoVtpM" \
  http://localhost:8000/rest/task

Because of Caddy’s configuration, you don’t need to add an Authorization header manually; Caddy handles that.
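To inspect what Caddy and PostgREST actually validate, you can decode the access token's payload locally. This only decodes; it does not verify the signature. A short sketch using the sample token from the login response above:

```python
import base64
import json

# Sample access token from the login response above
token = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
         "eyJyb2xlIjoiYmFzaWNfc3Vic2NyaWJlciIsInVzZXJuYW1lIjoiZGVtbyIsImV4cCI6MTc1MTY3NDA5MX0."
         "6SRT0g1BlqMAkNWxk5VuAIuCHuk03EtaOnjO5hoVtpM")

def decode_segment(segment: str) -> dict:
    # base64url segments drop their padding; restore it before decoding
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

header_seg, payload_seg, _signature = token.split(".")
claims = decode_segment(payload_seg)
# claims -> {"role": "basic_subscriber", "username": "demo", "exp": 1751674091}
```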

Refresh token

The access token expires after 10 minutes (auth.generate_access_token sets the exp claim to now() + 600 seconds).

Use the refresh token to obtain a new access token:

curl --show-headers -X POST \
  -H 'Content-Profile: auth' \
  -H 'Cookie: refresh_token=c1d54797-ecfa-4ecb-a6dc-bb4ff2ef803a' \
  http://localhost:8000/rpc/refresh_token

Successful response:

HTTP/1.1 204 No Content
Set-Cookie: access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYmFzaWNfc3Vic2NyaWJlciIsInVzZXJuYW1lIjoiZGVtbyIsImV4cCI6MTc1MTY3NTAzMX0.kPvJXJNiTo1TZEvShhRFWS6qLfMUqd_AyKrjk7Gs5Io; Path=/; HttpOnly;

Logout

Logout clears the cookies:

curl --show-headers -X POST \
  -H 'Content-Profile: auth' \
  http://localhost:8000/rpc/logout

Response:

HTTP/1.1 204 No Content
Set-Cookie: access_token=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT;
Set-Cookie: refresh_token=; path=/rpc/refresh_token; expires=Thu, 01 Jan 1970 00:00:00 GMT;

🔎 Debugging

Enable request/response logging in Caddy to troubleshoot authentication issues.

Add this to the top of caddy/Caddyfile:

{
  servers {
    log_credentials
  }
  log {
    output stdout
    format json
  }
}

This will include headers (including tokens) in the logs.
