Database Overview
pidgn_db is the database layer for the pidgn web framework. It provides a unified API for SQLite and PostgreSQL with compile-time schema definitions, type-safe repository operations, a composable query builder, and connection pooling.
Features
- Dual backend — SQLite and PostgreSQL through the same API surface
- Compile-time schema — zero-cost struct-to-table mapping with automatic DDL generation
- Repository pattern — type-safe CRUD with `all`, `one`, `get`, `insert`, `update`, `delete`, and `count`
- Query builder — composable fluent API with where clauses, joins, group by, aggregates, ordering, and pagination
- Connection pooling — thread-safe pool with configurable size and timeouts
- Transactions — with commit, rollback, and nested savepoints
- Changesets — field validation and change tracking for form submissions
- Dialect-aware SQL — `?` placeholders for SQLite, `$N` for PostgreSQL
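The transaction flow can be sketched roughly as follows, using the `User` schema and `repo` from the Quick start below. This is a hedged illustration, not the documented API: the entry point (`repo.begin` here) and the `savepoint` call are assumed names; the library only promises begin, commit, rollback, and nested savepoints.

```zig
// Hypothetical sketch — `begin`, `savepoint`, `commit`, and `rollback`
// are assumed method names; see the Transactions page for the real API.
var tx = try repo.begin();
errdefer tx.rollback() catch {};

_ = try repo.insert(User, .{ .id = 0, .name = "Alice", .email = "alice@example.com" }, allocator);

// Nested savepoint: the inner unit can roll back without losing the outer work.
var sp = try tx.savepoint("inner");
_ = try repo.insert(User, .{ .id = 0, .name = "Bob", .email = "bob@example.com" }, allocator);
try sp.commit();

try tx.commit();
```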
Supported backends
| Backend | Status | Build flag | Notes |
|---|---|---|---|
| SQLite | Stable | Enabled by default | Vendored amalgamation, no system lib |
| PostgreSQL | Stable | `-Dpostgres=true` | Requires libpq |
Architecture
pidgn_db follows a layered architecture with three primary abstractions:
```
Schema ---------------> Query ----------------> Repo

struct-to-table         fluent query builder    executes against pool
mapping at comptime     produces SQL + binds    returns typed results
```

- Schema — maps a Zig struct to a database table at compile time. Generates column lists, INSERT placeholders, and CREATE TABLE SQL for both dialects.
- Query — builds SELECT statements with where clauses, joins, ordering, grouping, and pagination. Produces dialect-aware SQL and bind values.
- Repo — executes queries against a connection pool and maps result rows back into typed Zig structs. Provides CRUD convenience methods.
Supporting these are Pool (manages a fixed set of connections), Transaction (begin/commit/rollback), and Changeset (validates user input before persisting).
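To make the Changeset role concrete, here is a hypothetical sketch of validating form input before persisting. Every method name in it (`cast`, `validateRequired`, `isValid`, `changes`) is an assumption for illustration, not the documented API — see the Changesets page for the real surface.

```zig
// Hypothetical shape of changeset-based validation before persisting.
var cs = pidgn_db.Changeset(User).init(allocator);
defer cs.deinit();

try cs.cast(form_params, &.{ "name", "email" }); // copy only permitted fields
try cs.validateRequired(&.{ "name", "email" }); // record errors instead of failing

if (cs.isValid()) {
    // Persist only when validation passed.
    _ = try repo.insert(User, cs.changes(), allocator);
}
```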
Adding the dependency
Pick whichever layout matches your setup. The `build.zig` import is the same either way.
Useful when pidgn_db lives next to your app on disk (monorepo, fork, or local checkout).
```zig
.dependencies = .{
    .pidgn_db = .{
        .path = "../pidgn_db",
    },
},
```

Fetches pidgn_db from GitHub and pins it to a specific commit. The hash is a content-addressable digest that Zig verifies on fetch.
```zig
.dependencies = .{
    .pidgn_db = .{
        .url = "git+https://github.com/seemsindie/pidgn_db#964b707fbfdea26f75f5fee500317c2cf22c61ac",
        .hash = "pidgn_db-0.3.1-beta.1-0vqMpvi2mQBpxrPOrjYILyrNFRXRFHMHuEdoC0SvQ6rK",
    },
},
```

Replace the commit SHA after `#` with whatever you want to pin to; the hash comes from Zig’s package cache.
To find the right hash, leave the `hash = "..."` line out (or paste any string), run `zig build` once, and Zig will print the correct hash in the error message:
```
error: hash mismatch: manifest declares <old> but the fetched package has <new>
```

Paste the `<new>` value back into `build.zig.zon`. The workspace ships a helper script `zhash` that automates this — see Workspace tooling.
Then in your `build.zig`, import the module:
```zig
const pidgn_db_dep = b.dependency("pidgn_db", .{
    .target = target,
    .optimize = optimize,
});
exe.root_module.addImport("pidgn_db", pidgn_db_dep.module("pidgn_db"));
```

To enable PostgreSQL support, pass `-Dpostgres=true` when building:

```sh
zig build -Dpostgres=true
```

Quick start
1. Define a schema

   ```zig
   const pidgn_db = @import("pidgn_db");

   pub const User = struct {
       id: i64,
       name: []const u8,
       email: []const u8,
       inserted_at: i64 = 0,
       updated_at: i64 = 0,

       pub const Meta = pidgn_db.Schema.define(@This(), .{
           .table = "users",
           .primary_key = "id",
           .timestamps = true,
       });
   };
   ```
2. Create a connection pool (SQLite or PostgreSQL)

   ```zig
   // SQLite
   var pool = try pidgn_db.SqlitePool.init(.{
       .size = 5,
       .connection = .{ .database = "myapp.db" },
   });
   defer pool.deinit();
   ```

   ```zig
   // PostgreSQL
   var pool = try pidgn_db.PgPool.init(.{
       .size = 5,
       .connection = .{ .database = "postgresql://localhost/myapp" },
   });
   defer pool.deinit();
   ```
3. Initialize a repository

   ```zig
   var repo = pidgn_db.SqliteRepo.init(&pool);
   ```
4. Insert a record

   ```zig
   var user = try repo.insert(User, .{
       .id = 0,
       .name = "Alice",
       .email = "alice@example.com",
   }, allocator);
   defer pidgn_db.freeOne(User, &user, allocator);
   ```
5. Query records

   ```zig
   const query = pidgn_db.Query(User).init()
       .where("name", .eq, "Alice")
       .orderBy("inserted_at", .desc)
       .limit(10);
   const users = try repo.all(User, query, allocator);
   defer pidgn_db.freeAll(User, users, allocator);
   ```
Connection pool configuration
| Option | Type | Default | Description |
|---|---|---|---|
| `size` | `u16` | `5` | Number of connections in the pool |
| `connection` | `Config` | Backend default | Backend-specific connection config |
| `checkout_timeout_iterations` | `u32` | `100_000` | Spin iterations before checkout fails |
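Combining the options above into one pool config (the values here are illustrative, not recommendations):

```zig
var pool = try pidgn_db.SqlitePool.init(.{
    .size = 10, // hold 10 connections instead of the default 5
    .checkout_timeout_iterations = 50_000, // give up on checkout sooner under contention
    .connection = .{ .database = "myapp.db" },
});
defer pool.deinit();
```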
SQLite connection options
| Option | Type | Default | Description |
|---|---|---|---|
| `database` | `[:0]const u8` | `":memory:"` | Database file path |
| `busy_timeout_ms` | `c_int` | `5000` | Busy timeout in milliseconds |
| `enable_wal` | `bool` | `true` | Enable WAL journal mode |
| `pragmas` | `[]const [:0]const u8` | `&.{"PRAGMA foreign_keys = ON"}` | PRAGMA statements to run on connect |
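A pool config that exercises the SQLite-specific options above. The extra `PRAGMA synchronous = NORMAL` is an illustrative addition, not a library default:

```zig
var pool = try pidgn_db.SqlitePool.init(.{
    .size = 5,
    .connection = .{
        .database = "myapp.db",
        .busy_timeout_ms = 10_000, // wait up to 10 s when the database is locked
        .enable_wal = true, // WAL mode: readers don't block the writer
        .pragmas = &.{
            "PRAGMA foreign_keys = ON",
            "PRAGMA synchronous = NORMAL", // illustrative extra pragma
        },
    },
});
defer pool.deinit();
```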
Memory management
pidgn_db allocates memory for string fields in query results. You are responsible for freeing these allocations:
- `pidgn_db.freeAll(T, items, allocator)` — frees a slice returned by `repo.all()` or `repo.rawAll()`, including all owned string fields
- `pidgn_db.freeOne(T, &item, allocator)` — frees string fields in a single struct returned by `repo.get()`, `repo.one()`, `repo.insert()`, or `repo.update()`
Always use `defer` to ensure cleanup:
```zig
var user = try repo.insert(User, .{ .id = 0, .name = "Alice", .email = "a@example.com" }, allocator);
defer pidgn_db.freeOne(User, &user, allocator);
```

Next steps
- Schema — learn how to define struct-to-table mappings
- Repository — explore the full CRUD API
- Query Builder — compose queries with the fluent API
- Changesets — validate user input before persisting
- Transactions — manage transactions and savepoints
- Migrations — version your database schema