The vf.events resource handles custom event tracking and high-volume batching. Use it for anything outside the canonical 9 lifecycle events.
track(eventName, userId, properties?)
Emit a single event. Idempotent on the server when the auto-generated Idempotency-Key is preserved across retries (handled by the client).
- eventName: Event name. Snake_case recommended. The backend validates only that it's a non-empty string; reserved-name collisions return ValidationError.
- userId: External user ID.
- properties?: JSON-serialisable property bag. Defaults to {}.

Returns Promise<void>.
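The retry-with-idempotency behaviour described above can be sketched as follows. This is a simplified model, not the SDK's internal implementation: sendOnce stands in for the HTTP transport, and the header name Idempotency-Key is taken from the prose above.

```typescript
import { randomUUID } from "node:crypto";

type Transport = (headers: Record<string, string>) => Promise<void>;

// One Idempotency-Key is generated per logical event and reused on every
// retry, so the server can dedupe replays of the same event.
async function trackWithRetry(sendOnce: Transport, maxRetries = 3): Promise<void> {
  const idempotencyKey = randomUUID(); // generated once, outside the retry loop
  let lastError: unknown;
  // 1 initial attempt + up to maxRetries retries
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      await sendOnce({ "Idempotency-Key": idempotencyKey });
      return;
    } catch (err) {
      lastError = err; // retry with the SAME key
    }
  }
  throw lastError; // throws on exhaustion
}
```

Because the key is fixed before the first attempt, a request that succeeded server-side but failed client-side (e.g. a dropped response) is deduped rather than double-counted.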
batch(options?)
Returns an EventBatch — an in-memory buffer that flushes in one POST. Useful for backfills and high-volume write paths.
- Auto-flush when the queue reaches this size.
- Auto-flush when this many milliseconds have elapsed since the first queued event.
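The two auto-flush triggers can be sketched with a minimal in-memory buffer. The option names maxSize and maxAgeMs here are illustrative, not the SDK's real option names, and post stands in for the single POST to the batch endpoint.

```typescript
type Event = { name: string; userId: string; properties: object };

// Illustrative buffer: flushes when maxSize events are queued, or when
// maxAgeMs has elapsed since the FIRST event was enqueued.
class SketchBatch {
  private queue: Event[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private post: (events: Event[]) => Promise<void>,
    private maxSize = 100,
    private maxAgeMs = 5_000,
  ) {}

  track(name: string, userId: string, properties: object = {}): void {
    this.queue.push({ name, userId, properties });
    if (this.queue.length >= this.maxSize) {
      void this.flush(); // size trigger
    } else if (!this.timer) {
      // age timer starts at the first queued event, not the most recent one
      this.timer = setTimeout(() => void this.flush(), this.maxAgeMs);
    }
  }

  async flush(): Promise<void> {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    if (this.queue.length === 0) return; // no-op when empty
    const drained = this.queue;
    this.queue = []; // drained before the POST, matching the failure notes below
    await this.post(drained); // one POST for the whole buffer
  }
}
```

Note the queue is emptied before the network call; that is why a failed flush cannot simply be retried by re-enqueueing.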
EventBatch
batch.track(eventName, userId, properties?)
Enqueue an event. Same shape as events.track(), but synchronous — returns void. Auto-flushes when the buffer fills or the age timer fires.
batch.flush()
Drain the queue and POST to /api/v1/events/batch. No-op when empty. Always await — the returned promise resolves only after the network round-trip.
Trade-offs
| Use case | Choose |
|---|---|
| Real-time, interactive | events.track() — per-event POST |
| Backfilling history | events.batch() — amortised HTTP |
| Stream from data warehouse | events.batch() |
| Webhook fan-out / fan-in | events.track() — volumes are small enough that batching only adds latency |
Single-event POST is cheap (sub-100ms typical). Don’t reach for the batch API unless you’re moving thousands of events.
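A back-of-envelope for that trade-off. The 50 ms figure is taken as a mid-range of the "sub-100ms typical" claim above; the 500 ms batch round-trip is an assumption for illustration.

```typescript
// Sequential per-event POSTs vs one batch POST for a 10,000-event backfill.
const events = 10_000;
const perEventLatencyMs = 50; // assumed mid-range of "sub-100ms typical"
const batchLatencyMs = 500;   // assumed: one large POST round-trip

const sequentialMs = events * perEventLatencyMs; // 500,000 ms ≈ 8.3 minutes
const batchMs = batchLatencyMs;                  // one round-trip

console.log(sequentialMs / batchMs); // → 1000
```

For a handful of events the arithmetic inverts: the batch's buffering delay dominates, which is why the table steers interactive paths to events.track().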
Failure modes
Single track() fails (NetworkError, ServerError, RateLimitError)
Auto-retried up to maxRetries. Throws on exhaustion. The SDK preserves the Idempotency-Key across retries so duplicates are deduped server-side within 24 hours.
batch.flush() fails
Throws — the queue is already drained at this point. Re-enqueueing on failure would risk duplicates.
batch.track() overflows
Synchronously triggers flush(); the next track() after that just enqueues normally.
Process exits before flush()
In-memory queue is lost. Always await flush() in a shutdown handler.
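The shutdown advice can be sketched like this. The signal wiring is standard Node.js, and installShutdownFlush is a hypothetical helper, not part of the SDK; it just guarantees the flush runs once and is awaited before exit.

```typescript
// Flush the in-memory queue before the process exits; without this,
// anything still buffered is silently lost.
function installShutdownFlush(flush: () => Promise<void>): () => Promise<void> {
  let done = false;
  const shutdown = async () => {
    if (done) return; // run at most once, even if several signals arrive
    done = true;
    await flush();    // await: resolves only after the POST completes
  };
  process.once("SIGINT", () => void shutdown().finally(() => process.exit(0)));
  process.once("SIGTERM", () => void shutdown().finally(() => process.exit(0)));
  return shutdown;    // also call this from your own exit paths
}
```

Call the returned function from any deliberate exit path as well (e.g. after a backfill loop finishes), not only from signal handlers.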