Now that you’ve got the stack installed and running locally, let’s connect the key services together so Guardian can start making use of them.
This guide assumes:
- Docker containers are running for Supabase and Qdrant
- Ollama is installed locally with `llama3` running
- You've cloned or initialized a project folder with your Guardian server backend
🔹 Step 1: Connect to Supabase
Inside your Guardian server (Hono/Express), you’ll want to connect directly to the Postgres DB exposed by Supabase.
If you’re using Drizzle ORM like I do:
```ts
// db.config.ts
import { drizzle } from 'drizzle-orm/node-postgres';
import pg from 'pg';

// Pool pointed at the Postgres instance exposed by the local Supabase stack
const pool = new pg.Pool({
  connectionString: 'postgresql://postgres:postgres@localhost:54322/postgres'
});

export const db = drizzle(pool);
```
📝 Port `54322` is Supabase's default Postgres port if you're using the Docker setup.
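To sanity-check the connection before going further, you can run a throwaway query. This snippet is just a smoke test, not part of Guardian itself:

```ts
import { sql } from 'drizzle-orm';
import { db } from './db.config';

// Should print the database's current timestamp if the pool can reach Supabase's Postgres
const result = await db.execute(sql`select now()`);
console.log(result.rows[0]);
```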
🔹 Step 2: Connect to Qdrant
You’ll use the Qdrant JS client:
```bash
pnpm add @qdrant/js-client-rest
```
Then initialize:
```ts
// qdrant.ts
import { QdrantClient } from '@qdrant/js-client-rest';

// 6333 is Qdrant's default REST port in the Docker image
export const qdrant = new QdrantClient({
  url: 'http://localhost:6333'
});
```
You can now create collections, insert embeddings, and run similarity searches.
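For example, creating the `guardian-memory` collection used later in this post might look like the sketch below. The vector size of 768 is an assumption; it has to match the output dimensionality of whatever embedding model you end up using:

```ts
import { qdrant } from './qdrant';

// Vector size must match your embedding model's output dimensionality (768 is an assumption)
await qdrant.createCollection('guardian-memory', {
  vectors: { size: 768, distance: 'Cosine' }
});

// A similarity search then takes a query embedding of the same length
const queryVector = new Array(768).fill(0); // placeholder; use a real embedding here
const hits = await qdrant.search('guardian-memory', {
  vector: queryVector,
  limit: 5
});
console.log(hits.map((hit) => hit.payload));
```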
🔹 Step 3: Connect to Ollama
Use the Ollama SDK or just a simple fetch if you’re running locally:
```ts
const response = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3',
    prompt: 'What is Guardian AI?',
    stream: false // /api/generate streams NDJSON by default; disable it so response.json() works
  })
});

const result = await response.json();
console.log(result.response);
```
⚠️ You can wrap this in a utility like `runLLM()` to abstract model calls.
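A minimal sketch of that helper, assuming the same local endpoint and model as above (the file name, default model, and error handling are illustrative):

```ts
// llm.ts: a bare-bones runLLM() sketch; the default model is an assumption
export async function runLLM(prompt: string, model = 'llama3'): Promise<{ response: string }> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: false })
  });
  if (!res.ok) throw new Error(`Ollama request failed with status ${res.status}`);
  return res.json(); // non-streaming responses carry the generated text in `response`
}
```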
🔹 Step 4: Define Your Agent’s Memory Loop
Here’s the high-level logic:
```ts
import { v4 as uuid } from 'uuid';
import { qdrant } from './qdrant';
// runLLM() and embedText() are the helpers discussed above

const prompt = 'What is Guardian AI?'; // example input
const result = await runLLM(prompt);

// Embed the result
const embedding = await embedText(result.response); // use `@pinecone-database/embedding-openai` or your own

// Store it in Qdrant as a memory point
await qdrant.upsert('guardian-memory', {
  points: [
    {
      id: uuid(),
      vector: embedding,
      payload: {
        type: 'agent_thought',
        source: 'blog-reader',
        content: result.response
      }
    }
  ]
});
```
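If you'd rather keep `embedText()` fully local too, one option (my assumption, not part of the original setup) is Ollama's embeddings endpoint:

```ts
// embed.ts: hypothetical local embedText() built on Ollama's /api/embeddings endpoint
export async function embedText(text: string): Promise<number[]> {
  const res = await fetch('http://localhost:11434/api/embeddings', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'nomic-embed-text', prompt: text }) // model choice is an assumption
  });
  const data = await res.json();
  return data.embedding; // length must match your Qdrant collection's vector size
}
```

Whichever embedder you choose, keep its output dimensionality in sync with the collection you created in Step 2.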
âś… Final Checks
Make sure:
- Supabase has a working schema (start with `tasks`, `memories`, or `agents`; a minimal sketch follows this list)
- Qdrant is seeded with at least one collection (e.g. `guardian-memory`)
- Ollama's model is loaded and not timing out
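If you haven't defined that schema yet, a minimal Drizzle starting point could look like this (the table shapes are my assumptions, not Guardian's actual schema):

```ts
// schema.ts: hypothetical starter tables; columns are illustrative
import { pgTable, uuid, text, timestamp } from 'drizzle-orm/pg-core';

export const tasks = pgTable('tasks', {
  id: uuid('id').defaultRandom().primaryKey(),
  description: text('description').notNull(),
  status: text('status').notNull().default('pending'),
  createdAt: timestamp('created_at').defaultNow()
});

export const memories = pgTable('memories', {
  id: uuid('id').defaultRandom().primaryKey(),
  content: text('content').notNull(),
  createdAt: timestamp('created_at').defaultNow()
});
```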
🚀 Next Up
Now that Guardian can:

- Talk to Supabase
- Store/retrieve semantic memory via Qdrant
- Use a local LLM to reason and respond

it's ready for the next step: wiring these pieces together into the agent's memory loop.