## Quick Test Commands
### 1. Health Check

```bash
curl http://localhost:8080/api/health
```
Expected output:

```json
{
  "status": "ok",
  "timestamp": "2026-02-06T...",
  "services": {
    "llamaCpp": {
      "status": "healthy",
      "endpoint": "http://your-droplet-ip:8080"
    },
    "redis": {
      "status": "healthy",
      "host": "your-droplet-ip",
      "port": 6379
    },
    "database": {
      "status": "healthy"
    }
  }
}
```
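If you want to script this check (for example in CI), a small helper can flag failing services. This is a minimal sketch that assumes only the payload shape shown above; `unhealthyServices` is a hypothetical helper, not part of the backend:

```javascript
// Given a parsed /api/health payload, return the names of any services
// that are not reporting "healthy".
function unhealthyServices(health) {
  return Object.entries(health.services || {})
    .filter(([, svc]) => svc.status !== 'healthy')
    .map(([name]) => name);
}

// Example payload with one failing dependency:
const payload = {
  status: 'ok',
  services: {
    llamaCpp: { status: 'healthy' },
    redis: { status: 'error', error: 'ECONNREFUSED' },
    database: { status: 'healthy' },
  },
};
console.log(unhealthyServices(payload)); // [ 'redis' ]
```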
### 2. Test llama.cpp Directly

```bash
curl -X POST http://localhost:8080/api/test-llama \
  -H "Content-Type: application/json" \
  -d '{"text": "Please rewrite this sentence to be more professional."}'
```
### 3. Test Redis Connection

```bash
curl http://localhost:8080/api/test-redis
```
### 4. Test AI Worker (via Redis Pub/Sub)

```bash
# In one terminal, subscribe to results
redis-cli -h your-droplet-ip -a your-password SUBSCRIBE "tickets:ai:analysis:test123"

# In another terminal, publish a test ticket
# (the apostrophe in "won't" must be escaped inside the single-quoted JSON)
redis-cli -h your-droplet-ip -a your-password PUBLISH "tickets:new" '{"id":"test123","text":"My computer won'\''t start"}'
```
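The two redis-cli commands above imply a channel contract: a ticket published on `tickets:new` yields a result on `tickets:ai:analysis:<ticket id>`. A small sketch of that naming logic (the channel scheme is inferred from the example, not confirmed against the worker code):

```javascript
// Derive the result channel for a raw ticket message, mirroring the
// channel names used in the redis-cli test above.
function resultChannel(ticketJson) {
  const ticket = JSON.parse(ticketJson);
  if (!ticket.id) throw new Error('ticket message is missing "id"');
  return `tickets:ai:analysis:${ticket.id}`;
}

console.log(resultChannel('{"id":"test123","text":"My computer won\'t start"}'));
// tickets:ai:analysis:test123
```

Keeping this in one shared helper avoids the subscriber and the worker drifting apart on channel names.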
## Configuration Required

### Environment Variables

```bash
# Add to backend/.env

# llama.cpp Configuration
LLAMA_CPP_ENDPOINT=http://your-droplet-ip:8080
LLAMA_CPP_MODEL=qwen2.5:0.5b
LLAMA_CPP_TIMEOUT=60000

# Redis Configuration
REDIS_HOST=your-droplet-ip
REDIS_PORT=6379
REDIS_PASSWORD=your-redis-password
REDIS_URL=redis://:your-redis-password@your-droplet-ip:6379
```
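One way to catch missing or mistyped variables early is to validate them at startup. A sketch of such a loader, assuming the variable names and defaults shown above (`loadConfig` is hypothetical, not an existing backend function):

```javascript
// Build a typed config object from an env map, failing fast if a
// required variable is absent. Pass process.env in the real app.
function loadConfig(env) {
  const required = ['LLAMA_CPP_ENDPOINT', 'REDIS_HOST', 'REDIS_PASSWORD'];
  const missing = required.filter((k) => !env[k]);
  if (missing.length) {
    throw new Error(`Missing env vars: ${missing.join(', ')}`);
  }
  return {
    llamaCpp: {
      endpoint: env.LLAMA_CPP_ENDPOINT,
      model: env.LLAMA_CPP_MODEL || 'qwen2.5:0.5b',
      timeoutMs: Number(env.LLAMA_CPP_TIMEOUT || 60000),
    },
    redis: {
      host: env.REDIS_HOST,
      port: Number(env.REDIS_PORT || 6379),
      password: env.REDIS_PASSWORD,
    },
  };
}

const cfg = loadConfig({
  LLAMA_CPP_ENDPOINT: 'http://your-droplet-ip:8080',
  REDIS_HOST: 'your-droplet-ip',
  REDIS_PASSWORD: 'your-redis-password',
});
console.log(cfg.redis.port); // 6379
```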
### Update index.js to Include the Health Route

Add these lines to your `backend/index.js`:

```javascript
const healthRouter = require('./routes/health');
app.use('/api', healthRouter);
```
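For reference, a hypothetical sketch of what `routes/health.js` might look like internally. The `checkService` helper turns any async connectivity probe into the per-service objects shown in the expected health output; the router wiring and probe calls are illustrative assumptions, not the backend's actual code:

```javascript
// Wrap an async connectivity probe and normalize success/failure into
// the { status, error } shape used by the /api/health payload.
async function checkService(probe) {
  try {
    await probe();
    return { status: 'healthy' };
  } catch (err) {
    return { status: 'error', error: err.message };
  }
}

// Inside the Express router (sketch only — redisClient/db are assumed):
//   router.get('/health', async (req, res) => {
//     res.json({
//       status: 'ok',
//       timestamp: new Date().toISOString(),
//       services: {
//         llamaCpp: await checkService(() => fetch(`${process.env.LLAMA_CPP_ENDPOINT}/health`)),
//         redis: await checkService(() => redisClient.ping()),
//         database: await checkService(() => db.query('SELECT 1')),
//       },
//     });
//   });

checkService(async () => { throw new Error('connect ECONNREFUSED'); })
  .then((r) => console.log(r)); // { status: 'error', error: 'connect ECONNREFUSED' }
```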
## Testing Checklist

- [ ] Backend starts without errors
- [ ] Health endpoint returns 200 OK
- [ ] llama.cpp service shows "healthy"
- [ ] Redis service shows "healthy"
- [ ] Test llama.cpp endpoint returns an AI response
- [ ] Test Redis read/write succeeds
- [ ] AI Worker processes ticket analysis
- [ ] AI Worker processes alert classification
- [ ] AI Worker processes email analysis
- [ ] LLM queue processes rewrite jobs
- [ ] SSE streaming works for real-time AI
- [ ] Caching works (same input returns a cached result)
## Common Issues

### llama.cpp Unreachable

**Symptom**: `llamaCpp: { status: 'error', error: 'connect ECONNREFUSED' }`

**Solutions**:
- Verify llama.cpp is running on the droplet
- Check that the firewall allows connections from DO App Platform
- Verify `LLAMA_CPP_ENDPOINT` is correct
- Test with a direct curl from the backend:

  ```bash
  curl http://droplet-ip:8080/health
  ```
### Redis Connection Failed

**Symptom**: `redis: { status: 'error', error: 'ECONNREFUSED' }`

**Solutions**:
- Verify Redis is running:

  ```bash
  redis-cli -h droplet-ip ping
  ```

- Check whether Redis requires a password:

  ```bash
  redis-cli -h droplet-ip -a password ping
  ```

- Verify the firewall allows the Redis port (6379)
- Check the Redis bind address (it should allow external connections)
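If connections are still refused with the correct password, the droplet's Redis config is a likely culprit. An illustrative `redis.conf` fragment (values are assumptions; if you open the bind address, restrict access with a firewall):

```conf
# /etc/redis/redis.conf
bind 0.0.0.0                      # listen on all interfaces, not just loopback
protected-mode no                 # or keep "yes" and rely on requirepass + firewall
requirepass your-redis-password   # must match REDIS_PASSWORD in backend/.env
```

Restart Redis after editing the config for the changes to take effect.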
### Slow AI Responses

**Symptom**: Requests time out or take >30 seconds

**Solutions**:
- Check droplet CPU/RAM usage
- Verify the model is loaded in llama.cpp
- Increase `LLAMA_CPP_TIMEOUT`
- Consider using a smaller model or increasing droplet resources
### Model Not Found

**Symptom**: llama.cpp returns a "model not found" error

**Solutions**:
- Verify the model is loaded: `curl http://droplet-ip:8080/props`
- Check that `LLAMA_CPP_MODEL` matches an available model
- Load the model in llama.cpp before starting the backend
## Performance Benchmarks

Run this to establish baseline performance:

```bash
# Fire 10 concurrent requests
for i in {1..10}; do
  (curl -X POST http://localhost:8080/api/test-llama \
    -H "Content-Type: application/json" \
    -d '{"text": "Test sentence number '$i'"}' &)
done
```
Expected:
- Response time: 1-5 seconds (depends on model size)
- Concurrent requests: 5-10 simultaneous (depends on droplet specs)
- Redis latency: <10ms
## Next Steps After Testing
- Update TODO #3 to completed
- Start TODO #4: Configure external Redis (already done in code, just needs .env)
- Deploy to staging environment
- Run integration tests
- Monitor for 24 hours before production deployment