A secure, high-performance API key management system with dual-hash strategy, distributed caching, and automatic expiration.
- Dual Hash Security: SHA-1 (fast indexing) + Argon2id (secure verification)
- High Performance: Redis cache-first strategy with 95%+ hit rate
- Distributed Locking: Race condition prevention with Redis locks
- Auto Expiration: MongoDB TTL index for automatic cleanup
- Rate Limiting: Configurable IP-based rate limiting (100 req/min default)
- Monitoring: Built-in metrics tracking and reporting
- Docker Ready: Complete Docker Compose setup
- Fully Tested: Comprehensive integration test suite
- Structured Logging: JSON logging with LogTape, sensitive data masking, and performance tracking
- Runtime: Bun - Fast JavaScript runtime
- Framework: ElysiaJS - Ergonomic web framework
- Database: MongoDB 7 with Prisma
- Cache: Redis 7 with ioredis
- Security: Argon2id password hashing
- Logging: LogTape - Structured JSON logging
- Docker and Docker Compose
- That's it! Everything else runs in containers.

docker-compose up -d

That's all! The system automatically:
- Starts MongoDB with replica set
- Starts Redis cache
- Creates TTL index for expiration
- Builds and launches the API server
Server running at: http://localhost:3030
# Check health
curl http://localhost:3030/health
# Expected output:
# {
# "status": "healthy",
# "services": {
# "mongodb": "connected",
# "redis": "connected"
# },
# "timestamp": "2025-10-04T..."
# }

# Publish a new API key
curl -X POST http://localhost:3030/api/keys/publish \
-H "Content-Type: application/json" \
-d '{
"itemKey": "myapp://users/user123",
"permission": ["read", "write"],
"expiresAt": "2025-12-31T23:59:59Z",
"maxUses": 1000
}'
# Save the returned apiKey; you'll only see it once!
# Validate the key
curl -X POST http://localhost:3030/api/keys/validate \
-H "Content-Type: application/json" \
-d '{
"apiKey": "YOUR_API_KEY_HERE"
}'

- Architecture
- API Documentation
- Development
- Testing
- Docker Deployment
- Configuration
- Monitoring
- Logging System
- Security
- Troubleshooting
- Contributing
┌───────────────────────────────────────────────────────────────┐
│                        Client Request                         │
└──────────────────────┬────────────────────────────────────────┘
                       │
                       ▼
          ┌─────────────────────────┐
          │  Rate Limiter (Redis)   │
          └────────────┬────────────┘
                       │
                       ▼
          ┌─────────────────────────┐
          │    Distributed Lock     │
          │        (Redis)          │
          └────────────┬────────────┘
                       │
                       ▼
          ┌─────────────────────────┐
          │   Cache Layer (Redis)   │
          │     TTL: 15 minutes     │
          └────────────┬────────────┘
                       │ Cache Miss
                       ▼
          ┌─────────────────────────┐
          │         MongoDB         │
          │   - Replica Set         │
          │   - TTL Index           │
          │   - Dual Hash Storage   │
          └─────────────────────────┘
- SHA-1 Hash (8 bytes): Fast indexing for initial lookup
- Argon2id Hash: Secure verification of candidates
// Publishing
apiKey (64 chars)
  ├─> SHA-1    → searchableHash (indexing)
  └─> Argon2id → hashedApiKey (verification)

// Validation
apiKey → SHA-1 → Find candidates → Argon2id verify each

Publishing Flow:
1. Generate 64-char API key
2. Create SHA-1 hash (searchableHash)
3. Create Argon2id hash (hashedApiKey)
4. Check for duplicates
5. Store in MongoDB
6. Return original key (once!)
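Steps 1-2 of the publishing flow can be sketched with `node:crypto` (function names here are illustrative, not the actual `src/utils/crypto.ts` API; the Argon2id step would use an argon2 library or Bun's built-in password hashing and is only noted in a comment):

```typescript
import { createHash, randomBytes } from "node:crypto";

// Step 1: 32 random bytes -> 64 hex characters
function generateApiKey(): string {
  return randomBytes(32).toString("hex");
}

// Step 2: SHA-1 digest used only as a non-secret lookup index
function searchableHash(apiKey: string): string {
  return createHash("sha1").update(apiKey).digest("hex");
}

// Step 3 (not shown): hashedApiKey = Argon2id hash of the key,
// e.g. Bun.password.hash(apiKey, { algorithm: "argon2id" }) under Bun.

const key = generateApiKey();
console.log(key.length, searchableHash(key).length); // 64 40
```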
Validation Flow:
1. Check rate limit (Redis)
2. Acquire distributed lock
3. Check cache (Redis)
   ├─ Hit  → Return cached data
   └─ Miss ↓
4. Generate SHA-1 from provided key
5. Query MongoDB by searchableHash
6. Verify candidates with Argon2id
7. Check expiration & usage limits
8. Increment usage counter
9. Update cache
10. Release lock
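Step 7 (expiration and usage limits) reduces to a pure check like the following sketch; the field names mirror the API responses in this document, but the actual validator code may differ:

```typescript
// Minimal sketch of the expiration/usage check (step 7).
interface KeyRecord {
  expiresAt: Date;
  usedCount: number;
  maxUses: number;
}

type KeyStatus = "valid" | "expired" | "exhausted";

function checkKeyLimits(key: KeyRecord, now: Date = new Date()): KeyStatus {
  if (now >= key.expiresAt) return "expired";          // maps to HTTP 403
  if (key.usedCount >= key.maxUses) return "exhausted"; // also HTTP 403
  return "valid";
}
```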
Base URL: http://localhost:3030
Endpoint: POST /api/keys/validate
Rate Limit: 100 requests/minute per IP
Request:
{
"apiKey": "your-64-character-api-key"
}

Success Response (200):
{
"success": true,
"data": {
"valid": true,
"itemKey": "myapp://users/user123",
"permission": ["read", "write"],
"expiresAt": "2025-12-31T23:59:59.000Z",
"usedCount": 5,
"maxUses": 1000
}
}

Error Responses:
- 400 - Invalid request
- 401 - Invalid API key
- 403 - Expired or exhausted
- 404 - Not found
- 429 - Rate limit exceeded
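The fixed-window limiting behind the 429 response can be sketched as below. The real middleware uses Redis (INCR/EXPIRE) so the counter is shared across instances; here an in-memory Map stands in for Redis, and all names are illustrative:

```typescript
// Fixed-window rate limiter sketch: 100 requests per 60s window per IP.
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 100;

const windows = new Map<string, { count: number; resetAt: number }>();

function allowRequest(ip: string, now: number = Date.now()): boolean {
  const w = windows.get(ip);
  if (!w || now >= w.resetAt) {
    // First request in a fresh window
    windows.set(ip, { count: 1, resetAt: now + WINDOW_MS });
    return true;
  }
  w.count += 1;
  return w.count <= MAX_REQUESTS; // over the limit -> respond with HTTP 429
}
```

With Redis, the same shape is typically `INCR key` followed by `EXPIRE key 60` on the first increment, which keeps the window consistent across multiple app instances.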
Endpoint: POST /api/keys/publish
Request:
{
"itemKey": "myapp://users/user123?action=read",
"permission": ["read", "write"],
"expiresAt": "2025-12-31T23:59:59Z",
"maxUses": 1000
}

Item Key Format: <scheme>://<service>/<key>?<query>
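Since the itemKey format is URI-shaped, it can be parsed with the WHATWG URL class; this is an illustrative sketch only, and the service's actual validation may be stricter:

```typescript
// Parse <scheme>://<service>/<key>?<query> into its parts.
function parseItemKey(itemKey: string) {
  const url = new URL(itemKey); // URL accepts non-special schemes like myapp://
  return {
    scheme: url.protocol.replace(/:$/, ""),  // "myapp"
    service: url.host,                       // "users"
    key: url.pathname.replace(/^\//, ""),    // "user123"
    query: Object.fromEntries(url.searchParams), // { action: "read" }
  };
}
```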
Success Response (200):
{
"success": true,
"data": {
"apiKey": "abc...xyz",
"itemKey": "myapp://users/user123?action=read",
"permission": ["read", "write"],
"publishedAt": "2025-10-04T12:00:00.000Z",
"expiresAt": "2025-12-31T23:59:59.000Z",
"maxUses": 1000
}
}

The apiKey is shown only once!
GET /admin/keys/by-item?itemKey=<encoded-item-key>
GET /admin/keys/stats/:hashedApiKey
DELETE /admin/keys/revoke/:hashedApiKey
GET /admin/stats
GET /admin/metrics

Response:
{
"success": true,
"data": {
"keysPublished": 150,
"keysValidated": 5430,
"cacheHits": 5200,
"cacheMisses": 230,
"cacheHitRate": 95.76,
"avgValidationTime": 12.5,
"rateLimitErrors": 15
}
}

POST /admin/keys/cleanup

Note: MongoDB TTL index handles this automatically.
GET /health

Response:
{
"status": "healthy",
"services": {
"mongodb": "connected",
"redis": "connected"
},
"timestamp": "2025-10-04T12:34:56.789Z"
}

For detailed API documentation, see docs/API.md.
# Everything in Docker
docker-compose up -d --build

# 1. Start only infrastructure
docker-compose up -d mongodb redis setup
# 2. Install dependencies
bun install
# 3. Generate Prisma client
bun run db:generate
# 4. Set environment variables
# Windows PowerShell
$env:PORT="3030"
$env:MONGODB_URI="mongodb://localhost:27017/inventory?replicaSet=rs0&directConnection=true"
$env:REDIS_HOST="localhost"
$env:REDIS_PORT="6379"
$env:REDIS_PASSWORD="redis123"
$env:NODE_ENV="development"
# 5. Start dev server with hot reload
bun run dev

# Development
bun run dev # Start with hot reload
bun run start # Production mode
# Database
bun run db:generate # Generate Prisma client
bun run db:push # Push schema to database
bun run db:studio # Open Prisma Studio
# Testing
bun test # Run all tests
bun test --watch # Watch mode
# Docker
bun run docker:up # Start all services
bun run docker:down # Stop services
bun run docker:logs # View logs
bun run docker:restart # Restart app

inventory/
├── src/
│   ├── config/
│   │   └── env.ts                 # Environment configuration
│   ├── db/
│   │   ├── prisma.ts              # Prisma client & health check
│   │   └── api-key-repository.ts  # Database operations
│   ├── cache/
│   │   ├── redis.ts               # Redis client
│   │   ├── distributed-lock.ts    # Distributed locking
│   │   └── api-key-cache.ts       # Cache operations
│   ├── middleware/
│   │   └── rate-limiter.ts        # Rate limiting
│   ├── monitoring/
│   │   └── metrics.ts             # Metrics collection
│   ├── services/
│   │   ├── publisher.ts           # API key publishing
│   │   ├── validator.ts           # API key validation
│   │   └── admin.ts               # Admin operations
│   ├── routes/
│   │   ├── api.ts                 # Public API routes
│   │   └── admin.ts               # Admin routes
│   ├── types/
│   │   ├── api.ts                 # Request/response types
│   │   └── errors.ts              # Custom error classes
│   ├── utils/
│   │   ├── crypto.ts              # Hashing utilities
│   │   └── logger.ts              # LogTape logging configuration
│   └── index.ts                   # Application entry point
├── tests/
│   └── api.test.ts                # Integration tests
├── prisma/
│   └── schema.prisma              # Database schema
├── scripts/
│   └── setup-ttl-index.js         # TTL index setup
├── docs/                          # Additional documentation
├── docker-compose.yml
├── Dockerfile
└── package.json
# Make sure services are running
docker-compose up -d
# Run all tests
bun test
# Watch mode
bun test --watch
# Specific test file
bun test tests/api.test.ts

- Health checks
- API key publishing (with validation)
- API key validation (cache + DB)
- Usage counter increment
- Admin endpoints
- Rate limiting
- Error handling
Test Results:
- 13 tests passed
- 51 assertions
- Completed in 1.02s

# Publish
curl -X POST http://localhost:3030/api/keys/publish \
-H "Content-Type: application/json" \
-d '{
"itemKey": "myapp://item/123",
"permission": ["read"],
"expiresAt": "2025-12-31T23:59:59Z",
"maxUses": 100
}'
# Validate
curl -X POST http://localhost:3030/api/keys/validate \
-H "Content-Type: application/json" \
-d '{"apiKey": "YOUR_KEY_HERE"}'
# Metrics
curl http://localhost:3030/admin/metrics | jq

# Publish
$body = @{
itemKey = "myapp://item/123"
permission = @("read", "write")
expiresAt = "2025-12-31T23:59:59Z"
maxUses = 100
} | ConvertTo-Json
Invoke-RestMethod -Uri "http://localhost:3030/api/keys/publish" `
  -Method Post -ContentType "application/json" -Body $body

For detailed testing guide, see docs/TESTING.md.
┌───────────────────────────────────────────────────┐
│                docker-compose.yml                 │
├──────────────┬──────────────┬──────────┬─────────┤
│   mongodb    │    redis     │  setup   │   app   │
│    (rs0)     │   (cache)    │  (init)  │  (api)  │
│   :27017     │    :6379     │  (once)  │  :3030  │
└──────────────┴──────────────┴──────────┴─────────┘
| Service | Purpose | Port | Health Check |
|---|---|---|---|
| mongodb | Database with replica set | 27017 | mongosh --eval "rs.status()" |
| redis | Cache & distributed locks | 6379 | redis-cli ping |
| setup | Initialize replica set & indexes | - | One-time only |
| app | API server | 3030 | GET /health |
# Start all services
docker-compose up -d --build
# View logs
docker-compose logs -f app # App logs only
docker-compose logs -f # All services
# Restart app (after code changes)
docker-compose restart app
# Stop everything
docker-compose down
# Stop and remove volumes (fresh start)
docker-compose down -v
# Check service status
docker-compose ps
# Execute commands in containers
docker-compose exec mongodb mongosh
docker-compose exec redis redis-cli

Create a .env file in the project root:
# Application
PORT=3030
NODE_ENV=production
# MongoDB
MONGODB_URI=mongodb://mongodb:27017/inventory?replicaSet=rs0&directConnection=true
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=redis123
# Optional: Rate Limiting
RATE_LIMIT_WINDOW=60000 # 1 minute in ms
RATE_LIMIT_MAX_REQUESTS=100 # Max requests per window

For production, use a proper 3-node replica set:
# docker-compose.prod.yml
services:
mongodb-primary:
image: mongo:7
command: mongod --replSet rs0 --bind_ip_all --keyFile /data/mongodb-keyfile
# ... (see docs/DOCKER.md for full config)
mongodb-secondary:
# ...
mongodb-arbiter:
  # ...

- Enable MongoDB authentication
- Use a strong Redis password
- Set up TLS/SSL certificates
- Configure firewall rules
- Use Docker secrets for credentials
# Scale app instances
docker-compose up -d --scale app=3
# Use load balancer (nginx, traefik)
# ...

For detailed Docker guide, see docs/DOCKER.md.
| Variable | Required | Default | Description |
|---|---|---|---|
| PORT | No | 3030 | Server port |
| NODE_ENV | No | development | Environment (development/production) |
| MONGODB_URI | Yes | - | MongoDB connection string with replica set |
| REDIS_HOST | Yes | - | Redis server host |
| REDIS_PORT | No | 6379 | Redis server port |
| REDIS_PASSWORD | Yes | - | Redis password |
| REDIS_DB | No | 0 | Redis database number |
| RATE_LIMIT_WINDOW | No | 60000 | Rate limit window (ms) |
| RATE_LIMIT_MAX_REQUESTS | No | 100 | Max requests per window |
mongodb://localhost:27017/inventory?replicaSet=rs0&directConnection=true
mongodb://user:pass@mongo1:27017,mongo2:27017,mongo3:27017/inventory?replicaSet=rs0&authSource=admin
- replicaSet=rs0 - Required for transactions
- directConnection=true - For single-node replica set (dev only)
- authSource=admin - For authentication
- retryWrites=true - Enable write retry (default)
- w=majority - Write concern level
// src/cache/redis.ts
const redis = new Redis({
host: process.env.REDIS_HOST,
port: parseInt(process.env.REDIS_PORT || "6379"),
password: process.env.REDIS_PASSWORD,
db: parseInt(process.env.REDIS_DB || "0"),
retryStrategy(times) {
const delay = Math.min(times * 50, 2000);
return delay;
}
});

Structured JSON logging in production:
console.log(JSON.stringify({
timestamp: new Date().toISOString(),
level: "info",
message: "API key validated",
apiKey: hash.substring(0, 16),
itemKey: key.itemKey
}));

curl http://localhost:3030/health

Response:
{
"status": "ok",
"timestamp": "2025-01-20T10:30:00.000Z",
"services": {
"mongodb": "connected",
"redis": "connected"
}
}

Status Codes:
- 200 - All services healthy
- 503 - One or more services unavailable
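The mapping from service states to these status codes can be expressed as a small pure function; this is a sketch, not the actual handler:

```typescript
// Aggregate per-service states into an overall health response.
type ServiceState = "connected" | "disconnected";

function healthStatus(services: Record<string, ServiceState>) {
  const healthy = Object.values(services).every((s) => s === "connected");
  return {
    code: healthy ? 200 : 503, // any disconnected service -> 503
    body: { status: healthy ? "healthy" : "unhealthy", services },
  };
}
```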
curl http://localhost:3030/admin/metrics

Response:
{
"activeKeys": 42,
"expiredKeys": 15,
"exhaustedKeys": 8,
"totalValidations": 1523,
"cacheHitRate": 0.87,
"timestamp": "2025-01-20T10:30:00.000Z"
}

| Metric | Description | Alert Threshold |
|---|---|---|
| cacheHitRate | Redis cache efficiency | < 0.7 (70%) |
| activeKeys | Available API keys | < 10 |
| exhaustedKeys | Keys at max usage | Growing rapidly |
| Response time | API latency | > 500ms |
| Error rate | Failed requests | > 1% |
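The cacheHitRate metric is presumably derived from the hit/miss counters tracked by the metrics module; a sketch (the real implementation may differ):

```typescript
// Cache hit rate as a fraction of total lookups.
function cacheHitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

console.log(cacheHitRate(5200, 230)); // ≈ 0.9576, i.e. 95.76%
```

With the counters shown in the /admin/metrics example earlier (5200 hits, 230 misses) this works out to the 95.76% figure reported there.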
// Add prometheus client
import { Registry, Counter, Histogram } from 'prom-client';
const register = new Registry();
const httpRequestDuration = new Histogram({
name: 'http_request_duration_ms',
help: 'Duration of HTTP requests in ms',
labelNames: ['method', 'route', 'status_code'],
registers: [register]
});

Create dashboards for:
- Request rate & latency
- Cache hit rate
- Active vs expired keys
- Error rate by endpoint
For detailed monitoring setup, see docs/MONITORING.md.
The system uses LogTape for enterprise-grade structured logging with the following features:
- Structured JSON Output: All logs are formatted as JSON for easy parsing and analysis
- Sensitive Data Masking: Automatic masking of API keys, hashes, passwords, and MongoDB credentials
- Performance Tracking: Built-in timing utilities for measuring operation duration
- Caller Information: Every log includes the calling function for full traceability
- Categorized Loggers: Separate loggers for app, database, cache, service, API, and admin operations
{
"timestamp": "2025-10-05T10:35:51.693Z",
"level": "info",
"category": "inventory.api",
"message": ["API key validated successfully"],
"itemKey": "myapp://users/123",
"usedCount": 5,
"maxUses": 1000,
"clientIp": "192.168.1.100",
"caller": "apiRoute.validate"
}

| Category | Purpose | Examples |
|---|---|---|
| inventory.app | Application lifecycle | Startup, shutdown, health checks |
| inventory.db | Database operations | Queries, connections, Prisma operations |
| inventory.cache | Redis operations | Cache hits/misses, lock operations |
| inventory.service | Business logic | Key publishing, validation, admin tasks |
| inventory.api | Public API endpoints | Request/response, rate limiting |
| inventory.admin | Admin operations | Stats, metrics, revocation |
- DEBUG: Detailed information for debugging (cache hits, lock acquisition)
- INFO: General informational messages (successful operations)
- WARN: Warning messages (rate limits, validation failures)
- ERROR: Error messages (unexpected errors, database failures)
The logging system automatically masks sensitive information:
// API Keys: Shows only first 8 characters
"apiKey-abc...def123" → "apiKey-abc***** (masked)"

// Hashes: Shows only first 16 characters
"$argon2id$v=19$..." → "$argon2id$v=19$m***** (masked)"

// Passwords: Fully masked
"mySecretPassword" → "******* (masked)"

// MongoDB URIs: Credentials removed
"mongodb://user:pass@host/db" → "mongodb://***:***@host/db"

Built-in performance timing utilities:
import { performance } from './utils/logger'
// Start timing
const timer = performance.start('operation.name')
try {
// ... your code ...
// End timing with success
timer.end({ success: true, additionalData: value })
} catch (error) {
// Track error with timing
timer.error(error, { context: 'additional info' })
}

# Follow all logs
docker-compose logs -f app
# Filter by category
docker-compose logs app | grep "inventory.api"
# Filter by level
docker-compose logs app | grep "\"level\":\"error\""

# Using jq for JSON parsing
docker-compose logs app | jq 'select(.level == "error")'
# Filter by category
docker-compose logs app | jq 'select(.category | contains("cache"))'
# Show only messages and timestamps
docker-compose logs app | jq '{timestamp, message, caller}'

# Find all errors in the last hour
docker-compose logs --since 1h app | jq 'select(.level == "error")'
# Track API key validation performance
docker-compose logs app | jq 'select(.message[0] | contains("validated")) | {timestamp, duration: .durationMs}'
# Monitor cache hit rate
docker-compose logs app | jq 'select(.message[0] | contains("Cache")) | .message'
# Find rate limit violations
docker-compose logs app | jq 'select(.message[0] | contains("Rate limit")) | {timestamp, clientIp, caller}'

The structured JSON format integrates seamlessly with:
- ELK Stack (Elasticsearch, Logstash, Kibana)
- Grafana Loki - Log aggregation system
- Datadog - Monitoring and analytics
- CloudWatch - AWS logging service
- Splunk - Log analysis platform
Example Logstash configuration:
input {
docker {
type => "inventory-api"
}
}
filter {
json {
source => "message"
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "inventory-logs-%{+YYYY.MM.dd}"
}
}

The logger is configured in src/utils/logger.ts:
import { configureLogger, logger, performance } from './utils/logger'
// Initialize at application startup
await configureLogger()
// Use in your code
logger.api.info('Request received', {
method: 'POST',
path: '/api/keys/validate',
caller: 'apiRoute.validate'
})
// Performance tracking
const timer = performance.start('db.query')
// ... operation ...
timer.end({ success: true, rows: 42 })

Add custom context to any log:
logger.service.info('Processing batch', {
batchId: 'batch-123',
itemCount: 50,
startTime: new Date(),
caller: 'BatchProcessor.process'
})

For detailed logging documentation, see docs/LOGGING.md.
This system uses dual hashing for security:
- Argon2id - Memory-hard, resistant to GPU/ASIC attacks
- SHA-1 - Fast hash for Redis key generation
// Publishing (once)
const argonHash = await argon2.hash(apiKey); // Store in MongoDB
const sha1Hash = createHash('sha1').update(apiKey).digest('hex'); // Cache key
// Validation (every request)
const sha1Hash = createHash('sha1').update(apiKey).digest('hex');
const cached = await redis.get(`apikey:${sha1Hash}`); // Fast lookup
if (!cached) {
  const dbKey = await db.apiKey.findUnique({ where: { searchableHash: sha1Hash } });
  await argon2.verify(dbKey.hashedApiKey, apiKey); // Verify with Argon2id
}

- Never log full API keys - only log hash prefixes
- Generate cryptographically secure keys - use crypto.randomBytes(32)
- Validate input - TypeBox schemas on all endpoints
- Rate limiting - prevent brute-force attacks
# docker-compose.yml
services:
mongodb:
networks:
- backend
# Don't expose port publicly in production
redis:
networks:
- backend
command: redis-server --requirepass ${REDIS_PASSWORD}
app:
networks:
- backend
ports:
- "3030:3030" # Only expose app port# NEVER commit .env to git
echo ".env" >> .gitignore
# Use strong passwords
REDIS_PASSWORD=$(openssl rand -base64 32)
# Rotate credentials regularly

// Enable authentication
db.createUser({
user: "admin",
pwd: "strong-password",
roles: ["readWrite", "dbAdmin"]
});
// Use keyfile for replica set
openssl rand -base64 756 > mongodb-keyfile
chmod 400 mongodb-keyfile

- Enable MongoDB authentication
- Use TLS/SSL certificates
- Set up firewall rules
- Configure CORS properly
- Enable request logging
- Set up intrusion detection
- Regular security audits
- Dependency vulnerability scanning (bun audit)
Error:
MongoServerError: Transaction numbers are only allowed on a replica set member or mongos
Solution:
# Check replica set status
docker-compose exec mongodb mongosh --eval "rs.status()"
# Reinitialize if needed
docker-compose down -v
docker-compose up -d

Error:
Error: connect ECONNREFUSED 127.0.0.1:6379
Solution:
# Check Redis is running
docker-compose ps redis
# Check logs
docker-compose logs redis
# Restart Redis
docker-compose restart redis

Error:
/bin/sh: 1: Syntax error: word unexpected (expecting ")")
Solution:
# Create .gitattributes
echo "*.sh text eol=lf" > .gitattributes
echo "scripts/* text eol=lf" >> .gitattributes
# Convert existing files
dos2unix scripts/*.sh # Or use editor to convert
# Rebuild
docker-compose down -v
docker-compose up -d --build

Error:
Error: bind: address already in use
Solution:
# Windows: Find process using port
netstat -ano | findstr :3030
taskkill /PID <PID> /F
# Or change port in .env
PORT=3031

Symptoms:
- Slow API responses
- High database load
- Cache hit rate < 70%
Solution:
# Check Redis memory
docker-compose exec redis redis-cli INFO memory
# Increase Redis max memory
# In docker-compose.yml:
command: redis-server --maxmemory 512mb --maxmemory-policy allkeys-lru
# Monitor cache performance
curl http://localhost:3030/admin/metrics | jq '.cacheHitRate'

Debugging:
# Check if key exists in database
docker-compose exec mongodb mongosh inventory --eval '
db.apiKey.findOne({ itemKey: "myapp://item/123" })
'
# Check Redis cache
docker-compose exec redis redis-cli KEYS "apikey:*"
docker-compose exec redis redis-cli GET "apikey:<hash>"
# Check application logs
docker-compose logs app | grep "validation failed"Solution:
# Limit container resources in docker-compose.yml
services:
app:
deploy:
resources:
limits:
cpus: '1'
memory: 512M
reservations:
cpus: '0.5'
          memory: 256M

# Enable verbose logging
NODE_ENV=development docker-compose up
# Or set log level
LOG_LEVEL=debug docker-compose up

- Check GitHub Issues
- Review docs/ folder for detailed guides
- Enable debug logging and check logs
- Open a new issue with:
- Error message
- Docker logs
- Environment setup
- Steps to reproduce
Contributions are welcome! Please follow these guidelines:
1. Fork the repository
2. Create a feature branch
   git checkout -b feature/your-feature-name
3. Make your changes
   - Follow existing code style
   - Add tests for new features
   - Update documentation
4. Run tests
   bun test
   bun run lint # If configured
5. Commit with clear messages
   git commit -m "feat: add new validation endpoint"
   git commit -m "fix: resolve Redis connection timeout"
   git commit -m "docs: update API documentation"
6. Push and create a PR
   git push origin feature/your-feature-name
- Use TypeScript strict mode
- Follow existing naming conventions
- Add JSDoc comments for public APIs
- Keep functions small and focused
- All new features must include tests
- Maintain test coverage above 80%
- Integration tests for API endpoints
- Unit tests for utility functions
- Update README.md for major changes
- Add entries to CHANGELOG.md
- Update API documentation in docs/API.md
- Include inline code comments
This project is licensed under the MIT License.
MIT License
Copyright (c) 2025 snowmerak
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
- API Documentation - Complete API reference
- Docker Guide - Docker deployment details
- Testing Guide - Comprehensive testing guide
- MongoDB Replica Set - Replica set setup
- Architecture - System architecture deep dive
Built with:
- Bun - Fast JavaScript runtime
- ElysiaJS - Fast and friendly web framework
- Prisma - Next-generation ORM
- MongoDB - NoSQL database
- Redis - In-memory data store
- LogTape - Structured logging library
Made with ❤️ by snowmerak