Designing a Logging System
Build a centralized logging pipeline using Elasticsearch, Logstash, and Kibana (ELK Stack) to aggregate, search, and alert on logs at scale.
A centralized logging system collects, stores, and searches logs from hundreds of services.
Requirements
- Ingest 1TB of logs per day
- Search logs within seconds
- Retain logs for 30 days hot, 1 year cold
- Alert on error patterns
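The 1TB/day target sounds large but works out to a modest sustained rate. A quick back-of-the-envelope sketch (the ~1 KB average event size is an assumption, not from the requirements):

```python
# Back-of-the-envelope ingest math for the 1 TB/day requirement.
TB = 1024 ** 4

daily_bytes = 1 * TB
bytes_per_second = daily_bytes / 86_400      # seconds per day
avg_event_bytes = 1024                       # assumption: ~1 KB per JSON log event
events_per_second = bytes_per_second / avg_event_bytes

print(f"sustained ingest: {bytes_per_second / 1024**2:.1f} MB/s")
print(f"events/second:    {events_per_second:,.0f}")
```

Roughly 12 MB/s and ~12,000 events/s sustained — well within reach of a small Logstash/Elasticsearch cluster, though peak traffic is typically several times the average.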
ELK Stack Architecture
Services
↓ (Filebeat / Fluentd)
Logstash (transform/enrich)
↓
Elasticsearch (store + index)
↓
Kibana (search + visualize)
↓
ElastAlert (alerting)
Log Collection Agents
Filebeat: Lightweight, ships log files
Fluentd: Rich plugin ecosystem
Vector: High-performance Rust-based
OTel Collector: OpenTelemetry standard
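For the Filebeat option, a minimal shipper config might look like the sketch below (paths and the Logstash host are illustrative assumptions):

```yaml
# filebeat.yml — ship JSON log files to Logstash (illustrative sketch)
filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/app/*.json       # assumption: services write ndjson here
    parsers:
      - ndjson:
          target: ""              # merge parsed JSON fields into the event root

output.logstash:
  hosts: ["logstash:5044"]        # assumption: Logstash beats input on 5044
```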
Structured Logging
Always log in JSON:
{
  "timestamp": "2026-03-06T10:00:00Z",
  "level": "ERROR",
  "service": "order-service",
  "trace_id": "abc123",
  "message": "Payment failed",
  "user_id": 456,
  "error": "timeout"
}
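One way to emit records in this shape is a custom JSON formatter on top of the standard library logger — a minimal sketch, with the service name hard-coded as an assumption:

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Emit each log record as one JSON object per line (ndjson)."""

    def format(self, record):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "service": "order-service",   # assumption: set per service at startup
            "message": record.getMessage(),
        }
        # Merge structured fields passed via `extra=` on the log call.
        for key in ("trace_id", "user_id", "error"):
            if hasattr(record, key):
                entry[key] = getattr(record, key)
        return json.dumps(entry)

logger = logging.getLogger("order-service")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.error("Payment failed",
             extra={"trace_id": "abc123", "user_id": 456, "error": "timeout"})
```

Keeping one JSON object per line lets Filebeat or Fluentd parse events without multiline handling.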
Elasticsearch Indexing
Index pattern: logs-YYYY.MM.DD
New index per day → easy retention management
Log Levels Strategy
ERROR: System failures requiring immediate action
WARN: Degraded behavior, monitor closely
INFO: Business events (order placed, user login)
DEBUG: Development only, never in production
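The "never DEBUG in production" rule is easy to enforce in code rather than by convention — a sketch assuming `ENV` and `LOG_LEVEL` environment variables control the deployment:

```python
import logging
import os

def resolve_level(env: str, requested: str) -> int:
    """Map a requested level name to a logging level, refusing DEBUG in production."""
    name = requested.upper()
    if env == "production" and name == "DEBUG":
        name = "INFO"                      # never ship DEBUG to production
    return getattr(logging, name, logging.INFO)

# Assumption: ENV / LOG_LEVEL are this deployment's configuration variables.
logging.basicConfig(level=resolve_level(os.environ.get("ENV", "dev"),
                                        os.environ.get("LOG_LEVEL", "INFO")))
```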
Retention Policy
Hot tier (SSD): 7 days — fast queries
Warm tier (HDD): 23 days — slower
Cold tier (S3): 1 year — archived
Delete: After 1 year
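These tiers map directly onto an Elasticsearch ILM policy. A sketch of the phases (the policy name, node attributes, and `s3-archive` repository are illustrative assumptions):

```json
PUT _ilm/policy/logs-retention
{
  "policy": {
    "phases": {
      "hot":    { "min_age": "0d",   "actions": { "rollover": { "max_age": "1d" } } },
      "warm":   { "min_age": "7d",   "actions": { "allocate": { "require": { "data": "warm" } } } },
      "cold":   { "min_age": "30d",  "actions": { "searchable_snapshot": { "snapshot_repository": "s3-archive" } } },
      "delete": { "min_age": "365d", "actions": { "delete": {} } }
    }
  }
}
```

With ILM attached to the index template, indices move through the tiers automatically and no cron jobs are needed for cleanup.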
Alerting
# ElastAlert rule: fire when >100 ERROR events occur within 5 minutes
name: high_error_rate
type: frequency
index: logs-*
num_events: 100
timeframe:
  minutes: 5
filter:
  - term:
      level: ERROR
Conclusion
ELK Stack with structured JSON logging, daily indices, and tiered storage provides a scalable, cost-effective centralized logging solution.