WhatsApp API Deployment & Scaling

A guide to deploying a WhatsApp API to production: Docker, PM2, load balancing, and scaling. Best practices for developers!


Production = Reliability!

Deploying a WhatsApp bot to production requires special attention to uptime, performance, and scalability.


Deployment Options

🚀 DEPLOYMENT OPTIONS:

VPS/CLOUD VM:
- DigitalOcean, Linode, Vultr
- Full control
- Manual management
- Rp 50-200k/month

CONTAINER (Docker):
- Portable & reproducible
- Easy scaling
- Kubernetes-ready

SERVERLESS:
- AWS Lambda, Cloud Functions
- Pay per use
- ⚠️ Tricky for WebSockets

PaaS:
- Railway, Render, Fly.io
- Easy deployment
- Auto-scaling

Docker Deployment

Dockerfile:

```dockerfile
# Dockerfile
FROM node:18-alpine

WORKDIR /app

# Install production dependencies only
COPY package*.json ./
RUN npm ci --omit=dev

# Copy source
COPY . .

# Create session directory
RUN mkdir -p /app/session

# Environment
ENV NODE_ENV=production
ENV PORT=3000

EXPOSE 3000

# Health check (BusyBox wget ships with Alpine)
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

# Run
CMD ["node", "src/index.js"]
```
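`COPY . .` pulls in the entire build context, so it is worth excluding files that should never end up in the image. A suggested `.dockerignore` (the entries are assumptions about a typical project layout, not part of the original setup):

```
# .dockerignore (suggested)
node_modules
session
logs
.env
.git
```

The `session` directory is mounted as a volume in the compose file anyway, so baking it into the image would only risk leaking credentials.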

Docker Compose:

```yaml
# docker-compose.yml
version: '3.8'

services:
  whatsapp-bot:
    build: .
    container_name: wa-bot
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - ./session:/app/session
      - ./logs:/app/logs
    environment:
      - NODE_ENV=production
      - MONGO_URI=mongodb://mongo:27017/whatsapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - mongo
      - redis
    networks:
      - wa-network

  mongo:
    image: mongo:6
    container_name: wa-mongo
    restart: unless-stopped
    volumes:
      - mongo-data:/data/db
    networks:
      - wa-network

  redis:
    image: redis:7-alpine
    container_name: wa-redis
    restart: unless-stopped
    volumes:
      - redis-data:/data
    networks:
      - wa-network

  nginx:
    image: nginx:alpine
    container_name: wa-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - whatsapp-bot
    networks:
      - wa-network

volumes:
  mongo-data:
  redis-data:

networks:
  wa-network:
    driver: bridge
```

Deploy Commands:

```bash
# Build and run
docker-compose up -d --build

# Follow logs
docker-compose logs -f whatsapp-bot

# Restart
docker-compose restart whatsapp-bot

# Scale (stateless services only; remove container_name and the fixed host port first)
docker-compose up -d --scale whatsapp-bot=3
```

PM2 Deployment

ecosystem.config.js:

```javascript
// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'whatsapp-bot',
    script: 'src/index.js',
    instances: 1,              // WhatsApp: 1 instance per session
    exec_mode: 'fork',
    autorestart: true,
    watch: false,
    max_memory_restart: '500M',
    env: {
      NODE_ENV: 'development'
    },
    env_production: {
      NODE_ENV: 'production',
      PORT: 3000
    },
    error_file: './logs/err.log',
    out_file: './logs/out.log',
    log_date_format: 'YYYY-MM-DD HH:mm:ss',
    merge_logs: true
  }]
};
```

PM2 Commands:

```bash
# Install PM2
npm install -g pm2

# Start
pm2 start ecosystem.config.js --env production

# Status
pm2 status

# Logs
pm2 logs whatsapp-bot

# Restart
pm2 restart whatsapp-bot

# Auto-start on reboot
pm2 startup
pm2 save

# Monitor
pm2 monit
```

Nginx Reverse Proxy

nginx.conf:

```nginx
events {
    worker_connections 1024;
}

http {
    upstream whatsapp_bot {
        server whatsapp-bot:3000;
    }

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=webhook:10m rate=100r/s;

    server {
        listen 80;
        server_name bot.yourdomain.com;

        # Redirect to HTTPS
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name bot.yourdomain.com;

        ssl_certificate /etc/nginx/ssl/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;

        # Webhook endpoint
        location /webhook {
            limit_req zone=webhook burst=50 nodelay;

            proxy_pass http://whatsapp_bot;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Webhook timeouts
            proxy_connect_timeout 5s;
            proxy_send_timeout 10s;
            proxy_read_timeout 10s;
        }

        # API endpoints
        location /api {
            proxy_pass http://whatsapp_bot;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        # Health check
        location /health {
            proxy_pass http://whatsapp_bot;
        }
    }
}
```

Health Checks

Health Endpoint:

```javascript
const express = require('express');
const app = express();

// Assumes `db` (MongoDB), `redis`, and `sock` (the WhatsApp socket)
// are initialized elsewhere in the application.

// Basic health check
app.get('/health', (req, res) => {
    res.json({ status: 'ok', timestamp: new Date() });
});

// Detailed health check
app.get('/health/detailed', async (req, res) => {
    const health = {
        status: 'ok',
        timestamp: new Date(),
        uptime: process.uptime(),
        memory: process.memoryUsage(),
        checks: {}
    };

    // Check database
    try {
        await db.command({ ping: 1 });
        health.checks.database = 'ok';
    } catch (error) {
        health.checks.database = 'error';
        health.status = 'degraded';
    }

    // Check WhatsApp connection
    health.checks.whatsapp = sock?.user ? 'connected' : 'disconnected';
    if (!sock?.user) health.status = 'degraded';

    // Check Redis
    try {
        await redis.ping();
        health.checks.redis = 'ok';
    } catch (error) {
        health.checks.redis = 'error';
    }

    const statusCode = health.status === 'ok' ? 200 : 503;
    res.status(statusCode).json(health);
});
```
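The degraded-status logic above can be factored into a small pure helper, which is easier to unit-test than the route itself. A sketch (the function name and the choice of which checks count as critical are ours):

```javascript
// Collapse individual check results into one overall status.
// Only failures of "critical" dependencies degrade the service;
// other failures (e.g. the Redis cache) are reported but non-fatal.
function aggregateHealth(checks, critical = ['database', 'whatsapp']) {
    const failing = Object.entries(checks)
        .filter(([, state]) => state !== 'ok' && state !== 'connected');
    if (failing.length === 0) return 'ok';
    return failing.some(([name]) => critical.includes(name)) ? 'degraded' : 'ok';
}
```

The route handler would then set `health.status = aggregateHealth(health.checks)` once, instead of mutating the status inside each try/catch.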

Logging

Winston Logger:

```javascript
const winston = require('winston');
const DailyRotateFile = require('winston-daily-rotate-file');

const logger = winston.createLogger({
    level: process.env.LOG_LEVEL || 'info',
    format: winston.format.combine(
        winston.format.timestamp(),
        winston.format.errors({ stack: true }),
        winston.format.json()
    ),
    defaultMeta: { service: 'whatsapp-bot' },
    transports: [
        // Console
        new winston.transports.Console({
            format: winston.format.combine(
                winston.format.colorize(),
                winston.format.simple()
            )
        }),
        // Daily rotate file
        new DailyRotateFile({
            filename: 'logs/app-%DATE%.log',
            datePattern: 'YYYY-MM-DD',
            maxSize: '20m',
            maxFiles: '14d'
        }),
        // Error file
        new DailyRotateFile({
            filename: 'logs/error-%DATE%.log',
            datePattern: 'YYYY-MM-DD',
            level: 'error',
            maxSize: '20m',
            maxFiles: '30d'
        })
    ]
});

// Log incoming/outgoing messages (direction supplied by the caller)
function logMessage(direction, wa_id, message) {
    logger.info('Message', {
        direction,
        wa_id,
        type: message.type,
        preview: message.text?.body?.substring(0, 50)
    });
}

module.exports = logger;
```
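Message logs contain phone numbers, which are personal data; masking them before they hit disk is a sensible extra step. A minimal sketch (the helper name and masking policy are ours, not part of the original logger):

```javascript
// Mask all but the last 4 digits of a WhatsApp ID before logging
function maskWaId(waId) {
    // Replace every digit that still has at least 4 digits after it
    return String(waId).replace(/\d(?=\d{4})/g, '*');
}
```

`logMessage()` above would then log `maskWaId(wa_id)` instead of the raw ID.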

Monitoring

Prometheus Metrics:

```javascript
const promClient = require('prom-client');

// `app` is the Express instance defined elsewhere in the service

// Default metrics
promClient.collectDefaultMetrics();

// Custom metrics
const messageCounter = new promClient.Counter({
    name: 'whatsapp_messages_total',
    help: 'Total messages processed',
    labelNames: ['direction', 'type']
});

const responseTime = new promClient.Histogram({
    name: 'whatsapp_response_time_seconds',
    help: 'Response time in seconds',
    buckets: [0.1, 0.5, 1, 2, 5]
});
// Usage: const end = responseTime.startTimer(); ...handle message...; end();

const activeConnections = new promClient.Gauge({
    name: 'whatsapp_active_connections',
    help: 'Number of active WhatsApp connections'
});

// Track messages
function trackMessage(direction, type) {
    messageCounter.inc({ direction, type });
}

// Metrics endpoint
app.get('/metrics', async (req, res) => {
    res.set('Content-Type', promClient.register.contentType);
    res.end(await promClient.register.metrics());
});
```

Alert Setup:

```javascript
const axios = require('axios');

async function sendAlert(title, message, severity = 'warning') {
    // Telegram
    if (process.env.TELEGRAM_BOT_TOKEN) {
        await axios.post(
            `https://api.telegram.org/bot${process.env.TELEGRAM_BOT_TOKEN}/sendMessage`,
            {
                chat_id: process.env.TELEGRAM_CHAT_ID,
                text: `🚨 *${title}*\n\n${message}\n\nSeverity: ${severity}`,
                parse_mode: 'Markdown'
            }
        );
    }

    // Slack
    if (process.env.SLACK_WEBHOOK) {
        await axios.post(process.env.SLACK_WEBHOOK, {
            text: `🚨 *${title}*\n${message}`
        });
    }
}

// Usage
sock.ev.on('connection.update', (update) => {
    if (update.connection === 'close') {
        sendAlert(
            'WhatsApp Disconnected',
            `Bot disconnected at ${new Date().toISOString()}`,
            'critical'
        );
    }
});
```
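An alert on `close` usually pairs with an automatic reconnect, and retrying on a capped exponential backoff avoids hammering WhatsApp's servers. A minimal sketch (the helper is ours, not a Baileys API):

```javascript
// Delay before reconnect attempt N: 1s, 2s, 4s, ... capped at 60s
function backoffDelay(attempt, baseMs = 1000, maxMs = 60000) {
    return Math.min(baseMs * 2 ** attempt, maxMs);
}
```

The connection handler would then schedule something like `setTimeout(reconnect, backoffDelay(attempt++))` alongside the alert, resetting `attempt` to 0 once the connection opens again.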

Scaling Considerations

📈 SCALING WHATSAPP BOT:

CHALLENGE:
- WhatsApp = 1 number per session
- A single number cannot be scaled horizontally
- WhatsApp's rate limits are the bottleneck

SOLUTIONS:

1. MULTIPLE NUMBERS:
   • 1 bot instance per number
   • Load balance across numbers
   • Separate use cases per number

2. QUEUE SYSTEM:
   • Redis/RabbitMQ as the message queue
   • Separate workers for processing
   • Rate limiting at the queue level

3. MICROSERVICES:
   • WhatsApp connector (stateful)
   • Message processor (stateless, scalable)
   • Database service
   • Analytics service
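Rate limiting at the queue level (point 2 above) can be as simple as a token bucket in front of the sender. An illustrative sketch (the class and the numbers are ours; queue libraries such as Bull also ship a built-in limiter):

```javascript
// Token bucket: allows short bursts up to `burst`, sustained `ratePerSec` after that
class TokenBucket {
    constructor(ratePerSec, burst, now = Date.now()) {
        this.ratePerSec = ratePerSec;
        this.capacity = burst;
        this.tokens = burst;
        this.last = now;
    }

    // Returns true if a send is allowed right now, false if it must wait
    tryRemove(now = Date.now()) {
        const elapsed = (now - this.last) / 1000;
        this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.ratePerSec);
        this.last = now;
        if (this.tokens >= 1) {
            this.tokens -= 1;
            return true;
        }
        return false;
    }
}
```

An outgoing worker would call `tryRemove()` before each send and re-queue the job when it returns false.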

Queue-Based Architecture:

```javascript
const Queue = require('bull');

// Message processing queue
const messageQueue = new Queue('message-processing', {
    redis: process.env.REDIS_URL
});

// Outgoing queue, rate-limited at the queue level via Bull's limiter option
const outgoingQueue = new Queue('outgoing-messages', {
    redis: process.env.REDIS_URL,
    limiter: { max: 10, duration: 1000 }  // at most 10 sends per second
});

// The WhatsApp connector only receives messages and pushes them onto the queue
sock.ev.on('messages.upsert', async ({ messages }) => {
    for (const msg of messages) {
        await messageQueue.add('incoming', {
            message: msg,
            timestamp: Date.now()
        });
    }
});

// Separate workers handle processing (and can be scaled out)
// worker.js
messageQueue.process('incoming', async (job) => {
    const { message } = job.data;

    // Process the message (AI, database, etc.)
    const response = await processMessage(message);

    // Push the response onto the outgoing queue
    await outgoingQueue.add('send', {
        to: message.from,
        message: response
    });
});
```

Best Practices

DO ✅

- Use Docker for reproducibility
- Implement health checks
- Centralized logging
- Monitor metrics
- Auto-restart on failure
- Regular backups
- SSL/TLS everywhere
- Rate limiting

DON'T ❌

- Deploy without monitoring
- Skip health checks
- Log to stdout only
- No alerting
- Manual restarts
- Skip backups
- HTTP without SSL
- No rate limiting

FAQ

How much RAM do I need?

Minimum 512MB; 1-2GB is recommended for a bot with moderate traffic.

Which VPS should I use?

DigitalOcean, Vultr, or Linode to start; AWS/GCP for enterprise workloads.

Do I need Kubernetes?

Only at large scale. Start with Docker Compose and migrate to K8s when you need it.


Conclusion

Production = Reliability + Monitoring!

Development → Production:
- Manual run → Auto-restart
- Console log → Centralized logging
- No monitoring → Full observability
- Single instance → Scalable
