Logging¶
The QuantFin Bot includes comprehensive logging to help monitor trading activities, debug issues, and track system performance. The logging system is built with Python's standard logging module and provides multiple levels of detail.
Log Levels¶
The system uses standard Python logging levels:
- DEBUG: Detailed information for diagnosing problems
- INFO: General information about system operation
- WARNING: Indication of potential issues
- ERROR: Serious problems that prevented a function from completing
- CRITICAL: Very serious errors that may cause the system to stop
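As a minimal sketch of how these thresholds behave (the logger name here is illustrative, not from the bot's codebase), setting a logger to INFO suppresses DEBUG records while letting INFO and above through:

```python
import logging

# Illustrative logger name; not taken from the bot's source.
logger = logging.getLogger("bot.example")
logger.setLevel(logging.INFO)

# With the threshold at INFO, DEBUG checks return False and
# WARNING checks return True.
debug_enabled = logger.isEnabledFor(logging.DEBUG)
warning_enabled = logger.isEnabledFor(logging.WARNING)
```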
Log Files¶
Main Log Files¶
bot.log
- Main trading bot log file
- Trading strategy executions
- Order placements and fills
- Position updates
- Error handling
api.log
- API request/response log
- All REST API calls
- Authentication events
- Rate limiting events
- Request/response payloads
exchange.log
- Exchange connection log
- Bluefin exchange connectivity
- WebSocket connection status
- Market data feed issues
- Order routing problems
wallet.log
- Wallet operations log
- Sui wallet connections
- Transaction signing
- Balance updates
- Blockchain interactions
Backup and Rotation¶
Logs are automatically rotated to prevent excessive disk usage:
- Maximum file size: 10MB per log file
- Backup count: 5 backup files retained
- Rotation: files are rotated when the maximum size is reached
- Cleanup: old backups are automatically deleted
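This policy corresponds to a standard RotatingFileHandler, as a sketch (the file and logger names mirror the bot.log example; the format string matches the one in logging.conf):

```python
import logging
import logging.handlers

# Rotation policy described above: 10MB per file, 5 backups retained.
handler = logging.handlers.RotatingFileHandler(
    "bot.log", mode="a", maxBytes=10 * 1024 * 1024, backupCount=5
)
handler.setFormatter(
    logging.Formatter("[%(asctime)s] [%(levelname)s] [%(name)s] %(message)s")
)
logging.getLogger("bot").addHandler(handler)
```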
Log Configuration¶
Log Format¶
All logs follow a consistent format:
[TIMESTAMP] [LEVEL] [MODULE] [USER_ID] MESSAGE
Example:
[2024-01-15 10:30:45,123] [INFO] [bot_service] [user123] Strategy started: xgridt_mm_both
[2024-01-15 10:30:46,456] [DEBUG] [exchange] [user123] Order placed: BUY 10 SUI-USDC @ 1.8345
[2024-01-15 10:30:47,789] [WARNING] [wallet] [user123] Low balance warning: 45.2 USDC remaining
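One way to reproduce this layout is a format string with a `user_id` field supplied per record via `extra` (the field name is an assumption for illustration, not taken from the bot's source):

```python
import logging

# Format string mirroring [TIMESTAMP] [LEVEL] [MODULE] [USER_ID] MESSAGE;
# the user_id attribute is a hypothetical per-record field.
formatter = logging.Formatter(
    "[%(asctime)s] [%(levelname)s] [%(name)s] [%(user_id)s] %(message)s"
)

record = logging.LogRecord(
    "bot_service", logging.INFO, __file__, 0,
    "Strategy started: xgridt_mm_both", None, None,
)
record.user_id = "user123"  # normally passed via logger.info(..., extra={...})
line = formatter.format(record)
```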
Configuration File¶
Logging is configured in logging.conf:
[loggers]
keys=root,bot,api,exchange,wallet
[handlers]
keys=consoleHandler,fileHandler,rotatingFileHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=INFO
handlers=consoleHandler,rotatingFileHandler
[logger_bot]
level=DEBUG
handlers=fileHandler
qualname=bot
propagate=0
[handler_rotatingFileHandler]
class=handlers.RotatingFileHandler
level=DEBUG
formatter=simpleFormatter
args=('bot.log', 'a', 10485760, 5)
[formatter_simpleFormatter]
format=[%(asctime)s] [%(levelname)s] [%(name)s] %(message)s
datefmt=%Y-%m-%d %H:%M:%S
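At startup, an INI file in this format is loaded with logging.config.fileConfig. The sketch below writes a trimmed-down variant (console handler only, demo file name) so it is self-contained; a real deployment would point fileConfig at logging.conf itself:

```python
import logging
import logging.config
from pathlib import Path

# Trimmed-down variant of logging.conf, written to a demo file so this
# sketch is self-contained; in production, load the real logging.conf.
conf = """\
[loggers]
keys=root,bot

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=INFO
handlers=consoleHandler

[logger_bot]
level=DEBUG
handlers=consoleHandler
qualname=bot
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stderr,)

[formatter_simpleFormatter]
format=[%(asctime)s] [%(levelname)s] [%(name)s] %(message)s
"""
Path("logging_demo.conf").write_text(conf)

logging.config.fileConfig("logging_demo.conf", disable_existing_loggers=False)
bot_logger = logging.getLogger("bot")  # picks up the [logger_bot] settings
```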
Monitoring Logs¶
Real-time Monitoring¶
Using tail command:
# Monitor main bot log
tail -f bot.log
# Monitor API calls
tail -f api.log
# Monitor all logs simultaneously
tail -f *.log
Using grep for filtering:
# Show only errors
grep "ERROR" bot.log
# Show logs for specific user
grep "user123" bot.log
# Show trading activity
grep "Order\|Position\|Trade" bot.log
Log Analysis¶
Common patterns to monitor:
- Trading Performance:
  grep -E "(Trade executed|Position closed)" bot.log | tail -20
- Error Patterns:
  grep -E "(ERROR|CRITICAL)" *.log | tail -50
- API Usage:
  grep -E "(POST|GET)" api.log | wc -l
Troubleshooting with Logs¶
Common Issues and Log Patterns¶
Connection Issues:
[ERROR] [exchange] Connection to Bluefin failed: timeout
[WARNING] [wallet] Sui wallet connection lost, attempting reconnection
Trading Errors:
[ERROR] [bot] Order placement failed: insufficient balance
[ERROR] [bot] Position size exceeds risk limits
[WARNING] [bot] Market volatility detected, adjusting strategy
Configuration Problems:
[ERROR] [config] Invalid strategy parameter: xgrid_levels must be > 0
[WARNING] [config] Using default values for missing parameters
Log-based Debugging¶
- Enable Debug Mode:
  import logging
  logging.getLogger('bot').setLevel(logging.DEBUG)
- Add Custom Log Messages:
  logger.debug(f"Strategy state: {strategy.get_state()}")
  logger.info(f"Order placed: {order_details}")
- Correlation IDs: each trading session gets a unique correlation ID for tracking related events:
  [INFO] [bot] [user123] [session_abc123] Strategy started
  [DEBUG] [bot] [user123] [session_abc123] Position opened: 150 SUI
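One way to inject such IDs into every message is a LoggerAdapter; in this sketch the field names and the "session_" prefix are assumptions, not taken from the bot's source:

```python
import logging
import uuid

# Hypothetical correlation-ID wiring: the adapter prepends the user and
# session IDs to every message passed through it.
class SessionAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        ids = f"[{self.extra['user_id']}] [{self.extra['session_id']}]"
        return f"{ids} {msg}", kwargs

session_id = f"session_{uuid.uuid4().hex[:6]}"
log = SessionAdapter(
    logging.getLogger("bot"),
    {"user_id": "user123", "session_id": session_id},
)
log.info("Strategy started")  # every message now carries both IDs
msg, _ = log.process("Strategy started", {})
```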
Log Aggregation¶
Centralized Logging¶
For production deployments, consider using log aggregation:
ELK Stack Integration:
import logging
from pythonjsonlogger import jsonlogger

# JSON-formatted logs for Elasticsearch ingestion
logHandler = logging.StreamHandler()
formatter = jsonlogger.JsonFormatter()
logHandler.setFormatter(formatter)
logger = logging.getLogger("bot")
logger.addHandler(logHandler)
Structured Logging:
logger.info("Trade executed", extra={
"user_id": "user123",
"symbol": "SUI-USDC",
"side": "BUY",
"quantity": 10,
"price": 1.8345,
"pnl": 15.25
})
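If the python-json-logger dependency is not available, a stdlib-only formatter can achieve the same effect; this sketch serializes each record (plus a few assumed `extra` field names) as one JSON object per line:

```python
import json
import logging

# Stdlib-only alternative to python-json-logger: emit each record, plus
# known extra fields, as a single JSON object. The field list is an
# assumption for illustration.
class JsonFormatter(logging.Formatter):
    FIELDS = ("user_id", "symbol", "side", "quantity", "price", "pnl")

    def format(self, record):
        payload = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        for field in self.FIELDS:
            if hasattr(record, field):
                payload[field] = getattr(record, field)
        return json.dumps(payload)

# Build a record by hand to show the output shape.
record = logging.LogRecord("bot", logging.INFO, __file__, 0,
                           "Trade executed", None, None)
record.symbol, record.pnl = "SUI-USDC", 15.25
line = JsonFormatter().format(record)
```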
Log Metrics¶
Key metrics to track from logs:
- Error Rate: number of ERROR/CRITICAL messages per hour
- API Response Time: average time for API calls
- Trading Volume: number of trades per user per day
- System Health: connection uptime, memory usage
Performance Considerations¶
Log Volume Management¶
- Adjust log levels based on environment (DEBUG for development, INFO for production)
- Use conditional logging for expensive operations:
if logger.isEnabledFor(logging.DEBUG):
    logger.debug(f"Expensive calculation: {complex_calculation()}")
Async Logging¶
For high-performance scenarios, consider async logging:
import logging.handlers
import queue

# QueueHandler requires a thread-safe queue.Queue (asyncio.Queue is not
# compatible); a QueueListener drains it on a background thread
log_queue = queue.Queue(-1)
logger.addHandler(logging.handlers.QueueHandler(log_queue))
listener = logging.handlers.QueueListener(log_queue, logging.FileHandler("bot.log"))
listener.start()
Security Considerations¶
Sensitive Data¶
Never log sensitive information:
- API keys or tokens
- Full wallet addresses
- Personal user information
- Internal system tokens
Note on Private Keys: Private keys are end-to-end encrypted and never accessible to the system in plaintext. The platform cannot view, log, or decrypt private keys - they remain secure on the user's device and are only used for signing transactions locally.
Use redaction for sensitive data:
def redact_sensitive(data):
    # Work on a copy so the caller's dict is left untouched
    redacted = dict(data)
    if 'api_token' in redacted:
        redacted['api_token'] = '[REDACTED]'
    if 'wallet_address' in redacted:
        # Keep only a short prefix and suffix of the address
        addr = redacted['wallet_address']
        redacted['wallet_address'] = addr[:6] + '...' + addr[-4:]
    return redacted

logger.info(f"Wallet info: {redact_sensitive(wallet_data)}")
Log Access Control¶
- Restrict log file access to authorized users only
- Use log rotation to prevent sensitive data accumulation
- Consider encrypting log files for compliance requirements
Integration with Monitoring¶
Streamlit Dashboard¶
The admin dashboard (dashboard.py) displays real-time log information:
- Recent error messages
- Trading activity feed
- System health indicators
- Performance metrics
Alerts and Notifications¶
Configure alerts based on log patterns:
# Example alert rules; send_alert and send_immediate_alert are
# application-specific notification hooks
if error_count > 10:
    send_alert("High error rate detected")
if "CRITICAL" in log_message:
    send_immediate_alert(log_message)
For more advanced monitoring setup, see the Troubleshooting documentation.