Sentry troubleshooting
Postgres DB getting too big
In a self-hosted Sentry installation, the nodestore_node table stores the raw event payloads and usually accounts for most of the database's size. One option is to truncate it; note that this permanently deletes the stored event details. Enter the DB:
psql -U postgres
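If Postgres runs in a container, as in the standard self-hosted docker-compose setup, the same prompt can be reached from the host. A minimal sketch, assuming the compose service is named postgres (the self-hosted default); adjust the service name if yours differs:
docker compose exec postgres psql -U postgres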
Check the table sizes (the query lists tables with TOAST data, largest first):
postgres=# SELECT oid::regclass, reltoastrelid::regclass, pg_relation_size(reltoastrelid) AS toast_size FROM pg_class WHERE relkind = 'r' AND reltoastrelid <> 0 ORDER BY 3 DESC;
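For a before/after comparison it also helps to note the total database size. This uses only standard Postgres catalog functions:
postgres=# SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database ORDER BY pg_database_size(datname) DESC;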
Run truncate:
postgres=# TRUNCATE nodestore_node;
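TRUNCATE returns the disk space to the OS immediately, so no VACUUM is needed; you can verify with:
postgres=# SELECT pg_size_pretty(pg_total_relation_size('nodestore_node'));
To keep the table from growing back, Sentry's cleanup command prunes data older than a retention window. A sketch, assuming the self-hosted docker-compose layout with a running web service and a 30-day retention (adjust both to your installation):
docker compose exec web sentry cleanup --days 30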
Kafka too big
Try adapting the KAFKA_* env variables to this (note that KAFKA_LOG_RETENTION_BYTES caps each partition's log, not the broker as a whole):
KAFKA_LOG_RETENTION_BYTES: 53687091200
KAFKA_LOG_SEGMENT_BYTES: 1073741824
KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 300000
KAFKA_LOG_SEGMENT_DELETE_DELAY_MS: 60000
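To see how much space the broker currently uses, check the data volume from the host. Assuming the compose service is named kafka and the data path from the volume mapping shown in the example below:
docker compose exec kafka du -sh /var/lib/kafka/data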
Example from the docker-compose file:
...
  kafka:
    <<: *restart_policy
    depends_on:
      zookeeper:
        <<: *depends_on-healthy
    image: "confluentinc/cp-kafka:5.5.7"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
      KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: "1"
      KAFKA_LOG_RETENTION_HOURS: "24"
      KAFKA_LOG_RETENTION_BYTES: 53687091200
      KAFKA_LOG_SEGMENT_BYTES: 1073741824
      KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 300000
      KAFKA_LOG_SEGMENT_DELETE_DELAY_MS: 60000
      KAFKA_MESSAGE_MAX_BYTES: "50000000" # 50MB or bust
      KAFKA_MAX_REQUEST_SIZE: "50000000" # 50MB on requests apparently too
      CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      KAFKA_LOG4J_LOGGERS: "kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN"
      KAFKA_LOG4J_ROOT_LOGLEVEL: "WARN"
      KAFKA_TOOLS_LOG4J_LOGLEVEL: "WARN"
    ulimits:
      nofile:
        soft: 4096
        hard: 4096
    volumes:
      - "sentry-kafka:/var/lib/kafka/data"
      - "sentry-kafka-log:/var/lib/kafka/log"
      - "sentry-secrets:/etc/kafka/secrets"
    healthcheck:
      <<: *healthcheck_defaults
      test: ["CMD-SHELL", "nc -z localhost 9092"]
      interval: 10s
      timeout: 10s
      retries: 30
...
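After editing the compose file, recreate the Kafka container so the new settings take effect; with the five-minute retention check interval above, old segments should start being deleted shortly afterwards:
docker compose up -d kafka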
Tested on
- sentry_version: 23.8.0 (docker)
- PostgreSQL 14.5