Problem Description:
The import frequency is too high for compaction to keep up, so small data versions accumulate faster than they can be merged and the tablet version count exceeds the limit (the default limit is 1000).
Solution:
1. Increase the amount of data per import batch and reduce the import frequency.
2. Adjust the compaction settings in be.conf to speed up merging (after the change, observe memory and disk I/O):
cumulative_compaction_num_threads_per_disk = 4
base_compaction_num_threads_per_disk = 2
cumulative_compaction_check_interval_seconds = 2
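The first fix (bigger batches, lower frequency) can be done on the client side by buffering rows and flushing them in one large import instead of many tiny ones. Below is a minimal sketch of such a batching buffer; the `send` callback, the class name, and the threshold values are all hypothetical placeholders (in practice `send` would wrap your actual import call, e.g. a Stream Load request), not part of any Doris API:

```python
import time
from typing import Callable, List


class BatchingLoader:
    """Buffers rows and flushes them in large batches, so the storage
    engine sees fewer, bigger imports (hence fewer versions to compact)."""

    def __init__(self, send: Callable[[List[str]], None],
                 max_rows: int = 50000, max_interval_s: float = 10.0):
        self.send = send                      # hypothetical import function
        self.max_rows = max_rows              # flush when this many rows queue up
        self.max_interval_s = max_interval_s  # ...or when this much time passes
        self.buffer: List[str] = []
        self.last_flush = time.monotonic()

    def add(self, row: str) -> None:
        self.buffer.append(row)
        if (len(self.buffer) >= self.max_rows
                or time.monotonic() - self.last_flush >= self.max_interval_s):
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.send(self.buffer)  # one call here = one import = one new version
            self.buffer = []
        self.last_flush = time.monotonic()
```

Since each flush produces one new data version, raising `max_rows` (or the flush interval) directly lowers the rate at which versions are created, giving compaction room to catch up.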