When I was using run_load.command (from the CloudSuite package) to populate the data store, the Cassandra server kept crashing and reporting
java.lang.OutOfMemoryError: Java heap space
After trying different parameter combinations, I finally managed to populate the data store successfully.
Let me describe how I solved this.
In the file conf/cassandra.yaml, I tried to change the following parameters to different values.
binary_memtable_throughput_in_mb
memtable_throughput_in_mb
flush_largest_memtable_at
Basically, when these thresholds are met, e.g., the binary memtable grows beyond binary_memtable_throughput_in_mb, Cassandra flushes it to disk to relieve the pressure on memory.
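For reference, these entries live in conf/cassandra.yaml and take plain numbers (the unit is already MB for the first two, per the parameter names). With the values I eventually used, they look roughly like this; the 0.75 for flush_largest_memtable_at is, as far as I can tell, just the shipped default:
binary_memtable_throughput_in_mb: 32    # size limit (MB) for the binary memtable used during bulk loading
memtable_throughput_in_mb: 32           # size limit (MB) for a regular memtable before it is flushed
flush_largest_memtable_at: 0.75         # heap-usage fraction at which the largest memtable is force-flushed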
However, simply tuning these parameters did not solve the problem.
So, in conf/cassandra-env.sh, I also tried tuning
MAX_HEAP_SIZE
HEAP_NEWSIZE
Then the problem was solved. This surprised me, because I had assumed Cassandra would automatically choose the right heap size. However, it does not.
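These are just shell variables in conf/cassandra-env.sh. Whatever the script picks on its own was apparently too large for a 512 MB VM, so I set them by hand, roughly like this:
MAX_HEAP_SIZE="256M"    # total JVM heap given to Cassandra
HEAP_NEWSIZE="100M"     # young-generation size within that heap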
So, below is a summary of the environment settings that worked for me.
Machine: The smallest Rackspace VM (1 VCPU and 512 MB memory)
OS: Ubuntu 12.10
binary_memtable_throughput_in_mb: 32M
memtable_throughput_in_mb: 32M
MAX_HEAP_SIZE="256M"
HEAP_NEWSIZE="100M"
Note: these settings may lead to lower throughput because of the small memory footprint.
Picture today: Statue of Liberty [courtesy of my dear wife - Claire Huang]