Oct 3, 2013

CloudSuite - Cassandra server - out of memory error

I was trying to run YCSB against Cassandra on a VM with 1 VCPU and 512 MB of memory. When I used run_load.command (from the CloudSuite package) to populate the data store, the Cassandra server kept crashing and reporting
java.lang.OutOfMemoryError: Java heap space 
After trying different parameter combinations, I finally managed to populate the data store. Let me describe how I solved this.

In the file conf/cassandra.yaml, I first tried changing the following parameters:
binary_memtable_throughput_in_mb
memtable_throughput_in_mb
flush_largest_memtable_at
Basically, when one of these thresholds is crossed, e.g., a binary memtable grows past binary_memtable_throughput_in_mb, Cassandra flushes the memtable to disk to relieve memory pressure. However, simply tuning these parameters did not solve the problem.
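For context, these knobs live in conf/cassandra.yaml. A fragment with small values (the ones I eventually settled on) looks roughly like this; exact names and defaults depend on the Cassandra version CloudSuite ships, so treat this as a sketch:

```yaml
# conf/cassandra.yaml (Cassandra 0.7/1.0-era parameter names; check your version)
binary_memtable_throughput_in_mb: 32   # flush a binary memtable once it passes 32 MB
memtable_throughput_in_mb: 32          # flush a regular memtable once it passes 32 MB
flush_largest_memtable_at: 0.75        # flush the largest memtable when heap usage
                                       # passes this fraction (0.75 is the usual default)
```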

So, in conf/cassandra-env.sh, I also tuned
MAX_HEAP_SIZE
HEAP_NEWSIZE
Then the problem was solved. This surprised me because I had assumed the script would automatically choose sensible sizes. It does not.
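Concretely, the fix amounts to uncommenting and overriding two lines in conf/cassandra-env.sh. This is a sketch of what I set; overriding these disables the script's automatic sizing, which on a 512 MB VM picks values that are too large:

```shell
# Relevant overrides in conf/cassandra-env.sh
MAX_HEAP_SIZE="256M"   # total JVM heap (passed as -Xmx/-Xms)
HEAP_NEWSIZE="100M"    # young-generation size (passed as -Xmn)
```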

So, below is a summary of the working settings.
Machine: The smallest Rackspace VM (1 VCPU and 512 MB memory)
OS: Ubuntu 12.10
binary_memtable_throughput_in_mb: 32
memtable_throughput_in_mb: 32
MAX_HEAP_SIZE="256M"
HEAP_NEWSIZE="100M"

Note: these settings may lead to lower throughput because of the small memory footprint.
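As a rough rule of thumb (my own assumption, not something the Cassandra docs prescribe), giving the heap about half of the machine's RAM, as 256 MB on a 512 MB VM does, leaves the rest for the OS and page cache. A quick shell sketch of that arithmetic on Linux:

```shell
# Derive a conservative MAX_HEAP_SIZE as half of total RAM (Linux),
# mirroring the 512 MB -> 256 MB choice above.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
heap_mb=$(( total_kb / 1024 / 2 ))
echo "MAX_HEAP_SIZE=\"${heap_mb}M\""
```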

Picture today: Statue of Liberty [courtesy of my dear wife - Claire Huang]


