Today we ran into a problem where Cassandra failed to start with the following stack trace:
ERROR 2011-04-18 12:53:01,759 Fatal exception in thread Thread[FlushWriter:1,5,main]
java.io.IOError: java.io.IOException: Map failed
	at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:172)
	at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:149)
	at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:190)
	at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:169)
	at org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:163)
	at org.apache.cassandra.db.Memtable.access$000(Memtable.java:51)
	at org.apache.cassandra.db.Memtable$1.runMayThrow(Memtable.java:176)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Map failed
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
	at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:164)
	... 10 more
Caused by: java.lang.OutOfMemoryError: Map failed
	at sun.nio.ch.FileChannelImpl.map0(Native Method)
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)
	... 11 more
Additional strange errors in the logs were:
Java HotSpot(TM) 64-Bit Server VM warning: Attempt to deallocate stack guard pages failed.
Java HotSpot(TM) 64-Bit Server VM warning: Attempt to allocate stack guard pages failed.
It took us a while to figure out, but after using strace, some help in #cassandra on freenode, and a knowledgeable guy (thanks Ed), we determined that Cassandra was trying to mmap too many files. The kernel enforces a per-process limit on the number of memory mappings, exposed via /proc/sys/vm/max_map_count. Eventually an strace run pointed it out to us:
mmap(NULL, 135168, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = -1 ENOMEM (Cannot allocate memory)
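If you need to gather the same evidence yourself, you can attach strace to the running JVM and log only the mmap calls. A rough sketch (the pgrep pattern is an assumption; match it to whatever your Cassandra process is actually called):

# attach to the running Cassandra JVM, follow its threads, log only mmap calls
strace -f -e trace=mmap -o cassandra.strace -p $(pgrep -f CassandraDaemon)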
According to man 2 mmap:
ENOMEM No memory is available, or the process’s maximum number of mappings would have been exceeded.
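Since ENOMEM can mean either of those two things, it is worth checking whether the process is actually near the mapping limit before blaming memory. A quick sanity check along these lines (again assuming the process matches CassandraDaemon):

# the kernel's per-process mapping limit (65530 by default on most Linux systems)
cat /proc/sys/vm/max_map_count
# how many mappings the Cassandra process currently holds
wc -l < /proc/$(pgrep -f CassandraDaemon)/maps

If the second number is close to the first, you are hitting the mapping limit, not running out of memory.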
So we ended up increasing max_map_count with:
sysctl -w vm.max_map_count=131072
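Note that sysctl -w only changes the running kernel, so the value resets on reboot. To make it permanent, something like this should work on most distributions:

# persist the new limit across reboots
echo 'vm.max_map_count = 131072' >> /etc/sysctl.conf
sysctl -p    # reload /etc/sysctl.conf and print the applied values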
It’s a bit hackish (I personally don’t think Cassandra should mmap that many files), but it solved our problem. Hope it helps someone!
Hi there,
We had the same problem. Today I found out on the mailing lists that version 1.1.3 of Cassandra solves this problem, so upgrading might be worth it.
Best regards,
Robin
Thanks for the tip!