Napkin math for MongoDB performance


Source: http://rickosborne.org/blog/2010/02/napkin-math-for-mongodb-performance/

 

As we all know, there are lies, damned lies, and statistics. What I’m about to present shouldn’t even qualify as statistics—it’s just a bunch of damned lies. I’m not set up to do any sort of rigorous performance testing, so these should not be construed as anything but what they are: one guy’s half-assed and probably flawed measurements.

I was playing around with MapReduce on MongoDB, trying to figure out how to code the equivalent of SQL’s COUNT(DISTINCT column) functionality. The short answer is: don’t do it. Or, if you do it, figure out a better way than I did. Along the way, I gathered some metrics on what types of operations cause what kinds of performance hits.

The Setup

My setup is a database of 3,397,115 records, all of which look something like this:
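(The sample document was lost from this copy of the post; judging by the fields the index and queries below use, each record presumably resembled the following, with illustrative values and field names:)

    {
        "_id"    : ObjectId("..."),
        "movie"  : 3456,      // Netflix movie ID
        "cust"   : 123456,    // Netflix customer ID
        "rating" : 4,         // 1-5 stars
        "year"   : 1990       // release year of the movie
    }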

Yeah, I just took the Netflix prize data and inserted ~3M records. I did the inserts across 3 shard services, all running on the same machine, which led to 9 chunks of roughly equal size. I let MongoDB handle the sharding—I didn’t manually split the shards. I ensured one index on the collection, over movie and cust, which isn’t really used for the query in question, but I thought it was worth mentioning.
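For reference, ensuring that index from the shell would look something like this ("ratings" as the collection name is my guess; ensureIndex was the shell helper of the day, spelled createIndex in modern shells):

    // Compound index over movie and cust, ascending. Collection name assumed.
    db.ratings.ensureIndex({ movie: 1, cust: 1 });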

Yeah, I know performance is going to suffer because I'm running 3 shards from the same hard drive. That's kind of the point.

I ran all of this on my MacBook Pro, which is a 2.66 GHz Core 2 Duo with 4GB of 1067 MHz DDR3. I continued to do other light-duty tasks while running the tests, but nothing that should have interfered greatly.

The Queries

Here’s the starting query’s SQL equivalent:
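(The original listing was lost from this copy; the following reconstruction is based on the surrounding description, which mentions a record count, an average, and two COUNT(DISTINCT) columns over a release-year restriction. Grouping by release year and the table/column names are my assumptions.)

    -- Per release year: how many ratings were made, their average, and how
    -- many distinct movies and customers were involved.
    SELECT year,
           COUNT(*)              AS ratings,
           AVG(rating)           AS avg_rating,
           COUNT(DISTINCT movie) AS movies,
           COUNT(DISTINCT cust)  AS custs
    FROM   ratings
    WHERE  year = 1990
    GROUP  BY year;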

And the MapReduce query itself, as I wrote it:
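(This listing is also missing here; below is a minimal sketch of that kind of mapReduce, matching the SQL above. The field names, grouping key, and inline output option are assumptions, not the author's exact code.)

    // Map: one emit per rating record, keyed by release year.
    var mapFn = function () {
        var value = { count: 1, total: this.rating, movies: {}, custs: {} };
        value.movies[this.movie] = true;  // distinct movies as object keys
        value.custs[this.cust] = true;    // distinct customers as object keys
        emit(this.year, value);
    };

    // Reduce: merge partial values; the for-in loops union the key sets.
    var reduceFn = function (key, values) {
        var result = { count: 0, total: 0, movies: {}, custs: {} };
        for (var i = 0; i < values.length; i++) {
            result.count += values[i].count;
            result.total += values[i].total;
            for (var m in values[i].movies) { result.movies[m] = true; }
            for (var c in values[i].custs)  { result.custs[c] = true; }
        }
        return result;
    };

    // Finalize: collapse the key sets into COUNT(DISTINCT)-style numbers.
    var finalizeFn = function (key, value) {
        var movies = 0, custs = 0;
        for (var m in value.movies) { movies++; }
        for (var c in value.custs)  { custs++; }
        return {
            ratings: value.count,
            avgRating: value.total / value.count,
            movies: movies,
            custs: custs
        };
    };

    db.ratings.mapReduce(mapFn, reduceFn, {
        query: { year: 1990 },
        finalize: finalizeFn,
        out: { inline: 1 }
    });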

Those nasty bits with the for-in loops are there for the COUNT(DISTINCT column) logic. How long this query takes to produce its result set is what the numbers below measure.

The Results

All times below are in mm:ss format. (Minutes, not hours.)

For queries 1-4, all run through mongos, the timings are Total, Shards, and Final Function:

Query 1 (Total 10:44, Shards 03:46, Final 06:58): This was the starting query above, as written.

Query 2 (Total 90:48, Shards 36:26, Final 54:22): I widened the release-year restriction from just 1990 to 1990-1999, via { year: { $gte: 1990, $lte: 1999 } }. That's close to a linear relationship between emitted records and time elapsed.

Query 3 (Total 21:33, Shards 13:53, Final 07:40): I used moveChunk to consolidate all of the chunks onto one shard server, then shut down the other two, and reduced the release-year restriction back to just 1990. It took 2x longer than the first query; presumably disk bottlenecks, with one shard trying to reduce 9 chunks at once?

Query 4 (Total 02:08, Shards 02:08, Final -): I removed the for-in loops and COUNT(DISTINCT) logic, leaving only the plain record count and average, while still on the one shard server. Against query 3, that implies a 10x slowdown for that type of logic.

For queries 5-6, the timings are Total, Map, and the Emit Loop:

Query 5 (Total 00:13, Map 00:06, Emit Loop 00:13): I connected to the one remaining shard directly, instead of through mongos, and ran the previous query (no for-in). Against query 4, this again implies a 10x slowdown due to trying to process chunks simultaneously.

Query 6 (Total 05:24, Map 00:15, Emit Loop 01:14): Still connected directly to the one shard (no mongos), with all of the records, I ran the original query (with for-in logic). A slowdown of 25x seems a little high, but I ran the query twice to verify it.
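The consolidation step in query 3 means one moveChunk command per chunk, issued through mongos, along these lines (the namespace, shard-key value, and shard name here are made up for illustration):

    // Run against the admin database via mongos, once per chunk to move.
    db.adminCommand({
        moveChunk: "netflix.ratings",
        find: { movie: 123 },   // any document in the chunk being moved
        to: "shard0000"         // name of the destination shard
    });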

Lessons Learned

  • Queries scream when a single shard is left to its own devices, but attempting parallelism on the same shard brings a massive performance hit. Don't run multiple shards off the same hard drive, no matter how many cores you have.
  • Don't try to emulate COUNT(DISTINCT). Really.

I have to wonder whether mongos could be tweaked to serialize queries against chunks that live on the same shard, to prevent disk-contention issues.

 
