
Fixing MongoDB sort exceeded memory limit error with aggregation and projection

Written by Codemzy on December 2nd, 2025

Here's how I fixed the `sort exceeded memory limit` MongoDB error without `allowDiskUse`, by optimising the query: switching my find command to an aggregation pipeline and rearranging the order of its stages.

I recently hit this error on MongoDB: `Executor error during find command: database.collection :: caused by :: Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.` And that was bad, because some of my queries were returning errors instead of results.

This usually occurred with paged data, when there were over 3,000 results, and the error only happened around page 30+.

Reading the error, I thought I could use the `allowDiskUse` option as a simple fix, but it turns out that since MongoDB 6.0, `allowDiskUseByDefault` is true, so `allowDiskUse` was already enabled by default.

Starting in MongoDB 6.0, pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default.

- MongoDB allowDiskUseByDefault

Unfortunately, this wasn't happening for my database because it was on the Flex tier, and it turns out that Flex clusters don't support the `allowDiskUse` option.
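
For reference, on a cluster tier that does support it, the explicit opt-in looks something like this (shown for completeness only, since it wasn't available to me):

// explicit opt-in to external (on-disk) sorting, not supported on Flex
let posts = await db.collection('posts')
 .find({ author: userId })
 .sort({ date: -1 })
 .allowDiskUse(true)
 .toArray();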

So how can I fix this error?

Before I reached out to upgrade to a dedicated cluster, I wondered if I could optimise the query.

Optimising the query

This is the find command I was using:

let posts = await db.collection('posts')
 .find({ author: userId })
 .sort({ date: -1 })
 .skip(skip)
 .limit(pageLimit + 1)
 .project({ title: 1, type: 1, date: 1, author: 1 })
 .toArray();

I use `pageLimit + 1` to know if there is a next page of data; see my MongoDB next page trick post for more details.
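
In short, detecting the next page looks something like this (a minimal sketch, with `hasNextPage` being my own naming):

// if the query returned more than pageLimit documents, the extra one
// means there's another page; drop it before returning the results
let hasNextPage = posts.length > pageLimit;
if (hasNextPage) {
 posts.pop(); // remove the extra document
}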

And it worked great for the first 2,000 results. After that, it got a bit slow. And after 3,000 results, I'd hit the error.

I'd already done the most basic query optimisations and made sure I had an index to support the query.

I checked, and the query was using the index, so that was a plus. But obviously, not the reason I was running into this pesky error.

Creating an index is a good place to start if you haven't already. In this case, the index is on the author field.

db.posts.createIndex( { author: 1 } );
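
If you want to check whether a query is using an index, you can ask for the query plan with `.explain()`. Here's a quick sketch; the winning plan should show an `IXSCAN` stage rather than a `COLLSCAN`:

// inspect the query plan and look for IXSCAN in the winning plan
let plan = await db.collection('posts')
 .find({ author: userId })
 .sort({ date: -1 })
 .explain("executionStats");
console.log(plan.queryPlanner.winningPlan);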

The index was being used, but the problem wasn't with the find; it was with the sort.

And I was limiting the query results, but I could only do that after sorting them. If you limit before a sort, MongoDB only sorts the first X results, and that can return the wrong documents, especially if you use `.limit()` with `.skip()` too.

I needed to think outside the box.

Move the order of .sort() and .project()

As I was hunting around the MongoDB forums, I found this comment that mentioned moving the project stage to reduce the size of the records returned (and therefore reduce the amount of memory used to sort those records).

And that made sense to me, since the documents in my posts collection contained fields like description and content that were quite large, but I didn't need them returned for these pages of data.
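
For a sense of scale, a post document looked something like this (an illustrative shape I've made up for this example, not the real schema):

// illustrative post document: description and content dominate the size,
// but the list pages only need title, type, date, and author
let examplePost = {
 title: "My post",
 type: "article",
 date: new Date("2025-01-01"),
 author: "user123",
 description: "a long summary...", // large text field
 content: "an even longer post body...", // very large text field
};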

So I tried moving the project stage earlier in the command, before the sort:

let posts = await db.collection('posts')
 .find({ author: userId })
 .project({ title: 1, type: 1, date: 1, author: 1 })
 .sort({ date: -1 })
 .skip(skip)
 .limit(pageLimit + 1)
 .toArray();

It didn't work. 🤯

I nearly gave up on this idea until I found out that the find() command has a general order of operations that I couldn't control.

  • match
  • sort
  • skip
  • limit
  • project

In other words, although I was putting the project before the sort in my command, it was still being run last by MongoDB.

That's why the order of `.skip()` and `.limit()` is not important in find commands, which I have written about previously. But order is important in aggregation.

So if I switch my command to an aggregation query, I can specify the order of operations, or, more accurately for aggregation, the order of the pipeline stages!

Change to an aggregation pipeline

Since I need the project to happen before the sort, and the find command won't let me control that, switching to an aggregation pipeline lets me specify the order of each stage.

Here's how the operation looks as an aggregation pipeline:

let posts = await db.collection('posts').aggregate([
 { $match: { author: userId }},
 { $project: { title: 1, type: 1, date: 1, author: 1 }},
 { $sort: { date: -1 }},
 { $skip: skip },
 { $limit: pageLimit + 1 },
]).toArray();

Now the `$project` stage happens before the `$sort` stage, meaning the documents only carry the four projected fields (plus `_id`) when they get sorted, which shrinks the documents and reduces the memory needed to sort them.

And it worked.

I could get all the pages, even when there were lots of results, and the query ran faster too.
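
Put together, a paged query might look something like this (a sketch; `getPostsPage` and its parameters are my own naming, not from the original code):

// hypothetical helper: fetch one page of posts, newest first
async function getPostsPage(db, userId, page, pageLimit) {
 let skip = (page - 1) * pageLimit;
 let posts = await db.collection('posts').aggregate([
  { $match: { author: userId }},
  { $project: { title: 1, type: 1, date: 1, author: 1 }},
  { $sort: { date: -1 }},
  { $skip: skip },
  { $limit: pageLimit + 1 }, // one extra document to detect a next page
 ]).toArray();
 let hasNextPage = posts.length > pageLimit;
 return { posts: posts.slice(0, pageLimit), hasNextPage };
}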

No more `sort exceeded memory limit` error!

