Ramon Snir

Node.js HTTP Server CPU Leak from Long Responses

The Node.js HTTP module comes with the built-in response.write(chunk, [encoding]) method. What many people don't know is that it doesn't behave consistently across Node.js versions. The problem affects different versions differently, starting to hit at around 1~2MB of response data. The version most affected is Node.js 0.10, which we are still using in production (thanks to Node.js's habit of breaking compatibility between versions). I'll work in this article with Node.js 0.10, which has the worst symptoms, starting at response lengths greater than 1,031,795 bytes, whereas the other versions have lighter symptoms (still noticeable at 0.12, but only barely noticeable at 5) that begin closer to the 2MB mark.

Let’s look at this code:

var http = require('http');
var port = 9944;

function server(req, res) {
  // Parse the desired response length from the URL, e.g. GET /1031796.
  var length = parseInt(req.url.match('[0-9]+')[0], 10);
  // Build a string of `length` dots.
  var response = new Array(length + 1).join('.');
  res.write(response);
  res.end();
}

http
  .createServer(server)
  .listen(port);
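
To exercise the server, here's a minimal client sketch (nothing here beyond Node's own http module; the length in the URL is just an example value):

var http = require('http');

// Request a ~1MB response and time the full round trip.
var start = Date.now();
http.get('http://localhost:9944/1031796', function (res) {
  res.on('data', function () {}); // drain the response
  res.on('end', function () {
    console.log('received in', Date.now() - start, 'ms');
  });
});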

This code is hardly efficient, and most people would point first at the array/join obscenity. But we aren't going to look at any line of code other than res.write(response);. What we noticed (sadly, not in our initial review of Node.js, but only over a year later in production) is that res.write(response), while taking less than a millisecond even for solid responses of hundreds of kilobytes, has a very sharp cut-off point at 1,031,796 bytes.
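
If you want to see the cut-off point yourself, a quick-and-dirty way (my own sketch, not part of the original measurement setup) is timing the write inside the handler with process.hrtime:

function server(req, res) {
  var length = parseInt(req.url.match('[0-9]+')[0], 10);
  var response = new Array(length + 1).join('.');
  var start = process.hrtime();
  res.write(response);
  var diff = process.hrtime(start);
  // diff is [seconds, nanoseconds]; convert to milliseconds.
  console.log('res.write took', (diff[0] * 1e3 + diff[1] / 1e6).toFixed(2), 'ms');
  res.end();
}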

How rough?

With length = 1,031,795 this line takes about half a millisecond, with length = 1,031,796 it takes (on my blogging laptop) 16(!) milliseconds. On my development machine it takes 13 milliseconds, and on a production server (which also has a real job to do) it takes 30(!!) milliseconds.

Why is this so bad? Node.js developers are used to thinking that it's OK for things to take time, as operations in Node.js are (if the developers are good) all asynchronous, so the time something takes allows other operations to run concurrently. But res.write here is a synchronous operation, and blocks the entire Node.js process for (in production) 30 milliseconds. This takes a process that handles hundreds of heavy requests per second and turns it into one that can handle only about 30 such requests per second.
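
You can watch the blocking happen: schedule a frequent timer and log whenever a tick fires late. A small sketch (the 10ms interval and 5ms threshold are arbitrary choices of mine):

var last = Date.now();
setInterval(function () {
  var now = Date.now();
  var lag = now - last - 10; // how late this tick fired
  if (lag > 5) {
    console.log('event loop blocked for ~' + lag + 'ms');
  }
  last = now;
}, 10);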

Luckily, solving this is very easy: manual chunking.

var chunkSize = 64 * 1024;
// Write the response in 64KB chunks to stay below the slow path.
for (var cursor = 0; cursor < response.length; cursor += chunkSize) {
  res.write(response.substr(cursor, chunkSize));
}
res.end();
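
One caveat I should add to this sketch: res.write returns false when the outgoing buffer is full, and a well-behaved producer waits for the 'drain' event before writing more. A variant that respects backpressure (the writeChunked helper is my own naming, not a Node.js API):

function writeChunked(res, response, chunkSize, callback) {
  var cursor = 0;
  function writeMore() {
    while (cursor < response.length) {
      var chunk = response.substr(cursor, chunkSize);
      cursor += chunkSize;
      // write() returning false means the buffer is full;
      // wait for 'drain' before continuing.
      if (!res.write(chunk)) {
        res.once('drain', writeMore);
        return;
      }
    }
    callback();
  }
  writeMore();
}

writeChunked(res, response, 64 * 1024, function () {
  res.end();
});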