
node.js parallel-io library

The fact that all the CPU work in a node.js process is done by a single thread continues to amaze me. But, considering the node.js cluster API, it all makes sense: having just one thread per CPU core is the "ideal situation", which hardly any piece of software ever achieves. I remember a good example on this topic from the "CLR via C#" book: the native Windows File Open dialog creates tens (!) of new threads when it opens up.
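
To make the "one thread per core" picture concrete, here is a minimal sketch using the cluster API to fork one worker per CPU core; the HTTP handler and port are placeholders, not part of any particular application:

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker process per CPU core.
  os.cpus().forEach(() => cluster.fork());
} else {
  // Each worker runs its own single-threaded event loop.
  http.createServer((req, res) => res.end('handled by ' + process.pid + '\n'))
      .listen(3000);
}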

So it's all good, but we pay the price for it in the amount of asynchrony we have to put into the source code. There are good libraries out there that help tame this additional complexity, but they usually don't focus on decreasing the perceptual complexity of the source code. That is what I tried to do in a tiny node.js library called "parallel-io". It helps in situations when you have multiple I/O operations that you want to run in parallel and then conveniently process their results all at once. Following is an example of its use:
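
(The embedded example did not survive here, so the snippet below is only a sketch of the intended usage pattern; the parallelIo, run and onAllDone names are assumptions for illustration, not the library's confirmed API.)

const fs = require('fs');
// Hypothetical interface for illustration; the real parallel-io API may differ.
const parallelIo = require('parallel-io');

parallelIo
  .run('config', (done) => fs.readFile('config.json', 'utf8', done))
  .run('users',  (done) => fs.readFile('users.json', 'utf8', done))
  .onAllDone((err, results) => {
    if (err) throw err;
    // All results arrive together, keyed by the names given above.
    console.log(results.config, results.users);
  });

The point of the pattern is that the parallelism and the result handling live in one place, instead of being scattered across nested callbacks.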
