About this talk

Node is single threaded and uses a non-blocking event loop for I/O, but CPU-intensive tasks still block it. If we want to use more CPU cores to parallelize the processing, we must create new processes and distribute the load among them. This talk describes our initial problem and the iterations we went through until we found a passable solution.

Transcript

Hello everybody, my name is Marco, and I'll talk about interprocess communication in Node.js. I saw that there are many people here who work with Node.js, so this will be [inaudible]. Currently I'm working for Infonova as a software developer. It's a company based in Graz, and we also have offices here, and we are hiring if somebody is interested in working on cool and interesting projects, for example for ÖBB. Currently I'm working there on a few back office projects which are part of the biggest and coolest project in Austria, the new ticket shop. You've probably already seen it. All jokes aside, it was quite a big project, about 200 people. We have been working on it for two years now, and the end is nearing; the results are already there.

Our problem in this back office project: we have Node.js as backend and frontend, and the problem is that Node is single threaded. We have some long-running CPU-intensive processes, in particular the station import and export, and the comparison of what is new, what was changed, and so on. During these long processes the GUI would just stop and the system would be unusable, in the worst case for about an hour and a half. Our users reasonably wanted at least read-only access to the system while importing or exporting, which is a perfectly logical request, so we did everything we could.

Our first idea was just to use Node's cluster module. We start a few servers; we have about 10 users in total, so a cluster of five servers should be enough. Cluster is a module which comes with a standard Node installation, and it is quite easy to set up, so we tried it. I can also show the results. So here, I have the code. Where is my cluster? No, my cluster's gone. cluster.js. Yep, there it is. So we require cluster, and if this is the master process, then we fork our workers, and if not, then the workers can do their magic. With these twenty-something lines of code, we have started a cluster.
And in this case, I am just starting an HTTP server. It writes to the console, so we get output whenever a process did something, and here I have a very, very intelligent way of simulating a long-running process; we can even determine how long we want it to run. Let's try to run it. So I'm in the cluster folder: node cluster. It's running, so I have my few workers. At the beginning, I look at how many cores I have and start that many servers. And let's see how it behaves. I start a new tab, port 8000. So now it's doing something. "Hello world from process 8585." Refreshing: again the same. Another refresh. Another refresh. So it's not very smart, let's say: all of our requests came through the same worker. If we try to do it in parallel, now we have two, 8586. Oh, but the second one was also on the same worker. So long story short, it didn't bring anything. With cluster you have no guarantee of which process will handle the next request. It's a bit better than the original single-threaded solution, but it still did not solve our problem. Yeah, I know it to be true. So our search continued. I browsed some forums and found a second approach: Node has the possibility to start child processes using either spawn or fork, and we decided to try it. A child process fork is a special case of a child process spawn: spawn can spawn any system process, while fork is specifically for Node applications. It looks like this: we require fork and simply fork our child Node application, and it runs in its own separate process. Simple. In this case, we decided to send the data using command line parameters, and the child uses interprocess communication to report [inaudible]. It looks like this, for example, or let's see the code. fork.js. So we require fork and start the same HTTP server. In this case, the child does the very difficult job of counting to nine million, I think. It also writes to the console, and then it's done.
It returns the result. So here we have process.send, and the parent receives the message; in the parent we have child as a handle to our forked process, and we can use child.on('message') and do something with this message. Let's see how that works. Bigger server. So we started our server, and if we try to call localhost:8000 again... it does nothing. Ah, yep: child process 8634 responded with a very intelligent result. [inaudible] Okay, next one, refresh. We have another process [inaudible], so every time we have a new request, we spawn another new child process.

So in our production code, it looks like this. We parallelized our export here, which does a full dump of the [inaudible] database into a file and sends it to the server. It's all encapsulated in a stand-alone export file, and we fork it from our server; in the child process we call the functions, and when it's done, we simply send, "We are done," and that's it. Then the server gets the "I'm done" message, shows it on the GUI, and the rest of the GUI is still responsive. And with that, our export was done.

But with import, we got another problem. Of course, I tried to copy-paste the code, and found out it doesn't work. With export, we send a small amount of data: some startup params saying what should be exported, where, and who started the export. The child does its magic, uploads the files to the server and says, "I'm done." So it's about a few lines of data. But with import, we have to show what was imported: which are the new records, what was deleted, and which were unchanged. In the case of the station data set, it can be about 10 megabytes of data with some 4,000 stations. And I learned the hard way that IPC has an internal limit of about 8 to 20k, depending on server internals, so we cannot simply use process.send for this data.
Then I googled a bit. The obvious solution would be to use a message queue or something like that, but I didn't want to involve operations, because there's always a bit of resistance when they have to install and maintain new server components, so I tried to solve the problem myself. And I found node-ipc, a Node module available on npm for local and remote interprocess communication, exactly what I needed. It has full support out of the box for Linux, Mac and Windows. It also supports socket communication over the network, which means that the two processes don't have to run on the same machine.

So let's see an example in code. Here we have a bit more boilerplate. First we include node-ipc and then do some configuration: we have to give some ID to our server, and if silent is not true, we can see all our messages in the console, which helps if we have problems with delivering them. Then I fork my client, which I will show in a bit, and define some callback functions for the different messages which our client can send. So when our client has started, I say, "Okay, you are ready," and send the operation start message; we can send any data along with it. We emit messages with server.emit, and [inaudible] server stop. The client looks like this: the same configuration, then we connect to our server and say, "Okay, client has started." Then our server sends the data with the operation start message, we do the same counting, and then the client should shut down, so we call process.exit in the client. Let's see if it works. Let's hope so. Here we have node_modules, because now we have some dependencies, and I start it with node server. So the client starts, operations are sent, and we don't see much: the client only writes one line per operation. Change operation, and server stop. And so our first version of the protocol was: server starts the client; client says, "I'm ready"; server sends some data.
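Based on the node-ipc README, the server/client pair described above looks roughly like this. The IDs, message names and payloads are illustrative placeholders, not the original code, and the snippets assume the node-ipc package is installed from npm:

```javascript
// server.js — requires node-ipc (npm install node-ipc)
const ipc = require('node-ipc');

ipc.config.id = 'compareServer'; // the ID we give our server
ipc.config.retry = 1500;
ipc.config.silent = false;       // with silent=false, every message is logged

ipc.serve(() => {
  // One callback per message type the client can send.
  ipc.server.on('client.ready', (data, socket) => {
    // Client is up: hand it the work with an "operation start" message.
    ipc.server.emit(socket, 'operation.start', { what: 'compare' });
  });
  ipc.server.on('client.done', (result, socket) => {
    console.log('result:', result);
    ipc.server.stop(); // shut down the IPC server when finished
  });
});
ipc.server.start();
```

```javascript
// client.js — same configuration, then connect and announce ourselves
const ipc = require('node-ipc');

ipc.config.id = 'compareClient';
ipc.config.retry = 1500;

ipc.connectTo('compareServer', () => {
  ipc.of.compareServer.on('connect', () => {
    ipc.of.compareServer.emit('client.ready');
  });
  ipc.of.compareServer.on('operation.start', (data) => {
    // ... the long-running compare would happen here ...
    ipc.of.compareServer.emit('client.done', { changed: 0 });
    process.exit(0); // the client shuts itself down when done
  });
});
```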
Client does its magic, which can last up to an hour, and sends the result back. The server shuts down the IPC server and shuts down the client. The production code looked like this in the first version, with the client messages; it is the same code that I showed, with some logging, etc. So I start my server and await the ready message from the client, then I send the data to compare, and when the compare is done, I send the result to the GUI. The client connects to the Mongo database and tries to compare the data which it received with the database content. So far so good. For the simple case, it was good enough. Yay.

But I found out that child process IPC is very slow, about 100 to 10,000 times slower than it should be, and it scales with the message size, of course. The problem is in the serialization: the JSON serialization is unnaturally slow for some reason. So the second try was to use [inaudible] messages, which use raw buffers for data transfer. Then again, binary buffers: I looked at the Node documentation and saw the deprecation notice for the old Buffer constructor, of course. So I used Buffer.from, which looked like this: in comparison with the previous example, here I have a raw buffer, a start-up parameter, and zipping and unzipping of all data that is sent and received. But then I deployed it to the server... and it turned out Buffer.from was only added in Node 5.10. And I swear that note was not there when I read the documentation. On my work machine I had Node 6-something, and our servers were on the long term support version, 4.2 or something like that, so it all crashed. And then I found this wonderful example of JavaScript code which was always [inaudible], and then they added it to the Node documentation too, but it wasn't there before. So then I cleaned out all the Buffer.from parts and the [inaudible]. A few weeks later we had Node 6 on the servers; it was a nice exercise to bounce back and forth. So now we have the new Buffer API; there we are, we replicated the solution. And it worked.
Unbelievable. But [inaudible] like that, everything with the same Node version as our server. It just worked... with a small data set. We have the business partners, the private railroads, etc., which are not very many. I did that one first because this import is about one minute long, while the station import is 30 minutes long. So with the simplest case, everything worked on my work machine and on the server too. But with the full data set, we had another problem. On my machine, it worked. On our server, the process died silently, without any error messages, console logs, anything. It just worked, worked, and then it died. I couldn't debug on the server, so I console-logged every line and restarted the server every few minutes, a sort of imitation live coding on the server, [inaudible] and I found out that the message size limit on Linux is 64k.

And so came the new protocol. node-ipc, or the socket below it, handles this completely transparently, without any messages: if the message is smaller, it is transferred in one piece, and if the message is bigger, it gets transferred in chunks. So I added these few blue steps [on the slide]: I calculate the data size, then I send it to the client, like in a [inaudible] protocol, something like HTTP. The client says, "Thanks, ready for data." Then I send all the data; I don't know if it arrives in chunks or in one piece. The client receives and assembles the data and does its magic, then sends the result size back to the server, because the result size can also be very big if you have many changes. Then the server assembles the response data and stops the IPC server, and they both die at the end. So the final version of the server, without some error handling, looks like this, albeit with more ifs when I receive messages. I use the very advanced method of checking whether my message is a data message or a control message, so I send a few control messages and data messages. [inaudible]. And so I receive my data packets and concat them using Buffer.concat.
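The receiving side of that size-first protocol boils down to reassembly logic like this (a sketch with hypothetical names, not the production code):

```javascript
// Build a handler that collects incoming socket chunks until the
// announced total size has arrived, then hands the reassembled buffer
// to `onDone` — on Linux, a single chunk tops out at about 64k.
function makeReceiver(expectedSize, onDone) {
  const chunks = [];
  let received = 0;
  return function onChunk(chunk) {
    chunks.push(chunk);
    received += chunk.length;
    if (received >= expectedSize) {
      // Everything has arrived: concat the packets, like the talk's
      // Buffer.concat step, and report the result.
      onDone(Buffer.concat(chunks, expectedSize));
    }
  };
}
```

The sender first emits a control message announcing the total size, then streams the data; control and data messages are told apart, exactly as in the talk, by checking the message type before feeding anything into the receiver.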
And then, when everything's done, die: I shut down the server and return the data to the GUI. In the client we have similar logic. At the beginning, I look at how big my message is, then send it [inaudible]. On "shut down," I shut down. And here we have the combining of the data which I already explained. Yes. All done. It worked, and it's still working today, so this last version seems to be good enough. Questions? Did any one of you have problems like this? Okay, your question first. - [inaudible] - Because that would mean that operations has to install a new component, and then I have to write emails, and I avoid all forms of human communication. - Do you have any kind of sessions [inaudible]? - Yes, they are stored on the server, I think. Send the... send them [inaudible]. More questions, please? - [inaudible] - Yep, you can configure the IP of the server, and then it does the same thing transparently. Yep, I think so. Maybe I could have done that. I didn't want to write my own protocol, and that's it. Yeah. We also have quite a restrictive firewall policy, etc., so everything new is frowned upon. And also, as I said, we don't get the opportunity to do something like this every day, so it was fantastic, even the debugging part. More questions? - All right, thank you. - Thank you.

Sessions by Pusher
