 

Why has the event loop existed since the beginning of JavaScript, when there were almost no blocking operations?

I am trying to understand how the JavaScript runtime works with its single-threaded model. There is an event loop which moves the blocking operations (most of them I/O) to a different part of the runtime in order to keep the main thread clean. I find this model very innovative, by the way.
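To make sure I understand it, here is how I picture the loop: a minimal sketch (illustrative only; real engines also interleave timers, I/O completions and rendering):

```javascript
// A minimal sketch of an event loop: a queue of callbacks drained one at a
// time on a single thread. Each task runs to completion before the next.
const taskQueue = [];
const results = [];

function enqueue(task) {
  taskQueue.push(task);
}

// Simulate events arriving before the loop runs.
enqueue(() => results.push("click handled"));
enqueue(() => results.push("response handled"));

// The "loop": take the next task and run it to completion.
while (taskQueue.length > 0) {
  const task = taskQueue.shift();
  task();
}

console.log(results); // ["click handled", "response handled"]
```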

I assume this model has been part of JavaScript since its creation, and that most of the blocking I/O operations, like AJAX calls, were "discovered" about 5 years later. So in the beginning, what was the motivation for the single-threaded, non-blocking model if there were almost no blocking operations, and the language was only intended to validate forms and animate the screen? Was it long-term vision or just luck?

asked Nov 02 '25 by jesantana

1 Answer

As you already stated, event loops are for coping with slow I/O - or more generally, with operations that don't involve the CPU and happen somewhere other than where the code that needs their results is running.

But I/O is not just network and disk! There is I/O that is far slower than any device: Computers communicating with humans!

GUI input - clicking buttons, entering text - is all SLOOOOOWW because the computer waits for user input. Your code requires data from an external source (external to the CPU the code runs on).

GUI events are the primary reason for event-based programming. Think about it: how would you do GUI programming synchronously? (You could use preemption by the OS - described below.) You don't know when a user is going to click a button. Event-based programming is the best option (that we know of) for this particular task.

In addition, a requirement was to have only one thread, because parallel programming is HARD and JavaScript was meant to be for "normal users".

Here is a nice blog post I just found:

http://www.lanedo.com/the-main-loop-the-engine-of-a-gui-library/

Modern GUI libraries have in common that they all embody the Event-based Programming paradigm. These libraries implement GUI elements that draw output to a computer screen and change state in response to incoming events. Events are generated from different sources. The majority of events are typically generated directly from user input, such as mouse movements and keyboard input. Other events are generated by the windowing system, for instance requests to redraw a certain area of a GUI, indications that a window has changed size or notifications of changes to the session’s clipboard. Note that some of these events are generated indirectly by user input.




I would like to add this:

We have two major options for dealing with the problem of your code having to wait for an external event (i.e. data that cannot be computed on the CPU your code is running on, or retrieved from the directly attached RAM - anything that would leave the CPU unable to continue processing your code):

  • Events
  • Preemption by a "higher power" like the operating system.

In the latter case you can write sequential code and the OS will detect when your code requires data that is not there yet. It will stop the execution of your code and give the CPU to other code.

In a sense, the ubiquitous event-based paradigm in JavaScript is a step backwards: writing lots of event handlers for everything is a lot of work compared to just writing down what you want in sequence and letting the OS take care of managing the resource "CPU".

I noticed that I never felt like complaining when my event-based programming was for the GUI - but when I had to do it for disk and network I/O, it jumped out at me how much effort all the event handling was compared to letting the OS handle it in the background.

My theory: coping with humans (their actions) in event handlers felt natural - it was the entire purpose of the software, after all (GUI-based software). But when I had to do all the event-based work for devices, it felt unnatural: why should I have to accommodate the hardware in my programming?

In a sense, the event-based programming that came upon us is a step away from previous dreams of "4th generation languages" and back towards more hardware-oriented programming - for the sake of machine efficiency, not programmer efficiency. It takes A LOT of getting used to to write event-based code. Writing synchronously and letting the OS take care of resource management is actually easier - unless you are so used to event-based code that you now have a knee-jerk reaction against anything else.

But think about it: In event based programming we let physical details like where our code is executed and where it gets data from determine how we write the code. Instead of concentrating on what we want we are much more into how we want it done. That is a big step away from abstraction and towards the hardware.

We are now slowly developing and introducing tools that help us with that problem, but even things like promises still require us to think "event-based" - we use such constructs where we have events, i.e. we have to be aware of the breaks. So I don't see THAT much gain, because we still have to write code that has such "breaks" (i.e. leaves the CPU) differently.
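You can see this in async/await, which reads almost sequentially but still forces every potential "break" to be marked. A sketch (the `fetchValue` function is a made-up stand-in for a network call):

```javascript
// Even with promises/async-await, the breaks stay visible: every point
// where the code may leave the CPU must be marked with `await`.
function fetchValue() {
  // Stand-in for slow I/O: resolve with a value after a short delay.
  return new Promise((resolve) => setTimeout(() => resolve(42), 10));
}

async function main() {
  const before = "sync part";
  const value = await fetchValue(); // <-- the break: control returns to the loop here
  return `${before}: ${value}`;
}

main().then((s) => console.log(s)); // prints "sync part: 42"
```

The syntax is nicer than nested callbacks, but the programmer still has to know exactly which calls leave the CPU - which is the point above: the physical details still shape how the code is written.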

answered Nov 04 '25 by Mörre


