I've been reading about Java Fibers as small units of work that get mapped onto Threads. When a Fiber makes a blocking call, a different Fiber can be mounted on the same Thread. Since Threads in Java are kernel-level threads, this prevents them from getting exhausted.
I've been using Spring WebFlux, so I wanted to understand what happens internally when a Netty server receives 100 requests/sec, each of which involves reactive database access. How are these requests mapped to the 40 threads that the Netty server spawns by default?
How is a Flux different from a Fiber? How does a Flux guarantee asynchronous behaviour with a limited number of threads?
What happens internally when a Netty server receives 100 requests/sec, each of which involves reactive database access? How are these requests mapped to the 40 threads that the Netty server spawns by default?
In brief, Netty assigns those requests round-robin to its underlying event-loop threads, as those threads become available. The same thing happens with all other reactive calls, with the caveat that, depending on configuration, they may run on other schedulers and therefore on other underlying thread pools with different numbers of threads.
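Here's a minimal Reactor sketch of that idea (not the literal WebFlux/Netty internals; the slowDatabaseLookup method is made up for illustration). Each simulated database call completes asynchronously, and publishOn shifts the downstream work onto a different scheduler's pool, so no request holds a thread while it waits:

```java
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class SchedulerDemo {

    // Simulates a reactive database driver: emits a row after a delay
    // without ever blocking the caller's thread.
    static Mono<String> slowDatabaseLookup(String id) {
        return Mono.just("row-" + id)
                   .delayElement(Duration.ofMillis(200)); // async delay, no Thread.sleep
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 100; i++) {
            final int request = i;
            slowDatabaseLookup(String.valueOf(request))
                // move downstream work onto Reactor's bounded elastic pool
                .publishOn(Schedulers.boundedElastic())
                .subscribe(row -> System.out.printf(
                        "request %d completed on %s%n",
                        request, Thread.currentThread().getName()));
        }
        // give the async pipelines time to finish before the JVM exits
        Thread.sleep(1000);
    }
}
```

Running this shows the completions spread across the boundedElastic worker threads, while the loop that "received" the 100 requests finished almost immediately.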
How is a Flux different from a Fiber?
That's a very big topic, but the high-level overview is that "flux" (by which I assume you mean reactive Java in general rather than a Flux itself) is an asynchronous model where no thread is allowed to block, whereas Fibers are "green" threads designed to be used synchronously: when a Fiber hits a blocking call, the JVM unmounts it from its carrier thread and mounts another one, so many Fibers map onto far fewer underlying kernel-level threads.
In practice, that means you can keep using much the same threading model and coding techniques you use today with Fibers, whereas reactive programming requires you to adopt new paradigms.
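To make the contrast concrete, here's a hedged side-by-side sketch (it assumes Java 21+, where Fibers shipped as virtual threads, plus Reactor on the classpath). The virtual-thread version keeps ordinary blocking code; the reactive version models the same wait as a non-blocking pipeline:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import reactor.core.publisher.Mono;

public class StyleComparison {

    public static void main(String[] args) throws Exception {
        // Fiber/virtual-thread style: ordinary sequential code, blocking calls allowed.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() -> {
                try {
                    Thread.sleep(200); // parks the virtual thread, not a kernel thread
                    System.out.println("virtual thread done");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return null;
            });
        } // close() waits for the submitted task to finish

        // Reactive style: the wait is an asynchronous event; no thread blocks.
        Mono.delay(Duration.ofMillis(200))
            .map(tick -> "reactive pipeline done")
            .doOnNext(System.out::println)
            .block(); // block() only here, to keep the demo's main thread alive
    }
}
```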
How does a Flux guarantee asynchronous behaviour with a limited number of threads?
Quite simply, because it's architected to be asynchronous. The question seems to rest on a false premise: asynchronous behaviour isn't guaranteed (or not) by the number of threads available, but by the programming model. It can't "spill over" into synchronous behaviour if it's overwhelmed by requests.
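As a small illustration of that point (pure Reactor, no WebFlux; the numbers are arbitrary), the sketch below pins everything to a single scheduler thread and still completes 100 concurrent 500 ms "requests" in roughly 500 ms total, because none of them ever blocks that thread:

```java
import java.time.Duration;
import java.util.concurrent.CountDownLatch;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class SingleThreadAsync {

    public static void main(String[] args) throws InterruptedException {
        Scheduler single = Schedulers.newSingle("only-thread");
        CountDownLatch done = new CountDownLatch(1);
        long start = System.currentTimeMillis();

        Flux.range(1, 100)
            // each "request" waits 500 ms asynchronously on the single thread
            .flatMap(i -> Mono.delay(Duration.ofMillis(500), single).thenReturn(i))
            .count()
            .subscribe(count -> {
                System.out.printf("%d requests finished in ~%d ms on one thread%n",
                        count, System.currentTimeMillis() - start);
                done.countDown();
            });

        done.await();      // prints ~500 ms, not 100 * 500 ms
        single.dispose();
    }
}
```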