Too many open files exception on AWS machine with high configuration

My sincere apologies if the question is basic, but I am a novice here (a front-end developer who recently started working on the backend).

I have my app running on an Amazon AWS machine. I want to utilize my resources efficiently so that more requests can be served.

I am running a Java Vert.x server that serves GET and WebSocket requests. I have created three instances of this server running on different ports and balanced the load with nginx.

My AWS machine's disk layout is:

lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   100G  0 disk 
└─xvda1 202:1    0   100G  0 part /

My soft limit is unlimited

ulimit -S
unlimited

My hard limit is unlimited

ulimit -H
unlimited

I am checking the total number of open files with:

sudo lsof -Fn -u root | wc -l
13397

Why am I getting this exception?

java.io.IOException: Too many open files

The output of ulimit -a is:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 128305
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 700000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 128305
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

What is the best way to check the number of available file descriptors and the number of descriptors in use? And how should I use my resources so that I can handle a large number of connections?

Please let me know.

asked by CuriousMind


2 Answers

I believe you are checking the wrong limit.

From man ulimit

If no option is given, then -f is assumed.

This means ulimit -S returns the same as ulimit -S -f, and likewise ulimit -H is the same as ulimit -H -f.

The option -f means

The maximum size of files written by the shell and its children

The exception java.io.IOException: Too many open files complains about open files, so you need to check the maximum number of open file descriptors instead.

# as root
$ ulimit -S -n
1024
$ ulimit -S -n 2048
2048

On CentOS 7, man ulimit says for option -n:

-n The maximum number of open file descriptors (most systems do not allow this value to be set)

On some systems you might not be able to change it.
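To make a higher limit stick across logins, the usual place is /etc/security/limits.conf (or a drop-in file under /etc/security/limits.d/). A minimal sketch, assuming the Vert.x instances run as a hypothetical user called appuser; adjust the user name, file name, and values to your setup:

# /etc/security/limits.d/90-nofile.conf  (assumed file name; any file in limits.d works)
appuser  soft  nofile  700000
appuser  hard  nofile  700000

# log in again (PAM applies these at login), then verify in the new shell:
$ ulimit -S -n

Note that limits.conf only affects sessions that go through PAM; if the server is started by systemd, the unit's LimitNOFILE= setting takes precedence.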

answered by SubOptimal


I am assuming that your hard limit and soft limit are set properly, but you are getting this error because Vert.x is not able to use the full ulimit you have set.

Check the maximum limit your Vert.x server can actually use with:

cat /proc/PID/limits

 Max open files    700000      700000    files

is the line that tells you.
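To answer the question of how to check the descriptors available and in use, here is a small sketch; the pgrep -f java pattern is an assumption, so substitute the actual PID of your Vert.x instance if needed:

# find the PID of the Vert.x instance (assumes it shows up as a plain java process)
$ PID=$(pgrep -f java | head -1)

# the limit the running process actually has (compare with your shell's ulimit -n)
$ grep "Max open files" /proc/$PID/limits

# how many descriptors that process currently holds open
$ sudo ls /proc/$PID/fd | wc -l

# system-wide view: allocated descriptors, free, and the kernel maximum
$ cat /proc/sys/fs/file-nr

If the per-process limit printed here is lower than what ulimit -n shows in your shell, the process did not inherit your shell's limit.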

If you have set the soft limit high but this value still comes out low, then something in the way the app is started (such as an init script) is lowering the soft limit.

Find that init script and raise the soft limit there; that will fix your problem.

https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors

answered by Juvenik


