That's my first question on Stack Overflow; I was mostly able to find what I needed here already, thanks a lot for that.
However: if I try to kill my ProcessPoolExecutor, it just works through the whole queue of submitted jobs anyway (I think?). Is there a simple way to immediately clear the pending queue of a ProcessPoolExecutor?
from concurrent.futures import ProcessPoolExecutor
from time import sleep
from random import randint

def something_fancy():
    sleep(randint(0, 5))
    return 'im back!'

class Work:
    def __init__(self):
        self.exe = ProcessPoolExecutor(4)

    def start_procs(self):
        for i in range(300):
            t = self.exe.submit(something_fancy)
            t.add_done_callback(self.done)

    def done(self, f):
        print(f.result())

    def kill(self):
        self.exe.shutdown()

if __name__ == '__main__':
    work_obj = Work()
    work_obj.start_procs()
    sleep(5)
    work_obj.kill()
So what I want to do is fill a queue with 300 jobs that get worked through by 4 processes. After 5 seconds it should just quit.
I need to use processes rather than threads because of the GIL, by the way.
Using shutdown(wait=False) makes the call return sooner (the default for wait is True). Each Future also provides a .cancel() method, which returns False if the future cannot be cancelled.
See the documentation for Executor.shutdown().
It will still finish all pending futures, though:
If wait is True then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed. If wait is False then this method will return immediately and the resources associated with the executor will be freed when all pending futures are done executing. Regardless of the value of wait, the entire Python program will not exit until all pending futures are done executing.
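Since shutdown() alone won't discard the queued work, one workaround is to keep references to the futures you submit and call .cancel() on each of them when time is up. This is a minimal sketch (not your original Work class; slow_task and run_and_cancel are made-up names, and the task counts are scaled down from 300 for brevity):

```python
from concurrent.futures import ProcessPoolExecutor
import time

def slow_task(n):
    """Stand-in for something_fancy; sleeps, then returns its argument."""
    time.sleep(1)
    return n

def run_and_cancel(n_tasks=50, wait_seconds=2):
    exe = ProcessPoolExecutor(4)
    futures = [exe.submit(slow_task, i) for i in range(n_tasks)]
    time.sleep(wait_seconds)
    # cancel() only succeeds for futures still waiting in the queue;
    # it returns False for futures that are already running or done.
    cancelled = sum(1 for f in futures if f.cancel())
    exe.shutdown()  # now only waits for the few still-running futures
    return cancelled, n_tasks

if __name__ == '__main__':
    cancelled, total = run_and_cancel()
    print('cancelled %d of %d queued tasks' % (cancelled, total))
```

The running futures still finish, but everything that hasn't started yet is dropped, so the program exits after at most one task's duration instead of draining the whole queue.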
If you have a fixed amount of time, you can instead pass a timeout to Executor.map:

map(func, *iterables, timeout=None, chunksize=1)

timeout can be an int or a float, given in seconds; a TimeoutError is raised if a result is not available in time.
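The timeout is counted from the original map() call and surfaces while you iterate over the results. A small sketch under assumed names (work, collect_with_timeout) of collecting whatever finishes before the deadline:

```python
from concurrent.futures import ProcessPoolExecutor, TimeoutError
import time

def work(delay):
    time.sleep(delay)
    return delay

def collect_with_timeout(delays, timeout):
    """Collect as many results as arrive before the deadline."""
    results = []
    with ProcessPoolExecutor(4) as exe:
        try:
            # The timeout is measured from the original map() call;
            # asking for a result past the deadline raises TimeoutError.
            for value in exe.map(work, delays, timeout=timeout):
                results.append(value)
        except TimeoutError:
            pass  # stop collecting partial results
    return results

if __name__ == '__main__':
    print(collect_with_timeout([0.1, 0.1, 3, 3], timeout=1))
```

Note that this only stops *collecting*: tasks that are already running keep going, and exiting the with block (an implicit shutdown(wait=True)) still waits for them, consistent with the documentation quoted above.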