I have combined Dropbox with Celery in my app so that users can have their own photos stored if they have their Dropbox account connected.
I have written a piece of code, but I am worried that it might lead to an infinite loop that will kill the system.
The API I am tapping into only returns 60 photos at a time, and then provides pagination for the rest.
Here is a copy of my tasks.py file. It actually works fine, but I want to check that I am doing the right thing and not impacting the system too much.
class DropboxUsers(PeriodicTask):
    run_every = timedelta(hours=4)

    def run(self, **kwargs):
        logger = self.get_logger(**kwargs)
        logger.info("Collecting Dropbox users")
        dropbox_users = UserSocialAuth.objects.filter(provider='dropbox')
        for db in dropbox_users:
            ...
            ...
            ...
            sync_images.delay(first, second, third_argument)
        return True
@task(ignore_result=True)
def sync_images(token, secret, username):
    """docstring for sync_images"""
    logger = sync_images.get_logger()
    logger.info("Syncing images for %s" % username)
    ...
    ...
    ...
    ...
    feed = api.user_recent_media(user_id='self', count=60)
    images = feed[0]
    pagination = feed[1]
    for obj in images:
        ### STORE TO DROPBOX
        ...
        ...
        ...
        response = dropbox.put_file(f, my_picture, overwrite=True)
    ### CLOSE DB SESSION
    sess.unlink()
    if pagination:
        store_images.delay(first, second, third, fourth_argument)
@task(ignore_result=True)
def store_images(token, secret, username, max_id):
    """docstring for store_images"""
    logger = store_images.get_logger()
    logger.info("Storing images for %s" % username)
    ...
    ...
    ...
    ...
    feed = api.user_recent_media(user_id='self', count=60, max_id=max_id)
    images = feed[0]
    try:
        pagination = feed[1]
    except IndexError:
        pagination = None
    for obj in images:
        ### STORE TO DROPBOX
        ...
        ...
        ...
        response = dropbox.put_file(f, my_picture, overwrite=True)
    ### CLOSE DB SESSION
    sess.unlink()
    if pagination:
        ### BASICALLY RESTART THE TASK WITH NEW ARGS
        store_images.delay(first, second, third, fourth_argument)
    return True
Your expertise is much appreciated.
I don't see any major problems. I have also built systems where one task kicks off another task.
For a while, I was having problems with Celery duplicating tasks on server restart. I wrote a decorator that wraps around a task and uses the caching back-end to ensure that the same task with the same arguments isn't run too often. It might be useful as a hedge against infinite loops for you.
from hashlib import sha1

from django.core.cache import cache as _djcache
from django.utils.functional import wraps


class cache_task(object):
    """Makes sure that a task is only run once over the course of a configurable
    number of seconds. Useful for tasks that get queued multiple times by accident,
    or on service restart, etc. Uses django's cache (memcache) to keep track."""

    def __init__(self, seconds=120, minutes=0, hours=0):
        self.cache_timeout_seconds = seconds + 60 * minutes + 60 * 60 * hours

    def __call__(self, task):
        task.unsynchronized_run = task.run

        @wraps(task.unsynchronized_run)
        def wrapper(*args, **kwargs):
            # One cache key per (task, arguments) combination
            key = sha1(str(task.__module__) + str(task.__name__) + str(args) + str(kwargs)).hexdigest()
            is_cached = _djcache.get(key)
            if not is_cached:
                # store the cache key BEFORE running to cut down on race conditions caused by long tasks
                if self.cache_timeout_seconds:
                    _djcache.set(key, True, self.cache_timeout_seconds)
                task.unsynchronized_run(*args, **kwargs)

        task.run = wrapper
        return task
Usage:

@cache_task(hours=2)
@task(ignore_result=True)
def store_images(token, secret, username, max_id):
    ...
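One note on the race-condition comment inside the wrapper: setting the key before running shrinks the window, but get() followed by set() is still two separate cache round trips, so two workers can occasionally both slip through. Django's cache.add() only stores the key if it does not already exist (and with the memcached backend that check-and-set is a single operation), so only the first caller wins. A rough, self-contained sketch of the same idea as a plain function decorator; the name run_once_per is purely illustrative:

from functools import wraps
from hashlib import sha1

from django.core.cache import cache


def run_once_per(seconds=120):
    """Skip the wrapped call if an identical call ran within the last `seconds`."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = sha1(
                (func.__module__ + func.__name__ + repr(args) + repr(kwargs)).encode('utf-8')
            ).hexdigest()
            # cache.add() returns True only for the first caller within the
            # timeout window; every other caller gets False and skips the body.
            if cache.add(key, True, seconds):
                return func(*args, **kwargs)
        return wrapper
    return decorator

You would stack it above @task the same way as the cache_task decorator above; the behaviour is the same, it just leans on the cache back-end to do the "has this already run?" check in one step.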