I set up a logger with the following code:
import logging

log = logging.getLogger('base')
logfilename = <path to logfile>
logFile = logging.FileHandler(logfilename)
log.setLevel(logging.DEBUG)
logFile.setFormatter(logging.Formatter('[%(asctime)s]: [%(filename)s:%(lineno)d:%(funcName)s]: %(levelname)s :: %(message)s', datefmt='%m-%d-%Y %H:%M:%S'))
log.addHandler(logFile)
Since the log files were large, I wanted to create a rotating log file. Hence I made the following change:
# logFile = logging.FileHandler(logfilename)
logFile = RotatingFileHandler(logfilename, maxBytes=1024)  # maxBytes=1024 only for testing
However, the resulting log file isn't rotated; I still get log files that are a few MB in size. I have cleared all the .pyc files.
Questions:
1. maxBytes: I assume it is actual bytes (so in my case the log should be rotated every 1 kB) and nothing else. Am I correct?
2. Is there a minimum maxBytes below which rotating is ineffective? (I suppose not.)

I work with Python 2.7.14 (Anaconda) and 3.6.4 (Anaconda).
You need to set a backupCount value, or change the file mode from appending to truncating on open.
The log file is closed, but then re-opened again for appending, so you never actually see a difference.
What happens now is: the handler decides a rollover is due, closes the file, and, because backupCount is zero, simply re-opens it in append mode. Only when backupCount is greater than zero does doRollover() rotate any existing backup files and rename the current log file to add a .1 suffix.

Apart from setting backupCount to a number higher than 0, you could also change the mode parameter to 'w', at which point you'll find that the file is truncated each time it would get too large:
# 'rotate' logfile by truncating:
logFile = RotatingFileHandler(logfilename, mode='w', maxBytes=1024)
Note that the file can still become larger than maxBytes, if the new message itself is greater than 1024 bytes long.
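If you do want rotated backups rather than truncation, a minimal working sketch looks like this (the file name, maxBytes=1024, and backupCount=5 are illustrative values, not from the question):

```python
import logging
from logging.handlers import RotatingFileHandler

log = logging.getLogger('demo')
log.setLevel(logging.DEBUG)

# backupCount=5 keeps app.log.1 ... app.log.5; anything older is deleted.
handler = RotatingFileHandler('app.log', maxBytes=1024, backupCount=5)
log.addHandler(handler)

# Write enough records (~3.7 kB total) to trigger several rollovers.
for i in range(200):
    log.debug('message number %d', i)
handler.close()
```

After this runs, app.log holds the newest records and app.log.1 holds the previous chunk, each staying under maxBytes.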
There is no option to retain all rotated files. You'd indeed have to use an insanely high number, or use a different file rotation strategy. For example, the TimedRotatingFileHandler rotates files after a given interval, and if you leave backupCount at zero, it'll never delete rotated backups.
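A sketch of that time-based alternative (the logger name, file name, and when='midnight' are illustrative); with backupCount left at its default of 0, rotated files accumulate indefinitely:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

log = logging.getLogger('timed_demo')
log.setLevel(logging.DEBUG)

# Rotate once a day at midnight; with backupCount=0 the rotated files
# (timed.log.<date suffix>) are never deleted.
handler = TimedRotatingFileHandler('timed.log', when='midnight', backupCount=0)
log.addHandler(handler)
log.debug('written to timed.log until the next midnight rollover')
handler.close()
```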
You could also subclass RotatingFileHandler to implement your own renaming strategy by providing your own doRollover() method. You need to generate unique names if you want to retain all backups; you could add a UUID to ensure this (together with the date):
import uuid
from datetime import datetime
from logging.handlers import RotatingFileHandler
class InfiniteRotatingFileHandler(RotatingFileHandler):
    """RotatingFileHandler that keeps every rotated file."""
    def doRollover(self):
        if self.stream:
            self.stream.close()
            self.stream = None
        # Move the full log file to a unique name (timestamp plus UUID)
        # instead of the usual .1, .2, ... scheme, so nothing is overwritten.
        new_name = '{}.{:%Y%m%d%H%M%S}.{}'.format(
            self.baseFilename, datetime.now(), uuid.uuid4())
        self.rotate(self.baseFilename, new_name)
        if not self.delay:
            self.stream = self._open()
From the docs:

"If either of maxBytes or backupCount is zero, rollover never occurs, so you generally want to set backupCount to at least 1, and have a non-zero maxBytes."

Which means that without setting backupCount (the default value is 0) you don't get any rollover functionality.
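The difference is easy to verify with a quick sketch (file names and sizes are made up): with the default backupCount=0 the rollover just closes and re-opens the same file, while backupCount=1 actually renames it:

```python
import logging
import os
from logging.handlers import RotatingFileHandler

def rotated(name, **kwargs):
    """Write ~50 short records through a 100-byte handler and report
    whether a rotated .1 file was created."""
    log = logging.getLogger(name)
    log.setLevel(logging.DEBUG)
    handler = RotatingFileHandler(name + '.log', maxBytes=100, **kwargs)
    log.addHandler(handler)
    for _ in range(50):
        log.debug('x' * 10)
    handler.close()
    return os.path.exists(name + '.log.1')

print(rotated('norotate'))                  # False: rollover is a no-op
print(rotated('dorotate', backupCount=1))   # True: file renamed to .1
```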
I need to retain all logs! So if this is mandatory, I have to give some insanely large number?
Yes! Set some insanely large number, and maybe add another script to back up old files every once in a while.