Both of these (the logger proxy and the logging mutex) can be safely sent (marshalled) across process boundaries.
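For instance, the returned proxy and mutex can be handed to ordinary multiprocessing workers. The following is only an illustrative sketch (the worker function, pool size and file name are made up, not part of the documented API):

    import multiprocessing
    from proxy_logger import *

    def worker(job_name, logger_proxy, logging_mutex):
        # logger_proxy and logging_mutex were pickled and sent to this
        # worker process along with the job name
        with logging_mutex:
            logger_proxy.info("Starting %s" % job_name)

    if __name__ == "__main__":
        args = {}
        args["file_name"] = "/my/lg.log"
        (logger_proxy,
         logging_mutex) = make_shared_logger_and_proxy(setup_std_shared_logger,
                                                       "my_logger", args)
        jobs = [("job_%d" % i, logger_proxy, logging_mutex) for i in range(3)]
        pool = multiprocessing.Pool(2)
        pool.starmap(worker, jobs)
        pool.close()
        pool.join()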
Set up logger from config file:
    from proxy_logger import *

    args = {}
    args["config_file"] = "/my/config/file"

    (logger_proxy,
     logging_mutex) = make_shared_logger_and_proxy(setup_std_shared_logger,
                                                   "my_logger", args)
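The documentation does not show what "/my/config/file" contains. If (and this is an assumption, not something the text confirms) the file is passed straight to the standard library's logging.config.fileConfig(), it could look something like this ini-style sketch:

    [loggers]
    keys=root,my_logger

    [handlers]
    keys=fileHandler

    [formatters]
    keys=simpleFormatter

    [logger_root]
    level=WARNING
    handlers=fileHandler

    [logger_my_logger]
    level=DEBUG
    handlers=fileHandler
    qualname=my_logger
    propagate=0

    [handler_fileHandler]
    class=FileHandler
    level=DEBUG
    formatter=simpleFormatter
    args=("/my/lg.log",)

    [formatter_simpleFormatter]
    format=%(asctime)s - %(name)s - %(levelname)s - %(message)s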
Log to the file "/my/lg.log" using the specified format (time / logger name / event level / message).
Delay creation of the log file until the first message is logged.
Set the logging threshold to logging.DEBUG so that debug (and more severe) messages are logged.
Other alternatives for the logging threshold (args["level"]) include:
- logging.DEBUG
- logging.INFO
- logging.WARNING
- logging.ERROR
- logging.CRITICAL
    import logging
    from proxy_logger import *

    args = {}
    args["file_name"] = "/my/lg.log"
    args["formatter"] = "%(asctime)s - %(name)s - %(levelname)6s - %(message)s"
    args["delay"]     = True
    args["level"]     = logging.DEBUG

    (logger_proxy,
     logging_mutex) = make_shared_logger_and_proxy(setup_std_shared_logger,
                                                   "my_logger", args)
Rotate the log file every 20 KB, keeping up to 10 backups.
    from proxy_logger import *

    args = {}
    args["file_name"]   = "/my/lg.log"
    args["rotating"]    = True
    args["maxBytes"]    = 20000
    args["backupCount"] = 10

    (logger_proxy,
     logging_mutex) = make_shared_logger_and_proxy(setup_std_shared_logger,
                                                   "my_logger", args)
To write to the shared log, acquire the mutex first so that messages from simultaneously running jobs do not interleave:

    (logger_proxy,
     logging_mutex) = make_shared_logger_and_proxy(setup_std_shared_logger,
                                                   "my_logger", args)

    with logging_mutex:
        logger_proxy.debug('This is a debug message')
        logger_proxy.info('This is an info message')
        logger_proxy.warning('This is a warning message')
        logger_proxy.error('This is an error message')
        logger_proxy.critical('This is a critical error message')
        logger_proxy.log(logging.DEBUG, 'This is a debug message')

Note that the logging function exception() is not provided, because Python stack trace information is not well-marshalled (pickled) across processes.
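If you do need to record a traceback from a worker process, one workaround (not part of the proxy_logger API; risky_operation below is a made-up placeholder) is to format the traceback into a plain string yourself and send it through error():

    import traceback

    try:
        risky_operation()   # hypothetical function that may raise
    except Exception:
        with logging_mutex:
            # traceback.format_exc() returns a plain string, which pickles
            # cleanly across process boundaries
            logger_proxy.error("Job failed:\n%s" % traceback.format_exc())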
make_shared_logger_and_proxy(logger_factory, logger_name, args) makes a logging object called "logger_name" by calling logger_factory(args).
It returns a proxy to the shared logger, which can be copied to jobs running in other processes, together with a mutex that can be used to prevent different jobs from logging simultaneously.
Parameters:
    - logger_factory : the function that creates the underlying logging object (e.g. setup_std_shared_logger)
    - logger_name    : the name of the shared logger
    - args           : a dictionary of arguments forwarded to logger_factory

Returns:
    - a proxy to the shared logger which can be copied to jobs in other processes
    - a mutex which can be used to prevent simultaneous logging
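The usual pattern in a Ruffus pipeline is to pass both returned objects to each task as extra parameters. The sketch below illustrates this with made-up file names and a trivial task; it assumes the module is importable as ruffus.proxy_logger (as referenced further down):

    from ruffus import transform, suffix, pipeline_run
    from ruffus.proxy_logger import *

    args = {}
    args["file_name"] = "/my/lg.log"
    (logger_proxy,
     logging_mutex) = make_shared_logger_and_proxy(setup_std_shared_logger,
                                                   "my_logger", args)

    # create a dummy starting file so the example pipeline can run
    open("a.input", "w").close()

    @transform(["a.input"], suffix(".input"), ".output",
               logger_proxy, logging_mutex)
    def run_job(input_file, output_file, logger_proxy, logging_mutex):
        open(output_file, "w").close()
        # hold the mutex so parallel jobs do not interleave their messages
        with logging_mutex:
            logger_proxy.info("%s -> %s" % (input_file, output_file))

    if __name__ == "__main__":
        pipeline_run([run_job], multiprocess=2)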
setup_std_shared_logger is a simple wrapper around the standard python logging module.
This example logger_factory creates logging objects which can then be managed by proxy via ruffus.proxy_logger.make_shared_logger_and_proxy().
The log can be:
- a disk log file
- an automatically backed-up (rotating) log file
- any log specified in a configuration file
These options are specified in the args dictionary forwarded by make_shared_logger_and_proxy().
Parameters:
    - args : dictionary of logging options forwarded by make_shared_logger_and_proxy(). Keys used in the examples above include "config_file", "file_name", "formatter", "delay", "level", "rotating", "maxBytes" and "backupCount".
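For readers who want to see what such a wrapper does under the hood, here is a rough sketch of an equivalent setup using the standard logging module directly. This is not the proxy_logger source, just an illustration (under stated assumptions about defaults) of how the args above map onto standard handlers:

    import logging
    import logging.handlers

    def setup_plain_logger(logger_name, args):
        # Illustrative only: roughly what a file or rotating-file logger
        # configured from the args dictionary could look like.
        logger = logging.getLogger(logger_name)
        logger.setLevel(args.get("level", logging.INFO))

        if args.get("rotating"):
            # rotate once the file exceeds maxBytes, keeping backupCount backups
            handler = logging.handlers.RotatingFileHandler(
                args["file_name"],
                maxBytes=args.get("maxBytes", 100000),
                backupCount=args.get("backupCount", 5))
        else:
            # delay=True postpones file creation until the first message
            handler = logging.FileHandler(args["file_name"],
                                          delay=args.get("delay", False))

        handler.setFormatter(logging.Formatter(
            args.get("formatter",
                     "%(asctime)s - %(name)s - %(levelname)6s - %(message)s")))
        logger.addHandler(handler)
        return logger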