Logging in Python 3: How To Output Logs to File and Console


Logging is an essential tool for development: it helps us keep track of events, monitor program behavior, and debug, and without it building and running big projects is unimaginable. It is a vital tool for understanding what's happening in your code during development and testing, and it becomes even more essential once your code goes into production.

Using print() vs Logging

When we write a new application, we tend to always use print() to understand what's happening in the code. You should stop doing that as soon as your code becomes more than a quick hack. There are many advantages to using logging over print(), and in fact logging does not take much effort. Here are some quick points:

  • print() generally only writes to the console, not to a file, so the output disappears after execution
  • There is no uniformity, and each print statement is different. With logging, you can include timestamps, the calling function, and the line of code, which is invaluable when debugging issues
  • With logging you can output to both the console and a file, and have logs appended over time so you keep a history
  • You can turn the level of logging output on or off, e.g. only DEBUG level, only ERRORs, etc.

Basics of logging in Python 3

We often use the print() command for tracking events, which is fine for small programs but becomes difficult in complex ones. In Python, we use the logging module for logging.

The logging module comes built in with Python, so we don't need to install it. Just import and use it.

import logging

# The demo test code
logging.debug("This is debug")
logging.info("This is info")
logging.warning("This is warning")
logging.error("This is error")
logging.critical("This is critical")

The output of logging.debug and logging.info is not shown because of the severity level. The level defines how important a line of the log is; only logs at the configured level and above are shown. There are five levels: DEBUG, INFO, WARNING, ERROR and CRITICAL.

  • DEBUG: Detailed information, typically of interest only when diagnosing problems.
  • INFO: Confirmation that things are working as expected.
  • WARNING: An indication that something unexpected happened, or indicative of some problem in the near future (e.g. ‘disk space low’). The software is still working as expected.
  • ERROR: Due to a more serious problem, the software has not been able to perform some functions.
  • CRITICAL: A serious error, indicating that the program itself may be unable to continue running.

By default the log level is set to WARNING, so only logs of WARNING, ERROR and CRITICAL are shown. We can change this by adding a basicConfig call.

import logging

logging.basicConfig(level=logging.DEBUG)

# The demo test code
logging.debug("This is debug")
logging.info("This is info")
logging.warning("This is warning")
logging.error("This is error")
logging.critical("This is critical")

The logging.DEBUG constant used in basicConfig does nothing more than return a number that denotes the level. All the levels have a number associated with them.

Level    | Numeric value
CRITICAL | 50
ERROR    | 40
WARNING  | 30
INFO     | 20
DEBUG    | 10
NOTSET   | 0
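You can verify this directly; each level name is just a module-level integer constant:

```python
import logging

# The level names are plain integer constants in the logging module.
for name in ("NOTSET", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"):
    print(name, getattr(logging, name))
```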

NOTE: When writing a logging call, the level is always in lowercase: logging.info(), not logging.INFO(). When declaring the level, we use uppercase: basicConfig(level=logging.DEBUG), not basicConfig(level=logging.debug).

Output logs to file

If you want the logs to be written to a file, that is easy to do. You just need to add filename="file.log" to the basicConfig call.

import logging

logging.basicConfig(filename="file.log")

# The demo test code
logging.warning("This is warning")
logging.error("This is error")
logging.critical("This is critical")

Formatting the logging output

Now, if you have observed closely, you will have noticed that the logs are written in a specific format, i.e. <level>:<name>:<message>. We can change this as we want by adding the format parameter in basicConfig. The attribute names used inside the format string (like asctime, levelname, message) are case sensitive, and here each is separated by a colon. For example:

import logging

logging.basicConfig(format="%(asctime)s:%(levelname)s:%(message)s")

logging.warning("This is warning")
logging.error("This is Error")
logging.critical("This is Critical")

There are lots of attributes you can use, listed below:

Attribute name | Format | Description
args | You shouldn't need to format this yourself. | The tuple of arguments merged into msg to produce message, or a dict whose values are used for the merge (when there is only one argument, and it is a dictionary).
asctime | %(asctime)s | Human-readable time when the LogRecord was created. By default this is of the form '2003-07-08 16:49:45,896' (the numbers after the comma are the millisecond portion of the time).
created | %(created)f | Time when the LogRecord was created (as returned by time.time()).
exc_info | You shouldn't need to format this yourself. | Exception tuple (à la sys.exc_info) or, if no exception has occurred, None.
filename | %(filename)s | Filename portion of pathname.
funcName | %(funcName)s | Name of function containing the logging call.
levelname | %(levelname)s | Text logging level for the message ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL').
levelno | %(levelno)s | Numeric logging level for the message (DEBUG, INFO, WARNING, ERROR, CRITICAL).
lineno | %(lineno)d | Source line number where the logging call was issued (if available).
message | %(message)s | The logged message, computed as msg % args. This is set when Formatter.format() is invoked.
module | %(module)s | Module (name portion of filename).
msecs | %(msecs)d | Millisecond portion of the time when the LogRecord was created.
msg | You shouldn't need to format this yourself. | The format string passed in the original logging call. Merged with args to produce message, or an arbitrary object.
name | %(name)s | Name of the logger used to log the call.
pathname | %(pathname)s | Full pathname of the source file where the logging call was issued (if available).
process | %(process)d | Process ID (if available).
processName | %(processName)s | Process name (if available).
relativeCreated | %(relativeCreated)d | Time in milliseconds when the LogRecord was created, relative to the time the logging module was loaded.
stack_info | You shouldn't need to format this yourself. | Stack frame information (where available) from the bottom of the stack in the current thread, up to and including the stack frame of the logging call which resulted in the creation of this record.
thread | %(thread)d | Thread ID (if available).
threadName | %(threadName)s | Thread name (if available).

Now let's create something more comfortable and easier to understand using the attributes mentioned above.

import logging

logging.basicConfig(format="%(asctime)s [%(levelname)s] - [%(filename)s > %(funcName)s() > %(lineno)s] - %(message)s",
                    datefmt="%H:%M:%S")

def hellologs():
    logging.warning("Some wrong has occurred")

hellologs()


If you observe closely you will easily understand how I created the format to look like this. What you will also find here is a new parameter, datefmt, and if you look at the time in the logs you will notice that the date is not written, just the time.
The datefmt parameter sets the format of asctime. You can format it your own way using directives like:
%Y – Year, %m – Month, %d – Date, %H – Hour, %M – Minutes, %S – Second
"23/05/2020 21:52" > datefmt="%d/%m/%Y %H:%M"
"05-23 21:52:13" > datefmt="%m-%d %H:%M:%S"
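Here is a minimal sketch of datefmt in action (the logger and handler names are my own choices):

```python
import logging
import sys

# datefmt reshapes only the asctime attribute of each record.
fmt = logging.Formatter("%(asctime)s > %(message)s", datefmt="%d/%m/%Y %H:%M")

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(fmt)

log = logging.getLogger("datefmt_demo")
log.addHandler(handler)
log.warning("date first, then time")  # e.g. 23/05/2020 21:52 > date first, then time
```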

Outputting Python Logs to Console and Files

By default, using basicConfig you only output logging to one channel. However, you can output to multiple streams using handlers. Handlers are flexible tools for outputting to files, the screen, email and more. You add them to your own logger instance, which has the added advantage that it can be shared across multiple files. When your program has many Python files that all use the logging module, it's preferable to use a shared logger object instance. A logger avoids confusion because you can set up the configuration once and then reuse it everywhere.

import logging

mylogs = logging.getLogger(__name__)

file = logging.FileHandler("sample.log")
mylogs.addHandler(file)

# The demo test code
mylogs.debug("The debug")
mylogs.info("The info")
mylogs.warning("The warn")
mylogs.error("The error")
mylogs.critical("The critical")
  • mylogs = logging.getLogger(__name__)
    • Creates an instance of the logger in the variable mylogs. Now we will use mylogs instead of logging for recording logs.
  • file = logging.FileHandler("sample.log")
    • This is what we call a handler. The variable file is now a file handler. To create a file handler you must pass it a file name.
  • mylogs.addHandler(file)
    • Every handler needs to be added to the logger.

So, the reason that debug and info are not shown, and that the output format looks a bit odd, is that we have not set the level and format for the handler. It is on its default level (WARNING) and format.
Let's change its format and level.

import logging

mylogs = logging.getLogger(__name__)
mylogs.setLevel(logging.DEBUG)

file = logging.FileHandler("sample.log")
file.setLevel(logging.INFO)
fileformat = logging.Formatter("%(asctime)s:%(levelname)s:%(message)s", datefmt="%H:%M:%S")
file.setFormatter(fileformat)
mylogs.addHandler(file)

# And all that demo test code below this
  • mylogs.setLevel(logging.DEBUG)
    • Before we set the level of the handler, we have to give a level to the logger. The level given to the logger should be as low as possible, otherwise it will filter out the remaining log levels before they ever reach the handlers.
  • file.setLevel(logging.INFO)
    • Pretty easy to set the level, isn't it?
  • fileformat = logging.Formatter("%(asctime)s:%(levelname)s:%(message)s", datefmt="%H:%M:%S")
    • First we need to create a format to use for handlers. NOTE: this format can be used by any handler.
  • file.setFormatter(fileformat)
    • Now we set the format of the file handler to fileformat.
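To see how the logger level and the handler level combine, here is a small sketch (the io.StringIO buffer and the logger name are my own additions, used only to make the filtering visible):

```python
import io
import logging

log = logging.getLogger("levels_demo")
log.setLevel(logging.DEBUG)        # logger passes everything to its handlers

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setLevel(logging.INFO)     # handler then drops DEBUG records
log.addHandler(handler)

log.debug("dropped by the handler")
log.info("kept by the handler")

print(buf.getvalue())  # only the INFO line appears
```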

Next, we extend the setup to output to the console as well.

Outputting to Console and Files

Writing to the console is done by creating a StreamHandler. The rest is the same as for other handlers: you can configure the format, level, and all other properties as you want.

import logging

mylogs = logging.getLogger(__name__)

stream = logging.StreamHandler()
streamformat = logging.Formatter("%(asctime)s:%(levelname)s:%(message)s")
stream.setFormatter(streamformat)
mylogs.addHandler(stream)

# And all that demo code below this

Colored Output

The traditional white output in the console isn't always helpful for finding out what's going on, especially when you have long log files to investigate. To get log output in different colours we will use the coloredlogs module. Since it's not part of the standard library, you need to install it using pip.
pip install coloredlogs

So with all set let’s dive into the colorful console.

import coloredlogs, logging

mylogs = logging.getLogger(__name__)

coloredlogs.install(level=logging.DEBUG, logger=mylogs)
# Some examples.
mylogs.debug("This is debug")
mylogs.info("This is info")
mylogs.warning("This is warning")
mylogs.error("This is an error")
mylogs.critical("This is a critical message")

coloredlogs creates a stream handler and attaches it to the logger passed in. It has its own default format and colour settings, which can be customised as you wish. Let's first look at a few important parameters we can pass to coloredlogs.install():

  • level
    • An integer denoting the level. (Remember, logging.DEBUG returns an integer.)
  • logger
    • The logger to which this stream handler should be attached. (Performs the same role as logger.addHandler().)
  • fmt
    • A string denoting the format style.
  • datefmt
    • A string denoting the format of asctime.
  • level_styles
    • A dictionary mapping levels to their colours.
  • field_styles
    • A dictionary mapping fields to their colours.

import coloredlogs, logging

# Create a logger object.
mylogs = logging.getLogger(__name__)

fieldstyle = {'asctime': {'color': 'green'},
              'levelname': {'bold': True, 'color': 'black'}}

levelstyles = {'critical': {'bold': True, 'color': 'red'},
               'debug': {'color': 'green'},
               'error': {'color': 'red'},
               'info': {'color': 'magenta'},
               'warning': {'color': 'yellow'}}

coloredlogs.install(level=logging.DEBUG,
                    logger=mylogs,
                    fmt='%(asctime)s [%(levelname)s] - [%(filename)s > %(funcName)s() > %(lineno)s] - %(message)s',
                    field_styles=fieldstyle,
                    level_styles=levelstyles)

# Some examples.
mylogs.debug("This is debug")
mylogs.info("This is info")
mylogs.warning("This is warning")
mylogs.error("This is an error")
mylogs.critical("This is a critical message")


Now we are going to create three handlers, namely file, cric_file, and stream.
file will store all the logs of level WARNING and above in the file Sample.log.
cric_file will store all the logs of CRITICAL level in the file Critical.log.
stream will show all the logs in the console.
So, as I said about handlers, each can be configured differently, with different levels and other settings. Don't worry, everything is explained below.

import logging

# Creating logger
mylogs = logging.getLogger(__name__)
mylogs.setLevel(logging.INFO)

# Handler - 1
file = logging.FileHandler("Sample.log")
fileformat = logging.Formatter("%(asctime)s:%(levelname)s:%(message)s")
file.setLevel(logging.WARNING)
file.setFormatter(fileformat)

# Handler - 2
cric_file = logging.FileHandler("Critical.log")
cric_file.setLevel(logging.CRITICAL)
cric_file.setFormatter(fileformat)  # since we already defined that format we can use it anywhere

# Handler - 3
stream = logging.StreamHandler()
streamformat = logging.Formatter("%(levelname)s:%(module)s:%(message)s")
stream.setFormatter(streamformat)

# Adding all handlers to the logger
mylogs.addHandler(file)
mylogs.addHandler(stream)
mylogs.addHandler(cric_file)

# Some demo codes
  • import logging
    • Imports the logging module.
  • mylogs = logging.getLogger(__name__)
    • Defines a logger instance.
  • file = logging.FileHandler("Sample.log")
    • Handler – 1
    • Creating the handler with FileHandler and passing the filename to it.
    • This handler will save all the logs of WARNING and above in the file "Sample.log".
  • fileformat = logging.Formatter("%(asctime)s:%(levelname)s:%(message)s")
    • Creating a format. (It's independent: the file handler has not been given this format yet. The format is stored in the variable fileformat, which can be used by any handler.)
  • file.setLevel(logging.WARNING)
    • Setting the level of this file handler.
  • file.setFormatter(fileformat)
    • Now setting its format to fileformat, defined just above.
  • cric_file = logging.FileHandler("Critical.log")
    • Handler – 2
    • This handler will save all the CRITICAL logs in the file "Critical.log".
    • Creating the handler with FileHandler and passing the filename to it.
  • cric_file.setLevel(logging.CRITICAL)
    • Setting the level of this file handler.
  • cric_file.setFormatter(fileformat)
    • Now setting its format to fileformat. Since we have already defined that format, we can use it anywhere.
  • stream = logging.StreamHandler()
    • Handler – 3
    • This handler controls what will be shown in the console. It's called StreamHandler, and obviously we don't need to pass a filename.
  • streamformat = logging.Formatter("%(levelname)s:%(module)s:%(message)s")
    • Since we want a different format for what is shown in the console, we create another formatter for it. We could of course also reuse the format defined before, i.e. fileformat.
  • mylogs.setLevel(logging.INFO)
    • Setting the logger's level.
  • stream.setFormatter(streamformat)
    • Setting its format.
  • mylogs.addHandler(file)
  • mylogs.addHandler(stream)
  • mylogs.addHandler(cric_file)
    • Now that we have defined all our handlers, we add them to the mylogs logger.
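Under this setup, a quick check of where each record ends up might look like the sketch below (the logger name and the _demo file suffixes are my own, chosen to avoid clobbering the files from the example above):

```python
import logging

log = logging.getLogger("routing_demo")
log.setLevel(logging.INFO)

file = logging.FileHandler("Sample_demo.log")         # WARNING and above
file.setLevel(logging.WARNING)
cric_file = logging.FileHandler("Critical_demo.log")  # CRITICAL only
cric_file.setLevel(logging.CRITICAL)
log.addHandler(file)
log.addHandler(cric_file)

log.info("reaches no file: both handlers filter it out")
log.warning("lands in Sample_demo.log")
log.critical("lands in both files")
```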

Final Thoughts

Now that we have gone in depth into managing logs, you should be able to reuse this code in your future projects, saving time and gaining the robustness needed to understand what's happening under the hood.
