Introduction
Writing code is about solving problems, but not every problem is predictable. In the real world, your software will encounter unexpected situations: missing files, invalid user inputs, network timeouts, and even hardware failures. For this reason, handling errors isn't just a nice-to-have; it's a critical part of building robust, reliable applications for production.
Imagine an e-commerce website. A customer places an order, but during the checkout process a database connection issue occurs. Without proper error handling, this issue could crash the application, leaving the customer frustrated and the transaction incomplete. Worse, it could create inconsistent data, leading to even bigger problems down the road. Error handling is therefore a fundamental skill for any Python developer who wants to write code for production.
However, good error handling goes hand in hand with a good logging system. You rarely have access to the console when code is running in production, so there is no chance of your print statements being seen by anyone. To make sure you can monitor your application and investigate incidents, you need to set up a logging system. That is where the loguru package comes into play, and I will introduce it in this article.
I – How to handle Python errors?
In this part I present the best practices of error handling in Python, from try-except blocks and the raise statement to the finally clause. These concepts will help you write cleaner, more maintainable code that is suitable for a production environment.
The try-except block
The try-except block is the main tool for handling errors in Python. It allows you to catch potential errors during code execution and prevents the program from crashing.
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        print("Only Chuck Norris can divide by 0!")
In this trivial function, the try-except block intercepts the error caused by a division by 0. The code in the try block is executed, and if an error occurs, the except block checks whether it is a ZeroDivisionError and prints a message. But only this type of error is caught. For instance, if one of the arguments is a string, a TypeError occurs and is not handled. To avoid this, you can add an except TypeError clause. So, it is important to consider all the errors that may occur.
The function becomes:
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        print("Only Chuck Norris can divide by 0!")
    except TypeError:
        print("Don't compare apples and oranges!")
Raise an exception
You can use the raise statement to manually raise an exception. This is useful when you want to report a user-defined error or impose a specific restriction in your code.
def divide(a, b):
    if b == 0:
        raise ValueError("Only Chuck Norris can divide by 0!")
    return a / b

try:
    result = divide(10, 0)
except ValueError as e:
    print(f"Error: {e}")
except TypeError:
    print("Don't compare apples and oranges!")
In this example, a ValueError exception is raised if the divisor is zero. In this way, you can explicitly control the error conditions. Here, the except block catches it and the print call displays "Error: Only Chuck Norris can divide by 0!".
Some of the most common exceptions
ValueError: The type of a value is correct, but its value is invalid.
import math

try:
    number = math.sqrt(-10)
except ValueError:
    print("It's too complex to be real!")
KeyError: Attempting to access a key that doesn’t exist in a dictionary.
data = {"name": "Alice"}
try:
age = data["age"]
except KeyError:
print("Never ask a girl her age!")
IndexError: Attempting to access a non-existent index in a list.
items = [1, 2, 3]

try:
    print(items[3])
except IndexError:
    print("You forget that indexing starts at 0, don't you?")
TypeError: Performing an operation on incompatible types.
try:
    result = "text" + 5
except TypeError:
    print("Don't compare apples and oranges!")
FileNotFoundError: Attempting to open a non-existent file.
try:
    with open("notexisting_file.txt", "r") as file:
        content = file.read()
except FileNotFoundError:
    print("Are you sure of your path?")
Custom Error: You can raise predefined exceptions or define your own exception classes:
class CustomError(Exception):
    pass

try:
    raise CustomError("This is a custom error")
except CustomError as e:
    print(f"Caught error: {e}")
Clean up with the finally statement
The finally block is executed in every case, regardless of whether an error has occurred or not. It is often used to perform cleanup actions, such as closing a connection to a database or releasing resources.
import sqlite3

try:
    conn = sqlite3.connect("users_db.db")  # Connect to a database
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM users")  # Execute a query
    results = cursor.fetchall()  # Get the results of the query
    print(results)
except sqlite3.DatabaseError as e:
    print("Database error:", e)
finally:
    print("Closing the database connection.")
    if 'conn' in locals():
        conn.close()  # Ensures the connection is closed
Best practices for error handling
- Catch specific exceptions: Avoid using a generic except block without specifying an exception, as it can mask unexpected errors. Prefer specifying the exception:
# Bad practice
try:
    result = 10 / 0
except Exception as e:
    print(f"Error: {e}")

# Good practice
try:
    result = 10 / 0
except ZeroDivisionError as e:
    print(f"Error: {e}")
- Provide explicit messages: Add clear and descriptive messages when raising or handling exceptions.
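For example, a message that states both the rule being violated and the offending value makes debugging much faster (validate_quantity below is a hypothetical helper, used purely for illustration):

def validate_quantity(quantity):
    if quantity <= 0:
        # The message explains the rule and includes the invalid value
        raise ValueError(f"Quantity must be strictly positive, got {quantity}")
    return quantity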
- Avoid silent failures: If you catch an exception, make sure it is logged or re-raised so it doesn't go unnoticed.
import logging

logging.basicConfig(level=logging.ERROR)

try:
    result = 10 / 0
except ZeroDivisionError:
    logging.error("Division by zero detected.")
- Use else and finally blocks: The else block runs only if no exception is raised in the try block.
import logging

logging.basicConfig(level=logging.INFO)

try:
    result = 10 / 2
except ZeroDivisionError:
    logging.error("Division by zero detected.")
else:
    logging.info(f"Success: {result}")
finally:
    logging.info("End of processing.")
II – How to handle Python logs?
Good error handling is one thing, but if nobody knows that an error has occurred, the whole point is lost. As explained in the introduction, the console is rarely consulted, or even visible, when a program is running in production. Nobody will see your print statements. Therefore, good error handling must be accompanied by a good logging system.
What are logs?
Logs are records of messages generated by a program to track the events that occur during its execution. These messages may contain details about errors, warnings, successful actions, process milestones or other relevant events. Logs are essential for debugging, tracking performance and monitoring the health of an application. They allow developers to understand what is happening in a program without having to interrupt its execution, making it easier to solve problems and continuously improve the software.
The loguru package
Python already has a native logging package: logging. But we prefer the loguru package, which is much simpler to use and easier to configure. In fact, a complete output format is already preconfigured.
from loguru import logger

logger.debug("A pretty debug message!")
All the important elements are included directly in the message:
- Time stamp.
- Log level, indicating the severity of the message.
- File location, module and line number. In this example, the file location is __main__ because it was executed directly from the command line. The module is <module> because the log is not located in a class or function.
- The message.
The different logging levels
There are several log levels to reflect the importance of the message displayed (something that is much harder to do with a print). Each level has a name and an associated number:
- TRACE (5): used to record detailed information about the program's execution path for diagnostic purposes.
- DEBUG (10): used by developers to record messages for debugging purposes.
- INFO (20): used to record informational messages describing normal program operation.
- SUCCESS (25): similar to INFO, but used to indicate the success of an operation.
- WARNING (30): used to indicate an unusual event that may require further investigation.
- ERROR (40): used to record error conditions that affected a specific operation.
- CRITICAL (50): used to record error conditions that prevent a main function from working.
The package naturally handles different formatting depending on the level used:
from loguru import logger
logger.trace("A trace message.")
logger.debug("A debug message.")
logger.info("An information message.")
logger.success("Successful message.")
logger.warning("A warning message.")
logger.error("An error message.")
logger.critical("A critical message.")

The trace message was not displayed because the default minimum level used by loguru is DEBUG. It therefore ignores all messages at lower levels.
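If you want TRACE messages to appear, you can remove the default handler and add it back with a lower minimum level. A minimal sketch:

import sys
from loguru import logger

logger.remove()                        # Remove the default handler (minimum level DEBUG)
logger.add(sys.stderr, level="TRACE")  # Add it back with TRACE as the minimum level

logger.trace("Now this trace message is displayed.")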
It is possible to define new log levels with the level method, which are then used with the log method:
logger.level("FATAL", no=60, color="", icon="!!!")
logger.log("FATAL", "A FATAL event has just occurred.")
- name: the name of the log level.
- no: the corresponding severity value (must be an integer).
- color: the color markup.
- icon: the icon of the level.
The logger configuration
It is possible to recreate a logger with a new configuration by deleting the old one with the remove method and creating a new one with the add method. This function takes the following arguments:
- sink [mandatory]: specifies a destination for each record created by the logger. By default, it is set to sys.stderr (which corresponds to the standard error output). You can also store all output in a ".log" file (unless you have a log collector).
- level: sets the minimum logging level for the recorder.
- format: useful to define a custom format for your logs. To keep the coloring of the logs in the terminal, this markup must be specified (see the example below).
- filter: used to determine whether a record should be logged or not.
- colorize: takes a boolean value and determines whether the terminal coloring should be activated or not.
- serialize: causes the record to be displayed in JSON format if it is set to True.
- backtrace: determines whether the exception trace should extend beyond the point at which the error was caught, in order to facilitate troubleshooting.
- diagnose: determines whether variable values should be displayed in the exception trace. This option must be set to False in production environments so that no sensitive information is leaked.
- enqueue: if this option is activated, the log records are placed in a queue to avoid conflicts when several processes log to the same sink.
- catch: if an unexpected error occurs when logging to the specified sink, you can catch it by setting this option to True. The error will be displayed on the standard error output.
import sys
from loguru import logger

logger_format = (
    "{time:YYYY-MM-DD HH:mm:ss.SSS} | "
    "{level: <8} | "
    "{name}:{function}:{line} | "
    "{message}"
)

logger.remove()
logger.add(sys.stderr, format=logger_format)
Note:
Colors disappear in a file. This is because colors rely on special characters (called ANSI codes) that the terminal interprets, and this formatting does not exist in files.
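As a rough sketch of how several of these options might be combined for a production setup (the specific values are assumptions for illustration, not recommendations from the loguru documentation):

import sys
from loguru import logger

logger.remove()
logger.add(
    sys.stderr,
    level="INFO",      # Ignore TRACE and DEBUG messages
    serialize=True,    # Emit each record as a JSON line
    backtrace=True,    # Extend the traceback beyond the point where the error was caught
    diagnose=False,    # Do not display variable values (avoids leaking sensitive data)
    enqueue=True,      # Queue records so several processes can log safely
)

logger.info("Logger configured for production.")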
Add context to logs
For complex applications, it can be useful to add further information to the logs to enable sorting and facilitate troubleshooting.
For example, if a user modifies the database, it can be useful to have the user ID in addition to the change information.
Before you start recording context data, you need to make sure that the {extra} directive is included in your custom format. This variable is a Python dictionary that contains the context data for each log entry (if applicable).
Here is an example of a customization where an extra user_id is added to the format.
import sys
from loguru import logger

logger_format = (
    "{time:YYYY-MM-DD HH:mm:ss.SSS} | "
    "{level: <8} | "
    "{name}:{function}:{line} | "
    "User ID: {extra[user_id]} - {message}"
)

logger.configure(extra={"user_id": ""})  # Default value

logger.remove()
logger.add(sys.stderr, format=logger_format)
It is now possible to use the bind method to create a child logger that inherits all the data from the parent logger.
childLogger = logger.bind(user_id="001")
childLogger.info("Here is a message from the child logger")

logger.info("Here is a message from the parent logger")

Another way to do this is to use the contextualize method in a with block.
with logger.contextualize(user_id="001"):
    logger.info("Here is a message from the logger with user_id 001")

logger.info("Here is a message from the logger without user_id")

Instead of the with block, you can use a decorator. The preceding code then becomes:
@logger.contextualize(user_id="001")
def child_logger():
    logger.info("Here is a message from the logger with user_id 001")

child_logger()
logger.info("Here is a message from the logger without user_id")
The catch method
Errors can be automatically logged when they occur using the catch method.
def test(x):
    50 / x

with logger.catch():
    test(0)

But it is simpler to use this method as a decorator. This leads to the following code:
@logger.catch()
def test(x):
    50 / x

test(0)
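Note that by default the decorated function's error is logged but not propagated. If you also want the exception to reach the caller, catch accepts a reraise argument; a minimal sketch:

@logger.catch(reraise=True)
def test(x):
    50 / x  # Raises ZeroDivisionError when x == 0

try:
    test(0)
except ZeroDivisionError:
    print("The error was logged by loguru and then re-raised.")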
The log file
A production application is designed to run continuously and without interruption. In some cases, it is important to control how the log file behaves over time, otherwise you will have to wade through pages of logs in the event of an error.
Here are the different conditions under which the log file can be managed (an example follows the list below):
- rotation: specifies a condition under which the current log file is closed and a new file is created. This condition can be an int, a datetime or a str. A str is recommended as it is easier to read.
- retention: specifies how long each log file should be kept before it is deleted from the file system.
- compression: the log file is converted to the specified compression format if this option is activated.
- delay: if this option is set to True, the creation of a new log file is delayed until the first log message has been pushed.
- mode, buffering, encoding: parameters that are passed to the Python open function and determine how Python opens the log files.
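For example, a minimal sketch combining several of these options (the file name app.log and the thresholds are arbitrary choices for illustration):

from loguru import logger

logger.add(
    "app.log",            # Write records to a file in addition to the other sinks
    rotation="10 MB",     # Start a new file once the current one reaches 10 MB
    retention="10 days",  # Delete rotated files older than 10 days
    compression="zip",    # Compress rotated files
    delay=True,           # Create the file only when the first message is emitted
)

logger.info("This message is written to app.log.")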
Note:
Often, in the case of a production application, a log collector will be set up to retrieve the application's output directly. It is therefore not necessary to create a log file.
Conclusion
Error handling in Python is a crucial step in writing professional and reliable code. By combining try-except blocks, the raise statement, and the finally block, you can handle errors predictably while keeping your code readable and maintainable.
Furthermore, a good logging system improves the ability to monitor and debug your application. Loguru provides a simple and versatile package for logging messages and can therefore be easily integrated into your codebase.
In summary, combining effective error handling with a comprehensive logging system can significantly improve the reliability, maintainability, and debugging capability of your Python applications.
References
1 – Error handling in Python: official Python documentation on exceptions
2 – The loguru documentation: https://loguru.readthedocs.io/en/stable/
3 – Guide about loguru: https://betterstack.com/community/guides/logging/loguru/