Intermediate
Putting parameters in configuration files takes a little extra effort up front, but it can save you a lot of time and heartache later. We are all tempted to hardcode parameters directly into our code because it is faster in the moment, even though doing it properly takes extra effort. Some of us at least create constants or store parameters in a variable, while others store them in a class variable to keep things cleaner. Arguably the best option is to store them in a configuration file. In this article you will learn the steps required to use configuration files in Python 3, following the official Python 3 documentation.
ConfigParser is the class used to implement configuration files in Python 3. The main reason to use these files is to write Python programs that end users can modify easily. This article walks through the complete implementation of configuration files, covering three main aspects: setup, file format and the basic API.
Introduction to Python 3 Configuration Files
Configuration files can play a vital role in any program and its management. A popular approach to separating code from configuration is to store settings in YAML, JSON or INI files rather than in .py format. One reason .py files are avoided is reloading: if your config lives in a Python file, you need to restart the whole program to pick up changes. Also, an end user could modify the code at will if it were in .py format. Configuration files make settings easier to modify or change. Storing the data separately means the programmer can focus on keeping the code as clean as possible, while the user only needs to touch the configuration file.
Setup of Python 3 ConfigParser
The class used to create configuration files is ConfigParser. It is part of the standard Python 3 library, so no pip installation is needed. To use it, simply import it (in Python 2 the module was named ConfigParser, so code that must run on both versions typically wraps the import in a try/except):
import configparser
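The cross-version approach mentioned above is usually a try/except around the import, since the module was renamed from ConfigParser to configparser in Python 3. A minimal sketch:

```python
# Works on both Python 2 and Python 3: the module was renamed
# from ConfigParser to configparser in Python 3.
try:
    import configparser  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2

config = configparser.ConfigParser()
```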
File Format of configuration file
One convention for the file format is to use the extension .ini (short for initial or initialization), but you can name the file based on your own or your client's preferences. A configuration file has several parts:
- A configuration file consists of one or more sections.
- Section names are written between square-bracket delimiters: [section name].
- The concept is similar to a mapping: each entry is a key-value pair, where the key is the name of the configuration item and the value is the actual configuration value.
- Two operators can separate a key from its value: the assignment operator (=) or the colon (:).
- You can even put in a comment using the # or ; prefix.
Example:
[default]
host = 192.168.1.1
port = 31
username = admin
password = admin
[database]
#database related configuration files
port = 22
forwardx11 = no
name = db_test
In the configuration file example above, there are two sections: [default] and [database]. Each section has its own key-value entries, such as username = admin and name = db_test. Every key-value pair belongs to a section, which makes it easy to organise your configuration files. Finally, the line prefixed with # is a comment.
Reading the configuration file from python code
Now let's look at how to read from the config file. As mentioned earlier, ConfigParser is the class used to work with configuration files. First, a ConfigParser object has to be initialized: config = configparser.ConfigParser(). The following sections cover the main functions.
Initialization of ConfigParser
You can initialize the parser with the following syntax. The variable "config" will hold all the values:
config = configparser.ConfigParser()
Write to a Configuration file with ConfigParser
Although you would normally edit a configuration file by hand in a text editor, there are times when you want to write to a config file programmatically. For example, you could generate a default config file that a user can then edit, or override an erroneous config entry (after confirming with the user).
Once the object is initialized, we can write to it. There are a couple of ways to build up a section before writing it to the config file. We will use the example shown above in the file-format section. Let's initialize the default section using a dictionary.
Example:
config['default'] = {
    "host": "192.168.1.1",
    "port": "22",
    "username": "username",
    "password": "password"
}
Here, "default" is the name of the section (the part that appears between the square "[" and "]" brackets in the actual configuration file) and the curly braces denote the start and end of a dictionary. Inside the dictionary are key-value pairs: "host" is a key and "192.168.1.1" is its value, separated by a colon (:).
Now let's initialize the database section with an empty dictionary and add the key-value pairs line by line.
Example:
config['database'] = {}
config['database']['port'] = "22"
config['database']['forwardx11'] = "no"
config['database']['name'] = "db_test"
Here, "database" is the name of the section and the curly braces denote an empty dictionary. The key-value pairs are then assigned one at a time: "port" is a key and "22" is its value, separated here by the assignment operator (=). This method provides a lot more flexibility.
Here’s the full code so far:
import configparser

config = configparser.ConfigParser()

config['default'] = {
    "host": "192.168.1.1",
    "port": "22",
    "username": "username",
    "password": "password"
}

config['database'] = {}
config['database']['port'] = "22"
config['database']['forwardx11'] = "no"
config['database']['name'] = "db_test"

with open('test.ini', 'w') as configfile:
    config.write(configfile)
After initializing the sections in config, you can now write it to a config file:
with open('test.ini', 'w') as configfile:
    config.write(configfile)
Now you will see that a file named test.ini has been created.
Read config from the config file using ConfigParser
The next step is to read the file which you have just created.
- The config file can be read using the read() method: config.read('test.ini'). This reads the test.ini file you just created.
- To list the sections available in the configuration file, use the sections() method: config.sections().
- To get the value of a key stored in a section, index into the section: config['database']['name']
This returns "db_test", the value of the key "name" stored in the [database] section.
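Putting the reading steps together, here is a self-contained sketch: it writes a small test.ini first so the snippet runs on its own, then reads it back with a fresh parser.

```python
import configparser

# Write a small config file so this example is self-contained
config = configparser.ConfigParser()
config['default'] = {"host": "192.168.1.1", "port": "22"}
config['database'] = {"name": "db_test"}
with open('test.ini', 'w') as configfile:
    config.write(configfile)

# Read it back with a fresh parser
reader = configparser.ConfigParser()
reader.read('test.ini')
print(reader.sections())           # ['default', 'database']
print(reader['database']['name'])  # db_test
for key in reader['default']:
    print(key, '=', reader['default'][key])
```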
The following code will print out all the values stored against the keys in the default section using a for loop.
for key in config['default']:
    print(config['default'][key])
Output:
192.168.1.1
22
username
password
Changing the datatype of the configuration value from ConfigParser
By default, ConfigParser stores every value as a string. This is fine for most situations, but suppose you want a true/false value, or a number to do math with. A string won't work there, so the value has to be converted to another type such as an integer, float or boolean. You can convert it manually, or use the getter methods ConfigParser provides. The getter methods are the best and preferred way.
There are three getter methods:
- getint()
- getfloat()
- getboolean()
Example: config['default'].getint('port')
getint() converts the value of the port key in the "default" section to an integer. If you call type() on the result, it will now show int.
There is another way of doing it:
Example: config.getboolean('database', 'forwardx11')
Here the getboolean() method is invoked on the parser itself and takes two arguments: the first is the name of the section, and the second is the key whose value's type will be converted.
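Both getter styles can be sketched together against a small in-memory config (read_string() parses configuration from a string, which keeps the example self-contained):

```python
import configparser

# read_string() parses configuration directly from a string
config = configparser.ConfigParser()
config.read_string("""
[default]
port = 22
timeout = 1.5
[database]
forwardx11 = no
""")

# Section-level getters
print(type(config['default'].getint('port')))       # <class 'int'>
print(config['default'].getfloat('timeout'))        # 1.5
# Parser-level getter: section name first, then key
print(config.getboolean('database', 'forwardx11'))  # False
```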
What to do if a value is not available from a configfile
A fallback value can also be supplied. The fallback is returned when the key or section we want isn't available.
Example: config.get('default', 'database', fallback='not_database')
In this case, not_database will be returned if the "database" key isn't available or the [default] section is not found.
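A short sketch of fallback behaviour; the retries key is illustrative, showing that the typed getters accept a fallback as well:

```python
import configparser

config = configparser.ConfigParser()
config.read_string("[default]\nport = 22\n")

# Missing key: the fallback is returned instead of raising NoOptionError
print(config.get('default', 'database', fallback='not_database'))  # not_database
# Missing section: the fallback is returned here too
print(config.get('nowhere', 'key', fallback='absent'))             # absent
# The typed getters accept fallback as well
print(config['default'].getint('retries', fallback=3))             # 3
```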
Conclusion
In this article we covered the setup, i.e. importing ConfigParser to create configuration files. Next came the file format, with the basic syntax of a configuration file: sections containing key-value pairs.
We played with the data types of keys in the default and database sections, converting them with the getter methods. Last but not least, we covered the basic API: write, read and fallback values.
Using configuration files is not difficult and can save a lot of time. So in your next project, take the extra few minutes to create a configuration file instead of hardcoding.
Full Code: ConfigParser Example Code
import configparser

config = configparser.ConfigParser()

# Set up the default section using a dictionary
config['default'] = {"host": "192.168.1.1",
                     "port": "22",
                     "username": "username",
                     "password": "password"}

# Set up the database section key by key
config['database'] = {}
config['database']['port'] = "22"
config['database']['forwardx11'] = "no"
config['database']['name'] = "db_test"

# Write the config to a file
with open('test.ini', 'w') as configfile:
    config.write(configfile)

# Open the file again to read it back
config.read('test.ini')

# Print the sections
print(config.sections())
print(config['database']['name'])

# Print each key-value pair in the default section
for key in config['default']:
    print(config['default'][key])

# Print the type of an integer value
print(type(config['default'].getint('port')))
print(config.getboolean('database', 'forwardx11'))

# Print a fallback value for a missing key
print(config.get('default', 'databaseabc', fallback='not_database'))
Output:
['default', 'database']
db_test
192.168.1.1
22
username
password
<class 'int'>
False
not_database
Reference
https://docs.python.org/3/library/configparser.html
How To Use Python trio for Structured Concurrency
Advanced
Python’s asyncio is powerful but has a well-known problem: it is easy to accidentally “fire and forget” tasks that run in the background with no guarantee they will be waited for, no guarantee their errors will be caught, and no easy way to cancel them when something goes wrong. Tasks can leak, exceptions can silently disappear, and cancellation can leave your program in an inconsistent state. These are not just theoretical problems — they cause real bugs in production async code.
trio is an alternative async library built around a concept called structured concurrency. In trio, all concurrent tasks are managed through a nursery — a scope that guarantees all tasks it spawns will be finished before the nursery exits. No task can outlive the nursery that created it. Errors always propagate to the right place. Cancellation is clean and predictable. The result is async code that is much easier to reason about and debug.
This article covers how to install and run trio programs, how to use nurseries for concurrent tasks, how to handle errors and cancellation, how trio’s memory channels replace asyncio queues, and how trio compares to asyncio. By the end you will understand structured concurrency and be able to write trio programs that handle concurrency correctly from day one.
Concurrent Tasks with trio: Quick Example
Here is the simplest trio program that runs two tasks concurrently using a nursery:
# quick_trio.py
import trio

async def task_a():
    print("Task A: starting")
    await trio.sleep(1)
    print("Task A: done after 1 second")

async def task_b():
    print("Task B: starting")
    await trio.sleep(0.5)
    print("Task B: done after 0.5 seconds")

async def main():
    async with trio.open_nursery() as nursery:
        nursery.start_soon(task_a)
        nursery.start_soon(task_b)
    print("Both tasks complete!")

trio.run(main)
Output:
Task A: starting
Task B: starting
Task B: done after 0.5 seconds
Task A: done after 1 second
Both tasks complete!
trio.open_nursery() creates a scope where both tasks run concurrently. The async with block does not exit until both tasks are done. This is the core guarantee of structured concurrency: the nursery always waits for its children. The sections below go deeper into error handling, cancellation, channels, and real-world patterns.
What Is trio and Why Use It?
trio is a Python async library designed around the principle that concurrent code should be structured the same way sequential code is: with clear entry and exit points, predictable control flow, and reliable error propagation. It was created as a response to the implicit complexity in asyncio task management.
| Feature | asyncio | trio |
|---|---|---|
| Concurrent tasks | asyncio.create_task() | nursery.start_soon() |
| Task lifetime | Can outlive their creator | Always bounded by nursery |
| Error propagation | May be silently dropped | Always propagated to nursery |
| Cancellation | Complex, error-prone | Clean, scope-based |
| Communication | Queue, Event, Condition | Memory channels (send/receive) |
| Timeout | asyncio.wait_for() | trio.move_on_after(), trio.fail_after() |
Install with pip:
# pip install trio
import trio
print(trio.__version__)
0.25.0

Understanding Nurseries
A nursery is trio’s central concept. It is a context manager that owns a group of concurrent tasks. When you enter the nursery block, you can spawn tasks. When the block exits (the body of the async with finishes), trio waits for all spawned tasks to finish before continuing. If any task raises an exception, the nursery cancels all remaining tasks and re-raises the exception.
# trio_nurseries.py
import trio

async def fetch_data(url, delay):
    """Simulate fetching data from a URL."""
    print(f"Fetching {url}...")
    await trio.sleep(delay)  # simulate network delay
    print(f"Done: {url}")
    return f"data from {url}"

async def main():
    async with trio.open_nursery() as nursery:
        # Spawn three concurrent "fetches"
        nursery.start_soon(fetch_data, "https://api.example.com/users", 1.0)
        nursery.start_soon(fetch_data, "https://api.example.com/posts", 0.5)
        nursery.start_soon(fetch_data, "https://api.example.com/comments", 0.8)
    # Code here runs AFTER all three tasks finish
    print("All fetches complete. Continuing with results.")

trio.run(main)
Output:
Fetching https://api.example.com/users...
Fetching https://api.example.com/posts...
Fetching https://api.example.com/comments...
Done: https://api.example.com/posts
Done: https://api.example.com/comments
Done: https://api.example.com/users
All fetches complete. Continuing with results.
All three fetches start immediately and run concurrently. They complete in order of their delay, not the order they were started. The “All fetches complete” line only prints after the slowest task (users, 1.0s) finishes. This guarantee — that the nursery always waits for all children — is what makes trio programs safe to reason about.
Error Handling in Nurseries
In asyncio, an exception in a background task can be silently lost if you do not explicitly await the task and check for errors. In trio, any exception in a child task immediately cancels all sibling tasks and propagates to the nursery scope. You cannot accidentally swallow errors.
# trio_errors.py
import trio

async def good_task():
    print("Good task: running")
    await trio.sleep(2)
    print("Good task: done")

async def failing_task():
    print("Failing task: about to fail")
    await trio.sleep(0.5)
    raise ValueError("Something went wrong in the task!")

async def main():
    try:
        async with trio.open_nursery() as nursery:
            nursery.start_soon(good_task)
            nursery.start_soon(failing_task)
    except* ValueError as eg:
        print(f"Caught error group: {eg.exceptions}")

trio.run(main)
Output:
Good task: running
Failing task: about to fail
Caught error group: (ValueError('Something went wrong in the task!'),)
When failing_task raises ValueError, trio immediately cancels good_task (so "Good task: done" never prints) and collects all exceptions into an ExceptionGroup. The except* syntax (Python 3.11+) handles exception groups; on older Python versions, the exceptiongroup backport provides an equivalent catch() helper. The key insight is that no exception disappears silently: trio ensures every error is seen and handled.

Timeouts and Cancellation Scopes
Cancellation in trio is handled through cancellation scopes. Every nursery is itself a cancellation scope. You can also create explicit scopes with trio.move_on_after() (continue after timeout) or trio.fail_after() (raise exception after timeout).
# trio_cancellation.py
import trio

async def slow_operation():
    print("Starting slow operation...")
    await trio.sleep(10)  # Would take 10 seconds
    print("This line will never print if cancelled")

async def main():
    # move_on_after: cancel the block after N seconds, then continue
    print("-- move_on_after example --")
    with trio.move_on_after(2) as cancel_scope:
        await slow_operation()
    if cancel_scope.cancelled_caught:
        print("Operation timed out -- continuing with partial result")

    # fail_after: cancel and raise TooSlowError after N seconds
    print("\n-- fail_after example --")
    try:
        with trio.fail_after(1):
            await slow_operation()
    except trio.TooSlowError:
        print("Operation failed: took too long")

trio.run(main)
Output:
-- move_on_after example --
Starting slow operation...
Operation timed out -- continuing with partial result
-- fail_after example --
Starting slow operation...
Operation failed: took too long
Use move_on_after when a timeout is acceptable — for example, fetching optional metadata that you will skip if it is slow. Use fail_after when the operation is required and a timeout means something is wrong. The cancel_scope.cancelled_caught attribute tells you whether the timeout actually fired, so you can distinguish a normal exit from a cancelled exit.
Memory Channels for Task Communication
Tasks in a nursery often need to pass data to each other. trio provides memory channels as the safe, built-in way to do this. A channel has a send end and a receive end. Sending is async if the channel is full; receiving is async if the channel is empty. This ensures proper backpressure.
# trio_channels.py
import trio

async def producer(send_channel, items):
    """Produce items and send them through the channel."""
    async with send_channel:
        for item in items:
            print(f"Producing: {item}")
            await send_channel.send(item)
            await trio.sleep(0.1)  # simulate work

async def consumer(receive_channel, name):
    """Receive and process items from the channel."""
    async with receive_channel:
        async for item in receive_channel:
            print(f"Consumer {name} processing: {item}")
            await trio.sleep(0.2)  # simulate processing

async def main():
    send_channel, receive_channel = trio.open_memory_channel(max_buffer_size=5)
    async with trio.open_nursery() as nursery:
        nursery.start_soon(producer, send_channel, range(6))
        nursery.start_soon(consumer, receive_channel, "A")
    print("All items processed!")

trio.run(main)
Output:
Producing: 0
Consumer A processing: 0
Producing: 1
Producing: 2
Consumer A processing: 1
Producing: 3
Producing: 4
Producing: 5
Consumer A processing: 2
Consumer A processing: 3
Consumer A processing: 4
Consumer A processing: 5
All items processed!
The async with send_channel and async with receive_channel context managers ensure the channel is properly closed when the task finishes. When the send end is closed, the receiver’s async for loop exits cleanly. Use trio.open_memory_channel(0) for a rendezvous channel (send blocks until a receiver is ready) or a positive integer for a buffered channel.

Real-Life Example: Concurrent URL Health Checker
# trio_url_checker.py
import trio
import urllib.request
import urllib.error
from dataclasses import dataclass
from typing import List

@dataclass
class HealthResult:
    url: str
    status: int = 0
    ok: bool = False
    error: str = ""
    latency_ms: float = 0.0

async def check_url(url: str, results: list, timeout: float = 5.0):
    """Check a single URL and record the result."""
    start = trio.current_time()
    try:
        # trio doesn't have built-in HTTP, use a thread for the blocking call
        response_code = await trio.to_thread.run_sync(
            lambda: urllib.request.urlopen(url, timeout=timeout).getcode()
        )
        latency = (trio.current_time() - start) * 1000
        results.append(HealthResult(
            url=url, status=response_code,
            ok=(200 <= response_code < 300), latency_ms=latency
        ))
    except urllib.error.HTTPError as e:
        results.append(HealthResult(url=url, status=e.code, ok=False, error=str(e)))
    except Exception as e:
        results.append(HealthResult(url=url, ok=False, error=str(e)))

async def check_all(urls: List[str], concurrency: int = 5) -> List[HealthResult]:
    """Check all URLs concurrently with a limit on parallel requests."""
    results = []
    limiter = trio.CapacityLimiter(concurrency)

    async def bounded_check(url):
        async with limiter:
            await check_url(url, results)

    async with trio.open_nursery() as nursery:
        for url in urls:
            nursery.start_soon(bounded_check, url)
    return sorted(results, key=lambda r: r.url)

async def main():
    urls = [
        "https://httpbin.org/status/200",
        "https://httpbin.org/status/404",
        "https://httpbin.org/delay/1",
        "https://jsonplaceholder.typicode.com/posts/1",
        "https://jsonplaceholder.typicode.com/users/1",
    ]
    print(f"Checking {len(urls)} URLs...\n")
    with trio.fail_after(15):
        results = await check_all(urls, concurrency=3)
    print(f"{'URL':<50} {'Status':>8} {'OK':>5} {'Latency':>10}")
    print("-" * 78)
    for r in results:
        status = r.status if r.status else "ERR"
        latency = f"{r.latency_ms:.0f}ms" if r.ok else r.error[:15]
        print(f"{r.url:<50} {str(status):>8} {'Yes' if r.ok else 'No':>5} {latency:>10}")

trio.run(main)
Output:
Checking 5 URLs...
URL Status OK Latency
------------------------------------------------------------------------------
https://httpbin.org/delay/1 200 Yes 1043ms
https://httpbin.org/status/200 200 Yes 89ms
https://httpbin.org/status/404 404 No HTTP Error
https://jsonplaceholder.typicode.com/posts/1 200 Yes 134ms
https://jsonplaceholder.typicode.com/users/1 200 Yes 128ms
This checker runs all URL checks concurrently, limited to 3 at a time by trio.CapacityLimiter. The entire batch fails with TooSlowError if it takes more than 15 seconds. The trio.to_thread.run_sync() call offloads the blocking HTTP call to a thread without blocking the trio event loop. You could extend this to send Slack alerts, write results to a database, or retry failed URLs with backoff.
Frequently Asked Questions
Should I use trio or asyncio?
For new projects where you want the cleanest possible async code and do not need compatibility with existing asyncio libraries, trio is excellent. For projects that use FastAPI, aiohttp, or other asyncio-based frameworks, stick with asyncio — trio is not compatible with the asyncio event loop. The anyio library provides an abstraction that works on both trio and asyncio backends if you need portability.
How do I make HTTP requests in trio?
trio does not include an HTTP client. Use httpx with the trio backend: install httpx[trio] and use httpx.AsyncClient() inside your trio program. For simple cases, trio.to_thread.run_sync() offloads any blocking HTTP call to a thread without blocking the event loop, as shown in the real-life example above.
How do I return values from nursery tasks?
trio tasks cannot directly return values to the nursery (unlike asyncio’s gather() which collects return values). The idiomatic approach is to pass a shared list or use a memory channel. Tasks append results to a shared list (as in the URL checker example), and the caller reads from the list after the nursery exits. Alternatively, use a send channel inside tasks and a receive loop outside the nursery.
What does “cancel-safe” mean and why does it matter?
A function is cancel-safe if it behaves correctly even when cancelled mid-execution. trio can cancel a task at any await point. If your code holds a lock, writes to a file, or modifies shared state across multiple awaits, cancellation part-way through can leave things in an inconsistent state. trio's built-in primitives (channels, locks, events) are cancel-safe by design. When writing your own code, protect cleanup with try/finally, or shield a critical region from cancellation with trio.CancelScope(shield=True).
Can I use trio with regular threading?
Yes. trio.to_thread.run_sync(func) runs a blocking function in a thread pool without blocking the event loop, and trio.from_thread.run(async_func) calls async trio functions from inside a thread. These bridge the sync/async boundary cleanly and are the recommended way to use blocking libraries (like database drivers or legacy HTTP clients) inside trio programs.
Conclusion
The trio library brings structured concurrency to Python async programming. You learned how nurseries guarantee task lifetime and error propagation, how cancellation scopes handle timeouts cleanly, how memory channels enable safe producer-consumer patterns, and how CapacityLimiter controls concurrency. The URL health checker showed all these concepts working together in a realistic scenario.
The structured concurrency model takes some getting used to, but the payoff is async code that behaves predictably even in error and cancellation scenarios. The next step is to convert one small asyncio program to trio and observe how the error handling and task lifetime guarantees change your debugging experience. The trio documentation is exceptionally detailed and includes explanations of the design decisions behind each API choice.
Frequently Asked Questions
What is ConfigParser used for in Python?
ConfigParser is a built-in Python module for reading and writing configuration files in INI format. It handles settings organized into sections with key-value pairs, making it easy to store and retrieve application configuration without hardcoding values.
What format does ConfigParser use?
ConfigParser uses the INI file format, with sections in square brackets ([section]) followed by key-value pairs using = or : as delimiters. Comments start with # or ;. An optional special [DEFAULT] section provides fallback values for all other sections.
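A short sketch of the [DEFAULT] mechanism (section names are illustrative): keys defined there are visible from every other section unless a section overrides them.

```python
import configparser

config = configparser.ConfigParser()
config.read_string("""
[DEFAULT]
port = 8080
debug = no

[web]
host = example.com

[api]
port = 9090
""")

print(config['web']['port'])              # 8080  (inherited from DEFAULT)
print(config['api']['port'])              # 9090  (section value wins)
print(config['web'].getboolean('debug'))  # False
```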
How do I read a config file with ConfigParser?
Create a ConfigParser() instance, call config.read('filename.ini'), then access values with config['section']['key'] or config.get('section', 'key'). Use getint(), getfloat(), or getboolean() for type conversion.
Can ConfigParser handle nested sections?
No, ConfigParser does not support nested sections natively. For nested configuration structures, consider using TOML (tomllib in Python 3.11+), YAML (PyYAML), or JSON configuration files instead.
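One common workaround, if you want to stay with ConfigParser, is a dotted-name convention (the section names here are illustrative): the sections are flat as far as ConfigParser is concerned, but they read as a hierarchy.

```python
import configparser

config = configparser.ConfigParser()
config.read_string("""
[database.primary]
host = db1.example.com

[database.replica]
host = db2.example.com
""")

# ConfigParser sees two flat sections; the dots are only a naming convention
group = [s for s in config.sections() if s.startswith('database.')]
print(group)                               # ['database.primary', 'database.replica']
print(config['database.replica']['host'])  # db2.example.com
```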
What is the difference between ConfigParser and JSON for configuration?
ConfigParser uses human-friendly INI format with sections and is ideal for simple settings. JSON supports nested structures and lists but lacks comments. ConfigParser has built-in type conversion methods and a DEFAULT section for fallback values, while JSON requires manual type handling.
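For comparison, a sketch of similar settings in JSON: nesting comes for free and types are native, but fallbacks are manual dictionary lookups.

```python
import json

raw = """
{
  "database": {
    "port": 22,
    "forwardx11": false,
    "name": "db_test"
  }
}
"""
config = json.loads(raw)

# Types are native (int, bool) rather than strings...
print(config["database"]["port"] + 1)         # 23
# ...but fallbacks are manual dict lookups
print(config["database"].get("timeout", 30))  # 30
```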