You just wrote a function that processes 50,000 records, but you have no idea whether it takes 2 seconds or 20. You need to benchmark it, add a retry delay, or timestamp a log entry. Python's built-in time module handles all of this without any pip installs.

The time module gives you access to system clocks, sleep functions, and timestamp formatting tools that are accurate enough for profiling, automation, and scheduling. If you have Python installed, you already have everything you need.

In this article you will learn how to measure elapsed time with perf_counter and monotonic, pause execution with sleep, format timestamps with strftime, use nanosecond precision benchmarking, and build a retry loop with exponential backoff.

Python time Module: Quick Example

Here is the fastest way to benchmark a block of code:

# benchmark_quick.py
import time

start = time.perf_counter()

total = sum(i * i for i in range(1_000_000))

elapsed = time.perf_counter() - start
print(f"Sum: {total}")
print(f"Elapsed: {elapsed:.4f} seconds")

Output:

Sum: 333332833333500000
Elapsed: 0.0812 seconds

perf_counter() returns the current value of the highest-resolution clock available on your system. By capturing it before and after the work, you get the elapsed wall-clock time in seconds. This is the go-to choice for benchmarking: it is monotonic and offers sub-microsecond resolution on modern hardware.

The sections below cover every practical scenario: pausing execution, formatting timestamps, comparing clock types, and building a retry loop.

What Is the Python time Module?

The time module is a thin wrapper around your OS’s C time functions. It gives you several independent clocks plus tools for sleeping, formatting, and converting between time representations.

Think of it as your toolkit for three questions: “How long did that take?”, “What time is it right now?”, and “How do I wait N seconds before continuing?”

| Function | Returns | Best Used For |
| --- | --- | --- |
| time.perf_counter() | Float (seconds) | Benchmarking code snippets |
| time.monotonic() | Float (seconds) | Measuring intervals (never goes backwards) |
| time.time() | Float (Unix epoch) | Logging timestamps, comparing dates |
| time.sleep(n) | None | Pausing execution for N seconds |
| time.strftime(fmt) | String | Human-readable timestamp formatting |
| time.perf_counter_ns() | Integer (nanoseconds) | Ultra-precise benchmarking |

Understanding which clock to use prevents subtle bugs. time.time() can jump backwards when the system clock is adjusted (NTP sync), corrupting a duration measurement. perf_counter and monotonic never have that problem.
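A quick sketch of pairing each clock with its job: time.time() only for the human-facing timestamp, monotonic() for the duration. The variable names here are illustrative.

```python
# clock_choice.py
import time

started_at = time.time()        # wall-clock timestamp: fine for logging
mono_start = time.monotonic()   # monotonic clock: safe for the duration

time.sleep(0.2)                 # stand-in for real work

duration = time.monotonic() - mono_start   # immune to NTP clock adjustments

print(f"Started at epoch {started_at:.0f}, took {duration:.3f}s")
```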

Pausing Execution with time.sleep()

The sleep() function suspends the current thread for a given number of seconds. It accepts floats, so you can sleep for milliseconds. This is useful for rate limiting, polling loops, and adding delays between retries.

# sleep_demo.py
import time

print("Fetching page 1...")
time.sleep(1.5)   # wait 1.5 seconds before next request
print("Fetching page 2...")
time.sleep(1.5)
print("Fetching page 3...")
print("Done.")

Output:

Fetching page 1...
Fetching page 2...
Fetching page 3...
Done.

The output appears gradually, one line every 1.5 seconds. sleep(0.1) sleeps for 100 milliseconds — handy for polling a queue without hammering the CPU. Note that sleep blocks the entire thread; in async code use asyncio.sleep() instead.
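For comparison, here is a minimal asyncio sketch of the same paging loop. Because asyncio.sleep() yields to the event loop instead of blocking the thread, the three waits overlap:

```python
# async_sleep_demo.py
import asyncio
import time

async def fetch(page: int) -> str:
    await asyncio.sleep(0.5)   # non-blocking: other tasks run during the wait
    return f"page {page}"

async def main() -> list[str]:
    # All three "requests" wait concurrently, so the total delay is ~0.5s, not 1.5s
    return await asyncio.gather(fetch(1), fetch(2), fetch(3))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start

print(results)
print(f"Total: {elapsed:.2f}s")
```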

Measuring Intervals with time.monotonic()

monotonic() is the safe choice for measuring elapsed time in long-running loops or daemons where the system clock might be adjusted. It guarantees the value never decreases between calls.

# monotonic_interval.py
import time

deadline = time.monotonic() + 5.0  # run for exactly 5 seconds

count = 0
while time.monotonic() < deadline:
    count += 1
    time.sleep(0.1)

print(f"Loop ran {count} times in ~5 seconds")

Output:

Loop ran 50 times in ~5 seconds

Computing a deadline as now + duration creates a time budget immune to clock adjustments. This pattern is common in network clients, scrapers, and background tasks where you want to bound execution time reliably.

Formatting Timestamps with time.strftime()

strftime() converts the current local time into a formatted string using format codes. This is the standard way to generate human-readable timestamps for log files, filenames, and reports.

# strftime_demo.py
import time

now = time.localtime()  # struct_time for current local time

log_ts    = time.strftime("%Y-%m-%d %H:%M:%S", now)
filename  = time.strftime("%Y%m%d_%H%M%S", now)
readable  = time.strftime("%A, %B %d %Y at %I:%M %p", now)

print(f"Log entry:  {log_ts}")
print(f"Filename:   report_{filename}.csv")
print(f"Human:      {readable}")

Output:

Log entry:  2026-04-22 09:14:37
Filename:   report_20260422_091437.csv
Human:      Wednesday, April 22 2026 at 09:14 AM

Using time.strftime("%Y%m%d_%H%M%S") for filenames is a best practice because the result sorts correctly in any file explorer. The format codes mirror the C strftime specification: %Y is four-digit year, %m is zero-padded month, %d is zero-padded day.

Nanosecond Precision with perf_counter_ns()

When benchmarking very fast operations -- a dictionary lookup, a regex match, a list sort -- floating-point arithmetic in perf_counter() can introduce rounding errors at the nanosecond scale. perf_counter_ns() returns an integer in nanoseconds, eliminating that risk.

# nanosecond_bench.py
import time
import re

pattern = re.compile(r"\d{4}-\d{2}-\d{2}")
test_string = "Order placed on 2026-04-22 at the warehouse"

start_ns = time.perf_counter_ns()
for _ in range(100_000):
    pattern.search(test_string)
end_ns = time.perf_counter_ns()

total_ns = end_ns - start_ns
per_call_ns = total_ns / 100_000

print(f"100,000 regex searches: {total_ns:,} ns total")
print(f"Per call: {per_call_ns:.1f} ns")

Output:

100,000 regex searches: 18,423,710 ns total
Per call: 184.2 ns

The integer arithmetic is exact. When comparing two algorithms that each complete in microseconds, using nanoseconds prevents false ties. perf_counter_ns() was added in Python 3.7 -- if you need older support, convert manually: int(time.perf_counter() * 1e9).

Real-Life Example: Retry Loop with Exponential Backoff

Retry logic is one of the most practical applications of the time module. The function below retries a flaky operation with exponential backoff -- each retry waits twice as long as the previous one.

# retry_with_backoff.py
import time
import random

def fetch_data(attempt_number):
    if random.random() < 0.6:
        raise ConnectionError(f"Connection refused on attempt {attempt_number}")
    return {"status": "ok", "records": 42}

def fetch_with_backoff(max_retries=5, base_delay=0.5):
    for attempt in range(1, max_retries + 1):
        try:
            start = time.perf_counter()
            result = fetch_data(attempt)
            elapsed = time.perf_counter() - start
            ts = time.strftime("%H:%M:%S")
            print(f"[{ts}] Attempt {attempt} succeeded in {elapsed:.4f}s: {result}")
            return result
        except ConnectionError as e:
            if attempt == max_retries:
                print(f"All {max_retries} attempts failed. Giving up.")
                raise
            delay = base_delay * (2 ** (attempt - 1))  # 0.5, 1.0, 2.0, 4.0...
            ts = time.strftime("%H:%M:%S")
            print(f"[{ts}] Attempt {attempt} failed: {e}. Retrying in {delay:.1f}s...")
            time.sleep(delay)

fetch_with_backoff()

Output (one possible run -- the failures are random, so yours will differ):

[09:14:37] Attempt 1 failed: Connection refused on attempt 1. Retrying in 0.5s...
[09:14:38] Attempt 2 failed: Connection refused on attempt 2. Retrying in 1.0s...
[09:14:39] Attempt 3 succeeded in 0.0001s: {'status': 'ok', 'records': 42}

The time module handles three responsibilities here: perf_counter() measures per-attempt latency, strftime() produces the log timestamp, and sleep() enforces the backoff delay. Extend this by adding jitter -- delay + random.uniform(0, 0.1) -- to prevent the thundering herd problem when many clients retry simultaneously.
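The jitter tweak can be sketched as a standalone delay helper. The function name, cap, and "full jitter" strategy (pick uniformly between zero and the capped exponential delay) are illustrative choices, not part of the code above:

```python
# backoff_jitter.py
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter, capped at `cap` seconds."""
    exp = min(cap, base * (2 ** (attempt - 1)))
    return random.uniform(0, exp)   # spread retries across [0, exp]

for attempt in range(1, 6):
    print(f"Attempt {attempt}: sleeping {backoff_delay(attempt):.2f}s")
```

Full jitter desynchronizes clients more aggressively than a small additive jitter, at the cost of sometimes retrying almost immediately.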

Frequently Asked Questions

What is the difference between perf_counter and monotonic?

perf_counter() uses the highest-resolution timer available and is recommended for benchmarking. monotonic() uses the OS monotonic clock and is recommended for measuring timeouts and intervals. Both are monotonic in practice, but perf_counter may use hardware performance counters for higher precision. Choose perf_counter when measuring code speed and monotonic when enforcing deadlines.

Is time.sleep() accurate to the millisecond?

On most systems time.sleep() is accurate to within a few milliseconds, but it is not a real-time guarantee. Windows has a default timer resolution of 15.6ms, so sleeping for 1ms may actually pause for up to 16ms. Always measure actual elapsed time after sleeping rather than assuming the sleep was exact. For periodic execution, compute the next wakeup time with monotonic() instead of sleeping a fixed amount each iteration.
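That last pattern, scheduling each wakeup from the previous deadline rather than from "now", can be sketched like this (the 0.25s interval and four ticks are chosen for illustration):

```python
# periodic_tick.py
import time

interval = 0.25                  # seconds between ticks
next_tick = time.monotonic()

timestamps = []
for _ in range(4):
    next_tick += interval                 # advance the schedule, not "now + interval",
    delay = next_tick - time.monotonic()  # so sleep overshoot never accumulates
    if delay > 0:
        time.sleep(delay)
    timestamps.append(time.monotonic())

# Consecutive ticks stay ~0.25s apart even when individual sleeps overshoot
gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
print([f"{g:.3f}" for g in gaps])
```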

When should I use time.time() vs datetime.now()?

time.time() returns a plain float (seconds since epoch) -- fast, portable, and easy to store in a database. datetime.now() returns a full object with timezone support and arithmetic operators. Use time.time() for simple logging and comparison. Use datetime when you need to add days, parse date strings, or work with timezones.
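A side-by-side sketch of the two styles:

```python
# epoch_vs_datetime.py
import time
from datetime import datetime, timedelta

epoch = time.time()      # plain float: cheap to store, trivial to compare
dt = datetime.now()      # rich object: arithmetic, formatting, timezone support

print(f"Epoch:    {epoch:.0f}")
print(f"Tomorrow: {dt + timedelta(days=1):%Y-%m-%d}")   # date math needs datetime
```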

How do I get microsecond precision?

perf_counter() already returns float seconds with sub-microsecond resolution on most platforms. If you need explicit microseconds, multiply: elapsed_us = (time.perf_counter() - start) * 1_000_000. For integer microseconds without floating-point rounding, use perf_counter_ns() and divide by 1000.
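The integer route looks like this, reusing the summation benchmark from earlier:

```python
# integer_microseconds.py
import time

start_ns = time.perf_counter_ns()
total = sum(range(100_000))
elapsed_us = (time.perf_counter_ns() - start_ns) // 1000  # exact integer division

print(f"Sum: {total}")
print(f"Elapsed: {elapsed_us} microseconds")
```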

What is struct_time and when do I need it?

struct_time is a named-tuple-like object returned by time.localtime() and time.gmtime(). It breaks a Unix timestamp into components: year, month, day, hour, minute, second, weekday, day of the year, and DST flag. You need it when extracting a specific component (e.g., t.tm_hour) or passing a specific time to strftime(). For most tasks you can skip it -- time.strftime("%Y") uses the current local time automatically.
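A short sketch of pulling components out directly (the business-hours rule is just an example of the kind of check this enables without date parsing):

```python
# struct_time_fields.py
import time

t = time.localtime()

print(f"Year:        {t.tm_year}")
print(f"Hour:        {t.tm_hour}")
print(f"Day of year: {t.tm_yday}")

# Components make simple calendar rules trivial
is_business_hours = 9 <= t.tm_hour < 17
print(f"Business hours: {is_business_hours}")
```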

Conclusion

The time module is small but covers the most common timing needs. You learned: perf_counter() and perf_counter_ns() for benchmarking, monotonic() for safe interval measurement, sleep() for controlled pauses, strftime() for readable timestamps, and time() for Unix epoch values. The retry loop with exponential backoff ties all of these together into a pattern you will use repeatedly.

Extend the real-life example by adding a jitter parameter and a maximum total timeout -- you will end up with something close to what production libraries like tenacity or urllib3's Retry class provide under the hood.

For the complete clock reference, see the Python time module documentation.