I was recently browsing Reddit and saw a developer mention how much they love the itertools module. One comment stuck with me:
"On top of my head I think pairwise is my preferred. It is very useful in so many contexts."
If you’re building backends with FastAPI, "many contexts" usually means handling data streams, calculating changes, or optimizing database hits. Today, let’s look at two specific tools—pairwise and its newer sibling batched—and how they can transform your API code.
Analyzing Trends with itertools.pairwise
In a financial app, you rarely care about a single price; you care about the change. pairwise (introduced in Python 3.10) is the cleanest way to compare a value with the one that came before it.
The FastAPI Example: Stock Volatility Checker
Imagine an endpoint that receives a sequence of stock prices and needs to identify "volatile" moments where the price jumped more than 2% between ticks.
from fastapi import FastAPI, HTTPException
from itertools import pairwise
from pydantic import BaseModel

app = FastAPI()


class StockData(BaseModel):
    symbol: str
    prices: list[float]


@app.post("/analyze-volatility")
async def analyze_volatility(data: StockData):
    if len(data.prices) < 2:
        raise HTTPException(
            status_code=400,
            detail="Need at least 2 prices to compare",
        )

    jumps = []
    # (p1, p2), (p2, p3), (p3, p4)...
    for prev_price, current_price in pairwise(data.prices):
        change = (current_price - prev_price) / prev_price
        if abs(change) > 0.02:  # Over 2% change
            jumps.append({
                "from": prev_price,
                "to": current_price,
                "change_percent": round(change * 100, 2),
            })

    return {"symbol": data.symbol, "volatile_moments": jumps}
The API endpoint can be tested with a simple curl request:
curl -X POST http://localhost:8000/analyze-volatility -H "Content-Type: application/json" -d '{"symbol": "AAPL", "prices": [150.0, 152.0, 151.0, 153.0, 154.0]}'
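Assuming the app is running locally on port 8000: for this particular series, no tick-to-tick move exceeds 2% (the largest is roughly 1.3%), so the endpoint should return {"symbol": "AAPL", "volatile_moments": []}.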
Before pairwise, we usually had to mess around with range indexes or "zip with a slice," which looked like this: zip(prices, prices[1:]). Not only is that harder to read, but it also creates an unnecessary copy of the list in memory.
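Here is a minimal standalone comparison of the two approaches; the price list is made up purely for illustration:

from itertools import pairwise

prices = [100.0, 103.0, 101.0, 104.0]

# Old approach: slice the list and zip it with itself (copies the tail)
old_pairs = list(zip(prices, prices[1:]))

# New approach: pairwise walks the sequence lazily, no copy needed
new_pairs = list(pairwise(prices))

assert old_pairs == new_pairs  # [(100.0, 103.0), (103.0, 101.0), (101.0, 104.0)]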
So why use it?
- Zero Memory Overhead: It is a lazy iterator, so it yields pairs one by one rather than building a new list.
- Less Logic Needed: Unlike chunking, pairwise creates an overlapping window. If you have [A, B, C, D], it yields (A, B), (B, C), (C, D).
- Excellent for: Calculating deltas, trend analysis, and pathfinding (connecting point A to point B).
Scaling with itertools.batched
While pairwise helps us read the data, batched (new in Python 3.12) helps us write or send it. If you have 10,000 stock updates to save to a database or send to a third-party notification service, doing it one-by-one is a performance killer. You need to group them.
The FastAPI Example: Bulk Notification Service
Let's build a service that takes a massive list of price alerts and sends them to a messaging provider in batches of 10 to respect rate limits.
from fastapi import FastAPI
from itertools import batched
import asyncio

app = FastAPI()


async def send_to_provider(batch):
    # Simulate an external API call
    await asyncio.sleep(0.1)
    print(f"Sent batch: {batch}")


@app.post("/push-alerts")
async def push_alerts(alerts: list[str]):
    # batched('ABCDEFG', 3) --> ABC, DEF, G
    count = 0
    for chunk in batched(alerts, 10):
        await send_to_provider(chunk)
        count += 1
    return {"status": "Alerts sent", "total_batches": count}
Batching (also known as chunking) is one of the most common patterns in backend engineering. Before Python 3.12, we had to use complicated "recipes" from the documentation or third-party libraries like more-itertools.
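For reference, here is a rough sketch of the kind of recipe people reached for before 3.12 (close to the one in the itertools docs), built on islice:

from itertools import islice

def batched_recipe(iterable, n):
    # Repeatedly pull n items off a shared iterator until it runs dry
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch

print(list(batched_recipe("ABCDEFG", 3)))  # [('A', 'B', 'C'), ('D', 'E', 'F'), ('G',)]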
So why use it?
- The Efficiency Factor: In a FastAPI context, the biggest bottleneck is usually I/O (Input/Output).
- The Problem Solved: Sending 1,000 individual SQL INSERT statements or 1,000 separate HTTP requests creates massive overhead; batched lets you group them into a single database transaction or one bulk API call.
- Pro-Tip: batched yields tuples. If your downstream function (like a database driver) requires a list, you can simply call list(chunk); a minimal sketch follows this list.
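As a rough illustration of that pro-tip, here is a small sketch that writes batches into SQLite with executemany; the database file, table name, and batch size are placeholders chosen for the example:

import sqlite3
from itertools import batched

conn = sqlite3.connect("alerts.db")  # hypothetical local database
conn.execute("CREATE TABLE IF NOT EXISTS alerts (message TEXT)")

alerts = [f"price alert {i}" for i in range(1000)]

for chunk in batched(alerts, 100):
    # executemany expects a sequence of parameter tuples, so wrap each string
    conn.executemany("INSERT INTO alerts (message) VALUES (?)", [(msg,) for msg in chunk])

conn.commit()
conn.close()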
You don't always need a heavy data-processing library like Pandas to handle sequences. Sometimes, the most elegant solution is already sitting inside Python’s standard library. By using pairwise and batched, you make your FastAPI endpoints more readable and significantly more efficient.