HTTPX Wrapper with Rate Limiting and Caching Transports.
This project aims to be both a convenience and a demonstration of how to assemble HTTPX Transports in different combinations.

HTTP request libraries, rate limiting, and caching are topics with deep rabbit holes, both in the technical implementation details and in the decisions an end user has to make. The convenience part of this package is abstracting away certain decisions and making opinionated choices about how caching and rate limiting should be controlled.

This project came about while implementing caching and rate limiting for edgartools: reducing network requests and improving overall performance led to a myriad of decisions. The SEC's EDGAR site enforces a strict 10-requests-per-second limit while providing not-very-helpful caching headers, so overriding those headers with custom rules is necessary in certain cases.
This project provides four cache_mode options:
- Disabled: rate limiting only, no caching
- Hishel-File: cache using Hishel with FileStorage
- Hishel-S3: cache using Hishel with S3Storage
- FileCache: a simpler file-cache backend that uses file modified/created times and revalidates only via Last-Modified, for sites that provide that header
Cache rules are defined as a dictionary mapping site regular expressions to dictionaries of URL-path regular expressions and cache durations in seconds:

```python
{
    'site_regex': {
        'url_regex': duration,
        'url_regex2': duration,
        '.*': 3600,  # cache all paths for this site for an hour
    }
}
```

The HTTPS_PROXY environment variable is propagated to the HTTPX transport.
Note that the Manager object is intended to be long-lived; it doesn't need to be used as a context manager.
Single synchronous request:

```python
from httpxthrottlecache import HttpxThrottleCache

url = "https://httpbingo.org/get"

with HttpxThrottleCache(cache_mode="Hishel-File",
                        cache_dir="_cache",
                        rate_limiter_enabled=True,
                        request_per_sec_limit=10,
                        user_agent="your user agent") as manager:
    with manager.http_client() as client:
        response = client.get(url)
        print(response.status_code)
```

Batch requests:

```python
from httpxthrottlecache import HttpxThrottleCache

url = "https://httpbingo.org/get"

with HttpxThrottleCache(cache_mode="Hishel-File",
                        cache_dir="_cache",
                        rate_limiter_enabled=True,
                        request_per_sec_limit=10,
                        user_agent="your user agent") as manager:
    responses = manager.get_batch([url, url])
    print([r[0] for r in responses])
```

Batch requests with a URL-to-file mapping:

```python
from pathlib import Path

from httpxthrottlecache import HttpxThrottleCache

with HttpxThrottleCache(cache_mode="Disabled") as mgr:
    urls = {f"https://httpbingo.org/get?{i}": Path(f"file{i}") for i in range(10)}
    results = mgr.get_batch(urls=urls)
```

Async requests:

```python
import asyncio

from httpxthrottlecache import HttpxThrottleCache

url = "https://httpbingo.org/get"

async def main():
    with HttpxThrottleCache(cache_mode="Hishel-File",
                            cache_dir="_cache",
                            rate_limiter_enabled=True,
                            request_per_sec_limit=10) as manager:
        async with manager.async_http_client() as client:
            tasks = [client.get(url) for _ in range(2)]
            responses = await asyncio.gather(*tasks)
            print(responses)

asyncio.run(main())
```

The FileCache implementation ignores response caching headers. Instead, it treats data as "fresh" for a client-provided max age, defined in a cache rule as described above.
Once the max age has expired, the FileCache transport revalidates the data using the Last-Modified date. TODO: revalidate using the ETag as well.
The FileCache implementation stores files as the raw bytes plus a .meta sidecar. The .meta file records response headers, such as Last-Modified, which are used for revalidation. The raw bytes are stored in their native format: binary files as-is, compressed gzip streams as compressed gzip data, and so on.
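The freshness-then-revalidate flow described above can be sketched roughly as follows. This is an illustration of the approach, not the package's actual code; the function names are invented for the example:

```python
import os
import time

def is_fresh(path: str, max_age: float) -> bool:
    """Fresh if the cached file was written within the last max_age seconds."""
    return (time.time() - os.path.getmtime(path)) < max_age

def revalidation_headers(meta: dict) -> dict:
    """Build conditional request headers from the .meta sidecar's stored headers.

    A 304 Not Modified response to such a request means the cached bytes
    can continue to be served.
    """
    headers = {}
    if "Last-Modified" in meta:
        headers["If-Modified-Since"] = meta["Last-Modified"]
    return headers
```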
FileCache uses FileLock to ensure only one writer per cached object. This means that (currently) multiple simultaneous cache misses for the same object will queue up waiting to write the file. The locking is intended mainly to allow multiple processes to share the same cache.
FileCache initially stages data to a .tmp file and, upon completion, copies it to the final file.
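A minimal sketch of this staged-write pattern, so readers never observe a partially written cache entry (the FileLock around the write is omitted here, and where the package copies the staged file, this sketch uses an atomic rename instead):

```python
import os

def write_cache_entry(path: str, data: bytes) -> None:
    """Stage data to a .tmp sibling, then swap it into place."""
    tmp_path = path + ".tmp"
    with open(tmp_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # ensure bytes hit disk before the swap
    # Atomic on POSIX when source and destination share a filesystem.
    os.replace(tmp_path, path)
```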
No cache cleanup is done - that's your problem.
Rate limiting is implemented via pyrate_limiter. This is a leaky-bucket implementation that allows a configurable number of requests per time interval.
pyrate_limiter supports a variety of backends. The default backend is in-memory, and a single Limiter can be used for both sync and asyncio requests, across multiple threads. Alternative limiters can be used for multiprocess and distributed rate limiting; see the examples for more.
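To illustrate the behavior the limiter provides here, the following is a simplified, stdlib-only sliding-window approximation of a requests-per-interval limit. It is not pyrate_limiter's API; it just shows why the 11th request in a 10-per-second budget has to wait:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` acquisitions in any `interval`-second window."""

    def __init__(self, limit: int, interval: float = 1.0):
        self.limit = limit
        self.interval = interval
        self.timestamps = deque()  # monotonic times of recent acquisitions

    def acquire(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.interval:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.limit:
            # Sleep until the oldest acquisition leaves the window, then retry.
            time.sleep(self.interval - (now - self.timestamps[0]))
            return self.acquire()
        self.timestamps.append(time.monotonic())

limiter = SlidingWindowLimiter(limit=10)  # ~10 requests/second, like the SEC limit
```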