# How to Take Website Screenshots with Python (API Guide 2026)
Step-by-step guide to capturing website screenshots with Python using a REST API. Includes code examples with requests, aiohttp, async batch processing, and Django/Flask integration.
Python is one of the most popular languages for automation, scripting, and data pipelines — and that makes it an ideal choice for automating website screenshots. Whether you are building a monitoring tool, a content pipeline, or a competitive intelligence system, capturing screenshots programmatically with Python is straightforward when you use a dedicated screenshot API.
In this guide you will learn everything you need to take website screenshots with Python in 2026: from a simple five-line script to async batch processing for thousands of URLs.
## Why Use an API Instead of Selenium or Playwright?
Before we dive into the code, it is worth understanding why a screenshot API is often a better choice than running a headless browser locally:
Headless browsers (Selenium, Playwright) require you to:
- Install Chromium or Firefox binaries on every machine
- Manage browser versions and driver compatibility
- Handle memory usage when processing many pages in parallel
- Deal with flaky rendering on CI/CD servers and Docker containers
A screenshot API gives you:
- A single HTTP request per screenshot
- No local browser installation
- Consistent rendering across all environments
- Auto-scaling — hundreds of concurrent screenshots without extra infrastructure
- A free tier to get started without a credit card
CaptureAPI offers a REST endpoint that is designed for exactly this use case. The free plan includes 100 screenshots per month, which is enough for most small automation projects.
## Getting Started

### Install the `requests` library

Python's built-in `urllib` can make HTTP requests, but `requests` is the standard choice for API calls:

```bash
pip install requests
```

If you are building an async pipeline (covered later), also install `aiohttp`:

```bash
pip install aiohttp
```

### Get your free API key

Sign up at [captureapi.dev/dashboard](/dashboard) to receive your API key instantly. No credit card required for the free tier.

### Your first screenshot in 5 lines
```python
import requests

response = requests.get(
    "https://captureapi.dev/api/v1/screenshot",
    params={"url": "https://example.com", "width": 1280, "height": 720},
    headers={"X-API-Key": "cap_your_api_key_here"},
)
response.raise_for_status()

with open("screenshot.png", "wb") as f:
    f.write(response.content)

print("Screenshot saved!")
```

That is all it takes. The API renders the page in a real Chromium browser, waits for JavaScript to execute, and returns a PNG image in the response body.
## Complete Python Client
For real projects you will want a reusable client with error handling, retries, and configurable options:
```python
import os
from pathlib import Path
from typing import Optional, Literal

import requests


class CaptureAPIClient:
    BASE_URL = "https://captureapi.dev/api/v1"

    def __init__(self, api_key: Optional[str] = None):
        self.api_key = api_key or os.environ["CAPTURE_API_KEY"]
        self.session = requests.Session()
        self.session.headers.update({"X-API-Key": self.api_key})

    def screenshot(
        self,
        url: str,
        width: int = 1280,
        height: int = 720,
        format: Literal["png", "jpeg", "webp"] = "png",
        full_page: bool = False,
        wait_for_selector: Optional[str] = None,
        wait_ms: int = 0,
    ) -> bytes:
        """Capture a screenshot and return the raw image bytes."""
        params = {
            "url": url,
            "width": width,
            "height": height,
            "format": format,
            "fullPage": str(full_page).lower(),
        }
        if wait_for_selector:
            params["waitForSelector"] = wait_for_selector
        if wait_ms > 0:
            params["waitMs"] = wait_ms

        response = self.session.get(
            f"{self.BASE_URL}/screenshot",
            params=params,
            timeout=30,
        )
        response.raise_for_status()
        return response.content

    def screenshot_to_file(self, url: str, output_path: str, **kwargs) -> Path:
        """Capture a screenshot and save it directly to a file."""
        image_bytes = self.screenshot(url, **kwargs)
        path = Path(output_path)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(image_bytes)
        return path

    def pdf(
        self,
        url: str,
        format: Literal["A4", "Letter", "A3"] = "A4",
        landscape: bool = False,
    ) -> bytes:
        """Generate a PDF and return the raw bytes."""
        response = self.session.post(
            f"{self.BASE_URL}/pdf",
            json={"url": url, "format": format, "landscape": landscape},
            timeout=60,
        )
        response.raise_for_status()
        return response.content
```

### Usage example
```python
client = CaptureAPIClient()  # reads CAPTURE_API_KEY from env

# Save a screenshot to disk
client.screenshot_to_file(
    "https://github.com",
    "output/github.png",
    width=1440,
    height=900,
    full_page=True,
)

# Capture a full-page screenshot as WebP (smaller file size)
webp_bytes = client.screenshot(
    "https://vercel.com",
    format="webp",
    full_page=True,
)

# Generate a PDF
pdf_bytes = client.pdf("https://example.com/report", format="A4")
with open("report.pdf", "wb") as f:
    f.write(pdf_bytes)
```

## Batch Processing with asyncio and aiohttp
When you need to capture screenshots of many URLs — competitor price pages, product listings, search results — sequential requests are too slow. Use asyncio with aiohttp to process dozens of URLs in parallel:
```python
import asyncio
import os
from pathlib import Path

import aiohttp

CAPTURE_API_KEY = os.environ["CAPTURE_API_KEY"]
BASE_URL = "https://captureapi.dev/api/v1/screenshot"


async def capture_one(session: aiohttp.ClientSession, url: str, output_dir: str) -> dict:
    """Capture a single screenshot and save it."""
    params = {
        "url": url,
        "width": 1280,
        "height": 720,
        "format": "webp",
    }
    headers = {"X-API-Key": CAPTURE_API_KEY}
    try:
        async with session.get(BASE_URL, params=params, headers=headers) as resp:
            resp.raise_for_status()
            image_bytes = await resp.read()
        # Sanitise the URL into a safe filename
        filename = url.replace("https://", "").replace("/", "_") + ".webp"
        path = Path(output_dir) / filename
        path.write_bytes(image_bytes)
        return {"url": url, "status": "ok", "path": str(path)}
    except Exception as exc:
        return {"url": url, "status": "error", "error": str(exc)}


async def capture_batch(urls: list[str], output_dir: str, concurrency: int = 5) -> list[dict]:
    """Capture screenshots for a list of URLs with controlled concurrency."""
    Path(output_dir).mkdir(parents=True, exist_ok=True)
    semaphore = asyncio.Semaphore(concurrency)

    async def bounded_capture(session, url):
        async with semaphore:
            return await capture_one(session, url, output_dir)

    async with aiohttp.ClientSession() as session:
        tasks = [bounded_capture(session, url) for url in urls]
        return await asyncio.gather(*tasks)


# --- Run it ---
urls = [
    "https://github.com",
    "https://vercel.com",
    "https://nextjs.org",
    "https://tailwindcss.com",
    "https://stripe.com",
]

results = asyncio.run(capture_batch(urls, output_dir="screenshots", concurrency=5))

for r in results:
    if r["status"] == "ok":
        print(f"✅ {r['url']} → {r['path']}")
    else:
        print(f"❌ {r['url']}: {r['error']}")
```

The `concurrency=5` limit prevents hitting the API rate limit on the free tier. Starter and Pro plans support higher concurrency — up to 20 parallel requests on the Pro plan.
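If a request does hit the rate limit anyway, retrying with exponential backoff usually resolves it. Here is a minimal, generic helper; the function name and delay values are illustrative, not part of the API:

```python
import time

def with_backoff(func, max_attempts=4, base_delay=1.0):
    """Call func(), retrying on any exception with exponential backoff.

    Sleeps base_delay * 2**attempt between tries (1s, 2s, 4s, ...) and
    re-raises the last failure so callers still see the original error.
    """
    for attempt in range(max_attempts):
        try:
            return func()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Wrap any flaky call, for example `with_backoff(lambda: client.screenshot("https://example.com"))`. In production you would likely narrow the `except` clause to rate-limit errors only.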
## Integration with Django and Flask

### Django management command
A common pattern is to run screenshot jobs as a Django management command, scheduled via Celery or a cron job:
```python
# myapp/management/commands/capture_product_screenshots.py
from django.core.management.base import BaseCommand

from myapp.models import Product
from capture_client import CaptureAPIClient  # your client module


class Command(BaseCommand):
    help = "Capture screenshots for all active products"

    def handle(self, *args, **options):
        client = CaptureAPIClient()
        products = Product.objects.filter(is_active=True, screenshot_outdated=True)

        for product in products:
            try:
                image_bytes = client.screenshot(
                    product.url,
                    width=1200,
                    height=630,  # OG image dimensions
                    format="webp",
                )
                product.save_screenshot(image_bytes)
                product.screenshot_outdated = False
                product.save(update_fields=["screenshot_outdated"])
                self.stdout.write(f"✅ {product.name}")
            except Exception as exc:
                self.stderr.write(f"❌ {product.name}: {exc}")
```

### Flask route for on-demand thumbnails
```python
import io

from flask import Flask, request, send_file, abort

from capture_client import CaptureAPIClient

app = Flask(__name__)
client = CaptureAPIClient()


@app.route("/thumbnail")
def thumbnail():
    url = request.args.get("url")
    if not url or not url.startswith("https://"):
        abort(400, "url parameter is required and must use HTTPS")
    try:
        image_bytes = client.screenshot(
            url,
            width=int(request.args.get("width", 1200)),
            height=int(request.args.get("height", 630)),
            format="webp",
        )
        return send_file(
            io.BytesIO(image_bytes),
            mimetype="image/webp",
            download_name="thumbnail.webp",
        )
    except Exception as exc:
        abort(500, str(exc))
```

## Advanced Options
### Wait for dynamic content
Many modern websites load content asynchronously. Use the `waitForSelector` parameter to wait for a specific element before capturing:
```python
# Wait for the main chart to render before capturing
image = client.screenshot(
    "https://dashboard.example.com",
    wait_for_selector="#chart-container",
    wait_ms=500,  # extra 500ms after selector appears
)
```

### Custom viewport and full-page capture
```python
# Mobile viewport
mobile = client.screenshot(
    "https://example.com",
    width=390,
    height=844,  # iPhone 14 dimensions
)

# Full-page capture (captures below the fold too)
full = client.screenshot(
    "https://example.com/landing",
    full_page=True,
    width=1440,
)
```

### Environment variable management
Never hardcode API keys. Use environment variables:
```bash
# .env file (add to .gitignore!)
CAPTURE_API_KEY=cap_your_key_here
```

Then load it with `python-dotenv` (`pip install python-dotenv`):

```python
from dotenv import load_dotenv

load_dotenv()
client = CaptureAPIClient()  # reads from env automatically
```

## Error Handling Best Practices
The API returns standard HTTP status codes. Handle them explicitly:
```python
from requests.exceptions import HTTPError, Timeout, ConnectionError

def safe_screenshot(client, url: str) -> bytes | None:
    try:
        return client.screenshot(url)
    except HTTPError as e:
        if e.response.status_code == 402:
            print("Plan limit reached. Upgrade at captureapi.dev/pricing")
        elif e.response.status_code == 422:
            print(f"Invalid URL or parameters: {url}")
        elif e.response.status_code == 429:
            print("Rate limit hit — add a retry with exponential backoff")
        else:
            print(f"API error {e.response.status_code}: {e}")
        return None
    except Timeout:
        print(f"Request timed out for {url}")
        return None
    except ConnectionError:
        print("Network error — check your connection")
        return None
```

## Frequently Asked Questions
### Does this work with websites that require authentication?
Yes. For pages behind a login, you can pass cookies or HTTP headers using the API's `headers` and `cookies` parameters. The API also supports basic auth via the URL scheme (`https://user:pass@example.com`).
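To see what such a request looks like on the wire, you can build it with `requests` without sending it. The cookie value below is a hypothetical session cookie, and its `name=value` string format is an assumption; check the API reference for the expected encoding.

```python
import requests

# Build (but do not send) a screenshot request that forwards a session
# cookie to the target page. The cookie string format is an assumption;
# verify it against the API reference.
req = requests.Request(
    "GET",
    "https://captureapi.dev/api/v1/screenshot",
    params={
        "url": "https://app.example.com/account",
        "cookies": "session_id=abc123",  # hypothetical cookie
    },
    headers={"X-API-Key": "cap_your_key"},
).prepare()

print(req.url)  # the cookie is URL-encoded into the query string
```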
### Can I capture localhost or internal URLs?
Localhost URLs are not reachable from external API servers. For internal URLs, use a tunnel like ngrok or expose the service temporarily during CI/CD runs.
### What is the maximum page size for full-page captures?
The API captures pages up to 15,000 pixels tall. For very long pages, consider capturing specific sections using CSS clip parameters.
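A clipped capture of a single page section might look like the sketch below. The `clipX`/`clipY`/`clipWidth`/`clipHeight` parameter names are hypothetical; confirm the exact names in the API reference before relying on them.

```python
# Hypothetical clip parameters for capturing one section of a long page;
# the parameter names are assumptions, not confirmed API options.
params = {
    "url": "https://example.com/pricing",
    "clipX": 0,
    "clipY": 1500,       # start 1500px down the page
    "clipWidth": 1280,
    "clipHeight": 800,   # capture an 800px-tall slice
}
```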
### Is the free tier enough for production?
The free tier (100 screenshots/month) is ideal for development and low-traffic scenarios. For production use, the Starter plan at $9/month provides 1,000 screenshots — enough for most small applications. See [captureapi.dev/pricing](/pricing) for all plans.
### How do I track API usage from Python?
You can check your current usage by calling the API's status endpoint:
```python
import requests

response = requests.get(
    "https://captureapi.dev/api/v1/status",
    headers={"X-API-Key": "cap_your_key"},
)
data = response.json()
print(f"Used: {data['used']} / {data['limit']} this month")
```

## Summary
Taking website screenshots with Python is straightforward when you use a screenshot API:
- **Install** `requests` (and `aiohttp` for async)
- **Get a free API key** at [captureapi.dev/dashboard](/dashboard)
- **Use the simple client** for single screenshots or the async batch processor for high-volume jobs
- **Integrate** with Django, Flask, or any Python framework via standard HTTP
For PDF generation from Python, see our guide on [HTML to PDF conversion](/blog/html-to-pdf-conversion-complete-guide). To learn how to generate Open Graph images dynamically, check out [dynamic OG images for SEO](/blog/dynamic-og-images-seo).
Start capturing with the free tier today — no credit card required.