Python Web Scraper with robots.txt Compliance and Data Export

Develop a Python web scraper using BeautifulSoup that scrapes [website_type] pages for [data_fields]. The scraper should:

- Respect robots.txt via urllib.robotparser, wait [delay_seconds] seconds between requests, and handle rate limiting with exponential backoff (see the first sketch below).
- Export scraped data to both CSV and JSON formats following [export_schema] (see the second sketch below).
- Rotate user agents and manage cookies with a persistent session.
- Handle network errors and parsing failures gracefully, with progress tracking at [logging_level] logging.
- Optionally route requests through proxies.
- Separate scraping logic, data models, and export functionality into distinct classes.
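
A minimal sketch of the compliance core, assuming the third-party requests library is available; the class name PoliteScraper, the user-agent pool, and the retry parameters are illustrative choices, not part of the prompt:

```python
import random
import time
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

import requests

# Hypothetical user-agent pool; a real project would maintain a larger list.
USER_AGENTS = [
    "Mozilla/5.0 (X11; Linux x86_64) ExampleBot/1.0",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ExampleBot/1.0",
]


class PoliteScraper:
    """Fetches pages only when robots.txt allows it, with delays and backoff."""

    def __init__(self, delay_seconds=2.0, max_retries=4):
        self.delay_seconds = delay_seconds
        self.max_retries = max_retries
        self.session = requests.Session()  # persists cookies across requests
        self._robot_parsers = {}  # one cached RobotFileParser per host

    def _allowed(self, url, user_agent):
        parts = urlparse(url)
        if parts.netloc not in self._robot_parsers:
            rp = RobotFileParser()
            rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
            rp.read()
            self._robot_parsers[parts.netloc] = rp
        return self._robot_parsers[parts.netloc].can_fetch(user_agent, url)

    def fetch(self, url):
        user_agent = random.choice(USER_AGENTS)
        if not self._allowed(url, user_agent):
            raise PermissionError(f"robots.txt disallows {url}")
        for attempt in range(self.max_retries):
            time.sleep(self.delay_seconds)  # polite fixed delay between requests
            resp = self.session.get(url, headers={"User-Agent": user_agent}, timeout=10)
            if resp.status_code == 429 or resp.status_code >= 500:
                time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...
                continue
            resp.raise_for_status()
            return resp.text
        raise RuntimeError(f"giving up on {url} after {self.max_retries} attempts")
```

The returned HTML can then be handed to BeautifulSoup(html, "html.parser") for field extraction.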

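The dual-format export step could look like the sketch below; the Record fields and the Exporter class are hypothetical stand-ins for [data_fields] and [export_schema]:

```python
import csv
import json
from dataclasses import asdict, dataclass, fields


@dataclass
class Record:
    """Hypothetical data model; real fields depend on [data_fields]."""
    title: str
    url: str
    price: float


class Exporter:
    """Writes a list of records to CSV and JSON with a shared schema."""

    def __init__(self, records):
        self.records = records

    def to_csv(self, path):
        with open(path, "w", newline="", encoding="utf-8") as fh:
            writer = csv.DictWriter(fh, fieldnames=[f.name for f in fields(Record)])
            writer.writeheader()
            writer.writerows(asdict(r) for r in self.records)

    def to_json(self, path):
        with open(path, "w", encoding="utf-8") as fh:
            json.dump([asdict(r) for r in self.records], fh, ensure_ascii=False, indent=2)
```

A typical call would be Exporter(rows).to_csv("out.csv") followed by Exporter(rows).to_json("out.json"), so both files are generated from the same in-memory records.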
Created by caster
