A production-ready Python scraper for extracting product data from Target using Selenium. This scraper extracts `aggregateRating`, `availability`, `brand`, and related fields from Target product pages.
- What This Scraper Extracts
- Quick Start
- Supported URLs
- Configuration
- Output Schema
- Anti-Bot Protection
- How It Works
- Error Handling & Troubleshooting
- Alternative Implementations
- Product Information:
- aggregateRating: Aggregated rating information
- Availability: Product availability status
- Brand: Brand name
- Currency: Currency code (e.g., USD)
- Features: Product features
- Category Information: Category name, ID, URL, description, and banner image
- Python 3.7 or higher
- pip package manager (for Python) or npm (for Node.js)
- Install required dependencies:
pip install selenium beautifulsoup4 requests
- Get your ScrapeOps API key from https://scrapeops.io/app/register/ai-builder
Update the API key in the scraper:
API_KEY = "YOUR-API-KEY"  # Replace with your ScrapeOps API key
- Navigate to the scraper directory:
cd python/selenium/product_data
- Edit the URLs in scraper/target.com_scraper_product_v1.py:
if __name__ == "__main__":
urls = [
"https://www.target.com/p/gioberti-men-s-long-sleeve-brushed-flannel-plaid-checkered-shirt-with-corduroy-contrast/-/A-93271805?preselect=93271827#lnk=sametab",
]
- Run the scraper:
python scraper/target.com_scraper_product_v1.py

The scraper will generate a timestamped JSONL file (e.g., target_com_product_data_page_scraper_data_20260114_120000.jsonl) containing all extracted data.
See example/product.json for a sample of the extracted data structure.
This scraper supports Target product data page URLs, for example:
https://www.target.com/p/gioberti-men-s-long-sleeve-brushed-flannel-plaid-checkered-shirt-with-corduroy-contrast/-/A-93271805?preselect=93271827#lnk=sametab
The scraper supports several configuration options. See the scraper code for available parameters.
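The exact option names live in the scraper code; as a purely hypothetical illustration, configuration for a scraper like this usually boils down to a handful of module-level constants:

```python
# Hypothetical configuration constants; the real names and values are in
# scraper/target.com_scraper_product_v1.py.
API_KEY = "YOUR-API-KEY"       # ScrapeOps API key
MAX_RETRIES = 3                # retry attempts per URL
RETRY_DELAY_SECONDS = 2.0      # base delay for exponential backoff
CONCURRENT_REQUESTS = 2        # keep low while testing
OUTPUT_PREFIX = "target_com_product_data"  # JSONL filename prefix
```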
The scraper can use ScrapeOps for anti-bot protection and request optimization:
API_KEY = "YOUR-API-KEY" # Your ScrapeOps API key
payload = {
"api_key": API_KEY,
"url": url,
"optimize_request": True, # Enables request optimization
}

ScrapeOps Features:
- Proxy rotation (may help reduce IP blocking)
- Request header optimization (can help reduce detection)
- Rate limiting management
- Note: CAPTCHA challenges may occur depending on site behavior and cannot be guaranteed to be resolved automatically
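Putting the payload above to use might look like the following sketch, which routes a page fetch through the ScrapeOps proxy. The endpoint URL here is an assumption taken from ScrapeOps' public proxy API; check their docs and the scraper code for the exact request shape:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

API_KEY = "YOUR-API-KEY"
# Assumed ScrapeOps proxy endpoint; verify against the ScrapeOps docs.
PROXY_ENDPOINT = "https://proxy.scrapeops.io/v1/"

def build_payload(url: str) -> dict:
    """Assemble the query parameters for the ScrapeOps proxy request."""
    return {
        "api_key": API_KEY,
        "url": url,
        "optimize_request": True,  # request optimization, as in the scraper
    }

def fetch_via_scrapeops(url: str, timeout: int = 60) -> str:
    """Fetch a page through the ScrapeOps proxy (makes a network call)."""
    query = urlencode(build_payload(url))
    with urlopen(f"{PROXY_ENDPOINT}?{query}", timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```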
The scraper outputs data in JSONL format (one JSON object per line). Each object contains:
| Field | Type | Description | Example |
|---|---|---|---|
| `aggregateRating` | object | Aggregated rating information | Object with 4 fields |
| `availability` | string | Product availability status | `in_stock` |
| `brand` | string | Brand name | GIOBERTI |
| `category` | string | Category information | Casual Button Down Shirts |
| `currency` | string | Currency code (e.g., USD) | USD |
| `description` | string | Description or details | Shop Gioberti Men's 100% Cotton Brushed Flannel Pl... |
| `features` | array | Product features | True to Size |
| `images` | array | Image URLs | Array of objects (see example) |
| `name` | string | Name or title | Gioberti Men's 100% Cotton Brushed Flannel Plaid C... |
| `preDiscountPrice` | number | Original price before discount | 49.99 |
| `price` | number | Current price | 23.99 |
| `productId` | string | Unique product identifier | 93271827 |
| `reviews` | array | Review data | `[]` |
| `seller` | object | Seller information | Object with 3 fields |
| `serialNumbers` | array | Serial number information | Array of objects (see example) |
| `specifications` | array | Product specifications | `[]` |
| `url` | string | URL or link to the resource | https://www.target.com/p/gioberti-men-s-100-cotton... |
| `videos` | array | Product videos | Array of objects (see example) |
The scraper outputs data in JSONL format (one JSON object per line). Each object contains the fields listed in the table above. See example/product.json for a complete example.
Product/Listing Fields:
- `aggregateRating` (object): Aggregated rating information
- `availability` (string): Product availability status
- `brand` (string): Brand name
- `currency` (string): Currency code (e.g., USD)
- `features` (array): Product features
- `images` (array): Image URLs
- `preDiscountPrice` (number): Original price before discount
- `price` (number): Current price
- `productId` (string): Unique product identifier
- `reviews` (array): Review data
- `seller` (object): Seller information
- `specifications` (array): Product specifications
- `videos` (array): Product videos

Category Fields:
- `category` (string): Category information

Metadata Fields:
- `description` (string): Description or details
- `url` (string): URL or link to the resource

Other Fields:
- `name` (string): Name or title
- `serialNumbers` (array): Serial number information

This scraper can integrate with ScrapeOps to help handle Target's anti-bot measures.
Target may employ various anti-scraping measures, including:
- Rate limiting and IP blocking
- Browser fingerprinting
- CAPTCHA challenges (may occur depending on site behavior)
- JavaScript rendering requirements
- Request pattern analysis
The scraper can use ScrapeOps proxy service which may provide:
- Proxy Rotation: May help distribute requests across multiple IP addresses
- Request Optimization: May optimize headers and request patterns to reduce detection
- Retry Logic: Built-in retry mechanism with exponential backoff
Note: Anti-bot measures vary by site and may change over time. CAPTCHA challenges may occur and cannot be guaranteed to be resolved automatically. Using proxies and browser automation can help reduce blocking, but effectiveness depends on the target site's specific anti-bot measures.
- Sign up for a free account at https://scrapeops.io/app/register/ai-builder
- Get your API key from the dashboard
- Replace `YOUR-API-KEY` in the scraper code
- The scraper can use ScrapeOps for requests (if configured)
Free Tier: ScrapeOps offers a generous free tier perfect for testing and small-scale scraping.
The scraper uses Selenium to navigate to target.com pages in a browser, wait for content to load, and extract structured data using CSS selectors and DOM parsing. The extracted data is normalized and saved in JSONL format for efficient processing.
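As a rough illustration of the extraction step (not the scraper's actual selectors or logic), many product pages embed machine-readable JSON-LD that can be parsed out of the rendered HTML once the browser has loaded the page. This sketch uses only the standard library; the field names pulled out are illustrative:

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_ld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_ld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_ld = False

    def handle_data(self, data):
        if self._in_ld and data.strip():
            self.blocks.append(json.loads(data))

def extract_product(html: str) -> dict:
    """Pull a few illustrative product fields from embedded JSON-LD."""
    parser = JsonLdExtractor()
    parser.feed(html)
    for block in parser.blocks:
        if block.get("@type") == "Product":
            return {
                "name": block.get("name"),
                "brand": (block.get("brand") or {}).get("name"),
                "price": (block.get("offers") or {}).get("price"),
            }
    return {}
```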
1. No Data Extracted
Symptoms: Scraper runs but produces empty output files.
Solutions:
- Verify the URL format is correct
- Check if the page requires JavaScript rendering
- Ensure your ScrapeOps API key is valid
- Check network connectivity
2. Rate Limiting / Blocked Requests
Symptoms: HTTP 429 errors or empty responses.
Solutions:
- Reduce concurrency settings
- Increase delay between requests
- Verify ScrapeOps API key has sufficient credits
3. Parsing Errors
Symptoms: Errors in extraction logic or missing fields.
Solutions:
- The site may have updated their HTML structure
- Check if selectors need updating
- Review the actual HTML structure of the target page
Enable detailed logging:
logging.basicConfig(level=logging.DEBUG) # Change from INFO to DEBUG
This will show:
- Request URLs and responses
- Extraction steps
- Parsing errors
- Retry attempts
The scraper includes retry logic with configurable retry attempts and exponential backoff.
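The retry behavior can be sketched roughly like this (a minimal illustration; the scraper's actual retry parameters and exception handling live in its code):

```python
import random
import time

def with_retries(fetch, url, max_attempts=4, base_delay=1.0):
    """Call fetch(url), retrying failures with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fetch(url)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Delay doubles each attempt; jitter avoids synchronized retries.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```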
This repository provides multiple implementations for scraping Target product data pages:
- BeautifulSoup - Python implementation using BeautifulSoup
- Playwright - Python implementation using Playwright
- Cheerio & Axios - Node.js implementation using Cheerio & Axios
- Playwright - Node.js implementation using Playwright
- Puppeteer - Node.js implementation using Puppeteer
Use BeautifulSoup/Cheerio when:
- You need fast, lightweight scraping
- JavaScript rendering is not required
- You want minimal dependencies
- You're scraping simple HTML pages
Use Playwright or Selenium when:
- Pages require JavaScript rendering
- You need to interact with dynamic content
- You need to handle complex anti-bot measures
- You want to simulate real browser behavior
The scraper supports concurrent requests. See the scraper code for configuration options.
Recommendations:
- Start with minimal concurrency for testing
- Gradually increase based on your ScrapeOps plan limits
- Monitor for rate limiting or blocking
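A minimal sketch of concurrent scraping with a small worker pool; the `fetch` argument here stands in for whatever single-URL scrape routine you use, and `max_workers` should follow the recommendations above:

```python
from concurrent.futures import ThreadPoolExecutor

def scrape_all(urls, fetch, max_workers=2):
    """Fetch URLs concurrently with a bounded pool; results keep input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))
```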
Data is saved in JSONL format (one JSON object per line):
- Efficient for large datasets
- Easy to process line-by-line
- Can be imported into databases or data processing tools
- Each line is a complete, valid JSON object
The scraper processes data incrementally:
- Products are written to file immediately after extraction
- No need to load entire dataset into memory
- Suitable for scraping large pages
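The incremental write/read pattern described above can be sketched like this (function names are illustrative, not the scraper's):

```python
import json

def append_jsonl(path, record):
    """Append one record per line, immediately after extraction."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

def read_jsonl(path):
    """Stream records back one line at a time, without loading the whole file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)
```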
- Respect Rate Limits: Use appropriate delays and concurrency settings
- Monitor ScrapeOps Usage: Track your API usage in the ScrapeOps dashboard
- Handle Errors Gracefully: Implement proper error handling and logging
- Validate URLs: Ensure URLs are valid Target pages before scraping
- Update Selectors: Target may change its HTML structure; update selectors as needed
- Test Regularly: Test scrapers regularly to catch breaking changes early
- ScrapeOps Documentation: https://scrapeops.io/docs
- Framework Documentation: See framework-specific documentation
- Example Output: See example/product.json for a sample data structure
- Scraper Code: See scraper/target.com_scraper_product_v1.py for implementation details
This scraper is provided as-is for educational and commercial use. Please ensure compliance with Target's Terms of Service and robots.txt when using this scraper.