
Target Scrapers - Node


A comprehensive collection of production-ready Node.js scrapers for extracting product category, product data, and product search results from target.com, built with your choice of Cheerio & Axios, Playwright, or Puppeteer, with optional ScrapeOps integration for anti-bot protection.

📊 What Data You Can Scrape

These Node scrapers extract data from target.com:

  • Target Category Listing Pages (product_category) - Extract product listings from category/browse pages with pagination and subcategory navigation
  • Target Product Pages (product_data) - Extract detailed product information including specifications, pricing, images, reviews, and seller details
  • Target Search Result Pages (product_search) - Extract search results with product listings, pagination, related searches, and sponsored products

📁 Scraper Structure

Each scraper type in the Target repository follows this structure:

cheerio-axios/
├── product_data/
│   ├── scraper/
│   │   └── {site}_scraper_product_v1.js
│   ├── example/
│   │   └── product.json
│   └── README.md
├── product_search/
│   ├── scraper/
│   │   └── {site}_scraper_product_search_v1.js
│   ├── example/
│   │   └── product_search.json
│   └── README.md
├── product_category/
│   ├── scraper/
│   │   └── {site}_scraper_product_category_v1.js
│   ├── example/
│   │   └── product_category.json
│   └── README.md
├── reviews/          # Coming soon
└── sellers/          # Coming soon

Each scraper directory contains:

  • scraper/ - Implementation files
  • example/ - Sample JSON output files
  • README.md - Detailed documentation for that scraper

🚀 Features

  • Multiple Framework Support: Choose from Cheerio & Axios, Playwright, Puppeteer
  • Production-Ready: Battle-tested scrapers with error handling and retry logic
  • Anti-Bot Protection: Optional ScrapeOps support that may help with proxy rotation and request optimization
  • Comprehensive Data Extraction: Product data, search results, and category listings
  • JSONL Output Format: Efficient, line-by-line JSON output for easy processing
  • Well-Documented: Detailed READMEs for each scraper with examples and troubleshooting
  • Active Maintenance: Regular updates to handle target.com's changing HTML structure
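The retry logic mentioned above can be sketched as follows. This is a minimal illustration (the function name and defaults are hypothetical), not the repository's actual implementation:

```javascript
// Retry an async operation with exponential backoff.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Exponential backoff: 500ms, 1000ms, 2000ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage: a stand-in task that succeeds on the third attempt.
withRetry(async (attempt) => {
  if (attempt < 2) throw new Error(`attempt ${attempt} failed`);
  return `succeeded on attempt ${attempt}`;
}).then((result) => console.log(result)); // → succeeded on attempt 2
```

In a real scraper, `fn` would be the request/parse step, and non-retryable errors (e.g. a 404) should be rethrown immediately instead of retried.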

📋 Requirements

  • Node.js and npm installed (a recent LTS release is recommended)
  • Framework-specific packages: cheerio & axios, playwright, or puppeteer (install commands below)
  • A ScrapeOps API key for proxy/anti-bot support (free tier available)

🎯 Quick Start

  1. Choose a framework based on your needs (see comparison below)
  2. Navigate to the framework directory and follow its README for setup
  3. Get your ScrapeOps API key from https://scrapeops.io/app/register/ai-builder
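The API key from step 3 is typically used by routing each request through the ScrapeOps proxy endpoint. A minimal sketch, assuming the standard `https://proxy.scrapeops.io/v1/` proxy API with `api_key` and `url` parameters (check the ScrapeOps docs for the exact integration your plan supports):

```javascript
// Build a ScrapeOps-proxied URL for a page you want to scrape.
// Endpoint and parameter names assume the ScrapeOps proxy API.
function buildProxyUrl(apiKey, targetUrl) {
  const proxyUrl = new URL('https://proxy.scrapeops.io/v1/');
  proxyUrl.searchParams.set('api_key', apiKey);
  proxyUrl.searchParams.set('url', targetUrl); // percent-encoded automatically
  return proxyUrl.toString();
}

// 'YOUR_API_KEY' and the product URL are placeholders.
console.log(buildProxyUrl('YOUR_API_KEY', 'https://www.target.com/p/example'));
```

You would then fetch this proxied URL with Axios (or load it in Playwright/Puppeteer) instead of hitting target.com directly.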

For framework-specific setup and usage, see the README in each framework's directory.

📚 Supported Frameworks

| Framework | Speed | JavaScript | Dependencies | Browser | Best For |
| --- | --- | --- | --- | --- | --- |
| Cheerio & Axios | ⚡⚡⚡ Very Fast | ❌ No | ✅ Minimal | ❌ None | Static HTML, high volume |
| Playwright | ⚡⚡ Medium | ✅ Yes | ⚠️ Moderate | ✅ Chromium/Firefox/WebKit | Modern JS sites, cross-browser |
| Puppeteer | ⚡⚡ Medium | ✅ Yes | ⚠️ Moderate | ✅ Chrome/Firefox/Edge | Legacy support, WebDriver |


🛡️ Anti-Bot Protection

All scrapers can integrate with ScrapeOps to help handle target.com's anti-bot measures:

  • Proxy Rotation: May help distribute requests across multiple IP addresses
  • Request Header Optimization: May optimize headers to reduce detection
  • Rate Limiting Management: Built-in rate limiting and retry logic

Note: Anti-bot measures vary by site and may change over time. CAPTCHA challenges may occur and cannot be guaranteed to be resolved automatically. Using proxies and browser automation can help reduce blocking, but effectiveness depends on the target site's specific anti-bot measures.

Free Tier Available: ScrapeOps offers a generous free tier perfect for testing and small-scale scraping.

Get your API key at https://scrapeops.io/app/register/ai-builder

📦 Output Format

All scrapers output data in JSONL format (one JSON object per line):

  • Efficient: Each line is a complete JSON object
  • Streamable: Process line-by-line without loading entire file
  • Database-Friendly: Easy to import into databases
  • Large Dataset Support: Handles millions of records efficiently

Example output file: {site}_com_product_page_scraper_data_20260114_120000.jsonl

🤔 Choosing the Right Framework

Use Cheerio & Axios when:

  • ✅ Pages don't require JavaScript rendering
  • ✅ You need maximum speed and throughput
  • ✅ You want minimal dependencies
  • ✅ You're scraping static HTML content

Use Playwright when:

  • ✅ Pages require JavaScript rendering
  • ✅ You need cross-browser support
  • ✅ You want modern async/await API
  • ✅ You need to interact with dynamic elements

Use Puppeteer when:

  • ✅ Pages require JavaScript rendering
  • ✅ You need Chrome/Chromium automation
  • ✅ You want Chrome DevTools Protocol access
  • ✅ You need to interact with dynamic elements

⚠️ Common Issues & Solutions

Issue: "Cannot find module" or "MODULE_NOT_FOUND"

Solution:

npm install
# Or install specific packages
npm install cheerio axios  # For cheerio-axios
npm install playwright     # For playwright
npm install puppeteer      # For puppeteer

Issue: "Playwright browsers not installed"

Solution:

npx playwright install chromium
# Or install all browsers
npx playwright install

Issue: "Rate limiting or blocked requests"

Solution:

  • Verify ScrapeOps API key is correct
  • Check ScrapeOps dashboard for account status
  • Reduce concurrency settings
  • Increase delays between requests
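Reducing concurrency and adding delays can be done with a small scheduler. A hypothetical sketch (not the scrapers' built-in rate limiter):

```javascript
// Run async tasks with a concurrency cap and a fixed delay after each task.
async function runLimited(tasks, { concurrency = 2, delayMs = 250 } = {}) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index
      results[i] = await tasks[i]();
      await new Promise((resolve) => setTimeout(resolve, delayMs)); // pause between requests
    }
  }
  // Spawn `concurrency` workers that pull tasks until none remain.
  await Promise.all(Array.from({ length: concurrency }, worker));
  return results;
}

// Usage with placeholder tasks standing in for page fetches.
const pageTasks = [1, 2, 3, 4].map((n) => async () => `page ${n}`);
runLimited(pageTasks, { concurrency: 2, delayMs: 100 }).then((r) => console.log(r));
```

Results come back in input order regardless of completion order, since each worker writes to its claimed index.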

Issue: "Empty output or missing data"

Solution:

  • Verify URL format is correct
  • Check whether target.com has changed its HTML structure
  • Update selectors if needed
  • Enable debug logging to see extraction steps

🔗 Alternative Implementations

This repository also provides Python implementations of the same scrapers; see the repository root for details.

📖 Best Practices

  1. Isolate Dependencies: Install packages per project via package.json, not globally
  2. Respect Rate Limits: Use appropriate delays and concurrency settings
  3. Monitor ScrapeOps Usage: Track your API usage in the ScrapeOps dashboard
  4. Handle Errors Gracefully: Implement proper error handling and logging
  5. Validate URLs: Ensure URLs are valid target.com pages before scraping
  6. Update Selectors Regularly: target.com may change its HTML structure
  7. Test Regularly: Test scrapers regularly to catch breaking changes early
  8. Handle Missing Data: Some products may not have all fields; handle null values appropriately
  9. Browser Management: For browser automation, ensure proper cleanup and resource management
  10. Use JSONL Format: Efficient for large datasets and streaming processing
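Point 8 (handling missing data) can look like this in practice. A hypothetical normalizer, with field names chosen for illustration rather than taken from the scrapers' schema:

```javascript
// Normalize a raw scraped record, tolerating missing or malformed fields.
function normalizeProduct(raw) {
  return {
    title: raw?.title ?? null,
    price: typeof raw?.price === 'number' ? raw.price : null,
    rating: raw?.reviews?.rating ?? null, // nested field may be absent
    imageCount: Array.isArray(raw?.images) ? raw.images.length : 0,
  };
}

console.log(normalizeProduct({ title: 'Desk Lamp', images: ['a.jpg', 'b.jpg'] }));
// → { title: 'Desk Lamp', price: null, rating: null, imageCount: 2 }
```

Emitting explicit `null` for missing fields keeps every JSONL record's shape consistent, which simplifies downstream database imports.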

📚 Resources & Documentation

Framework Documentation

  • Cheerio & Axios: Cheerio & Axios documentation
  • Playwright: Playwright documentation
  • Puppeteer: Puppeteer documentation

Project Resources

  • Root README: ../README.md - Overview of all implementations
  • Framework READMEs: See individual framework directories for specific guides
  • Scraper READMEs: See individual scraper directories for detailed documentation

⚖️ License

These scrapers are provided as-is for educational and commercial use. Please ensure compliance with target.com's Terms of Service and robots.txt when using them.

See LICENSE for full license details.

⚠️ Disclaimer

This software is provided for educational and commercial purposes. Users are responsible for ensuring their use complies with:

  • target.com's Terms of Service
  • target.com's robots.txt
  • Applicable laws and regulations
  • Rate limiting and respectful scraping practices

The authors and contributors are not responsible for any misuse of this software.