Overview
Details scraping (also called “page scraping” or “URL scraping”) allows you to extract specific data from multiple individual pages. Instead of scraping a list from one page, you provide a list of URLs and Easy Scraper visits each URL to extract the data you specify.
Use cases:
- Extract product details from a list of product URLs
- Gather article content from multiple blog posts
- Collect company information from business profile pages
- Scrape property details from real estate listings
- Extract contact information from multiple profile pages
How Page Details Scraping Works
1. Provide URLs, from either:
   - URLs scraped from a list
   - A CSV file you upload
2. Define the data to extract
3. Run the scraper
Step-by-Step Guide
Step 1: Select URLs
You can provide URLs in two ways:
- From List Scraping
- Upload CSV (a sample file is sketched below)
For the From List Scraping option:
1. Scrape a list
2. Click 'Scrape each URL'
3. Select the URL field
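If you choose Upload CSV, a minimal file can be as simple as the sketch below. The column name url is only an example (you select the URL column yourself), but the header row and the full https:// or http:// prefixes are required (see Common Issues and Solutions):

```csv
url
https://example.com/products/widget-1
https://example.com/products/widget-2
```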
Step 2: Select Data to Extract
Now you need to tell Easy Scraper what data to extract from each page.
1. Select a test URL
2. Click 'Select data to extract'
3. Click on the elements you want to extract, for example:
   - Product title
   - Price
   - Description
   - Images
   - Specifications
4. Review the preview
5. Name your fields
Step 3: Name Your Scraper (Optional)
Your scraper is automatically saved to your browser for future reuse.
- Name your scraper
- Automatic save
- Reuse later
Managing Saved Scrapers
Export a scraper
Import a scraper
Delete a scraper
Rename a scraper
Step 4: Run the Scraper
1. Configure wait times (these also bound the total run time; see the sketch after this list):
   - Min wait time: Minimum delay between pages (default: 1s)
   - Max wait time: Maximum time to wait for a page to load (default: 3s)
2. Click 'Start Scraping'
3. Monitor progress:
   - Number of pages scraped
   - Number of pages remaining
   - Current page being scraped
4. View results
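Wait times largely determine how long a big run takes. A back-of-the-envelope sketch, assuming each page costs somewhere between the min and max wait (the URL count here is hypothetical):

```python
# Rough run-time bounds for a details scrape, assuming each page costs
# between the min wait (delay between pages) and the max wait (load timeout).
urls = 500                     # hypothetical number of URLs to scrape
min_wait, max_wait = 1.0, 3.0  # the defaults listed above, in seconds

low = urls * min_wait / 60
high = urls * max_wait / 60
print(f"~{low:.0f} to ~{high:.0f} minutes for {urls} URLs")  # ~8 to ~25 minutes
```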
Step 5: Export Your Data
After scraping completes, export your data in your preferred format:
- Download CSV: Best for spreadsheet applications
- Download JSON: Best for programmatic use (see the sketch below)
- Copy to Clipboard: Copy as TSV to paste directly into Excel, Google Sheets, or other applications
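As a minimal sketch of programmatic use, assuming the JSON export is an array of records keyed by the field names you chose in Step 2 (the file name and the title field are hypothetical):

```python
# Load an exported JSON file and print one field from each record.
import json

with open("results.json", encoding="utf-8") as f:
    records = json.load(f)

for record in records:
    print(record.get("title", "<missing>"))  # 'title' is a field name you chose
```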
Field Types
Details scrapers can extract the same three types of data as list scrapers:
| Type | Description | Examples |
|---|---|---|
| Text | Visible text content | Product names, descriptions, prices, headings |
| Link URL | The href attribute of links | Related product URLs, external references |
| Image URL | The src attribute of images | Product photos, thumbnails, logos |
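Put together, a single extracted record might look like this in a JSON export (field names and values are hypothetical): title is a Text field, related_url a Link URL, and photo_url an Image URL.

```json
{
  "title": "Acme Widget Pro",
  "related_url": "https://example.com/widgets/accessories",
  "photo_url": "https://example.com/images/widget-pro.jpg"
}
```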
Tips for Successful Page Details Scraping
Choose a good test URL
Be specific when selecting elements
Test with a small sample first
Use appropriate wait times
- Fast websites: 1-2 seconds between pages
- Slow websites: 3-5+ seconds
- If you’re getting incomplete data, increase the max wait time
Handle missing data gracefully
Name your scrapers
Common Issues and Solutions
Field is empty for some pages
- The element may not exist on some pages (this is normal; rows with empty fields can be filtered after export, as sketched below)
- Content may be loaded dynamically - increase the max wait time
- Check if the website uses different templates for different pages
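Since empty fields are often expected, it can be easier to filter them out after export. A post-processing sketch, assuming a CSV export and a hypothetical field named price:

```python
# Keep only rows whose (hypothetical) 'price' field is non-empty.
# Assumes results were exported via 'Download CSV', which has a header row.
import csv

with open("results.csv", newline="", encoding="utf-8") as f:
    rows = [row for row in csv.DictReader(f) if row.get("price", "").strip()]

print(f"{len(rows)} rows have a price")
```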
Scraper stops before finishing all URLs
- Check your browser console for errors
- The website may have rate limiting - increase wait times and try again
- Some URLs may be invalid or return errors - check your URL list (a quick pre-flight check is sketched below)
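A pre-flight sketch for spotting bad URLs before a re-run, assuming Python with the third-party requests package and a URL column named url (both hypothetical choices):

```python
# Print URLs from the input list that do not return HTTP 200.
import csv

import requests  # third-party: pip install requests

with open("urls.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        try:
            status = requests.head(row["url"], allow_redirects=True, timeout=10).status_code
        except requests.RequestException as exc:
            status = type(exc).__name__
        if status != 200:
            print(row["url"], status)
```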
Extracted data is incorrect
- Use “Verify” on the test URL to see exactly what’s being selected
- Try clicking more precisely on the exact element you want
- Remove the field and select it again
Can't reuse a saved scraper on similar pages
- Scrapers are domain-specific by design
- Create a new scraper for the new domain
- You can export the old scraper and manually modify the JSON if the sites are very similar (see the sketch below)
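Because the export format is Easy Scraper's own, the safest generic trick is a plain-text substitution of the domain. This sketch only assumes the export is a UTF-8 text/JSON file that embeds the old domain as literal strings (file and domain names are hypothetical):

```python
# Point an exported scraper at a near-identical site on a new domain.
from pathlib import Path

old = Path("product-details.json").read_text(encoding="utf-8")
new = old.replace("shop-a.example.com", "shop-b.example.com")
Path("product-details-shop-b.json").write_text(new, encoding="utf-8")
```

You may still need to adjust the result by hand if the page structures differ.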
Upload CSV shows wrong URLs
- Make sure your CSV has a header row
- Manually select the correct column from the dropdown
- Check that URLs are properly formatted (include https:// or http://); the sketch below automates this check
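A small sketch of that formatting check (assumes your URL column is named url, which is hypothetical):

```python
# Print rows whose URL lacks the required http:// or https:// scheme.
import csv

with open("urls.csv", newline="", encoding="utf-8") as f:
    for line_no, row in enumerate(csv.DictReader(f), start=2):  # line 1 is the header
        if not row["url"].startswith(("http://", "https://")):
            print(f"line {line_no}: {row['url']}")
```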