Overview

Details scraping (also called “page scraping” or “URL scraping”) allows you to extract specific data from multiple individual pages. Instead of scraping a list from one page, you provide a list of URLs and Easy Scraper visits each URL to extract the data you specify. Use cases:
  • Extract product details from a list of product URLs
  • Gather article content from multiple blog posts
  • Collect company information from business profile pages
  • Scrape property details from real estate listings
  • Extract contact information from multiple profile pages

How Page Details Scraping Works

1. Provide URLs

Give Easy Scraper a list of URLs to visit. This can come from:
  • URLs scraped from a list
  • A CSV file you upload
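If you upload a CSV, a single URL column with a header row is the simplest shape; for example (file name and URLs are made up):

```csv
url
https://example.com/products/1
https://example.com/products/2
https://example.com/products/3
```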
2. Define data to extract

Create a scraper by selecting which fields to extract from each page. You do this by visiting a test URL and clicking on the elements you want to scrape.
3. Run the scraper

Easy Scraper visits each URL, extracts the data, and compiles it into a table that you can export.
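Conceptually, the three steps above form a simple loop. The sketch below is not Easy Scraper's actual code; the pages and the `extract_title` field extractor are made-up stand-ins:

```python
# Conceptual sketch of the details-scraping loop: visit each URL,
# extract the chosen fields, and compile the results into one table.
# Stand-ins for fetched pages (a real run downloads each URL).
pages = {
    "https://example.com/p/1": "<h1>Widget</h1>",
    "https://example.com/p/2": "<h1>Gadget</h1>",
}

def extract_title(html):
    """Hypothetical extractor for a single 'title' field."""
    return html.split("<h1>")[1].split("</h1>")[0]

table = []
for url, html in pages.items():                  # 1. visit each URL
    table.append({"url": url,                    # 2. extract the data
                  "title": extract_title(html)})

for row in table:                                # 3. the compiled table
    print(row["url"], row["title"])
```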

Step-by-Step Guide

Step 1: Select URLs

You can supply URLs either from a list scraping result or by uploading a CSV file. The easiest way is to use a list scraping result:
1. Scrape a list

First, use List Scraping to scrape a list that includes URLs (links to detail pages).
2. Click 'Scrape each URL'

After list scraping completes, click the “Scrape each URL” button at the top of the results.
3. Select the URL field

Choose which field from your list scraping results contains the URLs you want to visit.
This workflow is ideal for scraping e-commerce products: first scrape the product list to get URLs, then scrape each product page for detailed information.

Step 2: Select Data to Extract

Now you need to tell Easy Scraper what data to extract from each page.
1. Select a test URL

Select a test URL from the list of URLs you provided. This should be a representative page that has all the data you want to extract.
2. Click 'Select data to extract'

This opens the test page and enters “recording mode” where you can click on elements to select them.
3. Click on elements

Click on each piece of data you want to extract. For example:
  • Product title
  • Price
  • Description
  • Images
  • Specifications
Each click adds a new field to your scraper. The extension will show you an overlay as you select each field.
4. Review the preview

After selecting your fields, Easy Scraper shows you a preview of the extracted data. Check that all fields are captured correctly.
5. Name your fields

Give each field a descriptive name (e.g., “Product Title”, “Price”, “Main Image URL”). This helps you identify the data in your export.
Easy Scraper generates smart selectors for each element that work across multiple pages of the same website. The selectors are designed to be reliable even if the exact HTML structure varies slightly between pages.
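Easy Scraper's selector engine is not public, but the idea of a field selector that generalizes across pages can be sketched with Python's standard-library HTML parser. This example assumes a simple class-based selector (like CSS `.price`) and shows it matching two pages whose surrounding markup differs:

```python
# Sketch: one class-based field selector matching across two pages
# with different surrounding HTML (not Easy Scraper's real engine).
from html.parser import HTMLParser

class ClassTextExtractor(HTMLParser):
    """Collect visible text inside any element carrying a given class."""
    def __init__(self, class_name):
        super().__init__()
        self.class_name = class_name
        self.depth = 0          # > 0 while inside a matching element
        self.texts = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.class_name in classes:
            self.depth += 1     # entered a matching element
        elif self.depth:
            self.depth += 1     # nested tag inside the match

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.texts.append(data.strip())

def extract(html, class_name):
    parser = ClassTextExtractor(class_name)
    parser.feed(html)
    return parser.texts

# The same ".price" selector works even though the markup differs.
page_a = "<div><span class='price'>$19.99</span></div>"
page_b = "<section><p class='price'><b>$24.50</b></p></section>"
print(extract(page_a, "price"), extract(page_b, "price"))
```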

Step 3: Name Your Scraper (Optional)

Your scraper is automatically saved to your browser for future reuse.
1. Name your scraper

Give your scraper a descriptive name (e.g., “Amazon Product Details”, “Blog Article Scraper”).
2. Automatic save

The scraper is automatically saved to your browser and associated with the website’s domain.
3. Reuse later

Next time you scrape pages from the same website, you can select your saved scraper instead of creating a new one.

Managing Saved Scrapers

  • Export: click the export button to save your scraper as a JSON file. This lets you share scrapers with others or move them between browsers.
  • Import: click the import button and select a previously exported scraper JSON file.
  • Delete: click the delete button to remove a scraper you no longer need.
  • Rename: edit the scraper name field; the new name is saved automatically.

Step 4: Run the Scraper

1. Configure wait times

Set how long to wait between visiting each page:
  • Min wait time: Minimum delay between pages (default: 1s)
  • Max wait time: Maximum time to wait for page to load (default: 3s)
Scraping too quickly may trigger rate limiting or anti-bot measures on some websites. Use appropriate wait times to be respectful of server resources.
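A common way to space out page visits is a randomized delay between the configured bounds. This is only an illustration of the idea, not Easy Scraper's implementation (and note that the extension's max setting is a page-load timeout rather than a delay bound):

```python
# Sketch of a polite randomized delay between page visits; the
# default values mirror the 1s/3s settings mentioned above.
import random
import time

def polite_sleep(min_wait=1.0, max_wait=3.0):
    """Sleep a random duration in [min_wait, max_wait] seconds
    and return how long we actually waited."""
    delay = random.uniform(min_wait, max_wait)
    time.sleep(delay)
    return delay

waited = polite_sleep(0.01, 0.02)   # tiny bounds just to demonstrate
print(f"waited {waited:.3f}s")
```

Randomizing the delay (rather than sleeping a fixed interval) makes the request pattern look less mechanical, which some rate limiters key on.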
2. Click 'Start Scraping'

Easy Scraper will visit each URL in sequence and extract the data.
3. Monitor progress

The extension shows you:
  • Number of pages scraped
  • Number of pages remaining
  • Current page being scraped
4. View results

Once complete, all extracted data is displayed in a table. Each row represents one page, and each column represents one field you selected.

Step 5: Export Your Data

After scraping completes, export your data in your preferred format:
  • Download CSV: Best for spreadsheet applications
  • Download JSON: Best for programmatic use
  • Copy to Clipboard: Copy as TSV to paste directly into Excel, Google Sheets, or other applications
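The three export shapes can be sketched for a small made-up results table (the rows and field names below are illustrative, not real output):

```python
# Sketch of the three export shapes for a scraped table:
# CSV, JSON, and the tab-separated text that "Copy to Clipboard"
# pastes into spreadsheets.
import csv
import io
import json

rows = [
    {"url": "https://example.com/p/1", "title": "Widget", "price": "$5"},
    {"url": "https://example.com/p/2", "title": "Gadget", "price": "$9"},
]
fields = ["url", "title", "price"]

# CSV: best for spreadsheet applications
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()

# JSON: best for programmatic use
json_text = json.dumps(rows, indent=2)

# TSV: paste-ready for Excel or Google Sheets
tsv_text = "\n".join(
    ["\t".join(fields)] + ["\t".join(r[f] for f in fields) for r in rows]
)
print(csv_text, json_text, tsv_text, sep="\n")
```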

Field Types

Details scrapers can extract the same three types of data as list scrapers:
Type      | Description                 | Examples
Text      | Visible text content        | Product names, descriptions, prices, headings
Link URL  | The href attribute of links | Related product URLs, external references
Image URL | The src attribute of images | Product photos, thumbnails, logos
When selecting an element, Easy Scraper automatically detects whether to extract text, a link, or an image URL based on the element type.
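The extension's exact detection rules are not documented, but the behavior described above amounts to a simple mapping from element kind to field type, roughly:

```python
# Sketch of field-type auto-detection by element kind (assumed
# behavior; the real extension's rules are not documented).
def detect_field_type(tag):
    if tag == "a":
        return "Link URL"    # extract the href attribute
    if tag == "img":
        return "Image URL"   # extract the src attribute
    return "Text"            # extract visible text content

print([detect_field_type(t) for t in ("a", "img", "h1", "span")])
```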

Tips for Successful Page Details Scraping

Select a test URL that has all the data you want to extract. Some pages may be missing optional fields, so test with a complete example.
Click directly on the element containing the data you want. Don’t click on parent containers unless you want all the text from that container.
Before scraping hundreds of URLs, test with just 5-10 URLs to ensure your scraper is working correctly. You can always run it again on all URLs.
Adjust wait times to match the website's speed:
  • Fast websites: 1-2 seconds between pages
  • Slow websites: 3-5+ seconds
  • If you're getting incomplete data, increase the max wait time
Not all pages will have all fields. Empty fields will appear as blank cells in your export, which is normal.
Your scrapers are automatically saved. Give them descriptive names so you can easily find and reuse them for similar pages in the future.

Common Issues and Solutions

Problem: A field works on the test URL but is empty for other URLs.
Solutions:
  • The element may not exist on some pages (this is normal)
  • Content may be loaded dynamically - increase the max wait time
  • Check if the website uses different templates for different pages
Problem: Details scraping stops before visiting all URLs.
Solutions:
  • Check your browser console for errors
  • The website may have rate limiting - increase wait times and try again
  • Some URLs may be invalid or return errors - check your URL list
Problem: The scraper extracts the wrong data or includes extra content.
Solutions:
  • Use “Verify” on the test URL to see exactly what’s being selected
  • Try clicking more precisely on the exact element you want
  • Remove the field and select it again
Problem: A scraper that works on one domain doesn't work on another similar website.
Solutions:
  • Scrapers are domain-specific by design
  • Create a new scraper for the new domain
  • You can export the old scraper and manually modify the JSON if the sites are very similar
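The exported scraper's JSON schema is undocumented, so any manual edit is exploratory. As a sketch, assuming a hypothetical layout with a top-level "domain" key and a list of field selectors, retargeting could look like this:

```python
# Sketch of retargeting an exported scraper to a similar site.
# The schema below is hypothetical, not the real export format.
import json

exported = json.dumps({
    "name": "Product Details",
    "domain": "shop-a.example.com",
    "fields": [{"name": "Price", "selector": ".price", "type": "text"}],
})

scraper = json.loads(exported)
scraper["domain"] = "shop-b.example.com"   # point at the similar site
retargeted = json.dumps(scraper, indent=2)
print(retargeted)
```

After editing, import the modified file and verify each field on a test URL from the new site, since selectors rarely transfer unchanged.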
Problem: After uploading a CSV, the wrong column is being detected as URLs.
Solutions:
  • Make sure your CSV has a header row
  • Manually select the correct column from the dropdown
  • Check that URLs are properly formatted (include https:// or http://)
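A quick way to pre-check a URL list is to validate each entry with the standard library before starting a run (the sample URLs are made up):

```python
# Sketch: filter out malformed URLs (missing scheme or host)
# before handing the list to a scraper run.
from urllib.parse import urlparse

def is_valid_url(url):
    """True if the URL has an http(s) scheme and a host."""
    parsed = urlparse(url.strip())
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

urls = ["https://example.com/p/1", "example.com/p/2", ""]
print([u for u in urls if is_valid_url(u)])
```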

Learn List Scraping

Learn how to scrape lists of items with URLs that you can then use for details scraping.