
General Questions

Is Easy Scraper free to use?
Yes, Easy Scraper is currently free to use. You can scrape as many pages as you need without any subscription or payment.
Which browsers does Easy Scraper work on?
Easy Scraper works on all Chromium-based browsers, including:
  • Google Chrome
  • Microsoft Edge
  • Brave
  • Opera
  • Vivaldi
Note: Easy Scraper is not compatible with Firefox or Safari.
Do I need to know how to code?
No! Easy Scraper is designed for everyone. You interact with a visual interface: just click on the data you want to extract. No coding, programming, or technical knowledge is required.
Is my scraped data private?
Yes. All scraping happens locally in your browser. The data you scrape is stored only on your device and is never sent to Easy Scraper’s servers. Your scraped data remains completely private.
Can Easy Scraper scrape any website?
Technically, Easy Scraper can attempt to scrape most websites. However:
  • You should always respect the website’s Terms of Service
  • Some websites have technical measures that prevent scraping
  • Some content may be protected by copyright
  • Always scrape ethically and responsibly
What languages does Easy Scraper support?
Easy Scraper supports multiple languages, including English, Spanish, French, German, Chinese, Japanese, and more. The interface automatically adapts to your browser’s language settings.

Getting Started

How do I install Easy Scraper?
  1. Visit the Chrome Web Store
  2. Click “Add to Chrome” (or your browser’s equivalent)
  3. Confirm the installation
  4. Pin the extension to your toolbar for easy access
See the Getting Started guide for detailed instructions.
Why isn’t Easy Scraper detecting a list on the page?
This can happen for several reasons:
  • The page is still loading - wait a few seconds and refresh
  • The page doesn’t have a repeating list structure
  • The list uses a complex or non-standard layout
  • Content is loaded dynamically - try scrolling or interacting with the page first
If you’re sure there’s a list on the page, try refreshing and waiting longer for the page to fully load.
What types of data can Easy Scraper extract?
Easy Scraper can extract three types of data:
  1. Text content: Any visible text on the page
  2. Links (URLs): The href attribute from link elements
  3. Images (URLs): The src attribute from image elements
The extension automatically detects which type to extract based on the element you click.
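To make the three field types concrete, here is a minimal sketch using Python’s standard html.parser. The markup, class name, and field names are illustrative assumptions; Easy Scraper’s own extraction is internal to the extension:

```python
from html.parser import HTMLParser

class FieldExtractor(HTMLParser):
    """Collects the three field types described above: visible text,
    link URLs (href), and image URLs (src)."""
    def __init__(self):
        super().__init__()
        self.texts, self.links, self.images = [], [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])     # link URL field
        elif tag == "img" and "src" in attrs:
            self.images.append(attrs["src"])     # image URL field

    def handle_data(self, data):
        if data.strip():
            self.texts.append(data.strip())      # text content field

# Hypothetical snippet of a list item
p = FieldExtractor()
p.feed('<div><a href="/item/1">Widget</a><img src="/img/1.png"></div>')
print(p.texts, p.links, p.images)
# ['Widget'] ['/item/1'] ['/img/1.png']
```

Clicking an element in the real extension picks one of these three extraction modes for you automatically.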
Does Easy Scraper support pagination?
Yes! Easy Scraper supports pagination through:
  • Infinite scroll - Automatically scrolls to load more items
  • Next page buttons - Clicks “Next” or page numbers
  • Load more buttons - Clicks “Load More” or “Show More”
See the List Scraping guide for details.

List Scraping

Why are some items missing from my results?
Common causes:
  • Wait time too short: Items didn’t finish loading before scraping moved on
  • Scroll issues: Page didn’t scroll far enough to trigger loading
  • Rate limiting: Website blocked further requests
  • Max items limit: You set a limit on the number of items
Solutions:
  • Increase the max wait time in your configuration
  • Add a scroll delay
  • Reduce scraping speed to avoid rate limiting
  • Remove or increase the max items limit
How do I scrape a list that spans multiple pages?
  1. Start scraping the first page normally
  2. In the scraping configuration, choose a load more option:
    • “Scroll Down” for infinite scroll
    • “Click Next Page Button” for pagination
  3. Configure wait times
  4. Optionally set a max items limit
  5. Start scraping
See the Pagination Options section for detailed instructions.
Why am I getting duplicate items?
This can happen when:
  • The page has multiple instances of the same list
  • Pagination loads items that were already scraped
  • The page structure causes the same elements to be detected multiple times
Solutions:
  • Check if the page has duplicate content
  • Refresh and try again
  • After exporting, remove duplicates in your spreadsheet application
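If you prefer to clean the export outside a spreadsheet, exact duplicate rows can be dropped with a few lines of standard-library Python. This is a sketch; the column names and values are made up:

```python
import csv
import io

def dedupe_rows(csv_text):
    """Drop exact duplicate rows from an exported CSV, keeping the
    first occurrence of each row."""
    seen, unique = set(), []
    for row in csv.reader(io.StringIO(csv_text)):
        key = tuple(row)
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

# Hypothetical export in which one item was scraped twice
data = "name,price\nWidget,9.99\nWidget,9.99\nGadget,4.50\n"
print(dedupe_rows(data))
# [['name', 'price'], ['Widget', '9.99'], ['Gadget', '4.50']]
```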
How many items can I scrape from a list?
There’s no hard limit, but practical considerations include:
  • Browser memory (very large scrapes may slow down or crash)
  • Website rate limiting (sites may block excessive requests)
  • Time (scraping thousands of items can take hours)
Recommendation: For large scrapes, use the “Max items” limit to scrape in smaller batches.

Page Details Scraping

What’s the difference between list scraping and page details scraping?
  • List scraping: Extracts multiple items from a single page (e.g., product listings)
  • Page details scraping: Visits multiple URLs and extracts data from each individual page (e.g., product detail pages)
They work great together: use list scraping to get URLs, then use page details scraping to get more information from each URL.
How do I start page details scraping?
Two ways:
  1. From list scraping: Scrape a list that includes URLs, then click “Scrape each URL”
  2. Upload a CSV: Upload a CSV file with a column containing URLs
See the Page Details Scraping guide for step-by-step instructions.
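For the CSV upload option, the file simply needs one column holding the URLs to visit. A sketch of the expected shape, read with Python’s csv module (the column names and example rows are made up):

```python
import csv
import io

# Hypothetical file contents: extra columns are fine,
# only the column containing URLs matters for the scrape.
csv_text = "url,notes\nhttps://example.com/p/1,first\nhttps://example.com/p/2,second\n"

urls = [row["url"] for row in csv.DictReader(io.StringIO(csv_text))]
print(urls)
# ['https://example.com/p/1', 'https://example.com/p/2']
```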
Can I save a scraper and reuse it later?
Yes! When creating a details scraper:
  1. Select your fields as normal
  2. Give your scraper a descriptive name
  3. It’s automatically saved
  4. Next time you scrape pages from the same domain, select your saved scraper from the dropdown
You can also export scrapers as JSON files to back them up or share them.
Why doesn’t my saved scraper work on new pages?
Scrapers are domain-specific and designed for pages with similar structure. If a scraper doesn’t work:
  • The new pages may have a different HTML structure
  • Content may be in different locations or use different element types
  • You may need to create a new scraper for the new page type
Test your scraper on a few examples before running it on many URLs.
Some fields are empty in my results. Is that normal?
Yes, this is completely normal! Not every page will have every field. For example:
  • Some products don’t have reviews
  • Some listings don’t have images
  • Optional fields may be blank for some items
Empty fields will appear as blank cells in your export, which is expected behavior.
How many URLs can I scrape at once?
You can scrape as many URLs as you want, but consider:
  • Time: With 2-3 second delays, 100 URLs take 3-5 minutes and 1,000 URLs take 30-50 minutes
  • Rate limiting: Scraping too many pages too quickly may trigger anti-bot measures
  • Browser performance: Keep the browser window active for best results
Recommendation: Test with 10-20 URLs first, then scale up once you’re confident the scraper works correctly.
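The time figures above are simple arithmetic: the number of URLs times the per-page delay. A back-of-the-envelope helper (a sketch for planning only, not part of the extension):

```python
def estimate_minutes(n_urls, delay_seconds):
    """Lower-bound runtime: per-page delay times number of URLs,
    ignoring page load time on top of the configured delay."""
    return n_urls * delay_seconds / 60

# 100 URLs at 2-3 s each -> roughly 3-5 minutes
print(estimate_minutes(100, 2), estimate_minutes(100, 3))    # ~3.3 and 5.0
# 1000 URLs at 2-3 s each -> roughly 33-50 minutes
print(estimate_minutes(1000, 2), estimate_minutes(1000, 3))  # ~33.3 and 50.0
```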

Technical Issues

The extension isn’t working. What should I do?
Solutions:
  1. Refresh the page you’re on and try again
  2. Close and reopen the extension
  3. Restart your browser
  4. Check that the extension is enabled at chrome://extensions/
  5. Try updating to the latest version of the extension
  6. Clear your browser cache and cookies
Why did my scrape stop partway through?
This can happen if:
  • You closed the popup window
  • The browser tab became inactive (some browsers throttle background tabs)
  • You ran out of memory
  • The website blocked further requests
Solutions:
  • Keep the popup window open while scraping
  • Keep the browser tab active (don’t minimize or switch away)
  • Close other tabs if you’re running out of memory
  • Increase wait times and try again
Why is some of my data missing or wrong?
Common causes and solutions:
For dynamically loaded content:
  • Increase the max wait time to 5-10 seconds
  • Manually interact with the page before scraping
For timing issues:
  • Add a scroll delay (500-1000ms)
  • Increase min wait times between actions
For JavaScript-heavy sites:
  • Let the page fully load before opening the extension
  • Wait for loading spinners to disappear
  • Some sites may not be compatible with scraping
What does an “element not found” error mean?
This means the scraper couldn’t find the element using its selector. This happens when:
  • The page structure is different from the test page
  • Content hasn’t loaded yet
  • The website changed its HTML structure
Solutions:
  • Increase wait times
  • Remove and re-add the field
  • Create a new scraper if the website changed significantly
  • Check if the element actually exists on the page
Why is scraping so slow?
Scraping speed depends on:
  • Your configured wait times (intentional delays)
  • How fast the website responds
  • How much data you’re extracting
This is normal - web scraping is inherently a slow process because you need to:
  • Wait for pages to load
  • Be respectful of server resources
  • Avoid triggering anti-bot measures
To speed up (carefully):
  • Reduce min wait times
  • Reduce max wait times if pages load quickly
  • Remove unnecessary fields
What should I do if I see a CAPTCHA?
This means the website has detected automated access and is challenging you to prove you’re human.
What to do:
  1. Complete the CAPTCHA manually
  2. Slow down your scraping (increase wait times to 3-5+ seconds)
  3. Reduce the number of pages per session
  4. Take breaks between scraping sessions
Frequent CAPTCHAs indicate the website doesn’t want automated access. Consider if continuing is appropriate.

Exporting Data

Which export format should I choose?
  • CSV: Best for Excel, Google Sheets, databases, and data analysis
  • JSON: Best for programming, APIs, and further processing with scripts
  • Copy (TSV): Best for quickly pasting into spreadsheets or documents
All formats contain the same data - choose based on what you plan to do with it.
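Because all three formats carry the same rows, you can also convert between them after export. For example, turning a CSV export into JSON with the standard library (the column names here are made up):

```python
import csv
import io
import json

# Hypothetical CSV export
csv_text = "name,price\nWidget,9.99\nGadget,4.50\n"

# Each CSV row becomes one JSON object keyed by the header row
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(json.dumps(rows, indent=2))
```

Note that CSV values are plain strings; converting prices or counts to numbers is a separate step.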
Why do special characters look garbled in my export?
This is an encoding issue. Easy Scraper exports CSV files with UTF-8 encoding, which supports all international characters.
If you see garbled characters in Excel:
  1. Don’t double-click the CSV file to open it
  2. Open Excel first
  3. Use File > Import > CSV/Text File
  4. Choose UTF-8 encoding in the import wizard
If you see garbled characters in Google Sheets: This shouldn’t happen - Google Sheets handles UTF-8 automatically. Try uploading the file again.
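If you post-process exports yourself, another common workaround is re-saving the file with a UTF-8 byte-order mark (BOM), which Excel uses to auto-detect the encoding on double-click. A sketch in Python (the file name and sample rows are made up):

```python
import os
import tempfile

# Sample rows containing non-ASCII characters
text = "name\nCafé Münster\n東京\n"
path = os.path.join(tempfile.gettempdir(), "export_bom.csv")

# "utf-8-sig" prepends a byte-order mark so Excel detects UTF-8 automatically
with open(path, "w", encoding="utf-8-sig") as f:
    f.write(text)

# Reading back with the same codec strips the BOM again
with open(path, encoding="utf-8-sig") as f:
    print(f.read() == text)   # True
```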
Where are my downloaded files saved?
Downloaded files (CSV and JSON) are saved to your browser’s default download location, typically:
  • Windows: C:\Users\YourName\Downloads
  • Mac: /Users/YourName/Downloads
  • Linux: /home/YourName/Downloads
Check your browser’s download settings to see or change the location.

Best Practices

How can I make my scrapes more reliable?
Follow these best practices:
  1. Test first: Always test with a small sample before running a full scrape
  2. Use adequate wait times: Don’t rush - give pages time to load
  3. Verify selectors: Use the “Verify” button to check your selections
  4. Monitor progress: Watch the first few items to ensure everything works
  5. Save your work: Export data frequently and save scrapers for reuse
How long should my wait times be?
There’s no universal answer, but general guidelines:
  • Safe for most sites: 2-3 seconds
  • Conservative: 3-5 seconds
  • Very cautious: 5-10 seconds
If you’re getting blocked or seeing CAPTCHAs, slow down significantly.
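If you script delays in your own tooling around these ranges, adding jitter is a common courtesy: randomized intervals look less uniform than a fixed delay. A sketch, where the function name and defaults are assumptions based on the “safe for most sites” range above:

```python
import random
import time

def polite_delay(min_s=2.0, max_s=3.0):
    """Sleep for a random interval in [min_s, max_s] seconds and
    return the interval that was chosen."""
    interval = random.uniform(min_s, max_s)
    time.sleep(interval)
    return interval

d = polite_delay(0.0, 0.01)   # tiny bounds just to keep this demo fast
print(0.0 <= d <= 0.01)       # True
```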
When is the best time to scrape?
Consider scraping during off-peak hours (evenings, weekends) for:
  • Large scraping jobs (hundreds or thousands of pages)
  • Websites that may have limited server capacity
  • Being extra respectful of server resources
For small scrapes (dozens of pages), timing is less critical.

Data and Privacy

What permissions does Easy Scraper need, and why?
Easy Scraper requires certain browser permissions to function:
  • Access to all websites: Needed to scrape any website you visit
  • Storage: To save your scrapers and scraping data locally
  • Active tab: To interact with the page you’re currently viewing
These are standard permissions for web scraping extensions. Your data stays local on your device.
Does Easy Scraper collect my data?
No. All scraping happens locally in your browser. Easy Scraper does not:
  • Send your scraped data to any servers
  • Track what websites you scrape
  • Collect or store your personal information
  • Share your data with third parties
Your scraped data and saved scrapers remain entirely on your device.
How long is my scraped data stored?
Your scraped data is stored locally in your browser until you:
  • Export it (download CSV/JSON or copy to clipboard)
  • Clear your browser data
  • Uninstall the extension
The data never leaves your computer unless you explicitly export and share it.

Still Need Help?