How does Contentsquare recreate your site pages?
Contentsquare uses the CS Scrapers to collect and save your site's static resources, such as CSS and image files, along with the HTML of your pages. This lets you analyze recreations of your site pages exactly as your users experienced them, even after you have changed those pages, without additional requests being sent to your servers.
There are two main Contentsquare features that rely on the CS Scrapers to recreate your site pages: Session replay and Zoning analysis.
How do the CS Scrapers work?
As soon as data collection is triggered, the CS Scrapers parse the URLs in a site page to gather its CSS, images, fonts, HTML, and other static resources, then download and store them.
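To make the parse-download-store flow concrete, here is a minimal Python sketch using only the standard library. The names (StaticResourceCollector, scrape_page) and the two tag types handled are illustrative assumptions, not Contentsquare's implementation, which handles far more resource types.

```python
# A minimal sketch of the kind of work a scraper does: parse a page's HTML
# for static resource URLs, then download each one. Illustrative only.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class StaticResourceCollector(HTMLParser):
    """Collects URLs of static resources (CSS, images) found on a page."""

    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.resource_urls: list[str] = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # <link href="..."> covers stylesheets and font preloads;
        # <img src="..."> covers images.
        if tag == "link" and attrs.get("href"):
            self.resource_urls.append(urljoin(self.base_url, attrs["href"]))
        elif tag == "img" and attrs.get("src"):
            self.resource_urls.append(urljoin(self.base_url, attrs["src"]))


def scrape_page(url: str) -> dict[str, bytes]:
    """Download a page's HTML plus its static resources, keyed by URL."""
    html = urlopen(url).read()
    collector = StaticResourceCollector(url)
    collector.feed(html.decode("utf-8", errors="replace"))
    store = {url: html}
    for resource_url in collector.resource_urls:
        store[resource_url] = urlopen(resource_url).read()
    return store
```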
There are two CS Scrapers:
- Session replay Scraper
The Session replay Scraper downloads the static resources during the collection of a session.
When a session is collected for Session replay, the Tag sends an event containing the static resource URLs to CS servers. Each URL is then checked to verify that the corresponding resource was scraped within the last 6 hours; if it was not, the resource is scraped and stored (a sketch of this check follows the list).
- Zoning Analysis Scraper
This scraper works on demand: it downloads static resources whenever you take snapshots of your site pages in Zoning analysis.
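The 6-hour freshness check behind the Session replay Scraper can be pictured as a simple cache lookup. Below is a minimal Python sketch; the in-memory dict and function names are assumptions for illustration, since the real pipeline presumably uses durable storage.

```python
# A rough sketch of the 6-hour freshness check, assuming an in-memory cache.
# Names are illustrative, not Contentsquare's implementation.
import time

SIX_HOURS = 6 * 60 * 60
last_scraped_at: dict[str, float] = {}  # resource URL -> UNIX timestamp


def needs_scraping(resource_url: str) -> bool:
    """Return True if the resource was not scraped within the last 6 hours."""
    scraped_at = last_scraped_at.get(resource_url)
    return scraped_at is None or time.time() - scraped_at > SIX_HOURS


def record_scrape(resource_url: str) -> None:
    """Mark the resource as freshly scraped."""
    last_scraped_at[resource_url] = time.time()
```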

How are the CS Scrapers whitelisted?
Your IT/Security administrators whitelist the CS Scrapers during your Contentsquare implementation process. View the CS Scraper whitelist here.
A whitelist is a security list that grants network access only to specific IP addresses and denies it to all others. Whitelisting the CS Scrapers helps ensure that your site pages render completely and correctly in Zoning analysis and Session replay and that data collection remains consistent over time. It also helps guarantee your ability to analyze your site pages using Contentsquare even after your static resources expire.
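For a rough picture of what such a check does, here is a minimal Python sketch that grants access only to IPs inside allowed ranges. The range shown is a documentation placeholder, not Contentsquare's published addresses, and in practice this check lives in your firewall or WAF rather than in application code.

```python
# An illustrative allowlist check. The range below is a documentation
# placeholder (TEST-NET-3), not Contentsquare's actual IP range.
from ipaddress import ip_address, ip_network

ALLOWED_RANGES = [ip_network("203.0.113.0/24")]  # placeholder range


def is_allowed(client_ip: str) -> bool:
    """Grant access only if the client IP falls inside an allowed range."""
    addr = ip_address(client_ip)
    return any(addr in network for network in ALLOWED_RANGES)


print(is_allowed("203.0.113.42"))  # True: inside the allowed range
print(is_allowed("198.51.100.7"))  # False: denied access
```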
What if the CS Scrapers aren't working?
In some cases, static resource downloading may be blocked by your servers or firewalls for security purposes. This can prevent Zonings or Session replays from displaying correctly.
Please use this article to help Zoning analysis and Session replay work with your unique site.
FAQs
Is it possible to deactivate the CS Scrapers?
Yes, we can prevent the CS Scrapers from retrieving static resources. If you choose to deactivate the scrapers, Contentsquare will rely on your live resources, which may result in inconsistent or unavailable resources in Zoning analysis and Session replay.
Can issues related to accurate site page recreation in Session replay and Zoning analysis be fixed?
Yes, most issues can be solved unless our technology has been blacklisted by your server.
- In some cases, mobile sites will not allow the CS Scrapers to function because we use a non-mobile User-Agent. If servers are set up to refuse non-mobile requests, our technology will not work (see the sketch after this list).
- If a resource, e.g. a profile picture, requires a site user to be logged in to display, the scrapers may not be able to access it. We recommend analyzing pages like this in CS Live.
- For any other issues, you can use this article to troubleshoot or create a support ticket.
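To illustrate the first point above, a mobile-only server might apply a check like the hypothetical one below, refusing any request whose User-Agent does not look mobile, so a non-mobile scraper's download fails before it starts. This is example logic, not any specific server's configuration.

```python
# A hypothetical server-side check showing why a non-mobile User-Agent can be
# refused by a mobile-only site. Example logic only, not real server code.
def handle_request(user_agent: str) -> int:
    """Return the HTTP status a mobile-only server might send back."""
    mobile_markers = ("Mobile", "Android", "iPhone")
    if not any(marker in user_agent for marker in mobile_markers):
        return 403  # non-mobile request refused; the scrape fails here
    return 200


print(handle_request("Mozilla/5.0 (X11; Linux x86_64)"))           # 403
print(handle_request("Mozilla/5.0 (iPhone; CPU iPhone OS 17_0)"))  # 200
```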
What if my IT services alert me when Contentsquare makes a server request?
Our requests are legitimate and should not impact your servers: the CS Scrapers are limited to 6 requests per second, and the access-granting processes described above, such as whitelisting, help ensure data security.
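As an illustration of that cap, a 6-requests-per-second limit can be enforced with a limiter like the Python sketch below. This is a sketch of the stated limit, not Contentsquare's actual implementation.

```python
# A minimal rate limiter that spaces requests to at most `rate` per second,
# mirroring the 6 requests/second cap described above. Illustrative only.
import time


class RateLimiter:
    """Blocks just long enough that calls to wait() never exceed the rate."""

    def __init__(self, rate: float = 6.0):
        self.min_interval = 1.0 / rate  # ~0.167 s between requests at 6/s
        self.last_request = 0.0

    def wait(self) -> None:
        """Sleep until the next request is allowed, then record it."""
        now = time.monotonic()
        delay = self.min_interval - (now - self.last_request)
        if delay > 0:
            time.sleep(delay)
        self.last_request = time.monotonic()
```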
What if there is a failed request?
It is expected behavior for requests to fail when there is a server error or a related issue. For example, the CS Scrapers may fail to access a resource that requires being logged in (a 403 error), a mobile site may have a non-mobile blocker set up on your side, or your servers may block Contentsquare due to query volume.
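Failures like these are normally treated as expected and skipped rather than retried indefinitely. The Python sketch below shows one plausible way a scraper could classify them; the function name and handling are assumptions, not Contentsquare's code.

```python
# A sketch of treating the failure modes above as expected: the scraper skips
# the resource instead of failing the whole scrape. Illustrative names only.
from urllib.error import HTTPError, URLError
from urllib.request import urlopen


def try_scrape(resource_url: str) -> bytes | None:
    """Attempt a download; return None when the request fails as expected."""
    try:
        return urlopen(resource_url, timeout=10).read()
    except HTTPError as err:
        # e.g. 403 when the resource requires a logged-in session
        print(f"skipped {resource_url}: HTTP {err.code}")
    except URLError as err:
        # e.g. the server refuses or blocks the connection outright
        print(f"skipped {resource_url}: {err.reason}")
    return None
```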