March 11, 2026 | Crawlstack Team

Build a Price Tracker in 10 Minutes with Crawlstack


Price trackers are one of the most useful things you can build for yourself — and one of the most annoying to set up with traditional scraping tools. Headless browsers, proxy services, cron jobs, databases. It adds up fast.

This tutorial builds a working price tracker using Crawlstack, which runs directly in your browser. No server required. Total time: about 10 minutes.


What We're Building

A crawler that:

  • Visits a product page (or a list of product pages)
  • Extracts the product name and current price
  • Stores results in a local dataset with deduplication and versioning
  • Can be re-run on a schedule to track price changes over time

Prerequisites

  • Crawlstack installed (you should see a Crawlstack tab in your browser's DevTools)
  • A product page, or several, whose price you want to track


Step 1: Open the Crawlstack Creator

Navigate to the product page you want to track. Open DevTools (F12 or Cmd+Option+I) and click the Crawlstack tab.

Click New Crawler and give it a name — something like price-tracker.


Step 2: Inspect the Price Element

Before writing any code, use the DevTools Elements panel to find the CSS selector for the price. On most e-commerce sites, the price is in a predictable element.

Right-click the price on the page → Inspect. Note the class or ID. For example:

.product-price
#priceblock_ourprice
[data-testid="price"]

Do the same for the product title.
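Before wiring a selector into your script, it helps to test candidates directly in the DevTools Console. The helper below is a small sketch, not part of Crawlstack's API: it tries several candidate selectors in order and returns the first non-empty text it finds.

```javascript
// Hypothetical helper (illustration only): try candidate selectors in
// order and return the first non-empty, trimmed text, or null.
function pickText(root, selectors) {
  for (const selector of selectors) {
    const el = root.querySelector(selector);
    const text = el?.innerText?.trim();
    if (text) return text;
  }
  return null;
}

// In the Console, pass `document` and your candidates:
// pickText(document, ['.product-price', '#priceblock_ourprice', '[data-testid="price"]']);
```

If this returns null, none of your candidates matched, and you know to go back to the Elements panel before writing any crawler code.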


Step 3: Write the Extraction Script

Back in the Crawlstack Creator, paste this script:

await onLoad();

const title = document.querySelector('h1')?.innerText?.trim();
const price = document.querySelector('.product-price')?.innerText?.trim();

if (!title || !price) {
  console.warn('Could not find title or price — check your selectors');
  return;
}

await runner.publishItems([{
  // key ensures deduplication per URL
  key: location.href,
  // version tracks changes over time — we use the price as the version
  version: price,
  data: {
    title,
    price,
    url: location.href,
    scrapedAt: new Date().toISOString()
  }
}]);

Replace .product-price and h1 with the actual selectors you found in Step 2.


Step 4: Test It

Click Test Run. You should immediately see your item appear in the Items panel on the right:

{
  "title": "Outdoor Tent Pro 3000",
  "price": "$249.99",
  "url": "https://example-store.com/products/outdoor-tent",
  "scrapedAt": "2026-03-13T10:42:00.000Z"
}

If nothing appears, open the Logs tab — it'll tell you if the selector didn't match anything.


Step 5: Enable Change Tracking

The version field is doing important work here. Crawlstack uses it to detect when an item has changed. If you re-run the crawler and the price is the same, the item won't be duplicated — it'll be marked as unchanged. If the price drops to $199.99, a new version of the item is stored.

This means your dataset becomes a price history, not just a snapshot.
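Conceptually, the key/version check works like the sketch below. This is an illustration of the observable behavior, not Crawlstack's actual internals: the store remembers the latest version per key, and an incoming item is classified as new, unchanged, or changed.

```javascript
// Conceptual sketch of key/version deduplication (not Crawlstack's
// real implementation). `stored` maps item key -> latest stored version.
function classifyItem(stored, item) {
  if (!stored.has(item.key)) return 'new';                       // first time this URL is seen
  if (stored.get(item.key) === item.version) return 'unchanged'; // same price, no new version
  return 'changed';                                              // price moved, store a new version
}

const stored = new Map([
  ['https://example-store.com/products/outdoor-tent', '$249.99'],
]);

classifyItem(stored, { key: 'https://example-store.com/products/outdoor-tent', version: '$249.99' }); // 'unchanged'
classifyItem(stored, { key: 'https://example-store.com/products/outdoor-tent', version: '$199.99' }); // 'changed'
```

Because "changed" items are stored as new versions rather than overwriting the old ones, each key accumulates its own timeline of prices.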


Step 6: Track Multiple Products

To track a list of products, you can seed the crawler with multiple URLs. Create a simple task list at the start of your script:

const { taskDef } = await runner.getCurrentTask();

// First run: seed the product URLs
if (!taskDef.ctx?.type) {
  const productUrls = [
    'https://example-store.com/products/tent',
    'https://example-store.com/products/sleeping-bag',
    'https://example-store.com/products/hiking-boots',
  ];

  await runner.addTasks(productUrls.map(href => ({
    href,
    ctx: { type: 'product' }
  })));
  return;
}

// For each product page
await onLoad();

const title = document.querySelector('h1')?.innerText?.trim();
const price = document.querySelector('.product-price')?.innerText?.trim();

if (!title || !price) {
  console.warn('Could not find title or price — check your selectors');
  return;
}

await runner.publishItems([{
  key: location.href,
  version: price,
  data: { title, price, url: location.href, scrapedAt: new Date().toISOString() }
}]);

Step 7: Export or Access via API

Once you've run the crawler, your data is available in three ways:

Export as CSV or JSON — click the Download button in the Crawlstack panel. Perfect for pasting into a spreadsheet.

SQLite — the full dataset lives in a local SQLite file you can query directly.

Public API relay — if you've enabled the HTTP tunnel, you can hit your data programmatically:

GET /crawler/price-tracker/items

This lets you build a simple notification script that polls the API and sends you an alert when a price changes.
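A notification script mostly just needs to diff two snapshots of the dataset. The sketch below makes two assumptions you may need to adjust: that the items endpoint returns an array of { key, version, data } objects, and that the tunnel is reachable at the hypothetical local URL shown in the comment.

```javascript
// Return the items whose version changed between two snapshots of the
// dataset. Items appearing for the first time are not treated as changes.
function findPriceChanges(previous, current) {
  const prevByKey = new Map(previous.map((item) => [item.key, item.version]));
  return current.filter((item) => {
    const prevVersion = prevByKey.get(item.key);
    return prevVersion !== undefined && prevVersion !== item.version;
  });
}

// Hourly polling loop (hypothetical base URL, adjust to your tunnel):
// let previous = [];
// setInterval(async () => {
//   const res = await fetch('http://localhost:8080/crawler/price-tracker/items');
//   const current = await res.json();
//   for (const change of findPriceChanges(previous, current)) {
//     console.log(`Price changed for ${change.data.url}: now ${change.version}`);
//   }
//   previous = current;
// }, 60 * 60 * 1000);
```

Swap the console.log for whatever alert channel you prefer, such as an email or chat webhook.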


What's Next

  • Schedule the crawler to run automatically using Crawlstack's monitoring features
  • Add AI filtering to extract prices from inconsistently structured pages (useful when selectors vary across sites)
  • Extend to scrape an entire category page and automatically discover new products to track
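For the last idea, discovering products from a category page, the runner.addTasks pattern from Step 6 carries over directly. Here is a minimal sketch, where the a.product-card selector is a hypothetical placeholder you would replace with the real link selector for your store:

```javascript
// Collect product URLs from a category page. The selector is a
// hypothetical example; inspect the category page to find the real one.
function collectProductLinks(root, selector) {
  return Array.from(root.querySelectorAll(selector))
    .map((a) => a.href)
    .filter(Boolean); // drop anchors without an href
}

// Inside a Crawlstack script, feed the discovered links back as tasks:
// const links = collectProductLinks(document, 'a.product-card');
// await runner.addTasks(links.map((href) => ({ href, ctx: { type: 'product' } })));
```

Each discovered link then flows through the same product-page branch you wrote in Step 6, so new products are tracked automatically.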

The full script from this tutorial is available as a template on GitHub — you can deploy it to your own Crawlstack instance in one click.


Crawlstack is a self-hosted scraping infrastructure that runs inside your browser or Docker. Get started for free.
