The Practical Developer

A constructive and inclusive social network for software developers.

Why Fake PostgreSQL Test Data Misses Real Bugs

2026-03-18 13:00:00

Most teams do not have a testing problem. They have a test data realism problem.

Locally, the app runs against [email protected], User 1, and a seed script nobody wants to maintain. In CI, fixtures slowly drift away from reality. Then the bugs show up after deploy:

  • a customer name has an apostrophe or accent
  • a field is NULL where your code assumed a string
  • an account has 47 related records instead of 2
  • a query that worked on 20 rows falls over on 20,000
  • shared staging data gets mutated by three people at once

If that sounds familiar, the answer usually is not "write more tests." It is "stop testing against fake data."

What teams actually want

What most teams actually want is not a full copy of production, not a giant pg_dump, and not another 400-line seed script. They want production-like data:

  • realistic enough to expose bugs
  • small enough to restore locally and in CI
  • safe enough to use outside production
  • reproducible enough that every developer can get the same result

That is the gap most dev workflows never solve cleanly.

Why the usual approaches break down

Seed scripts rot

Seed scripts are fine when your app has five tables. They get painful when your schema grows:

  • every migration breaks something
  • relationships get harder to maintain
  • the data gets less realistic over time
  • nobody wants to own the script anymore

You end up with a setup that is reproducible, but not especially useful. We wrote a deeper comparison of seed scripts vs production snapshots.

pg_dump is great for backups, not dev environments

pg_dump solves a different problem. It copies everything:

  • all rows
  • all tables
  • all PII
  • all the size and baggage of production

That is useful for backup and recovery. It is usually overkill for local development and CI.

For dev workflows, full dumps create new problems:

  • slow restores
  • bloated local databases
  • longer CI jobs
  • sensitive data showing up in places it should not

Most of the time, you do not need the entire database. You need the right slice of it. We wrote a full comparison of pg_dump vs Basecut if you want the details.

The better pattern: subset, anonymize, restore

The workflow that makes sense looks more like this:

  1. Start from one or more root tables.
  2. Filter to the rows you actually care about.
  3. Follow foreign keys to pull in the connected data.
  4. Anonymize sensitive fields inline.
  5. Save the snapshot.
  6. Restore it anywhere you need it.

That gives you a connected, realistic, privacy-safe subset of production instead of a raw copy. This is the workflow we built Basecut around for PostgreSQL: FK-aware extraction, automatic PII anonymization, and one-command restores for local dev, CI, and debugging.

The reason this approach works is simple: it treats test data as a repeatable snapshot problem, not a hand-crafted fixture problem.

What "production-like" should actually mean

The phrase gets used loosely. In practice, production-like data should have four properties.

1. Realistic structure

It should reflect the real relationships, optional fields, and edge cases in your schema.

2. Referential integrity

If you copy one row from orders, you usually also need related rows from users, line_items, shipments, and whatever else your app expects to exist together.

3. Privacy safety

Emails, names, phone numbers, addresses, and other sensitive fields need to be anonymized before the data lands on laptops, CI runners, or logs.

4. Repeatability

Developers need a predictable way to recreate the same kind of dataset without asking someone to send them a dump.

If any one of those is missing, the workflow gets shaky fast.

Why FK-aware extraction matters

This is the part many DIY approaches get wrong. Randomly sampling rows from each table sounds easy until you restore them. Then you get:

  • orders pointing to missing users
  • line items pointing to missing products
  • child rows without their parents
  • failed restores or strange app behavior after restore

A useful snapshot has to behave like a self-contained mini-version of production.

That is why FK-aware extraction matters. In Basecut, snapshots are built by following foreign keys in both directions and collecting a connected subgraph of your data. The result is something you can restore into an empty database without ending up with broken references.

That matters more than people think. It is the difference between "the data loaded" and "the app actually behaves like it does in production."
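To make the traversal concrete, here is a minimal sketch of the idea: start from root rows and follow foreign keys in both directions until the subgraph is closed. The in-memory tables and FK map below are hypothetical stand-ins, not Basecut's implementation.

```javascript
// Sketch of FK-aware extraction over in-memory tables (all names hypothetical).
// fks maps each child table to its { column, parent } foreign-key edges.
function extractSubset(tables, fks, rootTable, rootIds) {
  // picked[table] = Set of primary keys included in the snapshot
  const picked = Object.fromEntries(Object.keys(tables).map(t => [t, new Set()]));
  const queue = rootIds.map(id => [rootTable, id]);

  while (queue.length) {
    const [table, id] = queue.shift();
    if (picked[table].has(id)) continue;
    picked[table].add(id);
    const row = tables[table].find(r => r.id === id);

    // Follow parents: every row this row points at must also be included...
    for (const { column, parent } of fks[table] || []) {
      if (row[column] != null) queue.push([parent, row[column]]);
    }
    // ...and follow children: rows in other tables pointing back at this one.
    for (const [child, edges] of Object.entries(fks)) {
      for (const { column, parent } of edges) {
        if (parent !== table) continue;
        for (const r of tables[child]) {
          if (r[column] === id) queue.push([child, r.id]);
        }
      }
    }
  }
  return picked;
}
```

Starting from one user pulls in that user's orders and their line items, and nothing belonging to other users, which is exactly the "connected subgraph" property described above.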

What the workflow looks like in practice

The nice part is that this can stay simple. Basecut starts with a small YAML config that tells it:

  • where to start
  • how far to traverse relationships
  • how much data to include
  • how anonymization should work

Example:

version: '1'
name: 'dev-snapshot'

from:
  - table: users
    where: 'created_at > :since'
    params:
      since: '2026-01-01'

traverse:
  parents: 5
  children: 10

limits:
  rows:
    per_table: 1000
    total: 50000

anonymize:
  mode: auto

Then the workflow becomes:

basecut snapshot create \
  --config basecut.yml \
  --name "dev-snapshot" \
  --source "$DATABASE_URL"

basecut snapshot restore dev-snapshot:latest \
  --target "$LOCAL_DATABASE_URL"

That is the whole loop:

  • inspect schema
  • create snapshot
  • restore anywhere

In practice, most teams can get from install to first snapshot in a few minutes. You can try this workflow with Basecut — the CLI is free for small teams.

The privacy part is not optional

One more requirement: privacy. If you are moving production-like data into dev and CI, PII handling cannot be a manual cleanup step.

At minimum, your workflow should:

  • detect common PII automatically
  • allow explicit masking rules
  • preserve join integrity where needed
  • anonymize during extraction, not afterward

Basecut handles this with automatic PII detection plus 30+ anonymization strategies. It also supports deterministic masking, which matters when the same source value needs to map to the same fake value across related tables.

If [email protected] turns into one fake email in users and a different fake email somewhere else, your data stops behaving like the real system. That is exactly the sort of detail that makes fake dev data feel fine right up until it is not.

Why this works well in CI too

This pattern is just as useful in CI as it is locally. Instead of checking brittle fixtures into the repo, you restore a realistic snapshot before the test suite runs.

For example:

name: Test
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test_db
        ports:
          - 5432:5432

    steps:
      - uses: actions/checkout@v4

      - name: Install Basecut CLI
        run: |
          curl -fsSL https://basecut.dev/install.sh | sh
          echo "$HOME/.local/bin" >> $GITHUB_PATH

      - name: Restore snapshot
        env:
          BASECUT_API_KEY: ${{ secrets.BASECUT_API_KEY }}
        run: |
          basecut snapshot restore test-data:latest \
            --target "postgresql://postgres:postgres@localhost:5432/test_db"

      - name: Run tests
        run: npm test

That gives your pipeline real shapes, real relationships, and realistic edge cases without restoring an entire production dump on every run. It also keeps snapshots small enough that restores stay fast. We go deeper in our CI/CD test data guide.

When this is worth doing

You probably want production-like snapshots if:

  • your app has more than a handful of tables
  • your bugs are often data-dependent
  • you need realistic data in local dev
  • your CI pipeline should test against something closer to reality
  • you handle meaningful PII
  • your team is tired of maintaining fixtures or seed scripts

You might not need it yet if:

  • the product is brand new
  • you do not have real production data yet
  • the schema is tiny
  • completely fictional demo data is the goal

This is not about replacing every fixture in your test suite. Unit tests still benefit from tiny, explicit test data. The value here is in integration tests, local development, CI, onboarding, and debugging where data shape matters.

A practical rollout

If you want to adopt this without overcomplicating it, start small.

  1. Pick one painful workflow.
    Usually local dev onboarding, shared staging, or CI integration tests.

  2. Define a small snapshot.
    Keep the restore fast. Start with a few root tables and sensible row limits.

  3. Turn on anonymization from day one.
    Do not leave this for later.

  4. Restore it somewhere useful immediately.
    Local dev DB or CI test DB is usually enough to prove the value.

  5. Expand gradually.
    Add more tables, better filters, and refresh automation once the loop is working.

That gets you to useful production-like data quickly without turning the whole thing into a platform project.

Final thought

Most teams are not under-testing. They are testing against data that makes them feel safe. That is not the same thing.

If your local environments and CI pipelines run against tiny, stale, or fake data, they will keep giving you false confidence. Production-like snapshots are one of the highest-leverage ways to make development and testing feel closer to the real system without dragging raw production data everywhere.

If you want to try this with PostgreSQL, Basecut is free for small teams. Or dig into the quickstart guide and how FK-aware extraction works.

Why I Built a Browser-Based Image Tool That Never Uploads Your Photos

2026-03-18 12:53:56

Every time you upload an image to an online tool, you're trusting that server with your data. Your photos contain GPS coordinates, camera serial numbers, timestamps, and sometimes even facial recognition markers — all embedded as EXIF metadata that most people don't even know exists.

I wanted to build something different. A tool that processes images entirely in the browser, so your photos never leave your device.

Here's what I learned building PixelFresh, and the technical decisions behind it.

The Hidden Data in Your Photos

Before diving into the technical side, let's look at what's actually in your photos. A typical JPEG from a smartphone contains:

| EXIF Field | Privacy Risk |
|---|---|
| GPS Coordinates | Exact location where photo was taken |
| DateTime | When you were at that location |
| Camera Make/Model | Device fingerprinting |
| Serial Number | Unique device identifier |
| Thumbnail | May contain original crop before editing |
| Software | What apps you use |

A single photo can reveal where you live, work, and travel — and most image optimization tools upload these to their servers before processing.

The Canvas API Approach

The key insight is that the HTML5 Canvas API naturally strips EXIF metadata when you draw an image onto it. Here's the simplest version:

function stripExif(file) {
  return new Promise((resolve) => {
    const img = new Image();
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');

    img.onload = () => {
      canvas.width = img.width;
      canvas.height = img.height;
      ctx.drawImage(img, 0, 0);
      URL.revokeObjectURL(img.src); // free the object URL once drawn

      canvas.toBlob((blob) => {
        resolve(blob); // Clean image — no EXIF data
      }, 'image/jpeg', 0.92);
    };

    img.src = URL.createObjectURL(file);
  });
}

When you call canvas.toBlob(), the output contains only pixel data. No EXIF, no GPS, no camera info. It's the simplest and most reliable way to strip metadata without parsing binary EXIF structures.

Beyond Metadata: Creating Unique Images

Stripping EXIF is step one. But what if you need each processed image to have a unique file hash? This matters for:

  • E-commerce sellers uploading to multiple marketplaces
  • Content creators distributing across platforms
  • Anyone who needs distinct files from the same source

Simply re-encoding a JPEG at the same quality produces the same file. To create genuinely unique files, I implemented what I call "AI pixel reconstruction":

function pixelReconstruct(ctx, width, height) {
  const imageData = ctx.getImageData(0, 0, width, height);
  const data = imageData.data;

  // 1. Subtle gamma adjustment (random per invocation)
  const gamma = 0.98 + Math.random() * 0.04; // 0.98–1.02

  // 2. Uniform micro noise offset (the same small shift applied to all channels)
  const mix = (Math.random() - 0.5) * 2; // range -1..1

  for (let i = 0; i < data.length; i += 4) {
    // Apply gamma
    data[i]     = Math.pow(data[i] / 255, gamma) * 255;     // R
    data[i + 1] = Math.pow(data[i + 1] / 255, gamma) * 255; // G
    data[i + 2] = Math.pow(data[i + 2] / 255, gamma) * 255; // B

    // Add micro noise (imperceptible, ±1 value)
    data[i]     = Math.max(0, Math.min(255, data[i] + mix));
    data[i + 1] = Math.max(0, Math.min(255, data[i + 1] + mix));
    data[i + 2] = Math.max(0, Math.min(255, data[i + 2] + mix));
  }

  ctx.putImageData(imageData, 0, 0);
}

Each invocation draws fresh random values, producing images that:

  • Look identical to the human eye
  • Have completely different file hashes (MD5/SHA)
  • Have different perceptual hashes (pHash/dHash)
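As a rough illustration of the perceptual-hash side of that claim, here is a simple difference hash (dHash): each bit records whether a pixel is brighter than its right-hand neighbor, and a Hamming distance compares two hashes. This is a generic sketch over a grayscale matrix, not PixelFresh's code.

```javascript
// dHash over a 2D grayscale array: h rows of (w+1) values -> h*w bits as hex.
function dHash(gray) {
  const bits = [];
  for (const row of gray) {
    for (let x = 0; x < row.length - 1; x++) {
      bits.push(row[x] > row[x + 1] ? 1 : 0); // brighter-than-neighbor bit
    }
  }
  // Pack the bit array into a hex string, 4 bits per hex digit
  let hex = '';
  for (let i = 0; i < bits.length; i += 4) {
    hex += parseInt(bits.slice(i, i + 4).join(''), 2).toString(16);
  }
  return hex;
}

// Hamming distance between two equal-length hex hashes
function hamming(a, b) {
  let d = 0;
  for (let i = 0; i < a.length; i++) {
    let x = parseInt(a[i], 16) ^ parseInt(b[i], 16);
    while (x) { d += x & 1; x >>= 1; }
  }
  return d;
}
```

In practice dHash is computed on an image downscaled to something like 9×8 pixels, which is also why it is fairly robust to tiny per-pixel perturbations.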

Video Frame Extraction with Scene Detection

Another feature I built was extracting key frames from video. Instead of capturing every frame (which would give you thousands of near-identical images), I implemented scene change detection:

async function detectScenes(video, threshold = 30) {
  const scenes = [];
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');

  // Downsample for performance (160x90)
  canvas.width = 160;
  canvas.height = 90;

  let prevFrame = null;
  const duration = video.duration;
  const interval = 0.5; // Check every 0.5 seconds

  for (let sec = 0; sec < duration; sec += interval) {
    // Attach the seek handler before setting currentTime so the event is not missed
    await new Promise(r => {
      video.onseeked = r;
      video.currentTime = sec;
    });

    ctx.drawImage(video, 0, 0, 160, 90);
    const frame = ctx.getImageData(0, 0, 160, 90).data;

    if (prevFrame) {
      let diff = 0;
      for (let i = 0; i < frame.length; i += 4) {
        diff += Math.abs(frame[i] - prevFrame[i]);     // R
        diff += Math.abs(frame[i+1] - prevFrame[i+1]); // G
        diff += Math.abs(frame[i+2] - prevFrame[i+2]); // B
      }

      const avgDiff = diff / (160 * 90 * 3);
      if (avgDiff > threshold) {
        scenes.push(sec); // Scene change detected
      }
    }

    prevFrame = new Uint8Array(frame);
  }

  return scenes;
}

The algorithm compares consecutive downscaled frames pixel-by-pixel. When the average difference exceeds a threshold, it marks a scene change. Then we capture at full resolution (1080p/4K) at those timestamps.

Performance Considerations

Processing images client-side has its challenges:

Memory management — Large images (4000×3000) consume significant memory. I process one image at a time and explicitly release ObjectURLs:

URL.revokeObjectURL(objectUrl); // Free memory after use

Web Workers — For batch processing, offloading pixel manipulation to a Web Worker prevents UI freezing. The main thread handles only the Canvas API calls (which must run on the main thread due to DOM access).

JPEG quality — I settled on 92% quality as the sweet spot. Below 90%, compression artifacts become noticeable. Above 95%, file sizes balloon with no perceptible improvement.

The Architecture

The entire app is a single HTML file with inline JavaScript — no build framework, no npm dependencies (at runtime), no backend:

index.html (~1500 lines)
├── HTML structure
├── Tailwind CSS (CDN)
├── i18n system (4 languages)
├── Image processing pipeline
├── Video scene detection
└── ZIP download (JSZip CDN)

This "zero-dependency" approach means:

  • No server costs — Hosted as static files on Vercel
  • Instant loading — No framework hydration delay
  • Complete privacy — Impossible to leak data since there's no server to leak to
  • Offline capable — Works without internet after initial load

Lessons Learned

  1. Canvas toBlob() is your friend for metadata stripping. Don't try to parse and remove EXIF fields manually — just redraw the image.

  2. Random seeds matter for unique file generation. Using Math.random() for gamma and noise values ensures every output is genuinely different.

  3. Downsample for analysis, full-res for output. Scene detection at 160×90 is fast enough for real-time processing, but final captures should use the original video resolution.

  4. Single-file architecture works for tools up to ~2000 lines. Beyond that, consider splitting — but don't split prematurely.

  5. Privacy as a feature resonates strongly with users. "Your photos never leave your device" is a concrete, verifiable claim that builds trust.

Try It

PixelFresh is free, requires no sign-up, and works in any modern browser. The source is a single HTML file — you can literally view-source to verify that no data is sent anywhere.

If you're building browser-based tools, I'd love to hear about your approach to client-side processing. Drop a comment below!

This article was originally published on the PixelFresh blog.

🛡️ Protecting Your Code and Methodology in the Age of AI: Introducing RCF Protocol v1.2.7

2026-03-18 12:50:45

As developers, we’ve always lived by the mantra of "Open Source." But in 2026, the landscape has shifted. We're no longer just sharing code with other humans; we're providing high-quality training data for AI models and automated extraction scripts.

Standard licenses like MIT or GPL were built for a different era. They don't distinguish between a human reading your code to learn and an AI "digesting" your unique correlation methodologies to replicate them at scale.

That’s why I built RCF (Restricted Correlation Framework).

🚀 What is RCF?

RCF is an author-defined licensing protocol that creates a clear boundary between Visibility and Usage Rights.

The core philosophy is simple: Visibility ≠ Rights.

  • ALLOWED: Reading, studying, and manual auditing.
  • RESTRICTED: AI/ML training, automated extraction, and methodology replication.

It’s the first license designed to be AI-resistant while remaining Source-Visible.

🆕 What’s New in v1.2.7: The Audit Release

We’ve just released RCF v1.2.7, and it brings a game-changing feature for protecting your IP: RCF-Audit.

🛡️ 1. Cryptographic Compliance (RCF-Audit)

You can now generate an immutable, cryptographically signed report of your project's compliance.
The rcf-cli audit tool creates a SHA-256 snapshot of all your protected assets, giving you a verifiable proof of ownership and protection status.

📦 2. Full SDK Parity

Whether you're in the Node.js or Python ecosystem, RCF is ready. We’ve synchronized version 1.2.7 across the Python (rcf-cli) and Node.js (rcf-protocol) packages.

🤖 3. Hardened AI Protections

Improved semantic markers for granular protection:

  • [RCF:PUBLIC] — Architecture and concepts.
  • [RCF:PROTECTED] — Core methodologies (Visible, not replicable).
  • [RCF:RESTRICTED] — Sensitive implementation details.
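The CLI's internals are not public here, but a marker scanner is easy to picture. A hypothetical sketch that tags each line of a source file with the most recent RCF marker in effect:

```javascript
// Hypothetical: classify each line of a file by the RCF marker currently
// in effect. Lines before any marker default to PUBLIC.
function classifyLines(source) {
  const markerRe = /\[RCF:(PUBLIC|PROTECTED|RESTRICTED)\]/;
  let current = 'PUBLIC';
  return source.split('\n').map(line => {
    const m = line.match(markerRe);
    if (m) current = m[1];
    return { line, level: current };
  });
}
```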

🛠️ How to get started in 60 seconds

Want to protect your next project?

Step 1: Install the CLI

# For Python
pip install rcf-cli --upgrade

# For Node.js (Global)
npm install -g rcf-protocol

Step 2: Initialize

rcf-cli init --project "My Secret Sauce" --author "Alice Developer"

This generates your NOTICE.md and .rcfignore.

Step 3: Run an Audit

rcf-cli audit . --license-key YOUR_KEY

Get your RCF-AUDIT-REPORT.json and prove your methodology is protected.

💎 Join the Movement

We believe that specialized logic and unique correlation methodologies deserve protection without going into stealth mode. RCF lets you show the world what you've built without giving away how you did it to a bot.

Check out the full specification and grab a license at: 👉 rcf.aliyev.site

Let's write code that belongs to us.

Building a Competitor Monitoring Tool in 30 Lines of Node.js

2026-03-18 12:50:29

I wanted to know when my main competitor changed their pricing page. Not in a shady way — I just wanted to stop finding out weeks after the fact when someone mentioned it in a Slack thread.

The naive solution is bookmarking and checking manually. I did this for about two weeks before admitting it wasn't going to stick. What I actually wanted was to set it once and get notified when something changed.

Here's the script I wrote. It's about 30 lines.

The approach

SnapAPI's /v1/analyze endpoint returns structured data about any URL: the page type, the primary CTA, detected technologies, and more — from a real Chromium browser session. So instead of scraping HTML and parsing it, I can just ask "what's the CTA on this page right now?" and diff that against yesterday's answer.

No parsing. No XPath selectors that break when the site redesigns. Just a clean JSON field.

The script

// competitor-monitor.js
// Polls a URL daily, alerts when the primary CTA or tech stack changes.
// Requirements: Node.js 18+ (built-in fetch), SNAPAPI_KEY env var.

const fs = require('fs');

const API_KEY    = process.env.SNAPAPI_KEY;
// Named TARGET_URL rather than URL to avoid shadowing Node's global URL class
const TARGET_URL = process.env.MONITOR_URL || 'https://yourcompetitor.com';
const STATE      = './monitor-state.json';

if (!API_KEY) {
  console.error('Set SNAPAPI_KEY env var. Free key at https://snapapi.tech');
  process.exit(1);
}

async function analyze(url) {
  const res = await fetch(
    `https://api.snapapi.tech/v1/analyze?url=${encodeURIComponent(url)}`,
    { headers: { 'x-api-key': API_KEY } }
  );
  if (!res.ok) throw new Error(`SnapAPI ${res.status}: ${await res.text()}`);
  return res.json();
}

async function main() {
  const current = await analyze(TARGET_URL);

  const snapshot = {
    cta:          current.primary_cta,
    technologies: current.technologies.sort().join(','),
    page_type:    current.page_type,
    checked_at:   new Date().toISOString(),
  };

  // Load previous state
  const prev = fs.existsSync(STATE)
    ? JSON.parse(fs.readFileSync(STATE, 'utf8'))
    : null;

  if (prev) {
    if (prev.cta !== snapshot.cta) {
      console.log(`[CHANGE] CTA: "${prev.cta}" → "${snapshot.cta}"`);
    }
    if (prev.technologies !== snapshot.technologies) {
      console.log(`[CHANGE] Tech stack changed`);
      console.log(`  Before: ${prev.technologies}`);
      console.log(`  After:  ${snapshot.technologies}`);
    }
    if (prev.page_type !== snapshot.page_type) {
      console.log(`[CHANGE] Page type: "${prev.page_type}" → "${snapshot.page_type}"`);
    }
    if (prev.cta === snapshot.cta &&
        prev.technologies === snapshot.technologies &&
        prev.page_type === snapshot.page_type) {
      console.log(`[OK] No changes detected. CTA: "${snapshot.cta}"`);
    }
  } else {
    console.log(`[INIT] Baseline set. CTA: "${snapshot.cta}"`);
  }

  fs.writeFileSync(STATE, JSON.stringify(snapshot, null, 2));
}

main().catch(err => { console.error(err.message); process.exit(1); });

Run it:

export SNAPAPI_KEY=your_key_here
export MONITOR_URL=https://competitor.com/pricing
node competitor-monitor.js

Set up a cron to run it daily:

0 9 * * * SNAPAPI_KEY=your_key MONITOR_URL=https://competitor.com/pricing node /path/to/competitor-monitor.js >> /var/log/monitor.log 2>&1

That's it. Every morning at 9am it checks the page, diffs against yesterday's state, and logs any changes.

What it actually catches

This has been running against three competitor URLs for about six weeks. Things it's caught so far:

  • A competitor changed their hero CTA from "Start free trial" to "Get started free" — subtle, but a signal they were A/B testing the copy
  • One competitor's tech stack changed — they dropped an analytics tool I didn't recognize, suggesting a product change
  • A pricing page shifted from "product landing page" to "checkout flow" — they had quietly removed their pricing and replaced it with a direct checkout redirect

None of these would have made it into my RSS feed or Twitter alerts.

Extending it

Slack alerts: Replace the console.log calls with a fetch to a Slack webhook URL:

async function alert(message) {
  await fetch(process.env.SLACK_WEBHOOK, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: message }),
  });
}

Multiple URLs: Wrap the main() logic in a loop over an array of URLs. Each call uses one API credit. With the free tier (100/month), you can monitor 3 URLs daily with room to spare. The $9/month Starter plan covers 33 URLs/day.

Store history: Instead of overwriting monitor-state.json, append to a history.jsonl file (one JSON object per line). Gives you a full audit trail to diff across time.

Visual diff: Add ?screenshot=true to the analyze URL and save each screenshot to disk. Now you have a visual history alongside the structural diff — useful for catching layout or design changes that don't show up in the text fields.

Free API key

100 API calls/month, no credit card, active in 30 seconds.

snapapi.tech

Automate Screenshots and Metadata Checks with SnapAPI + Zapier (No Code Required)

2026-03-18 12:50:05

SnapAPI + Zapier Integration Guide

Automate screenshots, metadata checks, and OG audits — no code required.

What You Can Automate

Zapier connects apps with "if this, then that" workflows called Zaps. Connect it to SnapAPI and you can:

  • Screenshot every new form submission URL — someone fills out a lead form with their website, Zapier screenshots it automatically and saves it to Google Drive
  • Audit OG tags on every new blog post — your CMS publishes a post, Zapier checks the metadata and alerts you in Slack if og:image is missing
  • Monitor competitor pages — every Monday, take a fresh screenshot of a list of competitor URLs and save them to a Notion database

None of these require writing code. Zapier's built-in Webhooks action handles the SnapAPI call.

What You Need

  • A Zapier account — the free tier (100 tasks/month) is enough to get started
  • A SnapAPI API key — free at snapapi.tech, 100 calls/month, no credit card

Step-by-Step: Screenshot a URL with Zapier

This example Zap screenshots any new Google Form response URL and saves the image to Google Drive. Adapt the trigger to whatever fits your workflow.

Step 1 — Create a new Zap

Go to zapier.com/app/zaps and click + Create Zap.

Step 2 — Set the Trigger

Choose your trigger app and event. For this example:

  • App: Google Forms
  • Trigger Event: New Response in Spreadsheet

Connect your Google account and select the spreadsheet. In the "Set up trigger" step, map which column contains the URL you want to screenshot.

Step 3 — Add a Webhooks by Zapier Action

Click + to add an action step.

  • App: Webhooks by Zapier
  • Action Event: GET

Click Continue.

Step 4 — Configure the Webhooks Action

In the Set up action panel:

  • URL:
  https://api.snapapi.tech/v1/screenshot
  • Query String Params — add two rows:
    | Key | Value |
    |---|---|
    | url | {{your URL field from the trigger}} |
    | format | png |

  • Headers — add one row:
    | Key | Value |
    |---|---|
    | x-api-key | YOUR_SNAPAPI_KEY |

  • Response Type: File

Step 5 — Test the Step

Click Test step. Zapier will make a live request to SnapAPI. If successful, you'll see a file object in the response — that's your screenshot.

Step 6 — Add a Final Action (Save or Send)

Add another action after the Webhooks step:

  • Save to Google Drive: App = Google Drive, Action Event = Upload File, File = the Webhooks response file
  • Or send via Slack: App = Slack, Action Event = Send File, File = the Webhooks response

Step 7 — Turn on the Zap

Click Publish. The Zap runs automatically from now on.

Alternative: Code by Zapier (for metadata)

If you need the JSON metadata (og:title, og:image, etc.) rather than the image file, use Code by Zapier instead of Webhooks:

  • App: Code by Zapier
  • Action Event: Run Javascript

Paste this code, substituting your API key:

const url = inputData.pageUrl; // mapped from your trigger
const apiKey = 'YOUR_SNAPAPI_KEY';

const res = await fetch(
  `https://api.snapapi.tech/v1/metadata?url=${encodeURIComponent(url)}`,
  { headers: { 'x-api-key': apiKey } }
);
const data = await res.json();

return {
  title:       data.og_title || data.title,
  description: data.og_description || data.description,
  og_image:    data.og_image,
  missing_og:  !data.og_image ? 'YES' : 'NO'
};

The output fields (title, description, og_image, missing_og) become available in downstream steps — pipe them into Slack, Notion, or a Google Sheet.

Three Complete Zap Ideas

Zap 1 — Screenshot new leads automatically

  • Trigger: Google Sheets → New Row (lead's website URL in column B)
  • Action: Webhooks by Zapier → GET https://api.snapapi.tech/v1/screenshot?url={{Column B}}&format=png with x-api-key header
  • Action: Google Drive → Upload File (save screenshot, named by lead name)
  • Result: Every new lead in your CRM gets a screenshot of their site saved to Drive automatically

Zap 2 — OG tag alert on new blog posts

  • Trigger: RSS by Zapier → New Item in Feed (your blog RSS URL)
  • Action: Code by Zapier → run the metadata snippet above with inputData.pageUrl = RSS Item URL
  • Action (conditional): Filter by Zapier → Only continue if missing_og equals YES
  • Action: Slack → Send Message ("⚠️ Missing og:image on: {{RSS Item URL}}")
  • Result: Slack alert every time a post goes live without an OG image

Zap 3 — Weekly competitor screenshot archive

  • Trigger: Schedule by Zapier → Every Monday at 9am
  • Action: Webhooks by Zapier → GET screenshot for competitor URL #1
  • Action: Google Drive → Upload File
  • (duplicate Actions for each competitor URL)
  • Result: A weekly visual archive of competitor homepages in a dated Google Drive folder

Get Your Free SnapAPI Key

100 API calls/month, no credit card, active in 30 seconds.

snapapi.tech

I Built an AI SEO Content Generator — Generate 2000-Word Articles in 60 Seconds

2026-03-18 12:49:41

Content marketers and bloggers spend hours writing SEO-optimized articles. I built WriteKit to cut that down to 60 seconds.

What WriteKit Does

You enter a keyword and select your preferences:

  • Article type: Blog post, product review, how-to guide, or listicle
  • Language: English, Chinese, Japanese, or Korean
  • Word count: 1,500 to 3,000 words
  • Tone: Professional, casual, academic, or friendly

Hit "Generate" and get a complete, SEO-optimized article with:

  • SEO title with your target keyword
  • Meta description (150-160 characters)
  • Structured H2/H3 headings
  • Natural keyword density (3-5 mentions)
  • Internal linking suggestions
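Two of those constraints are mechanically checkable. A hypothetical validator for the meta description length and keyword mention count (the function and field names are illustrative, not WriteKit's API):

```javascript
// Hypothetical checks matching the stated targets:
// meta description 150-160 characters, keyword mentioned 3-5 times.
function validateSeo(article, keyword) {
  const metaOk =
    article.metaDescription.length >= 150 &&
    article.metaDescription.length <= 160;
  // Count case-insensitive occurrences of the keyword in the body
  const mentions = article.body
    .toLowerCase()
    .split(keyword.toLowerCase()).length - 1;
  return { metaOk, mentions, keywordOk: mentions >= 3 && mentions <= 5 };
}
```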

The Tech Stack

| Layer | Technology |
|---|---|
| Framework | Next.js 14 (App Router) |
| AI Engine | OpenAI GPT-4o-mini |
| Auth | NextAuth.js (GitHub + Google) |
| Database | Supabase (Postgres) |
| Payments | LemonSqueezy |
| Deployment | Vercel |

How the AI Generation Works

The core is a carefully crafted prompt that instructs GPT-4o-mini to:

  1. Research the topic thoroughly
  2. Structure the content with proper heading hierarchy
  3. Include the target keyword naturally
  4. Write in an engaging, human-like tone
  5. Add internal linking anchor text suggestions

The response streams in real-time using ReadableStream, so you see the article being written character by character — like watching a fast typist.
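Consuming such a stream on the client looks roughly like this: grab a reader from the ReadableStream, decode each chunk, and hand it to the UI as it arrives. A generic sketch, not WriteKit's actual code; with fetch(), the stream would be `res.body`.

```javascript
// Read a ReadableStream of bytes to completion, reporting each decoded chunk.
async function readAll(stream, onChunk) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value, { stream: true });
    text += chunk;
    onChunk(chunk); // e.g. append to the editor as it arrives
  }
  return text;
}
```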

Pricing

  • Free: 1 article/day, 1,500 words max, English only
  • Pro: $19/month — Unlimited articles, 3,000 words, all languages
  • Business: $49/month — Everything + API access + bulk generation

Try It

Check out the live demo: WriteKit

What content creation pain points do you face? Would love to hear what features matter most to you.