
Who's hiring — December 2025

2025-12-02 23:27:00

Product engineers, Developer advocates, or Community builders?

If you're looking for a new opportunity in the dev tools space, this post is for you.

20+ dev-first companies hiring in December 2025

Below are 20+ open roles in dev-first companies.

Featured opportunities

More open roles

From previous editions

Wrapping up

That's a wrap! If this helped, please add some ❤️🦄🤯🙌🔥

Every Sunday, I hand-pick open roles in the dev tools space and post them on Twitter / X and LinkedIn.

Think it. Build it. Ship it.

v0.dev — Your collaborative AI assistant to design, iterate, and scale full-stack applications for the web.

Start building for free

Who else is hiring?

Is your company hiring? Please let me know! Reply here or send me a DM, and I'll make sure to add it to the next edition.

See you next month — keep it up! 👋

Converting Figma Design To Next.js Code In Minutes

2025-12-02 23:24:03

In this guide, you will learn how to convert Figma designs into production-ready Next.js code in minutes using Kombai, a specialized frontend AI agent that's about to become your new best friend.

Before we jump in, here is what we will cover:

  • Understanding what Kombai AI agent is

  • Configuring your Tech stack

  • Generating and reviewing a code plan

  • Generating, running and previewing code

  • Analyzing output results

Here is a preview of the final results.

Understanding What Kombai AI Agent Is

Kombai is a highly specialized AI agent built for frontend development, designed to automate the conversion of Figma designs into clean, production-ready code.

It is a domain-specific agent, meaning its AI models are fine-tuned for the nuances of UI (User Interface) creation, making it more effective at this task than general-purpose coding assistants.

Here is a breakdown of Kombai's key capabilities:

  • Multi-input support: Works with Figma designs, text instructions, images, or existing code

  • Tech stack aware: Supports 30+ frontend libraries including React, Next.js, Material UI, Tailwind, and more

  • Smart planning: Generates editable development plans before complex tasks using plan mode.

  • Codebase integration: Scans and reuses existing components and theme variables in your codebase.

  • Auto-fixing: Automatically fixes TypeScript and linting errors after generating code from Figma designs or images.

  • Sandbox preview: Run and test generated code in a local environment

  • Three modes: Plan Mode for code plan generation, Code Mode for code generation, and Ask Mode for codebase understanding.

  • Preview browser: Kombai browser lets you preview your app’s local deployment in the browser with built-in listeners so you can iterate fast by sending elements and errors back to Kombai as context.

You can learn more about Kombai AI agent here on Kombai docs.


Prerequisites

To fully understand this tutorial, you need to have a basic understanding of Next.js.

Step 1: Setting Up Your Project and the Kombai Agent

In this section, you will learn how to set up your Next.js project and Kombai AI agent in your IDE (VS Code, Cursor, WindSurf, etc).

Let’s get started.

First, open your Next.js project in your IDE or set up one using the command below.

npx create-next-app@latest kombai-nextjs-app --yes

Then install the Kombai AI extension from the marketplace, as shown below.


Once the Kombai extension is installed, click the Sign In button in Kombai. You will then be redirected to the Kombai website to sign up or log in to your Kombai account.

After signing in, you will be redirected back to the IDE.

Step 2: Scan and Configure your tech stack

Once you have set up your project and the Kombai extension in your IDE, configure your tech stack.

The Kombai agent supports a wide range of tech stacks, which lets you generate code that aligns with your workspace's requirements.

In this case, our tech stack will be Next.js and TypeScript. To configure your tech stack, follow the steps shown below and then save the configuration.

Step 3: Define Code Generation Rules

After scanning and configuring your tech stack, set persistent instructions for Kombai using rules and AGENTS.md files. The rules give Kombai project-level instructions that enforce specific coding standards, architectural patterns, and output formats.

To set the rules or instructions, create an AGENTS.md file in your project and add the project rules or instructions to it.

You can generate the project instructions by first taking screenshots of your Figma design, as shown below.


Then navigate to Google Gemini, add the screenshots to the chat, and generate the instructions using the prompt below.

Give me a prompt for generating a Next.js + TypeScript + Tailwind CSS web app
from the provided Figma designs. Describe the structure of each page section in
detail.

Once the instructions have been generated, add them to your AGENTS.md file.
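For illustration only, a minimal AGENTS.md might look something like this; the rules and the folder path are placeholders you should replace with your own project's conventions and the instructions you generated above.

# Project rules

- Framework: Next.js (App Router) with TypeScript and Tailwind CSS
- Build each page section as its own component under src/components/sections
- Use semantic HTML and Tailwind utility classes only; no inline styles
- Match the spacing, colors, and typography defined in the Figma file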


After that, Kombai automatically picks up the AGENTS.md file stored in your project.


Step 4: Connect Figma and add your Figma design to Kombai

Once you have defined code generation rules, connect your Figma account with Kombai so it can access your Figma designs and generate code from them, as shown below.

Once you have connected your Figma account, copy your Figma design link, and add it to Kombai, as shown below.

Step 5: Generate and Review the Code Plan

After connecting your Figma account and adding your Figma designs to Kombai, use Kombai plan mode to generate a detailed code plan for the given input before starting the code generation, as shown below.


You can learn more about Kombai plan mode here on Kombai docs.


Step 6: Generate, Run and Fix Code Errors

Once the coding plan is approved or updated, Kombai will generate the code based on the plan, and auto-fix any TS or linting errors found in the generated code, as shown below.


After that, Kombai will run the development server and you can fix any build errors, as shown below.

Step 7: Preview your App using Kombai Browser

After generating code and fixing errors, run the code in Kombai Browser to preview the generated code.

If you find any visual mismatches in the preview, you can tag DOM elements and ask Kombai to make the necessary changes. Kombai will regenerate the code accordingly, as shown below.

Step 8: Analyzing Kombai Figma Design to Next.js Code Output Results

Once you have run and previewed your app, analyze the generated code and the UI to make sure it is close to what you wanted.

From the output results, Kombai gives you clean, semantic JSX that reads like a human wrote it, since the agent understands layout and structure, not just layers.

Kombai gets you 80-90% of the way by building the UI scaffolding, which is the most laborious part. You, the developer, are still needed for the last 10-20%, and that's the fun part.

Also, as you can see from the preview results below, Kombai was able to generate a frontend UI that is nearly pixel-perfect against the Figma design. The code only requires minimal tweaks to match the desired UI design.

What Differentiates Kombai from Other Figma-to-Code Tools

After testing general coding agents, MCPs, and Kombai on converting Figma designs into Next.js code, I found that Kombai significantly outperforms these other AI tools at building UX and UI from Figma files, images, or text prompts, as the benchmarks for real-world frontend tasks below also suggest.


Here are the key reasons for its high-quality output:

  • "Human-Like" UX Interpretation:

    • Unlike older tools that required strict layer naming or auto-layout in Figma, Kombai uses deep learning models to understand the design visually, similar to how an experienced human developer would.
    • It intelligently infers structure, component boundaries, spacing, and layout hierarchies even from messy or non-ideal design files.
  • Workflow Integration and Safety:

    • Kombai integrates directly into developer IDEs (like VS Code).
    • Planning: For complex tasks, it generates an editable development plan (often in Markdown) before writing code.
    • Sandbox Preview: It can spin up a local development server to preview the generated UI in isolation to validate the output before it touches your main codebase.
    • Error Correction: It automatically attempts to fix linting and TypeScript errors, and you can feed runtime errors back to the agent for correction.
  • Codebase Awareness:

    • It can analyze your existing repository to detect your current tech stack, styling frameworks, and component libraries. This allows it to generate code that is consistent and integrates seamlessly into your project's architecture.

In essence, Kombai acts as a highly specialized AI partner for frontend teams, taking on the task of building the user interface shell so human developers can focus on complex application logic and unique user experiences.

Conclusion

In conclusion, the Kombai AI agent changes the developer's role by eliminating the most time-consuming, least creative part of frontend development.

Note that this isn't about replacing developers but about upgrading their frontend development workflow.

You are no longer a translator for a designer but a reviewer and integrator of high-quality code, which allows you to move 10x faster and focus on what actually matters: your app's logic, state, and data flow.

Stop hand-coding your UI boilerplate. Grab a free Kombai account, install the VS Code extension, and try converting one of your own Figma frames.

You won't just be surprised by how much time you get back, you will change how you approach frontend development.

Docker Image Compression: gzip vs zstd

2025-12-02 23:15:35

Docker images are already compressed when you push them to registries like Docker Hub, GHCR, AWS ECR, etc.

So why would anyone compress an image again by using gzip or zstd?


Because in the real world, engineers often need to:

- move images between servers (air-gapped networks),

- migrate registries,

- back up images to object storage,

- load or save images repeatedly in Continuous Integration (CI) pipelines.

In these cases, docker save produces a raw .tar file that is often huge.
Compressing that tarball can cut transfer time and storage size by 50-80%.

But which compression tool is best? Let's test gzip vs zstd.

🤔 When Do We Need Image Compression?

Like I said before, you typically need to compress an image to:

- transfer images over SSH or a local network (LAN),

- work with offline or air-gapped servers,

- back up images to object storage,

- migrate to a new registry.

Skip extra compression when images are pushed directly to a registry like Docker Hub; registries already handle that.
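When you do need to move an image by hand, a common pattern (assuming zstd is installed on both machines) is to stream the archive straight through the compressor instead of writing an uncompressed .tar first:

docker save <image_name> | zstd -T0 -19 -o <image_name>.tar.zst
# ...copy <image_name>.tar.zst to the target host (scp, rsync, etc.)...
zstd -dc <image_name>.tar.zst | docker load

docker save writes to stdout and docker load reads from stdin, so no uncompressed tarball ever touches the disk on either side. In the benchmarks below, the save, compress, transfer, and load steps are kept separate so each one can be timed on its own.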

Test Setup

To get realistic numbers, we will test three images:

- alpine:latest -> ~8MB

- mariadb:10.6.18 -> ~396MB

- Custom JupyterLab image -> ~5.4GB

Environment

1. Local Computer

- CPU: 4 cores / 8 threads

- Storage: NVMe SSD

- OS: Ubuntu 22.04

- Docker Version: 27.x

- Tools: gzip, zstd, time, scp

2. VPS

- CPU: 4 cores / 8 threads

- Storage: VirtIO-backed SSD

- OS: Ubuntu 22.04

- Docker Version: 27.x

- Tools: gzip, zstd, time

Commands used

Below are the commands used to save, compress, transfer, decompress, and load the Docker images during testing.

1. Save the image (local machine)

docker pull <image_name>
docker save <image_name> -o <image_name>.tar
$ docker pull alpine:latest

latest: Pulling from library/alpine
Digest: sha256:4b7ce07002c69e8f3d704a9c5d6fd3053be500b7f1c69fc0d80990c2ad8dd412
...
$ docker save alpine:latest -o alpine_latest.tar

$ docker save mariadb:10.6.18 -o mariadb_10.tar

$ docker save danielcristh0/datascience-notebook:python-3.10.11 -o jupyter_notebook.tar


$ ls -lh

total 5,6G
-rw------- 1 user group 8,3M Nov 28 19:51 alpine_latest.tar
-rw------- 1 user group 5,2G Nov 28 20:18 jupyter_notebook.tar
-rw------- 1 user group 384M Nov 28 20:14 mariadb_10.tar

2. Compress the image (local machine)

- gzip

time gzip -k <image>.tar
$ time gzip -k alpine_latest.tar

gzip -k alpine_latest.tar  0,44s user 0,01s system 99% cpu 0,452 total
$ time gzip -k mariadb_10.tar

gzip -k mariadb_10.tar  17,21s user 0,62s system 99% cpu 17,979 total
$ time gzip -k jupyter_notebook.tar

gzip -k jupyter_notebook.tar  238,83s user 3,56s system 99% cpu 4:03,16 total

-k → keep original file

gzip uses a single CPU thread at its default level (≈ level 6)

- zstd

time zstd -T0 -19 <image>.tar
$  time zstd -T0 -19 alpine_latest.tar

alpine_latest.tar    : 37.01%   (8617984 => 3189867 bytes, alpine_latest.tar.zst)
zstd -T0 -19 alpine_latest.tar  3,64s user 0,10s system 100% cpu 3,734 total
$  time zstd -T0 -19 mariadb_10.tar

mariadb_10.tar       : 16.95%   (402636288 => 68258055 bytes, mariadb_10.tar.zst)
zstd -T0 -19 mariadb_10.tar  172,89s user 0,66s system 191% cpu 1:30,81 total
$  time zstd -T0 -22 jupyter_notebook.tar
Warning : compression level higher than max, reduced to 19
zstd: jupyter_notebook.tar.zst already exists; overwrite (y/n) ? y

jupyter_notebook.tar : 24.79%   (5560227328 => 1378450873 bytes, jupyter_notebook.tar.zst)
zstd -T0 -22 jupyter_notebook.tar  4759,54s user 19,32s system 188% cpu 42:11,68 total

-T0 → use all CPU threads

-22 → request maximum compression (automatically reduced to -19 by zstd)

3. Transfer to VPS

- gzip

time scp <image_name>.tar.gz user@vps:/tmp/
$ time scp alpine_latest.tar.gz onomi@myserver:/tmp
alpine_latest.tar.gz    100% 3588KB 174.8KB/s   00:20

scp alpine_latest.tar.gz onomi@myserver:/tmp  0,11s user 0,29s system 1% cpu 23,208 total


$ time scp mariadb_10.tar.gz [email protected]:/tmp/
mariadb_10.tar.gz   100%  114MB   2.2MB/s   00:50

scp mariadb_10.tar.gz [email protected]:/tmp/  0,46s user 0,84s system 2% cpu 52,457 total


$ time scp jupyter_notebook.tar.gz onomi@myserver:/tmp/
jupyter_notebook.tar.gz  100% 1765MB   3.4MB/s   08:35

scp jupyter_notebook.tar.gz onomi@myserver:/tmp/  5,03s user 10,42s system 2% cpu 8:38,50 total

- zstd

time scp <image_name>.tar.zst user@vps:/tmp/
$ time scp alpine_latest.tar.zst onomi@myserver:/tmp
alpine_latest.tar.zst   100% 3115KB 343.4KB/s   00:09

scp alpine_latest.tar.zst onomi@myserver:/tmp  0,10s user 0,18s system 1% cpu 22,728 total


$ time scp mariadb_10.tar.zst onomi@myserver:/tmp/
mariadb_10.tar.zst  100%   65MB   3.0MB/s   00:21

scp mariadb_10.tar.zst onomi@myserver:/tmp/  0,29s user 0,59s system 3% cpu 23,285 total


$ time scp jupyter_notebook.tar.zst onomi@myserver:/tmp/
jupyter_notebook.tar.zst    100% 1315MB   2.3MB/s   09:44

scp jupyter_notebook.tar.zst onomi@myserver:/tmp/  3,94s user 7,64s system 1% cpu 9:46,33 total

4. Load the Image on the Server (VPS)

Now that the compressed images have been transferred to the VPS, the next step is to decompress them and load the Docker images on the remote server.

- gzip

time gzip -dk <image>.tar.gz
$ time gzip -dk alpine_latest.tar.gz

real    0m0.189s
user    0m0.116s
sys     0m0.039s


$ time gzip -dk mariadb_10.tar.gz


real    0m5.108s
user    0m3.813s
sys     0m1.129s

$ time gzip -dk jupyter_notebook.tar.gz

real    1m8.344s
user    0m48.466s
sys     0m13.408s

- zstd

time zstd -d <image>.tar.zst
$ time zstd -d alpine_latest.tar.zst
alpine_latest.tar.zst: 8617984 bytes

real    0m4.121s
user    0m0.041s
sys     0m0.043s

$ time zstd -d mariadb_10.tar.zst
mariadb_10.tar.zst  : 402636288 bytes

real    0m3.455s
user    0m0.983s
sys     0m0.927s

$ time zstd -d jupyter_notebook.tar.zst

jupyter_notebook.tar.zst: 5560227328 bytes

real    0m31.810s
user    0m14.599s
sys     0m13.600s

Decompression with zstd is extremely fast, 5-10x faster than compression, even for large files.

5. Loading the Image Into Docker

Once decompressed, load the .tar file:

docker load -i <image>.tar
$ docker load -i jupyter_notebook.tar

Loaded image: danielcristh0/datascience-notebook:python-3.10.11
$ docker images

IMAGE                                               ID             DISK USAGE   CONTENT SIZE   EXTRA
danielcristh0/datascience-notebook:python-3.10.11   9b38bf7c570f       11.4GB         5.56GB

6. Analysis: gzip vs zstd

After running all compression, transfer, decompression, and loading tests across three different Docker images, let's compare gzip and zstd.

Size Comparison

zstd consistently produces much smaller output files than gzip, especially on medium and large images.

Image              Actual Size   gzip Size   zstd Size   Reduction (gzip)   Reduction (zstd)
alpine:latest      8.3 MB        3.5 MB      3.1 MB      ~57%               ~62%
mariadb:10.6.18    384 MB        114 MB      65 MB       ~70%               ~83%
jupyter-notebook   5.2 GB        1.7 GB      1.3 GB      ~67%               ~75%

zstd gives around 20-50% better compression than gzip.

Speed

Compression times, taken from the runs above:

Image              gzip Compression   zstd -19 Compression
alpine:latest      ~0.5 s             ~3.7 s
mariadb:10.6.18    ~18 s              ~1 m 31 s
jupyter-notebook   ~4 m 03 s          ~42 m 12 s

zstd gives better compression but requires significantly more CPU.

Transfer Speed (via SCP)

Because zstd produces smaller files, transfers were roughly 2x faster for the small and medium images. On the largest file, however, zstd can still lose to gzip depending on network conditions and transfer overhead.

Image              gzip Transfer   zstd Transfer
alpine             20 s            9 s
mariadb            50 s            21 s
jupyter-notebook   8 m 35 s        9 m 44 s

When Should You Use gzip vs zstd?

Use zstd when you want:

- The smallest compressed Docker images

- Fast decompression

- Faster transfers across networks

- Long-term backups

Use gzip when you want:

- Fast compression

- Low CPU usage

- Simple, predictable behavior

- Occasional small image transfers

TL;DR

If you need to compress Docker images, here’s the quick answer:

Use zstd when you want:

- Smaller archive sizes (around 20-50% smaller than gzip)

- Faster decompression

- Faster network transfers

Use gzip when you want:

- Fast compression

- Low CPU usage

- Simplicity

Aight, that's all. Thank you for taking the time to read this post.

Feel free to share feedback, tips, or different benchmark results. I’d love to hear your thoughts and continue the discussion.

Happy containerization! 🐳

AI Agents and context-aware permissions

2025-12-02 23:12:39

As the internet evolves, misconfigured permissions become a much bigger threat. Why? Because of two words: artificial intelligence (AI).

Enterprise organizations have always needed tight control over their systems. Permissions are necessary for protecting access to resources, as well as meeting compliance rules and customer obligations. An over-permissioned user can access sensitive information; for example, manager-level permissions might let an employee see their entire team’s salaries when they should only be able to check their own. Once teams spot such mistakes, they can correct them and move forward without much disruption.

That’s no longer true once AI enters the picture. AI agents do what humans do, from accessing data to sending emails. But unlike humans, they move thousands of times faster, which means their mistakes move faster too. When a human makes a mistake, it’s just one mistake. When an AI agent makes a mistake, it can quickly snowball into thousands more. This is because of three characteristics:

  1. Multi-System. AI agents usually don’t operate within a single system. They pull and push data across CRMs, databases, and whatever other systems are needed. So if an agent makes a single bad request, it can spread incorrect information across multiple systems. With write access, the agent can accidentally execute destructive actions like deleting or overwriting critical data.
  2. Scale. A human analyst might only run five queries in an afternoon. An AI agent, on the other hand, might execute thousands in just a few seconds. Over-permissioning a human has been tolerable because the impact is limited by time. But the slightest over-permissioning of an agent can produce a flood of mistakes before security teams can even react.
  3. Blind Execution. Once an AI agent gets a valid token, it can continue running until the token expires. It doesn’t check whether the user has since been deactivated, or consider any other context. Everything seems to “just work,” but that seamlessness hides a serious gap: each request can slip past risk signals that a human would recognize.

Because of these risks, agents are both powerful and dangerous. Not only do they increase a user’s capacity, they also amplify the possible consequences. The solution (although it is really just a precaution) is context-aware permissions. Rather than granting an AI agent static permissions, the system verifies every action based on the live state of the request. For example, a financial application might block a sudden request at 3am if it is normally only used during the day.

In this article, we’ll dive into how context-aware models work, common patterns, good practices, and the challenges to consider at scale.

Understanding the Risk

Although context-aware permissions clearly help lower risk, what actually are the risks? Without these safeguards in place, what is the worst that could happen? The answer: a lot. Let’s look at three scenarios.

Customer Data Exposure

Consider an AI support bot that is tasked with retrieving data from a CRM and using it in another system (e.g., Snowflake) or sending emails. If this bot has a stale token with outdated permissions, it could unintentionally expose customer information that it is no longer authorized to access. While this may seem harmless in theory, it can violate customer data custody contracts and create legal liabilities.

Information Misconfiguration

If an AI agent regularly reads from databases but has mis-scoped access, it could accidentally pull more data than intended. For example, suppose an AI agent is only meant to query a database of test accounts. With misconfigured permissions, the agent might pull information about actual production accounts instead. That agent might then inadvertently leak customer data.

Uncontrolled Bulk Actions

An AI agent could be assigned to clean up accounts that have been marked for deletion, for example due to inactivity. But if the agent has broad access, it might mistakenly delete all accounts because of the model’s non-deterministic nature. Without proper controls, an unsupervised AI agent can easily wipe out terabytes of information within minutes.

Evaluating Access Against Live Signals

Context-aware permissioning examines the contextual signals of each request by gathering environmental cues, such as device type or network security. For instance, an up-to-date, company-managed laptop would be considered lower risk than a personal smartphone on public Wi-Fi.

Network conditions also play a role. A request made through a corporate VPN is different from the same request made through public Wi-Fi. Timing influences risk scores as well. A query in the middle of the workday is expected, and much more normal than a sudden spike in activity at midnight. In short, context is fluid. It shifts with the user, device, and activity.

As such, the responses can be just as dynamic. Rather than a simple yes/no, agents adjust their behavior based on risk. In a trusted context, full results might be delivered without issue. But when conditions are riskier, the same query might be reduced to read-only or have sensitive details redacted.

This adaptability is what sustains resilient AI systems. Agents can operate across several sources without stopping for manual checks, yet their actions are still tightly governed by the live contextual signals of each request. Context-aware permissioning weighs identity beyond just the user—time, place, and conditions all matter.
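To make this concrete, here is a minimal Python sketch of the idea, not tied to any particular product; the signals, scores, and thresholds are all illustrative.

from dataclasses import dataclass

@dataclass
class RequestContext:
    device_managed: bool   # compliant, company-managed device?
    network: str           # "corporate_vpn", "home", or "public_wifi"
    hour: int              # local hour of the request, 0-23

def risk_score(ctx: RequestContext) -> int:
    """Combine live signals into a simple additive risk score."""
    score = 0
    if not ctx.device_managed:
        score += 2
    if ctx.network == "public_wifi":
        score += 2
    elif ctx.network != "corporate_vpn":
        score += 1
    if ctx.hour < 6 or ctx.hour >= 22:   # activity outside normal working hours
        score += 1
    return score

def decide(ctx: RequestContext) -> str:
    """Map the score to a graded response instead of a binary grant/deny."""
    score = risk_score(ctx)
    if score == 0:
        return "allow"           # trusted context: full results
    if score <= 2:
        return "allow_redacted"  # riskier context: mask sensitive fields
    if score <= 3:
        return "read_only"
    return "deny"

# A compliant laptop on the corporate VPN during the workday gets full access;
# the same query from an unmanaged phone on public Wi-Fi at 3am is denied.
print(decide(RequestContext(True, "corporate_vpn", hour=14)))   # allow
print(decide(RequestContext(False, "public_wifi", hour=3)))     # deny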

How Teams Put Context-Aware Models into Practice

Context-aware permissioning becomes more difficult when you consider its trade-offs. These strategies strengthen security but introduce drawbacks such as increased latency and system complexity. Tools such as Oso can help mitigate some of these issues, particularly by simplifying developer effort. The following patterns highlight both the advantages and disadvantages of context-aware permissioning.

Conditional Delegation with Context Scoping

Conventional delegation models work on a simple principle: The agent assumes the identity of a human user and retains the defined access scope until the token expires. While a good baseline, this method overlooks the risk of an over-permissioned user or general human error.

On the other hand, conditional delegation transforms static inheritance into a dynamic evaluation process. Whenever the agent presents a user token, a policy decision point (PDP) assesses the surrounding signals and then generates a downstream credential scoped to fit those conditions.

The result is finer-grained control. For example, a developer might retain write access in staging, but if their laptop falls out of compliance, the PDP can adjust permissions to read-only.
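As a rough sketch of that exchange, with all names and rules invented for illustration, the agent presents the user's token and the PDP hands back a narrower, short-lived credential:

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class UserToken:
    user: str
    scopes: set   # what the human user is allowed to do, e.g. {"read", "write"}

@dataclass
class DownstreamCredential:
    user: str
    scopes: set
    expires_at: datetime

def issue_scoped_credential(token: UserToken, device_compliant: bool) -> DownstreamCredential:
    """Policy decision point: turn static inheritance into a dynamic evaluation."""
    scopes = set(token.scopes)
    if not device_compliant:
        scopes.discard("write")   # e.g. staging write access becomes read-only
    return DownstreamCredential(
        user=token.user,
        scopes=scopes,
        expires_at=datetime.utcnow() + timedelta(minutes=5),   # short-lived by design
    )

cred = issue_scoped_credential(UserToken("dev@example.com", {"read", "write"}), device_compliant=False)
print(cred.scopes)   # {'read'}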

The downside, however, is operational overhead. PDPs rely on real-time feeds from downstream services, which can get messy as developers try to stitch signals across a distributed ecosystem.

Mid-Session Risk Re-Evaluation

Static-token systems (e.g., JWTs) operate on the assumption that the issuer’s status won’t change during the token’s lifespan. In reality, an employee could be off-boarded mid-shift or a device could fall out of compliance. Although these situations are infrequent, the potential impact is severe. For instance, a user retaining access to a bank account they were removed from.

Re-evaluating risk during a session eliminates that blind spot by managing tokens as temporary. Systems modeled after Continuous Access Evaluation (CAE) principles don’t wait for tokens to expire. Instead, they use revocation channels to end sessions immediately whenever token permissions are updated.

The downside is added latency and coordination. Every re-evaluation incurs a performance cost, and revocation needs tighter integration across downstream services. For workloads where even a single unauthorized request could compromise highly sensitive information, such as patient data in a healthcare app where access is temporarily granted to care providers, the trade-off of a few extra milliseconds is often justified.
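The core of the pattern is small; here is a bare-bones sketch with illustrative names only:

revoked_sessions = set()

def revoke(session_id: str) -> None:
    """Called by the identity provider when a user is off-boarded or a device falls out of compliance."""
    revoked_sessions.add(session_id)

def authorize_request(session_id: str, expires_at: float, now: float) -> bool:
    # Expiry alone is not enough: every request also consults the live revocation channel.
    return now < expires_at and session_id not in revoked_sessions

print(authorize_request("sess-42", expires_at=1000.0, now=500.0))   # True
revoke("sess-42")                                                   # user off-boarded mid-session
print(authorize_request("sess-42", expires_at=1000.0, now=600.0))   # False: revoked before expiry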

Adaptive Responses

Most enterprises still manage access as a binary decision: grant or deny. This all-or-nothing approach does not work well with AI agents operating in workflows with adaptive steps. A request denial blocks data, but also halts the agent’s entire process.

Adaptive responses introduce a more flexible alternative. Rather than stopping the agent entirely, the system can either limit request rates or route the request to a human for review. The agent is able to continue operating, but with guardrails to limit potential damage.

This method of graceful fallbacks is particularly important in AI systems where uptime matters most. Customer support bots for instance can’t simply fail whenever a risk arises. By implementing tiered responses, the system maintains a balance between availability and safety.

However, putting adaptive responses into practice is far from simple. Policies require fine-grained enforcement, sometimes at the field level. Transparency is also important because security teams must be able to trace the system’s decisions (such as why it throttled a query) through comprehensive logs and audit trails.

Behavioral Context as Input

Even an agent’s own behavior can serve as a signal. Agents generate telemetry through query patterns, download volumes, request timing, and more. A sudden surge in a certain action or concurrent logins from different locations can indicate heightened risk.

Developers can mitigate this risk by incorporating behavior-based checks. While a human might take hours to extract a dataset, an unmonitored agent can complete the same task almost instantly. By supplying the PDP with behavioral signals, the system can identify and counter misuse immediately without human intervention.

The real challenge here is calibration. If thresholds are too strict, users will be overwhelmed with re-authentication requests. If thresholds are too lenient, suspicious activity can slip by unnoticed. To improve decision accuracy, most enterprises combine behavior scores with other contextual signals (such as device or location).
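A toy sketch of such a check, with made-up baselines and thresholds:

from collections import deque

class BehaviorMonitor:
    """Compare an agent's recent request rate against its historical baseline."""

    def __init__(self, baseline_per_minute: float, window: int = 10):
        self.baseline = baseline_per_minute
        self.recent = deque(maxlen=window)   # requests observed in each recent minute

    def record(self, requests_this_minute: int) -> str:
        self.recent.append(requests_this_minute)
        avg = sum(self.recent) / len(self.recent)
        ratio = avg / self.baseline
        if ratio > 10:
            return "revoke_session"    # looks like bulk extraction: cut the token
        if ratio > 3:
            return "require_review"    # throttle and route to a human
        return "ok"

monitor = BehaviorMonitor(baseline_per_minute=20)
print(monitor.record(18))    # ok: close to the baseline
print(monitor.record(400))   # revoke_session: roughly 10x the baseline on average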

Closing Thoughts

Context-aware permissions are simple in theory, but much harder in practice. Every time a live signal is evaluated, that additional check adds latency. Fragmented systems deliver signals asynchronously. Complex token exchange flows will require extra validations. And every masked field or throttled request must be accurately logged for security teams to analyze even months later.

Even so, the effort is worth it for sensitive applications. Role-based access determines what a user should be able to do, but it is context-aware permissions which ensure that those rules are actually being enforced correctly. By linking identity to the current conditions of every request, it makes AI agents’ behavior more predictable.

This approach is most effective when authorization is centralized. Platforms like Oso offer a unified control plane where policies are written once and consistently enforced across applications and APIs. Rather than implementing context checks independently for every service, teams can manage them in one central location using Oso.

If you would like to learn more, check out the LLM Authorization chapter in our Authorization Academy.

FAQ

What is context-aware permissioning?

It’s an access model that evaluates every request based on the current conditions, such as device and network, rather than depending on static roles.

Why aren’t static roles enough for AI agents?

Agents run at machine speed, sometimes across multiple systems simultaneously. Conditions might change mid-session, but static roles don’t account for that. This means a stale token can continue working even after the user it belongs to is off-boarded.

What’s the risk of using service accounts for agents?

Service accounts often carry broad, long-lived permissions. If an agent operates under such an account, it can bypass user-specific roles and revocations. This can turn a single integration into a system-wide security exposure.

What is mid-session risk re-evaluation?

It’s a system where tokens are short-lived and constantly re-validated. If risk indicators signal change, such as a device falling out of compliance, sessions can be revoked instantly instead of waiting for the token to expire.

What are adaptive responses?

Adaptive responses move beyond simple “grant or deny” decisions with graduated actions. Rather than blocking an agent completely, systems can instead redact sensitive data or limit request rates.

How does behavioral context factor into permissioning?

Agents produce telemetry (query patterns, data volume, request timing) that can be compared to established baselines. Unexpected deviations can then trigger re-evaluation.

Why Do Some SaaS Startups Grow Faster on Google While Others Stay Invisible?

2025-12-02 22:57:15

Understanding Why SaaS SEO Feels Different

If there’s one thing I’ve learned working with early stage SaaS teams, it’s that SEO hits differently in this space. You’re not just targeting generic terms or chasing backlinks. You’re trying to show up for users who are actively searching for solutions to specific problems, often tied to workflows or niche industry issues. When I first helped a small SaaS team audit their content, we realized their blog was filled with broad topics that never aligned with user intent. That moment taught me how critical it is for SaaS companies to map content to actual product value.

The Unique Challenge of SaaS Search Intent

Unlike e-commerce or local businesses, SaaS search journeys are multi-layered. Users don’t just search “software for invoicing” and then buy instantly. They usually follow a path:

  1. Understanding the problem
  2. Searching for frameworks or comparisons
  3. Evaluating features
  4. Assessing whether the product fits their workflow

This longer funnel means SEO content has to be built strategically. You’re not only ranking for keywords, you’re supporting a buyer across several touchpoints. This is where a SaaS SEO agency becomes valuable, and where partners like MADX often step in with guidance based on real product-led growth patterns.

What a Strong SaaS SEO Framework Usually Includes

After working on a handful of SaaS SEO sprints, I noticed the agencies that get results always share a similar structure. Below is a simple breakdown of elements that consistently matter:

  • Keyword clusters tied directly to product use cases
  • A technical SEO foundation that supports scalable content
  • Conversion focused content, not just traffic focused content
  • Real competitive analysis instead of copying ranking pages

This combination tends to outperform random topic based publishing because it reflects how users actually think when searching for SaaS products.

Lessons I’ve Picked Up From Real Projects

One of the most surprising things I discovered was how often SaaS companies bury their strongest content assets. In one project, their best performing case study was two clicks deep in a dropdown. Another team had a powerful comparison page that wasn’t even indexed. Fixing these small issues brought immediate lifts in signups. It reminded me that SEO for SaaS is just as much about UX and structure as it is about keywords.

Another thing that consistently shows up is the need for clarity. Some SaaS websites explain features but never show the specific problems they solve. When we rewrote a product page for a workflow tool and tied every feature to a pain point, the page’s conversion rate nearly doubled within a month.

A Quick Checklist for SaaS Teams Starting SEO

If I were advising a founder starting SEO today, I’d share this simple checklist. It’s not perfect, but it covers the essentials before scaling content:

  • Identify your top 3 use cases and build keyword clusters around them
  • Create comparison and alternative pages early
  • Validate search intent before writing anything
  • Make sure Google can crawl everything important
  • Turn your blog into a funnel, not a content dump

Following these steps helped me reduce wasted content production across multiple SaaS projects.

Final Thoughts

SEO for SaaS isn’t about publishing more, it’s about publishing smarter. When content aligns with product value and search intent, growth becomes more predictable and sustainable. And while a specialized team can help, a lot of the foundation starts with understanding your users deeply and structuring content around their journey.

⚛️ Quantum Machine Learning Algorithms: A Technical Deep Dive (With Qiskit Code)

2025-12-02 22:56:38

Quantum Machine Learning (QML) blends quantum computing with classical ML to unlock speed-ups in optimization, sampling, and high-dimensional learning.
This post covers core QML algorithms with practical Python/Qiskit examples so developers can start experimenting today.

🔹 1. Quantum Support Vector Machines (QSVM)

QSVMs use quantum kernel estimation, which allows efficient mapping of classical data into extremely high-dimensional Hilbert spaces.

🧪 Qiskit Example — Quantum Kernel

from qiskit import BasicAer
from qiskit.utils import QuantumInstance
from qiskit_machine_learning.kernels import QuantumKernel
from qiskit.circuit.library import ZZFeatureMap

# Encode classical features with a ZZ feature map, then evaluate the kernel
# between two sample points. (This uses the legacy QuantumKernel/QuantumInstance
# API; newer qiskit-machine-learning releases use FidelityQuantumKernel.)
feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
quantum_kernel = QuantumKernel(
    feature_map=feature_map,
    quantum_instance=QuantumInstance(BasicAer.get_backend('statevector_simulator')))

kernel_matrix = quantum_kernel.evaluate(x_vec=[[0, 1]], y_vec=[[1, 0]])
print(kernel_matrix)

🔹 2. Quantum Neural Networks (QNNs)

QNNs use variational quantum circuits trained with classical optimizers in a hybrid loop.

Qiskit Example — A Simple VQC Model

from qiskit import Aer
from qiskit.algorithms.optimizers import COBYLA
from qiskit_machine_learning.neural_networks import TwoLayerQNN

backend = Aer.get_backend("statevector_simulator")

# Two-qubit variational circuit (feature map + ansatz) from the legacy TwoLayerQNN API
qnn = TwoLayerQNN(num_qubits=2, quantum_instance=backend)
optimizer = COBYLA()

sample_input = [0.1, 0.5]  # one example feature vector

def objective(weights):
    # Forward pass returns the observable's expectation value for this input/weights pair
    return float(qnn.forward(sample_input, weights)[0, 0])

result = optimizer.minimize(objective, x0=[0.1] * qnn.num_weights)
print(result)

🔹 3. Quantum k-Means Clustering

Quantum k-means speeds up distance calculations using superposition and amplitude encoding.

Sketch (Qiskit)

Quantum k-means isn't fully standardized yet; many implementations simply reuse quantum kernels for the similarity step. The distinctly quantum subroutine is the distance (or overlap) estimate between two encoded points, which can be computed via swap tests or inner-product circuits, as sketched below.
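As a minimal, self-contained example (using the same legacy Aer/execute API as the other snippets here; the state-preparation angles and shot count are arbitrary), a swap test estimates the squared overlap between two states, which can then be turned into a distance:

from qiskit import QuantumCircuit, Aer, execute
import numpy as np

# Qubit 0 is the ancilla; qubits 1 and 2 hold the two states being compared
qc = QuantumCircuit(3, 1)
qc.ry(0.8, 1)   # prepare |psi>
qc.ry(1.9, 2)   # prepare |phi>

qc.h(0)
qc.cswap(0, 1, 2)   # controlled-SWAP between the two data qubits
qc.h(0)
qc.measure(0, 0)

shots = 4096
counts = execute(qc, Aer.get_backend("qasm_simulator"), shots=shots).result().get_counts()

# P(ancilla = 0) = 1/2 + |<psi|phi>|^2 / 2
p0 = counts.get("0", 0) / shots
overlap_sq = max(0.0, 2 * p0 - 1)
distance = np.sqrt(max(0.0, 1 - overlap_sq))   # one possible "quantum distance"
print(overlap_sq, distance)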

🔹 4. Quantum PCA (qPCA)

Quantum PCA uses quantum phase estimation (QPE) to extract principal components of a density matrix.

Minimal Example (Qiskit)

from qiskit import QuantumCircuit
from qiskit.circuit.library import QFT

n_qubits = 3
qc = QuantumCircuit(n_qubits)

# Simplified QPE-style structure: superposition on the counting register,
# followed by an (inverse) QFT. The controlled-unitary applications on the
# target register are omitted here.
qc.h(range(n_qubits - 1))
qc.append(QFT(num_qubits=n_qubits - 1).inverse(), range(n_qubits - 1))

qc.draw('mpl')

(This acts as the backbone for qPCA implementations.)

🔹 5. Quantum Generative Adversarial Networks (QGANs)

QGANs combine:

  • A quantum generator
  • A classical or quantum discriminator

Qiskit Example — QGAN Training Loop

from qiskit import BasicAer
from qiskit_machine_learning.algorithms import QGAN  # legacy class, removed in recent releases

# Tiny toy dataset; the QGAN learns its distribution with a quantum generator
real_data = [0.1, 0.9]

qgan = QGAN(data=real_data,
            num_qubits=[1],
            batch_size=2,
            num_epochs=10,
            quantum_instance=BasicAer.get_backend("qasm_simulator"))

qgan.run()

🔹 6. QAOA and VQE (Optimization Algorithms)

Many ML tasks (loss minimization, feature selection, clustering) are optimization problems.
QAOA and VQE use quantum circuits to optimize energy landscapes.

Qiskit Example — Simple VQE

from qiskit.algorithms.minimum_eigensolvers import VQE
from qiskit.algorithms.optimizers import SLSQP
from qiskit.circuit.library import TwoLocal
from qiskit.primitives import Estimator
from qiskit.quantum_info import SparsePauliOp

# Two-qubit Ising-style Hamiltonian and a hardware-efficient ansatz
hamiltonian = SparsePauliOp.from_list([("ZZ", 1)])
ansatz = TwoLocal(2, "ry", "cx", reps=2)
optimizer = SLSQP()

# The primitives-based VQE takes an Estimator, the ansatz, and a classical optimizer
vqe = VQE(Estimator(), ansatz, optimizer)
result = vqe.compute_minimum_eigenvalue(operator=hamiltonian)

print(result.eigenvalue)

🧠 Why QML Matters for Developers

✔ Exponential feature mapping
✔ Faster optimization in ML workflows
✔ Parallelism via superposition
✔ More expressive generative models
✔ Useful for chemistry, finance, logistics, and material science

Quantum advantage in ML is still emerging, but developers experimenting today with Qiskit, PennyLane, Cirq, or AWS Braket will lead the next wave of AI innovation.

🚀 Final Thoughts

Quantum ML is not just “faster computing” — it’s a fundamentally new computational paradigm.