RSS preview of the blog of The Practical Developer

Inside My AI Workflow: How I Get Real Work Done With Prompts

2025-11-08 10:33:55

AI isn’t magical. It’s leverage.

Most people use AI the way they use Google: type a question, hope for something useful, and leave with surface-level answers.

That’s not how I run my life, brands, or business.

I use AI as a thinking partner, execution engine, and speed multiplier across product-building, writing, decision-making, and coding.

Today, I’m opening up my workflow: not theory, not motivation, but the exact AI workflow I use daily as a founder, developer, and creator.

If you’ve ever wondered, “What does an AI-driven day actually look like?”, this is it.

My Core Rule: AI Isn’t a Tool, It’s an Extension of My Mind

AI doesn’t replace my thinking.
It amplifies it.

My workflow is built on one belief:

“If AI can think, draft, structure, refine, test, and execute faster than me, it should.”

My time is better spent on judgment, taste, creativity, and vision.

My Hybrid AI Workflow (Founder + Developer)

1️⃣ Clarify the “Outcome” Before Touching AI

Most people start with a prompt.
I start with intention.

I ask myself one question:

“What result do I want AI to produce that will move my work forward?”

This avoids friction, confusion, and wasted iterations.

2️⃣ Use AI to Break the Work into Smart Modules

Whether I’m writing a book chapter or designing a Python automation, I let AI structure the work first.

Prompt Template I use:

Act as a project architect. Break this into logical phases, tasks, and the smartest sequence to complete it with AI support.

This turns overwhelming work into clear, bite-sized units.

3️⃣ Build the “AI Working Memory” First

Before asking AI to execute, I load the context.

For a product build, this may include:

  • Audience
  • Use case
  • Tech stack
  • Constraints
  • Success criteria

For a book or article, this may include:

  • Tone + voice
  • Reader level
  • Structure
  • Key message

This saves me roughly 70% of the re-explaining time later.

4️⃣ Use AI for Execution, But Keep Judgment Human

Here’s where most get it wrong:
They delegate thinking to AI.

I don’t.

I delegate output, not ownership.

Examples (AI role vs. human role):

AI builds. I refine.

5️⃣ Use AI to Audit, Stress-Test, and Improve the Work

A beginner stops at “done.”
An AI-native professional ends at “elevated.”

I always run a stress-test round:

Act as a critical reviewer. Identify flaws, missing elements, and ways to improve clarity, depth, and value. Suggest fixes.

This upgrades good work into exceptional work, fast.

6️⃣ Close With “Leverage Outputs,” Not Just Completion

The biggest mistake people make with AI:
They finish a task and walk away.

My rule:

Every AI-created outcome should create secondary value automatically.

Example:
One Dev.to article → 3 email ideas → 2 shorts → 1 framework → 1 tool concept.

I don’t complete outputs. I multiply them.

The Light Rebellion Perspective

Most people treat AI like a shortcut.

I treat AI as a new operating system for personal power.

The world is sleepwalking into the AI era with old systems, old workflows, and old definitions of “work.”

This hybrid workflow is not the future; it is the minimum requirement for those who want to stay relevant, competitive, and unstoppable.

If schools won’t teach this, creators and founders will.
If they don’t, AI-native thinkers will take the lead.

Final Thought

The question is no longer:

“Should I use AI?”

The real question is:

“What version of myself do I become when I learn to think and work with AI?”

Because after you experience an AI-powered workflow, you won’t go back.

Next Article:

“The Prompt Layer Most Beginners Miss”

Once you discover this layer, prompt mastery starts making sense.

Why it's time to ditch UUIDv4 and switch to UUIDv7!

2025-11-08 10:29:13

I've been using UUIDv4 as my go-to identifier for database primary keys for quite a long time, moving from sequential integer IDs (auto-increment/SERIAL). UUIDv4 immediately reminds me of the time when we didn't have better alternatives for distributed systems.

Despite being widely adopted, UUIDv4 has some issues that can be easily fixed with a "more modern" alternative.

I've recently started using UUIDv7, and it has clear advantages over UUIDv4.

First of all, it's fast: thanks to its time-ordered structure, UUIDv7 is reported to be 2-5x faster for inserts than UUIDv4. Writing records to the database has been a real pleasure; insert performance and index maintenance are considerably faster.

Second, it naturally sorts by creation time, which UUIDv4 doesn't do at all. With UUIDv4, every insert lands in a random position in your B-tree index, causing page splits and fragmentation that degrade performance over time, whereas UUIDv7 appends sequentially to the index (see the quick sorting check below). You also still need a separate created_at timestamp column if you want to sort UUIDv4 records chronologically.

UUIDv7 (along with other time-ordered identifiers such as ULID) handles this under the hood, so high-volume inserts into large tables won't bring nasty surprises like severe performance degradation or bloated indexes.

For instance, here's how UUIDv7 structures its data:

018c8e8a-9d4e-7890-a123-456789abcdef
└─timestamp─┘ └───random bits────┘

The first 48 bits contain a Unix timestamp in milliseconds, so UUIDs generated over time are naturally sequential. You can also (and very easily) start using UUIDv7 in your existing projects without migrating old UUIDv4 records - both can coexist in the same column.

// Node.js usage
const { v7: uuidv7 } = require('uuid');
const id = uuidv7();
console.log(id); // e.g. 018c8e8a-9d4e-7890-a123-456789abcdef

// example database record
const record = {
  id: "018c8e8a-9d4e-7890-a123-456789abcdef",
  user_id: "018c8e8a-9d50-7000-c345-6789abcdef01",
  created_at: "2024-11-08T10:30:00Z"
};
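Because the timestamp occupies the most significant bits, plain string sorting of UUIDv7 values follows creation order. Here is a quick sketch to check that (assuming the uuid package; the small delay keeps each ID in a different millisecond so ordering holds by timestamp alone):

// Quick check: UUIDv7 string order matches generation order.
const { v7: uuidv7 } = require('uuid');
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

(async () => {
  const ids = [];
  for (let i = 0; i < 5; i++) {
    ids.push(uuidv7());
    await sleep(2); // ensure each ID falls in a different millisecond
  }
  const sorted = [...ids].sort();
  console.log(JSON.stringify(ids) === JSON.stringify(sorted)); // true
})();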

It's also good to mention that UUIDv7 maintains the same 128-bit format as UUIDv4, so it works with all existing UUID columns in your database.

Benchmark

UUIDv4 generation (10 million IDs): 2,847ms
UUIDv7 generation (10 million IDs): 2,763ms

Note: my benchmark was run with Node v20.10.0 and measures raw ID generation only, not actual database inserts.

// Run: node benchmark-uuid-comparison.js

// UUIDv4 benchmark (raw ID generation, no database involved)
const { randomUUID } = require("crypto");

console.time("UUIDv4 generation");
for (let i = 0; i < 10_000_000; i++) {
  randomUUID();
}
console.timeEnd("UUIDv4 generation");

// UUIDv7 benchmark
const { v7: uuidv7 } = require("uuid");

console.time("UUIDv7 generation");
for (let i = 0; i < 10_000_000; i++) {
  uuidv7();
}
console.timeEnd("UUIDv7 generation");

// Note: 10_000_000 uses underscores as numeric separators for readability
// https://github.com/pH-7/GoodJsCode#-clearreadable-numbers

The real performance difference shows up in actual database operations where UUIDv7's sequential nature prevents index fragmentation.
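If you want to measure that effect yourself, a more database-shaped benchmark is easy to sketch. The following is only an illustration (it assumes the better-sqlite3 and uuid packages; absolute numbers depend heavily on the engine, table size, and hardware):

// Compare bulk inserts into a table whose primary key is a UUID string.
const Database = require('better-sqlite3');
const { randomUUID } = require('crypto');
const { v7: uuidv7 } = require('uuid');

function benchInserts(label, makeId, rows = 100_000) {
  const db = new Database(':memory:');
  db.exec('CREATE TABLE items (id TEXT PRIMARY KEY, payload TEXT)');
  const insert = db.prepare('INSERT INTO items (id, payload) VALUES (?, ?)');
  const insertMany = db.transaction((count) => {
    for (let i = 0; i < count; i++) insert.run(makeId(), 'x');
  });

  console.time(label);
  insertMany(rows);
  console.timeEnd(label);
  db.close();
}

benchInserts('UUIDv4 primary-key inserts', randomUUID);
benchInserts('UUIDv7 primary-key inserts', uuidv7);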

Library support

With UUIDv7, each generated identifier carries its creation timestamp, so temporal ordering is maintained alongside global uniqueness.

UUIDv7 is available in most modern libraries:

  • Node.js: uuid package v10.0.0+
  • Python: uuid-utils or uuid7 package
  • Go: github.com/google/uuid
  • Java: libraries such as java-uuid-generator or uuid-creator (not built into the JDK)

Downsides...

The only case where you should still prefer UUIDv4 is when you explicitly don't want temporal ordering: security tokens, API keys, or session IDs where predictability could be a security concern. UUIDv7's timestamp-based structure reveals when the identifier was created, and in these scenarios you want something completely unpredictable that doesn't leak any information about creation time.
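To make the information leak concrete, here is a minimal sketch that recovers the creation time from a UUIDv7, relying only on the layout described above (the first 48 bits are a Unix timestamp in milliseconds):

// Extract the embedded timestamp from a UUIDv7.
function uuidv7Timestamp(uuid) {
  const hex = uuid.replace(/-/g, '').slice(0, 12); // first 12 hex chars = 48 timestamp bits
  return new Date(parseInt(hex, 16));
}

console.log(uuidv7Timestamp('018c8e8a-9d4e-7890-a123-456789abcdef'));
// -> the moment this example ID was generated, to the millisecond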

However, for database primary keys and foreign keys, UUIDv7 is the clear winner, and it's worth trying in your next project 😉

It will give a noticeable boost to your database write performance and keep your indexes efficient over time, which is what matters most at the end of the day, right? 😊

Alternatives

UUIDv6 is another time-based UUID option (still relatively new and not widely used) that is essentially a field-reordered fix of UUIDv1. It also provides sequential ordering like UUIDv7, but it still includes MAC address information (or a random node ID) in its structure, which UUIDv7 avoids entirely for privacy reasons.

When to still use UUIDv4

Finally, UUIDv4 remains perfectly valid for non-database use cases such as temporary identifiers, request IDs, or any scenario where temporal ordering isn't needed.

From Manual Testing to AI Agents: A 90-Day Transformation Roadmap

2025-11-08 10:07:21

The software testing landscape is undergoing a seismic shift. As AI agents become increasingly sophisticated, QA teams have an unprecedented opportunity to augment their capabilities and deliver higher quality software faster. But the transition from manual testing to AI-assisted workflows can feel overwhelming.
This 90-day roadmap will guide you through a practical, phase-by-phase approach to integrating AI agents into your testing practice—from your first automation scripts to deploying intelligent agents that can reason about your application.

Why Make the Shift Now?

Manual testing served us well for decades, but modern software development demands more:

  • Speed: CI/CD pipelines require instant feedback
  • Coverage: Applications are too complex for purely manual validation
  • Consistency: Human testers have off days; AI agents don't
  • Scale: Testing across browsers, devices, and configurations is growing exponentially

AI agents aren't here to replace testers—they're here to handle the repetitive work so you can focus on exploratory testing, edge cases, and strategic quality initiatives.

The 90-Day Roadmap

Phase 1: Foundation (Days 1-30)

Goal: Build automation fundamentals and understand AI capabilities

Week 1-2: Assessment & Learning

  • Audit your current testing process: Document what you test manually, how long it takes, and what's most repetitive
  • Learn automation basics: If you're new to automation, start with free resources on Selenium, Playwright, or Cypress
  • Explore AI testing tools: Research tools like Testim, Mabl, Applitools, and Functionize to understand what's possible

Action Items:

  • Pick 5 critical user flows in your application
  • Create a spreadsheet tracking manual test execution time
  • Complete a Playwright or Cypress tutorial (both have excellent docs)

Week 3-4: First Automation Scripts

  • Choose your framework: Playwright is excellent for modern web apps, Cypress for rapid development, Selenium for legacy support
  • Write your first tests: Start with login, signup, and basic navigation
  • Set up CI/CD integration: Get tests running in GitHub Actions, GitLab CI, or Jenkins

Tools to explore:

  • Playwright: Modern, fast, multi-browser support
  • Cypress: Developer-friendly, great debugging
  • Selenium: Industry standard, massive ecosystem

Quick Win: Automate one smoke test suite that runs on every deployment
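As a concrete starting point, a smoke test can be as small as the sketch below (the URL, selectors, and credentials are placeholders for your own app):

// tests/smoke.spec.js - minimal Playwright smoke test for the login flow.
const { test, expect } = require('@playwright/test');

test('user can log in', async ({ page }) => {
  await page.goto('https://staging.example.com/login'); // placeholder URL
  await page.fill('#email', 'qa-user@example.com');     // placeholder selector + account
  await page.fill('#password', process.env.TEST_PASSWORD);
  await page.click('button[type="submit"]');
  await expect(page).toHaveURL(/dashboard/);            // placeholder success condition
});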

Phase 2: AI-Assisted Testing (Days 31-60)

Goal: Integrate AI tools for test generation, maintenance, and visual validation

Week 5-6: AI-Powered Test Generation

This is where things get exciting. AI code generators can dramatically accelerate test creation.

Tools to leverage:

  1. GitHub Copilot / Cursor / Windsurf: AI pair programmers that excel at generating test code. Prompt: "Write a Playwright test that validates the checkout flow with payment processing" and Copilot will generate comprehensive test scaffolding.

2. Step-to-Code Generators:

  • Step-to-Code Generator (open source): https://github.com/77QAlab/step-to-code-generator - Converts plain English test steps into executable Playwright, Cypress, or TestCafe code. Features AI-powered autocomplete with 34+ pre-built suggestions, custom step templates, test data management, and a selector helper tool. Perfect for manual testers transitioning to automation, no coding experience required.
  • Testim: Records your actions and converts them to stable, self-healing tests
  • Katalon Recorder: Free Chrome extension that generates Selenium code
  • Checkly's AI test generator: Converts plain English descriptions to Playwright tests

PRACTICAL EXERCISE:
• Use Cursor or GitHub Copilot to generate 10 test scenarios from user stories
• Compare the AI-generated code to what you'd write manually
• Refine prompts to get better output (be specific about assertions, error handling)

PRO TIP: AI code generators work best when you provide context. Include your page object patterns, naming conventions, and existing test examples in your prompts.
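For example, a context-rich prompt (illustrative only, with placeholders for your own code) might look like:

Here is our LoginPage page object and one existing spec that follows our naming conventions: [paste code]. Using the same patterns, write a Playwright test for the password-reset flow, with explicit assertions and error handling for invalid tokens.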

Week 7-8: Self-Healing Tests & Visual AI

One of the biggest pain points in test automation is maintenance. AI can help.

Implement self-healing:

  • Testim: Uses ML to automatically update locators when UI changes
  • Mabl: Self-healing capabilities plus integrated visual testing
  • Healenium: Open-source self-healing for Selenium

Add visual validation:

  • Applitools: Industry-leading visual AI that catches UI bugs humans miss
  • Percy: Visual testing integrated with your existing tests (a sketch follows below)
  • Chromatic: Storybook-focused visual regression testing

Action items:

  1. Integrate Applitools or Percy with 5 critical user flows
  2. Set up baseline images
  3. Intentionally break UI to see how visual AI catches issues

ROI Moment: Visual AI typically catches 10-20% more bugs than functional tests alone
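As a concrete example of the Percy option, a snapshot call drops straight into an existing Playwright test. A minimal sketch (assuming the @percy/playwright package and a PERCY_TOKEN configured in CI; the URL is a placeholder):

// visual.spec.js - a Playwright test with a Percy visual snapshot.
// Run with: npx percy exec -- npx playwright test
const { test } = require('@playwright/test');
const percySnapshot = require('@percy/playwright');

test('homepage looks right', async ({ page }) => {
  await page.goto('https://staging.example.com'); // placeholder URL
  await percySnapshot(page, 'Homepage');          // uploads a snapshot for visual diffing
});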

Phase 3: AI Agents & Intelligent Testing (Days 61-90)

Goal: Deploy autonomous AI agents that reason about your application.

Week 9-10: AI Agent Fundamentals

AI agents go beyond automation: they explore, reason, and adapt.

Understanding AI Testing Agents

  • Autonomous exploration: Agents discover new paths through your app.
  • Intelligent assertions: They understand what “looks wrong” contextually.
  • Natural language interaction: Describe what to test in plain English.

Tools to Explore

QA Wolf – Generates and maintains Playwright tests
→ Converts manual test cases to automated tests
→ Handles ongoing maintenance

Octomind – Auto-discovers test cases
→ Agents explore your app autonomously
→ Creates tests from discovered user flows

Relicx – Generates tests from session replays
→ Learns from production usage
→ Creates realistic scenarios

Momentic – Low-code AI testing with intelligent assertions
→ Visual editor with AI-powered element detection
→ Self-maintaining test suite

Week 9 Exercise

✅ Pick one AI agent platform (Octomind has a generous free tier)
✅ Let it crawl your staging environment
✅ Review the tests it generates
✅ Refine and incorporate them into your suite

Week 10-11: Building Custom AI Testing Workflows

Let’s go advanced—build custom AI agents using LLM APIs.

Custom Agent Pattern Example

import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Ask Claude to draft Playwright test cases from a plain-language user story.
async function generateTestCases(userStory) {
  const message = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 2000,
    messages: [{
      role: 'user',
      content: `Generate comprehensive Playwright test cases for: ${userStory}

      Include: happy path, error cases, edge cases, and accessibility checks.
      Format as executable Playwright code.`
    }]
  });
  return message.content[0].text;
}

Use Cases for Custom AI Agents

  • Test data generation: Create realistic datasets
  • Bug report analysis: AI suggests new tests from crash data
  • Accessibility validation: AI reviews WCAG compliance
  • Performance testing: Generates realistic load patterns

Tools for Custom Agent Development

  • LangChain – Build complex AI agent workflows
  • Claude API / OpenAI API – LLMs for reasoning & analysis
  • Playwright + AI – Combine browser automation with decision-making (see the sketch below)
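To make the "Playwright + AI" combination concrete, here is a rough sketch of an illustrative pattern (my own example, not a specific product's API): load a page with Playwright, hand its visible text to Claude, and ask for a pass/fail judgment.

import { chromium } from 'playwright';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Load a page, extract its visible text, and let the model judge it.
async function aiPageCheck(url) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const text = (await page.innerText('body')).slice(0, 4000); // keep the prompt small
  await browser.close();

  const message = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 300,
    messages: [{
      role: 'user',
      content: `You are a QA reviewer. Does this page content look broken, empty, or incomplete? Answer PASS or FAIL with a one-line reason.\n\n${text}`
    }]
  });
  return message.content[0].text;
}

aiPageCheck('https://staging.example.com').then(console.log); // placeholder URL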

Week 12: Integration & Optimization

  1. CI/CD Pipeline Enhancement

name: AI-Powered Testing
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
      - name: Run AI-generated tests
        run: npx playwright test
      - name: Visual AI comparison
        uses: applitools/eyes-playwright-action@v1
      - name: AI bug analysis
        run: node scripts/analyze-failures.js   # a sketch of this script follows below
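The scripts/analyze-failures.js step is whatever you make it. One minimal sketch (assuming Playwright's JSON reporter is configured to write results.json, and reusing the Anthropic SDK pattern from Week 10-11):

// scripts/analyze-failures.js (sketch) - summarize failing Playwright tests with Claude.
import fs from 'node:fs';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Walk the (possibly nested) suite tree of Playwright's JSON report and collect failing specs.
function collectFailures(suite, out = []) {
  for (const spec of suite.specs ?? []) {
    if (spec.ok === false) out.push(spec.title);
  }
  for (const child of suite.suites ?? []) collectFailures(child, out);
  return out;
}

async function main() {
  const report = JSON.parse(fs.readFileSync('results.json', 'utf8'));
  const failures = (report.suites ?? []).flatMap((suite) => collectFailures(suite));
  if (failures.length === 0) return;

  const message = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 500,
    messages: [{
      role: 'user',
      content: `These Playwright tests failed:\n${failures.join('\n')}\n\nGroup them by likely root cause and suggest what to investigate first.`
    }]
  });
  console.log(message.content[0].text);
}

main();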

2. Monitoring & Learning Loop

  • Set up dashboards (Grafana, DataDog)
  • Track test time, flakiness, bug detection rate
  • Let AI agents learn from failures

3. Team Training

  • Document AI testing workflows
  • Teach prompt engineering for test generation
  • Define when to use AI vs manual testing

Essential Tools Summary

Foundation

  • Playwright / Cypress
  • GitHub Actions / GitLab CI
  • Step-to-Code Generator

AI-Assisted Testing

  • GitHub Copilot / Cursor
  • Testim / Katalon
  • Applitools / Percy

AI Agent Testing

  • QA Wolf / Octomind
  • Relicx / Momentic
  • Claude API / OpenAI API

Advanced Workflows

  • LangChain
  • Playwright + LLMs
Cost Considerations

Measuring Success


Common Pitfalls to Avoid

  1. Automating bad manual tests – Fix strategy first.
  2. Over-relying on AI – Understand the basics.
  3. Ignoring false positives – Tune your visual baselines.
  4. Not involving the team – Transformation is cultural.
  5. Analysis paralysis – Week 1 = research, Week 2 = action.

Week-by-Week Checklist

Days 1-30 – Foundation
☐ Document current process
☐ Choose framework
☐ Write 10 automated tests
☐ Set up CI/CD
☐ Research 4+ AI tools

Days 31-60 – AI Integration
☐ Enable Copilot or Cursor
☐ Generate 20+ AI tests
☐ Implement visual AI testing
☐ Try 2 self-healing solutions
☐ Cut maintenance time by 30%

Days 61-90 – AI Agents
☐ Deploy one AI agent platform
☐ Build 1 custom workflow
☐ Hit 70% automated coverage
☐ Train team on AI testing
☐ Document ROI + next steps

Beyond 90 Days: The Future

  1. Exploratory AI agents: Continuous production testing
  2. AI-powered load testing: Realistic user simulation
  3. Predictive quality: Risk forecasting for code changes
  4. Security agents: AI that thinks like a hacker

💬 The QA engineers who thrive won’t just execute tests—they’ll orchestrate intelligent agents and interpret insights that shape the future of quality.

Final Thoughts

The shift from manual testing to AI-assisted quality engineering isn’t about replacing people—it’s about amplifying impact.
In 90 days, you can evolve from running repetitive scripts to orchestrating intelligent test agents that elevate your product quality, speed, and innovation.

Role and permission package for Laravel 11/12

2025-11-08 10:07:16

A highly optimized role and permission package for Laravel 11/12 with advanced features including multiple guards, wildcard permissions, super admin, expirable permissions, expirable roles, and Laravel Gate integration.

composer require saeedvir/laravel-permissions

See the documentation

Star This Package

✅ Role-based Access Control (RBAC)
✅ Permission Management
✅ Direct User Permissions
✅ Works with any model

Advanced Features

🚀 Multiple Guards Support - Separate permissions for web, api, admin
🎯 Wildcard Permissions - Use posts.* to grant all post permissions
👑 Super Admin Role - Automatically has ALL permissions
⏰ Expirable Permissions - Set expiration dates on permissions
⏰ Expirable Roles - Set expiration dates on roles

High-performance shopping cart package

2025-11-08 10:01:06

A high-performance shopping cart package for Laravel 11/12 with tax calculation, discounts, coupons, and flexible storage options.

composer require saeedvir/shopping-cart

See the documentation

https://github.com/saeedvir/shopping-cart/

Star this package

✨ Features

Core Features

🛒 Item Management: Easily add, update, and remove items with an intuitive API
🎨 Attributes & Options: Custom attributes for variations (size, color, etc.)
💰 Tax Calculation: Automatic tax application based on configurable rules
🎟️ Discounts & Coupons: Full coupon system with validation and discount codes
💾 Flexible Storage: Session or database storage options
📦 Multiple Instances: Support for cart, wishlist, compare, and custom instances
🎯 Buyable Trait: Add cart functionality directly to your models
💱 Currency Formatting: Built-in currency formatting with helper functions

Performance & Optimization

⚡ Cache::memo() Integration: 99% fewer config lookups
🚀 High Performance: 87% faster than traditional implementations
💨 Memory Efficient: 99% less memory usage with smart data storage
📊 Database Optimized: Indexed queries and bulk operations
🔥 Production Ready: Handles 10,000+ concurrent users
📈 Scalable: Efficiently manages 1000+ item carts