The Practical Developer

A constructive and inclusive social network for software developers.

Struggling with RAG in PHP? Discover Neuron AI components

2026-02-09 17:11:05

Implementing Retrieval-Augmented Generation (RAG) is often the first "wall" PHP developers hit when moving beyond simple chat scripts. While the concept of “giving an LLM access to your own data” is straightforward, the tasks required to make it work reliably in a PHP environment can be frustrating. You have to manage document parsing, vector embeddings, storage in a vector database, and the final prompt orchestration. Most developers end up trying to glue several disparate libraries together, only to find that the resulting system is brittle and hard to maintain.

Neuron was designed to eliminate this friction. It provides a built-in RAG module that handles the heavy lifting of the data pipeline, allowing you to focus on the logic of your agent rather than the mechanics of vector management and similarity search. In a typical scenario, like building a support agent that needs to “read” your company’s internal documentation, you don’t want to manually handle the chunking of text or the API calls to OpenAI’s embedding models. Neuron abstracts these into a fluent workflow where you define a “Data Source,” and the framework ensures the most relevant snippets of information are injected into the agent’s context window at runtime.

Understanding the Foundation: What RAG Really Means

Retrieval Augmented Generation breaks down into three critical components that work in harmony to solve a fundamental problem in AI: how do we give language models access to specific, up-to-date, or proprietary information that wasn’t part of their original training data?

The “G” part of the RAG acronym is straightforward—we’re talking about “Generative” AI models like GPT, Claude, Gemini, or any large language model that can produce human-like text responses. These models are incredibly powerful, but they have a significant limitation: they only know what they were trained on, and that knowledge has a cutoff date. They can’t access your company’s internal documents, your personal notes, or real-time information from your databases.

This is where the “Retrieval Augmented” component becomes transformative. Instead of relying solely on the model’s pre-trained knowledge, we augment its capabilities by retrieving relevant information from external sources at the moment of generation. Think of it as giving your AI agent a research assistant that can instantly find and present relevant context before answering any question.

Below you can see an example of how this process should work:

RAG Architecture in PHP
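
In code terms, the flow looks roughly like the sketch below. This is a framework-agnostic illustration, not Neuron’s API: embed(), searchSimilar(), and generate() are hypothetical stand-ins for your embedding provider, vector store, and LLM client.

// Hypothetical retrieve-then-generate loop. embed(), searchSimilar(),
// and generate() are placeholder functions, not Neuron AI APIs.
function answerWithRag(string $question): string
{
    // 1. Convert the user's question into an embedding vector.
    $queryVector = embed($question);

    // 2. Retrieve the most similar document chunks from the vector store.
    $relevantChunks = searchSimilar($queryVector, topK: 4);

    // 3. Inject the retrieved chunks into the prompt as context.
    $prompt = "Answer using only the context below.\n\n"
        . "Context:\n" . implode("\n---\n", $relevantChunks)
        . "\n\nQuestion: {$question}";

    // 4. Ask the LLM to generate the final answer.
    return generate($prompt);
}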

The Magic Behind Embeddings and Vector Spaces

To understand how retrieval works in practice, we need to dive into embeddings—a concept that initially seems abstract but becomes intuitive once you see it in action. An embedding is essentially a mathematical representation of text, images, or any data converted into a list of numbers called a vector. What makes this powerful is that similar concepts end up with similar vectors, creating a mathematical space where related ideas cluster together.

Vector store clustering

When I first started working with Neuron AI, I was amazed by how this actually works in practice. Imagine you have thousands of documents—customer support tickets, product manuals, internal wikis, research papers. Traditional keyword search would require exact matches or clever Boolean logic to find relevant information. But with embeddings, you can ask a question like “How do I troubleshoot connection issues?” and the system will find documents about network problems, authentication failures, and server timeouts, even if those documents never use the exact phrase “connection issues”.

The process works by converting both your question and all your documents into these mathematical vectors. The system then calculates which document vectors are closest to your question vector in this multi-dimensional space. It’s like having a librarian who understands the meaning and context of your request, not just the literal words you used.
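
To make “closest” concrete, here is a toy PHP example of cosine similarity, one common measure used for this comparison. The actual metric depends on the vector store you choose, and the short vectors below are made up; real embeddings have hundreds or thousands of dimensions.

// Cosine similarity between two equal-length vectors:
// 1.0 means identical direction, ~0.0 means unrelated.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// Two made-up 4-dimensional "embeddings" pointing in a similar direction.
echo cosineSimilarity([0.9, 0.1, 0.3, 0.7], [0.8, 0.2, 0.4, 0.6]); // ~0.99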

You can go deeper into this technology in the article below.

https://inspector.dev/vector-store-ai-agents-beyond-the-traditional-data-storage/

The Challenge of Real RAG Implementations

The conceptual understanding of RAG is one thing; actually building a working system is another challenge entirely. This is where the complexity really emerges, and it’s why Neuron is such a valuable tool for PHP developers entering this space.

The ecosystem involves multiple moving parts: you need to chunk your documents effectively, generate embeddings using appropriate models, store and index those embeddings in a vector database, implement semantic search functionality, and then orchestrate the retrieval and generation process seamlessly.

Neuron AI RAG PHP

Each of these steps involves technical decisions that can significantly impact your agent’s performance (speed, and quality of responses). How do you split long documents into meaningful chunks? Which embedding model works best for your domain? How do you handle updates to your knowledge base? How do you balance retrieval accuracy with response speed? These questions become more pressing when you’re building production systems that need to scale and perform reliably.
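
As an illustration of the first question, here is a deliberately naive fixed-size splitter with overlap. It is only a sketch of the idea; production splitters (including the ones shipped with frameworks like Neuron) typically respect sentence and paragraph boundaries instead of cutting at arbitrary offsets.

// Naive chunking: fixed-size windows with a small overlap, so text cut
// at a boundary still appears intact in at least one chunk.
// $overlap must be smaller than $size or the loop will not advance.
function chunkText(string $text, int $size = 1000, int $overlap = 100): array
{
    $chunks = [];
    $length = strlen($text);

    for ($start = 0; $start < $length; $start += $size - $overlap) {
        $chunks[] = substr($text, $start, $size);
    }

    return $chunks;
}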

In the detailed implementation guide that follows, we’ll explore how Neuron simplifies this complex orchestration, providing PHP developers with tools and patterns that make RAG agent development both accessible and powerful.

Install Neuron AI

To get started, you can install the core framework and the RAG components via Composer:

composer require neuron-core/neuron-ai

Create a RAG Agent

To create a RAG, Neuron provides a dedicated class you can extend to orchestrate the necessary components: the AI provider, the vector store, and the embeddings provider.

First, let’s create the RAG class:

php vendor/bin/neuron make:rag App\\Neuron\\MyRAG

Here is an example of a RAG implementation:

<?php

namespace App\Neuron;

use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;
use NeuronAI\RAG\Embeddings\EmbeddingsProviderInterface;
use NeuronAI\RAG\Embeddings\VoyageEmbeddingsProvider;
use NeuronAI\RAG\RAG;
use NeuronAI\RAG\VectorStore\FileVectorStore;
use NeuronAI\RAG\VectorStore\VectorStoreInterface;

class MyRAG extends RAG
{
    protected function provider(): AIProviderInterface
    {
        return new Anthropic(
            key: 'ANTHROPIC_API_KEY',
            model: 'ANTHROPIC_MODEL',
        );
    }

    protected function embeddings(): EmbeddingsProviderInterface
    {
        return new VoyageEmbeddingsProvider(
            key: 'VOYAGE_API_KEY',
            model: 'VOYAGE_MODEL'
        );
    }

    protected function vectorStore(): VectorStoreInterface
    {
        return new FileVectorStore(
            directory: __DIR__,
            name: 'demo'
        );
    }
}

In the example above we provided the RAG with a connection to:

  • The LLM (Anthropic in this case)
  • The Embeddings provider – the service that transforms text into vector embeddings
  • The vector store to persist the generated embeddings and perform document retrieval

Be sure to provide the appropriate information to connect with these services. You have plenty of options for each of these components: you can use local systems or managed services, so feel free to explore the documentation to choose your preferred ones: https://docs.neuron-ai.dev/components/ai-provider
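
In practice you will want to read these credentials from the application environment rather than hard-coding them. Here is a minimal sketch of the provider() method from the class above, using plain getenv() (the variable names are just a convention):

// Read credentials from the environment instead of hard-coding them.
protected function provider(): AIProviderInterface
{
    return new Anthropic(
        key: getenv('ANTHROPIC_API_KEY'),
        model: getenv('ANTHROPIC_MODEL'),
    );
}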

Feed Your RAG With A Knowledge Base

At this stage the vector store behind our RAG agent is empty. If we send a prompt to the agent, it will be able to respond leveraging only the underlying LLM’s training data.

use NeuronAI\Chat\Messages\UserMessage;

$response = MyRAG::make()
    ->chat(
        new UserMessage('What size is the door handle on our top car model?')
    );

echo $response->getContent();

// I don't really know specifically about your top car model. Do you want to provide me with additional information?

We need to feed the RAG with some knowledge so it can answer questions about private information beyond its default training data.

Neuron AI Data Loader

To build a structured AI application you need the ability to convert all the information you have into text, so you can generate embeddings, save them into a vector store, and then feed them to your agent so it can answer users’ questions.

Neuron has a dedicated module to simplify this process. In order to answer the previous question (What size is the door handle on our top car model?) we can feed the RAG with documents (Markdown files, PDFs, HTML pages, etc.) containing that information.

You can do it in just a few lines of code:

use NeuronAI\RAG\DataLoader\FileDataLoader;

// Use the file data loader component to process documents
$documents = FileDataLoader::for(__DIR__)
        ->addReader('pdf', new \NeuronAI\RAG\DataLoader\PdfReader())
        ->addReader(['html', 'xhtml'], new \NeuronAI\RAG\DataLoader\HtmlReader())
        ->getDocuments();

MyRAG::make()->addDocuments($documents);

As you can see from the example above, you can just point the data loader to a directory containing all the files you want to load into the vector store, and it automatically:

  • Extracts all the text inside the files
  • Chunks the content with a splitting strategy
  • Passes the documents to the RAG to generate embeddings, and finally persists all this information into the vector store

It’s just an example to demonstrate how you can create a complete data pipeline for your agentic application in 5 lines of code. You can learn more about the extensibility and customization opportunities for readers and splitters in the documentation: https://docs.neuron-ai.dev/rag/data-loader

Talk to the chatbot

Assuming you have previously populated the vector store with the knowledge base you want to connect to the RAG agent, you can now ask it questions.

To start the execution of a RAG you call the chat() method:

use App\Neuron\MyRAG;
use NeuronAI\Chat\Messages\UserMessage;

$response = MyRAG::make()->chat(
    new UserMessage('What size is the door handle on our top car model?')
);

echo $response->getContent();

// Based on 2025 sales results, the top car model in your catalog is XXX...

Monitoring & Debugging

Many of the Agents you build with NeuronAI will contain multiple steps with multiple invocations of LLM calls, tool usage, access to external memories, etc. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly your agent is doing and why.

Why is the model taking certain decisions? What data is the model reacting to?

The Inspector team designed Neuron AI with built-in observability features, so you can monitor how your AI agents are running, helping you maintain production-grade implementations with confidence.

To start monitoring your agentic systems you need to add the INSPECTOR_INGESTION_KEY variable in your application environment file. Authenticate on app.inspector.dev to create a new one.

INSPECTOR_INGESTION_KEY=nwse877auxxxxxxxxxxxxxxxxxxxxxxxxxxxx

When your agents are being executed, you will see the details of their internal steps on the Inspector dashboard.

Inspector monitoring Neuron AI

Moving Forward

The complexity of orchestrating embeddings, vector databases, and language models might seem a bit daunting, but remember that every expert was once a beginner wrestling with these same concepts.

The next step is to dive into the practical implementation. The Neuron AI framework is designed specifically to bridge the gap between RAG theory and production-ready agents, handling the complex integrations while giving you the flexibility to customize the behavior for your specific use case. Start building your first RAG agent today and discover how powerful context-aware AI can transform your applications.

Multi-Line Editing in edittrack: Work with Multiple Routes Simultaneously

2026-02-09 17:05:10

Planning complex routes often requires more than a single track. Whether you're mapping a multi-day trek, comparing alternative paths, or managing a network of interconnected trails, keeping all your routes in one workspace lets you see the full picture and maintain context while editing each part independently.

Starting with version 2.0-beta, edittrack introduces multi-line editing—the ability to create, edit, and manage multiple independent tracks within the same TrackManager. This post walks through the feature, its real-world applications, and how to work with it.

What multi-line editing offers

Multi-line editing lets you work with multiple independent line strings—called "parts"—in a single edittrack workspace. Each part is a complete track with its own control points, segments, POIs, and routing configuration. You can:

  • Create as many parts as you need
  • Switch between parts to edit them individually
  • See all parts simultaneously for context
  • Style active and inactive parts differently for visual clarity
  • Delete parts you no longer need

When you draw, move points, or add POIs, only the currently active part is affected. Other parts remain untouched, letting you refine one route without disrupting others.

Real-world use cases

Multi-day trips

Break a long journey into daily segments. Each day becomes its own part, so you can edit day 3 without accidentally moving points from day 1. All days remain visible on the map, giving you a sense of the full itinerary while you tweak individual stages.

Alternative routes

Present options to clients or collaborators by drawing multiple variations side-by-side:

  • Scenic route vs. fastest route
  • Technical mountain bike trail vs. beginner-friendly path
  • Main highway vs. scenic backroads

Compare distances, elevation profiles, and surface types across alternatives before committing to one.

Route networks

Build interconnected trail systems where a main route has optional side trips:

  • A primary hiking trail with viewpoint spurs
  • Different starting points converging at a common destination
  • Loop routes with shortcut options

Each branch is its own part, making it easy to modify without tangling the entire network.

Activity segments

Separate parts based on how you'll travel:

  • Walk to the trailhead (part 0), bike the main trail (part 1), drive home (part 2)
  • Approach routes vs. main climbing routes
  • Paved access roads vs. off-road trails

Each part can use different routing profiles or snapping configurations if needed.

Version control and experimentation

Keep your original route as part 0, then create part 1 for an optimized version. Compare them directly on the map. If the experiment doesn't work, simply delete the new part or switch back to the original.

Working with multiple parts: the API

If you're familiar with edittrack from the first blog post, working with multiple parts builds naturally on what you already know. The key difference: instead of managing one track, you now manage multiple parts and switch between them.

Creating new parts

Start with an existing TrackManager instance. Call createNewPart() to add a new empty part:

const partIndex = trackManager.createNewPart();
// Returns the index (0, 1, 2, etc.) and automatically switches to that part

The new part becomes the active part, and you can immediately start drawing on it. The first part you create is always part 0.

Switching between parts

To edit a different part, use workOnPart():

trackManager.workOnPart(1); // Switch to part 1

After switching, all drawing, point manipulation, and POI operations affect only that part. To check which part is currently active:

const currentPart = trackManager.activePart();
console.log(`Editing part ${currentPart}`);

Managing parts

Find out how many parts you have:

const total = trackManager.partsCount();

Delete a part you no longer need:

trackManager.deletePart(2); // Remove part 2

Deleting a part removes all its features—control points, segments, and POIs—and renumbers subsequent parts to maintain a sequential index. If you delete the active part, edittrack automatically switches to another available part.
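
For example, assuming three parts exist (parts 0, 1, and 2), deleting the middle one renumbers the last:

// Three parts exist: 0, 1, 2. Deleting part 1 renumbers the old part 2 to 1.
trackManager.deletePart(1);
console.log(trackManager.partsCount()); // 2, and the remaining parts are 0 and 1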

Automatic switching on drag

By default, edittrack lets you grab features from any part and automatically switches to that part when you start dragging. This makes quick edits intuitive: just grab a point from part 2, and edittrack switches context so you can move it.

To disable this behavior (e.g., to prevent accidental edits to inactive parts):

trackManager.switchPartOnDrag = false;

With this set to false, dragging features from inactive parts is blocked entirely, enforcing strict isolation between parts.

Getting data from all parts

The familiar methods—getSegments(), getControlPoints(), and getPOIs()—return data only from the currently active part. If you need data from all parts at once, use the "all" variants:

const allSegments = trackManager.getAllSegments(); // Returns Feature[][]
const allControlPoints = trackManager.getAllControlPoints();
const allPOIs = trackManager.getAllPOIs();

Each of these returns a nested array: one sub-array per part. For example, allSegments[0] contains segments from part 0, allSegments[1] contains segments from part 1, and so on.

Iterating through parts

To process each part individually without manually switching back and forth, use the partsGenerator() method:

for (const {index, trackData} of trackManager.partsGenerator()) {
  console.log(`Part ${index} has ${trackData.segments.length} segments`);
  // Process trackData.segments, trackData.controlPoints, trackData.pois
}

The generator automatically restores the original active part when the loop completes, so you don't have to track state manually.
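
For comparison, here is a sketch of the manual equivalent using only the calls shown earlier: save the active part, visit each one, then restore the original selection yourself.

// Manual equivalent of partsGenerator(): remember the active part,
// visit each part in turn, then restore the original selection.
const originalPart = trackManager.activePart();
for (let i = 0; i < trackManager.partsCount(); i++) {
  trackManager.workOnPart(i);
  console.log(`Part ${i} has ${trackManager.getSegments().length} segments`);
}
trackManager.workOnPart(originalPart);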

History management

Undo and redo now track both features and the active part. When you undo, edittrack restores not just the previous geometry and points, but also which part you were editing:

await trackManager.undo();
await trackManager.redo();

This means you can switch parts, make edits, switch again, and undo your way back through the entire sequence—edittrack remembers which part was active at each step.

Visual feedback and user experience

To help users distinguish between parts, edittrack sets an active property on every feature. This boolean indicates whether the feature belongs to the currently active part. You can use it in your styles to make active parts stand out:

const trackLineStyle = {
  "stroke-width": 6,
  "stroke-color": ["case",
    ["==", ["get", "active"], false],
    "#00883c80",  // Semi-transparent green for inactive parts
    "#00883cff"   // Opaque green for active parts
  ],
  "text-value": ["concat", "", ["get", "part"]], // Display part number as text
  "text-fill-color": "#fff",
};

Similarly, control points can use conditional styling:

const controlPointStyle = {
  "circle-fill-color": ["case",
    ["==", ["get", "active"], false],
    "#0071ec80",  // Semi-transparent blue for inactive
    "#0071ecff"   // Opaque blue for active
  ],
};

The part property stores the numeric index (0, 1, 2, etc.) of each feature's part. Displaying this number as a label on the map helps users keep track of which part is which, especially when working with many parts.

Try it yourself

The live demo now includes three new controls:

  • Add a new line string: creates a new empty part
  • Change active line string: cycles through existing parts
  • Delete active line string: removes the currently active part

To experiment:

  1. Draw a route on part 0 by clicking points on the map
  2. Click "Add a new line string" to create part 1
  3. Draw a second route—notice the first route becomes semi-transparent
  4. Click "Change active line string" to switch back to part 0 and modify it
  5. Try dragging points from inactive parts to see automatic switching in action

The demo uses the conditional styling shown above, so you'll see clear visual distinction between active and inactive parts.

License

BSD-3-Clause

The Silence Weapon: When bad news stops flowing upward

2026-02-09 17:01:00

The Corporate Breakdown Files — Episode 2

The Cost of Speaking

Episode 1 showed how organizations redesign incentives so that appearing successful becomes more valuable than being accurate.

But appearances do not maintain themselves.

They require protection.

This episode examines the inevitable second-order effect of incentive collapse: a system where silence is no longer a communication failure, but a rational survival strategy.

When truth becomes expensive, quiet becomes professional.

Silence Is Engineered

An organization’s health is not revealed by what is said in its boardrooms, but by what cannot be said in its hallways.

The Silence Weapon is not cultural drift.
It is not fear.
It is not individual weakness.

It is the predictable output of a system that has made truth-telling the highest-risk behavior available.

Silence does not emerge accidentally.
It is trained, reinforced, and eventually automated.

Anatomy of the Silence Weapon

Silence does not arrive all at once.
It assembles itself through rational adaptations—each one defensible, each one survivable.

Phase 1 — Signal Inversion

The First Punished Messenger

Every silence culture has an origin.

Usually, it starts with a meeting.

A routine review. Slides are green. Timelines confident. Then one contributor deviates—calmly, factually—by naming a concrete risk. Not speculative. Documented. Structural.

The response is immediate and polite.

Questions move away from the issue and toward the messenger.
Is this aligned?
Is this the right forum?
Could this have been handled differently?

No one disputes the data.
The discomfort lies elsewhere.

Afterward, there is no reprimand. Only distance.
Invitations stop arriving. Feedback shifts toward tone, timing, stakeholder sensitivity.

The lesson spreads faster than any memo.

The system has inverted the signal:
the problem is no longer the risk — it is the person who surfaced it.

Illustration of a meeting where truthful signals are distorted before reaching leadership. The first truth is not ignored. It is inverted. (Gemini-generated image)

Phase 2 — Self-Censorship Optimization

Message Softening

No rule is announced. None is needed.

Engineers adapt.

The next report still contains the issue—but transformed.
“Failure mode” becomes instability.
“Unrecoverable” becomes requires observation.
“Will block delivery” becomes may impact timing.

Nothing is false. Everything is diluted.

Drafts circulate quietly before submission. Language is adjusted to reduce exposure, not increase clarity.

“Let’s soften this.”
“Maybe phrase it more constructively.”
“We don’t want surprises.”

This is not deception.
It is optimization.

Truth remains present, but stripped of urgency, agency, and consequence.

Phase 3 — Escalation Theater

The Ritual of Risk

Eventually, risks reach leadership.

By then, they are ceremonial.

There is a forum. A template. Red items are allowed—within boundaries. Each requires an owner, a mitigation, and a confidence indicator.

The same risks return week after week.

Statuses rotate:
In progress.
Under clarification.
Monitoring.

No decision is taken that would challenge the plan itself.

Leadership sees evidence of control.
Teams see that escalation changes nothing.

Documentation replaces resolution.

Failure, when it arrives, will appear sudden—despite having been reviewed repeatedly.

Illustration of leadership applauding risk reviews while structural damage spreads unseen. Escalation without consequence is theater. (Gemini-generated image)

Phase 4 — Silent Alignment

When Complicity Becomes Culture

At this stage, silence no longer requires enforcement.

It becomes ambient.

People still talk—but not upward.
In corridors. In private chats. In glances exchanged after meetings.

Everyone knows.

They know the deadline is unattainable.
They know the architecture is brittle.
They know the dashboard is cosmetic.

They also know the cost of being the next messenger.

Each individual makes the same rational calculation:
speaking changes nothing and endangers everything.

Alignment is achieved—not around truth, but around restraint.

Illustration of employees outwardly aligned while suppressing unspoken concerns. Silence doesn’t mean agreement. It means calculation. (Gemini-generated image)

What the Silence Weapon Actually Does

The Silence Weapon does not hide failure.
It delays it.

By the time reality breaks through—via customers, regulators, or the market—it arrives fully formed and impossible to contextualize.

Executives are surprised.
Investigations are commissioned.
Lessons are promised.

The system survives intact.

Silence is not the absence of information.
It is the presence of a filter optimized for career preservation.

Bridge: The Need for Substitutes

Once silence dominates, the organization faces a new problem.

Decisions still require justification.

If truth can no longer travel upward, something else must take its place.

That substitute is Process.

Checklists.
Gates.
Compliance rituals.

Not to guide decisions—but to replace them.

Call to Action

If you’ve worked inside a system where silence felt safer than accuracy, you’re not alone—and you’re not imagining it.

This series exists to name the patterns professionals learn to survive but are never taught to recognize.

Follow the series.
The next fracture explains how silence is formalized.

Next: Episode 3 — The Process Illusion
When documentation replaces decisions—and failure gains an alibi.

🔎 The Corporate Breakdown Files — Full Series Overview

  • Prologue — Power Without Accountability: How Modern Corporations Create Their Own Failures
  • Prequel — The Blind Spot: Why Companies Collapse While Leaders Celebrate
  • Episode 1 — The Incentive Collapse
  • Episode 2 — The Silence Weapon
  • Episode 3 — The Process Illusion
  • Episode 4 — Deniability Engineering
  • Episode 5 — The Metrics Mirage
  • Episode 6 — Narrative Control
  • Episode 7 — The Gatekeeper Class
  • Episode 8 — Quiet Exits, Quiet Collapse
  • Episode 9 — The Conflict Vacuum
  • Episode 10 — Silo Warfare
  • Episode 11 — The Snap Moment
  • Episode 12 — Rebirth or Rot
  • Episode 13 — Scapegoat Economics

👉 New episodes released as the real-world case evolves.

🔖 Follow this series for real-world patterns of corporate dysfunction — and how to survive them.

© 2026 Abdul Osman. All rights reserved. You are welcome to share the link to this article on social media or other platforms. However, reproducing the full text or republishing it elsewhere without permission is prohibited.

Burnout or Depression? You’re Not Broken—You’re Trying to Survive 🔥😔

2026-02-09 16:57:57

Some days you’re just tired.
Other days you feel numb, detached, or empty.

And eventually, the question creeps in:

“Am I burned out… or am I depressed?”

They can feel identical—but they’re not. And knowing the difference can change everything about how you heal.

At NVelUp, mental health professionals work with people across Washington, Idaho, New Mexico, and Utah to uncover what’s really happening beneath chronic exhaustion—and how to move forward with clarity 🌱.

Why Burnout and Depression Feel the Same (At First)

Burnout and depression often overlap, especially in the beginning.

You might notice:

  • Low energy that never seems to return
  • Trouble focusing
  • Emotional withdrawal
  • Irritability or quiet anger
  • Feeling “off,” but not knowing why

🧠 New insight: Studies show chronic stress and depression both disrupt the brain’s reward system—meaning motivation and pleasure shut down in very similar ways. That’s why burnout can feel like depression even when it starts externally.

But the why matters.

Burnout: When Your Life Asks Too Much for Too Long 🔥

Burnout happens when stress becomes constant—work pressure, caregiving, emotional labor, or simply surviving without rest.

Signs of burnout often include:

  • Feeling emotionally drained or overwhelmed
  • Detachment or cynicism toward responsibilities
  • Lower productivity or creativity
  • Irritability and impatience
  • Feeling better after rest, weekends, or time away

Burnout is usually situational. Change the conditions, and symptoms often ease.

Depression: When the Weight Doesn’t Lift 😔

Depression goes deeper. It’s not just about stress—it affects mood, thinking, body, and identity.

Common signs include:

  • Persistent sadness or emptiness
  • Loss of interest in things you once loved
  • Guilt or feeling like a burden
  • Sleep or appetite changes
  • Fatigue that doesn’t improve with rest
  • Difficulty feeling hope or joy

💡 New insight: Depression can persist even when life “looks fine” on the outside—because it’s driven by internal neurochemical and emotional processes, not just circumstances.

Burnout vs. Depression (In Real Life Terms)

Burnout says: “I can’t keep doing this.”

Depression says: “I don’t see the point anymore.”

Burnout improves with rest.
Depression often doesn’t.

If symptoms spill into every part of life—or last beyond a few weeks—it may be more than burnout 🚨.

When Burnout Turns Into Depression

Unchecked burnout can slowly evolve into depression.

🧬 New insight: Long-term stress keeps cortisol levels elevated, which disrupts sleep, hormones, and mood regulation—raising the risk of depression, anxiety, and emotional numbness.

This is why many people experience both—and need care that looks at the whole picture.

Sometimes It’s Something Else (or Multiple Things)

Burnout or depression symptoms can overlap with:

  • Anxiety disorders
  • Adult ADHD
  • PTSD or unresolved trauma
  • Bipolar or mood disorders
  • Hormonal imbalances or chronic fatigue

That’s why a professional evaluation matters—it brings clarity instead of guesswork.

What Real Support Can Look Like 🩺

At NVelUp, care starts with understanding—not assumptions.

Support may include:

  • Psychiatric evaluation for accurate diagnosis
  • Medication management when needed
  • Therapy to process stress and emotional patterns
  • Naturopathic care for adrenal & hormonal health
  • Nutrition guidance to restore energy
  • Movement and fitness support to regulate stress

This integrated approach helps ensure burnout isn’t mistaken for depression—or overlooked entirely.

🔗 Learn more about whole-person mental health care at https://nvelup.com

If This Feels Familiar, You’re Not Weak 🌤️

Burnout means your environment needs care.
Depression means your inner system needs support.

Both are real.
Both are treatable.
And neither is a personal failure.

Ready for Clarity, Not Just Coping? 🌿

If you’re unsure whether you’re burned out, depressed, or somewhere in between—you don’t have to figure it out alone.

👉 Explore compassionate, professional mental health care at https://nvelup.com

Support exists. Healing is possible. And you don’t have to rush 🌱.

Building a Windows Productivity Tool Using GitHub Copilot CLI

2026-02-09 16:56:19

This is a submission for the GitHub Copilot CLI Challenge

What I Built

I built a small productivity tool called PowerShell Helper.

While working on Windows, I often switch between Bash-enabled terminals and PowerShell. I keep missing the common Bash commands that I use daily, and I usually end up either installing Bash, Googling the PowerShell equivalent, or asking an AI tool questions like:

“How do I curl a URL in PowerShell?”

This constant context switching breaks flow.

So I built PowerShell Helper — a lightweight CLI tool that lets me describe what I want in plain English and instantly get the correct PowerShell command along with a simple explanation.

It’s a small tool, but it solves a real daily pain point for me.

Demo

GitHub repository:

https://github.com/amarpreetbhatia/powershell-helper

The tool runs directly from the command prompt or PowerShell.

Example usage:

  • Convert a natural language task into a PowerShell command
  • See a short explanation so you actually learn the command instead of just copying it

My Experience with GitHub Copilot CLI

This was the most impressive part of the project.

I built this entire tool in about 15 minutes, without opening any IDE.

Everything was done using a simple command prompt and GitHub Copilot CLI.

Copilot CLI helped me:

  • Generate the initial PowerShell script structure
  • Refine the command output to be clear and beginner-friendly
  • Add small safety and usability improvements
  • Quickly draft documentation without overthinking it

What I really liked is how Copilot CLI fits naturally into the terminal workflow. Instead of switching between editor, browser, and AI chat, I stayed in the CLI the whole time.

This project showed me how Copilot CLI isn’t just for writing code — it’s a powerful way to think, build, and ship small productivity tools very quickly.

I’ve just launched a public testing version of Vendly

2026-02-09 16:50:42

I’ve just launched a public testing version of Vendly
---> vendly.pk

Vendly is an e-commerce platform I’m building to help small businesses and non-technical sellers go online without the complexity and cost of traditional tools.

Important note:
This is NOT the final MVP.
This is a testing and validation build.

The purpose of this release is to:
• Understand real demand
• Identify user pain points
• Learn what features sellers actually want (not what we assume)
• Test UX, flows, and positioning

Too many platforms today are:
• Overloaded with plugins
• Expensive from day one
• Hard to set up without technical knowledge

Vendly’s long-term goal is simple:
Give businesses a clean, affordable foundation to grow online — without distractions.

I’m actively looking for:
• Feedback from founders & builders
• Insights from online sellers
• Feature requests from real users

If you’ve ever built, sold, or struggled with e-commerce tools, I’d love to hear your thoughts.

Building in public. Learning fast. Improving faster.
🚧 Feedback > Perfection