
NFT Community Hub - A Read-Only Safe Space for NFT Communities

2026-01-04 04:54:03

This is a submission for DEV's Worldwide Show and Tell Challenge, presented by Mux

What I Built

NFT Community Hub is a community-first web platform built for my Facebook group with 3K+ NFT collectors, artists, and creators. Unlike marketplaces that require risky wallet connections, we keep every interaction strictly read-only - zero risk, maximum safety. Artists get free exposure, collectors discover quality NFTs safely, and our community grows through collaboration, not fear.

Key Features:

  • Read-only wallet verification (no transfers possible)
  • Gallery for NFT showcases
  • Community-curated collections
  • Proposal board with governance voting
  • Learning resources and security guides
  • Admin moderation system

My Pitch Video

Demo

Live Demo: https://nftcommunity.vercel.app/

Testing Instructions:

Option 1: Quick Browse (No Login Required)

  1. Visit: https://nftcommunity.vercel.app/
  2. Navigate to "Gallery" - see NFT showcases
  3. Navigate to "Collections" - browse curated collections
  4. Navigate to "Proposals" - view community proposals
  5. Navigate to "Learning" - access educational resources
  6. Navigate to "Security" - view security guides

Option 2: Full Experience (With Wallet - Recommended)

  1. Visit: https://nftcommunity.vercel.app/
  2. Click "Connect Wallet" (optional but recommended)
  3. Sign message - read-only, completely safe (no transactions possible)
  4. Browse features with full access
  5. Test gallery, collections, proposals

Important Notes:

  • All wallet interactions are read-only - no transfers are possible
  • No funds are at risk - you're only signing a message
  • You can disconnect anytime
  • All features work without wallet connection (limited functionality)

GitHub Repository: [Not shared publicly - code available upon request]

The Story Behind It

I run a Facebook group (https://www.facebook.com/groups/173861991846648) with 3K+ NFT community members who were struggling with:

  • No safe way to showcase NFTs - Every platform required risky wallet connections
  • Fear of scams - Wallet hacks and phishing attacks are everywhere
  • Need for free exposure - Artists can't afford marketplace fees
  • Lack of community curation - No trusted way to discover quality NFTs

There's a massive gap in the NFT space for a read-only, community-first platform that prioritizes safety and education over trading. With 3K+ active community members and growing, there's clear demand for a safe space where artists can showcase, collectors can discover, and communities can collaborate without wallet risk.

What Makes It Special:

  • First read-only NFT platform - No one else offers zero-risk wallet connection
  • Community-first approach - Built for my Facebook group, by the community
  • Free for artists - No marketplace fees unlike OpenSea/Rarible
  • Safety-first design - Education and security built-in
  • Governance system - Community votes on features

Technical Highlights

Tech Stack:

  • Frontend: Next.js 16 (App Router) with TypeScript, Material UI, Framer Motion
  • Wallet Integration: wagmi + viem (read-only signatures only - no eth_sendTransaction calls)
  • Backend: Next.js API Routes, MongoDB Atlas, Cloudflare R2
  • APIs: Reservoir API for NFT data, WalletConnect v2 for multi-wallet support

Technical Approach:

  • Read-only wallet verification - Nonce-based authentication, signature-only flow (see the sketch after this list)
  • On-chain ownership verification - Server-side checks using Reservoir API
  • Community governance - Wallet-based voting system with MongoDB persistence
  • Admin moderation - Role-based access control with moderation logs
  • Scalable architecture - Serverless-friendly, MongoDB for data, R2 for media
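
To make the signature-only flow concrete, here is a minimal TypeScript sketch of nonce-based, read-only verification. It assumes viem's verifyMessage helper; the nonce store, message format, and function names are illustrative, not the platform's actual code.

import { verifyMessage } from "viem";

// Hypothetical in-memory nonce store (a real app would persist this, e.g. in MongoDB).
const nonces = new Map<string, string>();

export function issueNonce(address: string): string {
  const nonce = crypto.randomUUID();
  nonces.set(address.toLowerCase(), nonce);
  return nonce; // the wallet signs `Login nonce: ${nonce}` - a message, never a transaction
}

export async function verifyLogin(
  address: `0x${string}`,
  signature: `0x${string}`,
): Promise<boolean> {
  const nonce = nonces.get(address.toLowerCase());
  if (!nonce) return false;
  // Signature check only: nothing in this flow can build or send a transaction.
  const ok = await verifyMessage({
    address,
    message: `Login nonce: ${nonce}`,
    signature,
  });
  if (ok) nonces.delete(address.toLowerCase()); // one-time use prevents replay
  return ok;
}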

What Makes It Unique Technically:

  • First platform to implement truly read-only wallet connections (no transaction methods)
  • Community-driven curation with on-chain verification
  • Safety-first architecture with education built-in

Impact:

  • 3K+ Facebook group members actively using the platform
  • Zero wallet incidents - read-only approach works
  • Growing submissions - artists finding value in free exposure
  • Active governance - community proposals driving feature development

Core Features (Fully Working):

  • ✅ Read-only wallet verification
  • ✅ Gallery with NFT showcases
  • ✅ Collections browsing (curated)
  • ✅ Proposal board with voting
  • ✅ Admin moderation system
  • ✅ Learning & security pages
  • ✅ Profile system

Features In Progress:

  • 🚧 Watchlist alerts (API ready)
  • 🚧 Real-time notifications
  • 🚧 Advanced analytics dashboard

By submitting this project, I confirm that my video adheres to Mux's terms of service: https://www.mux.com/terms

Built with ❤️ for the NFT community. Safety first, always read-only.

Laravel + Vue (Inertia) + gRPC: building a simple BFF that talks to a gRPC User service

2026-01-04 04:49:52

Why gRPC in a Laravel + Vue project?

If you’re building a modern Laravel + Vue app, your default instinct is usually REST/JSON.

gRPC is different: you define your API as a contract in a .proto file, then generate strongly-typed client/server code (stubs). It runs over HTTP/2 and uses Protocol Buffers for compact serialization.

But there’s a catch for browser apps: a Vue app in the browser can’t just “talk gRPC” directly without extra infrastructure (gRPC-Web + a proxy such as Envoy). So a practical pattern is:

Vue (browser) -> Laravel (HTTP/JSON) -> gRPC microservice (internal)

That Laravel layer becomes your BFF (Backend For Frontend).

This repo demonstrates exactly that.

Architecture (what we built)

Goal: Display a user on a Vue page, but fetch it through Laravel, which calls a gRPC server.

Flow:

  1. Vue page (UserShow.vue) calls GET /api/users/{id}
  2. Laravel API route calls App\Services\UserGrpc
  3. UserGrpc uses the generated gRPC client to call UserService.GetUser
  4. A Node gRPC server returns user data (for demo purposes, reading the same SQLite DB Laravel uses)

Folder layout highlights:

  • proto/user/v1/user.proto → the contract
  • generated/App/Grpc/... → generated PHP stubs (client classes)
  • grpc-server/ → demo gRPC server (Node)
  • app/Services/UserGrpc.php → Laravel gRPC client wrapper
  • resources/js/Pages/UserShow.vue → Vue/Inertia page

Step 1 — Scaffold Laravel + Vue (Inertia)

For a fresh project, Laravel’s Vue starter kit (Inertia) is a great base.

Step 2 — Define the gRPC contract (.proto)

Create:

proto/user/v1/user.proto

Example contract:

syntax = "proto3";

package user.v1;

service UserService {
  rpc GetUser (GetUserRequest) returns (GetUserResponse);
}

message GetUserRequest {
  string id = 1;
}

message GetUserResponse {
  string id = 1;
  string name = 2;
}

This .proto file is the single source of truth.

Step 3 — Generate PHP gRPC client code

Once you have the .proto, you generate client classes for PHP.

Using Buf for generation

Buf is a tool that standardizes Protobuf workflows and generation config (buf.yaml, buf.gen.yaml).

npx buf generate

This repo keeps generated files under:

generated/App/Grpc/...
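
A buf.gen.yaml along these lines would produce that layout. Treat it as a sketch: the remote plugin names come from the Buf Schema Registry, and the actual repo's options (e.g. PHP namespace mapping) may differ.

version: v2
plugins:
  - remote: buf.build/protocolbuffers/php
    out: generated
  - remote: buf.build/grpc/php
    out: generated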

Tip: Many teams do NOT commit generated code (they generate in CI). But committing it is OK for demos and fast setup.

Step 4 — Implement a gRPC Server (Node demo)

Important: generating PHP client stubs does NOT create a server.

You still need a gRPC server implementation in some language (Go, Node, Java, PHP, etc.).

In this repo, the demo server is under grpc-server/ and is started with Node.

Install deps:

cd grpc-server
npm install

If you see errors like:

  • Cannot find module 'dotenv'
  • Cannot find module 'better-sqlite3'

That just means you haven’t installed the packages yet:

npm install dotenv better-sqlite3 @grpc/grpc-js @grpc/proto-loader

A minimal server.js shape looks like:

require("dotenv").config();
const grpc = require("@grpc/grpc-js");
const protoLoader = require("@grpc/proto-loader");

const PROTO_PATH = __dirname + "/../proto/user/v1/user.proto";
const packageDef = protoLoader.loadSync(PROTO_PATH, {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
});

const userProto = grpc.loadPackageDefinition(packageDef).user.v1;

// Demo handler
function GetUser(call, callback) {
  const { id } = call.request;

  // For a minimal demo:
  callback(null, { id, name: `User #${id}` });
}

function main() {
  const server = new grpc.Server();
  server.addService(userProto.UserService.service, { GetUser });

  const addr = process.env.GRPC_ADDR || "0.0.0.0:50051";
  server.bindAsync(addr, grpc.ServerCredentials.createInsecure(), (err) => {
    if (err) throw err; // e.g. address already in use
    console.log(`gRPC server listening on ${addr}`);
    server.start(); // deprecated no-op on @grpc/grpc-js >= 1.10 (auto-start); required on older versions
  });
}

main();

Start the gRPC server:

node server.js

If Laravel says Connection refused, it simply means the gRPC server isn’t running or isn’t listening on the expected address/port.

Step 5 — Laravel gRPC client wrapper (UserGrpc)

In Laravel, we wrap generated gRPC calls in a service class:

app/Services/UserGrpc.php

Important fix: return values must come from $resp, not from $req.

<?php

declare(strict_types=1);

namespace App\Services;

use App\Grpc\User\V1\GetUserRequest;
use App\Grpc\User\V1\UserServiceClient;
use Grpc\ChannelCredentials;
use const Grpc\STATUS_OK;

class UserGrpc
{
    private UserServiceClient $client;

    public function __construct()
    {
        $this->client = new UserServiceClient(
            env('USER_SVC_ADDR', '127.0.0.1:50051'),
            ['credentials' => ChannelCredentials::createInsecure()]
        );
    }

    public function getUser(string $id): array
    {
        $req = new GetUserRequest();
        $req->setId($id);

        [$resp, $status] = $this->client->GetUser($req)->wait();

        if ($status->code !== STATUS_OK) {
            throw new \RuntimeException($status->details, $status->code);
        }

        return [
            'id' => $resp->getId(),
            'name' => $resp->getName(),
        ];
    }
}

If Grpc\ChannelCredentials is “not found”

That usually means the gRPC PHP extension is not enabled for the PHP you’re running (CLI vs FPM can differ).

Quick checks:

php -m | grep grpc
php --ini

And enable extension=grpc.so in the correct php.ini (the one shown by php --ini).

Step 6 — Expose an API endpoint in Laravel

Because the browser shouldn’t call gRPC directly, we expose a classic JSON API endpoint:

routes/api.php:

use Illuminate\Support\Facades\Route;
use App\Services\UserGrpc;

Route::get('/users/{id}', function (string $id, UserGrpc $userGrpc) {
    return response()->json($userGrpc->getUser($id));
});

Step 7 — Render a Vue page with Inertia

Your normal Inertia “page route” stays in routes/web.php:

use Illuminate\Support\Facades\Route;
use Inertia\Inertia;

Route::get('/users/{id}', function (string $id) {
    return Inertia::render('UserShow', [
        'id' => $id,
    ]);
});

Then your Vue page (resources/js/Pages/UserShow.vue) can fetch from /api/users/{id}.

A clean Inertia-friendly version uses props (no Vue Router required):

<script setup lang="ts">
import { computed, onMounted, ref } from "vue";

type User = { id: string; name: string };

const props = defineProps<{ id: string }>();
const id = computed(() => props.id);

const loading = ref(true);
const error = ref<string | null>(null);
const user = ref<User>({ id: "", name: "" });

onMounted(async () => {
  try {
    const res = await fetch(`/api/users/${encodeURIComponent(id.value)}`);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    user.value = await res.json();
  } catch (e: any) {
    error.value = e?.message ?? "Unknown error";
  } finally {
    loading.value = false;
  }
});
</script>

<template>
  <div class="p-4">
    <h1 class="text-xl font-semibold">User</h1>

    <p v-if="loading">Loading...</p>
    <p v-else-if="error" class="text-red-600">{{ error }}</p>

    <div v-else class="mt-3">
      <div><b>ID:</b> {{ user.id }}</div>
      <div><b>Name:</b> {{ user.name }}</div>
    </div>
  </div>
</template>

Now you can open:

  • http://127.0.0.1:8000/users/1

…and it will load user data through the full chain:
Vue → Laravel API → gRPC client → gRPC server.

Step 8 — Seed fake users

Since you already have Laravel’s DB, the easiest approach is:

  1. Create a seeder (or factory)
  2. Run migrate + seed

Example:

php artisan migrate --seed

If your gRPC server reads the same SQLite DB file, it can return real seeded rows.
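
The demo handler shown earlier fabricates a name; to serve seeded rows instead, the Node server can read Laravel's SQLite file directly. A minimal sketch using better-sqlite3, assuming Laravel's default database/database.sqlite and users table (grpc is the same @grpc/grpc-js import as in server.js):

// Serve seeded Laravel users from the same SQLite file (read-only).
const Database = require("better-sqlite3");
const db = new Database(__dirname + "/../database/database.sqlite", {
  readonly: true,
});

function GetUser(call, callback) {
  const { id } = call.request;
  const row = db.prepare("SELECT id, name FROM users WHERE id = ?").get(id);

  if (!row) {
    return callback({
      code: grpc.status.NOT_FOUND,
      details: `User ${id} not found`,
    });
  }

  // The proto declares string fields, so coerce the numeric primary key.
  callback(null, { id: String(row.id), name: row.name });
}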

Run everything (local)

Terminal 1 — gRPC server:

cd grpc-server
node server.js

Terminal 2 — Laravel + Vite:

composer install
cp .env.example .env
php artisan key:generate

npm install
npm run dev

php artisan serve

When is this pattern worth it?

Use this setup when:

  • you want typed contracts between internal services
  • you have multiple backends in different languages
  • you want a stable interface (proto) shared across teams
  • you want performance and streaming features (gRPC supports streaming)

If your app is a small monolith and a public-facing API is your only concern, REST might still be simpler.

Source: https://github.com/VincentCapek/laravel-vue-grpc-bff

Memento: Give Claude Code Persistent Memory So You Stop Repeating Yourself

2026-01-04 04:46:50

If you use Claude Code, you've probably had this experience:

You: Install the dependencies
Claude: npm install
You: No, use pnpm in this project
Claude: pnpm install

Next session? Same thing. Claude has no memory between conversations.

The Problem

Claude Code reads CLAUDE.md files at the start of each session - that's how it learns your project conventions. But manually maintaining these files is tedious, and you often forget to document the corrections you make during coding sessions.

The Solution: Memento

I built Memento - a Claude Code command that analyzes your conversations and extracts actionable insights automatically.

Named after the Christopher Nolan film where the protagonist leaves notes for his future self (because he can't form new memories), Memento helps you leave notes for future Claude sessions.

How It Works

At the end of any coding session, run:

/memento

Memento will:

  1. Analyze your conversation for corrections, preferences, and learnings
  2. Filter for actionable insights (not vague observations)
  3. Categorize into project-specific vs. personal preferences
  4. Present suggestions for you to approve

You select which ones to keep, and they're appended to the appropriate CLAUDE.md file.
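
For illustration, approved suggestions might land in CLAUDE.md as plain bullets like these (the exact wording Memento writes is an assumption):

- Use pnpm, not npm, in this project
- Tests live in __tests__/ folders
- Always show the command before running it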

What Makes a Good Suggestion?

Memento focuses on actionable insights:

| ✅ Actionable | ❌ Not Actionable |
| --- | --- |
| "Use pnpm, not npm" | "Check package manager" |
| "Tests are in __tests__/" | "This was a good session" |
| "Always show command before running" | "Be more careful" |

Installation

One-liner:

mkdir -p ~/.claude/commands && curl -fsSL https://raw.githubusercontent.com/SeanZoR/claude-memento/main/.claude/commands/memento.md -o ~/.claude/commands/memento.md

That's it! The command is now available in all your Claude Code sessions.

Try It Out

Give it a spin and let me know what you think! Issues and PRs welcome.

GitHub repository: SeanZoR / claude-memento ("Extract session memories into CLAUDE.md - because Claude forgets, but your notes don't.")
Have you built any tools to improve your Claude Code workflow? I'd love to hear about them in the comments!

Public

2026-01-04 04:44:31

Check out this Pen I made!

WebSocket VS Polling VS SSE

2026-01-04 04:40:21

📌 The Classic Request-Response Model (and Its Limitations)

How Standard Web Apps Work

In a typical web app:

  1. A client (browser/app) sends a request to the server.
  2. The server processes it (DB access, computation, etc.).
  3. The server sends back a response.
  4. The connection closes.

This cycle is simple and efficient for most applications.

👉 But here’s the key problem:

Once the response is done, the server cannot send fresh data to the client unless the client asks again.

Example: A Stock Market App

Suppose you have a simple stock application:

  • 🧑‍💻 Clients A, B, C connect and request current stock prices.
  • 📡 The server responds — and bam! connection closes.
  • 📉 Later, prices change on the server.
  • But clients A, B, C still only have old (stale) data.

This becomes a real-time problem:
👉 How does the server tell clients that data has changed?

🚀 Solution 1: WebSockets

WebSockets let you keep a persistent full-duplex connection open between clients and servers.

What Does This Mean?

Instead of:

Client → Server → Response → Connection closes

WebSockets keep the connection open:

Client ⇄ Server (channel stays open; messages flow both ways)

This allows:

  • The server to push updates anytime.
  • The client to send data anytime.
  • Both sides talk without closing the connection.

How It Works (Simple Diagram)

Client                         Server
  | — WebSocket handshake →     |
  |                             |
  | ← Accept & open channel —   |
  |                             |
  | — Updates can flow both →   |
  |                             |

Once the connection is open, either side can send data.
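
A minimal browser-side sketch in TypeScript; the endpoint URL and message shapes are illustrative assumptions:

// Open a persistent channel; either side can talk until it closes.
const ws = new WebSocket("wss://example.com/prices");

ws.onopen = () => {
  // The client can send anytime once the channel is open.
  ws.send(JSON.stringify({ subscribe: ["AAPL", "GOOG"] }));
};

ws.onmessage = (event) => {
  // The server can push anytime - no new request needed.
  console.log("price update:", JSON.parse(event.data));
};

ws.onclose = () => console.log("channel closed");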

Pros of WebSockets

✅ Real real-time updates
✅ Low latency
✅ Full duplex (two-way communication)

Cons of WebSockets

❌ Hard to scale — it’s stateful (server must remember every connected client)
❌ If you have millions of connections, scaling horizontally becomes expensive
❌ Servers must synchronize updates among themselves in clustered systems

🚀 Solution 2: Polling

Polling is the simplest alternative to WebSockets.

What Is Polling?

Instead of keeping a connection alive, the client asks the server again and again:

Client: “Any new updates?”
Server: “Nope.”
Client: “Any new updates?”
Server: “Yes — here you go!”

Simple Polling Example

Let’s say the client checks every 2 seconds:

0s → “Give me new data”
2s → “Give me new data”
4s → “Give me new data”
…

If new data appears at 3.5s, the client will only get it at the next poll (4s).

👉 That means the maximum delay is equal to your poll interval — 2 seconds in this example.
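
In code, short polling is just a timer around fetch. A sketch of the 2-second example (the /api/prices endpoint and render function are hypothetical):

// Hypothetical UI hook - replace with your own rendering.
function render(prices: unknown): void {
  console.log("prices:", prices);
}

// Ask the server for fresh data every 2 seconds.
async function poll(): Promise<void> {
  const res = await fetch("/api/prices");
  render(await res.json());
}

setInterval(poll, 2000); // worst-case staleness ≈ the 2s interval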

Pros of Polling

✅ Easy to implement
✅ Works with load balancers and many servers
✅ Stateless — each request is independent

Cons of Polling

❌ Not truly real-time
❌ Can waste requests if no new data
❌ Frequent polling may still add network load

🚀 Solution 3: Long Polling

Long polling is an optimized form of polling.

What Is Long Polling?

Instead of responding immediately, the server holds the request open until:

  • New data arrives, or
  • A timeout expires

Then it responds with data in one shot.

Example: Long Polling for 5 Seconds

Client → Server: “Any updates?”  
Server: Hold request for 5 seconds

If updates come within 5s:
  Server → Client: Latest updates
Then client immediately re-requests.
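
Client-side, the only change from short polling is the loop: re-request immediately after each response instead of waiting on a timer. A sketch matching the 5-second example (the /api/updates endpoint and its 204-on-timeout behavior are assumptions):

// Hypothetical UI hook - replace with your own rendering.
function render(updates: unknown): void {
  console.log("updates:", updates);
}

async function longPoll(): Promise<never> {
  while (true) {
    try {
      // The server holds this request open for up to ~5 seconds.
      const res = await fetch("/api/updates?timeout=5");
      if (res.status === 200) {
        render(await res.json());
      }
      // On a no-data timeout (assumed 204), loop and re-request immediately.
    } catch {
      // Back off briefly on network errors before retrying.
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  }
}

longPoll();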

Pros of Long Polling

✅ Fewer requests than short polling
✅ More “real-time” feel than simple polling
✅ Still stateless

Cons of Long Polling

❌ Can still hold server resources
❌ Not as instant as WebSockets
❌ Server must manage held requests

📊 Comparing the Approaches

| Technique | Real-Time | Scalability | Server Load | Complexity |
| --- | --- | --- | --- | --- |
| Polling | Moderate (delayed) | 🔥 Easy | 🔥 Medium | 🟢 Easy |
| Long Polling | Good | 🔥 Good | 🔥 Medium | 🟡 Moderate |
| WebSockets | Excellent | 🔻 Hard | 🔻 High | 🟡 Moderate |

🧠 Real-World Considerations

Do You Always Need Full Real-Time?

Not always.

For example, in a stock chart app:

  • You might only need fresh price updates, not two-way communication.
  • Buying/selling can still happen via regular POST API routes.

That means:

  • WebSockets might be overkill.
  • Polling or long polling might be perfectly fine.

Why Polling Works Well with Load Balancers

When you scale with many backend servers and a load balancer:

  • Polling requests get distributed across servers,
  • You avoid being tied to one server connection,
  • If a server goes down, your next poll goes to another healthy server.

🏁 My Final Thoughts

Real-time systems aren’t magic — they’re about choosing the right tool for the job:

🔹 Need instant push updates? → WebSockets
🔹 Need lightweight, scalable updates? → Polling / Long Polling
🔹 Want a mix of both? → Start with polling, evolve as needed

Every choice has trade-offs. Understanding the fundamental communication patterns helps you make the best architectural decision — and prevents unnecessary complexity early on.

From idea to shipping in 14 days: My journey building a Markdown tool

2026-01-04 04:37:53

Hello DEV community!

I’m excited to finally share a project that has been living in the back of my mind for a very long time.

Three days ago, I launched MD-To.com, a free online Markdown converter.

But the story actually starts about two weeks ago. I finally decided to stop procrastinating and just build the thing. To speed up the process, I used Cursor paired with Claude Opus 4.5. The development velocity was insane—what felt like a month's worth of work got compressed into just a fortnight.

The Launch & The Pivot

I registered the domain and pushed the site live just 3 days ago.

But as soon as it was live, I wasn't 100% happy with it. So, over the last 72 hours, I completely redesigned the UI and overhauled the copywriting to make the experience much smoother and more professional.

What can it do?

The goal was simple: a converter that respects your privacy (no server uploads) and covers all the edge cases developers need.

Here are the current features:

  • Privacy First: Everything happens locally in your browser. Your files never hit my server.
  • Markdown to "Anything": Convert MD to Word (.docx), PDF, HTML, LaTeX, and even Confluence Wiki syntax.
  • Reverse Conversion: Turn HTML or Word docs back into clean Markdown.
  • Data Tools: Convert CSV or JSON directly into Markdown Tables (and vice versa).
  • Editor: A built-in editor with real-time preview.
  • 100% Free: No login, no watermarks, no limits.

I need your feedback

Since I just finished the redesign, I am looking for honest feedback from other developers.

  • Is the UI intuitive?
  • Are there any file formats you wish it supported?
  • Did you find any bugs in the conversion logic?

Please give it a try at https://md-to.com/ and let me know what you think in the comments. I’m ready to fix bugs and add features based on your suggestions!

Thanks for reading!