2026-01-04 04:54:03
This is a submission for DEV's Worldwide Show and Tell Challenge Presented by Mux
NFT Community Hub is a community-first web platform built for my Facebook group with 3K+ NFT collectors, artists, and creators. Unlike marketplaces that require risky wallet connections, we keep every interaction strictly read-only - zero risk, maximum safety. Artists get free exposure, collectors discover quality NFTs safely, and our community grows through collaboration, not fear.
Key Features:
Live Demo: https://nftcommunity.vercel.app/
Testing Instructions:
Option 1: Quick Browse (No Login Required)
Option 2: Full Experience (With Wallet - Recommended)
Important Notes:
GitHub Repository: [Not shared publicly - code available upon request]
I run a Facebook group (https://www.facebook.com/groups/173861991846648) with 3K+ NFT community members who were struggling with:
There's a massive gap in the NFT space for a read-only, community-first platform that prioritizes safety and education over trading. With 3K+ active community members and growing, there's clear demand for a safe space where artists can showcase, collectors can discover, and communities can collaborate without wallet risk.
What Makes It Special:
Tech Stack:
Read-only wallet integration (no eth_sendTransaction calls)
Technical Approach:
What Makes It Unique Technically:
Impact:
Core Features (Fully Working):
Features In Progress:
By submitting this project, I confirm that my video adheres to Mux's terms of service: https://www.mux.com/terms
Built with ❤️ for the NFT community. Safety first, always read-only.
2026-01-04 04:49:52
If you’re building a modern Laravel + Vue app, your default instinct is usually REST/JSON.
gRPC is different: you define your API as a contract in a .proto file, then generate strongly-typed client/server code (stubs). It runs over HTTP/2 and uses Protocol Buffers for compact serialization.
But there’s a catch for browser apps: a Vue app in the browser can’t just “talk gRPC” directly without extra infrastructure (gRPC-Web + a proxy such as Envoy). So a practical pattern is:
✅ Vue (browser) -> Laravel (HTTP/JSON) -> gRPC microservice (internal)
That Laravel layer becomes your BFF (Backend For Frontend).
This repo demonstrates exactly that.
Goal: Display a user on a Vue page, but fetch it through Laravel, which calls a gRPC server.
Flow:
The Vue page (UserShow.vue) calls GET /api/users/{id}
The Laravel API route delegates to App\Services\UserGrpc
UserGrpc uses the generated gRPC client to call UserService.GetUser
Folder layout highlights:
proto/user/v1/user.proto → the contract
generated/App/Grpc/... → generated PHP stubs (client classes)
grpc-server/ → demo gRPC server (Node)
app/Services/UserGrpc.php → Laravel gRPC client wrapper
resources/js/Pages/UserShow.vue → Vue/Inertia page

For a fresh project, Laravel's Vue starter kit (Inertia) is a great base.
Define the contract (.proto)
Create:
proto/user/v1/user.proto
Example contract:
syntax = "proto3";
package user.v1;
service UserService {
rpc GetUser (GetUserRequest) returns (GetUserResponse);
}
message GetUserRequest {
string id = 1;
}
message GetUserResponse {
string id = 1;
string name = 2;
}
This .proto file is the single source of truth.
Once you have the .proto, you generate client classes for PHP.
Buf is a tool that standardizes Protobuf workflows and generation config (buf.yaml, buf.gen.yaml).
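As a rough sketch, a buf.gen.yaml for the PHP output might look like this (the grpc_php_plugin path is an assumption; adjust it to wherever the plugin is installed on your machine):

version: v1
plugins:
  # protoc's built-in PHP generator (message classes)
  - name: php
    out: generated
  # client stubs via the gRPC PHP plugin (binary path is an assumption)
  - name: grpc-php
    out: generated
    path: /usr/local/bin/grpc_php_plugin

Then run: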
npx buf generate
This repo keeps generated files under:
generated/App/Grpc/...
Tip: Many teams do NOT commit generated code (they generate in CI). But committing it is OK for demos and fast setup.
Important: generating PHP client stubs does NOT create a server.
You still need a gRPC server implementation in some language (Go, Node, Java, PHP, etc).
In this repo, the demo server is under grpc-server/ and is started with Node.
Install deps:
cd grpc-server
npm install
If you see errors like:
Cannot find module 'dotenv'
Cannot find module 'better-sqlite3'

That just means you haven't installed the packages yet:
npm install dotenv better-sqlite3 @grpc/grpc-js @grpc/proto-loader
A minimal server.js shape looks like:
require("dotenv").config();
const grpc = require("@grpc/grpc-js");
const protoLoader = require("@grpc/proto-loader");
const PROTO_PATH = __dirname + "/../proto/user/v1/user.proto";
const packageDef = protoLoader.loadSync(PROTO_PATH, {
keepCase: true,
longs: String,
enums: String,
defaults: true,
oneofs: true,
});
const userProto = grpc.loadPackageDefinition(packageDef).user.v1;
// Demo handler
function GetUser(call, callback) {
const { id } = call.request;
// For a minimal demo:
callback(null, { id, name: `User #${id}` });
}
function main() {
const server = new grpc.Server();
server.addService(userProto.UserService.service, { GetUser });
const addr = process.env.GRPC_ADDR || "0.0.0.0:50051";
server.bindAsync(addr, grpc.ServerCredentials.createInsecure(), () => {
console.log(`gRPC server listening on ${addr}`);
server.start();
});
}
main();
Start the gRPC server:
node server.js
If Laravel says Connection refused, it simply means the gRPC server isn’t running or isn’t listening on the expected address/port.
The Laravel gRPC client wrapper (UserGrpc)
In Laravel, we wrap generated gRPC calls in a service class:
app/Services/UserGrpc.php
Important fix: return values must come from $resp, not from $req.
<?php

declare(strict_types=1);

namespace App\Services;

use App\Grpc\User\V1\GetUserRequest;
use App\Grpc\User\V1\UserServiceClient;
use Grpc\ChannelCredentials;

use const Grpc\STATUS_OK;

class UserGrpc
{
    private UserServiceClient $client;

    public function __construct()
    {
        $this->client = new UserServiceClient(
            env('USER_SVC_ADDR', '127.0.0.1:50051'),
            ['credentials' => ChannelCredentials::createInsecure()]
        );
    }

    public function getUser(string $id): array
    {
        $req = new GetUserRequest();
        $req->setId($id);

        [$resp, $status] = $this->client->GetUser($req)->wait();

        if ($status->code !== STATUS_OK) {
            throw new \RuntimeException($status->details, $status->code);
        }

        return [
            'id'   => $resp->getId(),
            'name' => $resp->getName(),
        ];
    }
}
Common pitfall: Grpc\ChannelCredentials is "not found"
That usually means the gRPC PHP extension is not enabled for the PHP you’re running (CLI vs FPM can differ).
Quick checks:
php -m | grep grpc
php --ini
And enable extension=grpc.so in the correct php.ini (the one shown by php --ini).
Because the browser shouldn’t call gRPC directly, we expose a classic JSON API endpoint:
routes/api.php:
use Illuminate\Support\Facades\Route;
use App\Services\UserGrpc;
Route::get('/users/{id}', function (string $id, UserGrpc $userGrpc) {
return response()->json($userGrpc->getUser($id));
});
We keep your normal Inertia “page route” in routes/web.php:
use Illuminate\Support\Facades\Route;
use Inertia\Inertia;
Route::get('/users/{id}', function (string $id) {
return Inertia::render('UserShow', [
'id' => $id,
]);
});
Then your Vue page (resources/js/Pages/UserShow.vue) can fetch from /api/users/{id}.
A clean Inertia-friendly version uses props (no Vue Router required):
<script setup lang="ts">
import { computed, onMounted, ref } from "vue";
type User = { id: string; name: string };
const props = defineProps<{ id: string }>();
const id = computed(() => props.id);
const loading = ref(true);
const error = ref<string | null>(null);
const user = ref<User>({ id: "", name: "" });
onMounted(async () => {
try {
const res = await fetch(`/api/users/${encodeURIComponent(id.value)}`);
if (!res.ok) throw new Error(`HTTP ${res.status}`);
user.value = await res.json();
} catch (e: any) {
error.value = e?.message ?? "Unknown error";
} finally {
loading.value = false;
}
});
</script>
<template>
<div class="p-4">
<h1 class="text-xl font-semibold">User</h1>
<p v-if="loading">Loading...</p>
<p v-else-if="error" class="text-red-600">{{ error }}</p>
<div v-else class="mt-3">
<div><b>ID:</b> {{ user.id }}</div>
<div><b>Name:</b> {{ user.name }}</div>
</div>
</div>
</template>
Now you can open:
http://127.0.0.1:8000/users/1

…and it will load user data through the full chain:
Vue → Laravel API → gRPC client → gRPC server.
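To sanity-check the JSON layer on its own, you can call the API endpoint directly (the response shape below assumes the demo handler from server.js):

curl http://127.0.0.1:8000/api/users/1
# {"id":"1","name":"User #1"}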
Since you already have Laravel's DB, the easiest approach is to create and seed the users table with Laravel, then point the gRPC server at the same database file.

Example:
php artisan migrate --seed
If your gRPC server reads the same SQLite DB file, it can return real seeded rows.
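A minimal sketch of that change in grpc-server/server.js (assuming better-sqlite3, installed earlier, and Laravel's default database/database.sqlite location):

const Database = require("better-sqlite3");

// Open Laravel's SQLite file read-only; Laravel remains the only writer
const db = new Database(__dirname + "/../database/database.sqlite", { readonly: true });

// Replaces the hardcoded demo handler
function GetUser(call, callback) {
  const row = db.prepare("SELECT id, name FROM users WHERE id = ?").get(call.request.id);
  if (!row) {
    return callback({ code: grpc.status.NOT_FOUND, details: `User ${call.request.id} not found` });
  }
  callback(null, { id: String(row.id), name: row.name });
}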
Terminal 1 — gRPC server:
cd grpc-server
node server.js
Terminal 2 — Laravel + Vite:
composer install
cp .env.example .env
php artisan key:generate
npm install
npm run dev
php artisan serve
Use this setup when you already have (or plan) internal gRPC microservices and want strongly-typed contracts between backend services, while the browser keeps talking plain JSON.
If your app is a small monolith and public-facing API is your only concern, REST might still be simpler.
Source: https://github.com/VincentCapek/laravel-vue-grpc-bff
2026-01-04 04:46:50
If you use Claude Code, you've probably had this experience:
You: Install the dependencies
Claude: npm install
You: No, use pnpm in this project
Claude: pnpm install
Next session? Same thing. Claude has no memory between conversations.
Claude Code reads CLAUDE.md files at the start of each session - that's how it learns your project conventions. But manually maintaining these files is tedious, and you often forget to document the corrections you make during coding sessions.
I built Memento - a Claude Code command that analyzes your conversations and extracts actionable insights automatically.
Named after the Christopher Nolan film where the protagonist leaves notes for his future self (because he can't form new memories), Memento helps you leave notes for future Claude sessions.
At the end of any coding session, run:
/memento
Memento will analyze the transcript and propose a list of candidate insights.
You select which ones to keep, and they're appended to the appropriate CLAUDE.md file.
Memento focuses on actionable insights:
| ✅ Actionable | ❌ Not Actionable |
|---|---|
| "Use pnpm, not npm" | "Check package manager" |
| "Tests are in __tests__/" | "This was a good session" |
| "Always show command before running" | "Be more careful" |
One-liner:
mkdir -p ~/.claude/commands && curl -fsSL https://raw.githubusercontent.com/SeanZoR/claude-memento/main/.claude/commands/memento.md -o ~/.claude/commands/memento.md
That's it! The command is now available in all your Claude Code sessions.
Give it a spin and let me know what you think! Issues and PRs welcome.
GitHub: https://github.com/SeanZoR/claude-memento
Have you built any tools to improve your Claude Code workflow? I'd love to hear about them in the comments!
2026-01-04 04:40:21
In a typical web app:

The client sends an HTTP request
The server processes it and sends back a response
The connection closes

This cycle is simple and efficient for most applications.
👉 But here’s the key problem:
Once the response is done, the server cannot send fresh data to the client unless the client asks again.
Suppose you have a simple stock application: prices change on the server every few seconds, and many connected clients need to show the latest values.

This becomes a real-time problem:
👉 How does the server tell clients that data has changed?
WebSockets let you keep a persistent full-duplex connection open between clients and servers.
Instead of:
Client → Server → Response → Connection closes
WebSockets keep the connection open:
Client ↔ Server ↔ Client ↔ Server
This allows:
Client                              Server
  |  ---- WebSocket handshake --->    |
  |                                   |
  |  <--- Accept & open channel ----  |
  |                                   |
  |  <--- Updates flow both ways ---> |
Once the connection is open, either side can send data.
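In the browser this takes only a few lines. A minimal sketch (the server address ws://localhost:8080 and the message shapes are assumptions):

// Browser-side sketch; endpoint and payload format are assumptions
const socket = new WebSocket("ws://localhost:8080");

socket.addEventListener("open", () => {
  // Either side can send at any time once the channel is open
  socket.send(JSON.stringify({ type: "subscribe", symbol: "AAPL" }));
});

socket.addEventListener("message", (event) => {
  const update = JSON.parse(event.data);
  console.log("Price update:", update);
});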
✅ Real real-time updates
✅ Low latency
✅ Full duplex (two-way communication)
❌ Hard to scale — it’s stateful (server must remember every connected client)
❌ If you have millions of connections, scaling horizontally becomes expensive
❌ Servers must synchronize updates among themselves in clustered systems
Polling is the simplest alternative to WebSockets.
Instead of keeping a connection alive, the client asks the server again and again:
Client: “Any new updates?”
Server: “Nope.”
Client: “Any new updates?”
Server: “Yes — here you go!”
Let’s say the client checks every 2 seconds:
0s → “Give me new data”
2s → “Give me new data”
4s → “Give me new data”
…
If new data appears at 3.5s, the client will only get it at the next poll (4s).
👉 That means the maximum delay is equal to your poll interval — 2 seconds in this example.
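A sketch of the client side (the /api/updates endpoint and the render() helper are hypothetical):

declare function render(updates: unknown[]): void; // hypothetical UI hook

const POLL_INTERVAL_MS = 2000;

async function poll(): Promise<void> {
  const res = await fetch("/api/updates");
  if (res.ok) {
    const updates = await res.json();
    if (updates.length > 0) render(updates);
  }
}

// Ask again every 2 seconds, whether or not anything changed
setInterval(poll, POLL_INTERVAL_MS);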
✅ Easy to implement
✅ Works with load balancers and many servers
✅ Stateless — each request is independent
❌ Not truly real-time
❌ Can waste requests if no new data
❌ Frequent polling may still add network load
Long polling is an optimized form of polling.
Instead of responding immediately, the server holds the request open until new data is available or a timeout is reached.

Then it responds with data in one shot.
Client → Server: “Any updates?”
Server: Hold request for 5 seconds
If updates come within 5s:
Server → Client: Latest updates
Then client immediately re-requests.
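The same idea as a loop (again, the /api/updates endpoint and its timeout parameter are hypothetical server behavior):

declare function render(updates: unknown[]): void; // hypothetical UI hook

async function longPoll(): Promise<void> {
  while (true) {
    try {
      // The server holds this request open until data arrives or ~30s pass
      const res = await fetch("/api/updates?timeout=30");
      if (res.ok) {
        const updates = await res.json();
        if (updates.length > 0) render(updates);
      }
    } catch {
      // Network error: back off briefly before re-requesting
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  }
}

longPoll();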
✅ Fewer requests than short polling
✅ More “real-time” feel than simple polling
✅ Still stateless
❌ Can still hold server resources
❌ Not as instant as WebSockets
❌ Server must manage held requests
| Technique | Real-Time | Scalability | Server Load | Complexity |
|---|---|---|---|---|
| Polling | Moderate (delayed) | 🔥 Easy | 🔥 Medium | 🟢 Easy |
| Long Polling | Good | 🔥 Good | 🔥 Medium | 🟡 Moderate |
| WebSockets | Excellent | 🔻 Hard | 🔻 High | 🟡 Moderate |
Do you always need WebSockets? Not always.

For example, in a stock chart app, prices might only refresh every few seconds, and a small delay is perfectly acceptable.

That means simple polling can be good enough, without the operational cost of persistent connections.
When you scale out with many backend servers and a load balancer, stateless polling requests can be served by any server, while WebSocket connections are pinned to specific servers and force those servers to synchronize updates among themselves.
Real-time systems aren’t magic — they’re about choosing the right tool for the job:
🔹 Need instant push updates? → WebSockets
🔹 Need lightweight, scalable updates? → Polling / Long Polling
🔹 Want a mix of both? → Start with polling, evolve as needed
Every choice has trade-offs. Understanding the fundamental communication patterns helps you make the best architectural decision — and prevents unnecessary complexity early on.
2026-01-04 04:37:53
Hello DEV community!
I’m excited to finally share a project that has been living in the back of my mind for a very long time.
Three days ago, I launched MD-To.com, a free online Markdown converter.
But the story actually starts about two weeks ago. I finally decided to stop procrastinating and just build the thing. To speed up the process, I used Cursor paired with Claude Opus 4.5. The development velocity was insane—what felt like a month's worth of work got compressed into just a fortnight.
I registered the domain and pushed the site live just 3 days ago.
But as soon as it was live, I wasn't 100% happy with it. So, over the last 72 hours, I completely redesigned the UI and overhauled the copywriting to make the experience much smoother and more professional.
The goal was simple: a converter that respects your privacy (no server uploads) and covers all the edge cases developers need.
Here are the current features:
Privacy First: Everything happens locally in your browser. Your files never hit my server.
Markdown to "Anything": Convert MD to Word (.docx), PDF, HTML, LaTeX, and even Confluence Wiki syntax.
Reverse Conversion: Turn HTML or Word docs back into clean Markdown.
Data Tools: Convert CSV or JSON directly into Markdown Tables (and vice versa).
Editor: A built-in editor with real-time preview.
100% Free: No login, no watermarks, no limits.
Since I just finished the redesign, I am looking for honest feedback from other developers.
Is the UI intuitive?
Are there any file formats you wish it supported?
Did you find any bugs in the conversion logic?
Please give it a try at https://md-to.com/ and let me know what you think in the comments. I’m ready to fix bugs and add features based on your suggestions!
Thanks for reading!