
Vibe Coding a Pokémon Search App with Replit

2026-02-02 18:42:09

Originally published on my blog

TL;DR

We're going to build a Pokémon search app. But we're not going to write even one line of code.

🤨

I know, coming from a developer (who used to take pride in what he wrote with his bare hands), it sounds blasphemous, but stay with me here and see how far these kinds of tools have come in just the last year.

We'll be building the app by so-called vibe coding. Basically, you describe what you want in plain English, and the AI tool does the heavy lifting. You steer it by reviewing, re-prompting, and guiding it to the final product (one that you'll be happy with 🤞).

The Tools

In this post, we'll be using *Replit (https://replit.com/refer/nikolabreznjak), and in future ones we'll tackle a few others like Cursor and Claude Code.

Edit: here's the one with Cursor: Vibe Coding a Pokémon Search App with Cursor

For bonus points, we won't even type anything. Instead, we'll use a tool like *WhisprFlow, or similar ones like SuperWhisper or VoiceTypr (I know, what's up with naming products by dropping vowels 🤷‍♂️).

The Old Ways™

Previously, I wrote the "classic" (may I say, "old" at this point?) tutorials building the same app with two different tech stacks:

This one is different. Same Pokémon search app. Less typing. More vibes (😂 I know, my puns are terrible).

Let's Start Prompting

If you want to follow along, create an account over at *Replit.

Pro tip: copy/paste these prompts and adapt them in case you're feeling adventurous.

Prompt 1: Spec first, code later

First of all, always start with a plan before writing any code. This was TheWayToDoIt™ back in 'normal' development, and it's no different in vibe coding either. Make sure to select the Plan mode in Replit as shown below:

and add the following prompt:

"Our task is to build a Pokémon search web app. Before writing any code, produce a detailed PRD (product requirement document) (pages, components, state, API calls, error states, loading states, accessibility, folder structure, and a deployment plan for GitHub Pages).

After the PRD, list a numbered implementation checklist.

Do NOT write code yet."

If you want, you can click the improve prompt button (I did) and you'll get your prompt improved to something like:

A comprehensive web-based Pokémon search application that allows users to search and view detailed information about Pokémon using the PokéAPI.

Core Features:

  • Search Pokémon by name or ID with autocomplete suggestions
  • Display detailed Pokémon information (stats, types, abilities, sprites)
  • Browse and filter Pokémon list with pagination
  • Responsive design that works on mobile and desktop devices

Visual References:
Inspired by the official Pokémon website and Pokédex interfaces, known for their vibrant, game-accurate designs and intuitive card-based layouts.

Style Guide:

  • Colors: Primary #FFCB05 (Pokémon yellow), Secondary #3D7DCA (Pokémon blue), Accent #FF0000 (Pokéball red), Background #F8F8F8 (light grey), Text #2A2A2A (dark grey), Card Background #FFFFFF (white)
  • Design: Flexo/Poppins/Roboto fonts, card-based grid layout, rounded corners (12px radius), smooth transitions, responsive design with mobile-first approach, clean search interface with prominent search bar

Special Requirements:

  • Must include a detailed PRD (Product Requirement Document) covering: pages, components, state management, API calls, error states, loading states, accessibility considerations, folder structure, and deployment plan for GitHub Pages
  • Provide a numbered implementation checklist after the PRD
  • DO NOT include actual code implementation - documentation and planning only

That's a much better prompt than what I came up with 😬, so let's use that. At this point you can tweak the plan if you want.

A few options

You can select how 'autonomous' you want the agent to act (High is fine in this case), and you can enable App testing, which will make sure it resolves any obvious bugs that pop up.

After you're happy with the selection, click the Start building button.

Now we wait... ⏳

In my case, it took Replit 21 minutes to finish everything (it seems to take longer when you select App testing), and it spent $7.27. Let's see what we got for that Frappuccino-priced Starbucks coffee...

Preview

A cool thing I like about Replit is that they have everything integrated in their web interface (they have a mobile app, but we won't dig into that now). And by everything, I mean everything: database, auth, secrets, domain purchasing, wiring up with Stripe, you name it.

On top of all these cool things, they also have the Preview pane, where you can, well, preview your app as it's being built.

This is what it came up with for me:

Publish and share

To publish your work 'online' so that you can share it with others, you need to:

  • choose the (sub)domain: I went with pokedex-nikolabreznjak.replit.app
  • click on the Publish now button

And there you go, the app will be accessible via (in my case): https://pokedex-nikolabreznjak.replit.app/.

Conclusion: the future is now!?

Here's the uncomfortable truth: those who still treat AI coding tools as "cheating" or "not there yet" are losing out. If not on shipping MVPs faster, then on getting up to speed on unfamiliar codebases, and learning faster in general.

⚠️ And this last part is something I want to emphasise: if you "just" vibe code and have absolutely no idea how the thing YOU built works (and have no desire to learn), then you're missing the point.

Instead, you've got an amazing opportunity that devs in the past didn't have: you can ask the tool "how does this code work?". And you can ask all the questions without the fear of showing your lack of knowledge.

And, if you're worried about quality (valid!), you don't have to YOLO it. Do this instead:

  1. Tell the model: Explain what you intend to do without writing any code.
  2. Make it produce a long spec document (architecture, edge cases, test plan).
  3. Iterate on that doc until it's legit.
  4. Then say something along the lines of: Now, implement this spec perfectly.

I would have never dreamed that the hottest new programming language would be [insertLanguageHere]. I say it like that because, technically, you could write in your own language and some (most?) of the tools would understand you. If not, you can throw it into ChatGPT (or any similar tool) for translation and then feed the translated version into the AI tool.

No, the world's devs won't go extinct, but these tools will surely enable a lot of non-devs to create things that make actual money. It's your choice whether you want to watch from the sidelines or get in on the action and see how it can help you - maybe now's the time to do that project you 'never had the time for'.

I'm cheering for you, good luck!

Disclaimer

Links prepended with a * are referral links.

If you enjoy the content and decide to sign up through those links, you'll be helping me feed my caffeine addiction ☕️

Thanks a bunch, you glorious human! 🙌

New achievement

You made it to the end, here's a 🎖️

P.S. In case you were wondering, the style has been recently influenced by the amazingly refreshing litRPG book series called *Dungeon Crawler Carl.

Enjoy! 👋

GoAI - A Clean, Multi-Provider LLM Client for Go

2026-02-02 18:41:38

If you’re building AI-powered features in Go, you’ve probably noticed a pattern:

Every LLM provider has slightly different APIs, request formats, error handling, and configuration styles. Switching models often means rewriting glue code instead of focusing on your product.

That’s where GoAI comes in.

GoAI is a lightweight Go library that gives you one unified interface for chatting with multiple LLM providers — without pulling in heavy dependencies or hiding important details.

What is GoAI?

GoAI is a minimal, dependency-free Go library for building chat-based applications with modern LLMs.

Its design goals are simple:

  • One clean API across providers
  • Native Go patterns (context, errors, options)
  • No external dependencies
  • Easy to extend

If you like libraries that do one thing well, this one will feel right at home.

Supported LLM Providers

Out of the box, GoAI supports a growing list of popular providers:

  • OpenAI (GPT-4o, GPT-4-Turbo, etc.)
  • Anthropic (Claude)
  • Google Gemini
  • xAI (Grok)
  • Mistral
  • Perplexity
  • Ollama (for local models)

Switching providers is mostly a configuration change, not a rewrite.

Installation

go get github.com/dariubs/goai

That’s it — no transitive dependency explosion.

Basic Usage

Here’s how simple it is to send a chat prompt:

client, err := goai.New(
    "openai",
    "gpt-4o",
    goai.WithAPIKey(os.Getenv("OPENAI_API_KEY")),
)
if err != nil {
    log.Fatal(err)
}

ctx := context.Background()
response, err := client.Chat(ctx, "Hello from Go!")
if err != nil {
    log.Fatal(err)
}

fmt.Println(response.Content)

The same Chat call works across providers — only the provider name and model change.

Why GoAI Instead of Provider SDKs?

1. A Consistent API

Every provider has its own quirks. GoAI normalizes them into a single, predictable interface.

2. First-Class Context Support

Timeouts, cancellations, and request lifetimes are handled the Go way — via context.Context.

3. Typed Errors

You’re not stuck string-matching error messages. GoAI exposes structured errors you can reason about.

4. Zero Dependencies

No HTTP wrappers, no magic middleware, no surprises. Just the Go standard library.

Configuration via Options

GoAI uses functional options for configuration:

client, _ := goai.New(
    "anthropic",
    "claude-3-opus",
    goai.WithAPIKey(os.Getenv("ANTHROPIC_API_KEY")),
    goai.WithTimeout(30*time.Second),
    goai.WithTemperature(0.7),
)

Clean, explicit, and easy to extend.

Great Fit For

  • AI-powered Go services
  • Prototyping across multiple LLMs
  • Switching providers without refactors
  • Internal tools and CLIs
  • Local + cloud model workflows (via Ollama)

Extending GoAI

Want to add a new provider?

The codebase is intentionally small and readable. Implement the provider interface, plug it in, and you’re good to go. No framework archaeology required.

Final Thoughts

GoAI doesn’t try to be clever — and that’s its strength.

If you’re a Go developer who wants:

  • control instead of magic,
  • clarity instead of abstractions,
  • and flexibility without lock-in,

then GoAI is worth a look.

GoAI on GitHub

Building a Synchronised Internet Radio System with PHP, JS, and Zero Streaming Infrastructure

2026-02-02 18:35:41

Every traditional radio station has a simple promise: everyone tuning in hears the same thing at the same time. You turn on the radio at 2:05 PM, and you hear whatever programme is five minutes in, not the beginning, not a random track. You and every other listener are perfectly in sync.

I needed to build exactly that for a charity organisation's website, but online, without any live-streaming infrastructure. No Icecast. No Shoutcast. No WebRTC. No streaming servers at all.

Just PHP, JavaScript, MySQL, and a bit of maths.

In this article, I will walk you through the architecture, the sync algorithm that makes it work, and the pitfalls I encountered along the way.

The Problem

The charity already had a radio page on its website, essentially a list of uploaded MP3 files that visitors could click to play individually. News bulletins, member interviews, traditional music, and entertainment shows. Good content, but no structure. No "station" feel, just a glorified media player. A visitor would land on the page and see a list of 70+ files with no idea what to play.

What they wanted was a proper radio experience:

  • Admins upload media files and set schedules (e.g., "Morning News at 06:30, Member Spotlight at 09:00")
  • Certain tracks are designated as "loop/filler" media that play continuously when nothing is scheduled
  • When a listener opens the player, they hear whatever should be playing right now, mid-track if necessary
  • All listeners are synchronised, everyone hears the same content at the same position
  • It should work with the existing PHP stack

The constraint that changed everything: no live streaming. The charity doesn't have the infrastructure budget for a streaming server, and frankly doesn't need one. All their content is pre-recorded. The "live" element is purely the schedule of what plays when.

The Architecture

Here's the mental model:

┌───────────────────────────────────────────────────────┐
│                      ADMIN PANEL                      │
│  Upload media → Set schedules → Manage loop playlist  │
└───────────────┬───────────────────────┬───────────────┘
                │                       │
                ▼                       ▼
        ┌───────────────┐       ┌────────────────┐
        │   /uploads/   │       │     MySQL      │
        │   (MP3/MP4)   │       │  (schedules,   │
        │               │       │   media meta,  │
        │               │       │   loop order)  │
        └───────┬───────┘       └───────┬────────┘
                │                       │
                │     ┌─────────────────┘
                │     │
                ▼     ▼
        ┌───────────────────┐
        │ /api/now-playing  │ ◄── The brain
        │      .php         │
        └─────────┬─────────┘
                  │
                  │  JSON: { media_url, offset, remaining, next }
                  │
        ┌─────────▼─────────┐
        │    Player (JS)    │
        │                   │
        │  1. Fetch API     │
        │  2. Set src       │
        │  3. Seek to offset│
        │  4. Play          │
        │  5. Re-sync timer │
        └───────────────────┘

The entire system has four components:

  1. Upload & metadata extraction: Admin uploads media; the server detects duration via ffprobe (with a browser-based fallback)
  2. Schedule management: Admin says "play media #7 at 14:00 on Tuesday"
  3. The now-playing API: The brain. Given the current time, it determines what media should be playing and calculates the exact offset
  4. The player: Fetches the API, loads the media, seeks to the offset, plays, and periodically re-syncs

No WebSockets. No streaming protocol. Just an HTTP API and the HTML5 <audio> / <video> element.

The Database

Three tables. Deliberately simple.

CREATE TABLE radio_media (
    id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    artist VARCHAR(255) DEFAULT '',
    description TEXT,
    filename VARCHAR(255) NOT NULL,
    filepath VARCHAR(500) NOT NULL,
    media_type ENUM('audio', 'video') NOT NULL DEFAULT 'audio',
    mime_type VARCHAR(100) DEFAULT NULL,
    duration FLOAT NOT NULL DEFAULT 0,  -- seconds
    file_size BIGINT DEFAULT 0,
    is_loop TINYINT(1) NOT NULL DEFAULT 0,
    loop_position INT NOT NULL DEFAULT 0,
    cover_image VARCHAR(500) DEFAULT NULL,
    active TINYINT(1) NOT NULL DEFAULT 1,
    INDEX idx_loop (is_loop, loop_position)
);

CREATE TABLE radio_schedule (
    id INT AUTO_INCREMENT PRIMARY KEY,
    media_id INT NOT NULL,
    title VARCHAR(255) DEFAULT NULL,
    start_time DATETIME NOT NULL,
    end_time DATETIME NOT NULL,  -- auto-calculated: start_time + duration
    description TEXT,
    active TINYINT(1) NOT NULL DEFAULT 1,
    FOREIGN KEY (media_id) REFERENCES radio_media(id) ON DELETE CASCADE,
    INDEX idx_schedule_time (start_time, end_time)
);

CREATE TABLE radio_settings (
    setting_key VARCHAR(100) PRIMARY KEY,
    setting_value TEXT NOT NULL
);

A few design decisions worth noting:

end_time is stored, not computed. When an admin creates a schedule entry, the server calculates end_time = start_time + media.duration and stores it. This makes the "what's playing now?" query a simple range check rather than a join-and-compute.

is_loop and loop_position on radio_media convert any uploaded track into a filler playlist item. The loop_position determines the play order and is reorderable via drag-and-drop in the admin panel.

radio_settings stores the loop_epoch, a fixed reference timestamp that's critical to the loop sync algorithm. More on that next.

The Sync Algorithm

This is where the actual magic happens. There are two distinct sync problems to solve: scheduled content and loop content.

Scheduled Content Sync

This one is straightforward. If an admin schedules a 30-minute programme to start at 14:00, and a listener connects at 14:12:30, the offset is:

offset = current_time - schedule_start_time
offset = 14:12:30 - 14:00:00
offset = 750 seconds (12 minutes 30 seconds)

Every listener who calls the API at 14:12:30 gets offset: 750. They all seek to 12:30 in the audio file. Done.

The SQL query to find the active schedule:

$stmt = $db->prepare("
    SELECT s.*, m.*
    FROM radio_schedule s
    JOIN radio_media m ON s.media_id = m.id
    WHERE s.active = 1 AND m.active = 1
      AND s.start_time <= ?
      AND s.end_time > ?
    ORDER BY s.start_time DESC
    LIMIT 1
");
$stmt->execute([$now, $now]);

The ORDER BY start_time DESC with LIMIT 1 means that if two schedules overlap, the most recently started one wins. This is a deliberate choice; it lets admins override a running programme by scheduling something on top of it without deleting the original.

Loop Content Sync

The loop playlist is trickier. There's no "start time" for a loop — it plays continuously when nothing is scheduled. If the loop playlist is 2 hours of music, and it's been running "since forever", how do you tell two listeners connecting at slightly different moments to both play from the same position?

The answer: a fixed epoch and modular arithmetic.

// A fixed reference point that never changes once set
$epoch = strtotime(getSetting('loop_epoch', '2024-01-01 00:00:00'));

// Total duration of all loop tracks combined
$totalLoopDuration = array_sum(array_column($loopMedia, 'duration'));

// How far into the infinite loop cycle are we?
$elapsed = time() - $epoch;
$posInCycle = fmod($elapsed, $totalLoopDuration);

Think of it this way: imagine the loop playlist started playing at the epoch (1 January 2024 00:00:00) and has been playing non-stop ever since, repeating when it reaches the end. At any given moment, you can calculate exactly where in the playlist we would be:

  1. Calculate total seconds since the epoch
  2. Take the modulo of that against the total loop duration
  3. Walk through the track list to find which track that position falls in
  4. The remainder is the offset within that track

$accumulated = 0;
foreach ($loopMedia as $i => $track) {
    if ($accumulated + $track['duration'] > $posInCycle) {
        // This is the current track
        $currentTrack = $track;
        $trackOffset = $posInCycle - $accumulated;
        $trackRemaining = $track['duration'] - $trackOffset;
        break;
    }
    $accumulated += $track['duration'];
}

The beauty of this approach is that it's purely deterministic. Two servers, ten servers, a thousand listeners, no matter the traffic, they all compute the same result for the same timestamp. No shared state needed between requests. No "what was the last track we played?" tracking. Just maths.

Why a fixed epoch? Because fmod(elapsed, totalDuration) needs a stable reference point. If you used, say, "the time the loop started," you would need to persist and synchronise that state. The epoch is just a constant in the database; it can be any date in the past.

What if the loop playlist changes? If an admin adds or removes tracks, the total duration changes, which shifts everyone's position. This is acceptable; it's no different from a radio station changing its playlist. Listeners hear a brief jump. In practice, this rarely happens mid-playback.
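
To see the determinism in code, here's the same arithmetic sketched in plain client-side JavaScript. This is purely illustrative: the production system does this server-side in PHP, and the epoch and playlist below are made-up values.

// Illustrative sketch of the loop-position maths (the real system does this in PHP).
// The epoch and track durations here are invented for the example.
const loopEpoch = Date.parse('2024-01-01T00:00:00Z') / 1000; // seconds
const loopMedia = [
    { title: 'Track A', duration: 240 },
    { title: 'Track B', duration: 180 },
    { title: 'Track C', duration: 300 },
];

const totalLoopDuration = loopMedia.reduce((sum, t) => sum + t.duration, 0);
const elapsed = Math.floor(Date.now() / 1000) - loopEpoch;
const posInCycle = elapsed % totalLoopDuration; // the fmod() step

let accumulated = 0;
for (const track of loopMedia) {
    if (accumulated + track.duration > posInCycle) {
        const trackOffset = posInCycle - accumulated;
        console.log(`Loop is currently in "${track.title}" at ${trackOffset}s`);
        break;
    }
    accumulated += track.duration;
}

Any two clients that run this at the same second print the same track and offset, which is exactly the property the server-side calculation relies on.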

Schedule-Loop Transitions

There's a subtle edge case: what happens when a loop track is playing and a scheduled programme is about to start? The player needs to switch at exactly the right moment.

The API handles this by checking if a scheduled item starts before the current loop track ends:

$nextSchedule = getNextScheduled($db, $nowDt);
$nextCheckIn = $trackRemaining;

if ($nextSchedule) {
    $scheduleStartsIn = strtotime($nextSchedule['start_time']) - $now;
    if ($scheduleStartsIn > 0 && $scheduleStartsIn < $trackRemaining) {
        // Tell the player to check back sooner
        $nextCheckIn = $scheduleStartsIn;
    }
}

The next_check_in value in the API response tells the player: "call me again in X seconds." Normally, this is the remaining duration of the current track. But if a schedule starts sooner, it returns the time until that schedule begins. The player re-fetches, gets the new scheduled media, and seamlessly transitions.

The Now-Playing API

The full response from /api/now-playing.php looks like this:

{
    "status": "scheduled",
    "server_time": 1738465200,
    "media": {
        "id": 42,
        "title": "Morning News Bulletin",
        "artist": "Ekiti Radio",
        "url": "/radio/uploads/media_abc123.mp3",
        "media_type": "audio",
        "duration": 1800,
        "cover_image": "/radio/uploads/covers/cover_xyz.jpg"
    },
    "offset": 750.23,
    "remaining": 1049.77,
    "schedule_title": "Ekiti Iroyin — Morning News",
    "next": {
        "type": "scheduled",
        "title": "Heritage Hour",
        "start_time": "2026-02-02 07:00:00"
    },
    "next_check_in": 1049.77
}

Key fields:

  • status: Either "scheduled" (a timetabled programme), "loop" (filler content), or "offline" (nothing to play)
  • offset: Where in the media file the player should seek to
  • remaining: Seconds until this media ends, used for progress display
  • next_check_in: When the player should call the API again

The server_time field is included so the player can detect clock drift between client and server, though in practice I found this unnecessary for most use cases.
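
If you ever do need it, a rough estimate from a single request is enough. The sketch below is not part of the shipped player; it just shows how server_time could be turned into a clock-skew figure that you subtract from any client-clock-based extrapolation.

// Sketch only; the shipped player does not do this.
// Returns roughly how many seconds the client clock is ahead of the server
// (negative if it is behind).
async function estimateClockSkew() {
    const sentAt = Date.now();
    const resp = await fetch(BASE + '/api/now-playing.php?_=' + sentAt);
    const data = await resp.json();
    const roundTrip = (Date.now() - sentAt) / 1000;

    // Assume server_time was stamped roughly mid-flight.
    const estimatedServerNow = data.server_time + roundTrip / 2;
    return Date.now() / 1000 - estimatedServerNow;
}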

Sync, Autoplay, and Latency Compensation in the Player

The player is a single HTML page with vanilla JavaScript. No frameworks. It uses the native <audio> and <video> elements with a custom UI on top.

The Core Playback Loop

async function fetchNowPlaying() {
    const requestStartTime = Date.now();

    const resp = await fetch(BASE + '/api/now-playing.php?_=' + Date.now());
    const data = await resp.json();
    const requestDuration = (Date.now() - requestStartTime) / 1000;

    const mediaChanged = !currentMedia || currentMedia.id !== data.media.id;

    if (mediaChanged) {
        loadMedia(data, requestDuration);
    } else {
        syncPosition(data.offset + requestDuration);
    }

    updateProgress(data.offset + requestDuration, data.media.duration);
    scheduleNextCheck(data.next_check_in);
}

Notice requestDuration. This is the first latency compensation mechanism.

Latency Compensation

Between the moment the server calculates the offset and the moment the player receives it, time has passed. On a fast connection, that's 50–200ms. On a slow mobile connection, it could be 1–2 seconds. If we don't account for this, listeners will consistently be slightly behind.

The fix is simple: measure the round-trip time and add half of it (or, more practically, the full request duration) to the offset:

const requestStartTime = Date.now();
const resp = await fetch(url);
const data = await resp.json();
const requestDuration = (Date.now() - requestStartTime) / 1000;

// The server said "you should be at 750 seconds" — but that was
// requestDuration ago. So we should actually be at:
activePlayer.currentTime = data.offset + requestDuration;

This is applied both when loading new media and when syncing position on existing media:

function loadMedia(data, requestDuration = 0) {
    currentMedia = data.media;
    // ... player setup ...

    activePlayer.src = data.media.url;
    activePlayer.currentTime = data.offset + requestDuration;
    activePlayer.volume = document.getElementById('volSlider').value;

    if (shouldAutoPlay) {
        activePlayer.play().then(() => {
            setPlayingState(true);
        }).catch(err => {
            console.warn('Autoplay failed:', err);
            setPlayingState(false);
        });
    }
}

Periodic Drift Correction

Even with accurate initial seeking, players drift over time. Buffering stalls, background tabs get throttled, devices sleep and wake; dozens of things can push a player out of sync.

The solution is a periodic re-sync. Every SYNC_INTERVAL seconds (default: 30), the player re-fetches the API and compares:

function syncPosition(expectedOffset) {
    if (!activePlayer || !isPlaying) return;
    const actualOffset = activePlayer.currentTime;
    const drift = Math.abs(actualOffset - expectedOffset);

    if (drift > MAX_DRIFT) {
        console.log(`Sync correction: drift ${drift.toFixed(1)}s`);
        activePlayer.currentTime = expectedOffset;
    }
}

// Re-sync periodically
setInterval(() => {
    if (isPlaying) fetchNowPlaying();
}, SYNC_INTERVAL * 1000);

The MAX_DRIFT threshold (default: 2 seconds) prevents constant micro-corrections. If the drift is under 2 seconds, it's imperceptible and not worth the audio glitch of seeking. If it's over 2 seconds, we force-correct.

The Autoplay Problem

Modern browsers aggressively block autoplay. You can't just call .play() on page load; the browser will reject it unless the user has interacted with the page first.

My solution uses a shouldAutoPlay flag that tracks user intent:

let shouldAutoPlay = false;

function play() {
    shouldAutoPlay = true; // User has expressed intent to listen

    if (!activePlayer || !activePlayer.src) {
        fetchNowPlaying(); // This will call loadMedia, which checks shouldAutoPlay
        return;
    }

    activePlayer.play().then(() => {
        setPlayingState(true);
    }).catch(() => {});
}

function pause() {
    shouldAutoPlay = false; // User wants silence
    if (activePlayer) activePlayer.pause();
    setPlayingState(false);
}

The key insight: once a user clicks play, shouldAutoPlay stays true forever (until they explicitly pause). This means that when the player transitions between tracks, either because the current track ended or because fetchNowPlaying loaded a new one, it automatically continues playing without requiring another click. The browser allows this because the original user gesture established an active audio context.

Seamless Track Transitions

When the current track ends (either a scheduled programme finishing or a loop track completing), the player needs to fetch the next thing to play. The ended event handles this:

audioEl.addEventListener('ended', fetchNowPlaying);
videoEl.addEventListener('ended', fetchNowPlaying);

But we don't just rely on ended. The API response includes next_check_in, which proactively schedules a re-fetch:

function scheduleNextCheck(seconds) {
    clearTimeout(checkTimer);
    checkTimer = setTimeout(fetchNowPlaying, Math.max(seconds, 2) * 1000);
}

This double approach, using both proactive scheduling and reactive ended listeners, means transitions happen cleanly even if there's a slight timing mismatch between the calculated remaining time and the actual media duration.

Duration Detection

Here's a problem you won't find in most tutorials: to schedule a media file, you need its duration. To get the duration, you typically need ffprobe (from the ffmpeg suite) on the server. But not every shared hosting environment has ffmpeg installed.

The solution is a two-tier approach:

Tier 1: Server-side with ffprobe

function getMediaDuration(string $filepath): ?float {
    $cmd = 'ffprobe -v error -show_entries format=duration '
         . '-of csv=p=0 ' . escapeshellarg($filepath);
    $output = trim(shell_exec($cmd) ?? '');
    return is_numeric($output) ? (float) $output : null;
}

Tier 2: Browser-side JavaScript fallback

During upload, the admin panel creates a temporary media element to read the duration:

function handleFile(file) {
    const url = URL.createObjectURL(file);
    const el = file.type.startsWith('video') 
        ? document.createElement('video') 
        : document.createElement('audio');

    el.preload = 'metadata';
    el.onloadedmetadata = () => {
        detectedDuration = el.duration; // Got it!
        URL.revokeObjectURL(url);
    };
    el.src = url;
}

This detected duration is sent along with the upload form data. Additionally, the player itself can report duration back to the server for any media that still has duration = 0:

if (data.media.duration === 0 && activePlayer && activePlayer.duration) {
    fetch(BASE + '/api/media.php', {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ id: data.media.id, duration: activePlayer.duration })
    });
}

This means even if both upload-time detections fail, the first person to play the media in a browser will "teach" the server its duration, and scheduling becomes available from that point.

Upload Handling

Uploads go through /api/upload.php. The important bits:

// Generate a unique filename to prevent file collisions
$uniqueName = uniqid('media_', true) . '.' . $ext;
$destPath = UPLOAD_DIR . '/' . $uniqueName;

move_uploaded_file($file['tmp_name'], $destPath);

// Try to get duration server-side
$duration = getMediaDuration($destPath);

// Fall back to JS-provided duration
if ($duration === null && !empty($_POST['duration'])) {
    $duration = (float) $_POST['duration'];
}

// Insert into database
$stmt = $db->prepare("
    INSERT INTO radio_media 
    (title, artist, description, filename, filepath, media_type, 
     mime_type, duration, file_size, is_loop, loop_position, cover_image)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
");

Files are stored with unique generated names (not the original filename) to avoid collisions and path traversal issues. The original title is stored in the database for display.

For large media files (think a 2-hour programme recording), you'll need to adjust PHP's upload limits:

; php.ini
upload_max_filesize = 500M
post_max_size = 500M
max_execution_time = 300

And if using Nginx:

client_max_body_size 500M;

Schedule Overlap Handling

What happens if an admin accidentally (or intentionally) creates overlapping schedules? Rather than blocking the overlap, the system allows it and applies a simple rule: the most recently started schedule wins. This logic behaves exactly like LIFO (Last-In, First-Out), but for a timeline rather than a stack.

// The ORDER BY DESC, LIMIT 1 means the latest-starting active 
// schedule takes priority
$stmt = $db->prepare("
    SELECT s.*, m.*
    FROM radio_schedule s
    JOIN radio_media m ON s.media_id = m.id
    WHERE s.active = 1 AND m.active = 1
      AND s.start_time <= ?
      AND s.end_time > ?
    ORDER BY s.start_time DESC
    LIMIT 1
");

The API does warn the admin about conflicts when creating a schedule:

$stmt = $db->prepare("
    SELECT s.id, s.start_time, s.end_time, m.title
    FROM radio_schedule s
    JOIN radio_media m ON s.media_id = m.id
    WHERE s.active = 1
      AND s.start_time < ?  -- new end
      AND s.end_time > ?    -- new start
");
$stmt->execute([$endTime, $startTime]);
$conflicts = $stmt->fetchAll();

if (!empty($conflicts)) {
    $response['warnings'] = ['This schedule overlaps with existing entries.'];
}

This is a conscious design choice. In a charity radio context, the admin might want to interrupt regular programming with an urgent announcement by simply scheduling on top. The last-start-wins rule makes this intuitive.

What About Video?

The system supports both audio and video out of the box. The player switches between an <audio> and <video> element depending on the media type:

function loadMedia(data, requestDuration = 0) {
    const isVideo = data.media.media_type === 'video';

    if (isVideo) {
        audioEl.pause();
        audioEl.src = '';
        activePlayer = videoEl;
        document.getElementById('videoWrap').classList.add('visible');
        document.getElementById('coverWrap').style.display = 'none';
    } else {
        videoEl.pause();
        videoEl.src = '';
        activePlayer = audioEl;
        document.getElementById('videoWrap').classList.remove('visible');
        document.getElementById('coverWrap').style.display = '';
    }

    activePlayer.src = data.media.url;
    activePlayer.currentTime = data.offset + requestDuration;
}

Video files are heavier, so seeking might take longer (the browser needs to download and buffer the video up to the seek point).
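
If that buffering pause is noticeable, you can at least surface it in the UI. The snippet below is not part of the original player, just a sketch built on the standard media element events; indicatorEl stands in for whatever spinner element your markup has.

// Not part of the original player: a sketch for toggling a buffering
// indicator while a video seek or mid-stream stall is in progress.
function wireBufferingIndicator(videoEl, indicatorEl) {
    const show = () => { indicatorEl.hidden = false; };
    const hide = () => { indicatorEl.hidden = true; };

    videoEl.addEventListener('seeking', show);  // jumped to the offset, still fetching data
    videoEl.addEventListener('waiting', show);  // playback stalled while buffering
    videoEl.addEventListener('seeked', hide);
    videoEl.addEventListener('playing', hide);
}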

Performance Considerations

API Call Frequency

Each active listener calls /api/now-playing.php roughly:

  • Once on page load
  • Once every SYNC_INTERVAL seconds (default: 30)
  • Once per track transition

For 100 concurrent listeners with 30-second sync, that's about 200 requests per minute, plus transition spikes. This is very manageable for any standard PHP hosting.

The API query hits indexed columns (start_time, end_time, is_loop, active), so even with a large media library and schedule history, response times stay under 10ms.

Media File Delivery

The media files are served as standard static files by your web server (Apache/Nginx). The HTML5 media element handles range requests natively, which means:

  • Listeners only download from the seek point forward (not the entire file)
  • Nginx/Apache serve range requests efficiently without PHP involvement
  • No PHP process is tied up during media delivery

This is a crucial advantage over streaming-based solutions where every byte flows through your application layer.

Cache Busting

The API call includes a cache buster to prevent stale responses:

fetch(BASE + '/api/now-playing.php?_=' + Date.now())

You should also set appropriate cache headers server-side:

header('Cache-Control: no-cache, no-store, must-revalidate');

Limitations and Honest Trade-offs

This approach has genuine limitations compared to a proper streaming setup:

Seek latency on slow connections. When a listener connects, their browser needs to download from the seek point. On a slow connection, there might be a few seconds of buffering before audio starts. A streaming server would deliver the audio in real-time from the exact point.

File format matters. MP3 seeking depends on bitrate consistency. VBR (Variable Bit Rate) MP3s can have inaccurate seeks. CBR (Constant Bit Rate) or properly indexed files work best.

No sub-second sync precision. Listeners will typically be within 1–2 seconds of each other. For a radio station, this is perfectly acceptable. For something like a live concert simulcast, it wouldn't be.

No adaptive bitrate. A proper streaming server can adjust quality based on the listener's connection speed. Here, every listener gets the same file. If you upload a 320kbps MP3, listeners on slow connections might buffer.

The loop playlist shift problem. If you change the loop playlist while listeners are active (add/remove tracks), everyone's position jumps because the total duration changes. The workaround is to make playlist changes during scheduled content or off-peak hours.

For a charity radio station playing pre-recorded content, these trade-offs are entirely acceptable. You get a fully functional radio experience with zero infrastructure beyond a basic web server.

Embedding the Player

The player can be embedded anywhere on your existing website via an iframe:

<iframe 
    src="/radio/player.php" 
    width="500" 
    height="700" 
    frameborder="0"
    allow="autoplay"
></iframe>

Or you can use the API directly to build a custom mini-player:

async function initMiniRadio() {
    const resp = await fetch('/radio/api/now-playing.php');
    const data = await resp.json();

    if (data.status === 'offline') {
        document.getElementById('radio').textContent = 'Off Air';
        return;
    }

    const audio = document.getElementById('radio-audio');
    audio.src = data.media.url;
    audio.currentTime = data.offset;

    document.getElementById('track-name').textContent = data.media.title;
}

Because the sync logic lives entirely in the API, any player that can make an HTTP request and seek an audio element can participate. You could build a React player, a mobile app, or even a CLI player and they would all stay in sync.

Wrapping Up

The core insight of this project is that you don't need streaming infrastructure to build a radio station. If all your content is pre-recorded, which is the case for most community, charity, and niche radio stations, then the problem reduces to:

  1. Store the schedule
  2. Given the current time, compute what should be playing and where
  3. Tell the player to seek to that position
  4. Periodically re-sync to correct drift

The fmod() trick for loop synchronisation is the technique I'm most happy with. It's elegant, stateless, and trivially scalable. You could have a million listeners and the server computation is identical, one modulo operation and a walk through an array.

The full codebase includes the admin panel (media upload, library management, schedule builder, loop playlist with drag-and-drop reorder), the API endpoints, and the player. It runs on any standard PHP hosting with MySQL.

If you're building something similar, whether it's a community radio, a podcast-as-radio-station, or any "synchronised playback from pre-recorded content" system, the principles here apply directly. The sync algorithm doesn't care about your tech stack; it's just arithmetic.

GitHub link: https://github.com/IAmMasterCraft/online-radio-system

ORC-55 in Plain English: What It Means for $BZR and the Future of Digital Commerce

2026-02-02 18:35:36

For nearly a decade, ERC-20 has been the standard for digital tokens. It solved a critical problem: how do you create a fungible asset that moves between wallets, trades on exchanges, and integrates with decentralized finance? But ERC-20 was never designed for what's happening now—real digital commerce at scale, across multiple blockchains, without intermediaries taking a cut.

Enter ORC-55. It's not a replacement for ERC-20 so much as an evolution for a world where tokens need to do more than trade—they need to settle transactions, survive infrastructure failures, and maintain consistent identity across multiple execution environments.

Why ERC-20 Falls Short

To understand ORC-55, it helps to understand what's broken about ERC-20 in a commerce context.

Single-Chain Dependency

Every ERC-20 token inherits the fate of its host blockchain. When Ethereum gas fees spike to $100 per transaction, tokens on Ethereum become expensive to use. When a chain suffers congestion, governance failures, or security issues, every token living there suffers. If you want to move to a cheaper or faster chain, you face wrapped tokens, bridges, liquidity fragmentation, and user confusion. The token loses its unified identity.

Admin Keys Create Risk

Most ERC-20 tokens include functions that let a team or multisig mint new tokens, pause transfers, or upgrade the contract. While framed as "flexibility," these capabilities introduce a fundamental trust problem. Rules can change. Supply can inflate. The economic guarantees users rely on are not actually guarantees—they're subject to governance decisions made by a small group.

For commerce, this is a dealbreaker. A marketplace escrow system, a payment processor, or a merchant accounting system needs predictable, unchangeable rules. If the token's behavior can be altered by a vote or a key holder, the entire commerce layer becomes unstable.

Security Vulnerabilities

The standard ERC-20 approval mechanism has a known race condition. A malicious actor can exploit the timing gap between approval updates to extract more tokens than the holder authorized. This vulnerability has existed for years, and most implementations still carry it.

Off-Chain Verification

There's no native way to cryptographically verify a token's authenticity or features directly on-chain. Users rely on block explorers, third-party interfaces, or documentation—creating unnecessary trust assumptions and potential points of attack.

What ORC-55 Actually Does

ORC-55 was built specifically to address these gaps. It's stricter than ERC-20 and more opinionated about what a commerce-grade token should guarantee.

Immutable Rules, Forever

ORC-55 contracts are completely immutable and zero-admin:

  • No upgrade functions
  • No minting after deployment
  • No pause or blacklist mechanisms
  • No privileged owner accounts

This isn't a limitation. It's a feature. When you're settling real transactions between buyers and sellers, you need certainty that the rules won't change. The economic policy you see on day one is the economic policy forever.

Supply That Only Decreases

ORC-55 enforces deflationary supply:

  • Total supply can only go down, never up
  • All burns are tracked transparently on-chain
  • Circulating supply is reported natively in the token contract

No team can mint more tokens later. No governance vote can alter monetary policy. The supply discipline is built into the code, not dependent on promises or periodic governance decisions.

One Coin, Many Chains

Here's where ORC-55 represents a genuine paradigm shift. Instead of being "stuck" on a single chain, ORC-55 tokens deploy at the same contract address across multiple blockchains simultaneously.

$BZR launched live on ten major chains at inception: Ethereum, BNB Chain, Base, Polygon, Arbitrum, Avalanche, zkSync Era, Cronos, Mantle, and Optimism.

The same token. The same address. Ten different blockchains.

This eliminates the need for wrapped tokens, bridges, and the fragmentation that comes with them. If Ethereum becomes congested, transactions flow to Arbitrum or Polygon. If Base offers better performance, users route there. The token doesn't lose its identity. Liquidity doesn't split into ten different versions. BZR just continues operating wherever it's most efficient.

Race-Proof Approvals

ORC-55 requires zero-first or atomic allowance updates, eliminating the ERC-20 approval race condition. This is a straightforward security improvement that prevents a known class of attacks.

On-Chain Verification

ORC-55 implements ERC-5267 for metadata transparency. Version, standard compliance, ABI hash, and feature set are cryptographically verifiable directly on-chain. You don't need to trust an explorer or third party to verify what the token actually does.

What This Means for Merchants and Users

For someone building a marketplace, integrating payments, or simply holding BZR long-term, ORC-55 translates into practical benefits:

Stable Rules: No surprise mints, no governance risks, no admin keys that could override your transactions.

Predictable Supply: You can reason about scarcity without wondering if the team will vote to inflate the supply later.

Surviving Chain Failures: If one blockchain has an outage, BZR continues operating on nine others. Your transactions don't halt because one execution environment went down.

One Asset Identity: Exchanges list BZR once and support deposits/withdrawals across all ten chains. Wallet integrations treat it as a single asset. Payment processors don't have to manage ten different token versions.

Commerce-Grade Certainty: The token was built for transactions that matter—escrows, settlements, real buyer-seller interactions—not just speculation.

Why Chains Matter Less Now

For years, we've thought of blockchain as the primary thing and tokens as derivatives. "I own Ethereum tokens" or "I use BSC coins." ORC-55 flips this relationship.

The asset becomes the brand and trust anchor. Blockchains become service providers competing on speed, cost, and security. Liquidity follows the token, not the chain. Users follow predictability, not infrastructure.

Looking Forward

As digital commerce scales, the token standards that win will be the ones that make life simpler for merchants, reduce systemic risk, and provide genuine guarantees rather than just promises.

ORC-55—and $BZR as its first large-scale implementation—represents a direct attempt to set that new standard. It's not theoretical. It's live. It's trading. It's being used across ten major blockchains.

The future of cryptocommerce won't be defined by which chain wins or which platform dominates. It will be defined by tokens that are stable, portable, and designed to survive anything the infrastructure throws at them.

ORC-55 is what that future looks like.

This article is intended for informational purposes only.

Learning CSS Basics #Part 1

2026-02-02 18:33:42

Today I studied the basics of CSS, or Cascading Style Sheets, and what CSS does in web development. CSS is used to style web pages and make them look better. With CSS, we can change colors, fonts, sizes, and the layout of elements on a page. It helps separate the design from the content, which makes the code cleaner and easier to manage.

One of the first things I learned was CSS selectors. Selectors are used to choose which HTML element you want to style. For example, to style a paragraph:

p {
  color: blue;
}

I also learned about properties and values. A property is what you want to change, and the value is how you change it. For example, changing font size:

h1 {
  font-size: 32px;
}

I learned about fonts and text styling too. CSS allows us to change how text looks:

p {
  font-family: Arial, sans-serif;
  text-align: center;
}

Spacing was another important lesson. I learned about margin and padding, which help control space around elements:

div {
  margin: 20px;
  padding: 10px;
}

Revisiting DSA Through Mini Projects #1: Optimizing a Phonebook with HashMap

2026-02-02 18:24:27

I was starting to revise DSA topics when an idea popped up: how about creating one mini project based on each topic while revisiting them? This way I can get insights into how data structures and algorithms are applied in the real world, and I'll also have something to showcase.

First Project of This Series: A Phonebook App

Without further ado, here is the first project of my learning series: a Phonebook App. Check the app here: https://hashphonebook.netlify.app/

I made this app some years ago, but at that time I used linear search logic. That was fine because it was a demo project with no need to store thousands of numbers, so the search could check all the possibilities and then return the desired result.

Initially I stored my contacts in a simple JavaScript Array:

let userArray = [
    { userName: "Jane", number: "1237893457" },
    { userName: "Oscar", number: "4562317895" }
];

And to find a contact I used .find() which loops through the array one by one:

// Time Complexity: O(n) Linear Time
const result = userArray.find(contact => contact.userName === "Oscar");

But Why Is Linear Search Not Enough?

In the real world, the scenario is different. We need to store many numbers in our contact list. For example, say we store 100,000 numbers: in an emergency, taking the worst-case scenario, if our phonebook takes O(100000) time to show the result, then that is far too slow for this fast-paced world.

This is where performance actually matters, not just whether the app works!

Optimizing the Search Logic

So the solution is to use something that can return results in a shorter time. I changed the array to a HashMap and searched the map by key (the username).

This way, the lookup time is much faster (O(1)) and more suitable for real-world use cases, because a HashMap acts like a real-world dictionary: you don't read every word to find "Strawberry", you flip straight to S.

let contactMap = {
    "Jane": "1237893457",
    "Oscar": "4562317895"
};

// Time Complexity: O(1) Constant Time
const number = contactMap["Jane"];

It doesn't matter if I have 5 contacts or 5 billion. The lookup speed is constant.
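
If you don't want to take the Big-O notation on faith, here's a rough benchmark sketch (not part of the app; absolute timings will vary by machine and JavaScript engine, but the gap is hard to miss):

// Rough benchmark sketch: linear search vs key lookup on 100,000 contacts.
const SIZE = 100000;
const bigArray = [];
const bigMap = {};

for (let i = 0; i < SIZE; i++) {
    const name = "user" + i;
    const number = String(1000000000 + i);
    bigArray.push({ userName: name, number });
    bigMap[name] = number;
}

const target = "user" + (SIZE - 1); // worst case for linear search

console.time("Array.find - O(n)");
bigArray.find(contact => contact.userName === target);
console.timeEnd("Array.find - O(n)");

console.time("Object key lookup - O(1)");
const found = bigMap[target];
console.timeEnd("Object key lookup - O(1)");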

The Code Refactor

Here is the actual code I changed in my project:

Before (Array.push):

userArray.push({ userName, contactNumber });

After (Map Key-Value):

// Storing as a key-value pair and making the search case-insensitive
contactMap[userName.toLowerCase()] = { 
    originalName: userName, 
    number: contactNumber 
};
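
And a matching lookup helper (sketched here for illustration; it isn't shown in the refactor above) stays O(1) and case-insensitive:

// Hypothetical lookup helper to pair with the refactor above.
// Case-insensitive, and O(1) no matter how many contacts are stored.
function findContact(name) {
    const entry = contactMap[name.trim().toLowerCase()];
    return entry ? `${entry.originalName}: ${entry.number}` : null;
}

console.log(findContact("  OSCAR ")); // "Oscar: 4562317895", assuming Oscar was added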

This project made me realize that "working code" is not always "good code". Linear search was enough for a demo, but it doesn't scale, and real-world applications demand better solutions.

This is just the first step in my DSA revision journey and I plan to keep building small projects like this!