The Practical Developer

A constructive and inclusive social network for software developers.

How Developers Can Earn $5,000 Writing a Practical Technical Article

2026-02-10 20:47:44

I usually ignore writing contests. Most of them feel either too marketing-heavy or too vague to be worth the effort. This one stood out because it’s clear, technical, and genuinely built around real developer work.

Iron Software is running a technical writing challenge focused entirely on IronPDF, and the idea is straightforward: write a real, useful article showing how IronPDF is used in practical development scenarios.

The top prize is $5,000, but honestly, that’s not the only reason this is worth doing.

What You’re Expected to Write

You write one original article (minimum 1,000 words), and it must be about IronPDF. No generic PDF theory, no filler content.

You can approach it in a few solid ways:

Practical IronPDF Tutorials

This is probably the most natural option for most developers.

Examples:

  • Generating PDFs in .NET, Java, Python, or Node.js
  • Converting HTML to PDF for reports, invoices, or dashboards
  • Editing, merging, stamping, or securing PDFs
  • Handling real edge cases you’ve actually faced

Think: “Here’s a problem I had, and this is how I solved it using IronPDF.”

Real-World Case Studies

If you’ve worked with:

  • Enterprise systems
  • Banking or finance apps
  • Reporting or compliance workflows
  • Automated document pipelines

You can frame your article as a case study: what the problem was, what constraints existed, and how IronPDF fit into the solution.

Comparisons (IronPDF vs Other Libraries)

If you’ve used alternatives like iText, Aspose, QuestPDF, or open-source tools, you can compare them — as long as the comparison is technical, honest, and experience-based.

Balanced comparisons actually read as more authentic and human than forced praise.

Thought Leadership (Still Grounded)

This works well if you enjoy stepping back a bit.

Topics like:

  • PDF automation in real systems
  • PDF/A or PDF/UA compliance
  • Developer productivity when dealing with documents

IronPDF still needs to be central, but you’re free to discuss the bigger picture.

Where You Publish (This Part Matters)

You don’t submit the article directly to Iron Software.

Instead, you publish it publicly on one of the developer platforms listed below.

Bonus Platforms (Extra Visibility)

These platforms are marked as Bonus and typically carry higher visibility or authority:

  • TechCrunch (Bonus)
  • Stack Overflow Blog (Bonus)
  • Microsoft Community (Bonus)
  • The New Stack (Bonus)
  • InfoQ (Bonus)
  • Visual Studio Magazine (Bonus)
  • Redgate Simple Talk (Bonus)

Standard Developer Publishing Platforms

These are widely used, developer-focused platforms and are perfectly valid for submission:

  • C# Corner (Bonus)
  • HackerNoon
  • DEV.to
  • DZone
  • CodeProject
  • Software Engineering Daily
  • Baeldung
  • Built In
  • Towards Dev

Don't forget to include a backlink to www.ironpdf.com

This is important because you keep the article. It lives on your profile, under your name, regardless of the contest outcome.

How You Actually Submit Your Entry

After publishing your article, there’s one simple extra step:

  1. Go to 👉 https://ironsoftware.com/ironpdf-writing-contest/
  2. Submit your email address
  3. Paste the link to your published article

That’s it.

Even if you don’t win, your article stays public, indexed, and visible on platforms developers actually read — which is great for your portfolio, personal brand, and even job opportunities.

Prizes (Clear and Simple)

  • 🥇 $5,000 for the top article
  • 🥈 10 × $500 USD vouchers

Judging is based on:

  • Technical accuracy
  • Clarity and usefulness
  • Originality
  • Real value to other developers

In short: Would another developer actually learn something from this?

Why This Is Worth Doing (Even If You Don’t Win)

Good technical writing compounds.

A solid IronPDF article can:

  • Strengthen your public developer profile
  • Show real-world problem-solving skills
  • Help in interviews, freelancing, or dev advocacy roles
  • Stay discoverable long after the contest ends

Worst case: you gain a strong, published technical article.
Best case: you gain that and $5,000.

If you already work with PDFs, reporting, or document automation, this challenge aligns naturally with work you’re probably doing anyway. No gimmicks, no vague prompts — just write something genuinely useful and submit the link.

That alone makes it worth considering.

Copilot Ralph. Built a Desktop UI for GitHub Copilot.

2026-02-10 20:45:30

This is a submission for the GitHub Copilot CLI Challenge

What I Built

I built a desktop app for Copilot to work on requirements in a Ralph loop mode. The app has a plan mode and also creates git commits for each task.

For those who prefer video, you can watch the video demo; for readers, here's what you can do in the app:

  • Vibe code by defining tasks
  • Reference files in tasks with @fileName
  • Plan mode to plan your tasks before executing
  • Start building
  • View files changed in each task in a Git view

Demo

GitHub Repository

ashiqsultan / copilot-ralph

A desktop app to run Copilot in Ralph mode

Copilot Ralph

A desktop application to vibe code using GitHub Copilot. Define your tasks in a simple UI to build them in a Ralph loop way.

Copilot Ralph screenshot

Download

MacOS: Download link (universal dmg)

For checksum check Releases page.

Requirement

You need to have the Copilot CLI installed.

If not, please refer to GitHub Copilot CLI Installation.

What's included

  • Plan mode
  • Each requirement runs in a separate call, to prevent LLM context rot
  • Git commit on each task
  • View file changes for each task in a Git diff view
  • Full visibility into what's happening, as everything is stored as plain JSON and txt files

How it works

The requirements are stored in a plain JSON file. Plan mode is optional, but recommended: the entire plan is sent to the AI in a single call, and the resulting plan is stored for each requirement. Starting execution mode picks up the requirements one by one and builds the application.

Inspiration

Original post on the Ralph loop. This is the post that triggered the Ralph way of using Claude Code, and I thought of building a UI for this for Copilot because, TBH, Copilot has the cheapest pricing I could find right now for AI-assisted coding.

My app doesn’t strictly follow the OP, because I’ve moved some responsibilities out of the AI layer and into the orchestration layer. The system now handles the following tasks:

Architecture of copilot ralph

  • Linear selection of requirements
  • process.txt updates after each task
  • Creating Git commits

I made it this way so the system is predictable and to reduce the ambiguity of relying on the AI.
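
To make that concrete, here is a minimal Python sketch of the orchestration idea: pick requirements linearly, hand each one to the AI in its own call, then let the system (not the model) update process.txt and create the Git commit. The requirements.json layout and the run_copilot helper are hypothetical placeholders, not the app's actual code; see the repository for the real implementation.

import json
import subprocess

def run_copilot(prompt: str) -> None:
    # Hypothetical helper: hand a single task to the Copilot CLI.
    # The exact invocation is omitted here; see the repository for the real call.
    print(f"[copilot] {prompt}")

def ralph_loop(requirements_path: str = "requirements.json") -> None:
    # Requirements live in a plain JSON file; each entry is one task.
    with open(requirements_path, encoding="utf-8") as f:
        requirements = json.load(f)

    for index, req in enumerate(requirements, start=1):
        # Linear selection: each requirement gets its own call to avoid context rot.
        run_copilot(req["task"])

        # The orchestration layer, not the AI, records progress and commits.
        with open("process.txt", "a", encoding="utf-8") as log:
            log.write(f"done {index}: {req['task']}\n")
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", f"task {index}: {req['task']}"])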

My Experience with GitHub Copilot CLI

This is my first experience with AI in a CLI (yes, I don't have Claude Code). Before this I had used Copilot only inside VS Code, but using it in a CLI feels like more control. I liked how we can mention files with @ (which I also included in my app, btw). I also loved the --yolo flag and I was mostly starting my sessions in yolo mode. Almost forgot to mention: --resume was a saver, and the info on premium request usage at the end of each session was really helpful.

Choosing models

I must say this: choosing models in Copilot felt more like a survival game. As I'm on the free tier, I don't want to lose all my premium requests, and since my app is itself a UI for Copilot, testing the app also means additional calls. So I followed this strategy: Sonnet 4.5 for normal tasks and Opus only when required, so I don't use 3x the limit.

This is my typical flow

  • Start in yolo mode
  • Select models based on requirement.
  • Choose the files I think the AI should know for the task
  • First the Plan mode
  • Then Execute mode

If you like the project, a GitHub Star ⭐ would be great.

You can visit the webpage or GitHub to download and try the app.

When LiteLLM Becomes a Bottleneck: Exploring Gateway Alternatives

2026-02-10 20:40:48

LiteLLM has become a go-to starting point for teams building LLM-powered systems. At first, it feels like magic: a single library that connects multiple providers, handles routing, and abstracts away all the messy differences. For early experiments and small prototypes, it works so well that you barely notice what’s happening under the hood.

But as I started moving a LiteLLM-based system into production, the cracks began to show. Reliability, latency, memory usage, and long-running stability weren't just minor annoyances anymore; they were walls I kept running into.

I didn’t realize it at first, but LiteLLM alone wasn’t enough for the scale I was aiming for. That’s when I started looking into gateway-based architectures and the different ways teams solve these operational challenges.

Why LiteLLM Is Often the First Choice

LiteLLM solves a real and immediate problem: unifying access to multiple LLM providers behind a single interface. For teams experimenting with OpenAI, Anthropic, Azure, or others, it removes a lot of boilerplate.

It’s especially appealing because:

  • It’s provider-agnostic
  • It supports logging and routing
  • It integrates easily into existing Python-based stacks

For small teams or early prototypes, LiteLLM often works well enough that there’s no reason to look elsewhere.
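
The core appeal is a single call shape regardless of provider. A minimal sketch (model names are illustrative, and provider API keys are assumed to be configured in the environment):

from litellm import completion

# One interface for many providers; only the model string changes.
response = completion(
    model="gpt-4o",  # or a provider-prefixed name such as "anthropic/claude-3-5-sonnet"
    messages=[{"role": "user", "content": "Summarize this invoice for me."}],
)
print(response.choices[0].message.content)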

The issues tend to appear later.

What Starts to Break as Usage Grows

As LiteLLM deployments grow in traffic and uptime expectations, several recurring problems begin to show up. These aren't theoretical; many are reflected in open GitHub issues.

At the time of writing, LiteLLM has 800+ open issues, which is not unusual for a popular open-source project, but it does signal sustained operational complexity.

A few representative examples:

  • Issue #12067 – Performance and stability degradation under load
  • Issue #6345 – Memory-related issues accumulating over time
  • Issue #9910 – Logging and internal state affecting request handling

Individually, each issue can often be worked around. Collectively, they point to a deeper pattern.

Database in the Request Path

One recurring theme is that logging and persistence are tightly coupled to request handling. When a database sits directly in the request path, every call becomes vulnerable to:

  • I/O contention
  • Locking delays
  • Cascading slowdowns during spikes

As traffic increases, this can, ironically, turn observability into a performance liability.
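
To make the distinction concrete, here is an illustrative Python sketch (not LiteLLM's actual code; call_provider and database are hypothetical stand-ins). The first handler blocks on a database write before returning, while the second pushes the record onto an in-memory queue that a background thread drains, so slow logging storage never delays the caller.

import queue
import threading

def call_provider(payload: dict) -> dict:
    # Hypothetical upstream LLM call; stand-in for the real provider client.
    return {"payload": payload, "output": "..."}

class _Database:
    # Hypothetical persistence layer; stand-in for a real database client.
    def insert(self, table: str, row: dict) -> None:
        pass

database = _Database()
log_queue: "queue.Queue[dict]" = queue.Queue(maxsize=10_000)

def handle_request_sync(payload: dict) -> dict:
    result = call_provider(payload)
    database.insert("request_logs", result)  # DB write sits in the hot path
    return result

def handle_request_async(payload: dict) -> dict:
    result = call_provider(payload)
    try:
        log_queue.put_nowait(result)  # hot path never waits on storage
    except queue.Full:
        pass  # drop (or count) the log record, but keep serving traffic
    return result

def log_writer() -> None:
    while True:
        database.insert("request_logs", log_queue.get())  # persistence off the request path

threading.Thread(target=log_writer, daemon=True).start()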

Performance Degradation Over Time

Another common complaint is that services perform well initially, then slowly degrade:

  • Memory usage grows
  • Latency becomes inconsistent
  • Periodic restarts become necessary to maintain stability

For production systems expected to run continuously, this creates operational overhead and uncertainty.

Predictability Becomes Hard

At small scale, these issues are tolerable. At larger scale, they make capacity planning and SLOs difficult. Teams start compensating with:

  • Over-provisioning
  • Aggressive restarts
  • Disabling features like detailed logging

At that point, the original simplicity starts to erode.

Why These Problems Are Hard to Fix Incrementally

It’s tempting to assume these issues can be patched one by one. In practice, many of them stem from core architectural decisions.

LiteLLM is not primarily designed as a high-throughput, long-running gateway. It’s designed as a flexible abstraction layer. As usage grows, responsibilities accumulate:

  • Routing
  • Logging
  • Persistence
  • Retry logic
  • Provider normalization

Each additional responsibility increases pressure on the request path.

This is where the gateway model becomes relevant.

Gateway-Based Architectures as an Alternative

A gateway treats LLM access as infrastructure, not just a library. The core idea is separation of concerns:

  • Request handling stays fast and minimal
  • Logging and metrics are asynchronous
  • State is pushed out of the hot path
  • Long-lived stability is a first-class goal

This mirrors patterns already established in API gateways, service meshes, and reverse proxies.

Instead of embedding everything into the application runtime, the gateway becomes a dedicated control layer.
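
In practice, most gateways expose an OpenAI-compatible HTTP endpoint, so the application-side change is often just a base URL. A rough sketch, assuming a gateway running at a hypothetical local address:

from openai import OpenAI

# The app talks only to the gateway; routing, retries, and asynchronous logging
# happen inside the gateway, off the application's request path.
client = OpenAI(
    base_url="http://localhost:8080/v1",  # hypothetical gateway address
    api_key="unused-placeholder",         # real provider keys live in the gateway config
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello through the gateway"}],
)
print(response.choices[0].message.content)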

Bifrost as a Reference Implementation

Bifrost takes this gateway-first approach seriously. Rather than positioning itself as a drop-in wrapper, it’s designed to sit between applications and LLM providers as a standalone system.

For more detailed documentation and the GitHub repository, check these links:

Several design choices are particularly relevant when contrasting it with LiteLLM.

No Database in the Request Path

One of the most important differences is that Bifrost does not place a database in the request path.

Logs, metrics, and traces are collected asynchronously. If logging backends slow down or fail, requests continue flowing.

The result:

  • API latency remains stable under load
  • Observability does not penalize throughput
  • Failures are isolated instead of cascading

This single decision eliminates an entire class of performance issues.

Consistent Performance Over Time

Bifrost is built to run continuously without requiring periodic restarts. Memory usage is designed to remain stable rather than growing unbounded with traffic.

This matters operationally:

  • No “it was fast yesterday” surprises
  • Easier autoscaling
  • Predictable SLOs

For teams running gateways 24/7, this predictability often matters more than feature breadth.

Stable Memory Usage

Memory leaks and gradual accumulation are some of the hardest production problems to debug. Bifrost’s architecture prioritizes:

  • Bounded memory usage
  • Clear lifecycle management
  • Isolation between requests

That reduces the need for manual intervention and defensive restarts.

Alternatives Worth Considering

The LLM gateway space offers several viable approaches, each optimized for different environments and team needs. Here's a quick breakdown of my top choices:

Bifrost

Strong focus on performance, stability, and gateway fundamentals. Designed for teams that want a dedicated, production-grade LLM control plane.

  • High-throughput, low-latency request handling
  • Emphasis on reliability and operational stability
  • Clear separation between gateway and application logic
  • Better suited for backend-heavy or infra-driven teams

Cloudflare AI Gateway

Well integrated into Cloudflare’s ecosystem. A solid option if you’re already using Cloudflare for edge networking and observability.

  • Built-in rate limiting, logging, and analytics
  • Edge-first architecture with global distribution
  • Easy setup for existing Cloudflare users
  • Tighter coupling to Cloudflare services

Vercel AI Gateway

Optimized for Vercel-hosted applications. Convenient for frontend-heavy teams but more opinionated in deployment model.

  • Seamless integration with Vercel projects
  • Optimized for serverless and edge functions
  • Minimal configuration required
  • Less flexible outside the Vercel ecosystem

Kong AI Gateway

Built on top of Kong’s API gateway. Powerful, but often heavier and more complex to operate.

  • Leverages mature API gateway capabilities
  • Strong policy, security, and plugin ecosystem
  • Suitable for enterprises already running Kong
  • Higher operational overhead and learning curve

Each option represents a different balance between control, simplicity, scalability, and ecosystem lock-in; there's no universal “best,” only what fits your stack and team maturity.

Choosing the Right Tool Based on Scale

LiteLLM is often a good choice when:

  • You’re experimenting or prototyping
  • Traffic is low to moderate
  • You value flexibility over predictability

Gateway-based solutions make more sense when:

  • Traffic is sustained and growing
  • Latency and uptime matter
  • You want observability without performance penalties
  • You need long-running stability

Neither approach is universally “better.” They serve different stages of maturity.

Final Thoughts

LiteLLM plays an important role in the ecosystem, and its popularity reflects that. But as systems scale, architectural assumptions start to matter more than convenience.

Gateway-based solutions exist because teams consistently run into operational limits with long-running, high-throughput LLM workloads. Whether it’s Bifrost, Cloudflare AI Gateway, Vercel AI Gateway, or Kong AI Gateway, these platforms provide a predictable control layer, stable performance, and observability without slowing down requests.

If LiteLLM is starting to feel like a bottleneck rather than an enabler, that's usually a signal: not that you chose the wrong tool, but that your system has outgrown it.

At that point, evaluating gateway-based alternatives isn’t premature. It’s practical, and it helps you scale with confidence.

Building the Yamaokaya Map (Unofficial)

2026-02-10 20:35:42

About Yamaokaya Map (Unofficial)

Do you know about the wonderful ramen chain “Ramen Yamaokaya” in Japan?

Ramen Yamaokaya is a nationwide ramen chain founded in 1988 in Ushiku City, Ibaraki Prefecture, Japan. It's known for its rich tonkotsu broth and for allowing you to freely customize noodle firmness, flavor intensity, and fat content. Many locations are open 24 hours, making it beloved by truck drivers and night shift workers. I myself have been a fan for over 20 years. My home store is the legendary “Minami 2-jo Store” in Sapporo. I always order the shoyu ramen with less fat.

img

Last year's AWS Summit Japan 2025 inspired me to think about how I could support Yamaokaya within my area of expertise.


So, I built an unofficial web application called “Yamaokaya Map.” This map lets you view store information for Ramen Yamaokaya locations nationwide.

https://yama.dayjournal.dev

This app supports PWA, so you can add it to your smartphone's home screen.

How to add:

  • iOS (Safari): Share button → “Add to Home Screen.”
  • Android (Chrome): Menu → “Add to Home Screen.”

Ramen Yamaokaya Store Types

Ramen Yamaokaya has four store types.

1. Ramen Yamaokaya
The standard Yamaokaya. Offers classic tonkotsu-based menu items. There are over 150 locations nationwide. I always order the Shoyu Ramen.

2. Niboshi Ramen Yamaokaya
A specialty shop serving niboshi (dried sardine) broth ramen. You can enjoy a different flavor profile from the standard Yamaokaya. I'm not a fan of niboshi, so I've actually never been.

3. Miso Ramen Yamaokaya
A shop specializing in miso ramen, known for its rich miso soup. Here, I recommend ordering the Shoyu Ramen deliberately. There are only 3 locations, all in Hokkaido.

4. Gyoza no Yamaokaya
A new concept store focusing on gyoza. There's only one location in all of Japan, located in Sapporo.

The map released this time uses icons to distinguish these four store types, and you can toggle their display on or off via layer switching.

Advance Preparation

Contacting the Official Website

Since I was going to perform scraping this time, I checked with the official website beforehand. They gave me a very warm response. Thanks to that, I immediately wanted to go eat there again.

img

Data Acquisition and Processing

This time, I'll use Python for scraping. I'll combine Playwright, pandas, and geopy to acquire and process the data.

  • Scraping: Playwright
  • Data Processing: pandas
  • DMS→DD Conversion: geopy
yamaokaya-data
└── script
    ├── scrape_yamaokaya.py
    ├── latlon_yamaokaya.py
    ├── column_yamaokaya.py
    ├── csv2geojson.py

Map Application

First, fork the Amazon Location Service v2 starter template. Then, add the files and code needed for the Yamaokaya Map.

MapLibre GL JS & Amazon Location Service Starter

Execution environment

  • node v24.4.1
  • npm v11.4.2
yamaokaya-map
├── LICENSE
├── README.md
├── dist
│   └── index.html
├── img
│   ├── README01.gif
│   ├── README02.png
│   └── README03.png
├── index.html
├── package-lock.json
├── package.json
├── public
│   ├── manifest.json
│   ├── data
│   │   ├── yama.geojson
│   │   ├── niboshi.geojson
│   │   ├── miso.geojson
│   │   └── gyouza.geojson
│   └── icons
│       ├── yama.png
│       ├── niboshi.png
│       ├── miso.png
│       └── gyouza.png
├── src
│   ├── main.ts
│   ├── style.css
│   └── vite-env.d.ts
├── tsconfig.json
└── vite.config.ts

Install the package

npm install

Publishing Settings in Amplify Gen2

Using the starter repository I forked on GitHub, I'll publish it with the Amplify Console (Gen2), referencing an article I wrote previously.

https://memo.dayjournal.dev/memo/aws-amplify-016

Data Acquisition and Processing

Scraping

The script scrapes store information from the official website. Since the official site dynamically generates content, I use Playwright to control the browser and retrieve the data. From each store's detail page, I extract the store name, address, phone number, business hours, parking information, seat types, shower room availability, the detail page URL, and the store's location information.

Example of retrieving the store name

from playwright.sync_api import sync_playwright
import pandas as pd

def scrape_yamaokaya_shops():
    shops = []
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        context = browser.new_context(
            user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
        )
        context.set_default_timeout(10000)
        context.set_default_navigation_timeout(10000) 
        page = context.new_page()
        main_url = "https://www.yamaokaya.com/shops/"
        page.goto(main_url, wait_until='networkidle', timeout=10000)
        page.wait_for_timeout(5000)
        shop_links = page.eval_on_selector_all(
            'a[href*="/shops/"]',
            'els => [...new Set(els.map(el => el.href).filter(href => /shops\\/\\d+/.test(href)))]'
        )
        for url in shop_links:
            try:
                page.goto(url, wait_until='domcontentloaded', timeout=10000)
                page.wait_for_timeout(5000)
                name = page.evaluate("""() => {
                    const h = document.querySelector('h2, h1, .shop-name');
                    return h?.innerText?.trim() || document.title.split('|')[0].trim();
                }""")
                shops.append({'url': url, 'name': name or '不明'})
            except Exception as e:
                shops.append({'url': url, 'name': 'エラー'})
        browser.close()    
    return pd.DataFrame(shops)

if __name__ == "__main__":
    df = scrape_yamaokaya_shops()
    df.to_csv('yamaokaya_shops.csv', index=False, encoding='utf-8-sig')

DMS→DD Conversion

The location data scraped is in DMS (degrees, minutes, seconds) format. To display it with the map library, I convert it to DD format (decimal degrees). I use geopy to handle multiple conversion patterns.

Example of DMS→DD conversion

from typing import Tuple
from geopy import Point
# Before conversion: "43°03'28.6""N 141°21'22.2""E"
def _convert_with_geopy(dms_string: str) -> Tuple[float, float]:
    cleaned = dms_string.replace('""', '"')
    point = Point(cleaned)
    return point.latitude, point.longitude

Column Name Change

Before converting the data to GeoJSON, I change Japanese column names to English.

Example of column name change

column_mapping = {
    '店舗名': 'store_name',
    '住所': 'address',
    '電話番号': 'phone_number',
    '営業時間': 'business_hours',
    '駐車場': 'parking',
    '座席の種類': 'seating_types',
    'シャワー室': 'shower_room',
    'その他': 'other_info'
}
df_renamed = df.rename(columns=column_mapping)

CSV to GeoJSON Conversion

Finally, I convert the CSV to GeoJSON format. Files are output separately for each store type.

Example of CSV to GeoJSON conversion

import json
import pandas as pd
def create_geojson_features(df):
    features = []
    for _, row in df.iterrows():
        properties = {}
        for col in df.columns:
            if col not in ['lat', 'lon']:
                value = row[col]
                if pd.isna(value):
                    properties[col] = None
                else:
                    properties[col] = str(value)
        feature = {
            "type": "Feature",
            "geometry": {
                "type": "Point",
                "coordinates": [row['lon'], row['lat']]
            },
            "properties": properties
        }
        features.append(feature)
    return features

GeoJSON output result

{
    "type": "Feature",
    "geometry": {
    "type": "Point",
    "coordinates": [
        141.3561,
        43.0579
    ]},
    "properties": {
        "store_name": "ラーメン山岡家 南2条店",
        "details": "https://www.yamaokaya.com/shops/1102/",
        "address": "札幌市中央区南2条西1丁目6-1",
        "phone_number": "(011) 242-4636",
        "business_hours": "5:00-翌4:00",
        "parking": "なし",
        "seating_types": "カウンター席: 13",
        "shower_room": "なし",
        "other_info": "まちなかのちいさなお店です。"
    }
},

Creating the Map Application

Setting the Background Map

For this project, I use MapLibre GL JS as the map library and Amazon Location Service for the background map.

import './style.css';
import 'maplibre-gl/dist/maplibre-gl.css';
import 'maplibre-gl-opacity/dist/maplibre-gl-opacity.css';
import maplibregl from 'maplibre-gl';
import OpacityControl from 'maplibre-gl-opacity';

const region = import.meta.env.VITE_REGION;
const mapApiKey = import.meta.env.VITE_MAP_API_KEY;
const mapName = import.meta.env.VITE_MAP_NAME;

const map = new maplibregl.Map({
    container: 'map',
    style: `https://maps.geo.${region}.amazonaws.com/maps/v0/maps/${mapName}/style-descriptor?key=${mapApiKey}`,
    center: [138.0000, 38.5000],
    zoom: baseZoom,
    maxZoom: 20
});

Layer Configuration

I set up layers for each store type and assign custom icons to them.

interface LayerConfig {
    name: string;
    iconPath: string;
    iconId: string;
    visible: boolean;
}
const layerConfigs: Record<string, LayerConfig> = {
    'gyouza': {
        name: '餃子の山岡家',
        iconPath: 'icons/gyouza.png',
        iconId: 'gyouza-icon',
        visible: true
    },
    'miso': {
        name: '味噌ラーメン山岡家',
        iconPath: 'icons/miso.png',
        iconId: 'miso-icon',
        visible: true
    },
    'niboshi': {
        name: '煮干しラーメン山岡家',
        iconPath: 'icons/niboshi.png',
        iconId: 'niboshi-icon',
        visible: true
    },
    'yama': {
        name: 'ラーメン山岡家',
        iconPath: 'icons/yama.png',
        iconId: 'yama-icon',
        visible: true
    }
};

Adding GeoJSON Layers

I add the GeoJSON data as layers. I configure the icon size to change based on the zoom level.

function addGeoJsonLayer(id: string, config: LayerConfig, data: GeoJSONData): void {
    map.addSource(id, {
        type: 'geojson',
        data: data
    });
    map.addLayer({
        id: id,
        type: 'symbol',
        source: id,
        layout: {
            'icon-image': config.iconId,
            'icon-size': [
                'interpolate',
                ['linear'],
                ['zoom'],
                6, baseIconSize * 0.5,
                10, baseIconSize * 0.6,
                14, baseIconSize * 0.7,
                18, baseIconSize * 0.8
            ],
            'icon-allow-overlap': true,
            'icon-ignore-placement': false,
        },
        paint: {
            'icon-opacity': 1.0,
        }
    });
}

Implementing Popups

Clicking a store icon displays the store information in a popup. It shows the address, phone number, business hours, parking information, seating types, etc.

function createPopupContent(props: StoreProperties): string {
    const contentParts: string[] = [];

    if (props.store_name) {
        contentParts.push(`<h3>${props.store_name}</h3>`);
    }

    const details: string[] = [];
    if (props.address) {
        details.push(`<strong>住所:</strong> ${props.address}`);
    }
    if (props.phone_number) {
        details.push(`<strong>電話:</strong> <a href="tel:${props.phone_number}">${props.phone_number}</a>`);
    }
    // ...
}

Layer Toggling

Implemented layer toggling (show/hide) using maplibre-gl-opacity.

const overLayers = {
    'yama': 'ラーメン山岡家',
    'niboshi': '煮干しラーメン山岡家',
    'miso': '味噌ラーメン山岡家',
    'gyouza': '餃子の山岡家',
};

const opacityControl = new OpacityControl({
    overLayers: overLayers,
    opacityControl: false
});
map.addControl(opacityControl, 'bottom-left');

Summary

This time, I built the "Yamaokaya Map (Unofficial)" using a structure that includes Playwright for scraping, geopy for DMS→DD conversion and CSV→GeoJSON conversion, and map display via MapLibre GL JS and Amazon Location Service. Visualizing this on a map reveals new insights. The northernmost store is in Wakkanai. Stores are located in surrounding areas rather than central Tokyo. While they have expanded into Kyushu, there are no stores in Shikoku. And there is only one Gyoza no Yamaokaya store nationwide. This way, Ramen Yamaokaya's store opening strategy becomes clear.

img

Please use this when searching for a nearby store or looking for Ramen Yamaokaya while traveling!

Best Browser Split Screen Extensions for 2026: A Practical Guide

2026-02-10 20:33:56

Introduction

Browser split screen lets you view multiple web pages at the same time, so you can compare, reference, copy and paste without constant tab switching.

People use split screen in a browser for three main reasons:

  1. Compare and decide: pricing pages, product docs, research papers, reviews
  2. Work while referencing: write in one pane, read sources in another
  3. Monitor and operate: dashboards, chat, tickets, analytics, trading, alerts

Who benefits most:

  1. Product managers, founders, and marketers doing competitive research
  2. Engineers reading docs while coding
  3. Analysts and investors tracking multiple sources
  4. Students reading and taking notes side by side
  5. Anyone on a single monitor who wants less tab switching

In 2026, the best split screen tool is the one that matches your workflow style. There are two big approaches:

  1. Native split view inside Chrome, clean and reliable, but limited to 2 panes
  2. Extensions that either create multiple browser windows, or create multiple panes inside one tab

Chrome Split View

Official reference: Google Chrome Help: Use split view for multitasking

Chrome Split View is Chrome's built in split view feature. It lets you display 2 websites within a single Chrome window. One side is active, the other side is inactive, and most toolbar actions apply to the active side only.

Demo of Chrome Split View

Features

  1. Two pane split inside one Chrome window
  2. Open a tab in split view via tab right click
  3. Open a link in split view via link right click
  4. Drag and drop a tab or link to the left or right edge to create split view
  5. Manage split view, close left or right, reverse position, separate split view
  6. Keyboard shortcut support (macOS: ⌘ + Option + n, Windows and Linux: Shift + Alt + n)

Pros

  1. No extension needed, very low setup cost
  2. Reliable, because it is built in
  3. Works with almost all sites, including sites that block embedding
  4. Great as a fallback when extensions fail

Cons

  1. Only 2 panes
  2. Many actions apply to the active side only, which can feel limiting for heavy multitasking
  3. Limited layout control compared with power user extensions

Comment

Chrome Split View is the baseline in 2026, and you can treat it as the default for simple two pane tasks and as the fallback for strict sites. If your daily workflow needs more than two panes, repeatable layouts, or a true research workspace, you will outgrow it quickly.

Dualless

Official: Dualless on Chrome Web Store

Stats: Last Updated 2023-12-29, Users 1M, Rating 3.95/5.0

Dualless is a classic split screen extension for people without a second monitor. It splits by creating separate browser windows with preset ratios.

Demo of Dualless

Features

  1. One click split with preset ratios
  2. Creates two windows sized to your screen
  3. Merge windows back

Pros

  1. Very easy to start, minimal learning
  2. Fine for simple two window workflows on one monitor
  3. Large user base, widely known

Cons

  1. Creates separate windows, which can quickly lead to window clutter
  2. Preset ratios are limited, and do not feel flexible for real work
  3. Users frequently report crashes or instability
  4. Users frequently report side effects like losing tab groups or pinned tabs
  5. Window pairing and merging can feel unpredictable
  6. Extra browser UI in each window wastes screen space

Comment

If your goal is two pages side by side, Chrome Split View is simpler and cleaner. Dualless only makes sense if you specifically want two separate windows and you accept the clutter tradeoff.

Tab Resize

Official: Tab Resize on Chrome Web Store

Stats: Last Updated 2024-06-11, Users 1M, Rating 4.31/5.0

Tab Resize is a popular split screen tool that resizes the current tab and tabs to the right into layouts across separate windows. It is strong for multi monitor setups, shortcuts, and quick preset layouts.

Demo of Tab Resize

Features

  1. Multiple layout presets
  2. Multi monitor support features
  3. Shortcut keys
  4. Undo last resize
  5. Can resize only highlighted tabs

Pros

  1. Fast and productive for people already comfortable managing multiple windows
  2. Supports multiple monitors
  3. Works well with sites that block embedding because each pane is a real window

Cons

  1. Not a true multi pane workspace inside one tab
  2. Still creates window clutter, especially if you repeat this many times per day
  3. Many users dislike that it opens new windows instead of splitting inside one window
  4. Does not work in fullscreen mode
  5. Users report bugs such as hotkeys failing or the popup lagging

Comment

Tab Resize is best when you intentionally want window tiling, especially across multiple monitors. If you want a single tab workspace that stays organized, it is the wrong tool.

PageVS

Official: PageVS on Chrome Web Store

Stats: Last Updated 2025-12-19, Users 669, Rating 4.67/5.0

PageVS is a new and modern split screen extension that turns one tab into a multi pane workspace. Instead of spawning many windows, it keeps everything inside a single tab and lets you freely arrange panes.

Demo of PageVS

Features

  1. Up to 36 resizable panes in one tab
  2. Split modes: vertical, horizontal, auto
  3. Freeform drag and resize, not locked to a grid
  4. Scroll sync to keep pages aligned
  5. Cross pane highlight to compare text across panes
  6. Layout bookmarks and one click restore
  7. Light mode, dark mode, experimental reader mode

Pros

  1. Best overall for heavy research workflows, because it avoids window clutter
  2. Most flexible layout control among common options
  3. Layout bookmarking enables repeatable workflows, for example daily dashboards, competitor tracking, multi doc reading
  4. Strong early reviews for single screen productivity, for example: "best Chrome extension… single screen setup… significantly boosting my productivity."

Cons

  1. A small number of sites may not work well in an in tab multi pane model, especially sites that block embedding or require special security policies; Gmail is a common example
  2. If you only ever need two panes with zero customization, Chrome Split View is simpler

Comment

PageVS is my personal favorite for doing research, comparisons, writing, and monitoring on a single screen. Use Chrome Split View only as a backup for the small number of sites that don't work well inside panes.

Split Screen for Google Chrome

Official: Split Screen for Google Chrome on Chrome Web Store

Stats: Last Updated 2023-10-17, Users 300K, Rating 3.75/5.0

Split Screen for Google Chrome focuses on splitting and resizing browser windows into sections, often marketed for meetings and presentations. It is primarily a window management tool.

Demo of Split Screen for Google Chrome

Features

  1. Two click window split and resize
  2. Ratio adjustments
  3. Multi monitor related features

Pros

  1. Simple for quick meeting layouts and basic window arrangement
  2. Easy to understand, minimal setup

Cons

  1. Not an in tab multi pane workspace
  2. Reviews often mention weak multi monitor behavior and that it feels redundant versus OS window management
  3. Less flexible than power tools for research workflows

Comment

For two pages, Chrome Split View usually replaces it. For serious research workflows, PageVS replaces it. This is a niche tool if you mainly want a simple window arrangement.

Split Screen on Mac

Official: Split Screen on Mac on Chrome Web Store

Stats: Last Updated 2025-04-05, Users 6K, Rating 3.93/5.0

Split Screen on Mac provides preset layouts and utilities for moving and resizing windows. It aims to reduce manual window resizing effort.

Demo of Split Screen on Mac

Features

  1. Preset vertical, horizontal, quadrant layouts
  2. Two click operations and shortcuts

Pros

  1. Better guided layouts than older window split tools
  2. Simple for window based workflows

Cons

  1. Still window based, so heavy use can still create clutter
  2. Often requires extra steps, like creating or preparing windows before applying a layout

Comment

If you prefer window based splitting and want preset snapping, it can help. For most users, Chrome Split View is enough for two panes, and PageVS is better when you need more than two panes.

Tile Tabs WE

Official: Tile Tabs WE on Chrome Web Store

Stats: Last Updated 2023-03-18, Users 80K, Rating 3.62/5.0

Tile Tabs WE is a powerful tiling tool that can arrange tabs into tiled sub windows, with many customization options and saved layouts.

Demo of Tile Tabs WE

Features

  1. Tile many tabs into a layout
  2. High customization, keyboard shortcuts, context menus
  3. Saved layouts and UI hiding options
  4. Often used for monitoring workflows

Pros

  1. Very customizable, good for advanced users who want tiling and saved layouts
  2. Useful for multi screen monitoring workflows

Cons

  1. Learning curve is higher than most alternatives
  2. Still window based, so heavy use can still create clutter
  3. Instability and broken features are recurring complaints for some users
  4. Older update history can be a maintenance risk if you want active development

Comment

Tile Tabs WE is for advanced tiling fans who can tolerate complexity and occasional rough edges. If you want a modern, reliable, single tab workspace with flexible layouts and less clutter, PageVS is the stronger default.

Verdict

If you want the best split screen experience in 2026, pick based on your workflow:

  1. Best overall: PageVS on Chrome Web Store. Best for research, writing, analysis, and anyone who wants a true multi pane workspace inside one tab.
  2. Best built in fallback: Chrome Split View (official help). Best when you only need two panes, or when a site does not work well inside an in tab pane model.
  3. Best for window tiling power users: Tab Resize on Chrome Web Store. Best if you already manage many windows and want quick presets and shortcuts.

For most people on a single monitor, PageVS gives the biggest productivity boost because it removes window clutter while adding real layout control.