
Exploring Ktor: A Modern Networking Framework for Kotlin

2026-01-07 15:58:13

Ktor is an asynchronous networking framework developed by JetBrains, designed for building both server and client applications in Kotlin. While Retrofit has been the go-to standard for Android networking for many developers, Ktor is widely seen as the direction Kotlin networking is heading, particularly for multiplatform projects.

The traditional reliance on Retrofit has placed many Android developers in a comfort zone. Although Retrofit offers numerous benefits, this article will not delve into those advantages or compare Ktor and Retrofit directly. Instead, we will focus on how to effectively use Ktor in your application.

Ultimately, the choice between Ktor and Retrofit depends on project requirements, scope, and the developer’s preferences. Each framework has its own pros and cons, so let’s dive into how to set up Ktor in your project.

Project Setup and Dependencies

  1. Add the Kotlin Serialization Plugin

To get started, add the Kotlin Serialization plugin to your project’s root build.gradle.kts file:
plugins {
    id("org.jetbrains.kotlin.plugin.serialization") version "2.1.20"
}
  2. Add Ktor Dependencies

Next, in your app-level build.gradle.kts, include the necessary Ktor dependencies:

plugins {
    id("kotlinx-serialization")
}

dependencies {
    implementation(platform("io.ktor:ktor-bom:3.1.2"))
    implementation("io.ktor:ktor-client-android")
    implementation("io.ktor:ktor-client-serialization")
    implementation("io.ktor:ktor-client-logging")
    implementation("io.ktor:ktor-client-content-negotiation")
    implementation("io.ktor:ktor-serialization-kotlinx-json")
}

Creating a Reusable HttpClient
Before making requests, it’s essential to create a reusable HttpClient. This can be done using a singleton pattern or dependency injection (DI) frameworks like Koin or Hilt.

Benefits of a Reusable Client
Persistent Connections: A reusable client maintains persistent connections (keep-alive), which reduces latency for subsequent requests by avoiding the overhead of establishing new connections.
Kotlin Multiplatform Compatibility: If you are using Kotlin Multiplatform, reusing a Ktor client allows you to share networking logic across Android, iOS, and other platforms, minimizing code duplication and ensuring consistent behavior.
Resource Management: Managing a single client simplifies cleanup (e.g., calling client.close()) when your app or module is shutting down, preventing resource leaks.
Example of Creating a Client Instance
Here’s how you can create a client instance:

private const val NETWORK_TIME_OUT = 15_000L

val httpClient = HttpClient(Android) {
    install(ContentNegotiation) {
        json(Json {
            prettyPrint = true
            isLenient = true
            ignoreUnknownKeys = true
        })
    }
    install(HttpTimeout) {
        requestTimeoutMillis = NETWORK_TIME_OUT
        connectTimeoutMillis = NETWORK_TIME_OUT
        socketTimeoutMillis = NETWORK_TIME_OUT
    }
    install(Logging) {
        logger = object : Logger {
            override fun log(message: String) {
                Log.v("KtorLogger", message)
            }
        }
        level = LogLevel.ALL
    }
    defaultRequest {
        contentType(ContentType.Application.Json)
        accept(ContentType.Application.Json)
    }
}

Making Requests with Ktor
Example: GET Request
To fetch a list of locations, you can use the following function:

suspend fun fetchLocationsList(): List<Location> {
    return httpClient.get("https://api.example.com/locations").body()
}

Example: POST Request
To create a new location, you can use this function:

suspend fun createLocationList(location: Location): HttpResponse {
    return httpClient.post("https://api.example.com/locations") {
        contentType(ContentType.Application.Json)
        setBody(location)
    }
}

Creating Data Models
When creating data models, you need to annotate them with @Serializable from kotlinx.serialization. Here’s an example:

@Serializable
data class Location(
    val locationId: Int,
    val locationName: String,
    val locationLatLong: String
)

Note
The serialization plugin has already been added in the app-level build.gradle.kts.

Logging and Debugging
Ktor provides a logging feature that allows you to monitor requests and responses in Logcat. It is also highly compatible with DI frameworks like Koin and Hilt. For logging purposes, you can integrate libraries such as Timber or Klogging.

Final Thoughts
Choosing between Retrofit and Ktor depends on your project’s requirements, scope, and your team’s familiarity with the tools. Retrofit remains a solid, reliable choice for traditional Android apps, while Ktor excels in modern, Kotlin-first, and multiplatform environments. If you’re ready to explore Ktor, the steps above will help you get started quickly and efficiently.

Stay tuned for future articles where we’ll dive deeper into integrating Ktor with dependency injection (Koin) and advanced logging using Timber.

Ready to modernize your Android networking stack? Give Ktor a try and experience the flexibility of Kotlin-first development.

Building a Government Tender Intelligence System with Python: Lessons from the Real World

2026-01-07 15:56:00

Government tenders are one of the largest structured data sources available in India. Every day, thousands of new tenders are published across central, state, and PSU portals. Yet for most businesses and developers, this data remains noisy, fragmented, and hard to use.

This article is written for developers who are curious about how tender intelligence platforms are actually built, what technical challenges exist, and how Python-based systems can turn raw tender listings into decision-ready signals. The ideas here come from real-world problems faced while working on platforms like Bidsathi, which focuses on making tender data usable instead of overwhelming.

Why Government Tender Data Is a Hard Engineering Problem

At first glance, tenders look simple. Title, department, value, deadline. In reality, tender data is one of the messiest datasets you will ever work with.

Here’s why:

Data is spread across hundreds of portals

No standard schema exists

PDFs dominate instead of structured APIs

Titles are inconsistent and often misleading

Updates and corrigenda change data after publishing

From a systems perspective, tenders behave like a constantly mutating dataset. If you scrape once and forget, your data becomes wrong very quickly.

This is where most naive scraping projects fail.

Designing a Tender Data Pipeline (High-Level Architecture)

A reliable tender intelligence system usually has four layers:

Collection layer – scraping or ingestion

Normalization layer – cleaning and structuring

Intelligence layer – filtering, scoring, tagging

Delivery layer – alerts, dashboards, exports

Platforms like Bidsathi focus heavily on layers two and three because raw data alone does not help users make decisions.

For developers, the real learning happens beyond scraping.

Scraping Is the Easy Part (Relatively)

Python is still the most practical language for tender scraping due to its ecosystem.

Common tools (a minimal scraping sketch follows the list):

requests + BeautifulSoup for static pages

Selenium or Playwright for JS-heavy portals

pdfplumber or tabula-py for BOQ PDFs
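
To make the first of these concrete, here is a minimal sketch using requests and BeautifulSoup. The portal URL, table selector, and column order are placeholders; every real portal needs its own parsing logic.

import requests
from bs4 import BeautifulSoup

# Placeholder URL and selectors -- every portal differs.
PORTAL_URL = "https://example-tender-portal.gov.in/latest-tenders"

def fetch_tender_rows(url: str) -> list[dict]:
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    tenders = []
    # Assume each tender sits in a table row; adjust selectors per portal.
    for row in soup.select("table.tender-list tr"):
        cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
        if len(cells) >= 3:
            tenders.append({
                "title": cells[0],
                "department": cells[1],
                "closing_date": cells[2],
            })
    return tenders

if __name__ == "__main__":
    for tender in fetch_tender_rows(PORTAL_URL):
        print(tender)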

The mistake many developers make is assuming scraping equals value. It does not.

If you scrape 10,000 tenders a day but cannot answer “which 20 matter to me,” you have built noise at scale.

This is exactly the problem Bidsathi tries to solve downstream.

Normalizing Tender Data: Where Real Work Begins

After scraping, you typically face:

20 ways of writing the same department name

Dates in multiple formats

Values written in words, numbers, or missing

Locations buried inside descriptions

A practical approach (see the normalization sketch after this list):

Maintain controlled vocabularies for departments and sectors

Convert all dates to UTC timestamps

Standardize values into numeric ranges

Extract entities using rule-based NLP
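
A rough sketch of what this normalization can look like in Python. The department aliases, accepted date formats, and value parsing are illustrative stand-ins; real vocabularies and rules grow much larger.

from datetime import datetime, timezone
import re

# Tiny controlled vocabulary -- real mappings run into the hundreds.
DEPARTMENT_ALIASES = {
    "pwd": "Public Works Department",
    "public works dept": "Public Works Department",
    "p.w.d.": "Public Works Department",
}

DATE_FORMATS = ("%d-%m-%Y", "%d/%m/%Y", "%Y-%m-%d")

def normalize_department(raw: str) -> str:
    key = re.sub(r"\s+", " ", raw.strip().lower())
    return DEPARTMENT_ALIASES.get(key, raw.strip())

def normalize_date(raw: str) -> datetime | None:
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    return None  # leave the gap visible instead of guessing

def normalize_value(raw: str) -> float | None:
    digits = re.sub(r"[^\d.]", "", raw or "")
    return float(digits) if digits else None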

This step alone often takes more effort than scraping itself.

From an engineering mindset, normalization is loss minimization. Every inconsistency you leave behind multiplies downstream errors.

Adding Intelligence: From Data to Signals

This is where tender platforms separate themselves from raw listing sites.

Some intelligence techniques that actually work (a scoring sketch follows the list):

Keyword-based sector tagging

Value-based filtering (micro vs large tenders)

Deadline urgency scoring

Location relevance matching

Historical buyer behavior analysis
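
To make the idea concrete, here is a toy relevance scorer combining keyword tagging, value-band matching, and deadline urgency. The keywords, weights, and field names are made up for illustration and are not Bidsathi's actual logic.

from datetime import datetime, timezone

SECTOR_KEYWORDS = {
    "construction": ["road", "bridge", "building"],
    "it_services": ["software", "portal", "data centre"],
}

def tag_sectors(title: str) -> list[str]:
    lowered = title.lower()
    return [sector for sector, words in SECTOR_KEYWORDS.items()
            if any(word in lowered for word in words)]

def relevance_score(tender: dict, user: dict) -> float:
    score = 0.0
    # Sector match carries the most weight.
    if set(tag_sectors(tender["title"])) & set(user["sectors"]):
        score += 0.5
    # Value band match.
    if user["min_value"] <= (tender.get("value") or 0) <= user["max_value"]:
        score += 0.3
    # Deadline urgency: closer deadlines score higher, up to 0.2.
    days_left = (tender["closing_date"] - datetime.now(timezone.utc)).days
    if 0 <= days_left <= 30:
        score += 0.2 * (1 - days_left / 30)
    return score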

For example, Bidsathi does not just show tenders. It highlights which tenders are actually relevant based on industry, value band, and timeline. That relevance layer is what users pay attention to.

As a developer, this is where your logic starts influencing business outcomes.

Automating Alerts Instead of Dashboards

One counterintuitive insight: most users don’t want dashboards. They want timely alerts.

Engineers often overbuild UIs when a simple rule engine + notification system would deliver more value.

A common workflow (sketched in code after the list):

Run daily ingestion jobs

Apply filtering rules per user

Trigger email or WhatsApp alerts

Provide deep links to full tender details
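
A stripped-down sketch of that rule engine. The rule shape, field names, and the send_email placeholder are assumptions; in practice you would plug in your own notification service (SMTP, SES, a WhatsApp API, and so on).

def matches_rules(tender: dict, rules: dict) -> bool:
    # Each user stores simple filter rules: sectors, value band, states.
    return (
        bool(set(tender["sectors"]) & set(rules["sectors"]))
        and rules["min_value"] <= tender["value"] <= rules["max_value"]
        and tender["state"] in rules["states"]
    )

def send_email(to: str, subject: str, body: str) -> None:
    # Placeholder: swap in SMTP, SES, or a WhatsApp API client here.
    print(f"To: {to}\nSubject: {subject}\n{body}\n")

def run_daily_alerts(tenders: list[dict], users: list[dict]) -> None:
    for user in users:
        hits = [t for t in tenders if matches_rules(t, user["rules"])]
        if hits:
            body = "\n".join(f"{t['title']} -> {t['detail_url']}" for t in hits)
            send_email(user["email"], subject="Today's matching tenders", body=body)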

This “push over pull” model is central to platforms like Bidsathi, because procurement decisions are time-sensitive.

From a psychological angle, reducing cognitive load increases action rates.

SEO and Programmatic Pages: A Developer’s Blind Spot

Tender platforms also face a search visibility challenge. Each tender is a potential long-tail search query.

But mass-generating pages without quality control leads to:

Crawled but not indexed pages

Duplicate intent issues

Thin content penalties

The engineering fix is not “more content,” but smarter templates:

Structured summaries

Contextual internal linking

Freshness indicators

Clear canonical logic

This is one reason Bidsathi focuses on curated, structured tender pages instead of dumping raw scraped text.

Developers working on SEO-heavy platforms need to think like search engines, not just coders.

What Developers Usually Underestimate

If you are thinking of building something similar, here are the most underestimated challenges (a small deduplication sketch follows the list):

Handling corrigenda and updates cleanly

Avoiding duplicate tenders across portals

Maintaining historical accuracy

Balancing crawl speed vs site stability

Keeping users from information overload
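
As one example, duplicates across portals can be caught with a canonical fingerprint built from normalized fields. This is only a sketch of the idea; the chosen fields and the in-memory set are illustrative.

import hashlib

def tender_fingerprint(tender: dict) -> str:
    # Build a canonical key from fields that survive across portals,
    # assuming closing_date was already normalized to a datetime.
    title = " ".join(tender["title"].lower().split())
    closing = tender["closing_date"].strftime("%Y-%m-%d")
    department = tender.get("department", "").lower().strip()
    raw = f"{title}|{closing}|{department}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

seen: set[str] = set()

def is_duplicate(tender: dict) -> bool:
    fingerprint = tender_fingerprint(tender)
    if fingerprint in seen:
        return True
    seen.add(fingerprint)
    return False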

None of these are solved with one clever script. They require systems thinking.

Why Tender Intelligence Is a Long-Term System, Not a Side Project

Tender data compounds. The longer your system runs, the more historical context you gain:

Which departments delay awards

Which buyers favor certain value ranges

Seasonal tender patterns

Industry-wise opportunity cycles

Platforms like Bidsathi benefit from this compounding effect. Each day of clean data makes the next day more valuable.

From a mathematical standpoint, intelligence platforms have increasing returns over time, unlike one-off scrapers.

Final Thoughts for Developers

If you are a developer interested in civic tech, procurement data, or real-world automation problems, government tenders are a goldmine of complexity.

But scraping is just step one.

The real engineering challenge lies in turning chaotic public data into clear, timely, and actionable signals. That is where platforms like Bidsathi focus their effort, and that is where developers can build systems that actually matter.

If you enjoyed this breakdown, you can explore how tender intelligence is implemented in practice at bidsathi.com, or use these ideas to build your own procurement data pipeline.

Reflexes, Cognition, and Thought

2026-01-07 15:55:39

In my previous posts, I focused on sharing the basics—making LEDs blink and understanding wiring. Today’s adventure was about expanding on what my droid will actually need to function.

The droid will have a multi-layered "brain." I’ve been working on the Reflex Layer with the Arduino Uno for prototyping. In this post, I’ll review what I’ve learned there and explore the Cognition Layer using computer vision and local AI.

The Reflex Layer: Data Types & Geometry

Before the droid can walk, it has to have a way to traverse the world (or at least my home). I used the Arduino to test the "logic" of movement before ever plugging in a motor.

Visual Odometer

I built a Visual Odometer using 4 LEDs to represent 4 bits of a signed char. I wanted to visualize an "integer overflow." By starting the counter at 120 (near the 127 limit of a 1-byte signed integer), I could watch the moment the droid "lost its mind." As soon as the counter tried to pass 127, the odometer flipped to -128.

Seeing the LEDs jump and the Serial Monitor report a negative distance was a tactile lesson: pick the right "storage box" (data type) for your sensors, or your droid will think it's traveling backward just because it reached a limit.
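
You can reproduce the same wraparound on a laptop without any LEDs. The little Python snippet below uses ctypes to mimic a 1-byte signed counter; it's only a demo of the data-type lesson, not the Arduino sketch.

from ctypes import c_int8

steps = c_int8(120)  # start near the +127 ceiling of a signed byte
for _ in range(10):
    steps = c_int8(steps.value + 1)  # force the sum back into 8 bits
    print(steps.value)               # 121 ... 127, then -128, -127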

Simulating Motion with Light

Since I don’t have something that physically moves yet, I used a photoresistor (light sensor) to simulate "steps." Every time I flashed my phone light, the Arduino registered a movement. I also had the LED change color based on the light being detected so I could see really quickly whether my code was working.

I used the Pythagorean Theorem ($a^2 + b^2 = h^2$) to calculate the "as-the-crow-flies" distance from the starting point. Using the Serial Plotter, I could see the $X$ and $Y$ coordinates stair-stepping while the distance tracked a smooth, calculated curve.

#include <math.h>

int sensorValue = 0;       // raw photoresistor reading (0-1023)
float xPos = 0, yPos = 0;  // simulated position, in arbitrary step units
float hypotenuse = 0;      // straight-line distance from the start
bool triggered = false;    // so one flash counts as one step

// ... logic to detect light pulse ...
if (sensorValue < 400 && !triggered) { 
    xPos += 5; 
    yPos += 3; 
    // h = sqrt(x^2 + y^2)
    hypotenuse = sqrt(pow(xPos, 2) + pow(yPos, 2));
    triggered = true;
}

Adding Some Motion

At this point, I figured I could just add some hardware and watch the motor spin based on the distance traveled. I was a bit surprised when I opened the servos' packaging and discovered that I didn't know what to do with them.

[Image: Close-up of a GeekServo with a 2-pin connector. A hand holds a red and black wire, highlighting the connection confusion.]

I unpacked the Arduino motor shield figuring it would be obvious where it would plug in, but nope. While the shield was easily installed, the wiring wasn't straightforward.

[Image: Side profile of an Arduino Uno with an L298N motor shield stacked on top, showing the various header pins and screw terminals.]

I could not figure out where the wires on the Geek Servos were supposed to go.

I tried guessing the connection and successfully saw my LED light up, but there was no spinning motor.

[Image: GeekServo connected to a breadboard using male-to-male jumper wires. A status LED is lit, but the motor is stationary.]

Which is when I realized that my servo was actually just a motor.

I also realized that I likely needed an external power source to support this hardware. I have other servos to try, but I really want these to work since they are LEGO-compatible. To keep the momentum, I decided to look at the second layer of the brain.

The Cognition Layer: Enter the Raspberry Pi 5

I set up my sparkly new Raspberry Pi 5 from a CanaKit. This is the "High-Functioning" brain. This kit was super easy and the video was very straightforward to follow—a great "intro to building a computer" kit. After a quick setup and package update, I dove straight into Edge AI.

Sidequest: The Screenshot Struggle

I spent way too long trying to automate screenshots on the Pi for this blog. I learned the hard way that scrot only produces black screens on the new Pi OS (Wayland). After fighting with grim and slurp, I realized I'd figure that part out later. No screenshots for now!

"I See You"

I hooked up an ELP 2.0 Megapixel camera and installed Ollama to run a local Vision Language Model (openbmb/minicpm-v4.5). I wrote a Python script using OpenCV (cv2) to grab a frame and feed it to the model.

The Result: Success! The Pi analyzed the camera feed locally and described me and the room.

DROID SAYS: 
Observing: A human with glasses and purple attire occupies the center of an indoor space; 
ceiling fan whirs above while wall decor and doorframes frame background elements—a 
truly multifaceted environment!

It took about 3 minutes to process one frame. My droid currently has the processing speed of a very deep philosopher—it’s not going to win any races yet, but it is truly thinking about its surroundings.

The Vision Script (vision_test.py)

Here is the bridge between the camera and the AI:

import cv2
import ollama
import os
import time

def capture_and_analyze():
    # Initialize USB Camera
    cam = cv2.VideoCapture(0)

    if not cam.isOpened():
        print("Error: Could not access /dev/video0. Check USB connection.")
        return
    print("--- Droid Vision Active ---")

    # Warm-up: Skip a few frames so the auto-exposure adjusts
    for _ in range(5):
        cam.read()
        time.sleep(0.1)

    ret, frame = cam.read()

    if ret:
        img_path = 'droid_snapshot.jpg'
        cv2.imwrite(img_path, frame)
        print("Image captured! Sending to MiniCPM-V-4.5...")
        try:
            # Querying the local Ollama model
            response = ollama.chat(
                model='openbmb/minicpm-v:4.5',
                messages=[{
                    'role': 'user',
                    'content': 'Act as a helpful LEGO droid. Describe what you see in one short, robotic sentence.',
                    'images': [img_path]
                }]
            )
            print("\nDROID SAYS:", response['message']['content'])
        except Exception as e:
            print(f"Ollama Error: {e}")

        # Clean up the photo after analysis
        if os.path.exists(img_path):
            os.remove(img_path)
    else:
        print("Error: Could not grab a frame.")
    cam.release()

if __name__ == "__main__":
    capture_and_analyze()

What's Next

I'm going to figure out the motors, but for now, I'm going to focus on refining the Vision + AI pieces. I am going to try using the opencv-face-recognition library and experiment with different, smaller models to see if I can speed up that 3-minute "thought" process!

Learning Landscape Heightmaps and Sculpting Tools in Unreal Engine (Day 12)

2026-01-07 15:55:00

I didn’t know landscapes could be edited outside Unreal Engine.

Heightmaps completely changed how I look at terrain creation.

Day 12 made landscapes feel less scary.

This post is part of my daily learning journey in game development.

I’m sharing what I learn each day — the basics, the confusion, and the real progress —
from the perspective of a beginner.

What I tried / learned today

On Day 12, I learned about landscape heightmaps and the landscape sculpting tools in Unreal Engine.

I understood that a heightmap is basically a grayscale image that controls the height of the terrain.

Dark areas are lower, light areas are higher.

I learned how to:

  • Import a heightmap while creating a landscape
  • Export a heightmap from an existing landscape
  • Modify terrain using landscape tools like Sculpt, Smooth, and Flatten

Using these tools, I could shape hills, smooth uneven areas, and control the overall terrain more precisely.

What confused me

At first, I was confused about how a simple black-and-white image could control terrain height.

I also didn’t fully understand which sculpting tool to use and when.

Using the wrong tool sometimes ruined the landscape shape.

What worked or finally clicked

Once I connected the idea of grayscale values to height, things became clear.
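
If it helps to see the idea in code, a heightmap is just a grid of brightness values scaled into heights. Here is a tiny Python sketch of that mental model (Unreal uses 16-bit heightmaps and does this internally; the file name and 0-512 range below are made up).

from PIL import Image
import numpy as np

# Load the heightmap as grayscale: pixel 0 = black, 255 = white.
pixels = np.asarray(Image.open("heightmap.png").convert("L"), dtype=np.float32)

# Map brightness onto an assumed 0-512 unit height range.
heights = (pixels / 255.0) * 512.0

print(heights.min(), heights.max())  # lowest vs highest point in the terrain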

I also realized that sculpting is not about perfection.

Using low strength values and making small changes gave much better results.

Exporting and re-importing heightmaps helped me understand how landscapes can be edited and reused.

One lesson for beginners

  • Heightmaps control terrain using grayscale
  • Use sculpting tools slowly and gently
  • Small changes matter more than big ones
  • Practice is more important than perfect shapes

Slow progress — but I’m building a strong foundation.

If you’re also learning game development,
what was the first thing that confused you when you started?

See you in the next post 🎮🚀

Type Hints Make AI Code Generation Significantly Better

2026-01-07 15:54:47

If you're using AI coding assistants without type hints, you're leaving performance on the table.

When you ask an AI to complete this:

def process(data):
    # TODO: split by comma and return uppercase words

The AI has to guess what data is. A string? A file? A list?

But with type hints:

def process(data: str) -> list[str]:
    # TODO: split by comma and return uppercase words

Now the AI knows:

  • data is definitely a string
  • It should return a list of strings
  • Methods like .split() and .upper() are appropriate

The result: More accurate completions, fewer hallucinations, less back-and-forth.
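
For reference, the typed signature points toward a completion like this (written by hand here as an illustration, not claimed to be any particular model's output):

def process(data: str) -> list[str]:
    # Split on commas, trim whitespace, and uppercase each word.
    return [word.strip().upper() for word in data.split(",")]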

This extends to entire codebases. When your functions have type hints, AI tools can:

  • Generate code that matches your existing types
  • Suggest appropriate methods for the given types
  • Catch inconsistencies in their own output
  • Understand relationships between modules

Where this matters most:

  • API handlers (FastAPI uses type hints for automatic validation; see the sketch after this list)
  • Data processing pipelines
  • Any code that interfaces with AI-generated components
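
To illustrate the first point, FastAPI derives request validation and documentation directly from type hints. The model and route below are a small illustrative sketch, not from a real project:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    item: str
    quantity: int

@app.post("/orders")
def create_order(order: Order) -> dict:
    # FastAPI validates the JSON body against Order before this runs.
    return {"item": order.item, "quantity": order.quantity}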

Type hints are documentation for both humans and machines. In 2026, that dual purpose matters more than ever.

This is adapted from my upcoming book, Zero to AI Engineer: Python Foundations.

I share excerpts like this on Substack → https://substack.com/@samuelochaba

Mastering GraphQL with Ktor: A Modern Networking Guide for Android

2026-01-07 15:51:35

Originally published on Medium: https://medium.com/@supsabhi/mastering-graphql-with-ktor-a-modern-networking-guide-for-android-028f388836ed

Modern Android apps demand flexible, efficient and scalable networking solutions. While REST APIs have been the standard for years, GraphQL has emerged as a powerful alternative, especially for apps that need precise data fetching and reduced network overhead. Instead of hitting multiple endpoints to get different pieces of data, GraphQL allows you to ask for exactly what you need in a single request.

In my previous article, I discussed how to set up Ktor, the modern Kotlin-first networking framework developed by JetBrains. We explored how it provides a lightweight alternative to Retrofit. If you haven’t read that yet, I highly recommend checking it out to get your base client set up. You can find the article here:

Exploring Ktor: A Modern Networking Framework for Kotlin:
https://medium.com/@supsabhi/exploring-ktor-a-modern-networking-framework-for-kotlin-462e6769a721

Now, let’s take things a step further and explore how you can integrate GraphQL with Ktor to build efficient APIs. Whether you are a beginner or looking to modernize your stack, this guide will cover everything from setup to making your first request.

What is GraphQL?
GraphQL is a query language for APIs that allows the client to request exactly the data it needs, nothing more, nothing less.

The major difference is that REST uses multiple endpoints with fixed response structures, while GraphQL uses a single endpoint with flexible queries to give optimized responses. This bandwidth efficiency is perfect for mobile apps where every kilobyte counts.

Why Use GraphQL with Ktor?
Using Ktor for GraphQL gives you:

Minimal Overhead: No need for heavy external libraries, keeping your APK size small.
Full Control: You define exactly how the request and response are handled.
Multiplatform Ready: The same logic works seamlessly in Kotlin Multiplatform (KMP).
Kotlin-First: Coroutine-based, no callbacks, and works smoothly with Clean Architecture & MVVM.

Setting Up the Project:
Before integrating GraphQL, ensure you have Ktor in your Android project. Add these dependencies to your app-level build.gradle.kts:

dependencies {
    implementation(platform("io.ktor:ktor-bom:3.1.2"))
    implementation("io.ktor:ktor-client-android")
    implementation("io.ktor:ktor-client-content-negotiation")
    implementation("io.ktor:ktor-serialization-kotlinx-json")
    implementation("io.ktor:ktor-client-logging")
}

Ensure the Kotlin Serialization plugin is enabled in your root build.gradle.kts:

plugins {
    id("org.jetbrains.kotlin.plugin.serialization") version "2.1.20"
}

Understanding How GraphQL Works with Ktor
With GraphQL, every operation is usually a POST request to a single endpoint. The query or mutation is passed as a JSON body.

  1. Define the Request Model

First, create a data model to wrap your query:
@Serializable
data class GraphQLRequest(
    val query: String,
    val variables: Map<String, String>? = null
)
  2. Define the Generic Response Wrapper

GraphQL always returns data inside a data field and errors inside an errors field.
@Serializable
data class GraphQLResponse<T>(
    val data: T? = null,
    val errors: List<GraphQLError>? = null
)

@Serializable
data class GraphQLError(val message: String)

Making Your First Request
Let’s fetch a list of countries using a public GraphQL API. First, define your data models:

@Serializable
data class CountriesData(val countries: List<Country>)

@Serializable
data class Country(
    val code: String,
    val name: String,
    val emoji: String,
    val capital: String? = null
)

Implementation
Here is how you execute the query using the httpClient we created in the previous article:

suspend fun fetchCountries(): GraphQLResponse<CountriesData> {
    val countryQuery = """
        query {
            countries {
                code
                name
                emoji
                capital
            }
        }
    """.trimIndent()

    return httpClient.post("https://countries.trevorblades.com/graphql") {
        setBody(GraphQLRequest(query = countryQuery))
    }.body()
}

Handling Errors and Logging
GraphQL can return an HTTP 200 status even if there are errors in the query logic. Always check the error list:

val response = fetchCountries()
if (!response.errors.isNullOrEmpty()) {
    response.errors.forEach { error ->
        Log.e("GraphQL Error", error.message)
    }
}

For professional logging, I recommend using Timber. I’ve written a detailed guide on setting it up here: Effortless Android Logging with Timber and Kotlin:
https://medium.com/@supsabhi/effortless-android-logging-with-timber-and-kotlin-f0aaa0a701b7

Final Thoughts
Integrating GraphQL with Ktor is straightforward and gives Android developers a modern, flexible and Kotlin-native networking stack. It fits beautifully with MVVM and Clean Architecture. By leveraging one unified HttpClient, you can now handle both REST and GraphQL seamlessly.

GitHub Repository for Hands-On Reference
To make this article more practical and hands-on, I’ve published a public GitHub repository that demonstrates everything discussed above using a real GraphQL API.

The goal of this project is not just to “fetch data”, but to show how GraphQL should be integrated idiomatically in an Android application using Ktor, without forcing REST-based abstractions on top of it.
📂 GitHub Repository:

GraphQL Example – Android (Jetpack Compose + Ktor)

This project is a simple, clean Android sample demonstrating how to consume a GraphQL API using Ktor in a Jetpack Compose application, following Clean Architecture and MVVM principles.

The goal of this repository is to show idiomatic GraphQL handling in Android without REST-style abstractions like fake HTTP status codes or generic response wrappers.

✨ Features

  • ✅ GraphQL API integration using Ktor
  • ✅ Jetpack Compose UI
  • ✅ Clean Architecture (Data → Domain → UI)
  • ✅ MVVM with unidirectional data flow
  • ✅ Kotlin Coroutines
  • ✅ Koin for dependency injection
  • ✅ Proper GraphQL error handling (data vs errors)
  • ✅ No Retrofit, no REST-style CommonResponse

🔗 API Used (Free & Public)

Countries GraphQL API

https://countries.trevorblades.com/graphql

Sample query:

query {
  countries {
    code
    name
    capital
  }
}

This API is:

  • Completely free
  • No authentication required
  • Ideal for demos and learning

🏗️ Architecture Overview

UI (Jetpack