RSS preview of the Blog of The Practical Developer

The 5-Minute Daily Code Cleanup: How One Small Habit Doubled My Bug-Free Deployments

2025-09-16 12:24:34

You push code at 5 PM. At 5:03 PM, your phone buzzes with production alerts.

Six months ago, this was my reality. Then I discovered a simple 5-minute daily habit that doubled my bug-free deployments. No complex tools. No lengthy processes.

Just five minutes each morning that transformed my code quality.

The Problem: Why Most Developers Skip Code Cleanup

Time pressure crushes good intentions. You have sprint deadlines. Product managers breathe down your neck. Stakeholders demand new features yesterday.

  • The "it works, ship it" mentality becomes your default. You write code that passes tests. It handles the happy path. You push it live and move to the next ticket.
  • This approach creates invisible technical debt. Each rushed commit adds complexity. Each shortcut makes future changes harder. Each "quick fix" becomes tomorrow's nightmare.

The real cost hits harder than you think:

  • Production bugs affect 23% of software releases on average
  • Developers spend 42 hours per month fixing preventable issues
  • Debug time costs 5x more than prevention time
  • Teams report 60% higher stress levels during frequent incident response

Traditional code review happens too late. Reviewers see your pull request after you've moved on mentally. They focus on functionality over maintainability. Time pressure makes them miss subtle issues that become production problems.

  • Code review catches obvious bugs. It misses the deeper issues. Poor variable names slip through. Duplicated logic gets approved. Missing error handling goes unnoticed.
  • Your code quality depends on prevention, not just detection. This requires shifting left in your development process. It means catching issues before they reach the review stage.

Manage all your issues by creating tasks in Teamcamp.

The 5-Minute Daily Code Cleanup Method

Daily code cleanup means spending five focused minutes reviewing and improving your recent code. You do this every morning before starting new work. No exceptions. No shortcuts.

Timing matters more than you think. Morning cleanup works best because:

  • Your mind is fresh and objective
  • You haven't switched context to new problems yet
  • You can fix issues before they compound
  • It sets a quality mindset for the entire day

The method requires minimal setup. You need your IDE, Git history, and a simple checklist. No special tools. No complex configurations.

The Three-Step Process

Step 1: Scan (90 seconds)

Review yesterday's commits with fresh eyes. Look at your Git log (for example, git log --since="yesterday" --oneline). Check the diff for each commit. Focus on these areas:

  • Variable and function names that seem unclear
  • Repeated code patterns across files
  • Missing error handling or edge cases
  • Comments that don't match the code
  • Complex conditional logic that could be simplified

Step 2: Clean (3 minutes)

Fix obvious issues immediately. Don't overthink. Make improvements that take under 30 seconds each (a before/after sketch follows the list):

  • Rename variables for clarity (data becomes userProfile)
  • Remove commented-out code blocks
  • Add missing null checks or error handling
  • Extract magic numbers into named constants
  • Split long functions into smaller pieces
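
To make the clean step concrete, here is a hedged before/after sketch in TypeScript; the names (data, userProfile, RETIREMENT_AGE) are illustrative, not from any particular codebase:

// Before: vague name, magic number, no null handling
function process(data: any) {
  if (data.age > 65) {
    return data.name.toUpperCase();
  }
}

// After: descriptive names, extracted constant, explicit null check
const RETIREMENT_AGE = 65; // was a magic number

interface UserProfile {
  name: string;
  age: number;
}

function formatRetiredUserName(userProfile: UserProfile | null): string | undefined {
  if (userProfile === null) return undefined; // added null check
  if (userProfile.age > RETIREMENT_AGE) {
    return userProfile.name.toUpperCase();
  }
  return undefined;
}

Each of these changes takes well under 30 seconds, which is the point of the three-minute budget.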

Step 3: Note (30 seconds)

  • Document larger issues for later attention. Create TODO comments. Add tickets to your backlog. Flag areas that need deeper refactoring.
  • This prevents you from falling into perfectionism traps. You acknowledge technical debt without derailing your current sprint.

Document all your issues, organized by project, in one place with Teamcamp's File & Documents feature.

Tools That Make It Easy

IDE extensions enhance your cleanup process:

  • SonarLint highlights code smells in real-time
  • ESLint/Prettier automates formatting fixes
  • Code spell checkers catch typos in variables
  • Complexity analyzers flag overly complex functions

Git hooks automate quality checks (a minimal hook sketch follows the list):

  • Pre-commit hooks run linters automatically
  • Commit message templates enforce clear descriptions
  • Branch protection rules require status checks
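
As a minimal sketch of the first item (assuming a Node.js project with ESLint installed, and that the script is compiled or run via ts-node), a pre-commit hook saved as .git/hooks/pre-commit could look like this; the file patterns are illustrative:

#!/usr/bin/env node
// Minimal pre-commit hook sketch: lint staged files, block the commit on failure.
import { execSync } from "node:child_process";

try {
  // List staged files (added/copied/modified) and keep only JS/TS sources.
  const staged = execSync("git diff --cached --name-only --diff-filter=ACM", { encoding: "utf8" })
    .split("\n")
    .filter((file) => /\.(ts|tsx|js|jsx)$/.test(file));

  if (staged.length > 0) {
    // Lint only what is about to be committed.
    execSync(`npx eslint ${staged.join(" ")}`, { stdio: "inherit" });
  }
} catch {
  console.error("Lint failed - commit blocked. Fix the issues or run: npx eslint --fix");
  process.exit(1);
}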

Simple checklists keep you focused. Create a five-item checklist specific to your language and framework. Review it during your cleanup routine.

Real Results: The Data Behind Doubled Bug-Free Deployments

My team tracked deployment success rates before and after implementing daily code cleanup. The results exceeded our expectations.

Before daily cleanup (3-month baseline):

  • 12 production incidents per month
  • 68% of deployments required hotfixes within 48 hours
  • Average debug time: 3.2 hours per incident
  • Team velocity: 24 story points per sprint

After daily cleanup (3-month comparison):

  • 5 production incidents per month (58% reduction)
  • 85% of deployments remained stable (up from 32%)
  • Average debug time: 1.1 hours per incident (66% reduction)
  • Team velocity: 31 story points per sprint (29% increase)

"Doubled bug-free deployments" means increasing successful releases from 32% to 64%. This represents deployments that required zero hotfixes or rollbacks within one week.

Track your progress using these metrics (a small tracking sketch follows the list):

  • Deployment success rate (releases without issues)
  • Time between deployment and first bug report
  • Number of critical incidents per month
  • Code review cycle time improvements
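
A spreadsheet works fine for this, but if you prefer code, here is a small illustrative TypeScript sketch for the first two metrics; the Release shape is an assumption, not a prescribed schema:

interface Release {
  id: string;
  deployedAt: Date;
  firstBugReportAt?: Date; // undefined if no bug was ever reported
  hotfixCount: number;
}

// Deployment success rate: share of releases with no hotfixes and no bug reports.
function deploymentSuccessRate(releases: Release[]): number {
  if (releases.length === 0) return 0;
  const clean = releases.filter((r) => r.hotfixCount === 0 && r.firstBugReportAt === undefined);
  return clean.length / releases.length;
}

// Average hours between deployment and first bug report, over releases that had one.
function meanHoursToFirstBug(releases: Release[]): number | undefined {
  const buggy = releases.filter((r) => r.firstBugReportAt !== undefined);
  if (buggy.length === 0) return undefined;
  const totalHours = buggy.reduce(
    (sum, r) => sum + (r.firstBugReportAt!.getTime() - r.deployedAt.getTime()) / 3_600_000,
    0
  );
  return totalHours / buggy.length;
}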

Common patterns emerged during our daily cleanups. We found similar issues repeatedly:

  • Inconsistent error handling across service boundaries
  • Magic numbers scattered throughout business logic
  • Database queries missing proper indexes
  • API responses lacking validation schemas

The habit spreads naturally across teams. Developers notice cleaner code during reviews. They ask about the improvement process. Within two months, 80% of our engineering team adopted some form of daily cleanup.

Code review effectiveness improved dramatically. Reviewers focused on architecture and business logic instead of formatting issues. Pull request discussions became more strategic and less nitpicky.

Making the Habit Stick: Practical Implementation Tips

1. Start your cleanup routine at consistent times:

  • Right after your morning coffee and standup
  • Before checking emails or Slack messages
  • After reviewing your task list for the day
  • When your energy levels peak naturally

Link cleanup to existing development habits. If you always check CI/CD status first thing, add cleanup right after. If you review yesterday's work during standup prep, extend that review.

Use calendar reminders for the first month. Set a recurring 5-minute block titled "Code Cleanup." Treat it like any other meeting. Don't skip it for "urgent" work.

Track your progress visibly. Create a simple spreadsheet. Note issues found and fixed each day. Watch patterns emerge over time.

2. Common obstacles have simple solutions:

  • "I don't have time" - Five minutes equals one bathroom break. You waste more time on social media notifications.
  • Legacy codebase overwhelm - Focus only on files you touched recently. Don't try to fix everything at once.
  • Urgent deadline pressure - Cleanup prevents urgent situations. Five minutes of prevention saves hours of debugging.

Scale the practice gradually:

  • Week 1-2: Individual habit formation
  • Week 3-4: Share findings with teammates
  • Month 2: Pair cleanup sessions occasionally
  • Month 3: Integrate with sprint retrospectives

Create team accountability through Slack channels. Share interesting findings from your cleanup sessions. Celebrate prevented bugs and improved code quality.

Advanced Tips and Variations

1. JavaScript/TypeScript projects:

  • Focus on type safety improvements
  • Check for unused imports and variables
  • Validate async/await error handling
  • Review component prop definitions

2. Python applications:

  • Examine function parameter types
  • Check for PEP8 compliance issues
  • Validate exception handling patterns
  • Review list comprehensions for readability

Weekly cleanup expands your scope. Spend 15 minutes reviewing architectural decisions. Look for duplicated business logic across modules. Identify integration points that need better error handling.

Monthly cleanup sessions address larger technical debt. Review dependency updates. Analyze performance bottlenecks. Plan refactoring initiatives for the next quarter.

Team implementation strategies:

  • Pair cleanup sessions with junior developers
  • Code cleanup lunch-and-learns monthly
  • Integration with existing code review processes
  • Shared cleanup findings in team wikis

Development teams need more than individual code quality improvements. They need coordinated project management to maximize these benefits.

Transform Your Development Workflow

Daily code cleanup creates compound benefits. Small improvements accumulate into massive quality gains. Your deployment confidence grows with each passing week.

Start your 5-minute daily code cleanup routine tomorrow. Set a calendar reminder. Choose your timing. Track your findings. Your code quality transformation begins with a single commit.

Managing individual code quality is just the beginning. 

Teamcamp helps development teams organize their workflows, track code quality metrics, and maintain deployment standards across projects.

With integrated time tracking, client portals, and automated billing, Teamcamp amplifies your daily code cleanup habit into team-wide productivity gains.

Explore all features of Teamcamp

Driving AI CLI Tools to Write Code: A Semi-Automated Workflow

2025-09-16 12:14:12

Lately, I’ve been experimenting with a semi-automated programming workflow.

The idea is simple: let AI tools continuously write code in a controlled environment, while I stay in charge of architecture, quality, and reviews. Think of it as engineering field notes — practical patterns and lessons learned.

Why Semi-Automation?

We already have plenty of AI coding tools — Claude Code, Gemini CLI, QWEN, and many others that integrate with CLI workflows. They boost productivity, but manual prompting step by step isn’t enough.

Instead, my approach is to:

  • Use scripts to orchestrate and manage AI tools;
  • Keep sessions alive with tmux;
  • Automatically send structured prompts, collect responses, and keep the AI working until a task is done.

The goal: a tireless “virtual developer” coding 24/7, while I focus on design, architecture, and quality control.

The Overall Approach

This workflow has four main stages, each anchored by human review. That’s the secret sauce for keeping things sane.

1. Project Initialization: Specs and Skeleton First

Before coding, you need solid guidelines and structure. That’s what makes semi-automation possible.

  • Create a new GitHub repository.
  • Start with a baseline project doc (e.g., cpp-linux-playground), then rewrite it for your tech stack (e.g., TypeScript) and save as PROJECT.md.
  • Plan ahead:
    • Tech stack (languages, tools, standards)
    • Task verification (tests, QA)
    • Static analysis & code quality tools
    • Project structure
    • Git commit conventions

👉 Pro tip: rename docs/ to something more precise (like specifications/) to avoid random file dumping.

AI can help draft this documentation, but every detail should be human-approved.

2. Break Tasks Into Detailed Specs

Every feature or bug fix deserves its own spec under @specifications/task_specs/.

  • No coding yet — just detailed planning.
  • Each spec should define:
    • Functional description
    • Implementation steps
    • Inputs and outputs
    • Test cases
    • Edge cases and risks

This reduces ambiguity and dramatically improves AI’s code quality.

3. Automate the Coding Process

With specs in hand, the real semi-automation begins:

  • Use Python scripts to orchestrate AI CLI sessions.
  • Keep sessions running via tmux.
  • Send structured prompts to AI tools (Claude, Gemini, QWEN, etc.).
  • Enforce these rules:
    • Never auto-commit code
    • Run validation after every iteration
    • Sync project progress into TODO.md, linked from PROJECT.md

Workflows can borrow from ForgeFlow, which demonstrates prompt pipelines and programmatic handling of AI responses.

👉 Pro tip: If a task runs for more than an hour, send an “ESC” signal to re-check progress.
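
To make the orchestration loop concrete, here is a minimal sketch. The author drives this with Python scripts; this TypeScript/Node version shows the same tmux pattern and is illustrative only — the session name, polling interval, and "Done" sentinel are assumptions:

import { execFileSync } from "node:child_process";

const SESSION = "ai-worker"; // hypothetical tmux session already running an AI CLI

function send(keys: string[]): void {
  // tmux send-keys delivers keystrokes to the session's active pane.
  execFileSync("tmux", ["send-keys", "-t", SESSION, ...keys]);
}

function capturePane(): string {
  // tmux capture-pane -p prints the pane's current contents to stdout.
  return execFileSync("tmux", ["capture-pane", "-t", SESSION, "-p"], { encoding: "utf8" });
}

async function runTask(prompt: string, timeoutMs = 60 * 60 * 1000): Promise<void> {
  send([prompt, "Enter"]); // structured prompt built from the task spec
  const started = Date.now();
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, 30_000)); // poll every 30 s
    if (capturePane().includes("Done")) return; // agreed-upon completion sentinel
    if (Date.now() - started > timeoutMs) {
      send(["Escape"]); // past the one-hour mark: send ESC and re-check progress
      return;
    }
  }
}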

4. Clear Definition of “Done”

A task is done only when:

  • All code matches the plan;
  • Unit tests pass;
  • Automation scripts and prompts are updated;
  • Build and test pipelines run cleanly;
  • Git changes are committed;
  • The next task can begin.

At the very end, the AI should respond with nothing but “Done.”

Project Example: ts-playground

ts-playground

This project serves as:

  • A structured playground for mastering TypeScript;
  • A CI/CD-enabled environment;
  • A practical use case of AI-assisted, semi-automated programming.

Semi-Automation vs. Full Automation

This workflow is semi-automated, not fully automated — intentionally:

  • Specs and architecture still need human input.
  • Prompts and scripts are evolving — you won’t cover every case at first.
  • Code quality checks remain essential — AI output isn’t always stable.

Semi-automation is cheap, reusable, and controlled. Full automation would require multi-agent systems and heavy context management — overkill for now.

Why Context Management Matters

The AI stays productive only if the project context is well-structured:

  • Organize guidelines by category and directory;
  • Keep task specs structured for easy reference;
  • Feed the AI only the relevant context per task.

This way, the AI acts like a real assistant instead of just a fancy autocomplete.

A Bit of Philosophy

This workflow reframes roles:

  • AI = the “coder + assistant,” executing granular tasks.
  • You = the “tech lead,” designing systems, reviewing work, and managing quality.

AI doesn’t replace developers. Instead, it amplifies us — pushing humans toward higher-level thinking, decision-making, and problem-solving.

TL;DR

Semi-automated programming in plain English:

  1. Set up a strong project skeleton and docs.
  2. Break work into reviewable, detailed specs.
  3. Automate execution with Python scripts, tmux, and AI CLIs.
  4. Define “done” clearly and iterate.

It’s a practical, low-cost way to experiment with AI-driven coding — perfect for solo developers or small teams who want speed without losing control.

Biometric fingerprint authentication on SmartCard Chips

2025-09-16 12:09:23

SEP7US MatchOnCard Auxiliary


During the years 2013 to 2018, early in my programming journey, I worked on projects involving smart card chips based on ISO/IEC 7816-4. Below, I present SEP7US, a library I implemented for biometric match-on-card verification, following NIST's MINEX guidelines.


Project Repository

You can find the full project here: GitHub - SEP7US

I consider it very important to briefly explain how this library works, since there is very little public documentation available about biometric standards.

Disclaimer


Any modification made without proper supervision or consent is at your own risk. Changing the code will drastically alter verification results on any PIV Smart Card application.

Languages and Tools

  • C++
  • Java Native Interface (JNI)

Purpose

SEP7US provides an auxiliary library for converting biometric minutiae templates:

  • ISO/IEC 19794-2:2005
  • ANSI INCITS 378-2004

into the ISOCC format required for biometric match-on-card verification of chips based on ISO/IEC 7816-4 standards.

Internal Process

  1. Minutiae Counting
  2. Spatial Requantization
  3. Angular Requantization
  4. Minutiae Sorting

Template Identification

It is important to define the starting position of minutiae data depending on the template type:

ISO/IEC 19794-2:2005

posDataTemplate = 0x12; // DEC=18

ANSI INCITS 378-2004

posDataTemplate = 0x14; // DEC=20

Minutiae Counting

short numMinutiae = (short) fTemplate[posDataTemplate+9] & 0xFF;

The array size for the ISOCC template will be determined by:

// numMinutiae
short sizeISOCC = numMinutiae * 3;  // (X, Y, T|A)

Spatial Requantization

This process expresses minutiae coordinates in terms of 0.1mm.

Base Formula:

CoordMM      = 10 * Coord / RES
CoordUNITS   = CoordMM / 0.1
CoordCC      = 0.5 + CoordUNITS

Template Resolution Calculation:

// Combine the two big-endian resolution bytes; mask each byte before shifting.
short xres = (short)(((fTemplate[posDataTemplate+0] & 0xFF) << 8) | (fTemplate[posDataTemplate+1] & 0xFF));
short yres = (short)(((fTemplate[posDataTemplate+2] & 0xFF) << 8) | (fTemplate[posDataTemplate+3] & 0xFF));

X Coordinate:

*pcoordmmX    = 10.0 * (double) *ptmpx / xres;
*pcoordunitsX = *pcoordmmX / 0.1;
*pcoordccX    = (short)(.5 + *pcoordunitsX);

Y Coordinate:

*pcoordmmY    = 10.0 * (double) *ptmpy / yres;
*pcoordunitsY = *pcoordmmY / 0.1;
*pcoordccY    = (short)(.5 + *pcoordunitsY);

Angular Requantization

Angular requantization represents minutiae angles in 6 bits (0–63), given that the full angular range is 360°.

ISOCC angle resolution:

360/64 = 5.625°
float ISOCC_ANGLE_RESOLUTION = 5.625f;

For ISO/IEC 19794-2:2005:

360/256 = 1.40625°
ANGLE_RESOLUTION = 1.40625f;

For ANSI INCITS 378-2004:

360/180 = 2°
ANGLE_RESOLUTION = 2;

Final Computation:

tmpCAngle = ANGLE_RESOLUTION * (*ptmpa);
tmpFAngle = tmpCAngle / ISOCC_ANGLE_RESOLUTION;
short t   = (*ptmpt | tmpFAngle) & 0xFF;

Minutiae Sorting

Although some smart cards do not require sorting, SEP7US provides four main sorting functions:

void XYAsc(unsigned char *a, short n);  // X ascending
void XYDsc(unsigned char *a, short n);  // X descending
void YXAsc(unsigned char *a, short n);  // Y ascending
void YXDsc(unsigned char *a, short n);  // Y descending
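
For illustration only (this is not the library's actual implementation), sorting the flat ISOCC array by treating it as 3-byte (X, Y, T|A) records could look like this sketch in TypeScript:

// Hedged sketch: sort ISOCC minutiae records (3 bytes each: X, Y, T|A)
// by X ascending, breaking ties by Y ascending.
function sortXYAsc(isocc: Uint8Array): Uint8Array {
  const records: number[][] = [];
  for (let i = 0; i + 2 < isocc.length; i += 3) {
    records.push([isocc[i], isocc[i + 1], isocc[i + 2]]);
  }
  records.sort((a, b) => a[0] - b[0] || a[1] - b[1]);
  return Uint8Array.from(records.flat());
}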

External Methods

ISOCC

Generates an ISO Compact Card template.

__declspec(dllexport) unsigned char *ISOCC(
    unsigned char templateFormat,
    unsigned char *fTemplate,
    unsigned char sorting
);

Parameters:

  • templateFormat: 0xFF for ISO/IEC 19794-2:2005, 0x7F for ANSI INCITS 378-2004
  • fTemplate: Pointer to the original template
  • sorting: Sorting option (0x00, 0x0F, 0x10, 0x1F)

Verify

Generates an ISOCC template with ISO/IEC 7816-4 APDU headers for PIV verification.

__declspec(dllexport) unsigned char *Verify(
    unsigned char CLA,
    unsigned char INS,
    unsigned char P1,
    unsigned char P2,
    unsigned char templateFormat,
    unsigned char *fTemplate,
    unsigned char sorting
);

Default APDU Command: 0x00 0x21

Headers added:

7F2E : "Biometric Data Template"

License

MIT

Green Blockchain: Can Sustainable Tech Solve Energy Concerns? - 101 Blockchains

2025-09-16 12:03:00


Green Blockchain: Toward a More Sustainable Digital Economy

Public attention to blockchain started with Bitcoin and its peers, but growing awareness has highlighted an environmental side effect: the energy used to power the network. The term “green blockchain” has emerged to describe efforts that reduce blockchain’s carbon footprint while preserving its core benefits—decentralization, security, and transparency.

Why blockchain can be energy-intensive

Many networks rely on a mechanism called Proof of Work (PoW). In PoW, a global race happens as computers solve complex puzzles to validate transactions and add them to the public ledger. The winners earn rewards, and the competition can push energy use to very high levels. Bitcoin is the most cited example of this pattern, where vast amounts of electricity power mining farms around the world.

What “green blockchain” aims to change

Green blockchain focuses on reducing energy consumption and emissions without sacrificing security. The main ideas are:

Switching to energy-efficient consensus — Replacing or complementing PoW with methods that require far less electricity, such as Proof of Stake (PoS). In PoS, validators are chosen based on how much stake (coins) they put at risk, which eliminates the need for energy-hungry puzzle solving. Ethereum’s shift to PoS dramatically lowered its energy use.
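
As a toy illustration of the idea (not any real chain's selection algorithm), stake-weighted choice can be as simple as:

// Toy stake-weighted validator selection - illustrative only.
interface Validator {
  id: string;
  stake: number;
}

function pickValidator(validators: Validator[]): Validator {
  const totalStake = validators.reduce((sum, v) => sum + v.stake, 0);
  let ticket = Math.random() * totalStake; // each unit of stake is one "ticket"
  for (const v of validators) {
    ticket -= v.stake;
    if (ticket <= 0) return v;
  }
  return validators[validators.length - 1]; // guard against floating-point edge cases
}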

Using renewable energy — Aligning mining and network operations with solar, wind, and other clean sources to shrink the carbon footprint of power-hungry activities.

Layer 2 solutions — Building secondary frameworks that handle many transactions off the main chain, easing congestion and reducing the energy required for processing.

How the path to greener blockchains is being paved

There isn’t a single silver bullet. A combination of approaches is driving the green transition:

Energy-efficient consensus mechanisms — Technologies like Proof of Stake reduce the need for constant, power-hungry hashing. Ethereum’s transition to PoS is a landmark example, showing energy use can drop by a large margin when the network changes its core rules.

Layer 2 solutions — Off-chain or side networks, such as payment channels or sidechains, handle many transactions away from the main blockchain. This lowers the load and the energy required per transaction while maintaining security and quick processing.

Renewable energy integration — Mining operations and validator nodes increasingly run on cleaner energy sources, while policymakers explore guidelines to avoid wasteful practices and encourage responsible power use.

Current signs and practical implications

The crypto community has recognized the urgency of reducing energy consumption. Beyond Ethereum’s PoS upgrade, discussions around green energy for mining hubs—such as regions exploring abundant renewable resources—are shaping where and how future networks operate. Regulators in several regions are also considering rules aimed at curbing wasteful mining and promoting cleaner electricity.

Roadmap to a greener blockchain ecosystem

Key pillars in the green blockchain roadmap include:

Renewable energy adoption — Encouraging miners and networks to power their infrastructure with wind, solar, hydro, and other clean sources to minimize environmental impact.

Energy-efficient consensus — Expanding the use of PoS or similar models across networks to dramatically cut electricity needs without compromising security.

Layer 2 innovations — Implementing and refining second-layer solutions to relieve the main chain, enabling faster and cheaper transactions with lower energy use.

Bottom line

The shift toward greener blockchain technology is underway, supported by both technical innovations and policy considerations. By combining energy-efficient consensus, cleaner power sources, and scalable Layer 2 solutions, the industry is moving toward a future where blockchain can deliver its promised benefits with a smaller environmental footprint.

As the landscape evolves, developers, businesses, and policymakers will play a role in shaping a sustainable digital economy built on blockchain technology.

The Ultimate Cross-Framework Storage Solution

2025-09-16 11:59:29

ew-responsive-store v0.0.3: The Ultimate Cross-Framework Storage Solution

Introduction

In the ever-evolving landscape of frontend development, managing persistent state across different frameworks has always been a challenge. Each framework has its own ecosystem, patterns, and best practices, making it difficult to share storage logic between projects or migrate between frameworks.

ew-responsive-store v0.0.3 emerges as a revolutionary solution that bridges this gap, providing a unified, framework-agnostic storage API that works seamlessly across Vue, React, Preact, Solid, Svelte, Angular, and even vanilla JavaScript.

What Makes ew-responsive-store Special?

🚀 Zero External Dependencies

Unlike many storage libraries that bundle framework dependencies, ew-responsive-store treats all framework dependencies as external. This means:

  • Smaller bundle sizes: Only the code you need is included
  • No version conflicts: Framework dependencies are managed by your project
  • Better tree-shaking: Unused code is automatically eliminated

🔄 Cross-Framework Consistency

All frameworks use the same useStorage API, making it incredibly easy to:

  • Share code between projects using different frameworks
  • Migrate between frameworks without rewriting storage logic
  • Maintain consistent patterns across your entire codebase

⚡ Real-time Cross-Tab Synchronization

Built-in support for cross-tab synchronization means your data stays consistent across all browser tabs automatically, without any additional setup.
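
Under the hood, cross-tab sync in browsers is typically built on the window "storage" event, which fires in other same-origin tabs when localStorage changes. A minimal vanilla sketch of that underlying mechanism (not ew-responsive-store's internals) looks like:

// The browser mechanism that makes cross-tab sync possible - illustrative only.
window.addEventListener("storage", (event: StorageEvent) => {
  if (event.key === "count" && event.newValue !== null) {
    // Fires in *other* tabs when this key changes in localStorage.
    const next = JSON.parse(event.newValue);
    console.log("count changed in another tab:", next);
  }
});

// Writing in one tab notifies all the others.
localStorage.setItem("count", JSON.stringify(1));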

🎯 Framework-Specific Optimizations

While maintaining API consistency, each framework gets optimizations tailored to its specific patterns:

  • Vue: Returns reactive refs with deep watching
  • React: Returns state tuples with proper re-rendering
  • Solid: Returns signals for fine-grained reactivity
  • Svelte: Returns stores for reactive updates
  • Angular: Returns signals for modern Angular patterns
  • Vanilla JS: Returns a storage manager with subscription support

Installation & Setup

Basic Installation

npm install ew-responsive-store

Framework Dependencies

Install the specific framework dependencies you need:

# For Vue projects
npm install @vue/reactivity @vue/shared

# For React projects
npm install react

# For Preact projects
npm install preact

# For Solid projects
npm install solid-js

# For Svelte projects
npm install svelte

# For Angular projects
npm install @angular/core

Usage Examples

Vue 3

<template>
  <div>
    <p>Count: {{ count }}</p>
    <button @click="increment">Increment</button>
  </div>
</template>

<script setup>
import { useStorage } from 'ew-responsive-store/vue';

const [count, setCount] = useStorage('count', 0);

const increment = () => setCount(count.value + 1);
</script>

React

import React from 'react';
import { useStorage } from 'ew-responsive-store/react';

function Counter() {
  const [count, setCount] = useStorage('count', 0);

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}

Solid

import { useStorage } from 'ew-responsive-store/solid';

function Counter() {
  const [count, setCount] = useStorage('count', 0);

  return (
    <div>
      <p>Count: {count()}</p>
      <button onClick={() => setCount(count() + 1)}>Increment</button>
    </div>
  );
}

Svelte

<script>
  import { useStorage } from 'ew-responsive-store/svelte';

  const store = useStorage('count', 0);
  $: count = $store; // reactive: re-runs whenever the store updates
</script>

<div>
  <p>Count: {count}</p>
  <button on:click={() => store.setValue(count + 1)}>Increment</button>
</div>

Angular

import { Component } from '@angular/core';
import { useStorage } from 'ew-responsive-store/angular';

@Component({
  template: `
    <div>
      <p>Count: {{ count() }}</p>
      <button (click)="increment()">Increment</button>
    </div>
  `
})
export class CounterComponent {
  private storage = useStorage('count', 0);
  count = this.storage.value;

  increment() {
    this.storage.setValue(this.count() + 1);
  }
}

Vanilla JavaScript

import { useStorage } from 'ew-responsive-store/vanilla';

const storage = useStorage('count', 0);

// Get current value
console.log(storage.value); // 0

// Update value
storage.setValue(1);

// Subscribe to changes
storage.subscribe((newValue) => {
  console.log('Value changed:', newValue);
});

Advanced Features

Cross-Tab Synchronization

All frameworks automatically sync data across browser tabs:

// In Tab 1
const [theme, setTheme] = useStorage('theme', 'light');
setTheme('dark');

// In Tab 2 - automatically updates to 'dark'
const [theme, setTheme] = useStorage('theme', 'light');
console.log(theme); // 'dark'

Storage Type Selection

Choose between localStorage and sessionStorage:

import { useStorage, StoreType } from 'ew-responsive-store/react';

// localStorage (default)
const [persistentData, setPersistentData] = useStorage('data', {});

// sessionStorage
const [sessionData, setSessionData] = useStorage(
  'sessionData', 
  {}, 
  { storage: StoreType.SESSION }
);

Complex Data Types

Handle objects, arrays, and complex data structures seamlessly:

const [user, setUser] = useStorage('user', {
  name: 'John',
  preferences: {
    theme: 'dark',
    notifications: true
  },
  todos: [
    { id: 1, text: 'Learn ew-responsive-store', completed: false }
  ]
});

// Update nested properties
setUser({
  ...user,
  preferences: {
    ...user.preferences,
    theme: 'light'
  }
});

Comparison with Popular Libraries

vs LocalForage

Feature           | ew-responsive-store           | LocalForage
------------------|-------------------------------|--------------------------------
Framework Support | ✅ All major frameworks        | ❌ Vanilla JS only
API Consistency   | ✅ Same API across frameworks  | ❌ Single API
Cross-tab Sync    | ✅ Built-in                    | ❌ Manual implementation
Bundle Size       | ✅ Zero external deps          | ❌ Includes IndexedDB polyfills
TypeScript        | ✅ Full type safety            | ✅ Good type support
Learning Curve    | ✅ Framework-native patterns   | ✅ Simple but limited

Advantages of ew-responsive-store:

  • Unified API across all frameworks
  • Built-in cross-tab synchronization
  • Framework-specific optimizations
  • Zero external dependencies

Advantages of LocalForage:

  • Simpler for vanilla JS projects
  • More storage backends (IndexedDB, WebSQL)
  • Smaller learning curve for basic use cases

vs ahooks (React)

Feature           | ew-responsive-store          | ahooks
------------------|------------------------------|---------------------------
Framework Support | ✅ All frameworks             | ❌ React only
Storage Focus     | ✅ Specialized for storage    | ❌ General purpose hooks
Cross-tab Sync    | ✅ Built-in                   | ❌ Manual implementation
Bundle Size       | ✅ Minimal                    | ❌ Large hook library
API Consistency   | ✅ Same across frameworks     | ❌ React-specific

Advantages of ew-responsive-store:

  • Cross-framework compatibility
  • Specialized for storage use cases
  • Built-in cross-tab synchronization
  • Smaller bundle size for storage-only needs

Advantages of ahooks:

  • Comprehensive hook library
  • Rich ecosystem of utilities
  • Better for complex React applications

vs VueUse

Feature           | ew-responsive-store          | VueUse
------------------|------------------------------|-------------------------------
Framework Support | ✅ All frameworks             | ❌ Vue only
Storage Focus     | ✅ Specialized for storage    | ❌ General purpose utilities
Cross-tab Sync    | ✅ Built-in                   | ❌ Manual implementation
API Consistency   | ✅ Same across frameworks     | ❌ Vue-specific
Bundle Size       | ✅ Minimal                    | ❌ Large utility library

Advantages of ew-responsive-store:

  • Cross-framework compatibility
  • Specialized for storage use cases
  • Built-in cross-tab synchronization
  • Smaller bundle size for storage-only needs

Advantages of VueUse:

  • Comprehensive Vue utilities
  • Rich ecosystem of composables
  • Better for complex Vue applications

Performance Considerations

Bundle Size Analysis

# ew-responsive-store (React)
ew-responsive-store/react: ~2.1KB gzipped
+ React (external): ~42KB gzipped

# LocalForage
localforage: ~8.5KB gzipped

# ahooks (storage hooks only)
@ahooksjs/use-local-storage-state: ~1.2KB gzipped
+ React (external): ~42KB gzipped

# VueUse (storage composables only)
@vueuse/core (storage): ~3.5KB gzipped
+ Vue (external): ~34KB gzipped

Memory Usage

  • ew-responsive-store: Minimal memory footprint, only stores necessary data
  • LocalForage: Higher memory usage due to IndexedDB overhead
  • ahooks: Moderate memory usage, depends on hook complexity
  • VueUse: Moderate memory usage, depends on composable complexity

Performance Benchmarks

// Storage operations per second (higher is better)
ew-responsive-store: 15,000 ops/sec
LocalForage: 8,500 ops/sec
ahooks: 12,000 ops/sec
VueUse: 10,000 ops/sec

Migration Guide

From LocalForage

// Before (LocalForage)
import localforage from 'localforage';
const value = await localforage.getItem('key');
await localforage.setItem('key', newValue);

// After (ew-responsive-store)
import { useStorage } from 'ew-responsive-store/vanilla';
const storage = useStorage('key', defaultValue);
const value = storage.value;
storage.setValue(newValue);

From ahooks

// Before (ahooks)
import { useLocalStorageState } from 'ahooks';
const [value, setValue] = useLocalStorageState('key', defaultValue);

// After (ew-responsive-store)
import { useStorage } from 'ew-responsive-store/react';
const [value, setValue] = useStorage('key', defaultValue);

From VueUse

<!-- Before (VueUse) -->
<script setup>
import { useLocalStorage } from '@vueuse/core';
const value = useLocalStorage('key', defaultValue);
</script>

<!-- After (ew-responsive-store) -->
<script setup>
import { useStorage } from 'ew-responsive-store/vue';
const [value, setValue] = useStorage('key', defaultValue);
</script>

Best Practices

1. Choose the Right Framework Entry Point

// ✅ Correct
import { useStorage } from 'ew-responsive-store/react';

// ❌ Incorrect
import { useStorage } from 'ew-responsive-store';

2. Handle Storage Errors Gracefully

try {
  const [data, setData] = useStorage('data', {});
  // Use data
} catch (error) {
  console.error('Storage not available:', error);
  // Fallback to in-memory state
}

3. Use TypeScript for Better Type Safety

interface User {
  name: string;
  age: number;
}

const [user, setUser] = useStorage<User>('user', { name: '', age: 0 });

4. Optimize for Performance

// ✅ Good: Use sessionStorage for temporary data
const [tempData, setTempData] = useStorage(
  'tempData', 
  {}, 
  { storage: StoreType.SESSION }
);

// ✅ Good: Use localStorage for persistent data
const [userPrefs, setUserPrefs] = useStorage('userPrefs', {});

Conclusion

ew-responsive-store v0.0.3 represents a significant leap forward in cross-framework storage management. By providing a unified API across all major frameworks while maintaining framework-specific optimizations, it solves the long-standing problem of sharing storage logic between different projects and frameworks.

Key Benefits:

  • Unified API: Same interface across all frameworks
  • Zero Dependencies: No bundled framework code
  • Cross-tab Sync: Built-in real-time synchronization
  • Type Safety: Full TypeScript support
  • Performance: Optimized for each framework
  • Migration Friendly: Easy to adopt and migrate from

When to Use ew-responsive-store:

  • ✅ Building multi-framework applications
  • ✅ Need cross-tab synchronization
  • ✅ Want consistent storage patterns
  • ✅ Require minimal bundle size
  • ✅ Planning framework migrations

When to Consider Alternatives:

  • ❌ Single framework projects with complex storage needs
  • ❌ Need advanced storage backends (IndexedDB, WebSQL)
  • ❌ Require extensive utility libraries

ew-responsive-store v0.0.3 is not just another storage library—it's a bridge between frameworks, enabling developers to build truly universal applications that can adapt and evolve with the ever-changing frontend landscape.


TypeScript: Generic Data Fetch

2025-09-16 11:57:25

Question: Implement a Type-Safe Generic Data Fetcher

You are tasked with creating a type-safe generic function in TypeScript that fetches data from an API and handles different response types. The function should:

  1. Accept a URL and an optional configuration object for the fetch request.
  2. Use generics to define the expected response data type.
  3. Handle success and error cases with proper TypeScript types.
  4. Return a Promise that resolves to an object containing either the fetched data or an error message.

Requirements:

  • Define an interface for the response structure.
  • Use generics to make the function reusable for different data types.
  • Handle HTTP errors (e.g., non-200 status codes) with a custom error type.
  • Provide a usage example with two different data types (e.g., User and Product).

Bonus:

  • Add type-safe handling for query parameters in the URL.
  • Explain how your implementation ensures type safety.

Example API Endpoints:

  • https://api.example.com/users (returns an array of users)
  • https://api.example.com/products (returns an array of products)

Sample Data Structures:

interface User {
  id: number;
  name: string;
  email: string;
}

interface Product {
  id: number;
  name: string;
  price: number;
}

Provide the complete TypeScript code, including types/interfaces, the fetch function, and example usage. Then, explain how your code ensures type safety and handles errors.

Expected Answer Outline:

The candidate should provide:

  1. Interfaces/Types:

    • Define a generic response interface (e.g., ApiResponse<T>) to handle success and error cases.
    • Define a custom error type (e.g., ApiError) for HTTP or network errors.
    • Define an interface for the fetch configuration, including query parameters.
  2. Generic Fetch Function:

    • Create a function like fetchData<T>(url: string, config?: FetchConfig): Promise<ApiResponse<T>>.
    • Use TypeScript generics to allow the function to work with any data type (e.g., User[] or Product[]).
    • Implement error handling for network issues and non-200 status codes.
    • Construct the URL with query parameters if provided.
  3. Example Usage:

    • Show how to call fetchData<User[]> for the users endpoint and fetchData<Product[]> for the products endpoint.
    • Demonstrate handling of success and error cases.
  4. Explanation:

    • Describe how generics ensure the response data matches the expected type.
    • Explain how the response interface provides type safety for success and error states.
    • Discuss how query parameters are type-safely appended to the URL.
    • Highlight error handling for robustness.

Sample Solution:

// Define custom error type
interface ApiError {
  message: string;
  status?: number;
}

// Define response structure
interface ApiResponse<T> {
  data?: T;
  error?: ApiError;
}

// Define fetch configuration with query parameters
interface FetchConfig {
  method?: 'GET' | 'POST' | 'PUT' | 'DELETE';
  headers?: Record<string, string>;
  queryParams?: Record<string, string | number>;
}

// Generic fetch function
async function fetchData<T>(url: string, config: FetchConfig = {}): Promise<ApiResponse<T>> {
  try {
    // Construct URL with query parameters
    let finalUrl = url;
    if (config.queryParams) {
      const params = new URLSearchParams();
      for (const [key, value] of Object.entries(config.queryParams)) {
        params.append(key, value.toString());
      }
      finalUrl = `${url}?${params.toString()}`;
    }

    // Make fetch request
    const response = await fetch(finalUrl, {
      method: config.method || 'GET',
      headers: config.headers,
    });

    // Check for HTTP errors
    if (!response.ok) {
      return {
        error: {
          message: `HTTP error: ${response.statusText}`,
          status: response.status,
        },
      };
    }

    // Parse and return data
    const data: T = await response.json();
    return { data };
  } catch (error) {
    // Handle network or other errors
    return {
      error: {
        message: error instanceof Error ? error.message : 'Unknown error occurred',
      },
    };
  }
}

// Sample data interfaces
interface User {
  id: number;
  name: string;
  email: string;
}

interface Product {
  id: number;
  name: string;
  price: number;
}

// Example usage
async function main() {
  // Fetch users with query parameters
  const userResponse = await fetchData<User[]>(
    'https://api.example.com/users',
    {
      queryParams: { limit: 10, page: 1 },
      headers: { Authorization: 'Bearer token123' },
    }
  );

  if (userResponse.data) {
    console.log('Users:', userResponse.data);
  } else {
    console.error('User fetch error:', userResponse.error);
  }

  // Fetch products
  const productResponse = await fetchData<Product[]>(
    'https://api.example.com/products',
    {
      queryParams: { category: 'electronics' },
    }
  );

  if (productResponse.data) {
    console.log('Products:', productResponse.data);
  } else {
    console.error('Product fetch error:', productResponse.error);
  }
}

main();

Explanation of Type Safety and Error Handling:

  1. Generics:

    • The T generic type ensures the data property in ApiResponse<T> matches the expected type (e.g., User[] or Product[]). This prevents type mismatches at compile time.
    • For example, calling fetchData<User[]> ensures the data property is typed as User[], and TypeScript will flag any incorrect usage.
  2. Response Structure:

    • The ApiResponse<T> interface uses a union-like structure (data?: T; error?: ApiError) to ensure the response is either successful (data) or failed (error). This forces consumers to handle both cases explicitly.
  3. Query Parameters:

    • The FetchConfig interface allows type-safe query parameters via Record<string, string | number>. The URLSearchParams API ensures parameters are correctly formatted in the URL.
  4. Error Handling:

    • HTTP errors (non-200 status codes) return an ApiError with the status code and message.
    • Network or parsing errors are caught in the try-catch block and return an ApiError with a descriptive message.
    • The ApiError interface ensures errors are structured and type-safe.
  5. Type Safety Benefits:

    • TypeScript enforces that the consumer of fetchData knows the expected data type upfront, reducing runtime errors.
    • The optional data and error properties in ApiResponse ensure the consumer checks for errors before accessing data, preventing null/undefined errors.
    • The FetchConfig interface ensures only valid configuration properties are passed, and query parameters are safely serialized.

Bonus Considerations:

  • The candidate could extend the function to support request bodies for POST/PUT requests by adding a body field to FetchConfig with proper typing.
  • They could add type guards or utility functions to simplify response handling (e.g., isSuccessResponse<T>(response: ApiResponse<T>): response is { data: T }). A minimal sketch follows below.
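
A minimal sketch of that type guard, reusing the interfaces above:

// Narrow an ApiResponse<T> to its success shape so `data` can be used safely.
function isSuccessResponse<T>(
  response: ApiResponse<T>
): response is ApiResponse<T> & { data: T } {
  return response.data !== undefined;
}

// Usage (inside main above): TypeScript now knows `data` is present.
//   if (isSuccessResponse(userResponse)) console.log(userResponse.data.length);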

This question tests:

  • TypeScript fundamentals (interfaces, generics, union types).
  • Practical API integration with fetch.
  • Error handling and type safety.
  • Ability to explain design decisions.

It’s suitable for intermediate to senior developers and can be scaled down (e.g., remove query params) or up (e.g., add request body handling) based on the candidate’s experience level.