
Running Local LLMs with NeuroLink and Ollama: Complete Guide

2026-02-17 13:46:37

Your LLM API bill just hit $5,000 this month. OpenAI went down twice last week. And your legal team is nervous about sending proprietary data to external servers.

Sound familiar? Here's how to take back control with local LLM deployment using Ollama and NeuroLink.

TL;DR

  • Install Ollama: brew install ollama (or curl -fsSL https://ollama.com/install.sh | sh)
  • Run a model: ollama run llama3.1:8b
  • Connect with NeuroLink: provider: "ollama" in config
  • Full privacy, zero API costs, <100ms latency

Read on for the complete setup guide...

Table of Contents

  • Why Run LLMs Locally?
  • Setting Up Ollama
  • Configuring NeuroLink for Ollama
  • Model Selection Guide
  • Performance Optimization
  • Hybrid Cloud and Local Patterns
  • Troubleshooting Common Issues

Why Run LLMs Locally?

The rise of capable open-source language models has fundamentally changed how developers approach AI integration. No longer are you locked into cloud-only solutions with their associated costs, latency, and privacy concerns.

Privacy and Data Sovereignty

When you run models locally, your data never leaves your infrastructure. This is critical for:

  • Healthcare applications handling protected health information (PHI)
  • Financial services processing sensitive customer data
  • Legal tech working with privileged communications
  • Enterprise applications dealing with proprietary business information

Cost Predictability

Cloud LLM APIs charge per token, which can lead to unpredictable costs as usage scales. Local deployment converts this variable cost into a fixed infrastructure investment. Once you have the hardware, your marginal cost per inference approaches zero.

Latency Optimization

Network round-trips to cloud providers introduce latency that can be unacceptable for real-time applications. Local inference eliminates this network overhead entirely. On properly configured hardware, you can achieve response times measured in milliseconds rather than seconds.

Offline Capability

Local models work without internet connectivity, enabling deployment in:

  • Air-gapped environments
  • Edge devices with intermittent connectivity
  • Mobile applications requiring offline functionality
  • Disaster recovery scenarios

Setting Up Ollama

Ollama has emerged as the leading solution for running LLMs locally. It provides a simple, Docker-like experience for model management.

Installation

macOS (Homebrew):

brew install ollama

macOS/Linux (Direct Download):

curl -fsSL https://ollama.com/install.sh | sh

Docker:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Starting the Ollama Server

After installation, start the Ollama service:

ollama serve

On macOS and Windows, Ollama typically runs as a background service automatically. On Linux, you may want to configure it as a systemd service:

# /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3

[Install]
WantedBy=default.target

Enable and start the service:

sudo systemctl enable ollama
sudo systemctl start ollama

Pulling Your First Model

With Ollama running, pull a model to get started:

# Pull Llama 3.1 8B - great balance of capability and speed
ollama pull llama3.1:8b

# Pull Mistral 7B - excellent for general tasks
ollama pull mistral:7b

# Pull CodeLlama for programming tasks
ollama pull codellama:13b

Verify the model is available:

ollama list

Test it with a quick prompt:

ollama run llama3.1:8b "Explain quantum computing in simple terms"
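
If you prefer to script that smoke test instead of using the CLI, the same prompt can be sent to Ollama's HTTP API (POST /api/generate on port 11434, per the Ollama API reference). A minimal TypeScript sketch, assuming Node 18+ for the built-in fetch:

// Send one prompt to Ollama's /api/generate endpoint and return the text.
// stream: false asks for a single JSON object instead of streamed chunks.
async function testOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3.1:8b", prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const data = await res.json();
  return data.response; // the generated text lives in the "response" field
}

testOllama("Explain quantum computing in simple terms").then(console.log);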

Configuring NeuroLink for Ollama

NeuroLink's provider-agnostic architecture makes Ollama integration straightforward.

Basic Configuration

In your NeuroLink configuration file, add Ollama as a provider:

# neurolink.config.yaml
providers:
  ollama:
    type: ollama
    base_url: http://localhost:11434
    default_model: llama3.1:8b
    timeout: 120
    retry:
      max_attempts: 3
      backoff_multiplier: 2

Environment Variables

Alternatively, configure via environment variables:

export NEUROLINK_OLLAMA_BASE_URL=http://localhost:11434
export NEUROLINK_OLLAMA_DEFAULT_MODEL=llama3.1:8b
export NEUROLINK_OLLAMA_TIMEOUT=120

Programmatic Configuration

For more control, configure Ollama programmatically:

import { NeuroLink } from "@juspay/neurolink";

// Initialize NeuroLink with Ollama
const nl = new NeuroLink({
  providers: [{
    name: "local",
    config: {
      baseUrl: "http://localhost:11434",
      defaultModel: "llama3.1:8b",
      timeout: 120,
      keepAlive: "5m"  // Keep model loaded for 5 minutes
    }
  }]
});

// Use the local provider
const response = await nl.generate({
  input: { text: "Write a function to calculate fibonacci numbers" },
  provider: "local"
});

Multiple Model Configuration

Configure multiple Ollama models for different use cases:

providers:
  ollama-fast:
    type: ollama
    base_url: http://localhost:11434
    default_model: llama3.1:8b

  ollama-code:
    type: ollama
    base_url: http://localhost:11434
    default_model: codellama:13b

  ollama-large:
    type: ollama
    base_url: http://localhost:11434
    default_model: llama3.1:70b

Model Selection Guide

Choosing the right model for your use case is crucial for balancing capability with resource requirements.

General Purpose Models

Model | Size | VRAM | Best For
Llama 3.1 8B | 4.7GB | 8GB min | General chat, summarization, simple reasoning
Llama 3.1 70B | 40GB | 48GB+ | Complex reasoning, nuanced tasks
Mistral 7B | 4.1GB | 6GB min | Quick tasks, high throughput

Coding Models

Model | Size | VRAM | Best For
CodeLlama 13B | 7.4GB | 12GB min | Code generation, debugging
DeepSeek Coder 33B | 19GB | 24GB min | Complex programming tasks

Quantization Options

Ollama supports various quantization levels that trade quality for reduced resource requirements:

# Full precision (largest, highest quality)
ollama pull llama3.1:8b-fp16

# 8-bit quantization (good balance)
ollama pull llama3.1:8b-q8_0

# 4-bit quantization (smallest, slight quality reduction)
ollama pull llama3.1:8b-q4_0

Performance Optimization

Getting the best performance from local LLMs requires attention to hardware configuration and Ollama settings.

GPU Acceleration

For optimal performance, use a GPU with sufficient VRAM:

# Check if Ollama is using GPU
ollama ps

# Force CPU-only mode (if needed)
OLLAMA_GPU_LAYERS=0 ollama serve

Memory Management

Configure system resources appropriately:

# Set maximum loaded models
export OLLAMA_MAX_LOADED_MODELS=2

# Set VRAM limit (in bytes)
export OLLAMA_GPU_MEMORY=8589934592  # 8GB

# Configure context window (affects memory usage)
export OLLAMA_NUM_CTX=4096

Custom Modelfile for Optimization

Create a custom Modelfile for optimized inference:

# Modelfile.optimized
FROM llama3.1:8b

# Increase context window
PARAMETER num_ctx 8192

# Optimize for speed
PARAMETER num_batch 512
PARAMETER num_thread 8

# Reduce temperature for more deterministic outputs
PARAMETER temperature 0.7
PARAMETER top_p 0.9

# System prompt for your use case
SYSTEM """You are a helpful assistant optimized for technical questions."""

Build and use the optimized model:

ollama create llama-optimized -f Modelfile.optimized
ollama run llama-optimized
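
Once the custom model is built, it can be referenced from NeuroLink like any other tag. A sketch reusing the programmatic configuration options shown earlier (baseUrl, defaultModel, keepAlive); "llama-optimized" is simply the name passed to ollama create above:

import { NeuroLink } from "@juspay/neurolink";

// Point the local provider's default model at the custom Modelfile build
const nl = new NeuroLink({
  providers: [{
    name: "local",
    config: {
      baseUrl: "http://localhost:11434",
      defaultModel: "llama-optimized",
      keepAlive: "5m"
    }
  }]
});

const response = await nl.generate({
  input: { text: "Summarize the trade-offs of 4-bit quantization" },
  provider: "local"
});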

Hybrid Cloud and Local Patterns

One of NeuroLink's most powerful features is the ability to seamlessly combine local and cloud providers.

Fallback Pattern

Use local inference by default, falling back to cloud when local resources are exhausted:

import { NeuroLink } from "@juspay/neurolink";

const nl = new NeuroLink({
  providers: [
    { name: "local", config: { baseUrl: "http://localhost:11434" } },
    { name: "openai", config: { apiKey: process.env.OPENAI_API_KEY } },
    { name: "anthropic", config: { apiKey: process.env.ANTHROPIC_API_KEY } }
  ],
  failover: {
    enabled: true,
    primary: "local",
    fallbackProviders: ["openai", "anthropic"],
    triggerOn: ["timeout", "overload", "error"]
  }
});

// This will try local first, then cloud if needed
const response = await nl.generate({
  input: { text: "Complex analysis task..." },
  maxTokens: 2000
});

Task-Based Routing

Route requests to appropriate providers based on task characteristics:

import { NeuroLink } from "@juspay/neurolink";

const nl = new NeuroLink({
  providers: [
    { name: "local", config: { baseUrl: "http://localhost:11434" } },
    { name: "anthropic", config: { apiKey: process.env.ANTHROPIC_API_KEY } }
  ],
  routing: {
    rules: [
      {
        taskType: "simple_qa",
        provider: "local",
        model: "llama3.1:8b"
      },
      {
        taskType: "code_generation",
        provider: "local",
        model: "codellama:13b"
      },
      {
        taskType: "complex_reasoning",
        provider: "anthropic",
        model: "claude-3-opus"
      }
    ]
  }
});

// Automatically routes to appropriate provider
const response = await nl.generate({
  input: { text: "Write a sorting algorithm" },
  taskType: "code_generation"
});

Privacy-Preserving Routing

Automatically route sensitive data to local inference:

import { NeuroLink } from "@juspay/neurolink";

const nl = new NeuroLink({
  providers: [
    { name: "ollama", config: { baseUrl: "http://localhost:11434" } },
    { name: "openai", config: { apiKey: process.env.OPENAI_API_KEY } }
  ],
  middleware: {
    guardrails: {
      piiDetection: {
        enabled: true,
        patterns: [
          { name: "ssn", regex: "\\b\\d{3}-\\d{2}-\\d{4}\\b" },
          { name: "email", regex: "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}" }
        ],
        sensitiveKeywords: ["confidential", "proprietary"],
        localProvider: "ollama",
        cloudProvider: "openai"
      }
    }
  }
});

// Automatically routes to local if sensitive data detected
const response = await nl.generate({
  input: { text: "Analyze this customer data: SSN 123-45-6789..." }
  // Routes to local Ollama automatically
});

Troubleshooting Common Issues

Model Loading Failures

Symptom: "Error: model not found" or slow initial response

Solutions:

# Verify model is downloaded
ollama list

# Re-pull if corrupted
ollama rm llama3.1:8b
ollama pull llama3.1:8b

# Check disk space
df -h ~/.ollama

Out of Memory Errors

Symptom: "CUDA out of memory" or system freeze

Solutions:

# Use smaller model
ollama pull llama3.1:8b-q4_0

# Reduce context window
export OLLAMA_NUM_CTX=2048

# Limit GPU memory
export OLLAMA_GPU_MEMORY=6442450944  # 6GB

Slow Inference

Symptom: Response times exceeding expectations

Solutions:

# Verify GPU is being used
ollama ps

# Check for thermal throttling
nvidia-smi -l 1

# Increase batch size for throughput
# In Modelfile:
PARAMETER num_batch 1024

Connection Refused

Symptom: NeuroLink cannot connect to Ollama

Solutions:

# Verify Ollama is running
curl http://localhost:11434/api/tags

# Check firewall settings
sudo ufw allow 11434/tcp

# Restart Ollama service
sudo systemctl restart ollama
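
If the curl probe succeeds but your application still reports connection errors, it helps to run the same check from the app's runtime. A minimal TypeScript sketch hitting the /api/tags endpoint used above, with an arbitrary 2-second timeout so a hung server fails fast:

// Returns true if Ollama answers /api/tags within the timeout
async function ollamaIsUp(baseUrl = "http://localhost:11434"): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 2000);
  try {
    const res = await fetch(`${baseUrl}/api/tags`, { signal: controller.signal });
    return res.ok;
  } catch {
    return false; // connection refused, DNS failure, or timeout
  } finally {
    clearTimeout(timer);
  }
}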

Conclusion

Running local LLMs with Ollama and NeuroLink provides a powerful, flexible, and privacy-preserving approach to AI integration. By following this guide, you've learned how to:

  1. Set up and configure Ollama for local inference
  2. Integrate Ollama with NeuroLink for seamless model access
  3. Select appropriate models for your use cases
  4. Optimize performance for your hardware
  5. Implement hybrid cloud and local patterns
  6. Troubleshoot common deployment issues

The combination of local and cloud inference gives you unprecedented flexibility in how you deploy AI capabilities. Start with local models for development and privacy-sensitive tasks, scale to cloud providers when you need additional capacity or capabilities, and let NeuroLink handle the complexity of managing multiple providers.

Found this helpful? Drop a comment below with your questions or share your experience with local LLMs!


Stop Sorting Just to Check Order: 5 Fast O(n) Methods in Python

2026-02-17 13:41:46

The Problem With Using sort() Just to Verify Order

A common mistake I see (and have made!):

Don't do this for validation

def is_sorted_bad(lst):
    return lst == sorted(lst)

Python's Timsort is O(n log n) and copies the whole list. For a million elements? ~20 million operations and extra memory — just for a yes/no! 😩
When the list is already sorted (often the case), it's pure waste.
(Figure: Big O growth of O(n log n) vs O(n). Caption: Why sorting for verification hurts performance.)
The methods below are all O(n) with early exit — they stop at the first out-of-order pair.

Quick Note: What Does "Sorted" Mean?

Non-decreasing: Allows duplicates (<=) → [1, 2, 2, 3] ✅
Strictly increasing: No duplicates (<) → [1, 2, 2, 3] ❌

Same for descending (flip operators). All methods assume comparable elements.

Method 1: all() + zip() — The Go-To Pythonic Way 🔥

def is_sorted(lst):
    return all(x <= y for x, y in zip(lst, lst[1:]))

# Strictly increasing
def is_strictly_sorted(lst):
    return all(x < y for x, y in zip(lst, lst[1:]))

Lazy evaluation + early exit = super efficient.

Method 2: Classic For Loop — Interview Favorite

def is_sorted(lst):
    for i in range(len(lst) - 1):
        if lst[i] > lst[i + 1]:
            return False
    return True

Method 3: all() + range() — When You Need the Index

def is_sorted(lst):
    return all(lst[i] <= lst[i + 1] for i in range(len(lst) - 1))

Great for extensions like finding the first violation.

Method 4: itertools.pairwise() — Cleanest (Python 3.10+)

from itertools import pairwise

def is_sorted(lst):
    return all(x <= y for x, y in pairwise(lst))

No slice overhead — true O(1) space!

Method 5: operator.le — For Custom Objects

import operator

def is_sorted_by(lst, key):
    return all(operator.le(key(x), key(y)) for x, y in zip(lst, lst[1:]))

Perfect for checking whether objects are in order by an attribute.

Descending Order? Just Flip It!

Ascending vs Descending

def is_non_increasing(lst):
    return all(x >= y for x, y in zip(lst, lst[1:]))

Edge Cases

  1. Empty/single-element lists → Always True
  2. Mixed types → TypeError (wrap in try/except)
  3. None values → Handle manually

Performance Quick Comparison

Method | Time / Space | Notes | Best For
all() + zip() | O(n) / O(n) slice copy | Readable, common | Everyday use
For loop | O(n) / O(1) | Clear, no tricks | Interviews
all() + range() | O(n) / O(1) | Useful when index is needed | Extensions (e.g., finding first violation)
pairwise() | O(n) / O(1) | True O(1) space | Modern Python (3.10+)
operator.le | O(n) / O(n) slice copy | Great for custom objects | Key-based or attribute sorting

NumPy/Pandas? Use Their Built-ins!

# NumPy
np.all(arr[:-1] <= arr[1:])

# Pandas
series.is_monotonic_increasing

When sort() Wins

If you need the sorted list anyway → Just call .sort() — Timsort is O(n) on sorted data!

That's it! Which method do you use most? Drop a comment below 👇
For the complete guide (with FAQs, full quiz, and more edge cases), head to my blog:
https://emitechlogic.com/check-if-list-is-sorted-python/

Related Articles on My Blog

Loved this efficiency tip? Here are more Python algorithm and performance guides:

  1. I Implemented Every Sorting Algorithm in Python — The Results Nobody Talks About (Benchmarked on CPython)
  2. Python Optimization Guide: How to Write Faster, Smarter Code
  3. How to Check Palindrome in Python: 5 Efficient Methods (2026 Guide)
  4. How Python Searches Data: Linear Search, Binary Search, and Hash Lookup Explained
  5. How to Reverse a String in Python: Performance, Memory, and the Tokenizer Trap
  6. Type Conversion in Python: The Ultimate Guide

Day 8 – Connecting All the Dots (Frontend Integration & Polish)

2026-02-17 13:41:00

Up until now, Phase #3 (Frontend) was mostly about building pieces: sections, hooks, JSON files, layouts, navigation, and individual pages.

Day 8 was the day when I finally stopped creating new components and started doing something more product-like:

Making everything talk to each other properly.

This was the first time the portfolio started behaving like a cohesive product instead of a collection of screens.

From "Components" to "Product"

As a developer, it’s very tempting to keep adding features:

  • one more section,
  • one more animation,
  • one more component.

But as a product owner/project lead, I forced myself to step back and ask:

  • Do all pages feel consistent?
  • Does the data flow make sense?
  • Is the UX predictable for a real user?

So Day 8 became an integration and refinement day.

No new architecture.
No new fancy features.
Just alignment and polish.

1. Updating All Pages with Real Data

All the pages under src/pages/*.jsx were updated to properly consume:

  • the latest JSON structures,
  • the new hooks,
  • and the finalised section components.

This included:

  • About
  • Projects
  • Experiences
  • Achievements
  • Open Source
  • Contact

Each page now follows the same mental model:

JSON → Hook → Page → Sections → UI

Which means:

  • No hardcoded text.
  • No magic values.
  • Everything comes from data.

From a long-term perspective, this is huge:

  • Tomorrow, I can replace JSON with an API.
  • The UI won’t change at all.

That’s real frontend architecture, not just React code.

2. Making the Navigation Truly Reflect the Product

Earlier, the Navbar and Footer existed mostly as components.
On Day 8, they finally became product navigation.

I updated:

  • Navbar links
  • Footer links
  • Contact references

So that:

  • There are no dead routes.
  • There are no fake sections.
  • Everything matches the actual pages.

This is one of those things users never notice, but they instantly feel when it’s wrong.

Broken navigation = broken trust.

3. Contact Page: From Feature to Experience

The Contact page was already built earlier. Day 8 was about making it feel complete:

  • Ensuring form states work correctly.
  • Making sure social links are consistent.
  • Aligning the content with the rest of the site's tone.

At this point, Contact stopped being:

"A form component"

and became:

"The only real entry point for human interaction with the product."

That’s a very different mindset.

4. The Most Underrated Skill: Restraint

This day taught me something important:

Sometimes the best engineering decision is to not build anything new.

Instead:

  • review,
  • refactor,
  • simplify,
  • align.

As a solo developer wearing multiple hats (PM, designer, architect, dev), this step is very easy to skip.

But this is exactly what separates:

  • a demo project

from

  • a product foundation.

Product Thinking vs Tutorial Thinking

A tutorial would say:

"Today we updated some pages."

A product mindset says:

"Today we validated the entire frontend architecture."

That’s the difference.

Day 8 was about:

  • coherence,
  • consistency,
  • and credibility.

Not code volume.

Not feature count.

But system quality.

What Changed Emotionally (Not Technically)

This might sound weird, but after Day 8:

I stopped seeing this as:

"my React portfolio"

And started seeing it as:

"my personal product"

Something I could:

  • evolve,
  • extend,
  • maintain,
  • and even hand over to someone else.

That’s a huge psychological shift for any engineer.

Why Day 8 Matters More Than It Looks

Day 8 won’t impress on GitHub commit history. It’s a small diff. A few files changed.

But in real-world projects, this is exactly how products mature:

Not with big features. But with small integration days that remove friction everywhere.

Closing Thought

Day 8 was not about building. It was about owning the system.

And that, more than any new component, is what actually makes you a senior engineer in mindset, not just in years of experience.

If you’re following along, the complete source lives here:
👉 GitHub Repository: Portfolio.

Building a Modern Full-Stack Application: Architecture First

2026-02-17 13:40:14

The Journey Begins

Ever started a project that seemed simple at first, only to watch it spiral into a tangled mess of spaghetti code? I've been there. But what if I told you that spending time upfront on architecture could save you hundreds of hours of refactoring later?

In this series, I'm going to walk you through building a production-ready full-stack application with clean architecture principles. This isn't theoretical fluff—this is real code, real patterns, and real lessons learned from building an enterprise-level tutoring management system.

What we'll build:

  • A modern .NET 9 Web API backend
  • A React 19 + TypeScript frontend
  • PostgreSQL database with Entity Framework Core
  • Azure AD authentication
  • Real-time features and background processing

But more importantly, we'll build it the right way—with clean architecture, proper layering, and maintainable patterns that scale.

In this first post, we'll understand WHY architecture matters by looking at what happens when you skip it. We'll see the real problems that emerge and why "we'll fix it later" never works.

The Cost of "We'll Fix It Later"

Here's what typically happens when you skip architecture:

Week 1: "Let's just get something working"

  • Direct database calls in controllers
  • No thought to testing
  • "We'll clean it up later"

Month 3: "We need to add features fast"

  • Copy-paste existing code
  • Each developer does things differently
  • Technical debt accumulating

Month 6: "Why is everything breaking?"

  • Changes in one place break unrelated features
  • Can't add tests (too tightly coupled)
  • Fear of touching existing code

Month 12: "We need to rewrite this"

  • Too expensive to fix
  • Business pressure to keep adding features
  • Team morale drops

The Truth: Later never comes. Technical debt compounds like credit card interest. What takes 1 hour to do right initially will take 10 hours to fix later, or 100 hours to rewrite.

Architecture First vs. Code First

Code First Approach:

Start coding → Hit problems → Try to refactor → Too late → Live with mess
  • ✅ Fast initial progress
  • ❌ Slows down dramatically over time
  • ❌ Hard to test
  • ❌ Difficult to maintain
  • ❌ Expensive to change

Architecture First Approach:

Plan architecture → Implement with patterns → Maintain structure → Scale easily
  • ⚠️ Slower initial start (1-2 days planning)
  • ✅ Consistent velocity over time
  • ✅ Easy to test
  • ✅ Simple to maintain
  • ✅ Changes are isolated and safe

Real-world analogy: Building a house. You can start nailing boards together and see progress immediately, but without a blueprint, you'll end up with crooked walls and no plumbing. Architects spend weeks on blueprints because it saves months during construction.

A Real Example: What Happens Without Architecture

Let me show you exactly what happens when you skip architecture. Here's real code that "works" but creates problems:

// Controller.cs - This is what happens without architecture
public class StudentController : ControllerBase
{
    [HttpGet]
    public async Task<IActionResult> GetStudents()
    {
        // Direct database access in controller? Bad!
        using var connection = new NpgsqlConnection("connection_string");
        var students = await connection.QueryAsync<Student>("SELECT * FROM Students");

        // Business logic in controller? Also bad!
        foreach(var student in students)
        {
            if(student.Age < 18)
                student.RequiresParentalConsent = true;
        }

        // Returning database entities directly? Triple bad!
        return Ok(students);
    }
}

Let's Understand Why This Is Problematic

Let me walk you through what's happening here and why each line is a problem:

Line 1: Direct Database Connection

using var connection = new NpgsqlConnection("connection_string");

What this does: Creates a direct connection to PostgreSQL database from the controller.

Why it's bad:

  • Your controller now KNOWS about PostgreSQL specifically. Want to switch to SQL Server tomorrow? You'll have to change your controller code.
  • The connection string is hardcoded (or at best, injected here), meaning the controller is responsible for database connectivity.
  • Testing becomes impossible - to test this method, you need an actual database running. No unit tests possible!

Real-world analogy: It's like a restaurant waiter walking into the kitchen, cooking the food themselves, and serving it. The waiter should just take orders and deliver food - not know how to operate the stove!

Line 2: Direct Database Query

var students = await connection.QueryAsync<Student>("SELECT * FROM Students");

What this does: Executes a raw SQL query and maps results to Student objects.

Let's break down the complex terms:

"Raw SQL query" - This is a direct database command written in SQL (Structured Query Language), the language databases understand. It's "raw" because you're writing the actual database commands yourself instead of using a higher-level abstraction.

"Maps results to Student objects" - The database returns rows of data (like Excel spreadsheet rows). The QueryAsync<Student> method takes those rows and converts them into C# Student objects:

Database returns:
┌────┬───────────┬─────┬──────────┐
│ Id │ Name      │ Age │ Email    │
├────┼───────────┼─────┼──────────┤
│ 1  │ John Doe  │ 20  │ [email protected]  │
│ 2  │ Jane Doe  │ 19  │ [email protected] │
└────┴───────────┴─────┴──────────┘

Gets "mapped" to C# objects:
new Student { Id = 1, Name = "John Doe", Age = 20, Email = "[email protected]" }
new Student { Id = 2, Name = "Jane Doe", Age = 19, Email = "[email protected]" }

Why it's bad:

  • Your controller now knows about database tables and SQL syntax
  • If the Students table structure changes, you must change the controller
  • You're selecting ALL columns (SELECT *) even if you only need a few
  • SQL injection risks if you ever add parameters ← This is CRITICAL!

Understanding SQL Injection - A Security Nightmare

What is SQL Injection?

SQL injection is when an attacker tricks your application into running malicious database commands. It's one of the most dangerous security vulnerabilities.

Vulnerable Code Example:

// NEVER DO THIS! ☠️ Extremely dangerous!
public async Task<IActionResult> GetStudentByName(string name)
{
    // Building query by concatenating user input
    var query = "SELECT * FROM Students WHERE Name = '" + name + "'";
    var student = await connection.QueryAsync<Student>(query);
    return Ok(student);
}

What happens when a normal user searches for "John"?

-- Query becomes:
SELECT * FROM Students WHERE Name = 'John'
-- ✅ Works fine, returns John's record

What happens when an ATTACKER enters: John'; DROP TABLE Students; --

-- Query becomes:
SELECT * FROM Students WHERE Name = 'John'; DROP TABLE Students; --'
-- ☠️ DISASTER! This:
-- 1. Selects John
-- 2. DELETES YOUR ENTIRE STUDENTS TABLE
-- 3. -- comments out the rest

Your entire Students table is GONE! All student data. Deleted. Forever.

Even worse attacks:

-- Attacker enters: ' OR '1'='1
SELECT * FROM Students WHERE Name = '' OR '1'='1'
-- ☠️ Returns ALL students (security breach - data exposure)

-- Attacker enters: '; UPDATE Students SET GPA = 4.0 WHERE Name = 'Attacker'; --
SELECT * FROM Students WHERE Name = ''; UPDATE Students SET GPA = 4.0 WHERE Name = 'Attacker'; --'
-- ☠️ Changes grades in database (data manipulation)

-- Attacker enters: '; SELECT password, email FROM Users; --
-- ☠️ Steals passwords (credential theft)

Why this happens:

  • User input is treated as code instead of data
  • The database can't tell the difference between your intended query and the attacker's injected commands
  • It's like giving someone a form to fill out and they write instructions in the blank spaces that you then follow blindly

The Safe Way - Parameterized Queries:

// ✅ Safe - Using parameters
public async Task<IActionResult> GetStudentByName(string name)
{
    var query = "SELECT * FROM Students WHERE Name = @Name";
    var student = await connection.QueryAsync<Student>(query, new { Name = name });
    return Ok(student);
}

What changes?

  • @Name is a parameter placeholder
  • The value is passed separately: new { Name = name }
  • The database driver automatically escapes the input
  • User input is treated as data, never as code

When attacker enters: John'; DROP TABLE Students; --

-- Query stays:
SELECT * FROM Students WHERE Name = @Name
-- But the parameter value is:
@Name = "John'; DROP TABLE Students; --"
-- Database treats the ENTIRE string as the name to search for
-- It looks for a student literally named "John'; DROP TABLE Students; --"
-- Finds nothing, returns empty result
-- ✅ Your table is SAFE!

Real-world impact:

  • 2008: Heartland Payment Systems - 134 million credit cards stolen via SQL injection
  • 2012: Yahoo - 450,000 passwords leaked
  • 2017: Equifax breach - 147 million people affected (initially exploited a different vulnerability, but SQL injection was found in their systems)
  • SQL injection has been in the OWASP Top 10 security risks for over a decade

The lesson: NEVER concatenate user input into SQL queries. Always use parameterized queries or ORMs (like Entity Framework) that handle this automatically.

Line 3-6: Business Logic in Controller

foreach(var student in students)
{
    if(student.Age < 18)
        student.RequiresParentalConsent = true;
}

What this does: Loops through students and applies a business rule.

Why it's bad:

  • This business rule ("under 18 requires parental consent") should be reusable
  • What if you need this logic in 10 different places? Copy-paste it 10 times?
  • Controllers should coordinate, not calculate
  • This logic can't be unit tested separately

Line 7: Returning Database Entities

return Ok(students);

What this does: Sends the Student database entity directly to the API caller.

Why it's REALLY bad:

  • You're exposing your internal database structure to the world
  • If Student entity has sensitive fields (SSN, password hash), they're now public
  • Changing your database means breaking your API contract
  • Circular references in entities can crash JSON serialization

The Ripple Effects: A Timeline of Pain

This code works today, sure. But watch what happens over time:

Month 1: "Let's add filtering by grade level"

  • Now you have SQL queries scattered across multiple controllers
  • Each controller has slightly different filtering logic
  • Bugs multiply because logic is duplicated

Month 3: "We need to switch from PostgreSQL to SQL Server"

  • You must modify EVERY SINGLE CONTROLLER
  • 50+ files changed, hundreds of lines modified
  • High risk of bugs
  • Weekend deployment becomes a week-long migration

Month 6: "Let's add unit tests"

  • Every test requires a real database
  • Tests are slow (100-1000ms each instead of <1ms)
  • Tests fail if database is down or has wrong data
  • Basically untestable without complex infrastructure

Month 12: "The API is exposing sensitive data!"

  • Emergency security patch needed
  • Must audit every single endpoint
  • Can't tell what data is being exposed where
  • Compliance violations, potential lawsuits

Year 2: "We need to add caching"

  • Every controller must be modified
  • Caching logic duplicated everywhere
  • Some developers forget to add it
  • Inconsistent behavior across endpoints

Year 3: "New developer joins the team"

  • Takes weeks to understand where logic lives
  • Accidentally introduces bugs because everything is connected
  • Afraid to change anything
  • Team velocity drops to 25% of what it should be

There has to be a better way. And there is.

The Investment vs. The Return

Without Architecture:

  • Day 1-10: Fast progress! 🚀
  • Month 1-3: Slowing down... 🐌
  • Month 6+: Every change is risky and slow 🐢
  • Year 2: Considering a rewrite 💸💸💸

With Architecture:

  • Day 1-2: Learning and planning 📚
  • Day 3-10: Structured progress 🏗️
  • Month 1-12: Consistent velocity ⚡
  • Year 2+: Easy to maintain and extend ✨

The Math:

  • Without architecture: 100 hours saved initially, 1000+ hours of pain later
  • With architecture: 10 hours invested initially, 100s of hours saved over time

Architecture is not overhead. Architecture is debt prevention.

What's Next?

Now that you understand WHY architecture matters and what happens when you skip it, in Part 2 we'll explore the different architectural approaches available:

  • No Architecture (Script Pattern) - When is it okay?
  • Traditional N-Tier - Better, but still has problems
  • Active Record Pattern - Simple but limiting
  • Repository Pattern - Getting closer
  • Domain-Driven Design - For complex domains
  • Microservices - The scaling solution
  • Clean Architecture - The sweet spot ⭐

For each pattern, I'll show you:

  • Real code examples with line-by-line explanations
  • What problems it solves
  • What problems it creates
  • When to use it (and when NOT to)
  • Why Clean Architecture wins for most production applications

Key Takeaways

"We'll fix it later" never happens - Technical debt compounds exponentially

Without architecture, every change becomes dangerous - Fear of touching code kills velocity

SQL injection is real and devastating - Billions of dollars lost due to this vulnerability

Architecture is debt prevention, not overhead - 10 hours invested saves 100s later

Controllers doing everything is a time bomb - Database, business logic, HTTP all mixed together

Testing becomes impossible without separation - Can't test what you can't isolate

Team scalability requires structure - New developers need clear boundaries

Discussion

Have you experienced the pain of "we'll fix it later"? What was the tipping point that made you invest in architecture? Share your stories in the comments below!

Next in Series: Part 2: Comparing Architectural Approaches - Finding the Right Pattern
Tags: #dotnet #csharp #architecture #softwaredevelopment #webapi #programming #technicaldebt #coding

This series is based on real experiences building an enterprise tutoring management system. All code examples have been generalized for educational purposes.

Secure Remote Access to AWS Resources from On-Premises

2026-02-17 13:34:51

Modern AWS architectures avoid exposing networks and instead focus on intent-based access:

  • Application access
  • Administrative access
  • Controlled network access

This reference architecture demonstrates three secure access patterns from on-prem to AWS using:

  • AWS Client VPN
  • EC2 Instance Connect Endpoint
  • AWS Verified Access

Each pattern serves a distinct purpose and should be used together—not interchangeably.

Architecture Context

The VPC contains:

  • Private EC2 instances
  • Private RDS databases
  • Internal Application Load Balancer
  • No public subnets for compute
  • No inbound SSH or database ports

Remote users enter AWS only through explicit access services.

Flow 1: Application Access via AWS Verified Access

Use Case

Secure access to private web applications without VPN or public exposure.

Flow

Key Characteristics

  • Identity-based access
  • No network-level trust
  • No VPC-wide visibility
  • No public ALB required

Flow 2: Administrative EC2 Access via EC2 Instance Connect Endpoint

Use Case

Secure SSH access to private EC2 instances without bastion hosts or public IPs.

Flow

Key Characteristics

  • IAM-based authentication
  • Short-lived access
  • No inbound SSH rules
  • Fully auditable

Flow 3: Network-Level Access via AWS Client VPN

Use Case

Broad access to private AWS resources such as databases, internal APIs, or legacy tools.

Flow

Key Characteristics

  • Encrypted tunnel
  • Route-based access
  • Subnet-level reachability
  • Works well for legacy workflows

VPN vs Verified Access (When to Use What)

This is a common design decision, and the wrong choice often leads to overexposure.

Comparison Table

Aspect | AWS Client VPN | AWS Verified Access
Access Model | Network-level | Application-level
Trust Boundary | VPC/Subnet | Identity & policy
User Sees Network | Yes | No
VPN Client Required | Yes | No
Best For | DBs, legacy apps, tools | Web apps, dashboards
Lateral Movement Risk | Higher | Very low
Zero Trust Alignment | Partial | Strong

Simple Rule of Thumb

If users need a network → VPN
If users need an app → Verified Access

Why These Services Work Best Together

Requirement | Service
Internal web applications | AWS Verified Access
EC2 administration | EC2 Instance Connect Endpoint
Database / legacy access | AWS Client VPN

This layered approach ensures:

  • No over-privileged access
  • Clear separation of concerns
  • Reduced blast radius
  • Easier audits and compliance

Security Best Practices Highlighted

  • No public EC2 or RDS
  • No inbound SSH from on-prem
  • IAM-driven access
  • Explicit access entry points
  • Multi-AZ design

Final Thoughts

Secure remote access to AWS is not about choosing one tool.

It’s about:

  • Matching access method to intent
  • Avoiding unnecessary network exposure
  • Enforcing identity at the entry point

By combining:

  • AWS Verified Access
  • EC2 Instance Connect Endpoint
  • AWS Client VPN

you get a secure, scalable, and least-privilege remote access model from on-prem to AWS.

Beginner's Guide To TypeScript + React Integration

2026-02-17 13:18:47

Introduction to TypeScript

What is TypeScript?

TypeScript is a strongly typed programming language that builds on JavaScript, giving you better tooling at any scale. Developed and maintained by Microsoft, TypeScript adds optional static typing and class-based object-oriented programming to JavaScript. Since TypeScript is a superset of JavaScript, existing JavaScript programs are also valid TypeScript programs.

TypeScript code cannot run directly in browsers or Node.js - it must first be compiled (transpiled) to JavaScript. This compilation step is where TypeScript catches type errors, helping you find bugs before your code runs.

Why TypeScript over JavaScript?

While JavaScript is powerful and flexible, it has limitations when building large-scale applications. TypeScript addresses these limitations:

  • Static Type Checking - Catch errors at compile time rather than runtime
  • Better IDE Support - Enhanced autocompletion, navigation, and refactoring
  • Improved Code Readability - Type annotations serve as documentation
  • Enhanced Refactoring - Safely rename symbols and restructure code
  • Latest ECMAScript Features - Use modern features with backward compatibility

TypeScript vs JavaScript

Feature | JavaScript | TypeScript
Type System | Dynamic typing | Static typing (optional)
Compilation | Interpreted directly | Compiles to JavaScript
Error Detection | Runtime errors | Compile-time errors
IDE Support | Basic | Enhanced (IntelliSense)
Interfaces | Not supported | Fully supported
Generics | Not supported | Fully supported

Installing TypeScript

Step 1: Install Node.js

Before using TypeScript, you need Node.js installed on your machine. Follow these steps:

  1. Visit the official Node.js website: https://nodejs.org
  2. Download the LTS (Long Term Support) version for your operating system
  3. Run the installer and follow the installation wizard
  4. Verify the installation by opening a terminal and running:
node --version
# Should output: v20.x.x or higher

npm --version
# Should output: 10.x.x or higher

Step 2: Install TypeScript Globally

Once Node.js is installed, you can install TypeScript globally using npm:

npm install -g typescript

Step 3: Verify TypeScript Installation

tsc --version
# Should output: Version 5.x.x

Tip: You can also install TypeScript locally in a project using npm install typescript --save-dev. This is recommended for projects to ensure consistent versions across team members.

Setting up TypeScript with Node.js

To set up a new TypeScript project, create a directory and initialize it:

mkdir my-typescript-project
cd my-typescript-project
npm init -y
npm install typescript --save-dev

Create a TypeScript configuration file (tsconfig.json):

npx tsc --init

Here's a recommended configuration for beginners:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}

Your First TypeScript Program

Create a file called hello.ts in your src folder:

// src/hello.ts
function greet(name: string): string {
  return "Hello, " + name + "!";
}

const message: string = greet("TypeScript");
console.log(message);

Notice the type annotations: name: string specifies the parameter type, and : string after the parentheses specifies the return type.

Understanding the Compilation Process

TypeScript must be compiled to JavaScript before it can run:

# Compile a single file
tsc src/hello.ts

# Compile using tsconfig.json
tsc

# Watch mode - recompile on changes
tsc --watch

When you run the tsc command (or tsc --watch for continuous compilation), TypeScript creates a new dist folder containing the compiled JavaScript files. You'll see your hello.ts file transformed into hello.js with all type annotations removed:

// dist/hello.js (compiled output)
"use strict";
function greet(name) {
  return 'Hello, ' + name + '!';
}
const message = greet('TypeScript');
console.log(message);

Here's what the project structure looks like in VS Code after compilation:

(Screenshot: project structure in VS Code after compilation, showing the src and dist folders.)

Note: All type annotations are removed during compilation. Types only exist at development and compile time - they help catch errors but don't affect runtime behavior.

Running the Compiled Code

Now that your TypeScript code has been compiled to JavaScript, you can run it using Node.js. Execute the compiled JavaScript file from the dist folder:

# Run the compiled JavaScript file
node dist/hello.js

You should see the following output in your terminal:

Hello, TypeScript!

Congratulations! You've successfully written, compiled, and executed your first TypeScript program. The workflow is: write .ts files → compile with tsc → run the output .js files with node.

Tip: You can also use ts-node to run TypeScript files directly without manual compilation: npm install -g ts-node, then run ts-node src/hello.ts. This is useful during development.

TypeScript Basics

Basic Types (string, number, boolean)

TypeScript provides several basic types that form the foundation of the type system:

// String type
let firstName: string = "John";
let lastName: string = 'Doe';

// Number type (includes integers and floats)
let age: number = 30;
let price: number = 99.99;
let hex: number = 0xf00d;

// Boolean type
let isActive: boolean = true;
let isCompleted: boolean = false;

Type Inference

TypeScript can automatically infer types based on assigned values:

// TypeScript infers the type automatically
let name = "Alice";    // inferred as string
let count = 42;        // inferred as number
let isValid = true;    // inferred as boolean

// Type is locked after inference
name = "Bob";   // OK
name = 123;     // Error: Type 'number' is not assignable to type 'string'

Best Practice: Use explicit types when the initial value doesn't clearly indicate the intended type, or when declaring variables without immediate initialization.
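
For example, both cases from the tip above, as a short sketch:

// Declared before it gets a value: annotate so the intent is explicit
let retryCount: number;
retryCount = 3;

// Initial value doesn't show the intended type: annotate the wider type
let discount: number | null = null;
discount = 0.15;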

Arrays and Tuples

TypeScript provides two ways to define arrays, and introduces tuples for fixed-length arrays:

// Arrays - two syntax options
let numbers: number[] = [1, 2, 3, 4, 5];
let names: Array<string> = ["Alice", "Bob", "Charlie"];

// Mixed arrays
let mixed: (string | number)[] = [1, "two", 3];

// Tuples - fixed length with specific types
let person: [string, number] = ["Alice", 30];
let coordinate: [number, number, number] = [10, 20, 30];

// Accessing tuple elements
console.log(person[0]);  // "Alice" (string)
console.log(person[1]);  // 30 (number)

Enums

Enums define a set of named constants, making code more readable:

// Numeric enum (auto-increments from 0)
enum Direction {
  Up,     // 0
  Down,   // 1
  Left,   // 2
  Right   // 3
}

// Numeric enum with custom values
enum StatusCode {
  OK = 200,
  BadRequest = 400,
  NotFound = 404,
  ServerError = 500
}

// String enum
enum Color {
  Red = "RED",
  Green = "GREEN",
  Blue = "BLUE"
}

// Using enums
let direction: Direction = Direction.Up;
let status: StatusCode = StatusCode.OK;

Using 'as const' Instead of Enums

While enums are useful, modern TypeScript development often favors using as const assertions instead. The as const assertion tells TypeScript to infer the most specific type possible, making values readonly and literal types.

Why use 'as const' over enums?

  • Tree-shaking friendly - Regular objects with 'as const' can be tree-shaken by bundlers, while enums cannot
  • No runtime overhead - 'as const' objects don't generate extra JavaScript code like enums do
  • Better type inference - Works seamlessly with TypeScript's type system
  • Simpler JavaScript output - The compiled code is just a plain object

// Using 'as const' instead of enums
const Direction = {
  Up: "UP",
  Down: "DOWN",
  Left: "LEFT",
  Right: "RIGHT"
} as const;

// Create a type from the object values
type Direction = typeof Direction[keyof typeof Direction];
// Type is: "UP" | "DOWN" | "LEFT" | "RIGHT"

// Usage
let move: Direction = Direction.Up;  // OK
move = "UP";                          // Also OK
move = "DIAGONAL";                    // Error!

// Another example with status codes
const StatusCode = {
  OK: 200,
  BadRequest: 400,
  NotFound: 404,
  ServerError: 500
} as const;

type StatusCode = typeof StatusCode[keyof typeof StatusCode];
// Type is: 200 | 400 | 404 | 500

Best Practice: For new projects, prefer 'as const' objects over enums. They provide the same benefits with better bundle size and simpler JavaScript output. Use enums only when you need reverse mapping (numeric enums) or when working with legacy codebases.
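
For completeness, the reverse mapping mentioned above is something only numeric enums provide: the compiled enum object also maps each value back to its name, which a plain 'as const' object does not do.

// Numeric enums generate a reverse mapping: value -> name
enum HttpStatus {
  OK = 200,
  NotFound = 404
}

console.log(HttpStatus.OK);      // 200        (name -> value)
console.log(HttpStatus[404]);    // "NotFound" (value -> name, reverse mapping)

// A plain 'as const' object has no such reverse lookup:
const Status = { OK: 200, NotFound: 404 } as const;
// Status[404]   // Error: property '404' does not exist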

any, unknown, and never

TypeScript provides special types for handling dynamic values:

// any - opts out of type checking (avoid when possible)
let flexible: any = 4;
flexible = "string";    // OK
flexible = true;        // OK
flexible.anything();    // OK (but risky!)

// unknown - type-safe alternative to any
let uncertain: unknown = 4;
uncertain = "string";   // OK

// Must check type before using
if (typeof uncertain === "string") {
  console.log(uncertain.toUpperCase());  // OK
}

// never - represents values that never occur
function throwError(message: string): never {
  throw new Error(message);
}

function infiniteLoop(): never {
  while (true) {}
}

Tip: Prefer 'unknown' over 'any' when you don't know the type. It forces type checking before use, making your code safer.
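
A common place this matters is JSON.parse, which is typed as returning any; widening the result to unknown forces the same kind of narrowing before use. A small sketch:

// Parsing untrusted JSON: treat the result as unknown, not any
const raw: unknown = JSON.parse('{"name": "Alice", "age": 30}');

// The compiler blocks property access until the shape is proven
if (typeof raw === "object" && raw !== null && "name" in raw) {
  console.log((raw as { name: string }).name);  // "Alice"
}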

void Type

The void type represents the absence of a return value:

// Function that doesn't return anything
function logMessage(message: string): void {
  console.log(message);
}

// Arrow function with void return
const printNumber = (num: number): void => {
  console.log(num);
};

Type Assertions

Type assertions tell TypeScript you know more about a value's type:

// Two syntax options
let someValue: unknown = "Hello, TypeScript!";

// Angle-bracket syntax
let strLength1: number = (<string>someValue).length;

// "as" syntax (required in React - JSX)
let strLength2: number = (someValue as string).length;

// Working with DOM elements
const input = document.getElementById("myInput") as HTMLInputElement;
input.value = "Hello!";

// Non-null assertion
function getValue(arr: number[], index: number): number {
  return arr[index]!;  // Assert it won't be undefined
}

The ! operator is called the non-null assertion operator. It tells TypeScript that you are certain the value will not be null or undefined, even though TypeScript thinks it might be. In the example above, arr[index] could potentially be undefined if the index is out of bounds (TypeScript only reports indexed access as possibly undefined when the noUncheckedIndexedAccess compiler option is enabled), but using ! tells TypeScript "trust me, this value exists."

// More examples of non-null assertion (!)
const button = document.getElementById("submit")!;
// Without !, TypeScript thinks button could be null

// Use when you're certain a value exists
interface User {
  name: string;
  email?: string;  // optional property
}

function sendEmail(user: User) {
  // We checked elsewhere that email exists
  const email = user.email!;  // Assert it's not undefined
  console.log("Sending to:", email);
}

Warning: Use the non-null assertion (!) sparingly. It bypasses TypeScript's null checks, so incorrect usage can lead to runtime errors. Prefer proper null checks (if statements or optional chaining ?.) when possible.
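
To make the warning concrete, here is the same optional email handled with the checks it recommends, reusing the User interface from the example above; TypeScript narrows the type for you and nothing can throw at runtime:

function sendEmailSafely(user: User) {
  // Explicit check: inside the if, user.email is narrowed to string
  if (user.email) {
    console.log("Sending to:", user.email);
  }

  // Optional chaining: the whole chain evaluates to undefined instead of throwing
  const domain = user.email?.split("@")[1];
  console.log("Domain:", domain ?? "unknown");
}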

Literal Types

Literal types specify exact values a variable can hold:

// String literal types
type Direction = "north" | "south" | "east" | "west";
let heading: Direction = "north";  // OK
heading = "northeast";             // Error!

// Numeric literal types
type DiceRoll = 1 | 2 | 3 | 4 | 5 | 6;
let roll: DiceRoll = 4;  // OK
roll = 7;                // Error!

// In function parameters
function move(direction: "up" | "down" | "left" | "right") {
  console.log(`Moving ${direction}`);
}

move("up");        // OK
move("diagonal");  // Error!

Functions in TypeScript

Function Type Annotations

TypeScript allows type annotations on function parameters and return values:

// Basic function with types
function add(a: number, b: number): number {
  return a + b;
}

// Function type as a variable
let multiply: (x: number, y: number) => number;
multiply = function(x, y) {
  return x * y;
};

// Type alias for function types
type MathOperation = (a: number, b: number) => number;

const subtract: MathOperation = (a, b) => a - b;
const divide: MathOperation = (a, b) => a / b;

// Function with object parameter
function printUser(user: { name: string; age: number }): void {
  console.log(`${user.name} is ${user.age} years old`);
}

What Happens When You Pass Wrong Types?

TypeScript catches type mismatches at compile time, preventing runtime errors before your code even runs:

// Calling functions with wrong types
add(5, 10);       // OK: returns 15
add("5", 10);     // Error: Argument of type 'string' is not
                  // assignable to parameter of type 'number'

multiply(3, 4);   // OK: returns 12
multiply(3, "4"); // Error: Argument of type 'string' is not
                  // assignable to parameter of type 'number'

subtract(10, 5);  // OK: returns 5
subtract(10);     // Error: Expected 2 arguments, but got 1

printUser({ name: "Alice", age: 30 });  // OK
printUser({ name: "Bob" });             // Error: Property 'age' is
                                        // missing in type '{ name: string; }'
printUser("Alice");                     // Error: Argument of type 'string'
                                        // is not assignable to parameter

Key Benefit: These errors appear in your IDE as you type and during compilation - not at runtime. This is one of TypeScript's biggest advantages: catching bugs before your code runs!

Optional & Default Parameters

Functions can have optional (?) and default parameter values:

// Optional parameters (must come after required)
function greet(name: string, greeting?: string): string {
  if (greeting) {
    return `${greeting}, ${name}!`;
  }
  return `Hello, ${name}!`;
}

greet("Alice");         // "Hello, Alice!"
greet("Bob", "Hi");     // "Hi, Bob!"

// Default parameters
function createUser(
  name: string,
  role: string = "user",
  active: boolean = true
) {
  return { name, role, active };
}

createUser("Alice");                 // default role & active
createUser("Bob", "admin");          // default active
createUser("Charlie", "mod", false); // all specified

Important: With "strict": true enabled (which turns on noImplicitAny, as in the recommended config above), every function parameter must have an explicit type unless TypeScript can infer it from context. Unlike JavaScript, you cannot leave a parameter as an implicit any. This requirement ensures type safety throughout your codebase and enables the compiler to catch type-related errors early in development.

// Every parameter needs a type annotation
function process(name: string, count: number, active: boolean) {
  // All parameters have explicit types
}

// This would cause an error in TypeScript:
// function process(name, count, active) { }
// Error: Parameter 'name' implicitly has an 'any' type

// Even callback parameters need types
function fetchData(callback: (data: string) => void) {
  callback("result");
}

Rest Parameters

Rest parameters accept any number of arguments as an array:

// Rest parameters with type annotation
function sum(...numbers: number[]): number {
  return numbers.reduce((total, n) => total + n, 0);
}

sum(1, 2);           // 3
sum(1, 2, 3, 4, 5);  // 15

// Rest with other parameters
function buildName(first: string, ...rest: string[]): string {
  return first + " " + rest.join(" ");
}

buildName("John", "Paul", "Smith");  // "John Paul Smith"

// Spread in function calls
const nums: number[] = [1, 2, 3];
sum(...nums);  // 6

Understanding Spread in Function Calls

The spread operator (...) allows you to expand an array into individual arguments when calling a function. This is the opposite of rest parameters - while rest parameters collect multiple arguments into an array, spread expands an array into separate arguments.

// Without spread - you'd have to pass each element manually
const numbers = [10, 20, 30];
sum(numbers[0], numbers[1], numbers[2]);  // 60 - tedious!

// With spread - the array is expanded into individual arguments
sum(...numbers);  // 60 - much cleaner!

// How it works:
// sum(...numbers) is equivalent to sum(10, 20, 30)

// Combining arrays with spread
const moreNumbers = [40, 50];
sum(...numbers, ...moreNumbers);  // 150

// Spread with Math functions
const values = [5, 10, 3, 8, 1];
Math.max(...values);  // 10
Math.min(...values);  // 1

// TypeScript ensures type safety with spread
const strings = ["a", "b", "c"];
sum(...strings);  // Error: Argument of type 'string' is not
                  // assignable to parameter of type 'number'

Tip: The spread operator is especially useful when working with arrays of unknown length, or when you want to pass array elements as individual function arguments without modifying the original function.

Function Overloading

Function overloading lets you define multiple signatures for one function:

// Overload signatures
function format(value: string): string;
function format(value: number): string;
function format(value: Date): string;

// Implementation
function format(value: string | number | Date): string {
  if (typeof value === "string") {
    return value.toUpperCase();
  } else if (typeof value === "number") {
    return value.toFixed(2);
  } else {
    return value.toISOString();
  }
}

format("hello");      // "HELLO"
format(3.14159);      // "3.14"
format(new Date());   // "2024-01-15T...
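
One behavior worth knowing: callers are checked against the overload signatures, not the wider implementation signature, so an argument that only fits the implementation's union type is rejected:

format(true);        // Error: No overload matches this call

declare const value: string | number;
format(value);       // Error: a union spanning two overloads does not
                     // match any single overload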

Want to Learn More?

This article covers just the fundamentals of TypeScript from the TypeScript + React ebook. To master advanced topics like Generics, Advanced Types, Utility Types, and TypeScript with React Integration, check out the complete ebook:

📘 Beginner's Guide To TypeScript + React Integration - From TypeScript Basics to React Integration

More Such Useful Ebooks Are On The Way In Upcoming Weeks🔥

About Me

I'm a freelancer, mentor, full-stack developer working primarily with React, Next.js, and Node.js with a total of 12+ years of experience.

Alongside building real-world web applications, I'm also an Industry/Corporate Trainer training developers and teams in modern JavaScript, Next.js and MERN stack technologies, focusing on practical, production-ready skills.

Also, created various courses with 3000+ students enrolled in these courses.

My Portfolio: https://yogeshchavan.dev/