2026-02-17 13:46:37
Your LLM API bill just hit $5,000 this month. OpenAI went down twice last week. And your legal team is nervous about sending proprietary data to external servers.
Sound familiar? Here's how to take back control with local LLM deployment using Ollama and NeuroLink.
TL;DR:
- Install: brew install ollama (or curl -fsSL https://ollama.com/install.sh | sh)
- Run a model: ollama run llama3.1:8b
- Point NeuroLink at it with provider: "ollama" in config
Read on for the complete setup guide...
The rise of capable open-source language models has fundamentally changed how developers approach AI integration. No longer are you locked into cloud-only solutions with their associated costs, latency, and privacy concerns.
When you run models locally, your data never leaves your infrastructure. This is critical for regulated industries, proprietary codebases, and any workload your legal team would rather not send to external servers.
Cloud LLM APIs charge per token, which can lead to unpredictable costs as usage scales. Local deployment converts this variable cost into a fixed infrastructure investment. Once you have the hardware, your marginal cost per inference approaches zero.
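The break-even arithmetic is straightforward. As a rough sketch, taking the $5,000/month bill from the intro and assuming purely hypothetical hardware and overhead numbers:

```typescript
// All figures are illustrative assumptions, not benchmarks.
const monthlyApiBill = 5000;   // cloud LLM spend from the intro (USD/month)
const hardwareCost = 8000;     // hypothetical one-time GPU server cost (USD)
const monthlyOverhead = 300;   // hypothetical power + maintenance (USD/month)

// Months until the fixed hardware investment pays for itself:
const breakEvenMonths = hardwareCost / (monthlyApiBill - monthlyOverhead);
console.log(`Break-even after ~${breakEvenMonths.toFixed(1)} months`); // ~1.7
```

Plug in your own numbers; the point is that the payback period is often measured in months, not years.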
Network round-trips to cloud providers introduce latency that can be unacceptable for real-time applications. Local inference eliminates this network overhead entirely. On properly configured hardware, you can achieve response times measured in milliseconds rather than seconds.
Local models work without internet connectivity, enabling deployment in air-gapped, offline, or unreliable-network environments.
Ollama has emerged as the leading solution for running LLMs locally. It provides a simple, Docker-like experience for model management.
macOS (Homebrew):
brew install ollama
macOS/Linux (Direct Download):
curl -fsSL https://ollama.com/install.sh | sh
Docker:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
After installation, start the Ollama service:
ollama serve
On macOS and Windows, Ollama typically runs as a background service automatically. On Linux, you may want to configure it as a systemd service:
# /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
[Install]
WantedBy=default.target
Enable and start the service:
sudo systemctl enable ollama
sudo systemctl start ollama
With Ollama running, pull a model to get started:
# Pull Llama 3.1 8B - great balance of capability and speed
ollama pull llama3.1:8b
# Pull Mistral 7B - excellent for general tasks
ollama pull mistral:7b
# Pull CodeLlama for programming tasks
ollama pull codellama:13b
Verify the model is available:
ollama list
Test it with a quick prompt:
ollama run llama3.1:8b "Explain quantum computing in simple terms"
NeuroLink's provider-agnostic architecture makes Ollama integration straightforward.
In your NeuroLink configuration file, add Ollama as a provider:
# neurolink.config.yaml
providers:
  ollama:
    type: ollama
    base_url: http://localhost:11434
    default_model: llama3.1:8b
    timeout: 120
    retry:
      max_attempts: 3
      backoff_multiplier: 2
Alternatively, configure via environment variables:
export NEUROLINK_OLLAMA_BASE_URL=http://localhost:11434
export NEUROLINK_OLLAMA_DEFAULT_MODEL=llama3.1:8b
export NEUROLINK_OLLAMA_TIMEOUT=120
For more control, configure Ollama programmatically:
import { NeuroLink } from "@juspay/neurolink";

// Initialize NeuroLink with Ollama
const nl = new NeuroLink({
  providers: [{
    name: "local",
    config: {
      baseUrl: "http://localhost:11434",
      defaultModel: "llama3.1:8b",
      timeout: 120,
      keepAlive: "5m" // Keep model loaded for 5 minutes
    }
  }]
});

// Use the local provider
const response = await nl.generate({
  input: { text: "Write a function to calculate fibonacci numbers" },
  provider: "local"
});
Configure multiple Ollama models for different use cases:
providers:
  ollama-fast:
    type: ollama
    base_url: http://localhost:11434
    default_model: llama3.1:8b
  ollama-code:
    type: ollama
    base_url: http://localhost:11434
    default_model: codellama:13b
  ollama-large:
    type: ollama
    base_url: http://localhost:11434
    default_model: llama3.1:70b
Choosing the right model for your use case is crucial for balancing capability with resource requirements.
| Model | Size | VRAM | Best For |
|---|---|---|---|
| Llama 3.1 8B | 4.7GB | 8GB min | General chat, summarization, simple reasoning |
| Llama 3.1 70B | 40GB | 48GB+ | Complex reasoning, nuanced tasks |
| Mistral 7B | 4.1GB | 6GB min | Quick tasks, high throughput |
| Model | Size | VRAM | Best For |
|---|---|---|---|
| CodeLlama 13B | 7.4GB | 12GB min | Code generation, debugging |
| DeepSeek Coder 33B | 19GB | 24GB min | Complex programming tasks |
Ollama supports various quantization levels that trade quality for reduced resource requirements:
# Full precision (largest, highest quality)
ollama pull llama3.1:8b-fp16
# 8-bit quantization (good balance)
ollama pull llama3.1:8b-q8_0
# 4-bit quantization (smallest, slight quality reduction)
ollama pull llama3.1:8b-q4_0
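As a rough rule of thumb, a model's weight file size is parameter count times bits per weight, divided by eight. A back-of-envelope sketch (this ignores metadata, KV cache, and the mixed-precision layers real GGUF files use, which is why actual downloads run a bit larger):

```typescript
// Back-of-envelope only: real quantized files add overhead on top of this.
const params = 8e9; // Llama 3.1 8B parameter count

// modelBytes ≈ parameters × bitsPerWeight / 8
const sizeGiB = (bitsPerWeight: number): number =>
  (params * bitsPerWeight) / 8 / 2 ** 30;

console.log(sizeGiB(16).toFixed(1)); // fp16 ≈ 14.9 GiB
console.log(sizeGiB(8).toFixed(1));  // q8_0 ≈ 7.5 GiB
console.log(sizeGiB(4).toFixed(1));  // q4_0 ≈ 3.7 GiB
```

This is why dropping from fp16 to q4 lets the same model fit in roughly a quarter of the VRAM.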
Getting the best performance from local LLMs requires attention to hardware configuration and Ollama settings.
For optimal performance, use a GPU with sufficient VRAM:
# Check if Ollama is using GPU
ollama ps
# Force CPU-only mode (if needed)
OLLAMA_GPU_LAYERS=0 ollama serve
Configure system resources appropriately:
# Set maximum loaded models
export OLLAMA_MAX_LOADED_MODELS=2
# Set VRAM limit (in bytes)
export OLLAMA_GPU_MEMORY=8589934592 # 8GB
# Configure context window (affects memory usage)
export OLLAMA_NUM_CTX=4096
Create a custom Modelfile for optimized inference:
# Modelfile.optimized
FROM llama3.1:8b
# Increase context window
PARAMETER num_ctx 8192
# Optimize for speed
PARAMETER num_batch 512
PARAMETER num_thread 8
# Reduce temperature for more deterministic outputs
PARAMETER temperature 0.7
PARAMETER top_p 0.9
# System prompt for your use case
SYSTEM """You are a helpful assistant optimized for technical questions."""
Build and use the optimized model:
ollama create llama-optimized -f Modelfile.optimized
ollama run llama-optimized
One of NeuroLink's most powerful features is the ability to seamlessly combine local and cloud providers.
Use local inference by default, falling back to cloud when local resources are exhausted:
import { NeuroLink } from "@juspay/neurolink";

const nl = new NeuroLink({
  providers: [
    { name: "local", config: { baseUrl: "http://localhost:11434" } },
    { name: "openai", config: { apiKey: process.env.OPENAI_API_KEY } },
    { name: "anthropic", config: { apiKey: process.env.ANTHROPIC_API_KEY } }
  ],
  failover: {
    enabled: true,
    primary: "local",
    fallbackProviders: ["openai", "anthropic"],
    triggerOn: ["timeout", "overload", "error"]
  }
});

// This will try local first, then cloud if needed
const response = await nl.generate({
  input: { text: "Complex analysis task..." },
  maxTokens: 2000
});
Route requests to appropriate providers based on task characteristics:
import { NeuroLink } from "@juspay/neurolink";

const nl = new NeuroLink({
  providers: [
    { name: "local", config: { baseUrl: "http://localhost:11434" } },
    { name: "anthropic", config: { apiKey: process.env.ANTHROPIC_API_KEY } }
  ],
  routing: {
    rules: [
      {
        taskType: "simple_qa",
        provider: "local",
        model: "llama3.1:8b"
      },
      {
        taskType: "code_generation",
        provider: "local",
        model: "codellama:13b"
      },
      {
        taskType: "complex_reasoning",
        provider: "anthropic",
        model: "claude-3-opus"
      }
    ]
  }
});

// Automatically routes to appropriate provider
const response = await nl.generate({
  input: { text: "Write a sorting algorithm" },
  taskType: "code_generation"
});
Automatically route sensitive data to local inference:
import { NeuroLink } from "@juspay/neurolink";

const nl = new NeuroLink({
  providers: [
    { name: "ollama", config: { baseUrl: "http://localhost:11434" } },
    { name: "openai", config: { apiKey: process.env.OPENAI_API_KEY } }
  ],
  middleware: {
    guardrails: {
      piiDetection: {
        enabled: true,
        patterns: [
          { name: "ssn", regex: "\\b\\d{3}-\\d{2}-\\d{4}\\b" },
          { name: "email", regex: "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}" }
        ],
        sensitiveKeywords: ["confidential", "proprietary"],
        localProvider: "ollama",
        cloudProvider: "openai"
      }
    }
  }
});

// Automatically routes to local if sensitive data detected
const response = await nl.generate({
  input: { text: "Analyze this customer data: SSN 123-45-6789..." }
  // Routes to local Ollama automatically
});
Symptom: "Error: model not found" or slow initial response
Solutions:
# Verify model is downloaded
ollama list
# Re-pull if corrupted
ollama rm llama3.1:8b
ollama pull llama3.1:8b
# Check disk space
df -h ~/.ollama
Symptom: "CUDA out of memory" or system freeze
Solutions:
# Use smaller model
ollama pull llama3.1:8b-q4_0
# Reduce context window
export OLLAMA_NUM_CTX=2048
# Limit GPU memory
export OLLAMA_GPU_MEMORY=6442450944 # 6GB
Symptom: Response times exceeding expectations
Solutions:
# Verify GPU is being used
ollama ps
# Check for thermal throttling
nvidia-smi -l 1
# Increase batch size for throughput
# In Modelfile:
PARAMETER num_batch 1024
Symptom: NeuroLink cannot connect to Ollama
Solutions:
# Verify Ollama is running
curl http://localhost:11434/api/tags
# Check firewall settings
sudo ufw allow 11434/tcp
# Restart Ollama service
sudo systemctl restart ollama
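Before blaming NeuroLink, a small pre-flight check in application code can save debugging time. This sketch assumes Ollama's standard GET /api/tags endpoint (the same one the curl check above hits), which lists pulled models; the helper itself is pure so it's easy to test:

```typescript
// Shape of Ollama's GET /api/tags response (only the field we need here).
type TagsResponse = { models: { name: string }[] };

// Pure helper: is the model present in the tags payload?
function hasModel(tags: TagsResponse, model: string): boolean {
  return tags.models.some((m) => m.name === model);
}

// Usage against a live daemon (assumes the default port):
// const tags = (await (await fetch("http://localhost:11434/api/tags")).json()) as TagsResponse;
// if (!hasModel(tags, "llama3.1:8b")) throw new Error("Run `ollama pull llama3.1:8b` first");
```

Failing fast with a clear error here beats a cryptic timeout deep inside a generate call.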
Running local LLMs with Ollama and NeuroLink provides a powerful, flexible, and privacy-preserving approach to AI integration. By following this guide, you've learned how to install and run Ollama, choose and tune models, integrate them with NeuroLink, and combine local and cloud providers with failover, routing, and privacy-aware middleware.
The combination of local and cloud inference gives you unprecedented flexibility in how you deploy AI capabilities. Start with local models for development and privacy-sensitive tasks, scale to cloud providers when you need additional capacity or capabilities, and let NeuroLink handle the complexity of managing multiple providers.
Found this helpful? Drop a comment below with your questions or share your experience with local LLMs!
Want to try NeuroLink?
Follow us for more AI development content:
2026-02-17 13:41:46
A common mistake I see (and have made!):
def is_sorted_bad(lst):
    return lst == sorted(lst)
Python's Timsort is O(n log n) and copies the whole list. For a million elements? ~20 million operations and extra memory — just for a yes/no! 😩
When the list is already sorted (often the case), it's pure waste.
Figure: Big O graph of O(n log n) vs O(n) growth, i.e. why sorting for verification hurts performance.
The methods below are all O(n) with early exit — they stop at the first out-of-order pair.
Non-decreasing: Allows duplicates (<=) → [1, 2, 2, 3] ✅
Strictly increasing: No duplicates (<) → [1, 2, 2, 3] ❌
Same for descending (flip operators). All methods assume comparable elements.
def is_sorted(lst):
    return all(x <= y for x, y in zip(lst, lst[1:]))

# Strictly increasing
def is_strictly_sorted(lst):
    return all(x < y for x, y in zip(lst, lst[1:]))
Lazy evaluation + early exit = super efficient.
def is_sorted(lst):
    for i in range(len(lst) - 1):
        if lst[i] > lst[i + 1]:
            return False
    return True
def is_sorted(lst):
    return all(lst[i] <= lst[i + 1] for i in range(len(lst) - 1))
Great for extensions like finding the first violation.
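For instance, the index-based version extends naturally to reporting where the order breaks (the first_violation name is my own):

```python
def first_violation(lst):
    """Return the index of the first out-of-order pair, or -1 if sorted."""
    for i in range(len(lst) - 1):
        if lst[i] > lst[i + 1]:
            return i
    return -1

print(first_violation([1, 3, 2, 4]))  # 1  (3 > 2)
print(first_violation([1, 2, 3]))     # -1 (already sorted)
```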
from itertools import pairwise

def is_sorted(lst):
    return all(x <= y for x, y in pairwise(lst))
No slice overhead — true O(1) space!
from operator import le

def is_sorted_by(lst, key):
    return all(le(key(x), key(y)) for x, y in zip(lst, lst[1:]))
Perfect for sorting objects by attribute.
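A quick usage sketch with a toy dataclass (the User class is made up for illustration; the helper is repeated so the snippet stands alone):

```python
from dataclasses import dataclass

def is_sorted_by(lst, key):
    return all(key(x) <= key(y) for x, y in zip(lst, lst[1:]))

@dataclass
class User:  # toy class for illustration
    name: str
    age: int

users = [User("Ann", 25), User("Bob", 30), User("Cy", 41)]
print(is_sorted_by(users, key=lambda u: u.age))  # True
```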
def is_non_increasing(lst):
    return all(x >= y for x, y in zip(lst, lst[1:]))
| Method | Time/Space Notes | Best For |
|---|---|---|
| all() + zip() | Readable, common | Everyday use |
| For loop | Clear, no tricks | Interviews |
| all() + range() | Useful when index is needed | Extensions (e.g., finding first violation) |
| pairwise() | True O(1) space | Modern Python (3.10+) |
| operator.le | Great for custom objects | Key-based or attribute sorting |
# NumPy
np.all(arr[:-1] <= arr[1:])
# Pandas
series.is_monotonic_increasing
If you need the sorted list anyway → Just call .sort() — Timsort is O(n) on sorted data!
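To see the difference yourself, here's a quick benchmark sketch comparing the O(n) scan against sort-and-compare on a million already-sorted elements (absolute timings vary by machine, so treat the numbers as relative):

```python
import random
import timeit

def is_sorted(lst):
    return all(x <= y for x, y in zip(lst, lst[1:]))

random.seed(0)
data = list(range(1_000_000))   # already sorted: the common case
shuffled = data[:]
random.shuffle(shuffled)

scan = timeit.timeit(lambda: is_sorted(data), number=5)
sort_cmp = timeit.timeit(lambda: data == sorted(data), number=5)
print(f"O(n) scan: {scan:.3f}s  vs  sort-and-compare: {sort_cmp:.3f}s")
```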
That's it! Which method do you use most? Drop a comment below 👇
For the complete guide (with FAQs, full quiz, and more edge cases), head to my blog:
https://emitechlogic.com/check-if-list-is-sorted-python/
Loved this efficiency tip? Here are more Python algorithm and performance guides:
2026-02-17 13:41:00
Up until now, Phase #3 (Frontend) was mostly about building pieces: sections, hooks, JSON files, layouts, navigation, and individual pages.
Day 8 was the day when I finally stopped creating new components and started doing something more product-like:
Making everything talk to each other properly.
This was the first time the portfolio started behaving like a cohesive product instead of a collection of screens.
As a developer, it’s very tempting to keep adding features:
But as a product owner/project lead, I forced myself to step back and ask:
So Day 8 became an integration and refinement day.
No new architecture.
No new fancy features.
Just alignment and polish.
All the pages under src/pages/*.jsx were updated to properly consume:
This included:
Each page now follows the same mental model:
JSON → Hook → Page → Sections → UI
Which means:
From a long-term perspective, this is huge:
That’s real frontend architecture, not just React code.
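The data flow can be sketched like this (the file layout, hook name, and sample data are my own illustration, not the actual repo code):

```javascript
// Sketch of the mental model: JSON → Hook → Page → Sections → UI
// In the real project the data would live in src/data/projects.json; inlined here.
const projectsJson = [{ id: 1, title: "Portfolio", stack: ["React"] }];

// Hook: the single place any page reads this data from.
function useProjects() {
  return projectsJson;
}

// Page: consumes the hook and hands data down to sections
// (rendered as a plain string here instead of JSX, to keep the sketch tiny).
function projectsPageTitles() {
  return useProjects().map((p) => p.title).join(", ");
}

console.log(projectsPageTitles()); // "Portfolio"
```

The payoff: swapping the JSON source or reshaping a page touches exactly one layer.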
Earlier, the Navbar and Footer existed mostly as components.
On Day 8, they finally became product navigation.
I updated:
So that:
This is one of those things users never notice, but they instantly feel when it’s wrong.
Broken navigation = broken trust.
The Contact page was already built earlier. Day 8 was about making it feel complete:
At this point, Contact stopped being:
"A form component"
and became:
"The only real entry point for human interaction with the product."
That’s a very different mindset.
This day taught me something important:
Sometimes the best engineering decision is to not build anything new.
Instead:
As a solo developer wearing multiple hats (PM, designer, architect, dev), this step is very easy to skip.
But this is exactly what separates shipping features from owning a product.
A tutorial would say:
"Today we updated some pages."
A product mindset says:
"Today we validated the entire frontend architecture."
That’s the difference.
Day 8 was about:
Not code volume.
Not feature count.
But system quality.
This might sound weird, but after Day 8:
I stopped seeing this as:
"my React portfolio"
And started seeing it as:
"my personal product"
Something I could:
That’s a huge psychological shift for any engineer.
Day 8 won’t impress on GitHub commit history. It’s a small diff. A few files changed.
But in real-world projects, this is exactly how products mature:
Not with big features. But with small integration days that remove friction everywhere.
Day 8 was not about building. It was about owning the system.
And that, more than any new component, is what actually makes you a senior engineer in mindset, not just in years of experience.
If you’re following along, the complete source lives here:
👉 GitHub Repository: Portfolio.
2026-02-17 13:40:14
Ever started a project that seemed simple at first, only to watch it spiral into a tangled mess of spaghetti code? I've been there. But what if I told you that spending time upfront on architecture could save you hundreds of hours of refactoring later?
In this series, I'm going to walk you through building a production-ready full-stack application with clean architecture principles. This isn't theoretical fluff—this is real code, real patterns, and real lessons learned from building an enterprise-level tutoring management system.
What we'll build:
But more importantly, we'll build it the right way—with clean architecture, proper layering, and maintainable patterns that scale.
In this first post, we'll understand WHY architecture matters by looking at what happens when you skip it. We'll see the real problems that emerge and why "we'll fix it later" never works.
Here's what typically happens when you skip architecture:
Week 1: "Let's just get something working"
Month 3: "We need to add features fast"
Month 6: "Why is everything breaking?"
Month 12: "We need to rewrite this"
The Truth: Later never comes. Technical debt compounds like credit card interest. What takes 1 hour to do right initially will take 10 hours to fix later, or 100 hours to rewrite.
Code First Approach:
Start coding → Hit problems → Try to refactor → Too late → Live with mess
Architecture First Approach:
Plan architecture → Implement with patterns → Maintain structure → Scale easily
Real-world analogy: Building a house. You can start nailing boards together and see progress immediately, but without a blueprint, you'll end up with crooked walls and no plumbing. Architects spend weeks on blueprints because it saves months during construction.
Let me show you exactly what happens when you skip architecture. Here's real code that "works" but creates problems:
// Controller.cs - This is what happens without architecture
public class StudentController : ControllerBase
{
    [HttpGet]
    public async Task<IActionResult> GetStudents()
    {
        // Direct database access in controller? Bad!
        using var connection = new NpgsqlConnection("connection_string");
        var students = await connection.QueryAsync<Student>("SELECT * FROM Students");

        // Business logic in controller? Also bad!
        foreach (var student in students)
        {
            if (student.Age < 18)
                student.RequiresParentalConsent = true;
        }

        // Returning database entities directly? Triple bad!
        return Ok(students);
    }
}
Let me walk you through what's happening here and why each line is a problem:
Line 1: Direct Database Connection
using var connection = new NpgsqlConnection("connection_string");
What this does: Creates a direct connection to PostgreSQL database from the controller.
Why it's bad: the connection string is hard-coded in the controller, the endpoint is welded to PostgreSQL, and nothing can be unit tested without a live database.
Real-world analogy: It's like a restaurant waiter walking into the kitchen, cooking the food themselves, and serving it. The waiter should just take orders and deliver food - not know how to operate the stove!
Line 2: Direct Database Query
var students = await connection.QueryAsync<Student>("SELECT * FROM Students");
What this does: Executes a raw SQL query and maps results to Student objects.
Let's break down the complex terms:
"Raw SQL query" - This is a direct database command written in SQL (Structured Query Language), the language databases understand. It's "raw" because you're writing the actual database commands yourself instead of using a higher-level abstraction.
"Maps results to Student objects" - The database returns rows of data (like Excel spreadsheet rows). The QueryAsync<Student> method takes those rows and converts them into C# Student objects:
Database returns:
┌────┬───────────┬─────┬──────────┐
│ Id │ Name │ Age │ Email │
├────┼───────────┼─────┼──────────┤
│ 1 │ John Doe │ 20 │ [email protected] │
│ 2 │ Jane Doe │ 19 │ [email protected] │
└────┴───────────┴─────┴──────────┘
Gets "mapped" to C# objects:
new Student { Id = 1, Name = "John Doe", Age = 20, Email = "[email protected]" }
new Student { Id = 2, Name = "Jane Doe", Age = 19, Email = "[email protected]" }
Why it's bad: it fetches every column (SELECT *) even if you only need a few, and hand-built SQL strings are one careless concatenation away from SQL injection.
What is SQL Injection?
SQL injection is when an attacker tricks your application into running malicious database commands. It's one of the most dangerous security vulnerabilities.
Vulnerable Code Example:
// NEVER DO THIS! ☠️ Extremely dangerous!
public async Task<IActionResult> GetStudentByName(string name)
{
// Building query by concatenating user input
var query = "SELECT * FROM Students WHERE Name = '" + name + "'";
var student = await connection.QueryAsync<Student>(query);
return Ok(student);
}
What happens when a normal user searches for "John"?
-- Query becomes:
SELECT * FROM Students WHERE Name = 'John'
-- ✅ Works fine, returns John's record
What happens when an ATTACKER enters: John'; DROP TABLE Students; --
-- Query becomes:
SELECT * FROM Students WHERE Name = 'John'; DROP TABLE Students; --'
-- ☠️ DISASTER! This:
-- 1. Selects John
-- 2. DELETES YOUR ENTIRE STUDENTS TABLE
-- 3. -- comments out the rest
Your entire Students table is GONE! All student data. Deleted. Forever.
Even worse attacks:
-- Attacker enters: ' OR '1'='1
SELECT * FROM Students WHERE Name = '' OR '1'='1'
-- ☠️ Returns ALL students (security breach - data exposure)
-- Attacker enters: '; UPDATE Students SET GPA = 4.0 WHERE Name = 'Attacker'; --
SELECT * FROM Students WHERE Name = ''; UPDATE Students SET GPA = 4.0 WHERE Name = 'Attacker'; --'
-- ☠️ Changes grades in database (data manipulation)
-- Attacker enters: '; SELECT password, email FROM Users; --
-- ☠️ Steals passwords (credential theft)
Why this happens:
The Safe Way - Parameterized Queries:
// ✅ Safe - Using parameters
public async Task<IActionResult> GetStudentByName(string name)
{
    var query = "SELECT * FROM Students WHERE Name = @Name";
    var student = await connection.QueryAsync<Student>(query, new { Name = name });
    return Ok(student);
}
What changes? @Name is a parameter placeholder, and the actual value is passed separately as new { Name = name }. The database never interprets the user's input as SQL; it only ever treats it as data.
When attacker enters: John'; DROP TABLE Students; --
-- Query stays:
SELECT * FROM Students WHERE Name = @Name
-- But the parameter value is:
@Name = "John'; DROP TABLE Students; --"
-- Database treats the ENTIRE string as the name to search for
-- It looks for a student literally named "John'; DROP TABLE Students; --"
-- Finds nothing, returns empty result
-- ✅ Your table is SAFE!
Real-world impact:
The lesson: NEVER concatenate user input into SQL queries. Always use parameterized queries or ORMs (like Entity Framework) that handle this automatically.
Line 3-6: Business Logic in Controller
foreach (var student in students)
{
    if (student.Age < 18)
        student.RequiresParentalConsent = true;
}
What this does: Loops through students and applies a business rule.
Why it's bad: the parental-consent rule is trapped inside one HTTP endpoint; it can't be reused elsewhere, and testing it means spinning up the whole web pipeline.
Line 7: Returning Database Entities
return Ok(students);
What this does: Sends the Student database entity directly to the API caller.
Why it's REALLY bad: your API contract is now welded to your database schema, every column (including sensitive ones) leaks to callers, and any schema change is instantly a breaking API change.
This code works today, sure. But watch what happens over time:
Month 1: "Let's add filtering by grade level"
Month 3: "We need to switch from PostgreSQL to SQL Server"
Month 6: "Let's add unit tests"
Month 12: "The API is exposing sensitive data!"
Year 2: "We need to add caching"
Year 3: "New developer joins the team"
There has to be a better way. And there is.
Without architecture: every change is risky, bugs hide everywhere, and velocity decays month after month.
With architecture: changes stay localized, tests stay cheap, and velocity stays flat as the codebase grows.
The math: 1 hour to do it right now, 10 hours to fix it later, or 100 hours to rewrite it, the same compounding we saw earlier.
Architecture is not overhead. Architecture is debt prevention.
Now that you understand WHY architecture matters and what happens when you skip it, in Part 2 we'll explore the different architectural approaches available:
For each pattern, I'll show you:
✅ "We'll fix it later" never happens - Technical debt compounds exponentially
✅ Without architecture, every change becomes dangerous - Fear of touching code kills velocity
✅ SQL injection is real and devastating - Billions of dollars lost due to this vulnerability
✅ Architecture is debt prevention, not overhead - 10 hours invested saves 100s later
✅ Controllers doing everything is a time bomb - Database, business logic, HTTP all mixed together
✅ Testing becomes impossible without separation - Can't test what you can't isolate
✅ Team scalability requires structure - New developers need clear boundaries
Have you experienced the pain of "we'll fix it later"? What was the tipping point that made you invest in architecture? Share your stories in the comments below!
Next in Series: Part 2: Comparing Architectural Approaches - Finding the Right Pattern
Tags: #dotnet #csharp #architecture #softwaredevelopment #webapi #programming #technicaldebt #coding
This series is based on real experiences building an enterprise tutoring management system. All code examples have been generalized for educational purposes.
2026-02-17 13:34:51
Modern AWS architectures avoid exposing networks and instead focus on intent-based access.
This reference architecture demonstrates three secure access patterns from on-prem to AWS using AWS Verified Access, EC2 Instance Connect Endpoint, and AWS Client VPN.
Each pattern serves a distinct purpose and should be used together—not interchangeably.
The VPC contains:
Remote users enter AWS only through explicit access services.
Secure access to private web applications without VPN or public exposure.
Secure SSH access to private EC2 instances without bastion hosts or public IPs.
Broad access to private AWS resources such as databases, internal APIs, or legacy tools.
This is a common design decision, and the wrong choice often leads to overexposure.
| Aspect | AWS Client VPN | AWS Verified Access |
|---|---|---|
| Access Model | Network-level | Application-level |
| Trust Boundary | VPC/Subnet | Identity & policy |
| User Sees Network | Yes | No |
| VPN Client Required | Yes | No |
| Best For | DBs, legacy apps, tools | Web apps, dashboards |
| Lateral Movement Risk | Higher | Very low |
| Zero Trust Alignment | Partial | Strong |
If users need a network → VPN
If users need an app → Verified Access
| Requirement | Service |
|---|---|
| Internal web applications | AWS Verified Access |
| EC2 administration | EC2 Instance Connect Endpoint |
| Database / legacy access | AWS Client VPN |
This layered approach ensures:
Secure remote access to AWS is not about choosing one tool.
It's about matching each access need (an internal web app, server administration, or network-level access to databases and legacy tools) to the narrowest service that satisfies it.
By combining AWS Verified Access, EC2 Instance Connect Endpoint, and AWS Client VPN, you get a secure, scalable, and least-privilege remote access model from on-prem to AWS.
2026-02-17 13:18:47
TypeScript is a strongly typed programming language that builds on JavaScript, giving you better tooling at any scale. Developed and maintained by Microsoft, TypeScript adds optional static typing and class-based object-oriented programming to JavaScript. Since TypeScript is a superset of JavaScript, existing JavaScript programs are also valid TypeScript programs.
TypeScript code cannot run directly in browsers or Node.js - it must first be compiled (transpiled) to JavaScript. This compilation step is where TypeScript catches type errors, helping you find bugs before your code runs.
While JavaScript is powerful and flexible, it has limitations when building large-scale applications. TypeScript addresses these limitations:
| Feature | JavaScript | TypeScript |
|---|---|---|
| Type System | Dynamic typing | Static typing (optional) |
| Compilation | Interpreted directly | Compiles to JavaScript |
| Error Detection | Runtime errors | Compile-time errors |
| IDE Support | Basic | Enhanced (IntelliSense) |
| Interfaces | Not supported | Fully supported |
| Generics | Not supported | Fully supported |
Before using TypeScript, you need Node.js installed on your machine. Follow these steps:
node --version
# Should output: v20.x.x or higher
npm --version
# Should output: 10.x.x or higher
Once Node.js is installed, you can install TypeScript globally using npm:
npm install -g typescript
tsc --version
# Should output: Version 5.x.x
Tip: You can also install TypeScript locally in a project using
npm install typescript --save-dev. This is recommended for projects to ensure consistent versions across team members.
To set up a new TypeScript project, create a directory and initialize it:
mkdir my-typescript-project
cd my-typescript-project
npm init -y
npm install typescript --save-dev
Create a TypeScript configuration file (tsconfig.json):
npx tsc --init
Here's a recommended configuration for beginners:
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}
Create a file called hello.ts in your src folder:
// src/hello.ts
function greet(name: string): string {
  return "Hello, " + name + "!";
}

const message: string = greet("TypeScript");
console.log(message);
Notice the type annotations: name: string specifies the parameter type, and : string after the parentheses specifies the return type.
TypeScript must be compiled to JavaScript before it can run:
# Compile a single file
tsc src/hello.ts
# Compile using tsconfig.json
tsc
# Watch mode - recompile on changes
tsc --watch
When you run the tsc command (or tsc --watch for continuous compilation), TypeScript creates a new dist folder containing the compiled JavaScript files. You'll see your hello.ts file transformed into hello.js with all type annotations removed:
// dist/hello.js (compiled output)
"use strict";
function greet(name) {
    return "Hello, " + name + "!";
}
const message = greet("TypeScript");
console.log(message);
Here's what the project structure looks like in VS Code after compilation:
Note: All type annotations are removed during compilation. Types only exist at development and compile time - they help catch errors but don't affect runtime behavior.
Now that your TypeScript code has been compiled to JavaScript, you can run it using Node.js. Execute the compiled JavaScript file from the dist folder:
# Run the compiled JavaScript file
node dist/hello.js
You should see the following output in your terminal:
Hello, TypeScript!
Congratulations! You've successfully written, compiled, and executed your first TypeScript program. The workflow is: write .ts files → compile with tsc → run the output .js files with node.
Tip: You can also use ts-node to run TypeScript files directly without manual compilation:
npm install -g ts-node, then runts-node src/hello.ts. This is useful during development.
TypeScript provides several basic types that form the foundation of the type system:
// String type
let firstName: string = "John";
let lastName: string = 'Doe';
// Number type (includes integers and floats)
let age: number = 30;
let price: number = 99.99;
let hex: number = 0xf00d;
// Boolean type
let isActive: boolean = true;
let isCompleted: boolean = false;
TypeScript can automatically infer types based on assigned values:
// TypeScript infers the type automatically
let name = "Alice"; // inferred as string
let count = 42; // inferred as number
let isValid = true; // inferred as boolean
// Type is locked after inference
name = "Bob"; // OK
name = 123; // Error: Type 'number' is not assignable to type 'string'
Best Practice: Use explicit types when the initial value doesn't clearly indicate the intended type, or when declaring variables without immediate initialization.
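A small sketch of both cases from the best practice above (the variable names are arbitrary):

```typescript
// Inferred from the initializer alone, `id` would be number only.
// The explicit union documents the intent and allows both shapes:
let id: string | number = 123;
id = "abc-123"; // OK thanks to the annotation

// Declared before first assignment — the annotation keeps it type-checked:
let greeting: string;
greeting = "hello";

console.log(typeof id, typeof greeting); // string string
```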
TypeScript provides two ways to define arrays, and introduces tuples for fixed-length arrays:
// Arrays - two syntax options
let numbers: number[] = [1, 2, 3, 4, 5];
let names: Array<string> = ["Alice", "Bob", "Charlie"];
// Mixed arrays
let mixed: (string | number)[] = [1, "two", 3];
// Tuples - fixed length with specific types
let person: [string, number] = ["Alice", 30];
let coordinate: [number, number, number] = [10, 20, 30];
// Accessing tuple elements
console.log(person[0]); // "Alice" (string)
console.log(person[1]); // 30 (number)
Enums define a set of named constants, making code more readable:
// Numeric enum (auto-increments from 0)
enum Direction {
  Up,    // 0
  Down,  // 1
  Left,  // 2
  Right  // 3
}

// Numeric enum with custom values
enum StatusCode {
  OK = 200,
  BadRequest = 400,
  NotFound = 404,
  ServerError = 500
}

// String enum
enum Color {
  Red = "RED",
  Green = "GREEN",
  Blue = "BLUE"
}

// Using enums
let direction: Direction = Direction.Up;
let status: StatusCode = StatusCode.OK;
let status: StatusCode = StatusCode.OK;
While enums are useful, modern TypeScript development often favors using as const assertions instead. The as const assertion tells TypeScript to infer the most specific type possible, making values readonly and literal types.
Why use 'as const' over enums?
// Using 'as const' instead of enums
const Direction = {
  Up: "UP",
  Down: "DOWN",
  Left: "LEFT",
  Right: "RIGHT"
} as const;

// Create a type from the object values
type Direction = typeof Direction[keyof typeof Direction];
// Type is: "UP" | "DOWN" | "LEFT" | "RIGHT"

// Usage
let move: Direction = Direction.Up; // OK
move = "UP";       // Also OK
move = "DIAGONAL"; // Error!

// Another example with status codes
const StatusCode = {
  OK: 200,
  BadRequest: 400,
  NotFound: 404,
  ServerError: 500
} as const;

type StatusCode = typeof StatusCode[keyof typeof StatusCode];
// Type is: 200 | 400 | 404 | 500
Best Practice: For new projects, prefer 'as const' objects over enums. They provide the same benefits with better bundle size and simpler JavaScript output. Use enums only when you need reverse mapping (numeric enums) or when working with legacy codebases.
TypeScript provides special types for handling dynamic values:
// any - opts out of type checking (avoid when possible)
let flexible: any = 4;
flexible = "string"; // OK
flexible = true; // OK
flexible.anything(); // OK (but risky!)
// unknown - type-safe alternative to any
let uncertain: unknown = 4;
uncertain = "string"; // OK
// Must check type before using
if (typeof uncertain === "string") {
console.log(uncertain.toUpperCase()); // OK
}
// never - represents values that never occur
function throwError(message: string): never {
throw new Error(message);
}
function infiniteLoop(): never {
while (true) {}
}
Tip: Prefer 'unknown' over 'any' when you don't know the type. It forces type checking before use, making your code safer.
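One way to make that narrowing reusable is a user-defined type guard. A minimal sketch, where isUser is a helper invented for this example (not a built-in):

```typescript
interface User {
  name: string;
}

// A type guard narrows 'unknown' once, and the check
// can then be reused everywhere.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as { name?: unknown }).name === "string"
  );
}

const parsed: unknown = JSON.parse('{"name": "Alice"}');

if (isUser(parsed)) {
  console.log(parsed.name); // narrowed to User here
}
```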
The void type represents the absence of a return value:
// Function that doesn't return anything
function logMessage(message: string): void {
console.log(message);
}
// Arrow function with void return
const printNumber = (num: number): void => {
console.log(num);
};
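One subtlety worth knowing: a callback typed () => void is still allowed to return a value — void there only means the caller promises to ignore the result. A quick sketch:

```typescript
type LogFn = () => void;

const items: number[] = [];

// Array.push returns the new length (a number), yet this
// arrow function is still assignable to '() => void':
const record: LogFn = () => items.push(1);

record();
record();
console.log(items.length); // 2
```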
Type assertions tell TypeScript you know more about a value's type:
// Two syntax options
let someValue: unknown = "Hello, TypeScript!";
// Angle-bracket syntax
let strLength1: number = (<string>someValue).length;
// "as" syntax (required in React - JSX)
let strLength2: number = (someValue as string).length;
// Working with DOM elements
const input = document.getElementById("myInput") as HTMLInputElement;
input.value = "Hello!";
// Non-null assertion
function getValue(arr: number[], index: number): number {
return arr[index]!; // Assert it won't be undefined
}
The ! operator is called the non-null assertion operator. It tells TypeScript that you are certain a value will not be null or undefined, even though TypeScript thinks it might be. In the example above, arr[index] could be undefined if the index is out of bounds (TypeScript flags this when the noUncheckedIndexedAccess compiler option is enabled), but using ! tells TypeScript "trust me, this value exists."
// More examples of non-null assertion (!)
const button = document.getElementById("submit")!;
// Without !, TypeScript thinks button could be null
// Use when you're certain a value exists
interface User {
name: string;
email?: string; // optional property
}
function sendEmail(user: User) {
// We checked elsewhere that email exists
const email = user.email!; // Assert it's not undefined
console.log("Sending to:", email);
}
Warning: Use the non-null assertion (!) sparingly. It bypasses TypeScript's null checks, so incorrect usage can lead to runtime errors. Prefer proper null checks (if statements or optional chaining ?.) when possible.
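As a concrete illustration of those safer alternatives, here is the email scenario rewritten without !, using optional chaining and nullish coalescing (sendEmailSafely is a name invented for this sketch):

```typescript
interface User {
  name: string;
  email?: string; // optional property
}

function sendEmailSafely(user: User): string {
  // Optional chaining yields undefined instead of throwing
  // when email is missing; ?? supplies a fallback.
  const domain = user.email?.split("@")[1];
  return domain ?? "no email on file";
}

console.log(sendEmailSafely({ name: "Alice", email: "a@example.com" })); // "example.com"
console.log(sendEmailSafely({ name: "Bob" })); // "no email on file"
```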
Literal types specify exact values a variable can hold:
// String literal types
type Direction = "north" | "south" | "east" | "west";
let heading: Direction = "north"; // OK
heading = "northeast"; // Error!
// Numeric literal types
type DiceRoll = 1 | 2 | 3 | 4 | 5 | 6;
let roll: DiceRoll = 4; // OK
roll = 7; // Error!
// In function parameters
function move(direction: "up" | "down" | "left" | "right") {
console.log(`Moving ${direction}`);
}
move("up"); // OK
move("diagonal"); // Error!
TypeScript allows type annotations on function parameters and return values:
// Basic function with types
function add(a: number, b: number): number {
return a + b;
}
// Function type as a variable
let multiply: (x: number, y: number) => number;
multiply = function(x, y) {
return x * y;
};
// Type alias for function types
type MathOperation = (a: number, b: number) => number;
const subtract: MathOperation = (a, b) => a - b;
const divide: MathOperation = (a, b) => a / b;
// Function with object parameter
function printUser(user: { name: string; age: number }): void {
console.log(`${user.name} is ${user.age} years old`);
}
TypeScript catches type mismatches at compile time, preventing runtime errors before your code even runs:
// Calling functions with wrong types
add(5, 10); // OK: returns 15
add("5", 10); // Error: Argument of type 'string' is not
// assignable to parameter of type 'number'
multiply(3, 4); // OK: returns 12
multiply(3, "4"); // Error: Argument of type 'string' is not
// assignable to parameter of type 'number'
subtract(10, 5); // OK: returns 5
subtract(10); // Error: Expected 2 arguments, but got 1
printUser({ name: "Alice", age: 30 }); // OK
printUser({ name: "Bob" }); // Error: Property 'age' is
// missing in type '{ name: string; }'
printUser("Alice"); // Error: Argument of type 'string'
// is not assignable to parameter
Key Benefit: These errors appear in your IDE as you type and during compilation - not at runtime. This is one of TypeScript's biggest advantages: catching bugs before your code runs!
Functions can have optional (?) and default parameter values:
// Optional parameters (must come after required)
function greet(name: string, greeting?: string): string {
if (greeting) {
return `${greeting}, ${name}!`;
}
return `Hello, ${name}!`;
}
greet("Alice"); // "Hello, Alice!"
greet("Bob", "Hi"); // "Hi, Bob!"
// Default parameters
function createUser(
name: string,
role: string = "user",
active: boolean = true
) {
return { name, role, active };
}
createUser("Alice"); // default role & active
createUser("Bob", "admin"); // default active
createUser("Charlie", "mod", false); // all specified
Important: In TypeScript with the default strict settings (noImplicitAny), every function parameter must have a type — either annotated explicitly or inferred from context, as with callback parameters. Leaving a parameter with an implicit any type is a compile error. This requirement ensures type safety throughout your codebase and enables the compiler to catch type-related errors early in development.
// Every parameter needs a type annotation
function process(name: string, count: number, active: boolean) {
// All parameters have explicit types
}
// This would cause an error in TypeScript:
// function process(name, count, active) { }
// Error: Parameter 'name' implicitly has an 'any' type
// Even callback parameters need types
function fetchData(callback: (data: string) => void) {
callback("result");
}
Rest parameters accept any number of arguments as an array:
// Rest parameters with type annotation
function sum(...numbers: number[]): number {
return numbers.reduce((total, n) => total + n, 0);
}
sum(1, 2); // 3
sum(1, 2, 3, 4, 5); // 15
// Rest with other parameters
function buildName(first: string, ...rest: string[]): string {
return first + " " + rest.join(" ");
}
buildName("John", "Paul", "Smith"); // "John Paul Smith"
// Spread in function calls
const nums: number[] = [1, 2, 3];
sum(...nums); // 6
The spread operator (...) allows you to expand an array into individual arguments when calling a function. This is the opposite of rest parameters - while rest parameters collect multiple arguments into an array, spread expands an array into separate arguments.
// Without spread - you'd have to pass each element manually
const numbers = [10, 20, 30];
sum(numbers[0], numbers[1], numbers[2]); // 60 - tedious!
// With spread - the array is expanded into individual arguments
sum(...numbers); // 60 - much cleaner!
// How it works:
// sum(...numbers) is equivalent to sum(10, 20, 30)
// Combining arrays with spread
const moreNumbers = [40, 50];
sum(...numbers, ...moreNumbers); // 150
// Spread with Math functions
const values = [5, 10, 3, 8, 1];
Math.max(...values); // 10
Math.min(...values); // 1
// TypeScript ensures type safety with spread
const strings = ["a", "b", "c"];
sum(...strings); // Error: Argument of type 'string' is not
// assignable to parameter of type 'number'
Tip: The spread operator is especially useful when working with arrays of unknown length, or when you want to pass array elements as individual function arguments without modifying the original function.
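One TypeScript-specific gotcha: spreading into a function with a fixed number of parameters only works when the array's length is known at compile time. A sketch (distance is a function invented here for illustration):

```typescript
function distance(x: number, y: number): number {
  return Math.sqrt(x * x + y * y);
}

const loose: number[] = [3, 4];
// distance(...loose); // Error: A spread argument must either have a
//                     // tuple type or be passed to a rest parameter

// A tuple type fixes the length, so the spread is allowed:
const point: [number, number] = [3, 4];
console.log(distance(...point)); // 5

// A readonly tuple from 'as const' works too:
const frozen = [3, 4] as const;
console.log(distance(...frozen)); // 5
```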
Function overloading lets you define multiple signatures for one function:
// Overload signatures
function format(value: string): string;
function format(value: number): string;
function format(value: Date): string;
// Implementation
function format(value: string | number | Date): string {
if (typeof value === "string") {
return value.toUpperCase();
} else if (typeof value === "number") {
return value.toFixed(2);
} else {
return value.toISOString();
}
}
format("hello"); // "HELLO"
format(3.14159); // "3.14"
format(new Date()); // "2024-01-15T..."
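A common gotcha with overloads: they are matched one signature at a time, so an argument typed as a union of the overload types matches none of them. A sketch reusing the format overloads from above:

```typescript
function format(value: string): string;
function format(value: number): string;
function format(value: string | number): string {
  return typeof value === "string" ? value.toUpperCase() : value.toFixed(2);
}

const input = "hello" as string | number;
// format(input); // Error: no overload accepts 'string | number'

// Narrowing first lets each branch match a specific overload:
const result =
  typeof input === "string"
    ? format(input)  // matches the string overload
    : format(input); // matches the number overload
console.log(result); // "HELLO"
```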
This article covers just the fundamentals of TypeScript from the TypeScript + React ebook. To master advanced topics like Generics, Advanced Types, Utility Types, and TypeScript with React Integration, check out the complete ebook:
📘 Beginner's Guide To TypeScript + React Integration - From TypeScript Basics to React Integration
More Such Useful Ebooks Are On The Way In Upcoming Weeks🔥
I'm a freelancer, mentor, and full-stack developer working primarily with React, Next.js, and Node.js, with 12+ years of experience.
Alongside building real-world web applications, I'm also an industry/corporate trainer, training developers and teams in modern JavaScript, Next.js, and MERN stack technologies with a focus on practical, production-ready skills.
I've also created various courses, with 3000+ students enrolled.
My Portfolio: https://yogeshchavan.dev/