The Practical Developer

A constructive and inclusive social network for software developers.

Decoding Movement: Emulating Biological Motion for Smarter Robots

2025-11-26 13:02:05

Ever watched a cat effortlessly navigate a complex environment and wondered how to program a robot to do the same? We're constantly striving to create robots with the agility and adaptability of animals, but the traditional approach of programming every joint movement is incredibly complex and often fails in unpredictable situations. What if robots could learn to move, not just follow pre-defined paths?

This is where neuromechanical emulation comes in. The core idea is to build a virtual model of an animal's body, connect it to a simulated nervous system driven by a neural network, and then train that system to reproduce real-world movements recorded with motion capture. This lets AI agents learn motor-control skills entirely in simulation.

Think of it like this: instead of coding a pianist's finger movements, we're teaching an AI to feel the music and move its (simulated) fingers accordingly. The simulation uses a physics engine to emulate how muscles, bones, and joints interact, resulting in more realistic and robust movement patterns.
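
As a toy illustration of the kind of training signal behind this approach, here is a minimal Python sketch of a pose-tracking reward (the joint layout, scale factor, and function name are assumptions for illustration, not taken from any specific system): the closer the simulated pose is to the motion-capture reference frame, the higher the reward the learning agent receives.

import numpy as np

def tracking_reward(sim_pose, ref_pose, scale=2.0):
    # Reward the agent for matching the reference mocap pose for this frame.
    # sim_pose / ref_pose: joint angles (radians) of the simulated body and of
    # the motion-capture reference; scale controls how sharply the reward decays.
    error = np.sum((sim_pose - ref_pose) ** 2)
    return float(np.exp(-scale * error))  # 1.0 for a perfect match, toward 0 as poses diverge

# Toy usage: a 10-joint pose compared against one reference frame
sim = np.zeros(10)
ref = np.full(10, 0.1)
print(tracking_reward(sim, ref))  # ~0.82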

Benefits for Developers:

  • Faster Development: Train robots in simulation before deploying to the real world.
  • Improved Agility: Discover novel movement strategies through AI-driven exploration.
  • Robust Control: Create robots that can adapt to unforeseen obstacles and changes in their environment.
  • Data Efficiency: Learn complex movements from relatively small datasets.
  • Easier Customization: Adapt learned controllers to different robot morphologies with minimal retraining.
  • Advanced Simulation: Facilitates the creation of highly accurate simulated environments for robot design and testing.

One major implementation challenge is computational cost. Accurate physics simulations are demanding. However, by using parallel processing and optimized algorithms, we can significantly reduce training times. A practical tip is to start with simplified models and gradually increase complexity as training progresses.

This approach has implications beyond just robotics. Imagine creating personalized physical therapy programs based on emulations of healthy movement, or developing advanced exoskeletons that anticipate and assist human motion seamlessly. By unlocking the secrets of animal locomotion, we are not only building smarter robots but also gaining a deeper understanding of ourselves.

Related Keywords: neuromechanics, animal locomotion, biomechanics, robot control, machine learning, reinforcement learning, MJX physics engine, MIMIC dataset, physics-based animation, biological robots, soft robotics, AI for robotics, computational neuroscience, motor control, embodied AI, simulation environment, digital twin, virtual reality, augmented reality, motion capture, behavioral science, animal behavior

Welcome Thread - v353

2025-11-26 13:00:00

  1. Leave a comment below to introduce yourself! You can talk about what brought you here, what you're learning, or just a fun fact about yourself.

  2. Reply to someone's comment, either with a question or just a hello. 👋

  3. Come back next week to greet our new members so you can one day earn our Warm Welcome Badge!

[Image: a penguin pointing, with the text "we need you"]

Want To Write As A Coder? Start With TIL Posts

2025-11-26 13:00:00

I originally posted this post on my blog a long time ago in a galaxy far, far away.

If you want to start a coding blog, don't start with a deep dive into the Linux kernel or other cryptic topics—unless you're already an expert on them.

Instead, write short "Today I Learned" (TIL) posts.

TIL posts are shorter posts where you share something you've found or figured out.

With TIL posts, you don't have to worry about long introductions or conclusions. Just write a good headline, a code block, a quick explanation, and your sources. And write using your own words, like in a conversation with a coworker.

That's enough to make a post worth reading.

TIL posts invite people into your learning journey.

Don't try to lecture the coding world about what they should do. Start documenting your learning instead.

Instead of writing "5 VS Code extensions every coder should install," try "TIL: 5 VS Code extensions I couldn't avoid installing."

Or instead of "5 Git commands every coder should know," covering the same basic Git commands every beginner writes about, write "TIL: 5 basic Git commands to use every day" or "TIL: How git status works."

Spent 20 minutes or more figuring something out? Write a TIL post. That's the easiest way to start a coding blog.

Apart from coding, writing has been one of the best skills I've developed as a coder. It's taught me to research, organize, and present ideas clearly.

That's why I made it one of the strategies in my book, Street-Smart Coding: 30 Ways to Get Better at Coding. It's the guide I wish I had when I was starting out, trying to go from junior to senior.

Get your copy of Street-Smart Coding here

Make & Makefiles: A Modern Developer's Guide to Classic Automation

2025-11-26 12:53:18

Introduction

In an era of complex build systems and framework-specific tooling, there's a compelling case for revisiting Make—a build automation tool that has quietly powered software development since 1976. Despite its age, Make remains remarkably relevant for modern development workflows, from React applications to containerized microservices.

This post will help you understand why Make deserves a place in your development toolkit and how to leverage it effectively in contemporary projects.

The Challenge of Modern Development Workflows

Every development team faces the same fundamental challenge: managing increasingly complex build and deployment processes. Consider a typical React application workflow:

  • Installing dependencies across multiple directories
  • Running development servers with specific configurations
  • Executing test suites before deployments
  • Building optimized production bundles
  • Deploying to various environments

Teams typically address these needs through npm scripts, resulting in package.json files cluttered with intricate command chains:

"scripts": {
  "dev": "NODE_ENV=development webpack-dev-server --config webpack.dev.js",
  "build:prod": "NODE_ENV=production webpack --config webpack.prod.js && cp -r public/* dist/",
  "deploy": "npm run test && npm run build:prod && firebase deploy"
}

While functional, this approach has limitations. Scripts become unwieldy, cross-platform compatibility issues arise, and coordinating tasks across different tools and languages becomes challenging.

What is Make?

Make operates on a simple yet powerful principle: it executes tasks (called targets) based on dependencies and recipes. Unlike language-specific build tools, Make is language-agnostic, which makes it ideal for projects that span multiple languages and technology stacks.

The Anatomy of a Makefile

A Makefile consists of rules, each containing:

  1. Target: The task name or file to be created
  2. Dependencies: Prerequisites that must be satisfied first
  3. Recipe: Shell commands that execute the task

Here's a basic example:

build: install
    npm run build
    echo "Build completed successfully"

install:
    npm install

When you run make build, Make automatically executes install first (if needed), then proceeds with the build commands. Note that recipe lines must be indented with a tab character, not spaces; this is a classic Make gotcha.
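
Assuming a standard npm project, a run might look roughly like this (npm's own output is abbreviated; Make echoes each recipe line before running it):

$ make build
npm install
...
npm run build
...
echo "Build completed successfully"
Build completed successfully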

Key Concepts Every Developer Should Know

Variables for Maintainability

Makefile variables reduce repetition and improve maintainability:

PROJECT_DIR = ./app
BUILD_DIR = $(PROJECT_DIR)/build
NODE_ENV = production

build:
    cd $(PROJECT_DIR) && NODE_ENV=$(NODE_ENV) npm run build

The .PHONY Declaration

One of Make's most important concepts is distinguishing between file targets and task targets. By default, Make assumes a target represents a file. If a file (or directory) with the target's name exists and is not older than its prerequisites, Make considers it "up to date" and skips the recipe.

The .PHONY declaration prevents this behavior:

.PHONY: test build deploy

test:
    npm test

build:
    npm run build

Without .PHONY, if a file or directory named test exists in your project, make test would report that the target is up to date and never run your tests—a common source of confusion for newcomers.
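
For example, with the rules above but no .PHONY line, merely creating a directory named test is enough to silence the test suite; GNU Make reports something like:

$ mkdir test
$ make test
make: 'test' is up to date.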

Dependency Management

Make excels at managing task dependencies. Complex workflows become self-documenting:

deploy: lint test build
    firebase deploy

lint:
    eslint src/

test:
    jest --coverage

build:
    webpack --mode production

Running make deploy automatically executes linting, testing, and building in the correct order before deployment.

Practical Implementation: Make in a Modern React Project

Let's examine a production-ready Makefile for a React application with Firebase deployment:

# React + Firebase Project Makefile
# ================================

.PHONY: help install dev build test lint clean deploy check-env

# Configuration
NODE_ENV ?= development
PORT ?= 3000
BUILD_DIR = build
COVERAGE_DIR = coverage

# Default target displays available commands
help:
    @echo "Available targets:"
    @echo "  install    - Install all dependencies"
    @echo "  dev        - Start development server"
    @echo "  build      - Create production build"
    @echo "  test       - Run test suite with coverage"
    @echo "  lint       - Run ESLint checks"
    @echo "  deploy     - Deploy to Firebase (runs tests first)"
    @echo "  clean      - Remove generated files"

# Install dependencies
install:
    @echo "Installing dependencies..."
    npm ci
    @if [ -d "functions" ]; then \
        cd functions && npm ci; \
    fi
    @echo "Dependencies installed successfully"

# Development server
dev: check-env
    NODE_ENV=$(NODE_ENV) PORT=$(PORT) npm start

# Production build
build: clean
    @echo "Creating production build..."
    NODE_ENV=production npm run build
    @echo "Build complete: $(BUILD_DIR)/"

# Run tests with coverage
test:
    @echo "Running test suite..."
    npm test -- --coverage --watchAll=false
    @echo "Tests completed. Coverage report: $(COVERAGE_DIR)/index.html"

# Lint codebase
lint:
    @echo "Running ESLint..."
    npx eslint src/ --ext .js,.jsx,.ts,.tsx
    @echo "Linting complete"

# Clean generated files
clean:
    @echo "Cleaning build artifacts..."
    rm -rf $(BUILD_DIR) $(COVERAGE_DIR)
    @echo "Clean complete"

# Deploy to Firebase
deploy: lint test build
    @echo "Starting deployment..."
    firebase deploy
    @echo "Deployment successful"

# Check environment setup
check-env:
    @which node > /dev/null || (echo "Node.js is required but not installed" && exit 1)
    @which npm > /dev/null || (echo "npm is required but not installed" && exit 1)

This Makefile provides several advantages:

  1. Self-documentation: Running make help displays all available commands
  2. Dependency management: Deployment automatically runs linting, tests, and the build
  3. Environment flexibility: Variables can be overridden at runtime (see the example after this list)
  4. Error handling: The check-env target ensures prerequisites are met
  5. Consistent workflow: Team members use identical commands regardless of their local setup
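
Because NODE_ENV and PORT are assigned with ?=, they keep their defaults unless you supply a value, so one-off overrides need no Makefile edits:

# Use the defaults
make dev

# Override per invocation from the command line
make dev PORT=4000
make dev NODE_ENV=test PORT=8080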

Best Practices and Recommendations

1. Structure and Organization

Organize your Makefile logically. Group related targets and use comments to explain complex operations:

# ========================================
# Development Tasks
# ========================================

.PHONY: dev dev-backend dev-frontend

dev: dev-backend dev-frontend

dev-backend:
    cd backend && npm run dev

dev-frontend:
    cd frontend && npm start

2. Error Handling

Make your targets robust with proper error checking:

deploy: 
    @if [ -z "$(API_KEY)" ]; then \
        echo "ERROR: API_KEY is not set"; \
        exit 1; \
    fi
    firebase deploy --token $(API_KEY)
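
If you prefer Make itself to abort before any shell command runs, GNU Make's built-in if and error functions can perform the same check (a sketch assuming GNU Make):

deploy:
    $(if $(API_KEY),,$(error API_KEY is not set))
    firebase deploy --token $(API_KEY)

Because Make expands each recipe line just before executing it, the error fires without spawning a shell.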

3. Cross-Platform Compatibility

While Make is primarily Unix-based, you can write more portable Makefiles:

# Use $(RM) instead of rm for better portability
clean:
    $(RM) -rf build/

4. Progressive Enhancement

Start simple and add complexity as needed. Begin with basic targets for common tasks, then gradually add more sophisticated features like parallel execution or conditional logic.
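
For instance, conditional logic can live in the Makefile itself, and independent targets can run in parallel via the -j flag. The variable and flag names below are illustrative:

# Choose build flags based on an overridable variable
ENV ?= development

ifeq ($(ENV),production)
  BUILD_FLAGS = --mode production
else
  BUILD_FLAGS = --mode development
endif

build:
    webpack $(BUILD_FLAGS)

Independent targets such as lint and test can then run concurrently with make -j2 lint test.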

When to Choose Make

Make is particularly valuable when:

  • Working across multiple languages: Projects combining JavaScript, Python, Go, or other languages
  • Managing complex workflows: Multi-step build and deployment processes
  • Requiring consistency: Ensuring all team members follow identical procedures
  • Integrating with CI/CD: Make commands translate seamlessly to pipeline steps (see the sketch after this list)
  • Simplifying onboarding: New developers can start with make install && make dev
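
For example, a CI job can reuse exactly the targets developers run locally. This GitHub Actions sketch is illustrative; the workflow layout and action versions are assumptions:

name: ci
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: make install
      - run: make lint test build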

However, consider alternatives if:

  • Your project is purely JavaScript-based and npm scripts suffice
  • You need complex build optimizations specific to a framework
  • Your team strongly prefers language-specific tooling

Conclusion

Make might be an old tool, but it has aged well. As a language-agnostic, dependency-aware task runner, it offers a simple and durable approach to workflow automation.

For modern developers, Make isn't about replacing webpack, npm, or other contemporary tools. Instead, it's about orchestrating these tools into cohesive, repeatable workflows. If you like this blog and want to learn more about Frontend Development and Software Engineering, you can follow me on Dev.

Why You Should Use Panic Instead of Fatal for Cleanup

2025-11-26 12:51:14

When writing Go we often run into critical errors: errors that mean our application cannot continue running. To handle these we usually look at two options: panic or log.Fatal.

They both stop the application, but they do it in very different ways. The main difference is how they treat deferred functions.

The Problem with Fatal

When you use log.Fatal the application stops immediately. Under the hood it calls os.Exit(1). This is a hard stop: the program does not look back, and it does not run any cleanup code.

Let us look at a standard example. Imagine we are connecting to a database in our main function. We want to make sure the database connection closes when the app stops.

package main

import (
    "fmt"
    "log"
)

func main() {
    fmt.Println("1. Opening Database Connection...")

    // We schedule this to run when the function exits
    defer fmt.Println("Clean up: Closing Database Connection.")

    // Something bad happens here
    log.Fatal("CRITICAL ERROR: System failure!")
}

Output:

1. Opening Database Connection...
2025/11/26 10:00:00 CRITICAL ERROR: System failure!
exit status 1

Notice what is missing? The line "Clean up: Closing Database Connection." never printed. The deferred function was ignored. If this were a real application, the database connection might remain open on the server side until it times out, because we never sent the close signal.

Why Panic is Different

Now let us look at panic. When a program panics it begins to shut down but it does not stop immediately. It starts "unwinding" the stack. This means it goes back up through the functions you called.

Crucially, panic executes any deferred functions it finds along the way.

Here is the same example using panic:

package main

import (
    "fmt"
)

func main() {
    fmt.Println("1. Opening Database Connection...")

    // We schedule this to run when the function exits
    defer fmt.Println("Clean up: Closing Database Connection.")

    // Something bad happens here
    panic("CRITICAL ERROR: System failure!")
}

Output:

1. Opening Database Connection...
Clean up: Closing Database Connection.
panic: CRITICAL ERROR: System failure!

[stack trace...]
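
To make the unwinding visible across several stack frames, here is a small sketch (the function names and resources are made up); each deferred call runs as the panic travels back toward main:

package main

import (
    "fmt"
)

func parseConfig() {
    defer fmt.Println("Clean up: Releasing parser buffers.")
    panic("CRITICAL ERROR: invalid configuration")
}

func openFile() {
    defer fmt.Println("Clean up: Closing file.")
    parseConfig()
}

func main() {
    fmt.Println("1. Opening Database Connection...")
    defer fmt.Println("Clean up: Closing Database Connection.")
    openFile()
}

Output:

1. Opening Database Connection...
Clean up: Releasing parser buffers.
Clean up: Closing file.
Clean up: Closing Database Connection.
panic: CRITICAL ERROR: invalid configuration

[stack trace...]

The deferred functions run innermost-first as the stack unwinds, and only then does the runtime print the panic message and stack trace (the process exits with status 2).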

The Logic: Why Use Panic?

You should use panic instead of log.Fatal when you have critical resources to manage.

In our example we initialized our application and opened a main database connection. The connection was successful. But then another error occurred later in the code.

If we use log.Fatal the app just dies. The database connection is never explicitly closed.

If we use panic the application understands that it needs to crash, but it is polite about it. It pauses to run the deferred functions, which allows us to explicitly close the database connection. Once the cleanup is done the application finally shuts down and prints the error message.

Summary

  • log.Fatal: Stops the app instantly and skips deferred functions. Use this only if you have no cleanup to do.
  • panic: Stops the normal flow but runs deferred functions first. Use this when you need to ensure connections, files, or other resources are closed properly before the program exits.