The Practical Developer

A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

TinyMCE Example

2025-07-06 03:29:47

Check out this Pen I made!

💾 Save What You Search: A Chrome Extension Created with Runner H

2025-07-06 03:25:46

This is a submission for the Runner H "AI Agent Prompting" Challenge

What I Built

I created a Chrome extension called SideSave, designed especially for content professionals, SEO specialists, and anyone doing competitive analysis.

The idea came from my own workflow writing search-optimized content, where I often need to analyze how competitors position themselves in Google's sponsored results.

The problem is that these ads — although rich in copywriting, links, persuasive hooks, and keywords — disappear quickly. Just by refreshing the search or waiting a few minutes, the same ad may no longer appear.

This makes analysis and later study difficult.

SideSave solves this by saving ads with title, link, snippet, and a clear visual marker that it's a sponsored result.

Demo

How I Used Runner H

I used Runner H as a tool to automatically and accurately generate the extension code.

With a well-structured prompt, I was able to generate:

  • A V3-compatible manifest.json
  • content.js scripts that dynamically detect Google results
  • Ad detection based on text like "Sponsored"
  • A floating “➕ Save to SideSave” button with smart behavior
  • A fixed sidebar with a flat/striped style
  • Persistent storage using chrome.storage.local
  • Communication between popup and content script via chrome.runtime.sendMessage
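
For reference, the content-script-to-storage flow looks roughly like the sketch below. This is a minimal hand-written illustration, not the exact code Runner H generated; names such as saveAd and sidesave_ads are assumptions, not SideSave's real identifiers.

// content.js (sketch): collect the ad's data and hand it to the extension's storage
function saveAd(adElement) {
  const ad = {
    title: adElement.querySelector('h3')?.textContent ?? '',
    link: adElement.querySelector('a')?.href ?? '',
    snippet: adElement.innerText.slice(0, 300),
    sponsored: true, // clear marker that this was a sponsored result
    savedAt: new Date().toISOString(),
  };

  // Persist locally, then notify the popup so it can refresh the sidebar list
  chrome.storage.local.get({ sidesave_ads: [] }, ({ sidesave_ads }) => {
    chrome.storage.local.set({ sidesave_ads: [...sidesave_ads, ad] }, () => {
      chrome.runtime.sendMessage({ type: 'AD_SAVED', ad });
    });
  });
}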

All of this was generated with the help of Runner H's AI, without my needing to code everything manually.

Automation saved me hours of fine-tuning and accelerated the prototyping process.

Use Case & Impact

The main use case for this extension is for digital marketers, content creators, copywriters, and SEO specialists who want to study how ads are structured on search pages — especially when creating campaigns, optimized landing pages, or analyzing competitors.

With SideSave, you can:

  • Save and classify sponsored ads for future reference
  • Compare headline and description copy used in competitor campaigns
  • Identify link patterns used in paid ads
  • Store ideas for your own strategic content creation

The impact is clear: it makes analytical and creative work easier by keeping key insights readily accessible in the browser.

Social Love

https://youtu.be/WoncvzT_bUI

From 0 to 100% API Test Coverage with Keploy AI – My Journey

2025-07-06 03:19:54

Over the past few days, I had the amazing opportunity to work on API testing using Keploy AI as part of the Keploy API Fellowship. In this post, I’ll walk you through everything I did — from building a Node.js API to running AI-powered tests and integrating it into a CI/CD pipeline.

🔧 My API Project
I built a Student Manager API using:

  1. Node.js & Express – for the backend
  2. MongoDB Atlas – as the database
  3. Swagger – to document the API
  4. Endpoints for GET, POST, PUT, DELETE at /api/students

GitHub Repository:
https://github.com/kishorecodesinpython/student-api-server
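
To give a feel for the API's shape, one endpoint looked roughly like the sketch below. This is simplified and the names are illustrative; see the repository for the actual code.

// Simplified sketch of the Student Manager API (Express + Mongoose)
const express = require('express');
const mongoose = require('mongoose');

// MongoDB Atlas connection string comes from the environment
mongoose.connect(process.env.MONGODB_URI);

const Student = mongoose.model('Student', new mongoose.Schema({
  name: String,
  email: String,
}));

const app = express();
app.use(express.json());

// GET /api/students – list all students
app.get('/api/students', async (req, res) => {
  res.json(await Student.find());
});

// POST /api/students – create a student
app.post('/api/students', async (req, res) => {
  res.status(201).json(await Student.create(req.body));
});

app.listen(3000);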

🧪 Task 1 – API Testing with Keploy AI
Step 1: Created an OpenAPI Schema

I defined all endpoints and schemas using Swagger UI, hosted at /api-docs.
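
One common way to wire the docs up at /api-docs in Express is with the swagger-ui-express package (the repository may do it differently; this is just a sketch):

// Serve the OpenAPI schema at /api-docs (assumes swagger-ui-express)
const express = require('express');
const swaggerUi = require('swagger-ui-express');
const swaggerDocument = require('./openapi.json'); // the schema defined for the API

const app = express();
app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument));
app.listen(3000);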

Step 2: Ran Keploy in Docker

Since I’m using Windows, I had to use Docker with WSL2. I ran this command: docker compose up --build
This built and launched my API and Keploy CLI together inside containers.

Step 3: Recorded API Calls

I sent multiple requests using curl and Postman to record traffic, while Keploy captured them in real-time. Then I ran: keploy test ...
This generated multiple test cases from actual traffic. I got a Test Drive report with:

  • 27 Test Suites
  • 20 Accepted
  • 7 Rejected
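
For context, the recorded traffic was just ordinary requests against the running API; in plain Node.js one of them would look something like this (the payload fields are illustrative and the port depends on your setup):

// Illustrative request of the kind Keploy recorded while capturing traffic
(async () => {
  const res = await fetch('http://localhost:3000/api/students', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Asha', email: 'asha@example.com' }),
  });
  console.log(res.status, await res.json());
})();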

Step 4: Debugging Docker & Environment Issues

It was not all smooth sailing! I faced a few problems:

  • Docker WSL2 was broken (resolved via a reset and reinstall)
  • The MongoDB URI wasn’t being passed properly (fixed using dotenv; see the snippet below)
  • curl commands needed to be corrected to match the schema
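
The dotenv fix, roughly: load the .env file before connecting, and read the URI from the environment (the variable name here is illustrative).

// Load environment variables before connecting to MongoDB Atlas
require('dotenv').config();
const mongoose = require('mongoose');

mongoose.connect(process.env.MONGODB_URI)
  .then(() => console.log('MongoDB connected'))
  .catch((err) => console.error('MongoDB connection failed:', err));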

Step 5: CI/CD Integration

I integrated Keploy testing into a GitHub Actions pipeline, which automatically:

  • Built my app using Docker
  • Ran all tests
  • Validated test outputs

🌐 Task 2 – Chrome Extension API Testing
I explored the Keploy Chrome Extension to test real-world APIs.

Site 1: DummyJSON
I captured a GET request to /products using the Chrome console and the Keploy extension.

Site 2: JSONPlaceholder
Tested endpoints like GET /posts, POST /posts, and validated response handling.

The Chrome Extension made it incredibly easy to record calls and generate test cases on the fly.
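
For reference, the calls I captured look like this in plain fetch form when run from the browser console (the extension records them automatically; this just shows the shape of the traffic):

// DummyJSON: GET /products
const products = await fetch('https://dummyjson.com/products').then((r) => r.json());

// JSONPlaceholder: POST /posts (JSONPlaceholder fakes the create and echoes it back)
const created = await fetch('https://jsonplaceholder.typicode.com/posts', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ title: 'hello', body: 'test post', userId: 1 }),
}).then((r) => r.json());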

💡 What I Learned

  • Keploy’s AI-generated tests helped me go from zero to complete test coverage in minutes.
  • Docker with WSL2 on Windows takes patience and careful setup.
  • The Chrome extension is perfect for testing third-party/public APIs.
  • CI/CD test integration adds confidence to production readiness.

📸 Final Screenshots I Shared:

  • Swagger API Docs UI
  • Keploy “Test Drive” bunny report
  • Docker logs running Keploy
  • MongoDB connected confirmation in terminal

🏁 Conclusion
Thanks to Keploy, I transitioned from writing tests manually to using AI for full automation. This fellowship was one of the most hands-on testing experiences I’ve had — and I’ll definitely be applying these workflows to future projects.

GitHub Repo:
https://github.com/kishorecodesinpython/student-api-server

Let me know what you think or if you want to connect!

#Keploy #APITesting #CI_CD #Nodejs #MongoDB #Docker #OpenSource #AIinTesting #KeployFellowship


Beginner's Guide: How I Installed Nginx on an AWS EC2 Instance

2025-07-06 03:16:18

Deploying a web server on the cloud is easier than ever, especially with AWS. In this article, I’ll walk you through the steps I took to install NGINX on an Ubuntu-based EC2 instance using AWS.

Before starting, ensure you have:

  1. An AWS account: If you don't have an AWS account, you can register at https://aws.amazon.com

Registration page

If you already have an account, log in using your root user email.

Once you are logged in:

  1. Create an EC2 instance: Search for "Instances" in the search bar and click Launch instance.

To create your instance, do the following:
a. Type your instance name, e.g. my_nginx_instance (note: do not put spaces in the instance name).
b. Select Ubuntu as your OS (operating system).
c. Select your instance type; choose t3.micro for the purposes of this article.
d. Create your key pair: key pairs are very important and must be kept somewhere safe. You can either use an existing key pair or create a new one.
e. Next, create your security group. For the purposes of this article, select Allow SSH traffic from anywhere and Allow HTTP traffic from the internet (HTTP must be open so you can view the Nginx landing page later).
f. Then click Launch instance.

  2. Open Git Bash so you can run some commands from the terminal.
  3. Change directory to where your key pair was downloaded. In my case the key pair is in my Downloads folder, so I cd into Downloads.

  4. SSH into your EC2 instance using your key pair and the instance's public IP, for example: ssh -i my_keypair.pem ubuntu@<public-ip> (the default username for Ubuntu AMIs is ubuntu).

  5. Update your Ubuntu OS: sudo apt update -y

  6. Install Nginx: sudo apt install nginx -y

  7. Start Nginx: sudo systemctl start nginx
  8. Check the status of Nginx: sudo systemctl status nginx

  9. Enable Nginx so it starts automatically on boot: sudo systemctl enable nginx

  10. Open your instance's public IP address in a browser and you will see the Nginx landing page.

  11. You can go ahead and modify the landing page with your own words or text using sudo nano /var/www/html/index.nginx-debian.html

I hope I was able to take you through the basic steps of setting up an Nginx server on an AWS EC2 instance.

Thank you.

Real Time Communication SSE Advanced Streaming Web

2025-07-06 03:15:05

As a junior student, I encountered a challenge while developing a campus second-hand trading platform: how to implement real-time chat functionality between buyers and sellers? Traditional HTTP request-response patterns clearly couldn't meet real-time communication needs. After deep research, I discovered a surprisingly elegant solution.

Project Information
🚀 Hyperlane Framework: GitHub Repository
📧 Author Contact: [email protected]
📖 Documentation: Official Docs

The Magic of WebSocket: Bidirectional Real-time Communication

WebSocket protocol solves HTTP's unidirectional communication limitations by establishing full-duplex communication channels between clients and servers. The framework I chose impressed me with its WebSocket support, completely encapsulating the complex protocol upgrade process so developers can focus solely on business logic.

use hyperlane::*;
use hyperlane_macros::*;

#[ws]
#[get]
async fn chat_handler(ctx: Context) {
    // Get WebSocket upgrade request key
    let key: String = ctx.get_request_header(SEC_WEBSOCKET_KEY).await.unwrap();

    // Handle client messages
    let request_body: Vec<u8> = ctx.get_request_body().await;

    // Send response to client
    let _ = ctx.set_response_body(key).await.send_body().await;
    let _ = ctx.set_response_body(request_body).await.send_body().await;
}

#[tokio::main]
async fn main() {
    let server = Server::new();
    server.host("0.0.0.0").await;
    server.port(8080).await;
    server.route("/chat", chat_handler).await;
    server.run().await.unwrap();
}

This code demonstrates the framework's simplicity. Using the #[ws] attribute marker, the framework automatically handles WebSocket protocol upgrades, eliminating developer concerns about underlying handshake processes.

Building a Complete Chat System

In my campus trading platform project, I needed to implement a multi-room chat system. Users could communicate with sellers in real-time on product detail pages, discussing product details, prices, and other information.

1. Room Management System

use hyperlane::*;
use hyperlane_macros::*;
use hyperlane_broadcast::*;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
use serde::{Deserialize, Serialize};

#[derive(Clone, Serialize, Deserialize)]
struct ChatMessage {
    user_id: u32,
    username: String,
    content: String,
    timestamp: chrono::DateTime<chrono::Utc>,
    message_type: MessageType,
}

#[derive(Clone, Serialize, Deserialize)]
enum MessageType {
    Text,
    Image,
    File,
    System,
}

// Global chat room management
// (OnceLock provides safe, lazy, one-time initialization of shared state)
static CHAT_ROOMS: std::sync::OnceLock<BroadcastMap<String>> = std::sync::OnceLock::new();

fn get_chat_rooms() -> &'static BroadcastMap<String> {
    CHAT_ROOMS.get_or_init(BroadcastMap::new)
}

// Connection management
type ConnectionManager = Arc<RwLock<HashMap<String, Vec<String>>>>;

static CONNECTION_MANAGER: std::sync::OnceLock<ConnectionManager> = std::sync::OnceLock::new();

fn get_connection_manager() -> &'static ConnectionManager {
    CONNECTION_MANAGER.get_or_init(|| Arc::new(RwLock::new(HashMap::new())))
}

This design uses a global broadcast manager to handle multi-room chat, with each room having independent message channels.

2. WebSocket Connection Handling

#[ws]
#[get]
async fn chat_room_handler(ctx: Context) {
    let room_id = ctx.get_route_params().await.get("room_id")
        .unwrap_or("general").to_string();
    let user_id = ctx.get_route_params().await.get("user_id")
        .unwrap_or("anonymous").to_string();

    let connection_id = format!("{}_{}", user_id, chrono::Utc::now().timestamp_millis());

    // Register connection
    register_connection(&room_id, &connection_id).await;

    let chat_rooms = get_chat_rooms();
    let mut receiver = chat_rooms.subscribe_unwrap_or_insert(&room_id);

    // Send welcome message
    let welcome_message = ChatMessage {
        user_id: 0,
        username: "System".to_string(),
        content: format!("User {} joined the room", user_id),
        timestamp: chrono::Utc::now(),
        message_type: MessageType::System,
    };

    let welcome_json = serde_json::to_string(&welcome_message).unwrap();
    let _ = chat_rooms.send(&room_id, welcome_json);

    // Handle message sending and receiving
    tokio::select! {
        // Receive client messages
        _ = async {
            loop {
                let message_data = ctx.get_request_body().await;
                if !message_data.is_empty() {
                    if let Ok(message_str) = String::from_utf8(message_data) {
                        if let Ok(mut chat_message) = serde_json::from_str::<ChatMessage>(&message_str) {
                            chat_message.timestamp = chrono::Utc::now();
                            let broadcast_message = serde_json::to_string(&chat_message).unwrap();
                            let _ = chat_rooms.send(&room_id, broadcast_message);
                        }
                    }
                }
            }
        } => {},

        // Broadcast messages to client
        _ = async {
            while let Ok(message) = receiver.recv().await {
                let _ = ctx.set_response_body(message).await.send_body().await;
            }
        } => {}
    }

    // Clean up connection
    cleanup_connection(&room_id, &connection_id).await;

    // Notify other users that someone left
    let leave_message = format!("User {} left the room", user_id);
    broadcast_to_room(&room_id, &leave_message).await;
}

async fn register_connection(room_id: &str, connection_id: &str) {
    let manager = get_connection_manager();
    let mut connections = manager.write().await;

    connections.entry(room_id.to_string())
        .or_insert_with(Vec::new)
        .push(connection_id.to_string());
}

async fn cleanup_connection(room_id: &str, connection_id: &str) {
    let manager = get_connection_manager();
    let mut connections = manager.write().await;

    if let Some(room_connections) = connections.get_mut(room_id) {
        room_connections.retain(|id| id != connection_id);
        if room_connections.is_empty() {
            connections.remove(room_id);
        }
    }
}

async fn broadcast_to_room(room_id: &str, message: &str) {
    let chat_rooms = get_chat_rooms();
    let _ = chat_rooms.send(room_id, message.to_string());
}

3. Advanced Feature Implementation

To enhance user experience, I also implemented some advanced features:

// Message history
#[derive(Clone)]
struct MessageHistory {
    messages: Arc<RwLock<HashMap<String, Vec<ChatMessage>>>>,
}

impl MessageHistory {
    fn new() -> Self {
        Self {
            messages: Arc::new(RwLock::new(HashMap::new())),
        }
    }

    async fn add_message(&self, room_id: &str, message: ChatMessage) {
        let mut messages = self.messages.write().await;
        messages.entry(room_id.to_string())
            .or_insert_with(Vec::new)
            .push(message);
    }

    async fn get_recent_messages(&self, room_id: &str, limit: usize) -> Vec<ChatMessage> {
        let messages = self.messages.read().await;
        if let Some(room_messages) = messages.get(room_id) {
            room_messages.iter()
                .rev()
                .take(limit)
                .cloned()
                .collect::<Vec<_>>()
                .into_iter()
                .rev()
                .collect()
        } else {
            Vec::new()
        }
    }
}

// Online user statistics
async fn get_online_users(room_id: &str) -> Vec<String> {
    let manager = get_connection_manager();
    let connections = manager.read().await;

    if let Some(room_connections) = connections.get(room_id) {
        room_connections.iter()
            .map(|conn| conn.split('_').next().unwrap_or("unknown").to_string())
            .collect::<std::collections::HashSet<_>>()
            .into_iter()
            .collect()
    } else {
        Vec::new()
    }
}

// Message filtering and validation
fn validate_message(message: &ChatMessage) -> bool {
    // Check message length
    if message.content.len() > 1000 {
        return false;
    }

    // Check for sensitive words
    let sensitive_words = ["spam", "advertisement"];
    for word in sensitive_words {
        if message.content.to_lowercase().contains(word) {
            return false;
        }
    }

    true
}

Client Implementation

To completely demonstrate real-time communication effects, I also implemented the corresponding JavaScript client:

class ChatClient {
  constructor(roomId, userId) {
    this.roomId = roomId;
    this.userId = userId;
    this.ws = null;
    this.messageHandlers = [];
  }

  connect() {
    const wsUrl = `ws://localhost:8080/chat/${this.roomId}/${this.userId}`;
    this.ws = new WebSocket(wsUrl);

    this.ws.onopen = () => {
      console.log('Connected to chat room:', this.roomId);
      this.onConnectionOpen();
    };

    this.ws.onmessage = (event) => {
      try {
        const message = JSON.parse(event.data);
        this.handleMessage(message);
      } catch (e) {
        console.error('Failed to parse message:', e);
      }
    };

    this.ws.onerror = (error) => {
      console.error('WebSocket error:', error);
    };

    this.ws.onclose = () => {
      console.log('Disconnected from chat room');
      this.onConnectionClose();
    };
  }

  sendMessage(content, messageType = 'Text') {
    if (this.ws && this.ws.readyState === WebSocket.OPEN) {
      const message = {
        user_id: parseInt(this.userId),
        username: `User${this.userId}`,
        content: content,
        // The server overwrites the timestamp, but the field must be present
        // so the Rust ChatMessage struct can deserialize the payload
        timestamp: new Date().toISOString(),
        message_type: messageType,
      };

      this.ws.send(JSON.stringify(message));
    }
  }

  handleMessage(message) {
    this.messageHandlers.forEach((handler) => handler(message));
  }

  onMessage(handler) {
    this.messageHandlers.push(handler);
  }

  onConnectionOpen() {
    // Handle post-connection setup
    this.sendMessage('Hello everyone!', 'System');
  }

  onConnectionClose() {
    // Handle post-disconnection, can implement auto-reconnect
    setTimeout(() => {
      console.log('Attempting to reconnect...');
      this.connect();
    }, 3000);
  }

  disconnect() {
    if (this.ws) {
      this.ws.close();
    }
  }
}

// Usage example
const chatClient = new ChatClient('room123', '456');

chatClient.onMessage((message) => {
  const messageElement = document.createElement('div');
  messageElement.innerHTML = `
        <strong>${message.username}:</strong> 
        ${message.content} 
        <small>(${new Date(message.timestamp).toLocaleTimeString()})</small>
    `;
  document.getElementById('messages').appendChild(messageElement);
});

chatClient.connect();

// Send message
document.getElementById('sendButton').onclick = () => {
  const input = document.getElementById('messageInput');
  chatClient.sendMessage(input.value);
  input.value = '';
};

Real Application Results

After my campus trading platform went live, the real-time chat functionality received unanimous user praise. Through monitoring data, I discovered:

  1. Low Latency: Message transmission latency averaged under 50ms
  2. High Concurrency: Single chat rooms could stably support 500+ users online simultaneously
  3. Stability: 30 days of continuous operation without any WebSocket connection exceptions
  4. Resource Efficiency: Server memory usage reduced by 70% compared to traditional polling solutions

This data proves the framework's excellent performance in real-time communication scenarios.

Project Repository: GitHub

Author Email: [email protected]

Digital Marketing AI Coach

2025-07-06 03:14:54

This is a submission for the Runner H "AI Agent Prompting" Challenge

What I Built

I built a Digital Marketing AI Coach using Runner H that acts as a weekly performance coach for each member of my digital marketing team. It helps track how each member is doing based on their content submissions, deadlines, and social media engagement, and sends them custom tips to improve, along with creative content suggestions for next week.

This agent not only saves me hours of reviewing content sheets and giving feedback but also keeps the whole team motivated and improving, week by week.

As someone managing a content team, I face these problems every week:

  • Some team members post content on our dedicated social media platforms late or miss deadlines
  • Not all content gets the engagement we expect
  • It's hard to give personalized, timely feedback to everyone
  • Creativity drops when people don’t know what to post next

Instead of doing everything manually, I wanted a solution that could analyze performance, suggest improvements, and boost creativity for each member.

Demo

Here is the PDF generated by Runner H: Digital Marketing AI Coach

The screenshot of the automated mail sent and Runner H
https://drive.google.com/drive/folders/1LBSHpE7Hsr6VtE1YS0lPBweqTeaJdbno?usp=sharing

How I Used Runner H

Runner H acts as my AI content strategist.

It reads our shared Google Sheet where we track content types, deadlines, engagement (likes), and whether each post was missed or not. Then for each person, Runner H:

Analyzes (a rough sketch of this aggregation appears after these lists):

  • How many tasks they completed compared with the tasks assigned for the period (according to their KPIs)
  • Which submissions were late
  • Which kinds of content performed best and worst

Generates a personalized report:

  • A performance summary for the member
  • Improvement suggestions for the member
  • One creative challenge to try (I call it a “Strategy Spin Card”)

Sends that report directly to the team member (via email)
Then it creates a manager summary:

  • Top performer of the week
  • Missed deadlines overview
  • What content types are working best
  • Suggests what to improve across the team
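
To make the analysis concrete, the per-member aggregation Runner H is asked to do has roughly this shape. This is my own sketch for illustration, not Runner H's internal code; the field names follow the sheet columns listed in the prompt below.

// Sketch: weekly stats per member from rows exported from the Google Sheet
function summarizeMember(rows, member) {
  const own = rows.filter((r) => r['Assigned To'] === member);
  const missed = own.filter((r) => r['Is Missed'] === 'Yes').length;
  const avgLikes = own.reduce((sum, r) => sum + Number(r.Likes || 0), 0) / (own.length || 1);

  // Rank content types by average likes to find the best and worst performers
  const byType = {};
  for (const r of own) {
    if (!byType[r['Content Type']]) byType[r['Content Type']] = [];
    byType[r['Content Type']].push(Number(r.Likes || 0));
  }
  const ranked = Object.entries(byType)
    .map(([type, likes]) => [type, likes.reduce((a, b) => a + b, 0) / likes.length])
    .sort((a, b) => b[1] - a[1]);

  return {
    member,
    submitted: own.length,
    missedDeadlines: missed,
    averageLikes: Math.round(avgLikes),
    bestContentType: ranked[0]?.[0] ?? null,
    worstContentType: ranked[ranked.length - 1]?.[0] ?? null,
  };
}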

The Prompt I Gave Runner H

Access the Google Sheet at:

[Insert Google Sheet Link]

This sheet contains the following columns:

Deadline | Posted Date | Content Type | Topic | Platform | Assigned To | Status | Is Missed | Likes | Rating

TASK 1: Individual Performance Reports
For each unique person in the "Assigned To" column:

- Analyze their weekly data:
- Total content pieces submitted
- Number of missed deadlines (Is Missed = Yes)
- Average engagement (Likes)
- Highest- and lowest-performing content types (based on Likes)

Generate a personalized weekly report that includes:

- What they did well
- What they need to improve
- Two new content ideas for next week based on:
- Their strong formats
- Weak engagement areas
- Platform trends

A randomly assigned "Strategy Spin Card" from:

“Use a trending reel audio”

“Post a behind-the-scenes video”

“Ask a thought-provoking poll”

“Turn a meme into a carousel”

“Start your post with a question”

Send the report to each person via email. [email]


TASK 2: Manager Weekly Summary

Generate a summary email/report for the team manager that includes:

Stats:

- Total content published this week
- Overall engagement average
- Overall % of deadlines met (vs missed)
- Top 3 most used content types

Highlights:

- Top performer (by average Likes)
- Most improved member (compared to last week)
- Most effective content type (team-wide)

 Recommendations:

- If engagement is down, suggest new post types
- If one format is repeated too much, suggest variety

OUTPUT FORMAT

Send each team member a friendly message with their personal stats, tips, and creativity challenge.


OBJECTIVE
Keep the team aligned, inspired, and improving using weekly performance insights and creative nudges.

Encourage content diversity and reduce the manager workload.

Use Case & Impact

Use Case

Who It’s For:

  • Team leads or managers who oversee content creators, interns, or marketing teams
  • Startups or small businesses relying on consistent social media presence
  • Creative teams needing structured, weekly performance insights
  • Organizations that track content output via Google Sheets or Notion but struggle with timely feedback and motivation

What It Solves:

  • Lack of real-time, personalized feedback for team members
  • Time-consuming manual performance tracking and reviews
  • Team members unsure of what content types to try next
  • Missed deadlines and inconsistent creativity
  • No visibility into what’s actually working on social media

Impact

For Managers:

  • Saves 3–5 hours weekly on performance reviews
  • Eliminates the need to manually check content metrics
  • Offers a clear team-wide performance snapshot every week
  • Makes it easy to track progress and see who needs support
  • Encourages a growth-focused culture, not just output

For Team Members:

  • Receives personalized coaching without feeling micromanaged
  • Gains insight into what content worked or didn’t, with reasons
  • Feels more motivated and accountable thanks to weekly feedback
  • Gets new, fun content suggestions to stay creative
  • Encouraged to experiment and level up their strategy

For the Brand:

  • Increases content quality and consistency
  • Boosts overall engagement through data-informed strategy
  • Ensures the team is aligned with goals and creative direction
  • Promotes continuous improvement in a lightweight, human-friendly way