2025-07-06 03:25:46
This is a submission for the Runner H "AI Agent Prompting" Challenge
I created a Chrome extension called SideSave, designed for content professionals, SEOs, and anyone doing competitive analysis.
The idea came from my own workflow writing search-optimized content, where I often need to analyze how competitors position themselves in Google's sponsored results.
The problem is that these ads — although rich in copywriting, links, persuasive hooks, and keywords — disappear quickly. Just by refreshing the search or waiting a few minutes, the same ad may no longer appear.
This makes analysis and later study difficult.
SideSave solves this by saving ads with title, link, snippet, and a clear visual marker that it's a sponsored result.
I used Runner H as a tool to automatically and accurately generate the extension code.
With a well-structured prompt, I was able to generate:
manifest.json
content.js
scripts that dynamically detect Google results
storage logic using chrome.storage.local
messaging via chrome.runtime.sendMessage
All of this was generated with the help of Runner H, without me needing to hand-code everything.
Automation saved me hours of fine-tuning and accelerated the prototyping process.
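To give a concrete feel for how those generated pieces fit together, here is a minimal sketch of a content script in that spirit. It is illustrative only, not the exact code Runner H produced; the DOM selector and the message name are my own assumptions, since Google's sponsored-result markup changes frequently.
// content.js (illustrative sketch, not the generated code)
function collectSponsoredResults() {
  const ads = [];
  // Assumption: sponsored results sit in a container carrying data-text-ad
  document.querySelectorAll('[data-text-ad]').forEach((ad) => {
    const titleEl = ad.querySelector('[role="heading"]');
    const linkEl = ad.querySelector('a');
    ads.push({
      title: titleEl ? titleEl.textContent.trim() : '',
      link: linkEl ? linkEl.href : '',
      snippet: ad.textContent.trim().slice(0, 300),
      sponsored: true, // flag used for the visual "sponsored" marker in the UI
      savedAt: new Date().toISOString(),
    });
  });
  return ads;
}

function saveResults() {
  const ads = collectSponsoredResults();
  if (ads.length === 0) return;
  // Persist locally so the ads survive refreshes and disappearing results
  chrome.storage.local.get({ savedAds: [] }, ({ savedAds }) => {
    chrome.storage.local.set({ savedAds: savedAds.concat(ads) });
  });
  // Notify the rest of the extension (popup/background) that new ads were saved
  chrome.runtime.sendMessage({ type: 'ADS_SAVED', count: ads.length });
}

saveResults();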
The main use case for this extension is for digital marketers, content creators, copywriters, and SEO specialists who want to study how ads are structured on search pages — especially when creating campaigns, optimized landing pages, or analyzing competitors.
With SideSave, you can capture an ad's title, link, and snippet the moment it appears and come back to it whenever you need.
The impact is clear: it makes analytical and creative work easier by keeping key insights readily accessible in the browser.
2025-07-06 03:19:54
Over the past few days, I had the amazing opportunity to work on API testing using Keploy AI as part of the Keploy API Fellowship. In this post, I’ll walk you through everything I did — from building a Node.js API to running AI-powered tests and integrating it into a CI/CD pipeline.
🔧 My API Project
I built a Student Manager API using Node.js.
GitHub Repository:
https://github.com/kishorecodesinpython/student-api-server
🧪 Task 1 – API Testing with Keploy AI
Step 1: Created an OpenAPI Schema
I defined all endpoints and schemas in an OpenAPI document and served it with Swagger UI, hosted at /api-docs.
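If you have not wired this up before, the setup is roughly the following (a sketch assuming Express and swagger-ui-express, with the schema saved as openapi.json; your file names may differ):
// server.js — serve the OpenAPI schema through Swagger UI at /api-docs
const express = require('express');
const swaggerUi = require('swagger-ui-express');
const openApiDocument = require('./openapi.json'); // the schema with all endpoints

const app = express();
app.use(express.json());

// Interactive API docs
app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(openApiDocument));

app.listen(3000, () => console.log('API and docs available on http://localhost:3000'));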
Step 2: Ran Keploy in Docker
Since I’m using Windows, I had to use Docker with WSL2. I ran this command: docker compose up --build
This built and launched my API and Keploy CLI together inside containers.
Step 3: Recorded API Calls
I sent multiple requests using curl and Postman while Keploy captured the traffic in real time. Then I ran: keploy test ...
This generated multiple test cases from the actual traffic, along with a report for the recorded tests.
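For anyone reproducing this, the recorded traffic was nothing special: just ordinary requests against the running API. A rough JavaScript equivalent of the curl/Postman calls I sent looks like this (the /students routes and payload are placeholders, not my exact endpoints):
// Generate some traffic against the locally running API while Keploy records it
const BASE = 'http://localhost:3000';

async function generateTraffic() {
  // Create a student (placeholder route and body)
  await fetch(`${BASE}/students`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Ada Lovelace', course: 'CS101' }),
  });
  // List students so a GET gets captured as well
  const res = await fetch(`${BASE}/students`);
  console.log(await res.json());
}

generateTraffic().catch(console.error);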
Step 4: Debugging Docker & Environment Issues
This was not all smooth! I hit a few Docker and environment problems along the way and had to debug them before the setup ran reliably.
Step 5: CI/CD Integration
I integrated Keploy testing into a GitHub Actions pipeline, so the recorded tests now run automatically as part of CI.
🌐 Task 2 – Chrome Extension API Testing
I explored the Keploy Chrome Extension to test real-world APIs.
Site 1: DummyJSON
I captured a GET request to /products using the Chrome console and the Keploy extension.
Site 2: JSONPlaceholder
Tested endpoints like GET /posts, POST /posts, and validated response handling.
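For reference, the calls I recorded looked roughly like this; both APIs are public, so anyone can replay them while the extension records:
// DummyJSON: list products
fetch('https://dummyjson.com/products')
  .then((res) => res.json())
  .then((data) => console.log('products:', data.products.length));

// JSONPlaceholder: read posts, then create one and check the response
fetch('https://jsonplaceholder.typicode.com/posts')
  .then((res) => res.json())
  .then((posts) => console.log('posts:', posts.length));

fetch('https://jsonplaceholder.typicode.com/posts', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ title: 'hello', body: 'testing', userId: 1 }),
})
  .then((res) => res.json())
  .then((created) => console.log('created post id:', created.id));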
The Chrome Extension made it incredibly easy to record calls and generate test cases on the fly.
💡 What I Learned
📸 Final Screenshots I Shared:
🏁 Conclusion
Thanks to Keploy, I transitioned from writing tests manually to using AI for full automation. This fellowship was one of the most hands-on testing experiences I’ve had — and I’ll definitely be applying these workflows to future projects.
GitHub Repo:
https://github.com/kishorecodesinpython/student-api-server
Let me know what you think or if you want to connect!
2025-07-06 03:16:18
Deploying a web server on the cloud is easier than ever, especially with AWS. In this article, I’ll walk you through the steps I took to install NGINX on an Ubuntu-based EC2 instance using AWS.
Before starting, ensure you have an AWS account; if you already have one, log in using your root email.
Once you are logged in, create your instance as follows:
a. Type your instance name, e.g. my_nginx_instance (note: do not put spaces in the instance name).
b. Select Ubuntu as your OS (Operating System)
c. Select your instance type; t3.micro is enough for the purposes of this article.
d. Create your key pair. Key pairs are very important and must be kept safe, since you will need the private key file to connect to the instance. You can use an existing key pair if you have one, or create a new one.
e. Next, configure your security group. For the purposes of this article, allow SSH traffic from anywhere and allow HTTP traffic from the internet so you can reach the web server in a browser.
f. Then click on Launch Instance.
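g. Once the instance shows as running, connect to it over SSH with the key pair you saved, for example: ssh -i my_keypair.pem ubuntu@<your-instance-public-ip> (Ubuntu AMIs use the ubuntu user by default; substitute your own key file name and the public IP shown on the instance page).
h. On the instance, install NGINX with sudo apt update followed by sudo apt install nginx -y. NGINX starts automatically on Ubuntu, so opening your instance's public IP in a browser should show the default NGINX welcome page.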
I hope I was able to walk you through the basic steps of setting up an NGINX server on an AWS EC2 instance.
Thank you.
2025-07-06 03:15:05
As a junior student, I encountered a challenge while developing a campus second-hand trading platform: how to implement real-time chat functionality between buyers and sellers? Traditional HTTP request-response patterns clearly couldn't meet real-time communication needs. After deep research, I discovered a surprisingly elegant solution.
Project Information
🚀 Hyperlane Framework: GitHub Repository
📧 Author Contact: [email protected]
📖 Documentation: Official Docs
WebSocket protocol solves HTTP's unidirectional communication limitations by establishing full-duplex communication channels between clients and servers. The framework I chose impressed me with its WebSocket support, completely encapsulating the complex protocol upgrade process so developers can focus solely on business logic.
use hyperlane::*;
use hyperlane_macros::*;
#[ws]
#[get]
async fn chat_handler(ctx: Context) {
// Get WebSocket upgrade request key
let key: String = ctx.get_request_header(SEC_WEBSOCKET_KEY).await.unwrap();
// Handle client messages
let request_body: Vec<u8> = ctx.get_request_body().await;
// Send response to client
let _ = ctx.set_response_body(key).await.send_body().await;
let _ = ctx.set_response_body(request_body).await.send_body().await;
}
#[tokio::main]
async fn main() {
let server = Server::new();
server.host("0.0.0.0").await;
server.port(8080).await;
server.route("/chat", chat_handler).await;
server.run().await.unwrap();
}
This code demonstrates the framework's simplicity. With the #[ws] attribute, the framework automatically handles the WebSocket protocol upgrade, so developers never have to deal with the underlying handshake.
In my campus trading platform project, I needed to implement a multi-room chat system. Users could communicate with sellers in real-time on product detail pages, discussing product details, prices, and other information.
use hyperlane::*;
use hyperlane_macros::*;
use hyperlane_broadcast::*;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
use serde::{Deserialize, Serialize};
#[derive(Clone, Serialize, Deserialize)]
struct ChatMessage {
user_id: u32,
username: String,
content: String,
timestamp: chrono::DateTime<chrono::Utc>,
message_type: MessageType,
}
#[derive(Clone, Serialize, Deserialize)]
enum MessageType {
Text,
Image,
File,
System,
}
// Global chat room management. A `static mut` accessed through `unsafe` would be
// unsound here (two handlers could race on initialization), so OnceLock is used
// for lazy, thread-safe initialization instead.
use std::sync::OnceLock;

static CHAT_ROOMS: OnceLock<BroadcastMap<String>> = OnceLock::new();

fn get_chat_rooms() -> &'static BroadcastMap<String> {
    CHAT_ROOMS.get_or_init(BroadcastMap::new)
}

// Connection management
type ConnectionManager = Arc<RwLock<HashMap<String, Vec<String>>>>;

static CONNECTION_MANAGER: OnceLock<ConnectionManager> = OnceLock::new();

fn get_connection_manager() -> &'static ConnectionManager {
    CONNECTION_MANAGER.get_or_init(|| Arc::new(RwLock::new(HashMap::new())))
}
This design uses a global broadcast manager to handle multi-room chat, with each room having independent message channels.
#[ws]
#[get]
async fn chat_room_handler(ctx: Context) {
let room_id = ctx.get_route_params().await.get("room_id")
.unwrap_or("general").to_string();
let user_id = ctx.get_route_params().await.get("user_id")
.unwrap_or("anonymous").to_string();
let connection_id = format!("{}_{}", user_id, chrono::Utc::now().timestamp_millis());
// Register connection
register_connection(&room_id, &connection_id).await;
let chat_rooms = get_chat_rooms();
let mut receiver = chat_rooms.subscribe_unwrap_or_insert(&room_id);
// Send welcome message
let welcome_message = ChatMessage {
user_id: 0,
username: "System".to_string(),
content: format!("User {} joined the room", user_id),
timestamp: chrono::Utc::now(),
message_type: MessageType::System,
};
let welcome_json = serde_json::to_string(&welcome_message).unwrap();
let _ = chat_rooms.send(&room_id, welcome_json);
// Handle message sending and receiving
tokio::select! {
// Receive client messages
_ = async {
loop {
let message_data = ctx.get_request_body().await;
if !message_data.is_empty() {
if let Ok(message_str) = String::from_utf8(message_data) {
if let Ok(mut chat_message) = serde_json::from_str::<ChatMessage>(&message_str) {
chat_message.timestamp = chrono::Utc::now();
let broadcast_message = serde_json::to_string(&chat_message).unwrap();
let _ = chat_rooms.send(&room_id, broadcast_message);
}
}
}
}
} => {},
// Broadcast messages to client
_ = async {
while let Ok(message) = receiver.recv().await {
let _ = ctx.set_response_body(message).await.send_body().await;
}
} => {}
}
// Clean up connection
cleanup_connection(&room_id, &connection_id).await;
// Notify other users that someone left
let leave_message = format!("User {} left the room", user_id);
broadcast_to_room(&room_id, &leave_message).await;
}
async fn register_connection(room_id: &str, connection_id: &str) {
let manager = get_connection_manager();
let mut connections = manager.write().await;
connections.entry(room_id.to_string())
.or_insert_with(Vec::new)
.push(connection_id.to_string());
}
async fn cleanup_connection(room_id: &str, connection_id: &str) {
let manager = get_connection_manager();
let mut connections = manager.write().await;
if let Some(room_connections) = connections.get_mut(room_id) {
room_connections.retain(|id| id != connection_id);
if room_connections.is_empty() {
connections.remove(room_id);
}
}
}
async fn broadcast_to_room(room_id: &str, message: &str) {
let chat_rooms = get_chat_rooms();
let _ = chat_rooms.send(room_id, message.to_string());
}
To enhance user experience, I also implemented some advanced features:
// Message history
#[derive(Clone)]
struct MessageHistory {
messages: Arc<RwLock<HashMap<String, Vec<ChatMessage>>>>,
}
impl MessageHistory {
fn new() -> Self {
Self {
messages: Arc::new(RwLock::new(HashMap::new())),
}
}
async fn add_message(&self, room_id: &str, message: ChatMessage) {
let mut messages = self.messages.write().await;
messages.entry(room_id.to_string())
.or_insert_with(Vec::new)
.push(message);
}
async fn get_recent_messages(&self, room_id: &str, limit: usize) -> Vec<ChatMessage> {
let messages = self.messages.read().await;
if let Some(room_messages) = messages.get(room_id) {
room_messages.iter()
.rev()
.take(limit)
.cloned()
.collect::<Vec<_>>()
.into_iter()
.rev()
.collect()
} else {
Vec::new()
}
}
}
// Online user statistics
async fn get_online_users(room_id: &str) -> Vec<String> {
let manager = get_connection_manager();
let connections = manager.read().await;
if let Some(room_connections) = connections.get(room_id) {
room_connections.iter()
.map(|conn| conn.split('_').next().unwrap_or("unknown").to_string())
.collect::<std::collections::HashSet<_>>()
.into_iter()
.collect()
} else {
Vec::new()
}
}
// Message filtering and validation
fn validate_message(message: &ChatMessage) -> bool {
// Check message length (count characters rather than raw bytes)
if message.content.chars().count() > 1000 {
return false;
}
// Check for sensitive words
let sensitive_words = ["spam", "advertisement"];
for word in sensitive_words {
if message.content.to_lowercase().contains(word) {
return false;
}
}
true
}
To demonstrate the full real-time flow end to end, I also implemented the corresponding JavaScript client:
class ChatClient {
constructor(roomId, userId) {
this.roomId = roomId;
this.userId = userId;
this.ws = null;
this.messageHandlers = [];
}
connect() {
const wsUrl = `ws://localhost:8080/chat/${this.roomId}/${this.userId}`;
this.ws = new WebSocket(wsUrl);
this.ws.onopen = () => {
console.log('Connected to chat room:', this.roomId);
this.onConnectionOpen();
};
this.ws.onmessage = (event) => {
try {
const message = JSON.parse(event.data);
this.handleMessage(message);
} catch (e) {
console.error('Failed to parse message:', e);
}
};
this.ws.onerror = (error) => {
console.error('WebSocket error:', error);
};
this.ws.onclose = () => {
console.log('Disconnected from chat room');
this.onConnectionClose();
};
}
sendMessage(content, messageType = 'Text') {
if (this.ws && this.ws.readyState === WebSocket.OPEN) {
const message = {
  user_id: parseInt(this.userId),
  username: `User${this.userId}`,
  content: content,
  // The server-side ChatMessage struct requires a timestamp field;
  // the server overwrites it with its own time on receipt.
  timestamp: new Date().toISOString(),
  message_type: messageType,
};
this.ws.send(JSON.stringify(message));
}
}
handleMessage(message) {
this.messageHandlers.forEach((handler) => handler(message));
}
onMessage(handler) {
this.messageHandlers.push(handler);
}
onConnectionOpen() {
// Handle post-connection setup
this.sendMessage('Hello everyone!', 'System');
}
onConnectionClose() {
// Handle post-disconnection, can implement auto-reconnect
setTimeout(() => {
console.log('Attempting to reconnect...');
this.connect();
}, 3000);
}
disconnect() {
if (this.ws) {
this.ws.close();
}
}
}
// Usage example
const chatClient = new ChatClient('room123', '456');
chatClient.onMessage((message) => {
const messageElement = document.createElement('div');
messageElement.innerHTML = `
<strong>${message.username}:</strong>
${message.content}
<small>(${new Date(message.timestamp).toLocaleTimeString()})</small>
`;
document.getElementById('messages').appendChild(messageElement);
});
chatClient.connect();
// Send message
document.getElementById('sendButton').onclick = () => {
const input = document.getElementById('messageInput');
chatClient.sendMessage(input.value);
input.value = '';
};
After my campus trading platform went live, the real-time chat functionality was very well received by users, and the monitoring data backed up the framework's strong performance in real-time communication scenarios.
Project Repository: GitHub
Author Email: [email protected]
2025-07-06 03:14:54
This is a submission for the Runner H "AI Agent Prompting" Challenge
I built a Digital Marketing AI Coach using Runner H that acts as a weekly performance coach for each member of my digital marketing team. It helps track how each member is doing based on their content submissions, deadlines, and social media engagement, and sends them custom tips to improve, along with creative content suggestions for next week.
This agent not only saves me hours of reviewing content sheets and giving feedback but also keeps the whole team motivated and improving, week by week.
As someone managing a content team, I face the same problems every week: reviewing everyone's content sheets, chasing missed deadlines, and writing useful individual feedback takes hours, and engagement insights easily get lost along the way.
Instead of doing everything manually, I wanted a solution that could analyze performance, suggest improvements, and boost creativity for each member.
Here is the PDF generated by Runner H: Digital Marketing AI Coach
Here are screenshots of the automated mail that was sent and of Runner H in action:
https://drive.google.com/drive/folders/1LBSHpE7Hsr6VtE1YS0lPBweqTeaJdbno?usp=sharing
Runner H acts as my AI content strategist.
It reads our shared Google Sheet where we track content types, deadlines, engagement (likes), and whether each post was missed or not. Then for each person, Runner H:
Analyzes their submissions, deadlines, engagement (likes), and missed posts
Generates a personalized report with what went well, what to improve, and new content ideas
Sends that report directly to the team member via email
Then it creates a manager summary for me
Access the Google Sheet at:
[Insert Google Sheet Link]
This sheet contains the following columns:
Deadline | Posted Date | Content Type | Topic | Platform | Assigned To | Status | Is Missed | Likes | Rating
TASK 1: Individual Performance Reports
For each unique person in the "Assigned To" column:
- Analyze their weekly data:
- Total content pieces submitted
- Number of missed deadlines (Is Missed = Yes)
- Average engagement (Likes)
- Highest- and lowest-performing content types (based on Likes)
Generate a personalized weekly report that includes:
- What they did well
- What they need to improve
- Two new content ideas for next week based on:
- Their strong formats
- Weak engagement areas
- Platform trends
A randomly assigned "Strategy Spin Card" from:
“Use a trending reel audio”
“Post a behind-the-scenes video”
“Ask a thought-provoking poll”
“Turn a meme into a carousel”
“Start your post with a question”
Send the report to each person via email. [email]
TASK 2: Manager Weekly Summary
Generate a summary email/report for the team manager that includes:
Stats:
- Total content published this week
- Overall engagement average
- Overall % of deadlines met (vs missed)
- Top 3 most used content types
Highlights:
- Top performer (by average Likes)
- Most improved member (compared to last week)
- Most effective content type (team-wide)
Recommendations:
- If engagement is down, suggest new post types
- If one format is repeated too much, suggest variety
OUTPUT FORMAT
Send each team member a friendly message with their personal stats, tips, and creativity challenge.
OBJECTIVE
Keep the team aligned, inspired, and improving using weekly performance insights and creative nudges.
Encourage content diversity and reduce the manager workload.
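For readers curious what the TASK 1 analysis boils down to, here is a rough JavaScript sketch of the per-member aggregation over rows shaped like the sheet columns above. It is illustrative only; Runner H performs this analysis itself from the prompt.
// Summarize one member's week from sheet rows (illustrative sketch)
function summarizeMember(rows, member) {
  const mine = rows.filter((r) => r['Assigned To'] === member);
  const missedDeadlines = mine.filter((r) => r['Is Missed'] === 'Yes').length;
  const averageLikes =
    mine.reduce((sum, r) => sum + Number(r.Likes || 0), 0) / (mine.length || 1);

  // Average likes per content type, to find strongest and weakest formats
  const byType = {};
  mine.forEach((r) => {
    (byType[r['Content Type']] = byType[r['Content Type']] || []).push(Number(r.Likes || 0));
  });
  const ranked = Object.entries(byType)
    .map(([type, likes]) => [type, likes.reduce((a, b) => a + b, 0) / likes.length])
    .sort((a, b) => b[1] - a[1]);

  return {
    member,
    totalSubmitted: mine.length,
    missedDeadlines,
    averageLikes: Math.round(averageLikes),
    bestContentType: ranked.length ? ranked[0][0] : null,
    weakestContentType: ranked.length ? ranked[ranked.length - 1][0] : null,
  };
}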
Who It’s For: Digital marketing team leads and their content creators, copywriters, and social media managers.
What It Solves: Hours of manual sheet review, inconsistent feedback, and missed engagement insights.
For Managers: A weekly summary of output, engagement, and deadlines without reading every row of the sheet.
For Team Members: Personalized feedback, clear next steps, and fresh content ideas every week.
For the Brand: More consistent and more varied content that keeps improving based on real engagement data.