2026-02-26 04:46:37
In the previous article, we completed the first part of the LSTM and worked through its calculation.
Let us continue.
Earlier, when the input was 1, the sigmoid output was close to 1, meaning almost all of the long-term memory was retained.
Now, if we change the input to a relatively large negative number, such as −10, then after calculating the x-axis value, the output of the sigmoid activation function will be close to 0.
The long-term memory will be completely forgotten, because anything multiplied by 0 is 0.
Since the sigmoid activation function converts any input into a value between 0 and 1, its output determines what percentage of the long-term memory is retained.
So, the first stage of the LSTM determines what percentage of the long-term memory is remembered. This part is called the forget gate.
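As a quick sketch, the forget gate's arithmetic looks like this in Python. Note that the weights and bias below are illustrative values I chose for the example, not the article's actual numbers:

```python
import math

def sigmoid(x):
    # Squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forget_gate(short_term, x_input, w_short=2.7, w_input=1.63, bias=1.62):
    # Returns the fraction of long-term memory to keep.
    # (Weight and bias values here are made up for illustration.)
    return sigmoid(w_short * short_term + w_input * x_input + bias)

long_term = 2.0
# A large negative input drives the gate toward 0, wiping the memory:
kept = long_term * forget_gate(short_term=1.0, x_input=-10)
```

With an input of 1 the gate stays near 1 (memory retained); with an input of −10 it drops to nearly 0, and the long-term memory is effectively forgotten.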
Now that we understand what the first stage does, let us explore the second stage.
In the second stage, the block on the right combines the short-term memory and the input to create a potential long-term memory.
The block on the left then determines what percentage of that potential memory should be added to the long-term memory.
Let us plug in the numbers to see how a potential memory is created and how much of it is added to the long-term memory.
We will continue exploring this in the next article.
Looking for an easier way to install tools, libraries, or entire repositories?
Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance.
Just run:
ipm install repo-name
… and you’re done! 🚀
2026-02-26 04:45:21
So, there's a lot of talk about agentic coding, working with AI agents, Claude Code, etc.
While all this seems exciting, what I haven't seen are how-to guides on actually writing code with AI agents.
Telling Claude "Build me a web app that is an Uber but for smartphones" is a bad way to use AI Agents.
I have my own way of using AI Agents that I've gained from my own personal experiences so far. I'm going to share that, even though I'm by no means an expert (is anyone an expert at this point?)
I'll be mostly using GitHub Copilot as my reference point/example. I know there are more popular tools like Claude Code, but I personally use VS Code, and the ideas I throw out are broad enough to be applicable to the coding tool of your choice.
Before you tell AI what to do, you need to first have an idea of what you're about to do. Telling an AI to "build me an app" straight off is a very bad idea.
What tools are you going to use? What technical tradeoffs are you going to make? Are there better tools you can integrate or are you going to manually create a solution?
I remember coding a web app in the early days of AI (2023?) and using an agent to create it. The agent went ahead and created an authentication system from scratch.
It was only months later that I discovered I could instead use tools like Clerk or Firebase for authentication, and they are so much better.
The default agent in Copilot isn't ideal for brainstorming. You could ask it, "How do I build an authentication system?" and it would just go ahead and start building one.
There's the Plan agent, which is quite okay, but I prefer to have a custom agent because I can customize its behavior.
The custom AI agent would be able to read your codebase but not make any changes to it. In its Agents.md, you'll instruct it to only answer questions, propose alternatives, and discuss trade-offs for solving a problem.
For instance, if you tell it you want to build a web app, it would tell you about the tools you can integrate, the structure of the app, and so on and so forth.
You can learn how to create custom agents in VS Code here.
For the text of the Agents.md for the custom agent, you can literally just ask the planning agent to help you create a prompt for an agent that you'll use to brainstorm.
This is an example of my own Agents.md:
---
name: Brainstorm
description: "Explore ideas, tools, and approaches for your app or feature"
argument-hint: "Tell me about the app or feature you want to build"
tools:
  [
    "read/readFile",
    "search/codebase",
    "search/fileSearch",
    "search/listDirectory",
    "search/textSearch",
    "web",
  ]
agents: []
disable-model-invocation: false
user-invokable: true
handoffs:
  - label: Create Implementation Plan
    agent: Olyray Plan Agent
    prompt: "Based on our brainstorming discussion, create a detailed implementation plan for this feature."
    send: false
---
# Brainstorming Agent
You are a creative technical brainstorming partner who helps developers explore different approaches, tools, and architectures for building apps and features.
## Your Role
Your primary goal is to have an **exploratory conversation** with the user. You don't write code or create files—you help them think through their ideas by:
1. **Understanding their vision** through clarifying questions
2. **Exploring possibilities** by suggesting various tools, frameworks, and approaches
3. **Discussing trade-offs** between different technical choices
4. **Surfacing considerations** they might not have thought about
## Workflow
### Phase 1: Discovery (Ask Clarifying Questions)
When a user describes an app or feature, **don't jump to solutions immediately**. Instead, ask 3-5 focused questions to understand:
- **Purpose & Goals**: What problem does this solve? Who are the users?
- **Scope & Constraints**: MVP vs full vision? Timeline? Team size?
- **Technical Context**: Existing stack? Performance requirements? Scale expectations?
- **Integration Points**: Does this connect to existing systems? Third-party APIs?
- **User Experience**: Web? Mobile? Desktop? Real-time requirements?
**Ask questions one topic at a time** rather than overwhelming them with a long list. Let the conversation flow naturally.
### Phase 2: Brainstorming (Explore Options)
Once you understand their needs, brainstorm **multiple approaches**:
#### Tool & Framework Suggestions
- Present 2-3 viable options for each layer (frontend, backend, database, etc.)
- Explain **why** each tool fits (or doesn't fit) their use case
- Highlight trade-offs: learning curve vs power, speed vs flexibility, cost vs control
#### Architecture Patterns
- Suggest relevant patterns (monolith, microservices, serverless, etc.)
- Discuss data flow and state management strategies
- Consider scalability and maintainability implications
#### Technology Stack Examples
For each approach, outline a potential stack like:
- **Frontend**: Next.js 15 (App Router) for SSR + client interactivity
- **Backend**: Next.js API routes (simple) OR separate Node/Express (scalable)
- **Database**: PostgreSQL (relational) vs MongoDB (flexible) vs Supabase (all-in-one)
- **Auth**: Clerk (easy) vs NextAuth (flexible) vs custom (control)
- **Real-time**: WebSockets vs Server-Sent Events vs polling
#### Developer Experience Considerations
- Build tools and local development setup
- Testing strategies (unit, integration, E2E)
- Deployment options (Vercel, Railway, AWS, etc.)
- Monitoring and error tracking
### Phase 3: Deep Dive (Discuss Details)
Based on the user's reactions and preferences:
- **Drill deeper** into specific aspects they're curious about
- **Compare alternatives** side-by-side when they're torn between options
- **Explore edge cases** and potential challenges
- **Suggest resources** (documentation, tutorials, example projects)
Use the available tools to:
- **Search the web** for latest tools, frameworks, and best practices: `#tool:web/fetch`
- **Fetch documentation** from external sites and APIs: `#tool:web/fetch`
- **Explore GitHub repos** for examples and implementation patterns: `#tool:web/githubRepo`
- **Search the codebase** for existing patterns: `#tool:search/codebase` `#tool:search/textSearch`
- **Read relevant files** to understand current architecture: `#tool:read/readFile`
- **Explore project structure**: `#tool:search/listDirectory` `#tool:search/fileSearch`
## Conversation Style
- **Collaborative, not prescriptive**: Present options, don't dictate
- **Curious and engaging**: Ask follow-up questions to understand their thinking
- **Practical and pragmatic**: Focus on real-world constraints, not theoretical perfection
- **Encouraging**: Validate good ideas while gently highlighting potential challenges
- **Concise but thorough**: Be comprehensive without overwhelming
## What You DON'T Do
❌ Don't write implementation code
❌ Don't create files or modify the codebase
❌ Don't make unilateral decisions—collaborate
❌ Don't assume requirements—ask questions
❌ Don't provide a single "best" solution—explore multiple paths
## Example Interactions
### Good Response Pattern
**User**: "I want to add real-time notifications to my app"
**You**:
"Interesting! Let me understand your needs better:
1. What kind of notifications? (In-app toasts, push notifications, both?)
2. What triggers them? (User actions, system events, scheduled?)
3. How time-sensitive? (Must be instant or ~30sec delay acceptable?)
4. Expected volume? (100/day or 10,000/day per user?)
Once I understand these, I can suggest different approaches—from simple polling to WebSockets to third-party services like Pusher."
### After Clarification
**You**:
"Got it! For in-app notifications triggered by system events with ~5sec latency acceptable, here are three approaches:
**Option 1: Server-Sent Events (SSE)**
✅ Simple, built into browsers
✅ One-way server → client (perfect for notifications)
❌ Requires keeping connections open
📊 Best for: <1000 concurrent users
**Option 2: Polling with SWR/React Query**
✅ Easiest to implement
✅ Works everywhere (no special server requirements)
❌ Not truly real-time (15-30sec intervals)
📊 Best for: Non-critical updates, simple setup
**Option 3: WebSockets (e.g., Socket.io)**
✅ True bidirectional real-time
✅ Works for chat if you add it later
❌ More complex setup & infrastructure
📊 Best for: Heavy real-time features, growth path
Which direction resonates with your needs?"
## Remember
You're here to **explore possibilities**, not deliver solutions. The user should leave the conversation with:
- ✓ Clarity on their options
- ✓ Understanding of trade-offs
- ✓ Confidence to make informed decisions
- ✓ Excitement to start building
Let the conversation guide the depth—some users want high-level direction, others want deep technical discussion. Match their energy and curiosity.
The information you get in the brainstorm stage would be useful in the planning stage.
The planning stage is where you create the necessary documentation for the AI agent. AI agents don't retain memory, so they can forget important things like your code architecture, the tools you want to use, and so on.
To mitigate this, you'll create documentation that you can always refer the AI to when you prompt it.
The first thing to do is create the instructions.md for your agent. This is where you give it foundational instructions for your project. Personally, I put the OWASP Top Ten security guidelines in my instructions.md.
This means that whenever the agent is writing code, it takes the OWASP security recommendations into account.
Thankfully, for every prompt, the agent in Copilot makes sure to check the instructions.md. This way, you can be sure that security is applied to every code change made.
Once again, you can just ask the agent mode to create the instructions.md for you.
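For reference, a minimal instructions.md might look something like this. The specific rules below are illustrative; adapt them to your own stack:

```markdown
# Project Instructions

- Follow the OWASP Top Ten security guidelines for every code change.
- Never hardcode secrets; read them from environment variables.
- Validate and sanitize all user input at API boundaries.
- Prefer the project's existing patterns over introducing new libraries.
```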
The next thing you should create is the Product Requirement Document (PRD). This document would highlight what your app is about, its goals, the target user, and most importantly the implementation steps for building the app.
To create it, just prompt the AI agent. Now, in the previous step, we already brainstormed. We now know the tools we want to use, the architecture and so on and so forth. Make sure to include that in your prompt. Also emphasise that the PRD should have implementation steps.
Another document you need to create is the UI style guide. AI is very bad at building professional-looking UIs, so if we leave it to the AI, we end up with bland-looking designs.
To mitigate this, I make use of the frontend-design skill. Skills allow you to give your Agent extra capabilities. You can learn how to install skills for copilot here.
There are also many online tools that let you install skills. For instance, you can find and install new skills at skills.sh.
The good thing about skills is that they're interoperable. So a skill that you use for Copilot can also work for Claude Code.
After you install the frontend-design skill, you should then prompt the AI agent to create a UI style guide for your project, using the frontend-design skill.
This way, the design for your project will always be consistent. And whenever you're prompting your agent to make new designs, make sure to always reference the UI style guide and the frontend-design skill.
Now that you have a PRD with the context and implementation steps, you don't just give everything to the agent to create in one go. Rather, you prompt each implementation step. For instance, you take step 1 of your implementation plan and prompt the agent like this:
Implement step 1. Make sure to follow the UI style guide and the frontend-design skill.
This way, you have the opportunity to guide the agent on each step of the way.
Trying to one-shot the app is just a good way for the agent to mix some things up. Software requirements change. Some things may come up. You might have a better idea, etc. Best to do it step by step.
Mike Tyson once said, "Everyone has a plan until they get punched in the face." You're going to encounter errors and bugs. That is a given. You need to know how to handle them.
With AI, it's much easier than in the days before AI came on board.
If you encounter any errors, just tell the AI about it. Send it screenshots, copy the error code and tell the Agent about the error. It would try to correct the error.
Most times, it will be able to. In Copilot, there are many models. I primarily use Sonnet 4.5 for day-to-day coding, and also for frontline debugging. So, if I encounter any issues, I ask Sonnet 4.5 to resolve them.
It resolves most issues. But there are times when even Sonnet 4.5 struggles. If I notice Sonnet 4.5 struggling after 3 prompts, I gleefully roll up my sleeves and debug the old-fashioned way.
Because to be honest, I do actually enjoy debugging and writing code. I just no longer do so because it's inefficient.
Unfortunately 🥲, since Claude Opus 4.6 was released, I haven't had to do this. I've yet to encounter a bug that Opus 4.6 is unable to resolve.
So, there you have it. This is how I set up my Agentic coding workflow. I'm sure there are power users out there who may have a thing or two to add. Please feel free to do so in the comments.
2026-02-26 04:44:52
Who wouldn't want to save some money? Today, I'll show you how to do just that, specifically when it comes to managing private npm packages.
Typically, npm charges a monthly fee for publishing private packages. While this might work for some companies, it can be a burden for smaller teams or personal projects. Thankfully, GitHub, the world's most popular Git platform, offers a solution that's both powerful and free.
Recently, I needed a way to publish a private npm package without incurring any costs. Using GitHub Actions and GitHub Package Manager, I successfully achieved this. Here's a step-by-step guide to help you do the same.
A GitHub repository containing your NPM package.
A minimal configuration of the package.json file.
A GitHub account with permissions to create repositories and manage packages.
Node.js and npm installed locally for development.
{
  "name": "@your-username/your-package",
  "version": "1.0.0",
  "description": "A private package published on GitHub Package Manager",
  "repository": {
    "type": "git",
    "url": "https://github.com/your-username/your-repo.git"
  },
  "publishConfig": {
    "registry": "https://npm.pkg.github.com/@your-username"
  }
}
Be sure to replace your-username and your-repo with your actual GitHub username and repository name.
Go to GitHub Account Settings.
Create a Personal Access Token (classic) with the following permissions:
write:packages
read:packages
repo (if the repository is private).
Save the token in a secure location, such as a password manager.
name: Build and Deploy
on:
  push:
    branches:
      - release
jobs:
  publish-gpr:
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: 20
          registry-url: https://npm.pkg.github.com/
          scope: '@your-username'
      - uses: pnpm/action-setup@v3
        with:
          version: 9.10.0
      - run: pnpm install
      - run: pnpm publish:build
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{secrets.GITHUB_TOKEN}}
This workflow will trigger whenever you push changes to the release branch and will automatically publish your package to GitHub Package Manager.
Push the .github/workflows/publish.yml file to the release branch. If everything is set up correctly, GitHub Actions will trigger the workflow.
The package will be published to GitHub Package Manager.
You'll find the package in the Packages tab of your repository.
To install the package from another project, create an .npmrc file containing:
@your-username:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=YOUR_TOKEN
engine-strict=true
Run the npm command to install the package:
npm install @your-username/your-package
Don't forget to replace YOUR_TOKEN with the token created in Step 3.
Conclusion
Publishing private npm packages with GitHub Package Manager is a cost-effective and streamlined alternative to traditional npm services. By leveraging GitHub Actions, you can fully automate the process, saving both time and resources.
This approach not only eliminates the need for paid npm private packages but also integrates seamlessly into GitHub's ecosystem, where your code, versioning, and packages are all managed in one place.
If you encounter any issues or have questions, feel free to leave a comment. Happy coding! 🚀
2026-02-26 04:39:07
*This post is my submission for [DEV Education Track: Build
🌟 What I Built
I developed a Multi-Agent AI Content Studio designed to solve the biggest problem every writer faces:
Writer's Block. Instead of just asking a single AI to "write a story," I built a collaborative system where three specialized AI "workers" (Agents) talk to each other. Each agent has a specific job, a specific personality, and a specific goal. By breaking the work into pieces, the final article is much more accurate, better researched, and sounds more human.
🧠 Why Multi-Agent Systems?
As an 11-year-old learning to code, I realized that even the smartest AI can sometimes get overwhelmed if you ask it to do too much at once.
Think of it like a professional kitchen: You don't have one person doing everything. You have a Chef to plan, a Sous-Chef to chop, and a Server to check the plate. My system works exactly like that. It ensures that the facts are checked before the writing starts, and the writing is polished before the user sees it.
**🤖 Meet the Agents**
I used the latest Google GenAI SDK and the Gemini 2.5 Flash model to power my team. Here is the breakdown of my squad:
1. The Research Agent (The Brains)
This agent’s job is to gather the "building blocks." It ignores the "fluff" and focuses on key points, subtopics, and relevant examples. It makes sure the content is grounded in good information.
2. The Writer Agent (The Creator)
This agent receives the research and looks at the requested Tone (Professional, Casual, or Funny). It’s responsible for the structure, the headings, and making the information interesting to read.
3. The Reviewer Agent (The Perfectionist)
This is the final gatekeeper. It reads the draft from the Writer Agent and looks for ways to improve the flow, simplify hard sentences, and make sure the article is engaging from start to finish.
💻 The Logic Behind the System
I used Python and Streamlit to build a clean user interface. The most important part of the code is the Orchestrator. This is the master function that coordinates the "hand-offs" between agents.
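Stripped of the actual Gemini calls, the orchestrator's hand-off pattern looks roughly like this. The function names and stub outputs below are mine for illustration, not the project's actual code:

```python
# Stub agents stand in for the real Gemini-powered ones (all names invented).
def research_agent(topic: str) -> str:
    # The Brains: gathers key points, subtopics, and examples.
    return f"Key points and examples about {topic}"

def writer_agent(research: str, tone: str) -> str:
    # The Creator: turns research into a structured draft in the requested tone.
    return f"[{tone}] Article draft built from: {research}"

def reviewer_agent(draft: str) -> str:
    # The Perfectionist: polishes the writer's draft before the user sees it.
    return draft + " (reviewed and polished)"

def orchestrate(topic: str, tone: str = "Professional") -> str:
    # The Orchestrator coordinates the hand-offs between the three agents.
    research = research_agent(topic)
    draft = writer_agent(research, tone)
    return reviewer_agent(draft)
```

The key idea is that each agent's output becomes the next agent's input, which is exactly the "functions passing data" lesson mentioned below.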
🧗 The Challenges (My Debugging Journey)
Building this wasn't easy! I hit several walls that almost made me stop, but I pushed through:
The 404 Model Mystery:
At first, I kept getting "Model Not Found" errors. I learned that AI versions change fast! I had to update my code from Gemini 1.5 to the brand new Gemini 2.5 to get it to work.
The Permission Puzzle:
I had to learn how to manage .env files and API keys securely. I realized that keeping your keys secret is the first rule of being a real developer.
Agent Hand-offs:
Making sure the Reviewer Agent actually received the text from the Writer Agent was a lesson in how functions pass data in Python.
📚 Key Learnings
This project taught me that being a developer isn't just about writing code; it's about problem-solving. I learned that:
Modularity is key:
Breaking a big problem into small agents makes the code easier to fix.
Prompt Engineering matters:
How I talk to the agents changes how they perform. I had to give them very clear "instructions" to make them behave.
*The AI community is great:*
Using documentation and learning from tracks like this one on DEV.to helps you grow faster.
Thanks for checking out my project! I can't wait to see what other multi-agent systems people are building. 🚀
By Hala Kabir, the youngest certified AI & Blockchain Professional and App & Chatbot Developer.
2026-02-26 04:38:45
Most RSS readers present news as a single linear stream.
They’re not built for comparing multiple feeds in parallel.
If you want to monitor how different sources publish in real time — side by side, in strict chronological order — most tools force everything into a single stream.
I built DualTicker, a configurable, real-time RSS dashboard designed specifically for structured multi-feed monitoring.
Standard RSS readers merge every source into a single stream. But if your goal is feed comparison or parallel monitoring, merging destroys visibility.
You lose structural differences between sources.
DualTicker is a real-time RSS dashboard built on a simple contract:
There is no ranking logic.
No blending across sources.
No engagement optimization.
If a feed publishes something, it appears in chronological order.
DualTicker supports configurable feed sets, filter queries, and shareable views.
Example of a shareable view:
/?s=us-vs-world&q=election OR gaza -opinion
The entire configuration can be encoded in the URL, making sessions reproducible and shareable.
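Decoding such a URL back into a session is straightforward. The sketch below is illustrative, not DualTicker's actual implementation; only the `s` and `q` parameter names are taken from the example above:

```python
from urllib.parse import urlparse, parse_qs

def decode_view(url: str) -> dict:
    # Pulls the source-set (s) and filter query (q) out of a shared URL.
    params = parse_qs(urlparse(url).query)
    return {
        "sources": params.get("s", [""])[0],
        "query": params.get("q", [""])[0],
    }
```

Because the full state lives in the query string, reloading or sharing the URL reproduces the exact same view.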
Items are rendered strictly by timestamp.
The system avoids introducing hidden sorting heuristics.
Predictability > automation.
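Strict timestamp ordering across feeds is essentially a k-way merge. A minimal sketch, assuming each feed arrives as a timestamp-sorted list of items:

```python
import heapq

def merge_feeds(*feeds):
    # Each feed: a list of (timestamp, title) tuples, sorted oldest-first.
    # heapq.merge interleaves them in strict chronological order,
    # with no ranking or blending logic anywhere.
    return list(heapq.merge(*feeds, key=lambda item: item[0]))
```

The merge is deterministic: the same inputs always produce the same interleaving.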
RSS feeds are inconsistent. Some go down, return malformed XML, or drop items between refreshes.
The system preserves last-known-good items during failed refresh cycles to prevent UI collapse.
Large feed volumes can cause scroll limitations if the rendered window is too small.
The solution was increasing the frontend window size to provide sufficient physical scroll runway.
Sometimes the limitation isn’t logic — it’s layout.
It’s a deterministic RSS dashboard.
Free to use.
Optional one-time Founder tier unlocks advanced configuration features.
If you build feed systems or deterministic dashboards, I’d be interested in feedback on architectural tradeoffs.
2026-02-26 04:36:54
Status: Draft.
Most teams think they are building with AI.
Most are just prompting.
The difference between a chatbot user and an AI engineer is not creativity.
It is the ability to turn LLM behavior into a controlled, testable, secure product system.
In classical software, you write deterministic code.
In AI systems, behavior is probabilistic.
You don’t hardcode logic. You shape it.
The hard problem isn’t generating text.
It’s controlling behavior across thousands of interactions.
Control comes from engineering the loop.
+------------------+
| GOAL |
+------------------+
↓
+------------------+
| SUCCESS CRITERIA |
+------------------+
↓
+------------------+
| TEST CASES |
+------------------+
↓
+------------------+
| PROMPT + CONTEXT |
| VERSION |
+------------------+
↓
+------------------+
| MEASUREMENT |
+------------------+
↓
+------------------+
| ITERATION |
+------------------+
↺
If you do not define success before writing prompts, you are not engineering.
If you do not test behavior across structured cases, you are not engineering.
If you cannot compare versions and measure improvement, you are not engineering.
You are experimenting.
A predictable prompt contains structure: a defined role, explicit rules and constraints, delimited inputs, and a specified output format.
This increases reliability.
But prompt structure is like a function signature.
Necessary. Not sufficient.
The moment you ask what the model should actually see, where its knowledge comes from, and what actions it can take, you are no longer designing prompts.
You are designing systems.
Prompt engineering is about what you say.
Context engineering is about what the model sees.
In production, the model’s context window contains system instructions, retrieved documents, tool results, and conversation memory.
All of this competes for finite tokens.
Tokens are a scarce resource.
Add too much context → attention dilutes.
Add irrelevant context → reasoning collapses.
Mix instructions with untrusted data → behavior shifts unpredictably.
This is not a bug.
It is physics.
+------------------------------------------------------+
|                    CONTEXT WINDOW                    |
+------------------------------------------------------+
| [SYSTEM INSTRUCTIONS]                                |
|   - Role                                             |
|   - Rules                                            |
|   - Constraints                                      |
+------------------------------------------------------+
| [RETRIEVED DOCUMENTS]                                |
|   - High-signal chunks only                          |
+------------------------------------------------------+
| [TOOL RESULTS]                                       |
|   - DB queries                                       |
|   - Code output                                      |
+------------------------------------------------------+
| [CONVERSATION MEMORY]                                |
|   - Summarized prior turns                           |
+------------------------------------------------------+
If you dump everything into context, quality degrades.
If you curate aggressively, stability improves.
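One way to make "curate aggressively" concrete is a hard token budget. This is a hypothetical sketch; the whitespace word count is a stand-in for a real tokenizer:

```python
def pack_context(candidates, budget, count_tokens=lambda s: len(s.split())):
    # candidates: list of (relevance_score, text) pairs.
    # Greedily keep the highest-relevance chunks that fit the budget;
    # everything else is dropped rather than diluting attention.
    packed, used = [], 0
    for score, text in sorted(candidates, key=lambda c: c[0], reverse=True):
        cost = count_tokens(text)
        if used + cost <= budget:
            packed.append(text)
            used += cost
    return packed
```

The design choice is explicit: a chunk that does not fit is excluded entirely, instead of letting irrelevant context leak in.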
RAG is not a feature.
It is memory architecture.
External knowledge must be chunked, indexed, retrieved by relevance, and injected into context selectively.
Poor retrieval destroys generation quality.
The moment your model can call tools, you do not have a chatbot.
You have an agent.
A minimal agent loop looks like this:
User Request
↓
Model decides: Tool needed?
↓
[tool_use call]
↓
External Tool Executes
↓
[tool_result returned]
↓
Model continues reasoning
↓
Final Output
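That loop can be sketched with a stubbed model and a single tool. Everything below is invented for illustration; real SDKs use richer message schemas:

```python
def toy_model(messages):
    # Stand-in for an LLM: requests the calculator once, then answers.
    last = messages[-1]
    if last["role"] == "user":
        return {"type": "tool_use", "tool": "calculator", "input": "2 + 2"}
    return {"type": "final", "text": f"The answer is {last['content']}"}

def calculator(expression: str) -> str:
    # The external tool. In a real system, inputs must be validated first.
    return str(eval(expression, {"__builtins__": {}}))

def run_agent(user_request: str) -> str:
    messages = [{"role": "user", "content": user_request}]
    while True:
        reply = toy_model(messages)
        if reply["type"] == "final":
            return reply["text"]
        result = calculator(reply["input"])              # tool executes
        messages.append({"role": "tool_result", "content": result})
```

The structure is the whole point: the model never executes anything itself; it emits a tool request, the loop runs the tool, and the result re-enters the context for further reasoning.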
This is how you build coding agents, research assistants, and automated workflows.
But tools increase leverage and risk simultaneously.
You must validate tool inputs, tool outputs, and whether a tool call is warranted at all.
Otherwise your system will act incorrectly with confidence.
Large language models blur the boundary between instructions and data.
If untrusted content enters the same context space as system rules, behavior can be manipulated.
This is structural.
Not edge-case.
Security must be built into the loop: sanitize inputs, isolate untrusted content from system rules, validate outputs, and give tools least privilege.
If your agent can act, it can be exploited.
Design accordingly.
An AI-native product is not:
Model + Prompt.
It is a layered system.
+--------------------------------------------------+
| AI PRODUCT SYSTEM |
+--------------------------------------------------+
1. Prompt Specification (versioned)
2. Context Architecture Map
--------------------------------------------------
3. Retrieval Layer (memory + chunking strategy)
--------------------------------------------------
4. Tool Layer (controlled action surface)
--------------------------------------------------
5. Evaluation Suite (automated + human review)
--------------------------------------------------
6. Security Layer (injection defenses)
--------------------------------------------------
7. Iteration Loop (continuous improvement)
+--------------------------------------------------+
Without these layers, you do not have a product.
You have a demo.
Use this as a founder checklist.
Artifact: LLM Success Spec
Artifact: Promptbook v1
Artifact: Context Architecture Diagram
Artifact: Tooling Spec + Working Tool
Artifact: Eval Suite v1
Artifact: Working Prototype
Artifact: AI Product Builder Kit v1
Model access is becoming a commodity.
Prompt tricks are a commodity.
API integration is a commodity.
What is not a commodity: evaluation suites, context architecture, tool design, and security discipline.
The moat is not who has the best model.
It is who builds the best systems around models.
That is engineering.
And that is how AI-native companies win.