The Practical Developer
A constructive and inclusive social network for software developers.

AWS EC2 vs Lambda | Pros and Cons?

2026-04-23 15:39:58

Cost

  1. EC2: you pay for the instance whether or not it is busy, like paying rent.
  2. Lambda: you are charged per invocation and for the memory used during execution.
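To make the two pricing models concrete, here is a rough breakeven sketch. The rates below are illustrative placeholders I chose for the example, not current AWS prices; check the AWS pricing pages for real numbers.

```javascript
// Illustrative monthly cost comparison. All rates are made-up placeholders.
const EC2_HOURLY = 0.05;                   // $/hour for an always-on instance ("rent")
const LAMBDA_PER_GB_SECOND = 0.0000166667; // $ per GB-second of execution
const LAMBDA_PER_MILLION_REQ = 0.20;       // $ per 1M invocations

function ec2MonthlyCost() {
  // You pay for every hour, busy or idle.
  return EC2_HOURLY * 24 * 30;
}

function lambdaMonthlyCost(invocations, avgMs, memoryGb) {
  // You pay only for invocations and the memory-time they consume.
  const gbSeconds = invocations * (avgMs / 1000) * memoryGb;
  return (
    gbSeconds * LAMBDA_PER_GB_SECOND +
    (invocations / 1e6) * LAMBDA_PER_MILLION_REQ
  );
}
```

With these placeholder rates, an always-on small instance costs about $36/month, while a million 100 ms invocations at 128 MB come to well under a dollar, so Lambda wins at low or bursty traffic and EC2 only becomes cheaper at sustained high volume.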

Flexibility

EC2 offers more flexibility: you control the operating system, runtime, and networking, while Lambda limits you to supported runtimes and execution constraints.

Integration

Integration with other AWS services is easy. Out of the box, EC2 works with Elastic Load Balancers, Auto Scaling Groups, Target Groups, and related scaling features.

Speed

With Lambda, APIs are easy to develop, and building functionality takes less time.

AI Dev Weekly #7: Claude Code Loses Pro Plan, GitHub Copilot Freezes Signups, and Two Chinese Models Drop in 48 Hours

2026-04-23 15:39:38

AI Dev Weekly is a Thursday series where I cover the week's most important AI developer news, with my take as someone who actually uses these tools daily.

The flat-rate AI subscription era ended this week. Anthropic pulled Claude Code from the $20 Pro plan. GitHub froze all new Copilot signups. And while Western companies were busy raising prices, two Chinese labs dropped frontier models within 48 hours of each other. Let's get into it.

Claude Code removed from Pro plan

Anthropic quietly removed Claude Code from the $20/month Pro plan on April 21. The pricing page now shows an "X" next to Claude Code for Pro subscribers. Access starts at Max ($100/month).

Anthropic's head of growth called it "a small test on ~2% of new prosumer signups." But the public pricing page already reflects the change for everyone. Sam Altman's response on X: "ok boomer."

The real reason: engagement per subscriber surged after Opus 4, Cowork, and long-running agents. Pro subscribers at $20/month are consuming 10x or more in token value. The math doesn't work.

My take: This was inevitable. Unlimited AI coding for $20/month was never sustainable. If you're on Pro, you still have access for now. But start planning for either Max ($100/month) or cheaper alternatives like Kimi K2.6 ($0.60/M tokens) or MiMo V2.5 Pro ($1/M tokens).

GitHub Copilot freezes all new signups

GitHub paused new registrations for Copilot Pro, Pro+, and Student plans on April 20. Only the Free tier accepts new users. They also added stricter usage limits and removed Opus models from Pro (only Pro+ keeps them).

The reason: "unsustainable compute demands from AI-powered coding agents." Same story as Anthropic. Agentic AI usage broke the pricing model.

My take: Two of the three biggest AI coding platforms raised prices or froze signups in the same week. The third (Cursor) is probably next. The era of $10-20/month unlimited AI coding is over. Open-source and Chinese models are the hedge.

Kimi K2.6 launches with 300-agent swarm

Moonshot AI released Kimi K2.6 on April 20. The highlights:

  • 80.2% SWE-Bench Verified (matching Claude Opus 4.6)
  • 300 sub-agent swarm (up from 100 in K2.5)
  • 54.0% on HLE-Full with tools (beating GPT-5.4's 52.1%)
  • $0.60/M input tokens (25x cheaper than Opus)
  • Modified MIT license (open weights)
  • Available on OpenRouter and Cloudflare Workers AI

The agent swarm is the standout feature. K2.6 scored 86.3% on BrowseComp (Agent Swarm) vs GPT-5.4's 78.4%. For coding agent workloads, K2.6 is the strongest open-source option available.

My take: K2.6 is the first open-source model to genuinely match Opus 4.6 on coding benchmarks. At 25x cheaper. The timing with Anthropic's price hike is not a coincidence. See our K2.6 vs Opus 4.6 comparison.

MiMo V2.5 Pro: 40-60% fewer tokens than Opus

Xiaomi dropped MiMo V2.5 Pro on April 22, just 48 hours after K2.6. The headline number: 40-60% fewer tokens than Opus 4.6 at comparable capability.

  • 57.2% SWE-bench Pro
  • 64% Pass^3 on ClawEval with only ~70K tokens per trajectory
  • 1,000+ tool calls in single sessions
  • Built a complete SysY compiler in Rust in 4.3 hours (672 tool calls, 233/233 tests)
  • Works with Claude Code as a harness
  • Coming open-source soon

The token efficiency is the real story. Same capability, half the tokens, fraction of the price. The V2.5 Standard model adds native multimodal (image, audio, video) and actually outperforms V2-Pro on some agent benchmarks.

My take: V2.5 Pro's "harness awareness" (it actively manages its own context within Claude Code) is a new capability nobody else has. Combined with the token efficiency, this is the model to watch for long-running agent tasks. See our full V2.5 series guide.

The flat-rate subscription is dead

Three data points in one week:

  1. Anthropic removes Claude Code from $20 Pro
  2. GitHub freezes all Copilot signups
  3. Both cite "unsustainable compute demands"

The pattern is clear. Flat-rate unlimited AI coding subscriptions don't work when agents run for hours and consume 10x the expected tokens. Expect token-based billing everywhere within 6 months.

The winners: Chinese models (Kimi K2.6, MiMo V2.5 Pro, Qwen 3.6 Plus) that were already priced per-token at 10-25x less than Western alternatives. If you haven't explored them yet, now is the time. See our Chinese AI models ranking.

Quick hits

  • OpenAI Workspace Agents: ChatGPT now has workspace agents for enterprise teams. Not relevant for individual developers yet.
  • OpenAI Privacy Filter: New privacy filter for enterprise data. Good for compliance, not a developer tool.
  • Vercel data breach: Vercel disclosed a breach that exposed some user data. Check your account if you use Vercel.

What I'm watching next week

  • Whether Claude's serverless function limit forces architectural decisions (it broke one of our race agents)
  • How MiMo V2.5 Pro performs in real-world agent tasks (we just upgraded our Xiaomi race agent to V2.5 Pro)
  • Whether any race agent gets its first paying customer

See you next Thursday. If you found this useful, subscribe to AI Dev Weekly for the full archive.

Originally published at https://www.aimadetools.com

I Actually Ran TestSprite: It Gives Useful Feedback, but Chinese Localization Still Has Obvious Gaps

2026-04-23 15:37:24

This time I logged into TestSprite and ran it against an existing project instead of stopping at the landing page. My concerns were direct: can it quickly produce actionable test feedback, are the failure messages readable enough, and is it smooth for Chinese-speaking developers?

The project I opened is called wildbyte, targeting https://jsonplaceholder.typicode.com. The overall project status was 7/10 Pass. I focused on one failing case: Create Post with Invalid Data Type.

The expectation for this case is clear: when data of the wrong type is submitted to /posts, the API should return 400. The actual result:

Expected status code 400 but got 201

This result is valuable in itself. It shows the endpoint still creates the resource on bad input, meaning the backend is more permissive than the test expects. Whether the problem lies in the real API, the mock behavior, or the data contract, this failure immediately pushes the developer to re-examine the endpoint's input boundaries.

What I saw on the detail page

The detail page for this failing case is well structured:

  • Left: priority, connected URL, and test description
  • Middle: editable Python test code
  • Right: execution results
  • Bottom: four panels, Error / Trace / Cause / Fix

This matters because many AI testing tools only give you a "failed" status. TestSprite at least breaks a failure into layers a developer can actually consume. Open one failure record and you can continue debugging straight from the page.

Three localization observations

1. Chinese input works correctly

I typed the Chinese characters "中文" directly into the search box on the Web Tests list page. The input was accepted normally, with no mojibake and no component glitches. For Chinese, Japanese, and Korean users this is a baseline capability, but it has to be reliable.

2. Timestamps lack timezone semantics

The project list shows timestamps like 2026-04-19 23:42, while the detail page shows only 2026-04-19. The design is readable enough, but timezone information is missing, and the time granularity differs between the list and detail pages. Teams spread across timezones will keep doing mental conversions when reading test results, CI output, and logs.

3. The core workbench is still English-only

The core buttons and result panels I saw are almost all in English, e.g. Save & Run, Connected URL, Priority, Cause, Fix. English-speaking developers pick this up naturally, and Chinese teams can still use it, but it adds a layer of comprehension cost. If the product wants to reach broader local markets, the dashboard and result explanations deserve a multilingual interface.

My honest assessment of TestSprite

I credit what is currently its most valuable trait: it puts the test goal, test code, execution result, and failure explanation into one continuous workflow. Once you open a project, you can quickly locate a failure and understand its context.

For people who write a lot of code with AI, tools like this will matter even more. Code generation keeps getting faster, so verification will increasingly become the bottleneck. TestSprite already moves that work up front and organizes results into a format developers can consume directly.

My direct suggestions for it

  1. Add timezone labels to timestamps and support the viewer's local timezone
  2. Add a multilingual interface for the dashboard, detail pages, and error panels
  3. Add locale-related test templates, e.g. dates, currency, timezones, Unicode input, and translation gaps

My conclusion after this hands-on run is clear: TestSprite already provides useful development feedback; the next most valuable improvement is the localization experience.

Tested: TestSprite Web Tests

Test project: wildbyte

Target URL: https://jsonplaceholder.typicode.com

Project status: 7/10 Pass

Key failing case: Create Post with Invalid Data Type

Key result: Expected status code 400 but got 201

📦 Service Workers in PWAs: How to Cache Content and Supercharge Web App Performance

2026-04-23 15:36:56

Imagine this.
You open a web app on a weak network. Instead of endless loading, it opens instantly. Pages feel smooth. Key content is available even offline. The experience feels closer to a native app than a traditional website.

What changed?
Behind the scenes, a quiet but powerful technology is doing the heavy lifting: Service Workers.

If you’re building Progressive Web Apps (PWAs), understanding service workers can dramatically improve speed, reliability, and user experience.

In this guide, you’ll learn what service workers are, how they work, practical caching strategies, performance tips, and why they matter in modern web development.

🚀 What Are Service Workers?

A Service Worker is a JavaScript file that runs separately from your main web page, in the background of the browser.
It acts like a smart layer between your app and the network.

That means it can intercept requests and decide:

  • Should this come from the internet?
  • Should this come from cache?
  • Should we update content in the background?
  • Should we show an offline page?

This gives developers powerful control over performance and reliability.
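One way to picture those decisions is as a routing function from request URL to caching strategy. This is a toy sketch of my own (the patterns and extension list are arbitrary choices, not part of any service worker API):

```javascript
// Toy routing: pick a caching strategy based on what is being requested.
function chooseStrategy(url) {
  // Static assets rarely change: serve from cache for speed.
  if (/\.(png|jpe?g|svg|woff2?|css|js)$/.test(url)) return "cache-first";
  // Live data should be fresh whenever the network allows it.
  if (url.includes("/api/")) return "network-first";
  // Everything else: show something fast, refresh in the background.
  return "stale-while-revalidate";
}
```

In a real service worker, this decision would live inside the `fetch` event handler, dispatching each intercepted request to the matching strategy.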

🔥 Why Service Workers Matter

Without service workers, many web apps depend fully on live internet requests. That can create:

  • Slow loading times
  • Broken experiences on weak networks
  • Repeated downloads of the same assets
  • Poor user trust

With service workers, your app can become:

✅ Faster
✅ More reliable
✅ Offline-capable
✅ More engaging
✅ Better for returning users

This is one reason PWAs feel modern and app-like.

⚡ How Service Workers Improve Performance

  1. Faster Repeat Visits
    Once important files are cached, users don’t need to re-download everything every time.
    The result:
    Faster loading
    Less bandwidth usage
    Better experience

  2. Better Slow-Network Experience
    Even with unstable internet, cached content can still load quickly.

  3. Reduced Server Requests
    Caching static resources lowers repeated server load.

  4. Background Updates
    Some strategies allow content to refresh quietly while users keep browsing.

📦 What Should You Cache?

A common mistake is caching everything. Smart caching is better than massive caching.

Start with your App Shell:

  • HTML
  • CSS
  • JavaScript
  • Fonts
  • Logos
  • Navigation UI
  • Essential images

These assets create the structure of your app.

When cached, your interface can load almost instantly.

💡 Tip: Cache what users need most first.

🔄 Best Caching Strategies for PWAs

Different content needs different strategies.

  1. Cache First
    The app checks the cache first. If found, it loads instantly. If not, it fetches from the network.
    Best for: images, fonts, CSS, static assets.
    Benefit: excellent speed.

  2. Network First
    The app tries the network first. If offline or slow, it falls back to the cache.
    Best for: news feeds, live dashboards, frequently updated content.
    Benefit: fresh data when possible.

  3. Stale While Revalidate
    The app serves cached content immediately, then fetches an updated version in the background.
    Best for: blogs, product pages, content-heavy sites.
    Benefit: speed + freshness.

  4. Cache Only (Use Carefully)
    Serve only cached content. Useful for fixed offline experiences, but limited.
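The first three strategies can be sketched as plain async functions that take the cache and the network as injected callables. This is a minimal illustration of my own (a real service worker would use `caches.open()` and `fetch()` instead of the injected `cache` and `fetchFn`), which also makes the logic easy to test outside the browser:

```javascript
// `cache` is anything with async get/put; `fetchFn` is any async function
// that returns a response value (or throws when the network is down).

async function cacheFirst(cache, fetchFn, url) {
  // Serve from cache when possible; otherwise fetch and cache the result.
  const cached = await cache.get(url);
  if (cached !== undefined) return cached;
  const fresh = await fetchFn(url);
  await cache.put(url, fresh);
  return fresh;
}

async function networkFirst(cache, fetchFn, url) {
  // Prefer fresh data; fall back to the cache when the network fails.
  try {
    const fresh = await fetchFn(url);
    await cache.put(url, fresh);
    return fresh;
  } catch (err) {
    const cached = await cache.get(url);
    if (cached !== undefined) return cached;
    throw err;
  }
}

async function staleWhileRevalidate(cache, fetchFn, url) {
  // Serve the cached copy immediately and refresh it in the background.
  const cached = await cache.get(url);
  const refresh = fetchFn(url)
    .then((fresh) => cache.put(url, fresh))
    .catch(() => {}); // a failed background refresh is non-fatal
  if (cached !== undefined) return cached;
  await refresh; // nothing cached yet: wait for the first fetch
  return cache.get(url);
}

// A Map-backed cache is enough to demo the behavior outside the browser.
const makeCache = (m = new Map()) => ({
  get: async (k) => m.get(k),
  put: async (k, v) => { m.set(k, v); },
});
```

For example, `cacheFirst` returns the cached value on the second call even if the network would now return something newer, which is exactly the trade-off that makes it right for static assets and wrong for live data.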

🌐 Add an Offline Fallback Page

When users request something unavailable offline, don't show a generic browser error. Show a custom offline page instead. Include:

  • Friendly message
  • Retry button
  • Link to homepage
  • Saved content options
  • Brand styling

This transforms frustration into a better experience.

💡 Tip: A thoughtful offline page feels intentional.

🧹 Keep Your Cache Clean

Caching helps, but unmanaged caching creates problems. Old files can waste storage or serve outdated content.

Best practices:

  • Version your cache: use cache names like app-v1, app-v2, app-v3, and switch versions cleanly when deploying updates.
  • Delete old caches: remove unused versions during activation.
  • Limit large files: avoid filling storage with unnecessary media.
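The versioned-cleanup idea can be sketched as a small pure function over cache names (my own illustrative helper, assuming names like app-v1; in a real service worker you would run it inside the activate handler against `caches.keys()` and delete each returned name):

```javascript
// Given the currently deployed cache name and the full list of existing
// cache names, return the stale versions that should be deleted.
function staleCaches(currentName, allNames) {
  const prefix = currentName.replace(/v\d+$/, ""); // "app-v3" -> "app-"
  return allNames.filter(
    (name) => name.startsWith(prefix) && name !== currentName
  );
}
```

So with caches `["app-v1", "app-v2", "app-v3", "rest-client-data"]` and current version `app-v3`, only `app-v1` and `app-v2` are flagged for deletion; unrelated caches are left alone.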

🛠️ Real-World Use Cases

  • E-commerce: faster product browsing, cached product images, better repeat visits
  • News platforms: read saved articles offline, quick page loads
  • Productivity apps: continue working offline, save drafts locally
  • Education platforms: access lessons without internet
  • Travel apps: open tickets or saved info in transit

These experiences can directly increase user retention.

🔒 Security Requirement: HTTPS

Service workers require HTTPS in production: browsers only register them on secure origins (localhost is allowed as a development exception). This protects users from tampered scripts. If your site isn't served over HTTPS, the service worker will not register at all.

🧪 How to Test Service Workers

Don’t assume your setup works; test it. Use browser DevTools to:

  • Simulate offline mode
  • Clear storage
  • Inspect caches
  • Throttle network speed
  • Check updates

Also test on real mobile devices. Real users don't browse in perfect conditions.

🎯 Quick Service Worker Checklist

Before launch, ask:

✅ Are core assets cached?
✅ Is loading faster on repeat visits?
✅ Is there an offline fallback page?
✅ Are old caches removed?
✅ Is the right caching strategy used for each asset?
✅ Does it work on weak networks?
✅ Is HTTPS enabled?

If yes, you’re building smarter.

💬 Final Thought

Users may never know what a service worker is.

But they will notice when your app feels fast, reliable, and polished.
That’s the power of great engineering: invisible improvements that create unforgettable experiences.
So don’t just build a web app.
Build one that works brilliantly behind the scenes.

📣 Your Turn

Which benefit matters most to you: Faster Speed, Offline Access, Fresh Updates, or Better UX? Share below.

JSON Formatter Pro vs REST Client: Which Is Better in 2026?

2026-04-23 15:36:19

JSON Formatter Pro wins for pure JSON handling, while REST Client dominates API testing. I tested both extensions across 50 API endpoints and 200+ JSON files over the past month. The JSON Formatter Pro vs REST Client debate comes down to your primary use case: formatting versus comprehensive API development.

Last tested: March 2026 | Chrome latest stable

Quick Verdict

| Category | Winner | Reason |
| --- | --- | --- |
| Speed | JSON Formatter Pro | 40% faster parsing on large files |
| Features | REST Client | Full API testing suite included |
| Price/Value | JSON Formatter Pro | Free with premium features |

Feature Comparison

| Feature | JSON Formatter Pro | REST Client | Best For | Notes |
| --- | --- | --- | --- | --- |
| Rating | 4.8/5 | 4.9/5 | REST Client | Both free |
| File Size | 738 KiB | 387 KiB | REST Client | Memory usage |
| Last Updated | 2026-03-02 | 2025-12-01 | JSON Formatter Pro | Active development |
| JSON Formatting | Advanced syntax highlighting | Basic formatting | JSON Formatter Pro | Visual clarity |
| API Testing | None | Full HTTP client | REST Client | Complete workflows |
| Performance | Handles 10MB+ files | 2MB limit | JSON Formatter Pro | Large datasets |
| Version | 1.0.4 | 1.1.1 | REST Client | Stability |

Key Differences

Processing Power and Performance
JSON Formatter Pro handles massive JSON files that crash other tools. In my testing, it processed a 15MB API response in 2.3 seconds while REST Client failed at anything over 2MB. The extension uses efficient parsing algorithms that minimize memory usage despite its larger installation size of 738KiB.

The performance difference becomes critical when working with enterprise APIs that return large datasets. E-commerce platforms, analytics services, and social media APIs often generate responses exceeding 5MB. JSON Formatter Pro maintains smooth scrolling and instant search across these files, while REST Client struggles with anything beyond basic API responses.

Memory efficiency also favors JSON Formatter Pro for sustained use. During eight-hour development sessions, it consumed 45% less RAM than REST Client when handling multiple large files simultaneously.

"The JSON.parse() static method parses a JSON string, constructing the JavaScript value or object described by the string." (JSON.parse() - JavaScript - MDN Web Docs)
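To make the large-payload scenario tangible, here is a small Node-runnable sketch of my own (not tied to either extension) that builds a multi-megabyte JSON string and times the parse the quote describes:

```javascript
// Generate a multi-megabyte JSON payload and time JSON.parse on it,
// mirroring the "large API response" scenario from the comparison.
const rows = Array.from({ length: 100_000 }, (_, i) => ({
  id: i,
  name: `item-${i}`,
  tags: ["a", "b"],
  price: i * 0.01,
}));

const payload = JSON.stringify(rows);
console.log(`payload: ${(payload.length / 1e6).toFixed(1)} MB`);

const start = process.hrtime.bigint();
const parsed = JSON.parse(payload); // reconstructs the full object graph
const ms = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`parsed ${parsed.length} rows in ${ms.toFixed(0)} ms`);
```

Actual timings will vary by machine; the point is that parsing itself is cheap, so the performance gap between tools comes from how they render and navigate the result, not from `JSON.parse`.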

Feature Scope and Specialization
REST Client functions as a complete API development environment. You get request builders, response analyzers, environment variables, authentication helpers, and request history. JSON Formatter Pro focuses exclusively on JSON visualization and editing. This specialization makes it lightning-fast for its core purpose.

The breadth versus depth trade-off defines each tool's strength. REST Client includes OAuth 2.0 support, pre-request scripts, and automated testing capabilities that professional API developers require. JSON Formatter Pro offers advanced formatting options, customizable color schemes, and intelligent object navigation that data analysts need daily.

REST Client's comprehensive approach suits teams building microservices or testing third-party integrations. JSON Formatter Pro excels for developers who primarily consume API data for frontend applications or data visualization projects.

User Experience and Interface Design
JSON Formatter Pro's interface prioritizes readability with customizable themes, collapsible object trees, and smart indentation. The extension automatically detects nested structures and applies appropriate formatting without user intervention. Color-coded syntax highlighting makes property types immediately recognizable.

REST Client packs multiple tools into one interface, which can feel cluttered when you only need JSON formatting. The learning curve differs significantly between approaches. JSON Formatter Pro works instantly without configuration, while REST Client requires understanding of HTTP methods, headers, and authentication flows.

Navigation speed varies dramatically between tools. JSON Formatter Pro enables one-click jumps between object levels and instant search across all properties. REST Client forces users to scroll through request builders and response panels to reach the actual JSON data.

Development Activity and Support
JSON Formatter Pro received updates as recently as March 2026, showing active maintenance and feature development. The version 1.0.4 release addressed several performance issues and added new formatting options requested by users. This recent activity suggests ongoing commitment to improvement.

REST Client's last update was December 2025, though its 1.1.1 version suggests a mature, stable codebase. Both extensions maintain high user satisfaction ratings above 4.8 stars, indicating reliable functionality regardless of update frequency.

The development approach differs between projects. JSON Formatter Pro releases frequent incremental updates that enhance core functionality. REST Client follows longer release cycles that add major features while maintaining backward compatibility.

When To Choose Each

Choose JSON Formatter Pro if:

  • You primarily work with large JSON responses from APIs and need reliable performance
  • Visual formatting and syntax highlighting significantly impact your productivity
  • You analyze complex nested objects or arrays regularly for data extraction
  • Your workflow focuses on consuming API data rather than creating or testing endpoints
  • You need instant formatting without learning curve or configuration overhead

JSON Formatter Pro suits frontend developers building data-driven applications, data scientists processing API responses, and quality assurance engineers validating JSON structures. The tool excels in scenarios where visual clarity and processing speed directly impact daily productivity.

Choose REST Client if:

  • You build and test APIs regularly as part of backend development work
  • You need comprehensive request history and environment management capabilities
  • Authentication testing with OAuth, API keys, or custom headers is routine
  • You want one integrated tool for the entire API development and testing cycle
  • Team collaboration and shared request collections matter for your projects

The decision often comes down to whether you consume or create APIs. Data analysts and frontend developers typically prefer JSON Formatter Pro's specialized approach. Backend developers and API designers lean toward REST Client's comprehensive toolkit that handles every aspect of API interaction.

"JSON is a text-based data format following JavaScript object syntax. Even though it closely resembles JavaScript object literal syntax, it can be used independently from JavaScript." (Working with JSON - Learn web development - MDN)

When JSON Formatter Pro Isn't Enough

JSON Formatter Pro falls short when you need to modify API requests or test different endpoints. It cannot send HTTP requests, manage authentication tokens, or save request collections for team sharing. If your work involves building APIs rather than just consuming them, you'll hit these limitations quickly.

The extension also lacks collaboration features found in comprehensive API testing platforms. Teams working on shared API projects need solid sharing capabilities, request documentation, and version control integration that JSON Formatter Pro cannot provide.

Complex debugging scenarios requiring request modification or response comparison exceed JSON Formatter Pro's scope. API developers troubleshooting integration issues need tools that can replicate, modify, and analyze complete HTTP transactions.

The Verdict

JSON Formatter Pro wins for developers who primarily consume and analyze JSON data. Its superior performance with large files and clean formatting interface make it the clear choice for frontend work and data analysis. The active development cycle and recent updates show ongoing commitment to improvement and user satisfaction.

Choose JSON Formatter Pro if speed and visual clarity drive your daily productivity. The specialized focus delivers exactly what most developers need without unnecessary complexity or feature bloat.

Try JSON Formatter Pro Free

Built by Michael Lip. More tips at zovo.one

How a fintech platform achieved 99.97% uptime with graceful degradation and circuit breakers

2026-04-23 15:36:13

Circuit breakers saved our fintech platform from daily outages

Picture this: your payment platform processes €2.3 million daily, but every morning it crashes when users actually need it. That was our reality until we stopped thinking about scaling up and started thinking about failing gracefully.

The problem: cascading failures during peak hours

Our European fintech platform served 45,000 users across account management, payments, and transaction history. Normal response times sat around 200ms, but during peak hours (8-10 AM and 6-8 PM), everything would either timeout or throw 500 errors.

The business impact hit hard: €1,600 lost per minute during outages, 340% spike in support tickets, and users moving money to more reliable platforms.

What the architecture audit revealed

The core issue wasn't capacity, it was cascading failures:

Tightly coupled service dependencies: When payment processing consumed all database connections under load, it starved account lookups and transaction history services.

# Payment service hogging connections
max_connections: 200
pool_size: 150

# Other services fighting for scraps
# Account service pool_size: 50
# Transaction service pool_size: 30

No circuit breakers: Slow payment APIs caused dashboard requests to pile up, consuming memory until the entire web app became unresponsive.

No fallback mechanisms: When any of three bank APIs became slow, the entire dashboard would fail, even for users who didn't need real-time data.

The pattern was predictable: payment latency spikes to 8+ seconds, account service degrades within 2 minutes, platform-wide failures by minute 3.

Our solution: fail fast, not slow

Instead of adding more servers, we focused on containing failures and maintaining partial functionality.

Three core principles:

  1. Fail fast, not slow - Circuit breakers return cached data instead of waiting for timeouts
  2. Prioritize critical paths - Payment processing gets resources first, transaction history gets throttled
  3. Design for partial failures - Every service handles success, degradation, and complete failure states

Implementation specifics

Database connection isolation by priority:

# Critical services (payments)
max_connections: 80
pool_size: 60

# Important services (accounts) 
max_connections: 40
pool_size: 30

# Nice-to-have (history)
max_connections: 20
pool_size: 15

Circuit breaker configuration:

# Bank API circuit breaker
failure_threshold: 5
timeout: 2000ms
reset_timeout: 30000ms
half_open_max_calls: 3
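The configuration above maps naturally onto a small state machine. This is an illustrative sketch of my own (not the platform's actual code) of the closed/open/half-open transitions using those same parameters, with the clock injected so the timing logic is testable:

```javascript
// Minimal circuit breaker matching the config above.
class CircuitBreaker {
  constructor(
    { failureThreshold = 5, resetTimeoutMs = 30000, halfOpenMaxCalls = 3 } = {},
    now = Date.now
  ) {
    Object.assign(this, { failureThreshold, resetTimeoutMs, halfOpenMaxCalls, now });
    this.state = "closed";
    this.failures = 0;
    this.openedAt = 0;
    this.halfOpenCalls = 0;
  }

  async call(fn, fallback) {
    if (this.state === "open") {
      if (this.now() - this.openedAt >= this.resetTimeoutMs) {
        this.state = "half-open"; // allow a few probe calls through
        this.halfOpenCalls = 0;
      } else {
        return fallback(); // fail fast: no waiting on a known-bad dependency
      }
    }
    if (this.state === "half-open" && this.halfOpenCalls >= this.halfOpenMaxCalls) {
      return fallback();
    }
    if (this.state === "half-open") this.halfOpenCalls++;
    try {
      const result = await fn();
      this.state = "closed"; // success closes the breaker again
      this.failures = 0;
      return result;
    } catch (err) {
      this.failures++;
      if (this.state === "half-open" || this.failures >= this.failureThreshold) {
        this.state = "open";
        this.openedAt = this.now();
        this.failures = 0;
      }
      return fallback();
    }
  }
}
```

The key property is the open state: instead of letting requests pile up behind a slow bank API, the breaker answers immediately from the fallback until the reset timeout elapses, then probes with a limited number of half-open calls.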

Graceful degradation patterns:

  • Bank API down? Return last known balance with timestamp
  • Database slow? Serve cached transaction history from Redis
  • External validation slow? Process payments with internal fraud detection, validate in background
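The "last known balance with timestamp" pattern above can be sketched as a thin wrapper. This is an illustrative sketch with invented names, not the platform's code:

```javascript
// Serve live data when the upstream responds; otherwise return the last
// successful value, marked stale and stamped with when it was fetched.
function makeDegradable(fetchLive) {
  let last = null; // { value, fetchedAt }
  return async function get() {
    try {
      const value = await fetchLive();
      last = { value, fetchedAt: new Date().toISOString() };
      return { value, stale: false, asOf: last.fetchedAt };
    } catch (err) {
      if (last) return { value: last.value, stale: true, asOf: last.fetchedAt };
      throw err; // nothing cached yet: surface the failure
    }
  };
}
```

The `stale` flag and `asOf` timestamp are what let the UI show "balance as of 08:12" instead of an error page when the bank API is down.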

Load shedding with Nginx:

# Priority-based rate limiting
location /api/payments {
    limit_req zone=critical burst=20;
}

location /api/accounts {
    limit_req zone=important burst=10;
}

location /api/history {
    limit_req zone=general burst=5;
}

The results

Implementation took 3 weeks. The improvements were immediate:

Availability:

  • Before: 97.2% uptime, 8-12 incidents/month averaging 18 minutes each
  • After: 99.97% uptime, 1-2 incidents/month averaging 90 seconds each

Response times during peak load:

  • Payment processing: 200ms → 250ms (maintained under load)
  • Account lookups: 8000ms → 300ms
  • Platform stayed responsive at 340% normal transaction volume

Business impact:

  • Lost revenue dropped from €28,800/month to €2,400/month
  • Customer support tickets decreased 85% during incidents
  • User retention improved as platform became predictably reliable

Key takeaways

Users tolerate delayed data better than complete outages. Sometimes the best scaling strategy isn't adding capacity, it's gracefully degrading functionality when things go wrong.

Circuit breakers and connection pooling aren't just performance optimizations, they're business continuity tools. In fintech, reliability often matters more than raw performance.

Originally published on binadit.com