The Practical Developer
A constructive and inclusive social network for software developers.

Rss preview of Blog of The Practical Developer

Break Global Barriers: Introducing the Ultimate Translator API for Developers

2026-01-14 20:46:15

In today's hyper-connected digital economy, "local" is a thing of the past. If your application speaks only one language, you're leaving over 75% of the world’s internet users behind.

But as developers, we know the pain: heavy machine learning models, complex cloud provider configurations, and the sticker shock of enterprise translation services.

That’s why we’ve built the ImbueData Translator API—a lightweight, high-performance solution designed by developers, for developers.

What is the Translator API?
The ImbueData Translator API is a comprehensive language intelligence suite that handles the heavy lifting of localization through a single, streamlined REST interface. Whether you're building a global chat app, localized e-commerce, or accessible content platforms, we've got you covered.

Key Capabilities at a Glance:
🚀 Instant Translation: Translate text across dozens of global languages with high accuracy.
🔍 Smart Detection: Automatically identify the source language from any text input—no more manual tagging.
🎙️ Natural Text-to-Speech (TTS): Convert text into clear, human-like audio instantly.
⚡ Lightweight & Fast: No heavy SDKs. Just simple HTTP requests that return structured JSON.
Why Developers Love It (The "Expert" Edge)
Most translation services force you into a corner. We chose a different path:

  1. Developer-First Integration
    Stop wrestling with 500MB SDKs. Our API is built on the philosophy of simplicity. A single GET request is all it takes to go from English to Spanish, or from text to a high-quality .mp3 file.

  2. Built-in Intelligence
    Forget about fromLanguage parameters if you don't want them. Our endpoint features integrated auto-detection. Just send the text, and we’ll figure out the rest.

  3. All-in-One Language Suite
    Why use three different providers for translation, detection, and text-to-speech? Consolidate your stack and reduce latency by using one unified endpoint.

Real-World Use Cases
Content Localization: Effortlessly translate blogs, product descriptions, and UI elements on the fly.
Customer Support: Enable real-time chat translation for global support teams.
Accessibility: Use the TTS endpoint to build screen readers or audio versions of your articles.
Language Learning: Power apps that need instant pronunciation and translation.
Get Started in Seconds
Integrating the Translator API is as simple as:

bash
curl -G "https://api.imbuedata.com/v1/translator/translate" \
--data-urlencode "text=Hello world" \
--data-urlencode "toLang=es"
Response:

json
{
  "status": 200,
  "langDetect": "en",
  "translatedText": "Hola Mundo"
}
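The same call can be made from JavaScript with nothing but the built-in `fetch` and `URLSearchParams`. This is a minimal sketch: the endpoint path and the `text`/`toLang` parameter names are taken from the curl example above; the helper name is ours.

```javascript
// Build the request URL for the translate endpoint shown above.
// URLSearchParams handles the encoding that --data-urlencode did for curl.
function buildTranslateUrl(text, toLang) {
  const params = new URLSearchParams({ text, toLang });
  return `https://api.imbuedata.com/v1/translator/translate?${params}`;
}

// Example (the network call is commented out so the sketch stays self-contained):
// const res = await fetch(buildTranslateUrl("Hello world", "es"));
// const { translatedText } = await res.json();
console.log(buildTranslateUrl("Hello world", "es"));
```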
Ready to Go Global?
The barrier to entry for internationalization has just been lowered. Stop building for a single country and start building for the planet.

👉 Try the Translator API now: imbuedata.com

#Developers #API #Localization #SaaS #BuildInPublic #Translation #MachineLearning #WebDev #Medium #DevTo

CSS @scope is now Baseline — Supported in All Major Browsers!

2026-01-14 20:44:51

Big news for frontend developers 🎉
With Firefox 146 officially supporting the @scope at-rule, CSS @scope is now available across Chrome, Safari, and Firefox — earning the Baseline: Newly Available status.

Why this matters 👇
The @scope at-rule introduces a new way to scope CSS styles to a specific part of the DOM, helping reduce global CSS conflicts and making styles more predictable, modular, and maintainable.

What is @scope in CSS?
@scope defines a local styling context
• :scope represents the scope root
• Styles apply only within that defined boundary
• No more over-specific selectors or CSS leakage
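As a minimal sketch of the syntax (the class names here are illustrative), a scope can be given both a root and a lower boundary:

```css
/* Styles apply only between the scope root (.card) and the
   lower boundary (.card-footer), so they can't leak elsewhere. */
@scope (.card) to (.card-footer) {
  :scope {
    border: 1px solid #d0d0d0; /* the scope root (.card) itself */
  }
  img {
    border-radius: 8px; /* matches only <img> inside the scope */
  }
}
```

Elements inside `.card-footer` fall past the scope's lower boundary, so the `img` rule never reaches them.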

Key Benefits ✨
✅ Better CSS organization
✅ Reduced specificity wars
✅ Safer component-level styling
✅ Cleaner, more maintainable stylesheets
✅ Perfect fit for design systems & component-based UI

Where can you use @scope?

You can use @scope in linked CSS stylesheets and in inline style blocks. Notably, when @scope is written in an inline style block without a prelude, the scope root defaults to the parent element of that style block.

By default, CSS is globally scoped, even when rules appear to be nested inside elements like main.

@scope changes this by enabling true contextual styling without relying on heavy or deeply nested class naming conventions.

This represents a significant improvement in modern CSS architecture and is an important feature for frontend engineers who focus on scalability, performance, and long-term maintainability.


Have you started using @scope yet?
Would you adopt it in production today, or would you prefer to wait a bit longer?

Tracking File Upload Progress on AWS S3 – Lessons from Large File Uploads

2026-01-14 20:32:33

Have you ever tried uploading a large file to AWS S3 and wanted to show progress reliably?

I recently worked on a feature that involved uploading videos and large files, and I learned a lot about browser limitations, multipart uploads, and UX improvements. Here’s a breakdown of what I discovered.

Why onUploadProgress Alone Isn't Enough

For small files, Axios gives us a handy onUploadProgress callback:

axios.put(uploadUrl, file, {
  headers: { "Content-Type": file.type },
  onUploadProgress: (progressEvent) => {
    const percent = Math.round((progressEvent.loaded * 100) / progressEvent.total!);
    console.log("Progress:", percent, "%");
  },
});

✅ Works perfectly for files ≤ 20MB

But when we tried large files (150MB+):

  • Progress either lagged behind
  • Or jumped directly to 100% at the end

Why? Because the browser sends the entire file in a single request, and the progress events report bytes handed off to the network buffers rather than bytes actually delivered — which makes onUploadProgress unreliable for large payloads.

The Solution: Multipart Upload (Chunking)

With large files, the file must be split into chunks before uploading:

  • Each chunk is a separate request
  • Progress is tracked per chunk
  • Failed chunks can be retried individually

Example:

// 500MB file with 20MB chunks → 25 requests
const CHUNK_SIZE = 20 * 1024 * 1024;

Some might ask: Isn’t 25 requests a lot?
Not really. Each request is short-lived and uploaded with controlled concurrency.
This makes uploads predictable, faster, and easier to retry on network failures.

Choosing the Right Chunk Size

AWS S3 requires a minimum of 5MB per chunk (for every part except the last).

We define:

const MIN_CHUNK_SIZE = 5 * 1024 * 1024;       // 5MB
const DESIRED_CHUNK_SIZE = 20 * 1024 * 1024; // 20MB

Then check the file size:

  • ≤ 20MB → Upload normally (no multipart)
  • > 20MB → Split into chunks and start multipart upload

Why 20MB? There’s no magic number where uploads fail. Factors like network speed, connection stability, and device performance all play a role.
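That decision can be sketched as a small helper. The function name is ours; the only numbers assumed are the 5MB S3 minimum and the 20MB target from above:

```javascript
const MIN_CHUNK_SIZE = 5 * 1024 * 1024;      // S3's minimum part size (all parts except the last)
const DESIRED_CHUNK_SIZE = 20 * 1024 * 1024; // our target part size

// Decide between a plain PUT and a multipart upload, and how many parts.
function planUpload(fileSizeBytes) {
  if (fileSizeBytes <= DESIRED_CHUNK_SIZE) {
    return { multipart: false, parts: 1 };
  }
  return { multipart: true, parts: Math.ceil(fileSizeBytes / DESIRED_CHUNK_SIZE) };
}
```

A 500MB file yields 25 parts, matching the example earlier.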

Backend Workflow for Multipart Uploads

Previously, the backend returned a single S3 URL:

{
  "uploadUrl": "https://my-bucket.s3.amazonaws.com/file.mp4"
}

Now it returns:

{
  "uploadId": "UPLOAD_ID_PLACEHOLDER",
  "key": "/FILE_NAME_PLACEHOLDER",
  "parts": [
    {
      "partNumber": 1,
      "uploadUrl": "https://my-bucket.s3.amazonaws.com/KEY_PLACEHOLDER"
    }
  ],
  "expiresAt": "EXPIRATION_DATE_PLACEHOLDER"
}
  • Each chunk is uploaded to its corresponding uploadUrl
  • S3 returns an ETag for each chunk → acts as a fingerprint to confirm the chunk was uploaded successfully
  • ETags + part numbers are required for CompleteMultipartUpload to assemble the file in order

Tracking Progress for Large Files

  • Small files → browser events track progress by bytes
  • Large files → track progress based on the number of chunks uploaded

Benefits:

  1. Progress is stable and predictable
  2. Better UX
  3. No sudden jumps or delays
  4. Failed chunks can retry individually, no need to restart the entire upload
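A chunk-based tracker is only a few lines (a sketch; the helper is ours). Keeping completed part numbers in a Set means a retried chunk is never counted twice:

```javascript
// Track completed part numbers; the percentage is derived from the count.
function createProgressTracker(totalParts) {
  const done = new Set();
  return {
    markDone(partNumber) { done.add(partNumber); }, // idempotent on retries
    percent() { return Math.round((done.size / totalParts) * 100); },
  };
}
```

For a 25-part upload, each completed chunk advances the bar by a steady 4%.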

Final Step: Complete the Upload

Once all chunks are uploaded, send a final request with:

  • uploadId
  • key
  • All ETags

The backend calls CompleteMultipartUpload → file is assembled in S3 as if it were uploaded in a single request.
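The completion payload can be assembled like this (a sketch; the field names follow the backend response shown earlier, and the helper name is ours). Sorting by part number matters because CompleteMultipartUpload stitches the parts together in order:

```javascript
// Build the body for the final "complete" call from the collected ETags.
// `parts` is an array of { partNumber, eTag } gathered as chunks finish,
// possibly out of order when uploads run concurrently.
function buildCompletePayload(uploadId, key, parts) {
  return {
    uploadId,
    key,
    parts: [...parts].sort((a, b) => a.partNumber - b.partNumber),
  };
}
```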

Takeaways

Switching to chunked uploads with progress tracking dramatically improves the experience for large files:

  • Predictable progress
  • Stable uploads
  • Retry for individual chunks
  • Better UX for your users

Is it worthwhile to learn JavaScript in 2026?

2026-01-14 20:31:10

I'm 16 years old and interested in software development. Last year I learned the basics of Python. But since my interest lies in web and mobile application development, I purchased a "WEB DEVELOPMENT COURSE (HTML, CSS, JavaScript, React)". Now that AI has advanced so much, I'm worried about whether learning JavaScript will be useful in my career! Do you think learning JavaScript would be good for my career?

Why My Node.js E-commerce App Got Slower Over Time (And It Wasn’t a Memory Leak)

2026-01-14 20:26:32

I recently finished a Node.js e-commerce build for a client. At first, it was perfect. Locally, everything was snappy. With 10 items in the database, the site felt like it was flying.

Then we went live. Even with just a few hundred products and a handful of daily customers, that "new app smell" started to fade. The site didn't crash, but it felt sluggish. It felt like it was moving through mud.

My first instinct was that I had a memory leak. I spent two days staring at Chrome DevTools heap snapshots and tracking garbage collection like a hawk.

The twist is that the memory was perfectly fine.

It turns out that you don't need millions of users to slow down a Node.js app. You just need a few bad habits that scale worse than your traffic does. Here is the breakdown of what was actually happening and how I fixed it.

The Problems: Why "Healthy" Apps Slow Down

1. The Invisible Event Loop Tax

I had a route for fetching products that seemed totally harmless. It looked like this:

app.get("/products", async (req, res) => {
  const products = await Product.find({});
  const enriched = products.map(p => {
    // Just a tiny bit of math for discounts...
    return calculateDiscount(p); 
  });
  res.json(enriched);
});

When I was testing with 20 products, that .map() took 0.5ms. No big deal. But as the client added more variants and descriptions, that "tiny" math started taking 20ms or even 50ms.

Because Node.js runs your logic on a single thread, that 50ms didn't just slow down the product page. It paused the entire server. If five people hit that page at once, the sixth person trying to just click a button was stuck waiting for a loop they weren't even part of.
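To make that tax visible, here's a tiny self-contained sketch (plain Node.js, no dependencies; the 50ms figure is just an illustration) of CPU-bound work holding the one and only thread:

```javascript
// Simulate synchronous "discount math": while this runs, no other
// request handler, timer, or I/O callback on the server can execute.
function blockFor(ms) {
  const start = Date.now();
  while (Date.now() - start < ms) {} // busy-wait: CPU-bound, not async
}

const t0 = Date.now();
blockFor(50); // stands in for calculateDiscount over a large catalog
const elapsedMs = Date.now() - t0;
console.log(`Event loop was blocked for ~${elapsedMs} ms`);
```

Five concurrent hits to a route like that stack up to roughly 250ms of serialized blocking — exactly the "stuck sixth user" effect described above.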

2. The Async Waiting Room

We are told async/await is the magic pill for performance. I fell for it. My checkout flow was a neat little ladder of awaits:

await validateCart(cart);
await calculateTotals(cart);
await createOrder(cart);
await initiatePayment(cart);

I was treating my code like a line at the grocery store. Every step was waiting for the one before it, even if they didn't need to. If the payment gateway took 2 seconds to respond, that request sat open and hogged resources. I realized I was awaiting myself into a corner.

3. Ghost Tasks in the Background

I had a few setInterval jobs running for standard stuff like clearing out abandoned carts or sending order received emails.

The problem was that I wasn't managing their lifecycle. Some jobs were firing every minute but taking 90 seconds to finish because of slow database queries. They started piling up. The server wasn't crashing, but it was under constant background stress.

The Fixes: What Actually Moved the Needle

I did not do a massive rewrite and I did not switch frameworks. Honestly, I just stopped doing expensive things in the wrong places.

1. I stopped processing during requests

I realized that if a user is waiting for a response, I should not be crunching numbers. I moved the heavy lifting to the Write phase instead of the Read phase.

Instead of calculating discounts every time someone viewed a product, I started pre-calculating them whenever a product was saved or updated in the database.

// After: Pre-computing on save
product.discountedPrice = calculateDiscount(product);
await product.save();

Why it worked: The event loop now just fetches and sends. The request just reads data and does not process it.

2. I broke the Serial Async trap

I used Promise.all to run independent tasks in parallel. If two things do not depend on each other, they should not be waiting on each other.

// Parallel execution for independent tasks
await validateCart(cart);
const [totals, order] = await Promise.all([
  calculateTotals(cart),
  createOrder(cart)
]);
initiatePayment(order); // fire-and-forget: respond without blocking on the gateway

Why it worked: I cut the waiting room time in half. The request stays open for the shortest time possible, which keeps the server responsive.

3. I put my background jobs on a diet

I added a simple guard to ensure jobs could not overlap and I lowered the frequency of non-essential tasks.

let isRunning = false;
setInterval(async () => {
  if (isRunning) return; // Don't start if the last one is still going!
  isRunning = true;
  try {
    await cleanupAbandonedCarts();
  } finally {
    isRunning = false;
  }
}, 60000);

Why it worked: It stopped the background hum from turning into a roar. The CPU was finally free to focus on real users.

The Real Lesson

None of these fixes were magic. I did not optimize Node.js. I just optimized when and where the work was happening.

There is a massive difference between an app that works on your local machine and one that stays fast when real, messy production data starts hitting it. If you are feeling a slowdown, do not assume it is a bug. It might just be your architecture growing pains.

Demystifying API integration types

2026-01-14 20:25:14

API gateway integrations connect the gateway to various backend services, such as Lambda functions, HTTP/S endpoints, or other cloud services. The specific integration types and configurations depend on the chosen API gateway provider and the target backend.
In this article I will explain the integration types and when to use each one.

I will divide them into 3 use cases:

  1. The "Builder" tools (Lambda & Mock integrations): how to connect to serverless functions or create a fake backend for testing.
  2. The "Connector" tools (HTTP & Private): how to talk to public websites or to secure services hidden inside a private network (VPC).
  3. The "Power User" tool (AWS Service): the advanced method of connecting directly to other AWS services (like DynamoDB) without writing any code.

Let's jump into the Builder Tools.
In this category we integrate with an AWS Lambda function, or sometimes just use a Mock Integration. With a Lambda integration, your code executes the logic: it talks to databases or processes data. With a Mock Integration, the API simply returns static, hardcoded responses without running any backend code.

1. Lambda Integration
This is the most popular integration. When a user hits your API endpoint (e.g., GET /users), API Gateway triggers a specific AWS Lambda function.
The Crucial Decision is to choose between Proxy or Non-Proxy, this is the single most important setting to understand here.

[Figure: Proxy vs non-proxy integration]

| Feature | Proxy Integration (The Pipe) | Non-Proxy Integration (The Prep Chef) |
|---|---|---|
| How it works | API Gateway passes the entire raw request (headers, body, query params) directly to your Lambda function. | API Gateway transforms the request before sending it. It can filter data, rename parameters, or change JSON to XML. |
| Who does the work? | Your Lambda code must parse the request and format the response perfectly. | API Gateway handles the messy parsing; your Lambda just receives clean data. |
| Best for... | Modern, standard APIs where you want full control in your code. | Legacy systems or when you need to clean up data before your code sees it. |

2. Mock Integration (The Placeholder)
This is exactly what it sounds like. You configure the API to say: "If someone calls this endpoint, just send back this specific JSON."
Why would you use this?

  • The mock returns fake data, so the frontend team keeps working while the backend code is still being developed.
  • You can test how your app handles a "500 Server Error" without actually breaking your server.

The "Connector" Tools (HTTP & Private)
Now, let's look at how we connect to services that already exist somewhere else (not Lambda functions you just wrote).

3. HTTP Integration (The Messenger)
Think of this as a "pass-through." You already have a web application running somewhere else (like a legacy server or a third-party API like Google Maps). You want API Gateway to sit in front of it.
The API Gateway receives the request and forwards it straight to another URL.
Use this if you are migrating an old API to AWS. You can put API Gateway in front of your old server. To the user, it looks like a modern AWS API, but behind the scenes, it's still talking to the old server until you're ready to upgrade it.

[Figure: HTTP integration]

4. Private Integration (The Secure Tunnel)
This is for when your backend is hidden and not accessible via the public internet.
You may have a database or service running on an EC2 instance inside an Amazon VPC. It is secure with no public IP address.
Since it's private, API Gateway can't normally reach it.
Private Integration uses a component called VPC Link to create a secure tunnel into your private network to talk to that hidden service.

[Figure: Private integration]

The "Power User" Tool (AWS Service)
This is the final and often most misunderstood integration type.
Most people think: "If I want to save data to a database, I need a Lambda function to do it." AWS Service Integration says: "No, you don't."

5. AWS Service Integration (The Shortcut)
This allows API Gateway to talk directly to other AWS services like DynamoDB, SQS, SNS, or Kinesis; there is no need for a Lambda function.

  • How it works: API Gateway acts as the "client." When a request comes in, API Gateway translates it into the specific format that the AWS service (like DynamoDB) expects and sends it.
  • The "Cost": You have to set up Mapping Templates (using a language called VTL). You have to explicitly tell API Gateway: "Take the 'user_id' from the URL and put it into a DynamoDB PutItem command."
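As a sketch of what such a mapping template looks like (the table and attribute names here are hypothetical; `$input` and `$util` are API Gateway's built-in VTL variables), a DynamoDB PutItem request body might be:

```json
{
  "TableName": "Users",
  "Item": {
    "user_id": { "S": "$input.params('user_id')" },
    "payload": { "S": "$util.escapeJavaScript($input.body)" }
  }
}
```

API Gateway renders this template for each request, pulling `user_id` from the URL and forwarding the result straight to DynamoDB — no Lambda in between.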

[Figure: AWS service integration]

Why not just use Lambda?

  1. You don't pay for Lambda execution time.
  2. One less hop means lower latency.

A common use case is the "Contact Us" form. Imagine a high-traffic "Contact Us" form on a website.

  • The Old Way: API GW → Lambda → SQS Queue.
  • The Better Way: API GW → SQS Queue.

Summary of the five integration types:

| Integration Type | Best Use Case |
|---|---|
| Lambda | Running custom business logic or calculations. |
| HTTP | Proxying to existing web apps or 3rd-party APIs. |
| Mock | Testing, unblocking front-end teams, or handling errors. |
| Private | Accessing internal/private resources inside a VPC. |
| AWS Service | High-performance, direct actions without code. |

That covers all five API Gateway integration types. Are there specific API Gateway details you'd like me to go deeper on? Let me know in the comments!