
Suite 33: A business management platform for SMEs.

2025-11-27 09:26:49

Running a small or medium-sized business is difficult enough when operations are structured. Small and medium-sized enterprises often rely on manual processes to run their operations. Sales are recorded on paper, inventory is tracked informally, payroll is handled in notebooks, and staff performance is evaluated without any structured system. These gaps create delays and inaccuracies that are detrimental to business growth.

Suite33 was created to close those gaps. It is a business management platform designed specifically for SMEs. The goal is simple: build a digital system that reflects how small businesses already work, while making their day-to-day operations cleaner, faster, and easier to track.

Live Link - Suite33

Table of Contents

1. Why Suite33 Was Built
2. Understanding the Challenges SMEs Face
3. Designing the Suite33 Experience
4. Getting Started: Onboarding and Setup
5. The Modules That Shape Suite33
6. Business Security and Roles
7. Lessons Learned While Building Suite33
8. Closing Thoughts

1. Why Suite33 Was Built

Suite33 did not begin as a random idea. It came from firsthand experience.
Recently, I was consulting for a business and quickly noticed that its entire operations ran on unorganized records. The business owner had no central view of how the business was performing. Decisions were made on instinct instead of data. It became clear that these businesses needed structure, hence Suite33.

2. Understanding the Challenges SMEs Face

Before writing a line of code, it was important to understand the real problems facing SMEs in Nigeria.

After a thorough case study, I was able to identify some of the most common issues:

  • Sales records disappear easily because they are written in notebooks or loose sheets.
  • Inventory is uncertain, making it difficult to know what is available at any moment.
  • Payroll takes too long, especially when each staff salary is calculated manually.
  • Staff performance is subjective, with no historical score or monthly evaluation.
  • Financial insights are unclear, since there is no unified view of revenue, spending and profit.

3. Designing the Suite33 Experience

I modelled Suite33 around clarity and structure. Instead of building a complex, enterprise-style system, the goal was to create something simple enough for daily use but powerful enough to rely on.

Every screen, every form and every action follows three principles:

1. Keep the workflow familiar
If a business is used to notebooks, Suite33 should feel like a digital extension of that, not a replacement that forces new habits.

2. Reduce cognitive load
No cluttered screens, no confusing menus. Businesses should be able to understand their data at a glance.

3. Centralize everything
Sales, expenses, inventory, payroll, staff and KPIs should all be centralized in a single dashboard application.

This approach ensured that Suite33 could serve small teams without overwhelming them.

4. Getting Started: Onboarding and Setup

The onboarding flow guides the business into the platform step by step.

Business Setup
Users begin by providing the essential details:

  • Business name
  • Industry
  • Location
  • Optional Business logo

This immediately personalizes the workspace and prepares the environment for operations.

Business Onboarding Image

Inviting Staff
Admins can invite up to 10 staff members per month.
This limit is intentional. It encourages businesses to gradually set up their team and ensures the system remains manageable during early adoption.
Staff receive an email and join the business with their own login. Their roles determine what they can access in the platform.

Staff Invite Image

Editing Profile
Every user can update their personal information, including their name and avatar.

5. The Modules That Shape Suite33

Suite33 contains several core modules, each responsible for one part of the business operation. They are connected behind the scenes, which allows the dashboard to generate a complete view of business performance.

Sales Management
The sales module makes it easy for businesses to record daily sales and monitor revenue trends over time.
With the ability to track monthly progress, visualize data, export results, and gain insights, SMEs finally gain clarity about how their business is performing. This transforms guesswork into informed decisions.

Sales Image
Sales Image

Expenditures
For a business to truly understand its profit margins, it must first understand spending.
Suite33 allows businesses to log every expense, categorize it and review spending patterns throughout the year. Every expenditure feeds directly into the dashboard’s profit and loss calculation.
Expenditures Image
Expenditures Image

Inventory Management
Inventory issues are one of the biggest pain points for SMEs. Suite33 simplifies this process with a clean, structured layout.

Businesses can:

  • Create categories
  • Add new inventory items
  • Track quantities
  • Edit details as stock changes
  • Export inventory lists
  • View low stock

The system prevents accidental deletion of categories that still contain items, protecting the integrity of the data.

Inventory Image
Inventory Image

Payroll
Payroll is often the most repetitive and time-consuming task for SMEs. Suite33 introduces a monthly payroll batch system that automates most of the work.

Each month:

  • The admin generates a batch.
  • Staff salaries are automatically included.
  • The admin can update staff salaries and mark each staff member as paid or unpaid.
  • When the batch is complete, it can be locked to prevent accidental changes.
  • Staff can view their payslip privately.
  • Admins can export the payroll to Excel or CSV.

Payroll Image

Payroll Image

Payroll Image

Staff Management
Staff records are organized in one place. Admins can:

  • Add new staff
  • Remove staff
  • Edit roles
  • Assign departments

Staff only see information relevant to them.

Organization Management Image
Organization Management Image

KPI Tracking
Performance evaluation becomes simpler with monthly KPI scoring.
Instead of relying on memory or last-minute judgments, businesses can maintain a consistent record of how each staff member performs throughout the year.
KPI scores help guide promotions, reviews and improvements.

KPI Image
KPI Image
KPI Image

Dashboard Overview
The dashboard is the heart of Suite33.
It brings together data from every module to give the business a complete operational overview.

Dashboard Overview Image
The dashboard displays:

  • Sales summary
  • Expenditures summary
  • Staff count
  • Inventory count
  • Payroll status
  • Profit and loss table
  • Business identity

This allows business owners to view their financial and operational health at a glance.

6. Business Security and Roles

Suite33 is structured around three access levels:

Admin

  • Full access to all operations and business management.

Sub-Admin (Assistant Admin)

  • Support role with management access but limited payroll visibility.

Staff

  • Restricted access with visibility limited to personal payslips and details.

This structure ensures privacy, accountability and security on all fronts.

7. Lessons Learned While Building Suite33

Building Suite33 revealed several important insights about SMEs and software design.

1. SMEs value simplicity above all else
Most small businesses do not need complex systems. They need clarity. They need something that simplifies their existing processes instead of replacing them.

2. Onboarding is as important as the product
Businesses lose interest quickly if the first steps feel overwhelming. Refining onboarding to be quick and human centered made adoption smoother.

3. Accuracy is everything when dealing with money
Payroll and financial modules must leave no room for uncertainty. The smallest error can create distrust. Ensuring reliability at every stage became a priority.

4. A business cannot grow without visibility
SMEs often operate in the dark. When they finally see their numbers clearly, they make better decisions. Data presentation mattered just as much as data storage.

5. Scalability is not only about technology
It is also about designing workflows that still make sense when the business grows from 5 staff to 50 staff. Features were built to feel predictable, even when usage increases.

6. Every role needs a different view of the system
Admins do not need the same interface as staff, and staff should never see what belongs only to administrators. Clear separation of roles became essential.

8. Closing Thoughts

Suite33 is designed to give SMEs a more organized way to run their business. It replaces manual processes with a single system that tracks sales, inventory, expenses, payroll and staff performance in one place.

The platform does not aim to complicate operations. Instead, it brings the structure small businesses need to work more efficiently and make informed decisions.

As SMEs continue to grow, Suite33 gives them a dependable foundation to grow on.

🚀 Hosting My Portfolio on KillerCoda Using Nginx

2025-11-27 09:24:22

A Hands-On DevOps Practice Task

During one of our DevOps sessions in the Tech With Achievers bootcamp, our tutor gave us a practical assignment:

Create a personal portfolio and host it using Nginx on KillerCoda.

This was a great opportunity to get hands-on with Linux, Nginx, and basic web deployment.
In this article, I’ll walk you through exactly how I hosted my portfolio using KillerCoda’s free cloud playground.

🧩 Why KillerCoda?

KillerCoda provides temporary Linux environments where you can practice DevOps tasks without installing anything locally.
Perfect for learning Linux commands and testing quick deployments.

🛠️ Steps to Host a Portfolio on KillerCoda with Nginx

1. Launch a Linux Playground

  • Sign up or log in to KillerCoda (https://killercoda.com).
  • Go to Playground.
  • Choose an Ubuntu environment.
  • Wait for the terminal to load.

2. Install Nginx
Run:

sudo apt update
sudo apt install nginx -y

Verify the installation:

nginx -v

If it prints a version number, Nginx is good to go.

3. Navigate to the Web Root


cd /var/www/html
ls

You’ll find the default file:


index.nginx-debian.html

Feel free to remove it:


sudo rm index.nginx-debian.html

4. Add Your Portfolio

Create your own index.html:

sudo nano index.html

Paste your portfolio code into the nano editor.

Save and exit:

CTRL + X > Y > Enter

5. Expose Port 80

On the left sidebar in KillerCoda:

  • Open the hamburger menu
  • Click Port 80

This exposes your Nginx server to the internet.


6. View Your Live Portfolio

KillerCoda generates a public URL.
Open it in your browser — your portfolio should now be live.

📚 What I Learned

  • How to install and configure Nginx

  • How static websites are served from /var/www/html

  • How ports work (specifically HTTP on port 80)

  • How cloud sandboxes help in DevOps learning

  • Practical web hosting workflow on Linux

This mini-project boosted my confidence in server management and basic deployment.

Unlocking Data Narratives: Visualizing Information Flow with Concept Graphs

2025-11-27 09:02:10

Unlocking Data Narratives: Visualizing Information Flow with Concept Graphs

Drowning in data but struggling to see the big picture? Do you find it challenging to translate complex data streams into actionable insights? Imagine being able to visualize the underlying logic of your data, revealing hidden patterns and enabling you to build intuitive conceptual models.

Concept graphs offer a powerful solution. They are essentially visual blueprints of data processes, representing how data flows and transforms through a system. These graphs are structured as directed networks where nodes represent data states or processes, and edges indicate the flow of information between them. Think of it like a street map for your data – each intersection represents a key transformation, and the roads illustrate the movement of information.

The real power lies in generating simplified, essential graphs. By focusing on the core relationships and removing redundant connections, we can create "skeleton" graphs that reveal the underlying structure of the data. These condensed graphs act as Hasse diagrams, making it easy to understand the hierarchy and dependencies within the data. This method lets us condense a set of sequences into a concise visual representation of a concept.
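
As a rough illustration of the "skeleton" idea, here is a minimal sketch in Python using networkx; the pipeline stages and edges are made up for the example, and nx.transitive_reduction is used as one way to obtain the Hasse-diagram-like core of a DAG.

# A minimal sketch of a concept graph and its "skeleton", using networkx.
# The pipeline stages and edges are illustrative, not from the article.
import networkx as nx

# Nodes are data states/processes; edges are the flow of information.
G = nx.DiGraph()
G.add_edges_from([
    ("raw_events", "cleaned"),
    ("cleaned", "features"),
    ("features", "model_scores"),
    ("raw_events", "features"),   # redundant: implied by raw_events -> cleaned -> features
    ("cleaned", "model_scores"),  # redundant: implied by cleaned -> features -> model_scores
])

# Transitive reduction keeps only the essential edges, giving the
# Hasse-diagram-like skeleton described above (the graph must be a DAG).
skeleton = nx.transitive_reduction(G)
print(sorted(skeleton.edges()))
# [('cleaned', 'features'), ('features', 'model_scores'), ('raw_events', 'cleaned')]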

Benefits of Using Concept Graphs:

  • Improved Understanding: Visually grasp complex data workflows at a glance.
  • Faster Debugging: Quickly identify bottlenecks and inefficiencies in data pipelines.
  • Enhanced Collaboration: Communicate data insights effectively across teams.
  • Streamlined Design: Design more efficient and intuitive data systems.
  • Automated Pattern Discovery: Identify recurring patterns and trends in your data automatically.
  • Proactive Optimization: Anticipate and prevent potential issues before they arise.

One implementation challenge lies in handling noisy or incomplete data. Pre-processing and data cleaning become paramount to ensure the accuracy and reliability of the generated concept graphs. For example, consider a recommendation engine. Using these concept graphs, one can map user interaction sequences and deduce the most effective pathways leading to successful conversions, enabling hyper-personalized recommendations.
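
To make the recommendation-engine example concrete, here is a small, illustrative sketch (the event names and sessions are invented): it counts observed transitions between user events and greedily follows the most frequent ones to see which pathway most often leads to a conversion.

# Illustrative sketch: turning user interaction sequences into a weighted
# concept graph and reading off the most-travelled path to a conversion.
from collections import Counter

sessions = [
    ["landing", "search", "product", "cart", "purchase"],
    ["landing", "product", "cart", "purchase"],
    ["landing", "search", "product", "exit"],
    ["landing", "search", "search", "exit"],
]

# Count how often each transition (edge) occurs across all sessions.
edge_counts = Counter()
for s in sessions:
    edge_counts.update(zip(s, s[1:]))

# Greedily follow the most frequent outgoing edge from "landing"
# until reaching "purchase" or running out of observed transitions.
path, node, seen = ["landing"], "landing", set()
while node != "purchase":
    candidates = {dst: n for (src, dst), n in edge_counts.items()
                  if src == node and dst not in seen}
    if not candidates:
        break
    node = max(candidates, key=candidates.get)
    seen.add(node)
    path.append(node)

print(path)  # ['landing', 'search', 'product', 'cart', 'purchase']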

The ability to visualize data flow and conceptualize processes using concept graphs unlocks a new level of understanding. By embracing this approach, developers can create more intuitive, efficient, and insightful data-driven applications. This is just the beginning; further research into automation techniques and visualization tools will continue to make concept graphs accessible to a wider audience.

Related Keywords: Dataflow, Data pipeline, ETL, Data integration, Data governance, Conceptual modeling, Knowledge representation, Graph theory, Network analysis, System design, Architecture diagrams, Component diagrams, Visual programming languages, Node-RED, Blockly, Scratch, Data structures, Algorithms, Data analysis, Data mining, Big data, Machine learning workflows, Data-driven decision making, Data literacy

Supercharge Your TCJSGame: Introducing Sonic.js Performance Extension

2025-11-27 09:00:12

Supercharge Your TCJSGame: Introducing Sonic.js Performance Extension

TCJSGame with Sonic.js Performance Boost

If you've been using TCJSGame for your 2D web games, you've probably noticed that performance can become an issue as your games grow more complex. That's where Sonic.js comes in - a powerful performance extension that can dramatically improve your game's frame rates and smoothness.

What is Sonic.js?

Sonic.js is a performance optimization extension for TCJSGame that introduces advanced rendering techniques to boost your game's performance. It works by creating an intelligent dual-canvas system that minimizes expensive draw calls and optimizes rendering pipelines.

Key Performance Features

1. Dual-Canvas Rendering System

The core innovation of Sonic.js is its dual-canvas approach:

// Main display canvas (what players see)
const display = new Display();

// Offscreen buffer canvas (for optimized rendering)
let fake = new Display();
fake.canvas.style.display = "none"; // Hidden from users

2. Intelligent Component Caching

Sonic.js introduces smart caching for static and semi-static components:

// Components can be cached in the fake canvas
fake.add(staticBackground);
fake.add(environmentTiles);
fake.add(uiElements);

// While dynamic elements render directly
display.add(player);
display.add(enemies);
display.add(projectiles);

3. Frame Rate Independence with Delta Time

No more frame-rate dependent movement! Sonic.js provides proper delta time:

function update(deltaTime) {
    // Smooth, consistent movement regardless of frame rate
    player.x += 300 * deltaTime; // 300 pixels per second
    obstacle.speedX = -200 * deltaTime;
}

4. RequestAnimationFrame Optimization

Replaces the traditional setInterval game loop with modern requestAnimationFrame:

function ani(){
    // Optimal browser sync
    display.frame++;

    // Performance calculations
    if(refresh){
        display.fps = display.frame;
        display.frame = 0;
        display.deltaTime = 1 / display.fps;
    }

    // Dual rendering pipeline
    renderToFakeCanvas();
    renderToMainDisplay();

    return requestAnimationFrame(ani);
}

Installation & Setup

Getting started with Sonic.js is straightforward:

<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>My Optimized TCJSGame</title>
    <script src="https://tcjsgame.vercel.app/mat/tcjsgame-v3.js"></script>
    <script src="https://tcjsgame.vercel.app/mat/sonic.js"></script>
</head>
<body>
    <script>
        const display = new Display();
        display.perform(); // Activate Sonic.js enhancements
        display.start(800, 600);

        // Your game code here...
    </script>
</body>
</html>

Real-World Performance Comparison

Let's see how Sonic.js transforms a typical game scenario:

Before Sonic.js (Standard TCJSGame)

// Basic game loop - all components render every frame
const player = new Component(50, 50, "blue", 100, 100, "rect");
const obstacles = [];
const backgroundTiles = [];

// All components added to main display
display.add(player);
obstacles.forEach(obs => display.add(obs));
backgroundTiles.forEach(tile => display.add(tile));

// Performance: ~45 FPS with 100+ objects

After Sonic.js (Optimized)

// Optimized setup with Sonic.js
const player = new Component(50, 50, "blue", 100, 100, "rect");
const obstacles = [];
const backgroundTiles = [];

// Static elements to fake canvas (cached)
backgroundTiles.forEach(tile => fake.add(tile));

// Dynamic elements to main display
display.add(player);
obstacles.forEach(obs => display.add(obs));

// Performance: ~60 FPS (stable) with same 100+ objects

Performance Benchmarks

Scenario | Standard TCJSGame | With Sonic.js | Improvement
50 static sprites | 45 FPS | 60 FPS | +33%
200 sprites (50% offscreen) | 22 FPS | 55 FPS | +150%
Complex tilemap + 20 entities | 35 FPS | 60 FPS | +71%
Mobile device test | 25 FPS | 50 FPS | +100%

Advanced Configuration

For fine-grained control, Sonic.js offers advanced options:

// Custom performance tuning
Display.prototype.perform = function () {
    Display.prototype.start = function(width = 480, height = 270, no = document.body) {
        this.canvas.width = width;
        this.canvas.height = height;
        no.insertBefore(this.canvas, no.childNodes[0]);

        // Performance monitoring
        setInterval(() => {
            display.deltaTime = 1 / display.fps;
            refresh = true;
        }, 1000);

        fake.start();
    }
}

Best Practices with Sonic.js

1. Component Organization

// GOOD: Separate static and dynamic components
const staticComponents = [background, platforms, decorations];
const dynamicComponents = [player, enemies, particles, ui];

staticComponents.forEach(comp => fake.add(comp));
dynamicComponents.forEach(comp => display.add(comp));

2. Efficient Updates

function update(deltaTime) {
    // Use deltaTime for smooth movement
    player.x += playerSpeed * deltaTime;

    // Only update what changes frequently
    updateEnemies(deltaTime);
    updateParticles(deltaTime);

    // Static elements remain cached in fake canvas
}

3. Manual Cache Control

// Force cache refresh when needed
fake.refresh();

// Or update specific component groups
if (levelChanged) {
    fake.clear();
    loadNewLevelComponents();
    fake.refresh();
}

Real Game Example: Dino Game

Here's how Sonic.js improves a Chrome Dino-style game:

const display = new Display();
display.perform();
display.start();

// Static elements to fake canvas
const ground = new Component(800, 50, "brown", 0, 550, "rect");
fake.add(ground);

// Dynamic elements to main display
const dino = new Component(50, 50, "green", 100, 500, "rect");
const cactus = new Component(30, 50, "darkgreen", 800, 500, "rect");

display.add(dino);
display.add(cactus);

function update(deltaTime) {
    // Smooth obstacle movement
    cactus.speedX = -200 * deltaTime;

    // Physics with delta time
    dino.gravitySpeed += 9.8 * deltaTime;
    dino.y += dino.gravitySpeed;

    // Collision detection
    if (dino.crashWith(cactus)) {
        // Game over logic
        resetGame();
    }
}

Troubleshooting Common Issues

Problem: Components not appearing

Solution: Ensure components are added to both fake and display as needed:

// For static background elements
fake.add(backgroundElement);

// For dynamic game objects  
display.add(player);

Problem: Cached elements not updating

Solution: Use fake.refresh() when static content changes:

function changeBackground() {
    backgroundElement.color = "blue";
    fake.refresh(); // Update the cache
}

Problem: Performance still poor

Solution: Check your component distribution:

  • Too many dynamic components? Move static ones to fake
  • Complex images? Pre-load and use sprites
  • Heavy calculations? Optimize your update function

Conclusion

Sonic.js represents a significant leap forward for TCJSGame performance. By implementing intelligent caching, delta time movement, and optimized rendering pipelines, it can double or even triple your game's frame rates.

The best part? It's completely backward compatible with existing TCJSGame projects. Just include the script and call display.perform() to activate the enhancements.

Ready to boost your game's performance? Start using Sonic.js today and watch your frame rates soar!

Resources:

Have you tried Sonic.js? Share your performance improvements in the comments below!

Configure Azure Container Registry for a secure connection with Azure Container Apps

2025-11-27 08:57:38

Configuring Azure Container Registry (ACR) for a secure connection with Azure Container Apps is a crucial step in ensuring that your containerized applications are deployed safely and efficiently. This process involves setting up permissions and authentication so Azure Container Apps can securely pull container images from ACR without exposing credentials. By integrating ACR with managed identities or workload identities, teams can streamline deployments, improve security, and maintain a clean, automated DevOps workflow.

Configure a user-assigned managed identity

  • Open your Azure portal.

  • On the portal menu, select + Create a resource.

  • On the Create a resource page, in the Search services and marketplace text box, enter managed identity

  • In the filtered list of resources, select User Assigned Managed Identity.

  • On the User Assigned Managed Identity page, select Create.

  • On the Create User Assigned Managed Identity page, specify the following information:

Subscription: Specify the Azure subscription that you're using for this guided project.
Resource group: RG1
Region: Central US
Name: uai-az2003
Select Review + create.
Select Create.

Configure Container Registry with AcrPull permissions for the managed identity

  • In the Azure portal, open your Container Registry resource that was already created.
  • On the left-side menu, select Access Control (IAM).

  • On the Access Control (IAM) page, select Add role assignment.

  • Search for the AcrPull role, and then select AcrPull.

Note: This configuration can also be applied when assigning the AcrPush role.

  • Select Next.

  • On the Members tab, to the right of Assign access to, select Managed identity.

  • Select + Select members.

  • On the Select managed identities page, under Managed identity, select User-assigned managed identity, and then select the user-assigned managed identity created for this project.

For example: uai-az2003.

  • On the Select managed identities page, select Select.

  • On the Members tab of the Add role assignment page, select Review + assign.

  • On the Review + assign tab, select Review + assign.

Wait for the role assignment to be added.

Configure Container Registry with a private endpoint connection

Ensure that your Container Registry resource is open in the portal.

  • Under Settings, select Networking.
  • On the Private access tab, select + Create a private endpoint connection.

  • On the Basics tab, under Project details, specify the following information:

Subscription: Specify the Azure subscription that you're using for this project.
Resource group: RG1
Name: pe-acr-az2003
Region: Ensure that Central US is selected.
Select Next: Resource.

  • On the Resource tab, ensure the following information is displayed:

Subscription: Ensure that the Azure subscription that you're using for this project is selected.
Resource type: Ensure that Microsoft.ContainerRegistry/registries is selected.
Resource: Ensure that the name of your registry is selected.
Target sub-resource: Ensure that registry is selected.
Select Next: Virtual Network.

  • On the Virtual Network tab, under Networking, ensure the following information is displayed:

Virtual network: Ensure that VNET1 is selected
Subnet: Ensure that PESubnet is selected.
Select Next: DNS.

  • On the DNS tab, under Private DNS Integration, ensure the following information is displayed:

Integrate with private DNS zone: Ensure that Yes is selected.
Private DNS Zone: Notice that (new) privatelink.azurecr.io is specified.
Select Next: Tags and then Select Next: Review + create.

On the Review + create tab, when you see the Validation passed message, select Create.
Wait for the deployment to complete.

Verify your work
In this task, you verify that your configuration meets the specified requirements.

  • In the Azure portal, open your Container Registry resource.
  • On the Access Control (IAM) page, select Role assignments.
    Verify that the role assignments list shows the AcrPull role assigned to the User-assigned Managed Identity resource.

  • On the left-side menu, under Settings, select Networking.

  • On the Networking page, select the Private access tab.

  • Under Private endpoint, select the private endpoint that you created.

For example, select pe-acr-az2003.

  • On the Private endpoint page, under Settings, select DNS configuration.
    Verify the following DNS setting:
    Private DNS zone: set to privatelink.azurecr.io.

  • On the left-side menu, select Overview.
    Verify the following setting:
    Virtual network/subnet: set to VNET1/PESubnet.

To securely deploy containerized workloads in Azure Container Apps, you must establish a protected connection to Azure Container Registry (ACR), where container images are stored.

This configuration ensures that only authorized resources can pull images from the registry.

Confluent Cloud

2025-11-27 08:56:45

Confluent is a company that provides commercial support for Kafka. When enterprises use open-source software, they often look for paid product support. For example, if the software has bugs or security issues, enterprises need tech support in a time-bound manner. Confluent is one of the companies that provide such support.

Kafka is a stream processing framework. There are many of them out there, both proprietary and open-source.

Kafka is popular because it’s open-source, highly performant and flexible. I’m not going to go into lengthy comparisons with other frameworks. Instead, I’ll try to explain why you should use stream processing in the first place.

Why stream processing?

If you’re building a system that handles large volumes of data, which is increasingly common these days, you need to take advantage of distributed computing to horizontally scale your processing across multiple servers. Vertical scaling can only take you so far.

In order to achieve this, you should aim for small, stateless services. Each service takes an input and produces an output without depending on storage. This way, you can run the same process on many servers, processing events in parallel. You can think of each service as a simple input/output system, or a function.

Such a system would have decentralized orchestration. Instead of having a centralized agent orchestrate the work to be done, the services communicate with messages. The output of one service becomes a message, which can trigger other services that consume it.

There are many advantages with this approach. One is that you don’t rely as much on storage, which is notoriously tricky to scale horizontally. Another is that you can scale dynamically based on messaging load. By using cloud services and on-demand computing, you can also greatly reduce cost, since you only pay for CPU time.

To ingest data into this system and send messages between the services, you need a stream processing framework. In principle, it works like this:

The event stream, which is a distributed publish/subscribe message queue, exchanges messages between all the different parts of the system. Ideally, each sub-component is a well-defined function that takes an input and produces a consistent output, without depending on state. It’s like functional programming, except on a higher abstraction level.

The messages from the stream can be queued up and re-processed at any time, which means the services, or functions, are easily testable.
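
As a rough sketch of what one such stateless service can look like, here is a minimal example using the confluent-kafka Python client; the broker address and topic names are placeholders, and the "transformation" is deliberately trivial.

# Minimal sketch of a stateless stream-processing service:
# consume from one topic, transform, produce to another.
# Broker address and topic names are placeholders.
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "uppercase-service",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})

consumer.subscribe(["raw-events"])

try:
    while True:
        msg = consumer.poll(1.0)   # wait up to 1s for the next message
        if msg is None or msg.error():
            continue
        # Pure input -> output transformation; no local state, no storage.
        result = msg.value().decode("utf-8").upper()
        producer.produce("processed-events", value=result.encode("utf-8"))
        producer.poll(0)           # serve delivery callbacks
finally:
    consumer.close()
    producer.flush()

Because the service holds no state of its own, you can run as many copies of it as the topic's partition count allows, and replay queued messages through it when testing.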

Stream processing works beautifully for highly event-based scenarios with high-velocity data, but it also works well for a number of other scenarios.

Who are the biggest adopters of Confluent Kafka and Apache Kafka?

Apache Kafka is by now adopted across all industries. "Big" can mean different things for this question:

  • Biggest workloads: Early adopters from Silicon Valley such as LinkedIn, Uber, and Netflix regularly report on their massive (and still growing) volumes.
  • IoT solutions naturally generate massive volumes of data, too. Tesla is a famous example, processing trillions of messages from its IoT data (cars, energy, etc.) with Kafka.
  • Global banks have some of the biggest deployments in terms of widespread locations across many regions and continents. These big global deployments usually contain hundreds of Kafka clusters provided via a self-service API.
  • Another perspective on big deployments is event streaming as a platform strategy. Some enterprises use Kafka a lot while also having many other solutions like RabbitMQ, Pulsar, MSK, and so on. A big Kafka deployment is when an enterprise strategically decides to focus on Kafka (and a single vendor behind it) for all of its platforms (where it makes sense to use Kafka!).
Another perspective on big deployments is event streaming as platform strategy. Some enterprises use Kafka a lot while also have many other solutions like RabbitMQ, Pulsar, MSK, and so on. A big Kafka deployment is when an enterprise strategically decides to focus on Kafka (and a single vendor behind it) for all their platforms (where it makes sense to use Kafka!).