The Practical Developer

A constructive and inclusive social network for software developers.

Sridhar Vembu: Redefining Leadership in the Age of Noise

2025-11-12 11:07:42

In 2019, a journalist visiting rural Tamil Nadu was surprised to find a man in a simple cotton shirt walking down a muddy lane, laptop in hand, stopping now and then to greet farmers by name. That man wasn’t a local teacher or a government official — he was Sridhar Vembu, the founder of Zoho Corporation, a global SaaS powerhouse valued in billions.

When asked why he had moved from Silicon Valley to a small Indian village, his answer was characteristically grounded:

“If rural India doesn’t develop, India doesn’t develop. Technology must serve the people who need it the most.”

That one statement captures both his philosophy and his leadership DNA — purpose over prestige, substance over spectacle.

Mind Behind the Mission


Born in Thanjavur, Tamil Nadu, into a humble middle-class family, Sridhar Vembu’s early life revolved around hard work, values, and education. His brilliance earned him a place at IIT Madras and later a Ph.D. from Princeton University. After a brief stint at Qualcomm in San Diego, he realized that personal success in the West meant little if it did not contribute to India’s collective progress.

He didn’t reject the West — he redefined success itself. To him, true progress was not about joining the system but building one that others could join.

Entrepreneurship by Design, Not Hype

In 1996, along with his brothers, Sridhar Vembu co-founded AdventNet, which later evolved into Zoho Corporation. The early days were quiet and intense. While most startups chased investors, Zoho chased excellence.
Vembu believed in a radical principle:

“A business must create value before it creates valuation.”

This meant building slowly, deliberately, and independently — without external funding. The result was extraordinary. Over the years, Zoho transformed from a single-product company to a complete ecosystem of over 55 business applications, spanning CRM, marketing, finance, HR, operations, and analytics.

The Grandeur of Zoho: India’s Silent Powerhouse


When people outside India hear the name Zoho, they often think of a single product — maybe CRM, email, or office tools. But those who’ve experienced its depth know that Zoho is not a product; it’s a universe.

Over the past two decades, Zoho has quietly built one of the most comprehensive business software ecosystems in the world — a complete suite of over 55 integrated applications that help organizations manage everything from sales, marketing, and finance to HR, operations, and analytics.

At its core lies the Zoho One platform — an “operating system for business.”
With a single subscription, a company can access every essential tool to run its operations — CRM, accounting, payroll, recruitment, inventory, project management, communication, analytics, and even AI-driven insights.

What’s remarkable is that all of this is homegrown — designed, built, and maintained in India.
No acquisitions. No outsourcing. Every line of code reflects an obsession with quality, efficiency, and independence.

Empowering Small and Medium Enterprises (SMEs): The True Impact


While Zoho serves global enterprises, its real magic lies in how it empowers small and mid-sized businesses — the backbone of every economy.
Sridhar Vembu often says that “software should level the playing field, not tilt it.” And Zoho does just that.

Here’s how:

  • Affordability with depth: Where global SaaS products price themselves out of reach for small players, Zoho offers enterprise-grade capabilities at a fraction of the cost — often 90% cheaper than equivalent Western solutions.
  • Ease of adoption: Its low-code and no-code tools (like Zoho Creator) allow non-technical founders and local entrepreneurs to automate workflows and build applications without heavy engineering teams.
  • Localized empowerment: Zoho’s tools support local languages, currencies, and compliance standards, enabling small businesses from rural India to compete globally.
  • Simplicity of integration: Because all Zoho apps are part of a unified ecosystem, data flows seamlessly across departments — turning small enterprises into digitally intelligent organizations.

For thousands of small manufacturers, retailers, and service firms across India and beyond, Zoho has become a digital growth engine — one that doesn’t demand Western consultants, complex licensing, or expensive infrastructure.

In short, Zoho is democratizing enterprise technology — something even the biggest multinationals have struggled to do genuinely.

Challenging the Global Goliaths

It is often said that “Zoho is India’s answer to Microsoft, Google, and Salesforce — all rolled into one.”
And that’s not an exaggeration.

  • Zoho Mail and Workplace compete directly with Google Workspace and Microsoft 365.
  • Zoho CRM stands shoulder to shoulder with Salesforce, offering similar (and sometimes better) features at a fraction of the price.
  • Zoho Books and Zoho People rival QuickBooks, Xero, and Workday in functionality.
  • And Zoho Analytics, one of the most elegant BI tools in the market, competes with Tableau and Power BI.

Yet, Zoho doesn’t play the same game. It doesn’t rely on aggressive marketing, data monetization, or acquisition-fueled expansion.
Its growth model is built on trust, privacy, and performance — values increasingly rare in today’s digital economy.

By choosing to stay private, by refusing to sell user data, and by building every core technology in-house, Zoho has become a symbol of sovereign innovation — a company that proves India can produce world-class technology without dependency.

For global corporations used to controlling markets through pricing power and platform dominance, Zoho represents a quiet but powerful challenge.
It proves that ethical, self-reliant technology can still win — and that innovation doesn’t need a Silicon Valley ZIP code or a venture capital halo.

The Rural Renaissance: Turning Vision into Ecosystem

As Zoho was conquering global markets, Vembu began another quiet revolution — this time in rural India. He relocated to Tenkasi, where he built the Zoho Schools of Learning — an unconventional program that replaces formal college education with hands-on training in technology, design, and business thinking.

Through this initiative, rural youth — many from modest backgrounds — are now directly contributing to world-class software development. Over 15% of Zoho’s workforce comes from these schools.
Vembu didn’t just create jobs; he created an ecosystem of opportunity.

This is localization at its highest form — not about shifting factories, but building intellectual capital in villages.

A Lesson in Leadership: The Power of Silence

In a world obsessed with personal branding and noise, Sridhar Vembu practices a leadership style rooted in silence and substance.
He avoids the limelight, refrains from social media debates, and often cycles to work through the narrow lanes of Tenkasi.

He believes that leadership is not about being seen — it’s about creating value that others feel.
That quiet confidence — the ability to influence without announcing it — is what separates builders from showmen.

Beyond Zoho: Building the Next Generation of Builders

Today, Vembu’s vision extends beyond Zoho. He advocates for distributed development ecosystems — where each village can become a micro-hub of technology, education, and entrepreneurship.
His initiatives focus on:

  • Rural skill development and digital literacy
  • Micro-enterprise incubation
  • Sustainable living models that blend agriculture with technology

He envisions a future where “progress doesn’t mean migration” — where people can stay in their hometowns and still participate in the global economy.

And true to his philosophy, he does this quietly — no PR campaigns, no hashtags, no publicity drives. Just work. Real, tangible, impactful work.

The Legacy of Quiet Builders

Sridhar Vembu’s journey is more than the story of a successful entrepreneur. It is the story of how conviction can outlast capital, and how purpose can outperform publicity.

In a business world driven by valuations and visibility, he reminds us that freedom, focus, and faith in people are the ultimate multipliers of success.

Zoho is not just India’s pride; it’s India’s proof — that global excellence can emerge from humility, that innovation can coexist with simplicity, and that true leadership is defined not by how loud you are, but by how deeply you serve.

Here’s to the quiet builders — like Sridhar Vembu — who lead not from the stage, but from the soil. They remind us that the future belongs not to those who shout the loudest, but to those who build the strongest.


The Psychology Behind Powerful Prompts

2025-11-12 11:02:49

Most people believe prompting is a technical skill.
In reality, it’s a psychological one.

Every time you type into ChatGPT, you’re not just talking to a model; you’re designing how an intelligence interprets intention, tone, and context.

And the difference between an average output and an exceptional one usually comes down to how well you understand the human mind behind the machine.

1️⃣ The Mirror Effect:

AI mirrors you.
Your tone, clarity, confidence, and precision all echo back in its output.

If you prompt from confusion, you’ll get noise.
If you prompt from clarity, you’ll get structure.

“AI doesn’t read your words.
It reads the state of mind behind them.”

Before writing a prompt, pause and ask:
Am I clear about what I really want and why?

2️⃣ Emotional Framing Changes Output Quality:

People forget that large language models are trained on human emotion—they pick up cues from empathy, curiosity, and urgency.

Try these two versions:

❌ “Write an onboarding email.”
✅ “Write an onboarding email that makes a new user feel excited, confident, and valued.”

Same instruction.
Different emotion.
Completely different response.

Powerful prompts speak to feelings, not just functions.

3️⃣ The Authority Cue:

AI responds differently when it knows who it is.
But it performs even better when it knows who you are.

Include both identities.

You are a senior technical writer.
I am the founder of an AI company preparing documentation for developers.

Why it works:
Authority + Context + Purpose = Trust + Precision.
The psychology of respect is universal, even in machines.

4️⃣ The Cognitive Load Rule:

Just as a human brain struggles to hold too much information at once, so does an AI model.
Overload it, and you'll get generic results.

So I use this 3-section mental structure:

Context → Goal → Constraints

Instead of dumping everything in one paragraph, separate ideas.
This lowers cognitive load and raises creative accuracy.

5️⃣ The Reward Trigger:

Humans perform better when they know the success criteria.
So does AI.

End your prompt with a reward signal:

“Deliver the best version possible.
If it’s good, I’ll use it in production.”

It sounds trivial, but it subtly tells AI what quality means to you.
Outputs often improve by 10–20% when you include feedback or a purpose anchor.

My Own Insight:

Most people fight AI with words.
Prompt thinkers lead AI with psychology.

Once you realise that every AI response is shaped by human emotion, context, and structure, you stop prompting mechanically and start communicating intentionally.

The secret isn’t in what you ask.
It’s in how your mind frames the question.

Final Thought:

Prompting isn’t programming.
It’s persuasion.
And the better you understand human behaviour, the more extraordinary your AI collaborations become.

Machines don’t create meaning.
We do.
AI just learns to follow our pattern of thought.

Next Article

Next, we’ll explore how to turn this psychology into action:

“Prompting for Revenue: How I Build Books, Brands & Products With ChatGPT.”

This one will show exactly how I translate prompt mastery into business results, step by step.

📐 Material 3 Adaptive: Implementing Window Size Classes in Kotlin Compose

2025-11-12 10:59:14

Hey everyone! 👋

I wanted to share a look at a crucial technique for modern Android development: implementing Window Size Classes to create truly adaptive layouts that align with the recent updates to Material 3 Adaptive Guidance.

MobileWindowSize.kt

The MobileWindowSize enum in this code snippet is the foundation. It categorizes the available screen space based on Google's defined WindowWidthSizeClass and WindowHeightSizeClass (Compact, Medium, Expanded).
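The original snippet is shown as an image, so here is a rough, hypothetical reconstruction. Only the MOBILE_PORTRAIT and DESKTOP profile names appear in the article; MOBILE_LANDSCAPE, TABLET, and the exact mapping logic are assumptions, written against the material3 window-size-class API:

```kotlin
import androidx.compose.material3.windowsizeclass.WindowHeightSizeClass
import androidx.compose.material3.windowsizeclass.WindowSizeClass
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass

// Hypothetical sketch of MobileWindowSize.kt. MOBILE_LANDSCAPE and TABLET
// are assumed names; the article only confirms MOBILE_PORTRAIT and DESKTOP.
enum class MobileWindowSize {
    MOBILE_PORTRAIT, MOBILE_LANDSCAPE, TABLET, DESKTOP;

    companion object {
        fun from(sizeClass: WindowSizeClass): MobileWindowSize =
            when (sizeClass.widthSizeClass) {
                WindowWidthSizeClass.Compact -> MOBILE_PORTRAIT
                WindowWidthSizeClass.Medium ->
                    // A medium width with a compact height suggests a phone
                    // in landscape rather than a tablet
                    if (sizeClass.heightSizeClass == WindowHeightSizeClass.Compact)
                        MOBILE_LANDSCAPE
                    else
                        TABLET
                else -> DESKTOP
            }
    }
}
```

However the enum is actually defined, the point is the same: one small type centralizes the breakpoint decisions the rest of the UI branches on.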

🧐 Material 3 & Component Placement:

This code directly drives key design decisions based on the available screen space, like:

  • Navigation: Deciding whether to use a bottom NavigationBar (Compact), a NavigationRail (Medium), or a permanent NavigationDrawer (Expanded).

  • Content Layout: Reorganizing content (e.g., using a master-detail pattern) to maximize screen utility on tablets and desktops.

By mapping screen dimensions to profiles like MOBILE_PORTRAIT or DESKTOP, we ensure our app components are always displayed according to Material 3's best practices for large screens.

📚 Resources & Official Documentation

For the full guide on implementation and understanding the key breakpoints used in M3 Adaptive, these are the best resources:

1. Official Google Guide: Check out the complete guide on implementing this strategy: Use window size classes | Adaptive layouts in Compose | Android Developers

2. Practical Course: To see another code implementation of Window Size Classes using Material 3 Adaptive, check out this Compose crash course from Philipp Lackner: The Full Jetpack Compose Responsive UI Crash Course (The topic starts at [31:21]).

Implementing this logic is non-negotiable for delivering a high-quality, multi-form factor user experience!

Secure and Efficient Token Management with Go and Redis

2025-11-12 10:56:34

Originally published on Medium

Carolina Vila-Nova

In a landscape of microservices and exposed APIs, secure and efficient management of authentication tokens is crucial to guarantee the integrity and performance of the system.

I recently had the opportunity to work on an approach that combines caching with automatic token refresh, taking advantage of Go's robust ecosystem and Redis's high performance.

Below, I detail this solution, covering the initial challenges, the design strategies, and the benefits achieved.


Our starting point was the need to authenticate against an external API so that we could generate informative messages to users in that client's environment, in specific situations defined by our business rules.

A question that quickly arose was where to store the token we obtained, given the high volume of traffic on our API. Storing it directly in a service instance was not a viable option, as it would jeopardize the security and scalability of the system.

The central idea was to store the token in a distributed cache with a time-to-live (TTL) equal to the token's own validity period. Several characteristics made Redis a deliberate choice as the caching system for this use case, among them:

  • Performance: with in-memory storage, Redis offers extremely fast response times, on the order of milliseconds. This is essential when the token must be fetched and validated quickly on every request. A slow cache would directly impact application performance.

  • Native TTL support: Redis expires the token automatically after the configured time, ensuring it is renewed before expiring and preventing authentication errors. This built-in feature simplifies the refresh logic and guarantees its reliability.

  • Atomicity: operations either happen as a whole or not at all, guaranteeing data consistency in the cache. This matters for token caching to avoid race conditions when multiple service instances try to read or update the token simultaneously.

  • Horizontal scalability: clustering support allows the cache capacity to scale horizontally as demand grows, without compromising performance or creating bottlenecks.

The solution

Before making any request to the protected API, the function checks whether a valid token exists in the cache.

func GetToken(key string) (string, error) {  
 cacheToken, err := cache.Get(key)  
 if err != nil {  
  return "", err  
 }  
 return cacheToken, nil  
}

The system also checks whether the token is still valid, allowing a safety margin (a "buffer") to avoid sending requests with tokens that are about to expire.

Libraries such as github.com/golang-jwt/jwt/v5 let us safely decode the JWT from its Claims and check the expiration date.

import (  
    "errors"  
    "time"  

    "github.com/golang-jwt/jwt/v5"  
)  

func ExpiredToken(tokenString string, buffer time.Duration) (bool, error) {  
    token, _, err := new(jwt.Parser).ParseUnverified(tokenString, jwt.MapClaims{})  
    if err != nil {  
        return true, err // Treat as expired if the token cannot be parsed  
    }  

    claims, ok := token.Claims.(jwt.MapClaims)  
    if !ok {  
        return true, errors.New("unexpected claims type") // Treat as expired if claims cannot be extracted  
    }  

    exp, ok := claims["exp"].(float64)  
    if !ok {  
        return true, errors.New(`missing "exp" claim`) // Treat as expired if "exp" is absent  
    }  

    expirationTime := time.Unix(int64(exp), 0)  
    return time.Now().Add(buffer).After(expirationTime), nil  
}  

// Note: this library offers more validation options. In this implementation,  
// ParseUnverified is used only to extract the expiration date.  
// The token is actually validated by the authentication API.

If the token is not found in the cache or has expired (within the safety "buffer"), the system requests a new token from the authentication API and stores it in the cache with the appropriate TTL.

Go's crypto/rand package can also be used to generate a unique ID for each token (a "nonce"), improving security and traceability.
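A minimal, self-contained sketch of that nonce generation (the `newNonce` helper and its hex encoding are illustrative choices, not part of the original implementation):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newNonce returns a hex-encoded random identifier of n bytes,
// suitable for tagging each token for traceability.
func newNonce(n int) (string, error) {
	b := make([]byte, n)
	// crypto/rand draws from the OS's cryptographically secure source
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	nonce, err := newNonce(16)
	if err != nil {
		panic(err)
	}
	fmt.Println("token nonce:", nonce)
}
```

The nonce could be stored alongside the token in Redis (for example, as part of the cache key) to correlate log entries with a specific token generation.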

func GetNewToken() (string, error) {  
 // Logic for obtaining a new token from the authentication API  
 // ...  
 return "newToken", nil  
}  

func StoreTokenCache(key string, token string, ttl time.Duration) error {  
 err := cache.Set(key, token, ttl)  
 if err != nil {  
  return err  
 }  
 return nil  
}

The cache TTL ensures the token is automatically refreshed before it expires, avoiding repeated retries on authentication failures. The safety margin guarantees the token is renewed even before it expires, preventing errors. Goroutines perform the refresh asynchronously, without blocking the application's main flow.

func stageRefresh(key string, timeToRefresh time.Duration) {  
 go func() { // Spawn a goroutine for the refresh  
  time.Sleep(timeToRefresh) // Wait for the scheduled duration  

  newToken, err := GetNewToken()  
  if err != nil {  
   fmt.Println("Failed to renew token:", err)  
   return  
  }  

  // tokenTtl is assumed to be defined at package level  
  err = StoreTokenCache(key, newToken, tokenTtl)  
  if err != nil {  
   fmt.Println("Failed to store new token in cache:", err)  
   return  
  }  

  fmt.Println("Token refreshed")  
 }()  
}  

// Called when the token is first generated/obtained  
stageRefresh(tokenKey, timeToRefresh)

Benefits of this approach

  • Security: the token is not stored locally for an indefinite time, reducing the window of opportunity for attacks.

  • Scalability: the distributed cache lets multiple service instances share the same token, eliminating redundant authentications.

  • Resilience: the automatic refresh guarantees the token is always available, minimizing service interruptions.

  • Performance: avoids retries, optimizing request response times.

In short, caching with automatic refresh allowed us to build a robust, secure, and scalable token-management solution, guaranteeing the availability and integrity of the service.

Sources

github.com/go-redis/redis/v8: for interacting with Redis to store and/or invalidate tokens efficiently

github.com/golang-jwt/jwt/v5: for handling JWT tokens, including decoding, validation and, optionally, generation (although token generation is typically done by the authentication server)

Unlocking the Power of Microservices: How to Build Scalable Apps and Earn Online

2025-11-12 10:53:00

In today’s fast-paced tech world, building scalable applications is no longer a luxury; it's a necessity. As a developer, mastering microservices architecture can transform the way you build apps, and it can also open doors to exciting freelance opportunities. Imagine being able to build systems that not only scale effortlessly but also command premium pay for your expertise. Sounds like a dream, right?

Let me take you on a journey through the world of microservices, why it’s so in-demand right now, and how you can use it to unlock new career opportunities and build a lucrative online presence.

What Are Microservices?

Before we dive into the opportunities, let’s get clear on what microservices are. At its core, microservices architecture breaks down an application into smaller, independent services. Each service handles a specific task and can be developed, deployed, and scaled independently. Think of it like a team of specialists, each doing its own job but working together to form a larger system. This is a sharp contrast to the monolithic architecture, where everything is tightly coupled, making it harder to scale and manage.

A Simple Example:

Imagine you’re building an e-commerce website. In a monolithic architecture, everything — user authentication, payment processing, product catalog, and order management — might exist in one big, complex system. But with microservices, each of these functions could be its own independent service. The payment service could be updated or scaled without touching the rest of the app. The product catalog service can be fine-tuned without affecting user login or checkout.

Sounds neat, doesn’t it? This modular approach has a lot of advantages, especially when it comes to scaling apps efficiently. But here's the most exciting part: microservices are highly in demand in the industry right now, and they are reshaping the way companies approach software development.

Why Microservices Are Hot in 2023

In 2023, cloud-native technologies like Kubernetes, Docker, and AWS are enabling developers to create cloud-based microservices applications faster and more efficiently than ever before. This wave is pushing companies to adopt microservices, making it a valuable skill for any developer.

The Microservices Demand:

With companies moving towards cloud-based infrastructures, microservices have become a game-changer for scalability and agility. By adopting microservices, businesses can:

  • Scale individual components of their applications independently.
  • Quickly roll out new features without affecting the entire system.
  • Improve fault tolerance — if one service fails, the others can continue running smoothly.

It’s a win-win for developers too. As companies seek developers who specialize in building microservices, freelance rates for this skill are skyrocketing. Some developers are landing consulting gigs with companies building enterprise-level applications, while others are monetizing their expertise by launching SaaS products.

How You Can Leverage Microservices for Freelancing

Let’s be real for a second: The world of freelancing can be overwhelming. But specializing in microservices can give you the competitive edge needed to stand out in a crowded market.

  1. Freelance Projects: Businesses are actively looking for developers who can help them transition from monolithic systems to microservices. Whether you're building a new app from scratch or re-architecting an existing one, there’s a growing demand for skilled developers in this space.
  • Tip: Platforms like Upwork, Toptal, and Freelancer are filled with opportunities for microservices-focused projects. As a specialist, you can demand higher rates for your expertise.
  2. Consulting: Many large organizations struggle with adopting microservices due to the complexity involved. If you’ve mastered this architecture, you can offer consulting services to businesses in need of guidance on best practices, tools, and implementation strategies.
  • Tip: Start by offering free insights or blog posts on microservices to build a portfolio. Over time, you can offer paid consulting sessions.
  3. Building Products: Microservices architecture is also fantastic for building and scaling software products. With this approach, you can easily integrate third-party services, manage traffic spikes, and iterate rapidly. Plus, microservices are ideal for SaaS (Software as a Service) models.
  • Tip: Create an MVP (Minimum Viable Product) with microservices and monetize it via subscription models. The flexibility of microservices allows you to pivot and scale as your user base grows.
  4. Teaching and Tutorials: If you love to share knowledge, microservices is an area with a lot of educational demand. Developers at all skill levels want to understand microservices’ fundamentals, best practices, and implementation strategies. You can create courses, YouTube tutorials, or even write detailed blogs on the subject to earn passive income.

How to Start Learning and Mastering Microservices

The best way to get started with microservices is by breaking it down into digestible chunks. Here’s a simple roadmap to get you going:

  1. Learn the Basics of Distributed Systems: Since microservices rely on distributed systems, it's important to grasp concepts like network communication, message brokers, and load balancing.
  2. Master Cloud Platforms and Containers: Tools like Docker and Kubernetes are integral to building and deploying microservices.
  • Recommended courses: Check out free resources on Docker and Kubernetes on platforms like Udemy, Coursera, or YouTube.
  3. Understand the Tech Stack: Microservices require a tech stack that supports independent service development. Popular choices include Node.js, Java Spring Boot, Go, and Python for building services.
  4. Practice with Real-World Projects: Building small projects or contributing to open-source microservices projects will help solidify your knowledge and give you real-world experience.
  • Recommended platform: GitHub to find open-source microservices projects.
  5. Build Your Portfolio: Show off your work by creating a personal portfolio or blog. This will not only boost your credibility but also attract potential clients and employers.

The Bottom Line: Why Microservices Is Your Ticket to High-Earning Opportunities

In 2023, microservices are more than just a buzzword. They are a proven way to build scalable applications, and developers who specialize in this field are being rewarded with high-paying projects and consulting gigs. Whether you're a freelancer looking to stand out or an aspiring entrepreneur looking to create a SaaS product, mastering microservices can unlock numerous opportunities.

So, if you're ready to take the plunge into the world of microservices, now is the time. It’s not just a trend — it’s the future of software development. Start learning, building, and leveraging your expertise to earn online and create innovative solutions for businesses.

Final Thoughts

Microservices architecture is an exciting, evolving field that offers endless possibilities. Whether you're looking to scale your freelance career or build the next big SaaS product, mastering this approach can be a game-changer. So, start small, keep learning, and watch your career soar.

Concurrency in Go: using locks and channels to avoid deadlocks

2025-11-12 10:49:16

Originally published on Medium

Carolina Vila-Nova
Concurrency is one of Go's superpowers, but it can also become a headache for the beginning developer. Fortunately, Go gives us multiple native tools for dealing with concurrent code; the question is knowing when to use each one. This article covers that topic, using examples of varying complexity.


Why synchronize?

Imagine you are building a visitor counter for a website.

type VisitorCounter struct {  
    count int  
}  

func (c *VisitorCounter) Increment() {  
    c.count++    
}  

func (c *VisitorCounter) GetCount() int {  
    return c.count    
}

At first glance, the code looks correct. But when N goroutines run simultaneously (and in random order), something strange happens:

// Two HTTP requests arrive simultaneously  
// Goroutine A (User 1):               Goroutine B (User 2):  
reads c.count (current value: 100)     reads c.count (current value: 100)  
increments to 101                      increments to 101  
writes 101 to c.count                  writes 101 to c.count  

// Result: count = 101 (should be 102)  
// One visitor was "lost"

This is what is called a "race condition": because the threads access a shared resource without being synchronized, the final result depends on which one runs first. On a site with thousands of simultaneous visitors, you could lose a significant amount of data. This makes clear the need to synchronize access to shared data.

Using mutexes

One possible solution is to use the mutex synchronization primitive (short for "mutual exclusion") to guarantee that only one goroutine can modify the counter at a time:

type SafeVisitorCounter struct {  
    mu    sync.Mutex  
    count int  
}  

func (c *SafeVisitorCounter) Increment() {  
    c.mu.Lock()  
    defer c.mu.Unlock()  
    c.count++  // Only one goroutine executes this at a time  
}  

func (c *SafeVisitorCounter) GetCount() int {  
    c.mu.Lock()  
    defer c.mu.Unlock()  
    return c.count  // Reads also need to be protected  
}

Now you have a safer counter, but also a performance problem. If your site has a dashboard that displays the counter in real time and 1,000 users want to view it simultaneously, they will all have to wait in line just to read the value, even though there is no conflict between reads.

Optimizing reads with RWMutex

This is where we can use a mutex that distinguishes between read and write operations:

type OptimizedVisitorCounter struct {  
    mu    sync.RWMutex    
    count int  
}  

func (c *OptimizedVisitorCounter) Increment() {  
    c.mu.Lock()         // Writes still need exclusive access  
    defer c.mu.Unlock()  
    c.count++  
}  

func (c *OptimizedVisitorCounter) GetCount() int {  
    c.mu.RLock()        // Multiple reads can happen simultaneously  
    defer c.mu.RUnlock()  
    return c.count  
}

With this change, N users can read the counter simultaneously, while only writes (increments) need to wait for exclusive access. Performance improves for read-heavy workloads.

A more complex example: token management

Now that we understand the basics, let's apply them to a more realistic problem. We'll build a microservice that needs to authenticate against an external API using JWT tokens. These tokens expire periodically and need to be renewed, but you don't want to make a new API call on every request.


func (s *TokenService) GetToken(ctx context.Context) (string, error) {  
    s.mu.RLock()  
    if time.Now().Before(s.expiresAt) && s.token != "" {  
        token := s.token  
        s.mu.RUnlock()  
        return token, nil  
    }  
    s.mu.RUnlock()  

    s.mu.Lock()  
    defer s.mu.Unlock()  

    newToken, expiresAt, err := s.apiClient.GetNewToken(ctx)  
    if err != nil {  
        return "", err  
    }  

    s.token = newToken  
    s.expiresAt = expiresAt  
    return newToken, nil  
}

There is a subtle problem here. If 50 requests arrive simultaneously while the token is expiring, all of them will try to renew it, resulting in 50 unnecessary calls to the external API.

The "double-check" pattern

An elegant solution to this problem is the "double-check" pattern: checking the condition twice, once under the read lock and again under the write lock.


func (s *SimpleTokenService) GetToken() string {
    // First check: fast read path
    s.mu.RLock()
    if time.Now().Before(s.expiry) && s.token != "" {
        token := s.token
        s.mu.RUnlock()
        return token
    }
    s.mu.RUnlock()

    // Second check, under the write lock
    s.mu.Lock()
    defer s.mu.Unlock()

    // Another goroutine may have already refreshed the token
    if time.Now().Before(s.expiry) && s.token != "" {
        return s.token
    }

    // Only one goroutine gets this far
    fmt.Println("🔄 Making API call...")
    time.Sleep(100 * time.Millisecond)

    s.token = fmt.Sprintf("token-%d", time.Now().Unix())
    s.expiry = time.Now().Add(5 * time.Second)
    return s.token
}

The double-check prevents the "thundering herd" problem, which occurs when many threads compete for a limited resource the moment it becomes available. Instead of each one making its own API call, they all share the result of a single call.

When things go wrong: deadlocks

As our application grows more complex and multiple resources are shared among several processes, the risk of deadlock emerges. Common scenarios include access to devices and database transactions.

Consider a bank transfer system:

type Account struct {  
    id      int  
    mu      sync.Mutex  
    balance float64  
}  

func Transfer(from, to *Account, amount float64) error {  
    from.mu.Lock()  
    defer from.mu.Unlock()  

    to.mu.Lock()    
    defer to.mu.Unlock()  

    if from.balance < amount {  
        return errors.New("insufficient balance")
    }  

    from.balance -= amount  
    to.balance += amount  
    return nil  
}

But look what happens when two simultaneous transfers run in opposite directions:

// Goroutine 1: Transfer(contaA, contaB, 100)
contaA.mu.Lock() // ✅ Acquires lock A
contaB.mu.Lock() // ⏳ Waits for Goroutine 2 to release B
// Goroutine 2: Transfer(contaB, contaA, 200)
contaB.mu.Lock() // ✅ Acquires lock B
contaA.mu.Lock() // ⏳ Waits for Goroutine 1 to release A
// Result: both wait forever

Both processes end up waiting on each other indefinitely, each holding a resource the other needs, one that can only be released by the very process that is blocked. Neither can make progress, and potentially the whole system stalls.

One possible solution is to establish a consistent order for acquiring the locks:

func SafeTransfer(from, to *Account, amount float64) error {  
    first, second := from, to  
    if first.id > second.id {  
        first, second = second, first  
    }  

    first.mu.Lock()  
    defer first.mu.Unlock()  

    second.mu.Lock()  
    defer second.mu.Unlock()  

    if from.balance < amount {  
        return errors.New("insufficient balance")
    }  

    from.balance -= amount  
    to.balance += amount  
    return nil  
}

Now, regardless of the direction of the transfer, the locks are always acquired in the same order, eliminating the possibility of deadlock.

When to use channels

When there is no shared state to protect, but you need to guarantee a processing order, coordinate behavior, or communicate state, Go offers a different tool: channels.

In the following example, we want to process image uploads in a pipeline:

func ProcessImageUploads(imagePaths []string) []ProcessedImage {  
    rawImages := make(chan string, 10)  
    resizedImages := make(chan ResizedImage, 10)  
    finalImages := make(chan ProcessedImage, 10)  

    go func() {  
        defer close(rawImages)  
        for _, path := range imagePaths {  
            rawImages <- path  // "Here is an image to process"
        }  
    }()  

    go func() {  
        defer close(resizedImages)  
        for path := range rawImages {  
            img := loadImage(path)  
            resized := resize(img)  
            resizedImages <- resized  // "Here is a resized image"
        }  
    }()  

    go func() {  
        defer close(finalImages)  
        for resized := range resizedImages {  
            final := applyFilters(resized)  
            finalImages <- final  // "Here is a processed image"
        }  
    }()  

    var results []ProcessedImage  
    for final := range finalImages {  
        results = append(results, final)  
    }  

    return results  
}

Here, channels ensure the images flow through a pipeline: the stages run concurrently for better performance, and each one consumes images as they become available.

Combining both approaches

In real systems, we often need both approaches. Consider a metrics system that collects data from multiple sources:

type MetricsCollector struct {  
    // Channel for coordination - receiving events
    events chan MetricEvent  

    // Lock for protection - guarding aggregated metrics
    mu      sync.RWMutex  
    metrics map[string]float64  
}  

func (m *MetricsCollector) Start() {  
    go func() {  
        for event := range m.events {  // Channel: coordination
            m.mu.Lock()                // Lock: data protection
            m.metrics[event.Name] += event.Value  
            m.mu.Unlock()  
        }  
    }()  
}  

func (m *MetricsCollector) RecordEvent(name string, value float64) {  
    m.events <- MetricEvent{Name: name, Value: value}  // Channel: communication
}  

func (m *MetricsCollector) GetMetric(name string) float64 {  
    m.mu.RLock()     // Lock: safe read
    defer m.mu.RUnlock()  
    return m.metrics[name]  
}

Here we use channels to coordinate event delivery (behavior) and locks to protect the metrics map (data).

Decision framework

Use a mutex when:

  • You have shared data (variables, maps, slices)
  • Multiple goroutines need to read/write the same data
  • You want simplicity and performance when protecting state

Use an RWMutex when:

  • You have far more reads than writes
  • Concurrent read performance matters
  • You are still protecting shared data

Use channels when:

  • You want to coordinate behavior between goroutines
  • You are implementing pipelines or worker pools
  • You want to communicate values, not protect shared state

Use both when:

  • You have a complex system that needs both coordination AND data protection
  • You want to clearly separate communication from state protection

Remember Go's maxim:

“Don’t communicate by sharing memory; share memory by communicating.”

Conclusion

The key to robust concurrent Go code is choosing the right tool for each specific problem, understanding the trade-offs, and always thinking through failure scenarios.

PS: Go ships with a nifty tool for detecting data races: the -race flag

go run -race main.go