
New Features in .NET 10 & C# 14 — The Expert’s Playbook (2025)

2025-11-11 23:11:20


.NET 10 (LTS) and C# 14 dropped today — November 11, 2025. As an LTS release, .NET 10 is supported through November 14, 2028. This post is your concise, code‑first tour of what's new across the stack: runtime, C#, ASP.NET Core, and EF Core 10.

Why this post?

Because this release meaningfully changes how you start small (file‑based apps), how you compose APIs (Minimal API validation + OpenAPI 3.1), and how you model data (EF Core 10 complex types & JSON). And C# 14 is packed with quality‑of‑life and performance wins.

Table of Contents

  • What’s New in .NET 10
  • What’s New in C# 14
  • What’s New in ASP.NET Core in .NET 10
  • What’s New in EF Core 10
  • Other Changes in .NET 10
  • Migration Notes & Practical Tips
  • Summary

What’s New in .NET 10

1) File‑Based Apps (single‑file C#)

C# now behaves like a first‑class scripting language for CLIs and utilities. You can run a single *.cs file with dotnet run — no .sln or .csproj required.

dotnet run main.cs

File‑based apps support SDK and NuGet references via #: directives at the top of your file:

#:sdk Microsoft.NET.Sdk.Web
#:package Microsoft.EntityFrameworkCore.Sqlite@9.0.0

using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder();
builder.Services.AddDbContext<OrderDbContext>(o => o.UseSqlite("Data Source=orders.db"));
var app = builder.Build();

app.MapGet("/orders", async (OrderDbContext db) => await db.Orders.ToListAsync());
app.Run();

public record Order(int Id, string OrderNumber, decimal Amount); // Id gives EF the primary key it requires
public class OrderDbContext(DbContextOptions<OrderDbContext> options) : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
}

Reference existing projects too:

#:project ../ClassLib/ClassLib.csproj

Cross‑platform shell scripts

Add a shebang as the first line of app.cs, then mark the file executable and run it directly:

#!/usr/bin/env dotnet
// ...rest of app.cs

chmod +x app.cs
./app.cs

Grow up when needed

Convert your script to a full project at any time:

dotnet project convert app.cs
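The conversion creates a project folder whose .csproj reflects your #: directives; roughly like this (a sketch, the exact generated file may differ):

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="9.0.0" />
  </ItemGroup>
</Project>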

Multi‑file scripting is expected to expand in future versions, but this single‑file flow already unlocks fast prototypes and ops tools.

What’s New in C# 14

C# 14 focuses on ergonomics and performance. Highlights below with paste‑ready snippets.

1) Extension Members / Extension Blocks

Group instance & static extensions (methods and properties) for a receiver in one block.

public static class StringExtensions
{
    extension(string value)
    {
        public bool IsNullOrEmpty() => string.IsNullOrEmpty(value);
        public string Truncate(int max) => string.IsNullOrEmpty(value) || value.Length <= max
            ? value : value[..max];

        // static extension on the receiver type
        public static bool IsAscii(char c) => c <= 0x7F;
    }
}

2) Extension Properties

Make intent obvious and templates cleaner:

public static class EnumerableExtensions
{
    extension<T>(IEnumerable<T> src)
    {
        public bool IsEmpty => !src.Any();
        public int Count => src.Count();
    }
}

3) Caching with extension members

Extension blocks can't declare per-receiver instance fields (there is nowhere to store them), so cached state has to live in the enclosing static class, for example via a ConditionalWeakTable keyed by the receiver:

using System.Runtime.CompilerServices;

public static class CacheExtensions
{
    private static readonly ConditionalWeakTable<object, object> _cache = new();

    extension<T>(IEnumerable<T> src)
    {
        public List<T> Materialized =>
            (List<T>)_cache.GetValue(src, s => ((IEnumerable<T>)s).ToList());
        public bool IsEmpty => Materialized.Count == 0;
    }
}

4) Static extension members

public static class ProductExtensions
{
    extension(Product)
    {
        public static Product CreateDefault() => new() { Name = "Unnamed", Price = 0 };
        public static bool IsValidPrice(decimal price) => price >= 0;
    }
}

5) Null‑conditional assignment

Assign with ?. without manual null checks:

user?.Profile = LoadProfile();
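// LoadProfile() runs only when user is not null; otherwise the whole assignment is skipped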

6) The field keyword (backing field access)

Cleaner properties without manual fields:

public class ConfigReader
{
    public string FilePath
    {
        get => field ??= "data/config.json";
        set => field = value ?? throw new ArgumentNullException(nameof(value));
    }
}

7) Lambda parameter modifiers without types

delegate bool TryParse<T>(string text, out T result);
TryParse<int> parse = (text, out result) => int.TryParse(text, out result);

8) Partial constructors & partial events

Perfect for source generators:

public partial class User
{
    public partial User(string name);
    public partial event Action<string> Saved;
}

9) User‑defined compound assignment operators

Improve performance for in‑place ops:

public struct Money(string currency, decimal amount)
{
    public decimal Amount { get; private set; } = amount;
    public string Currency { get; } = currency;

    public void operator +=(Money b)
    {
        if (Currency != b.Currency) throw new InvalidOperationException();
        Amount += b.Amount;
    }
}

10) nameof for unbound generics & implicit Span conversions

Console.WriteLine(nameof(List<>)); // "List"
// Many calls now infer ReadOnlySpan<T> without type noise.
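A small sketch of what the span conversions buy you (CountAbove is an illustrative helper): with first-class spans, an array can be the receiver of a ReadOnlySpan<T> extension method directly:

int[] data = { 1, 5, 9 };
Console.WriteLine(data.CountAbove(4)); // 2 (the array converts to ReadOnlySpan<int>)

static class SpanDemo
{
    public static int CountAbove(this ReadOnlySpan<int> values, int threshold)
    {
        var count = 0;
        foreach (var v in values)
            if (v > threshold) count++;
        return count;
    }
}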

What’s New in ASP.NET Core in .NET 10

1) Built‑in validation for Minimal APIs

builder.Services.AddValidation();

app.MapPost("/products",
    ([Range(1, int.MaxValue)] int productId, [Required] string name) =>
        TypedResults.Ok(new { productId, name })
);

Disable on a route if needed:

app.MapPost("/raw", (int id, string name) => TypedResults.Ok(id))
   .DisableValidation();

2) Server‑Sent Events (SSE)

Lightweight real‑time streams via TypedResults.ServerSentEvents.

using System.Runtime.CompilerServices; // for [EnumeratorCancellation]

public record StockPriceEvent(string Id, string Symbol, decimal Price, DateTime Timestamp);

public class StockService
{
    public async IAsyncEnumerable<StockPriceEvent> Generate([EnumeratorCancellation] CancellationToken ct)
    {
        var symbols = new[] { "MSFT", "AAPL", "GOOG", "AMZN" };
        while (!ct.IsCancellationRequested)
        {
            yield return new StockPriceEvent(DateTime.UtcNow.ToString("o"),
                                             symbols[Random.Shared.Next(symbols.Length)],
                                             Math.Round((decimal)(100 + Random.Shared.NextDouble() * 50), 2),
                                             DateTime.UtcNow);
            await Task.Delay(TimeSpan.FromSeconds(2), ct);
        }
    }
}

builder.Services.AddSingleton<StockService>();
app.MapGet("/stocks", (StockService s, CancellationToken ct) =>
    TypedResults.ServerSentEvents(s.Generate(ct), eventType: "stockUpdate"));

3) OpenAPI 3.1 + YAML

builder.Services.AddOpenApi(o => o.OpenApiVersion = Microsoft.OpenApi.OpenApiSpecVersion.OpenApi3_1);

if (app.Environment.IsDevelopment())
{
    app.MapOpenApi("/openapi/{documentName}.yaml");
}

4) JSON Patch with System.Text.Json

dotnet add package Microsoft.AspNetCore.JsonPatch.SystemTextJson --prerelease
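A minimal sketch of applying a patch with the System.Text.Json-based JsonPatchDocument<T> (namespace and types per the new package; the Customer type here is illustrative):

using System.Text.Json;
using Microsoft.AspNetCore.JsonPatch.SystemTextJson;

var patchJson = """[{ "op": "replace", "path": "/Name", "value": "Ada" }]""";

// Deserialize the patch document, then apply it to an existing object
var patch = JsonSerializer.Deserialize<JsonPatchDocument<Customer>>(patchJson)!;
var customer = new Customer { Name = "Grace" };
patch.ApplyTo(customer);

Console.WriteLine(customer.Name); // Ada

public class Customer { public string? Name { get; set; } }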

What’s New in EF Core 10

1) Complex Types (incl. optional & JSON mapping)

modelBuilder.Entity<Customer>(b =>
{
    b.ComplexProperty(c => c.ShippingAddress);
    b.ComplexProperty(c => c.BillingAddress, cb => cb.ToJson()); // inner lambda needs its own parameter name
});

public class Customer
{
    public int Id { get; set; }
    public Address ShippingAddress { get; set; } = default!;
    public Address? BillingAddress { get; set; }  // optional
}

public struct Address
{
    public required string Street { get; set; }
    public required string City { get; set; }
    public required string ZipCode { get; set; }
}

2) LeftJoin / RightJoin (LINQ operators)

var q = context.Students.LeftJoin(
    context.Departments,
    s => s.DepartmentID,
    d => d.ID,
    (s, d) => new { s.FirstName, s.LastName, Department = d.Name ?? "[NONE]" });

3) ExecuteUpdate for JSON columns

await context.Blogs.ExecuteUpdateAsync(s =>
    s.SetProperty(b => b.Details.Views, b => b.Details.Views + 1));

4) Named query filters

modelBuilder.Entity<Blog>()
    .HasQueryFilter("SoftDelete", b => !b.IsDeleted)
    .HasQueryFilter("Tenant", b => b.TenantId == tenantId);

var all = await context.Blogs.IgnoreQueryFilters(["SoftDelete"]).ToListAsync();

5) Regular lambdas in ExecuteUpdateAsync

await context.Blogs.ExecuteUpdateAsync(s =>
{
    s.SetProperty(b => b.Views, 8);
    if (nameChanged) s.SetProperty(b => b.Name, "foo");
});

Other Changes in .NET 10

  • Performance: broader JIT & GC improvements across runtime.
  • SDK: better dotnet CLI UX for script→project workflows.
  • Libraries: incremental API refinements; Aspire updates.

Migration Notes & Practical Tips

  • Target frameworks: bump to net10.0; C# 14 is the default language version with the .NET 10 SDK, so no <LangVersion> override is needed.
  • Minimal APIs: adopt AddValidation(); standardize 400 responses via IProblemDetailsService (see the sketch after this list).
  • OpenAPI: switch to 3.1; consider serving YAML for human‑readable docs.
  • EF Core: start with complex types for embedded value objects and named filters for multitenancy/soft‑delete.
  • Scripting: keep small CLIs as single files; convert to full projects when they need structure.
  • Perf: use compound operator overloads and implicit spans in hot paths.
  • Security: TLS defaults are unchanged; re‑audit auth flows if you expose SSE endpoints or run file‑based scripts in ops.
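A minimal sketch of that validation + ProblemDetails wiring (standard service calls; the endpoint is illustrative):

using System.ComponentModel.DataAnnotations;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddProblemDetails(); // problem+json bodies for error responses
builder.Services.AddValidation();     // Minimal API validation (.NET 10)

var app = builder.Build();

app.UseExceptionHandler();  // unhandled exceptions become ProblemDetails
app.UseStatusCodePages();   // bare status codes (incl. 400s) get a problem body

app.MapPost("/products",
    ([Range(1, int.MaxValue)] int productId, [Required] string name) =>
        TypedResults.Ok(new { productId, name }));

app.Run();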

Summary

  • .NET 10 (LTS): stable base through 2028.
  • C# 14: extension members, field, null‑conditional assignment, partial constructors/events — less boilerplate, more clarity.
  • ASP.NET Core 10: Minimal API validation, OpenAPI 3.1/YAML, SSE.
  • EF Core 10: complex types, JSON updates, Left/RightJoin, named filters.

If you ship APIs, CLIs, or data‑heavy apps, this release will reduce ceremony and increase velocity.

✍️ Written by: Cristian Sifuentes — .NET/C# & architecture enthusiast. If you liked this, consider subscribing to the newsletter for more deep dives and production‑ready templates.

Setting Up Solid Cache on Heroku with a Single Database

2025-11-11 23:08:49

I recently wanted to use Solid Cache for a new MVP I'm building in Ruby on Rails, deployed to a PaaS, in this case Heroku. While I know Rails 8 pushes hard for Kamal and rolling your own deployment, for a lot of early projects it's nice to just have all the DevOps and CI/CD taken care of for you.

This creates a problem when it comes to Solid Cache. Rails recommends running it on SQLite, and in fact I have a production application that uses SQLite for everything and works beautifully. However, Heroku's application filesystem is ephemeral, so your SQLite stores are wiped on every deployment.

Since this was an MVP I really just wanted to manage one database rather than introduce Redis or another instance of Postgres. After a lot of failed attempts and much googling, this was the solution I came up with.

The Problem

After deploying the Rails 8 application to Heroku, I encountered this error when trying to use rate limiting:

PG::UndefinedTable (ERROR: relation "solid_cache_entries" does not exist)

This occurred because Rails 8's rate limiting feature depends on Rails.cache, which in production is configured to use Solid Cache by default. However, the solid_cache_entries table didn't exist in our database.

This worked locally for me because in development Rails uses an in-memory store, so no database was required. It wasn't until deployment that I saw the error.

Understanding Solid Cache

Rails 8 introduces the "Solid" stack as default infrastructure:

  • Solid Cache - Database-backed cache store (replaces Redis/Memcached)
  • Solid Queue - Database-backed job queue (replaces Sidekiq/Resque)
  • Solid Cable - Database-backed Action Cable (replaces Redis for WebSockets)

By default, Rails 8 expects these to use separate databases. The solid_cache:install generator creates:

  • config/cache.yml - Cache configuration
  • db/cache_schema.rb - Schema file (NOT a migration)
  • Configuration pointing to a separate cache database

Why Use a Single Database on Heroku?

For our MVP, I chose to use a single PostgreSQL database for several reasons:

Cost and Simplicity

  • Heroku provides one DATABASE_URL - Additional databases cost extra
  • Simpler architecture - Fewer moving parts during initial development
  • Easier to manage - Single database connection, single backup strategy

Future Scalability Options

When you outgrow this setup, you have clear upgrade paths:

  • Redis - Better performance for high-traffic apps, separate caching layer
  • Separate PostgreSQL database - Isolate cache from primary data
  • Managed cache service - Heroku Redis, AWS ElastiCache, etc.

Why Not SQLite for Cache on Heroku?

While Solid Cache supports SQLite, Heroku's filesystem is ephemeral:

  • Files are wiped on dyno restart (at least once per 24 hours)
  • Deployments create new dynos with fresh filesystems
  • You'd lose all cached data frequently

SQLite-backed Solid Cache works great for:

  • Single-server VPS deployments (Kamal, Hetzner, DigitalOcean Droplets)
  • Containerized apps with persistent volumes
  • Development/staging environments

But for Heroku and similar PaaS platforms, use PostgreSQL or Redis for caching.

What I Tried (And What Didn't Work)

Attempt 1: Running the Generator

bin/rails solid_cache:install

Result: Created cache_schema.rb but no migration file. Changed cache.yml to point to a separate cache database that doesn't exist on Heroku.

Attempt 2: Official Multi-Database Setup

Following the official Rails guides, I configured database.yml with separate database entries:

production:
  primary:
    url: <%= ENV["DATABASE_URL"] %>
  cache:
    url: <%= ENV["DATABASE_URL"] %>  # Same database, different connection

And cache.yml:

production:
  database: cache

Result: The cache_schema.rb file wasn't loaded by db:migrate or db:prepare. Rails expected separate databases with separate schema files.

Attempt 3: Using db:prepare

Ran bin/rails db:prepare hoping it would load all schema files.

Result: Only loaded db/schema.rb (main migrations), ignored db/cache_schema.rb.

The Solution: Migration-Based Approach

After researching (including this Reddit thread), I found the working solution for single-database Heroku deployments.

Step 1: Configure cache.yml for Single Database

Remove the database: configuration from production in config/cache.yml:

# config/cache.yml
default: &default
  store_options:
    max_size: <%= 256.megabytes %>
    namespace: <%= Rails.env %>

development:
  <<: *default

test:
  <<: *default

production:
  <<: *default  # No database: specified - uses primary connection

Important: According to the Solid Cache README, when you omit database, databases, or connects_to settings, Solid Cache automatically uses the ActiveRecord::Base connection pool (your primary database).

Step 2: Create a Migration for the Cache Table

Generate a migration to create the solid_cache_entries table:

bin/rails generate migration CreateSolidCacheEntries --database=cache

The --database=cache flag keeps the migration organized (though it still runs against the primary database in our single-DB setup).

Step 3: Copy Table Definition from cache_schema.rb

Update the generated migration with the exact table structure from db/cache_schema.rb:

# db/migrate/YYYYMMDDHHMMSS_create_solid_cache_entries.rb
class CreateSolidCacheEntries < ActiveRecord::Migration[8.1]
  def change
    create_table :solid_cache_entries do |t|
      t.binary :key, limit: 1024, null: false
      t.binary :value, limit: 536870912, null: false
      t.datetime :created_at, null: false
      t.integer :key_hash, limit: 8, null: false
      t.integer :byte_size, limit: 4, null: false

      t.index :byte_size
      t.index [:key_hash, :byte_size]
      t.index :key_hash, unique: true
    end
  end
end

Step 4: Run Migration Locally

bin/rails db:migrate

Verify the table was created:

bin/rails runner "puts ActiveRecord::Base.connection.table_exists?('solid_cache_entries')"
# Should output: true

Step 5: Keep Development Simple

Leave development environment using :memory_store in config/environments/development.rb:

config.cache_store = :memory_store

This is the Rails convention and keeps development simple. Production uses Solid Cache, development uses in-memory caching.

Step 6: Deploy to Heroku

Your existing Procfile with the release phase will handle the migration:

release: bundle exec rails db:migrate
web: bundle exec puma -C config/puma.rb

Deploy and the migration runs automatically during the release phase.

Verification

After deployment, verify Solid Cache is working:

  1. Check Heroku logs during release phase - should see migration run
  2. Test rate limiting - Try to trigger rate limits on login endpoints
  3. Check cache in Rails console:
   # On Heroku
   heroku run rails console

   # Test cache
   Rails.cache.write('test_key', 'test_value')
   Rails.cache.read('test_key')  # Should return 'test_value'

   # Check table
   SolidCache::Entry.count  # Should be > 0 if cache is working

Cache Expiration and Cleanup

Solid Cache includes automatic cleanup to prevent indefinite growth. Our configuration uses both size and age-based expiration:

# config/cache.yml
default: &default
  store_options:
    max_age: <%= 30.days.to_i %>     # Delete entries older than 30 days
    max_size: <%= 256.megabytes %>   # Delete oldest when exceeds 256MB
    namespace: <%= Rails.env %>

How Automatic Cleanup Works

Solid Cache uses a write-triggered background thread (not a separate job system):

  1. Write tracking - Every cache write increments an internal counter
  2. Automatic activation - After ~50 writes, cleanup runs on a background thread
  3. Cleanup logic:
    • If max_size exceeded → Delete oldest entries (LRU eviction)
    • If max_age set and size OK → Delete entries older than max_age
    • Deletes in batches of 100 entries (expiry_batch_size)
  4. SQL-based deletion - Runs plain SQL deletes, roughly equivalent to (illustrative: Postgres DELETE has no LIMIT, so batching goes through a subquery):
   DELETE FROM solid_cache_entries
   WHERE id IN (
     SELECT id FROM solid_cache_entries
     WHERE created_at < NOW() - INTERVAL '30 days'
     LIMIT 100
   )

Important Characteristics

  • No Solid Queue required - Uses built-in Ruby threads, not Active Job
  • No cron jobs needed - Self-managing and automatic
  • Database-agnostic - Pure SQL, works with any ActiveRecord adapter
  • Efficient - Background thread idles when cache isn't being written to
  • ⚠️ Write-dependent - Cleanup only triggers when cache receives writes

For rate limiting (which writes on every login attempt), this mechanism works perfectly and requires no additional infrastructure.

Why 30 Days for Rate Limiting?

Rate limiting data is inherently short-lived:

  • Rate limit windows are 3 minutes
  • Session data expires in days, not months
  • Old cache entries from expired sessions serve no purpose

30 days is generous for this use case and prevents cache bloat while maintaining safety margins.

Key Takeaways

  1. Rails 8's default Solid Stack assumes multi-database setup - This doesn't match Heroku's single DATABASE_URL model
  2. Schema files aren't migrations - db/cache_schema.rb won't be loaded by db:migrate
  3. Omitting database: in cache.yml uses the primary connection - This is the key for single-database setups
  4. Create a regular migration - Convert the schema file to a migration for single-database deployments
  5. SQLite doesn't work on ephemeral filesystems - Use PostgreSQL or Redis for caching on Heroku
  6. Development can use :memory_store - No need to complicate local development
  7. Automatic cleanup is built-in - Solid Cache handles expiration via background threads, no Solid Queue or cron jobs required

Future Migration Path

When your app scales and you need better cache performance:

  1. Add Heroku Redis (~$15/month for hobby tier)
  2. Update production.rb:
   config.cache_store = :redis_cache_store, { url: ENV['REDIS_URL'] }
  3. Remove Solid Cache dependency if desired, or keep it for other purposes

The migration is straightforward and won't require code changes beyond configuration.


I Built an API-First Document Workflow Engine (Looking for Feedback)

2025-11-11 23:07:52

Hey everyone — over the last few months, I’ve been building SignumFlow, an API-first system for uploading documents and running programmatic workflows.

I built this because most workflow/e-signature platforms are UI-first. They require you to send users to their hosted UI and work inside their UX expectations — which is great for non-technical teams, but restrictive if you want everything embedded directly into your product.

I wanted something closer to Stripe-style developer ergonomics, but for docs + approval routing.

So I built SignumFlow.

What It Does Today

Right now SignumFlow supports:

  • Uploading documents
  • Starting workflows (sequential + parallel routing)
  • Retrieving workflow + document state
  • Developer portal → get API keys, view usage, see quickstarts

No UI is required: just call the API, handle responses, and keep your users inside your app.

Docs + quickstart:
👉 https://docs.signumflow.com

What’s Still in Progress

These are actively being built:

  • Approval actions (minimal endpoint first)
  • Webhooks

My goal is that approvals and lifecycle events can be driven either:

  • manually (approve/reject via API), or
  • automated rules (coming later)

Webhooks will complement polling so apps can react immediately to workflow transitions.
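In the meantime, a simple polling loop against the state endpoint works (a sketch; the workflow id and the status field in the response are illustrative):

# Poll workflow state every 5 seconds until it leaves "pending"
while true; do
  state=$(curl -s https://api.signumflow.com/api/v1/workflow/workflow_123 \
    -H "Authorization: $API_KEY" | jq -r '.status')
  [ "$state" != "pending" ] && break
  sleep 5
done
echo "Workflow finished with status: $state"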

Why I’m Building This

Every product I’ve worked on that needs approvals/docs ends up reinventing the same pieces:

  • Upload + store documents
  • Route them through multiple people
  • Track state + timestamps
  • Log actions
  • Notify systems
  • Handle versions
  • Generate/PDF-ify
  • Keep users from bouncing between two platforms

Eventually, the internal tool grows into a workflow engine.

But building and maintaining that layer is painful, especially around lifecycle state, routing, concurrency, and auditing.

So the idea with SignumFlow is:

Let devs own the UI + business logic, and outsource the workflow mechanics to an API.

Basic Example

# Upload a document
curl -X POST https://api.signumflow.com/api/v1/upload \
  -H "Authorization: $API_KEY" \
  -F "file=@document.pdf"

# Initialize a workflow
curl -X POST https://api.signumflow.com/api/v1/workflow/init \
  -H "Authorization: $API_KEY" \
  -d '{
    "documentId": "doc_123",
    "steps": [
      { "assignee": "approver1@example.com" },
      { "assignee": "approver2@example.com" }
    ]
  }'

# Check workflow state
curl https://api.signumflow.com/api/v1/workflow/workflow_123 \
  -H "Authorization: $API_KEY"

Architecture (high-level)

  • Runtime: Node.js (TypeScript)
  • Storage: S3
  • DB: PostgreSQL
  • API: REST (JSON)
  • Workflow Engine: stateless + async routing
  • Auth: API key per app
  • Deployment: AWS Lambda + API Gateway (Serverless)
  • Pattern: REST + polling today; webhooks soon
  • Developer Portal: app + key management

Workflows are modeled as step graphs (seq/parallel).
State transitions are recorded + queryable.

Who Might Use This?

  • SaaS products that need embedded approvals
  • Internal tooling teams
  • Platforms needing customer workflows
  • Construction, insurance, real estate, healthcare, legal
  • Anything requiring document routing through multiple hands

This is not meant to replace full UI-heavy signature platforms.
It’s for developers who want control and flexibility.

What I’m Looking For

If you try the API or skim the docs, I’d love feedback on:

  • What’s confusing / unclear?
  • Missing must-have API surface?
  • Naming improvements?
  • A feature you expect but don’t see?
  • Does the value proposition make sense?

Even one small insight would be super helpful this early.

Docs:
👉 https://docs.signumflow.com

Roadmap (Short term)

  1. Approval endpoints
  2. Webhooks
  3. Templates
  4. Admin UI (optional)
  5. SDKs (TS + Python)

If you’ve built workflow engines, approval systems, or signature integrations before, your feedback would mean a lot.

Thanks for reading!
Happy to answer any questions.


Junior
Founder, SignumFlow

Working for international companies is not impossible

2025-11-11 23:00:54

Working for multinational or international companies can be challenging, but it is far from impossible.
What happens, most of the time, is that you miss opportunities because you don't even try; in the end, you sabotage yourself.

1. The fear of trying

Working for an international company feels like a distant dream until the day you realize that the biggest barrier isn't out there, but inside your own head.

Many Brazilian professionals have the technical qualifications but block themselves because of English, impostor syndrome, or simply a lack of information about the process.

Many people don't even apply because they think: "my English isn't good enough" or "I won't be able to handle it".
But the secret is simple: nobody starts out ready; you prepare along the way.

2. Steps to take before the first interview

Have your legal structure ready

If your goal is to work as a contractor (PJ) for companies abroad, open your CNPJ and research international invoicing, contracts, and receiving payments through platforms like Deel, Payoneer, or Wise.
This shows professionalism and avoids surprises when the offer arrives.

Consider national CLT options

Not every position needs to be international to be global.
Multinationals operating in Brazil offer CLT positions with competitive salaries and solid benefits, such as vacation time, health insurance, and stability.

Remember: 30 days of vacation is a luxury even at US companies.
In many cases, international positions offer only 10 to 15 days of PTO, or the famous "unlimited" PTO, which is rarely actually used.

Prepare your working English

You don't need to sound like a native speaker, but you do need to communicate well.
Focus on technical English and conversational English for meetings.
Today there is plenty of access to teachers, platforms, and even free communities for practice.

English is a bridge, and you don't need to run across it. Just start walking.

Strengthen your technical portfolio

Before applying, review your projects on GitHub.
Show clean code, organization, and good practices.
Include a well-written README; it demonstrates professionalism and makes it easier for foreign recruiters to evaluate you.

Avoid building entire projects with AI if you don't understand what you're implementing.

3. The mindset

"You can have all the technical skill in the world, but if you don't have consistency and initiative, you'll keep watching other people achieve what you want."

Comparison is cruel and almost never reflects reality.
You compare yourself with this person, that person, the influencer who is already abroad, and you end up stuck in the same place.

What really makes the difference is taking the first step.

Walk alongside people who are pursuing the same things you are.
Talk to those who have already made it.
Stay away from people who pour cold water on your dreams.

With every application process, test, and interview, you gain experience.
Nobody gets it right the first time, and that's okay.
You learn from the "nos", from the feedback, and over time.

Consistency and the habit of continuous improvement make all the difference, especially for soft skills.
With each interview, you communicate better, present yourself more clearly, and gain confidence.

If you have more experienced friends, run mock interviews. Talk about your work in English. That alone is valuable practice.

4. The real challenges

The process is not a fairy tale.
Beware of people selling coaching programs that promise an international job in 30 days. Can it happen? It can.
But in most cases, it takes time and demands preparation.

The international routine also has its differences:

  • Time zones (meetings very early or very late)
  • Communication 100% in English
  • Autonomy and discipline (less supervision)
  • Cultural and communication adjustments

In the first months it's normal to feel nervous and anxious, and even to freeze up in English.
You will forget words, get verbs wrong, and feel insecure, and that is part of the process.

With time, confidence comes.
And the reward is huge: professional growth, continuous learning, and a global experience that transforms your career.

The biggest challenge is getting into the first company. After that, everything flows much more naturally.

5. The first step is still the most important one

You don't need to be the best; you just need to be prepared when the opportunity shows up.

Working with international companies isn't a prize; it's the natural consequence of preparing yourself, putting yourself out there, and continuing to evolve.

Talk Early, Fail Less: How Communication Builds Great Teams

2025-11-11 23:00:00

Every great product starts with people, not code.
This post kicks off my new series People Over Pixels, where I share the soft skills that I believe are more important than any framework, syntax, or architecture.
After years of working in different teams, I’ve realized that the success of a product depends less on how we code, and more on how we communicate.

When One Missing Conversation Costs Millions of Dollars

In 1999, NASA lost the Mars Climate Orbiter, a $125 million spacecraft. Not because of bad hardware, or a complex bug, but because two brilliant teams weren’t aligned. One used imperial units. The other used metric.
A single misunderstanding destroyed years of work and one of the most advanced spacecraft ever built [1].

That story stuck with me because it shows something simple yet profound:

Even the best engineers fail when they stop talking the same language.

Why Teams Need to Talk

The best teams I’ve worked with didn’t avoid problems, they talked about them early. They didn’t guess what others meant, they asked. They didn’t hide mistakes, they shared them before they grew.
Communication isn’t about meetings or documentation. It’s about creating a shared understanding, so everyone moves in the same direction.

The Power of Asking Questions

I used to stay quiet when I wasn’t 100% sure about something. I thought asking too many questions would make me look less experienced, but the truth is the opposite.
The people who ask questions make everyone smarter. They catch assumptions, expose risks, and help the team think more clearly.

The best developers aren’t the ones who always know the answer, they’re the ones who make it safe for everyone to ask the hard questions.

Lessons from Toyota: Communication as Quality

Toyota’s production system is famous for one rule: any worker on the line can pull the andon cord to stop production if they see a problem.
It doesn’t matter who you are, everyone is empowered to speak up. That culture of openness made Toyota one of the most consistent, high-quality manufacturers in the world.
In software, "pulling the andon cord" means saying:

  • "I think we might have a problem here."
  • "I’m not sure I understood this requirement."
  • "Can we discuss another approach?"

That’s how teams build quality, not just in their code, but in their relationships.

Communication Is a Force Multiplier

Good communication turns individual effort into collective success. It’s how teams stay aligned, fix problems faster, and build trust over time.
When people talk, they discover solutions no one could find alone. When they stay silent, even the best code can’t save them.

💡 Final Thought

Great teams aren’t made of the most talented individuals, they’re made of people who talk, listen, and learn together. So, if you want your next project to succeed, don’t just focus on writing perfect code.

Focus on creating conversations that lead to better code, because in the end:

Teams that talk early, fail less, and build more.

🛰️ References

[1] Simscale - When NASA Lost a Spacecraft Due to a Metric Math Mistake

✅ That’s all, folks!

💬 Let’s Connect

Have any questions, suggestions for improvement, or just want to share your thoughts?

Feel free to leave a comment here, or get in touch with me directly on LinkedIn — I’d love to connect!

🔥 Why Your Deep Neural Network Fails at Layer 50 (And How ResNet Fixes It)

2025-11-11 22:57:00

TL;DR:

  • 💡 Training networks deeper than 20 layers? You're probably hitting the degradation problem
  • ✅ ResNet's skip connections solved what seemed impossible in 2015
  • 📊 From 22 layers (GoogLeNet) to 152+ layers without accuracy loss
  • 🎁 Pre-trained ResNet-50 gets you 76% ImageNet accuracy in 10 lines of code
  • ⚠️ Understanding v1.5 vs v1 can save you 0.5% accuracy

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

The Problem That Stumped Everyone

Here's the counterintuitive nightmare that kept researchers up at night:

Deeper networks should = better performance, right?

Wrong. Catastrophically wrong.

In 2015, teams were hitting a wall. Add more than 20 layers to your CNN? Watch your training accuracy decrease. Not overfit - just... fail.

# What researchers saw:
20-layer network: 85% accuracy 
56-layer network: 78% accuracy  

# This made ZERO sense

The cruel irony? A deeper network should theoretically match a shallow one by learning identity mappings in extra layers. But gradient descent couldn't figure this out.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

💡 The Residual Learning Breakthrough

Kaiming He and his team at Microsoft Research asked a brilliant question:

"What if we stop asking layers to learn the underlying mapping H(x), and instead learn the residual F(x) = H(x) - x?"

The Skip Connection Magic

Instead of this:

output = layer(input)  # Learn H(x) directly

Do this:

output = layer(input) + input  # Learn F(x), add input back

Why this works:

  • If the optimal mapping is close to identity, it's easier to push F(x) → 0 than to learn H(x) = x
  • Gradients flow directly through skip connections (no vanishing gradient hell)
  • The network can "choose" whether to use a layer or skip it

Think of it like this: Instead of teaching someone a complex route, you teach them the detours from the highway they already know.
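You can see the gradient benefit in a toy autograd check (a sketch; the zeroed weights stand in for a layer that has stopped learning):

import torch

x = torch.ones(3, requires_grad=True)
w = torch.zeros(3)        # a "dead" layer: its own path contributes no gradient
out = (w * x + x).sum()   # layer(x) + x, the residual form
out.backward()

print(x.grad)             # tensor([1., 1., 1.]): gradient still flows via the skip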

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🎯 ResNet-50 Architecture Deep Dive

ResNet-50 has 50 layers organized in bottleneck blocks:

Input (224×224×3)
    ↓
7×7 conv, stride 2
    ↓
3×3 max pool
    ↓
[1×1 conv → 3×3 conv → 1×1 conv] × 3   # Stage 1
    ↓
[1×1 conv → 3×3 conv → 1×1 conv] × 4   # Stage 2
    ↓
[1×1 conv → 3×3 conv → 1×1 conv] × 6   # Stage 3
    ↓
[1×1 conv → 3×3 conv → 1×1 conv] × 3   # Stage 4
    ↓
Global Average Pooling
    ↓
Fully Connected (1000 classes)

🔍 Bottleneck Block Anatomy

A runnable PyTorch sketch of the identity-shortcut case (no downsampling, so the skip path needs no 1×1 projection):

import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckBlock(nn.Module):
    def __init__(self, channels=256, bottleneck=64):
        super().__init__()
        # 1×1 conv reduces dimensions
        self.conv1 = nn.Conv2d(channels, bottleneck, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(bottleneck)
        # 3×3 conv does the heavy lifting
        self.conv2 = nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(bottleneck)
        # 1×1 conv restores dimensions
        self.conv3 = nn.Conv2d(bottleneck, channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(channels)

    def forward(self, x):
        identity = x
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out = out + identity  # THE MAGIC: add the skip connection 🎁
        return F.relu(out)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️ ResNet v1 vs v1.5: The Detail That Matters

Microsoft's v1.5 modification:

# v1 (original)
Bottleneck:
  1×1 conv, stride=2  # Downsampling here
  3×3 conv, stride=1
  1×1 conv, stride=1

# v1.5 (improved)
Bottleneck:
  1×1 conv, stride=1
  3×3 conv, stride=2  # Downsampling moved here
  1×1 conv, stride=1

Impact:

  • ✅ +0.5% top-1 accuracy on ImageNet
  • ❌ ~5% slower inference (more computation in 3×3 layer)

💡 When to use which:

  • v1.5: When accuracy is critical (research, competitions)
  • v1: When speed matters (production, edge devices)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🚀 Get Started in 5 Minutes

Installation

pip install transformers torch datasets

Classify Any Image

from transformers import AutoImageProcessor, ResNetForImageClassification
import torch
from PIL import Image

# Load pre-trained ResNet-50 v1.5
processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")

# Load your image
image = Image.open("your_image.jpg")

# Preprocess and predict
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Get prediction
predicted_class = logits.argmax(-1).item()
label = model.config.id2label[predicted_class]

print(f"Prediction: {label}")
print(f"Confidence: {torch.softmax(logits, dim=1).max().item():.2%}")

📊 Output Example

Prediction: golden_retriever
Confidence: 94.73%

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

💪 Real-World Performance

ImageNet-1k Results (224×224):

Metric             ResNet-50 v1.5
-----------------  --------------
Top-1 Accuracy     76.13%
Top-5 Accuracy     92.86%
Parameters         25.6M
Inference (GPU)    ~5 ms/image

Why ResNet-50 is the go-to baseline:

  1. Strong accuracy without being massive
  2. Fast inference (perfect for production)
  3. Transfer learning superstar (works on custom datasets with minimal fine-tuning)
  4. Available in every framework (PyTorch, TensorFlow, ONNX)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🎁 Pro Tips for Fine-Tuning

Freeze Early Layers

# Early layers learn general features (edges, textures)
# Freeze them, train only later layers

for param in model.resnet.embedder.parameters():
    param.requires_grad = False

for param in model.resnet.encoder.stages[0].parameters():
    param.requires_grad = False

Learning Rate Strategy

# Use lower LR for pre-trained weights
optimizer = torch.optim.AdamW([
    {'params': model.resnet.parameters(), 'lr': 1e-5},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
])

Data Augmentation (Critical!)

The image processor only resizes, crops, and normalizes; it has no augmentation options. For training, apply your own transforms, e.g. a torchvision sketch using the standard ImageNet statistics:

from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    # ImageNet mean/std, matching the pre-trained processor
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🔥 Common Mistakes (And How to Avoid Them)

❌ Mistake #1: Wrong Input Size

# ResNet-50 expects 224×224 input (the processor resizes for you by default)
inputs = processor(image, size={"height": 224, "width": 224}, return_tensors="pt")

❌ Mistake #2: Forgetting Normalization

# ResNet was trained with ImageNet normalization
# processor handles this automatically
# DON'T normalize manually unless you know what you're doing

❌ Mistake #3: Not Using GPU

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🎯 When to Use ResNet vs. Alternatives

Use ResNet-50 when:

  • ✅ You need a solid baseline fast
  • ✅ Inference speed matters
  • ✅ You have limited training data (transfer learning)
  • ✅ You're deploying to production

Consider alternatives when:

  • 🔄 You need the absolute best accuracy → EfficientNet, ConvNeXt
  • 🔄 You have massive compute → Vision Transformers (ViT)
  • 🔄 You need tiny models → MobileNet, EfficientNet-Lite
  • 🔄 You're working with very high-res images → ResNet-101/152

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

📚 The Legacy

ResNet didn't just win ImageNet 2015. It changed how we think about deep learning:

  1. Skip connections are now everywhere (Transformers, Diffusion Models, etc.)
  2. Proved that depth matters when done right
  3. Made transfer learning practical for computer vision
  4. Inspired architectural innovations (DenseNet, ResNeXt, ResNeSt)

"Residual learning is one of those ideas that seems obvious in retrospect but was revolutionary when introduced." - Andrej Karpathy

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🚀 Your Turn

Try this challenge:

  1. Download ResNet-50
  2. Test it on 10 images from your photo library
  3. Check how many it gets right
  4. Share your results in the comments!

Going deeper?

  • Fine-tune on your custom dataset
  • Compare v1 vs v1.5 speed on your hardware
  • Try ResNet-101 for that extra accuracy boost

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

What's been your experience with ResNet? Still using it in production, or have you moved to newer architectures? Drop your thoughts below! 👇

Found this useful? Follow for more deep learning breakdowns where I actually explain why things work, not just how.

═══════════════════════════════


#DeepLearning #ComputerVision #MachineLearning #ResNet #NeuralNetworks #AI #Python #PyTorch