The Practical Developer

A constructive and inclusive social network for software developers.

Org Charts for AI Agents: Mapping Your Human and AI Workforce

2025-12-14 03:17:41

I'm already doing this. My teams have AI agents doing real work, with defined roles, human owners, and performance metrics. We moved past "should we use AI?" a long time ago. But when I talk to other engineering leaders, most are still running pilots on "how to use ChatGPT effectively." They're debating tools while we're deploying workers. If that's you, wake up. AI agents are here. They're not coming. They're already doing work. And they need to be somewhere in your org chart.

I'm not being metaphorical. These aren't tools that sit on a shelf waiting to be invoked. They're systems that do real work across the entire development lifecycle. They read Jira tickets and break them down into smaller, actionable tasks. They analyze the codebase to understand context before writing code. They write the code itself. They review pull requests from both humans and other agents, catching issues before merge. They run tests, interpret failures, and fix what broke. They deploy to staging and production. They update ticket status and add implementation notes. They generate documentation when features ship. They run 24/7. They have defined responsibilities. They produce output that affects your business.

If that sounds like a job description, that's because it is.

The question isn't whether AI agents belong on your org chart. The question is why you haven't put them there yet.

The wake-up call most teams need

Let me describe what I'm seeing in organizations that are actually ahead on AI adoption.

Company A has agents embedded in their entire development workflow. One agent monitors the backlog, breaks down tickets, and prepares implementation plans before engineers even start their day. Another picks up tasks and writes the actual code, creating PRs ready for review. A third reviews every PR, checking for security issues, test coverage, and architectural consistency. A fourth handles deployments, monitors rollouts, and rolls back automatically if error rates spike. Their engineering lead treats these agents like team members because functionally, they are. They have owners, performance metrics, and defined responsibilities.

Company B still has their engineering team debating whether Copilot is worth the license cost. They're running a three-month pilot with a committee to evaluate results. Their developers manually review every PR line by line, deploy through a manual checklist, and spend the first hour of every ticket just understanding what needs to be built.

The gap between these two isn't technology. It's mindset.

Company A asked: "How do we integrate AI into how we work?" Company B asked: "Should we use AI?" By the time Company B finishes asking, Company A will have deployed their fourth agent.

This is the wake-up call: AI agents are here. They're working. They're producing output. The adoption curve for agentic AI has been faster than anything we've seen before. Within two years, roughly a third of enterprises have deployed agents in production. And the organizations actually using them? Most already treat agents as coworkers, not tools. If you're still thinking about this as "adopting a new tool," you've already fallen behind teams that are thinking about it as "building a hybrid workforce."

Why agents belong on the org chart

I know what you're thinking. "Putting software on an org chart sounds ridiculous." But hear me out.

Org charts exist for clarity. They answer: Who does what? Who's responsible for what? Who reports to whom? If an AI agent is doing meaningful work, those questions apply to it too.

When you don't include AI agents in your organizational structure, you create invisible workers. Work gets done, but nobody knows exactly what's doing it or who's accountable when it goes wrong. That's not a small problem. That's the recipe for incidents that nobody can trace, drift that nobody notices, and technical debt that compounds invisibly.

Here's what putting AI agents on the org chart actually solves:

Accountability. Every agent has a human owner. When the development agent writes code that breaks in production, someone is responsible for improving its guardrails. When the code review agent starts missing security issues, someone tunes its rules. When the deployment agent causes a failed release, someone owns the post-mortem. When the ticket analysis agent consistently overestimates complexity, someone adjusts its model. No more "the AI did it" as an excuse.

Visibility. Your team can see what's actually doing the work. Everyone knows the ticket analysis agent breaks down and estimates new issues before sprint planning. The development agent picks up approved tasks and creates PRs. The code review agent checks every PR before the tech lead sees it. The deployment agent handles staging releases automatically but flags production deploys for human approval. No mystery workers.

Planning. When you understand your full workforce (human and AI), you can plan capacity properly. You know what you have, what it can do, and where the gaps are. You can make real decisions about when to hire humans versus when to deploy another agent.

Coordination. Workflows become explicit. "New tickets get analyzed by the ticket analysis agent, which breaks them into tasks and estimates complexity. The development agent picks up tasks and writes the code. The code review agent checks every PR. If it passes automated checks, the tech lead does final review. The deployment agent handles staging, runs integration tests, and notifies the team. Production deploy requires human approval." Everyone knows the handoff points between humans and agents.

What this looks like in practice

Let me make this concrete.

The wrong way: You give developers access to Copilot and call it done. Some use it heavily, some ignore it. Nobody knows which code was AI-assisted. PRs get merged without anyone understanding if the AI suggestions were good or just fast. When bugs slip through, there's no way to trace whether AI-generated code was the cause. The team has AI, but no structure around it.

The right way: You deploy agents with clear positions in your org structure. Your development agent reports to your Tech Lead. It picks up tasks from the backlog, analyzes the codebase for context, writes the code, adds tests, and creates PRs. The Tech Lead reviews its output, provides feedback when the approach is wrong, and approves when it's right. Your code review agent also reports to the Tech Lead. It checks every PR for security vulnerabilities, test coverage gaps, and violations of your architectural patterns. It comments on PRs, requests changes, and approves when standards are met. Humans handle the judgment calls: is this the right approach? Does this solve the actual problem? Everyone knows the workflow. It's documented. It's managed.

Same pattern applies across the development lifecycle. Your ticket analysis agent reports to whoever owns backlog grooming. Your development agent reports to whoever owns the codebase and architecture. Your deployment agent reports to whoever owns release management. Your documentation agent reports to whoever owns developer experience. Each has clear scope, clear ownership, and clear metrics.

This isn't theoretical. My teams work this way, and every high-performing team I know has already made this shift. They don't think of AI as a tool they use. They think of it as a capability they manage.

Best practices from teams actually doing this

I lead teams that work this way, and I'm in contact with engineering leaders across the world doing the same. Some patterns work better than others.

Give every agent a human owner

This is non-negotiable. Every AI agent needs a human who is responsible for its output. Not "responsible if something goes wrong." Responsible, period.

That human should:

  • Review the agent's outputs regularly (not just when there's a problem)
  • Know what the agent is supposed to do and what it's not supposed to do
  • Have the authority to tune its behavior or shut it down
  • Be the escalation path when the agent encounters something outside its scope

Think of it like managing an extremely productive but occasionally confused team member. They need oversight. They need feedback. They need someone paying attention.

Define explicit boundaries

AI agents should have clear job descriptions. What tasks they handle. What decisions they can make. When they must escalate to humans.

This isn't just about safety (though it is). It's about reliability. An agent with clear boundaries is predictable. You know what to expect from it. Your team knows what to expect from it. Customers know what to expect from it.

Vague scope leads to vague results. If you can't articulate exactly what your agent is supposed to do, you're not ready to deploy it.

Onboard and train them like team members

New AI agents should go through an onboarding process. Load them with your context: codebase architecture, coding standards, style guidelines, past decisions, and domain knowledge. A development agent needs to understand your patterns, your conventions, and why things are built the way they are. Configure access permissions carefully. Set up integration points with your ticketing system, code repository, CI/CD pipeline, and communication tools.

Then train your human team to work with them. What can the agent do? What are its limitations? How do you interpret its outputs? How do you give it feedback?

The teams that skip this step wonder why their agents produce inconsistent results. The teams that invest in proper onboarding get agents that actually fit into their workflows.

Set goals and measure performance

If your human team members have KPIs, your AI agents should too.

For a development agent: Code quality of generated output. How often its PRs pass review on the first attempt. Test coverage of code it writes. Bugs introduced per feature. Time from ticket to working PR.

For a code review agent: Accuracy of flagged issues. False positive rate. Time saved per review. Security vulnerabilities caught. Bugs that slipped through despite review.

For a ticket analysis agent: Quality of task breakdowns. Accuracy of complexity estimates. Time saved in sprint planning. How often humans override its suggestions.

For a deployment agent: Successful deployment rate. Mean time to rollback when issues occur. False positive rate on health checks. Incidents caused by deployment failures.

Track this data. Review it regularly. If an agent isn't meeting its targets, tune it or remove it. Don't let underperforming agents linger just because "AI is supposed to be good."

Keep humans in the loop for consequential actions

Some actions are too important to delegate fully. Production deployments. Database migrations. Changes to authentication or payment systems. Anything that could take down the service or expose customer data.

For these, the right pattern is: agent recommends, human approves, agent executes. The development agent writes the code and creates the PR, but a human reviews before merge. The deployment agent prepares the release and runs pre-flight checks, but a human approves production deploys. Then the agent handles the actual execution, monitoring, and rollback if needed.

This isn't about not trusting AI. It's about maintaining appropriate control over decisions that matter. Even great AI agents make mistakes. For high-stakes decisions, you want a human checkpoint.

The uncomfortable conversations this forces

Putting AI on your org chart forces conversations that many teams have been avoiding.

"What are we actually paying people to do?" When agents handle the routine work, human roles need to shift. Are your developers still manually checking PRs for test coverage and linting issues? Why? Are they still writing boilerplate code that an agent could generate? Are they still manually updating Jira tickets after every commit? The value of human work should be in architecture decisions, complex problem-solving, and handling the edge cases that AI can't reason about.

"How do we grow junior talent?" If AI handles the entry-level tasks that used to train juniors, how do juniors learn? This is a real problem that requires intentional design. Junior developers need to understand what the AI is doing, not just accept its output. They need opportunities to work without AI assistance so they build foundational skills.

"Who's actually accountable when AI fails?" AI failures aren't like software bugs. They're often subtle, contextual, and hard to detect until damage is done. Someone needs to be watching. Someone needs to care. If nobody on your team owns the AI agent's behavior, you have a governance gap.

"How much of our capability is human versus AI?" Some organizations are discovering that more of their output than expected is AI-generated. That's not necessarily bad, but it requires honesty about what you're building and who's building it.

The risks nobody wants to talk about

I'd be doing you a disservice if I only talked about the upside. Deploying AI agents without proper structure creates real problems.

Most AI projects fail, and it's rarely the technology. The pattern I see repeatedly: teams deploy agents, get excited about initial results, then watch things fall apart over months. The failure isn't usually the AI itself. It's organizational. Siloed decision-making. No clear ownership. Agents that automate broken processes instead of reimagining them. If your current workflow is a mess, an AI agent will just create mess faster.

Agents can drift without anyone noticing. Unlike human employees who complain when things aren't working, agents just keep running. They'll quietly degrade, produce increasingly irrelevant outputs, or develop blind spots as your business changes around them. Without active monitoring and regular review, you end up with agents that technically work but practically don't help.

Shadow agents are already in your organization. Teams are deploying AI assistants, connecting them to systems, and using them for work without telling IT, security, or leadership. This isn't malicious. It's people trying to be more productive. But it means you have invisible workers making decisions, accessing data, and producing outputs with zero oversight. The solution isn't to ban experimentation. It's to channel it into structured pilots with proper governance.

Integration with legacy systems is harder than it looks. That shiny new agent needs to talk to your five-year-old ticketing system, your decade-old ERP, and your custom-built internal tools. Every integration point is a failure point. Every data handoff is an opportunity for things to go wrong. Plan for this. Budget for this. Don't assume the agent will "just work."

Costs compound in ways you don't expect. The API calls, the compute, the storage, the maintenance, the tuning, the monitoring. Running agents at scale isn't free. Some organizations have been surprised to find their AI "cost savings" evaporating into operational expenses they hadn't budgeted for. Track the total cost of ownership, not just the initial deployment.

The governance question isn't optional. Who audits the agent's decisions? Who checks for bias in its outputs? Who ensures it's not leaking sensitive data in its prompts? Who handles it when a customer complains about an agent interaction? If you don't have answers to these questions before deployment, you're building on sand.

None of this means you shouldn't deploy agents. It means you should deploy them with eyes open, with proper structure, and with humans who are actually paying attention.

What changes, what doesn't

What changes:

Your org chart now includes non-human workers with defined roles. Planning and capacity discussions include AI capabilities. Job descriptions evolve to focus on judgment, oversight, and collaboration with AI.

New roles are already emerging. Some teams have "agent supervisors" who manage portfolios of AI workers the way a manager oversees human teams. Others have "orchestrators" who design how humans and agents hand off work to each other. The most effective people in these roles aren't necessarily the deepest technical experts. They're generalists who understand the business, can spot when an agent is drifting off-course, and know when to override automation with human judgment. The specialists become the exception handlers, the ones who step in when agents encounter situations outside their training.

Hierarchies flatten. When one person can effectively oversee dozens of agents doing work that used to require a large team, you need fewer layers of management. But you need those remaining humans to be much better at systems thinking, quality judgment, and strategic direction.

What doesn't change:

Humans are still responsible. Every AI action ultimately traces back to a human decision to deploy that AI, configure it a certain way, and keep it running. Quality still matters. AI-generated output isn't automatically good. It needs review, validation, and continuous improvement. Culture still drives outcomes. An organization that treats AI as a magic fix will get poor results. An organization that thoughtfully integrates AI into its culture will thrive.

Start small, but start now

If you haven't thought about where AI fits in your organization, start.

Pick one agent. Maybe it's a ticket analysis agent that breaks down new issues and estimates complexity. Maybe it's a development agent that picks up well-defined tasks and creates working PRs. Maybe it's a code review agent that checks every PR for security issues and test coverage. Maybe it's a deployment agent that handles staging releases and runs smoke tests automatically.

Give it a clear scope. Assign a human owner. Define its success metrics. Put it somewhere in your team structure where its role makes sense.

Then watch how it performs. Tune it. Improve it. Learn how to manage it.

The goal isn't to have AI everywhere immediately. The goal is to develop the organizational muscle for working with AI as part of your team, not just as a tool you occasionally use. The first agent teaches you more about your organization than any planning document could. You'll discover where your processes are actually unclear, where your data is messier than you thought, and where your team's comfort with AI-assisted work really stands.

Once the first agent is working well, expand thoughtfully. Not by deploying agents everywhere at once, but by picking the next highest-value, lowest-risk opportunity and applying what you learned. The teams that succeed treat this as continuous capability building, not a one-time transformation project.

The teams that figure this out now will be running hybrid workforces of humans and AI agents, coordinating seamlessly, shipping faster than competitors who are still debating whether to adopt AI at all.

The teams that don't? They'll still be running three-month pilots while their competitors deploy their tenth agent.

The bottom line

AI agents aren't tools you use. They're workers you manage. The sooner you internalize that shift, the sooner you can start building the organizational capabilities to leverage them effectively.

Your org chart is a representation of how you get work done. If AI agents are doing work (and they are, whether you acknowledge it or not), they belong there. Not because they're human. Because they're doing jobs that matter, and those jobs need accountability, oversight, and coordination just like any other.

The debate about whether to use AI is over. The teams that recognized this are already operating differently. They're building hybrid workforces. They're thinking about agents as team members. They're developing new management practices for this new reality.

The question isn't whether this shift is coming. It's whether you'll be ready when it arrives at your door, or still debating whether to open it.

Building hybrid teams of humans and AI agents requires intentional organizational design. If you're wrestling with how to structure this transition for your team, I'm always interested in these conversations.

Why GPT-5.2 is Coming Soon: The Race to Lead the AI Revolution

2025-12-14 03:16:07

Introduction: The AI Competition Intensifies

The artificial intelligence landscape has entered a critical phase of competitive acceleration. OpenAI's announcement of GPT-5.2, originally scheduled for later in December 2025 but accelerated to December 9, marks a significant turning point in the ongoing battle for AI supremacy. This aggressive timeline shift reveals the intensity of competition in the generative AI market, where releasing cutting-edge models has become as much a strategic imperative as a technical achievement.

AI Market Competition Landscape 2025

The Catalyst: Google's Gemini 3 Game-Changer

The immediate trigger for GPT-5.2's accelerated launch is the extraordinary impact of Google's Gemini 3 model, which launched in November 2025 with capabilities that caught even OpenAI's leadership off guard. This development forced Sam Altman, CEO of OpenAI, to issue an internal "code red" directive, pushing the company to advance GPT-5.2's release timeline by several weeks.

Gemini 3 Pro represents a paradigm shift in multimodal AI capabilities. With a revolutionary context window of 1 million tokens (compared to GPT-5.1's 128,000 tokens), Google's model can simultaneously process entire codebases, hours of video transcripts, and comprehensive legal documents. The model's performance metrics are particularly impressive, achieving 81% on the MMMU-Pro benchmark and 87.6% on Video-MMMU, demonstrating unmatched superiority in video and multimodal understanding.

Performance Comparison: GPT-5.1 vs Gemini 3 Pro

Performance Metrics: A Detailed Comparison

The competitive gap between these models becomes clearer when examining specific performance indicators. While GPT-5.1 excels in certain domains, Gemini 3 Pro's architecture reveals strategic advantages in others:

Metric             | GPT-5.1                       | Gemini 3 Pro
Context Window     | 128,000 tokens                | 1,048,576 tokens (1M)
Output Capacity    | 16,834 tokens                 | 65,536 tokens
MMMU Benchmark     | 84.2%                         | 81%
SWE-Bench Verified | 76.3%                         | Not published
Key Strength       | Code and tool-based reasoning | Video and multimodal depth

While GPT-5.1's 84.2% MMMU score edges slightly ahead of Gemini 3's 81%, this masks a critical truth: Gemini 3 dominates in practical, real-world multimodal scenarios, particularly video processing and long-context understanding.

Strategic Business Factors: Why Competition Is Accelerating

Market Share and Competitive Pressure

The AI market has fundamentally shifted from monopolistic dominance to fierce pluralistic competition. Data from OpenRouter's analysis of 100 trillion tokens reveals that no single proprietary model exceeds 25% of open-source token usage by late 2025, indicating a rapidly fragmenting market where even dominant players must continuously innovate. Meanwhile, Chinese AI models have surged from 13% to approximately 30% of global usage, tripling their market share in a single year and creating additional competitive pressure.

Pricing Wars as a Competitive Tool

Google's aggressive pricing strategy has become a strategic weapon in the AI wars. The company has significantly reduced API costs to attract developers away from OpenAI's ecosystem. Gemini 3 Pro's pricing structure, starting at $2.00 per 1 million input tokens for standard context (compared to OpenAI's $1.25), becomes competitive when developers factor in capabilities like the massive context window and superior video understanding. However, the real strategic move was pricing aggressiveness targeting the developer ecosystem.

API Pricing Comparison: Cost per 1M Tokens

The Core Question: Why Release GPT-5.2 When 5.1 Is Still New?

This seemingly contradictory timeline—GPT-5.1 released in November 2025, followed by GPT-5.2 in December—reveals how fiercely competitive the AI industry has become. OpenAI's rationale includes several critical factors:

1. Rapid Feature Integration

GPT-5.1 already introduced significant innovations: adaptive reasoning that dynamically allocates computational effort based on task complexity, new coding tools like apply_patch and shell integration, and extended prompt caching for cost optimization. However, Gemini 3's capabilities exposed gaps in GPT-5.1's architecture, necessitating accelerated development of GPT-5.2 with enhanced image generation and video processing capabilities.

2. Market Perception and Developer Mindshare

In the AI market, perception of technological leadership directly translates to developer adoption. When Google's Gemini 3 received public praise from competitors like Elon Musk and even OpenAI's own Sam Altman, it signaled a genuine competitive threat. Releasing GPT-5.2 quickly—even with incremental improvements—prevents the narrative from settling on "Gemini 3 is the best available model."

3. Necessity of Continuous Evolution

Tech giants have recognized that holding the market leadership position requires releasing new models frequently with demonstrably advanced features. Unlike traditional software releases with multi-year cycles, frontier AI models operate on monthly timescales. Organizations investing in AI infrastructure need confidence that their chosen platform will remain state-of-the-art.

4. Data and Resource Advantages of Incumbents

Companies like Google and Microsoft possess unparalleled data advantages. Google, as the world's dominant search engine, accumulates vast quantities of user search queries, interaction patterns, and real-world information that can train superior models. Similarly, Microsoft's integration with enterprises provides crucial data for building enterprise-focused AI systems. OpenAI must compensate for these disadvantages through aggressive release schedules and feature differentiation.

The Broader Context: Market Fragmentation

AI Model Release Timeline & Competition Intensity (Aug-Dec 2025)

An important context shift has occurred: the AI market has moved beyond a two-player game. By late 2025, the competitive landscape includes not just OpenAI and Google DeepMind, but also Anthropic, xAI, and increasingly capable open-source alternatives. OpenAI's "code red" response suggests internal recognition that complacency could allow competitors to capture significant market share and mindshare.

Strategic Strengths of Each Platform

Understanding why OpenAI must compete so aggressively requires recognizing where each company's advantages lie:

Google's Advantages:

  • Integrated into world's largest search engine with unmatched data access
  • Massive computational resources and infrastructure
  • Deep integration with enterprise Google Workspace products
  • Superior multimodal and video understanding capabilities

OpenAI's Advantages:

  • First-mover advantage in ChatGPT adoption and brand recognition
  • Specialized coding and tool-integration capabilities
  • Community trust and strong developer ecosystem adoption
  • Focused product philosophy and rapid iteration cycles

GPT-5.1 vs Gemini 3 Pro: Key Strengths Comparison

What GPT-5.2 Needs to Deliver

For GPT-5.2 to justify its accelerated release and reclaim competitive dominance, it must address Gemini 3's strongest advantages:

  1. Enhanced Image Generation - Closing the gap in visual AI capabilities
  2. Improved Multimodal Processing - Better understanding of complex image and video content
  3. Cost Optimization - Making the model more economically competitive
  4. Extended Context - Moving toward larger context windows (though not necessarily matching Gemini's 1M)
  5. Agent Capabilities - Advanced tool integration for complex workflows

Conclusion: A Race Without a Finish Line

The story of GPT-5.2's accelerated release embodies a fundamental truth about modern AI development: these are not discrete products with stable release dates, but rather continuously evolving capabilities in a market where leadership is temporary and constantly contested. OpenAI's decision to release GPT-5.2 just weeks after GPT-5.1 reflects not poor planning, but rather rational response to genuine competitive threat.

The AI industry in 2025 has entered a phase where companies with the data, resources, and computational power of Google, Microsoft, and OpenAI are locked in an arms race that benefits users through rapid capability improvements but creates pressure for ever-faster innovation cycles. For developers and organizations, this means the AI platform they choose today may already be obsolete by the time they've fully integrated it into their systems—making adaptability more valuable than loyalty to any single vendor.

The question is no longer "Which AI model is best?" but rather "How do we build systems flexible enough to incorporate the latest capabilities as they emerge?" In this new competitive reality, GPT-5.2's December release is not an anomaly—it is the new normal.

Swift #12: Functions

2025-12-14 03:13:39

Functions are blocks of code delimited by braces ({}) and identified by a name. Unlike the code blocks used in loops and conditionals, a function doesn't need any condition to be satisfied in order to run; the lines of code inside the function execute whenever it is invoked.

Once a function is called, program execution continues inside it and, when it finishes, returns to the line of code that invoked it to resume the previous flow.

func f1() { // Declaration and definition
  let x = 1 + 2 // 3
}

f1() // Invocation

var counter = 0
while counter < 3 { // Invoking it three times
  f1()
  counter += 1
}

The advantage of a function is that we can call it from anywhere and it will always perform the same operation. You don't need to know how it works internally; you just pass in the desired values and read the result.

Parameters

You can pass values to a function through parameters inside parentheses. Parameters are defined as constants separated by commas, like this: name1: Type1, name2: Type2, .... When the function is called, the parameters become constants within the scope of the function's block.

func f2(a: Int, b: Int) {
  let x = a + b
}

f2(a: 1, b: 2) // results in x = 1 + 2
f2(a: 2, b: 5) // results in x = 2 + 5

Argument labels

Each time you call a function with parameters, you must write the name of each parameter before the argument's value. These are the argument labels.

Sometimes the parameter name is clear within the context of the function body but not when reading the call. For this reason, you can define each argument's label explicitly, declaring it before the parameter name, separated by a space.

func f2(first a: Int, second b: Int) {
  let x = a + b
}

f2(first: 1, second: 2) // results in x = 1 + 2

Note that inside the function the parameters a and b are used, while at the call site you see the labels first and second.

If, instead, you want to omit the label, you can replace it with an underscore (_).

func f2(first a: Int, _ b: Int) {
  let x = a + b
}

f2(first: 1, 2) // results in x = 1 + 2

Reference parameters with inout

Usually a function's arguments are copied as constants into its parameters. However, you can pass a reference to a variable as an argument so the function can modify that variable. To do this, mark the parameter as inout.

func f4() {
  var counter = 0
  f5(a: &counter)
  print(counter) // 1
}
func f5(a: inout Int) {
  a += 1
}
f4() // Prints: 1

Note that:

  1. You cannot pass a literal value as the argument; you need to pass a variable that can be modified. If you try to pass a literal, you will get the error: Cannot pass immutable value as inout argument: literals are not mutable.
  2. The & operator is used to take a reference to the variable. If you forget it, you will get the error: Passing value of type 'Int' to an inout parameter requires explicit '&'

Default parameter values

In the function declaration, after a parameter's type, you can attach a default value with the syntax name: Type = default_value. This lets you omit passing a value for that parameter.

func f2(first a: Int, second b: Int = 1) {
  let x = a + b
}

f2(first: 3) // results in x = 3 + 1

Return values

Values defined inside a function cannot be accessed from outside it; you could say they are trapped there. To communicate a result to the rest of the program, a function can return a value with the return statement, which ends the function's execution when it runs.

To return a value you must change the function's signature, specifying the type of the return value (e.g. -> Type).

func f2(a: Int, b: Int) -> Int {
  return a + b
}

let x1 = f2(a: 1, b: 2) // results in x1 = 3
let x2 = f2(a: 2, b: 5) // results in x2 = 7

If the function's body is a single expression that returns a value, the return keyword can be omitted.

func f3(a: Int, b: Int) -> Int {
  a + b
}

Function overloading

You cannot create more than one function with the same signature, that is, the same name and the same parameter types and count. However, you can create more than one function with the same name if their parameters differ in type or count. This is known as function overloading.

func f(value: Int) -> Int {
  value + 1
}
func f(value: String) -> Int {
  value.count
}
print(f(value: "Hola")) // Infiere que usa f(value: String)
print(f(value: 1)) // Infiere que usa f(value: Int)

You can also overload on the return type. However, if you leave it up to the Swift compiler to infer which function it should execute, you will get the error: Ambiguous use of '...'. For example:

func f(value: Int) -> Int {
  value + 1
}
func f(value: Int) -> Double {
    Double(value) + 1.0
}
let result = f(value: 1) // Ambiguous use of 'f(value:)'
print(result)

To solve this problem, you have to provide a bit more information so the compiler can deduce which function to use. For example:

func f(value: Int) -> Int {
  value + 1
}
func f(value: Int) -> Double {
    Double(value) + 1.0
}
let result: Double = f(value: 1) // ✅
print(result)

Generic functions

Consider the scenario of two functions with the same name, the same return type, and a single parameter whose type differs in each function. The system then selects which function to execute depending on the type of the argument.

func f(value: Int) -> String {
  "Valor: \(value)"
}
func f(value: String) -> String {
  "Valor: \(value)"
}
let result1 = f(value: 1)
let result2 = f(value: "Hola")

In the previous example, both functions had the same body. In this case, we can avoid duplicating code by writing a generic function:

func f<T>(value: T) -> String {
  "Valor: \(value)"
}
let result1 = f(value: 1) // Valor: 1
let result2 = f(value: "Hola") // Valor: Hola

A generic type works like a template. When the function is invoked, the template takes on the type of the value it receives.

There can be more than one template type, for example:

func f<T, U>(value1: T, value2: U) -> String {
  "Valores: \(value1) \(value2)"
}
let result1 = f(value1: 1, value2: 0.5) // Valores: 1 0.5
let result2 = f(value1: "Hola", value2: "Chao") // Valores: Hola Chao

Standard library

The Swift standard library includes several operators, primitive types, and predefined functions such as:

  1. print(String): Prints a string to the console.
  2. abs(Int): Returns the absolute value of an integer.
  3. max(Values): Returns the largest value.
  4. min(Values): Returns the smallest value.
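
A minimal sketch of these standard library calls in action:

print("Hello") // Hello
print(abs(-5)) // 5
print(max(1, 7, 3)) // 7
print(min(1, 7, 3)) // 1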

Functions for stopping program execution

  1. fatalError(String): Stops the application's execution and prints a message to the console.
  2. precondition(Bool, String): Stops the application's execution if the condition is false, and prints a message to the console.
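
For example, a minimal sketch using precondition (fatalError behaves similarly but halts unconditionally); the divide function here is just an illustrative name:

func divide(_ a: Int, by b: Int) -> Int {
  precondition(b != 0, "Cannot divide by zero") // Halts here if b == 0
  return a / b
}

print(divide(10, by: 2)) // 5
// divide(10, by: 0) would stop execution and print "Cannot divide by zero"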

Functions for creating collections

  1. stride(from: Value, through: Value, by: Value): Returns a collection of values from from to through (inclusive) in steps of by.
  2. stride(from: Value, to: Value, by: Value): Returns a collection of values from from up to, but not including, to in steps of by.
  3. repeatElement(Value, count: Int): Returns a collection that repeats the value Value count times.
  4. zip(Collection, Collection): Returns a collection of tuples pairing up the values of both arguments in sequential order.
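
A minimal sketch of these collection builders in use:

for value in stride(from: 0, through: 10, by: 5) {
  print(value) // 0, 5, 10
}
for value in stride(from: 0, to: 10, by: 5) {
  print(value) // 0, 5
}
let threes = Array(repeatElement(3, count: 4)) // [3, 3, 3, 3]
let pairs = Array(zip([1, 2], ["a", "b"])) // [(1, "a"), (2, "b")]
print(threes)
print(pairs)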

Function scope

A code block is defined as a set of variables and statements wrapped in braces (i.e. { ... }). Variables declared inside a block cannot be accessed from a different block. The region of code that has access to these variables is known as the block's "scope".

Variables defined in the global scope can be accessed from a local scope, but not the other way around.
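
For example, a minimal sketch of global versus local scope (f6 and the variable names are just illustrative):

let globalMessage = "Hello" // Global scope: visible throughout the file

func f6() {
  let localMessage = "World" // Local scope: only visible inside f6
  print(globalMessage + " " + localMessage) // Hello World
}

f6()
// print(localMessage) // Error: cannot find 'localMessage' in scope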

Learn Dart Programming Language: A Beginner's Guide

2025-12-14 03:10:34

Dart is surging in popularity for cross-platform mobile apps, especially with Flutter's ecosystem exploding over the years.

This article explains Dart's essentials in a 6-minute read. No prior experience required: just copy-paste the snippets and experiment. By the end, you'll learn the core of the Dart Programming Language.

Introduction: Why Dart Rules Mobile Dev

If you're diving into mobile with Flutter, Dart is your optimized powerhouse: type-safe, null-safe, and compiling to native code for buttery-smooth performance. We'll cover variables, operators, strings, conditionals, collections, for loops, functions, and nullability—the core toolkit for dynamic UIs and API-driven apps. Fire up DartPad.dev and follow along. Ready? Code on!

2025 Trend Tie-In: With AI tools like Firebase ML demanding fast data handling, these basics answer top queries like "How to safely parse JSON in Flutter?" and "Efficient lists for adaptive layouts?"

Variables: The Building Blocks Everyone Trips On

Variables hold your app's data, like user names or scores. Dart keeps it type-safe yet flexible: var infers types, final sets once at runtime, and const locks in compile-time values.

void main() {
  var name = 'Samuel';  // Inferred as String
  final age = 1;      // Can't reassign after init
  const pi = 3.14;    // Immutable from compile time
  print('$name is $age years old (pi: $pi)');
}

Output: Samuel is 1 years old (pi: 3.14).

Pro Tip: Newbies often blur final and const: final suits API fetches (runtime values), const suits hardcoded UI constants. In your first Flutter app, final prevents reassignment bugs in stateful widgets.

Common Q Fix: "Var vs. final vs. const?"—Google's top Dart search, vital for immutable data in mobile caching.

Operators: Crunch Numbers and Logic Like a Pro

Operators handle math, comparisons, and decisions. Arithmetic basics: +, -, *, /, % (modulo). Relational: ==, !=, >, <. Logical: && (and), || (or), ! (not).

void main() {
  int a = 10, b = 3;
  print(a + b);  // 13
  print(a % b);  // 1 (remainder)
  print(a > b && b != 0);  // true
}

These fuel game logic or form validations—everyday mobile essentials. Null-aware operators? Coming up with nullability.

Trend Alert: As apps integrate AI (e.g., user input filtering), logical ops slash processing time, cutting battery drain on devices.

Common Q: "Quick comparisons without errors?"—A Reddit staple for validation flows.

Strings: Mastering Text for Dynamic UIs

Strings manage text: use single/double quotes for basics, triples for multiline. Interpolate vars with $var or ${expression}.

void main() {
  String greeting = 'Hello, Dart!';
  String multiline = '''
  This is
  a poem.
  ''';
  print('$greeting World: ${2 + 2}');  // Hello, Dart! World: 4
  print(multiline);
}

Escape quotes with \.

Pro Tip: Leverage interpolation in Flutter's debug console prints; it saves debugging hours in complex layouts. Perfect for localization strings in global apps.

Common Q Fix: "Embedding vars in strings?"—Huge for UI text in internationalized mobile features.

Conditionals: Branching for Smarter App Logic

Conditionals steer flow: if, else if, else for basics; ternary condition ? true : false for shorthand. Switch handles multiples.

void main() {
  int score = 85;
  if (score >= 90) {
    print('A+');
  } else if (score >= 80) {
    print('B');  // Runs here
  } else {
    print('Keep grinding');
  }
  String grade = score >= 90 ? 'A' : 'B';  // Ternary shorthand
  print(grade);  // B
}

Switch example:

switch (score ~/ 10) {  // Integer division
  case 9: print('Great!'); break;
  default: print('Try again');
}

Core for auth flows or adaptive UIs—Flutter's conditional rendering (e.g., Visibility widgets) builds on this.

Tip: Skip == true on bools; Dart evaluates the boolean expression directly.

Common Q: "Ternary vs. if-else in performance?"—Top for concise mobile code.

Collections: Lists and Maps for Data Power

Collections group data: Lists (ordered arrays) and Maps (key-value pairs). Spread with ... to merge.

void main() {
  List<String> fruits = ['apple', 'banana'];
  fruits.add('cherry');  // Now 3 items
  print(fruits[1]);  // banana (0-indexed)

  Map<String, int> scores = {'Course': 100, 'User': 95};
  scores['Newbie'] = 80;
  print(scores['Course']);  // 100

  // Spread: List<String> more = [...fruits, 'date'];
}

Lists shine in todo apps; maps parse JSON APIs—fuel for Flutter's ListView.builder.

Pro Tip: Index carefully (0-based) to dodge crashes in dynamic feeds.

Common Q Fix: "Iterating maps error-free?"—Critical for mobile API handling, per dev forums.

For Loops: Iterating Efficiently

For loops handle repetition: classic counters or for-in for collections.

void main() {
  for (int i = 0; i < 3; i++) {
    print('Loop $i');  // 0,1,2
  }

  List<int> nums = [1, 2, 3];
  for (var num in nums) {
    print(num * 2);  // 2,4,6
  }
}

Stick to these for Flutter builds; while loops come later. Avoid off-by-one errors by iterating up to length - 1.

Trend Tip: Optimized loops trim battery use; test on emulators for real-world mobile perf.

Common Q: "For-in vs. forEach?"—Efficiency debate in list-heavy apps.

Functions: Reusable Code for Modular Apps

Functions encapsulate logic: returnType name(params) { }. Optionals via [] (positional) or {} (named). Arrows for brevity.

int add(int a, int b) => a + b;  // Arrow shorthand

void greet([String? name = 'World']) {  // Positional optional
  print('Hi, $name!');
}

void main() {
  print(add(5, 3));  // 8
  greet('Dart');     // Hi, Dart!
  greet();           // Hi, World!
}

Named parameters use braces in the declaration, e.g. void greet({String name = 'World'}), and are called as greet(name: 'Flutter'). Functions are your Flutter widget foundation: reuse them for clean code.

Pro Tip: Defaults prevent nil params in user inputs.

Common Q Fix: "Optional params gotchas?"—Interview classic for modular mobile design.

Nullability: The Crash-Proof Shield

Dart's null-safe (since 2.12) era means ? for optionals, ! for assertions, ?? for fallbacks, and late for deferred inits.

String? nullableName;  // Can be null
String name = nullableName ?? 'Default';  // Safe default

void main() {
  int? maybeNum;           // Starts out null
  print(maybeNum ?? 42);   // 42, the fallback when null
  // print(maybeNum! + 1); // Would crash: ! asserts non-null, so avoid it on possibly-null values
  maybeNum = 7;
  print(maybeNum! + 1);    // 8, safe now that it holds a value
}

Null safety like this slashes the runtime null errors that used to plague Flutter deploys.

Hot Trend: Null safety cements Dart as #1 for reliable mobile—pair with async for API calls.

Common Q: "Handling nulls in JSON?"—Top 2025 query with rising microservices.

Conclusion: Level Up to Flutter Mastery

You've nailed Dart's core—now practice in DartPad for practical experience.

Top 5 Mobile Dev Questions This Answers (2025 Trends):

  1. Nulls in APIs? (Nullability section—Firebase integrations).
  2. Efficient data lists? (Collections + loops for dynamic UIs).
  3. Reusable UI logic? (Functions for widget modularity).
  4. Text for global apps? (Strings for i18n).
  5. Conditional layouts? (For adaptive designs on varying screens).

What's your first Dart project? Drop thoughts below—let's trend together! 🚀

Go Slices: The Pointer Paradox - Why Your Appends Disappear (Understanding when slice modifications persist and when they vanish)

2025-12-14 03:09:20

Ever faced a situation where modifying a slice inside a helper function updates the original data sometimes, but appending mysteriously does nothing?

Welcome to the Go slice paradox.

The Confusing Scenario

package main

import "fmt"

// User is the struct the example operates on (defined here so the snippet compiles).
type User struct {
    Name string
    Age  int
}

func main() {
    users := []*User{
        {Name: "Alice", Age: 25},
        {Name: "Bob", Age: 30},
    }

    updateUsers(users)

    fmt.Println("After update:")
    for _, u := range users {
        fmt.Printf("%s: %d\n", u.Name, u.Age)
    }
    // Alice: 40  - Wait, this changed!
    // Bob: 30
    // Charlie: ? - Where did Charlie go?!
}

func updateUsers(users []*User) {
    // This change persists
    users[0].Age = 40

    // This change disappears
    users = append(users, &User{Name: "Charlie", Age: 35})
}

Why does Alice update but Charlie disappears into the void?

Let’s unpack the internals.

Understanding What’s Inside a Slice

A Go slice is not an array.

It is a tiny 3-field header:

  • Pointer to underlying array
  • Length
  • Capacity

// Conceptual slice header
type sliceHeader struct {
    pointer  *array
    length   int
    capacity int
}

When passed to a function, this header is copied, not the underlying data.

The Pointer Double Whammy in []*User

With []*User, there are two indirection layers:

  1. Slice header → points to array
  2. Array elements → pointers to user structs

users (slice header)
    │
    ├──→ [pointer1, pointer2] (array of pointers)
    │        │          │
    │        │          └──→ Bob struct
    │        │
    │        └──→ Alice struct
    │
    └── length: 2, capacity: 2

Rules of Engagement

Rule 1: Struct modifications persist

func updateUsers(users []*User) {
    users[0].Age = 40 // Updates original struct
}

Why?

Because both caller and callee share the same pointer to Alice.

Rule 2: Slice appends vanish

func updateUsers(users []*User) {
    users = append(users, &User{Name: "Charlie"})
}

append may allocate a new underlying array, and it always produces a new slice header, but that new header exists only inside the function.

The caller still points to the old slice.

Visualization

Before function call

main.users → [ptr1, ptr2] (len=2, cap=2)

Inside function after append

updateUsers.users → [ptr1, ptr2, ptr3] (len=3, cap=4)
main.users        → unchanged

After function returns

Charlie → eligible for GC

Fixing Appends (Make Them Persist)

Option 1: Pass a pointer to the slice

func main() {
    users := []*User{...}
    updateUsers(&users)
}

func updateUsers(users *[]*User) {
    *users = append(*users, &User{Name: "Charlie"})
}

Option 2: Return the updated slice

func main() {
    users := []*User{...}
    users = updateUsers(users)
}

func updateUsers(users []*User) []*User {
    users[0].Age = 40
    return append(users, &User{Name: "Charlie"})
}

Option 3: Use methods on custom types

type UserList []*User

func (ul *UserList) AddUser(name string, age int) {
    *ul = append(*ul, &User{Name: name, Age: age})
}

func main() {
    users := UserList{...}
    users.AddUser("Charlie", 35)
}

Key Takeaways

  • When you update the fields of a struct through a pointer inside a slice, the changes are reflected in the original data because both the caller and the callee reference the same memory.
  • When you modify the slice itself (length or capacity), those changes do not carry over because the slice header is passed by value.
  • As a result, using append inside a function won’t affect the caller’s slice unless:

    • you return the modified slice, or
    • you pass a pointer to the slice.

The Mantra

Pointers in slices share memory, but slice headers don’t.
If you grow a slice in another function, pass a pointer or return it.

If you found this useful, share it with a Go dev who’s currently yelling at their debugger 😉

Navigating the Switch: How to Choose the Right Linux Distro in 2026

2025-12-14 03:08:33

A penguin standing at a crossroads with signposts for different Linux distributions.

With Microsoft having officially ended support for Windows 10 on October 14, 2025, many users are looking for alternatives to keep their digital lives secure and up-to-date. For a growing number of people, that alternative is Linux.

However, stepping into the world of Linux can feel like standing at a massive crossroads. With hundreds of "distributions" (or distros) available, how do you choose the one that fits your needs? This guide will help you navigate the ecosystem and find your perfect match.

What is Linux, Really?

Before diving into choices, it is important to understand that Linux is technically not an operating system itself, but a kernel—the core component that manages the system's resources. What we call "Linux" are actually distributions built around this kernel. Each distro packages the kernel with its own selection of software, desktop interfaces, and package management systems to create a complete operating system.

This architecture is why Linux is so flexible. It powers everything from the world's top supercomputers and servers (like Google, Amazon, and Microsoft Azure) to Android phones and personal laptops.

7 Factors to Consider When Choosing a Distro

Finding the "best" distro is subjective; it depends entirely on what you need your computer to do. Here are the key factors to evaluate:

  1. User-Friendliness: If you are migrating from Windows or macOS, you likely want an intuitive interface. Distros like Linux Mint and Ubuntu are designed to feel familiar immediately.
  2. Purpose: Define your primary use case.
    • General Use: Web browsing, media, office work.
    • Gaming: High performance and driver support.
    • Development: Programming tools and environment stability.
    • Server Management: Uptime and security.
  3. Hardware Compatibility: Linux breathes new life into old machines, but not every distro is lightweight. A modern distro like Pop!_OS is great for new hardware, while Lubuntu is perfect for aging laptops.
  4. Software Availability: Ensure the apps you need are supported. Most popular distros have massive repositories, but some niche ones might require more tinkering to get proprietary software running.
  5. Community Support: A strong community is your safety net. Distros with active forums (like Arch or Ubuntu) make troubleshooting infinitely easier.
  6. Release Cycle:
    • Fixed Release (LTS): Stable versions released every few years (e.g., Ubuntu LTS, Debian). Great for stability.
    • Rolling Release: Continuous updates with the absolute latest software (e.g., Arch Linux, Manjaro). Great for having cutting-edge features.
  7. Customization: Do you want a system that works out of the box, or do you want to build it brick-by-brick?

Recommendations by User Type

1. The "First-Time Switcher" (Beginners)

If you want a "just works" experience with a Windows-like feel:

  • Linux Mint: The gold standard for Windows refugees. It’s stable, familiar, and polished.
  • Ubuntu: The most popular desktop Linux. It has arguably the best third-party support and documentation.
  • Zorin OS: Specifically designed to look and behave exactly like Windows or macOS.

2. The Developer & Power User

For those who need robust tools and control:

  • Fedora: Widely used by professional developers, it balances cutting-edge features with stability. Linus Torvalds himself is known to use Fedora.
  • Debian: The rock-solid foundation for many other distros (including Ubuntu). It prioritizes stability above all else.
  • Arch Linux: For those who want to build their system from the ground up. It offers ultimate control but requires a "do-it-yourself" attitude.

3. The Gamer

Yes, you can game on Linux! Valve's Proton layer has changed the game.

  • Garuda Linux: An Arch-based distro optimized specifically for gaming with performance tweaks and a stunning "dragonized" neon aesthetic.
  • Pop!_OS: Fantastic out-of-the-box support for NVIDIA GPUs, making it a favorite for gamers and AI developers.
  • Manjaro: Accessible Arch-based Linux that offers access to the vast Arch User Repository (AUR) without the complex setup.

4. The Privacy Advocate

  • Kali Linux: While mainly for penetration testing and security research, it is the standard for cybersecurity professionals.
  • Tails: An amnesic system designed to leave no trace on the computer you are using.
  • Qubes OS: Uses virtualization to isolate every app, offering strong security through compartmentalization.

5. The Hardware Resurrector

For old computers that struggle with Windows 10/11:

  • Lubuntu or Xubuntu: Lightweight flavors of Ubuntu that use minimal system resources.
  • Puppy Linux: Extremely small and fast, designed to run entirely in RAM.

The Linux Family Tree

Understanding a distro's "ancestry" can help you predict how it behaves.

  • Debian-based: Known for stability and the .deb package format.
    • Examples: Ubuntu, Linux Mint, Pop!_OS, Kali Linux, Parrot OS.
  • Red Hat-based: Enterprise-focused standards, often using .rpm packages.
    • Examples: Fedora, CentOS, RHEL (Red Hat Enterprise Linux).
  • Arch-based: Rolling releases that prioritize the latest software.
    • Examples: Arch Linux, Manjaro, Garuda Linux.
  • Independent: Built from scratch, not based on another distro.
    • Examples: Alpine Linux (popular in containers), Slackware.
  • Android-based: Linux kernel modified for mobile devices.
    • Examples: LineageOS.

(Note: While macOS is Unix-like and shares heritage with Linux via BSD, it is based on the Darwin kernel, not Linux.)

Conclusion: Just Try It!

The beauty of Linux is that you don't have to commit blindly. Almost every distribution offers a Live USB version. You can download an ISO, flash it to a USB drive, and boot your computer into Linux to test user-friendliness, hardware compatibility, and aesthetics without installing anything or erasing your Windows data.

Switching to Linux is not just about changing operating systems; it is a journey into a community that values freedom, privacy, and collaboration. Whether you choose the stability of Debian, the ease of Mint, or the power of Arch, the right distro is out there waiting for you.

Enjoy exploring the world of Linux!