2025-11-25 00:19:41
Got a short one for you this week.
If you work in the “DevOps” field, by now you have used YAML to some extent in tools like Kubernetes, Docker Compose, or Ansible. How about some CI/CD like GitHub Actions? Maybe you are one of those psychopaths who write Terraform in either JSON or YAML—it should be illegal to do that, in my opinion.
Either way, you have seen YAML and maybe like it, maybe hate it, but we cannot deny the fact that it is king for a lot of tools out there, with JSON and TOML nearby, trying to take the spot.
Now, I’m on the middle ground. As much as YAML helped me when I started my career in Ansible and later on with Kubernetes—since it’s easy to write and understand—it’s ALSO a nightmare to work with because of its super helpful but stupid indentation rules. And while you might say, “Well, just use a formatter and stop crying…” Sure, crying is fun, but what if the formatter isn’t aware of the schema and suddenly puts that item at a different indentation level? Well, now we both cry together, and that is beautiful!
You may know that I use Neovim (btw), so even with a formatter and a great LSP + linter, it all becomes useless if you start spewing out random YAML files. So, it is imperative that your editor is aware of your schema and what goes where, which is why things like schemastore help out a lot when working with these filetypes. A bit ironic, as the SchemaStore project is mostly for JSON, but we can all live happily together.
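If your editor speaks LSP, one concrete trick is pinning the schema right in the file: yaml-language-server understands a modeline comment, so completion and validation follow the schema instead of guessing. A minimal sketch for a GitHub Actions workflow:

# yaml-language-server: $schema=https://json.schemastore.org/github-workflow.json
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "the LSP now knows what goes where"

(In many setups the GitHub Actions schema is picked up automatically from SchemaStore via the file path; the modeline is for files it can't match.)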
Setting up a good workflow is not exactly easy—a lot of manual intervention is still required. Maybe other editors do a better job. If they do, please let me know, as I use YAML a lot!
So what alternatives do we have? Glad you asked.
YAML is King. Change my mind, please!
Adios 👋
2025-11-25 00:12:42
One of the long-standing challenges with confidential computing has been building applications that do more than secure backend logic. TEEs excel at protected execution, secret key handling, and verifiable compute, but exposing a user-facing frontend from inside an enclave has always been awkward.
Developers typically had to rely on external proxies, manually managed TLS certificates, and domain routing pipelines that operated outside the TEE boundary. That meant extra tooling, inconsistent setups across providers, and, ironically, more opportunities for data to leave the secure boundary than most teams wanted.
The latest update to ROFL changes this dynamic in a meaningful way. ROFL now provides built-in proxy support and automated HTTPS endpoint creation, allowing applications to run both frontend and backend components entirely inside confidential environments.
This isn’t just a quality-of-life improvement; it shifts what “confidential apps” can practically look like.
The new proxy layer in ROFL acts as the gateway between the public Internet and the enclave-based application. Instead of requiring developers to configure NGINX, Traefik, or bespoke proxy setups, ROFL now handles ingress itself: domain routing, automated HTTPS certificate provisioning, and forwarding encrypted traffic into the enclave-hosted app.
From the developer’s perspective, hosting a frontend in ROFL now resembles deploying to a modern PaaS, only with hardware-backed confidentiality at every step.
ROFL’s deployment flow remains container-based, but with an additional annotation step to declare the domain an app should serve from. After redeploying, the CLI provides DNS instructions for whichever domain the developer wants to use.
Once DNS is configured, a quick restart triggers certificate creation. Importantly, certificate keys are generated inside the enclave and never leave it.
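To make that concrete, here is a purely illustrative compose sketch. The annotation key below is a placeholder I invented, not ROFL's real syntax; the actual keys are in the proxy docs linked below:

services:
  app:
    image: ghcr.io/example/confidential-app:latest  # hypothetical image
    annotations:
      example.rofl/domain: "app.example.com"  # placeholder key, not the real ROFL annotation

After redeploying with the domain declared, the CLI prints the DNS records to create; once they resolve, a restart triggers in-enclave certificate issuance.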
The proxy infrastructure routes based on TLS handshake metadata, not plaintext, preserving the enclave’s data-isolation guarantees.
Read the Docs here: https://docs.oasis.io/build/rofl/features/proxy/
This feature closes one of the last major gaps in building production-grade confidential applications: confidential computing becomes far more approachable when deploying a secure app is as simple as annotating a compose file and configuring DNS.
In practical terms, teams can now build complete user-facing applications, frontend and backend alike, all running inside trusted hardware, with no external proxy infrastructure to maintain.
ROFL has always focused on giving developers a general-purpose execution layer for verifiable, confidential off-chain logic. Adding native frontend hosting pushes it toward a true full-stack confidential compute platform.
The more that infrastructure fades into the background, the easier it becomes for developers to actually use these capabilities. This update moves the ecosystem one step closer to confidential applications that are secure by default—not secure with caveats.
2025-11-25 00:06:00
I recommend reading this first: installing Homebrew and asdf on Ubuntu (it's short, just 5 commands).
Note: let's not forget that the most important thing Dart has going for it today is Flutter, although that almost deserves its own section.
Via APT (official repository)
sudo apt update
sudo apt install apt-transport-https
sudo sh -c 'wget -qO- https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -'
sudo sh -c 'wget -qO /etc/apt/sources.list.d/dart_stable.list https://storage.googleapis.com/dart-archive/channels/stable/release/latest/linux_packages/dart_stable.list'
sudo apt update
sudo apt install dart
Via Homebrew
brew install dart
Managing packages with pub
dart pub add <package>
dart pub get
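A quick end-to-end sketch, using the real http package as an example dependency:

dart create hello_pub   # scaffolds a project with a pubspec.yaml
cd hello_pub
dart pub add http       # records the dependency and fetches it
dart run                # runs bin/hello_pub.dart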
Via asdf
sudo apt update
sudo apt install curl git unzip
asdf plugin add dart https://github.com/patoconnor43/asdf-dart.git
asdf list-all dart
asdf install dart 3.4.0
asdf global dart 3.4.0
Or pin the version per project in a .tool-versions file:
dart 3.4.0
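Verify which toolchain is active:

dart --version
asdf current dart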
Create the file: touch hola.dart
Code for hola.dart
void main() {
  print('Hello World from Dart!');
}
💻 Run locally:
dart run hola.dart
NOTE: both of the following code samples are compatible with Dart 3.9.
What it does: serves a small HTML page on port 9000 that greets the visitor by name, reading the username query parameter and HTML-escaping it before rendering.
📝 Create the file: touch server.dart
📦 Contents of server.dart
import 'dart:convert';
import 'dart:io';

void main() async {
  // Bind an HTTP server to localhost on port 9000.
  final server = await HttpServer.bind(InternetAddress.loopbackIPv4, 9000);
  print('Server running at http://localhost:9000');

  await for (HttpRequest request in server) {
    // Read ?username=... from the query string, defaulting to 'guest'.
    final params = request.uri.queryParameters;
    final username = params['username'] ?? 'guest';
    // Escape HTML special characters to neutralize injected markup.
    final escaped = htmlEscape.convert(username);
    final htmlContent = """
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Hello</title>
</head>
<body style="text-align:center">
<img src="https://dev.to/oscarpincho/mejora-ubuntu-con-homebrew-y-asdf-gestor-de-versiones-multiples-4nka" />
<h1>Hello, $escaped</h1>
</body>
</html>
""";
    request.response.headers.contentType = ContentType.html;
    request.response.write(htmlContent);
    await request.response.close();
  }
}
▶️ Run the project / start the server
dart run server.dart
👉 Visit: http://localhost:9000/?username=Homero
🔗 To test the URL sanitization: http://localhost:9000/?username=<h1>--Homer--</h1>
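You can also hit it from the terminal; thanks to htmlEscape, the injected tags come back as harmless entities:

curl "http://localhost:9000/?username=<h1>--Homer--</h1>"
# the response body contains: <h1>Hello, &lt;h1&gt;--Homer--&lt;/h1&gt;</h1>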
What it does: reads The Simpsons characters from data.json and exposes two GET endpoints: /characters (the full list) and /characters/:id (one character by ID).
Example data.json file:
[
  {
    "id": 1,
    "age": 39,
    "name": "Homer Simpson",
    "portrait_path": "https://cdn.thesimpsonsapi.com/500/character/1.webp"
  },
  {
    "id": 2,
    "age": 39,
    "name": "Marge Simpson",
    "portrait_path": "https://cdn.thesimpsonsapi.com/500/character/2.webp"
  }
]
📝 Create the file: touch api.dart
📦 Contents of api.dart
import 'dart:convert';
import 'dart:io';

void main() async {
  final server = await HttpServer.bind(InternetAddress.loopbackIPv4, 9001);
  print("JSON server running at http://localhost:9001");

  await for (HttpRequest request in server) {
    final path = request.uri.path;
    // Reload the data file on every request (fine for a demo).
    final characters = await loadCharacters();

    // GET /characters -> full list
    if (request.method == 'GET' && path == '/characters') {
      _sendJson(request, characters);
      continue;
    }

    // GET /characters/:id -> single character
    if (request.method == 'GET' && path.startsWith('/characters/')) {
      final idStr = path.split('/').last;
      final id = int.tryParse(idStr);
      if (id == null) {
        _sendJson(request, {"error": "Invalid ID"}, status: 400);
        continue;
      }
      final character = findById(characters, id);
      if (character == null) {
        _sendJson(request, {"error": "Character not found"}, status: 404);
      } else {
        _sendJson(request, character);
      }
      continue;
    }

    // Anything else -> 404 with pointers to the valid routes.
    _sendJson(request, {
      "error": "Route not found",
      "list_url": "http://localhost:9001/characters",
      "character_url": "http://localhost:9001/characters/1"
    }, status: 404);
  }
}

// Read and decode data.json into a list of maps.
Future<List<Map<String, Object?>>> loadCharacters() async {
  final file = File('data.json');
  final content = await file.readAsString();
  final List<dynamic> jsonData = jsonDecode(content);
  return jsonData.cast<Map<String, Object?>>();
}

// Linear search by the "id" field.
Map<String, Object?>? findById(List<Map<String, Object?>> list, int id) {
  for (final item in list) {
    if (item["id"] == id) return item;
  }
  return null;
}

// Serialize data as JSON with the given status code.
void _sendJson(HttpRequest request, Object data, {int status = 200}) {
  request.response.statusCode = status;
  request.response.headers.contentType = ContentType.json;
  request.response.write(jsonEncode(data));
  request.response.close();
}
▶️ Run the project / start the server
dart run api.dart
👉 Visit: http://localhost:9001/characters
To fetch a character by ID: http://localhost:9001/characters/1
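From the terminal, you can exercise every response path:

curl http://localhost:9001/characters        # 200: full list
curl http://localhost:9001/characters/2      # 200: Marge Simpson
curl http://localhost:9001/characters/abc    # 400: Invalid ID
curl http://localhost:9001/characters/99     # 404: Character not found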
2025-11-25 00:05:50
Many organizations are working hard to meet NIS2, DORA, or supply-chain-security requirements.
And yet they still fail at a point that seems almost trivial:
👉 They can’t technically prove what actually happened.
Auditors ask:
“Show me when, by whom, why, how, and with what something was deployed.”
And the usual reality is:
— 7 tools
— 5 ticket systems
— 0 unified evidence
— 100% headache
The solution is simple — but hard to enforce:
Everything that matters must live versioned in Git.
Code
IaC
Policies-as-Code
Pipelines
Evidence
Risk decisions
Recovery paths
Not scattered.
Not “documented somewhere.”
But commit-based, signed, traceable.
That turns Git into the Technical Source of Trust.
And suddenly NIS2 & DORA become things you can prove, not just answer vaguely.
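In practice, plain Git already provides the mechanics for "commit-based, signed, traceable"; a minimal sketch (the commit message, paths, and tag are illustrative):

git config commit.gpgsign true           # sign every commit by default
git commit -S -m "risk decision: accept finding FX-123, rationale in risk/FX-123.md"
git verify-commit HEAD                   # prove who signed what
git log --show-signature -- policies/    # audit trail for policies-as-code
git tag -s v1.4.2 -m "release evidence"  # signed, immutable delivery marker

Real setups layer commit conventions, protected branches, and artifact signing (e.g., Sigstore) on top, but the evidence chain starts here.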
🔐 NIS2
End-to-end automated traceability across the entire software supply chain — without manual heroism.
🧩 DORA
Operational resilience by design through reproducible recovery paths and verifiable risk decisions.
🇪🇺 Digital Sovereignty
Sovereign code hosting: the technical proof that you operate independently, under your own control, and audit-ready.
What GitSecOps Changes in Practice
No more “documentation theater”
Auditors review technical evidence — not slide decks
Dev, Sec, and Ops speak from the same data
Every decision is versioned
Every deviation is visible
Every delivery is auditable
Why I'm Writing About This
I build systems that prove trust — not promise it.
And GitSecOps is the first approach that puts compliance on a technical foundation without slowing down teams.
If you want to see how GitSecOps can be implemented in practice, I regularly share patterns, examples, and real use cases here.
2025-11-25 00:01:34
Both Gemini 3 Pro (Google/DeepMind) and Claude Sonnet 4.5 (Anthropic) are 2025-era flagship models optimized for agentic, long-horizon, tool-using workflows — and both place heavy emphasis on coding. Claimed strengths diverge: Google pitches Gemini 3 Pro as a general-purpose multimodal reasoner that also shines at agentic coding, while Anthropic positions Sonnet 4.5 as the best coding/agent model in the world with particularly strong edit/tool success and long-running agents.
Short answer up front: both models are top-tier for software engineering tasks in late 2025. Claude Sonnet 4.5 nudges ahead on some pure software-engineering bench metrics, while Google’s Gemini 3 Pro (Preview) is the broader, multimodal, agentic powerhouse—especially when you care about visual context, tool use, long-context work and deep agent workflows.
I currently use both models, and each has different advantages in a development environment. This article compares them.
Gemini 3 Pro is only available to Google AI Ultra subscribers and paid Gemini API users. However, the good news is that CometAPI, as an all-in-one AI platform, has integrated Gemini 3 Pro, and you can try it for free.
Gemini 3 Pro (available initially as gemini-3-pro-preview) is Google/DeepMind’s latest “frontier” LLM in the Gemini 3 family. It’s positioned as a high-reasoning, multimodal model optimized for agentic workflows (that is, models that can operate with tool use, orchestrate subagents, and interact with external resources). It emphasizes stronger reasoning, multimodality (images, video frames, PDFs), and explicit API controls for internal “thinking” depth.
It exposes explicit developer controls: a thinking_level parameter lets you trade latency for deeper internal reasoning (high is the default for Gemini 3 Pro), and a media_resolution setting tunes image/video fidelity vs cost, useful when you want the model to read small text in screenshots or analyze frames.

Claude Sonnet 4.5 is Anthropic’s 2025 release that Anthropic markets as its strongest model for coding, agentic workflows and “using computers” (controlling tools, browsers, terminals, spreadsheets, etc.). It emphasizes improved edit capability, tool success, extended thinking, long-running agent coherence (30+ hours of autonomous task execution in demonstrations), and lower code-editing error rates versus prior generations. Anthropic bills Sonnet 4.5 as their “best coding model” with large gains in edit reliability and long-horizon task coherence.
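Circling back to Gemini's controls, here is a rough sketch of how thinking_level and media_resolution might surface in a raw generateContent request body. The nested field names are my assumptions (camelCase REST casing of the documented parameters), so verify them against the official API reference:

{
  "contents": [
    { "parts": [{ "text": "Review this diff and suggest a safer refactor." }] }
  ],
  "generationConfig": {
    "thinkingConfig": { "thinkingLevel": "high" },
    "mediaResolution": "MEDIA_RESOLUTION_HIGH"
  }
}

Both nested fields are assumptions rather than confirmed API surface; JSON does not allow inline comments, so the caveat lives here instead.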
| Aspect | Gemini 3 Pro (Preview) | Claude Sonnet 4.5 |
|---|---|---|
| Model / release status | gemini-3-pro-preview — Google / DeepMind frontier model (preview). Released Nov 2025 (preview). | claude-sonnet-4-5 — Anthropic Sonnet-class frontier model (GA / announced Sep 29, 2025). |
| Target positioning (coding & agents) | General-purpose frontier model with emphasis on reasoning + multimodal + agentic workflows; positioned as Google’s top coding/agent model. | Specialized for coding, long-horizon agents and computer use (Anthropic’s “best for coding & complex agents”). |
| Key developer features | thinking_level control for deeper internal reasoning; built-in Google tool integrations (Search grounding, code execution, file/URL context); dedicated image variant for text+image workflows. | Agent SDKs, VS Code integration (Claude Code), file & code-execution tools, long-horizon agent improvements (explicitly tested for multi-hour runs). Emphasis on iterative edit/run/test workflows and checkpointing. |
| Context window (input / output) | 1,000,000 tokens input / 64k tokens output for gemini-3-pro-preview | 1,000,000 tokens input / 64k tokens output |
| Pricing (published baseline) | $2 / $12 per 1M tokens (input / output) for the <200k tier; higher rates ($4 / $18 per 1M) for >200k. | Anthropic published baseline: $3 / $15 per 1M tokens (input / output) for Sonnet 4.5. |
| Multimodal capability (vision/video/audio) | Full multimodal support: text, images, audio, video frames with configurable image/video resolution parameters; dedicated gemini-3-pro-image-preview. Strong emphasis on image OCR/visual extraction for coding UIs/screenshots. | Supports vision (text+image) inputs and uses vision to support coding workflows; primary emphasis is agentic integration (using visual context inside agent flows rather than image-generation parity). |
| Long-horizon agentic performance & persistence | “Thinking” primitives for explicit multi-step internal reasoning; strong math/reasoning & multimodal deep reasoning. Good at decomposing complex algorithmic tasks. Best for heavy single-response reasoning + multimodal analysis. | Anthropic emphasizes long-horizon agentic coherence, reporting internal tests where Sonnet 4.5 maintained coherent multi-step tool use for 30+ hours and improved continuous agent stability vs prior models. Good fit for persistent automation and CI-style agent workflows. |
| Output quality for coding (edits, tests, reliability) | Very strong single-shot reasoning + code generation; built-in tools to run code via Google’s tooling; high marks on algorithmic benchmarks per vendor claims. Practical advantage when the workflow mixes visual specs + code. | Designed for iterative edit→run→test loops; Sonnet 4.5 highlights improved “patching” reliability (rejection sampling / scoring techniques to pick robust patches) and tooling that supports iterative developer workflows (checkpoints, tests). |
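To make the baseline pricing concrete, here is a quick back-of-the-envelope cost sketch. The per-1M-token rates come from the table above; the workload volumes are invented for illustration:

void main() {
  // Published baseline rates in USD per 1M tokens: [input, output].
  const rates = {
    'Gemini 3 Pro (<200k tier)': [2.0, 12.0],
    'Claude Sonnet 4.5': [3.0, 15.0],
  };

  // Hypothetical monthly workload: 50M input tokens, 5M output tokens.
  const inputTokens = 50e6;
  const outputTokens = 5e6;

  rates.forEach((model, r) {
    final cost = inputTokens / 1e6 * r[0] + outputTokens / 1e6 * r[1];
    print('$model: \$${cost.toStringAsFixed(2)} per month');
  });
}

For this invented workload that works out to $160 for Gemini 3 Pro and $225 for Sonnet 4.5; long-context (>200k) Gemini calls would bill at the higher tier.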
Gemini 3 Pro: presented as a multimodal, general-purpose foundation model with explicit engineering for “thinking” and tool use: the design emphasizes deep reasoning, video/audio understanding, and agentic orchestration via built-in function calling and code execution environments. Google frames Gemini 3 Pro as the “most intelligent” in the family, optimized for wide tasks beyond code (though agentic coding is a priority).
Claude Sonnet 4.5: optimized specifically for agentic workflows and code: Anthropic emphasizes instruction-following, tool reliability, edit/correction proficiency, and long-horizon state management. The engineering focus is to minimize destructive or hallucinated edits and to make real-world computer interactions robust.
Takeaway: Gemini 3 Pro is pitched as a top generalist that’s been pushed hard on multimodal reasoning and agentic integration; Sonnet 4.5 is pitched as a specialist for coding and agentic tool use with enhanced edit/correction guarantees.
Gemini 3 Pro also offers the thinking_level parameter for controlling internal compute/latency tradeoffs, and deep integration into Google infra makes it convenient for teams already on Google Cloud.

Benchmarks vary slightly depending on the evaluator and configuration (single-attempt vs. multi-attempt, tool access, extended-thinking settings). Below is an analysis of the coding-benchmark data:
Claude Sonnet 4.5 (Anthropic reported): 77.2% (200k thinking budget; 78.2% in 1M config). Anthropic also reports an 82.0% high-compute score using parallel attempts/rejection sampling.
Gemini 3 Pro (DeepMind reporting / related leaderboards): ~76.2% single-attempt on SWE-bench (vendor table). Public leaderboards vary (Gemini and Sonnet trade narrow margins).
Gemini 3 Pro: terminal/agentic bench numbers in the vendor table show strong performance (e.g., Terminal-Bench 54.2%), competitive with Sonnet’s agentic strengths.
Sonnet 4.5: excels in agentic tool orchestration (Anthropic reports substantial gains on OSWorld and Terminal-style benchmarks and highlights longer continuous task performance).
Takeaway: the two models are very close on modern code-understanding and code-generation benchmarks; Sonnet 4.5 has a slight edge on some software-engineering verification suites (Anthropic’s published numbers), while Gemini 3 Pro is extremely competitive and often leads on multimodal and some coding-competition style leaderboards. Always validate with the exact evaluation configuration (tool access, context-size, thinking budgets), because those knobs materially change scores.
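For intuition, the “parallel attempts + rejection sampling” technique referenced in Anthropic’s high-compute score is conceptually simple: generate several candidate patches, reject any that fail the test suite, and keep the best-scoring survivor. A toy sketch (all names and probabilities invented; this illustrates the general idea, not any vendor’s implementation):

import 'dart:math';

void main() {
  final rng = Random(42);

  // Hypothetical candidate patches: each either passes the test suite or not,
  // and carries a score standing in for a learned patch-quality model.
  final candidates = List.generate(8, (i) {
    final passes = rng.nextDouble() > 0.4; // pretend ~60% pass the tests
    final score = rng.nextDouble();        // stand-in quality score
    return (name: 'patch_$i', passes: passes, score: score);
  });

  // Rejection sampling: drop failures, then rank the survivors.
  final survivors = candidates.where((c) => c.passes).toList()
    ..sort((a, b) => b.score.compareTo(a.score));

  if (survivors.isEmpty) {
    print('All candidates rejected; fall back to a single attempt.');
  } else {
    print('Selected ${survivors.first.name} '
        '(score ${survivors.first.score.toStringAsFixed(2)})');
  }
}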
Gemini supports media_resolution (low/medium/high token budgets per image/frame), image generation/editing (via a separate image-preview model), and explicit guidance for OCR/visual detail. This makes Gemini particularly strong when coding tasks require reading screenshots, UI mockups, or video frames.

If your workflow heavily relies on UI screenshots, design specs in images, or video walkthroughs that the model must analyze to produce or modify code, Gemini’s dedicated image-resolution controls and image-generation variant can be a practical advantage. If your pipeline is agent-driven automation (clicking around, running commands, editing files across tools), Claude’s agent SDK and code-execution tooling are first-class.
Sonnet 4.5 can maintain coherent work for over 30 hours across complex multi-stage tasks (planning, research, litigation drafting, long-running code tasks). This endurance plus Anthropic’s alignment emphasis makes Sonnet an attractive choice for end-to-end automation where the model must keep track of goals and maintain safe behavior.
Gemini 3 Pro introduces a “Deep Think” variant and richer internal thinking APIs for multi-step planning, coupled with Google’s agentic IDE. In practice this means Gemini can both plan and execute agentic steps across tools (editor, shell, web). If your automation requires external tool access with artifact creation, Gemini’s integrated agentic tooling (Antigravity) is a strong plus. Note: Deep Think trades latency for depth.
In the “Vending-Bench 2” simulation test, Gemini 3 outperformed Claude Sonnet 4.5 by running a virtual company for a whole year and remaining profitable. In short-term tests, Gemini 3 Pro and Claude Sonnet 4.5 scored similarly, but the gap widened over longer testing periods.
Gemini’s thinking_level and Deep Think promise greater single-response depth.

As a developer, you should choose a model based on your needs and its characteristics, not just the cheapest one. If a task can be handled by either model, decide based on the context.
If you want to use both models, I recommend CometAPI, which provides both the Gemini 3 Pro Preview API and the Claude Sonnet 4.5 API at 20% off the official prices:
|  | Gemini 3 Pro Preview | Claude Sonnet 4.5 |
|---|---|---|
| Input tokens (per 1M) | $1.60 | $2.40 |
| Output tokens (per 1M) | $9.60 | $12.00 |
Gemini 3 Pro (Preview) and Claude Sonnet 4.5 are both state-of-the-art choices for coding assistants in late 2025. Sonnet 4.5 edges out Gemini in specific software-engineering verification benchmarks and stamina on long-horizon tasks, while Gemini 3 Pro brings stronger multimodal understanding and deep agentic tooling that can execute in editor/terminal/browser environments. The right choice depends on whether your primary need is pure code reasoning and verification (Sonnet), or multimodal, agentic, tool-augmented development (Gemini). For enterprise-grade deployment, many teams will reasonably adopt a hybrid approach, using whichever model is strongest for a particular stage of the dev workflow.