
A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

🌀 Simulating Time Loops in a Shed: A Spin-Biased FDTD Journey

2025-11-13 01:35:58

Toroidal CTC sim is a Python-based wave simulation that models directional delay in a closed-loop waveguide.

It’s built to explore what happens when you drag a refractive index perturbation around a torus—and how that messes with time-of-flight.

I’m not a physicist. I’m a shed enthusiast. But I wanted to see if I could build a desktop analogue for closed timelike curves (CTCs) using classical wave mechanics. Turns out, you can.

🔧 What It Does

  • Simulates 1D FDTD wave propagation in a toroidal geometry
  • Applies a rotating index perturbation to bias wave direction
  • Measures arrival-time shifts across a sweep of angular velocities (Ω)
  • Outputs reproducible CSV timing data, plots, and animations
  • Explores delay-based logic, recursive computation, and metaphysical edge cases
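The core of such a simulation is a leapfrog update on a grid whose ends wrap around. Below is a minimal sketch of that idea, not the project's actual code: the grid size, pulse shape, and step counts are all illustrative, and the real repo adds the rotating index perturbation and timing measurements on top.

```python
import math

def step_ring_fdtd(E, H, c=1.0, dt=0.5, dx=1.0, n_index=None):
    """One leapfrog update of coupled E/H fields on a closed 1D ring.

    Periodic indexing makes the domain toroidal; a spatially varying
    refractive index n_index slows the wave locally, which is how a
    moving perturbation can bias one propagation direction.
    """
    N = len(E)
    if n_index is None:
        n_index = [1.0] * N
    cdt = c * dt / dx  # Courant number; <= 1 keeps the scheme stable
    # H lives on the half-grid: forward difference of E, wrapped at the ends
    for i in range(N):
        H[i] += cdt * (E[(i + 1) % N] - E[i])
    # E updates from the backward difference of H, scaled by the local index
    for i in range(N):
        E[i] += (cdt / n_index[i] ** 2) * (H[i] - H[i - 1])  # H[-1] wraps
    return E, H

# Launch a Gaussian pulse and let it circulate the ring
N = 200
E = [math.exp(-((i - 50) / 8.0) ** 2) for i in range(N)]
H = [0.0] * N
for _ in range(400):
    E, H = step_ring_fdtd(E, H)
```

With a uniform index the pulse splits into two counter-propagating halves that lap the ring forever; making `n_index` non-uniform and moving it each step is what produces the arrival-time asymmetry the project measures.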

📦 Built for Reproducibility

  • All parameters in parameters.json
  • Results in timing_data.csv
  • Modular code, clean CLI, and full documentation
  • Zenodo DOI: 10.5281/zenodo.17592350

🖼️ Visuals

figure1.png: Arrival time vs. Ω — directional delay emerges as the perturbation spins

🧠 Why?

Because time loops are cool.

Because reproducible metaphysics is underrated.

Because sometimes the best place to simulate a paradox is a shed.

If you're on arXiv or ResearchGate and resonate with delay-based logic, wave simulations, or metaphysical modeling—I'd love an endorsement or a nudge. This project is open-source, reproducible, and built for curious minds. Ping me or star the repo if it made you think weird thoughts.

🏷️ Tags

#opensource #simulation #python #physics #fdtd #reproducibility #timetravel #computationalphysics

Beyond apps: what comes after the App Store and Google Play

2025-11-13 01:33:19

Your phone is a cognitive dumpster. How intent-centric architectures will replace app-based clutter.

Photo by Florian Schmetz on Unsplash

Our phones have become cognitive dumpsters. 127 icons, each a gateway to a separate universe with its own rules, logins, loyalty cards, and push notifications. We’ve reached peak absurdity: our pockets hold supercomputers, while our brains perform the chores of digital handymen — searching, comparing, downloading, logging in, paying.

Today’s tech giant presentations, with Apple as the prime example, increasingly resemble Nokia in its decline: endless iterations of cases, slightly brighter screens, marginally faster chips. This is an evolutionary dead end masking a crisis of ideas. Innovation has been reduced to swapping aluminum for titanium and adding another sensor. The specs race has exhausted itself because it no longer transforms how we interact with the world. Platforms have matured, hardened, and stagnated. It seems the next revolution won’t come from Silicon Valley, but from those daring to offer not a new product, but a new logic of interaction.

From interface to intent: a paradigm shift

Imagine a platform where the central element isn’t your home screen, but your goal. You don’t open a taxi app. You say: “To the airport by 5 PM.” Your agent — your digital self — consults an open registry, finds everyone who can solve this task, chooses the best based on your criteria, and directly, via API, books the ride. No downloading. No logins. No pizza discount notifications.

We’re accustomed to voice assistants promising such scenarios, but their fundamental limitation is that they’re merely add-ons to an outdated architecture dominated by monolithic apps and walled gardens. They can only work with what the platform explicitly allows them to. The main question isn’t what we want to achieve, but what architecture our devices must have to make it truly possible.

The answer lies not in updating software, but in rethinking the very core of digital interaction. We must elevate the agent from a beggar knocking on closed app doors to a full-fledged orchestrator with direct access to an open market of machine-readable services. This requires a new operating environment where the central process isn’t a window manager, but a task scheduler working with a global skill registry. Where security is ensured not by app store reviews, but by execution isolation and cryptographic trust protocols. Where the device isn’t a program repository, but an intelligent gateway to the world of services.

This isn’t an improved voice assistant. It’s a paradigm shift in control. I call it the *intent-centric operating environment (ICOE)*. Its core isn’t an operating system, but your personal proactive intelligent agent (PIA) — a digital extension of your will.

Three principles of the new architecture

1. The agent as your digital self

Your agent is your cognitive immune system. It must be proactive, run locally, and be your only interface. It operates exclusively on your device. Its task is to understand your request’s essence, not to sell you a partner’s service. Its loyalty is solely to you. The interface ceases to be an icon forest — your main interlocutor becomes an intelligent agent.

2. The world as a service menu

An open global skill registry — a decentralized catalog where any company can publish machine-readable descriptions of its services. No need to download apps — services become virtual and available to your agent. Operators can earn money not from selling apps or in-app advertising, but by providing specific functions and API calls — via micropayments, subscriptions, or pay-per-result models. This creates competition at the API quality level, not the app level.

3. Security through transparency and isolation

Tasks are executed in isolated containers. The agent orchestrates micro-services from the registry to achieve your goal, not the goal of an advertising algorithm.

How it works in practice

  1. Not an “App Store,” but an “Operator Registry.” A decentralized catalog where companies register services not as .apk files, but as sets of machine-readable intents and API endpoints.
  2. The user’s AI agent is the only client. When you say “Order a taxi to work,” your AI agent:
  • Comprehends the request
  • Searches the registry for providers capable of fulfilling this intent
  • Selects the best one (based on your preferences: price, speed, subscription)
  • Directly accesses the chosen provider’s API using a standard protocol to execute the action
  3. The developer is a service operator, not an interface creator. Their task isn’t to design screen buttons, but to:
  • Provide a stable, secure, and standardized API
  • Describe their service’s capabilities in a machine-readable format for the registry
  • Compete on API quality, not icon brightness in a store
  • Earn money by providing specific functions and API calls
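The resolution flow above can be sketched in a few lines. This is a toy model: the registry contents, intent names, provider names, and selection criteria are all invented for illustration, and a real ICOE would add authentication, payments, and API invocation.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    intent: str        # machine-readable capability, e.g. "ride.book"
    price: float
    eta_minutes: int

# A toy stand-in for the global skill registry
REGISTRY = [
    Provider("CityCab", "ride.book", price=12.0, eta_minutes=4),
    Provider("FastRide", "ride.book", price=15.5, eta_minutes=2),
    Provider("PizzaNow", "food.order", price=9.0, eta_minutes=30),
]

def resolve_intent(intent, prefer="price"):
    """Agent-side selection: filter the registry by intent, rank by the
    user's preference, and return the winning provider to call via API."""
    candidates = [p for p in REGISTRY if p.intent == intent]
    if not candidates:
        raise LookupError(f"no provider for intent {intent!r}")
    key = (lambda p: p.price) if prefer == "price" else (lambda p: p.eta_minutes)
    return min(candidates, key=key)

best = resolve_intent("ride.book", prefer="price")     # cheapest provider
fastest = resolve_intent("ride.book", prefer="speed")  # lowest ETA
```

The point of the sketch is where competition happens: providers differ only in the data they publish to the registry, so they compete on price and API quality rather than on owning an icon on your home screen.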

The intent-centric operating environment (ICOE) concept

This might seem like a bold idea, but all the necessary technologies have matured for its implementation. Modern AI systems can run on-device, understanding context and request semantics. Decentralized registry technologies have been proven for creating trusted systems and secure data storage. Microservice architecture has become the standard for modern IT infrastructures, ensuring flexibility and scalability. All these components exist and are ready for integration into a new paradigm of digital interaction. A new intention-based OS might not arrive tomorrow, but within a 10–15 year horizon, such a transition is inevitable.

For me, this is deeper than just architecture or technology. It’s a matter of digital sovereignty and cognitive ecology. We spend energy managing interfaces instead of spending it on creativity, communication, and decision-making. We will stop thinking about “apps.” We will simply live in a digital world that understands our intentions and responds with quiet, effective action.

A diagram showing how an intent-centric operating system works: an AI agent fulfills user requests (e.g., ‘order a taxi’) by selecting the best provider from a global registry of services, instead of relying on individual apps.

Conclusion: what’s next?

The presented concept of an intent-centric operating environment (ICOE) describes a paradigm shift from managing applications to declaring target states. The system’s key elements are an AI agent (PIA) for semantic analysis and planning, a decentralized machine-readable operator registry, and an isolated execution environment.

In the context of current developments, partial analogs should be noted. For example, the Rabbit R1 project uses a Large Action Model to simulate human actions in existing interfaces, representing a workaround that doesn’t require service cooperation. Humane AI Pin focuses on alternative I/O interfaces. Voice assistants (Siri, Alexa) and automation platforms (IFTTT) work as add-ons over traditional OSs, maintaining dependence on GUI apps and manual configuration. In any case, these are all layers on top of a monolithic OS, attempting to reconcile the old world of apps with the world of automation.

ICOE’s fundamental difference lies in proposing a fundamentally new architecture based on a machine-readable skill registry and decentralized service orchestration. Instead of simulating human actions in outdated interfaces or creating another abstraction layer, the system presupposes providers transitioning to a new interaction standard. This ensures semantic compatibility, security, and scalability absent in existing solutions.

Beyond the App Store and Google Play lies a world without apps. A future where our digital experience is defined not by downloaded programs, but by our own intentions.

But for this world to become reality, key challenges must be solved: who will architect the universal language of digital intentions, and who will create the trusted environment for their execution? This isn’t a theoretical question — it’s the space for the next revolution.

The question is whether one of the current corporate digital giants will do this, or a new promising startup, free from the burden of outdated paradigms. The answer will determine who builds the next big AI platform — replacing obsolete app stores.

SOLIDWORKS: Adding Sheet Metal Properties to Drawings

2025-11-13 01:19:54

A quick solution for adding property information to your sheet metal part drawings in SOLIDWORKS.

Quick start

// Right-click the flat-pattern view
// Choose Annotation > Cut List Properties
// For property linking: $PRPWLD:"PropertyName"

Key points

  • The annotation must be attached to the flat-pattern view
  • Add customized annotations to the Design Library for reuse
  • Property units are managed via Document Properties

Useful property codes

  • $PRPWLD:"Bounding Box Length" - length of the bounding box
  • $PRPWLD:"Sheet Metal Thickness" - sheet metal thickness
  • $PRPWLD:"Cutting Length-Outer" - outer cutting length
  • $PRPWLD:"Bends" - number of bends

For more SOLIDWORKS tips and technical information, see our article “How to Add Sheet Metal Properties to SOLIDWORKS Drawings”.

SuperClaude: Your AI Agent Is No Longer a Junior, Now It's an Architect on the Stack

2025-11-13 01:12:38

The hype around generative AI for code has entered its Professional Productivity phase. SuperClaude is the proof: it isn't just a chatbot that spits out functions, but a meta-programming framework that turns tools like Claude Code into a specialized, context-aware development partner. It means that, at last, AI is learning to think like a senior engineer.

The 3 Key Commits

1. 🤖 Expert Mode on Demand: Cognitive Personas

SuperClaude introduces Cognitive Personas such as the "System Architect" and the "Security Engineer". This is crucial because the framework focuses on correcting AI's tendency to skip critical planning, architecture, and testing steps. It's no longer just "build me a component", but "design the API with a DDD approach". It provides more than 22 *slash* commands for specific tasks, from security analysis (/user:analyze --security) to production troubleshooting (/user:troubleshoot --prod).

2. 🧠 Multi-Hop Reasoning: Goodbye to Fiction

The framework gives the agent an iterative reasoning capability (Multi-Hop Reasoning) of up to 5 hops, letting it dig deeper into concepts and follow causal chains (e.g., Cause → Effect → Prevention). More importantly, it uses Quality Scoring to validate the confidence of a source and the coherence of the synthesis. If the result is a hallucination, SuperClaude has a built-in mechanism to detect it.

3. 🔄 Auditing and Case-Based Learning

SuperClaude implements a Reasoning Audit mechanism that traces the AI's logical process, which is essential for trust and debugging. It also uses Case-Based Learning to store successful and failed strategies in the initial prompt. This means that if you teach it how to tackle a React-Native problem in one session, it will apply that strategy and improve future solutions, optimizing its behavior.

🧠 Impact on the Stack and the Industry

This tool marks AI's pivot in the industry: from a crutch for beginners to a delegation tool for seniors.

  1. A Smaller Trivial Backlog: For a team, it means low-value tasks such as fixing minor errors in build logs or clearing the *backlog* of small improvements (accessibility, UI polish) can be delegated entirely to the agent. The team's focus is freed up for high-impact feature work and architecture.
  2. Persistent Context Across the Full Stack: If you work with complex stacks (such as Next.js or a NestJS/Django backend), you know that context switching is the main productivity killer. SuperClaude is designed to keep context across the whole flow, from prompt to solution, thanks to its iterative learning capability.
  3. AI Becomes Auditable: Tracing the chain of reasoning is the key to adoption in enterprise environments. It lets Senior Developers trust the generated code, because they can review why the AI made a given decision and verify that it follows the project's architectural standards.

Dev takeaway: It's no longer enough for your agent to know how to write a function. In AI's phase 2.0, what matters is that it can plan, audit its own process, and learn from its mistakes so that next time it's more efficient.

🔗 Sources and Documentation

https://superclaude.netlify.app/

Beyond Scheduling: How Kubernetes Uses QoS, Priority, and Scoring to Keep Your Cluster Balanced

2025-11-13 01:12:24

When every Pod screams for CPU and memory, who decides who lives, who waits, and who gets evicted?

Kubernetes isn't just a scheduler — it's a negotiator of fairness and efficiency.
Every second, it balances hundreds of workloads, deciding what runs, what waits, and what gets terminated — while maintaining reliability and cost efficiency.

This article unpacks how Quality of Service (QoS), Priority Classes, Preemption, and Bin-Packing Scoring come together to keep your cluster stable and fair.

⚙️ The Challenge: Competing Workloads in Shared Clusters

When multiple workloads share cluster resources, conflicts are inevitable:

  1. High-traffic apps starve lower-priority workloads.
  2. Batch jobs hog memory.
  3. Pods without limits cause unpredictable evictions.

Kubernetes addresses this by applying a layered decision-making model — QoS, Priority, Preemption, and Scoring.

🧭 QoS (Quality of Service): Who Gets Evicted First

Each Pod belongs to a QoS class based on CPU and memory configuration:

QoS Class   | Description                          | Eviction Priority
----------- | ------------------------------------ | ------------------------
Guaranteed  | Requests = Limits for all containers | Evicted last
Burstable   | Requests < Limits                    | Evicted after BestEffort
BestEffort  | No requests/limits set               | Evicted first

💡 Lesson: Always define requests and limits — QoS decides who survives under node pressure.
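The classification rules can be expressed directly. Here is a simplified sketch of the QoS decision, considering only CPU and memory on regular containers (the real kubelet logic also covers init containers and a few edge cases):

```python
def qos_class(containers):
    """Return the QoS class Kubernetes would assign to a Pod.

    containers: list of dicts with optional 'requests'/'limits' dicts,
    each mapping a resource name ('cpu'/'memory') to a quantity string.
    """
    all_guaranteed = True
    any_set = False
    for c in containers:
        req, lim = c.get("requests", {}), c.get("limits", {})
        if req or lim:
            any_set = True
        for res in ("cpu", "memory"):
            # Guaranteed requires requests == limits for every resource
            if req.get(res) is None or req.get(res) != lim.get(res):
                all_guaranteed = False
    if not any_set:
        return "BestEffort"  # nothing declared anywhere
    return "Guaranteed" if all_guaranteed else "Burstable"

web = [{"requests": {"cpu": "500m", "memory": "1Gi"},
        "limits":   {"cpu": "500m", "memory": "1Gi"}}]
batch = [{"requests": {"cpu": "250m"}, "limits": {"cpu": "500m"}}]
```

Running `qos_class(web)` yields "Guaranteed" while `qos_class(batch)` yields "Burstable", which is exactly why mismatched or missing requests silently demote a Pod's survival odds under node pressure.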

🧱 Priority Classes: Who Runs First

QoS defines who stays, while Priority Classes define who starts.
Assigning PriorityClass values (integer-based) helps rank workloads during scheduling.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-services
value: 100000
description: Critical platform workloads

💡 Lesson: Reserve high priorities for mission-critical services.
Overusing "high" priority leads to chaos — not resilience.

⚔️ Preemption: Controlled Sacrifice, Not Chaos

When a high-priority Pod can't be scheduled:

  1. The scheduler identifies lower-priority Pods occupying resources.
  2. Marks them for termination.
  3. Reschedules the high-priority Pod.

This is guided by PodDisruptionBudgets (PDBs) to avoid excessive collateral damage.

💡 Lesson: Preemption is controlled resilience — ensuring important workloads run while maintaining order.
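The three steps above amount to a victim-selection problem. The sketch below is a toy version of that decision, not the real kube-scheduler algorithm (which also honors PodDisruptionBudgets, affinity, and graceful termination); pod names and CPU units are invented:

```python
def pick_preemption_victims(pending_priority, node_pods, needed_cpu):
    """Evict the lowest-priority pods first, and only as many as are
    needed to free `needed_cpu` worth of capacity on the node."""
    # Only pods with strictly lower priority are eligible victims
    eligible = [p for p in node_pods if p["priority"] < pending_priority]
    eligible.sort(key=lambda p: p["priority"])  # cheapest sacrifices first
    victims, freed = [], 0
    for pod in eligible:
        if freed >= needed_cpu:
            break
        victims.append(pod["name"])
        freed += pod["cpu"]
    # All-or-nothing: if evictions still can't fit the pod, evict nobody
    return victims if freed >= needed_cpu else []

pods = [
    {"name": "batch-1", "priority": 0,    "cpu": 2},
    {"name": "web-1",   "priority": 1000, "cpu": 2},
    {"name": "batch-2", "priority": 0,    "cpu": 1},
]
victims = pick_preemption_victims(pending_priority=100000,
                                  node_pods=pods, needed_cpu=3)
```

Note how the two priority-0 batch pods are sacrificed before the priority-1000 web pod is even considered: preemption is ordered, not indiscriminate.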

⚖️ Scoring & Bin-Packing: Finding the Right Home

Once eligible nodes are filtered, Kubernetes enters the scoring phase to find the best fit.

Plugins involved:

  • LeastRequestedPriority → favors underutilized nodes.
  • BalancedResourceAllocation → balances CPU & memory use.
  • ImageLocalityPriority → prefers nodes with cached images.
  • NodeAffinityPriority → honors affinity preferences.
  • TopologySpreadConstraint → ensures zone diversity.

Each node receives a score (0–100) from multiple plugins.
Weighted scores are combined:

final_score = (w1*s1) + (w2*s2) + ...

How weights work:

Scheduler plugins have default weights that you can customize via the scheduler configuration. For example:

  • LeastRequestedPriority: weight 1 (default) — spreads pods across nodes
  • BalancedResourceAllocation: weight 1 (default) — prevents CPU/memory imbalance
  • ImageLocalityPriority: weight 1 (default) — prefers nodes with cached images
  • NodeAffinityPriority: weight 2 (default) — stronger preference for affinity matches

You can adjust these weights in the kube-scheduler config to prioritize different strategies. Higher weights mean that plugin's score has more influence on the final decision.
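The weighted combination is just the formula above applied per node. A minimal sketch, with illustrative plugin names and scores rather than a real scheduler configuration:

```python
def score_node(plugin_scores, weights):
    """Combine per-plugin scores (0-100) into a node's final score,
    mirroring final_score = w1*s1 + w2*s2 + ...
    Plugins absent from `weights` get the default weight of 1."""
    return sum(weights.get(name, 1) * s for name, s in plugin_scores.items())

weights = {"NodeAffinityPriority": 2}  # stronger pull toward affinity matches

node_a = {"LeastRequestedPriority": 40, "ImageLocalityPriority": 85,
          "NodeAffinityPriority": 50}
node_b = {"LeastRequestedPriority": 90, "ImageLocalityPriority": 10,
          "NodeAffinityPriority": 50}

# The pod lands on whichever node scores highest overall
best = max({"node-a": node_a, "node-b": node_b}.items(),
           key=lambda kv: score_node(kv[1], weights))[0]
```

Here node-a wins (40 + 85 + 2*50 = 225 vs. 90 + 10 + 2*50 = 200) because its cached image outweighs node-b's spare capacity; doubling the `LeastRequestedPriority` weight instead would flip the outcome.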

QoS defines survivability.
Priority defines importance.
Scoring defines placement.

Together, they shape a stable and efficient cluster.

📖 Real-World Example: Critical Service Under Pressure

Imagine your payment service needs to scale during a traffic spike:

  1. Priority Class (value: 100000) ensures the payment pod is considered before batch jobs.
  2. QoS (Guaranteed) with matching requests/limits protects it from eviction when nodes fill up.
  3. Scoring evaluates nodes: Node A has the payment image cached (ImageLocalityPriority: 85), Node B is underutilized (LeastRequestedPriority: 90). Node B wins.
  4. Preemption kicks in if no nodes have capacity: a low-priority batch job pod (BestEffort QoS) gets evicted to make room.

Without these mechanisms:

  • Payment pods might wait behind batch jobs
  • Random evictions could kill critical services
  • Poor node selection causes slow startup times

With proper configuration:

  • Critical services schedule first
  • Predictable eviction order protects important workloads
  • Optimal node placement reduces latency

🧩 Visual Flow: Kubernetes Scheduling & Bin-Packing

Kubernetes Scheduling Flow

🔧 Troubleshooting Common Issues

"Why is my high-priority pod still pending?"

  • Check node resources: kubectl describe nodes to see available CPU/memory
  • Verify PriorityClass is applied: kubectl get pod <pod-name> -o jsonpath='{.spec.priorityClassName}'
  • Check for taints/tolerations: high priority doesn't bypass node taints
  • Review preemption logs: kubectl logs -n kube-system <scheduler-pod> for preemption attempts

"My Guaranteed QoS pod got evicted — why?"

  • Node pressure evictions respect QoS, but disk pressure can evict any pod
  • Check node conditions: kubectl get nodes -o wide for DiskPressure or MemoryPressure
  • Verify requests/limits match exactly: kubectl describe pod <pod-name> to confirm Guaranteed class

"Pods are scheduling to the wrong nodes"

  • Review scoring plugins: check kube-scheduler config for disabled plugins
  • Verify node labels/affinity: kubectl get nodes --show-labels
  • Check resource requests: pods with large requests may have limited node options
  • Inspect scheduler events: kubectl get events --field-selector involvedObject.kind=Pod

"Preemption isn't working"

  • Ensure PriorityClass exists: kubectl get priorityclass
  • Check PDB constraints: PodDisruptionBudgets can prevent preemption
  • Verify pod priority values: lower-priority pods must exist for preemption to occur
  • Review scheduler configuration: preemption may be disabled in custom scheduler configs

🧠 Key Lessons for SREs & Platform Teams

✅ Always define CPU/memory requests & limits.
✅ Use PriorityClasses sparingly.
✅ Test evictions under simulated stress.
✅ Combine QoS + PDB + Priority for controlled resilience.
✅ Observe scheduling metrics (kube_pod_status_phase, scheduler_score) regularly.

🚀 Takeaway

Kubernetes doesn't just schedule Pods — it negotiates priorities.
Reliability doesn't come from overprovisioning, but from predictable, fair, and disciplined scheduling.

Resilience = Consistency in scheduling decisions.

💬 Connect with Me

✍️ If you found this helpful, follow me for more insights on Platform Engineering, SRE, and CloudOps strategies that scale reliability and speed.

🔗 Follow me on LinkedIn if you’d like to discuss reliability architecture, automation, or platform strategy.

Images are generated using Gemini-AI

blob10

2025-11-13 01:07:36

CREATE OR REPLACE FUNCTION blob_to_text_large(p_blob BLOB) RETURN VARCHAR2 IS
  l_text           VARCHAR2(32767);
  l_raw            RAW(32767);
  l_blob_length    NUMBER;
  l_max_safe_bytes NUMBER := 32000; -- conservative limit below the 32767 VARCHAR2 cap
BEGIN
  IF p_blob IS NULL THEN
    RETURN NULL;
  END IF;

  l_blob_length := DBMS_LOB.GETLENGTH(p_blob);

  IF l_blob_length IS NULL OR l_blob_length = 0 THEN
    RETURN NULL;
  END IF;

  IF l_blob_length > l_max_safe_bytes THEN
    -- Large BLOB: extract only the safe portion and flag the truncation
    l_raw := DBMS_LOB.SUBSTR(p_blob, l_max_safe_bytes, 1);
    BEGIN
      l_text := UTL_RAW.CAST_TO_VARCHAR2(l_raw);
      l_text := SUBSTR(l_text, 1, 32000) || '... [TRUNCATED: ' || l_blob_length || ' bytes]';
    EXCEPTION
      WHEN OTHERS THEN
        -- Bytes do not form valid character data
        l_text := '[BINARY_DATA_TRUNCATED: ' || l_blob_length || ' bytes]';
    END;
  ELSE
    -- Normal-sized BLOB: convert the whole thing
    l_raw := DBMS_LOB.SUBSTR(p_blob, l_blob_length, 1);
    BEGIN
      l_text := UTL_RAW.CAST_TO_VARCHAR2(l_raw);
    EXCEPTION
      WHEN OTHERS THEN
        l_text := '[BINARY_DATA: ' || l_blob_length || ' bytes]';
    END;
  END IF;

  RETURN l_text;

EXCEPTION
  WHEN OTHERS THEN
    RETURN 'ERROR: ' || SUBSTR(SQLERRM, 1, 100);
END blob_to_text_large;
/