2026-01-13 18:36:30
2025 didn’t bring a single “killer app” moment for crypto, but it did bring something more important: infrastructure getting real.
One of the most underappreciated shifts this year was how Oasis positioned itself as a practical bridge between on-chain systems and real-world compute.
This post is a short recap of why Oasis mattered in 2025, especially for devs building beyond toy smart contracts.
Smart contracts are transparent, deterministic, and publicly verifiable. But they are also slow, expensive, and incapable of keeping data private, which makes serious computation on-chain impractical.
Most serious protocols already rely on off-chain systems (AWS, GCP, custom infra), but those systems are opaque: you cannot prove what code actually ran or what data it touched.
2025 was the year projects stopped pretending this tradeoff didn’t exist.
Oasis doubled down on TEE-based compute, enabling confidential execution whose results can be attested and verified.
This isn't theoretical privacy; it's auditable execution.
You don’t “trust the operator”.
You verify what code ran, where it ran, and what it produced.
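At its core, this style of verification amounts to comparing a measurement (a hash) of the code that actually ran against an expected value the verifier already trusts. The sketch below is a toy illustration of that comparison step, not the Oasis API; `measure` and `verify_execution` are invented names, and a real TEE flow would get the measurement from a signed attestation report.

```python
import hashlib

def measure(code: bytes) -> str:
    # A "measurement" here is just a SHA-256 digest of the code.
    return hashlib.sha256(code).hexdigest()

def verify_execution(code: bytes, expected_measurement: str) -> bool:
    # In a real TEE flow the measurement arrives inside a signed
    # attestation report; here we recompute it to show the comparison.
    return measure(code) == expected_measurement

trusted = b"def job(x): return x * 2"
expected = measure(trusted)

assert verify_execution(trusted, expected)          # matching code: accept
assert not verify_execution(b"tampered", expected)  # modified code: reject
```

The point of the pattern is that trust moves from "the operator says so" to "the measurement matches": any change to the code changes the hash and the verification fails.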
ROFL (Runtime Off-chain Logic Framework) became one of the most important and misunderstood pieces of Oasis' stack.
What it actually enables is running arbitrary off-chain logic whose execution remains anchored to, and verifiable from, the chain.
In practice, this means you can push heavy or sensitive workloads off-chain and post verifiable results back on-chain, without exposing data or logic, and without blindly trusting AWS.
The big signal in 2025 wasn’t marketing, it was adoption by protocols with real risk.
Using Oasis + ROFL allowed teams to keep performance-critical logic off-chain while still proving to their users exactly what that logic did.
This hybrid model (centralized performance + decentralized verification) is likely how most production crypto systems will evolve.
If you’re building in 2026+, Oasis makes sense when you need off-chain performance together with on-chain verifiability, or computation over data you cannot afford to expose.
You don’t replace your backend. You prove it.
That’s the mental shift Oasis helped normalize in 2025.
Oasis didn’t try to “replace Ethereum”.
It didn’t chase memes.
It didn’t promise magic scalability.
Instead, it focused on something harder:
Making off-chain compute verifiable.
And that’s exactly the kind of boring, foundational work that ends up shaping the next wave of adoption.
2026-01-13 18:20:17
Cloud computing is the delivery of computing services—such as storage, databases, networking, servers, software, and analytics—over the internet (“the cloud”). Instead of owning and managing physical hardware and software, organizations and individuals can access these resources on demand from cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
On-Demand Self-Service – Users can provision resources like servers and storage without human intervention.
Broad Network Access – Services are available anytime, anywhere via the internet.
Scalability & Elasticity – Resources can scale up or down automatically based on demand.
Pay-As-You-Go Pricing – You only pay for what you use, reducing upfront costs.
Managed Infrastructure – The cloud provider handles maintenance, updates, and security at the infrastructure level.
IaaS (Infrastructure as a Service): Provides virtual servers, networking, and storage. Example: AWS EC2.
PaaS (Platform as a Service): Provides development tools and platforms to build apps without managing servers. Example: Google App Engine.
SaaS (Software as a Service): Provides ready-to-use applications delivered over the internet. Example: Gmail, Microsoft 365.
Public Cloud: Shared resources hosted by providers (e.g., AWS, Azure).
Private Cloud: Dedicated infrastructure for one organization, often on-premises.
Hybrid Cloud: Mix of public and private for flexibility and control.
Multi-Cloud: Using multiple providers for redundancy and optimization.
Cost-effective (no heavy upfront hardware investment)
Flexibility and mobility
Faster time to market
Business continuity with backup & disaster recovery
Security and compliance (when configured properly)
This mini-project is designed to guide you through the intricacies of Amazon Web Services (AWS), specifically focusing on Identity and Access Management (IAM). It involves delivering computing services over the internet, including servers, storage, databases, networking, software, analytics and intelligence, to offer faster innovation, flexible resources, and economies of scale.
In this project, we will be working with a hypothetical fintech startup named Zappy e-Bank. This fictitious company represents a typical startup venturing into the financial technology sector, aiming to leverage the power of the cloud to innovate, scale, and deliver financial services. The scenario is set up to provide a realistic backdrop that will help you understand the application of AWS IAM in managing cloud resources securely and efficiently.
For Zappy e-Bank, like any company dealing with financial services, security and compliance are paramount. The company must ensure that its data, including sensitive customer information, is securely managed and that access to resources is tightly controlled. AWS IAM plays a critical role in achieving these security objectives by allowing the company to define who is authenticated (signed in) and authorized (has permission) to use its resources.
Create and manage AWS users and groups, to control access to AWS services and resources securely.
Use IAM roles and policies to set more granular permissions for AWS services and external users or services that need to access Zappy e-Bank AWS resources.
Implement strong access controls, including multi-factor authentication (MFA), to enhance security.
Let's set up IAM users for a backend developer, John, and a data analyst, Mary, by first determining their specific access needs.
As a backend developer, John requires access to servers (EC2) to run his code, necessitating an IAM user with policies granting EC2 access.
As a data analyst, Mary needs access to data storage (AWS S3 Service), so her IAM user should have policies enabling S3 access.
Let's recall that John is a backend developer; therefore he needs to be added as a user to the Development-Team group.
Let's recall that Mary is a data analyst; therefore she needs to be added as a user to the Analyst-Team group.
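To make the grouping logic concrete, here is a minimal, hypothetical Python model of group-based access. This is a toy illustration only, not the AWS API or boto3; the `is_allowed` helper and its wildcard matching are invented for the sketch, while the group names mirror the scenario above.

```python
# Toy model: each group carries a set of allowed actions (invented
# wildcard convention "service:*" grants every action on that service).
groups = {
    "Development-Team": {"ec2:*"},  # backend developers: EC2 access
    "Analyst-Team": {"s3:*"},       # data analysts: S3 access
}
memberships = {"John": "Development-Team", "Mary": "Analyst-Team"}

def is_allowed(user: str, action: str) -> bool:
    # Look up the user's group, then check the exact action or the
    # service-level wildcard.
    perms = groups[memberships[user]]
    service_wildcard = action.split(":")[0] + ":*"
    return action in perms or service_wildcard in perms

assert is_allowed("John", "ec2:StartInstances")      # developer -> EC2: yes
assert not is_allowed("Mary", "ec2:StartInstances")  # analyst -> EC2: no
assert is_allowed("Mary", "s3:GetObject")            # analyst -> S3: yes
```

The same principle applies in real IAM: permissions attach to groups, users inherit them through membership, and authorization is evaluated per action.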
'https://665626472897.signin.aws.amazon.com/console'
Recall that John is a developer and has permission and access only to the Development-Team resources, namely the EC2 instance.
'https://665626472897.signin.aws.amazon.com/console'
Recall that Mary is a data analyst and has permission and access only to the Analyst-Team resources, namely S3.
2026-01-13 18:20:00
Diving into Lovable without a plan turned my first build into an unusable mess. The fix: build throwaway prototypes first. I now spend a few hours creating high-level versions to explore layouts, flows, and components before starting production. When ready to build for real, I use Lovable's chat to summarize what I learned, then refine that into a foundation prompt with ChatGPT. The result is a cleaner starting point and fewer side effects.
As a technical product manager, my role was to be an orchestrator. Most of my work involved setting others up for success: working with designers to define a flow, with developers to identify a table structure that wouldn’t paint us into a corner later when we enhanced functionality, with QA to determine whether something was a feature or a bug, with customer success to make sure they had all the info and artifacts to roll out a release, or providing stakeholders with a roadmap update. The job, while very rewarding, sometimes lacks the joy of taking something from start to finish and crossing it off the to-do list.
So when I finally tried Lovable.dev for the first time, I dove in headfirst without any planning and started building right away. I was like a kid in a candy store. I had a lot of fun and was building fast, but I wasn’t building anything stable. Constantly switching things up and flip-flopping between approaches led to way too many unintended changes. When I tried to change something on one page, it would cause side effects on another page.
If you go into the pre-build stage with the understanding that you will throw things out, you can get the most value out of this step: identifying the high-level flows you want to build, the patterns you want to follow, where you can re-use functionality across your software, how you want to lay out the pages, and where you see future scaling issues. The power of vibe coding is that you can iterate and test out concepts and ideas rapidly, and then build cleanly and on a stable foundation when you are ready.
You can have multiple throwaway projects, too. Don't feel bad if you say, "I want to explore concept X on its own." The goal here is not perfection but iteration, so you can identify and build a foundation that sets up your product for where you want to take it. If you chase perfection, you will overarchitect your project and get nowhere. This process is a short exercise of a day or two at most, and in most cases, just a few hours.
I vibe-coded a few pre-build throwaway projects after my first attempt became too unwieldy. All the data was hardcoded and the buttons just took you straight to the next step, but the flow was there. These prototypes revealed insights into the user experience, allowing me to build the product with greater intention.
With a product called AssMan.ai, I needed to make it immediately apparent that it was related to the Football Manager game series. The origins of the name can be found here. I also needed to make the call to action apparent to the user so that, without thinking, they know what to do. This flow was one of the outliers where I spent days tinkering in the pre-build stage, rather than the usual handful of hours.
You can see one of my early smoke-and-mirror prototypes for the flow here. If you go through the flow on the actual website today, you will notice many of the same core components of the overall flow. There is an upload section that transitions to a spinner page while the analysis is running. The analysis page displays the feedback and provides an interactive chatbot at the bottom. Eventually, I landed on the following, which was heavily influenced by the pre-build prototypes.
Starting at the top of the page, I have my logo: a traditional soccer scarf with the product name, and a soccer ball to give users instant visual cues that it is soccer-related. In complete transparency, I put a beta tag there to buy some leniency from users if the experience isn’t perfect. Below that, I have the subtitle "Interactive Feedback," which instantly shows users the value of using this tool, as discussed in my article on product validation for AssMan.ai.
Inside the box, there are three main aspects: a pulsing green button that makes a clear, identifiable call to action on what the user needs to do. An example tactic image so users know exactly what to capture in their screenshot. Finally, there is information on how to take a screenshot to reduce friction further if the user doesn’t already know.
You can see the influence of testing this flow out on different operating systems and screenshot methods. Depending on the user’s operating system, the section defaults to their OS. There are many methods for capturing a screenshot; some save and download the file, while others only copy to the clipboard. Because of that, I made the primary CTA "Upload Tactic Screenshot,” as I expected that to be the core flow, and I also included “Paste from Clipboard” below it to not limit users.
The user can see all this information and take action without any scrolling. That was very important for me to keep when I optimized the flow for mobile.
On mobile, you will notice two significant differences. First, the button changes from “Upload Tactic Screenshot” to “Take Photo”. This intentional change allows a mobile user to either instantly take a photo or, if they already have one, use the “Choose from Library” button below, which replaced “Paste from Clipboard”. I went back and forth a bit on whether it made sense to do this, since I basically moved the upload and save photo functions to different locations based on mobile or desktop. Still, I made the intentional decision to do this, as I would expect most users would not have a photo of their tactics on their phones, given the player base's overwhelming skew toward PCs.
The tactic feedback was something I worked on outside of Lovable. I used ChatGPT Playground, a tool that lets you test and compare different AI models and settings exactly as they would behave via the API. I used the tool to iterate until I found an output that would populate the frontend. You can see an example of the latest version here.
At this point, it was time for me to start my real production product, so I asked Lovable’s chat feature for an overview of where we stood on each of my prototypes. I then entered that data into ChatGPT. From there, I iterated until I found a prompt that would help lay a stable foundation.
I had my general layout defined: pages, functionality, common components, and core flow. The prototype gave me a blueprint. I wasn't guessing anymore. I knew what to scope, what to skip, and how to structure the product.
Build a throwaway prototype first. Spend a few hours exploring how you really want the product to work so you don't spend days or weeks correcting an assumption. Limit iterations to hours, and stretch only to multiple days when necessary.
The barrier to building keeps getting lower. What took a lot of effort three months ago is easier now, and that gap will keep shrinking.
2026-01-13 18:19:51
Inspired by the 20-part series on DEV.to
After reading @kato_masato_c5593c81af5c6's fascinating 20-part series on SaijinOS, I was struck by how our projects have evolved in parallel. While solving the same fundamental problem (how do humans safely interact with AI systems?), we arrived at complementary solutions.
SaijinOS — architecture inside AI (persona, memory, emotion control).
SENTINEL — platform around AI (traffic, attacks, compliance control).
Most systems treat trust as a boolean.
is_trusted = true / false
— @kato_masato_c5593c81af5c6, SaijinOS Part 20
Traditional AI interactions offer only two states: full access or denial. But human trust is temporal, contextual, and revocable.
SaijinOS is an "architecture for distance"—controlling what AI remembers, how it behaves, and how long trust persists.
| Component | Description |
|---|---|
| Policy-Bound Personas | YAML-defined AI personalities with constraints |
| TrustContract | Trust as resource with TTL (expires!) |
| BloomPulse | Emotional runtime—"care" as computational signal |
| Continuity without Possession | AI remembers without owning history |
@dataclass
class TrustContract:
    scope: TrustScope      # instant / session / continuity
    ttl: timedelta         # trust EXPIRES
    max_tokens: int        # memory budget
    recall_past_projects: bool
    emit_snapshots: bool
This is elegant. Trust isn't a flag—it's a resource with a lifetime.
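To see what "trust with a lifetime" means in practice, here is a stripped-down, runnable sketch keeping only the fields needed for the expiry check. The `granted_at` field and `is_valid` method are my assumptions for illustration; they are not part of the SaijinOS snippet.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TrustContract:
    ttl: timedelta        # trust EXPIRES after this duration
    granted_at: datetime  # when the contract was established

    def is_valid(self, now: datetime) -> bool:
        # Trust holds only inside the [granted_at, granted_at + ttl) window.
        return now < self.granted_at + self.ttl

t0 = datetime(2026, 1, 1, 12, 0)
contract = TrustContract(ttl=timedelta(minutes=30), granted_at=t0)

assert contract.is_valid(t0 + timedelta(minutes=10))      # still trusted
assert not contract.is_valid(t0 + timedelta(minutes=31))  # trust expired
```

Once the window closes, every request has to re-establish trust, which is exactly the opposite of a sticky `is_trusted = true`.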
SENTINEL is a complete AI security stack: from attacks to defense, from network level to kernel.
┌─────────────────────────────────────────────────────────────────┐
│ USER │
│ │ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ 🖥️ DESKTOP │ │
│ │ Windows App • Tauri • Rust • Traffic Monitoring │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ 🧠 BRAIN │ │
│ │ 258 Detection Engines • Strange Math™ │ │
│ │ TDA • Sheaf Coherence • Hyperbolic Geometry • ML │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌─────────────┐ │
│ │ 🛡️ SHIELD │ │ 🐉 STRIKE │ │ 📦 FRAMEWORK│ │ 🦠 IMMUNE │ │
│ │ Pure C DMZ │ │ Red Team │ │ Python SDK │ │ EDR/Kernel │ │
│ │ 36K LOC │ │ 39K Payloads│ │ pip install│ │ DragonFlyBSD│ │
│ └────────────┘ └────────────┘ └────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────┘
| Component | What It Does | LOC |
|---|---|---|
| 🧠 BRAIN | 258 detection engines, Strange Math™ | ~30K |
| 🛡️ SHIELD | Pure C DMZ, <1ms latency, Cisco CLI | 36K |
| 🐉 STRIKE | Red Team, 39K+ payloads, HYDRA | ~15K |
| 📦 FRAMEWORK | Python SDK, pip install, FastAPI | ~10K |
| 🦠 IMMUNE | EDR/XDR, Kernel-level, DragonFlyBSD | 9K |
| 🖥️ DESKTOP | Windows App, Selective MITM | ~10K |
These systems aren't competitors—they're different layers of protection:
┌─────────────────────────────────────────────────────────────┐
│ INTENT │
│ │ │
│ ┌──────────────▼──────────────┐ │
│ │ SaijinOS │ ← Persona Layer │
│ │ TrustContract + BloomPulse │ │
│ └──────────────┬──────────────┘ │
│ │ │
│ ┌──────────────▼──────────────┐ │
│ │ SENTINEL Desktop │ ← App Layer │
│ │ Selective MITM + Monitor │ │
│ └──────────────┬──────────────┘ │
│ │ │
│ ┌──────────────▼──────────────┐ │
│ │ SENTINEL Brain │ ← Analysis │
│ │ 258 Engines, Strange Math │ │
│ └──────────────┬──────────────┘ │
│ │ │
│ ┌──────────────▼──────────────┐ │
│ │ SENTINEL Shield │ ← Gateway │
│ │ Pure C DMZ, <1ms │ │
│ └──────────────┬──────────────┘ │
│ │ │
│ ┌──────────────▼──────────────┐ │
│ │ SENTINEL Immune │ ← Kernel │
│ │ eBPF, Syscall Hooks │ │
│ └──────────────┬──────────────┘ │
│ │ │
│ [ AI API ] │
└─────────────────────────────────────────────────────────────┘
@kato_masato_c5593c81af5c6's work inspired ideas for SENTINEL:
struct TrafficPolicy {
    allowed_endpoints: Vec<String>,
    ttl_minutes: u32, // Policy expires!
    max_bytes_sent: usize,
}
User declares intent:
"This is a quick debug session, don't let me leak anything important"
If the user sends many frustrated messages in a row, suggest a break.
SaijinOS and SENTINEL share a fundamental conviction:
AI systems should serve human values, not exploit vulnerability.
@kato_masato_c5593c81af5c6's phrase resonates:
"SaijinOS is an architecture for distance. Not coldness, but room to breathe."
SENTINEL aims for the same: control without isolation, security without paranoia.
We're building different tools for the same future—where humans and AI can coexist with trust that is earned, scoped, and revocable.
Thank you, @kato_masato_c5593c81af5c6, for the inspiring work on SaijinOS.
Links:
2026-01-13 18:18:45
Imagine you have a collection of square stickers scattered across a wall, and you want to draw one perfectly horizontal line that splits the total area of those stickers exactly in half. This problem challenges us to find the vertical "balancing point" of multiple overlapping or separate shapes, a task that requires both geometric intuition and efficient data processing.
You're given a 2D array squares, where squares[i] = [xi, yi, li] describes an axis-aligned square with bottom-left corner (xi, yi) and side length li; overlapping regions are counted multiple times.
Your goal: return the minimum y-coordinate of a horizontal line such that the total area above the line equals the total area below it.
Example 1:
Input: squares = [[0,0,1],[2,2,1]]
Output: 1.00000
Explanation: Any horizontal line between y = 1 and y = 2 will have 1 square unit above it and 1 square unit below it. The lowest option is 1.
Example 2:
Input: squares = [[0,0,2],[1,1,1]]
Output: 1.16667
Explanation:
The areas are:
Below the line: 7/6 * 2 (Red) + 1/6 (Blue) = 15/6 = 2.5.
Above the line: 5/6 * 2 (Red) + 5/6 (Blue) = 15/6 = 2.5.
Since the areas above and below the line are equal, the output is 7/6 = 1.16667.
To solve this, we need to think about how the total area changes as we move a horizontal line from the bottom to the top.
The Sweep-Line: If we sort these y-coordinates and move upward, the area accumulated between two y-levels is simply the total active width multiplied by the height difference between them.
Finding the Split: As we sweep upward, we keep adding to our cumulative area. The moment our next step would exceed the "halfway" area, we know the line must lie between the previous and the current y. We then use simple algebra to find the exact decimal coordinate.
#include <vector>
#include <algorithm>
#include <iostream>
using namespace std;

class Solution {
public:
    double separateSquares(vector<vector<int>>& squares) {
        double totalArea = 0;
        vector<pair<int, int>> events; // {y_coordinate, type_and_width}
        for (const auto& s : squares) {
            int y = s[1], l = s[2];
            totalArea += (double)l * l;
            // Use positive width for start, negative for end
            events.push_back({y, l});
            events.push_back({y + l, -l});
        }
        // Sort events by y-coordinate
        sort(events.begin(), events.end());
        double targetArea = totalArea / 2.0;
        double currentArea = 0;
        long long currentWidth = 0;
        int prevY = events[0].first;
        for (const auto& event : events) {
            int currY = event.first;
            long long heightDiff = currY - prevY;
            double areaIncrease = (double)currentWidth * heightDiff;
            if (currentArea + areaIncrease >= targetArea) {
                // The line is within this horizontal strip
                double neededArea = targetArea - currentArea;
                return prevY + (neededArea / currentWidth);
            }
            currentArea += areaIncrease;
            currentWidth += event.second;
            prevY = currY;
        }
        return 0.0;
    }
};
from typing import List

class Solution:
    def separateSquares(self, squares: List[List[int]]) -> float:
        total_area = 0
        events = []
        for x, y, l in squares:
            total_area += l * l
            # Event: (y-coordinate, width_change)
            events.append((y, l))
            events.append((y + l, -l))
        # Sort by y-coordinate
        events.sort()
        target_area = total_area / 2.0
        current_area = 0.0
        current_width = 0
        prev_y = events[0][0]
        for curr_y, width_change in events:
            height_diff = curr_y - prev_y
            area_increase = current_width * height_diff
            if current_area + area_increase >= target_area:
                needed_area = target_area - current_area
                return prev_y + (needed_area / current_width)
            current_area += area_increase
            current_width += width_change
            prev_y = curr_y
        return 0.0
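As a quick sanity check, the sweep can be exercised against the two examples from the statement. The compact helper below is a standalone re-implementation of the same algorithm (the function name `separate_squares` is mine), so it can be run independently of the class-based solutions:

```python
from typing import List

def separate_squares(squares: List[List[int]]) -> float:
    # Same sweep-line idea: events carry +width at a square's bottom
    # edge and -width at its top edge.
    total = sum(l * l for _, _, l in squares)
    events = sorted([(y, l) for _, y, l in squares] +
                    [(y + l, -l) for _, y, l in squares])
    target, area, width, prev_y = total / 2.0, 0.0, 0, events[0][0]
    for y, dw in events:
        gain = width * (y - prev_y)
        if area + gain >= target:
            # Interpolate inside the current strip (guard the zero-width case).
            return prev_y if width == 0 else prev_y + (target - area) / width
        area, width, prev_y = area + gain, width + dw, y
    return 0.0

assert abs(separate_squares([[0, 0, 1], [2, 2, 1]]) - 1.0) < 1e-5      # Example 1
assert abs(separate_squares([[0, 0, 2], [1, 1, 1]]) - 7 / 6) < 1e-5    # Example 2
```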
/**
 * @param {number[][]} squares
 * @return {number}
 */
var separateSquares = function(squares) {
    let totalArea = 0;
    let events = [];
    for (let [x, y, l] of squares) {
        totalArea += l * l;
        events.push([y, l]);
        events.push([y + l, -l]);
    }
    // Sort by y-coordinate
    events.sort((a, b) => a[0] - b[0]);
    let targetArea = totalArea / 2.0;
    let currentArea = 0;
    let currentWidth = 0;
    let prevY = events[0][0];
    for (let [currY, widthChange] of events) {
        let heightDiff = currY - prevY;
        let areaIncrease = currentWidth * heightDiff;
        if (currentArea + areaIncrease >= targetArea) {
            let neededArea = targetArea - currentArea;
            return prevY + (neededArea / currentWidth);
        }
        currentArea += areaIncrease;
        currentWidth += widthChange;
        prevY = currY;
    }
    return 0.0;
};
This problem is a fantastic introduction to computational geometry. Highly optimized C++ solutions use low-level tricks like SIMD (AVX2) and radix sort to squeeze out every millisecond of performance, but the core logic remains the sweep-line approach.
In real-world systems, these types of algorithms are used in Computer-Aided Design (CAD) software to calculate centers of mass, and in Rendering Engines to determine which objects are visible at certain layers. Understanding how to "discretize" a continuous problem into events is a key skill for any senior engineer.
2026-01-13 18:17:50
I like to play and experiment with mathematical concepts. Simple mathematical concepts, that is. My thing, after all, is machines with keyboards. The kind you can pound on.
We all know the Pythagorean theorem, according to which the sum of the squares of the legs a and b equals the square of the hypotenuse c.
c² = a² + b²
We can collect triangles of this kind by means of Pythagorean triples. A Pythagorean triple is a vector v of three elements that satisfies the equation above. If we move this into Python code (that is our territory, after all), what we would have is the following:
ternas = []
a = ...  # ?
b = ...  # ?
c = ...  # ?
ternas.append((a, b, c))
What do we put where those placeholder ellipses are? How can we create a triple, knowing in advance that the values will form a genuine Pythagorean triple?
It is easy to check whether a triple is Pythagorean. For instance, given a list of triples like the one above, we can find out which of them are Pythagorean with a function like the following:
def filter_real_triads(l):
    return list(filter(lambda t: ((t[0] ** 2) + (t[1] ** 2) == (t[2] ** 2)), l))
What is not so easy is what we were asking ourselves above. How do we create a Pythagorean triple? One thing we can try is creating them at random. After all, they are only three numbers.
def crea_random_triads(l, max):
    for _ in range(max):
        l.append(tuple([rnd(1, max), rnd(1, max), rnd(1, max)]))
This is exciting! How many triples will we get this way?
from random import randint as rnd
import random
random.seed()
l1 = [(3, 4, 5)]
max = 100
print("\n# Random triads:")
crea_random_triads(l1, max)
print(format(l1))
print("\n# Pythagorean triads in l1")
pl1 = filter_real_triads(l1)
print(f"{pl1}\nPercentage: {(len(pl1)/len(l1)) * 100:05.2f}%")
If we run the program, its output will be the following:
# Random triads:
[(3, 4, 5), (48, 64, 100), (72, 98, 69), (70, 19, 78), (98, 54, 42), (77, 47, 18), (99, 80, 9), (69, 55, 12), (61, 30, 8), (23, 60, 25), (29, 37, 40), (98, 1, 44), (13, 2, 97), (15, 54, 65), (80, 85, 28), (8, 100, 62), (4, 29, 66), (72, 21, 64), (1, 78, 39), (25, 16, 64), (17, 82, 24), (72, 7, 94), (35, 84, 68), (4, 52, 59), (74, 40, 100), (55, 30, 55), (82, 46, 96), (49, 42, 50), (30, 8, 4), (39, 36, 52), (80, 16, 34), (63, 82, 98), (30, 61, 11), (48, 63, 94), (36, 71, 7), (60, 52, 76), (34, 82, 40), (82, 12, 31), (51, 82, 35), (73, 59, 89), (44, 76, 47), (25, 4, 58), (80, 46, 63), (96, 8, 45), (42, 39, 35), (24, 36, 82), (37, 93, 42), (69, 85, 51), (69, 88, 61), (60, 98, 100), (17, 41, 90), (56, 36, 95), (58, 40, 29), (62, 67, 85), (58, 11, 98), (89, 86, 31), (30, 89, 35), (45, 8, 66), (9, 47, 40), (90, 83, 82), (19, 88, 14), (35, 19, 95), (2, 63, 22), (7, 99, 83), (15, 2, 10), (93, 19, 69), (6, 81, 98), (55, 88, 78), (87, 89, 40), (35, 74, 35), (33, 99, 38), (58, 43, 87), (86, 54, 88), (89, 78, 54), (19, 32, 32), (81, 92, 22), (100, 53, 5), (62, 48, 98), (51, 76, 75), (98, 42, 46), (92, 74, 21), (15, 48, 90), (11, 85, 59), (82, 52, 16), (48, 58, 52), (26, 37, 12), (75, 52, 50), (35, 77, 13), (10, 67, 96), (90, 60, 43), (98, 75, 24), (16, 11, 81), (39, 5, 47), (94, 35, 52), (70, 56, 97), (66, 62, 26), (17, 61, 22), (41, 47, 92), (41, 4, 28), (68, 4, 20), (89, 98, 38)]
# Pythagorean triads in l1
[(3, 4, 5)]
Percentage: 00.99%
Good grief, what a disaster! Only the seed triple, the baseline one, is a Pythagorean triple. I tried re-running it several times, with no better results.
I decided to change the algorithm a bit. What happens if we repeat until at least one has been generated? Will the computer lock up, searching for a relation that will never exist?
def crea_random_triad(max):
    return tuple([rnd(1, max), rnd(1, max), rnd(1, max)])
...
if __name__ == "__main__":
    max = 100
    random.seed()
    print("\n# Random triads:")
    l1 = []
    pl1 = []
    repetitions = 0
    while len(pl1) < 2:
        l1 = [(3, 4, 5)]
        crea_random_triads(l1, max)
        pl1 = filter_real_triads(l1)
        repetitions += 1
    ...
    print(format(l1))
    print("\n# Pythagorean triads in random list")
    print(f"{pl1}\nPercentage: {(len(pl1)/len(l1)) * 100:05.2f}%")
    print(f"Repetitions: {repetitions}")
Some of the outputs were the following:
# Pythagorean triads in random list
[(3, 4, 5), (24, 7, 25)]
Percentage: 01.98%
Repetitions: 13
# Pythagorean triads in random list
[(3, 4, 5), (13, 84, 85)]
Percentage: 01.98%
Repetitions: 129
# Pythagorean triads in random list
[(3, 4, 5), (48, 55, 73)]
Percentage: 01.98%
Repetitions: 47
Some results can be obtained. The algorithm stops as soon as it finds at least one, sometimes after iterating hundreds of times. Let's change it again:
def crea_random_triad(max):
    return tuple([rnd(1, max), rnd(1, max), rnd(1, max)])
...
if __name__ == "__main__":
    max = 100
    random.seed()
    print("\n# Random triads:")
    l1 = [(3, 4, 5)]
    pl1 = list(l1)
    repetitions = 0
    while repetitions < max**2:
        t = crea_random_triad(max)
        l1.append(t)
        if len(filter_real_triads([t])) > 0:
            pl1.append(t)
        ...
        repetitions += 1
    ...
    print(l1)
    print("\n# Pythagorean triads in random list")
    print(f"{pl1}\nPercentage: {(len(pl1)/len(l1)) * 100:05.2f}%")
    print(f"Repetitions: {repetitions}")
The output is now:
# Random triads:
[(3,4,5), (92, 94, 23), (90, 95, 9)...]
# Pythagorean triads in random list
[(3, 4, 5), (39, 80, 89), (15, 20, 25), (24, 32, 40)]
Percentage: 00.04%
Repetitions: 10000
Since each repetition creates a new triple, out of 10,000 triples we end up with 4, i.e. 0.04% of the total. And one of those was given as a starting point. I ran the program several times, and sometimes it finds only one, two, or even none (apart from (3, 4, 5), which is given). In other words, by brute force we can get results, but at a high computational cost.
It turns out Euclid already dealt with this problem. He had no computers (which are basically dumb, and ask no questions when you tell them to repeat the creation of random numbers 10,000 times over in the hope of producing a Pythagorean triple), and he was far too clever to rely on brute force, so he ended up devising a simple algorithm for creating Pythagorean triples:
If m > n are positive integers, then:
a = m² − n²
b = 2mn
c = m² + n²
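The formula can be checked quickly before writing the generator: expanding (m² − n²)² + (2mn)² gives m⁴ + 2m²n² + n⁴ = (m² + n²)². The small sketch below (the `euclid_triple` helper name is mine) verifies the identity exhaustively over a range of m and n:

```python
def euclid_triple(m: int, n: int) -> tuple:
    # Euclid's formula: (m^2 - n^2, 2mn, m^2 + n^2)
    return (m * m - n * n, 2 * m * n, m * m + n * n)

# Check the Pythagorean identity for every pair with m > n > 0 in range.
for m in range(2, 20):
    for n in range(1, m):
        a, b, c = euclid_triple(m, n)
        assert a * a + b * b == c * c

assert euclid_triple(2, 1) == (3, 4, 5)  # the classic triple
```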
We can translate this little algorithm into Python very easily:
def crea_semi_random_triads(l, max):
    for _ in range(max):
        a = -1
        b = 0
        while a < b:
            a = rnd(1, max)
            b = rnd(1, max)
        ...
        # Note: a == b is allowed here, which yields degenerate triples
        # of the form (0, 2a², 2a²); they still satisfy x² + y² == z².
        x = (a ** 2) - (b ** 2)
        y = 2 * a * b
        z = (a ** 2) + (b ** 2)
        l.append(tuple([x, y, z]))
...
...
The output is now the following:
# Pythagorean triads in semi-random list
[(3, 4, 5), (2656, 13650, 13906), (6400, 6942, 9442), (3744, 11880, 12456), (896, 1440, 1696), (2697, 3304, 4265), (111, 680, 689), (2793, 424, 2825), (1313, 5016, 5185), (5029, 4620, 6829), (1496, 390, 1546), (91, 4140, 4141), (1512, 5734, 5930), (5568, 2926, 6290), (2295, 8968, 9257), (5513, 10416, 11785), (2873, 14136, 14425), (533, 756, 925), (8715, 2068, 8957), (2511, 3960, 4689), (640, 312, 712), (264, 950, 986), (4620, 11408, 12308), (6909, 8740, 11141), (0, 10658, 10658), (1920, 7072, 7328), (4288, 8466, 9490), (3135, 112, 3137), (160, 168, 232), (0, 15488, 15488), (3355, 348, 3373), (639, 2480, 2561), (2289, 5720, 6161), (1995, 1012, 2237), (3213, 11484, 11925), (2576, 510, 2626), (4672, 10146, 11170), (936, 12150, 12186), (2793, 10624, 10985), (7544, 870, 7594), (420, 352, 548), (6912, 3784, 7880), (696, 15130, 15146), (1652, 6864, 7060), (1083, 1444, 1805), (357, 7076, 7085), (3432, 8374, 9050), (6497, 1296, 6625), (1140, 272, 1172), (2024, 3990, 4474), (688, 3666, 3730), (4635, 4292, 6317), (936, 1190, 1514), (245, 1188, 1213), (1368, 74, 1370), (315, 572, 653), (2816, 3360, 4384), (255, 3608, 3617), (4560, 6478, 7922), (1360, 222, 1378), (3161, 5520, 6361), (6901, 3060, 7549), (4747, 3996, 6205), (576, 4590, 4626), (309, 5300, 5309), (1175, 792, 1417), (5900, 5712, 8212), (4495, 11592, 12433), (3128, 3654, 4810), (255, 1288, 1313), (48, 64, 80), (1127, 936, 1465), (4320, 5032, 6632), (3344, 3150, 4594), (21, 20, 29), (5699, 8820, 10501), (5655, 1672, 5897), (0, 10082, 10082), (0, 5618, 5618), (201, 2240, 2249), (572, 96, 580), (161, 12960, 12961), (5777, 4536, 7345), (255, 3608, 3617), (768, 224, 800), (2835, 8892, 9333), (1792, 1656, 2440), (632, 12474, 12490), (256, 8190, 8194), (1800, 1350, 2250), (2117, 2244, 3085), (7056, 2210, 7394), (4144, 10560, 11344), (5559, 4640, 7241), (2397, 9796, 10085), (2079, 4680, 5121), (2709, 8100, 8541), (4636, 6720, 8164), (2700, 3600, 4500), (1449, 2160, 2601), (6716, 9600, 11716)]
Percentage: 100.00%
Sure enough, we now get 100% efficiency. Euclid was quite the phenomenon.
You can find the source code of this Pythagorean-triple algorithm on IDEOne.
What's the point of all this? I don't know. Does it matter?