2025-11-18 06:36:22
Prototypal inheritance is one of JavaScript's most powerful yet misunderstood features. Unlike classical inheritance found in languages like Java or C++, JavaScript uses a prototype-based approach that's both flexible and elegant. Let's unpack this concept from the ground up.
Imagine you're building a user management system. You have a basic user object with common properties and methods. Now you need to create admin and guest variants. Should you copy-paste all the user code? Absolutely not! This is where prototypal inheritance shines—it lets you build new objects on top of existing ones.
[[Prototype]] Property
Every JavaScript object has a secret weapon: a hidden property called [[Prototype]]. Think of it as a reference pointer that says, "If you can't find what you're looking for in me, check this other object."
This [[Prototype]] can point to either another object or null (the end of the chain).
When you try to access a property or method that doesn't exist on an object, JavaScript automatically searches up the prototype chain until it finds it—or reaches null.
__proto__
The historical way to access [[Prototype]] is through __proto__. Here's a classic example:
let animal = {
eats: true,
walk() {
console.log("Animal walks");
}
};
let rabbit = {
jumps: true
};
// Set animal as rabbit's prototype
rabbit.__proto__ = animal;
console.log(rabbit.jumps); // true (own property)
console.log(rabbit.eats); // true (inherited from animal)
rabbit.walk(); // "Animal walks" (inherited method)
When we access rabbit.eats, JavaScript thinks: "I don't see eats in rabbit... let me check its prototype. Ah! Found it in animal!"
__proto__ vs [[Prototype]]
Here's a crucial distinction that trips up many developers:
[[Prototype]] is the actual internal property; __proto__ is a getter/setter for accessing it. Modern JavaScript recommends using Object.getPrototypeOf() and Object.setPrototypeOf() instead, but __proto__ remains widely supported and intuitively clear for learning.
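To make the distinction concrete, here is a small sketch of the modern API (the object names are just for illustration):

```javascript
let animal = { eats: true };
let rabbit = { jumps: true };

// Modern, recommended way to link objects:
Object.setPrototypeOf(rabbit, animal);

console.log(Object.getPrototypeOf(rabbit) === animal); // true
console.log(rabbit.eats); // true (found via the prototype chain)

// Object.create() sets the prototype once, at creation time,
// which avoids the cost of re-linking an already-created object:
let hare = Object.create(animal);
hare.fast = true;
console.log(hare.eats); // true
```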
Prototypes can form chains of arbitrary length:
let animal = {
eats: true,
walk() {
console.log("Animal walks");
}
};
let rabbit = {
jumps: true,
__proto__: animal
};
let longEar = {
earLength: 10,
__proto__: rabbit
};
longEar.walk(); // "Animal walks" (found in animal)
console.log(longEar.jumps); // true (found in rabbit)
console.log(longEar.eats); // true (found in animal)
JavaScript searches: longEar → rabbit → animal → Object.prototype → null
Here's where things get interesting. The prototype chain is used only for reading properties. When you write to a property, the write goes directly to the object itself:
let animal = {
eats: true,
walk() {
console.log("Generic animal walk");
}
};
let rabbit = {
__proto__: animal
};
// This creates a new method on rabbit, doesn't modify animal
rabbit.walk = function() {
console.log("Rabbit bounce!");
};
rabbit.walk(); // "Rabbit bounce!" (own method)
animal.walk(); // "Generic animal walk" (unchanged)
Accessor properties (getters/setters) are the exception. When you assign to an accessor property, you're actually calling its setter function:
let user = {
name: "John",
surname: "Smith",
set fullName(value) {
[this.name, this.surname] = value.split(" ");
},
get fullName() {
return `${this.name} ${this.surname}`;
}
};
let admin = {
__proto__: user,
isAdmin: true
};
console.log(admin.fullName); // "John Smith" (getter from prototype)
admin.fullName = "Alice Cooper"; // Calls setter from prototype
console.log(admin.fullName); // "Alice Cooper"
this: Always Points to the Caller
This is perhaps the most important concept: this is determined by the object before the dot, not where the method is defined.
let animal = {
walk() {
if (!this.isSleeping) {
console.log(`${this.name} walks`);
}
},
sleep() {
this.isSleeping = true;
}
};
let rabbit = {
name: "White Rabbit",
__proto__: animal
};
let cat = {
name: "Fluffy",
__proto__: animal
};
rabbit.sleep(); // Sets rabbit.isSleeping = true
cat.sleep(); // Sets cat.isSleeping = true
console.log(rabbit.isSleeping); // true
console.log(cat.isSleeping); // true
console.log(animal.isSleeping); // undefined
Even though sleep() is defined in animal, when called on rabbit or cat, this refers to the calling object. This means methods are shared, but state is not—a beautiful design pattern!
The for...in loop iterates over both own and inherited properties:
let animal = {
eats: true
};
let rabbit = {
jumps: true,
__proto__: animal
};
// Only own properties
console.log(Object.keys(rabbit)); // ["jumps"]
// Both own and inherited
for (let prop in rabbit) {
console.log(prop); // "jumps", then "eats"
}
To distinguish between own and inherited properties:
for (let prop in rabbit) {
let isOwn = rabbit.hasOwnProperty(prop);
if (isOwn) {
console.log(`Own: ${prop}`);
} else {
console.log(`Inherited: ${prop}`);
}
}
// Own: jumps
// Inherited: eats
Why Doesn't hasOwnProperty Show Up in the Loop?
Excellent question! hasOwnProperty itself is inherited from Object.prototype, but it doesn't appear in for...in loops because it's marked as non-enumerable (enumerable: false). This is true for all Object.prototype methods.
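You can confirm the non-enumerable flag yourself with a property descriptor:

```javascript
let descriptor = Object.getOwnPropertyDescriptor(
  Object.prototype,
  "hasOwnProperty"
);
console.log(descriptor.enumerable); // false, which is why for...in skips it

// Ordinary data properties created in object literals default to enumerable: true
let rabbit = { jumps: true };
console.log(Object.getOwnPropertyDescriptor(rabbit, "jumps").enumerable); // true
```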
[[Prototype]] is the internal link; __proto__ is how we access it.
this always refers to the calling object, not to where the method is defined.
Object.keys() ignores inherited properties; only for...in includes them.
Understanding these rules is crucial for working with JavaScript's object model.
Prototypal inheritance might seem strange at first, especially if you come from a classical OOP background. But once it clicks, you'll appreciate its simplicity and power. It's the foundation of JavaScript's object model and understanding it deeply will make you a much more effective JavaScript developer.
2025-11-18 06:28:54
So yesterday I started working on my chess engine, written in C++. I'm writing these blogs (and a couple more I'll write in the future) for a quite selfish reason, to track my progress, and also because I want someone else, confused like me, to hopefully use this in the future as a road map for writing their own engine.
Right, so I started yesterday with board representation. I'm a programmer, not a writer, so please forgive my fuck-ups here and there; sometimes I won't make sense, but lie to yourself and pretend you understand what I'm on about.
I chose the most intuitive approach, the 8x8 array representation.
I basically made an enum, something like this:
enum {empty, whitepawn, whiteknight..., blackking};
From there I basically hard-coded each piece into the 8x8 array.
Then I wrote a print function, which prints the pieces to the console by mapping a hard-coded string to the enum values.
Something like:
pieces = " PNBRQKpnbrqk"
It works only because the string indices and the enum piece values match; hopefully that makes sense.
Right now I've just implemented a simple no-rules move function. It receives 'from' and 'to' coordinates, manipulates the array by emptying the 'from' square and putting the moved piece on the 'to' square, then re-prints the board.
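To make that concrete, here's a minimal sketch of the whole setup so far: the enum, the lookup string, a print helper, and the no-rules move. It's my own reconstruction of the approach, not code from the repo, so names like makeMove and boardToString are just illustrative:

```cpp
#include <array>
#include <string>

// Piece codes; the order must match the indices of the `pieces` string below.
enum Piece { empty, whitepawn, whiteknight, whitebishop, whiterook, whitequeen, whiteking,
             blackpawn, blackknight, blackbishop, blackrook, blackqueen, blackking };

using Board = std::array<std::array<int, 8>, 8>;

// Index 0 -> ' ' (empty), 1 -> 'P' (white pawn), ..., 12 -> 'k' (black king).
const std::string pieces = " PNBRQKpnbrqk";

// Render the board as text, one rank per line.
std::string boardToString(const Board& b) {
    std::string out;
    for (int r = 0; r < 8; ++r) {
        for (int c = 0; c < 8; ++c) out += pieces[b[r][c]];
        out += '\n';
    }
    return out;
}

// "No rules" move: empty the from-square, copy the piece onto the to-square.
void makeMove(Board& b, int fromRow, int fromCol, int toRow, int toCol) {
    b[toRow][toCol] = b[fromRow][fromCol];
    b[fromRow][fromCol] = empty;
}
```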
This is my repo, if you'd like to follow along:
https://github.com/PainIam/Pain_ENGINE
2025-11-18 06:26:18
Homebrew is a free, open-source command-line package manager designed for macOS (and also Linux, under the name Linuxbrew or Homebrew on Linux). Its main purpose is to simplify installing and managing software that Apple doesn't include natively.
Its basic command is brew install <package>. In essence, it makes it easy for developers and power users to get the software they need without compiling it manually or hunting for individual installers. It's like an "app store" for your terminal.
# Install Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Add to PATH
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.bashrc
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
# Install basic dependencies
brew install gcc git
brew install <pkg>
brew uninstall <pkg>
brew list
brew update
brew upgrade
brew search <name>
asdf is an extensible, multi-purpose command-line version manager. Unlike managers that focus on a single language (like nvm for Node.js or rvm for Ruby), asdf uses a plugin system to manage multiple versions of different programming languages, runtimes, and tools (Node.js, Python, Ruby, Java, Elixir, etc.) from one place.
asdf can guarantee that the correct environment is automatically active for each working directory, which avoids dependency conflicts and simplifies maintaining multiple development environments.
# Install asdf
git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.14.0
echo '. "$HOME/.asdf/asdf.sh"' >> ~/.bashrc
echo '. "$HOME/.asdf/completions/asdf.bash"' >> ~/.bashrc
source ~/.bashrc
asdf plugin add <lang>
asdf list-all <lang>
asdf install <lang> <version>
asdf global <lang> <version>
asdf local <lang> <version>
asdf uninstall <lang> <version>
asdf plugin add golang https://github.com/asdf-community/asdf-golang.git
asdf install golang 1.22.4
asdf global golang 1.22.4
.tool-versions
# .tool-versions
golang 1.22.4
# You can add other tools managed by asdf
php 8.2.12
nodejs 20.10.0
ruby 3.2.2
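Since .tool-versions is just whitespace-separated plain text, scripting around it is easy. As a toy illustration of the format (not how asdf itself resolves versions), here's a helper that looks up the pinned version of a tool:

```shell
#!/bin/sh
# Print the version pinned for a tool in a .tool-versions file.
# Usage: pinned_version <tool> <file>
pinned_version() {
    awk -v tool="$1" '$1 == tool { print $2 }' "$2"
}

# Recreate the example file from above for the demo.
cat > /tmp/tool-versions-demo <<'EOF'
golang 1.22.4
php 8.2.12
nodejs 20.10.0
ruby 3.2.2
EOF

pinned_version golang /tmp/tool-versions-demo   # prints 1.22.4
```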
Author's note: this is my first post ever, although I've been into programming since 2013.
The idea behind this series of posts, and my profile in general, is mostly to bring over a pile of markdown files that are always lying around on my PC. So I hope you take them the way I do: as a guide for remembering how on earth this or that was done.
And although I know that today it's easier to ask an AI than to come find it here, sometimes the AI may have all the answers while you don't have the right questions.
In short, it works for me, and if it helps someone else too, great.
For me it's one less markdown file lying around on my PC, and that's already plenty 😁
Best regards, Oscar Pincho
2025-11-18 06:16:29
This guide explains how to properly set up COQ.nvim autocomplete
with Mason and LSP servers in Neovim. It includes the most
common pitfalls and a working configuration.
Make sure you have:
ms-jpq/coq_nvim
ms-jpq/coq.artifacts (optional, extra completions)
williamboman/mason.nvim
williamboman/mason-lspconfig.nvim
neovim/nvim-lspconfig
Example (lazy.nvim):
{
"ms-jpq/coq_nvim",
branch = "coq",
},
{
"ms-jpq/coq.artifacts",
branch = "artifacts",
},
{
"williamboman/mason.nvim",
config = true,
},
{
"williamboman/mason-lspconfig.nvim",
dependencies = { "neovim/nvim-lspconfig" },
},
COQ does not start automatically unless configured.
vim.g.coq_settings = { auto_start = 'shut-up' }
Alternatively, start it manually later:
:COQnow
require("mason").setup()
local coq = require("coq")
require("mason-lspconfig").setup({
ensure_installed = { "pyright", "jdtls", "dockerls", "elixirls", "ts_ls" },
automatic_installation = true,
handlers = {
function(server_name)
require("lspconfig")[server_name].setup(
coq.lsp_ensure_capabilities({})
)
end,
["elixirls"] = function()
require("lspconfig").elixirls.setup(
coq.lsp_ensure_capabilities({
settings = {
flags = {
debounce_text_changes = 150,
},
elixirLS = {
dialyzerEnabled = false,
fetchDeps = false,
}
}
})
)
end,
},
})
A common pitfall is a leftover line like:
autocmd FileType markdown setlocal omnifunc=coq#complete
Remove this. COQ does not use omnifunc.
:LspInfo
:COQnow
Open a file and start typing; completion should appear.
:COQnow
:LspInfo
coq.lsp_ensure_capabilities()
vim.defer_fn
Install a Markdown LSP:
ensure_installed = { "marksman", ... }
You now have a clean Neovim setup using COQ + Mason + LSP.
2025-11-18 06:07:22
You can find the original post on my blog
Lately I've found myself wanting to work on some side project ideas. Every time I had to implement the same stuff again and again: basic library configuration, routing, layout, forms, authentication. Most of the time I would start a project using Vite and copy-paste parts from other apps I'd worked on.
That alone could take a couple of hours or days without making any progress on my idea.
The things I usually need are simple. I like working with Tanstack Query for handling HTTP requests and caching. I implement authentication using sessions where an HTTP-only cookie is set by the backend, and on the frontend I make a request against a /me or /user endpoint to check if the user has a session. I set up some interceptors to log the user out on 401 requests. Finally, I like to wire basic input fields so I can build forms without having to care about error handling or wiring the field every time.
Here is a more detailed list of the libraries that I use:
It was time for my side project to be a React template to help me build my next ideas faster. I published the code on GitHub.
In my project, I prefer separating files by logic/context rather than file type. So you won't find components and hooks folders in the project.
It follows a DDD-like approach where each feature goes into the features folder. Features are composed of a template file and a view model where I wire stuff like forms or HTTP requests.
src/
├── features/
│ ├── login/
│ │ ├── index.tsx
│ │ └── use-handler.ts
│ └── register/
│ ├── index.tsx
│ │ └── use-handler.ts
├── data-access/
│ ├── api.ts
│ └── users.schema.ts
└── common/
└── auth/
I keep features agnostic of storage or external dependencies by putting them in the data-access folder. So data-access describes and validates API requests using Ky and Zod.
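As a sketch of that boundary (with plain stand-ins instead of the real Ky and Zod, and names like UserSchema and getUser that are illustrative, not from the template): the HTTP client hands back unknown JSON, and the schema validates it before any feature sees it.

```typescript
type User = { id: number; email: string };

// A tiny stand-in for a Zod-style schema: parse() returns typed data or throws.
const UserSchema = {
  parse(data: unknown): User {
    const d = data as { id?: unknown; email?: unknown } | null;
    if (!d || typeof d.id !== "number" || typeof d.email !== "string") {
      throw new Error("Invalid user payload");
    }
    return { id: d.id, email: d.email };
  },
};

// data-access layer: the HTTP client (Ky in the real template) is injected,
// and its raw response is validated at the boundary.
async function getUser(fetchJson: (url: string) => Promise<unknown>): Promise<User> {
  const raw = await fetchJson("/api/me");
  return UserSchema.parse(raw);
}
```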
Then in the view model, I use Tanstack Query to call the API and get a response.
In the past, I used to create custom hooks inside data-access so I didn't expose Tanstack Query into my features. Although this helped a lot with testing and keeping features agnostic of implementation details, it required a lot of boilerplate and honestly, I depend a lot on Tanstack Query for many things, so I decided to simplify the setup.
I also created a mock server to help me prototype fast without touching the backend yet. The mock server supports authentication using an HTTP-only cookie, MFA by printing an OTP to the console, and OAuth2 providers for login—currently it implements GitHub.
The mock server can read JSON files from the filesystem and use them as a database. It also supports all CRUD operations. That way I can build CRUD APIs fast to verify my ideas. Usually it helps me a lot to build a POC of the UI and improve it by using it, instead of spending more time designing and thinking about how it should work.
I plan to continue working on the template by adding support for more OAuth providers like Google and enhancing security by implementing XSRF and other best practices.
Further improvements include:
If you have any issues with the template, please create an issue or share any other ideas.
Thanks for reading.
2025-11-18 05:58:11
The examples in this post are available in a demo repository here: https://github.com/liavzi/custom-open-api-ts-client.
In one of the projects I'm working on, we use a simple API service to communicate with the server:
export class ApiService {
private baseUrl = '';
constructor() {
}
get(endpoint: string): Observable<any> {
...
}
post(endpoint: string, body: any): Observable<any> {
...
}
}
// when I need to use it
apiService.get("iHateToCopyThisEveryTime").subscribe((response: AnotherTypeINeedToManuallyCreateEveryTime) => {});
The first problem is that I always have to manually pass the endpoint URL. This usually means copy-pasting it from the backend, which is repetitive and easy to mess up.
The second problem is even worse: whenever I need to GET or POST JSON data, I also need to manually create a matching TypeScript interface that represents the server's request or response model. This is tedious and error-prone. If someone changes a property name or adds a new field on the server and forgets to update the client, things break silently.
So the goal became clear: automatically generate a TypeScript client. Whenever an API endpoint is added or changed on the server, the client should get a matching, fully typed function, automatically. No more copy-pasting URLs, no more mismatched interfaces, and no more guessing. Just calling strongly typed functions that feel like any other regular TypeScript function.
ASP.NET Core supports OpenAPI out of the box, so generating the specification is pretty straightforward:
using Backend.OpenApi;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
// Learn more about configuring OpenAPI at https://aka.ms/aspnet/openapi
builder.Services.AddOpenApi("internal-api", options =>
{
options.AddOperationTransformer(new AddMethodNameOperationTransformer());
});
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
app.MapOpenApi();
}
app.UseAuthorization();
app.MapControllers();
app.Run();
Pay special attention to the following line, which adds a custom operation transformer:
options.AddOperationTransformer(new AddMethodNameOperationTransformer());
and the code of the transformer:
using Microsoft.AspNetCore.Mvc.Controllers;
using Microsoft.AspNetCore.OpenApi;
using Microsoft.OpenApi;
namespace Backend.OpenApi;
public class AddMethodNameOperationTransformer : IOpenApiOperationTransformer
{
public Task TransformAsync(OpenApiOperation operation, OpenApiOperationTransformerContext context,
CancellationToken cancellationToken)
{
if (context.Description.ActionDescriptor is not ControllerActionDescriptor controllerActionDescriptor)
return Task.CompletedTask;
operation.AddExtension("x-method-name", new JsonNodeExtension(controllerActionDescriptor.ActionName));
return Task.CompletedTask;
}
}
Thanks to the transformer, the generated OpenAPI spec will now include the actual C# method name as part of the operation metadata. This becomes extremely helpful when generating the TypeScript client, because we can produce clean, predictable function names instead of trying to infer them from routes.
For example, consider an API endpoint that returns a list of books:
[HttpGet]
public IEnumerable<Book> GetAllBooks()
{
return Books;
}
It appears in the generated JSON like this:
"/api/Books": {
"get": {
"tags": [
"Books"
],
"responses": {
"200": {
"description": "OK",
"content": {
"text/plain": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Book"
}
}
},
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Book"
}
}
},
"text/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Book"
}
}
}
}
}
},
"x-method-name": "GetAllBooks"
},
Now we want to generate the OpenAPI JSON on every build.
ASP.NET Core makes this straightforward. We can simply follow the official instructions from Microsoft’s documentation.
After that, we add the following lines to the .csproj file:
<PropertyGroup>
<OpenApiDocumentsDirectory>$(ProjectDir)../../Frontend/src/app/contracts</OpenApiDocumentsDirectory>
</PropertyGroup>
<Target Name="CreateTypescriptClient" AfterTargets="Build" Condition="'$(Configuration)' == 'Debug'">
<Exec Command="npm run generate-contracts" WorkingDirectory="$(ProjectDir)../../Frontend" />
</Target>
This configuration instructs the build process to output the generated OpenAPI JSON into the client’s contracts directory.
Then, whenever the project is built in debug mode, it automatically runs the generate-contracts npm script, ensuring your TypeScript client stays in sync with the API definitions.
Next, we’ll implement the generate-contracts script using the Hey API package.
After installing Hey API, add the following scripts to your package.json:
"scripts": {
"ng": "ng",
"start": "ng serve",
...
"generate-contracts": "openapi-ts && npm run generate-api-services",
"generate-api-services": "node src/app/contracts/createApiServices.mjs"
},
The generate-contracts script will now create types directly from the OpenAPI JSON.
Hey API is highly customizable; take a look at the official documentation to see how to configure it using the openapi-ts.config file.
Now, here’s the really interesting part. The script createApiServices.mjs
iterates over the OpenAPI JSON and generates custom API services using ts-morph.
Take some time to explore the file and its comments to see exactly how it works under the hood.
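To give a feel for what that iteration looks like, here is a stripped-down sketch (not the actual createApiServices.mjs, and without the ts-morph code generation) of walking the OpenAPI document and using x-method-name to name each client function:

```javascript
// Collect one entry per HTTP operation in an OpenAPI document,
// preferring the custom "x-method-name" extension for the function name.
function collectOperations(spec) {
  const verbs = ["get", "post", "put", "patch", "delete"];
  const ops = [];
  for (const [url, pathItem] of Object.entries(spec.paths ?? {})) {
    for (const [verb, operation] of Object.entries(pathItem)) {
      if (!verbs.includes(verb)) continue; // skip path-level metadata
      ops.push({
        url,
        verb,
        // Fall back to a route-derived label when the extension is missing.
        methodName: operation["x-method-name"] ?? `${verb} ${url}`,
      });
    }
  }
  return ops;
}
```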
For example, check out the BooksApiService, which is one of the services generated by this script:
import { Injectable } from '@angular/core';
import { ApiService, RequestParam } from '../../../api-service';
import { Observable } from 'rxjs';
import { Book } from '@contracts';
@Injectable({ providedIn: 'root' })
export class BooksApiService {
constructor(private readonly apiService: ApiService) {
}
getAllBooks(apiServiceRequestParams?: RequestParam): Observable<Book[]> {
return this.apiService.handleInternalApiCall({
url: "/api/Books",
pathParams: {},
queryParams: {},
httpVerb: "get",
requestBody: undefined,
apiServiceRequestParams
});
}
addBook(requestBody: Book, apiServiceRequestParams?: RequestParam): Observable<Book> {
return this.apiService.handleInternalApiCall({
url: "/api/Books",
pathParams: {},
queryParams: {},
httpVerb: "post",
requestBody: requestBody,
apiServiceRequestParams
});
}
getBookByTitle(title: string, apiServiceRequestParams?: RequestParam): Observable<undefined> {
return this.apiService.handleInternalApiCall({
url: "/api/Books/title/{title}",
pathParams: {title},
queryParams: {},
httpVerb: "get",
requestBody: undefined,
apiServiceRequestParams
});
}
getBooksByAuthor(author: string, apiServiceRequestParams?: RequestParam): Observable<Book[]> {
return this.apiService.handleInternalApiCall({
url: "/api/Books/author/{author}",
pathParams: {author},
queryParams: {},
httpVerb: "get",
requestBody: undefined,
apiServiceRequestParams
});
}
}
As you can see, it aligns perfectly with the server-side controller:
namespace Backend.Controllers
{
using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
[ApiController]
[Route("api/[controller]")]
public class BooksController : ControllerBase
{
private static readonly List<Book> Books = new()
{
new Book { Title = "The Hobbit", Author = "J.R.R. Tolkien" },
new Book { Title = "1984", Author = "George Orwell" },
new Book { Title = "To Kill a Mockingbird", Author = "Harper Lee" }
};
[HttpGet]
public IEnumerable<Book> GetAllBooks()
{
return Books;
}
[HttpPost]
public Book AddBook([FromBody] Book book)
{
Books.Add(book);
return book;
}
[HttpGet("title/{title}")]
public Book? GetBookByTitle(string title)
{
return Books.FirstOrDefault(b => b.Title.Equals(title, StringComparison.OrdinalIgnoreCase));
}
[HttpGet("author/{author}")]
public IEnumerable<Book> GetBooksByAuthor(string author)
{
return Books.Where(b => b.Author.Equals(author, StringComparison.OrdinalIgnoreCase));
}
}
public class Book
{
public required string Title { get; set; }
public required string Author { get; set; }
}
}
The amazing part is that this approach frees us from remembering URLs or building query strings - we can just call regular functions. Check out
books.component.ts to see how simple it is to use.
Some might argue that this couples the client to the server’s structure.
But in practice, for internal APIs, I don’t really care about URLs and query strings - they’re just implementation details. All I want is a straightforward way to call my server!
The final piece is implementing the ApiService.
The handleInternalApiCall(args: InternalApiCallArgs) method receives all the information needed to call the server, making integration with your existing project pretty straightforward. You can see my intentionally simplified implementation in the link above.
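One piece such an implementation needs is turning the route template plus pathParams and queryParams into a final URL. A simplified sketch (buildUrl is a hypothetical helper, not necessarily how the linked ApiService does it):

```typescript
// Substitute {placeholders} in a route template with path parameter values
// and append query parameters, URL-encoding everything.
function buildUrl(
  template: string,
  pathParams: Record<string, string | number>,
  queryParams: Record<string, string | number> = {}
): string {
  const path = template.replace(/\{(\w+)\}/g, (_, name) => {
    const value = pathParams[name];
    if (value === undefined) throw new Error(`Missing path param: ${name}`);
    return encodeURIComponent(String(value));
  });
  const query = new URLSearchParams(
    Object.entries(queryParams).map(([k, v]) => [k, String(v)])
  ).toString();
  return query ? `${path}?${query}` : path;
}
```

For example, buildUrl("/api/Books/title/{title}", { title: "The Hobbit" }) yields "/api/Books/title/The%20Hobbit", matching the generated getBookByTitle call above.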
That’s it! With this setup, generating TypeScript clients from your OpenAPI spec is no longer a chore. You don’t have to worry about URLs, query strings, or boilerplate code - just call the API like a normal function.
Give it a try in your own projects and see how much smoother your development can become.