2026-04-22 00:45:18
Introduction
Anthropic's CEO has explicitly stated that the company's latest model, "Claude Mythos," was "created to specialize in programming."
However, one of the fields where Claude Mythos is currently receiving the highest praise in the market is actually "cybersecurity."
💻 The True Nature of Programming is "Logical Thinking"
In the first place, programming is not just the act of typing code. Its essence is an intellectual task that demands advanced logical thinking skills.
Logical thinking, in other words, is a process of "repeated reasoning and its synthesis." It requires the ability to systematically think through how a system should operate and why an error is occurring.
Both humans and AI rely mainly on the following two patterns when reasoning logically toward an answer.
Deduction: applying a general rule to a specific case. Example: "All humans are mortal" (rule) + "Socrates is a human" (event) ➡️ "Therefore, Socrates is mortal" (conclusion)
Induction: generalizing from repeated observations. Example: "Person A, Person B, and Person C all experienced an error with this software" (examples) ➡️ "There must be a common defect in this software" (inference)
🧠 The Evolution from "Word Prediction" to "Logical Stacking"
Previous AI (conventional LLMs) were simply mechanisms that predicted the "word with the highest probability of coming next" based on massive amounts of training data. While they could write plausible sentences, they did not possess logical thinking in the truest sense.
However, the architecture of the latest models like Claude Mythos has evolved.
Its greatest feature is that, rather than mere word prediction, it can accurately stack logical steps like "A therefore B, B therefore C, thus the conclusion is D" just like a human (or at a speed surpassing humans).
🛡️ Why Does This Translate to Strength in Cybersecurity?
At this point, the dots connect.
This is because cybersecurity is the ultimate "logic puzzle" and a domain of "clashing reasoning."
When finding vulnerabilities or preventing cyberattacks, security experts utilize the aforementioned "reasoning" to its absolute limit.
Inductive approach: Discerning from massive access logs (examples) that "this is not normal access, but an attack pattern targeting a specific vulnerability."
Deductive approach: Tracing tens of thousands of lines of code based on the premise that "this function in this language is prone to memory leaks" (rule), and proving that "therefore, the system can be hijacked from here" (conclusion).
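To make these two patterns concrete, here is a toy Python sketch of deduction and induction applied to a security setting. The rule, the log lines, and the three-observation threshold are all invented for illustration; this is not how any model reasons internally.

```python
# Toy illustration of the two reasoning patterns above.
# All rules, requests, and thresholds are invented for demonstration.

def deduce(rule, fact):
    """Deduction: apply a general rule to a specific fact."""
    premise, conclusion = rule
    return conclusion if premise(fact) else None

def induce(observations):
    """Induction: generalize from repeated specific observations."""
    if len(observations) >= 3 and len(set(observations)) == 1:
        return f"likely common cause: {observations[0]}"
    return "insufficient evidence"

# Deductive: "requests containing '../' are traversal attempts" (rule)
# + "this request contains '../'" (event) -> conclusion
rule = (lambda req: "../" in req, "path traversal attempt")
print(deduce(rule, "GET /files/../../etc/passwd"))  # path traversal attempt

# Inductive: three users hit the same error -> infer a shared defect
print(induce(["NullPointerException"] * 3))
```

The point of the sketch is only the shape of the two inferences: deduction moves from rule to case, induction moves from repeated cases to a suspected rule.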
As a result of being forged to "specialize in programming," Claude Mythos has achieved an extraordinary level of reasoning ability to deeply understand code structures and uncover bugs and inconsistencies.
The combination of this ability to understand programming with the reasoning power to stack logical steps accurately has given it an overwhelming strength in the highly complex domain of cybersecurity, one that other AI models have yet to match.
2026-04-22 00:40:20
When a founder walks into a pitch meeting, they're rehearsed. The deck is polished. The metrics are (hopefully) real.
But investors still get blindsided.
Theranos, WeWork, FTX—all had charismatic founders with compelling narratives. All committed fraud or catastrophic governance failures. And in every case, the warning signs were psychological, not financial.
Standard VC diligence covers the market, the metrics, and the financials. But it almost never assesses the founder's character.
This matters because founder character is the single highest predictor of catastrophic failure.
A founder can have a great market and solid traction, but if they're willing to lie to investors, or genuinely can't see their own limitations, the company will eventually implode. The math breaks eventually; the only question is when.
You don't need a psychologist in the boardroom. A few behavioral questions can surface character gaps:
On accountability:
Founders with high narcissism tend to externalize failure ("the market wasn't ready") rather than examine their own role. Founders with psychopathic traits may invent narratives that protect their ego.
On ambition vs. realism:
Founders with extreme narcissism or Machiavellianism often minimize or ignore risks. They believe their will or charisma can overcome objective obstacles.
On stress responses:
Founders with low emotional regulation or high dominance drives may punish dissent, creating yes-men teams that miss obvious problems.
Research spanning more than a decade shows these aren't subtle traits. They show up in behavior, language patterns, and how people respond under pressure.
Add behavioral assessment to your process. A 10-minute behavioral interview adds almost no friction but flags character issues early.
Stress-test founder claims. Ask why their TAM is 10x larger than comparable markets. Ask why their retention is better than industry average. Founders with high integrity will have thoughtful answers or admit uncertainty. Founders with low integrity will confabulate.
Check reference patterns. Not just "did they execute?" but "did they tell the truth when things went wrong?" and "did they take ownership or blame others?"
Consider psychological screening for large rounds. For Series A+, a short psychometric assessment of founder traits (especially Dark Tetrad) costs $200–500 and can prevent $10M+ losses.
You've probably already lost money on a founder who had great metrics but a character problem. Whatever form the failure took, it was predictable if you asked the right questions and listened for the right patterns.
The best investors I know do this intuitively. They meet a founder and sense something off—but they can't always articulate why. That's psychologically valid intuition, but it's not scalable or defensible in a partnership.
By making founder psychology explicit in your diligence, you turn that intuition into a process that is repeatable, scalable, and defensible in a partnership.
The bottom line: A great deck with mediocre founders will lose. Great founders with a weak market will iterate. But charismatic fraudsters with a decent market will destroy capital fast.
Start asking. Start listening.
What founder red flags have you caught? And more importantly—why do you think you caught them when others didn't?
2026-04-22 00:31:57
44% of the songs uploaded to Deezer every day are AI-generated. When I read that, I had to reread it twice. Not because it seems impossible, but because the number is so concrete and so uncomfortable at the same time.
Then I did something I shouldn't have done if I wanted to sleep well: I ran git blame over my commits from the last month.
The debate the Deezer number sparked was predictable. Angry artists, executives with prepared speeches, think pieces about the future of music. All aiming at the same target: the quality of AI-generated content.
And that, I think, is the framing error.
The problem isn't whether the code an agent generates works. At this point, it mostly works. The problem is something else: what does it mean for something to be yours when you didn't think it yourself?
In music this is easier to see, because authorship there is cultural, almost romantic. In software we tend to hide it behind pragmatism. "If it passes the tests, it's fine." I've written about this before: agents that pass your tests are exactly the problem, not the solution.
So I went looking for the real number in my own projects.
# Reviewing last month's commits
# I wanted to know how much of "my" code was really mine
git log --since="1 month ago" --author="Juan Torchia" --pretty=format:"%H %s" | head -50
# Then, for each commit, I reviewed the diff
git show --stat <hash>
# And finally, the honest question:
# How many lines of this diff did I actually think through?
# How many did I paste from an agent without fully reading them?
I don't have a script that automatically detects whether code was written by me or generated by Claude. I wish I did. What I actually did was more artisanal and more uncomfortable: I went through it commit by commit and tried to be honest with myself.
Did I design this TypeScript block myself, or did I ask the agent to generate "a function that validates the schema" and then adjust the name of one variable?
// This is the kind of code that raised the doubt
// I recognize it because it's too tidy to be mine on a first pass
// And because the function name is exactly what I would have asked an agent for
function validarSchemaContratos(data: unknown): data is ContratoInput {
  if (!data || typeof data !== 'object') return false;
  const contrato = data as Record<string, unknown>;
  // Did I write this validation? Or did I ask for it?
  // Honestly: I asked for it. And I merged it without thinking much more about it.
  return (
    typeof contrato.id === 'string' &&
    typeof contrato.monto === 'number' &&
    contrato.monto > 0 &&
    typeof contrato.fechaVigencia === 'string'
  );
}
The number I arrived at: around 38% of the lines merged that month involved some degree of agent generation where my real contribution was the prompt, not the design.
Not Deezer's 44%. But close enough for the discomfort to be real.
In 2024 I migrated a monorepo from npm to pnpm. The install went from 14 minutes to 90 seconds. The team couldn't believe it. And I understood that change completely: every decision, every trade-off, every reason why pnpm handles hoisting differently. That knowledge is mine.
Now I think about how much of the code I merge today I could defend with that same level of understanding. And the honest answer is: not all of it.
That's not a problem with the agents. It's a problem with my process.
The distinction that matters isn't "I typed every character myself" vs. "an AI generated it." That's a false debate. The real distinction is:
Can I defend every design decision in code review? Do I understand the trade-offs? If this code fails at 3 a.m., do I know where to start looking?
If the answer is no, the problem isn't one of philosophical authorship. It's operational.
Here are the real gotchas I found in my review:
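The tally itself is simple arithmetic. Here is a sketch of how it works, with hypothetical per-commit numbers standing in for my manual labels (the hashes and counts are invented for illustration):

```python
# A sketch of the manual tally described above. The per-commit numbers
# are hypothetical labels from a human review, not output of any detector.

commits = [
    # (hash, lines merged, lines where the real contribution was only the prompt)
    ("a1f9c2d", 120, 80),
    ("b3e7d41", 45, 0),
    ("c9d0f55", 200, 60),
]

total = sum(merged for _, merged, _ in commits)
agent = sum(gen for _, _, gen in commits)
share = agent / total
print(f"{share:.0%} of merged lines were prompt-only contributions")  # 38%
```

Nothing in the computation is smart; the honesty has to go into the labels.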
1. The agent optimizes for the case you described, not for your system
// The agent generated this when I asked for pagination
// It works perfectly for the description I gave it
// The problem: my DB has 2M rows and OFFSET is devastating at scale

// What the agent generated (correct for the prompt as stated)
const resultados = await db.query(
  `SELECT * FROM contratos
   ORDER BY created_at DESC
   LIMIT $1 OFFSET $2`,
  [pageSize, page * pageSize]
);

// What I actually needed (cursor-based pagination)
// I know this because I know my system; the agent doesn't
const resultados = await db.query(
  `SELECT * FROM contratos
   WHERE created_at < $1
   ORDER BY created_at DESC
   LIMIT $2`,
  [cursor, pageSize]
);
I merged the first version. I found it in production three weeks later, when the contracts endpoint started taking 8 seconds on page 50.
2. Generated code has no memory of your earlier decisions
This connects with something I noticed while analyzing the diff of Claude's system prompts between versions: the model knows a lot, but it doesn't know your history. It has no context for why your architecture made certain decisions in the past.
The result: generated code that is technically correct and architecturally inconsistent with decisions you made six months ago.
3. Generated technical debt is harder to trace
When I write bad code, I usually know why I wrote it. Time pressure, legacy, a conscious trade-off. When an agent generates suboptimal code that I merged without thinking, I don't have that memory. git blame says I'm the author. My head doesn't remember the decision.
This has direct implications for security. If you don't fully understand what you merged, you don't understand your attack surface either. What happened with Vercel and the supply chain in April is an example of how outsourcing without real comprehension creates attack vectors you don't see coming.
4. Trust in the tools replaces your own judgment
This one is the most subtle. When the tool generates the code and the tests pass, there is an implicit pressure to merge. CI is green. What more do you want? I've written about this in relation to trusting environment-setup tools and agents: the question isn't whether the tool works, it's whether you understand what it's doing and why.
I'm not going to say "use less AI," because that's an emotionally satisfying and practically useless answer. What I actually do:
Review my own code before merging, without the context of the agent chat
I close the conversation. I open the diff. I ask myself: can I explain every line? If I can't, I don't merge until I can.
Separate "agent-reviewed" commits from "mine"
Not in the public commit message, but in my mental process. Commits where the agent carried significant weight get mentally flagged for deeper review later.
The agent generates drafts; I design the architecture
I changed how I phrase my prompts. Instead of "generate a function that does X," I use "explain the trade-offs between approach A and approach B for this case," and then I write the implementation based on that discussion. Slower. More mine.
The parallel with Deezer isn't that AI-generated content is bad. It's that when you can't tell what's yours and what isn't, you lose something important, and in software that something is called understanding of the system you maintain.
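If you wanted to make that separation explicit rather than mental, one option is a commit-message trailer you can count later. A sketch, assuming a hypothetical `Agent-Assisted:` trailer convention (my own invention, not a git standard):

```python
# Count commits tagged with a hypothetical "Agent-Assisted:" trailer.
# The trailer name is an invented convention, not a git standard.

def agent_assisted_share(commit_messages):
    """Fraction of commits carrying the Agent-Assisted trailer."""
    if not commit_messages:
        return 0.0
    tagged = sum(
        1 for msg in commit_messages
        if any(line.strip().lower().startswith("agent-assisted:")
               for line in msg.splitlines())
    )
    return tagged / len(commit_messages)

msgs = [
    "fix: validate contract schema\n\nAgent-Assisted: yes",
    "chore: bump deps",
    "feat: cursor-based pagination\n\nAgent-Assisted: yes",
]
print(f"{agent_assisted_share(msgs):.0%}")  # 67%
```

In a real repository you would feed it full commit bodies from `git log` output; here the messages are inline for illustration.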
Is AI-generated code less reliable than human-written code?
Not necessarily. AI-generated code can be perfectly reliable in functional terms. The problem isn't the reliability of the output but the comprehension of the author. Code you don't fully understand, no matter who generated it, is code you can't maintain, debug, or defend during a production incident.
How much of the code in professional projects is AI-generated today?
There is no official, consistent number for software the way there is for music on Deezer. GitHub Copilot reported in 2023 that 46% of the code in projects using the tool is AI-generated. My personal experience for that specific month was around 38% with some degree of agent generation. The number varies enormously by team, role, and type of task.
What's the difference between using AI to generate code and using Stack Overflow?
It's a legitimate question, and the difference is one of degree, not of kind. With Stack Overflow you generally understand what you copy, because the context is more limited and you have to adapt it. With an agent, the generation is so complete and so tailored to your case that the illusion of comprehension is greater. The risk isn't copying; it's believing you understand when you don't.
How does this affect code security?
Significantly. If you don't fully understand what you merged, you can't reason about your attack surface. Agents generate code that is correct for the case described, but they can introduce vulnerabilities in contexts they don't know: your specific data model, your authentication policies, your permissions architecture. Review generated commits with the same rigor you would apply to code from an external developer who doesn't know your system.
Should I worry that 44% of what's uploaded to Deezer is AI?
It depends on what worries you. If it's the technical quality of the audio, probably not. If it's the creative ecosystem and the economic sustainability of human artists, there are reasons to think about it. In software, the analogue would be: if 44% of your codebase was generated without real comprehension by the team, you have a maintainability problem and a team that doesn't know its own system. That should worry you.
Is there any way to automatically detect which code in a repository was generated by AI?
Not reliably, today. Detectors exist, but they have high false-positive and false-negative rates. The more useful question isn't "did an AI generate it?" but "can the author defend every design decision?" No script detects that; code review and production incidents reveal it.
Deezer's 44% stings because it makes visible something we would rather not quantify. When it's music, it's easy to point at. When it's our own code, the resistance to doing the analysis is greater.
I ran the git blame. I didn't like everything I found. But now I know where I stand.
What I would do differently isn't use fewer agents; it's be more honest about the difference between "I merged code that works" and "I understand the system I'm building." The first is execution. The second is engineering.
And if 44% of Deezer is AI, the question I ask myself for next year isn't how to reduce that number. It's how to make sure that whoever uploads it understands it.
In my case, that starts with not closing the agent session before closing the diff.
Did you run git blame on your last month? What number did you find? Write to me; I genuinely want to know whether it's just my project or whether we're all in the same place.
This article was originally published at juanchi.dev
2026-04-22 00:31:18
Deleting is permanent. Learn the safe patterns that prevent accidental data loss.
Remove-Item (aliases: del, rm) deletes files. Unlike deleting in File Explorer, PowerShell does not send files to the Recycle Bin; they are deleted permanently. So you must be careful.
Always preview what you're deleting before actually running Remove-Item.
# Delete one file
Remove-Item report.txt
# No confirmation—it's gone immediately!
# That's why you check before deleting
# See what WOULD be deleted without actually deleting
Remove-Item *.txt -WhatIf
# Shows files that match *.txt
# Safe to check before running for real!
# PowerShell asks 'Are you sure?' before deleting
Remove-Item *.log -Confirm
# You must type 'Y' or 'Yes' to confirm
# Step 1: See what matches
Get-ChildItem *.tmp
# Step 2: Preview deletion
Remove-Item *.tmp -WhatIf
# Step 3: Delete for real
Remove-Item *.tmp
# Step 4: Verify it's gone
Get-ChildItem *.tmp
The golden rule of deletion - use this pattern EVERY TIME:
# 1. Check what exists
Get-ChildItem *.log
# 2. Preview what gets deleted
Remove-Item *.log -WhatIf
# 3. Look at step 2 output carefully
# 4. THEN delete for real
Remove-Item *.log
# This pattern has saved me from disasters many times!
Dangerous command to AVOID:
Remove-Item * -Recurse -Force # DON'T! Deletes EVERYTHING!
Stop reading and start practicing right now:
The interactive environment lets you type these commands and see real results immediately.
This is part of the PowerShell for Beginners series.
You now understand how Remove-Item works, what -WhatIf and -Confirm do, and the check-preview-delete-verify pattern. Practice these examples until they feel natural. Then tackle the next command in the series.
Ready to practice? Head to the interactive environment and try these commands yourself. That's how it sticks!
What PowerShell commands confuse you? Drop it in the comments!
2026-04-22 00:29:50
How to understand pressure, reduce self-doubt, and move through performance anxiety with more steadiness
Lauren Bonvini is a Seattle-based stage fright coach who helps performers, speakers, and creatives work through performance anxiety and build confidence, presence, and self-trust. For readers who want a visual companion to these ideas, Lauren Bonvini’s Stage Fright and Confidence guide on SlideShare offers another practical way to explore this topic.
Stage fright can affect people in ways that are both obvious and subtle. For some, it shows up before a presentation or performance as a rush of nerves, tension, and racing thoughts. For others, it appears more quietly as avoidance, self-doubt, overthinking, or the habit of holding back in moments where they want to speak clearly and confidently. In either case, the experience can be frustrating because it often affects people who are capable, prepared, and deeply invested in what they want to say.
One of the most difficult parts of stage fright is that it can make simple moments feel much bigger than they really are. A meeting, a creative performance, a talk, or even a conversation can start to feel loaded with pressure the moment a person becomes aware that they are being seen and heard. Suddenly, the focus shifts away from communication and toward self-protection. Instead of thinking clearly about the message, the mind becomes preoccupied with fear.
That is why a practical approach matters. Overcoming stage fright is not about becoming a completely fearless person. It is about learning how to understand what is happening, reduce the intensity of self-judgment, and build the kind of confidence that holds up under pressure.
Stage fright is not only about speaking or performing. It is often about visibility. When people are in a situation where others are watching, listening, or evaluating, the moment can start to feel risky. The body responds quickly to that sense of pressure, even when there is no real danger.
That response can include a racing heart, shallow breathing, and tension in the shoulders and jaw. At the same time, anxious thoughts often become louder: worries about being judged, losing the thread, or visibly showing nerves.
This combination of physical activation and fear-based thinking is what makes stage fright feel so overwhelming. A person may still have the same skills, experience, and ideas they had before the pressure started, but anxiety interferes with access to them.
This is important because it means stage fright does not define ability. It affects performance, but it is not proof that someone is incapable.
A common mistake is believing that confidence should come first. Many people assume they need to feel completely calm before they can trust themselves. If they still feel anxious, they take that as a sign that they are not ready.
Unfortunately, that mindset keeps people stuck.
Confidence is rarely something that appears before experience. More often, it is built through experience. It grows when someone steps into a moment, feels the discomfort, survives it, and begins to realize that nerves do not automatically ruin performance.
Waiting to feel perfect usually increases pressure. The more someone believes they must feel confident first, the more alarming anxiety becomes when it appears. This creates a second layer of fear, where the person is no longer only responding to the speaking or performance moment. They are responding to the fact that they are anxious.
A more helpful goal is not perfect calm. It is steadiness. It is the ability to remain connected enough to yourself and your message that you can keep going, even if the moment feels uncomfortable.
Confidence is often misunderstood as a personality trait. In reality, it is something that can be developed through repetition, reflection, and self-trust. A practical approach focuses on what helps people do that in real situations.
Prepare for clarity, not control
Preparation helps, but it works best when it creates familiarity rather than rigidity. Focus on the key message, the structure of what you want to say, and the points that matter most. Know where you are going, but do not make yourself dependent on saying everything perfectly.
Trying to control every word often increases anxiety. It creates the feeling that even a small shift will ruin everything. Clear preparation creates flexibility, and flexibility creates more confidence.
Shift attention outward
When stage fright rises, attention often turns inward. People start monitoring their voice, their body, their facial expressions, and every sign that they may not be doing well enough. This kind of self-monitoring adds pressure and makes it harder to stay present.
A better focus is outward: the message you came to deliver, the people you are speaking to, and the purpose of the moment.
Support the body
Because stage fright is physical, it helps to use simple physical tools. Slowing the breath, dropping the shoulders, softening the jaw, and feeling both feet on the ground can all help reduce the intensity of the body’s stress response.
These are small actions, but they are effective because they signal stability to the nervous system. They do not have to erase anxiety in order to help. They only need to make the moment a little more manageable.
Reframe what anxiety means
A lot of the suffering around stage fright comes from interpretation. People feel nervous and immediately decide that something is wrong. That interpretation adds more fear to an already stressful moment.
A more supportive view is this: anxiety often means the moment matters. It means there is visibility, vulnerability, or pressure attached to what is happening. That does not make the feeling enjoyable, but it does make it understandable.
Once anxiety is seen as activation rather than failure, it becomes much easier to work with.
Perfectionism is one of the biggest reasons stage fright feels harder than it needs to. When someone believes they need to appear flawless in order to be effective, every moment of visibility becomes high stakes.
Perfectionism makes small mistakes feel enormous. It causes people to over-monitor themselves and disconnect from their natural voice. It also keeps them focused on what could go wrong instead of what they actually want to communicate.
In reality, people usually connect more with presence than perfection. They respond to sincerity, clarity, honesty, and conviction. A person does not need to be flawless to be compelling. They need to be present enough to communicate something real.
Letting go of perfectionism does not mean lowering standards. It means shifting toward healthier standards, ones based on connection, clarity, and recovery rather than flawless performance.
Most lasting confidence is built gradually. People grow stronger not because one moment suddenly changes everything, but because they keep practicing visibility in ways that stretch them without overwhelming them.
That may include practicing visibility in small, lower-stakes settings and gradually stretching into bigger ones. Each repetition builds self-trust, and self-trust is the foundation of real confidence.
Stage fright does not mean you are not capable. It means pressure is affecting how the moment feels. Once that becomes clear, it becomes much easier to respond in a more constructive way. Instead of fighting yourself, you can support yourself. Instead of waiting to feel perfect, you can build confidence by practicing steadiness, improving your relationship with pressure, and learning to trust yourself more over time.
Lauren Bonvini helps performers, speakers, and creatives build that kind of confidence through a practical approach to stage fright, performance anxiety, and self-trust. For a visual resource that expands on these ideas, explore Lauren Bonvini’s Stage Fright and Confidence guide on SlideShare.
2026-04-22 00:23:55
A few weeks ago, @swyx nerd-sniped @zachmeyer into building an open-source Dropbox. Zach took it seriously, and the result is Locker: a self-hostable file storage platform that covers most of what you'd actually use Dropbox or Google Drive for, without the subscription or lock-in.
I came across the thread on X, spent some time getting Locker running on Railway, and figured the deployment notes were worth writing up — especially since the setup has a few non-obvious pieces that trip you up if you're looking to self-host.
Locker is a Dockerized Next.js application backed by PostgreSQL. The GitHub repo is worth a look — the tech stack is modern and clean: Next.js 16 App Router, tRPC for end-to-end type safety, Drizzle ORM, BetterAuth, and Tailwind CSS, organized as a Turborepo monorepo with pnpm workspaces.
Feature-wise it covers the things you'd actually miss from the commercial alternatives:
A virtual bash shell: ls, cd, find, cat, and grep via a terminal panel, reading your actual stored files lazily from the configured storage provider. That last one is either delightful or unnecessary, depending on your personality.
Most self-hosted file storage tools tie you to a specific backend. Locker doesn't. You set BLOB_STORAGE_PROVIDER in your environment and point it at wherever you want files to live:
| Provider | Value | What you need |
|---|---|---|
| Local disk | `local` | Just a directory path |
| AWS S3 | `s3` | Access key, secret, region, bucket |
| Cloudflare R2 | `r2` | Account ID, keys, bucket |
| Vercel Blob | `vercel` | A read/write token |
If you already have an S3 bucket, you can point Locker at it and immediately have a UI over your existing data. If you're starting fresh, local disk works out of the box. Switching later is one variable change.
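The switch boils down to a lookup keyed on one environment variable. Here is a generic Python sketch of that pattern; Locker itself is TypeScript, and the class and method names below are invented for illustration only (`BLOB_STORAGE_PROVIDER` is the real variable name).

```python
# Generic sketch of a single-env-var storage provider switch.
# Class and method names are invented; only BLOB_STORAGE_PROVIDER is real.
import os

class LocalDisk:
    def put(self, key, data):
        return f"wrote {key} to local disk"

class S3:
    def put(self, key, data):
        return f"wrote {key} to S3"

PROVIDERS = {"local": LocalDisk, "s3": S3}

def storage_from_env():
    # One variable decides which backend the rest of the app talks to
    name = os.environ.get("BLOB_STORAGE_PROVIDER", "local")
    return PROVIDERS[name]()

os.environ["BLOB_STORAGE_PROVIDER"] = "s3"
print(storage_from_env().put("report.txt", b"..."))  # wrote report.txt to S3
```

Because every caller goes through the factory, changing backends really is a one-variable change, with no call sites to update.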
Locker is designed to run with Docker Compose — a migrate container runs the database migrations first, then the web container starts once migrations complete. That's the intended flow.
Deploying to Railway takes a bit more work because Railway runs a single container rather than orchestrating multiple services. I spent some time getting this right: the key issue is that migrations need to run before the app starts, and the Dockerfile in the repo builds a Next.js standalone output that doesn't include the migration tooling in the final image by default.
The solution was a custom Dockerfile.railway that copies the migration dependencies into the runner stage and runs them as part of the startup command:
CMD ["sh", "-c", "cd /app/packages/database && pnpm drizzle-kit migrate && cd /app && node apps/web/server.js"]
Drizzle tracks which migrations have already been applied, so this is idempotent — subsequent deploys only apply new migrations and skip the rest.
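The idempotency comes from a journal of applied migrations, a pattern that is easy to sketch generically. This illustrates the idea, not Drizzle's actual implementation; the table and function names are invented.

```python
# Generic sketch of the idempotent-migration pattern: a journal table
# records applied migration names, so each migration runs at most once.
import sqlite3

def migrate(conn, migrations):
    conn.execute("CREATE TABLE IF NOT EXISTS _journal (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM _journal")}
    ran = []
    for name, sql in migrations:
        if name in applied:
            continue  # already applied on a previous deploy
        conn.execute(sql)
        conn.execute("INSERT INTO _journal (name) VALUES (?)", (name,))
        ran.append(name)
    return ran

conn = sqlite3.connect(":memory:")
migs = [("0001_init", "CREATE TABLE files (id INTEGER PRIMARY KEY)")]
print(migrate(conn, migs))  # ['0001_init']
print(migrate(conn, migs))  # []  -- second run is a no-op
```

This is why running migrations in the startup command is safe: a redeploy that applies nothing new simply falls through to starting the server.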
If you want to skip all of this and just get a running instance, I published a one-click Railway template that handles everything automatically: Postgres, a volume for file storage, migrations on startup, and all the required environment variables pre-configured.
After deployment you get a clean file management interface, workspace support for teams, and the ability to generate share links for any file or folder. The virtual bash shell is accessible via a terminal panel and lets you navigate your file tree with standard Unix commands — which turns out to be genuinely useful when you want to script something against your stored files.
Authentication supports email/password out of the box, and you can add Google OAuth by dropping in GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET.
Should you trust it as your primary storage? For personal use and small teams, maybe yes. For a large organisation relying on it as primary infrastructure, I'd want to see more production mileage first; the project is still relatively young. That said, the tech choices are solid, the codebase is readable, and the maintainer is active.
The hosted version at locker.dev is available if you want to try it before committing to self-hosting.
I write about cloud, security, privacy, and self-hosted infrastructure at alphasec.io. If Railway templates are your thing, I maintain a collection covering everything from starter kits to AI apps and security tools.