2026-01-22 01:28:17
Implementing a secure SFTP server with AWS Transfer Family and Amazon Cognito
Many organizations need to offer secure file transfer services without managing traditional SFTP servers, operating system users, or manual SSH key management.
In this article I present an architecture based on native AWS services for building a secure, scalable, fully managed SFTP solution that supports both username/password and SSH public key authentication, plus event-driven automation for the user lifecycle.
Architecture overview
The following image shows the solution architecture:
Main components
Supported authentication models
The solution supports two authentication methods: username/password validated against Amazon Cognito, and SSH public keys stored in Amazon S3.
Prerequisites
Before implementing this solution, make sure you meet the following requirements:
Step 1: Create the Amazon Cognito User Pool
Amazon Cognito acts as the central identity service for SFTP users.
Key configuration decisions
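A minimal sketch of creating the pool with boto3; the pool name, admin-only user creation, and password policy below are illustrative assumptions, not settings mandated by this article:
import boto3

cognito = boto3.client("cognito-idp")

response = cognito.create_user_pool(
    PoolName="sftp-users",  # assumed name
    # Only administrators create SFTP users; no self sign-up
    AdminCreateUserConfig={"AllowAdminCreateUserOnly": True},
    Policies={
        "PasswordPolicy": {
            "MinimumLength": 12,  # assumed policy values
            "RequireUppercase": True,
            "RequireNumbers": True,
        }
    },
)
print(response["UserPool"]["Id"])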
Step 2: Create the Amazon S3 buckets
The solution uses two Amazon S3 buckets:
1. Data bucket
Stores the users' files.
Recommended structure:
s3://sftp-data-bucket/username/
2. Public key bucket
Stores each user's SSH public keys.
Recommended structure:
s3://sftp-public-keys-bucket/username/key.pub
IAM policies ensure that each user can only access their own prefix within the bucket.
Step 3: Configure AWS Transfer Family
Create an AWS Transfer Family server for the SFTP protocol, backed by Amazon S3 and configured to use the custom identity provider Lambda (described in Step 4) instead of service-managed users.
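A hedged sketch of creating such a server with boto3; the Lambda ARN is a placeholder and the public endpoint type is an assumption:
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    EndpointType="PUBLIC",  # assumption; use a VPC endpoint for private access
    IdentityProviderType="AWS_LAMBDA",
    IdentityProviderDetails={
        # Placeholder ARN for the custom identity provider Lambda from Step 4
        "Function": "arn:aws:lambda:us-east-1:111122223333:function:sftp-auth"
    },
)
print(server["ServerId"])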
Step 4: Implement the custom authentication Lambda
This Lambda function acts as the custom identity provider for the Transfer Family server.
Authentication flow
# Choose the authentication path based on the incoming event
if event.get("password"):
    # Password flow: validate the credentials against Cognito
    authenticate_user_with_cognito(username, event["password"])
else:
    # Key flow: fetch the user's public SSH keys stored in S3
    public_keys = get_public_keys_from_s3(username)
Responsibilities of the function
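A minimal sketch of such a handler, assuming the helper functions named above; the IAM role ARN and home directory layout are illustrative placeholders, and the response fields (Role, HomeDirectory, PublicKeys) follow the Transfer Family custom identity provider contract:
def lambda_handler(event, context):
    # Transfer Family invokes this function on every login attempt;
    # the event carries lowercase "username"/"password" fields
    username = event["username"]

    if event.get("password"):
        # Password flow: an empty response denies access
        if not authenticate_user_with_cognito(username, event["password"]):
            return {}
        public_keys = []
    else:
        # Key flow: Transfer Family checks the client's key against this list
        public_keys = get_public_keys_from_s3(username)

    return {
        "Role": "arn:aws:iam::111122223333:role/sftp-user-role",  # placeholder ARN
        "HomeDirectory": f"/sftp-data-bucket/{username}",
        "PublicKeys": public_keys,
    }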
Step 5: Automate folder management with EventBridge
The user lifecycle is handled through events.
Folder creation
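One possible implementation (an assumption on my part, not prescribed by this article): an EventBridge rule matches Cognito AdminCreateUser calls recorded by CloudTrail and invokes a Lambda that creates the user's prefix in the data bucket:
import boto3

s3 = boto3.client("s3")
DATA_BUCKET = "sftp-data-bucket"  # assumed bucket name from Step 2

def lambda_handler(event, context):
    # CloudTrail-based EventBridge events expose API parameters under "detail";
    # verify the exact casing of this field for your event source
    username = event["detail"]["requestParameters"]["username"]
    # S3 has no real folders: an empty object with a trailing slash
    # makes the user's prefix visible to SFTP clients right away
    s3.put_object(Bucket=DATA_BUCKET, Key=f"{username}/")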
Folder deletion or renaming
Step 6: Security model with IAM
Security rests on the principle of least privilege: each SFTP session is confined to the user's own prefix in the data bucket, and every component runs with a role scoped to exactly the actions it needs.
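For example (a hedged sketch, not the article's exact policy), the identity provider Lambda can return a session policy that uses the ${transfer:UserName} variable to confine each session to its own prefix:
import json

# Hypothetical session policy: read/write only under the caller's prefix
session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::sftp-data-bucket/${transfer:UserName}/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::sftp-data-bucket",
            "Condition": {"StringLike": {"s3:prefix": "${transfer:UserName}/*"}},
        },
    ],
}

# Returned as the "Policy" field (a JSON string) of the Lambda response
policy_json = json.dumps(session_policy)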
Operational flow
Password-based authentication
To enable password login, set a permanent password for the user in Cognito:
aws cognito-idp admin-set-user-password \
--user-pool-id <UserPoolId> \
--username <username> \
--permanent \
--password <password>
SSH public key-based authentication
Generate a PEM-formatted key pair and upload the public key to the keys bucket:
ssh-keygen -m PEM -f key
aws s3api put-object \
--bucket <PublicKeyBucketName> \
--key <username>/key.pub \
--body key.pub
Benefits of this architecture
Conclusion
This architecture shows how native AWS services can be combined to deliver a secure, scalable, easy-to-operate SFTP solution.
By using AWS Transfer Family, Amazon Cognito, Lambda, EventBridge, and S3, you can eliminate traditional SFTP servers without sacrificing security, control, or traceability.
2026-01-22 01:27:45
A modern, beautiful YouTube video downloader built with Tauri and React
Download Latest Release: https://github.com/vanloctech/youwee/releases/latest
Source Code: https://github.com/vanloctech/youwee
Report a Bug: Please use the "Bug Report" flair or open an issue on GitHub.
2026-01-22 01:27:07
This tutorial was written by Fidaa Berrjeb.
This tutorial is for developers who need to keep a MongoDB database and offline-first apps in sync. We'll walk you through setting up an end-to-end demonstration of bi-directional data sync between local ObjectBox databases on client devices and a MongoDB Atlas cluster. Together, we'll build a system that ensures offline-first functionality while keeping data in sync across devices and databases.
By the end of this tutorial, you'll have a working demo seamlessly synchronizing local app data from Android and iOS devices with a MongoDB Atlas cluster.
You’ll learn how to:
What you’ll be setting up for MongoDB Data Sync:
Before you begin, ensure you have the following:
The system's architecture revolves around the ObjectBox Sync Server, which functions as the main facilitator of device-to-cloud-to-device data synchronization. Here's a high-level overview of how the components interact:
In this section, we are going to set up a MongoDB Atlas cluster that stores and manages our application's core data.
You can also configure more granular privileges.
IMPORTANT
The minimum supported version for the connector is currently MongoDB 5.0, but 8.0 is recommended. MongoDB Atlas, Community, and Enterprise all work. Note that only a MongoDB replica set provides the features the MongoDB Sync Connector needs; a standalone instance will not work.
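As a quick sanity check (an illustrative Python snippet, not part of the original tutorial), you can confirm that your deployment is a replica set before wiring up the connector:
from pymongo import MongoClient

# Substitute your own connection string from the previous section
client = MongoClient("<MongoDB connection string>")
# The "hello" command reports "setName" only when connected to a replica set
info = client.admin.command("hello")
print("replica set:", info.get("setName", "not a replica set"))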
Let’s set up the ObjectBox Sync Server, a synchronization engine, and connect it to our MongoDB cluster to perform bidirectional data-sync.
ObjectBox Sync Server connected to the MongoDB Atlas cluster
git clone [email protected]:objectbox/objectbox-sync-examples.git
cd objectbox-sync-examples/tasks/server-mongo-compose
You will find a docker-compose.yml file inside the tasks/server-mongo-compose folder. To connect to the MongoDB cluster, you need to adjust the file and add the connection string which we saved in the previous section. This is an example of my docker-compose.yml file:
services:
  sync-server:
    image: objectboxio/sync-server-trial
    container_name: sync-server
    restart: always
    # Note: user and group ID 1000 may have to be adjusted to match your user IDs:
    user: "${UID:-1000}:${GID:-1000}"
    ports:
      - "9980:9980"
      - "9999:9999"
    volumes:
      # Expose "current" (this) directory as "/data" in the container.
      - .:/data
    command:
      - --model
      - ./objectbox-model.json
      - --admin-bind
      - 0.0.0.0:9980
      - --mongo-url
      - ${MONGO_URL}
      - --mongo-db
      - ${MONGO_DB}
      - --debug
Create a .env file inside the current directory and add the following variables:
MONGO_URL=<MongoDB connection string>
MONGO_DB=<databaseName>
docker compose up
✔ Network server-mongo-compose_default
✔ Container sync-server
After the start, the following services/ports are available:
9980 — open http://localhost:9980/ to access the Admin UI (and activate the trial)
9999 — the target for Sync clients

We can use the task manager app for the different languages (Java and Swift) as the clients. To run the Java example app, we need to satisfy some prerequisites.
brew install openjdk@17
brew install gradle
git clone [email protected]:objectbox/objectbox-sync-examples.git
cd objectbox-sync-examples/tasks/client-java
./gradlew build
./gradlew run
> Task :java-main-sync:run
Welcome to the ObjectBox tasks-list app example
Available commands are:
ls [-a] list tasks - unfinished or all (-a flag)
new task_text create a new task with the text 'task_text'
done ID mark task with the given ID as done
rm ID delete task with the given ID
exit close the program
help display this help
Run ls -a and you will see the data synchronized from the MongoDB database:
ID Text Date Created Date Finished
5 taskFromMongoUserSpecific 27-06-2025 02:00:00 27-06-2025 02:00:00
6 taskFromMongoUserSpecific 1 27-06-2025 02:00:00 27-06-2025 02:00:00
7 task2 01-09-2025 14:57:15 Task not completed yet
The client's data model is stored in client-java/app/objectbox-models/default.json.
IMPORTANT
The models must be identical (including UIDs) across server and clients. Otherwise, sync can fail or behave unpredictably. If you want to start the client from scratch, you need to remove the database folder tasks/client-java/app/tasks-synced:
rm -rf tasks/client-java/app/tasks-synced
Congrats! You now have:
Now, simply swap in your own entities, deploy the Sync Server where you need it, and you’re production-ready for offline-first at scale.
If you want to go deeper into ObjectBox Sync, offline-first architectures, and hybrid AI use cases, here are some good next reads:
2026-01-22 01:19:42
Sometime late last year I opened Notepad on Windows 11 and watched it use 100MB of RAM.
Notepad. The app that used to be 200KB.
Microsoft had added Copilot integration, some half-baked markdown preview, and turned a text editor into another bloated thing. I was using some random free website just to read .md files at that point.
I'd been building stuff with AI for about two years. I'd even shipped Rust projects before using this approach. But I don't actually know Rust. I don't know any programming languages. I can't read code line by line and tell you what it does.
So I built another one.
Ferrite now has 800+ GitHub stars. Every line of its 15,000+ lines of Rust code was generated by AI. Here's the real story of how that went.
I use Claude through Cursor for the actual coding. For task management and keeping context between sessions, Task Master handles PRD parsing and task generation. When the AI needs current documentation for libraries like egui, it pulls it through Context7.
The short version of the workflow:
Each task gets a handover document that gives the AI exactly the context it needs. Fresh chat, paste the handover, work until done, update the handover for the next task. The full workflow documentation is public if you want the details.
The boring stuff. Setting up a Rust project, file I/O, config handling, window management. AI is great at "do the standard thing."
Bug fixes. Describing a bug and getting a fix is often faster than traditional debugging. "The scroll position resets when I switch tabs" turns into working code in one prompt.
Feature implementation. I describe what I want and it generates code that integrates with the existing architecture. Most of the time.
Edge cases. The Mermaid diagram rendering is a good example. We support flowcharts, sequence diagrams, class diagrams, state diagrams, and several others. Flowcharts are the most complete right now. The 0.2.5 release made the basics much more reliable, but there's still a lot of work to do before all those diagram types actually work well. That's on the roadmap for 0.2.6 and 0.3.0.
AI generates the happy path. The weird inputs, unusual syntax, nested edge cases? Those break.
Performance. I had to guide optimization heavily. We went from 250MB idle to around 72MB, but I had to identify what to optimize. The AI can't look at the memory profiler.
Architecture. "Should this be a separate module?" AI gives answers. They're not always good ones. I had to learn enough about the codebase to evaluate whether what it suggested made sense.
I didn't put "built with AI" in the README from day one. Honestly, I didn't think about it.
When the project started getting attention somewhere around version 0.2.0, someone pointed it out. Fair point. I added it right away.
If you're using the app, you should know how it was made. If you're contributing, you should know what you're working with. The code has inconsistent patterns in places, probably bugs I haven't found yet. A senior Rust developer would find things to cringe at. But it works.
Here's what Ferrite does now:
Native Mermaid rendering. This is the main thing. Diagrams render directly in the preview without JavaScript or external services. Flowcharts are solid. Other diagram types are getting there.
Performance that makes sense. About 72MB of RAM idle. Fast startup. The kind of performance you'd expect from a native app instead of a web browser pretending to be one.
Split view editing. Raw markdown on the left, rendered preview on the right. Both sides are editable.
The rest. Syntax highlighting for 40+ languages, Git integration, session persistence, keyboard shortcut customization, multi-encoding support. Chinese translation is 100% complete thanks to contributors.
I never actually tried Obsidian or Typora before building this. But from what I can tell looking at benchmarks and user reports, Ferrite holds up well against them on performance. And it's free and open source.
Yes.
I don't know programming languages. That hasn't changed. But the app exists. People use it. It does what I needed it to do.
For a side project where "works" matters more than "perfect," this was the right call.
Links:
All the prompts, handover documents, and development history are public. Fork it, learn from it, tell me what I got wrong.
The Mermaid renderer is planned for extraction as a standalone Rust crate. If that interests you, follow the project.
2026-01-22 01:19:32
I'm building a financial agent on AWS, and for the last few days, my frontend and backend have been living in separate worlds. My Python Lambda function did the work, and my React dashboard just looked pretty with fake data. Today, I connected them.
The Backend Change
First, I had to make sure my Lambda function actually returned data usable by a browser. This meant returning a JSON object with CORS headers.
Python
return {
'statusCode': 200,
'headers': {
"Access-Control-Allow-Origin": "*", # Allow browser access
"Content-Type": "application/json"
},
'body': json.dumps({
"data": {
"transactions": saved_items,
"ai_analysis": ai_analysis
}
})
}
The Frontend Logic
I used Vite for my React setup. The key was to replace my hardcoded data arrays with a fetch call to the unique Function URL provided by AWS.
JavaScript
useEffect(() => {
const fetchData = async () => {
const response = await fetch("YOUR_LAMBDA_URL_HERE");
const result = await response.json();
setTransactions(result.data.transactions);
};
fetchData();
}, []);
Now, every time the dashboard loads, it triggers a live financial audit in the cloud. No more fake numbers.
2026-01-22 01:16:48
In the project's root directory (in our case DatabasePoC), create the CoreDataManager class and add the following implementation:
import CoreData
class CoreDataManager {
static let instance = CoreDataManager()
private let container: NSPersistentContainer
private init() {
        // Build the container with the name of our data model
container = NSPersistentContainer(name: "CompanyModel")
        // Start the container by loading its persistent stores.
container.loadPersistentStores { description, error in
if let error {
print("Error loading stores: \(error.localizedDescription)")
}
}
}
    // A context is required to manipulate the managed object instances
func getViewContext() -> NSManagedObjectContext {
container.viewContext
}
func getBackgroundContext() -> NSManagedObjectContext {
container.newBackgroundContext()
}
}
Here we create an NSPersistentContainer with the name of our data model and expose a context for manipulating the database. The singleton pattern gives us a single shared reference to this manager. The view context is tied to the main thread for UI work, while newBackgroundContext() returns a context suited to heavy operations off the main thread.
To insert, create an instance of the NSManagedObject subclass (e.g. DBEmployee) and assign values to its attributes and, optionally, its relationships. Then ask the context to save.
let department = DBDepartment(context: context)
department.name = "Recursos Humanos"
let employee = DBEmployee(context: context)
employee.name = "Juan Pérez"
employee.age = 30
employee.position = "Ingeniero"
// Core Data updates the inverse relationship automatically
employee.department = department
do {
try context.save()
} catch {
    print("Error saving employee: \(error)")
}
To delete, fetch the NSManagedObject instance (e.g. DBEmployee), ask the context to delete it, and then ask the context to save.
let request: NSFetchRequest<DBEmployee> = DBEmployee.fetchRequest()
request.predicate = NSPredicate(format: "name == %@", "Juan Pérez")
do {
if let employee = try context.fetch(request).first {
context.delete(employee)
try context.save()
}
} catch {
print("Error: \(error)")
}
To fetch, build an NSFetchRequest for the NSManagedObject (e.g. DBEmployee) and ask the context to execute it.
let request: NSFetchRequest<DBEmployee> = DBEmployee.fetchRequest()
request.predicate = NSPredicate(format: "name == %@", "Juan")
request.sortDescriptors = [NSSortDescriptor(key: "age", ascending: true)]
do {
let employees = try context.fetch(request)
} catch {
print("Error: \(error)")
}
You can filter with NSPredicate and combine several sort criteria with NSSortDescriptor. If no sort descriptor is applied, the results have no defined order; by this I mean you must NOT ASSUME any ordering by creation date.
To update, fetch the NSManagedObject instance (e.g. DBEmployee), change its attributes, and then ask the context to save.
let request: NSFetchRequest<DBEmployee> = DBEmployee.fetchRequest()
request.predicate = NSPredicate(format: "name == %@", "Juan Pérez")
do {
if let employee = try context.fetch(request).first {
employee.name = "John Doe"
try context.save()
}
} catch {
print("Error: \(error)")
}