
AWS SFTP with AWS Transfer Family and Amazon Cognito

2026-01-22 01:28:17

Implementing a secure SFTP server with AWS Transfer Family and Amazon Cognito

Many organizations need to offer secure file transfer services without managing traditional SFTP servers, operating system users, or manual SSH key handling.

In this article, I present an architecture based on native AWS services for building a secure, scalable, fully managed SFTP solution that supports username/password and SSH public key authentication, plus event-driven automation for the user lifecycle.

Architecture overview
The following diagram shows the solution architecture:

Main components

  • AWS Transfer Family – managed SFTP endpoint
  • Amazon Cognito – authentication and user management
  • AWS Lambda – custom identity provider and automation
  • Amazon EventBridge – event-driven orchestration
  • Amazon S3 – data and public key storage
  • AWS IAM – least-privilege access control

Supported authentication models
The solution supports two authentication methods:

  • Username and password authentication
  • SSH public key authentication

Both methods are handled dynamically by a Lambda function that acts as a custom identity provider for AWS Transfer Family.

Prerequisites
Before deploying this solution, make sure you have the following:

  • AWS CloudTrail enabled, with management events active. This is required so that Amazon EventBridge can detect administrative events coming from Amazon Cognito (for example, user creation or deletion). No additional configuration is required beforehand.

Step 1: Create the Amazon Cognito User Pool
Amazon Cognito acts as the central identity service for the SFTP users.
Key configuration decisions

  • Username-based sign-in
  • Strong password policy
  • Enable AdminCreateUser for automation
  • No Hosted UI required

Each user created in Cognito represents one SFTP user; a sketch of this setup in code follows below.
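As a hedged illustration of these decisions with boto3 (the pool name, app client name, and password policy values are assumptions for the example, not requirements), the pool and the app client used later by the identity provider Lambda could be created like this:

import boto3

cognito = boto3.client("cognito-idp")

# User pool with username sign-in, a strong password policy, and admin-only user creation.
pool = cognito.create_user_pool(
    PoolName="sftp-users",  # assumed name
    Policies={
        "PasswordPolicy": {
            "MinimumLength": 12,
            "RequireUppercase": True,
            "RequireLowercase": True,
            "RequireNumbers": True,
            "RequireSymbols": True,
        }
    },
    AdminCreateUserConfig={"AllowAdminCreateUserOnly": True},
)

# App client that allows the ADMIN_USER_PASSWORD_AUTH flow used to validate passwords.
client = cognito.create_user_pool_client(
    UserPoolId=pool["UserPool"]["Id"],
    ClientName="sftp-idp-lambda",  # assumed name
    GenerateSecret=False,
    ExplicitAuthFlows=["ALLOW_ADMIN_USER_PASSWORD_AUTH", "ALLOW_REFRESH_TOKEN_AUTH"],
)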

Step 2: Create the Amazon S3 buckets
The solution uses two Amazon S3 buckets:

1. Data bucket
Stores the users' files.
Recommended structure:

s3://sftp-data-bucket/username/

2. Public key bucket
Stores each user's SSH public keys.
Recommended structure:

s3://sftp-public-keys-bucket/username/key.pub

IAM policies ensure that each user can only access their own prefix within the bucket.
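As an illustration of such a per-user policy, here is a sketch (not the article's exact policy) of a scoped-down session policy that the identity provider can return; the bucket name is the example above and ${transfer:UserName} is an AWS Transfer Family policy variable resolved per session:

# Illustrative scoped-down session policy, expressed as a Python dictionary so it can
# be serialized with json.dumps and returned in the "Policy" field of the Lambda response.
USER_SCOPED_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOwnPrefixOnly",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::sftp-data-bucket",
            "Condition": {
                "StringLike": {"s3:prefix": ["${transfer:UserName}/*", "${transfer:UserName}"]}
            },
        },
        {
            "Sid": "ReadWriteOwnObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::sftp-data-bucket/${transfer:UserName}/*",
        },
    ],
}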

Step 3: Configure AWS Transfer Family
An AWS Transfer Family (SFTP) server must be created with the following characteristics:

  • Protocol: SFTP
  • Endpoint type: VPC
  • Security group allowing TCP port 22
  • Custom identity provider: AWS Lambda

AWS Transfer Family invokes the Lambda function every time a user attempts to authenticate.
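As a hedged sketch, such a server could be created with boto3 roughly as follows; the VPC, subnet, and security group IDs and the Lambda ARN are placeholders:

import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",              # placeholder
        "SubnetIds": ["subnet-0123456789abcdef0"],     # placeholder
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # must allow TCP 22
    },
    IdentityProviderType="AWS_LAMBDA",
    IdentityProviderDetails={
        # Placeholder ARN of the custom identity provider function from Step 4.
        "Function": "arn:aws:lambda:us-east-1:123456789012:function:sftp-custom-idp"
    },
)
print(server["ServerId"])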

Step 4: Implement the custom authentication Lambda
This Lambda function acts as the custom identity provider.
Authentication flow

  1. AWS Transfer Family invokes the Lambda function
  2. The function parses the request
  3. The authentication method in use is determined

Example pseudocode:
if event.get("Password"):
    authenticate_user_with_cognito(username, password)
else:
    public_keys = get_public_keys_from_s3(username)

Responsibilities of the function

  • Validate credentials against Amazon Cognito
  • Retrieve SSH public keys from Amazon S3
  • Dynamically return:
    • An IAM role
    • An inline IAM policy
    • A logical home directory mapping

This enables dynamic, per-session authorization; a minimal handler sketch is shown after this list.
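The following is a hedged sketch of such a handler, assuming the example bucket names from Step 2 and placeholder pool, client, and role identifiers; the exact request field casing depends on how Transfer Family invokes the function, so both variants are checked:

import json
import boto3
from botocore.exceptions import ClientError

cognito = boto3.client("cognito-idp")
s3 = boto3.client("s3")

USER_POOL_ID = "<UserPoolId>"    # placeholder
APP_CLIENT_ID = "<AppClientId>"  # placeholder
DATA_BUCKET = "sftp-data-bucket"
KEYS_BUCKET = "sftp-public-keys-bucket"
SESSION_ROLE_ARN = "arn:aws:iam::123456789012:role/sftp-session-role"  # placeholder

def lambda_handler(event, context):
    username = event.get("username") or event.get("Username")
    password = event.get("password") or event.get("Password")

    response = {
        "Role": SESSION_ROLE_ARN,
        "HomeDirectoryType": "LOGICAL",
        # Map the SFTP root ("/") to the user's prefix in the data bucket.
        "HomeDirectoryDetails": json.dumps(
            [{"Entry": "/", "Target": f"/{DATA_BUCKET}/{username}"}]
        ),
    }

    if password:
        # Password flow: validate the credentials against Cognito.
        try:
            cognito.admin_initiate_auth(
                UserPoolId=USER_POOL_ID,
                ClientId=APP_CLIENT_ID,
                AuthFlow="ADMIN_USER_PASSWORD_AUTH",
                AuthParameters={"USERNAME": username, "PASSWORD": password},
            )
        except ClientError:
            return {}  # an empty response denies access
    else:
        # Key flow: return the stored public keys; Transfer Family verifies the signature.
        obj = s3.get_object(Bucket=KEYS_BUCKET, Key=f"{username}/key.pub")
        response["PublicKeys"] = [obj["Body"].read().decode().strip()]

    return response

A scoped-down inline policy like the one sketched in Step 2 can also be added to the response in the "Policy" field.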

Step 5: Automate folder management with EventBridge
The user lifecycle is handled through events.
Folder creation

  • Event: AdminCreateUser in Cognito
  • Service: Amazon EventBridge
  • Target: Lambda function

The Lambda function (sketched after this list):

  • Creates the user's folders in both buckets
  • Prepares the environment before the first login
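A minimal sketch of that Lambda, assuming the standard "AWS API Call via CloudTrail" event shape forwarded by EventBridge and the example bucket names from Step 2:

import boto3

s3 = boto3.client("s3")
BUCKETS = ["sftp-data-bucket", "sftp-public-keys-bucket"]

def lambda_handler(event, context):
    # The AdminCreateUser call recorded by CloudTrail carries the new username in requestParameters.
    username = event["detail"]["requestParameters"]["username"]
    for bucket in BUCKETS:
        # S3 has no real folders; a zero-byte object with a trailing slash
        # makes the user's prefix visible before the first upload.
        s3.put_object(Bucket=bucket, Key=f"{username}/")
    return {"created_for": username}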

Folder deletion or renaming

  • Event: AdminDeleteUser
  • Service: Amazon EventBridge
  • Target: Lambda function

Instead of deleting the data, the folders can be (see the sketch after this list):

  • Renamed
  • Archived
  • Moved to a backup prefix
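Since S3 has no rename operation, "moving" a user's folder means copying each object to a new prefix and then deleting the original. A sketch, assuming the example data bucket and an illustrative archived/ prefix:

import boto3

s3 = boto3.client("s3")
DATA_BUCKET = "sftp-data-bucket"

def archive_user_prefix(username):
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=DATA_BUCKET, Prefix=f"{username}/"):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            # Copy each object under the archive prefix, then delete the original.
            s3.copy_object(
                Bucket=DATA_BUCKET,
                CopySource={"Bucket": DATA_BUCKET, "Key": key},
                Key=f"archived/{key}",
            )
            s3.delete_object(Bucket=DATA_BUCKET, Key=key)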

Step 6: Security model with IAM

Security is based on the principle of least privilege:

  • Each SFTP session receives a temporary IAM role
  • Access is limited to:
    • The user's prefix in S3
    • Only the strictly necessary actions
  • Public keys are protected against accidental deletion

This model guarantees strong isolation between users.

Operational flow
Password-based authentication

  1. Create the user in Cognito
  2. Set the password using the AWS CLI or CloudShell
  3. The user connects with a standard SFTP client
aws cognito-idp admin-set-user-password \
  --user-pool-id <UserPoolId> \
  --username <username> \
  --permanent \
  --password <password>

SSH public key authentication

  1. Generate the key pair:
ssh-keygen -m PEM
  2. Upload the public key to S3:
aws s3api put-object \
  --bucket <PublicKeyBucketName> \
  --key <username>/key.pub \
  --body key.pub
  3. Connect using any compatible SFTP client

Benefits of this architecture

  • No servers to manage
  • Fully managed authentication
  • Event-driven automation
  • Security by design
  • Automatic scaling
  • Full audit trail through CloudTrail

Conclusion
This architecture shows how native AWS services can be combined to deliver a secure, scalable, and easy-to-operate SFTP solution.
By using AWS Transfer Family, Amazon Cognito, Lambda, EventBridge, and S3, it is possible to eliminate traditional SFTP servers without sacrificing security, control, or traceability.

[Open Source] yt-dlp GUI with a beautiful interface and full feature set, supporting 1800+ websites

2026-01-22 01:27:45

A modern, beautiful YouTube video downloader built with Tauri and React

Download Latest Release: https://github.com/vanloctech/youwee/releases/latest

Source Code: https://github.com/vanloctech/youwee

Report a Bug: Please use the "Bug Report" flair or open an issue on GitHub.

Features

  • Batch Downloads - Download multiple videos at once
  • Playlist Support - Download entire YouTube playlists
  • Multiple Quality Options - From 360p to 4K Ultra HD
  • Subtitle Support - Embed subtitles into videos or save them as separate files
  • Expanded Compatibility - Support for 1800+ websites powered by yt-dlp
  • Download Management - Download History and Library sections
  • 8K Resolution Support - High-quality video downloading up to 8K resolution
  • Developer Tools - Access to yt-dlp logs and execution commands for debugging
  • Audio Extraction - Extract audio in MP3, M4A, or Opus formats
  • 6 Beautiful Themes - Midnight, Aurora, Sunset, Ocean, Forest, Candy
  • Dark/Light Mode - Choose your preferred appearance
  • H.264 Codec - Maximum compatibility with all players
  • File Size Estimation - Know the size before downloading
  • Fast & Lightweight - Built with Tauri for minimal resource usage
  • No External Dependencies - yt-dlp bundled with the app

Youwee

MongoDB Data Sync for Offline-First Apps: Keep Data in Sync With ObjectBox and MongoDB Atlas

2026-01-22 01:27:07

This tutorial was written by Fidaa Berrjeb.

Who this guide is for

This tutorial is for developers who:

  • Need to synchronize their MongoDB database with their offline-first applications.
  • Are looking to set up an end-to-end demonstration of bi-directional data synchronization between local ObjectBox databases on client devices and a MongoDB Atlas cluster.
  • Want to build a system that ensures offline-first functionality while maintaining data consistency across devices and databases.

Need to sync your MongoDB database and your offline-first apps? In this tutorial, we'll walk you through setting up an end-to-end demonstration of bi-directional data sync between local ObjectBox databases on client devices and a MongoDB Atlas cluster. Together, we'll build a system that ensures offline-first functionality while keeping data in sync across devices and databases.

What you’ll learn and achieve

By the end of this tutorial, you'll have a working demo seamlessly synchronizing local app data from Android and iOS devices with a MongoDB Atlas cluster.

  • Changes made on client devices will be pushed to the MongoDB Atlas cluster.
  • Updates made in MongoDB will flow back to the connected client apps.

You’ll learn how to:

  • Configure the MongoDB Sync Connector.
  • Run initial/full sync from MongoDB.
  • Verify two-way data sync with a Java client.

What you’ll be setting up for MongoDB Data Sync:

  • MongoDB Atlas cluster: A cloud-based centralized datastore connected to your ObjectBox Sync Server via the ObjectBox MongoDB Sync Connector.
  • ObjectBox Sync Server: Acts as the core facilitator for device connectivity, conflict resolution, and real-time data streaming.
  • Client applications: Java applications, each using a local ObjectBox database for offline-first storage.

Prerequisites

Before you begin, ensure you have the following:

  • A MongoDB Atlas account with the atlasAdmin role (for full administration in Atlas)
  • Sample application data to synchronize across both MongoDB Atlas and your client devices
  • Basic Java knowledge for setting up the client app
  • Gradle installed

Architecture overview

The system's architecture revolves around the ObjectBox Sync Server, which functions as the main facilitator of device-to-cloud-to-device data synchronization. Here's a high-level overview of how the components interact:

Central sync setup for ObjectBox and MongoDB Atlas

  • Task application (Java): Two sample clients (CLI or UI) perform local create, read, update, and delete (CRUD) operations using an embedded ObjectBox database. This database is both a local object store and an on-device vector database that uses the Hierarchical Navigable Small Worlds (HNSW) algorithm to enable offline, on-device semantic search.
  • ObjectBox Sync Server: This is a central synchronization engine for bidirectional sync. It manages secure connections from numerous ObjectBox clients, efficiently queues data changes, and exposes an admin UI for monitoring and configuration. It’s deployed via Docker in the cloud, on-premises, or at the edge.
  • ObjectBox MongoDB Sync Connector: Operating within the ObjectBox Sync Server, it is specifically engineered to stream changes both ways between ObjectBox and MongoDB, map types/IDs, and ensure updates are applied consistently on each side.
  • MongoDB Atlas: Atlas is a fully managed cloud database service that stores and handles your application's core data.

Initial configuration steps

1. Configure the MongoDB Atlas cluster

In this section, we are going to set up a MongoDB Atlas cluster that stores and manages our application's core data.

  1. Log in with your MongoDB Cloud user account and create a new MongoDB Atlas cluster. You may reuse an existing cluster.
  2. Once the MongoDB Atlas cluster is set up, click the "Connect" button, copy the connection string, and save it for the next step. For this demo, we used the MongoDB Atlas cluster SolutionsAssurance on MongoDB version 8.0, hosted in the AWS region eu-south-2 (Spain).
  3. Create the database which you want to sync with ObjectBox. In our case, it was the objectbox_sync database.
  4. Create collections which you want to synchronize. We used the Task collection. All collections you wish to synchronize with ObjectBox must exist within MongoDB before setting up the connector.
  5. Create a new database user or use an existing one. The connector authenticates to MongoDB using the username and password of a database user. This user must have readWrite permissions for the specific database that you're looking to synchronize with ObjectBox. Complete the connection string from the previous step with the newly created database user and password.

user privileges for the created database user

You can also configure more granular privileges.

IMPORTANT
The minimum supported version for the connector is currently MongoDB 5.0, but 8.0 is recommended. MongoDB Atlas, Community, and Enterprise work. Only a MongoDB replica set instance provides the necessary features for the MongoDB Sync Connector to work.

2. Set up the Sync Server and MongoDB with Docker Compose

Let’s set up the ObjectBox Sync Server, a synchronization engine, and connect it to our MongoDB cluster to perform bidirectional data-sync.

ObjectBox Sync Server connected to the MongoDB Atlas cluster

  1. To run the Sync Server, you need to have a Docker engine installed on your local machine. You can download it from the Docker Desktop downloads.
  2. Clone the objectbox-sync-examples repository and navigate to the directory to run the Sync Server and a MongoDB instance with Docker Compose.
git clone git@github.com:objectbox/objectbox-sync-examples.git
cd objectbox-sync-examples/tasks/server-mongo-compose
  3. Check the docker-compose.yml file inside the tasks/server-mongo-compose folder. To connect to the MongoDB cluster, you need to adjust the file and add the connection string which we saved in the previous section.

This is an example of my docker-compose.yml file:

services:
  sync-server:
    image: objectboxio/sync-server-trial
    container_name: sync-server
    restart: always
    # Note: user and group ID 1000 may have to be adjusted to match your user IDs:
    user: "${UID:-1000}:${GID:-1000}"
    ports:
      - "9980:9980"
      - "9999:9999"
    volumes:
      # Expose "current" (this) directory as "/data" in the container.
      - .:/data
    command:
      - --model
      - ./objectbox-model.json
      - --admin-bind
      - 0.0.0.0:9980
      - --mongo-url
      - ${MONGO_URL}
      - --mongo-db
      - ${MONGO_DB}
      - --debug
  4. Create a .env file inside the current directory and add the following variables:
MONGO_URL=<MongoDB connection string> 
MONGO_DB=<databaseName>
  5. Run the server. This command starts the services/containers described in the docker-compose.yml file:
docker compose up
  6. The output should look something like this:
 ✔ Network server-mongo-compose_default 
✔ Container sync-server

After the start, the following services/ports are available:

  • ObjectBox Admin UI at port 9980—open http://localhost:9980/ to access the Admin UI (and activate the trial)

ObjectBox Admin UI

  • ObjectBox Sync Server at port 9999—the target for Sync clients

ObjectBox Sync Server at port 9999

  7. Open the Admin UI at http://localhost:9980/ and start the trial.

ObjectBox trial at http://localhost:9980/

  1. Open the "MongoDB Connector" section and check that the connector is running.

  9. To start the synchronization with the MongoDB database, you need to perform the initial sync. For this, open the "Full Sync" section and click "Full Sync from MongoDB."

  10. After the initial sync completes, go to the "Data" section and check that the data from your MongoDB database is present there.

3. Prepare the Java client app

The task manager app is available in different languages (Java and Swift) and can be used as the client. To run the Java example app, we need to satisfy a few prerequisites.

  1. Install the Java Development Kit. Any Java LTS release is supported (for example, Java 17 or Java 21). On macOS, you can use Homebrew to install it. Simply run:
brew install openjdk@17 
  2. Install Gradle. On macOS, you can use Homebrew to install it. Run:
brew install gradle
  3. Clone the objectbox-sync-examples repository (if you haven't already) and navigate to the directory of the Java client app.
git clone git@github.com:objectbox/objectbox-sync-examples.git
cd objectbox-sync-examples/tasks/client-java
  4. Build and run the project.
./gradlew build
./gradlew run
  5. You should see a message similar to the following, listing the available commands. This is a CLI to-do list app: you can add, remove, update, and list tasks from the command line. These tasks are written to the local ObjectBox database and pushed to the ObjectBox Sync Server and the MongoDB database.
> Task :java-main-sync:run
Welcome to the ObjectBox tasks-list app example
Available commands are: 
    ls [-a]        list tasks - unfinished or all (-a flag) 
    new task_text  create a new task with the text 'task_text' 
    done ID        mark task with the given ID as done 
    rm ID          delete task with the given ID 
    exit           close the program 
    help           display this help
  6. To see all data synchronized from the MongoDB database, run the command ls -a.
<==<==========---> 80% EXECUTING [25s]
ID    Text                      Date Created         Date Finished        
5     taskFromMongoUserSpecific 27-06-2025 02:00:00  27-06-2025 02:00:00  
6     taskFromMongoUserSpecific 1 27-06-2025 02:00:00  27-06-2025 02:00:00  
7     task2                     01-09-2025 14:57:15  Task not completed yet 
<==========---> 80% EXECUTING [37s]
> :app:run
  7. The data model file is generated by the Sync client at client-java/app/objectbox-models/default.json.

IMPORTANT
The models must be identical (including UIDs) across server and clients. Otherwise, sync can fail or behave unpredictably. If you want to start the client from scratch, you need to remove the database folder tasks/client-java/app/tasks-synced.

rm -rf tasks/client-java/app/tasks-synced

Congrats! You now have:

  • A MongoDB Atlas cluster acting as the cloud source of truth.
  • An ObjectBox Sync Server bridging devices and MongoDB via the MongoDB Sync Connector.
  • A Java client with local persistence in sync in real time.

Now, simply swap in your own entities, deploy the Sync Server where you need it, and you’re production-ready for offline-first at scale.

Further reading

If you want to go deeper into ObjectBox Sync, offline-first architectures, and hybrid AI use cases, here are some good next reads:

I shipped an 800-star Markdown editor without knowing Rust

2026-01-22 01:19:42

Sometime late last year I opened Notepad on Windows 11 and watched it use 100MB of RAM.

Notepad. The app that used to be 200KB.

Microsoft had added Copilot integration, some half-baked markdown preview, and turned a text editor into another bloated thing. I was using some random free website just to read .md files at that point.

I'd been building stuff with AI for about two years. I'd even shipped Rust projects before using this approach. But I don't actually know Rust. I don't know any programming languages. I can't read code line by line and tell you what it does.

So I built another one.

Ferrite now has 800+ GitHub stars. Every line of its 15,000+ lines of Rust code was generated by AI. Here's the real story of how that went.

The Setup

I use Claude through Cursor for the actual coding. For task management and keeping context between sessions, Task Master handles PRD parsing and task generation. When the AI needs current documentation for libraries like egui, it pulls it through Context7.

The short version of the workflow:

  1. I research what to build using Perplexity and other models for tech decisions
  2. A high-end model generates the PRD based on my direction, then other models evaluate it
  3. Task Master breaks the PRD into structured tasks
  4. Claude implements each task in a fresh chat session
  5. I test the results, describe what breaks
  6. Iterate until it works

Each task gets a handover document that gives the AI exactly the context it needs. Fresh chat, paste the handover, work until done, update the handover for the next task. The full workflow documentation is public if you want the details.

What Actually Works Well

The boring stuff. Setting up a Rust project, file I/O, config handling, window management. AI is great at "do the standard thing."

Bug fixes. Describing a bug and getting a fix is often faster than traditional debugging. "The scroll position resets when I switch tabs" turns into working code in one prompt.

Feature implementation. I describe what I want and it generates code that integrates with the existing architecture. Most of the time.

What Breaks

Edge cases. The Mermaid diagram rendering is a good example. We support flowcharts, sequence diagrams, class diagrams, state diagrams, and several others. Flowcharts are the most complete right now. The 0.2.5 release made the basics much more reliable, but there's still a lot of work to do before all those diagram types actually work well. That's on the roadmap for 0.2.6 and 0.3.0.

AI generates the happy path. The weird inputs, unusual syntax, nested edge cases? Those break.

Performance. I had to guide optimization heavily. We went from 250MB idle to around 72MB, but I had to identify what to optimize. The AI can't look at the memory profiler.

Architecture. "Should this be a separate module?" AI gives answers. They're not always good ones. I had to learn enough about the codebase to evaluate whether what it suggested made sense.

The Transparency Thing

I didn't put "built with AI" in the README from day one. Honestly, I didn't think about it.

When the project started getting attention somewhere around version 0.2.0, someone pointed it out. Fair point. I added it right away.

If you're using the app, you should know how it was made. If you're contributing, you should know what you're working with. The code has inconsistent patterns in places, probably bugs I haven't found yet. A senior Rust developer would find things to cringe at. But it works.

The Actual Product

Here's what Ferrite does now:

Native Mermaid rendering. This is the main thing. Diagrams render directly in the preview without JavaScript or external services. Flowcharts are solid. Other diagram types are getting there.

Performance that makes sense. About 72MB of RAM idle. Fast startup. The kind of performance you'd expect from a native app instead of a web browser pretending to be one.

Split view editing. Raw markdown on the left, rendered preview on the right. Both sides are editable.

The rest. Syntax highlighting for 40+ languages, Git integration, session persistence, keyboard shortcut customization, multi-encoding support. Chinese translation is 100% complete thanks to contributors.

I never actually tried Obsidian or Typora before building this. But from what I can tell looking at benchmarks and user reports, Ferrite holds up well against them on performance. And it's free and open source.

Would I Do This Again?

Yes.

I don't know programming languages. That hasn't changed. But the app exists. People use it. It does what I needed it to do.

For a side project where "works" matters more than "perfect," this was the right call.

Links:

All the prompts, handover documents, and development history are public. Fork it, learn from it, tell me what I got wrong.

The Mermaid renderer is planned for extraction as a standalone Rust crate. If that interests you, follow the project.

How to Connect a React App to an AWS Lambda Function URL.

2026-01-22 01:19:32

I'm building a financial agent on AWS, and for the last few days, my frontend and backend have been living in separate worlds. My Python Lambda function did the work, and my React dashboard just looked pretty with fake data. Today, I connected them.

The Backend Change

First, I had to make sure my Lambda function actually returned data usable by a browser. This meant returning a JSON object with CORS headers.

Python

The critical return statement:

return {
    'statusCode': 200,
    'headers': {
        "Access-Control-Allow-Origin": "*",  # Allow browser access
        "Content-Type": "application/json"
    },
    'body': json.dumps({
        "data": {
            "transactions": saved_items,
            "ai_analysis": ai_analysis
        }
    })
}
The Frontend Logic

I used Vite for my React setup. The key was to replace my hardcoded data arrays with a fetch call to the unique Function URL provided by AWS.

JavaScript
useEffect(() => {
  const fetchData = async () => {
    const response = await fetch("YOUR_LAMBDA_URL_HERE");
    const result = await response.json();
    setTransactions(result.data.transactions);
  };
  fetchData();
}, []);

Now, every time the dashboard loads, it triggers a live financial audit in the cloud. No more fake numbers.

[CoreData] Manipulating data in a database

2026-01-22 01:16:48

Initializing the Core Data Stack

In the project's root directory (in our case DatabasePoC), create the CoreDataManager class and add the following implementation:

import CoreData

class CoreDataManager {
  static let instance = CoreDataManager()
  private let container: NSPersistentContainer

  private init() {
    // Build the instance with the name of our data model
    container = NSPersistentContainer(name: "CompanyModel")

    // Load the persistent stores.
    container.loadPersistentStores { description, error in
      if let error {
        print("Error loading stores: \(error.localizedDescription)")
      }
    }
  }

  // A context is required to manipulate the managed object instances
  func getViewContext() -> NSManagedObjectContext {
    container.viewContext
  }

  func getBackgroundContext() -> NSManagedObjectContext {
    container.newBackgroundContext()
  }
}

Here we create an NSPersistentContainer instance with the name of the data model and expose a context for manipulating the database. In addition, the singleton pattern gives us a single shared reference to this instance.

Adding items to the database

Create an instance of the NSManagedObject (e.g. DBEmployee) and assign values for its attributes and, optionally, its relationships. Then ask the context to save.

let department = DBDepartment(context: context)
department.name = "Recursos Humanos"
let employee = DBEmployee(context: context)
employee.name = "Juan Pérez"
employee.age = 30
employee.position = "Ingeniero"
// Core Data updates the inverse relationship automatically
employee.department = department
do {
  try context.save()
} catch {
  print("Error al guardar empleado: \(error)")
}

Deleting items from the database

Fetch the NSManagedObject instance (e.g. DBEmployee), ask the context to delete it, and then ask the context to save.

let request: NSFetchRequest<DBEmployee> = DBEmployee.fetchRequest()
request.predicate = NSPredicate(format: "name == %@", "Juan Pérez")
do {
  if let employee = try context.fetch(request).first {
    context.delete(employee)
    try context.save()
  }
} catch {
  print("Error: \(error)")
}

Listing items from the database

Fetch all instances of the NSManagedObject (e.g. DBEmployee) by building a fetch request for that entity and asking the context to retrieve the matching entries.

let request: NSFetchRequest<DBEmployee> = DBEmployee.fetchRequest()
request.predicate = NSPredicate(format: "name == %@", "Juan")
request.sortDescriptors = [NSSortDescriptor(key: "age", ascending: true)]
do {
  let employees = try context.fetch(request)
} catch {
  print("Error: \(error)")
}

You can apply filters with NSPredicate and combine several sort criteria at once with NSSortDescriptor. If no sort descriptor is applied, there is no defined order; in other words, DO NOT ASSUME ordering by creation date.

Updating items in the database

Fetch the NSManagedObject instance (e.g. DBEmployee), change its attributes, and then ask the context to save.

let request: NSFetchRequest<DBEmployee> = DBEmployee.fetchRequest()
request.predicate = NSPredicate(format: "name == %@", "Juan Pérez")
do {
  if let employee = try context.fetch(request).first {
    employee.name = "John Doe"
    try context.save()
  }
} catch {
  print("Error: \(error)")
}