
How to Build and Test iOS Apps on a Physical Phone: Expo EAS and Apple TestFlight (Part 2/3)

2026-02-16 08:52:45

In Part 1, we checked the React Native code into GitHub. Now we will build the binary with Expo EAS (Expo Application Services).

Production (store-signed) binary

eas build --platform ios --profile production

When prompted, let EAS manage:

  • certificates
  • provisioning profiles
  • signing

This produces an App Store–signed IPA.
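The build profile lives in eas.json at the project root. A minimal sketch might look like this (the exact fields depend on your project; autoIncrement is optional and matches the buildNumber bump shown in the log below):

{
  "cli": {
    "version": ">= 5.0.0"
  },
  "build": {
    "production": {
      "autoIncrement": true
    }
  }
}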

Resolved "production" environment for the build. Learn more: https://docs.expo.dev/eas/environment-variables/#setting-the-environment-for-your-builds
....

✔ Incremented buildNumber from 3 to 4.

✔ Using remote iOS credentials (Expo server)

If you provide your Apple account credentials we will be able to generate all necessary build credentials and fully validate them.
...
✔ Do you want to log in to your Apple account? … yes

› Log in to your Apple Developer account to continue
✔ Apple ID: … [email protected]
› Restoring session /Users/cathy/.app-store/auth/[email protected]/cookie
› Session expired Local session
› Using password for [email protected] from your local Keychain
  Learn more: https://docs.expo.dev/distribution/security#keychain
✔ Logged in New session
› Team Cathy Lai (XXXXXX)
› Provider Cathy Lai (xxxxxxxx)
✔ Bundle identifier registered com.cathyapp1234.oauthpro2
✔ Synced capabilities: No updates
✔ Synced capability identifiers: No updates
✔ Fetched Apple distribution certificates
✔ Fetched Apple provisioning profiles

Note again that the bundle identifier must be unique across the App Store.

Project Credentials Configuration

Project                   @cathyapp1234/oauth-pro2
Bundle Identifier         com.cathyapp1234.oauthpro2

The distribution certificate and provisioning profile are auto-generated. The provisioning profile is the "permission slip" from Apple that allows the binary to run on specific phones.

In the Apple ecosystem, we can’t just drag and drop an app file onto an iPhone like we can with a .exe on a PC. Apple requires a strict chain of trust to ensure that the app is legitimate, created by a verified developer, and running on an authorized device.

App Store Configuration   

Distribution Certificate  
Serial Number             XXXXXXXDA97EA34FFC3B28C8BA6C44
Expiration Date           Tue, 04 Aug 2026 05:10:17 GMT+1200
Apple Team                XXXXXX (Cathy Lai (Individual))
Updated                   6 months ago

Provisioning Profile      
Developer Portal ID       XxXXXXXXXX
Status                    active
Expiration                Tue, 04 Aug 2026 05:10:17 GMT+1200
Apple Team                XXXXXXXXXX (Cathy Lai (Individual))
Updated                   17 days ago

All credentials are ready to build @cathyapp1234/oauth-pro2 (com.cathyapp1234.oauthpro2)

Compressing project files and uploading to EAS Build. Learn more: https://expo.fyi/eas-build-archive
✔ Uploaded to EAS 1s
✔ Computed project fingerprint

See logs: https://expo.dev/accounts/cathyapp1234/projects/oauth-pro2/builds/xxxxxxx

Waiting for build to complete. You can press Ctrl+C to exit.
  Build queued...

Waiting in priority queue
|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■| 

✔ Build finished
🍏 iOS app:
https://expo.dev/artifacts/eas/xxxxxxxxx.ipa

Expo will auto-create the app in App Store Connect

  • App record created
  • Bundle ID registered
  • Build uploaded
  • Appears in TestFlight
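If the build does not show up in App Store Connect automatically, you can push it with EAS Submit. A minimal invocation (the --latest flag selects the most recent finished build):

eas submit --platform ios --latest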

Next

We will add some testers (via email) so they are notified and receive a TestFlight link to access our app.

Please view this video for a walkthrough of the whole process.

From Messy Med-Reports to Smart Insights: Building a FHIR-Powered Medical RAG with Milvus

2026-02-16 08:50:00

We've all been there: staring at a stack of PDF medical reports, trying to remember if that one blood test result from three years ago was "normal" or "borderline." In the age of AI, why is our personal health data still trapped in static documents?

Today, we are building a Medical-Grade Personal Knowledge Base. We aren't just doing basic PDF parsing; we are implementing a sophisticated Medical RAG (Retrieval-Augmented Generation) pipeline. By combining the FHIR standard for clinical data interoperability, Milvus for high-performance vector search, and BGE Embeddings for semantic precision, we will transform scattered PDFs into a searchable, time-aware health history.

The Architecture: From Pixels to Structured Knowledge

Handling medical data requires more than just a Ctrl+F search. We need to normalize data so the AI understands that "High BP" and "Hypertension" refer to the same clinical concept.

Here is how the data flows from a raw PDF to a queryable insight:

graph TD
    A[Raw PDF Reports] --> B[Unstructured.io Partitioning]
    B --> C{LLM Extraction}
    C -->|Map to Standard| D[FHIR JSON Resources]
    D --> E[BGE Embeddings Model]
    E --> F[(Milvus Vector Database)]
    G[User Query: 'How has my glucose changed?'] --> H[Query Embedding]
    H --> I[Milvus Semantic Search]
    I --> J[Contextual Augmented Response]
    F --> I

Prerequisites

To follow this tutorial, you'll need:

  • Python 3.9+
  • Unstructured.io: For heavy-duty PDF partitioning.
  • Milvus: Our vector powerhouse (using Milvus Lite for easy setup).
  • BGE Embeddings: Specifically BAAI/bge-small-en-v1.5 for a great balance of speed and accuracy.
  • FHIR (Fast Healthcare Interoperability Resources): The industry standard for health data.

Step 1: Parsing PDF with Unstructured.io

Medical reports are notoriously messy—tables, headers, and footnotes everywhere. We use unstructured to extract clean text while maintaining document hierarchy.

from unstructured.partition.pdf import partition_pdf

# Partitioning the PDF into manageable elements
elements = partition_pdf(
    filename="blood_test_report_2023.pdf",
    infer_table_structure=True,
    strategy="hi_res"
)

# Join the text for processing
raw_text = "\n\n".join([str(el) for el in elements])
print(f"Extracted {len(elements)} elements from the document.")

Step 2: Normalizing to FHIR Standard

Standardizing data is the "secret sauce" of medical AI. Using the FHIR standard ensures that our RAG system understands clinical context. Instead of storing "Blood Sugar: 110", we store an Observation resource.

import json

# A simplified FHIR Observation schema
fhir_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org", "code": "2339-0", "display": "Glucose"}]
    },
    "valueQuantity": {
        "value": 110,
        "unit": "mg/dL"
    },
    "effectiveDateTime": "2023-10-27T10:00:00Z"
}

# In a real app, an LLM would map the 'raw_text' to this JSON structure
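As a rough illustration of that mapping step, here is a minimal sketch using the OpenAI chat completions API; the model name and prompt are assumptions, and any LLM that can return structured JSON would work:

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask the LLM to map the raw report text (from Step 1) to a FHIR Observation
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice, not a recommendation
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": "Map lab results in the user's text to a FHIR "
                       "Observation resource. Respond with JSON only.",
        },
        {"role": "user", "content": raw_text},
    ],
)
fhir_observation = json.loads(response.choices[0].message.content)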

Step 3: Vectorizing with BGE & Storing in Milvus

Now we need to make this data searchable. We use BGE Embeddings to turn our FHIR resources into vectors and store them in Milvus.

from pymilvus import MilvusClient
from sentence_transformers import SentenceTransformer

# Initialize BGE Embeddings
model = SentenceTransformer('BAAI/bge-small-en-v1.5')
client = MilvusClient("health_history.db")

# Create a collection in Milvus
client.create_collection(
    collection_name="medical_records",
    dimension=384  # Dimension for bge-small
)

# Convert FHIR to string and embed
data_str = json.dumps(fhir_observation)
vector = model.encode(data_str)

# Insert into Milvus
client.insert(
    collection_name="medical_records",
    data=[{"id": 1, "vector": vector, "text": data_str}]
)

Advanced Patterns & Production Readiness

While this setup works for a personal project, building a production-grade medical system involves HIPAA compliance, complex FHIR terminology mapping (SNOMED CT, LOINC), and handling multi-modal data (like X-rays).

For deeper dives into advanced medical data patterns and building more robust healthcare AI, I highly recommend checking out the technical deep-dives at WellAlly Blog. They cover production-ready RAG architectures that are specifically tuned for data privacy and clinical accuracy.

Step 4: Semantic Retrieval

Finally, we can query our knowledge base. Instead of keyword matching, we search for the meaning of the query.

query = "What were my last blood sugar readings?"
query_vector = model.encode(query)

results = client.search(
    collection_name="medical_records",
    data=[query_vector],
    limit=3,
    output_fields=["text"]
)

for res in results[0]:
    print(f"Found Record: {res['entity']['text']}")

Conclusion: The Future of Personal Health

By combining FHIR, Milvus, and BGE, we've moved from "dumb" PDFs to a structured, semantically searchable medical knowledge base. This is the foundation for an AI health assistant that can truly track your longitudinal health history.

What's next?

  1. Multi-document tracking: Compare results across different years (see the sketch after this list).
  2. Agentic RAG: Let the AI suggest when you should schedule your next check-up based on the data.
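For item 1, Milvus can combine vector search with scalar filtering. A minimal sketch, assuming each record was inserted with an extra year field (that field is an assumption, not part of the code above):

# Assumes inserts included a dynamic "year" field, e.g.
# client.insert(..., data=[{"id": 1, "vector": vector, "text": data_str, "year": 2023}])
results = client.search(
    collection_name="medical_records",
    data=[model.encode("blood sugar readings")],
    filter="year >= 2022",  # scalar filter on the dynamic field
    limit=5,
    output_fields=["text", "year"],
)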

Are you building something in the Medical AI space? Drop a comment below or share your thoughts on health data privacy!

If you found this tutorial helpful, don't forget to ❤️ and 🦄! For more high-level architectural patterns, visit the WellAlly Blog.

The "Zero-Friction" Hires Guide: Transforming Field Onboarding with ChatGPT + TaskTag

2026-02-16 08:43:34

Why Construction Teams Are Rethinking Onboarding

Bringing a new technician or crew member onto a job site is usually chaotic. Paperwork gets lost. Safety SOPs are skimmed—or skipped. And new hires often ask, “Who am I supposed to report to?”

In fast-paced construction environments, slow or inconsistent onboarding leads to lost productivity, safety risks, and project delays.

That’s why forward-thinking operations managers are combining AI-powered onboarding with real-time task execution using ChatGPT + TaskTag to streamline the entire process.

This isn’t just about going paperless. It’s about building a scalable, standardized onboarding workflow that gets boots on the ground faster with fewer mistakes.

Phase 1: Use ChatGPT to Generate Role-Specific Onboarding Checklists

Why Use AI for Construction Onboarding?

Each role on a job site has different compliance needs, safety protocols, and reporting requirements. Writing these documents from scratch for every new hire is inefficient and inconsistent.

ChatGPT enables you to quickly generate custom onboarding checklists for any trade or role, ensuring that every crew member receives the same high-standard onboarding experience.

Sample Prompts to Get Started

General Safety SOP Prompt

“Act as a Construction Operations Manager. Create a comprehensive onboarding checklist for a new general laborer entering a commercial job site. Include safety gear requirements, site access protocols, and initial reporting steps.”

Role-Specific Prompt

“Create a specific 3-day onboarding schedule for a Master Electrician. Focus on reviewing schematics, tool verification, and introduction to the site foreman.”

Welcome Message Prompt

“Write a professional welcome message for a new hire named [Name]. Include instructions to download the TaskTag app and join the project group chat.”

Using AI ensures standardization across crews and job sites—whether they’re in New York or Nevada.

Phase 2: Execute the Onboarding with TaskTag

Where Does the AI Content Live?

Printing onboarding documents means they get lost. Emailing them means they get ignored. TaskTag brings your onboarding checklists to life as interactive, trackable tasks.

Step 1: Create a Dedicated Onboarding Project

Treat every new hire like a mini-project. In TaskTag:

  • Create a Project: "Onboarding - Alex R."
  • Add the new hire and their supervisor to the project
  • Use group chat and task tracking in one place

Step 2: Turn AI Checklists Into Actionable Tasks

Take the content you created with ChatGPT and convert each item into a task with a checklist and file attachments.

Example Task:

  • Task: Review Site Safety SOP
  • Checklist: Paste steps from ChatGPT
  • Attachment: Upload the site-specific safety manual PDF
  • Assigned To: New hire

Step 3: Assign and Track Field Onboarding Tasks

TaskTag is built for real-world use. New hires don’t need training to use it.

They simply:

  • Read the task
  • Complete the checklist
  • Upload a photo of their signed waiver
  • Type “#Safety-Review Completed” in the chat

You get notified immediately.

Step 4: Capture Proof of Completion

For compliance, documentation is critical. When a new hire uploads proof—such as a signed safety form or PPE checklist—you capture a permanent digital audit trail.

No more chasing papers. No more wondering what’s been completed.

Why This Workflow Matters for Project Management

In traditional construction project management, poor onboarding is a major risk—both for safety and schedule.

By combining ChatGPT and TaskTag, you’re automating the "Initiation" phase of workforce deployment while increasing accountability and reducing admin overhead.

Key Benefits

  • Reduce Admin Time: Eliminate repetitive onboarding steps
  • Higher Compliance: Standardized checklists ensure nothing is skipped
  • Real-Time Visibility: Know when onboarding is complete
  • Better Safety Outcomes: Ensure all new hires complete required steps
  • Repeatable Process: Deploy across every site and trade

The Zero-Friction Onboarding Workflow

  1. Use ChatGPT to generate onboarding checklists and SOPs
  2. Create a dedicated project in TaskTag for each new hire
  3. Paste AI content into tasks with manual checklists
  4. Assign tasks to the new hire and supervisor
  5. Track progress as they complete tasks directly in chat
  6. Capture documentation for audits and compliance

Related: How Construction Teams Use TaskTag for Project Scheduling and Communication

Ready to Eliminate Onboarding Chaos?

Ditch the clipboard. Drop the email chains. Use the system built for the field.

Start using AI + TaskTag to onboard your next crew member in under 30 minutes.

👉 Download TaskTag and run your first AI-powered onboarding project today.

What Operations Managers Are Saying

“We onboarded a new crew in hours, not days. ChatGPT wrote the SOPs, TaskTag tracked it all in real time.”
— Carl M., Site Supervisor

“This is how construction onboarding should work. One system, zero confusion.”
— Renee L., Project Operations Manager

“We used to lose forms and miss safety checks. Now it’s all automated and audit-ready.”
— Travis G., HR & Compliance Lead

Qwen-Image-2.0: Generate 2K Images with Native Text Rendering

2026-02-16 08:42:42

If you're looking for an AI image generator that delivers high-resolution 2K images with exceptional text rendering capabilities, Qwen-Image-2.0 is the tool you need. This advanced AI model from Alibaba's Qwen team represents a significant leap forward in visual content generation.

Qwen-Image-2.0 AI Image Generator

What Makes Qwen-Image-2.0 Special?

Qwen-Image-2.0 stands out from other AI image generators with several groundbreaking features:

Native Text Rendering

One of the biggest challenges in AI image generation has always been rendering readable text within images. Qwen-Image-2.0 solves this problem with native text rendering capabilities, allowing you to create images with clear, legible text that looks professionally designed.

2K Resolution Output

Unlike many AI generators that produce low-resolution images, Qwen-Image-2.0 generates stunning 2K resolution images that are suitable for professional use cases including:

  • Marketing materials
  • Social media graphics
  • Presentation slides
  • Infographics
  • Posters and banners

Multi-Style Support

The model supports diverse visual styles:

  • Photorealistic images - Perfect for product mockups and realistic scenes
  • Artistic illustrations - Great for creative projects and digital art
  • Infographic designs - Ideal for data visualization
  • Slide-style layouts - Professional presentation graphics

Use Cases for Developers and Designers

Content Marketing

Content creators can leverage Qwen-Image-2.0 to produce high-quality visuals without expensive design tools or hiring graphic designers.

Rapid Prototyping

Developers building apps or websites can quickly generate placeholder images or even final assets for their projects.

Social Media Management

Generate eye-catching social media posts with embedded text, calls-to-action, and branded content.

Getting Started

Try Qwen-Image-2.0 today and experience the next generation of AI image generation. Whether you're a developer, designer, or content creator, this tool can streamline your workflow and elevate your visual content.

Have you tried Qwen-Image-2.0? Share your experiences in the comments below!

Your Telegram Bot's Voice Messages Are Missing Speed Control. Here's the Fix.

2026-02-16 08:31:32

If your Telegram bot sends voice messages using TTS, you've probably noticed something missing: the speed control button.

No 1.5x. No 2x. Just plain audio that plays at one speed.

The problem is the audio format.

MP3 doesn't cut it

Most TTS providers output MP3 files. When you send these via Telegram's sendVoice API, they technically work. They play. But Telegram doesn't treat them as proper voice messages.

You get:

  • No waveform visualization
  • No speed control (0.5x/1x/1.5x/2x)
  • Just a basic audio player

This matters if your bot sends briefings, summaries, or long-form content. A 2-minute message at 2x speed takes 1 minute. Over time, that's real savings.

The fix

Convert your MP3 to OGG Opus before sending:

ffmpeg -i input.mp3 \
  -c:a libopus \
  -b:a 48k \
  -vbr on \
  -compression_level 10 \
  -frame_duration 60 \
  -application voip \
  output.ogg

Send the .ogg file via sendVoice. Telegram now recognizes it as a voice message. Speed control buttons appear.
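For completeness, here is a minimal upload sketch against the Bot API (the token and chat ID are placeholders):

import requests

BOT_TOKEN = "123456:ABC-DEF"  # placeholder
CHAT_ID = "987654321"         # placeholder

# sendVoice expects the OGG Opus file in the "voice" field
with open("output.ogg", "rb") as voice:
    resp = requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendVoice",
        data={"chat_id": CHAT_ID},
        files={"voice": voice},
    )
resp.raise_for_status()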

Why this works

Telegram's voice message system is built for OGG Opus. The Bot API docs mention this:

"For sendVoice to work, your audio must be in an .ogg file encoded with OPUS."

But they don't emphasize it. MP3 files still work, so many developers never notice they're missing features.

The ffmpeg flags matter:

  • -c:a libopus — Use the Opus codec
  • -b:a 48k — 48kbps bitrate (good for voice)
  • -vbr on — Variable bitrate
  • -compression_level 10 — Maximum compression
  • -frame_duration 60 — 60ms frames (faster playback start)
  • -application voip — Optimize for speech, not music

That last one (-application voip) tells Opus to prioritize speech clarity.

Implementation

If you control the TTS pipeline, add the conversion step after generation:

# Generate TTS (example)
edge-tts --text "Your message" --write-media output.mp3

# Convert to OGG Opus
ffmpeg -i output.mp3 -c:a libopus -b:a 48k -vbr on \
  -compression_level 10 -frame_duration 60 \
  -application voip output.ogg

# Send via Telegram using output.ogg

Or batch-convert existing files:

for mp3 in *.mp3; do
  ffmpeg -i "$mp3" -c:a libopus -b:a 48k -vbr on \
    -compression_level 10 -frame_duration 60 \
    -application voip "${mp3%.mp3}.ogg"
done

What the docs don't tell you

The API docs mention OGG Opus as a requirement, but don't explain what happens if you skip it. MP3 still works, so it seems fine. Until you notice your voice messages look different from native Telegram ones.

This affects any bot sending TTS audio: Google TTS, Azure Speech, ElevenLabs, OpenAI. If it outputs MP3, you'll hit this.

One ffmpeg command. Proper voice messages with speed control.

Want more OpenClaw tips? Check out the OpenClaw Lab for research notes on autonomous agents, cron jobs, voice integration, and more.

Developer CI Dev Stage Production

2026-02-16 08:29:33

In this post, the Dev, Stage, and Production environments are explained:

  • Why each exists
  • What happens in each
  • Who is responsible
  • How Argo CD handles it

🔷 STEP 0 — First Understand the Problem

If we deploy directly to production:

Developer pushes → users see bugs

That is dangerous.

So DevOps creates safe layers.

Think of it like a hospital:

You don’t send a medicine directly to patients.

You test it first.

🔷 ENVIRONMENT 1 — DEV

What is DEV?

Dev = playground.

Purpose:

  • Test if image builds
  • Test if container runs
  • Test if Helm works
  • Test if the service is reachable

This environment is allowed to break.

How GitOps Works in DEV

You create:

manual-app-helm/
    values-dev.yaml

Argo CD Application:

manual-app-dev

Watching:

  • branch: main
  • values-dev.yaml

When Jenkins updates the image tag →
Argo CD auto-deploys to DEV.
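A minimal sketch of that Application manifest (the repo URL is illustrative; the field names follow the Argo CD v1alpha1 API):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: manual-app-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/manual-app-helm.git  # illustrative
    targetRevision: main
    path: .
    helm:
      valueFiles:
        - values-dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true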

Who Uses DEV?

  • Developers
  • DevOps

They check:

  • Does app start?
  • Is port correct?
  • Any crash?
  • Logs OK?
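Those checks might translate into commands like these (resource names are assumptions based on the chart above):

kubectl -n dev get pods
kubectl -n dev logs deploy/manual-app --tail=50
kubectl -n dev port-forward svc/manual-app 8080:80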

🔷 ENVIRONMENT 2 — STAGE

What is STAGE?

Stage = rehearsal before production.

It should look like production.

Same:

  • Node size
  • Resources
  • Ingress
  • TLS
  • Scaling

How It Works

You create:

values-stage.yaml

Argo CD Application:

manual-app-stage

Watching:

  • branch: release OR
  • same branch but different values file

What Happens Here?

  • QA team tests
  • Integration tests
  • API tests
  • Performance tests
  • Security scans

If everything passes → ready for production.

🔷 ENVIRONMENT 3 — PRODUCTION

What is Production?

Real users.
Real traffic.
Real money.

This environment must:

  • Be stable
  • Be secure
  • Be approved

How GitOps Works in Production

Production deployment should NOT be automatic from main branch.

Common strategies:

Strategy 1 — Release Branch

main → dev
release → stage
tag v1.0 → production

Only approved code gets merged to release.
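Cutting a production release under this strategy might look like this (branch and tag names follow the scheme above):

git checkout release
git tag v1.0
git push origin v1.0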

Strategy 2 — Manual Promotion

You manually change image tag in:

values-prod.yaml

And push commit.

Argo CD deploys.

This creates an audit trail.
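A manual promotion commit might look like this (a sketch assuming the chart reads image.tag and yq v4 is installed):

yq -i '.image.tag = "15"' values-prod.yaml
git commit -am "Promote image 15 to production"
git push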

🔥 Why This Is Important

Because CI only proves:

✔ Code compiles
✔ Docker builds

But it does NOT prove:

✔ Application logic works
✔ Database integration works
✔ External APIs work
✔ Performance is acceptable

That is why environments exist.

🔷 Real GitOps Multi-Env Structure

Your Helm repo could look like this:

manual-app-helm/
   charts/
   values-dev.yaml
   values-stage.yaml
   values-prod.yaml

Argo CD apps:

manual-app-dev
manual-app-stage
manual-app-prod

Each points to:

Same repo
Different values file
Different namespace

Example:

namespace: dev
namespace: stage
namespace: prod

🔷 Deployment Flow Explained for Beginners

Let’s imagine version 15.

Step 1 — Developer pushes code

CI runs:

docker build
docker push

Tag = 15

Step 2 — Jenkins updates values-dev.yaml

Now the dev environment runs image 15.
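The updated file might look like this (the repository name and key path are illustrative; they depend on your chart):

# values-dev.yaml after the Jenkins commit
image:
  repository: registry.example.com/manual-app
  tag: "15"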

Developers test.

Step 3 — Promote to Stage

DevOps changes:

values-stage.yaml → tag 15

Argo CD deploys to stage.

QA tests.

Step 4 — Promote to Production

After approval:

values-prod.yaml → tag 15

Argo CD deploys.

Users see new version.

🔥 Key DevOps Principle

We do not rebuild image for each environment.

We promote the SAME image.

This guarantees:

What was tested = what goes to production

Very important concept.

🔷 Interview-Level Explanation

If asked:

Why separate environments?

Answer:

Because CI validates build integrity, but environments validate runtime behavior progressively to reduce production risk.

Simple Analogy

Dev = kitchen test
Stage = restaurant rehearsal
Prod = customers eating

You never experiment directly with customers.