
Chip Design Breakthrough: Predicting Performance Before Layout

2025-12-02 09:02:06

Tired of lengthy design cycles and performance surprises late in the game? Imagine knowing your chip's power consumption and speed before committing to the physical layout. That's the promise of a new machine learning approach that's revolutionizing how we design integrated circuits.

The core concept is to build a predictive model that learns from the early stages of design, specifically the netlist. This model is then fine-tuned to estimate parasitic effects – the unwanted capacitances and resistances that arise from the physical layout – and predict final performance metrics like timing and power. Think of it like predicting the taste of a cake based on the recipe (netlist) while accounting for how your oven (layout tools) might affect the final result (parasitics).

This approach uses transfer learning in a clever way. First, the model is trained on smaller, simpler designs to learn the general relationships between the netlist and parasitics. Then, it's fine-tuned with data from larger, more complex designs to account for the unique challenges they present.
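
Here is a minimal sketch of that two-stage recipe in Python with PyTorch. Everything in it is illustrative: the feature count, network shape, and random tensors are placeholders for real netlist features and parasitic labels, not the paper's actual model.

import torch
import torch.nn as nn

# Hypothetical parasitic predictor: maps netlist-derived features
# (fan-out, estimated wirelength, etc.) to a parasitic estimate.
class ParasiticPredictor(nn.Module):
    def __init__(self, n_features=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.backbone(x))

def train(model, params, x, y, steps, lr):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model = ParasiticPredictor()

# Stage 1: pretrain on plentiful data from small, simple designs
# (random tensors stand in for real training data).
train(model, model.parameters(), torch.randn(1024, 16), torch.randn(1024, 1), steps=200, lr=1e-3)

# Stage 2: freeze the backbone and fine-tune only the head on the
# scarcer data from large, complex designs (the transfer step).
for p in model.backbone.parameters():
    p.requires_grad = False
train(model, model.head.parameters(), torch.randn(64, 16), torch.randn(64, 1), steps=50, lr=1e-4)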

Benefits for Developers:

  • Reduced Design Iterations: Catch performance bottlenecks early and avoid costly redesigns.
  • Faster Time-to-Market: Accelerate the design process by making informed decisions upfront.
  • Optimized Performance: Explore design alternatives with confidence, knowing the performance implications.
  • Improved Power Efficiency: Minimize power consumption by identifying and mitigating potential hotspots.
  • Enhanced Design Exploration: Easily evaluate the impact of different architectural choices.
  • Better Resource Allocation: Optimize resource allocation based on accurate performance predictions.

One implementation challenge lies in generating sufficient training data, especially for novel architectures. A practical tip is to leverage existing design databases, even if they aren't perfectly matched to your current design, and augment them with simulated data.

This technique opens up exciting possibilities beyond traditional chip design. Imagine using it to optimize the placement of components on a printed circuit board, or even to predict the performance of a complex software system based on its architecture.

Ultimately, this parasitic-aware prediction method promises to reshape the landscape of hardware design, enabling faster, more efficient, and more reliable development of integrated circuits.

Related Keywords: Netlist, Performance Prediction, EDA, Electronic Design Automation, Transfer Learning, Domain Adaptation, Parasitic Extraction, VLSI, Chip Design, Integrated Circuits, Machine Learning for Hardware, Deep Learning, Graph Neural Networks, Model Training, Inference, Optimization, Circuit Simulation, Hardware Acceleration, Cloud Computing, AI in Hardware, Predictive Modeling, Design Automation, Silicon Design, Semiconductor

WHAT IS A SECRET?

2025-12-02 08:52:30

A secret is any piece of sensitive information that must be protected from unauthorized access.

Examples:

  • API keys
  • Access tokens
  • Database passwords
  • Private keys (.pem)
  • Confluent Cloud credentials
  • Terraform backend credentials
  • OAuth tokens
  • SSH keys

Goal: Keep secrets encrypted at rest, encrypted in transit, and never stored in plaintext (repo, logs, artifacts).

🔥 2. WHY SECRETS MANAGEMENT IS CRITICAL

A senior DevOps engineer must prevent:

  • Credential leaks
  • Unauthorized access
  • Accidental commits to Git
  • Hardcoding in Terraform, Kubernetes, Docker, or CI/CD

Secret leaks cause:

  • Environment compromise
  • Data breaches
  • Unauthorized AWS usage costing thousands
  • Repository takeovers

This is why we use secure secret stores, not files.

🟦 3. SECRET STORAGE OPTIONS DevOps MUST know

There are 4 main secret management solutions you must understand:

Tool                    | Where Used             | Strengths                           | Weaknesses
GitHub Secrets          | GitHub CI/CD           | Easy to use, encrypted              | Not for runtime apps
AWS Secrets Manager     | Apps running on AWS    | Automatic rotation, IAM integration | Expensive at scale
AWS SSM Parameter Store | AWS Systems Manager    | Cheaper than Secrets Manager        | Rotation not native
HashiCorp Vault         | Enterprise multi-cloud | Most secure, dynamic secrets        | Complex to manage

🟩 4. GITHUB SECRETS — Used for CI/CD Only

✔ Where used:

  • GitHub Actions CI/CD pipelines

✔ What it stores:

  • AWS access key + secret key
  • Docker registry token
  • Terraform Cloud token
  • Confluent Cloud credentials
  • Any deployment API keys

✔ How it works:

  • GitHub encrypts the secret with libsodium
  • Only GitHub Actions that run in your repository can access it
  • Not available to fork PRs

✔ Security rules for senior DevOps:

  • Never store database passwords here for applications
  • Never store long-lived AWS keys (prefer OIDC)
  • Rotate keys every 90 days
  • Give repositories minimum access
  • Avoid storing complex JSON — use AWS Parameter Store instead

❌ GitHub Secrets DO NOT replace:

  • Secrets Manager
  • Vault
  • Kubernetes Secrets
  • Application runtime secrets

GitHub Secrets are ONLY for CI/CD.

🟥 5. AWS SECRETS MANAGER — Production-grade secret storage

✔ Where used:

Production microservices on AWS.

✔ Features:

  • Automatic rotation (Lambda)
  • Version history
  • Multi-account access with IAM
  • Replication across regions
  • KMS encryption (built-in)

✔ Typical use cases:

  • Store RDS master password
  • Store Confluent API secret
  • Store Stripe keys
  • Store OAuth tokens
  • Store DB credentials for ECS tasks

✔ Access via IAM:

ecsTaskExecutionRole:
  can access secret: arn:aws:secretsmanager:...

✔ Code example (ECS task environment):

{
  "name": "DB_PASSWORD",
  "valueFrom": "arn:aws:secretsmanager:us-east-2:xxx:secret:db_pass"
}

❗ When to choose Secrets Manager:

  • You need rotation
  • You need strict auditing
  • You manage cross-account apps
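
At the application level, retrieval is a single SDK call. A minimal sketch in Python with boto3, reusing the example secret name and region from above (both are assumptions):

import boto3

# Runs under the task/instance IAM role — no credentials in code.
client = boto3.client("secretsmanager", region_name="us-east-2")
resp = client.get_secret_value(SecretId="db_pass")
db_password = resp["SecretString"]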

🟨 6. AWS SSM PARAMETER STORE

(SecureString parameters)

✔ Cheaper alternative to Secrets Manager

Costs: $0 for Standard tier
(Secrets Manager costs $0.40 per secret per month)

✔ Good for Dev / QA / Non-critical secrets

✔ Use cases:

  • Microservice configs
  • Non-rotating tokens
  • S3 bucket names
  • Feature flags

❗ NOT recommended for production database passwords (no rotation).
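
Reading a SecureString parameter is just as simple. A boto3 sketch (the parameter name is hypothetical):

import boto3

ssm = boto3.client("ssm", region_name="us-east-2")
# WithDecryption=True makes SSM decrypt the SecureString via KMS.
param = ssm.get_parameter(Name="/env/prod/db_url", WithDecryption=True)
value = param["Parameter"]["Value"]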

🟪 7. HASHICORP VAULT — The most advanced system

This is enterprise-level secret management.

✔ Why Vault is used:

  • Supports AWS, GCP, Azure, Kubernetes, On-prem
  • Dynamic secrets (temporary DB creds)
  • Encryption as a service (Transit)
  • PKI certificate generation
  • Fine-grained access policies
  • Audit logs
  • Can run on-prem or as HCP Vault Cloud

✔ Dynamic secrets example:

Vault generates:

  • A PostgreSQL username/password
  • Valid for 1 hour
  • Automatically deleted afterward

Perfect for:

  • Short-lived CI/CD tasks
  • High-security environments
  • Banks, healthcare, fintech
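
From an application's point of view, requesting such a dynamic credential is one call. A sketch using the hvac Python client (the Vault address, token, and role name are assumptions):

import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="...")
# Vault issues a fresh PostgreSQL username/password, valid only for
# the lease TTL, and revokes it automatically when the lease expires.
creds = client.secrets.database.generate_credentials(name="readonly")
username = creds["data"]["username"]
password = creds["data"]["password"]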

✔ Vault is used by:

  • Uber
  • Stripe
  • Goldman Sachs
  • Netflix

🟦 8. Kubernetes Secrets (Optional but DevOps MUST know)

✔ Stored inside etcd (base64-encoded by default; enable encryption at rest with KMS in production)

✔ Used for:

  • API keys
  • DB passwords
  • TLS certs

✔ Mounted as:

  • env variables
  • files
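
Inside the pod, the application then reads them like any other environment variable or file. A small Python sketch (the variable name and mount path are hypothetical):

import os

# Secret exposed as an environment variable in the pod spec.
db_password = os.environ["DB_PASSWORD"]

# Secret mounted as a file (e.g., a TLS cert) under a volume path.
with open("/etc/secrets/tls.crt") as f:
    tls_cert = f.read()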

🟫 9. Terraform & Secrets — Senior Level Knowledge

With Terraform, NEVER store secrets in:

  • Git
  • tf files
  • modules

❗❗ Secrets MUST be passed via:

  • terraform.tfvars (locally only)
  • CI/CD environment variables
  • SSM Parameter Store
  • Secrets Manager

❌ Example of bad code (DO NOT DO):

password = "MySecret123"

✔ Good:

password = var.db_password

✔ Best:

password = data.aws_secretsmanager_secret_version.db_password.secret_string

🟩 10. How Secrets Flow in a Real CI/CD Pipeline

Example using GitHub Actions + AWS Secrets Manager:

Step 1:

Secrets stored in Secrets Manager

Step 2:

EC2, ECS, or Lambda uses an IAM role to access secrets

Step 3:

GitHub Actions stores only:

  • AWS Access Key
  • Secret Key
  • Confluent API key

Step 4:

Terraform deploys infrastructure
→ references secrets with ARN

Step 5:

Applications retrieve secrets using:

  • AWS SDK
  • IAM role permissions

🟥 11. What NOT TO DO (Senior DevOps Knowledge)

❌ Never store secrets in GitHub repository
❌ Never store secrets in Slack or Teams
❌ Never store secrets in Docker image
❌ Never store secrets in YAML files
❌ Never store secrets in Terraform state
❌ Never store secrets in code comments
❌ Never echo secrets in CI logs
❌ Never send secrets in email

If leaked → rotate immediately.
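
Rotation itself is a single API call if a rotation Lambda is already attached to the secret. A boto3 sketch (secret name carried over from the earlier example):

import boto3

sm = boto3.client("secretsmanager")
# Kick off an immediate, out-of-band rotation for a leaked secret.
sm.rotate_secret(SecretId="db_pass")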

🟦 12. Interview-Level Explanation (You can say this)

“In my pipelines, GitHub Secrets are used only for CI/CD credentials.
For application runtime secrets, I use AWS Secrets Manager or SSM Parameter Store depending on rotation requirements.
I avoid hardcoding secrets in Terraform by pulling them from the secret stores at runtime.
For enterprise multi-cloud environments, I integrate HashiCorp Vault with AWS IAM and Kubernetes service accounts for secure authentication and dynamic secrets.
All secrets are KMS-encrypted and never exposed in logs.”

This is senior-level.

🔵 SECRETS FLOW — High-Level Diagram

                        ┌──────────────────────────┐
                        │     Developer Machine     │
                        │    (Push Git Changes)     │
                        └─────────────┬────────────┘
                                      │
                                      ▼
                         ┌────────────────────────┐
                         │    GitHub Repository    │
                         └─────────────┬───────────┘
                                      │
                                      ▼
                         ┌──────────────────────────────┐
                         │     GitHub Actions Runner     │
                         │   (CI/CD Workflow Execution)  │
                         └──────────────┬────────────────┘
      SECRETS ENTER HERE FROM GITHUB →  │ 
                                      ▼
   ┌───────────────────────────────────────────────────────────────────────────────┐
   │                           GitHub Secrets Storage                               │
   │    - AWS_ACCESS_KEY_ID                                                         │
   │    - AWS_SECRET_ACCESS_KEY                                                     │
   │    - CONFLUENT_API_KEY                                                         │
   │    - CONFLUENT_API_SECRET                                                      │
   │    - EXISTING_VPC_ID                                                           │
   │    - SUBNETS / SG IDs                                                          │
   └───────────────┬──────────────────────────────────────────────────────────────┘
                   │ ENV VARIABLES PASSED TO TERRAFORM
                   ▼
        ┌───────────────────────────────────────┐
        │            TERRAFORM ENGINE           │
        │   terraform init / plan / apply       │
        └─────────────────────────┬─────────────┘
                                  │
         TERRAFORM USES SECRETS → │ 
                                  ▼
         ┌─────────────────────────────────────────────┐
         │          AWS Terraform Provider              │
         └───────────┬──────────────────────────────────┘
                     │
                     ▼
────────────────────────────────────────────────────────────────────────
│                         AWS Cloud                                     │
│                                                                        │
│   ┌────────────────────────────────────────────────────────────────┐   │
│   │                     AWS IAM (Identity)                          │   │
│   │   - Permissions for Terraform                                   │   │
│   │   - Permissions for ECS tasks                                   │   │
│   └────────────────────────────────────────────────────────────────┘   │
│                                                                        │
│   ┌────────────────────────────────────────────────────────────────┐   │
│   │              AWS Secrets Manager / Parameter Store              │   │
│   │   - Terraform can CREATE secrets here                           │   │
│   │   - ECS tasks retrieve secrets automatically                    │   │
│   └────────────────────────────────────────────────────────────────┘   │
│                                                                        │
│   ┌────────────────────────────────────────────────────────────────┐   │
│   │                        AWS ECS Cluster                         │   │
│   │   - Backend container                                           │   │
│   │   - Producer container                                          │   │
│   │   - Payment / Fraud / Analytics                                 │   │
│   │   - Containers read secrets at runtime                          │   │
│   └────────────────────────────────────────────────────────────────┘   │
│                                                                        │
────────────────────────────────────────────────────────────────────────

🔐 Explanation of Every Secret Component

Everything explained as a senior DevOps must understand.

#1 — GitHub Secrets (CI/CD)

GitHub Secrets are stored encrypted in GitHub.
They are used only during workflow execution.

Used in your project:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • RDS_PASSWORD
  • CONFLUENT_API_KEY
  • CONFLUENT_API_SECRET
  • VPC_ID / SUBNET_IDS / SG_IDS

GitHub Secrets lifetime:

  • Used only during pipeline execution
  • Not accessible after the run
  • Good for CI/CD only, NOT for runtime

#2 — AWS Secrets Manager

This is AWS’s official secret storage system.

What you store here:

  • Database passwords
  • API keys
  • Confluent secrets
  • JWT secrets
  • Backend environment variables

Why AWS Secrets Manager is better than GitHub Secrets:

GitHub Secrets = CI/CD
AWS Secrets = Runtime

ECS Tasks → automatically fetch secrets from Secrets Manager and inject into containers.

Benefits:

  • Automatic rotation
  • KMS encryption
  • IAM auth
  • Direct injection into ECS Task Definitions
  • No need to expose environment variables in Terraform

#3 — AWS Parameter Store (SSM)

Simpler version of Secrets Manager.

When to use:

  • When you need configuration (not secrets)
  • When cost matters (cheaper than Secrets Manager)
  • When you need infrastructure parameters

Sample:

  • /backend/SERVICE_URL
  • /kafka/bootstrap
  • /env/prod/feature-flag

#4 — HashiCorp Vault (Senior-level DevOps Topic)

Vault is used in enterprise environments for high-grade secret management.

Why Vault?

  • Dynamic secrets (MySQL, AWS IAM, Kafka credentials)
  • Zero-trust access
  • Multi-cloud support
  • Token-based authentication
  • Secret leasing (expires automatically)
  • Audit logs

Vault is often used when:

  • You have Kubernetes clusters
  • You need dynamic credentials
  • You need multi-cloud
  • You need compliance (PCI, HIPAA)
  • You need secret encryption policies

#5 — Terraform and Secrets

Terraform itself should never store secrets in .tf files.

Correct ways:

  1. Pass secrets through TF_VAR_* from GitHub
  2. Read secrets from Secrets Manager
  3. Use sensitive = true variables

Incorrect:

  • Hardcoding secrets
  • Committing terraform.tfvars with passwords

#6 — How Secrets flow in YOUR project

Step 1 — GitHub Actions Reads GitHub Secrets

GitHub Secrets → environment variables → Terraform variables

Step 2 — Terraform writes:

  • VPC, subnets, SGs
  • RDS
  • ECS cluster
  • Task definitions
  • ALB

Step 3 — Optional: Terraform can push secrets to AWS Secrets Manager

Then ECS Task Definitions read from Secrets Manager at runtime.

#7 — Interview-Level Summary

A Senior DevOps must know:

✔ GitHub Secrets

Used for pipeline-level secrets only.

✔ AWS Secrets Manager

For production runtime secrets.

✔ AWS Parameter Store

For configuration and non-secret values.

✔ HashiCorp Vault

Enterprise-grade, dynamic secrets, KMS integration.

✔ Terraform Secrets Handling

Never hardcode.
Use TF_VAR + Secrets Manager injection.

✔ ECS Secret Injection

ECS can read secrets directly
(no environment variables exposed).

🟦 B — Interview Cheat Sheet

Here are short, crisp answers:

❓What is GitHub Secrets?

Pipeline-only encrypted secret store.
Used to authenticate Terraform, Docker, AWS during CI/CD.

❓Why not store runtime secrets in GitHub?

Because GitHub Secrets only live during CI/CD.
Containers need secrets at runtime → use AWS Secrets Manager.

❓What is AWS Secrets Manager?

Fully managed encrypted secret store with rotation, IAM, audit logging.

❓What is Parameter Store?

Cheaper config store for non-secrets.

❓What is HashiCorp Vault?

Enterprise secret management offering dynamic credentials and zero-trust access.

❓How does Terraform handle secrets?

Use sensitive variables + backend secrets.
Never commit secrets.

❓How does ECS access secrets?

Through “valueFrom” Secrets Manager ARNs in the task definition.

Web App or Mobile App? Choosing Your MVP Platform

2025-12-02 08:50:19

Last Tuesday, a founder asked me: "Should I build a mobile app or a web app first?"

I asked her three questions:

  • Where are your users when they need your product most?
  • Can they accomplish the core task on a laptop?
  • Do you have 16 weeks or 8 weeks to validate your idea?

She went quiet. Then: "I haven't thought about any of that."

Most founders haven't. They just know they need an app—capital A, as if "app" only means one thing.

Let's fix that, because this decision will either accelerate your startup or drain six months before you realize you chose wrong.

The Unsexy Truth About Platform Choice

Here's what nobody tells you: The "right" platform isn't about what's trendy or what your competitor built. It's about where your specific users need to solve their specific problem.

A meditation app that expects users to sit at their desktop three times a day? Dead on arrival.

A B2B analytics dashboard that sales teams only check from their phones? Also dead.

The platform isn't a preference. It's a constraint dictated by user behavior.

When Mobile Makes Sense (And When It Doesn't)

Mobile apps are seductive. They feel "real" in a way web apps don't. You can hold them. They have an icon. Your mom can find them in the App Store.

But mobile is expensive and slow. Here's when it's worth it:

Your product requires mobility

Not "would be nice to have on mobile." Requires it.

Examples that require mobile:

  • Ride-sharing (obviously)
  • Fitness tracking (people don't run with laptops)
  • Food delivery (customers order from anywhere)
  • Field service management (technicians work on-site)
  • Location-based social apps (the whole point is you're out in the world)

Examples that don't require mobile (despite what founders think):

  • Most B2B SaaS (decision-makers work at desks)
  • Project management tools (detailed work needs screen space)
  • CRM systems (sales teams can wait until they're at a computer)
  • Analytics dashboards (complex data needs real estate)

You need native device features

The camera. GPS. Push notifications. Biometric authentication. Offline functionality that actually works.

If your core value proposition depends on these, mobile is non-negotiable.

But be honest with yourself. Do you need the camera, or do you just think it would be cool to have? Because "cool" costs weeks in development time.

Your users are mobile-first (actually)

Not "they use their phones a lot." Everyone uses their phones a lot.

I mean: they're solving this specific problem primarily from their phones, and forcing them to a desktop is friction you can't afford.

Consumer apps targeting Gen Z? Mobile-first makes sense.

Enterprise software for finance teams? They're on desktops with three monitors.

You're competing in an app store

Sometimes the distribution channel IS the strategy. If your users expect to find solutions in the App Store or Google Play, you need to be there.

But remember: App Store discovery is brutal. Getting featured is lottery-odds. Organic downloads without marketing are basically zero.

A web app can be discovered through Google, shared via link, and accessed instantly with no install friction. Don't underestimate that.

The Web App Case (Stronger Than You Think)

Web apps get dismissed as "less serious" than mobile apps. This is nonsense.

Slack built a $27B company on a web app. Notion became a household name with web-first. Linear, Figma, Airtable—all web-first, mobile later.

Here's when web wins:

Your MVP needs to ship in weeks, not months

Building a mobile app means:

  • Developing for iOS (Swift/SwiftUI)
  • Developing for Android (Kotlin/Jetpack Compose)
  • Or building cross-platform (React Native/Flutter) and fighting framework limitations
  • Submitting to app stores and waiting for approval
  • Dealing with versioning, updates, and review cycles

Building a web app means:

  • One codebase
  • Deploy whenever you want
  • Update instantly for all users
  • No approval process
  • Iterate based on feedback the same day

For MVPs, this velocity is everything. You're trying to learn if your idea works, not create the perfect product.

We recently worked with a founder on a contractor management platform. They initially wanted mobile apps. We asked why. "Because our competitors have apps."

We talked them into web-first. We built and launched in 8 weeks. They got early users, learned what features actually mattered, and are now building a mobile companion app for the one feature that needs it: time tracking on job sites.

If they'd gone mobile-first? 16 weeks, and half the learning because updating the app requires app store approval.

Complex interfaces or data-heavy workflows

Phones have 6 inches of screen space. Some tasks need more.

Financial dashboards, spreadsheet-heavy tools, design software, code editors, complex configuration interfaces—these fight mobile constraints instead of embracing them.

A responsive web app gives you desktop power when users need it and mobile access when they don't.

You're targeting businesses, not consumers

B2B users work differently than B2C users.

They're at desks. They have workflows. They need integrations with other tools. They want to open fifteen tabs and toggle between them. They're doing focused work that requires real estate and a real keyboard.

Yes, they'll eventually want mobile access for certain tasks. But if you make them download an app before they can even try your product, you've added friction to an already-complex B2B sales process.

A web app with a clean, bookmarkable URL beats "download our app" every time in enterprise sales.

Budget and timeline matter

Let's talk reality.

A native iOS app from a competent team: several months for an MVP

A native Android app: another several months

Both platforms: significant time investment (or you go cross-platform and deal with framework compromises)

A responsive web app that works everywhere: 6-10 weeks typically

Those numbers aren't arbitrary. Mobile development is just more complex because you're building for multiple platforms or dealing with cross-platform framework overhead.

If you're bootstrapping or pre-seed, that timeline difference is the difference between shipping and not shipping.

The Cross-Platform Middle Ground (Proceed With Caution)

"Why not React Native or Flutter? Then I get both platforms for the price of one!"

In theory, yes. In practice, it's complicated.

Cross-platform frameworks have gotten good. React Native powers Facebook, Instagram, Shopify. Flutter powers Google Ads, Alibaba, BMW. These aren't toys.

But they come with trade-offs:

You're fighting framework limitations

Native iOS and Android features take time to get implemented in React Native/Flutter. Sometimes they never do. You're always waiting for the framework to catch up.

Need the latest iOS feature Apple just announced? Native developers get it immediately. Cross-platform developers wait months for library support.

The 80/20 rule applies

80% of your app works great cross-platform. The other 20%—the weird edge cases, the platform-specific UX expectations, the native integrations—costs 80% of your development time.

Experienced mobile developers can navigate this. But if you're hiring an agency or junior developers, expect headaches.

Performance isn't quite native

For most apps, this doesn't matter. But if you're building something graphics-intensive or performance-critical, you'll notice the difference.

When cross-platform makes sense:

  • You need mobile (not web) but can't afford two native apps.
  • Your app doesn't rely heavily on cutting-edge platform features.
  • You have experienced React Native or Flutter developers.
  • Your app is utility-focused, not trying to feel "premium" or compete with highly-polished native apps.

For MVPs, I'm skeptical. You're adding framework complexity when you're trying to learn fast. But if you're confident mobile is the right call and you need both platforms, it's a reasonable compromise.

The Decision Framework

Stop guessing. Use this:

Start with your core user behavior

Where are users when they need to solve the problem you're solving?

At their desk working? → Web
On the go constantly? → Mobile
Could be either? → Web first, mobile later.

Identify your must-have features

Make a list. Be ruthless. Not "nice to have"—must have for v1.

Any of these mobile-only?

  • Camera/photo capture
  • GPS/location tracking
  • Push notifications for time-sensitive actions
  • Offline functionality
  • Biometric authentication

If yes → Mobile app
If no → Web app is probably faster and cheaper

Consider your distribution strategy

How will users discover you?

App Store/Google Play discovery → Mobile
SEO and content marketing → Web
Direct sales and demos → Web (easier to demo, no install friction)
Viral sharing/invites → Web (links beat "download this app").

Run the economics

Time to MVP:

  • Web: 6-10 weeks
  • Mobile (one platform): 10-14 weeks
  • Mobile (both platforms): 14-20 weeks

Maintenance and updates:

  • Web: Deploy anytime, instant updates
  • Mobile: App store approval, versioned updates, user update friction

Be honest about your timeline. Ambitious founders always underestimate how long mobile takes.

Accept that you might need both eventually

This isn't web OR mobile forever. It's web or mobile FIRST.

Most successful products end up with both. The question is sequencing.

Slack started web. Then mobile. Instagram started mobile. Then web. Uber started mobile. (They had to.)

The pattern: start where your users are, nail the core experience, then expand to other platforms once you've proven the concept.

Don't try to be everywhere at once. You'll ship slower and learn less.

What About Progressive Web Apps?

Someone always asks about PWAs—Progressive Web Apps that work like native apps but run in the browser.

The promise: Best of both worlds. Web development speed with app-like features.

The reality: Promising but limited.

PWAs can do:

  • Work offline
  • Install to home screen
  • Send push notifications (on Android and some browsers)
  • Access camera and location
  • Feel app-like with proper UX

PWAs can't do:

  • List in the App Store (Apple doesn't allow it, really)
  • Access certain native features
  • Match native performance for complex apps
  • Get the "legitimacy" signal of being a "real app"

For certain use cases—especially consumer web apps that want app-like features without the overhead—PWAs are great.

But if your strategy depends on App Store distribution or advanced native features, PWAs won't cut it.

Real Talk: What We Actually Recommend

At SociiLabs, most of our MVP projects start with web apps. Not because we don't build mobile (we do), but because web gives founders the fastest path to learning.

Here's our usual recommendation flow:

Phase 1: Web MVP (6-10 weeks)

Build the core functionality as a responsive web app. Launch fast. Get real users. Learn what actually matters versus what you thought would matter.

Most features you think are critical? They're not. You learn this by shipping and watching what users actually do.

Phase 2: Iterate on web (1-3 months)

Fix what's broken. Add what's missing. Find product-market fit. Build out the features that drive retention and revenue.

This is so much easier on web. No app store approvals. No version fragmentation. Just ship.

Phase 3: Mobile companion (if needed) (8-12 weeks)

Once you know what works, build a mobile app that focuses on the specific use cases where mobile actually adds value.

Not "our whole product but on mobile." A focused mobile experience for the workflows that benefit from mobility.

Example: We built a project management tool for construction companies. Web app first. They used it for planning and reporting. Then we built a mobile app specifically for job site updates and photo documentation. The mobile app does three things really well. The web app does everything else.

That's the pattern. Prove it works, then expand strategically.

Not Sure Which Way to Go?

We build both web apps and mobile apps at SociiLabs, so we don't have a dog in this fight. What we care about is helping you make the right call for your specific situation.

If you're stuck on this decision, we're happy to talk it through. No sales pitch, no commitment—just a conversation about your users, your timeline, and what actually makes sense.

We use AI-assisted development to move faster than traditional agencies, which means we can typically deliver MVPs in 6-10 weeks instead of the usual 4-6 months. But speed doesn't matter if we're building the wrong thing on the wrong platform.

So before we talk about timelines or scope, we'll ask the annoying questions: Who's your actual user? Where are they when they need this? What are you really trying to learn with this MVP?

Because here's the thing: most founders who come to us wanting mobile apps actually need web apps first. And the ones who need mobile usually need a more focused version than what they're imagining.

Our job isn't to build what you ask for. It's to help you figure out what you actually need, then build that really well.

Want to talk about your project? Book a time here. We'll give you our honest take.

The Bottom Line

Web or mobile isn't about technology preferences. It's about user behavior, business constraints, and strategic sequencing.

Ask yourself:

  • Where are my users when they need this product?
  • What features are actually essential for v1?
  • How fast do I need to learn if this works?
  • What can I realistically build and maintain?

Answer those honestly, and the platform choice becomes obvious.

And if it's not obvious? That probably means web-first is the safer bet. You can always build mobile later. You can't get back the months you spent building the wrong thing.

What's your take? Have you launched web-first or mobile-first? What did you learn? Drop a comment—we'd love to hear what worked (or didn't) for your project.

Achieve Prisma-like Developer Experience in EF Core! Introduction to Linqraft

2025-12-02 08:49:11

The other day, I released a C# library called Linqraft! In this article, I'd like to introduce it.
Top page that I put some effort into creating

Motivation

C# is a wonderful language. With powerful type safety, a rich standard library, and the ability to handle everything from GUI app development to web development, I think it's an excellent language for almost any purpose.

However, there's something about C# that has been frustrating me on a daily basis.

That is: "Defining classes is tedious!" and "Writing Select queries is tedious!"

Since C# is a statically-typed language, you basically have to define all the classes you want to use. While this is unavoidable to some extent, having to define derived classes every time is extremely tedious.
Especially when using an ORM (Object-Relational Mapping) for database access, the shape of data you want must be defined as a DTO (Data Transfer Object) every time, resulting in writing similar class definitions over and over again.

Let's compare this with Prisma, a TypeScript ORM. In Prisma, you can write:

// user type is automatically generated from schema file
const users = await prisma.user.findMany({
  // Specify the data you want with select
  select: {
    id: true,
    name: true,
    posts: {
      // You can also specify related table data with select
      select: {
        title: true,
      },
    },
  },
});

// The type of users automatically becomes: (automatically done for you!)
// This type can also be easily reused
type Users = {
  id: number;
  name: string;
  posts: {
    title: string;
  }[];
}[];

If you try to do the same thing in C#'s EF Core, it looks like this:

// Assume the User entity and DbContext are defined in a separate file
var users = dbContext.Users
    // Specifying the data you want with Select is the same
    .Select(u => new UserWithPostDto
    {
        Id = u.Id,
        Name = u.Name,
        // Child classes are also specified with Select in the same way
        Posts = u.Posts.Select(p => new PostDto { Title = p.Title }).ToList()
    })
    .ToList();

// You have to define the DTO class yourself!
public class UserWithPostDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<PostDto> Posts { get; set; }
}
// Same for child classes
public class PostDto
{
    public string Title { get; set; }
}
// Since we already have a User class, it seems like it could be auto-generated from there...

In this regard, Prisma is clearly easier and more convenient. Even though we're already defining the Users type as a class¹, it feels frustrating to have to manually define derived DTO classes.

The above scale is still tolerable, but it gets even more painful in more complex cases.

var result = dbContext.Orders
    .Select(o => new OrderDto
    {
        Id = o.Id,
        Customer = new CustomerDto
        {
            CustomerId = o.Customer.Id,
            CustomerName = o.Customer.Name,
            // Tedious part
            CustomerAddress = o.Customer.Address != null
                ? o.Customer.Address.Location
                : null,
            // Wrap in another DTO because we don't want to check every time
            AdditionalInfo = o.Customer.AdditionalInfo != null
                ? new CustomerAdditionalInfoDto
                {
                    InfoDetail = o.Customer.AdditionalInfo.InfoDetail,
                    CreatedAt = o.Customer.AdditionalInfo.CreatedAt
                }
                : null
        },
        Items = o.Items.Select(i => new OrderItemDto
        {
            ProductId = i.ProductId,
            Quantity = i.Quantity,
            // Same for arrays. Hard to read...
            ProductComments = i.CommentInfo != null
                ? i.CommentInfo.Comments.Select(c => new ProductCommentDto
                {
                    CommentText = c.CommentText,
                    CreatedBy = c.CreatedBy
                }).ToList()
                : new List<ProductCommentDto>()
        }).ToList()
    })
    .ToList();

// Not shown here, but all DTO class definitions used above also need to be defined

First of all, there are already 5 DTOs in the above example, which is extremely tedious. But even more annoying is the "null checking".
First, EF Core's Select expressions cannot use ?. (null-conditional operator). Specifically, it cannot be used inside Expression<...>.
Therefore, you have to write code that uses ternary operators to check for null, and if it's not null, access the member below it.

For child classes alone, you can simply write o.A != null ? o.A.B : null, but as this gets deeper to grandchild classes and great-grandchild classes, the null checking code keeps growing and becomes very hard to read.

// Unbelievably hard to read
Property = o.A != null && o.A.B != null && o.A.B.C != null
    ? o.A.B.C.D
    : null

The same applies when picking up array values in child classes (which can be null), requiring tedious code.

// Give me a break
Items = o.Child != null
    ? o.Child.Items.Select(i => new ItemDto{ /* ... */ }).ToList()
    : new List<ItemDto>()

What do you think? I really hate this.

What I Wanted

Looking at the Prisma example above again, it has roughly the following features (using TypeScript language features as well):

  • When you write a query once, the corresponding type is generated
  • You can write ?. directly in queries without worrying about null checking

After thinking about it, I realized that by combining anonymous types, source generators, and interceptors, these features could be achieved.

Attempting the Implementation

Using Anonymous Types

Are you familiar with C#'s anonymous types? It's a feature where the compiler automatically generates a corresponding class when you write new { ... } as shown below.

// Don't write a type name after new
var anon = new
{
    Id = 1,
    Name = "Alice",
    IsActive = true
};

Some of you may not have used this much, but it's very convenient for defining disposable classes in Select queries.

var users = dbContext.Users
    .Select(u => new
    {
        Id = u.Id,
        Name = u.Name,
        Posts = u.Posts.Select(p => new { Title = p.Title }).ToList()
    })
    .ToList();

// You can access and use it normally
var user = users[0];
Console.WriteLine(user.Name);
foreach(var post in user.Posts)
{
    Console.WriteLine(post.Title);
}

However, as it's called an "anonymous" type, the actual type name doesn't exist, so it cannot be used as method arguments or return values. This restriction is quite painful, so it surprisingly doesn't have many opportunities to shine.

Auto-generating Corresponding Classes

This means that if we create a source generator that automatically generates corresponding classes based on what's defined with anonymous types, wouldn't that work? This is a natural progression. Linqraft achieves exactly this.

Specifically, using a specific method name (SelectExpr) as a hook point, it automatically generates class definitions based on the anonymous type passed as an argument.
Since it would be inconvenient if you couldn't specify the generated class name, it's designed to allow you to specify the class name as a generic type argument.

var users = dbContext.Users
    // In this case, auto-generate a class called UserDto
    .SelectExpr<User,UserDto>(u => new
    {
        Id = u.Id,
        Name = u.Name,
        Posts = u.Posts.Select(p => new { Title = p.Title }).ToList()
    })
    .ToList();

// ---
// A class like this is auto-generated
public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<PostDto_Hash1234> Posts { get; set; }
}
// Child classes are also auto-generated
// Hash value is automatically added to avoid name conflicts
public class PostDto_Hash1234
{
    public string Title { get; set; }
}

You just look at the members of the passed anonymous type and generate the corresponding class definition with the Roslyn API (though that part is quite difficult!). Simple, right?

At this point, we've achieved automatic class generation, but we need to replace the behavior of the called SelectExpr to work like a normal Select.
This is where interceptors come in.

Replacing Processing with Interceptors

Did you know that C# has a feature called interceptors?
Since it's such a niche area, few people probably know about it, but it's a feature that allows you to hook specific method calls and replace them with arbitrary processing.
It was preview-released in .NET 8 and became stable in .NET 9.

Even if I say that, it might be hard to imagine, so let's consider code like this:

// Pattern calling a very time-consuming process with constant values
var result1 = "42".ComputeSomething();   // case 1
var result2 = "420".ComputeSomething();  // case 2
var result3 = "4200".ComputeSomething(); // case 3

Since it's being called with constant values, we should be able to calculate the results at compile time. In such cases, by pre-implementing interceptors in combination with source generators, you can replace calls like this:

// Imagine this class is auto-generated by Source Generator.
// The accessibility can be file-scoped (the file modifier)
file static class PreExecutedInterceptor
{
    // Get the hash value of the call site using Roslyn API and attach InterceptsLocationAttribute
    [global::System.Runtime.CompilerServices.InterceptsLocationAttribute(1, "(hash of case1)")]
    // Function name can be random. Arguments and return value should be the same as the original function
    public static int ComputeSomething_Case1(this string value)
    {
        // Pre-calculate and return the result for case 1
        return 84;
    }

    // Same for case 2 and 3
    [global::System.Runtime.CompilerServices.InterceptsLocationAttribute(1, "(hash of case2)")]
    public static int ComputeSomething_Case2(this string value) => 168;

    [global::System.Runtime.CompilerServices.InterceptsLocationAttribute(1, "(hash of case3)")]
    public static int ComputeSomething_Case3(this string value) => 336;
}

Defining these as regular extension methods would cause duplicate definitions, but interceptors let you substitute different processing at each call site.

Linqraft uses this mechanism to intercept SelectExpr calls and replace them with regular Select.

// Suppose there's a call like this
var orders = dbContext.Orders
    .SelectExpr<Order,OrderDto>(o => new
    {
        Id = o.Id,
        CustomerName = o.Customer?.Name,
        CustomerAddress = o.Customer?.Address?.Location,
    })
    .ToList();
// Example of generated code
file static partial class GeneratedExpression
{
    [global::System.Runtime.CompilerServices.InterceptsLocationAttribute(1, "hash of SelectExpr call")]
    // Need to keep the base anonymous type conversion query, so selector is also taken as an argument (not actually used)
    public static IQueryable<TResult> SelectExpr_0ED9215A_7FE9B5FF<TIn, TResult>(
        this IQueryable<TIn> query,
        Func<TIn, object> selector)
    {
        // Can only receive <TIn> by specification, but we actually know the original type so cast it
        var matchedQuery = query as object as IQueryable<global::Order>;
        // Convert the pseudo-query to a regular Select
        // Map to the auto-generated DTO class created earlier
        var converted = matchedQuery.Select(s => new global::OrderDto
        {
            Id = s.Id,
            // Mechanically replace null-conditional operator with regular ternary operator check
            CustomerName = s.Customer != null ? s.Customer.Name : null,
            CustomerAddress = s.Customer != null && s.Customer.Address != null
                ? s.Customer.Address.Location
                : null,
        });
        // Can only return <TResult> by specification so cast again
        return converted as object as IQueryable<TResult>;
    }
}

This allows users to write queries easily with the feeling of a regular Select!

And Towards Zero Dependencies

With the above measures, all calls to SelectExpr are completely intercepted by separately generated code. As a result, the original SelectExpr body has nothing to do and exists only for editor completion.

In that case, if we output that dummy method itself from a source generator, we shouldn't need a reference to Linqraft at all! So that's what we do.

public static void ExportAll(IncrementalGeneratorPostInitializationContext context)
{
    context.AddSource("SelectExprExtensions.g.cs", SelectExprExtensions);
}

const string SelectExprExtensions = $$""""
    {{CommonHeader}}

    using System;
    using System.Collections.Generic;
    using System.Linq;

    /// <summary>
    /// Dummy expression methods for Linqraft to compile correctly.
    /// </summary>
    internal static class SelectExprExtensions
    {
        /// <summary>
        /// Create select expression method, usable nullable operators, and generate instance DTOs.
        /// </summary>
        public static IQueryable<TResult> SelectExpr<TIn, TResult>(this IQueryable<TIn> query, Func<TIn, TResult> selector)
            where TIn : class => throw InvalidException;

        // Other variants are also included here
    }
    """";

Then, if you enable DevelopmentDependency, you can make it a package that's not included in the actual build output at all!

<PropertyGroup>
  <DevelopmentDependency>true</DevelopmentDependency>
</PropertyGroup>

In fact, when you install Linqraft via NuGet, it should look like this. This means it's a development-only package.

<PackageReference Include="Linqraft" Version="0.4.0">
  <PrivateAssets>all</PrivateAssets>
  <IncludeAssets>runtime; build; native; contentfiles; analyzers</IncludeAssets>
</PackageReference>

Analyzer for Replacing Existing Code

Now, some of you who have heard the story so far may want to try it right away!
For those people, Linqraft also provides a Roslyn Analyzer that automatically replaces existing Select queries with SelectExpr.
It's very easy to use - just right-click on the Select query part and replace it in one go from Quick Actions.

Summary

So, by using Linqraft to write queries simply like this:

  • Corresponding DTO classes are automatically generated,
  • You can write without worrying about null checking,
  • Plus it has zero dependencies, so it's no different from hand-written code,
  • Migration is reasonably easy.

// Zero dependencies!
var orders = dbContext.Orders
    .SelectExpr<Order, OrderDto>(o => new
    {
        Id = o.Id,
        // You can write with ?.!
        CustomerName = o.Customer?.Name,
        CustomerAddress = o.Customer?.Address?.Location,
    })
    .ToList();
// OrderDto class and its contents are auto-generated!

If I do say so myself, I think it turned out to be a pretty useful library.
Please try it out! If you like it, please give it a star.
https://github.com/arika0093/Linqraft

Side Note

I also put some effort into the introduction web page. Specifically, you can test the functionality right on the page!
I also implemented a feature that parses token information with Roslyn and feeds it into the Monaco Editor for syntax highlighting.

Playground screen

Please check this out as well.
https://arika0093.github.io/Linqraft/

  1. Think of it as defining the schema (Prisma's schema) as classes in C#. This part isn't too painful. ↩

Stop Fixing Code Manually: How NeuroLint Automates What ESLint Can't

2025-12-02 08:48:54

The Problem Every Developer Knows Too Well

You've been there. It's 2 AM, and you're staring at a wall of ESLint errors. Missing key props in React lists. Hydration mismatches because someone used localStorage without a server-side guard. Accessibility warnings everywhere.

ESLint tells you what's wrong. But you still have to fix it yourself.

The cost? Hours of manual fixes. Delayed releases. Production bugs that could have been prevented.

What if there was a tool that didn't just identify problems, but actually fixed them?

Introducing NeuroLint

NeuroLint is a deterministic code transformation engine that automatically fixes over 50 common issues in React, Next.js, and TypeScript projects.

The key difference? No AI. No guessing. No hallucinations.

While AI coding tools can produce unpredictable results, NeuroLint uses Abstract Syntax Tree (AST) parsing and rule-based transformations. Same input, same output, every time.

# Install globally
npm install -g @neurolint/cli

# Analyze your project
neurolint analyze . --verbose

# Preview fixes (safe, no changes)
neurolint fix . --all-layers --dry-run

# Apply fixes
neurolint fix . --all-layers

The 5-Step Fail-Safe

Every transformation goes through a 5-step validation process:

  1. AST-First Transformation — Parses code into an Abstract Syntax Tree
  2. First Validation — Checks if the transformation is syntactically correct
  3. Regex Fallback — Falls back to regex if AST parsing fails
  4. Second Validation — Re-validates the result
  5. Accept Only If Valid — Changes only applied if they pass. Otherwise, automatic revert.

This is why NeuroLint never breaks your code.
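
The accept-only-if-valid loop is easy to picture. Here is an analogous sketch using Python's ast module (NeuroLint itself operates on JS/TS syntax trees; this illustrates the pattern, not its implementation):

import ast

def safe_transform(source: str, transform) -> str:
    # Apply a transform, but keep the result only if it still parses.
    try:
        new_source = transform(source)
        ast.parse(new_source)  # re-validate the transformed code
        return new_source
    except SyntaxError:
        return source  # automatic revert to the original

# Toy transform: strip print() debug lines (akin to console.log removal).
strip_prints = lambda src: "\n".join(
    line for line in src.splitlines() if not line.strip().startswith("print(")
)

print(safe_transform("x = 1\nprint(x)\n", strip_prints))  # -> x = 1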

7-Layer Progressive Architecture

Layer            | What It Fixes
1. Configuration | Modernizes tsconfig.json, next.config.js, package.json
2. Patterns      | Removes console.log, fixes HTML entities, cleans unused imports
3. Components    | Adds React keys, accessibility attributes, button types
4. Hydration     | Adds SSR guards for localStorage, window, document
5. Next.js       | Adds 'use client' directives, optimizes Server Components
6. Testing       | Generates error boundaries and test scaffolding
7. Adaptive      | Learns patterns from previous fixes and reapplies them

10 Concrete Fixes

Here's exactly what NeuroLint does:

  1. Missing React keys — Adds unique key props to .map() lists
  2. Hydration guards — Wraps localStorage, window, document in SSR checks
  3. Button types — Adds type="button" to prevent form submissions
  4. Accessibility — Adds aria-label to buttons, alt to images
  5. 'use client' — Adds missing directives to client components
  6. Console.log removal — Strips debug statements
  7. HTML entities — Converts & to &amp;, < to &lt;
  8. Unused imports — Removes dead imports
  9. var → const/let — Modernizes variable declarations
  10. forwardRef removal — Migrates patterns deprecated in React 19

Real-World Example

Before:

function Button({ children, onClick }) {
  return <button onClick={onClick}>{children}</button>;
}

After Layer 3 (Components):

function Button({ children, onClick }) {
  return (
    <button 
      onClick={onClick}
      aria-label={typeof children === 'string' ? children : undefined}
      type="button"
    >
      {children}
    </button>
  );
}

After Layer 5 (Next.js):

'use client';

interface ButtonProps {
  children: React.ReactNode;
  onClick?: () => void;
}

function Button({ children, onClick }: ButtonProps) {
  return (
    <button 
      onClick={onClick}
      aria-label={typeof children === 'string' ? children : undefined}
      type="button"
    >
      {children}
    </button>
  );
}

export default Button;

Migration Tools

React 19 Migration

neurolint migrate-react19 . --verbose

Handles forwardRef removal, string refs → callback refs, and ReactDOM.render → createRoot.

Dependency Compatibility

neurolint check-deps . --fix

Detects React 19 incompatibilities and auto-generates fixes.

Why Not AI?

Feature              | AI Tools         | NeuroLint
Predictable output   | Can hallucinate  | Same input = same output
Auditable changes    | Black box        | Every change documented
Framework migrations | Manual prompting | One command
Backup system        | None             | Automatic timestamped backups

Getting Started

# Install
npm install -g @neurolint/cli

# Analyze
neurolint analyze src/ --verbose

# Preview fixes (safe)
neurolint fix src/ --all-layers --dry-run

# Apply with backup
neurolint fix src/ --all-layers --backup

Open Source (Apache 2.0)

Free forever. Commercial use allowed. No restrictions.

GitHub: github.com/Alcatecablee/Neurolint-CLI

Try It Today

npm install -g @neurolint/cli
neurolint analyze . --verbose

Your future self will thank you.

Questions? Open an issue on GitHub or drop a comment below!

ProofQR.xyz - a blockchain-based QR code verification system

2025-12-02 08:42:01

I wanted something to work on over the weekend and wanted to dive into Web3, while also doing something involving QR code generation (not really sure why; I just figured it might be an untapped market). After looking around online and asking ChatGPT what was needed in the world of Web3, it came up with a QR-code-based verification system for blockchain items and content. I spent some time researching the concept, then started building with Next.js. The photos you see are the result of a few days of work (the back end and logic were 90% done by me, with 10% Claude debugging; the design was all AI, as I have the artistic talent of a cheeseburger). This is also running on testnet for now, as I do not want to lose funds while testing.

So here is how it works currently:

Step 1: Sign in with your wallet (this is not stored anywhere, and you will have to sign in each time)
Step 2: Enter any data you want recorded on the ETH blockchain (my example is a URL)
Step 3: Click generate and confirm the transaction
Step 4: Wait a few seconds. I find most QR codes generate in under 30 seconds
Step 5: Save your QR code

To validate, simply point your phone camera at the QR code and scan it. This opens the validation page and shows whether the QR code is valid. It also shows how many times the code has been scanned, along with the Etherscan link.

Why is this necessary? Traditional QR codes can be easily copied or faked. If someone counterfeits your product, they can just copy the QR code. There's no way to prove which one is authentic. ProofQR provides a way to make sure your data is secure and protected on the ETH blockchain.

What I need from you is feedback. I want ideas on how to make this better and potential additions. Any and all feedback is welcome.

Thank you for viewing my post. Stay tuned for future updates!