2025-12-02 09:02:06
Tired of lengthy design cycles and performance surprises late in the game? Imagine knowing your chip's power consumption and speed before committing to the physical layout. That's the promise of a new machine learning approach that's revolutionizing how we design integrated circuits.
The core concept is to build a predictive model that learns from the early stages of design, specifically the netlist. This model is then fine-tuned to estimate parasitic effects – the unwanted capacitances and resistances that arise from the physical layout – and predict final performance metrics like timing and power. Think of it like predicting the taste of a cake based on the recipe (netlist) while accounting for how your oven (layout tools) might affect the final result (parasitics).
This approach uses transfer learning in a clever way. First, the model is trained on smaller, simpler designs to learn the general relationships between the netlist and parasitics. Then, it's fine-tuned with data from larger, more complex designs to account for the unique challenges they present.
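As a rough illustration, here's a minimal sketch of that pretrain-then-fine-tune recipe in PyTorch. Everything here (the `ParasiticsNet` model, the feature count, the stand-in datasets) is an illustrative assumption, not the actual system:

```python
# Minimal sketch of the pretrain-then-fine-tune recipe (PyTorch).
# ParasiticsNet, the feature count, and the datasets are illustrative.
import torch
import torch.nn as nn

class ParasiticsNet(nn.Module):
    """Maps netlist-derived features to a parasitic estimate (e.g. net capacitance)."""
    def __init__(self, n_features: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.backbone(x))

def train(model, loader, lr, epochs):
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Stand-in datasets: (features, labels) batches for small and large designs.
small_designs = [(torch.randn(128, 32), torch.randn(128, 1))]
large_designs = [(torch.randn(64, 32), torch.randn(64, 1))]

model = ParasiticsNet(n_features=32)
train(model, small_designs, lr=1e-3, epochs=50)   # 1) pretrain on simple designs

for p in model.backbone.parameters():             # 2) freeze the shared features
    p.requires_grad = False
train(model, large_designs, lr=1e-4, epochs=10)   # 3) fine-tune on complex ones
```

The key move is freezing the pretrained backbone so that fine-tuning on scarcer large-design data only adjusts the output head.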
The benefit for developers: earlier feedback on power and timing, long before the physical layout locks everything in.
One implementation challenge lies in generating sufficient training data, especially for novel architectures. A practical tip is to leverage existing design databases, even if they aren't perfectly matched to your current design, and augment them with simulated data.
This technique opens up exciting possibilities beyond traditional chip design. Imagine using it to optimize the placement of components on a printed circuit board, or even to predict the performance of a complex software system based on its architecture.
Ultimately, this parasitic-aware prediction method promises to reshape the landscape of hardware design, enabling faster, more efficient, and more reliable development of integrated circuits.
Related Keywords: Netlist, Performance Prediction, EDA, Electronic Design Automation, Transfer Learning, Domain Adaptation, Parasitic Extraction, VLSI, Chip Design, Integrated Circuits, Machine Learning for Hardware, Deep Learning, Graph Neural Networks, Model Training, Inference, Optimization, Circuit Simulation, Hardware Acceleration, Cloud Computing, AI in Hardware, Predictive Modeling, Design Automation, Silicon Design, Semiconductor
2025-12-02 08:52:30
A secret is any piece of sensitive information that must be protected from unauthorized access.
Examples: database passwords, API keys, TLS private keys, SSH keys, OAuth tokens.
Goal: Keep secrets encrypted at rest, encrypted in transit, and never stored in plaintext (repo, logs, artifacts).
A senior DevOps engineer must prevent: secrets committed to repos, printed in CI logs, baked into Docker images, or left exposed in Terraform state.
Secrets leaks cause: security breaches, compromised cloud accounts, and compliance failures.
This is why we use secure secret stores, not files.
There are 4 main secret management solutions you must understand:
| Tool | Where Used | Strengths | Weaknesses |
|---|---|---|---|
| GitHub Secrets | GitHub CI/CD | Easy to use, encrypted | Not for runtime apps |
| AWS Secrets Manager | Apps running on AWS | Automatic rotation, IAM integration | Expensive at scale |
| AWS SSM Parameter Store | AWS Systems Manager | Cheaper than Secrets Manager | Rotation not native |
| HashiCorp Vault | Enterprise multi-cloud | Most secure, dynamic secrets | Complex to manage |
GitHub Secrets are ONLY for CI/CD.
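For example, a minimal workflow might consume them like this (a sketch; job and secret names are illustrative):

```yaml
# Illustrative workflow: the secrets exist only while this job runs.
name: deploy
on: push
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Terraform apply
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          terraform init
          terraform apply -auto-approve
```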
Use case for AWS Secrets Manager: production microservices running on AWS.

The ecsTaskExecutionRole must be granted permission to read the secret (arn:aws:secretsmanager:...). The ECS task definition then injects it as an environment variable:

```json
{
  "name": "DB_PASSWORD",
  "valueFrom": "arn:aws:secretsmanager:us-east-2:xxx:secret:db_pass"
}
```
AWS SSM Parameter Store stores SecureString parameters (KMS-encrypted).
Costs: $0 for the Standard tier (Secrets Manager costs $0.40 per secret per month).
Use it for production database passwords when you don't need automatic rotation.
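A quick sketch with the AWS CLI (parameter path and value are placeholders):

```bash
# Illustrative: store and read a KMS-encrypted SecureString parameter.
aws ssm put-parameter \
  --name /env/prod/db_password \
  --type SecureString \
  --value 'REPLACE_ME'

aws ssm get-parameter \
  --name /env/prod/db_password \
  --with-decryption \
  --query Parameter.Value \
  --output text
```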
HashiCorp Vault is enterprise-level secret management.
Vault generates dynamic, short-lived secrets on demand: database credentials, cloud access keys, PKI certificates.
Perfect for: enterprise multi-cloud environments, Kubernetes workloads, and anything that needs dynamic credentials.
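As a sketch of what that looks like day-to-day (both role names are assumptions about your Vault configuration):

```bash
# Illustrative: authenticate, then request a dynamic database credential.
vault login -method=aws role=my-iam-role
vault read database/creds/my-role   # returns a freshly generated user/pass with a TTL
```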
Terraform NEVER stores secrets in:
- .tf files committed to the repo
- terraform.tfvars (keep it local only, never commit it)

Incorrect (hardcoded):

```hcl
password = "MySecret123"
```

Correct (injected at runtime):

```hcl
password = var.db_password
password = data.aws_secretsmanager_secret_version.db_password.secret_string
```
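A fuller sketch of the data-source approach (the secret name is assumed):

```hcl
# Illustrative: read an existing secret at plan/apply time instead of
# hardcoding it. "db_pass" is an assumed secret name.
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "db_pass"
}

resource "aws_db_instance" "main" {
  # ... engine, instance_class, etc. elided ...
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}

# Caveat: the resolved value still lands in Terraform state (see the "never
# store" list below), so the state backend must be encrypted and locked down.
```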
Example using GitHub Actions + AWS Secrets Manager:
1. Secrets are stored in Secrets Manager.
2. GitHub Actions stores only the AWS credentials needed to authenticate the pipeline (e.g. AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).
3. Terraform deploys the infrastructure and references secrets by ARN (see the sketch below).
4. EC2/ECS/Lambda workloads use an IAM role to access the secrets.
5. Applications retrieve secrets using the AWS SDK, or have them injected via valueFrom ARNs in the task definition.
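Step 3's hand-off can be sketched like this in the workflow (variable and secret names assumed):

```yaml
# Illustrative: GitHub secret → environment variable → Terraform var.db_password.
env:
  TF_VAR_db_password: ${{ secrets.DB_PASSWORD }}
```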
❌ Never store secrets in GitHub repository
❌ Never store secrets in Slack or Teams
❌ Never store secrets in Docker image
❌ Never store secrets in YAML files
❌ Never store secrets in Terraform state
❌ Never store secrets in code comments
❌ Never echo secrets in CI logs
❌ Never send secrets in email
If leaked → rotate immediately.
“In my pipelines, GitHub Secrets are used only for CI/CD credentials.
For application runtime secrets, I use AWS Secrets Manager or SSM Parameter Store depending on rotation requirements.
I avoid hardcoding secrets in Terraform by pulling them from the secret stores at runtime.
For enterprise multi-cloud environments, I integrate HashiCorp Vault with AWS IAM and Kubernetes service accounts for secure authentication and dynamic secrets.
All secrets are KMS-encrypted and never exposed in logs.”
This is senior-level.
┌──────────────────────────┐
│ Developer Machine │
│ (Push Git Changes) │
└─────────────┬────────────┘
│
▼
┌────────────────────────┐
│ GitHub Repository │
└─────────────┬───────────┘
│
▼
┌──────────────────────────────┐
│ GitHub Actions Runner │
│ (CI/CD Workflow Execution) │
└──────────────┬────────────────┘
SECRETS ENTER HERE FROM GITHUB → │
▼
┌───────────────────────────────────────────────────────────────────────────────┐
│ GitHub Secrets Storage │
│ - AWS_ACCESS_KEY_ID │
│ - AWS_SECRET_ACCESS_KEY │
│ - CONFLUENT_API_KEY │
│ - CONFLUENT_API_SECRET │
│ - EXISTING_VPC_ID │
│ - SUBNETS / SG IDs │
└───────────────┬──────────────────────────────────────────────────────────────┘
│ ENV VARIABLES PASSED TO TERRAFORM
▼
┌───────────────────────────────────────┐
│ TERRAFORM ENGINE │
│ terraform init / plan / apply │
└─────────────────────────┬─────────────┘
│
TERRAFORM USES SECRETS → │
▼
┌─────────────────────────────────────────────┐
│ AWS Terraform Provider │
└───────────┬──────────────────────────────────┘
│
▼
────────────────────────────────────────────────────────────────────────
│ AWS Cloud │
│ │
│ ┌────────────────────────────────────────────────────────────────┐ │
│ │ AWS IAM (Identity) │ │
│ │ - Permissions for Terraform │ │
│ │ - Permissions for ECS tasks │ │
│ └────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────────┐ │
│ │ AWS Secrets Manager / Parameter Store │ │
│ │ - Terraform can CREATE secrets here │ │
│ │ - ECS tasks retrieve secrets automatically │ │
│ └────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────────┐ │
│ │ AWS ECS Cluster │ │
│ │ - Backend container │ │
│ │ - Producer container │ │
│ │ - Payment / Fraud / Analytics │ │
│ │ - Containers read secrets at runtime │ │
│ └────────────────────────────────────────────────────────────────┘ │
│ │
────────────────────────────────────────────────────────────────────────
Below, everything is explained at the level a senior DevOps engineer must understand.
GitHub Secrets are stored encrypted in GitHub.
They are used only during workflow execution.
AWS Secrets Manager is AWS’s official secret storage service.
GitHub Secrets = CI/CD
AWS Secrets = Runtime
ECS Tasks → automatically fetch secrets from Secrets Manager and inject into containers.
SSM Parameter Store is a simpler version of Secrets Manager.
Example parameter paths:
- /backend/SERVICE_URL
- /kafka/bootstrap
- /env/prod/feature-flag

Vault is used in enterprise environments for high-grade secret management.
Terraform itself should never store secrets in .tf files.
Correct ways: mark variables as sensitive, or pull values from the secret stores at runtime via data sources.
Incorrect: hardcoding literals like password = "MySecret123" in .tf files.
The flow: GitHub Secrets → environment variables → Terraform variables.
Then ECS Task Definitions read from Secrets Manager at runtime.
A senior DevOps engineer must know:
- GitHub Secrets: used for pipeline-level secrets only.
- AWS Secrets Manager: for production runtime secrets.
- SSM Parameter Store: for configuration and non-secret values.
- HashiCorp Vault: enterprise-grade, dynamic secrets, KMS integration.
- Terraform: never hardcode; use TF_VAR + Secrets Manager injection.
- ECS: can read secrets directly (no environment variables exposed).
Here are short, crisp interview answers:

Q: What are GitHub Secrets?
A: A pipeline-only encrypted secret store, used to authenticate Terraform, Docker, and AWS during CI/CD.

Q: Why not use GitHub Secrets for application runtime?
A: Because GitHub Secrets only live during CI/CD. Containers need secrets at runtime → use AWS Secrets Manager.

Q: What is AWS Secrets Manager?
A: A fully managed encrypted secret store with rotation, IAM integration, and audit logging.

Q: What is SSM Parameter Store?
A: A cheaper config store for non-secrets.

Q: What is HashiCorp Vault?
A: Enterprise secret management offering dynamic credentials and zero-trust access.

Q: How do you keep secrets out of Terraform?
A: Use sensitive variables + backend secrets. Never commit secrets.

Q: How do ECS tasks receive secrets?
A: Through “valueFrom” Secrets Manager ARNs in the task definition.
2025-12-02 08:50:19
Last Tuesday, a founder asked me: "Should I build a mobile app or a web app first?"
I asked her three questions: Who's your actual user? Where are they when they need this? What are you really trying to learn with this MVP?
She went quiet. Then: "I haven't thought about any of that."
Most founders haven't. They just know they need an app—capital A, as if "app" only means one thing.
Let's fix that, because this decision will either accelerate your startup or drain six months before you realize you chose wrong.
Here's what nobody tells you: The "right" platform isn't about what's trendy or what your competitor built. It's about where your specific users need to solve their specific problem.
A meditation app that expects users to sit at their desktop three times a day? Dead on arrival.
A B2B analytics dashboard that sales teams only check from their phones? Also dead.
The platform isn't a preference. It's a constraint dictated by user behavior.
Mobile apps are seductive. They feel "real" in a way web apps don't. You can hold them. They have an icon. Your mom can find them in the App Store.
But mobile is expensive and slow. Here's when it's worth it:
Not "would be nice to have on mobile." Requires it.
Examples that require mobile: photo-first apps, turn-by-turn navigation and delivery, fitness tracking, anything built around push notifications or offline use.
Examples that don't require mobile (despite what founders think): marketplaces, B2B dashboards, booking tools, and most content platforms.
Second: you need device capabilities. The camera. GPS. Push notifications. Biometric authentication. Offline functionality that actually works.
If your core value proposition depends on these, mobile is non-negotiable.
But be honest with yourself. Do you need the camera, or do you just think it would be cool to have? Because "cool" costs weeks in development time.
Not "they use their phones a lot." Everyone uses their phones a lot.
I mean: they're solving this specific problem primarily from their phones, and forcing them to a desktop is friction you can't afford.
Consumer apps targeting Gen Z? Mobile-first makes sense.
Enterprise software for finance teams? They're on desktops with three monitors.
Fourth: distribution. Sometimes the distribution channel IS the strategy. If your users expect to find solutions in the App Store or Google Play, you need to be there.
But remember: App Store discovery is brutal. Getting featured is lottery-odds. Organic downloads without marketing are basically zero.
A web app can be discovered through Google, shared via link, and accessed instantly with no install friction. Don't underestimate that.
Web apps get dismissed as "less serious" than mobile apps. This is nonsense.
Slack built a $27B company on a web app. Notion became a household name with web-first. Linear, Figma, Airtable—all web-first, mobile later.
Here's when web wins:
First: speed to market. Building a mobile app means two codebases (or cross-platform compromises), app store review cycles, and versioned releases users have to install.
Building a web app means one codebase, instant deployment, and updates every user gets immediately.
For MVPs, this velocity is everything. You're trying to learn if your idea works, not create the perfect product.
We recently worked with a founder on a contractor management platform. They initially wanted mobile apps. We asked why. "Because our competitors have apps."
We talked them into web-first. We built and launched in 8 weeks. They got early users, learned what features actually mattered, and are now building a mobile companion app for the one feature that needs it: time tracking on job sites.
If they'd gone mobile-first? 16 weeks, and half the learning because updating the app requires app store approval.
Second: tasks that need screen space. Phones have 6 inches of screen space. Some tasks need more.
Financial dashboards, spreadsheet-heavy tools, design software, code editors, complex configuration interfaces—these fight mobile constraints instead of embracing them.
A responsive web app gives you desktop power when users need it and mobile access when they don't.
Third: B2B workflows. B2B users work differently than B2C users.
They're at desks. They have workflows. They need integrations with other tools. They want to open fifteen tabs and toggle between them. They're doing focused work that requires real estate and a real keyboard.
Yes, they'll eventually want mobile access for certain tasks. But if you make them download an app before they can even try your product, you've added friction to an already-complex B2B sales process.
A web app with a clean, bookmarkable URL beats "download our app" every time in enterprise sales.
Let's talk reality.
A native iOS app from a competent team: several months for an MVP
A native Android app: another several months
Both platforms: significant time investment (or you go cross-platform and deal with framework compromises)
A responsive web app that works everywhere: 6-10 weeks typically
Those numbers aren't arbitrary. Mobile development is just more complex because you're building for multiple platforms or dealing with cross-platform framework overhead.
If you're bootstrapping or pre-seed, that timeline difference is the difference between shipping and not shipping.
"Why not React Native or Flutter? Then I get both platforms for the price of one!"
In theory, yes. In practice, it's complicated.
Cross-platform frameworks have gotten good. React Native powers Facebook, Instagram, Shopify. Flutter powers Google Ads, Alibaba, BMW. These aren't toys.
But they come with trade-offs:
Native iOS and Android features take time to get implemented in React Native/Flutter. Sometimes they never do. You're always waiting for the framework to catch up.
Need the latest iOS feature Apple just announced? Native developers get it immediately. Cross-platform developers wait months for library support.
80% of your app works great cross-platform. The other 20%—the weird edge cases, the platform-specific UX expectations, the native integrations—costs 80% of your development time.
Experienced mobile developers can navigate this. But if you're hiring an agency or junior developers, expect headaches.
For most apps, this doesn't matter. But if you're building something graphics-intensive or performance-critical, you'll notice the difference.
Cross-platform makes sense when:
- You need mobile (not web) but can't afford two native apps.
- Your app doesn't rely heavily on cutting-edge platform features.
- You have experienced React Native or Flutter developers.
- Your app is utility-focused, not trying to feel "premium" or compete with highly-polished native apps.
For MVPs, I'm skeptical. You're adding framework complexity when you're trying to learn fast. But if you're confident mobile is the right call and you need both platforms, it's a reasonable compromise.
Stop guessing. Use this:
Where are users when they need to solve the problem you're solving?
At their desk working? → Web
On the go constantly? → Mobile
Could be either? → Web first, mobile later.
Next, your must-have features. Make a list. Be ruthless. Not "nice to have"—must have for v1.
Any of these mobile-only? Camera, GPS, push notifications, offline support?
If yes → Mobile app
If no → Web app is probably faster and cheaper
How will users discover you?
App Store/Google Play discovery → Mobile
SEO and content marketing → Web
Direct sales and demos → Web (easier to demo, no install friction)
Viral sharing/invites → Web (links beat "download this app").
Time to MVP:
Web: 6-10 weeks typically
Mobile: several months per platform
Maintenance and updates:
Web: Deploy anytime, instant updates
Mobile: App store approval, versioned updates, user update friction
Be honest about your timeline. Ambitious founders always underestimate how long mobile takes.
This isn't web OR mobile forever. It's web or mobile FIRST.
Most successful products end up with both. The question is sequencing.
Slack started web. Then mobile. Instagram started mobile. Then web. Uber started mobile. (They had to.)
The pattern: start where your users are, nail the core experience, then expand to other platforms once you've proven the concept.
Don't try to be everywhere at once. You'll ship slower and learn less.
Someone always asks about PWAs—Progressive Web Apps that work like native apps but run in the browser.
The promise: Best of both worlds. Web development speed with app-like features.
The reality: Promising but limited.
For certain use cases—especially consumer web apps that want app-like features without the overhead—PWAs are great.
But if your strategy depends on App Store distribution or advanced native features, PWAs won't cut it.
At SociiLabs, most of our MVP projects start with web apps. Not because we don't build mobile (we do), but because web gives founders the fastest path to learning.
Here's our usual recommendation flow:
Phase 1: Build the core functionality as a responsive web app. Launch fast. Get real users. Learn what actually matters versus what you thought would matter.
Most features you think are critical? They're not. You learn this by shipping and watching what users actually do.
Phase 2: Fix what's broken. Add what's missing. Find product-market fit. Build out the features that drive retention and revenue.
This is so much easier on web. No app store approvals. No version fragmentation. Just ship.
Phase 3: Once you know what works, build a mobile app that focuses on the specific use cases where mobile actually adds value.
Not "our whole product but on mobile." A focused mobile experience for the workflows that benefit from mobility.
Example: We built a project management tool for construction companies. Web app first. They used it for planning and reporting. Then we built a mobile app specifically for job site updates and photo documentation. The mobile app does three things really well. The web app does everything else.
That's the pattern. Prove it works, then expand strategically.
We build both web apps and mobile apps at SociiLabs, so we don't have a dog in this fight. What we care about is helping you make the right call for your specific situation.
If you're stuck on this decision, we're happy to talk it through. No sales pitch, no commitment—just a conversation about your users, your timeline, and what actually makes sense.
We use AI-assisted development to move faster than traditional agencies, which means we can typically deliver MVPs in 6-10 weeks instead of the usual 4-6 months. But speed doesn't matter if we're building the wrong thing on the wrong platform.
So before we talk about timelines or scope, we'll ask the annoying questions: Who's your actual user? Where are they when they need this? What are you really trying to learn with this MVP?
Because here's the thing: most founders who come to us wanting mobile apps actually need web apps first. And the ones who need mobile usually need a more focused version than what they're imagining.
Our job isn't to build what you ask for. It's to help you figure out what you actually need, then build that really well.
Want to talk about your project? Book a time here. We'll give you our honest take.
Web or mobile isn't about technology preferences. It's about user behavior, business constraints, and strategic sequencing.
Ask yourself: Where are your users when they need this? Which features are truly must-have, and are any of them mobile-only? How will people discover you? How fast do you need to ship?
Answer those honestly, and the platform choice becomes obvious.
And if it's not obvious? That probably means web-first is the safer bet. You can always build mobile later. You can't get back the months you spent building the wrong thing.
What's your take? Have you launched web-first or mobile-first? What did you learn? Drop a comment—we'd love to hear what worked (or didn't) for your project.
2025-12-02 08:49:11
The other day, I released a C# library called Linqraft! In this article, I'd like to introduce it.
C# is a wonderful language. With powerful type safety, a rich standard library, and the ability to handle everything from GUI app development to web development, I think it's an excellent language for almost any purpose.
However, there's something about C# that has been frustrating me on a daily basis.
That is: "Defining classes is tedious!" and "Writing Select queries is tedious!"
Since C# is a statically-typed language, you basically have to define all the classes you want to use. While this is unavoidable to some extent, having to define derived classes every time is extremely tedious.
Especially when using an ORM (Object-Relational Mapping) for database access, the shape of data you want must be defined as a DTO (Data Transfer Object) every time, resulting in writing similar class definitions over and over again.
Let's compare this with Prisma, a TypeScript ORM. In Prisma, you can write:
// user type is automatically generated from schema file
const users = await prisma.user.findMany({
// Specify the data you want with select
select: {
id: true,
name: true,
posts: {
// You can also specify related table data with select
select: {
title: true,
},
},
},
});
// The type of users is automatically inferred as follows (done for you!)
// This type can also be easily reused
type Users = {
id: number;
name: string;
posts: {
title: string;
}[];
}[];
If you try to do the same thing in C#'s EF Core, it looks like this:
// Assume Users type is defined in a separate file
var users = dbContext.Users
// Specifying the data you want with Select is the same
.Select(u => new UserWithPostDto
{
Id = u.Id,
Name = u.Name,
// Child classes are also specified with Select in the same way
Posts = u.Posts.Select(p => new PostDto { Title = p.Title }).ToList()
})
.ToList();
// You have to define the DTO class yourself!
public class UserWithPostDto
{
public int Id { get; set; }
public string Name { get; set; }
public List<PostDto> Posts { get; set; }
}
// Same for child classes
public class PostDto
{
public string Title { get; set; }
}
// Since we already have a User class, it seems like it could be auto-generated from there...
In this regard, Prisma is clearly easier and more convenient. Even though we're already defining the Users type as a class¹, it feels frustrating to have to manually define derived DTO classes.
The above scale is still tolerable, but it gets even more painful in more complex cases.
var result = dbContext.Orders
.Select(o => new OrderDto
{
Id = o.Id,
Customer = new CustomerDto
{
CustomerId = o.Customer.Id,
CustomerName = o.Customer.Name,
// Tedious part
CustomerAddress = o.Customer.Address != null
? o.Customer.Address.Location
: null,
// Wrap in another DTO because we don't want to check every time
AdditionalInfo = o.Customer.AdditionalInfo != null
? new CustomerAdditionalInfoDto
{
InfoDetail = o.Customer.AdditionalInfo.InfoDetail,
CreatedAt = o.Customer.AdditionalInfo.CreatedAt
}
: null
},
Items = o.Items.Select(i => new OrderItemDto
{
ProductId = i.ProductId,
Quantity = i.Quantity,
// Same for arrays. Hard to read...
ProductComments = i.CommentInfo != null
? i.CommentInfo.Comments.Select(c => new ProductCommentDto
{
CommentText = c.CommentText,
CreatedBy = c.CreatedBy
}).ToList()
: new List<ProductCommentDto>()
}).ToList()
})
.ToList();
// Not shown here, but all DTO class definitions used above also need to be defined
First of all, there are already 5 DTOs in the above example, which is extremely tedious. But even more annoying is the "null checking".
First, EF Core's Select expressions cannot use ?. (null-conditional operator). Specifically, it cannot be used inside Expression<...>.
Therefore, you have to write code that uses ternary operators to check for null, and if it's not null, access the member below it.
For child classes alone, you can simply write o.A != null ? o.A.B : null, but as this gets deeper to grandchild classes and great-grandchild classes, the null checking code keeps growing and becomes very hard to read.
// Unbelievably hard to read
Property = o.A != null && o.A.B != null && o.A.B.C != null
? o.A.B.C.D
: null
The same applies when picking up array values in child classes (which can be null), requiring tedious code.
// Give me a break
Items = o.Child != null
? o.Child.Items.Select(i => new ItemDto{ /* ... */ }).ToList()
: new List<ItemDto>()
What do you think? I really hate this.
Looking at the Prisma example above again, it has roughly the following features (using TypeScript language features as well):

- The result types are generated automatically from the select query, so you never have to define DTO classes by hand
- You can effectively use ?. in queries without worrying about null checking

After thinking about it, I realized that by combining anonymous types, source generators, and interceptors, these features could be achieved.
Are you familiar with C#'s anonymous types? It's a feature where the compiler automatically generates a corresponding class when you write new { ... } as shown below.
// Don't write a type name after new
var anon = new
{
Id = 1,
Name = "Alice",
IsActive = true
};
Some of you may not have used this much, but it's very convenient for defining disposable classes in Select queries.
var users = dbContext.Users
.Select(u => new
{
Id = u.Id,
Name = u.Name,
Posts = u.Posts.Select(p => new { Title = p.Title }).ToList()
})
.ToList();
// You can access and use it normally
var user = users[0];
Console.WriteLine(user.Name);
foreach(var post in user.Posts)
{
Console.WriteLine(post.Title);
}
However, as it's called an "anonymous" type, the actual type name doesn't exist, so it cannot be used as method arguments or return values. This restriction is quite painful, so it surprisingly doesn't have many opportunities to shine.
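A quick sketch of that restriction (hypothetical code, not from Linqraft):

```csharp
using System;

static class AnonDemo
{
    // There's no type name to write as a return type, so the closest you can
    // get is object, which hides the members behind the type system.
    public static object GetUser() => new { Id = 1, Name = "Alice" };

    public static void Main()
    {
        var user = GetUser();
        // Console.WriteLine(user.Name); // compile error: 'object' has no 'Name'
        Console.WriteLine(user);         // only ToString(): { Id = 1, Name = Alice }
    }
}
```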
This means that if we create a source generator that automatically generates corresponding classes based on what's defined with anonymous types, wouldn't that work? This is a natural progression. Linqraft achieves exactly this.
Specifically, using a specific method name (SelectExpr) as a hook point, it automatically generates class definitions based on the anonymous type passed as an argument.
Since it would be inconvenient if you couldn't specify the generated class name, it's designed to allow you to specify the class name as a generic type argument.
var users = dbContext.Users
// In this case, auto-generate a class called UserDto
.SelectExpr<User,UserDto>(u => new
{
Id = u.Id,
Name = u.Name,
Posts = u.Posts.Select(p => new { Title = p.Title }).ToList()
})
.ToList();
// ---
// A class like this is auto-generated
public class UserDto
{
public int Id { get; set; }
public string Name { get; set; }
public List<PostDto_Hash1234> Posts { get; set; }
}
// Child classes are also auto-generated
// Hash value is automatically added to avoid name conflicts
public class PostDto_Hash1234
{
public string Title { get; set; }
}
You just look at the elements of the passed anonymous type and generate the corresponding class definition using Roslyn API (though it's quite difficult!). Simple, right?
At this point, we've achieved automatic class generation, but we need to replace the behavior of the called SelectExpr to work like a normal Select.
This is where interceptors come in.
Did you know that C# has a feature called interceptors?
Since it's such a niche area, few people probably know about it, but it's a feature that allows you to hook specific method calls and replace them with arbitrary processing.
It was preview-released in .NET 8 and became stable in .NET 9.
Even if I say that, it might be hard to imagine, so let's consider code like this:
// Pattern calling a very time-consuming process with constant values
var result1 = "42".ComputeSomething(); // case 1
var result2 = "420".ComputeSomething(); // case 2
var result3 = "4200".ComputeSomething(); // case 3
Since it's being called with constant values, we should be able to calculate the results at compile time. In such cases, by pre-implementing interceptors in combination with source generators, you can replace calls like this:
// Imagine this class is auto-generated by Source Generator.
// Public level can be file
file static class PreExecutedInterceptor
{
// Get the hash value of the call site using Roslyn API and attach InterceptsLocationAttribute
[global::System.Runtime.CompilerServices.InterceptsLocationAttribute(1, "(hash of case1)")]
// Function name can be random. Arguments and return value should be the same as the original function
public static int ComputeSomething_Case1(this string value)
{
// Pre-calculate and return the result for case 1
return 84;
}
// Same for case 2 and 3
[global::System.Runtime.CompilerServices.InterceptsLocationAttribute(1, "(hash of case2)")]
public static int ComputeSomething_Case2(this string value) => 168;
[global::System.Runtime.CompilerServices.InterceptsLocationAttribute(1, "(hash of case3)")]
public static int ComputeSomething_Case3(this string value) => 336;
}
While defining as a regular extension method would cause definition duplication, using interceptors allows you to replace different processing for each call site.
Linqraft uses this mechanism to intercept SelectExpr calls and replace them with regular Select.
// Suppose there's a call like this
var orders = dbContext.Orders
.SelectExpr<Order,OrderDto>(o => new
{
Id = o.Id,
CustomerName = o.Customer?.Name,
CustomerAddress = o.Customer?.Address?.Location,
})
.ToList();
// Example of generated code
file static partial class GeneratedExpression
{
[global::System.Runtime.CompilerServices.InterceptsLocationAttribute(1, "hash of SelectExpr call")]
// Need to keep the base anonymous type conversion query, so selector is also taken as an argument (not actually used)
public static IQueryable<TResult> SelectExpr_0ED9215A_7FE9B5FF<TIn, TResult>(
this IQueryable<TIn> query,
Func<TIn, object> selector)
{
// Can only receive <TIn> by specification, but we actually know the original type so cast it
var matchedQuery = query as object as IQueryable<global::Order>;
// Convert the pseudo-query to a regular Select
// Map to the auto-generated DTO class created earlier
var converted = matchedQuery.Select(s => new global::OrderDto
{
Id = s.Id,
// Mechanically replace null-conditional operator with regular ternary operator check
CustomerName = s.Customer != null ? s.Customer.Name : null,
CustomerAddress = s.Customer != null && s.Customer.Address != null
? s.Customer.Address.Location
: null,
});
// Can only return <TResult> by specification so cast again
return converted as object as IQueryable<TResult>;
}
}
This allows users to write queries easily with the feeling of a regular Select!
With the above measures, all calls to SelectExpr are completely intercepted by separately generated code. As a result, the original SelectExpr body has nothing to do and exists only for editor completion.
In that case, if we output the dummy method itself with a source generator, we shouldn't need a reference to Linqraft itself at all! So that's what we do.
public static void ExportAll(IncrementalGeneratorPostInitializationContext context)
{
context.AddSource("SelectExprExtensions.g.cs", SelectExprExtensions);
}
const string SelectExprExtensions = $$""""
{{CommonHeader}}
using System;
using System.Collections.Generic;
using System.Linq;
/// <summary>
/// Dummy expression methods for Linqraft to compile correctly.
/// </summary>
internal static class SelectExprExtensions
{
/// <summary>
/// Create select expression method, usable nullable operators, and generate instance DTOs.
/// </summary>
public static IQueryable<TResult> SelectExpr<TIn, TResult>(this IQueryable<TIn> query, Func<TIn, TResult> selector)
where TIn : class => throw InvalidException;
// Other variants are also included here
}
"""";
Then, if you enable DevelopmentDependency, you can make it a package that's not included in the actual build output at all!
<PropertyGroup>
<DevelopmentDependency>true</DevelopmentDependency>
</PropertyGroup>
In fact, when you install Linqraft via NuGet, the package reference should look like this. This means it's a development-only package.
<PackageReference Include="Linqraft" Version="0.4.0">
<PrivateAssets>all</PrivateAssets>
<IncludeAssets>runtime; build; native; contentfiles; analyzers</IncludeAssets>
</PackageReference>
Now, some of you who have heard the story so far may want to try it right away!
For those people, Linqraft also provides a Roslyn Analyzer that automatically replaces existing Select queries with SelectExpr.
It's very easy to use - just right-click on the Select query part and replace it in one go from Quick Actions.
So, by using Linqraft to write queries simply like this:
// Zero dependencies!
var orders = dbContext.Orders
.SelectExpr<Order, OrderDto>(o => new
{
Id = o.Id,
// You can write with ?.!
CustomerName = o.Customer?.Name,
CustomerAddress = o.Customer?.Address?.Location,
})
.ToList();
// OrderDto class and its contents are auto-generated!
If I do say so myself, I think we've created a pretty useful library.
Please try it out! If you like it, please give us a star.
https://github.com/arika0093/Linqraft
I also put some effort into the introduction page. Specifically, you can try the functionality right on the page!
I also implemented a feature that parses Token information with Roslyn and feeds it into Monaco Editor for syntax highlighting.
Please check this out as well.
https://arika0093.github.io/Linqraft/
¹ Think of it as defining the schema (Prisma's schema file) as classes in C#. This part isn't too painful.
2025-12-02 08:48:54
You've been there. It's 2 AM, and you're staring at a wall of ESLint errors. Missing key props in React lists. Hydration mismatches because someone used localStorage without a server-side guard. Accessibility warnings everywhere.
ESLint tells you what's wrong. But you still have to fix it yourself.
The cost? Hours of manual fixes. Delayed releases. Production bugs that could have been prevented.
What if there was a tool that didn't just identify problems, but actually fixed them?
NeuroLint is a deterministic code transformation engine that automatically fixes over 50 common issues in React, Next.js, and TypeScript projects.
The key difference? No AI. No guessing. No hallucinations.
While AI coding tools can produce unpredictable results, NeuroLint uses Abstract Syntax Tree (AST) parsing and rule-based transformations. Same input, same output, every time.
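To make "deterministic" concrete, here's a toy example of the kind of rule-based AST transform we mean (illustrative only, not NeuroLint's actual source), using the TypeScript compiler API to strip console.log calls, as in Layer 2:

```typescript
// Illustrative only: a deterministic, rule-based AST transform.
// Same input always produces the same output; no model inference involved.
import * as ts from "typescript";

const source = `console.log("debug"); const x = 1;`;
const sf = ts.createSourceFile("in.tsx", source, ts.ScriptTarget.Latest, true);

// Rule: remove expression statements that call console.log (cf. Layer 2).
const dropConsoleLog: ts.TransformerFactory<ts.SourceFile> = (ctx) => (root) => {
  const visit = (node: ts.Node): ts.Node | undefined => {
    if (
      ts.isExpressionStatement(node) &&
      ts.isCallExpression(node.expression) &&
      node.expression.expression.getText(root) === "console.log"
    ) {
      return undefined; // deleting the node is the whole "fix"
    }
    return ts.visitEachChild(node, visit, ctx);
  };
  return ts.visitEachChild(root, visit, ctx);
};

const result = ts.transform(sf, [dropConsoleLog]);
const printer = ts.createPrinter();
console.log(printer.printFile(result.transformed[0])); // => const x = 1;
```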
# Install globally
npm install -g @neurolint/cli
# Analyze your project
neurolint analyze . --verbose
# Preview fixes (safe, no changes)
neurolint fix . --all-layers --dry-run
# Apply fixes
neurolint fix . --all-layers
Every transformation goes through a 5-step validation process.
This is why NeuroLint never breaks your code.
| Layer | What It Fixes |
|---|---|
| 1. Configuration | Modernizes tsconfig.json, next.config.js, package.json |
| 2. Patterns | Removes console.log, fixes HTML entities, cleans unused imports |
| 3. Components | Adds React keys, accessibility attributes, button types |
| 4. Hydration | Adds SSR guards for localStorage, window, document |
| 5. Next.js | Adds 'use client' directives, optimizes Server Components |
| 6. Testing | Generates error boundaries and test scaffolding |
| 7. Adaptive | Learns patterns from previous fixes and reapplies them |
Here's exactly what NeuroLint does:
- Adds `key` props to `.map()` lists
- Wraps `localStorage`, `window`, `document` in SSR checks
- Adds `type="button"` to buttons to prevent accidental form submissions
- Adds `aria-label` to buttons and `alt` to images
- Converts stray HTML entities like `&amp;` back to `&` and `&lt;` back to `<`
Before:
function Button({ children, onClick }) {
return <button onClick={onClick}>{children}</button>;
}
After Layer 3 (Components):
function Button({ children, onClick }) {
return (
<button
onClick={onClick}
aria-label={typeof children === 'string' ? children : undefined}
type="button"
>
{children}
</button>
);
}
After Layer 5 (Next.js):
'use client';
interface ButtonProps {
children: React.ReactNode;
onClick?: () => void;
}
function Button({ children, onClick }: ButtonProps) {
return (
<button
onClick={onClick}
aria-label={typeof children === 'string' ? children : undefined}
type="button"
>
{children}
</button>
);
}
export default Button;
neurolint migrate-react19 . --verbose
Handles forwardRef removal, string refs → callback refs, ReactDOM.render → createRoot.
neurolint check-deps . --fix
Detects React 19 incompatibilities and auto-generates fixes.
| Feature | AI Tools | NeuroLint |
|---|---|---|
| Predictable output | Can hallucinate | Same input = same output |
| Auditable changes | Black box | Every change documented |
| Framework migrations | Manual prompting | One command |
| Backup system | None | Automatic timestamped backups |
# Install
npm install -g @neurolint/cli
# Analyze
neurolint analyze src/ --verbose
# Preview fixes (safe)
neurolint fix src/ --all-layers --dry-run
# Apply with backup
neurolint fix src/ --all-layers --backup
Free forever. Commercial use allowed. No restrictions.
GitHub: github.com/Alcatecablee/Neurolint-CLI
npm install -g @neurolint/cli
neurolint analyze . --verbose
Your future self will thank you.
Questions? Open an issue on GitHub or drop a comment below!
2025-12-02 08:42:01
I wanted something to work on over the weekend and wanted to dive into Web3, and also do something involving QR code generation (not really sure why, I just figured it might be an untapped market). After some looking online and asking ChatGPT what was needed in the world of Web3, it came up with a QR-code-based verification system for blockchain items and content. I spent some time researching the concept, then started building with Next.js. The photos you see are the result of a few days of work (the back end and logic were 90% done by me, with 10% Claude debugging; the design was all AI, as I have the artistic talent of a cheeseburger). This is also running on the testnet for now, as I don't want to lose funds while testing.
So here is how it works currently:
Step 1: You sign in with your wallet (this is not stored anywhere, and you will have to sign in each time)
Step 2: Enter any data you want to have recorded on the ETH blockchain (my example is a url)
Step 3: Click generate and confirm the transaction
Step 4: Wait a few seconds. I find most QR codes generate in under 30 seconds
Step 5: Save your QR code
To validate, simply point your phone camera at the QR code and scan it. This opens the validation page and shows whether the QR code is valid, how many times the code has been scanned, and the Etherscan link.
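Here's a rough sketch of the idea in code (simplified, not the exact production code) using ethers v6 and the qrcode npm package; the verification URL is made up:

```typescript
// Simplified sketch: anchor a payload on an Ethereum testnet, then encode
// the resulting tx hash into a QR validation URL.
import { BrowserProvider, hexlify, toUtf8Bytes } from "ethers";
import QRCode from "qrcode";

export async function createProofQr(payload: string): Promise<string> {
  // Step 1: sign in with the user's wallet (nothing is stored server-side).
  const provider = new BrowserProvider((window as any).ethereum);
  const signer = await provider.getSigner();

  // Steps 2-3: record the data on-chain as calldata in a zero-value,
  // self-addressed transaction the user confirms in their wallet.
  const tx = await signer.sendTransaction({
    to: await signer.getAddress(),
    value: 0n,
    data: hexlify(toUtf8Bytes(payload)),
  });
  await tx.wait(); // step 4: usually confirms within ~30 seconds on testnets

  // Step 5: the QR code points at a validation page keyed by the tx hash
  // (URL is made up for illustration).
  return QRCode.toDataURL(`https://proofqr.example/verify/${tx.hash}`);
}
```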
Why is this necessary? Traditional QR codes can be easily copied or faked. If someone counterfeits your product, they can just copy the QR code. There's no way to prove which one is authentic. ProofQR provides a way to make sure your data is secure and protected on the ETH blockchain.
What I need from you is feedback. I want ideas on how to make this better and potential additions to add. Any and all feedback is wanted.
Thank you for viewing my post. Stay tuned for future updates!