2025-11-10 10:36:27
Hi everyone,
I’m currently working on a small web project and I’m splitting my workflow between the backend and frontend since I’ll need a “heavy” processing component for automated spatial operations.
This part mainly involves running scripts that perform spatial tasks such as calculating point distances, handling geometry intersections, and updating labels based on spatial results.
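To make it concrete, the kind of operations I mean look roughly like this in plain Python (these function names are just illustrative placeholders; in practice this layer would probably use Shapely or PostGIS):

```python
from math import hypot

def point_distance(a, b):
    """Euclidean distance between two (x, y) points."""
    return hypot(b[0] - a[0], b[1] - a[1])

def bbox_intersects(r1, r2):
    """True if two axis-aligned boxes (xmin, ymin, xmax, ymax) overlap."""
    return not (r1[2] < r2[0] or r2[2] < r1[0] or r1[3] < r2[1] or r2[3] < r1[1])

def label_by_distance(point, sites, threshold):
    """Label a point 'near' if any site is within threshold distance, else 'far'."""
    return "near" if any(point_distance(point, s) <= threshold for s in sites) else "far"

print(point_distance((0, 0), (3, 4)))               # 5.0
print(bbox_intersects((0, 0, 2, 2), (1, 1, 3, 3)))  # True
print(label_by_distance((0, 0), [(3, 4)], 5))       # near
```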
Apart from that, the project also includes the usual web elements — a standard database, content pages, multimedia, etc.
So far, I’ve been considering Strapi for the backend and Vercel for the frontend, but I suspect I’ll need an additional service to handle the spatial processing layer.
Has anyone here worked on something similar (ideally keeping costs low)?
What tools, architectures, or workflows would you recommend?
Any feedback or corrections to this general approach are more than welcome.
Thanks in advance for any advice or experiences you can share! 😊
2025-11-10 10:30:00
Most face-recognition systems fail in real life.
Why? Because they try to recognize faces every single frame — slow, unstable, and guaranteed to misidentify people the moment they move their head.
The solution is not “more detection.”
The solution is SORT.
🚀 Why SORT Changes Everything
SORT (Simple Online and Realtime Tracking) gives you persistent track IDs across frames, using Kalman-filter motion prediction and IoU-based association instead of re-recognizing every face from scratch.
Think of SORT as the glue that holds your face-recognition pipeline together.
🔥 The Recognition Pipeline (Super Simple)
1️⃣ Detect faces only every N frames
if frame_id % 3 == 0:
    faces = face_app.get(frame)
2️⃣ Use SORT to track faces between detections
tracked = tracker.update(detections)
3️⃣ Assign embeddings via IoU matching
for det_bbox, det_emb in last_face_map.items():
    if compute_iou(track_bbox, det_bbox) > 0.45:
        track_embeddings[track_id] = det_emb
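The snippet above calls compute_iou without defining it. A common implementation for (x1, y1, x2, y2) boxes looks like this (a sketch; the author's actual helper may differ):

```python
def compute_iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    # Corners of the overlapping region (if any)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```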
4️⃣ Search identity from Qdrant
hits = client.search(collection_name="faces", query_vector=embedding.tolist(), limit=1)
if hits and hits[0].score > 0.75:
    name = hits[0].payload["name"]
5️⃣ Register new users instantly
if key == 'r' and unknown_face:
    client.upsert("faces", [PointStruct(id=str(uuid4()), vector=emb.tolist(), payload={"name": name})])
🎯 Why This Works So Damn Well
| Traditional Method | Our SORT Method |
| --- | --- |
| Detect every frame → low FPS | Detect periodically → high FPS |
| Identity flickers | Consistent identity |
| CPU overload | Efficient |
| Recompute embeddings repeatedly | Compute once per track |
SORT turns your pipeline from “weak and jittery” to smooth, stable, and lightning fast.
🧠 InsightFace for Embeddings
InsightFace gives crisp 512-dimensional embeddings:
emb = normalize(face.embedding)
These vectors go straight into Qdrant to enable fast similarity search.
🗂️ Qdrant as the Face Database
Qdrant stores embeddings like a search engine:
client.create_collection(
    collection_name="faces",
    vectors_config=VectorParams(size=512, distance=Distance.COSINE)
)
Querying is instant — even with tens of thousands of faces.
🔄 Putting It All Together
Your real-time loop becomes:
Detect → Track → Attach Embedding → Qdrant Search → Show Name
Instead of:
Detect → Recognize → Detect → Recognize → Detect → Recognize → (lag forever)
🏁 Final Takeaway
The winning formula is simple:
InsightFace → reliable embeddings
SORT → stable tracking
Qdrant → lightning-fast comparison
Together, they create a recognition system that actually works in the real world —
fast, smooth, accurate, and scalable.
2025-11-10 10:27:43
Here’s a strong recommendation for an open-source WAF (Web Application Firewall) that’s been developed for nearly 10 years. It comes in both community and professional editions, and the community edition (free) is more than capable of handling most use cases.
Let’s start with the basics for those who might not be familiar:
A WAF (Web Application Firewall) is a security solution deployed in front of websites at the application layer, offering protection through the following features:
Detects and blocks common web attacks like SQL injection, XSS (cross-site scripting), and more via predefined rules.
Provides protection against large-scale attacks like DDoS by filtering malicious traffic.
Allows filtering based on IP address, region, or suspicious requests.
Ensures input validation and error masking based on security standards like OWASP and PCI-DSS.
Supports SSL certificates and HTTPS traffic control to secure communication.
Today, I’m recommending SafeLine, a WAF developed by Chaitin Technology over the last 10 years. Powered by an intelligent semantic analysis algorithm, it’s built for the community, and its robust detection capabilities make it far harder for attackers to breach your defenses.
You can install it with just one command:
bash -c "$(curl -fsSLk https://waf.chaitin.com/release/latest/setup.sh)"
To log into the management console, open your browser and visit https://<your-ip>:9443. Follow the instructions on the screen.
If you can access GitHub, download it directly from:
https://github.com/chaitin/safeline
If GitHub is inaccessible, try the demo at:
https://demo.waf.chaitin.com:9443/dashboard
Here’s a breakdown of SafeLine’s major highlights:
The WAF is containerized for quick deployment with a single command, reducing installation complexity. Pre-configured security settings allow you to use it right out of the box, simplifying management.
SafeLine uses an in-house developed intelligent semantic analysis algorithm to detect unknown threats. It doesn’t rely on traditional signature rules, making it effective against 0-day attacks. The detection is precise with low false-positive rates, offering reliable protection.
SafeLine operates with a rule-free engine and high-efficiency algorithms that keep latency in the millisecond range. Its high concurrency handling allows a single CPU core to support heavy traffic, with excellent horizontal scaling capability.
The WAF’s traffic processing engine is built on Nginx, ensuring stability and reliability. It also comes with a built-in health-check mechanism, providing an impressive uptime of 99.99%.
2025-11-10 10:20:46
SafeLine is a Web Application Firewall (WAF) developed by Chaitin Technology. It leverages advanced technologies like big data and machine learning to create a network attack detection system. SafeLine continuously monitors and analyzes global threat intelligence, attack data, and vulnerability information in real time. This enables it to quickly identify and detect unknown security threats, accurately determine the type and source of attacks, and promptly issue alerts.
Additionally, SafeLine features a proprietary intelligent defense engine and a user-friendly visual management interface, providing efficient attack prevention and comprehensive security monitoring. This makes it an essential tool for delivering secure and reliable cloud security services to users.
You can choose the installation method based on your system environment, with support for one-click installation:
bash -c "$(curl -fsSLk https://waf.chaitin.com/release/latest/setup.sh)"
Once the installation is complete, you can access the SafeLine management interface by visiting https://<your-server-ip>:9443.
To protect a site, simply navigate to Web Services → Add Web Service and add the site you wish to secure.
I set up a blog system using Typecho, a lightweight blogging platform.
Initially, the web page’s source code was not encrypted, making it vulnerable to attacks.
The first step in using SafeLine’s dynamic protection is to add the resources you want to protect.
For example, I chose to protect the admin/login.php file. After adding it, simply click Save to enable dynamic protection.
Once dynamic protection is enabled, the source code of the resources you’ve chosen will be dynamically encrypted. When you access a protected resource, you’ll notice a significant difference.
Before Dynamic Protection: The source code is visible and unprotected.
After Dynamic Protection: The source code is dynamically encrypted, enhancing security.
SafeLine’s protection logs provide a clear record of successfully intercepted attacks. For instance, if a directory scanner attempts to probe your site, SafeLine will effectively block the attempt, and you can see the detailed logs of this activity.
As network security evolves, SafeLine, the next-generation WAF from Chaitin Technology, uses big data and machine learning to protect users from cyberattacks. It is easy to install, supporting both online and one-click installation methods. With the dynamic protection feature, users can encrypt site resources to enhance security. Additionally, SafeLine offers detailed protection logs, enabling users to monitor and block potential attacks. SafeLine provides comprehensive network security protection and monitoring, ensuring a safer digital environment.
Try SafeLine Today
GitHub Repository: https://ly.safepoint.cloud/rZGPJRF
Official Website: https://ly.safepoint.cloud/eGtfrcF
Live Demo: https://ly.safepoint.cloud/DQywpL7
2025-11-10 10:07:30
In Week 2 of the Terraform Basics series, we’ll take the configuration we built last week and make it flexible, reusable, and more secure.
Table Of Contents
1. Recap: What We Built Last Week
2. Improving What We Have - Using Variables for Flexibility & Security
3. Updated Terraform Files
4. Deploying to Azure
5. Wrap-Up
1. Recap: What We Built Last Week
Last week, we built our first Azure Virtual Machine along with the prerequisite resources that go with an Azure VM.
Here's a visual reminder:
You can check out the full details of Week 1 here
2. Improving What We Have
Last week, we successfully deployed our first VM to our Azure tenant. But the way we deployed it relied on practices that aren't secure, scalable, or flexible. This week, we're moving on to better ones.
Using Variables for Flexibility & Security
In Week 1, we hard-coded almost everything: the VM's region, size, name, and credentials. This isn't optimal because in real-world environments these values differ from one deployment to another.
Let's go back to our architecture as an example. We had 4 .tf files, and each resource has some attributes associated with it. If you wanted to create the exact same set of resources in another region with different resource names, you would have to copy all the .tf files, change the name attribute of each resource, and then deploy. As the environment you work in gets bigger, this manual effort becomes a real roadblock and is prone to errors.
That is why we use variables.
In Terraform, you can define a variable like so:
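A minimal sketch of the two variable blocks, assuming the names admin_username and admin_password that the VM attributes will reference:

```hcl
variable "admin_username" {
  type        = string
  description = "Local admin username for the VM"
  default     = "azureuser"
}

variable "admin_password" {
  type        = string
  description = "Local admin password for the VM"
  sensitive   = true
}
```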
The default attribute is optional: if no value is provided for the variable, the specified default is used.
The sensitive flag redacts the variable's value from console output, logs, and terraform plan output, so you don't accidentally leak sensitive information like VM admin passwords.
To change our configuration so that the VM's admin_username and admin_password attributes refer to the variables we created, we use the var.<variable_name> syntax, like so:
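Inside the VM resource block, the hard-coded credentials become variable references (a sketch; the resource type and the rest of the block depend on your Week 1 files):

```hcl
resource "azurerm_windows_virtual_machine" "vm" {
  # ...name, size, NIC, and image settings unchanged from Week 1...
  admin_username = var.admin_username
  admin_password = var.admin_password
}
```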
Now you may be asking: what are the values of these variables? We only defined a default, not the actual value. You're right.
There are two ways to set the values, but this week we'll only cover one.
Environment Variables
These are variables you define in the environment where your Terraform files live; in my case, my Windows PC.
In Windows, you can define environment variables by opening the Start menu, typing "Environment Variables", and clicking the "Edit the system environment variables" result. After that, click "Environment Variables", select "New", and create the environment variable in the format shown.
For Terraform to recognize a variable you define here, you must add the TF_VAR_ prefix before the variable name. The variable name must also match exactly; it's case-sensitive.
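If you prefer the terminal over the GUI, the same variables can be set for the current PowerShell session (a sketch; the values shown are placeholders):

```powershell
$env:TF_VAR_admin_username = "localadmin"
$env:TF_VAR_admin_password = "<your-password>"
```

Note that variables set this way only last for the current session, while the GUI method persists them.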
Once you define the variable as shown in the picture above, click "OK" and close the pop-up windows. Close and re-open Visual Studio Code so the environment variables are picked up.
After completing these steps, Terraform will refer to the variables we created, and our username and password will no longer be hard-coded, making them less vulnerable than before.
Here's a challenge for you: using what you've learned above, try to create variables for the name attributes of all 5 resources we created, using the default parameter (no need for environment variables). You can find the solution in the updated files below.
3. Updated Terraform Files
Updated File Structure:
Latest version of the files can be found in screenshots below, as well as in this GitHub repository.
providers.tf
Same as last week.
resource-group.tf
variables.tf
Here, I used variables for the names of all 5 resources we created, and also for the Virtual Machine's size attribute, the resource group's location attribute, and the NIC's private IP attribute. Things like the IP, VM size, and location often change per resource or project, so it always makes sense to use variables for those attributes.
virtual-machine.tf
Changes are highlighted in red.
virtual-network.tf
Changes are highlighted in red.
4. Deploying to Azure
For Windows, open a command prompt or a PowerShell and navigate to your Terraform project folder you created (in my case, Azure). From your project folder, run the following commands :
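As in Week 1, the workflow is the standard three commands:

```shell
terraform init    # initialize the working directory and download providers
terraform plan    # preview the changes, with sensitive values redacted
terraform apply   # deploy the resources to Azure
```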
You can see our environment variables in action here: the username has changed to localadmin, and the sensitive flag keeps the password from being displayed in the terraform plan output:
5. Wrap-Up
That wraps up Week 2 of the Terraform Basics series. Each week, we’re getting closer to a real-world Terraform environment. Next week, we’ll cover terraform.tfvars, secure the VM with a Network Security Group, and explore dynamic blocks.
I hope this was helpful to understand Terraform Basics, and hope to see you again in Week 3!