2026-01-05 16:00:00
I ignored Starter Content at first.
I thought it was optional and not that important.
Day 10 showed me why beginners shouldn’t skip it.
This post is part of my daily learning journey in game development.
I’m sharing what I learn each day — the basics, the confusion, and the real progress —
from the perspective of a beginner.
On Day 10, I learned about Starter Content in Unreal Engine and how Selection Mode works.
While creating a new Unreal project, there is an option to include Starter Content.
It adds basic materials, meshes, and props that help you test things quickly.
I also learned that if you already created a project without Starter Content, you can still add it later.
Along with this, I explored Selection Mode, which controls how objects are selected and manipulated inside the viewport.
At first, I thought Starter Content could only be added during project creation.
I didn’t know it was possible to add it afterward.
Selection Mode also confused me because sometimes I couldn’t select or move objects properly, and I didn’t understand why.
Once I added Starter Content manually, things became easier.
Having ready-made assets helped me focus on learning Unreal Engine instead of searching for resources.
Understanding Selection Mode helped me realize how important proper selection is when working in the viewport.
Small things — but they make a big difference for beginners.
Slow progress — but I’m building a strong foundation.
If you’re also learning game development,
what was the first thing that confused you when you started?
See you in the next post 🎮🚀
2026-01-05 15:58:41
As I stated in a previous post, I am slowly but surely migrating from Big Tech. This journey started many years ago but built further momentum during 2025, with the questionable actions of major players in both technology and social media. My focus is now on Open Source and privacy-focused applications and services.
Some time ago I picked up an 8GB Raspberry Pi 4. I used this as my daily driver for quite a while, undertaking some basic Python lessons using Thonny, an excellent IDE for fledgling Pythonistas, which I still use on all my machines while I weigh up the pros and cons of things like Pulsar Editor, PyCharm or Zed Editor.
I use Joplin to take any lesson notes but aim to move over to something like Logseq for storing snippets of information and linking them. Calibre is used to sort my eBook collection, including the excellent Automate the Boring Stuff with Python by Al Sweigart and a number of useful guides by Flavio Copes.
A Lenovo G50 laptop has been happily running Manjaro Linux with Xfce for quite some time now and is my back-up development machine. Before the skyrocketing prices of RAM, I was going to build a new desktop PC with Fedora KDE. This will still happen, but probably not until 2027. In the meantime, I aim to simply dual boot my ASUS TUF laptop as I move further away from dependence on Microsoft.
Syncthing keeps my project files synchronised across my machines and, along with an external hard drive and Déjà Dup, I use cloud services such as Filen for backup purposes. This works extremely well for me, though I do plan to use something like Nextcloud or OpenMediaVault in the future.
While I refreshed my GitHub account at the start of the year, I will be moving to Codeberg before the end of 2026. I will probably join the Fediverse as well at some point, to have some kind of presence on decentralised social media.
There are other Open Source tools and services I use, but this post has covered most of the ones I rely on daily. With some research, you may find one or two excellent alternatives to use yourself.
2026-01-05 15:58:13
Stack Overflow has become a double-edged sword for the developer community. While it's an invaluable resource for technical knowledge, it has significant issues:
Toxic Environment: Beginners often face harsh comments and downvotes for "not searching enough" or asking "too basic" questions. This discourages newcomers from learning.
Geo-blocking: For developers in countries like Iran, accessing Stack Overflow and many coding platforms is either blocked or severely restricted. This creates an unequal playing field in the global tech community.
Privacy Concerns: All questions and your entire history are public by default. There's no easy way to ask sensitive questions about proprietary code or internal systems.
I built CodeAnswr to address these pain points with a modern, inclusive approach:
Building CodeAnswr serverless was the right choice for a solo developer with big dreams:
Frontend: SvelteKit
Database: Cloudflare D1
Compute: Cloudflare Workers
AI Integration: Claude API
CodeAnswr is now live, completely serverless, and scales globally without the complexity of traditional infrastructure. The edge-database performance is exceptional, with queries completing in under 100ms even for complex operations.
I'm continuously improving CodeAnswr based on user feedback. If you're interested in joining a toxicity-free, privacy-respecting, globally accessible coding Q&A platform, check it out at https://codeanswr.com
Feedback and contributions are welcome! If you're a developer who's felt the pain points I mentioned, I'd love to hear your thoughts.
Have you experienced these problems with existing coding platforms? What features would you add to an ideal Q&A platform? Drop your thoughts in the comments!
We just fixed our OAuth flow and it's now 100% operational. Check out the leaderboard to see the top contributors!
2026-01-05 15:55:02
Retrieval-Augmented Generation (RAG) is often presented as a prompting technique or a lightweight runtime enhancement for LLMs. While this may work for demos, it breaks down quickly once you try to build a production-ready AI backend system with FastAPI.
The moment you want persistence, reproducibility, scalability, and clear separation of responsibilities, RAG inevitably leads to a vector database, because similarity-based retrieval cannot be treated as a stateless runtime concern. The vector database is not an optional optimization, but the central infrastructure component that makes retrieval reliable and operational.
This article focuses exactly on that transition by integrating RAG into a FastAPI backend and treating the vector store as a first-class backend dependency that is configured, injected, and consumed like any other production system component.
In current literature, Retrieval-Augmented Generation is not described as a single step or a simple pipeline, but as a composition of multiple responsibilities that together enable grounded generation.
At its core, RAG combines:
While these responsibilities are often collapsed into a single conceptual flow, in a production backend they naturally separate into distinct backend concerns. Knowledge must first be ingested, transformed, embedded, and stored in a way that allows efficient semantic access. Retrieval then becomes a query-time operation that selects relevant information based on vector similarity. Only after this retrieval step does the generative model come into play.
Seen this way, RAG is not a monolithic process but an architectural pattern that explicitly connects storage, retrieval, and generation. The vector database forms the backbone of this pattern, acting as the system of record for knowledge and as the execution layer for retrieval. This perspective makes it clear why RAG belongs in the backend infrastructure and not inside the AI logic itself.
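Underneath, similarity-based retrieval means comparing embedding vectors. As a toy illustration (plain Python, not this project's code; Qdrant performs this at scale with indexed search over the configured distance metric), cosine similarity scores how alike two vectors are:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score how closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# A query embedding is scored against every stored chunk embedding;
# the top-k highest-scoring chunks become the context handed to the LLM.
query = [0.9, 0.1, 0.0]
chunks = {"chunk_a": [1.0, 0.0, 0.0], "chunk_b": [0.0, 1.0, 0.0]}
best = max(chunks, key=lambda name: cosine_similarity(query, chunks[name]))
print(best)  # chunk_a
```

This is exactly the operation that cannot remain a stateless runtime concern: the chunk vectors must live somewhere persistent and indexed, which is the vector database's job.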
To keep the focus sharp, this article deliberately excludes:
The goal here is not to explain how to generate embeddings, but how to integrate a vector-based RAG component cleanly into a FastAPI backend.
With the architectural role of RAG clarified, the next step is to materialize it as an actual backend dependency.
At the center of the RAG setup sits the vector store. In this project, it is implemented using Qdrant via LangChain. The QdrantVectorStore used here is a LangChain-provided abstraction that encapsulates all communication with the underlying vector database. It is responsible for embedding queries, executing similarity searches, and mapping results back into document objects. For simplicity and clarity, this project relies on this existing LangChain implementation rather than introducing a custom database layer. By returning the QdrantVectorStore as a VectorStore dependency, the application stays decoupled from Qdrant-specific details while still leveraging a production-ready vector database.
def init_qdrant_vector_store(settings: Settings = Depends(get_settings)) -> VectorStore:
    """
    Initialize and return the vector store used for retrieval.
    """
    embeddings = get_openai_embeddings(
        settings.qdrant_vector_store.embedding_model,
        settings.openai_model.api_key
    )
    client = QdrantClient(path=settings.qdrant_vector_store.path)
    if not client.collection_exists(
        collection_name=settings.qdrant_vector_store.collection_name
    ):
        client.create_collection(
            collection_name=settings.qdrant_vector_store.collection_name,
            vectors_config=VectorParams(
                size=settings.qdrant_vector_store.vector_size,
                distance=settings.qdrant_vector_store.distance
            )
        )
    return QdrantVectorStore(
        client=client,
        collection_name=settings.qdrant_vector_store.collection_name,
        embedding=embeddings,
    )
This dependency is responsible for:
From the rest of the application’s perspective, this behaves exactly like a database connection.
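The get_settings dependency consumed above isn't shown in this article. A common FastAPI pattern is to cache a single settings instance; the sketch below is an assumption about how it could look (field names mirror the attributes read in the code above, and real projects would typically use pydantic-settings with environment variables rather than dataclasses with hardcoded defaults):

```python
from dataclasses import dataclass
from functools import lru_cache

@dataclass(frozen=True)
class QdrantVectorStoreSettings:
    # Hypothetical defaults, purely for illustration
    path: str = "./qdrant_data"
    collection_name: str = "documents"
    vector_size: int = 1536
    embedding_model: str = "text-embedding-3-small"

@dataclass(frozen=True)
class Settings:
    qdrant_vector_store: QdrantVectorStoreSettings = QdrantVectorStoreSettings()

@lru_cache
def get_settings() -> Settings:
    # Cached: built once, then reused by every Depends(get_settings) in the app
    return Settings()
```

Caching matters here because init_qdrant_vector_store runs per request; a stable settings object keeps configuration cheap and consistent across dependencies.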
Before retrieval can happen, the vector store must be populated. This is handled via a deliberately minimal upload endpoint /upload/chunks.
@router.post("/chunks", response_model=UploadResponse)
def upload_chunks(
    documents: DocumentChunks,
    vector_store: QdrantVectorStore = Depends(init_qdrant_vector_store)
):
    if len(documents.chunks) == 0:
        raise HTTPException(status_code=400, detail="No chunks to upload found")
    uuids = [str(uuid4()) for _ in range(len(documents.chunks))]
    chunk_ids_added = vector_store.add_documents(
        documents=documents.chunks,
        ids=uuids
    )
    if len(chunk_ids_added) == 0:
        raise HTTPException(
            status_code=500,
            detail="Uploading chunks to vector store failed"
        )
    return UploadResponse(
        success=True,
        message=f"{len(documents.chunks)} chunks uploaded"
    )
This endpoint assumes:
This reinforces the idea that RAG starts with data ingestion, not with prompting.
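The endpoint above expects documents that have already been chunked. As a minimal client-side sketch (an assumed strategy using fixed-size character windows with overlap; real pipelines usually reach for a proper text splitter such as LangChain's), chunking could look like:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size character windows."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    step = chunk_size - overlap
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

# Each window would then be wrapped in a document object and POSTed to /upload/chunks.
chunks = chunk_text("word " * 300, chunk_size=200, overlap=20)
print(len(chunks))  # 9 windows covering the 1500-character input
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk, at the cost of some storage redundancy.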
With data in place, the query endpoint becomes straightforward.
@router.post(path="/query", response_model=Insight)
def create_insight(
    request: InsightQuery,
    settings: Settings = Depends(get_settings),
    llm: BaseChatModel = Depends(init_openai_chat_model),
    vector_store: VectorStore = Depends(init_qdrant_vector_store)
):
Here, the vector store is injected alongside the LLM. Neither depends on the other. They are simply resources orchestrated by the endpoint.
The RAG chain then pulls context from the retriever and passes it into the prompt.
The important point is not the retrieval logic itself, but where it lives:
This clean separation is what allows RAG to exist as a backend component rather than bleeding into AI logic.
def run_rag_insight_chain(
    prompt_messages: ChatModelPrompt,
    llm: BaseChatModel,
    retriever: VectorStoreRetriever,
    question: str
) -> Insight:
    context = retriever.invoke(question)
    prompt_template = ChatPromptTemplate([
        ("system", prompt_messages.system),
        ("human", prompt_messages.human)
    ])
    parser = PydanticOutputParser(pydantic_object=Insight)
    chain = prompt_template | llm | parser
    return chain.invoke({
        "format_instruction": parser.get_format_instructions(),
        "question": question,
        "context": context
    })
The agent itself remains completely unaware of how the context was created — exactly as it should be.
By modeling the vector database as a dependency:
Once treated that way, it naturally fits into dependency injection, lifecycle management, and clean system boundaries.
In a production AI backend, RAG works through a vector store rather than through ad-hoc logic embedded in the AI layer. The vector store is the RAG system. Everything else simply consumes it.
RAG does not need to complicate your AI backend. When implemented via a vector store and injected like any other backend resource, it becomes predictable, scalable, and maintainable.
By separating ingestion, retrieval, and generation, you gain the freedom to evolve each part independently without turning your AI code into a tightly coupled system.
In the end, RAG is just another backend component. And treating it that way is exactly what makes it powerful.
💻 Code on GitHub: hamluk/fastapi-ai-backend/part-3
2026-01-05 15:55:00
Day 5 of 30. Today we're talking money - specifically, how to spend as little of it as possible.
Hosting costs can have a significant impact on a bootstrapped project. We've seen projects spend $1000+/month on infrastructure for apps with zero users, especially when running on bigger cloud providers like AWS or Azure. We're optimizing for cheap until our needs justify otherwise.
Our target for allscreenshots is to spend less than $20 USD/month for our hosting costs and CI/CD infrastructure. In this post we're diving into how we intend to get there.
Options like these are the current default developer choice, especially for Next.js apps. The platforms provide a great Developer Experience (DX), generous free tiers and, overall, painless deployment.
Why we didn't choose them:
While these might be viable options for some, we had a few reasons not to go with them:
For a static site or simple API, these are great. For a screenshot service running headless Chrome, they're not the
right fit.
The enterprise options. Almost infinitely scalable, but at the cost of both money and complexity.
Why we didn't choose them:
While in the future we might need a bigger scale, we don't see the need today. However, since we're running on Docker
images, migrating to a cloud provider could be relatively straightforward in the future.
These VPS providers offer a solid middle ground. They have simple, predictable pricing, good docs, and good capacity at reasonable cost.
Benefits:
Vultr is our favorite and go-to hosting provider, and we've used it successfully in other projects.
However, we found something cheaper which we are willing to give a try.
A German hosting company that's popular in Europe but less known in the US.
Why we chose them:
The value above seems like a very good deal, and we'll do some proper benchmarks just to make sure Hetzner is up to the job.
Here's exactly what we're running and what it costs:
| Service | Provider | Cost |
|---|---|---|
| VPS (2 vCPU, 4GB RAM) | Hetzner CX22 | $4.50/month |
| Object Storage (screenshots) | Cloudflare R2 | $0 (free tier) |
| Domain | NameCheap | ~$10/year |
| Email (transactional) | Resend | $0 (free tier) |
| SSL | Let's Encrypt | $0 |
| CI/CD | GitHub Actions | $0 |
| Container Registry | GitHub Packages | $0 |
Total: ~$5.30/month ($4.50 for the VPS plus ~$0.80/month amortized domain cost)
Well under our $20 budget, leaving room for growth.
Screenshots need to live somewhere, and an S3 compatible storage is the standard. We have several options:
R2 gives us 10GB storage and 10 million reads/month free. That's a lot of screenshots before we pay anything. And when
we do pay, it's $0.015/GB/month for storage with no egress fees.
For a service that's literally serving images, zero egress fees is huge.
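As a quick sanity check on those numbers (the average screenshot size is our assumption, purely for illustration), here's how far the free tier stretches and what paid storage would cost at the cited rate:

```python
FREE_STORAGE_GB = 10        # R2 free tier cited above
PRICE_PER_GB_MONTH = 0.015  # R2 storage price cited above

AVG_SCREENSHOT_KB = 200     # assumed average screenshot size

screenshots_in_free_tier = FREE_STORAGE_GB * 1024 * 1024 // AVG_SCREENSHOT_KB
print(screenshots_in_free_tier)  # 52428 -- roughly 52k screenshots before paying anything

def monthly_storage_cost(total_gb: float) -> float:
    """Storage cost after the free tier, with zero egress fees."""
    billable_gb = max(0.0, total_gb - FREE_STORAGE_GB)
    return billable_gb * PRICE_PER_GB_MONTH

print(monthly_storage_cost(110))  # 100 billable GB at $0.015/GB/month = $1.50
```

Even at ten times our assumed volume, storage stays a rounding error next to the VPS bill, which is why the zero-egress pricing is the part that actually matters.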
Managed databases - We're running Postgres in Docker on the same VPS. Yes, we're responsible for backups (automated with a cron job to R2, plus the machine itself is backed up). But managed Postgres starts at $15+/month, with unpredictable costs at scale.
Managed Redis - We're not using Redis yet. Postgres handles our job queue, and caching will be done on the application layer. This is one less service to pay for and manage, which keeps things simple.
Log aggregation - We're using Docker's built-in logging for now. Fancier observability tools like Datadog, New Relic or an ELK stack can wait.
Monitoring - A free uptime checker (we're using UptimeRobot) and basic health endpoints.
CDN - Cloudflare's free tier in front of everything. Free SSL, free caching, free DDoS protection.
We're not being cheap for the sake of it; we see this as being frugal with our budget. We're making a conscious choice about how we spend it, while still allowing for growth.
These are example scenarios, and will most likely change over time, but a possible future plan could look like the following:
At 100 paying users: Move Postgres to a managed service, and backups and maintenance become worth paying for.
At 100,000 daily screenshots: Consider a second worker VPS or upgrade to a beefier single machine.
At $500/month revenue: Revisit all infrastructure decisions. We'll have data on what actually needs scaling.
Until then, we're using a slightly scrappy approach. If you have comments or suggestions, we'd love to hear from you!
Let's be honest about the trade-offs:
Single point of failure. One VPS means one machine to fail. We're accepting this risk. Hetzner has good uptime, and
we can restore from backups to a new VPS in under an hour.
Limited resources. 4GB RAM is enough for Postgres + Spring Boot + a couple of Playwright instances, but we can't run ten parallel browser sessions. We'll optimize carefully and queue jobs, and most likely spin up more machines when we need more capacity.
No geographic redundancy. All our infra is in one Hetzner data center. If that data center has issues, we're down.
Again, acceptable at this stage, but it's something we'll look into when we need to.
Manual scaling. Adding capacity means provisioning new VPS instances manually, since there is no auto-scaling. This is fine for now to keep our costs under control, but might change in the near future.
Total spent so far: $4.50 for the Hetzner VPS (first month). Everything else is on a free tier.
On Day 6 we're finally writing the fun code. Getting Playwright running, capturing our first programmatic screenshot, and
learning what breaks when you try to render the web.
The Frugal Architect by Werner Vogels
Werner Vogels is Amazon's CTO, and this collection of essays explores cost-conscious system design. The irony of AWS's
CTO writing about frugality isn't lost on us, but the principles are solid.
His core laws include "make cost a non-functional requirement," "unobserved systems lead to unknown costs," and
"cost-aware architectures implement cost controls."
The key insight for us: cost efficiency isn't something you bolt on later. It's a design constraint from day one.
Choosing Hetzner over AWS, R2 over S3, and Postgres-as-queue over Redis aren't just about saving money - they're about
building a sustainable business where unit economics work even at small scale.
The essays are free to read online, but we'd recommend bookmarking them for reference.
Current stats:
If you want to see the service in action, check out allscreenshots for a free trial.
2026-01-05 15:54:51
This is a submission for DEV's Worldwide Show and Tell Challenge, presented by Mux
I built Pilot, an Instagram automation and deal management platform for creators, founders, and small teams who sell through Instagram.
Pilot turns comments and DMs into a structured sales workflow. Instead of manually replying, losing leads, or tracking deals in screenshots, everything runs through one system: automated replies, lead capture, contact management, and deal tracking, all connected to your Instagram account.
It's built for people who already have demand and want clarity, not hacks.
To test:
Another way to test it is to leave a comment on this reel. Pilot will automatically reach out to you with the ebook described in the reel.
[Note: it might not work because I have run out of Gemini API credits]
Instagram has quietly become a real sales channel. Creators close brand deals there. Small businesses sell products there. Coaches and founders run their entire pipeline through DMs.
But the tooling hasn't caught up.
Most people still manage everything manually. Replies are copy-pasted. Leads get buried. There's no visibility into who's serious and who's not.
Pilot exists to bring structure to that chaos.
Not to replace how people sell on Instagram, but to support it with systems that scale.
Once someone experiences Instagram with a real pipeline behind it, they don't go back.
A key focus was personalisation.
The AI assistant works off the user's actual business data, contacts, and deal history, not generic prompts or canned replies.
By submitting this project, I confirm that my video adheres to Mux's terms of service: https://www.mux.com/terms