2025-12-07 22:41:17
This is a submission for the DEV Worldwide Show and Tell Challenge, presented by Mux.
I'm reusing a previous project of mine:
I built a portable device that captures its surroundings and enhances them with real-time insights and knowledge capabilities. Users can place the device in a meeting, for example, to get live transcription, ask questions of the AI it connects to, and receive automatic summaries. The system can package all of the content and send it by email for later review, analysis, or record keeping.
Since this project relies on dedicated hardware to function as intended, it's not possible to provide a full end-to-end demo without physically shipping a device to the judges.
However, I've created a simulation environment that allows you to preview the frontend experience and explore the core interactions:
Source Code:
Portable device for real-time audio transcription and interactive summaries.
This is the main repository for my submission to the AssemblyAI Challenge.
Each subfolder includes instructions for running the project locally.
For a more detailed overview, including screenshots, you can read the submission sent to the challenge here:
https://dev.to/milewski/echosense-your-pocket-sized-companion-for-smarter-meetings-3i71
This project was originally created for a dev.to hackathon, but the idea was inspired by a real-world observation. I often attend meetings where most participants are non-native English speakers. I noticed some coworkers relying on automatic captioning software installed on their computers just to keep up, either because their English isn't strong enough or because the mix of accents made things difficult to understand.
This was the root inspiration for EchoSense. I wanted to build something that not only provided real-time captioning, but went beyond what tools like Zoom or Teams offered at the time by adding features like live transcription, AI-powered insights, summaries, and more. I believe a device like this can be useful for a wide variety of users, and since it's simple to assemble and build, it could even be a fun weekend project.
I also drew inspiration from a few open-source hardware projects, such as:
Both share all the files and tutorials needed to build a "smart" device yourself. I love this approach: it's a great way to learn, experiment, and deepen your understanding of how hardware and software work together.
The stack isn't complicated. The device is built on an ESP32, but instead of using traditional C-based code, it's written in Rust. I chose Rust because I believe it's better suited for embedded development: it provides a modern developer experience, strong safety guarantees, and excellent performance. In practice, this made development significantly easier and less error-prone than if I had written it in C.
Due to hardware limitations, the device itself isn't capable of running transcription or large language model tasks locally. Instead, it captures audio and streams data to a third-party service for real-time transcription and AI processing, keeping the hardware lightweight while still enabling powerful functionality.
One interesting challenge I faced during this project was that the ESP32 variant I chose has extremely small stack memory. It would sometimes crash simply by receiving an extra item in a JSON array from the server... in other words, a JSON response of just a few kilobytes would instantly crash the device when attempting to parse it, because there wasn't enough memory to hold the data. This is almost unthinkable for the new generation of developers, who are used to working with effectively unlimited resources and architect software as if client devices have infinite RAM. In most cases today that mindset doesn't cause issues, because memory is abundant, but on a constrained device like this, every byte matters and small decisions can have huge consequences.
Because of this limitation, the software had to be written with extreme efficiency to avoid stack overflows while still handling tasks in real time and serving multiple clients.
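The defensive style this forces can be sketched in plain Rust. This is not the actual firmware code (which uses ESP32-specific crates); it's a host-side illustration of the core idea: put hard caps on input size and item count up front, and fail fast instead of letting a parse blow the stack. `MAX_ITEMS` and `MAX_INPUT` are illustrative values.

```rust
// Bounded parsing sketch: reject oversized input before touching it,
// and store results in a fixed-size array instead of a growable collection.
const MAX_ITEMS: usize = 8;
const MAX_INPUT: usize = 256;

fn parse_bounded(json: &str) -> Result<([i32; MAX_ITEMS], usize), &'static str> {
    if json.len() > MAX_INPUT {
        return Err("input too large"); // fail fast, never allocate
    }
    let inner = json
        .trim()
        .strip_prefix('[')
        .and_then(|s| s.strip_suffix(']'))
        .ok_or("not a JSON array")?;
    let mut out = [0i32; MAX_ITEMS];
    let mut n = 0;
    for part in inner.split(',') {
        let part = part.trim();
        if part.is_empty() {
            continue;
        }
        if n == MAX_ITEMS {
            return Err("too many items"); // cap instead of overflowing
        }
        out[n] = part.parse().map_err(|_| "bad number")?;
        n += 1;
    }
    Ok((out, n))
}

fn main() {
    let (items, n) = parse_bounded("[1, 2, 3]").expect("small array parses");
    assert_eq!(&items[..n], &[1, 2, 3]);
    assert!(parse_bounded("[1,2,3,4,5,6,7,8,9]").is_err());
    println!("ok");
}
```

On a real device the same principle applies to every buffer in the pipeline, from the network receive buffer to the parsed representation.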
This experience gave me a glimpse of what it must have been like to build computer software or video games when machines were orders of magnitude slower than this microcontroller. It's fascinating how far computing has come, and how disconnected most modern developers are from the constraints that used to define everyday programming.
2025-12-07 22:39:48
I used to be a passionate tinkerer when it came to Vim/Neovim configuration. Lately, however, that passion has cooled. The last time I significantly touched my config was back in January of this year. Since then, I've been using Neovim daily without making any fixes or updates.
After nearly a year of stagnation, I happened to update the lock file of my configuration, which is managed using Nix. This triggered a realization: it was time to clear out the accumulating deprecation warnings and modernize my setup to keep up with the evolving Neovim scene. Here is a log of my "Spring Cleaning" in December.
:checkhealth Warnings
For a while (before updating the lock file), I had been ignoring a persistent warning in my :messages regarding the usage of the obsolete vim.fn.sign_define function:
- ⚠️ WARNING Defining diagnostic signs with :sign-define or sign_define() is deprecated. Feature will be removed in Nvim 0.12
- ADVICE:
- use vim.diagnostic.config() instead.
I finally decided to stop ignoring it. After reading :h diagnostic-signs, I refactored the code to use the modern vim.diagnostic.config API:
diff --git a/lua/m15a/plugins/lspconfig/init.lua b/lua/m15a/plugins/lspconfig/init.lua
index 3cfbd2124d..8644fedc68 100644
--- a/lua/m15a/plugins/lspconfig/init.lua
+++ b/lua/m15a/plugins/lspconfig/init.lua
@@ -97,12 +97,19 @@
end
local function setup_diagnostic_signs()
- -- Change diagnostic signs in the gutter:
- -- https://github.com/neovim/nvim-lspconfig/wiki/UI-Customization#change-diagnostic-symbols-in-the-sign-column-gutter
+ local signs = {
+ text = {},
+ texthl = {},
+ numhl = {},
+ }
for type, text in pairs(assets.fonts.diagnostic) do
+ local severity = vim.diagnostic.severity[type:gsub('^%l+', string.upper)]
local hl = 'DiagnosticSign' .. type:gsub('^%l', string.upper)
- vim.fn.sign_define(hl, { text = text, texthl = hl, numhl = hl })
+ signs.text[severity] = text
+ signs.texthl[severity] = hl
+ signs.numhl[severity] = hl
end
+ vim.diagnostic.config { signs = signs }
end
function M.setup()
The "framework" API of nvim-lspconfig was deprecated (Sep 18, 2025), now that Neovim supports vim.lsp.config (as of Dec 11, 2024). To align with this shift, I updated the legacy API calls in my config:
Replacing require("lspconfig") with vim.lsp.config:
diff --git a/lua/m15a/plugins/lspconfig/init.lua b/lua/m15a/plugins/lspconfig/init.lua
index a6f8dbbfce..d99c110643 100644
--- a/lua/m15a/plugins/lspconfig/init.lua
+++ b/lua/m15a/plugins/lspconfig/init.lua
@@ -1,6 +1,5 @@
local namespace = ...
-local lspconfig = require 'lspconfig'
local tbuiltin = require 'telescope.builtin'
local assets = require 'm15a.assets'
local keymap = require 'm15a.keymap'
@@ -27,7 +26,8 @@
local function setup_servers()
for server, cmd in pairs(servers) do
if vim.fn.executable(cmd) > 0 then
- lspconfig[server].setup(require_server_config(server))
+ vim.lsp.config(server, require_server_config(server))
+ vim.lsp.enable(server)
end
end
end
I reconsidered my historical Telescope key mappings to improve ergonomics. Previously, opening a picker required three keystrokes (e.g., <Leader>ef, <Leader>eg, etc.), as configured below:
vim.api.nvim_set_keymap("", "[telescope]", "<Nop>", { noremap = true })
vim.api.nvim_set_keymap("", "<Leader>e", "[telescope]", {})
vim.keymap.set("n", "[telescope]f", require("telescope.builtin").find_files)
I have now reduced this to just two keystrokes: <Leader>f, <Leader>g, etc. It feels much faster and more direct.
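The simplified version is just direct mappings under `<Leader>`, roughly like this (a sketch; pairing `live_grep` with `<Leader>g` is my assumption from the `f`/`g` naming, not something shown above):

```lua
-- Two keystrokes instead of three: map pickers directly under <Leader>.
local tbuiltin = require 'telescope.builtin'
vim.keymap.set('n', '<Leader>f', tbuiltin.find_files)
vim.keymap.set('n', '<Leader>g', tbuiltin.live_grep)
```

This also drops the intermediate `[telescope]` pseudo-key entirely, so there is one less layer of indirection to maintain.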
This plugin offers a massive number of syntax textobjects. In the past, I blindly mapped almost all of them. However, in practice, I only use a handful. I decided to keep only the essentials:
keymaps = {
["ik"] = "@class.inner",
["ak"] = "@class.outer",
["ic"] = "@comment.inner",
["ac"] = "@comment.outer",
["im"] = "@function.inner",
["am"] = "@function.outer",
["ia"] = "@parameter.inner",
["aa"] = "@parameter.outer",
},
The repository freddiehaddad/feline.nvim—a previously maintained fork of the original feline.nvim—was deleted. I was a fan of this minimalist statusline framework, but with the repo gone, migration was necessary. I decided to switch to lualine.nvim, which is another minimal (and widely adopted) statusline plugin.
The dressing.nvim repository has been archived. I read the issue where the author explains the decision:
[F]olke came out with snacks.nvim and it contains both a fuzzy
vim.ui.select implementation and a good vim.ui.input. Now, at long
last, I can say that this plugin has outlived its usefulness and
retire it.
I looked at the suggested successor, snacks.nvim. However, it comes with far more features than I need. I prefer a simpler plugin for this functionality. For now, I have decided to keep using the archived dressing.nvim until a more focused alternative appears.
I migrated from nvim-lastplace to lastplace.nvim. Although I knew the nvim-lastplace repository had been archived for a while, I kept using it simply because it worked. This update brings me back to a maintained version.
I decided to remove markview.nvim. I realized that I dislike heavily concealed text. The dynamic width changes that occur when switching between normal and insert modes were becoming visually distracting.
I removed gitsigns.nvim in favor of Jujutsu, which I have recently started using as my Git frontend.
I had overlooked it previously, but it is incredibly helpful when editing HTML.
I have seldom used folding and its plugins. My preference has been that code should be split into small enough files so that folding becomes unnecessary.
However, I realized that for certain tasks—specifically scripts for deploying to pipeline orchestrators that prohibit external packaging—I am often forced to put many modules into a single script file. In these cases, folding helps me. I decided to introduce nvim-origami to handle this.
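My fold setup is roughly the following (a sketch: nvim-origami's own options are left at their defaults, and using the treesitter fold expression is an assumption for illustration, not something shown above):

```lua
-- Fold via treesitter, but keep everything open until I fold manually.
vim.o.foldmethod = 'expr'
vim.o.foldexpr = 'v:lua.vim.treesitter.foldexpr()'
vim.o.foldlevelstart = 99
require('origami').setup {}
```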
It took about a week of spare time to finish this configuration overhaul. My next major update will likely happen after nvim-treesitter releases its main branch, as that is expected to bring several breaking changes.
2025-12-07 22:38:20
In today’s fast-paced digital world, designers and developers need tools that are fast, intuitive, and efficient. Tools that reduce repetitive work, speed up experimentation, and help produce clean, professional results — without wasting hours adjusting values manually.
That’s exactly why Codezelo was created.
A free platform offering a suite of interactive CSS tools built to help front-end developers and UI/UX designers craft beautiful components with real-time preview and ready-to-use CSS code.
In this article, you’ll discover the 7 most powerful tools available right now on Codezelo — and how each one can level up your workflow.
Create smooth, elegant corners in seconds
The Border Radius Playground allows you to experiment with different corner radius values using a clean and simple interface.
Whether you're designing cards, buttons, or layout components, the tool gives you precise control over each corner — with instant visual feedback.
Design clean, realistic shadows effortlessly
Shadows play a major role in visual hierarchy and depth.
This tool lets you fine-tune offsets, blur, spread, and color — and preview every change instantly, making it easy to create shadows that fit your design perfectly.
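For reference, those four knobs map directly onto the standard `box-shadow` shorthand (the values here are illustrative, not output from the tool):

```css
/* offset-x | offset-y | blur | spread | color */
.card {
  box-shadow: 0 4px 12px 2px rgba(0, 0, 0, 0.15);
}
```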
Turn colors into stunning visual backgrounds
Perfect for modern UI design, this tool lets you create beautiful color gradients with full control over direction, color stops, and style.
Bring motion to your UI with ease
Animations make interfaces feel alive and engaging.
Instead of writing keyframes from scratch, this tool allows you to visually control duration, delay, timing function, and iteration — with a live preview.
Master responsive layout design
Flexbox is one of the most essential tools in modern web layout creation.
This playground provides a hands-on environment to experiment with all major Flexbox properties and instantly understand how each value behaves.
Build complex grid layouts effortlessly
CSS Grid is powerful — but sometimes tricky.
This tool makes grid design intuitive, allowing you to adjust columns, rows, spacing, and gaps visually.
Experiment with transformations visually
Transforms are great for creating dynamic and interactive UI effects.
This tool lets you try out scale, rotate, translate, and skew values in real time — and see immediate results on the element.
If you're looking to speed up your workflow, experiment faster, and create cleaner UI components — Codezelo offers tools that make your work simpler and more enjoyable.
Free tools • Faster design • Better results
(https://www.codezelo.com/en/tags/css-tools/)
Whether you're a front-end developer, a UI/UX designer, or even a beginner exploring CSS, these tools will help you build cleaner, more professional designs — without writing complicated code.
If you found this article helpful, feel free to share it to help more developers discover these tools ❤️
2025-12-07 22:37:15
🚀 I just published my first Kotlin app — would love your feedback!
Hey everyone!
After months of learning Kotlin + Jetpack Compose, I finally released my first Android app on Google Play:
👉 Programming Quiz Guide
Test your knowledge in Java, C++, Python, Angular & more — with explanations after every answer.
Google Play link:
https://play.google.com/store/apps/details?id=com.lydatakis.CodeMentorQuiz
🎯 Why I built this
I’ve been teaching myself Android development, and I wanted a project that:
I also personally enjoy learning through quizzes, so I decided to build a developer-focused quiz app.
🔧 What’s inside
🧰 Tech Stack
If anyone is interested, I’m happy to share code snippets or explain how I implemented anything.
💬 Thanks!
If you have a moment to try it or leave a review, it would mean a lot.
Happy to answer any questions about how I built it — maybe it helps other beginners too!
Thanks for reading 🙌
2025-12-07 22:32:14
All of us have encountered this situation: inheriting a project that has remained untouched for years, only to discover it is now outdated. Recently, I was assigned to such a project with a clear directive to modernise a legacy application built on .NET Framework 4.7 and successfully migrate it to the latest LTS release, .NET 10.
It was a journey filled with architectural archaeology, AI tooling, and some tough decisions. Here is the story of how we managed the upgrade of this legacy application using Visual Studio 2026, and what we learned along the way.
My journey began with a classic induction meeting with my Line Manager and Product Owner. Once the pleasantries were done, I popped the hood to see how the project was structured.
As expected with legacy code, there wasn't much of a "structure" to speak of. It was a flat architecture where almost all business logic lived directly inside the Controllers. Even worse, database contexts were being instantiated right there in the controller methods, something like this:
public class LegacyController : Controller
{
    public ActionResult Index()
    {
        // Direct instantiation inside the controller? Check.
        using (var db = new MyDbContext()) { ... }
    }
}
I knew that if I started refactoring immediately, I would generate a massive amount of changes that would be impossible to review properly, increasing the risk of failure.
I decided to run the application to understand the flow without getting bogged down in the nitty-gritty of every single class. I also ran the existing unit tests. Unsurprisingly, red lights flashed everywhere.
The project lacked a proper CI/CD pipeline, so over time, business logic had drifted, while the tests had been forgotten.
I went back to the Azure Board and tagged my Line Manager and PO with a plan. I proposed a strict order of operations to mitigate risk:
I spent a few days fixing bugs in the test suite until everything was green. Only then was I ready for the big lift.
Using the newly released Visual Studio 2026, I started by auditing our NuGet packages and third-party references.
One significant hurdle was a reference to Excel-DNA. When upgrading, checking vendor compatibility is non-negotiable. I had to comb through their documentation to find their specific upgrade process for .NET 10 compatibility.
We were also using Entity Framework 6 (EF6). While EF6 is technically compatible with .NET 10, the long-term goal is obviously Entity Framework Core. However, trying to do that migration simultaneously would have blown up the scope. My manager and I agreed: get to .NET 10 first, upgrade to EF Core later.
Visual Studio 2026 offers a tool called Modernizer, which leverages AI (ChatGPT, Gemini, Claude) to handle upgrades.
| Tool | Verdict |
|---|---|
| Modernizer (AI) | Fantastic for simple class libraries. It proposed excellent upgrade plans and did the heavy lifting. |
| Legacy Upgrade Assistant | Necessary for the complex UI projects. The AI struggled with our old AngularJS/MVC mix, so I had to enable this legacy tool via Tools > Options. |
I started with the solution's four projects. Three were simple class libraries, so I used the Modernizer AI tool to upgrade them to .NET 10 first.
Pro Tip: When upgrading, resist the urge to use new features immediately. .NET 10 has cool toys like file-scoped namespaces, but mixing feature changes with framework upgrades creates noisy Pull Requests (PRs). I kept the code changes minimal so my team could actually review the PR.
The main website was an ASP.NET MVC application with AngularJS. The Legacy Upgrade Assistant handled the C# conversion well, but it had no idea what to do with the UI.
The Bundling Problem:
Old ASP.NET had built-in bundling for CSS and JS. .NET 10 doesn't work that way. I had to:
I decided to use Gulp to compile our TypeScript and LESS files and bundle them. I utilised AI heavily to generate the Gulp tasks (which worked perfectly), creating a single output file referenced in our index.html.
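A gulpfile for that kind of pipeline looks roughly like this. This is an illustrative sketch, not the generated one from the project: it assumes the `gulp`, `gulp-typescript`, `gulp-less`, and `gulp-concat` packages, and the paths and bundle names are made up.

```javascript
// gulpfile.js — compile TypeScript and LESS, concatenate each into one bundle.
const gulp = require('gulp');
const ts = require('gulp-typescript');
const less = require('gulp-less');
const concat = require('gulp-concat');

const tsProject = ts.createProject('tsconfig.json');

function scripts() {
  return tsProject.src()
    .pipe(tsProject())
    .js
    .pipe(concat('app.bundle.js'))   // single file referenced in index.html
    .pipe(gulp.dest('wwwroot/dist'));
}

function styles() {
  return gulp.src('Styles/**/*.less')
    .pipe(less())
    .pipe(concat('app.bundle.css'))
    .pipe(gulp.dest('wwwroot/dist'));
}

exports.default = gulp.parallel(scripts, styles);
```

Wiring `exports.default` to `gulp.parallel` means a bare `gulp` command rebuilds both bundles; a watch task can be layered on later.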
Here is where I had to break my own rule about "minimal changes." The old code was so non-SOLID that I literally couldn't inject the services I needed into the controllers.
I had to refactor the presentation layer, introducing the Unit of Work pattern and cleaning up the dependency injection. It made the PR larger, but it was unavoidable to get the application running.
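The direction of that refactor can be sketched as follows. All the type names here (`IUnitOfWork`, `IRepository<T>`, `Customer`) are illustrative stand-ins, not the real project's types; the point is that the controller receives its dependencies via the constructor instead of new-ing a DbContext inside the action.

```csharp
// Hypothetical sketch of the refactored shape (not the project's actual code).
public interface IRepository<T>
{
    IEnumerable<T> GetAll();
}

public interface IUnitOfWork
{
    IRepository<Customer> Customers { get; }
    void SaveChanges();
}

public class CustomerController : Controller
{
    private readonly IUnitOfWork _unitOfWork;

    // Resolved by the DI container instead of instantiated in the action.
    public CustomerController(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public IActionResult Index()
    {
        var customers = _unitOfWork.Customers.GetAll();
        return View(customers);
    }
}
```

With this shape in place, the DbContext lives behind the unit of work, which makes the later EF6-to-EF Core swap a much smaller change.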
With the refactoring done, I ran the project. Miraculously (or perhaps due to the preparation), it worked.
I performed end-to-end testing, deployed to the test environment, and handed it over to QA. Everything looked green. I opened the PR, which was only "50% of the job" considering the future work needed, but the platform was now on .NET 10.
We successfully reached .NET 10, but the journey isn't over. We still need to upgrade to EF Core and add observability.
However, the business has shifted priorities. The old AngularJS frontend is now a security vulnerability. We aren't just upgrading to modern Angular; my manager wants a complete rewrite of the presentation layer using React.
There’s just one catch: I don't know React very well yet.
It’s going to be a challenging task, but I plan to tackle it head-on. Stay tuned, I’ll be sharing how I manage that learning curve in my next post.
Happy coding!