2025-11-20 01:54:32
I often say this program changed my life — and I mean it in the deepest, most genuine way. It has shaped my personal and professional growth in ways I never imagined as someone just exploring AWS.
My journey with AWS Community Builders began back in 2020, when I was invited to join while still an AWS Student Ambassador. At that time, I was just a kid fascinated by cloud, clueless about communities, and excited by every new AWS announcement. I had no idea how profoundly this program would impact my life.
When I joined, the program was very different from what it is today. Back then:
I remember seeing Jason managing everything basically alone — emails, spreadsheets, swag, onboarding, sessions — with support from a few AWS folks who already had full-time roles. It was grassroots, manual, and driven purely by passion.
Fast forward to today, the program has matured dramatically:
And with CBs growing 5–10×, the scale is phenomenal.
I’ve had the privilege of witnessing this evolution from both sides — as a learner and as a contributor.
The Community Builders program became the place where my curiosity turned into action.
Here, I:
In 2023, I achieved 12× AWS Certifications — earning the golden jacket, something I’m truly proud to have maintained through 2025.
As I went deeper into AWS during those years of learning and certification prep, my curiosity transformed into genuine technical depth. I’ve always considered myself an AWS geek — endlessly curious, obsessed with announcements, and excited to explore anything new. I would spin up architectures just to understand how something behaved under the hood, break things on purpose, fix them, and document everything along the way. Over time, I somehow became that “AWS friend” people messaged when they were stuck, and I loved being able to help.
Writing became the natural extension of that curiosity.
Every time I learned something, I blogged about it. Anything that fascinated me, I wrote about it. It became second nature — almost like thinking on paper.
I started writing consistently on dev.to, at a time when not many CBs were publishing regularly. That consistency earned me multiple awards on the platform and introduced me to the world of technical writing professionally — and honestly, quite profitably. Over the years, I’ve written 100+ blog posts across platforms and built a writing routine strong enough that it eventually pushed me to start my own AWS-focused newsletter, which now has nearly 50 editions filled with insights, architectures, updates, and learnings I wanted to share with the community.
There were experiments beyond writing, too.
A few of us even created a GitHub organization for CBs, hoping it would become a shared space where we could collaborate on projects, store useful repos, or build tools together. It didn’t fully take off, but it reflected the energy, ambition, and creativity of the early days of the program.
And like any meaningful journey, mine had lessons.
During the phase when I was publishing AWS whitepaper reviews, some posts triggered plagiarism concerns because referencing the material closely made it appear too similar. I did receive warnings from Jason. Even though my intentions were good — to review, not replicate — it taught me how to be more careful, how to credit properly, and how to write in a way that respects original sources while still adding my own perspective. Those lessons shaped me as a writer and as a community contributor.
As my writing gained momentum, something else naturally followed: speaking.
What began as small sessions for friends or college students gradually grew into meetups, community events, and conference talks. Writing helped me articulate ideas; speaking helped me communicate them. Together, they pushed me out of my comfort zone and into roles I never imagined myself taking.
To date, I’ve delivered 100+ talks across:
Every talk made me better at breaking down complex concepts. Every audience brought new questions that pushed my understanding deeper. And every session reminded me why I love this field so much.
This combination — deep technical curiosity, 100+ blog posts, a 50-edition newsletter, dev.to awards, experiments like the CB GitHub org, lessons from early mistakes, and 100+ speaking engagements — has shaped a huge part of who I am in the AWS ecosystem. It built my voice, strengthened my confidence, and helped me contribute meaningfully to the community that shaped me.
In 2022, through the CB Slack, I connected with Farrah and Walid — and got an opportunity to attend KubeCon in person. That event changed my community trajectory.
I discovered AWS User Groups — something I had heard CBs talk about but never found locally. AWS UG Vadodara had no social presence then, so I didn’t even know it existed.
Once I joined, everything shifted.
I became a co-organizer in 2023, and unexpectedly, a bridge — connecting CBs with UGs and vice versa. Being part of a community at that scale, on-ground and in-person, was transformative for me. I’ll write a separate post on this journey, but it’s one of the most meaningful chapters of my life.
If I have to pick one thing that defines my experience, it’s the people.
I’ve met:
Some I learned from.
Some learned from me.
Some became travel buddies to summits and community events.
I’ve mentored 100+ people for certifications and another 100+ on how to become a Community Builder.
And I’ve met CBs who later became AWS Heroes, or joined AWS, or built something incredible. Knowing I played even a small part in their journey means the world to me.
Yes, the swag has been next level.
At home, there’s literally a dedicated cupboard just for AWS goodies.
Yes, I received exam vouchers, AWS credits, and plenty of opportunities to experiment and build.
Yes, I got to speak at virtual, local, and even international events — something unimaginable for the shy kid I used to be.
But more than all of that…
…this program gave me confidence.
It gave me community.
It gave me belonging.
It gave me a platform to grow — personally and professionally.
From being scared to speak in public, to traveling cities and countries to share knowledge — I owe a big part of who I am today to this program.
I truly appreciate this program — not as a title or badge, but as a catalyst that shaped my career and identity.
Thank you to every AWS Community Builder past and present.
Thank you to everyone who answered my questions, read my blogs, mentored me, or let me mentor them.
Thank you to the people who built this space, nurtured it, and made it what it is today.
And a very special thank you to the folks like Jason, Farrah, and the entire AWS community team — who work behind the scenes to keep this ecosystem thriving.
This community helped me grow.
It helped me find my voice.
And it continues to inspire me every single day.
Thank you for building something that genuinely changes lives — including mine.
2025-11-20 01:48:57
If you’re using Makefiles — add a make help command.
It auto-parses comments and shows all tasks instantly.
Simple hack, huge productivity boost.
# Show this help (targets need a leading "# description" comment)
help:
	@awk 'BEGIN {FS=":"} \
	/^#/ {comment=substr($$0,3)} \
	/^[a-zA-Z0-9_-]+:/ {printf "\033[36m%-20s\033[0m %s\n", $$1, comment; comment=""}' Makefile
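With that in place, any target preceded by a # comment shows up in the output: for example, putting # Build the project on the line above a hypothetical build: target makes make help print build in color with that description beside it. The comment is reset after each target so descriptions don't leak onto undocumented targets.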
2025-11-20 01:48:14
SQL Server 2025 was officially announced at Microsoft Ignite 2025 and is now generally available. It had been in public preview since May 2025, and customers had been eagerly anticipating the official release of one of Microsoft's most popular software products. I had previously written a fun article where I used AI and historical data (previews, events, and past release dates) to try to guess the release date of SQL Server 2025.
Now it's finally here, and with it comes a whole host of highly anticipated features, most notably AI integration.
Built-in AI & vector search
Modern developer productivity enhancements
Security, performance & scalability
Hybrid and cloud-centric analytics & data integration
2025-11-20 01:46:14
How to Delete or Deactivate an Indodana Account: To cancel an Indodana service (either a cash loan or a PayLater transaction), you must contact Indodana's official customer service directly (WhatsApp: +62822193377). There is no instant cancellation option in the app.
2025-11-20 01:42:21
Yesterday, for the first time, I introduced the world to something I have been working on for a few months.
My post on Klefki Keys - https://dev.to/dev_man/monetize-your-side-hustles-for-every-api-call-introducing-klefki-keys-2go8
To summarize: Klefki Keys lets developers monetize their APIs, scripts, or any side hustle (with no commission deducted; it's a free-to-use tool!) by generating API keys for their users.
Since I started working on it, this is at least the third or fourth time I've planned to launch it — but guess what? I would stop myself every time, thinking: "Hmm... I don't think this is enough yet — let me add xyz feature or fix abc bug, then it should be good to launch." Having gone through this cycle many times, I finally decided not to do any more development unless that work addresses an actual user's need.
"Feedback is all you need."
Klefki Keys is still in its early stages, and I’d really appreciate any thoughts from the community —
I'm particularly looking for:
Even a small comment or quick impression helps a lot — it genuinely shapes the direction of the platform!!
2025-11-20 01:35:11
In the world of data analytics, the choice of data format plays a crucial role in efficiency, storage, and processing. Different formats cater to various needs, from simple text-based exchanges to optimized binary storage for big data systems. In this article, we'll dive into six common data formats: CSV, SQL (relational tables), JSON, Parquet, XML, and Avro.
For each format, I'll explain it in simple terms and represent a small dataset using it. The dataset is a simple collection of student records:
Name: Alice, Register Number: 101, Subject: Math, Marks: 90
Name: Bob, Register Number: 102, Subject: Science, Marks: 85
Name: Charlie, Register Number: 103, Subject: English, Marks: 95
Let's explore each format one by one.
CSV (Comma Separated Values)
CSV is a straightforward text format where each row of data is a line, and values within the row are separated by commas (or other delimiters). It's like a basic spreadsheet without any fancy features. CSV is popular because it's easy to generate and read and is compatible with most tools, but it lacks a built-in schema and data types, which can lead to parsing issues.
Here's our student dataset in CSV format:
SQL (Relational Table Format)
SQL represents data in relational tables, which are like grids with rows (records) and columns (fields). It's not a file format itself but a way to structure data in databases. Each table has a defined schema specifying data types, and you can query it with SQL. It's great for structured data with relationships but requires a database system to manage.
Here's how our dataset would look as SQL statements to create and populate a table:
JSON (JavaScript Object Notation)
JSON is a flexible, text-based format that stores data as key-value pairs (objects) or lists (arrays). It's human-readable, supports nested structures, and is widely used in web services, APIs, and configuration files. JSON is self-describing but can be verbose for large datasets.
Our dataset as a JSON array of objects:
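[
  {"name": "Alice", "register_number": 101, "subject": "Math", "marks": 90},
  {"name": "Bob", "register_number": 102, "subject": "Science", "marks": 85},
  {"name": "Charlie", "register_number": 103, "subject": "English", "marks": 95}
]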
Parquet (Columnar Storage Format)
Parquet is a binary, columnar storage format designed for big data processing. Instead of storing data row by row, it groups values by column, which enables better compression and faster analytics queries (e.g., summing a single column without scanning everything). It's popular in systems like Hadoop and Spark.
Since Parquet is binary, it can't be shown as readable text. Instead, here's a short Python sketch using the PyArrow library that writes our dataset to a Parquet file; reading the first bytes back as hex shows the binary layout (Parquet files start with the magic bytes PAR1):
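import pyarrow as pa
import pyarrow.parquet as pq

# Build an in-memory columnar table from the student records
table = pa.table({
    "name": ["Alice", "Bob", "Charlie"],
    "register_number": [101, 102, 103],
    "subject": ["Math", "Science", "English"],
    "marks": [90, 85, 95],
})

# Write it out in Parquet's columnar binary format
pq.write_table(table, "students.parquet")

# Peek at the first bytes as hex; Parquet files begin with the magic bytes "PAR1"
with open("students.parquet", "rb") as f:
    print(f.read(16).hex())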
XML (Extensible Markup Language)
XML is a text-based markup language that uses hierarchical tags to structure data. It's like a tree of elements, making it suitable for complex, nested data. XML is verbose and self-descriptive but less efficient for large volumes due to its size. It's common in enterprise systems and web services.
Our dataset in XML format:
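<students>
  <student>
    <name>Alice</name>
    <register_number>101</register_number>
    <subject>Math</subject>
    <marks>90</marks>
  </student>
  <student>
    <name>Bob</name>
    <register_number>102</register_number>
    <subject>Science</subject>
    <marks>85</marks>
  </student>
  <student>
    <name>Charlie</name>
    <register_number>103</register_number>
    <subject>English</subject>
    <marks>95</marks>
  </student>
</students>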
Avro (Row-based Storage Format)
Avro is a compact, binary row-based format that includes the data schema within the file. This allows for schema evolution (changing structures over time) and efficient serialization. It's row-oriented, making it good for write-intensive workloads, and is commonly used in Apache Kafka and Hadoop ecosystems.
Since Avro is binary, here's the schema in JSON format (the field names simply mirror our dataset's columns), followed by a Python code snippet that generates the binary file:
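{
  "type": "record",
  "name": "Student",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "register_number", "type": "int"},
    {"name": "subject", "type": "string"},
    {"name": "marks", "type": "int"}
  ]
}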
Code to generate the Avro file (a minimal sketch using the fastavro library; the official avro package works similarly):
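from fastavro import parse_schema, writer

# Schema matching the JSON definition above
schema = parse_schema({
    "type": "record",
    "name": "Student",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "register_number", "type": "int"},
        {"name": "subject", "type": "string"},
        {"name": "marks", "type": "int"},
    ],
})

records = [
    {"name": "Alice", "register_number": 101, "subject": "Math", "marks": 90},
    {"name": "Bob", "register_number": 102, "subject": "Science", "marks": 85},
    {"name": "Charlie", "register_number": 103, "subject": "English", "marks": 95},
]

# writer() embeds the schema in the file itself, which is what enables schema evolution
with open("students.avro", "wb") as out:
    writer(out, schema, records)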
Conclusion
Each of these formats has its place in data analytics. Text-based ones like CSV, JSON, and XML are great for readability and interoperability, while binary formats like Parquet and Avro excel in performance and scalability for big data. Choose based on your use case—whether it's quick exports, complex queries, or efficient storage. If you're working in cloud environments, formats like Parquet often shine due to their compression and query optimization.