Blog of The Practical Developer

Final Round AI vs Interview Coder: Best AI Tool for Coding Interview Prep (2026)

2026-04-06 14:53:10

Most developers don’t fail interviews because they lack knowledge; they fail because they can’t communicate it under pressure.

This is a common problem in technical interviews, especially during coding interview prep where knowing the solution isn’t the same as explaining it clearly.

You’ve solved enough problems. You’ve reviewed patterns. You know how things work. And yet, in the actual interview, something feels off. Your explanation gets messy, you lose your train of thought, and suddenly a problem you’ve solved before feels unfamiliar.

That gap is exactly why AI interview tools like Final Round AI and Interview Coder are becoming popular in coding interview prep.

I didn’t look at them as “features to compare” but as tools trying to solve a very specific problem: helping developers translate knowledge into performance. And once you look at them that way, the difference between them becomes much clearer.

Why AI Interview Tools Are Getting So Much Attention

Coding interview prep has always been heavily skewed toward knowledge.

We spend hours solving problems, memorizing patterns, and reviewing system design concepts. That part is necessary, but it’s only half the equation. Interviews are interactive, and that interaction is where most candidates struggle.

It’s not just about getting to the right answer; it’s about how you get there, how you explain your decisions, and how you react when things don’t go as planned.

That’s the gap AI interview tools are trying to fill in modern technical interviews. But depending on how they approach it, they either improve your thinking… or just polish your answers.

What Is Final Round AI?

Final Round AI is an AI interview tool designed for coding interview prep, built around the idea that interview performance is a skill on its own.

Final Round AI coding interview practice tool interface

Instead of treating interviews like a list of questions to practice, it focuses on how you structure your thoughts while speaking. You’re constantly pushed to explain your reasoning, stay organized, and avoid the kind of scattered thinking that usually shows up when you’re under pressure.

What stands out is that it doesn’t try to make things easier. It tries to make them realistic. You’re practicing in a way that forces you to deal with the same friction you’d face in an actual interview.

This kind of mock interview practice is especially useful for improving performance in real technical interviews.

Over time, that shifts your mindset. You stop chasing perfect answers and start focusing on delivering clear, structured explanations.

Explore Final Round AI

What Is Interview Coder?

Interview Coder is an AI interview tool more aligned with traditional coding interview prep.

Interview Coder interview prep tool

It helps you build and refine answers to common technical interview questions by giving suggestions and guiding your responses. It’s practical, easy to use, and especially helpful if you’re still figuring out how to approach certain problems.

The environment feels safe and controlled, making it a good option for low-pressure mock interview practice.

That’s useful, but it also means you’re not really training the part of interviews that tends to break people: thinking clearly while someone is actively evaluating you.

Explore Interview Coder

Final Round AI vs Interview Coder: A Deep Comparison

At a glance, both tools seem to help with interview prep. In practice, they focus on completely different aspects of it.

1. How Close It Feels to a Real Interview

The biggest difference in coding interview prep shows up in how each tool simulates real technical interviews.

Final Round AI creates an environment where you have to respond in real time, organize your thoughts as you speak, and maintain clarity without pausing to rethink everything. That constant pressure is intentional, because it mirrors what actually happens in interviews.

Interview Coder feels more like a guided workspace. You can take your time, rethink your answers, and gradually improve them. It’s a smoother experience, but also one step removed from the reality of live interviews.

If your goal is to reduce surprises on interview day, realism matters more than comfort.

2. Practicing Thinking vs Practicing Answers

Key difference: thinking vs answering

Final Round AI trains the process of thinking out loud. You’re not just solving the problem; you’re learning how to communicate your reasoning as it unfolds. That’s a skill most developers don’t consciously practice, but it’s exactly what interviewers pay attention to.

Interview Coder focuses on the outcome. It helps you shape better answers, refine your explanations, and understand what a strong response looks like.

One improves how you think in the moment. The other improves what you say after thinking about it.

3. The Type of Feedback You Get

Feedback is where many AI interview tools sound useful but don’t actually improve technical interview performance.

Final Round AI leans into communication-focused feedback. It highlights where your explanation loses clarity, where your structure breaks down, and where you skip important steps in your reasoning. It’s less about correctness and more about how your answer is experienced by someone listening.

Interview Coder’s feedback is more content-driven. It helps you make your answers more complete, more polished, and more aligned with expected solutions.

Both are helpful, but they improve different layers of your performance.

4. The Overall Experience

In coding interview prep, how an AI interview tool feels to use affects how consistently you’ll practice.

Final Round AI is more immersive. It demands attention and puts you in situations that feel close to real interviews. That intensity can be challenging, but it’s also what makes it effective.

Interview Coder is lighter and more flexible. You can jump in, practice a few questions, and leave with something useful. It’s easier to integrate into a routine, especially if you’re balancing prep with other commitments.

So it really comes down to whether you want something that pushes you or something that supports you.

Summary Comparison: Final Round AI vs Interview Coder

Here’s a quick comparison of both AI interview tools for coding interview prep:

| Aspect | Final Round AI | Interview Coder |
| --- | --- | --- |
| Interview simulation | High, close to real scenarios | Moderate, more controlled |
| Thinking in real time | Core focus | Limited |
| Feedback type | Communication and clarity focused | Answer and content focused |
| Learning style | Immersive and performance-driven | Guided and preparation-focused |
| Best use case | Interview readiness | Answer refinement |

Both tools support technical interviews, but they focus on different parts of the coding interview prep process.

Who Each Tool Is Best For

In coding interview prep, choosing the right AI interview tool depends on whether you’re trying to improve your understanding of technical interview questions or your ability to perform under real interview pressure.

Interview Coder is a good fit if you:

  • Are still building your understanding of common interview questions
  • Want structured guidance on how to answer effectively
  • Prefer a calmer, low-pressure way to improve

Final Round AI makes more sense if you:

  • Want to simulate real interview conditions as closely as possible
  • Need to improve how you explain your thinking in real time
  • Tend to lose clarity or structure when under pressure
  • Want to practice staying composed while solving problems live
  • Care about how your answers sound, not just what they contain
  • Are preparing for interviews where communication is heavily evaluated

Why Final Round AI Feels Different

After testing both for coding interview prep, the distinction becomes pretty straightforward.

Interview Coder helps you build better answers in a controlled environment.

Final Round AI focuses on what happens when that control is gone and you’re expected to perform, which is exactly what happens in real technical interviews.

That difference might not seem huge at first, but it becomes very obvious the moment you step into a real interview.

Because interviews don’t test how well you’ve prepared in isolation. They test how well you can communicate that preparation under pressure.

The Developer Takeaway

If you’ve ever finished an interview and thought:

  • “I could’ve explained that way better”
  • “I lost my structure halfway through”
  • “I knew it… I just didn’t show it properly”

then you’re not dealing with a knowledge problem.

You’re dealing with a performance gap.

That’s exactly where Final Round AI stands out. It doesn’t try to give you cleaner answers to memorize. It forces you to deal with how you think, how you speak, and how you hold it together when things get uncomfortable.

And that’s the part most prep completely ignores.

Final Thoughts

AI interview tools are slowly reshaping coding interview prep, but they’re also exposing something that was always there.

There’s a difference between understanding a problem and communicating your understanding.

Some tools help you close the first gap.

Others help you close the second.

And when you’re sitting in front of an interviewer in a real technical interview, the second one is usually the one that decides how things go.

💬 Have you tried either of these tools? Or do you rely on platforms like LeetCode or mock interviews? I’m curious what actually worked for you.

Thanks for reading! 🙏🏻
Please follow Hadil Ben Abdallah & Final Round AI for more 🧡
Final Round AI LinkedIn GitHub

Agentic interaction using AppFunctions

2026-04-06 14:49:26

Given the rise of agentic interaction on Android, we need a fast, reliable API to make app capabilities discoverable and executable by agents. Through the years, Google has introduced several native frameworks to bridge the gap between the operating system, its system-level assistants, and apps.

App Actions is the long-standing predecessor to AppFunctions. It uses shortcuts.xml and built-in intents to map specific user requests directly to app features. Android Slices were an attempt to surface interactive snippets of an app’s UI directly within the assistant or search interface; they have been effectively deprecated since 2021. Then there's the Direct Actions API, a framework introduced to allow voice assistants to query a foreground app for its specific capabilities in real-time. Gone too. Finally, the Assist API: the fundamental system-level hook that allows a native agent to read the screen context, providing the situational awareness necessary for agents to act on behalf of the user.

In retrospect, the failure of these predecessors likely wasn't due to a lack of vision, but rather a fundamental mismatch between static engineering and the needs of dynamic intelligence. App Actions relied on a rigid library of built-in intents. If an app feature didn't fit into one of Google’s binding categories, it effectively didn't exist to the assistant. Android Slices were killed by the UI maintenance trap. By forcing developers to build and maintain restricted, templated versions of their interface that often felt out of place, Google asked for too much effort for too little user engagement. The Direct Actions API failed because of its requirement that an app is actively running on the screen, which prevents the assistant from performing tasks autonomously. And while the Assist API provided the eyes for the system, it lacked the intelligence. It could scrape a messy tree of text and nodes from the screen, but it couldn't reliably parse that data into meaningful actions without massive compute power and significant privacy trade-offs. Ultimately, these frameworks offered narrow shortcuts when the ecosystem instead required a universal language.

AppFunctions

Unlike its predecessors, which tried to force apps into predefined boxes or complex UI mirrors, the AppFunctions model treats the app as a collection of capabilities to be indexed, rather than a destination to be visited. By shifting the focus from how the app looks to what the app can do, Google is moving toward a model where the agent doesn't just deep-link you into a screen, but picks up the tools to finish the job for you.

AppFunctions have been in the works since late 2024. Although the official android.app.appfunctions package didn't land in the core framework until API level 36, the missing link for developers was the appfunctions Jetpack library, which began its alpha rollout in May 2025. This library allowed early adopters to start wiring their apps for tool use before the corresponding platform APIs were finalized. At that stage, it was a framework waiting for a brain; Jetpack supplied the plumbing, but assistants such as Gemini were not consistently able to invoke those tools on every device or build. Android 16 adds the platform hooks for discovery and execution on supported hardware. As of today, Google still frames the overall agent push as early / beta and describes two parallel tracks:

  • AppFunctions as structured, self-describing entry points (what this article is about: discrete capabilities agents can call)
  • UI automation for longer flows when there is no tailored integration: previewed on devices such as the Galaxy S26 series and select Pixel 10 models, in limited verticals and regions, with multi-step delegation already part of that story

In Google’s February 2026 post on the intelligent OS, the Looking ahead section states that Android 17 is meant to broaden these same capabilities; that includes structured AppFunctions and the agentic UI automation previews already tied to hardware such as the Galaxy S26 series and select Pixel 10 models; the stated aim is to reach more users, more developers, and more device manufacturers.

Let's turn to the stack you can use today: Gradle, Kotlin, and the device-facing adb checks that validate a real APK against the current Jetpack and platform drops.

Implementing an AppFunction

To start implementing AppFunctions, your development environment might require a few specific upgrades. First, ensure you are running a recent version of Android Studio to access the latest Gemini-integrated testing tools. While the Jetpack library itself can target a lower minSdk where compatibility allows, you’ll want compileSdk 36 and typically targetSdk 36 so the Android 16 framework can index and run your AppFunctions on device. Next, declare the Jetpack coordinates in your version catalog, then wire plugins, SDK level, KSP, dependencies, and merge ordering in the app module.

Version catalog (gradle/libs.versions.toml)

[versions]
appFunctions = "1.0.0-alpha08"

[libraries]
androidx-appfunctions = { group = "androidx.appfunctions", name = "appfunctions", version.ref = "appFunctions" }
androidx-appfunctions-service = { group = "androidx.appfunctions", name = "appfunctions-service", version.ref = "appFunctions" }
androidx-appfunctions-compiler = { group = "androidx.appfunctions", name = "appfunctions-compiler", version.ref = "appFunctions" }

App module (app/build.gradle.kts)

plugins {
 alias(libs.plugins.android.application)
 alias(libs.plugins.kotlin.android)
 // Apply KSP to process the @AppFunction annotations
 alias(libs.plugins.google.devtools.ksp)
}

android {
 // compileSdk 36 aligns with Android 16, where platform AppFunctions APIs land
 compileSdk = 36    
 // ... rest of your config
}

ksp {
 arg("appfunctions:aggregateAppFunctions", "true")
}

dependencies {
 implementation(libs.androidx.appfunctions)
 implementation(libs.androidx.appfunctions.service)
 ksp(libs.androidx.appfunctions.compiler)
}

// Run each merge*Assets after its matching ksp*Kotlin so AppFunctions metadata is generated first
tasks.configureEach {
  if (!name.startsWith("merge") || !name.endsWith("Assets")) return@configureEach
  if (name.contains("ArtProfile")) return@configureEach
  val variant = name.removePrefix("merge").removeSuffix("Assets")
  val kspTask = "ksp${variant}Kotlin"
  if (tasks.names.contains(kspTask)) {
    dependsOn(kspTask)
  }
}

While the Jetpack library automates the plumbing (from generating schemas to registering them), the system fundamentally relies on AppSearch for on-device indexing. The beauty of the library is that it handles the AppSearch integration entirely behind the scenes; you don't need to manage sessions or write storage boilerplate yourself for your AppFunctions to become discoverable. With the environment ready, the next step is to spell out what distinguishes an AppFunction in source.

At its core, an AppFunction is a standard Kotlin function; but it carries a few specific decorations that turn it from a private app method into a public system tool:

  • The @AppFunction annotation signals to the compiler that a method should be exported as such a system-level tool. Use @AppFunction(isDescribedByKDoc = true) when you write a real KDoc block on that method; the compiler folds that documentation into the metadata agents and indexers consume, so parameter semantics (for example that app1 and app2 are package names) are not left implicit.
  • AppFunctionContext provides the function with essential situational awareness, such as information about the calling party or access to the app's own resources.
  • AppFunctionSerializable ensures your custom data classes are properly handled while they travel across process boundaries.

Let's see this in action in a real utility. In my app Be nice, a core feature is the ability to create app pairs (launching two apps in split-screen simultaneously). By exposing this as an AppFunction, we turn a sequence of UI interactions (opening a dialog, choosing apps, customizing parameters) into a single voice command. On eligible devices and assistant builds (as mentioned, Google’s rollout is still limited) you can ask Gemini to create an app pair for contacts and clock. The agent will call our AppFunction, passing the two apps’ package names as strings.

package de.thomaskuenneth.benice.appfunctions

import androidx.appfunctions.AppFunctionContext
import androidx.appfunctions.service.AppFunction
import de.thomaskuenneth.benice.R

class BeNiceFunctions {

  /**
   * Launches two installed apps together in split screen.
   *
   * @param context Execution context supplied by the AppFunctions runtime.
   * @param app1 Name of the first app in the pair.
   * @param app2 Name of the second app in the pair.
   * @return A localized message describing success or failure.
   */
  @AppFunction(isDescribedByKDoc = true)
  suspend fun createAppPair(
    context: AppFunctionContext,
    app1: String,
    app2: String
  ): String {
    val success = performPairing(app1, app2)
    return if (success) {
      context.context.getString(
        R.string.pair_created_success,
        app1, app2
      )
    } else {
      context.context.getString(
        R.string.pair_created_failure
      )
    }
  }

  private fun performPairing(a: String, b: String): Boolean {
    // Be Nice logic omitted for brevity
    return true
  }
}

I used suspend fun, like Google’s own AppFunctions examples do, so we can easily call other suspending APIs from the body whenever the implementation does real async work instead of returning immediately.

Writing a function with @AppFunction creates the capability. However, because AppFunctions are designed to be executed headlessly by the system (even if the app isn't in the foreground), the Android framework needs a static entry point to find and instantiate the code. Previous versions of the Jetpack library required quite a bit of additional boilerplate. Thankfully, most of that is now handled automatically through Manifest Merging and KSP: when you include the appfunctions-service dependency, a pre-built PlatformAppFunctionService is merged into your app's manifest, acting as the universal entry point for the system. The ksp { } block and tasks.configureEach section in the Gradle listing earlier connect your code to that service: appfunctions:aggregateAppFunctions tells the compiler to emit the aggregate inventory and related assets AppSearch reads, and the merge-after-KSP ordering ensures that output is packaged into the APK.

Testing AppFunctions

Next, let's check that your AppFunctions succeed on real setups.

adb shell cmd app_function list-app-functions | grep -F de.thomaskuenneth.benice

This command should print one or more lines that mention your package and expose each AppFunction’s stable id (often a ClassName#methodName form). This proves the OS indexer has picked up the app after install.

List of appfunctions for the Be nice package

On some Android 16 emulator images that command may return No shell command implementation. In my case, updating the AVD to a system image at API level 36.1 brought the app_function shell path to life; Android Studio shows that revision when you choose the platform image, as in the screenshot below.

Emulator system image with API level 36.1 (Android Studio)

Executing an AppFunction on the command line can look frightening:

FID=$(adb shell cmd app_function list-app-functions | grep -F de.thomaskuenneth.benice | grep -oE '[A-Za-z0-9_.]+#[A-Za-z0-9_]+' | head -n1) && adb shell "cmd app_function execute-app-function --package de.thomaskuenneth.benice --function \"$FID\" --parameters '{\"app1\":[\"com.foo\"],\"app2\":[\"com.bar\"]}'"

Executing an AppFunction

This command prints a JSON payload: on success it is the AppFunction’s return value (here, the string that createAppPair builds); on failure you may see App function not found (wrong id) or a JSON parse error if --parameters does not use the same AppSearch-style encoding as the example; note how each string argument is passed as a one-element JSON array.

If listing or execution still fails, confirm the aggregate assets are actually packaged (for example unzip -l app/build/outputs/apk/debug/app-debug.apk | grep app_functions should list app_functions.xml and app_functions_v2.xml).

For automated tests, treat the layers separately: run the adb checks above on a device to verify that your metadata is packaged and the indexer has picked up your app. Jetpack’s AppFunctionTestRule is built for local JVM runs (the docs pair it with Robolectric-style environments) so you can exercise AppFunctionManager and your @AppFunction logic without a cable; Google explicitly says to prefer real system-level checks when you can. Add instrumented or integration coverage on an API 36+ image when you care about the full stack (AppSearch sync, shell availability, release vs debug). None of that replaces the device-facing section here; it complements it, and deserves its own write-up once you outgrow copy-paste snippets.

Wrap-up

AppFunctions sit at the intersection of Jetpack, KSP, manifest merging, and on-device indexing (messy in preview, powerful when the wiring is right). When you integrate them, you keep coming back to three things: platform context, a small Gradle and Kotlin surface that connects the aggregate compiler flag to PlatformAppFunctionService, and the device or APK checks that show whether packaging and indexing still line up.

On the official AppFunctions overview, Google only documents adb shell cmd app_function list-app-functions as a shell check; there is no second, documented adb path for the full schema text (including KDoc folded in via isDescribedByKDoc). For that, read assets/app_functions.xml / app_functions_v2.xml from the APK, or query metadata through AppFunctionManager-style APIs—the same place agents are expected to pull richer descriptions. Anything further that adb shell cmd app_function help shows on a given device is platform-specific and is not spelled out on that overview page.

One caveat worth carrying forward: in my project, making each merge*Assets task depend on the matching ksp*Kotlin task was necessary so KSP-generated AppFunctions assets were present before packaging. That ordering is not spelled out in every official sample, and it may stop being required as the Android Gradle Plugin, KSP, or the AppFunctions toolchain tightens its own task graph. Treat it as something to validate on your stack: if app_functions.xml / app_functions_v2.xml show up in the APK without the extra tasks.configureEach block, you can drop it; if they are missing at runtime, the dependency ordering is still a reliable fix.

If you ship with minify enabled (isMinifyEnabled / R8), the AppFunctions AndroidX artifacts ship consumer ProGuard rules that keep much of the generated and reflection-heavy surface for you. You should still smoke-test a release build on a device: if execution or discovery fails only after shrinking, inspect R8 output and add targeted -keep rules for your own @AppFunction classes or related types; start from what the library already merges rather than copying random snippets from older posts.

In this article, I showed you how to implement an AppFunction. As a publisher you expose AppFunctions in your APK and rely on indexing plus your own validation of arguments. Callers that discover or execute other apps’ functions go through AppFunctionManager-style APIs and sit behind platform rules; privileged assistants hold permissions such as EXECUTE_APP_FUNCTIONS that ordinary store apps do not get by declaring a line in the manifest. Google still describes much of the end-to-end agent path as experimental and capacity-limited, so assume your parameters can be reached only by trusted system-side callers today, and still treat them like untrusted input.

References

The Future of Android Apps with AppFunctions by fellow GDE Shreyas Patil goes deep on dependency injection with AppFunctionConfiguration, a note-taking sample, and adb-driven execution. Further reading beyond the links already woven through the article (overview, Jetpack release notes, platform android.app.appfunctions, intelligent-OS blog, Play listing, and AppFunctionTestRule):

Stop shipping var_dump() to production — enforce it with PHPStan

2026-04-06 14:43:40

We’ve all done it.

You add a quick var_dump() or dd() while debugging…
and somehow it survives code review 😅

Or worse:

  • someone uses DB::raw() where it shouldn’t be used
  • a controller starts calling repositories directly
  • architecture rules slowly fall apart

The problem

PHPStan is great — but enforcing custom rules like this is not trivial.

You either:

  • write a custom PHPStan rule (time-consuming)
  • or use something limited like banned functions

What I wanted

I needed something that could:

  • ban specific functions (var_dump, dd)
  • restrict certain method calls
  • enforce architecture boundaries
  • be configurable without writing PHP code

The solution

I built a small PHPStan extension that lets you define forbidden patterns:

parameters:
  forbidden_node:
    nodes:
      - type: Expr_FuncCall
        functions: [var_dump, dd]

Now PHPStan reports:

Forbidden function var_dump() used in App\Service\UserService.php:42
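PHPStan extensions are normally wired in through an `includes:` entry in phpstan.neon. The exact path of the extension file below is an assumption (check the repo's README), but a typical setup would look like:

```neon
# phpstan.neon: the include path is an assumption; the parameters follow the article's example
includes:
    - vendor/rajmundtoth0/phpstan-forbidden-nodes/extension.neon

parameters:
    forbidden_node:
        nodes:
            - type: Expr_FuncCall
              functions: [var_dump, dd]
```

Running `vendor/bin/phpstan analyse src` then reports the forbidden calls alongside PHPStan's regular findings.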

Why this is useful

You can enforce rules like:

  • ❌ no debug functions in production
  • ❌ no direct DB calls in controllers
  • ❌ no cross-layer violations
  • ❌ no unsafe patterns

Repo

👉 https://github.com/rajmundtoth0/phpstan-forbidden-nodes

Curious how others handle this — do you enforce rules like this in your projects?

Data Engineer's Guide to Linux: Why It Is Your Secret Weapon

2026-04-06 14:43:33

If you are stepping into the world of data engineering, you will quickly realize that if SQL and Python are the languages spoken, then Linux is the room they are spoken in. Most modern infrastructure, from cloud servers to Docker containers run on Linux. This read is here to walk you through why Linux matters and help you manage the flow of data by navigating the terminal.

Data engineering involves processing large amounts of data. While Windows and macOS are great for local development, they carry bloat: GUI rendering and background services that keep the desktop experience smooth. A Linux server typically runs headless, so nearly all of its RAM and CPU goes toward processing your queries and scripts, not rendering desktop icons. Other advantages of working with Linux include:

  • It gives you granular control over memory and CPU, which is important when working with terabytes of data

  • Automation: crontab, the Linux scheduler, lets you script repetitive tasks like moving files or checking logs using Bash

  • It is the cloud standard: most servers on AWS, Azure, and Google Cloud run Linux

  • It has strong security and permission features
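The automation point above can be sketched concretely. A crontab entry (edited via `crontab -e`) has five schedule fields followed by a command; the script path and log file below are placeholders:

```shell
# Fields: minute hour day-of-month month day-of-week command
# Run a (hypothetical) ingest script every night at 02:00, appending output to a log
0 2 * * * /usr/bin/python3 /opt/pipelines/ingest.py >> /var/log/ingest.log 2>&1
```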

Linux as an operating system has four mandates:

  1. Hardware management
  2. File system management
  3. Process management
  4. Security

In this article, we will briefly look at commands used in each of the four mandates.

Hardware Management

The commands for this mandate help you explore the advantage of "granular control over memory and CPU". Below are some of the commands used and what they achieve:

  • lscpu – displays information about the architecture of your CPU
  • free -h – shows how much memory (RAM and swap) is used and available; the -h flag stands for human-readable (GB instead of bytes)
  • lsblk – lists all block devices (disks and their partitions) attached to the system
  • df -h – shows how much disk space is used vs. what is available on mounted filesystems

Process Management

Every time you run a script or Python query, Linux interprets that as a process. Managing these processes ensures that one "stuck script" doesn't crash the whole server.

  • ps aux – shows a snapshot of every running process on the system
  • top or htop – the primary command for understanding why your pipeline is slow. It gives you an interactive, continuously updated list of what is running, similar to Task Manager on Windows
  • kill <PID> – stops a hanging process, or one that is consuming too many resources, using its process ID (PID)
  • & – appending this to a command runs it in the background so you can keep using your terminal (python3 script.py &)
  • jobs – lists the processes you have running in the background of your current session
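As a minimal sketch of those commands working together (using `sleep` as a stand-in for a long-running script):

```shell
# Start a stand-in long-running job in the background
sleep 300 &
PID=$!                     # $! holds the PID of the most recent background job
jobs                       # list background jobs in this session
kill "$PID"                # ask the process to terminate
wait "$PID" 2>/dev/null    # reap it so it doesn't linger as a zombie
```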

Security

Linux is a multi-user system. Security and permission features ensure that user A cannot accidentally delete or alter files, folders, or even pipelines for user B.

  • chmod – This command changes file permissions to control who can read, write, and execute
  • chown – this command changes the ownership of the files or folder
  • passwd – allows you to make changes to your password
  • whoami – this is a quick way to check which user account you are currently running on
  • history – shows a list of recently executed commands
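As a quick sketch of chmod in action (the file name is a placeholder): create a small script, grant the owner execute permission, and run it.

```shell
# Create a two-line shell script
printf '#!/bin/sh\necho "pipeline ok"\n' > run_pipeline.sh
ls -l run_pipeline.sh       # permissions start without the execute bit
chmod u+x run_pipeline.sh   # add execute permission for the owner (u)
./run_pipeline.sh           # prints: pipeline ok
```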

File System Management

File system management is mainly about managing your workflow. In this section, we will learn how to manage files: opening them, reading them, moving them from one folder to another, and giving permissions to the files. We will use screenshots as a practical illustration.

Connecting to the cloud server

In this case, we use ssh root@yourIpAddress

Connecting to the cloud server

Finding out which user account is running

Here, you use the command whoami

Finding out which user account is running

Adding a user successfully

Since Linux is a multi-user system, this is how to add users

Adding a user successfully

Switching user to the newly created user

Creating a new folder in the homepage of the new user account

Creating subfolders

Creating files in the subfolders

Changing file permission to make it executable

Creating a backup file in a different folder

Moving files from one folder to another

From the images above, you have seen some of the most common file navigation commands used to create directories, move between directories, explore directories, view changes, and change permissions. Some of the basic commands include:

  • ssh - used to connect to a cloud server
  • cd - change directory, move to a directory within the current working directory
  • pwd - print current working directory
  • mkdir - make a directory/folder
  • su - switch user
  • whoami - gives you the current user
  • ls - list the files and folders in the current directory
  • ls -l - list the files and folders in the current directory in long format (permissions, owner, size, date); use ls -a to include hidden files
  • cat - output the contents of the file as text
  • vi - use vim to edit a file
  • nano - use nano to edit a file. Usually written as nano file.py, meaning open file.py for editing
  • sudo useradd -m user_name - used to add users
  • sudo passwd - used to change the root (administrator) password
  • passwd - when entered, it prompts you for the old password, then asks you to type the new password
  • touch - create a file
  • man - the user manual; use it to look up any command
  • wget - used to download files from the internet
  • head - prints the first 10 lines of a file without opening the whole document. Useful for large files
  • mv - used to move files from one folder to another; also used to rename files
  • cp - used to copy files from one folder to another (copying to a new name duplicates the file under that name)
  • grep - used to search for text in a file. grep text file.txt returns the lines in file.txt containing the word "text"
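The screenshot walkthrough above (minus the root-only ssh and useradd steps) can be condensed into one shell session; the folder and file names here are illustrative:

```shell
# Create a project folder with a subfolder in one command
mkdir -p projects/pipelines

# Create a file in the subfolder and make it executable
touch projects/pipelines/extract.py
chmod +x projects/pipelines/extract.py

# Create a backup copy in a different folder
mkdir -p backups
cp projects/pipelines/extract.py backups/extract_backup.py

# Move the original file between folders, then verify
mv projects/pipelines/extract.py backups/
ls -l backups
```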

While there are many more commands in Linux, this read provides a beginner-friendly guide to understanding what Linux is all about and how to navigate it.

If you enjoyed the read, please like, share, and subscribe for more articles as we document my journey from a beginner to an expert in data engineering.

Why Some AI Feels “Process-Obsessed” While Others Just Ship Code

2026-04-06 14:31:47

I ran a simple experiment.

Same codebase.
One AI rated it 9/10 production-ready.
Another rated it 5/10.

At first, it looks like one of them is wrong. But the difference is not accuracy; it’s philosophy.

Two Types of AI Behavior

1. Process-Driven (Audit Mindset)

  • Focus: edge cases, failure modes, scalability
  • Conservative scoring
  • Assumes production = survives real-world stress

2. Outcome-Driven (Delivery Mindset)

  • Focus: working solution, completeness
  • Generous scoring
  • Assumes production = can be shipped

What’s Actually Happening

Both are correct, under different assumptions.

  • One asks: “Will this break in production?”
  • The other asks: “Does this solve the problem?”

You’re not comparing quality.
You’re comparing evaluation lenses.

Failure Modes

Process-driven systems

  • Over-analysis
  • Slower shipping
  • Can block progress

Outcome-driven systems

  • Hidden technical debt
  • Overconfidence
  • Production surprises later

What Developers Should Do

Don’t pick sides. Use both.

Practical workflow:

  1. Build fast (outcome-driven)
  2. Audit hard (process-driven)
  3. Fix only high-risk issues

Redefining “Production Ready”

Production-ready is not “it works”.

It means:

  • Handles failures
  • Has logging + observability
  • Is secure
  • Is maintainable by others

Final Thought

If one AI says 9/10 and another says 5/10, don’t ask:

“Which one is right?”

Ask:

What assumptions is each one making?

Programming Language vs Scripting Language

2026-04-06 14:30:38

In the world of software development, the terms programming language and scripting language are often used interchangeably. However, they have distinct purposes, execution methods, and use cases.

Let’s break it down in a simple and clear way.

What is a Programming Language?

A programming language is a formal language used to write instructions that a computer can execute. These languages are typically compiled before execution.

Key Features:

  • Requires compilation (converted into machine code)
  • Faster execution
  • Used for building full-scale applications
  • More control over hardware

Examples:

  • C
  • C++
  • Java

Use Cases:

  • Operating systems
  • Desktop applications
  • Game development

What is a Scripting Language?

A scripting language is designed to automate tasks and is usually interpreted rather than compiled.

Key Features:

  • No need for compilation
  • Runs line-by-line (interpreted)
  • Slower compared to compiled languages
  • Easier and quicker to write

Examples:

  • Python
  • JavaScript
  • PHP

Use Cases:

  • Web development
  • Automation scripts
  • Backend services
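The compile-versus-interpret difference is easy to see from the shell. A minimal sketch, assuming gcc and python3 are installed:

```shell
# A compiled language: C needs an explicit build step before it runs
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("Hello from C\n"); return 0; }
EOF
gcc hello.c -o hello   # compile to machine code
./hello                # run the resulting binary

# A scripting language: Python is interpreted line-by-line, no build step
cat > hello.py <<'EOF'
print("Hello from Python")
EOF
python3 hello.py
```

Note the extra step for C: you cannot run hello.c directly, only the binary the compiler produces, while the Python file is executed as-is by the interpreter.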