2025-12-08 05:49:08
If you’ve been coding in Python for even a week, you’ve probably heard one word over and over again: functions. But why are they such a big deal? Let’s break it down — dev-to-dev.
A function in Python is simply a reusable block of code that performs a specific task. Instead of writing the same lines again and again, you wrap them inside a function and call it whenever you need it. Clean. Organized. Powerful.
Think of it like creating your own mini-machine:
You build it once → You use it anywhere in your script → It saves time + avoids bugs.
Here’s what makes functions awesome:
Reusability: Write once, run everywhere.
Clarity: Break big problems into small, readable pieces.
Efficiency: Debug faster and scale your code easily.
Team Friendly: Makes your code understandable for others (and future you!).
And the heart of it all? The simple Python syntax:
def greet(name):
    return f"Hello, {name}!"
One definition — unlimited greetings.
Functions are the building blocks of every serious Python project, from automation to APIs to AI.
What’s your favorite use of functions in your code? Drop it below ...
2025-12-08 05:45:55
SELECT
op_date,
op,
op_data ->> 'worder_m_id',
op_data ->> 'item_attribute1_id',
op_data ->> 'qty_man'
FROM uyumlog.log
WHERE table_name = 'prdt_worder_m' AND op_data ->> 'worder_m_id' = '4149'
ORDER BY op_date ASC
2025-12-08 05:44:32
UPDATE PSMT_INVOICE_M
SET SHIPPING_COUNTRY_ID = SRC.COUNTRY_ID,
SHIPPING_CITY_ID = SRC.CITY_ID,
SHIPPING_TOWN_ID = SRC.TOWN_ID
FROM ( SELECT
PM.INVOICE_M_ID,
FE.COUNTRY_ID,
FE.CITY_ID,
FE.TOWN_ID
FROM PSMT_INVOICE_M PM
JOIN FIND_ENTITY FE ON PM.ENTITY_ID = FE.ENTITY_ID
) SRC
WHERE PSMT_INVOICE_M.INVOICE_M_ID = SRC.INVOICE_M_ID
2025-12-08 05:43:51
MERGE INTO FINT_CAD_D TRG
USING
(
SELECT
CD.ROWID AS RID,
CM.CREDIT_ACC_ID
FROM FINT_CAD_D CD
JOIN FINT_CAD_M CM ON CD.CAD_M_ID = CM.CAD_M_ID
WHERE CM.CAD_M_ID = 4918
) SRC ON (TRG.ROWID = SRC.RID)
WHEN MATCHED THEN
UPDATE SET TRG.CREDIT_ACC_ID = SRC.CREDIT_ACC_ID
2025-12-08 05:41:56
DO $$
DECLARE
result CONSTANT refcursor := 'result';
BEGIN
PERFORM RPA_HRMD_REGISTER (@UsrId@::INTEGER);
PERFORM RPA_HRMD_EMPLOYEE (@UsrId@::INTEGER);
OPEN result FOR
SELECT
PYR.PAYROLL_ID,
EMP.EMPLOYEE_ID,
REG.REGISTER_ID,
REG.REGISTER_CODE AS "SİCİL NO",
REG.REGISTER_NAME||' '||REG.REGISTER_SURNAME AS "ADI SOYADI",
REG.CITIZENSHIP_NO AS "TC KİMLİK NO",
PYR.AMT_NET AS "NET ÖDENEN"
FROM HRMT_PAYROLL PYR
INNER JOIN RP_HRMD_EMPLOYEE EMP ON PYR.EMPLOYEE_ID = EMP.EMPLOYEE_ID
INNER JOIN RP_HRMD_REGISTER REG ON REG.REGISTER_ID = EMP.REGISTER_ID
WHERE PYR.PAYROLL_YEAR::TEXT = '@Year@'
AND PYR.PAYROLL_MONTH::TEXT = '@Month@'
ORDER BY REG.REGISTER_CODE;
END
$$;
FETCH ALL FROM result;
2025-12-08 05:41:29
Table of Contents
1. Recap of Previous Weeks
2. What are Terraform Modules and Why Do We Use Modules in Terraform?
3. Understanding Module Inputs and Outputs
4. Creating Our First Terraform Module
5. Using the Module in Our Root Configuration
6. Deploying to Azure
7. Wrap-Up
1. Recap of Previous Weeks
Over the past five weeks, we built a small but realistic Azure environment while learning the core Terraform concepts. We started by deploying our first VM, then introduced variables and tfvars files to make the configuration more flexible. We added security with NSGs and dynamic blocks, and finally exposed useful information through output values.
Here's the visual representation of the infrastructure we have so far:
Here are the .tf files we have under our Project Folder in Visual Studio Code:
nsg.tf > Contains the Network Security Group and the NSG association.
outputs.tf > Contains the output values.
providers.tf > Contains the provider block (AzureRM).
resource-group.tf > Contains the resource group resource block.
terraform.tfvars > Contains the values for our variables for the project.
variables.tf > Contains the definition of all variables we used for our project.
virtual-machine.tf > Contains the virtual machine and Network Interface Card (NIC) resources.
virtual-network.tf > Contains the Virtual Network and subnet resources.
At this point, we have a fully working VM deployment that is parameterized, secure and able to surface important data. As the configuration grows, we are beginning to see repeated patterns, and managing multiple similar resources would quickly become difficult.
This is the perfect time to introduce Terraform modules.
If you missed the previous weeks' posts and want to do a deeper dive, here are the links:
Week 1
Week 2
Week 3
Week 4
Week 5
Here is the GitHub repository link for this series, where you can find the full .tf files used for every week, including this week.
Now let's get started with Terraform Modules!
2. What are Terraform Modules and Why Do We Use Modules in Terraform?
A Terraform module is simply a collection of resources grouped together so they can be reused as a single unit. Instead of rewriting the same VM, NIC or NSG blocks every time we deploy a new virtual machine, we define them once inside a module and call that module whenever we need another instance.
Before we continue, it helps to understand how Terraform organizes configuration. The folder where your main Terraform files live is called the root module. This is where Terraform starts. Any module you place in a subfolder becomes a child module, and it cannot run on its own. The root module provides the inputs, and the child module returns outputs. This relationship is what allows us to reuse infrastructure patterns cleanly without duplicating code.
Modules become especially important in real-world environments. Imagine a team managing ten VMs across development, staging and production. Without modules, each environment ends up with its own copy of the same resource blocks. A simple change—like updating tags or adjusting an NSG rule—must now be repeated in multiple places. Over time, small differences creep in, files become harder to manage and consistency across environments suffers.
Modules solve this problem by centralizing the logic. We define the VM pattern once and deploy it anywhere by simply passing different input values. This keeps the configuration consistent, reduces duplication and makes it much easier to maintain and scale the infrastructure as it grows.
3. Understanding Module Inputs and Outputs
Before we begin creating our first module, it helps to understand how information moves between the root module and a child module. A child module cannot directly read variables or resources from the root module. Instead, Terraform requires us to explicitly pass values into the module and explicitly return values from it. This makes modules predictable, reusable and easier to maintain.
Module Inputs
Inputs are the values a module needs in order to create its resources. These inputs are defined as variables inside the module, and the root module supplies the actual values when calling the module. In our case, the VM module will need things like the resource group name, resource group location, subnet ID, VM size, admin credentials, NIC settings and allowed ports.
Module Outputs
Outputs are values that the module exposes back to the root module. This allows the root to access information created inside the module without reaching into its internal resources. For example, the module can return the VM’s ID, NIC name or private IP address so they can be used elsewhere in the configuration or displayed after deployment.
With this idea of inputs and outputs in mind, we can now begin refactoring our VM, NIC and NSG resources into a dedicated Terraform module.
4. Creating Our First Terraform Module
Disclaimer: This is more of "refactoring" our previous Terraform code into a module rather than creating a module from scratch. This is very useful in real-world scenarios where someone created a set of resources that will be re-used repeatedly and you want to turn that Terraform code into a reusable module.
In this step, we are going to move the VM, NIC and NSG resources into a modules/vm folder and wire them up so they still use the same variables as before.
The resource group, virtual network and subnet will stay in the root configuration. Our existing variables.tf, outputs.tf and terraform.tfvars will also remain in the root. We will keep using those files to provide values and simply pass them into the new module.
First, create the modules/vm folder and the new .tf files, like so:
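If you are following along without the screenshots, the layout after this step might look roughly like this (the modules/vm file names come from this post; the root file names are the ones listed in the recap above):

```
Azure/
├── modules/
│   └── vm/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── nsg.tf
├── outputs.tf
├── providers.tf
├── resource-group.tf
├── terraform.tfvars
├── variables.tf
├── virtual-machine.tf
└── virtual-network.tf
```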
The main.tf file we create will hold the resource blocks for the module: the virtual machine, VM NIC, NSG and NSG association resources. Copy and paste those resources from the virtual-machine.tf and nsg.tf files into main.tf, like so:
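As a rough sketch of the result (the azurerm resource types are standard, but the "prod" labels and block contents are assumptions based on the series — yours may differ), modules/vm/main.tf would contain something like:

```hcl
# modules/vm/main.tf — sketch only; labels and settings are illustrative

resource "azurerm_network_security_group" "prod" {
  # ... NSG and security rules moved from nsg.tf ...
}

resource "azurerm_subnet_network_security_group_association" "prod" {
  # ... NSG association moved from nsg.tf ...
}

resource "azurerm_network_interface" "prod" {
  # ... NIC settings moved from virtual-machine.tf ...
}

resource "azurerm_linux_virtual_machine" "prod" {
  # ... VM settings moved from virtual-machine.tf ...
}
```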
After copying the code, you can remove the virtual-machine.tf and nsg.tf files.
There is one important change we need to make at this point. In the previous weeks, the VM, NIC and NSG resources in virtual-machine.tf and nsg.tf were reading location and resource_group_name directly from the resource group in the root, because all resources were at the root module level. An example of this is referring to resource group name and location attributes like this:
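In other words, the blocks contained references like this (the "prod" label is taken from later in the post):

```hcl
resource_group_name = azurerm_resource_group.prod.name
location            = azurerm_resource_group.prod.location
```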
Now that we are inside a child module, we can no longer access root resources this way. Instead, the module should receive the resource group name and location as input variables.
So, while you are in modules/vm/main.tf, replace any usage of azurerm_resource_group.prod.name with var.rg_name, any usage of azurerm_resource_group.prod.location with var.rg_location, and any usage of azurerm_subnet.prod.id with var.subnet_id, like so:
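After the replacement, those lines inside the module would look like:

```hcl
resource_group_name = var.rg_name
location            = var.rg_location
subnet_id           = var.subnet_id
```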
Next, create modules/vm/variables.tf and define the variables that the module expects. These should match the names you are already using in main.tf for the VM and NSG, like so:
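A minimal sketch of modules/vm/variables.tf — rg_name, rg_location and subnet_id come from the replacements above; the types and the comment about further inputs are assumptions:

```hcl
variable "rg_name" {
  type = string
}

variable "rg_location" {
  type = string
}

variable "subnet_id" {
  type = string
}

# ...plus any other inputs the module needs, such as VM size,
# admin credentials, NIC settings and allowed ports.
```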
This should resolve the errors Visual Studio Code was showing, like this:
Finally, create modules/vm/outputs.tf. Here we define the values that we want to expose back to the root configuration. These are the same kinds of values you exposed in Week 5, but now they come from the module instead of directly from the VM resource.
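For illustration only (the output names and resource labels here are assumptions, based on the VM ID, NIC name and private IP outputs mentioned earlier), modules/vm/outputs.tf could look like:

```hcl
output "vm_id" {
  value = azurerm_linux_virtual_machine.prod.id
}

output "nic_name" {
  value = azurerm_network_interface.prod.name
}

output "private_ip" {
  value = azurerm_network_interface.prod.private_ip_address
}
```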
As you can see, we only took the outputs that are specific to the resources in our main.tf file, and left out the resource group output, which still lives in the root rather than in the module.
In order to refer to the outputs of our VM module, we must now use the module.module_name.output_name syntax in the outputs we have defined in our root outputs.tf file, like so:
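Assuming the module is called vm and exposes an output named private_ip (both names are illustrative), a root outputs.tf entry would change to something like:

```hcl
output "vm_private_ip" {
  value = module.vm.private_ip
}
```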
Notice how the resource group output is still the same, as that is not a resource we included in the VM module.
5. Using the Module in Our Root Configuration
Now that we have created our VM child module and moved all VM-related resources into it, the next step is to connect this module to our root configuration. This is where we pass in the variables from our existing variables.tf and continue using terraform.tfvars just like in previous weeks.
Create a new main.tf under the root directory (Azure), like so:
Here, we will add a module block to actually call the child module we created under modules/vm and create the VM resource. The syntax to create a module looks like this:
source tells Terraform where the module's code lives, relative to the root directory.
The rest of the arguments are the inputs the module expects in order to create its resources. So when we call the VM module, we pass the variable values from our terraform.tfvars file as inputs, and the child module uses them to create the VM.
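Putting it together, the module block in the root main.tf might look roughly like this (the module name "vm" and the input names are assumptions based on the variables introduced earlier; subnet_id = azurerm_subnet.prod.id matches the reference discussed below):

```hcl
module "vm" {
  source = "./modules/vm"

  rg_name     = azurerm_resource_group.prod.name
  rg_location = azurerm_resource_group.prod.location
  subnet_id   = azurerm_subnet.prod.id

  # ...plus the remaining inputs, passed through from root variables,
  # e.g. vm_size = var.vm_size
}
```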
Notice how azurerm_subnet.prod.id is not a variable. It’s an attribute that exists only after the virtual network and subnet are created. Because the VM module depends on the subnet being created first, referencing the subnet ID here creates an implicit dependency. Terraform automatically ensures that the subnet is created before the VM module runs, simply because the module cannot receive this value until the subnet exists.
With this module block in place, the root configuration is now responsible for providing inputs (via variables and resource outputs), while the child module handles the actual creation of the VM, NIC and NSG. This separation keeps the root clean and makes the VM definition fully reusable for future deployments.
6. Deploying to Azure
Okay, that was a lot of editing and moving things around. This week, we'll add some additional commands to make sure everything is intact.
Let's start with initializing Terraform by issuing terraform init command:
Since we moved a lot of code around, the formatting in our .tf files may no longer be consistent. Terraform provides a built-in tool for this: running terraform fmt automatically cleans up and standardizes the formatting across your Terraform files so everything is properly aligned and easier to read. Because we now have a hierarchical folder structure, we append the -recursive flag so terraform fmt is applied to all files in all directories.
Great, looks like I had multiple files that required formatting, which are listed above.
Before applying any changes, it is a good idea to run the terraform validate command. This command checks whether your Terraform files are written correctly and whether the configuration is structurally sound. It does not contact Azure or create any resources. It simply reads your files and confirms that the syntax, references and module structure are valid. Running this helps catch issues early before running a plan or apply:
Now we are going to run terraform plan as usual:
Great! Looks like it's still creating 7 resources (VM, VM NIC, NSG, NSG Association, Virtual Network, Subnet, Resource Group), and the outputs are still the same as well. Our module configuration is working.
Next, we run terraform apply -auto-approve command:
Perfect, all resources are created successfully, and at the end of our apply, we got our outputs printed out.
Checking the Azure Portal, I can also see the VM created successfully with the values expected:
Finally, don't forget to issue the terraform destroy -auto-approve command to avoid incurring costs:
7. Wrap-Up
By introducing our first Terraform module, we transformed our configuration from a simple, single-VM setup into a more modular and reusable design. Modules are how Terraform scales—whether you're deploying five VMs or fifty, the pattern stays the same. This week we learned how to move existing resources into a module, pass data into it from the root and expose outputs back out so other parts of the configuration can use them.
From here, you’ll start to see why teams rely on modules in every real Terraform project. In the upcoming posts, we’ll expand this module, deploy multiple instances and continue turning our growing configuration into something that mirrors real infrastructure practices.
Thanks for reading and stay tuned for more!