
How APIs Actually Travel Between Systems

2025-12-27 20:37:52

So far, we’ve talked about APIs as a way for systems to communicate.

But here’s an important clarification.

An API defines what systems say to each other. Network protocols define how those messages travel.

Let's discuss some of the most common ways messages move across the network.

Think of APIs as the language. Think of protocols as the roads, vehicles, and traffic rules.

Let’s walk through the important ones in simple terms.

HTTP – the basic conversation

HTTP is the most common way applications talk over the internet. When you open a website or an app loads data, it usually uses HTTP underneath.

How it behaves:

  1. One side sends a message
  2. The other side replies
  3. Conversation ends

It’s like sending a letter and waiting for a reply. This works well for:

  1. Loading pages
  2. Fetching data
  3. Simple request–response communication
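
To make the request–response pattern concrete, here is a minimal sketch using Python's standard library (the URL is just a placeholder, not a real API):

# One request goes out, one reply comes back, and the exchange is over.
from urllib.request import urlopen

with urlopen("http://example.com") as response:
    body = response.read()
    print(response.status, len(body))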

HTTPS – HTTP with a lock

HTTPS is the same as HTTP, but secured. Think of it like sending the same letter, but inside a locked envelope. Only the sender and receiver can read it.

That’s why HTTPS is used for:

  1. Login systems
  2. Payments
  3. Any sensitive data

From an API point of view, nothing changes in behavior. Only privacy and safety improve.
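
In code, the difference is usually invisible; only the URL scheme changes. A rough sketch, again with Python's standard library:

# Same request shape as before, but encrypted in transit.
# Certificate checks and encryption happen underneath the same call.
from urllib.request import urlopen

with urlopen("https://example.com") as response:
    print(response.status)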

WebSocket – keeping the call open

WebSocket is different. Instead of sending a message and closing the connection, it keeps the line open.

It’s like a phone call:

  1. Both sides stay connected
  2. Both can speak anytime

This is used when:

  1. Messages need to flow both ways
  2. Updates must be instant

Examples:

  1. Live chat
  2. Real-time notifications
  3. Live dashboards

APIs using WebSockets don’t “ask again and again.” They listen continuously.
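
As a rough sketch of that always-open line, here is what listening looks like with the third-party websockets library (the server URL is a placeholder assumption):

# The connection stays open; messages can arrive at any time.
# Assumes `pip install websockets` and a server at the placeholder URL.
import asyncio
import websockets

async def listen():
    async with websockets.connect("wss://example.com/updates") as ws:
        async for message in ws:   # no repeated asking: we just keep listening
            print("received:", message)

asyncio.run(listen())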

TCP – slow but reliable delivery

TCP is about reliability. Imagine sending fragile items through a courier service that:

  1. Confirms delivery
  2. Resends if something is lost
  3. Keeps things in order

That’s TCP. Most APIs rely on TCP underneath because:

  1. Messages arrive correctly
  2. Nothing is lost silently

It’s slower than some options, but trustworthy.
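
Here is a small sketch of that reliability using Python's socket module: the connection is confirmed before any data moves, and the bytes arrive in order (the host and request are placeholders):

# TCP: connect first (handshake), then send and receive an ordered byte stream.
import socket

with socket.create_connection(("example.com", 80)) as conn:
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")  # delivered in order, or retried
    reply = conn.recv(1024)                                        # nothing is lost silently
    print(reply.splitlines()[0])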

UDP – fast but no guarantees

UDP is the opposite approach. It sends messages quickly, without checking if they arrived.

Imagine shouting information across a room: some people may miss it, but it's fast.

UDP is used where speed matters more than perfection:

  1. Video calls
  2. Live streaming
  3. Online games

APIs rarely use UDP directly, but systems built on real-time data often do.
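
For contrast, a fire-and-forget UDP sketch with Python's socket module (the address is a placeholder; nothing needs to be listening there):

# UDP: send a datagram and move on. No handshake, no acknowledgement.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"frame-001", ("127.0.0.1", 9999))  # returns immediately; delivery is not guaranteed
sock.close()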

HTTP/3 (QUIC) – modern and faster roads

HTTP/3 is a newer version of HTTP that runs over QUIC, a transport protocol built on top of UDP.

The idea is simple:

  1. Reduce delays
  2. Recover faster from network issues

It’s useful for:

  1. Mobile apps
  2. IoT devices
  3. Real-time experiences

From the API point of view, this mostly happens behind the scenes.

SMTP and FTP – special-purpose messengers

Some protocols exist for very specific jobs.

  1. SMTP is designed for emails.
  2. FTP is designed for file transfers.

They are not general API communication tools, but they still represent structured system-to-system communication.

They follow strict rules, just like APIs do.

Conclusion

APIs don’t float magically between systems. They travel using protocols like:

  1. HTTP / HTTPS
  2. WebSocket
  3. TCP / UDP

Documentation tells you:

  1. Which protocol is used
  2. What behavior to expect
  3. Whether communication is instant or delayed
  4. Whether it’s one-time or continuous

Without this clarity, teams guess. And guessing leads to fragile systems.

Why does this matter for documentation?

When API documentation is good, it answers questions like:

  1. Do I request or do I listen?
  2. Is this real-time or delayed?
  3. Is this secure?
  4. Will I get a response immediately?

Without documentation, developers press buttons randomly, like using a TV remote without knowing what each button does.

And that brings us back to the same point again.

Good documentation doesn’t just explain APIs. It explains how to communicate safely and correctly.

From ClickOps to DevOps: My First Infrastructure as Code Project with Terraform

2025-12-27 20:37:26

Introduction
Like many cloud enthusiasts, I started my AWS journey using the Management Console—clicking through wizards, manually selecting subnets, and hoping I didn't forget a configuration step. It works, but it’s prone to human error and hard to replicate.
This week, I decided to level up. I started learning Terraform to embrace Infrastructure as Code (IaC).
In this post, I’ll walk you through my very first hands-on task: provisioning a custom network stack and launching an EC2 instance entirely through code.

The Architecture
Instead of just launching a default instance, I wanted to build the network from scratch to understand how the components connect. Here is what I built:

  • VPC: A custom Virtual Private Cloud.
  • Subnet: A public subnet for the instance.
  • Internet Gateway (IGW): To allow internet access.
  • Route Table: Configuring routes to the IGW.
  • Security Group: Allowing SSH, HTTP, and HTTPS.
  • EC2 Instance: The server itself.

Why Terraform?
Before diving into the code, here are the immediate benefits I realized while working on this:
Speed: I can destroy and recreate the entire infrastructure in seconds with one command.
No Human Error: No more accidentally clicking the wrong checkbox. The code is the source of truth.
Documentation: The code itself acts as documentation for the infrastructure.

The Code
Here is a look at the main.tf file I created.

1. The Network Setup
First, we define the VPC and the Internet Gateway.
resource "aws_vpc" "terra-vpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "terra-vpc"
  }
}

resource "aws_internet_gateway" "terra-igw" {
  vpc_id = aws_vpc.terra-vpc.id
}

resource "aws_subnet" "terra-subnet1" {
  vpc_id     = aws_vpc.terra-vpc.id
  cidr_block = "10.0.1.0/24"
}

2. Security Groups
This was the trickiest part! I learned that enabling traffic requires specific ingress (incoming) and egress (outgoing) rules.
resource "aws_security_group" "terra-ec2-sg" {
  name   = "terraform-ec2-sg"
  vpc_id = aws_vpc.terra-vpc.id
}

# Allow SSH from anywhere
resource "aws_vpc_security_group_ingress_rule" "allow_ssh" {
  security_group_id = aws_security_group.terra-ec2-sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 22
  to_port           = 22
  ip_protocol       = "tcp"
}

# Allow all outbound traffic
resource "aws_vpc_security_group_egress_rule" "allow_all" {
  security_group_id = aws_security_group.terra-ec2-sg.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "-1" # Represents all protocols
}

3. The Instance
Finally, tying it all together by launching the EC2 instance inside our new security group and subnet.
resource "aws_instance" "first_terra_instance" {
  ami                    = "ami-02b8269d5e85954ef" # Check your region!
  instance_type          = "t3.micro"
  key_name               = "terra-key-pair"
  vpc_security_group_ids = [aws_security_group.terra-ec2-sg.id]
  subnet_id              = aws_subnet.terra-subnet1.id

  tags = {
    Name = "Terraform-EC2"
  }
}

The Workflow: 4 Magic Commands
Learning the syntax is one thing, but understanding the lifecycle is another. These are the four commands I used constantly:

- terraform init: Initializes the directory and downloads the necessary AWS providers.
- terraform validate: A lifesaver! It checks your code for syntax errors before you even try to run it.
- terraform plan: This is my favorite. It shows a "dry run" of what will be created, changed, or destroyed. It gives you confidence before making changes.
- terraform apply: The command that actually makes the API calls to AWS to build the resources.

Conclusion
Building this project gave me a much deeper appreciation for modern DevOps practices. It’s empowering to see an empty AWS account populate with resources just by typing terraform apply.
My next step? I plan to look into Terraform Variables to stop hardcoding values and make this script reusable for different environments.
If you are just starting with cloud, I highly recommend picking up Terraform. It changes the way you look at infrastructure!

Have you worked with Terraform? What was the first resource you automated? Let me know in the comments! 👇

When Teams Go Quiet, It's Dangerous: Reading Project Crisis Signals

2025-12-27 20:33:00

The Slack channel is quiet.

In standup meetings, everyone just says "Nothing special."

Looks peaceful, right?

No. It's likely the calm before the storm.

After watching numerous projects, I realized something. When teams suddenly go quiet, it's a signal that something is seriously wrong.

The Truth Communication Graphs Tell

Results from analyzing actual project communication patterns:

Messages/Day
200 |                    🔥
150 |****
100 |    ****
 50 |        ****
  0 |____________****____
    W1  W2  W3  W4  W5  W6  W7

W1-2: Active Discussion (Normal)
W3-4: Focused Development (Normal)
W5-6: Anxious Silence (Danger)
W7: Problem Explosion (Crisis)

The silence of W5-6 predicts the explosion of W7.

Four Types of Silence

1. Focus Silence (Good Silence) ✅

Characteristics:
- Intermittent but clear communication
- "Focus mode" status message
- Updates at set times
- Quick responses to questions

Signal:
"Focusing on API implementation today. Will update at 5 PM."

This silence is healthy. Proof of immersion in work.

2. Stuck Silence ⚠️

Characteristics:
- Vague answers like "in progress"
- Avoiding specific questions
- No help requests
- Same task for days

Signal:
"Still working on it..." (Same answer for 3 days)

Stuck but can't speak. Immediate intervention needed.

3. Conflict Silence 🚨

Characteristics:
- Communication breakdown between specific team members
- Increased DMs instead of public channels
- Avoiding eye contact in meetings
- Passive-aggressive messages

Signal:
"I'll do as you decided." (Avoiding responsibility)

Team is about to break. Mediation urgent.

4. Resignation Silence 💀

Characteristics:
- Giving up on opinions
- Only minimal responses
- No new ideas
- Just waiting for quitting time

Signal:
"Yes, understood." (Answer to everything)

Near burnout. May have already mentally left.

Real Case: A Startup's Crisis

Week 1-2: Active Start

Daily Messages: 150
"How about doing it this way?"
"Good idea!"
"Let's try it!"

Week 3-4: Healthy Focus

Daily Messages: 80
"Sharing today's goals"
"Please review PR"
"Merge complete"

Week 5: Anxious Signal

Daily Messages: 40
"In progress"
"Yes"
"Confirmed"

Week 6: Dangerous Silence

Daily Messages: 15
(Mostly bot messages)
Standup attendance 50%

Week 7: Explosion

"Why are you only saying this now?"
"It was the wrong direction from the start"
"I can't do this anymore"

Project 3 weeks delayed
2 team members quit

Early Warning System for Communication Patterns

Quantitative Indicators

def communication_health_check(team_data):
    """Check team communication health."""

    indicators = {
        "message_decrease": -30,       # Warn if messages drop 30%+ (negative = decrease)
        "response_time_increase": 2,   # Warn if responses are 2x slower
        "emoji_usage_decrease": -50,   # Warn if emoji use drops 50%+
        "question_decrease": -40,      # Warn if questions drop 40%+
    }

    # A drop at least as large as the threshold (more negative) is a danger sign.
    if team_data.message_decrease <= indicators["message_decrease"]:
        return "🚨 Danger: Communication volume plummeting"

    if team_data.response_time >= indicators["response_time_increase"]:
        return "⚠️ Warning: Response delay"

    return "✅ Normal"

Qualitative Signals

Healthy Team:

  • "What do you think about this?"
  • "Can I help you?"
  • "Made a mistake lol, will fix"
  • "Good work today!"

Dangerous Team:

  • "..."
  • "Yes"
  • "Confirmed"
  • (Read but no response)

PM Response Strategy

Level 1: Prevention (Daily)

## Daily Checklist

- [ ] Check Slack activity
- [ ] DM quiet team members
- [ ] Check standup atmosphere
- [ ] Actively resolve blockers

Level 2: Early Intervention (When Signal Detected)

1:1 Meeting Template:

"You've been quiet lately, are you stuck somewhere?"
(Never say "Why aren't you talking?")

"Let me know anytime if you need help"
(Don't pressure)

"Is collaboration with other team members going well?"
(Check for conflict)

Level 3: Crisis Management (When Silence Continues)

Team Reset Meeting:

  1. Acknowledge problem: "Communication doesn't seem to be working well"
  2. Safe space: "Please speak freely"
  3. Specific actions: "Let's change it this way going forward"
  4. Follow-up: "Let's check in again tomorrow"

Communication Activation Techniques

1. Structured Check-in

Monday: "Share this week's goals"
Wednesday: "Mid-check & blockers"
Friday: "Retrospective & next week plan"

2. Build Psychological Safety

const safe_space_rules = {
  mistake_ok: 'Mistakes are okay',
  questions_encouraged: 'Not knowing is normal',
  opinions_respected: 'Different opinions are good too',
  learn_from_failure: 'Learn from failures',
};

3. Async Communication Tools

Daily Log:

## Today's TIL (Today I Learned)

- Learned:
- Stuck on:
- Need help:

4. Team Atmosphere Makers

Communication Facilitation Activities:

  • Monday morning: Weekend stories
  • Wednesday: Random coffee chat
  • Friday: This week's MVP selection

Special Management for Remote Teams

Remote teams make it harder to detect silence.

Additional Indicators

remote_indicators = {
    "camera_off": "Burnout signal",
    "late_login": "Motivation decline",
    "early_logoff": "Considering leaving team",
    "emoji_only_response": "Avoiding opinions"
}

Response Methods

  1. Regular 1:1: Once a week mandatory
  2. Virtual Coffee Time: Non-work conversation
  3. Screen Share Pair Work: Prevent isolation
  4. Explicit Praise: In public channels

Communication Pattern Dashboard

Weekly Communication Health
━━━━━━━━━━━━━━━━━━━━━━━━
Message Volume:     ████████░░ 80%
Response Speed:     ██████░░░░ 60%
Positive Word %:   █████████░ 90%
Question Frequency: ███████░░░ 70%
━━━━━━━━━━━━━━━━━━━━━━━━
Overall: ⚠️ Attention Needed

Magic Questions to Break Silence

When stuck:

  • "How far have you progressed?"
  • "Shall we look from the first step together?"

When resolving conflict:

  • "Shall we hear different perspectives?"
  • "Let's find common ground first"

When overcoming resignation:

  • "What do you want to gain from this project?"
  • "How can we make it more fun?"

Conclusion: Noisy Teams are Healthy Teams

A quiet office might look like good work.

But healthy teams are noisy.

  • Lots of questions
  • Active opinions
  • Laughter
  • Occasional arguments

Silence is not golden.
In projects, silence is a danger signal.

When teams go quiet, reach out first.
"How are you doing lately?"

This one phrase can save the project.

Want to track and improve team communication and collaboration in real-time? Check out Plexo.

Quark's Outlines: Emulating Sequence and Mapping Types in Python

2025-12-27 20:11:39

Overview, Historical Timeline, Problems & Solutions

An Overview of Emulating Sequence and Mapping Types in Python

What does it mean to emulate sequences and mappings in Python?

When you create your own class in Python, you may want it to act like a list or a dictionary. You can do this by emulating the behavior of Python’s built-in sequence and mapping types. This means your object will respond to the same operations as a list or a dict — things like indexing, slicing, or assigning values with square brackets.

To make this work, you define certain special methods. These methods have names like __getitem__, __setitem__, and __len__. Python calls these methods when you use square brackets or when you check the length with len().

Python lets your objects act like lists or dicts by defining special methods.

class Box:
    def __init__(self):
        self.data = [10, 20, 30]
    def __getitem__(self, key):
        return self.data[key]

b = Box()
print(b[1])
# prints:
# 20

By defining __getitem__, this object now works like a list when accessed by index.

What methods let you emulate Python sequences and mappings?

Python gives you several special methods to control bracket-based access and assignment. These include:

  • __getitem__(self, key) to read a value
  • __setitem__(self, key, value) to assign a value
  • __delitem__(self, key) to delete a value
  • __len__(self) to return how many items are in the object

If you define these methods in your class, your object will behave like a list (sequence) or a dictionary (mapping), depending on the type of keys it accepts.

Python lets you create custom containers that work with square brackets.

class DictLike:
    def __init__(self):
        self.store = {}
    def __setitem__(self, key, value):
        self.store[key] = value
    def __getitem__(self, key):
        return self.store[key]

d = DictLike()
d["color"] = "red"
print(d["color"])
# prints:
# red

This object now behaves like a dictionary by using the __getitem__ and __setitem__ methods.

A Historical Timeline of Emulating Sequence and Mapping Types in Python

Where do Python’s emulation rules come from?

Python’s design allows custom objects to behave like built-in ones. This behavior comes from early object-oriented design patterns that made behavior depend on method names, not types. Over time, Python gave more control to programmers through special method names.

People built early models for custom containers

1970s — Operator overloading in languages like C++ allowed classes to define behavior for built-in operators.

1980 — The ABC language used readable syntax for structured data types, which influenced Python.

People designed Python’s special method system

1991 — Python added __getitem__ and friends to let user-defined types act like lists or dicts.

2000 — Slicing support was extended with __getslice__ and other slice helpers.

2008 — Python 3 simplified slicing logic by standardizing __getitem__ to accept slice objects.

2025 — Python class interfaces remain stable for emulating all standard container behaviors with simple methods.

Problems & Solutions with Emulating Sequences and Mappings in Python

How do you use Python’s special methods to build custom containers?

Python gives you the ability to create your own types that behave like sequences or mappings. These problems show how you can let your objects respond to indexing, length checks, assignments, and deletions by defining special methods.

Problem: How do you return values using square brackets in Python?

You are building a class to store data. You want people to get values using square brackets, like box[2]. You do not want them to call a method. You want to use normal Python syntax.

Problem: You want to support obj[key] syntax in your class.

Solution: Define the __getitem__ method in your class.

Python lets you return values with square brackets using __getitem__.

class MyList:
    def __init__(self):
        self.data = ['a', 'b', 'c']
    def __getitem__(self, index):
        return self.data[index]

m = MyList()
print(m[1])
# prints:
# b

When Python sees m[1], it calls m.__getitem__(1) behind the scenes.

Problem: How do you assign a value using square brackets in Python?

You want your object to work like a dictionary. You want to write d["key"] = "value" and have it store the data. You do not want to use a custom method name. You want to use Python’s built-in assignment style.

Problem: You want to support obj[key] = value syntax.

Solution: Define the __setitem__ method.

Python lets you assign values with square brackets using __setitem__.

class MyDict:
    def __init__(self):
        self.data = {}
    def __setitem__(self, key, value):
        self.data[key] = value

d = MyDict()
d["color"] = "blue"
print(d.data)
# prints:
# {'color': 'blue'}

Python uses __setitem__ to store the value when it sees square-bracket assignment.

Problem: How do you delete a value with square brackets in Python?

You made a class that stores values like a list. Now you want users to delete a value by writing del obj[2]. You want this to behave like del mylist[2].

Problem: You want to support del obj[key] syntax.

Solution: Define the __delitem__ method.

Python lets you delete values with square brackets using __delitem__.

class MyList:
    def __init__(self):
        self.data = [1, 2, 3]
    def __delitem__(self, index):
        del self.data[index]

m = MyList()
del m[1]
print(m.data)
# prints:
# [1, 3]

This method lets your object clean up values using standard Python syntax.

Problem: How do you return the length of a custom container in Python?

You want users to call len(obj) and get the number of items. You do not want them to use a custom method like .size(). You want to match how Python works with lists and dicts.

Problem: You want len(obj) to work on your object.

Solution: Define the __len__ method in your class.

Python lets you define container length with __len__.

class MyBox:
    def __init__(self):
        self.items = [10, 20, 30]
    def __len__(self):
        return len(self.items)

b = MyBox()
print(len(b))
# prints:
# 3

When Python sees len(b), it calls b.__len__() behind the scenes.

Problem: How do you support both indexes and keys in Python?

You are building a class that sometimes acts like a list and sometimes like a dict. You want it to support both integer indexes and string keys.

Problem: You want flexible key types inside square brackets.

Solution: Handle both inside the __getitem__ and related methods.

Python lets you handle keys and indexes in one method.

class Hybrid:
    def __init__(self):
        self.data = {0: "zero", "one": 1}
    def __getitem__(self, key):
        return self.data[key]

h = Hybrid()
print(h[0])
print(h["one"])
# prints:
# zero
# 1

You control which keys are valid. Python passes them straight to your method.

Like, Comment, Share, and Subscribe

Did you find this helpful? Let me know by clicking the like button below. I'd love to hear your thoughts in the comments, too! If you want to see more content like this, don't forget to subscribe. Thanks for reading!

Mike Vincent is an American software engineer and app developer from Los Angeles, California. More about Mike Vincent

📊 Visualize Your Coding Journey: Check Your GitHub Stats

2025-12-27 20:09:59

Have you ever wondered what your coding timeline really looks like beyond the standard green contribution squares on GitHub?

We all love seeing those green squares light up, but sometimes they don't tell the whole story. They don't tell you which languages dominate your repos, how your tech stack has evolved over the years, or which specific repositories earn the most stars.

I recently stumbled upon a tool that turns your GitHub profile into a beautiful visual report, and I had to share it.


🚀 Meet "Profile Summary for GitHub"

It is an open-source tool that analyzes your public repositories and generates a comprehensive dashboard of your coding habits.

Instead of just counting commits, it breaks down your profile into actionable insights.

🔗 Check it out here: profile-summary-for-github.com

✨ Key Features

Here is why I think this tool stands out:

  • Language Breakdown: It doesn't just show your top language; it visualizes the ratio of every language you use across all repositories.
  • Repository Insights: You can see which repos have the most stars and forks in a clean bar chart.
  • Timeline Analysis: It shows a "Commits per Repository" chart over time, which is great for seeing which projects you were obsessed with during specific years.
  • No Sign-up Required: You just enter your GitHub username, and it generates the report instantly.

🧐 How to Use It

  1. Go to profile-summary-for-github.com.
  2. Enter your GitHub username.
  3. Analyze your charts!

You can also view your profile directly via this URL pattern:

https://profile-summary-for-github.com/user/YOUR_USERNAME

💡 Why It Matters

As developers, we often forget how much we've learned or how our interests have shifted. Seeing a visual representation of your move from, say, Java to Python, or seeing that side project you worked on for three months straight in 2021, is incredibly validating.

It is also a great link to include in your portfolio or resume to give recruiters a quick snapshot of your technical focus.

👇 Let's Connect!

Give it a try and let me know in the comments: What is your #1 most used language according to the tool?

Lessons from the Coursera–Udemy Merger Deal

2025-12-27 20:06:18

Mergers have long been a defining strategy in technology and education. They allow companies to combine strengths, achieve scale, and navigate rapidly changing markets. In the context of education technology (edtech) and electronic learning (e-learning), consolidation can unlock value that individual platforms struggle to deliver alone.

In December 2025, the online learning landscape took a major turn as Coursera agreed to acquire Udemy in an all-stock deal valued at roughly $2.5 billion. This move signals a significant consolidation in the e-education market.

Both companies bring distinct and complementary strengths. Coursera is known for its partnerships with universities and professional certificates, while Udemy operates a broad marketplace of instructor-created courses. The combined entity aims to reach a larger global audience, accelerate AI-driven learning experiences, and serve both individual learners and enterprise customers more effectively.

Why mergers are important in edtech
Mergers can help platforms:

  1. Expand reach and scale in a crowded market.

  2. Invest in innovation—especially AI personalization and skills mapping.

  3. Reduce operational costs through synergies.

These strategic benefits are central to the Coursera-Udemy rationale.

Current trends and challenges

The edtech space is evolving quickly. Learners increasingly seek bite-sized, practical skills training alongside traditional credentials. At the same time, some challenges have emerged:

Instructor retention and monetization: Many Udemy instructors have expressed frustration with revenue shares and shrinking returns, leading some to consider launching courses independently or on platforms like Teachable and Thinkific, where creators keep a larger share of revenue.

Competition from independent course platforms: Instructors building their own sites or using LMS tools are capturing niche markets and setting their own pricing, often above the average Udemy price point, as they seek more control and higher margins. Notable examples include Traversy Media, NetNinja, and Academind, who previously had their courses on Udemy and now run independent e-learning platforms for their tech courses.

These shifts reflect broader industry dynamics: learners want value and flexibility, while instructors seek fair compensation and ownership of their content.

Uncertainties ahead

Despite the strategic rationale, several questions remain:

Pricing structures: There has been no official announcement on pricing changes post-merger. It is unclear whether the combined platform will maintain Udemy’s affordable individual course prices or more closely align with Coursera’s subscription or credential models.

Platform integration: Merging two different business and technology models—marketplace versus institutional partnerships—will be complex. Integration success will significantly influence the user experience for both learners and instructors.

Conclusion

The Coursera–Udemy merger is a strategic response to mounting competition, AI disruption, and evolving learner expectations. It underscores the importance of scale and innovation in edtech, but it also highlights ongoing challenges: instructor economics, platform differentiation, and how to balance accessibility with sustainable business models.

For professionals engaged in lifelong learning, this merger reinforces a simple reality: continuous skill development remains essential, regardless of how platforms evolve.