2026-03-03 00:58:52
We’ve all been there. You’re reading a brilliant, massive piece of content, you hit the bottom, and suddenly you have to manually scroll all the way back up like it’s 2004.
Dropping a "Scroll to Top" button into your React app is a no-brainer for user experience. But if you aren't careful, that tiny little convenience button can absolutely tank your app's performance.
Today, I’m going to share a highly optimized, buttery smooth ScrollToTop component that rescues your users without causing a chaotic avalanche of React re-renders.
When I first built one of these, I fell into the classic trap: storing the exact window.scrollY position in React state.
Here is the brutal reality of that approach: every scroll event your user generates triggers a state update. On a long page, you are forcing your component to re-render dozens of times per second for absolutely no reason.
Instead of tracking the exact scroll depth, we only care about one thing: Has the user scrolled past our threshold? (In this case, 300px). We track a single boolean value, meaning React only re-renders twice—once when the button appears, and once when it vanishes.
Here is the optimized component:
import { useState, useEffect } from 'react';
import './ScrollToTop.css';

const ScrollToTop = () => {
  const [isVisible, setIsVisible] = useState(false);

  useEffect(() => {
    const handleScroll = () => {
      // We only toggle state when crossing the 300px mark
      setIsVisible(window.scrollY > 300);
    };

    // The { passive: true } option is crucial for scroll performance!
    window.addEventListener('scroll', handleScroll, { passive: true });
    return () => {
      window.removeEventListener('scroll', handleScroll);
    };
  }, []); // Empty dependency array: the listener is attached only once, on mount

  const handleClick = () => {
    window.scrollTo({
      top: 0,
      behavior: 'smooth',
    });
  };

  return (
    <div className='scroll-to-top-button-div'>
      <button
        className={`btn ${isVisible ? 'show' : 'hide'}`}
        type='button'
        onClick={handleClick}
        aria-label="Scroll to top"
      >
        Scroll to Top ⇑
      </button>
    </div>
  );
};

export default ScrollToTop;
The Styling: Ditch display: none
You cannot animate an element that toggles between display: block and display: none. It just violently snaps in and out of existence. To get that buttery smooth fade, we use opacity and visibility instead, pairing it with pointer-events: none so the invisible button doesn't block underlying clicks.
.btn {
  padding: 12px 20px;
  border: none;
  border-radius: 8px;
  position: fixed;
  bottom: 20px;
  right: 20px;
  z-index: 1000;
  background-color: slateblue;
  color: white;
  cursor: pointer;
  /* Transition opacity and visibility for a smooth fade */
  transition: opacity 0.4s ease, visibility 0.4s ease, transform 0.3s ease;
  box-shadow: 0 4px 6px rgba(0,0,0,0.2);
}

.btn:hover {
  background-color: orange;
  transform: translateY(-3px);
}

.show {
  opacity: 0.85;
  visibility: visible;
}

.hide {
  opacity: 0;
  visibility: hidden;
  pointer-events: none; /* Prevents invisible clicks */
}
Why This Component is a Winner
Zero Drag: By ditching pixel-tracking, your browser can breathe easy.
Passive Listeners: Adding { passive: true } tells the browser that our event listener won't prevent the default scroll behaviour, keeping the scroll itself perfectly smooth.
Accessible & Clean: The aria-label keeps it friendly for screen readers, and the CSS animations make it feel like a premium UI feature.
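The boolean-threshold trick is framework-agnostic, by the way. Stripped of React entirely, the core idea is "only notify on a crossing, never per pixel", and it is the same logic you would put inside a custom hook. Here is a small sketch (the function name and numbers are my own, not part of the component above):

```javascript
// Hypothetical helper: invokes onChange only when the threshold is crossed,
// not on every scroll event. React's setState bail-out gives you a similar
// effect, but this makes the "two renders total" idea explicit.
function createThresholdWatcher(threshold, onChange) {
  let lastVisible = false;
  return function handleScroll(scrollY) {
    const visible = scrollY > threshold;
    if (visible !== lastVisible) { // fire only on the crossing
      lastVisible = visible;
      onChange(visible);
    }
  };
}

// Simulate 501 scroll events; the callback fires exactly once.
const calls = [];
const watch = createThresholdWatcher(300, (v) => calls.push(v));
for (let y = 0; y <= 500; y++) watch(y);
console.log(calls); // [ true ]
watch(100); // scroll back above the threshold
console.log(calls); // [ true, false ]
```

Five hundred events in, two notifications out; that is the entire performance win in miniature.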
Feel free to snag this for your own projects! Let me know in the comments how you handle scroll events in your React builds—are there any custom hooks you prefer over a standard useEffect?
2026-03-03 00:57:29
Joins and window functions are two of the most powerful tools in SQL. Understanding them well is a key step toward becoming a data professional.
A join combines rows from two or more tables based on a related column. The most common is the INNER JOIN, which returns only rows where a match exists in both tables. If you need to preserve all rows from one side regardless of a match, you use a LEFT or RIGHT JOIN — NULLs fill in where no match is found. A FULL OUTER JOIN goes further, returning all rows from both tables with NULLs on either side where matches are missing. Less common but worth knowing, a CROSS JOIN produces every possible combination of rows between two tables, and a SELF JOIN joins a table to itself — handy for hierarchical data like employee-manager relationships.
Here's a join query combining employees with their departments:
SELECT e.name, d.name AS department, e.salary
FROM employees e
LEFT JOIN departments d ON e.dept_id = d.id;
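The self-join mentioned above is easy to try out with SQLite's in-memory mode. The table and data below are invented for illustration; each employee row is paired with its manager row from the same table:

```python
import sqlite3

# A self-join for hierarchical data: employees joined to their managers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER, name TEXT, manager_id INTEGER);
INSERT INTO employees VALUES (1, 'Ada', NULL), (2, 'Grace', 1), (3, 'Linus', 1);
""")

# LEFT JOIN keeps the top-level employee (no manager) with a NULL.
rows = conn.execute("""
    SELECT e.name AS employee, m.name AS manager
    FROM employees e
    LEFT JOIN employees m ON e.manager_id = m.id
    ORDER BY e.id
""").fetchall()
print(rows)  # [('Ada', None), ('Grace', 'Ada'), ('Linus', 'Ada')]
```

Note the LEFT JOIN: an INNER self-join would silently drop the employee at the top of the hierarchy.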
Window functions compute values across a set of rows related to the current row — without collapsing the result like GROUP BY does. They use the OVER() clause, with PARTITION BY to define groups and ORDER BY to control row ordering within each group.
Ranking functions like RANK(), DENSE_RANK(), and ROW_NUMBER() are among the most used. The difference is: RANK() skips numbers after a tie, while DENSE_RANK() does not. Aggregate functions like SUM() and AVG() can also be used as window functions. Add an ORDER BY inside OVER() and they become cumulative, perfect for running totals. LAG and LEAD let you look at previous or next row values without a self-join, making period-over-period comparisons simple. Functions like FIRST_VALUE and NTILE round out the toolkit for benchmarking and bucketing data into equal groups.
Below is an example of a window function showing each student's score alongside the class average and their rank, without losing any rows. Assume a student_scores table of students and their exam scores was given initially.
SELECT
name,
subject,
score,
ROUND(AVG(score) OVER (PARTITION BY subject), 2) AS class_average,
RANK() OVER (PARTITION BY subject ORDER BY score DESC) AS class_rank
FROM student_scores;
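LAG is just as easy to demo. Here is a runnable sketch of a period-over-period comparison, again via SQLite's in-memory mode (window functions need SQLite 3.25 or later; the monthly_sales table and figures are made up):

```python
import sqlite3

# LAG(amount) pulls the previous row's value without a self-join,
# making month-over-month change a one-line expression.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE monthly_sales (month TEXT, amount INTEGER);
INSERT INTO monthly_sales VALUES
    ('2024-01', 100), ('2024-02', 120), ('2024-03', 90);
""")

rows = conn.execute("""
    SELECT month,
           amount,
           amount - LAG(amount) OVER (ORDER BY month) AS change_vs_prev
    FROM monthly_sales
""").fetchall()
for r in rows:
    print(r)
# ('2024-01', 100, None)
# ('2024-02', 120, 20)
# ('2024-03', 90, -30)
```

The first row gets NULL because there is no previous month, which is exactly the behaviour you want for the start of a series.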
2026-03-03 00:54:00
I have a self-hosted video merging service — Merge Video. It downloads YouTube videos with yt-dlp, merges them with ffmpeg, and uploads the result back to YouTube.
On localhost everything worked. On a cloud server:
ERROR: Sign in to confirm you're not a bot.
Use --cookies-from-browser or --cookies for authentication.
YouTube detected the datacenter IP and blocked downloads. The fix seemed obvious: extract cookies from Chrome, upload to server, done.
My hypothesis: extracting cookies is a solved problem. Dozens of tools exist. Should take 10 minutes.
Reality: I spent 3 hours trying 6 different methods. Only one worked — and it wasn't Chrome.
I run strategic sessions — 6-to-10-hour workshops where teams map complex systems before making decisions. We draw the object of management, analyze its place in a larger system, do hindsight and foresight, and only then build a plan. Over 150 sessions so far.
The cookie debugging was the exact opposite of this process.
I had an AI coding assistant — Antigravity by Google DeepMind — writing scripts, testing APIs, trying approaches. It's incredibly fast. But here's what happened: we burned through 6 attempts in 3 hours because neither of us stopped to map the system first.
The moment I paused and asked: "What's the plan? Pros, cons, risks, options?" — the answer became obvious in minutes:
| Without strategy | With strategy |
|---|---|
| Try DPAPI → fail | Map the landscape: Chrome encrypts, Firefox doesn't |
| Try rookiepy → fail | Identify constraints: app-bound encryption since July 2024 |
| Try CDP → fail | Evaluate options: only 2 viable paths (Firefox or proxies) |
| Try OAuth → fail | Choose: Firefox = free, proxies = $1.50/GB |
| Try Bearer token → fail | Execute: 10 lines of Python, done |
| Try Firefox → works | |
| 6 attempts, 3 hours | 1 attempt, 5 minutes |
This is the pattern I see in every strategic session: teams that jump to solutions before mapping the system waste time on dead ends. Teams that invest 30 minutes in analysis often find the answer without trying a single wrong approach.
AI tools are powerful executors. They'll write any script you ask for in seconds. But they'll also happily execute 5 wrong approaches before someone asks: "Wait — what are we actually dealing with?"
The human's job isn't to write code. It's to bring strategy — to ask the right questions before the first line of code is written.
Before you Google "export Chrome cookies" — read this.
When I first asked Antigravity to help extract cookies, its very first suggestion was to install a popular Chrome extension — Get cookies.txt. Quick, easy, thousands of users. I didn't install it — something felt off about giving a third-party extension access to all my browser sessions.
Good instinct. That extension turned out to be malware. It was silently sending all your cookies — login sessions, banking tokens, everything — to its developer. Google removed it from the Chrome Web Store and flagged it as malware.
This isn't an isolated case. Any browser extension with cookie access permissions can steal your sessions. Cookie-stealing malware like Raccoon, RedLine, and Vidar has been doing exactly this for years.
The rule: never install a third-party extension to export cookies. Use built-in tools or read the database directly.
Chrome encrypts cookies with DPAPI + AES-256-GCM on Windows. I wrote a Python script to call CryptUnprotectData via ctypes:
import ctypes
import ctypes.wintypes

class DATA_BLOB(ctypes.Structure):
    _fields_ = [("cbData", ctypes.wintypes.DWORD),
                ("pbData", ctypes.POINTER(ctypes.c_char))]

result = ctypes.windll.crypt32.CryptUnprotectData(
    ctypes.byref(input_blob), None, None, None, None, 0,
    ctypes.byref(output_blob)
)
Result: CryptUnprotectData failed
Why: Chrome 127+ (July 2024) introduced app-bound encryption. The decryption key is now bound to the Chrome binary itself. External programs — even running as the same user — can't decrypt cookies anymore.
rookiepy is a Rust-based Python library built specifically for extracting browser cookies. It handles modern Chrome encryption:
import rookiepy
cookies = rookiepy.chrome([".youtube.com", ".google.com"])
Ran it as Administrator.
Result: RuntimeError: decrypt_encrypted_value failed
Why: Same app-bound encryption. Even with admin privileges, no external process can access Chrome's cookie encryption key on Chrome 127+.
If you can't decrypt cookies externally, maybe ask Chrome directly. Chrome's DevTools Protocol has Network.getAllCookies:
ws.send(json.dumps({"id": 1, "method": "Network.getAllCookies"}))
cookies = json.loads(ws.recv())["result"]["cookies"]
This requires starting Chrome with --remote-debugging-port=9222.
Result: TCP connect to (127.0.0.1 : 9222) failed
Why: On Windows, if Chrome is already running, a new instance with the debug flag opens as a window in the existing process — without the debugging port. I killed Chrome, restarted with the flag. Chrome still wouldn't bind to port 9222. Possibly Windows Defender or a Chrome policy blocking it.
yt-dlp added native OAuth2 support in 2024. It uses YouTube TV's device code flow:
yt-dlp --username oauth2 --password '' https://youtu.be/...
# Go to google.com/device and enter code: XXXX-XXXX
Result: Login with OAuth is no longer supported
Why: YouTube deprecated this authentication method. It worked for a few months, then Google killed it. The yt-dlp wiki now says: "Use --cookies instead."
My app already had Google OAuth for YouTube uploads. Users sign in, I get their access_token. Why not pass it to yt-dlp?
cmd.extend(["--add-header", f"Authorization:Bearer {access_token}"])
Result: Request had insufficient authentication scopes
Why: yt-dlp uses YouTube's InnerTube API internally, not the YouTube Data API v3. They're completely different authentication systems:
| API | Authentication | Used by |
|---|---|---|
| YouTube Data API v3 | OAuth2 Bearer token | App uploads |
| YouTube InnerTube API | Session cookies | yt-dlp downloads |
A Data API token simply can't authenticate InnerTube requests.
After 5 failures with Chrome, I tried Firefox.
Firefox doesn't encrypt cookies.
Firefox stores cookies in a plain SQLite database at %APPDATA%/Mozilla/Firefox/Profiles/*/cookies.sqlite. No DPAPI. No AES-GCM. No app-bound encryption. Just SQL:
import sqlite3, shutil, glob, os

profiles = os.path.join(os.environ["APPDATA"], "Mozilla", "Firefox", "Profiles")
db = max(glob.glob(f"{profiles}/*/cookies.sqlite"), key=os.path.getsize)
shutil.copy2(db, "tmp.sqlite")  # Copy — Firefox locks the file

conn = sqlite3.connect("tmp.sqlite")
rows = conn.execute(
    "SELECT host, name, value, path, expiry, isSecure "
    "FROM moz_cookies WHERE host LIKE '%youtube%' OR host LIKE '%google%'"
).fetchall()
print(f"✅ {len(rows)} cookies extracted")
Result: ✅ 67 cookies saved
10 lines of Python. No admin rights. No encryption libraries. No fighting the browser.
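One small step remains between those rows and something yt-dlp can consume: its --cookies flag expects the Netscape cookies.txt format. A minimal writer for the (host, name, value, path, expiry, isSecure) tuples queried above could look like this. The function name is my own, and the field order follows the Netscape convention (domain, subdomain flag, path, secure flag, expiry, name, value):

```python
# Sketch: serialize Firefox cookie rows into Netscape cookies.txt format.
def write_netscape(rows, path="cookies.txt"):
    lines = ["# Netscape HTTP Cookie File"]
    for host, name, value, cpath, expiry, is_secure in rows:
        lines.append("\t".join([
            host,
            # Netscape's "include subdomains" flag: TRUE for dot-prefixed domains
            "TRUE" if host.startswith(".") else "FALSE",
            cpath,
            "TRUE" if is_secure else "FALSE",
            str(expiry),
            name,
            value,
        ]))
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
```

The header line matters: yt-dlp rejects cookie files that do not start with it.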
| # | Method | Blocked by | Time |
|---|---|---|---|
| 1 | DPAPI + AES-GCM | Chrome 127+ app-bound encryption | 30 min |
| 2 | rookiepy (Rust lib) | Same encryption, even as admin | 15 min |
| 3 | Chrome DevTools Protocol | Port 9222 wouldn't bind | 40 min |
| 4 | yt-dlp OAuth2 | Deprecated by YouTube | 20 min |
| 5 | Per-User Bearer token | Wrong API (InnerTube ≠ Data API) | 30 min |
| 6 | Firefox SQLite | Nothing | 5 min |
| What I expected | What actually happened |
|---|---|
| Cookie extraction is a solved problem | Chrome made it unsolvable in July 2024 |
| Admin rights bypass any protection | App-bound encryption ignores admin |
| OAuth tokens work across Google APIs | InnerTube ≠ Data API |
| yt-dlp has built-in auth | YouTube deprecated it |
| Chrome extensions export cookies safely | The most popular one was malware |
| It should take 10 minutes | It took 3 hours |
Chrome's app-bound encryption exists for a good reason: protecting users from cookie-stealing malware. Before Chrome 127, any program running as the same Windows user could read all Chrome cookies — login sessions, banking tokens, everything.
Google's fix: bind the decryption key to the Chrome binary. Only Chrome's own process can decrypt cookies. This blocks Raccoon, RedLine, Vidar, and all similar stealers.
The collateral damage: legitimate tools like yt-dlp, password managers, and cookie exporters also break.
Firefox took a different approach: cookies are plain SQLite, and security relies on OS-level protections — file permissions and user isolation. Neither approach is objectively better. Chrome prioritizes defense against same-user malware. Firefox prioritizes interoperability.
After extracting cookies, deployment was straightforward:
# Extract on local machine (Windows)
python extract_cookies.py
# → cookies.txt (67 YouTube/Google cookies, Netscape format)
# Upload to server
scp cookies.txt user@server:/opt/app/backend/
yt-dlp uses them automatically:
COOKIES_FILE = Path(__file__).parent / "cookies.txt"

cmd = ["yt-dlp", "-f", "bestvideo+bestaudio", ...]
if COOKIES_FILE.exists():
    cmd.extend(["--cookies", str(COOKIES_FILE)])
Trade-off: Cookies expire in ~2 weeks. When they do: open Firefox, visit YouTube, close Firefox, re-run the script. 60 seconds.
Start with Firefox. Chrome's encryption makes external cookie extraction impossible since July 2024.
Never install cookie-export extensions. The popular "Get cookies.txt" was literal malware. Read the SQLite database directly.
Check the timeline. Any cookie extraction tutorial from before Chrome 127 (mid-2024) won't work on modern Chrome.
InnerTube ≠ Data API. If you're building around yt-dlp, Google OAuth tokens from your app won't help — yt-dlp needs browser cookies, not API tokens.
Don't fight the browser. When a browser actively resists external access, use a browser that doesn't.
| Component | Status |
|---|---|
| Cookie extraction (Firefox) | ✅ Working |
| yt-dlp downloads with cookies | ✅ Working |
| Server deployment | ✅ Deployed |
| YouTube upload after merge | ✅ OAuth2 |
| Cookie auto-refresh | ❌ Manual (every ~2 weeks) |
The extraction script and the full project are open source: github.com/maximosovsky/merge-video
This is part of a series about building Merge Video — a self-hosted video merging service. Previous: I Tried to Merge 52 Video Files Automatically.
Building in public, one utility at a time. Follow the journey: LinkedIn · GitHub
2026-03-03 00:44:21
If you build iOS apps long enough, you start noticing where your time actually goes.
It is not always complex architecture decisions. It is not even SwiftUI layout bugs. Often, it is the small workflow interruptions that quietly eat away at your focus.
For many iOS developers, one of those interruptions is the traditional REST API client.
For years, browser-based API tools have been the default choice. They are powerful, widely adopted, and packed with features. But just because something is standard does not mean it fits every workflow equally well.
When your entire day revolves around Xcode, simulators, and real Apple devices, traditional API clients can start to feel like friction rather than support.
Let’s break down why.
iOS development already requires juggling multiple layers: Xcode builds, simulators, provisioning, and testing on real Apple devices.
Now add a browser-based REST API client into that mix.
Every time an endpoint fails, the typical routine looks like this: switch away from Xcode, open the browser client, recreate the failing request, copy the auth token across, and run it again.
None of these steps are difficult. But repeated dozens of times per week, they break momentum.
Flow state matters in development. Even small disruptions can slow problem-solving. When API debugging feels detached from your core development tools, it becomes a separate task instead of an integrated one.
There is another issue that many developers quietly ignore.
We build mobile apps. Our users interact with APIs from phones. Yet we often test APIs from a desktop browser on stable WiFi.
This gap matters more than it seems.
Authentication flows sometimes behave differently on real devices. Network latency changes performance behavior. Token storage and refresh cycles can expose edge cases that do not show up on desktop testing.
A traditional REST API client running in a browser does not always reflect real-world mobile conditions. That disconnect can delay discovering bugs until later in development.
Modern REST API client tools are powerful. They support automation, team collaboration, mock servers, scripting, and more.
But most iOS developers do not need all of that during daily debugging.
Sometimes you just want to: send a request, inspect the response, tweak a header or a token, and get back to your code.
Feature overload does not always increase productivity. Sometimes it adds cognitive load.
Apple developers are used to native tools that feel fast and focused.
Xcode is tightly integrated with macOS. Instruments works seamlessly with devices. Even small utilities feel optimized for the platform.
When a REST API client is built specifically for Apple devices, the experience changes.
For example, HTTPBot takes a native-first approach. Instead of running inside a browser, it works directly on iPhone, iPad, and macOS. That means developers can test APIs on real devices without leaving their ecosystem.
This may sound like a minor difference, but it changes the workflow.
You can debug a request on your Mac during development, then open the same collection on your iPhone to test it under real network conditions. There is no need to recreate environments or copy configurations across platforms.
The process feels connected instead of fragmented.
Many subtle API issues only surface on actual devices.
Mobile networks fluctuate. Background tasks behave differently. Token expiration timing may not match desktop expectations.
When your REST API client runs natively on iOS, you can test directly from the device your users rely on.
That reduces the gap between testing and production behavior.
Instead of simulating mobile conditions, you are experiencing them.
The biggest slowdown in iOS development rarely comes from writing Swift code. It comes from micro interruptions that add up over time.
Switching tools. Recreating requests. Copying tokens. Adjusting environments manually.
Traditional REST API client tools are powerful, but they were designed for general web development workflows. iOS development has different needs.
That is why some developers are exploring lighter, native options like HTTPBot for day-to-day API debugging. It is not about replacing collaboration tools entirely. It is about choosing the right tool for the right context.
When API testing feels integrated into your Apple workflow, you spend less time navigating tools and more time building features.
Traditional API clients are not bad tools. They have earned their place in backend development and team collaboration.
But iOS development has unique constraints. Real devices matter. Flow state matters. Context switching matters.
If your REST API client constantly pulls you away from Xcode or forces you into a desktop-only workflow, it might be slowing you down more than you realize.
Sometimes improving productivity is not about learning a new framework. It is about aligning your tools with your environment.
For Apple developers, that alignment can make API debugging feel faster, more natural, and more connected to the apps they are building.
And that small shift can have a surprisingly large impact on how smoothly development moves forward.
2026-03-03 00:44:01
Technical due diligence sounds like something for bankers and lawyers, but a lot of the work sits with engineers. If your company is looking at buying a small product or platform, at some point someone will ask you to look at the code, the infrastructure, and the team and say whether the deal makes sense from a technical side.
In practice you usually want answers to three basic questions: can the platform handle the growth in the business case, are there security or compliance gaps that would block use, and will integration into your stack cost more than planned?
This is a look at those questions for SMB-scale deals from a developer’s point of view.
Most public material on tech DD comes from large deals: private equity, corporate roll-ups, multi-region IT. That setup assumes many systems, many teams and months of coordinated work.
A small acquisition is different.
Often there is one core product, a primary codebase and a small engineering team. Instead of working only with a data room, you can usually get read-only access to the repository, to the main cloud account and to monitoring dashboards. You can speak directly with the people who built the system. That makes the work more hands-on and less about documents.
Because the scope is smaller, you go deeper. You read the code and see how it is structured. You look at the data model and how hard it is to change. You trace a feature from commit to production to understand deployment and release. You check what monitoring and alerting is in place and how incidents are handled. You also confirm basic points like IP ownership and key third-party licences.
Time and budget follow this pattern. Enterprise DD can run for months and involve several teams. For SMBs, two to four weeks is common. If the DD budget grows to the size of a full-time salary for a small deal, the process is probably overbuilt.
Integration is narrower too. You are not planning a full IT merger. The questions are more direct: can this product authenticate against your current identity provider, can you move or sync data without rewrites, can you run it alongside your existing stack for a while.
Red flags land differently. In a small company, a single developer holding most of the knowledge, missing IP assignment, or a production setup with no backups can be enough to change or stop the deal. In larger companies, you see broader but more distributed problems: old architectures, mixed security practice, partial compliance. Those often lead to discounts and integration plans rather than an immediate no.
If you are the engineer involved, your main job is to keep the scope honest: small deal, focused DD; and to make sure the findings stay tied to the business decision, not just to technical taste.
You can do some of this yourself, but many buyers bring in a firm that runs technical DD as their main work. These firms are not all the same. It helps to think in types rather than in brand names.
Some, like MEV, act much like external engineering leads. They read architecture, infrastructure and code, then link what they see to delivery speed, stability and integration effort. They are useful when the main concern is whether the product can support the growth case and how much work it will take to make it fit your environment.
Others have a background in testing and software quality, such as System Verification. They pay attention to coverage, test strategy, environments and release practices. They fit deals where long-term reliability and support cost are central.
In regulated sectors you see firms like Techrivo. They mix technical checks with detailed work on security controls, data handling and process maturity. They are relevant when a mistake does not just lead to downtime but to audits and fines.
Some groups, for example Liberty Advisor Group, look at IT and business together. They connect technical risk to operations and financial exposure. That is useful when the target depends on shared systems like ERP or when the finance team wants a direct link between technical findings and the model.
There are also providers tuned to early-stage companies, such as Upsilon IT, which use structured frameworks to assess team practices, scalability limits and immediate debt; benchmark-oriented firms like Crosslake Technologies that compare what they see with data from many past deals; and engineering-heavy shops like Mad Devs that focus on deep code and infrastructure review.
Beyond that, there are more specific specialists: Vysus Group works in industrial and asset-heavy environments; Zartis often appears in European cross-border deals; VisionX looks closely at AI and ML claims and checks whether the systems behind them are real and maintainable.
The point is simple: decide what sort of risk matters most in your deal—code quality, reliability, compliance, integration, AI, industrial systems—and pick a firm whose normal work lines up with that area.
Read more https://mev.com/blog/top-technical-due-diligence-firms-for-smbs
Whatever firm you work with, the output should help you and your leadership make decisions and plan real work. A large slide deck with vague comments is not enough.
The report should start with a short summary that a non-technical reader can follow. It should say whether anything blocks the deal, which issues change what the buyer should pay, and what needs attention in the first year. If someone on the business side can read only this section and explain it back, it is doing its job.
Each major finding below that should answer four plain questions: what is the issue, why it exists, what it does to the business and what to do about it. For example, if a service cannot scale beyond a certain point, the report should say whether this comes from design, implementation or infrastructure limits, and what that means for the planned customer or data growth.
You will also need rough ranges for effort. Nobody expects exact estimates from a DD team, but it matters a lot whether a fix is measured in weeks or years, and whether you need one engineer or several. Without ranges, you cannot connect findings to valuation or to post-deal staffing.
A good report also suggests order. It should be clear which issues need work in the first ninety days, which belong in a one-year plan and which can wait. Many teams use the report as a starting point for their roadmap.
Finally, the report should show how the conclusions were reached. References to parts of the codebase, infrastructure diagrams, log samples and notes from interviews make it possible for your own engineers to verify and extend the work later.
Here is a simple test for any DD report: can a non-technical reader follow the summary, is each major finding tied to a business impact and a rough effort range, and could your own engineers verify the conclusions from the evidence provided?
Even a strong provider will produce weak results if the engagement is set up badly. Three things matter most: objectives, access and communication.
Start by writing down what you need to decide. Common examples: can the platform handle the growth in the business case, are there any security or compliance gaps that would block use, and will integration into your stack cost more than planned. Share this with the provider and ask them to frame the work and the report around these points.
Next, organise access. For small deals this usually means read-only access to the main repository, to cloud accounts or dashboards, to incident and uptime records and to the people who know how the system behaves. If the provider only sees prepared slides and policy documents, you will get generic output.
Finally, agree on how you will talk during the engagement. Short, regular check-ins let you catch misalignment early. If they are spending days on a component you plan to replace soon after closing, you can redirect them.
Throughout, ask them to keep translating into business terms. For each major issue, ask what it does to valuation, operating cost and integration timing. That is the layer your leadership will use when making decisions.
Three questions help when you are choosing or steering a provider: what decisions will this work inform, what access will they actually get, and how will their findings be translated into business terms?
If they can answer those clearly, the work they produce is more likely to be useful to both engineers and the rest of the company.
2026-03-03 00:43:38
Day 8 of my #1HourADayJourney. Today, I shifted roles from a "Fortress Guardian" to a System Administrator. A huge part of securing any database or server environment is managing the human element—onboarding new talent and securing the accounts of those who leave.
Today’s focus was the full lifecycle of a user account. Here is what I practiced:
To add a new team member, I learned how to create an account with a pre-configured home directory (essential for workspace persistence):
# -m ensures the home directory /home/b.smith is created
sudo useradd -m b.smith
sudo passwd b.smith
When adding users to groups, never forget the -a flag. If you run usermod -G without it, the list you pass replaces the user's supplementary groups, removing them from every group not included.
# -a (append) -G (groups)
sudo usermod -aG developers b.smith
In a security audit scenario, you rarely want to userdel (delete) an account immediately, as you need their data preserved for legal reasons. Instead, we "lock" the account:
# This adds an '!' to the password field in /etc/shadow, disabling login
sudo passwd -l j.doe
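To verify the lock, sudo passwd -S j.doe reports a locked status, and sudo passwd -u j.doe removes the lock when the person returns. The '!' convention itself is easy to illustrate with a made-up shadow-style line (the username and hash below are invented, so this runs without root):

```shell
# A locked account's shadow entry has '!' prefixed to the password hash.
line='j.doe:!$6$salthash:19700:0:99999:7:::'

# Field 2 of /etc/shadow is the password hash.
field=$(echo "$line" | cut -d: -f2)

case "$field" in
  '!'*) echo "locked" ;;   # passwd -l prepended the '!'
  *)    echo "active" ;;
esac
# prints: locked
```

Because only the hash is prefixed rather than erased, passwd -u can restore the original password exactly, which is the whole point of locking instead of deleting.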
Follow my journey: #1HourADayJourney