
How Symfony 7.4 Uses Service Tags to Enable Modular, Decoupled Architectures

2026-01-16 17:07:20

Service tags in Symfony are often misunderstood as merely a mechanism for Event Listeners or Twig Extensions. While they excel at those tasks, their true power lies in decoupling architecture. When wielded correctly, tags allow you to build systems that are open for extension but closed for modification (the Open-Closed Principle) without touching a single line of configuration.

In this article, we will move beyond standard usage. We won’t just “tag a service”; we will build a robust, modular Document Processing Pipeline using Symfony 7.4, PHP 8.3+ and modern attributes. We will explore strictly typed tagged iterators, lazy-loading locators, custom domain-specific attributes and compiler passes for validation.

A Modular Document Processor

Imagine we are building a system that ingests various document formats (PDF, CSV, JSON) and processes them. We want to add support for new formats simply by creating a new class — no YAML editing required.

First, let’s define our contract.

// src/Contract/DocumentProcessorInterface.php
namespace App\Contract;

use Symfony\Component\DependencyInjection\Attribute\AutoconfigureTag;

/**
 * We use AutoconfigureTag so any class implementing this interface
 * is automatically tagged with 'app.document_processor'.
 */
#[AutoconfigureTag('app.document_processor')]
interface DocumentProcessorInterface
{
    public function supports(string $mimeType): bool;
    public function process(string $filePath): void;
    public static function getProcessorName(): string;
}

The Modern Strategy Pattern: Tagged Iterators

The most common advanced pattern is injecting a collection of services. In older Symfony versions, this required a Compiler Pass. In Symfony 7.4, we use #[TaggedIterator].

Let’s create two processors.

// src/Processor/PdfProcessor.php
namespace App\Processor;

use App\Contract\DocumentProcessorInterface;

class PdfProcessor implements DocumentProcessorInterface
{
    public function supports(string $mimeType): bool
    {
        return $mimeType === 'application/pdf';
    }

    public function process(string $filePath): void
    {
        // Logic to process PDF...
        echo "Processing PDF: $filePath\n";
    }

    public static function getProcessorName(): string
    {
        return 'pdf_v1';
    }
}
// src/Processor/CsvProcessor.php
namespace App\Processor;

use App\Contract\DocumentProcessorInterface;

class CsvProcessor implements DocumentProcessorInterface
{
    public function supports(string $mimeType): bool
    {
        return $mimeType === 'text/csv';
    }

    public function process(string $filePath): void
    {
        echo "Processing CSV: $filePath\n";
    }

    public static function getProcessorName(): string
    {
        return 'csv_v1';
    }
}

Now, the DocumentManager that consumes these. We will use the index options (indexAttribute together with defaultIndexMethod) to create a keyed collection, which is vastly superior to a simple list when you need direct access or debugging clarity.

// src/Service/DocumentManager.php
namespace App\Service;

use App\Contract\DocumentProcessorInterface;
use Symfony\Component\DependencyInjection\Attribute\TaggedIterator;

final readonly class DocumentManager
{
    /**
     * @param iterable<string, DocumentProcessorInterface> $processors
     */
    public function __construct(
        #[TaggedIterator(
            tag: 'app.document_processor', 
            indexAttribute: 'key', // We will learn how to populate this "key" dynamically later
            defaultIndexMethod: 'getProcessorName' // Fallback method on the class
        )]
        private iterable $processors
    ) {}

    public function processDocument(string $filePath, string $mimeType): void
    {
        // Because we used 'defaultIndexMethod', our iterable keys are now 'pdf_v1', 'csv_v1', etc.
        foreach ($this->processors as $key => $processor) {
            if ($processor->supports($mimeType)) {
                echo "Selected processor [$key]...\n";
                $processor->process($filePath);
                return;
            }
        }

        throw new \InvalidArgumentException("No processor found for $mimeType");
    }
}

The defaultIndexMethod option allows the service itself to define its key in the collection; you don't need to define any keys in services.yaml.

Advanced: Custom Attributes for Domain-Specific Configuration

The previous example is clean, but generic. What if we want to attach metadata to our processors, such as a priority or a specific type, without implementing methods for every single piece of configuration?

We can create a Custom PHP Attribute that acts as a wrapper around the service tag.

Create the Attribute

// src/Attribute/AsDocumentProcessor.php
namespace App\Attribute;

use Symfony\Component\DependencyInjection\Attribute\AutoconfigureTag;

#[\Attribute(\Attribute::TARGET_CLASS)]
class AsDocumentProcessor extends AutoconfigureTag
{
    public function __construct(
        string $type,
        int $priority = 0
    ) {
        parent::__construct('app.document_processor', [
            'type' => $type,
            'priority' => $priority // Symfony automatically sorts by this attribute
        ]);
    }
}

By extending AutoconfigureTag, we inherit Symfony’s native ability to apply the tag automatically. We map our domain properties (type, priority) directly into the tag’s attributes array.

Refactor Processors

Now our processors look semantic and declarative.

// src/Processor/JsonProcessor.php
namespace App\Processor;

use App\Attribute\AsDocumentProcessor;
use App\Contract\DocumentProcessorInterface;

#[AsDocumentProcessor(type: 'json', priority: 10)]
class JsonProcessor implements DocumentProcessorInterface
{
    public function supports(string $mimeType): bool
    {
        return $mimeType === 'application/json';
    }

    public function process(string $filePath): void
    {
        echo "Processing JSON (Priority High)\n";
    }

    public static function getProcessorName(): string
    {
        return 'json_fast';
    }
}

If you inject iterable $processors now, the JsonProcessor will appear before others because of the priority: 10.
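
To make the ordering concrete, here is a sketch of the earlier PdfProcessor refactored onto the same attribute with the default priority of 0; when the collection is injected, JsonProcessor (priority 10) is iterated before it:

// src/Processor/PdfProcessor.php (illustrative refactor of the earlier class)
namespace App\Processor;

use App\Attribute\AsDocumentProcessor;
use App\Contract\DocumentProcessorInterface;

#[AsDocumentProcessor(type: 'pdf', priority: 0)]
class PdfProcessor implements DocumentProcessorInterface
{
    public function supports(string $mimeType): bool
    {
        return $mimeType === 'application/pdf';
    }

    public function process(string $filePath): void
    {
        echo "Processing PDF: $filePath\n";
    }

    public static function getProcessorName(): string
    {
        return 'pdf_v1';
    }
}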

Lazy Loading with #[TaggedLocator]

In large applications with dozens of processors, instantiating every single service just to find the one that supports application/pdf is memory-inefficient. This is where Service Locators come in.

A ServiceLocator is a mini-container that holds only the specific services you asked for, and it instantiates them only when you explicitly call get().

// src/Service/LazyDocumentManager.php
namespace App\Service;

use App\Contract\DocumentProcessorInterface;
use Symfony\Component\DependencyInjection\Attribute\TaggedLocator;
use Symfony\Component\DependencyInjection\ServiceLocator;

final readonly class LazyDocumentManager
{
    /**
     * @param ServiceLocator<DocumentProcessorInterface> $locator
     */
    public function __construct(
        #[TaggedLocator(
            tag: 'app.document_processor',
            indexAttribute: 'type' // Matches the 'type' key in our AsDocumentProcessor attribute
        )]
        private ServiceLocator $locator
    ) {}

    public function process(string $type, string $filePath): void
    {
        if (!$this->locator->has($type)) {
            throw new \InvalidArgumentException("No processor registered for type: $type");
        }

        // The service is instantiated ONLY here
        $processor = $this->locator->get($type);
        $processor->process($filePath);
    }
}

The Magic: Because our AsDocumentProcessor attribute passed ['type' => 'json'] to the tag, #[TaggedLocator] can use indexAttribute: 'type' to key the locator.

  • $locator->get('json') returns the JsonProcessor.
  • If we never call process('json', …), the JsonProcessor is never created.

Advanced Validation with Compiler Passes

Sometimes, attributes and standard injection aren’t enough. What if you need to ensure that no two processors claim the same ‘type’? Or if you need to wrap every processor in a generic LoggerDecorator?

This requires a Compiler Pass. This code runs during the container compilation phase (before the cache is frozen), allowing for powerful meta-programming.

// src/DependencyInjection/Compiler/ProcessorValidatorPass.php
namespace App\DependencyInjection\Compiler;

use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;

class ProcessorValidatorPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container): void
    {
        $tag = 'app.document_processor';
        $services = $container->findTaggedServiceIds($tag);

        $seenTypes = [];

        foreach ($services as $id => $tags) {
            // A service might have multiple tags, iterate them
            foreach ($tags as $attributes) {
                if (!isset($attributes['type'])) {
                    continue; // Skip if using the interface Autoconfigure without the custom attribute
                }

                $type = $attributes['type'];

                if (isset($seenTypes[$type])) {
                    throw new \LogicException(sprintf(
                        'Duplicate document processor type "%s" detected in services "%s" and "%s".',
                        $type,
                        $seenTypes[$type],
                        $id
                    ));
                }

                $seenTypes[$type] = $id;
            }
        }
    }
}

Registering the Compiler Pass

// src/Kernel.php
namespace App;

use App\DependencyInjection\Compiler\ProcessorValidatorPass;
use Symfony\Bundle\FrameworkBundle\Kernel\MicroKernelTrait;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\HttpKernel\Kernel as BaseKernel;

class Kernel extends BaseKernel
{
    use MicroKernelTrait;

    protected function build(ContainerBuilder $container): void
    {
        $container->addCompilerPass(new ProcessorValidatorPass());
    }
}

Now, if you copy JsonProcessor and forget to change type: 'json', the container will throw a clear, descriptive error during compilation (or cache warmup), preventing runtime bugs.

The “Secret Sauce”: Dynamic Tag Configuration

There is one extremely advanced edge case: What if you want to use a custom attribute, but you cannot extend AutoconfigureTag (perhaps the attribute comes from a third-party library or you want to keep your Domain layer pure without Symfony dependencies)?

You can use registerAttributeForAutoconfiguration in the Kernel.

Let’s say you have this Pure PHP attribute:

// src/Domain/Attribute/Worker.php
namespace App\Domain\Attribute;

#[\Attribute(\Attribute::TARGET_CLASS)]
class Worker
{
    public function __construct(
        public string $queueName,
        public int $retries = 3
    ) {}
}

This attribute knows nothing about Symfony. To make it useful, we bridge it in Kernel.php:

// src/Kernel.php

// ... inside the build() method ...

$container->registerAttributeForAutoconfiguration(
    \App\Domain\Attribute\Worker::class,
    static function (
        \Symfony\Component\DependencyInjection\ChildDefinition $definition, 
        \App\Domain\Attribute\Worker $attribute, 
        \ReflectionClass $reflector
    ): void {
        // We dynamically add the tag based on the attribute
        $definition->addTag('app.worker', [
            'queue' => $attribute->queueName,
            'retries' => $attribute->retries
        ]);

        // We can even manipulate the service definition itself!
        $definition->addMethodCall('setMaxRetries', [$attribute->retries]);
    }
);

This is the pinnacle of decoupling. Your domain logic (Worker attribute) remains pure, while your infrastructure (Kernel) wires it into the framework.
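
As an illustration (the class and queue name here are hypothetical), a service using this pure attribute stays free of framework imports; the only contract imposed by the bridge above is a public setMaxRetries() method, because of the addMethodCall() it registers:

// src/Domain/Worker/EmailDigestWorker.php (hypothetical example)
namespace App\Domain\Worker;

use App\Domain\Attribute\Worker;

#[Worker(queueName: 'email_digest', retries: 5)]
class EmailDigestWorker
{
    private int $maxRetries = 3;

    // Called by the container because of the addMethodCall('setMaxRetries', ...) above
    public function setMaxRetries(int $maxRetries): void
    {
        $this->maxRetries = $maxRetries;
    }

    public function work(): void
    {
        // Pure domain logic - no Symfony imports required in this layer
    }
}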

Verification

To verify your tags are working correctly, use the Symfony Console.

List all tagged services:

php bin/console debug:container --tag=app.document_processor

The output should list your PdfProcessor, CsvProcessor, and JsonProcessor.

Verify arguments mapping:

php bin/console debug:container App\Service\DocumentManager

Look for the processors argument. It should show a TaggedIterator object.

Test the Compiler Pass: Temporarily add a duplicate type: 'json' to another class and run:

php bin/console cache:clear

You should see the LogicException we defined.

Conclusion

We have traveled far beyond simple event listeners. We have:

  1. Defined contracts using #[AutoconfigureTag].
  2. Built typed, prioritized collections with #[TaggedIterator].
  3. Optimized performance with lazy-loading #[TaggedLocator].
  4. Enforced architecture rules with Compiler Passes.
  5. Bridged Pure PHP Attributes to Symfony Tags.

This approach creates applications that are easy to test, easy to extend and remarkably clean to read.

If you found this deep dive into Symfony internals helpful, let’s connect on LinkedIn [https://www.linkedin.com/in/matthew-mochalkin/]. I share advanced PHP and architecture insights weekly.


Protect Your Crypto: The Wallet Backup Options You Never Considered

2026-01-16 16:59:29

Getting locked out of your digital wallet can feel like watching your set of house keys drop into the ocean as you stand on the shore. You may not expect it, but it's over before you realize it and leaves a sting forever—not to mention the financial losses. In most cases with crypto, you’re the only person who has control of your private keys. No one else can get access to your funds, so no one else can help you. Because of this, it's helpful to know which backup options are available before losing access to your digital wallet.

Depending on the type of crypto wallet you’re using, you may have multiple options available for backing it up. With a bit of reading and organization, you’ll discover that a plan to recover your funds in case of emergency isn’t that difficult.

Let’s see what we can (and should) do to protect our funds.

Seed Phrases: The Baseline Backup

In most wallets, you’re provided with a seed phrase when you install the app for the first time. This is a sequence of either 12 or 24 random words based on standards like BIP39, designed to recover your entire wallet on another device in the event that the primary one becomes lost. The idea is quite simple. If your phone falls into the pool or your laptop won't start, you can use these words to recover your coins elsewhere. No need for anything else.

That’s possible because the coins were never in your device, but in a distributed ledger composed of hundreds or thousands of nodes (computers) worldwide, depending on the network.

(Image: Bitcoin nodes worldwide, from BitNodes)

Storage is the key factor when working with seed phrases. Good options include writing them down and placing the recording somewhere that’s protected from physical damage (i.e., humidity, fire hazards) or inquisitive pets. Some have chosen to engrave their seed phrases on steel plates to protect them against corrosion, and some others have chosen to keep two or more paper copies of their seed phrase stored separately, in safe locations.

Above all, seed phrases must be maintained completely offline. A photo on the cloud or a screenshot buried in a downloads folder has caused many people trouble, for instance. Investigating the recovery process using a "practice" wallet holding a minimal amount of currency can help ensure all elements are working for you. Spending an hour verifying and testing can save a significant amount of time and aggravation later on.

Hardware Wallets & Split Backups

Hardware wallets can provide an additional level of security, as they store the user’s private key(s) in a small device that doesn’t connect to the Internet and that the user is still able to use (unlike a piece of paper, for instance). Brands such as Ledger and Trezor have different designs and offer different forms of recovery, but the concept is comparable to having a small safe in your pocket.

Now, when it comes to backup features, not all hardware wallets offer the same functions. Trezor was the first manufacturer to create Shamir backup (also called SLIP-39). In this case, several recovery shares can be created and must be combined to recover your funds. You can even afford to lose some of the recovery shares and still be able to retrieve the wallet. This mechanism allows you to distribute the backup responsibility across multiple locations or people, which is like creating a "back up for your back up" system.

https://youtu.be/cRh-NCvHkzM?si=4hYDi9T6WuI5Qzzc&embedable=true

However, Shamir isn’t something native to every hardware wallet. Other vendors have their own backup standards and approaches, so it helps to check each model before making a purchase. Each manufacturer has a different way of approaching recovery, and by doing a little research, we can find the alternative that best suits our needs.

Multisig, Social Recovery, and Custodial Options

Some users prefer backups that have multiple users and devices involved, rather than relying on a single source. With a multisignature solution, several keys must be present in order for a transaction to take place. This means that losing one key shouldn’t cause you to lose access to everything you own; instead, it works more like a locked box that requires multiple keys to open. Each person involved in this process keeps their own key to their piece of the lock, and by working together and coordinating their efforts, they can protect themselves from any undue problems.

Meanwhile, social recovery wallets offer a different approach. Instead of guarding a seed phrase alone, you can select trusted individuals who will assist you in restoring access to your account when it’s lost or otherwise becomes inaccessible due to technical issues. Users who prefer to receive support from other people when something goes wrong or if they’re concerned about losing a physical copy of their seed phrase can easily use this type of protection.

It’s available in wallets like Ready (formerly Argent) and Safe (formerly Gnosis Safe). It does demand careful selection of guardians, though, so it helps to choose people who understand their role and keep their devices safe.

Now, for people who prioritize ease of use over full self-custody, custodial services remain an option. These platforms hold keys on behalf of users and manage recovery through their own support teams. The main drawback is trust: you’re giving up full control. While it benefits users in terms of convenience, it also introduces the additional risk that the service could become nonoperational or a victim of fraudulent activity, which would put their users at risk of loss. Crypto exchanges like Binance or Coinbase can act as custodial wallets. Some newcomers begin this way and later graduate to non-custodial setups once they feel comfortable.

Backups in Obyte

Like a truly decentralized and self-custodial crypto wallet, Obyte offers private keys to its users. In this case, they’re twelve random words you must write down and store offline. There’s no other way to access your wallet without them. Additionally, if you want to store part of your funds offline for security reasons, you can create a textcoin (basically, another private key) with them inside, and then delete it from the History in the wallet.

Multisignature features are also available in the Obyte wallet. Two or more signers (devices) can approve or not approve every transaction from a multidevice account.

Now, here’s a trick you must know about backups in Obyte: the main seed phrase (and public textcoins) can only back up non-private tokens. Coins like Blackbytes (GBB), smart contracts, multisignature accounts, and chats can only be protected with a full backup, available from the general settings in the wallet. This will give you an archive that you must store on your own device. Private textcoins can also be an easy way to back up private assets.

https://youtu.be/3Xcb3c9mEtc?si=nGCO62SXYrCn_MTv&embedable=true

Beyond the wallet itself, GBYTE, the main asset of Obyte, is available for trading on centralized crypto exchanges like NonKYC.io and Biconomy. Once the coin leaves the wallet app and enters the exchange, it stops being non-custodial, and it’s entirely in the hands of those companies. Therefore, you should do your due diligence if you want to handle your funds without issues.

In any case, whichever method feels right, a small moment spent creating a backup today can save a long story tomorrow.

Featured Vector Image by pch.vector / Freepik

3 Key Discoveries That Turned Online Data Into a Business Superpower

2026-01-16 16:42:42

Before the internet, major decisions were often made based on intuition and experience. The shift from guesswork to insight wasn’t gradual; it was a revolution powered by counter-intuitive discoveries.

How to Build a DAO from Scratch with Solidity and Foundry, Part 1: Designing the Governance Token

2026-01-16 16:35:05

A DAO (Decentralized Autonomous Organization) is a system that enables collective decision-making through code, without relying on traditional organizational hierarchies such as boards of directors, CEOs, or CTOs. Instead of trust in individuals or institutions, DAOs rely on smart contracts deployed on a blockchain.

At its core, a DAO allows participants to propose, vote, and execute decisions in a transparent and verifiable way. Voting power is typically derived from tokens held by participants, where each token represents a unit of voting weight.

A typical on-chain DAO is composed of three main smart contracts:

  1. Token contract: Defines the governance token and tracks voting power.

  2. Governor contract: Manages proposals and voting logic: who can propose, how votes are counted, quorum requirements, and proposal outcomes.

  3. Timelock contract: Acts as a security layer by enforcing a delay between proposal approval and execution, giving participants time to react to potentially harmful decisions.

(Figure: DAO contracts and the proposal lifecycle)

The lifecycle of a proposal is simple but powerful: a proposal is submitted to the Governor, votes are collected based on token ownership, and once the proposal is approved, it is forwarded to the Timelock for delayed execution. If the proposal fails, it is simply discarded.

In this article series, we will build a DAO from the ground up using the OpenZeppelin governance model. In this part (Part 1), we will focus on writing, deploying, and testing the governance token, which we will call GovernanceToken. This token will later be used to enable on-chain voting and decision-making in the DAO.

The Token Code

Without further ado, here is the code:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";
import {ERC20Permit} from "@openzeppelin/contracts/token/ERC20/extensions/ERC20Permit.sol";
import {ERC20Votes} from "@openzeppelin/contracts/token/ERC20/extensions/ERC20Votes.sol";
import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol";
import {Nonces} from "@openzeppelin/contracts/utils/Nonces.sol";

contract GovernanceToken is ERC20, ERC20Permit, ERC20Votes, Ownable {
    constructor()
        ERC20("GovernanceToken", "MGT")
        ERC20Permit("GovernanceToken")
        Ownable(msg.sender)
    {
        _mint(msg.sender, 1_000_000 * 10 ** decimals());
    }

    // Optional: Add controlled minting
    function mint(address to, uint256 amount) external {
        require(msg.sender == owner(), "Only owner can mint");
        _mint(to, amount);
    }

    // ── Conflict resolution ──

    // Both ERC20 and ERC20Votes define _update
    function _update(address from, address to, uint256 amount)
        internal
        override(ERC20, ERC20Votes)
    {
        super._update(from, to, amount);
    }

    // Both ERC20Permit and Nonces define nonces()
    function nonces(address owner)
        public
        view
        override(ERC20Permit, Nonces)
        returns (uint256)
    {
        return super.nonces(owner);
    }
}

Compared to a traditional ERC20 token, GovernanceToken integrates two additional OpenZeppelin modules: ERC20Permit and ERC20Votes.


  • ERC20Votes adds governance-specific functionality, most notably getPastVotes(account, blockNumber). This function returns an account’s voting power at a specific block, rather than its current balance. In a DAO context, this snapshot mechanism is critical: voting power is fixed at the moment a proposal is created, preventing users from manipulating votes by buying or transferring tokens after the fact.


  • ERC20Permit enables gasless approvals via signatures (EIP-2612), allowing users to delegate or approve voting power without sending an on-chain transaction.

The most important logic resides in the constructor, which initializes all inherited modules and mints one million governance tokens to the deployer. We also define an optional mint function, restricted to the contract owner, to allow controlled token issuance after deployment (useful for testing or future governance decisions).

Finally, two functions, _update and nonces, must be explicitly overridden. This is required because they are defined in multiple parent contracts. The overrides simply delegate execution to super, ensuring that all inherited behaviors are correctly composed and that the compiler’s inheritance conflicts are resolved cleanly.

Building the Token

To build our governance token, we will use Foundry, a fast and modern Ethereum development toolkit. The following steps assume a Linux environment, but the workflow is similar on macOS.

We start by installing Foundry using the official installation script:

curl -L https://foundry.paradigm.xyz | bash

After installation, the script instructs us to update our shell environment and install the Foundry binaries:

source ~/.bashrc   # path may vary depending on your system
foundryup

This installs the full Foundry toolchain: forge (build & test), cast (CLI interactions), anvil (local node), and chisel (REPL).

Next, we initialize a new Foundry project in an empty directory:

mkdir DAO
cd DAO
forge init

This generates a complete project scaffold, including src/, script/, and test/ directories. By default, Foundry creates example Counter contracts and tests. Since we only want the project structure, we can safely remove these example files and replace them with our own contracts.

For now, we add our governance token under src/:

src/
└── GovernanceToken.sol

(Containing the GovernanceToken contract defined in the previous section.)

Because our token relies on OpenZeppelin modules, we must install the OpenZeppelin Contracts library:

forge install OpenZeppelin/openzeppelin-contracts

This command vendors OpenZeppelin into the lib/ directory and makes its contracts available for import within our project.

Finally, we compile the project:

forge build

If everything is set up correctly, the compilation completes successfully and generates an out/ directory. This folder contains the compiled artifacts (ABIs and bytecode) for GovernanceToken as well as all inherited OpenZeppelin dependencies.

At this point, our governance token is fully compiled and ready to be deployed and tested — steps we will cover in the next sections.

Deploying the Token

With the governance token compiled, we can now deploy it to a local blockchain. Foundry makes this process straightforward through deployment scripts.

We start by creating a deployment script DeployGovernanceToken.s.sol under the script/ directory:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
import {Script} from "forge-std/Script.sol";
import {GovernanceToken} from "../src/GovernanceToken.sol";
contract DeployGovernanceToken is Script {
    function run() external {
        vm.startBroadcast();
        new GovernanceToken();
        vm.stopBroadcast();
    }
}

This script defines a run function that Foundry will execute. The vm.startBroadcast() / vm.stopBroadcast() pair tells Foundry to send transactions to the network, rather than simulating them.

Next, we launch a local Ethereum network using Anvil (in a separate terminal):

anvil

Anvil starts a local node on http://127.0.0.1:8545 and prints a list of pre-funded accounts along with their private keys. These accounts are intended for development and testing only.

With Anvil running, we can deploy the contract using forge script:

forge script script/DeployGovernanceToken.s.sol \
  --rpc-url http://127.0.0.1:8545 \
  --broadcast \
  --private-key <ANVIL_PRIVATE_KEY>

The RPC URL and private key are taken directly from Anvil’s output. When the command succeeds, Foundry prints the transaction hash, deployed contract address, gas usage, and the block number in which the contract was created.

To quickly verify that the deployment worked, we can query the deployed contract using cast. For example, calling totalSupply() confirms that the initial mint occurred as expected:

cast call <DEPLOYED_CONTRACT_ADDRESS> \
  "totalSupply()(uint256)" \
  --rpc-url http://127.0.0.1:8545

The returned value corresponds to 1,000,000 tokens with 18 decimals (1000000000000000000000000 [1e24]), matching the amount minted in the constructor.

At this stage, our governance token is live on a local network and ready to be used for testing voting, delegation, and — eventually — DAO governance.
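
One detail worth calling out before writing tests: with ERC20Votes, a balance does not count as voting power until the holder delegates, even to themselves. A quick sanity check on the local node could look like this (the address and key placeholders come from Anvil's output; this is a sketch, not part of the deployment above):

# Self-delegate so the deployer's balance becomes active voting power
cast send <DEPLOYED_CONTRACT_ADDRESS> \
  "delegate(address)" <ANVIL_ACCOUNT_ADDRESS> \
  --rpc-url http://127.0.0.1:8545 \
  --private-key <ANVIL_PRIVATE_KEY>

# Read back the current voting power
cast call <DEPLOYED_CONTRACT_ADDRESS> \
  "getVotes(address)(uint256)" <ANVIL_ACCOUNT_ADDRESS> \
  --rpc-url http://127.0.0.1:8545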

Testing the Token

To validate our governance token’s behavior, we can write unit tests using forge-std, Foundry’s testing framework. Tests live in the test/ directory and are written in Solidity.

Below is a simple test that verifies the mint function works as expected:

// test/GovernanceToken.t.sol
pragma solidity ^0.8.20;
import {Test} from "forge-std/Test.sol";
import {GovernanceToken} from "../src/GovernanceToken.sol";
contract TokenTest is Test {
    GovernanceToken token;
    function setUp() public {
        token = new GovernanceToken();
    }
    function testMint() public {
        uint256 before = token.balanceOf(address(this));
        token.mint(address(this), 100);
        uint256 after_ = token.balanceOf(address(this));
        assertEq(after_ - before, 100);
    }
}

The setUp function is executed before each test and deploys a fresh instance of GovernanceToken, ensuring isolation between test cases. The testMint function then checks that calling mint increases the recipient’s balance by the expected amount.

Running the test suite is as simple as:

forge test

Foundry compiles the contracts, executes the test, and reports the results. A passing test confirms that our token’s minting logic behaves correctly.
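
Because governance depends on delegation, a natural follow-up test is to verify that voting power only appears after delegate() is called; both delegate() and getVotes() are inherited from ERC20Votes. A sketch of such a test, added to the same TokenTest contract:

    function testDelegateActivatesVotingPower() public {
        // Before delegating, the balance exists but voting power is zero
        assertEq(token.getVotes(address(this)), 0);

        token.delegate(address(this));

        // After self-delegation, voting power equals the token balance
        assertEq(token.getVotes(address(this)), token.balanceOf(address(this)));
    }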

Conclusion

In this article, we tackled the first building block of a DAO: the governance token. We began by examining the token contract itself, with particular attention to the OpenZeppelin modules it inherits from and the additional governance-related features they provide.

We then walked through the full development workflow using Foundry — from initializing a project, to deploying the token on a local Anvil network, and finally validating its behavior with unit tests.

This governance token will serve as the foundation for everything that follows. In the next parts of this series, we will build on top of it by introducing delegation, voting mechanics, and the core governance contracts that transform this token into a fully functional on-chain DAO.

I hope you found this article useful. Feel free to like, share, and subscribe for more content in the series.

Miscellaneous: Extra Commands

All commands shown in this article were executed inside a Docker container created with the following command:

docker run -it ubuntu@sha256:72297848456d5d37d1262630108ab308d3e9ec7ed1c3286a32fe09856619a782

Using a pinned image digest ensures full reproducibility, as the environment will always be identical regardless of when or where the container is launched.

To run Anvil in a separate terminal, we simply attached to the same container:

docker exec -it <CONTAINER_NAME> bash
anvil

The <CONTAINER_NAME> value can be found with:

$ docker ps

Foundry also allows you to run deployment scripts without a live network. The following command executes the script in a simulated environment and reports gas usage, without broadcasting any transactions:

forge script script/DeployGovernanceToken.s.sol

This mode is useful for quickly validating deployment logic and estimating gas costs. If you want to simulate or execute transactions against an actual network (local or remote), simply provide an RPC URL using the --rpc-url flag.

Miscellaneous: Warnings

During development, you may encounter warnings related to dependencies rather than your own contracts. In our case, the compiler emitted warnings originating from the lib/forge-std library:

Warning (2424): Natspec memory-safe-assembly special comment for inline assembly is deprecated
and scheduled for removal. Use the memory-safe block annotation instead.
   --> lib/forge-std/src/StdStorage.sol:301:13

These warnings are caused by a version mismatch between the Solidity compiler and the installed version of forge-std. Newer Solidity versions deprecate the @memory-safe-assembly NatSpec comment in favor of the memory-safe block annotation, while older library versions may still use the deprecated syntax.

Since the issue originates in a dependency, the simplest fix is to update forge-std to the latest version:

cd lib/forge-std
git pull origin master
git checkout master
cd -

After updating the library, the warnings disappear and the project compiles cleanly again.

This is a good reminder that compiler warnings are not always caused by your own code. When working with fast-evolving toolchains like Foundry and Solidity, keeping dependencies up to date is often necessary to avoid noisy or misleading warnings.


Laravel 12 Prompts Guide: Prompt Types, Validation, and an Interactive Seeder Generator Example

2026-01-16 16:29:29

Key Takeaways

  • Laravel Prompts provides a beautiful, user-friendly interface for command-line applications with zero dependencies
  • The package offers multiple input types including text, password, select, multiselect, confirm, search, and progress bars
  • Laravel 12 includes Prompts natively, making CLI interactions more intuitive and visually appealing
  • Prompts automatically handles validation, error messages, and keyboard navigation
  • Perfect for creating installation wizards, configuration tools, and interactive artisan commands

Index

  1. Introduction to Laravel Prompts
  2. Understanding Laravel Prompts Components
  3. Statistics
  4. Available Prompt Types
  5. Practical Implementation: Database Seeder Generator
  6. Interesting Facts
  7. Best Practices
  8. FAQs
  9. Conclusion

Introduction to Laravel Prompts

Laravel Prompts is a PHP package designed to add beautiful and user-friendly forms to command-line applications. Introduced in Laravel 10 and fully integrated into Laravel 12, it transforms the way developers build interactive CLI tools. The package eliminates the complexity of terminal interactions while maintaining a consistent, professional appearance across different operating systems.

The beauty of Laravel Prompts lies in its simplicity. Developers no longer need to worry about cursor positioning, input validation styling, or cross-platform compatibility. Everything works seamlessly out of the box, allowing you to focus on building features rather than fighting with terminal quirks.

Understanding Laravel Prompts Components

Laravel Prompts consists of several core components that work together to create interactive experiences. At its foundation, the package uses a renderer that handles the visual presentation of prompts across different terminal emulators. The input handler manages keyboard events, supporting both arrow keys and vim-style navigation.

The validation system integrates seamlessly with Laravel's existing validation rules. You can apply the same validation logic you use in web forms to your CLI prompts. Error messages appear inline, providing immediate feedback without disrupting the user's flow.

Each prompt type is designed with specific use cases in mind. Text inputs handle single-line responses, select dropdowns present choices elegantly, and progress bars provide visual feedback during long-running operations.

Statistics

Package Adoption and Performance Metrics:

  • Laravel Prompts has been downloaded over 15 million times since its release (Source: Packagist.org)
  • The package supports PHP 8.1+ and works across Windows, macOS, and Linux environments
  • Laravel 12 includes Prompts as a first-party package, integrated directly into the framework
  • More than 2,000 GitHub stars on the official repository, demonstrating strong community adoption (Source: GitHub Laravel Prompts)
  • The package has zero runtime dependencies, keeping your application lightweight

Available Prompt Types

Laravel Prompts offers eight distinct prompt types, each optimized for specific interactions:

Text Input handles single-line text entry with placeholder support and real-time validation. Use it for names, URLs, or any short string input.

Textarea provides multi-line input capabilities, perfect for descriptions or longer text content. Users can navigate with arrow keys and submit with Ctrl+D.

Password masks input characters while typing, essential for sensitive information. The package ensures password fields never log or display their contents.

Confirm presents yes/no questions with keyboard shortcuts. Users can press Y/N or use arrow keys to select their choice.

Select creates dropdown menus for choosing from predefined options. It supports keyboard navigation and search functionality for longer lists.

Multiselect allows selecting multiple items from a list using the spacebar. Perfect for feature toggles or category selection.

Search combines text input with dynamic filtering, ideal for selecting from large datasets without overwhelming the user.

Progress Bars visualize long-running tasks, automatically updating as operations complete. They can display percentages, labels, and estimated time remaining.
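
To make a few of these concrete, here is a small sketch of an artisan command combining three of the prompt types above; the command name, labels, and options are invented for the example, while text(), select(), and confirm() are the package's own functions:

// app/Console/Commands/DemoPromptsCommand.php (hypothetical example)
namespace App\Console\Commands;

use Illuminate\Console\Command;

use function Laravel\Prompts\confirm;
use function Laravel\Prompts\select;
use function Laravel\Prompts\text;

class DemoPromptsCommand extends Command
{
    protected $signature = 'demo:prompts';
    protected $description = 'Demonstrates a few Laravel Prompts input types';

    public function handle(): int
    {
        // Single-line text input with a placeholder and required validation
        $name = text(
            label: 'What should we call this project?',
            placeholder: 'my-app',
            required: true,
        );

        // Dropdown selection from predefined options
        $database = select(
            label: 'Which database will you use?',
            options: ['mysql' => 'MySQL', 'pgsql' => 'PostgreSQL', 'sqlite' => 'SQLite'],
            default: 'mysql',
        );

        // Yes/no confirmation with a sensible default
        if (! confirm(label: "Create {$name} using {$database}?", default: true)) {
            $this->warn('Cancelled.');
            return self::FAILURE;
        }

        $this->info("Creating {$name} with {$database}…");

        return self::SUCCESS;
    }
}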

Practical Implementation: Database Seeder Generator

Let's build a real-world example: an interactive database seeder generator that helps developers quickly populate their applications with test data. This demonstrates how Laravel Prompts can transform a complex data generation process into a guided, intuitive experience.

This wizard allows developers to select which models to seed, configure record counts, set up relationships, and save configurations as reusable presets, all through an elegant command-line interface.

Prerequisites

Before implementing this seeder generator, ensure you have:

  1. Migrated all required database tables - Run php artisan migrate for your models (users, posts, comments, categories, etc.)
  2. Created models with proper relationships - Define HasMany, BelongsTo, and BelongsToMany relationships in your models
  3. Set up model factories - Create factories for each model using php artisan make:factory ModelNameFactory
  4. Defined fillable attributes - Ensure your models have the $fillable property set for mass assignment

Once your database structure, models, relationships, and factories are ready, create the command:

php artisan make:command GenerateSeeder

The Complete Seeder Generator

// app/Console/Commands/GenerateSeeder.php
namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Schema;
use Illuminate\Support\Str;

use function Laravel\Prompts\confirm;
use function Laravel\Prompts\error;
use function Laravel\Prompts\info;
use function Laravel\Prompts\multiselect;
use function Laravel\Prompts\select;
use function Laravel\Prompts\spin;
use function Laravel\Prompts\table;
use function Laravel\Prompts\text;
use function Laravel\Prompts\warning;

class GenerateSeeder extends Command
{
    protected $signature = 'seed:generate {--preset=}';

    protected $description = 'Interactive Database Seeder Generator';

    private array $availableModels = [
        'User' => \App\Models\User::class,
        'Post' => \App\Models\Post::class,
        'Comment' => \App\Models\Comment::class,
        'Category' => \App\Models\Category::class,
        'Product' => \App\Models\Product::class,
        'Order' => \App\Models\Order::class,
        'Tag' => \App\Models\Tag::class,
    ];

    private array $config = [];

    public function handle()
    {
        info('🌱 Interactive Database Seeder Generator');

        // Load preset if specified
        if ($this->option('preset')) {
            if ($this->loadPreset($this->option('preset'))) {
                info("✅ Loaded preset: {$this->option('preset')}");
                $this->showPresetSummary();

                if (confirm('Use this preset configuration?', default: true)) {
                    if ($this->confirmExecution()) {
                        $this->executeSeed();
                    }
                    return 0;
                }
            }
        }

        // Step 1: Model Selection
        $selectedModels = $this->selectModels();

        if (empty($selectedModels)) {
            warning('No models selected. Exiting.');
            return 0;
        }

        // Step 2: Configure Counts
        $this->configureCounts($selectedModels);

        // Step 3: Configure Relationships
        $this->configureRelationships($selectedModels);

        // Step 4: Data Quality & Special Options
        $this->configureOptions();

        // Step 5: Handle Existing Data
        $this->handleExistingData();

        // Step 6: Show Summary
        $this->showSummary();

        // Step 7: Confirm and Execute
        if ($this->confirmExecution()) {
            $this->executeSeed();
            $this->offerToSave();
        } else {
            warning('⚠️  Seeding cancelled.');
        }

        return 0;
    }

    private function selectModels(): array
    {
        $selectedKeys = multiselect(
            label: 'Which models do you want to seed?',
            options: $this->availableModels,
            hint: 'Use space to select, enter to confirm'
        );

        // Convert keys to actual class paths
        $models = array_map(fn($key) => $this->availableModels[$key], $selectedKeys);

        // Check for relationship dependencies
        return $this->checkDependencies($models);
    }

    private function checkDependencies(array $models): array
    {
        $dependencies = [
            'Comment' => ['Post'],
            'Post' => ['User'],
            'Order' => ['User', 'Product'],
        ];

        foreach ($models as $model) {
            $modelName = class_basename($model);

            if (isset($dependencies[$modelName])) {
                foreach ($dependencies[$modelName] as $required) {
                    $requiredClass = $this->availableModels[$required] ?? null;

                    if ($requiredClass && !in_array($requiredClass, $models)) {
                        warning("⚠️  {$modelName} requires {$required}.");

                        if (confirm("Would you like to auto-include {$required}?", default: true)) {
                            $models[] = $requiredClass;
                            info("✅ Added {$required} to seeding list.");
                        }
                    }
                }
            }
        }

        return array_unique($models);
    }

    private function configureCounts(array $models): void
    {
        info('📊 Configure Record Counts');

        foreach ($models as $model) {
            $modelName = class_basename($model);

            $count = text(
                label: "How many {$modelName} records?",
                default: $this->getDefaultCount($modelName),
                required: true,
                validate: fn($value) => is_numeric($value) && $value > 0
                    ? null
                    : 'Please enter a valid number greater than 0',
                hint: $this->getCountHint($modelName)
            );

            $this->config['models'][$modelName] = [
                'class' => $model,
                'count' => (int) $count,
            ];
        }
    }

    private function configureRelationships(array $models): void
    {
        info('🔗 Configure Relationships');

        $modelNames = array_map(fn($m) => class_basename($m), $models);

        if (in_array('Post', $modelNames) && in_array('Category', $modelNames)) {
            $categoryAssignment = select(
                label: 'Assign Posts to Categories?',
                options: [
                    'multiple' => 'Yes, assign each post to 1-3 categories (random)',
                    'single' => 'Yes, assign each post to exactly 1 category',
                    'none' => 'No, leave categories unassigned'
                ],
                default: 'multiple'
            );

            $this->config['relationships']['post_category'] = $categoryAssignment;
        }

        if (in_array('Comment', $modelNames) && in_array('User', $modelNames)) {
            $commentAuthors = select(
                label: 'Who should author comments?',
                options: [
                    'all' => 'Any user (random)',
                    'subset' => 'Only 30% of users are active commenters',
                    'post_author' => 'Include self-comments from post authors'
                ],
                default: 'all'
            );

            $this->config['relationships']['comment_user'] = $commentAuthors;
        }
    }

    private function configureOptions(): void
    {
        info('⚙️  Additional Options');

        $realism = select(
            label: 'Data realism level',
            options: [
                'high' => 'High (slower, more realistic data)',
                'medium' => 'Medium (balanced)',
                'low' => 'Low (fast, simple data)'
            ],
            default: 'medium',
            hint: 'Higher realism uses more varied faker data'
        );

        $this->config['options']['realism'] = $realism;

        $specialCases = multiselect(
            label: 'Include special test cases?',
            options: [
                'admin' => 'Create 1 admin user',
                'empty_users' => 'Create 5 users with no posts',
                'featured' => 'Create 3 featured posts',
                'suspended' => 'Create 2 suspended users',
            ],
            hint: 'Optional - adds specific edge cases for testing'
        );

        $this->config['options']['special_cases'] = $specialCases;

        if (isset($this->config['models']['User'])) {
            info('👥 User States Distribution');

            $activePercent = text(
                label: 'Percentage of active users',
                default: '80',
                validate: fn($v) => is_numeric($v) && $v >= 0 && $v <= 100
                    ? null
                    : 'Please enter a percentage between 0 and 100'
            );

            $this->config['options']['user_states'] = [
                'active' => (int) $activePercent,
                'inactive' => 100 - (int) $activePercent
            ];
        }
    }

    private function handleExistingData(): void
    {
        $hasData = false;

        foreach ($this->config['models'] as $modelName => $data) {
            $tableName = Str::snake(Str::plural($modelName));
            if (Schema::hasTable($tableName)) {
                if (DB::table($tableName)->count() > 0) {
                    $hasData = true;
                    break;
                }
            }
        }

        if ($hasData) {
            warning('⚠️  Database already contains data.');

            $action = select(
                label: 'What should we do?',
                options: [
                    'append' => 'Add new records (append)',
                    'truncate' => 'Truncate tables first (clean start)',
                    'skip' => 'Cancel seeding'
                ],
                default: 'append'
            );

            $this->config['options']['existing_data'] = $action;

            if ($action === 'skip') {
                warning('Seeding cancelled.');
                exit(0);
            }
        }
    }

    private function showSummary(): void
    {
        info('');
        info('═══════════════════════════════════════════════════');
        info('             📊 Seeding Summary');
        info('═══════════════════════════════════════════════════');

        $tableData = [];
        $totalRecords = 0;

        foreach ($this->config['models'] as $modelName => $data) {
            $count = $data['count'];
            $totalRecords += $count;

            $tableData[] = [
                'Model' => $modelName,
                'Records' => number_format($count),
                'Table' => Str::snake(Str::plural($modelName))
            ];
        }

        table(headers: ['Model', 'Records', 'Table'], rows: $tableData);

        info('');
        info("Total Records: " . number_format($totalRecords));
        info("Realism Level: " . ucfirst($this->config['options']['realism'] ?? 'medium'));

        if (!empty($this->config['options']['special_cases'])) {
            info("Special Cases: " . count($this->config['options']['special_cases']) . " enabled");
        }

        $estimatedTime = max(1, (int) ceil($totalRecords / 100));
        info("Estimated Time: ~{$estimatedTime} seconds");

        info('═══════════════════════════════════════════════════');
        info('');
    }

    private function showPresetSummary(): void
    {
        info('');
        info('📋 Preset Configuration:');

        if (isset($this->config['models'])) {
            $tableData = [];
            foreach ($this->config['models'] as $modelName => $data) {
                $tableData[] = [
                    'Model' => $modelName,
                    'Records' => number_format($data['count'])
                ];
            }
            table(headers: ['Model', 'Records'], rows: $tableData);
        }
        info('');
    }

    private function confirmExecution(): bool
    {
        return confirm(
            label: 'Proceed with seeding?',
            default: true,
            yes: 'Yes, start seeding',
            no: 'Cancel'
        );
    }

    private function executeSeed(): void
    {
        info('🚀 Starting database seeding…');
        info('');

        if (($this->config['options']['existing_data'] ?? '') === 'truncate') {
            spin(
                callback: function () {
                    foreach ($this->config['models'] as $modelName => $data) {
                        $tableName = Str::snake(Str::plural($modelName));
                        if (Schema::hasTable($tableName)) {
                            DB::table($tableName)->truncate();
                        }
                    }
                },
                message: 'Truncating tables…'
            );
            info('✅ Tables truncated');
        }

        foreach ($this->config['models'] as $modelName => $data) {
            $count = $data['count'];
            $class = $data['class'];

            if (!class_exists($class)) {
                warning("⚠️  Model {$class} not found. Skipping.");
                continue;
            }

            $this->seedModel($modelName, $class, $count);
        }

        info('');
        info('✅ Database seeded successfully!');
        info('');
    }

    private function seedModel(string $modelName, string $class, int $count): void
    {
        $startTime = microtime(true);

        try {
            spin(
                callback: fn() => $class::factory($count)->create(),
                message: "Seeding {$modelName}…"
            );

            $duration = round(microtime(true) - $startTime, 2);
            info("✅ Created {$count} {$modelName} records ({$duration}s)");

        } catch (\Exception $e) {
            error("Failed to seed {$modelName}: {$e->getMessage()}");

            if (!confirm("Continue seeding other models?", default: true)) {
                throw $e;
            }
        }
    }

    private function offerToSave(): void
    {
        info('');

        if (confirm('Save this configuration as a preset?', default: false)) {
            $presetName = text(
                label: 'Preset name',
                placeholder: 'e.g., blog_testing, demo, performance',
                required: true,
                validate: fn($v) => preg_match('/^[a-z0-9_]+$/', $v)
                    ? null
                    : 'Use lowercase letters, numbers, and underscores only'
            );

            $this->savePreset($presetName);
            info("✅ Configuration saved as preset: {$presetName}");
            info("💡 Run again with: php artisan seed:generate --preset={$presetName}");
        }
    }

    private function savePreset(string $name): void
    {
        $presetsPath = storage_path('app/seeder-presets');
        if (!is_dir($presetsPath)) {
            mkdir($presetsPath, 0755, true);
        }
        file_put_contents(
            "{$presetsPath}/{$name}.json",
            json_encode($this->config, JSON_PRETTY_PRINT)
        );
    }

    private function loadPreset(string $name): bool
    {
        $filePath = storage_path("app/seeder-presets/{$name}.json");
        if (!file_exists($filePath)) {
            return false;
        }
        $this->config = json_decode(file_get_contents($filePath), true);
        return true;
    }

    private function getDefaultCount(string $modelName): string
    {
        return match ($modelName) {
            'User' => '50',
            'Post' => '200',
            'Comment' => '500',
            'Category' => '10',
            'Product' => '100',
            'Order' => '300',
            'Tag' => '20',
            default => '50'
        };
    }

    private function getCountHint(string $modelName): string
    {
        return match ($modelName) {
            'User' => 'Recommended: 10-100 for testing',
            'Post' => 'Recommended: 50-500 depending on use case',
            'Comment' => 'Typically 2-5x the number of posts',
            'Category' => 'Usually 5-20 categories',
            default => 'Enter desired count'
        };
    }
}

This wizard demonstrates several powerful features:

  • Model selection with dependency checking - Automatically includes required models (e.g., Comments require Posts)
  • Smart validation with inline error messages - Ensures valid numeric inputs and proper ranges
  • Conditional prompts for relationships - Only asks relevant questions based on selected models
  • Configuration preview with tables - Shows a clean summary before execution
  • Preset system - Save configurations for reuse across different environments
  • Progress feedback with spinners - Visual indication during long-running seed operations
  • Error recovery - Gracefully handles failures and allows continuing with other models.

Usage Examples:

# Interactive mode - walks through all options
php artisan seed:generate

# Quick start with preset
php artisan seed:generate --preset=blog_testing

# Common presets to create:
# - blog_testing: 50 users, 200 posts, 400 comments
# - demo: Beautiful data for client presentations
# - performance: 10,000+ records for load testing
# - minimal: Just enough data to start development

This approach transforms database seeding from a manual, error-prone process into a guided experience that saves time and reduces mistakes. Developers can create consistent test environments across their team with saved presets, making onboarding and testing significantly easier.


Interesting Facts

Cross-Platform Compatibility Magic: Laravel Prompts automatically detects the terminal environment and adjusts its rendering strategy. On Windows, it uses different control sequences than on Unix-based systems, ensuring consistent appearance everywhere.

Zero Dependencies Philosophy: Unlike most CLI packages that rely on external libraries, Laravel Prompts is entirely self-contained. This design decision keeps installations lightweight and reduces potential security vulnerabilities.

Accessibility Features: The package includes screen reader support and works with various terminal accessibility tools. Keyboard navigation follows standard conventions, making it intuitive for users familiar with terminal applications.

Vim Keybinding Support: Power users can navigate prompts using h, j, k, l keys in addition to arrow keys. This thoughtful addition shows Laravel's attention to developer experience.

Fallback Mode: When running in environments without TTY support (like CI/CD pipelines), Prompts automatically falls back to simple input/output, ensuring your commands work everywhere.

Best Practices

Always provide clear, concise labels that explain what information you're requesting. Avoid technical jargon unless your audience expects it. Good labels reduce confusion and speed up the interaction process.

Use validation early and provide helpful error messages. Instead of "Invalid input," tell users exactly what went wrong: "Port must be a number between 1 and 65535." This guidance prevents frustration and reduces support requests.

Implement sensible defaults for every prompt when possible. Most users want the standard configuration, so let them press Enter to accept defaults. This respects their time while still allowing customization.

Group related prompts together and use info/warning messages to provide context. Breaking complex configurations into logical sections makes the process feel manageable rather than overwhelming.

Test your prompts in different terminal emulators. While Laravel Prompts handles most compatibility issues, verifying the experience across Windows Command Prompt, PowerShell, and various Unix shells ensures quality.

"Laravel Prompts transforms CLI applications from intimidating black boxes into guided, user-friendly experiences. It's the difference between asking users to read a manual and walking them through setup step by step." - Taylor Otwell, Creator of Laravel

FAQs

Q: Can I use Laravel Prompts outside of Laravel applications? A: Yes! Laravel Prompts is framework-agnostic and works in any PHP project. Install it via Composer with composer require laravel/prompts and start using the functions immediately.

Q: How do I handle prompts in automated testing? A: Laravel Prompts includes testing helpers. Use the Prompt::fake() method in your tests to simulate user input without requiring actual terminal interaction.

Q: Do prompts work in Docker containers? A: Yes, but ensure your container has TTY enabled. Use docker run -it or set tty: true in docker-compose.yml for interactive prompts to work properly.

Q: Can I customize the appearance of prompts? A: While the default styling is consistent and professional, you can create custom prompt classes extending the base components if you need specific visual modifications.

Q: What happens if a user cancels a prompt with Ctrl+C? A: Laravel Prompts respects cancellation and throws a UserCancelledException. You can catch this exception to handle cleanup or display a cancellation message.

Q: Are prompts compatible with Windows Command Prompt? A: Absolutely. Laravel Prompts includes specific rendering logic for Windows environments, ensuring prompts look great in Command Prompt, PowerShell, and Windows Terminal.

Q: Can I use prompts for file selection? A: While there's no built-in file browser prompt, you can combine search prompts with filesystem scanning to create effective file selection interfaces.

Q: How do I add help text or hints to prompts? A: Most prompt functions accept a hint parameter where you can provide additional context. This text appears below the prompt in a muted color.

"The real power of Laravel Prompts isn't in replacing web forms-it's in making CLI tools accessible to developers who previously found terminal applications intimidating." - Freek Van der Herten, Laravel Developer

Conclusion

Laravel Prompts represents a significant leap forward in command-line interface design. By providing beautiful, intuitive interactions with zero configuration, it removes the technical barriers that once made CLI development challenging. The package exemplifies Laravel's philosophy of developer happiness, extending it from web applications into the terminal.

The database seeder generator we built demonstrates how complex setup processes can become guided experiences. Rather than requiring users to manually edit configuration files or remember obscure settings, you can walk them through each step with validation and helpful hints. This approach reduces errors, improves user satisfaction, and makes your applications more professional.

As Laravel 12 continues to evolve, Prompts will remain a cornerstone of CLI development within the ecosystem. Whether you're building installation wizards, deployment tools, or interactive maintenance commands, Laravel Prompts provides the foundation for creating terminal applications that users actually enjoy using.


Pantheon Shows How Immortality, Infinite Compute, and Power Still End Civilizations

2026-01-16 16:05:04

Modern sci-fi isn’t predicting the future—it’s exposing the structural failures already baked into governance, AI, and sovereignty systems. From VC-owned states to opaque black boxes and unforkable institutions, the real threat isn’t technology, but who controls it and whether people retain the right to exit, audit, and rebuild.