
RSS preview of the HackerNoon blog

How Solution Architects Can Use Generative AI Without Losing Architectural Judgment

2026-03-28 03:09:52

The role of a Solution Architect has always been to balance vision and reality. Does that process change with the use of generative AI? It does, but not by replacing the architect: generative AI becomes a powerful co-architect at the beginning of the design process.

For decades, Solution Architects have relied on experience, documentation, whiteboards, and lengthy stakeholder discussions to design systems. Architecture has always been collaborative, but historically it has also been very manual. The typical process starts with gathering requirements, whiteboarding ideas, and understanding existing processes. Only afterwards do we use tools like Draw.io, Visio, Lucidchart, or sometimes Mermaid scripts to turn those ideas into diagrams. This process is laborious and time-consuming.

Designing architecture is not just about conceptualizing the system’s behavior; it is also about creating each artifact manually, whether it is a diagram, a presentation, or documentation. In fact, much of the Solution Architect’s time is spent converting their concepts into visual or technical artifacts.

Today, generative AI is starting to alter this process, not by replacing Solution Architects but by becoming something that helps them: a Co-Architect.


Before AI: Architecture Process Was Manual and Meeting-Heavy

The early stages of the design process, prior to the advent of generative AI tools, followed a pretty standard process.

A new system initiative would normally start with meetings with the stakeholders involved. The requirements would be gathered from the product teams, operations, security teams, and engineering teams.

Only after these meetings would the architect start drawing the diagrams. In many cases, the process would be as follows:

Stakeholder meetings → requirement gathering → system impact analysis → manual diagram creation → architecture reviews → presentation preparation for Level 1 followed by Level 2 & 3.

Even the drawing of a simple system interaction diagram would involve significant effort. Writing the Mermaid syntax, making the flows correct, making the visual layout correct, and making sure the diagram communicated the design effectively would involve multiple iterations.

In my own experience with large-scale enterprise systems, the early stages of the design process would involve many hours of meetings before the first diagram was even drawn.

It is the exact process that is now being augmented with the help of AI tools.


Enter AI: The Co-Architect Era

Modern generative AI tools like Gemini, Copilot, ChatGPT, and NotebookLM have revolutionized how the design process can start for architects.

The fundamental shift is not full automation of the process; it is a faster starting point and a shorter time to market.

Architects do not start with a blank page anymore; they start with a partially formed architecture draft created by AI tools.

Architects do not start with version 0 of the design; they can start with version 0.7 of the design.

The evaluation and refinement of the design are still performed by the architect, and the initial phase is sped up significantly.

1. Instant Architecture Diagrams from Requirements

One of the most useful applications of AI in the field of architecture is the quick generation of diagrams.

Architects can feed the system requirements to the AI and ask it to generate a sequence diagram or interaction diagram.

For example, they can type the following in the prompt window:

“Generate a Mermaid sequence diagram for the telecom prepaid recharge system using the API Gateway, authentication service, fraud detection service, and billing microservice.”

Within seconds, the AI generates the code in the required format, describing the interactions between the systems.

The generated diagram is rarely the final design, however. It is merely a starting point that the architect can modify and refine as desired.
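As an illustration, a prompt like the one above might yield Mermaid code along these lines. This is a hand-written sketch using the service names from the example prompt; actual model output will vary in structure and detail.

```mermaid
sequenceDiagram
    participant User
    participant GW as API Gateway
    participant Auth as Authentication Service
    participant Fraud as Fraud Detection Service
    participant Bill as Billing Microservice

    User->>GW: POST /recharge
    GW->>Auth: Validate token
    Auth-->>GW: Token OK
    GW->>Fraud: Screen transaction
    Fraud-->>GW: Risk score acceptable
    GW->>Bill: Apply recharge
    Bill-->>GW: Recharge confirmation
    GW-->>User: 200 OK (recharge confirmed)
```

The architect then edits the flows, adds missing failure paths, and corrects any interactions the model got wrong.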

Example 1: Asking ChatGPT to design a block diagram

ChatGPT request and response

Example 2: Asking ChatGPT to design a sequence diagram

ChatGPT request and response

2. Faster Brainstorming of Architecture Patterns

Brainstorming sessions in early stages of architecture design were traditionally carried out through open discussions and whiteboard exploration.

Though these discussions are still relevant, AI can now contribute design suggestions at the very start of the discussion.

Architects can pose questions such as:

“Given our requirement for high availability, PCI compliance, and 10 million transactions per day, can you suggest some architectures?”

Some possible answers that the AI can provide include:

  • Event-driven microservices
  • CQRS architecture
  • Active-active multi-region deployment
  • Circuit breaker fault tolerance
  • API gateway throttling strategies

It is essential to note that these are not the final answers to the problem; they are simply starting hypotheses that can be compared with the constraints of the environment.

Since the advent of AI tools, traditional brainstorming sessions have shifted toward choosing the right approach from among multiple AI-generated design suggestions.

3. Faster Creation of Architecture Presentations

Architecture communication is as important as architecture design.

Architects have historically invested considerable time in preparing a presentation to articulate the system design to stakeholders and leadership teams.

Preparation of architecture decks often involved writing slides, copying diagrams, and preparing bullet points comparing approaches.

However, with the introduction of generative AI tools like NotebookLM and Gemini, it is now possible to accelerate this process.

For example, an architect can ask the following prompt:

“Create a 10-slide executive presentation to articulate this architecture to non-technical stakeholders.”

The AI can help with:

  • Framing business impacts
  • Risk considerations
  • Migration roadmap
  • Architectural decision points
  • Executive summaries

You can also provide an existing presentation template for the AI to follow.

Though the results are far from perfect, it is a significant step towards reducing time and effort in architecture communication and allowing architects more time to focus on explaining the design trade-offs.

4. Faster Exploration of Design Trade-Offs

In my experience as an architect, almost every architecture decision ends up being a trade-off.

Architects often need to compare:

  • Monolith vs. microservices
  • REST vs. event-driven systems
  • Managed cloud services vs. self-managed infrastructure
  • Messaging platforms
  • Synchronous vs. asynchronous communication

Typically, this comparison required researching documentation and case studies.

With AI tools, this process can be significantly sped up.

For example, an architect may want to compare:

“Kafka vs Amazon SQS for high throughput telecom transaction processing.”

In just a few seconds, the AI can produce a comparison that includes factors like latency, scalability, operational complexities, and cost considerations.

5. Visual Generation for Architecture Storytelling

Architecture is not only technical but also communicative.

Senior management tends to understand visual representations more easily than technical representations.

With the arrival of AI image generation tools, it is now possible for architects to generate conceptual images that explain architecture in a more interesting way.

These images can be used in architecture storytelling and can be used to show the interaction of systems, modernization strategies, and migration paths.

A Real Example: When AI Architecture Suggestions Fail

In one telecom project, the recharge system handled millions of transactions every day, and the billing platform required a synchronous confirmation within a few seconds.

The preliminary architecture created by AI was a beautiful event-driven architecture where all the recharge operations would be performed asynchronously through messaging services. From a modern architecture perspective, this was a perfectly designed architecture.

The telecom billing platform we were integrating with was a legacy synchronous platform where immediate confirmation was required for each recharge transaction. The architecture designed by AI was technically sound but operationally incorrect for a legacy platform.

A powerful lesson learned:

AI can create architectures, but architectures must be validated by architects.

The Risks of Blindly Trusting AI Architecture

AI systems do not inherently understand an organization's context or its current architecture: legacy dependencies, regulatory requirements, enterprise architecture standards, operational maturity, or budget constraints.

An AI system will readily propose modern technologies such as Kubernetes clusters, event sourcing, and globally distributed architectures.

These are technically very good architectures.

However, they are not practical if the organization's operational maturity is not high enough.

Architecture must be contextual to the current system.

Architects Must Still Evaluate ROI

Another key responsibility that architects have is the evaluation of the return on investment.

Architecture decisions have the following impacts:

  • infrastructure expenses
  • operational complexity
  • development schedules
  • staffing needs
  • trade-offs between the available tools

Even if the AI is able to design complex solutions, it is not necessarily true that the solutions will be valuable to the business.

Some of the key questions that architects have to ask include:

  • Does the architecture improve reliability?
  • Does the architecture reduce operational expenses?
  • Is the architecture aligned with the business objectives?
  • Is the effort justified?
  • Will the time to market fit the business timelines?

These questions cannot be answered by the AI unless it is aware of the organizational objectives.

The Right Way to Use AI as a Co-Architect

The best way to design architectures with the help of AI is as follows:

The steps involved are quite simple: architects provide requirements to the AI, generate an initial architecture draft, explore alternative patterns, analyze those options, and finally align the design with enterprise architecture standards.

AI is used to speed up the generation of ideas. Architects must validate the decisions.

The Real Impact: Cognitive Offloading

The biggest impact of AI in the world of architecture is not diagram automation.

It is cognitive offloading.

Architects can now spend less time:

  • drafting diagrams
  • formatting documentation
  • researching basic design patterns

And can now spend more time on:

  • evaluating system trade-offs
  • anticipating failure scenarios
  • aligning architecture with business strategy
  • mentoring engineering teams

AI handles the mechanical layer. Architects handle the strategic layer.

Lessons from Using AI in Architecture

Having experimented with AI tools in architecture design, I have learned a few lessons. First and foremost, AI is useful for creating initial drafts of an architecture, but it cannot navigate legacy constraints and organizational context. AI is best used for exploring designs, not as an authority for creating them.

The Future of Architecture Is Augmented

Solution Architects are not becoming obsolete; their role is adapting to the development of artificial intelligence systems. Today's Solution Architect is no longer just a manual diagram creator, an artifact builder, a requirement translator, or an architectural validator; he or she is becoming an AI-assisted system thinker. Instead of fighting AI, the most successful Solution Architects will use it to their advantage, exploring and generating architectural ideas with it. One thing is certain: AI can suggest architecture patterns and design choices, but deciding what should actually be built still requires experience, context, and an understanding of the designs already in place.

So, from my own point of view, the actual benefit of using AI is not in automatically designing systems but rather speeding up the initial phase of system architecture. While it is true that AI can offer architectural patterns and design alternatives, it is still necessary to have experience and context to understand what should be built and what already exists.


Endless Reading: The Crisis of Linear Knowledge

2026-03-28 03:00:45


When Reading Meant Trusting Sequence

If hypertext reorganised the structure of texts, it also transformed the act of reading itself.

There was a time when reading implied a form of temporal trust. One opened a text with the assumption that its internal order mattered, that meaning would unfold through progression, and that understanding depended, at least provisionally, on accepting the discipline of sequence. Even difficult books demanded patience rather than navigation. Their complexity was expected to resolve itself, if not fully, then sufficiently, by moving forward.

This confidence belonged not merely to literature but to a broader epistemic culture. The printed page suggested that knowledge itself possessed direction: premises preceding conclusions, arguments unfolding in measured succession, references subordinated to a central line of development. To read meant entering an order already shaped by another intelligence and submitting, temporarily, to its rhythm.

The crisis of that model did not begin with digital media. It began much earlier, when the accumulation of knowledge gradually exceeded the capacity of sequential containment.

Modern scholarship had already long confronted a structural contradiction: every attempt at synthesis generated new margins, new references, new archives, new exceptions that resisted reintegration. The more culture documented itself, the less any single textual line could plausibly contain what it invoked. Footnotes multiplied. Bibliographies expanded. Secondary literature acquired a density sometimes rivalling the primary text itself. The centre increasingly survived only through its peripheries.

Hypertext did not create this condition. It exposed it.

The Visibility of Alternative Paths

What appeared at first as a technical innovation - the possibility of linking one textual fragment to another - soon revealed something more fundamental: that linear reading had always depended on suppressing alternative paths in order to preserve interpretive coherence. Every text already contains more potential routes than its visible sequence admits. Hypertext merely externalised that latent multiplicity.

A link interrupts not because it distracts, but because it materialises a possibility already present in reading itself: that one sentence may open onto another context, another archive, another authority, another uncertainty.

For this reason, hypertext should not be understood as the collapse of order, but as the visible appearance of competing orders. The modern reader learned gradually that reading no longer meant moving through a singular line, but stabilising oneself temporarily within a field of branching relations.

This transformation became socially ordinary long before it was philosophically absorbed. The early web accustomed readers to discontinuity without requiring them to name it. A text ceased to be a destination and became instead a temporary node within larger movement. One article led elsewhere; one citation opened another context; one unfinished argument generated further search.

The decisive change was subtle: closure ceased to be the natural expectation of reading. A text could still end, but understanding increasingly did not.

The Weight of Unread Context

This condition found one of its most influential large-scale expressions in Wikipedia. Unlike traditional encyclopaedias, Wikipedia does not simply present information; it invites perpetual lateral movement. Every article is internally unfinished because every concept appears already linked to another that modifies it, expands it, or relativises it.

A reader entering one page rarely remains there. Historical events open toward biographies, biographies toward institutions, institutions toward doctrines, doctrines toward controversies, and controversies toward revisions that remain permanently visible.

The encyclopaedia here no longer behaves as a closed authority. It becomes a navigable topology of provisional knowledge. Its authority derives precisely from this openness: visible revisions, distributed authorship, contestable references, transparent instability.

Yet this same structure introduces a new cognitive burden. If every statement leads elsewhere, where does one stop? If every concept opens additional context, what counts as sufficient understanding? The question appears banal, yet it marks a deep shift in epistemic habit.

Linear reading once allowed ignorance to remain partially hidden because sequence protected temporary incompleteness. One could proceed without mastering every surrounding context. Hypertext weakens that shelter. It makes visible how much remains outside the immediate line of attention.

This visibility generates a subtle but persistent tension: one reads while knowing that every paragraph contains unrealised departures. The result is not merely distraction. It is a changed phenomenology of reading itself.

Attention becomes layered rather than singular. One sentence is read while another possible route remains mentally active. The visible text shares cognitive space with deferred links, remembered tabs, unresolved references, and anticipated returns. The reader rarely inhabits a single textual present.

This condition has often been described simplistically as fragmentation, but fragmentation is only one aspect of a more complex transformation. What emerges is not broken reading but distributed reading: attention stretched across multiple unfinished trajectories.

Informational Trauma and Distributed Attention

The contemporary browser window offers perhaps the most ordinary image of this condition. Multiple tabs remain open not because one has abandoned reading, but because reading itself has become structurally suspended. Each tab marks an incomplete cognitive obligation: something to verify, compare, revisit, preserve, or postpone.

In earlier textual cultures, interruption often signalled failure of concentration. Today interruption frequently functions as part of concentration itself. This does not mean the transformation is harmless.

What hypertext normalised intellectually, digital platforms later intensified psychologically. The multiplication of available paths produces not only interpretive freedom but also a persistent low-level pressure: the sense that every chosen line excludes potentially relevant others.

The reader is no longer merely following thought but continually managing omission.

This is one reason contemporary informational fatigue cannot be reduced to quantity alone. The problem is not simply that there is too much to read. It is that every act of reading now occurs under awareness of adjacent unreadness. One sees more than can be integrated.

That condition was already implicit in early theories of hypertext, though often treated optimistically at the time. Multiplicity appeared as liberation from textual hierarchy, an emancipation of reading from imposed sequence. And in many respects it was exactly that. Hypertext allowed texts to behave less like monuments and more like environments.

But environments also demand orientation. Without orientation, multiplicity ceases to feel liberating and begins to resemble cognitive weather: continuous exposure without stable horizon.

This is why the language of informational trauma becomes increasingly relevant here. Trauma, in one of its structural senses, is not simply excess but the inability to organise excess within available symbolic forms. Hypertext did not create informational trauma, but it provided one of the first cultural forms through which that disproportion became legible.

The early enthusiasm surrounding hypertext literature already carried this ambiguity.

Works such as afternoon, a story did not merely celebrate narrative plurality; they also exposed how unstable narrative memory becomes when sequence loses final authority. Reading the same fragment under altered conditions changes not only interpretation but recollection. One cannot always remember whether a detail belongs to the text itself or to the path through which one reached it.

Meaning becomes relational in a stronger sense: it depends not solely on textual content but on route history.

This introduces a subtle epistemological consequence. Under hypertextual conditions, knowledge increasingly resembles position rather than possession.

What one knows depends partly on where one entered, what one omitted, what one followed, what one deferred.

The fantasy of complete reading weakens accordingly.

Even scholarly practice has adapted to this without fully acknowledging it. Research now often begins not from stable corpora but from provisional movement: search, selection, interruption, return, comparison, archival branching. The researcher behaves less like a reader of finished sequences and more like a navigator inside unstable textual density.

This change also helps explain why contemporary debates over attention often miss the deeper issue. The difficulty is not simply shorter concentration spans. It is that modern knowledge environments increasingly require simultaneous management of partial contexts.

One does not merely lose focus; one acquires too many provisional focal points. The consequence is an altered relation to certainty itself.

Knowledge Without Final Closure

Linear texts once encouraged the impression that conclusions emerge through cumulative progression. Hypertextual reading weakens that confidence because every conclusion appears surrounded by latent alternatives.

This does not necessarily produce relativism, as is sometimes feared. More often it produces provisionality: a form of understanding aware of its incomplete pathways.

That awareness may in fact be intellectually healthier than older illusions of closure. But it is also more exhausting.

One reads knowing that understanding remains revisable not simply because new facts may appear, but because unseen paths remain structurally available.

In this sense, hypertext belongs not only to media history but to epistemic history. It marks a moment when culture ceased to assume that knowledge naturally presents itself in singular lines. What replaced that assumption was not chaos, but a more demanding condition: meaning as temporary stabilisation within excess.

The older linear order has not disappeared. Books remain among the few forms that still permit deliberate continuity, and for precisely that reason they now offer something increasingly rare: protected sequence.

Yet even books are no longer read entirely outside hypertextual consciousness. A reference invites search. A concept triggers verification. A page opens outward mentally before it ends materially.

The link no longer needs to be visible to operate. Hypertext survives now less as interface than as cognitive habit. And perhaps this is its most lasting consequence: it taught reading to continue even when no definitive path remains available.


:::info Hypertextual Sketches is a micro-series of essays on hypertext, the post-modern condition of culture, semiotics, and non-linear ways of describing how meaning circulates when continuity breaks down. Original research essays were written between 1997 and 2000, in Prague, Krakow, and Leipzig, when the internet was still experimental, but its logic was already reshaping how we read, write, and think. Larger portions of this work were actually published on paper (!) between 1999 and 2003. Read today, these essays function less as historical artifacts and more as early signals of a reality we now take for granted.

:::


Building Self-Healing Java Microservices: A Step-by-Step Guide

2026-03-28 02:59:33

Introduction: The Evolution of Distributed Architecture

In the early days of Java development, we relied on the "Cargo Ship" architecture: massive, monolithic deployments that handled every request within a single, unified codebase. While this simplified transactions, it created a fragile ecosystem where one memory leak could crash the entire platform.

For a modern Java architect, the challenge is no longer just writing logic; it is designing systems that survive the inherent instability of distributed networks. This guide explores how to build Java microservices that are performant, self-healing, and scalable.

Step 1: Optimizing the JVM for Microservices

The biggest barrier to entry for Java microservices is the "startup tax" of the Java Virtual Machine (JVM). Traditional deployments often require significant memory to initialize the heap, which is inefficient when running hundreds of small containers.

  • Feature Highlight: GraalVM Native Image changes the game by compiling your Java code into a standalone native binary ahead of time. This eliminates the need for a heavy JVM at runtime, allowing your service to start in milliseconds rather than seconds.
  • Technical Implementation: Instead of packaging a "Fat Jar," utilize Spring Boot's native build tools. This approach strips unnecessary metadata, resulting in a binary that uses a fraction of the RAM typically required by a legacy monolith.

Step 2: Mastering Asynchronous Flow with CompletableFuture

In a microservices environment, synchronous calls are the enemy of performance. If a service waits for a database or an external API, it consumes threads that could be used for other tasks.

Feature Highlight: Java’s CompletableFuture allows you to trigger an asynchronous task and continue processing without blocking the thread.

import java.util.concurrent.CompletableFuture;

public class AsyncProcessor {
    public CompletableFuture<String> fetchExternalData() {
        // Triggering the request asynchronously
        return CompletableFuture.supplyAsync(() -> {
            // Simulate an external API call
            return "Data retrieved successfully";
        }).thenApply(data -> data.toUpperCase());
    }
}

By chaining operations with .thenApply() or .thenAccept(), you create a non-blocking pipeline that scales naturally as demand increases.
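Composition goes further than a single chain: two independent calls can run concurrently and be merged with .thenCombine(). The sketch below is a minimal, self-contained illustration; the method names and hard-coded values are assumptions for demonstration, not part of any real service.

```java
import java.util.concurrent.CompletableFuture;

public class RechargeAggregator {
    // Two independent lookups that can run concurrently (values are illustrative)
    static CompletableFuture<Integer> fetchBalance() {
        return CompletableFuture.supplyAsync(() -> 500);
    }

    static CompletableFuture<Integer> fetchBonus() {
        return CompletableFuture.supplyAsync(() -> 50);
    }

    // Combine both results once each future completes; neither call blocks the other
    static CompletableFuture<Integer> totalCredit() {
        return fetchBalance().thenCombine(fetchBonus(), Integer::sum);
    }

    public static void main(String[] args) {
        // join() is used here only to retrieve the final result for printing
        System.out.println("Total credit: " + totalCredit().join());
    }
}
```

Because both futures start immediately, the total latency approaches the slower of the two calls rather than their sum.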

Step 3: Fault Tolerance via Circuit Breakers

In distributed systems, cascading failure is a constant threat. If one service becomes slow, it can back up the entire chain.

Feature Highlight: Resilience4j is the gold standard for Java fault tolerance. It provides a decorator-based approach to Circuit Breakers.

import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import org.springframework.stereotype.Service;

@Service
public class ResilienceService {

    // Injected client for the downstream dependency being protected
    private final ExternalClient externalClient;

    public ResilienceService(ExternalClient externalClient) {
        this.externalClient = externalClient;
    }

    // The Circuit Breaker monitors failure rates
    @CircuitBreaker(name = "backendService", fallbackMethod = "fallback")
    public String executeRequest() {
        return externalClient.call();
    }

    public String fallback(Exception e) {
        // Log the failure and return a cached response to keep the system alive
        return "System temporarily throttled; returning cached data.";
    }
}

When the circuit is "open," the method call is immediately bypassed, preserving your system’s resources for healthy traffic.

Step 4: Event-Driven Logic with Spring Cloud Stream

Moving from a monolithic database transaction to a decentralized model requires a new approach to communication. Rather than direct API calls, we use an Event-Driven Architecture.

The Architecture:

  • Publishers: Services that announce state changes via message topics.
  • Subscribers: Independent services that react to these changes in real-time.

This decoupling ensures that if your "Order Service" is overloaded, your "Inventory Service" remains operational, as it simply processes events from the message bus whenever it has capacity.
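Spring Cloud Stream wires publishers and subscribers to a broker through binders; the decoupling itself can be shown with a plain in-memory queue standing in for the message bus. This is a deliberately simplified sketch, and all class, method, and event names here are illustrative.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EventBusSketch {
    // Stand-in for a message topic such as "order-events"
    static final BlockingQueue<String> orderEvents = new LinkedBlockingQueue<>();

    // Publisher: announces a state change and returns immediately
    static void publishOrderPlaced(String orderId) {
        orderEvents.offer("ORDER_PLACED:" + orderId);
    }

    // Subscriber: drains events whenever it has capacity
    // (non-blocking poll here; a real subscriber would block or be message-driven)
    static String consumeNextEvent() {
        return orderEvents.poll();
    }

    public static void main(String[] args) {
        publishOrderPlaced("A-42");   // Order Service publishes...
        publishOrderPlaced("A-43");   // ...even while Inventory Service is busy
        // Inventory Service processes later, at its own pace
        System.out.println(consumeNextEvent());
        System.out.println(consumeNextEvent());
    }
}
```

The point is the shape, not the queue: the publisher never waits on the subscriber, so a backlog in one service does not stall the other.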

Step 5: Distributed Consistency via the Saga Pattern

In a monolith, @Transactional handles everything. In microservices, we must use the Saga Pattern to maintain eventual consistency across separate databases.

The Implementation:

  • Transaction A: Service 1 updates its local DB and publishes an Event_Success.
  • Transaction B: Service 2 consumes the event and performs its own local update.
  • Compensating Transaction: If Service 2 fails, it publishes Event_Failure, which triggers "rollback" logic in Service 1 to restore its previous state.
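The three steps above can be sketched in plain Java. This is a toy model, assuming in-process calls in place of real message-driven services; the class, method, and event names are illustrative, and a production saga would persist state and deliver events through a broker.

```java
public class SagaSketch {
    // Local state owned by Service 1 (e.g., an order record)
    static String orderStatus = "NONE";

    // Transaction A: Service 1 commits its local update, then emits an event
    static String placeOrder() {
        orderStatus = "PLACED";
        return "Event_Success";
    }

    // Transaction B: Service 2 attempts its own local update;
    // on failure it emits the event that triggers compensation
    static String reserveInventory(boolean inStock) {
        return inStock ? "Event_Reserved" : "Event_Failure";
    }

    // Compensating transaction: Service 1 rolls back its own state
    static void compensate(String event) {
        if ("Event_Failure".equals(event)) {
            orderStatus = "CANCELLED";
        }
    }

    public static void main(String[] args) {
        placeOrder();
        String outcome = reserveInventory(false); // Service 2 fails
        compensate(outcome);                      // Service 1 restores consistency
        System.out.println("Final order status: " + orderStatus);
    }
}
```

Note that compensation is a new forward action ("CANCELLED"), not a database rollback: each service can only undo its own local state.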

Step 6: Empirical Optimization via Memory Profiling

Even if you use GraalVM or thin JARs, you will eventually face memory pressure. When that happens, you need to know exactly which objects are hogging your heap.

  • Feature Highlight: Java Flight Recorder (JFR) is an extremely low-overhead profiling tool built into the JVM; paired with a viewer such as VisualVM, it allows you to record the behavior of your application in production with minimal performance impact.
  • Code-Level Monitoring: Integrate the MemoryMXBean to programmatically monitor your heap usage.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MemoryMonitor {
    public void logMemoryUsage() {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        long used = memoryBean.getHeapMemoryUsage().getUsed();
        long max = memoryBean.getHeapMemoryUsage().getMax();

        System.out.printf("Heap Usage: %.2f%% %n", (double)used / max * 100);
    }
}

By integrating this into your heartbeat logs, you turn "silent failures" into "observable metrics".

Step 7: Security through Identity Federation

In distributed environments, static credentials are a major vulnerability. We use Workload Identity Federation to ensure that every microservice has a temporary, scoped identity.

Technical Implementation: Use a Secure Token Service (STS) to exchange environment-level identity for a temporary JSON Web Token (JWT). This JWT is injected into the request header for internal API calls, providing a "Zero-Trust" security posture without hard-coded secrets.

Comparison: Monolith vs. Distributed Java

| Technical Layer | Monolithic Java | Distributed Microservices |
|----|----|----|
| Runtime | Heavyweight JVM | GraalVM Native |
| Communication | Direct Method Calls | Asynchronous Event-Bus |
| Resilience | Try-Catch blocks | Circuit Breakers |
| State | Shared Database | Eventual Consistency (Saga) |

Final Summary

Moving from monoliths to microservices is not just a change in deployment strategy; it is a fundamental shift in reliability engineering. By utilizing GraalVM for footprint optimization, CompletableFuture for asynchronous processing, and the Saga pattern for consistency, you are building an architecture designed for "distributed chaos".

The goal of a high-performance Java architect is to build systems that heal themselves. When we stop worrying about monolithic database transactions and start designing for asynchronous events, we unlock the ability to scale globally.

Humanity Struggles on Earth, Yet We Are Spending $20 Billion on the Moon: Priorities Worth Questioning

2026-03-28 02:54:31

The announcement from NASA that it plans to pursue a US$20 billion lunar base should have been received as a bold step forward for humanity. Instead, it raises a far more uncomfortable question: what exactly are we prioritising — and why?

Under the direction of Administrator Jared Isaacman, the agency is mapping out an “enduring presence” on the Moon, alongside ambitions for nuclear-powered spacecraft capable of reaching Mars.

At the same time, NASA has confirmed it will pause its Gateway project — a lunar-orbiting space station — in favour of focusing resources on building directly on the Moon’s surface.

On paper, it sounds like progress. In reality, it highlights a growing disconnect between technological ambition and human need.

Because while billions are being earmarked for infrastructure on a lifeless rock, conditions here on Earth remain dire for millions.

Across parts of the world, access to clean drinking water is still not guaranteed. Healthcare systems remain underfunded or entirely out of reach for vulnerable populations. Basic infrastructure — roads, sanitation, energy — continues to fail the very people it is supposed to serve.

This isn’t a fringe issue. It’s a global one.

And that’s what makes the scale of this investment so confronting. The Moon, for all its scientific value, is not inhabited. There are no communities to sustain, no ecosystems to protect, no immediate humanitarian crises to solve. It is, by definition, a long-term project — one rooted in exploration, prestige, and future potential.

Meanwhile, the crises on Earth are immediate. They are human. And they are solvable.

Supporters of space exploration will argue — correctly — that programs like these drive innovation. That technologies developed for space often find their way back to Earth, improving lives in ways we don’t immediately see.

Historically, that has been true. But it doesn’t fully address the optics, or the ethics, of spending at this scale while basic human needs remain unmet.

There is also a broader shift taking place. By pausing the Gateway project, NASA is not just refining its strategy — it is signalling a willingness to abandon one major vision in favour of another.

That raises further questions about long-term planning, consistency, and whether these decisions are being driven by science, politics, or optics.

None of this is to say humanity should stop exploring space. Exploration is part of who we are. It pushes boundaries, expands knowledge, and inspires generations. But there is a difference between exploration and imbalance.

Right now, the balance feels off.

Because when billions are directed toward building a permanent presence on the Moon — a place with no air, no water, and no life — while people on Earth are still fighting for the basics, it becomes harder to frame these ambitions as purely noble.

It starts to look less like progress, and more like a reflection of misplaced priorities.


How Generative AI Is Reshaping Technical Writing

2026-03-28 02:49:08

Coffee!! Most of us begin our day with a cup of coffee. A GenAI-native technical writer is no different.

As a technical writer, you attend the first meeting of the day, where a product manager walks through a product feature's scope along with some UI mockups. As an inquisitive technical writer, you ask many questions during the meeting. The product manager and UX designers share more information about the purpose of the feature, the use cases it solves, and how it is designed for ease of use. You are not taking notes in a notebook or capturing everything in digital form. Instead, you are probing all stakeholders to cover every aspect of the feature.

Then you feel the breeze after this meeting: a 15-minute break before you jump into another meeting with the customer support team. Suddenly, a ping comes from your manager saying that the product feature is shipping tomorrow and she wants to review the feature documentation before noon.

Without breaking a sweat, you power up your AI CoPilot and upload the meeting recording. Your organizational style guides, the content structure for feature documentation, and other instructions are already stored in the AI CoPilot's skills.md file. You prompt the AI CoPilot to produce the "feature documentation" and answer any clarifications it seeks. You glance at the CoPilot to confirm it has started working.

Once it comes back with a draft which is 100% compliant with your style guide, you review it for content accuracy. You are not checking grammar, tone, or formatting—those are already taken care of by the AI CoPilot. Your attention is on something more valuable: Does this content truly help the user accomplish the task?

You verify whether the steps align with the UI mockups shared in the meeting. You check if the explanations capture the real intent behind the feature. Occasionally, you add a small clarification that only a human who participated in the discussion would know. In some places, you simplify the language even further because you understand the mindset of your users.

Within minutes, the document evolves from a technically correct draft into a user-centered guide.

You then ask the AI CoPilot to generate a few additional assets:

  • A short release note summary
  • A tooltip description for the UI
  • A knowledge base snippet for customer support
  • A few search-friendly FAQs

All of them follow the same style guide and reuse the same core explanation of the feature. What used to take several hours of manual writing and editing now takes less than thirty minutes.

Before sending the document to your manager, you run one more prompt: "Highlight any ambiguous statements or assumptions that may confuse the user."

The CoPilot flags two areas where the workflow might not be clear. You refine the wording, confirm the steps with the UX designer on chat, and finalize the document.

By noon, the documentation is ready for review.

Your manager responds with a simple message: “Looks good. Ship it.”

But the day of a GenAI-native technical writer does not stop there.

You then ask the AI CoPilot to convert the same feature documentation into multiple formats:

  • A how-to article
  • A short tutorial script for a product video
  • A GenAI-optimized knowledge article so that AI assistants can retrieve it accurately
  • A structured dataset that can power the product’s in-app AI help

In the past, technical writers were primarily document creators. Today, GenAI-native technical writers are knowledge architects.

They do not just write content. They design information so that humans, search engines, and AI systems can all understand it.

The real skill of a GenAI-native technical writer is not typing faster.

It is asking the right questions, structuring knowledge clearly, and using AI to amplify their thinking.

The coffee may start the day. But curiosity, clarity, and collaboration are what truly power a GenAI-native technical writer.

How AI Quietly Erodes Independent Thinking

2026-03-28 02:39:04

Three weeks ago, we were preparing for a launch and running extensive tests. We utilized multiple AIs for the process: GPT, Cohere, Mistral, Perplexity…

At first, I reviewed the information provided and made adjustments to improve it. By the third day, however, I was just passing information through. Needless to say, it took more than twice as many days to recover my independent thinking.

What happened to me?

Generative AI offers clear advantages.

It saves time, reduces friction, and provides instant access to sophisticated, relevant language tailored to specific inquiries.

This convenience is undeniable, and indeed, many people are benefiting from it.

However, lurking beneath the surface is a quieter, yet more serious problem. It has less to do with information overload—at least, not in the way people typically imagine.

The more profound problem is that AI frequently delivers answers before our own thoughts have had the chance to fully take shape.

The timing matters.

Human thinking is not just about receiving information and deciding whether it is correct.

It has its own sequence: a question arises, and then there is a pause.

In that pause, we search for fragments, predict, test, feel uncertainty, notice resistance, recall earlier experience, and begin to form an orientation of our own.

At times, this process can feel tedious. Before I reach an answer, I may feel anxious or uncomfortable. Sometimes it is slow. Sometimes it feels inefficient. But that interval is not wasted time. It is often the space where thought becomes personal rather than borrowed.

==This is also where neural development, cerebral blood flow, awareness, and the growth of the mind are shaped.==

Generative AI compresses that "interim space"—that pause—to an extreme degree.

So much so, in fact, that we almost forget it ever existed.

When we pose a question, an answer returns almost instantly. This response is not merely a simple reply; in many cases, it is presented as a structured interpretation, a summary, a recommendation, a reframing of perspective, or even as a coherent and compelling explanation—one so polished that it can be accepted at face value.

Before we even have the chance to harbor a doubt, we have already begun reading the content and finding it convincing.

Before we even have the chance to formulate a hypothesis of our own, we are already engaged in evaluating the hypotheses presented to us. Without waiting for deep contemplation to take root, our brains immediately commence the task of processing information.

What is unfolding here is a situation that goes beyond a mere alteration of "speed." It fundamentally shifts the very "standpoint" of the thinking subject. We no longer follow the traditional process—moving from "inquiry" to "deliberation," and finally to the "formulation of concepts." Instead, we transition directly from "inquiry" to the "management of answers."

We classify, evaluate, compare, edit, refine, and curate these responses. In pursuit of sharper phrasing, more concise summaries, more strategic perspectives, more persuasive language—and, above all, expressions that exude a sense of our own unique "selfhood"—we continuously instruct the AI to make revisions.

Thus, our minds become wholly preoccupied with the ceaseless processing of already-formed "words," before ever having the opportunity to generate original thoughts from within ourselves.

This is one reason why AI can feel mentally exhausting even when it seems to reduce effort.

Many people describe cognitive overload as a problem of information overload.

That's true, but it's not enough. The problem now is not just the amount of information, but the timing. Too much language is completed before its inner meaning has formed. The brain is pulled into evaluation too quickly.

Working memory is filled with options, interpretations, next steps, and possible modifications before a person can identify what they are actually thinking or feeling. It's exhausting in a very specific sense. It's not deep work fatigue.

==It's constant triage fatigue.==

The idea of ​​the brain as a processing system is important here.

One of the serious risks of continued AI use is not just that we become dependent on the answers. It is that more and more of the brain's capacity is mobilized to process output produced by the AI before it generates any thoughts of its own.

Over time, the mind can begin to function less as a field of inquiry and more as a system for processing externally generated language. Managing output becomes ever faster, while the practice of asking questions gets less and less exercise.

The reason this shift in perspective is crucial is that human thought is not merely "reactive"; it is also "generative." It does not proceed in a straight line. It wanders here and there, tests possibilities, returns to its starting point, calls upon past experiences, captures emotions, and—while constantly correcting itself—slowly gives shape to that which has not yet taken form. ==It is here that the brain, the nervous system, and the mind are all actively engaged.==

Some of the most significant thoughts reveal themselves precisely ==*before*== they have been formulated into clear words. This phenomenon consists of the sensations that arise while a thought remains in a state not yet amenable to verbal expression—that is, when the thinker still lacks certainty, relying merely on intuition to sense something, and is in the very midst of groping and searching in an effort to grasp the fundamental essence of the "question" lying before them.

As AI responses become more instantaneous and more highly personalized, other subtle risks arise. The more similar the answer is to us, the easier it is to be accepted without much internal consideration. Some people read the responses and think, Okay, that makes sense.


But making sense and integrating are not the same thing. Language can feel clear even when your inner life remains unresolved. Maybe your thoughts, emotions, memories, contradictions, and lived experiences haven't caught up yet.

This is why some people feel strangely tired after receiving useful answers. Although the mind is given a structure, the self does not necessarily metabolize it.

==And the brain is busy processing one after another.==

This discrepancy has consequences. It weakens the developmental value of thought itself.

What is avoided is not delay, but the process by which doubts and questions become your own wisdom.

That means staying with uncertainty, forming hypotheses, noticing inner reactions, recalling memories, testing meaning, and revising views. These steps are not cosmetic. Those are some of the ways to build self-confidence.

They are part of the mechanism by which internal consistency is formed. ==They are part of the way our nervous system learns by actively participating rather than passively accepting.==

That loss also impacts our development. The brain changes through use.

It adapts to repeated patterns of attention, behavior, recall, evaluation, and learning. When we repeatedly form hypotheses, retrieve memories, grapple with uncertainty, and refine our perspectives, we are not merely arriving at an answer.

These uncomfortable moments help strengthen the very capacity for contemplation itself. This is a discipline—the discipline of arriving at an answer through thought.

When these intermediate stages are routinely outsourced, efficiency may improve. However, our practice of generating insight for ourselves may diminish. This does not necessarily lead to an immediate decline in intelligence. The change is much subtler.

A person may still perform exceptionally well, communicate clearly, and act quickly. Yet beneath the surface, the habit of inquiry itself may be weakening.

After all, success in life rarely comes down simply to having drafted the “right” document.

This may help explain why, despite being surrounded by tools designed to make thinking easier, many people still feel mentally constrained, burdened by a strange fatigue, and disconnected from their inner vitality.

They are not simply drowning in information or overworking. Often, the brain is kept in a constant state of processing, leaving too little time for deeper reflection and too little space for genuine deliberation.

The problem is not that AI is making people stupid. The deeper issue is that we are beginning to demand an endless stream of answers from AI, and in doing so, may be training ourselves to prioritize processing over reflection.

Naturally, these effects ripple outward in subtle ways. Mental fatigue deepens, self-trust wavers, and maintaining inner coherence becomes a struggle. People start seeking answers with more haste—settling for "good enough" rather than striving for the optimal—while growing averse to waiting and losing their tolerance for ambiguity.

There is a real risk that we start relying on external frameworks before we even have the chance to shape our own. Over time, this dulls the mind's innate vitality; the brain begins to expend nearly all its energy simply processing information. This doesn't necessarily lead to a total breakdown. Instead, it manifests more quietly: a decline in mental activity, a waning of curiosity, and a lost willingness to sit with a problem until a truly original idea takes shape.

The real danger isn't just that AI thinks for us. It’s more fundamental: AI threatens to sever the very process through which a thought matures and becomes truly our own. We need more than just answers. We need an environment where a question can deepen, where uncertainty is tolerated, and where meaning is spun from within. This is where neuroplasticity happens—this is where the "Aha!" moment lives.

If we lose that, we lose more than just cognitive capacity. We lose the ability to generate the "internal work" that allows real thinking to take root in the first place.

Rie, Founder of DriftLens