RSS preview of Blog of HackerNoon

2025 AI News in Review: How AI Went Mainstream in Engineering

2026-02-17 19:00:07

2025 is the year AI stopped feeling like a “tool you try” and started being treated as something engineering teams have to operate.

In January, most engineering teams experienced AI through copilots and chat assistants. They were useful, sometimes impressive, but still easy to keep at arm’s length: a tab in your IDE, a prompt window on the side, a helper that sped up the parts of the job you already understood.

By December, the center of gravity had shifted. AI showed up less as a standalone interface and more as a layer threaded through the tools engineers already live in: IDEs, code review, issue tracking, incident response, and internal documentation. Chat became a coordination surface, while integrations allowed models to pull context directly from production systems and systems of record—and push changes back into them.

That shift explains why 2025 will be remembered as the year AI crossed the chasm to become embedded in engineering. Not because teams rushed autonomous agents into production, but because operating AI at scale exposed a harder question: how do you safely run AI-created code in production once the speed of writing new lines of code is no longer the constraint?

As soon as code generation accelerated, the hard problems moved downstream—intent, reviewability, testability, traceability, ownership, and resilience.

How 2025 Started: Widespread Experimentation, Shallow Integration

By the start of 2025, AI usage in software development was no longer speculative. It was already mainstream. According to the 2025 Developer Survey from Stack Overflow, more than 80% of developers reported using AI tools in their development workflows, with large language models firmly embedded in day-to-day engineering work.

What varied widely was how those tools were used.

Most teams adopted AI the way they adopt any new productivity aid: individually, opportunistically, and with limited coordination across the organization. Copilots helped engineers draft boilerplate, translate code between languages, explain unfamiliar APIs, or sketch out tests. Chat assistants handled “how do I…” questions, quick debugging sessions, and exploratory prototyping.

The impact was real, but narrow. Individual developers moved faster, while the broader system of how software operated remained largely unchanged.

AI lived at the edges of the development process rather than at its control points. It wasn’t deeply integrated into code review workflows, CI pipelines, release gates, or production telemetry. AI-generated outputs flowed into the same downstream processes as human-written code, without added context about intent, risk, or expected behavior. As a result, testing, QA, defect triage, and incident response stayed mostly manual—and increasingly strained as the volume and speed of change grew.

That mismatch created a familiar tension. Code velocity increased, but teams still struggled to confidently review, validate, and ship what they produced. As AI accelerated upstream work, pressure concentrated in the downstream stages responsible for quality and reliability.

One of the clearest signals that this wasn’t just a hype cycle came from sentiment. Even as AI usage continued to rise, overall favorable sentiment toward AI tools declined to roughly 60% in 2025, down from more than 70% in the prior two years. That shift didn’t reflect rejection; it reflected normalization. The AI honeymoon is over, but the marriage persists.

When a technology is new, teams evaluate it based on potential. Once it becomes standard, they evaluate it based on cost: reliability, correctness, security exposure, maintenance overhead, and the effort required to trust its output. By early 2025, many engineering organizations had reached that point. AI was already in the loop, and the central question had shifted from whether to use it to how to operate it responsibly at scale.

The Milestones That Pushed AI Into Engineering Operational Rhythm

If you look back at 2025’s AI news cycle, the most important milestones weren’t the loudest demos or the biggest benchmark jumps. They were the signals that AI was becoming more predictable, more integrable, and more governable, the qualities required when software moves from experimentation into production reality.

Major Model Releases: From Impressive to Operable

Across providers, 2025 model releases converged on a similar theme: less emphasis on raw capability gains and more focus on how models behave inside real engineering systems.

With GPT-5.1 and GPT-5.1 Pro, OpenAI emphasized reasoning consistency, controllability, and enterprise readiness. The real shift wasn’t just better answers; it was more operable behavior. Outputs became easier to reason about, integrate into existing workflows, and constrain within organizational guardrails. That matters when models are no longer assistants on the side, but contributors inside production pipelines.

Anthropic’s Claude Code updates reinforced the same direction from a tooling-first angle. By focusing on coding-native behavior and deeper IDE integration, Claude Code reduced friction between AI output and developer workflows. When models live where engineering work already happens—rather than in detached chat windows—they start to function as infrastructure rather than accessories.

Google’s Gemini 3 pushed the platform narrative further. Multimodal reasoning combined with tighter integration across Google’s developer ecosystem reinforced the idea that AI is not a single interface, but a capability embedded across the software supply chain.

Meanwhile, releases like DeepSeek V3.2 and LLaMA 4 continued lowering the barrier for teams that wanted greater control—self-hosted inference, private deployments, and cost-efficient customization. These models mattered less for their raw performance and more for what they enabled: experimentation with AI as part of internal infrastructure, not just as a managed API.

Taken together, these releases marked a clear transition. Models were increasingly designed to behave reliably inside production environments, not just perform well in isolation.

Emerging Categories: Quality, Validation, and Confidence Became the Battleground

The second major shift in 2025 wasn’t driven by any single model release. It emerged from what those models exposed once teams began using them at scale.

As code generation accelerated, new constraints surfaced almost immediately. Change began outpacing review, subtle defects surfaced later than teams expected, and rising complexity made system behavior harder to predict. Code written by AI tools was harder to troubleshoot and support because no one in the organization understood it deeply, including the AI tools that wrote the code.

In response, new categories focused on quality, validation, and confidence gained traction. These weren’t incremental productivity upgrades. They were attempts to rebalance a system where speed had begun to outpace certainty.

One clear signal came from the evolution of agentic tooling itself. At GitHub Universe 2025, GitHub introduced Agent HQ, reframing agents as something to be supervised and governed, not unleashed autonomously. Instead of promising replacement, Agent HQ treated agentic development as an orchestration problem, giving teams visibility into what agents were doing across providers and where human oversight still mattered. That framing acknowledged a core reality of production engineering: autonomy without guardrails increases risk.

A similar shift appeared in testing and validation. AWS’s Nova Act, launched at re:Invent 2025, positioned UI automation as an infrastructure problem rather than a scripting exercise. By treating QA workflows as coordinated agent fleets—and publishing reliability claims at scale—it signaled that testing itself needed to evolve to keep up with AI-driven development velocity.

At the same time, a new wave of attention landed on AI SRE—tools designed to detect anomalies, predict failures, and respond faster once systems are already running in production. These tools typically take one of two approaches.

Some integrate with existing observability platforms, ingesting logs, metrics, and traces from systems like Datadog, Splunk, or Prometheus. While this approach is easier to adopt, it inherits the limitations of fragmented observability. Many organizations lack consistent instrumentation, legacy systems emit unstructured logs, and critical signals remain invisible. In these environments, AI can only reason over partial data—and detection remains fundamentally reactive. By the time anomalies surface, buggy code is already in production and customers may already be impacted.
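As a rough illustration of this first approach (not any specific vendor's product), the sketch below polls a Prometheus-compatible query endpoint for p95 latency and flags a simple threshold breach. The host, PromQL expression, and threshold are illustrative assumptions; real AI SRE tools learn baselines and correlate many more signals rather than hard-coding a cutoff.

```python
# Minimal sketch of the "integrate with existing observability" approach:
# poll a Prometheus-compatible endpoint for p95 latency and flag a simple
# threshold breach. Host, query, and threshold are illustrative assumptions.
import requests

PROM_URL = "http://prometheus.internal:9090/api/v1/query"  # assumed internal host
QUERY = 'histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le, service))'
THRESHOLD_SECONDS = 0.5  # illustrative SLO threshold

def check_latency():
    resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        service = series["metric"].get("service", "unknown")
        p95 = float(series["value"][1])
        if p95 > THRESHOLD_SECONDS:
            print(f"anomaly: {service} p95 latency {p95:.3f}s exceeds {THRESHOLD_SECONDS}s")

if __name__ == "__main__":
    check_latency()
```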

Others take a deeper, inline approach, collecting telemetry directly from infrastructure, runtime environments, or network traffic. While this enables richer signals and earlier detection, it requires extensive infrastructure integration: deploying agents across services, accessing cloud provider APIs, and handling large volumes of raw telemetry. For many organizations, especially those without mature observability practices, this creates long security reviews, operational overhead, and adoption friction.

Both approaches share a more fundamental limitation: observability data shows symptoms, not causes. Detecting rising latency or memory pressure can buy time to mitigate an incident, but it rarely helps teams identify the specific code paths, logic errors, or edge cases responsible for the failure—let alone prevent similar issues from being introduced again.

As a result, AI SRE tools address reliability after defects reach production. Valuable, but late.

What became increasingly clear in 2025 is that the hardest problems sit upstream. The gap between “tests pass” and “this code is safe in production” remains large. Customer-reported issues still arrive through support channels, detached from code context. Code reviews still rely heavily on human intuition to spot risk. And AI-generated changes increase the surface area of that gap.

The emerging opportunity isn’t better incident response—it’s preventing incidents from happening in the first place. That means shifting intelligence closer to where code is written, reviewed, and merged, and connecting real-world failure signals back to specific changes before they reach production.

Taken together, these emerging categories point to the same conclusion: the bottleneck in modern engineering has moved from writing code to validating and shipping it safely.

Funding and Partnerships: Capital Followed Developer Platforms and Measurement

Funding trends in 2025 reinforced that shift. Investment flowed toward autonomous testing, QA data generation, and predictive quality platforms. PlayerZero’s own Series A ($20M), backed by founders from companies like Vercel and Figma, reflected growing conviction that predictive software quality would become a core layer in modern engineering stacks.

According to Crunchbase’s end-of-year reporting, AI accounted for approximately 50% of global venture funding in 2025, underscoring how thoroughly AI had moved from optional to assumed. For engineering leaders, however, the more important signal wasn’t the volume of capital—it was where that capital concentrated once AI adoption was no longer in question.

Two moves illustrate this clearly.

Vercel’s $300M Series F reflected confidence in developer platforms that support AI-native development at scale: rapid iteration, production performance, deployment pipelines, and the operational complexity of shipping modern software quickly.

Atlassian’s $1B acquisition of DX sent an even stronger signal. As AI increases output, leaders need better ways to understand whether that output improves delivery. DX sits squarely in the engineering intelligence category, measuring productivity, bottlenecks, and outcomes, and Atlassian explicitly framed the acquisition around helping organizations evaluate ROI as AI adoption accelerates.

The market’s behavior was consistent. Capital flowed toward platforms and measurement layers that help organizations operate AI inside real engineering systems.

Operational durability, not experimentation, became the priority.

Why Agents Haven’t Crossed the Chasm (Yet)

If 2025 was the year AI went mainstream in engineering, a natural question followed: why didn’t autonomous agents go mainstream with it?

The adoption data makes the answer clear. According to the 2025 Stack Overflow Survey, AI agents remain a minority workflow. Roughly half of developers either don’t use agents at all or rely only on simpler AI tools, and many report no near-term plans to adopt full autonomy.

This isn’t hesitation so much as a structural constraint. Autonomous agents require context that most engineering organizations don’t yet have in a reliable, machine-readable form.

Before agents can be effective, they need to understand more than code. They need clarity on:

  • How systems behave under load and what “normal” looks like in production
  • Service ownership, dependencies, and responsibility boundaries
  • Which failures matter most, and where guardrails and policies exist
  • The history behind incidents, architectural decisions, and release processes that govern safe shipping

In many organizations, that context still lives in fragments—scattered documentation, institutional knowledge, dashboards that don’t connect, and postmortems that are difficult to operationalize. When that context is incomplete or inconsistent, autonomy doesn’t create leverage. It increases risk.

As a result, many teams made a deliberate choice in 2025. Rather than pushing agents into fully autonomous roles, they focused on copilots, chat interfaces, and orchestration layers that support engineers while keeping humans firmly in the loop. These tools accelerated work without removing accountability, judgment, or review—properties that remain critical in production systems.

Before responsibility can be delegated to software agents, leaders recognized the need for stronger foundations: reliable quality signals, observability that explains why systems behave the way they do, and evaluation loops grounded in real production risk. As AI moved closer to production, those gaps became harder to ignore and more urgent to close.

From Shipping Code to Shipping Quality: The Leadership Shift That Defined 2025

By the end of 2025, AI code generation was no longer the hard part. Copilots, chat-based assistants, and agent-driven implementations were normal parts of development, but production deployment became the bottleneck. The leadership challenge shifted from “how fast can we generate code?” to “how do we ship quality code consistently as change velocity increases?”

This reframing aligns closely with how investors and operators described the market in 2025. Bessemer Venture Partners’ State of AI report describes a shift from “systems of record,” which store information, to “systems of action,” which orchestrate and validate outcomes. For engineering organizations, the implication is clear: generating artifacts quickly isn’t enough. Teams need systems that connect changes to real-world behavior, enforce constraints, and provide confidence that outputs are safe to ship.

That realization surfaced in three leadership priorities that proved more challenging—and more valuable—than code generation itself.

Preventing Defects Before They Reach Production

As velocity increased, downstream fixes became more expensive and more disruptive. Teams learned that relying on post-deploy monitoring alone was no longer sufficient. Leaders began investing in pre-merge checks that reflect real failure modes, continuous evaluation against production-like scenarios, and regression detection that surfaces risk before release. Bessemer’s report explicitly highlights “private, continuous evaluation” as mission-critical infrastructure, since public benchmarks fail to capture business-specific risk.

Measuring AI by Operational Outcomes, Not Usage

The conversation shifted from “are we using AI?” to “is AI improving outcomes we can defend?” AI investments increasingly had to tie back to metrics leaders already care about: MTTR, defect recurrence, incident frequency, and engineering capacity reclaimed.

McKinsey’s 2025 State of AI research reinforces this point. While only a minority of organizations report meaningful EBIT impact from AI, those that do tend to pair adoption with rigorous KPI tracking, workflow redesign, and validation discipline. Among high performers, improved innovation and competitive differentiation were strongly correlated with how tightly AI was integrated into operational measurement.

Coordinating AI Across the Engineering System

As AI showed up everywhere (in chat, in the IDE, in code review, and in QA), leaders had to ensure these systems worked together rather than forming a fragmented collection of “helpful” tools. Without coordination, faster code generation simply increased noise. With it, teams could reason about impact, enforce standards, and maintain confidence as complexity grew.

For engineering leaders, these priorities highlighted the real shift of 2025: AI stopped being a way to write more code and became a test of how well their organizations could manage quality, coordination, and measurement at scale.

Turning Mainstream Adoption Into a Durable Advantage

By the end of 2025, AI was no longer something engineering teams experimented with on the side. It had become something they had to operate. Copilots, chat assistants, and AI-powered tools were embedded across development, review, and testing, making AI a permanent part of how software gets built and shipped.

What separated progress from pain was not access to better models, but operational maturity. Teams that focused on preventing defects before release, measuring AI’s impact through real engineering metrics, and coordinating AI across systems were able to move faster without losing confidence. This didn't require maturity to precede adoption—many high-performing teams built these capabilities because AI adoption forced the issue. What mattered was responding to that pressure immediately, not deferring it.

Teams that treated AI as a thin layer added onto existing workflows struggled with review fatigue, regressions, and rising operational risk. Mainstream adoption, by itself, was neutral; discipline determined whether it became an advantage or a drag.

Looking ahead, this foundation is what will make the next wave of autonomy viable. Agents will only deliver real leverage once teams have reliable context, quality signals, and evaluation loops in place. 

For engineering leaders, the opportunity now is clear: turn AI from a collection of helpful tools into a strategic leverage point—one that strengthens quality, improves decision-making, and prepares the organization for what comes next.

He Lost Everything, Then Bet on Himself

2026-02-17 19:00:02

:::info Astounding Stories of Super-Science February, 2026, by Astounding Stories is part of HackerNoon’s Book Blog Post series. You can jump to any chapter in this book here. The Moors and the Fens, volume 1 (of 3) - Chapter III: Introduces Mr. Alfred Westwood

Astounding Stories of Super-Science February 2026: The Moors and the Fens, volume 1 (of 3) - Chapter III

Introduces Mr. Alfred Westwood

By J. H. Riddell

:::

One chilly day in that most depressing of all English months, November, Mr. Merapie’s principal clerk stood in a manner at once easy and graceful before the fire in the outer office. His right foot was firmly planted upon an old-fashioned chair covered with hair-cloth, and thus he was enabled to rest his elbow on his knee, and finally to place under his chin a remarkably slender gentlemanlike hand, adorned with two rings, and set off to greater advantage by a broad linen wristband, white as the driven snow, and fine as Irish looms could weave it.

To attitudinize after this fashion he considered one of the especial privileges of his situation, none of the junior clerks being ever permitted—at least never in his presence—thus conjointly to enjoy the luxury of thought, heat, and dignity, with one foot supported by the crazy chair, which was giving up its stuffing by almost imperceptible degrees.

Whenever he was in a particularly good, or a peculiarly bad, temper, he assumed the position above indicated, and addressed by turns words of encouragement or of rebuke to his fellow-labourers, as he exhibited his jewellery, caressed his whiskers, and apparently reflected how handsome he was.

For vanity was the most perceptible feature in the character of Mr. Alfred Westwood, whose cleverness was only equalled by his impudence—his impudence by his hypocrisy—his hypocrisy by his ambition—and his ambition by his self-esteem. He was fond of money, not exactly for love of it, but for love of himself. He wanted it to spend in the purchase of expensive broadcloths, fine linens, cambric handkerchiefs, new hats, rare perfumes, macassar oil, gold chains, signet rings, extraordinary soaps, and endless cigars. There was no clerk, or, indeed, no young sprig of the aristocracy, in London who dressed so well, or seemed so confirmed and utterly hopeless a dandy, as Alfred Westwood, whose hair was always arranged after the most becoming and approved mode; whose speech, carriage, deportment, manners, were, as he fondly believed, unexceptionable; who learnt privately all the new dances as they came out; who was conversant with the appearance of every scrap of feminine nobility that drove in the Park, or rode down Rotten Row; who saw each new piece which was brought out at, and criticised every new actor who crossed the boards of, any one of the theatres; who could wind his way blindfold through the West End—thread all the intricacies of that magic region better, perhaps, than if he had been born and brought up in Belgravia; who, to condense all into one single sentence, desired in the innermost recesses of his heart to be considered a fashionable man about town; and who was, in fact, head clerk in the establishment of Mr. Merapie, a rich, eccentric merchant, possessed of almost fabulous wealth, and residing in the “Modern Babylon.”

No man of mortal mould was ever able to get a near sight of the history of Mr. Westwood’s life, and read it through from beginning to end. It was currently believed he once had parents, but no one could state the circumstance as a fact of his own knowledge. Sisters, brothers, cousins, relatives, friends, he had apparently none; he appeared merely to be a stray waif, possessed of many personal attractions, floating lightly over the sea of London society, who came of necessity in contact with, and greeted scores of his fellow-creatures during the course of his passage from a far distant port to some unknown destination, but who belonged to no one, made a confidant of no one, seemed claimed by no one, was loved by no one—save himself.

He was of what that queen of blessed memory, Elizabeth, would, perhaps, have termed a “just height,” inasmuch as he was neither inconveniently tall, nor yet remarkably short; somewhat slight in figure, but extremely well-proportioned; of a fair complexion, with brown eyes, hair and whiskers rather dark than light, and teeth so white and regular, that from a benevolent desire not to deprive society at large of a certain pleasure, he was perpetually smiling in a manner which, although a few disagreeable persons considered it artificial, he himself deemed perfectly bewitching.

But if vanity be of all human weaknesses and follies the most contemptible and unendurable, it is also the least really hurtful to any one, save the individual who, through its intervention, lives in an atmosphere of perpetual self-congratulation; and, had Mr. Westwood’s sole characteristic been an unbounded admiration for his own person, he might daintily have walked through life, the City, and his beloved West End, unchronicled, unheeded by me.

It was, however, his leading foible, whilst ambition appeared to be his crying sin. He desired not only admiration, but position; he wished to trade, to amass wealth, to retire, to have a town residence and a country seat, servants, equipages, vineries, green-houses, paintings, grand society; and to accomplish these few trifling desires, he at the age of seven-and-twenty had started, as the phrase runs, “on his own account,” with an available capital of ten pounds three and ninepence, and a stock of assurance which, if it could only have been coined into gold, might for years have been drawn upon ad libitum, with a positive certainty that any banker in the kingdom would honour the cheque.

But impudence, unhappily, cannot by any process of alchemy be turned into sovereigns, although it may, and frequently does, prove the means of obtaining them; and ten pounds three and ninepence, unlike the present fashionable swimming vests, will not expand to unheard-of dimensions, and keep the fortunate possessor’s head above water “for ever;” and moreover, people will sometimes weary of giving credit, and begin to ask decidedly for a settlement. In consequence of all these things, Alfred Westwood, at the expiration of two brief years, found himself “unable”—so he told all whom it might concern—“to meet his liabilities.” In plain words, his debts were just on the verge of seven thousand pounds, and his assets not a farthing above five and twopence.

In the interim he had lived like a lord, kept a cob, hired a valet, and lodged in St. James’s: and when in due course he passed through the Bankruptcy Court he blandly told the commissioner his expenditure had been most moderate—only six hundred per annum; and in an extremely genteel accent entered an indignant protest against an illiberal and insulting demand made by his creditors (when he politely appeared, at their desire, to answer their questions, and afford them all the assistance in his power), that he should give up, for their benefit, his watch, chain, rings, and eye-glass, with which articles he had adorned himself, to the end that he might, even in ruin, look like a gentleman.

But remonstrance proving vain, with a sigh he relinquished these unpaid-for mementoes of happier days, made a full statement of his affairs, solemnly affirmed he and he alone was the party deserving of commiseration, and proved to the satisfaction, though decidedly not to the gratification, of all present, that, let him be what he might—knave, simpleton, dupe, schemer, or fop—nothing in the shape of compensation could be wrung out of him, whether freeman or prisoner; that he had no friends who would “stand by him,” or, in other words, pay them; that a merciless prosecution would do him little harm, and could not, by possibility, benefit the sufferers in the slightest degree; that, finally, it was worse than useless to fling good money after bad; and therefore Mr. Westwood escaped better than a better man, and was permitted to go on his way rejoicing.

Although he thought an immense deal on the subject, he said “never a word” when he heard his creditors were about to insure his life, in order to secure themselves, if possible, against total loss; but apparently contrite, broken spirited, and broken hearted, did all that was required of him, meekly got the requisite documents filled up and signed, went quietly before the ruling powers “to be viewed,” and have the probabilities of his death discussed and the consequent rate of premium decided on; patiently held his peace for a period, and permitted those whom he had so deliberately cheated to complete their part of the business ere he, with a grim smile on his lips, began his.

What with actual anxiety, slight indisposition, and two or three sleepless nights, he found himself sufficiently ill to be able to carry his project into execution with a chance of success; and, accordingly, discarding all useless ornaments, with a very shabby coat, hair neither glossy nor well arranged, and a hat which he had dinged a little for the occasion, he repaired to an insurance office, where he well knew his life was considered a matter of some importance.

He desired to speak with the principals upon important business, he said; and on the strength of this assertion, he was ushered forthwith into the “presence chamber.”

“Gentlemen,” he began, in a cool, straightforward manner, “I believe you are rather interested in my longevity; I am Alfred Westwood, formerly a merchant, now a beggar, whose creditors have insured my life in your office.”

The fact being accurately remembered, the individuals thus addressed bent anxious glances on his face, scrutinized him from head to foot, and mentally calculated how many premiums he was “good for,” whilst he proceeded:

“I have come, therefore, to inform you, with feelings of deep regret—regret, more of course on my own account than yours—that I fear you will very speedily be called upon to pay the various policies which have been effected in this case.”

It certainly was a somewhat startling announcement, and the two elderly and one middle-aged gentlemen, to whom he so tranquilly communicated the likelihood of his demise, exclaimed in a breath,

“Good heavens! it is not possible——”

“With all due respect,” returned Mr. Westwood, “permit me to remark, it is not merely possible, but probable; my death must take place ere long, so far as I can see; not, indeed, from any disease that medicine or skill is able to cure, but because of a malady, from the certainly fatal effects of which you, and you alone, can deliver me.”

“We!” ejaculated the trio, once again in unison.

“You, gentlemen,” solemnly responded their visitor.

“And what may the malady be, and how can we avert it?” they demanded.

“The malady is starvation,” he replied, concealing, by a desperate effort, an almost irresistible inclination to smile; “money or a lucrative employment will save my life and your pockets. All my worldly goods, everything, in short, I was possessed of, save a tranquil conscience and the clothes I now wear, I made over to my creditors. What man could do more? and yet they are not satisfied; their malice pursues me so relentlessly that, in consequence of their evil reports, I can obtain no situation, no matter how humble. I am not fitted, either by education or constitution, for manual labour, even were such offered to me: I have no friends able to assist me: I have brains, talent, energy,—would prove invaluable to any person requiring an active, indefatigable clerk. I desire not, indeed, to make a new character for myself, but to demonstrate that which has been asserted of my present one is false. I have been unfortunate and wish to repair the past; but every office is closed before me, every mortal seems prejudiced against me. People will not permit me to work, I cannot beg; in fine, gentlemen, I must starve unless you, for your own interests, aid me in this dilemma. Recollect, I do not ask you to do this for my sake, but for your own; for I know quite enough of human nature to believe self is the motive power that turns the world, is the mainspring of men’s actions, is, in brief, the sole reason why, although I do stand in need of sympathy and compassion—though I am a ruined man, though I have been more ‘sinned against than sinning,’ you will help me.”

Westwood was right; he had studied the worst part of human nature intently, and judged it correctly. Had he possessed the eloquence of a Demosthenes, he could not more speedily have struck the feeling in the hearts of his auditors, which he desired to reach, than by thus quietly stating that, if they did not at once lay out a small amount, either of time or money, to save his life, they would most probably have to pay, ere long, a large amount, in consequence of his death. His tale was a likely one enough; his haggard looks confirmed the statement; gold, they were satisfied he had none; it might be perfectly correct that almost insuperable obstacles precluded his obtaining employment; further, he had not flattered them; and even business men are too apt to fall into a trap baited with what, although a few persons term it contemptuous rudeness, uttered either for a motive or from chagrin, most others style frank, truthful sincerity. At all events, one thing was, as he stated, self-evident; it was apparently their interest to serve him, and accordingly, as he was clever, shrewd, and plausible, they speedily did so. The principals of the insurance office to which the creditors had paid premiums, held a sort of perplexed conference together; the end of which was, that a situation, not very lucrative indeed, but still a “start,” was procured for Mr. Alfred Westwood. He received five-and-twenty pounds for pressing necessities, to slightly replenish his wardrobe and enable him once again to appear as a “gentleman.” In short, he got, for the second time, fairly afloat; and when the ci-devant bankrupt found himself again in a position to earn money, he gaily twisted his whiskers, passed the men he had, in plain words, “robbed,” with a high head and confident air, and mentally murmuring, “I was too fast before, I will be slow and sure this time,” commenced, de nouveau, the struggle of existence, not as a pleasant experiment, but as an important reality.

Years passed on: some termed Alfred Westwood a conceited fop, but others affirmed he knew perfectly what he was about: he always dressed well, perpetually kept up appearances, apparently denied himself no gratification, retained one situation until he saw another likely to suit him better, but not a moment longer; became noted for shrewd cleverness and long-headedness; mounted fortune’s ladder cautiously, and, with a vivid memory of his former desperate tumble, never took his foot from one step till he was morally certain of being able to place it on the next, and keep it there; finally, he entered, at a salary of two hundred per annum, the counting house of Mr. John Merapie, as assistant to the principal clerk, and in that capacity did so much more than the above individual had ever dreamt of attempting, that, at the expiration of one brief year, Mr. Westwood found himself next to Mr. Merapie, chief of the establishment,—vice Roger Aymont superannuated, or, in other words, superseded.

And thus, patient reader, it came to pass that, at the not very advanced age of thirty-five, Mr. Alfred Westwood, possessed of a comfortable income, tormented with no incumbrance, in the shape of invalid father, helpless mother, insane brother, delicate sister, or tiresome child, stood enjoying the luxury of his own happy thoughts, as previously chronicled at the commencement of this chapter.

“Westwood” was the word which brought his foot down and his head erect in a second of time—“Westwood.”

“Sir,” responded the individual so addressed; having elicited which mark of attention, Mr. Merapie proceeded:

“I have to attend the Lord Mayor’s banquet this evening, and should therefore feel glad if you could make it convenient to meet my sister, Mrs. Frazer, who, as you have heard, is coming from the North, and see her safely home to the Square.”

Whereupon Mr. Westwood, smiling in his sweetest manner, declared, “Nothing could give him more pleasure.”

“Thank you—wouldn’t trouble you if I could avoid doing so,” returned Mr. Merapie, “but necessity, you are aware——”

“Knows no law,” supplied his clerk with a deferential bow, which bow fairly taking Mr. Merapie away from the outer office, and leaving Mr. Westwood temporary master of the inner one, he at once repaired to the latter, drew two chairs close together opposite the fire-place, leant his head against the back of one and disposed his limbs in a graceful attitude upon the other, crossed his arms majestically over his chest, and remained thus pondering and calculating chances till the time arrived for him to go and make acquaintance with the new comers, Mrs. Frazer and her two children.

Carefully as he might have scanned the pages of a book wherein his fortune was written, did Alfred Westwood scrutinize the faces of the “widowed and the orphaned,” whom he, a stranger, thus met and greeted on their arrival to an almost unfamiliar place.

He beheld a languid, fashionable-looking lady, a boy, in whom her very soul seemed centred, and lastly, a slight pale child, with nothing especially remarkable about her, save a pair of soft mellow eyes, a perfect wilderness of dark curls, and a peculiarly quick intelligent expression of countenance.

Alfred Westwood noted every gesture, feature, word, as he conducted the trio “home;” and when, after having seen them and their luggage safely deposited in Mr. Merapie’s house, situated in what that gentleman termed “The Square,” he turned to walk to his own lodgings by the bright gaslight through some of the countless London streets, he murmured, as if by way of conclusion to some very knotty argument he had long been debating within himself, “They may prove of service to me, perhaps; at all events, I defy them to raise an obstacle in my path.”


:::info About HackerNoon Book Series: We bring you the most important technical, scientific, and insightful public domain books.

This book is part of the public domain. Astounding Stories. (2009). ASTOUNDING STORIES OF SUPER-SCIENCE, FEBRUARY 2026. USA. Project Gutenberg. Release date: February 14, 2026, from https://www.gutenberg.org/cache/epub/77931/pg77931-images.html#Page_99

This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever.  You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org, located at https://www.gutenberg.org/policy/license.html.

:::


12 Essential Financial Market APIs for Real-Time Insights in 2026

2026-02-17 18:55:51

As we move into 2026, the demand for fast, reliable, and real-time financial data continues to grow. Developers, analysts, and financial institutions increasingly rely on accurate market insights to build smarter tools and make informed decisions. Financial APIs are now transforming how market data is accessed, analyzed, and applied across global markets.

Whether you are building a next-generation personal finance application or powering a high-frequency trading system, Stock Market Data APIs and Forex Data APIs play a critical role. These APIs allow seamless access to real-time price movements, historical market data, currency exchange rates, and a wide range of financial indicators through easy integration.

In this guide, we explore the top financial data APIs for 2026, covering both free and premium options to suit different needs and budgets. From leading market data API providers to global financial data platforms, we compare key features, performance, and real-world use cases. This will help you select the right API to support your financial analysis, product development, and innovation goals in an ever-evolving market landscape.

What Are Financial Data APIs?

Financial Data APIs, short for Application Programming Interfaces, are digital connectors that allow software applications to access, retrieve, and process financial information from third-party data providers. They act as the foundation of modern financial products, powering everything from stock trading platforms and forex analytics tools to AI-powered advisors and personal budgeting apps.

At their core, Financial Data APIs enable a smooth and secure flow of data between a financial data provider and an end-user application. They deliver structured information, usually in formats like JSON or XML, making it easy for developers to integrate real-time and historical financial data into websites, mobile apps, spreadsheets, and backend systems.

Think of Financial Data APIs as intelligent middlemen between fast-moving global markets and your software. They transform complex market data into clean, actionable insights that applications can process and respond to in real time, helping users make smarter and faster financial decisions.

Key Use Cases of Financial Data APIs

  • Stock Market Data APIs: Deliver real-time stock prices, historical performance data, dividends, stock splits, and advanced technical indicators for equities and ETFs.
  • Forex Data APIs: Provide live exchange rates, currency conversion values, historical trends, and insights across major, minor, and exotic currency pairs.
  • Investment Data APIs: Aggregate data from multiple asset classes to support portfolio tracking, backtesting strategies, performance monitoring, and risk analysis.
  • Open Banking APIs: Enable secure access to bank account balances, transaction histories, and financial details through trusted third-party applications.
  • Financial Services APIs: Allow seamless integration of banking, insurance, credit scoring, payments, and other financial services into modern digital products.

Why Are APIs Critical to Financial Data Integration?

APIs simplify what was once a complex and time-consuming process of financial data acquisition. By eliminating manual data entry and unreliable scraping methods, APIs allow developers to access high-quality financial data with just a few lines of code.

Through Financial Data APIs, developers can easily retrieve:

  • Up-to-the-second price feeds
  • Historical charts and financial statements
  • Economic calendars and macroeconomic indicators
  • Sentiment data, news, and earnings reports
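To make that concrete, here is a minimal, hedged sketch of what “a few lines of code” typically looks like. The base URL, query parameters, and response fields are placeholders for illustration, not any specific provider’s API; swap in the endpoint and schema of whichever service you choose.

```python
# Minimal sketch of consuming a financial data API. The base URL, query
# parameters, and response fields below are hypothetical placeholders;
# substitute the real endpoint and schema of your chosen provider.
import requests

API_KEY = "YOUR_API_KEY"                               # issued by the provider
BASE_URL = "https://api.example-market-data.com/v1"    # placeholder host

def get_latest_price(symbol: str) -> float:
    """Fetch the latest quote for a ticker and return its price."""
    resp = requests.get(
        f"{BASE_URL}/quote",
        params={"symbol": symbol, "access_key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()          # structured JSON, as described above
    return float(payload["price"])

if __name__ == "__main__":
    print(f"AAPL last price: {get_latest_price('AAPL'):.2f}")
```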

As a result, Financial Data APIs power far more than just trading platforms and analytics dashboards. They play a vital role in machine learning models, automated trading strategies, risk management systems, and regulatory compliance solutions across the financial services ecosystem.

By reducing data maintenance overhead, these APIs free engineering teams to focus on building innovative products, improving user experiences, and delivering new features faster.

Why Financial Data APIs Matter in 2026

In 2026, developers are building smarter, faster, and more adaptive financial systems than ever before, and Financial Data APIs sit at the core of this evolution. These APIs aren’t just about pulling stock prices; they’re key to powering automated workflows, model-driven decision engines, and real-time analytics that scale.

Enabling Model Context Protocols (MCPs) for Scalable AI Integration

Modern AI systems like Claude and other LLMs increasingly rely on Model Context Protocols (MCPs) to interact with tools, APIs, and databases in real time. In this setup, Financial Data APIs act as standardized data inputs, supplying models with the context required for market analysis, financial summarization, and large-scale decision making. This approach separates AI logic from data pipelines, resulting in systems that are more adaptable, secure, and scalable.
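As a rough sketch of how an API becomes a standardized data input for a model, the example below wraps a quote lookup in a small tool function plus a tool description an LLM-facing layer could advertise. The endpoint, field names, and tool schema are illustrative assumptions; the actual wiring into a Model Context Protocol server depends on the MCP SDK you use and is omitted here.

```python
# Schematic sketch of exposing a financial data lookup as a model-callable
# tool. The fetch_quote helper and its endpoint are hypothetical; wiring this
# into an actual Model Context Protocol server is omitted and would follow
# whichever MCP SDK you use.
import json
import requests

def fetch_quote(symbol: str) -> dict:
    """Return a compact, structured context object for a ticker (hypothetical endpoint)."""
    resp = requests.get(
        "https://api.example-market-data.com/v1/quote",   # placeholder provider
        params={"symbol": symbol, "access_key": "YOUR_API_KEY"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Keep only the fields the model needs: small, predictable context.
    return {"symbol": symbol, "price": data["price"], "as_of": data["timestamp"]}

# A tool description the model-facing layer could advertise to the LLM.
QUOTE_TOOL = {
    "name": "get_stock_quote",
    "description": "Fetch the latest price for a ticker symbol.",
    "input_schema": {
        "type": "object",
        "properties": {"symbol": {"type": "string"}},
        "required": ["symbol"],
    },
}

if __name__ == "__main__":
    print(json.dumps(fetch_quote("AAPL"), indent=2))
```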

Powering Algorithmic Trading and Real-Time Decision Engines

Algorithmic trading platforms depend on speed and accuracy. Financial Data APIs deliver low-latency, high-frequency market data that enables buy and sell decisions within milliseconds. From feeding live price data into quantitative strategies to running real-time arbitrage systems, these APIs provide the performance and reliability modern trading systems require.
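The decision logic itself can be simple even when the data feed is not. The toy sketch below turns a stream of prices into buy/sell/hold signals with a short/long moving-average crossover; the prices are made up, and a production strategy would consume live ticks from a market data API and be considerably more careful.

```python
# Toy signal logic: compare a short and a long simple moving average over a
# price series. The prices are made up; a production system would stream
# live ticks from a market data API instead.
from collections import deque

def sma(values, window):
    """Simple moving average of the last `window` values (None until enough data)."""
    if len(values) < window:
        return None
    return sum(list(values)[-window:]) / window

def crossover_signal(prices, short=5, long=20):
    """Return 'buy', 'sell', or 'hold' based on a short/long SMA crossover."""
    short_now, long_now = sma(prices, short), sma(prices, long)
    if short_now is None or long_now is None:
        return "hold"
    if short_now > long_now:
        return "buy"
    if short_now < long_now:
        return "sell"
    return "hold"

if __name__ == "__main__":
    prices = deque(maxlen=200)
    for p in [101, 102, 100, 103, 105, 104, 106, 108, 107, 109,
              110, 111, 109, 112, 114, 113, 115, 116, 118, 117, 119]:
        prices.append(p)
        print(p, crossover_signal(prices))
```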

Automating Manual Processes and Data Entry

Manual tasks like copying stock prices into spreadsheets or tracking currency conversions are no longer needed. Financial Data APIs automate these processes completely. With just a few lines of code, developers can:

  • Pull real-time financial data into accounting systems
  • Auto-populate dashboards with live stock or commodity prices
  • Sync macroeconomic indicators directly into investment models
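A minimal sketch of that kind of automation, assuming a hypothetical FX endpoint: fetch current rates on a schedule and append them to a CSV that a dashboard or accounting tool reads.

```python
# Sketch of replacing manual spreadsheet entry: fetch current FX rates and
# append them to a CSV that a dashboard or accounting tool reads. The
# endpoint and response shape are hypothetical placeholders.
import csv
import datetime
import requests

def refresh_rates_csv(path="rates.csv", base="USD", symbols=("EUR", "GBP", "JPY")):
    resp = requests.get(
        "https://api.example-fx-data.com/v1/latest",     # placeholder provider
        params={"base": base, "symbols": ",".join(symbols), "access_key": "YOUR_API_KEY"},
        timeout=10,
    )
    resp.raise_for_status()
    rates = resp.json()["rates"]                          # e.g. {"EUR": 0.92, ...}
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for symbol in symbols:
            writer.writerow([stamp, base, symbol, rates[symbol]])

if __name__ == "__main__":
    refresh_rates_csv()
```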

In short, Financial Data APIs act as the connective layer between raw market data, intelligent models, and high-performance applications, helping teams build faster, automate more, and scale smarter in 2026.


How to Choose the Best Financial Data API

With a wide range of financial market data solutions available in 2026, choosing the right API starts with understanding your technical needs and business goals. Whether you are building a simple dashboard, an algorithmic trading engine, or a full-scale investment research platform, your API choice will directly impact speed, accuracy, and data depth.

Key Evaluation Criteria

Data Coverage

  • Does the API support your required asset classes such as equities, forex, commodities, crypto, or bonds?
  • Does it provide real-time financial data, historical market data, or both?
  • Is the data coverage global or region-specific?

Latency and Refresh Frequency

  • Low latency is critical for live dashboards and high-frequency trading.
  • Look for fast response times and frequent data updates, such as per second or per minute.

Integration and Developer Experience

  • RESTful APIs with clear and complete documentation are ideal.
  • Check for support across major programming languages and platforms.
  • SDKs, sample code, and quick-start guides add strong value.

Scalability

  • Can the API handle increasing request volumes as your application grows?
  • Review rate limits, concurrency support, and enterprise-grade options.

Security and Compliance

  • Especially important for financial services APIs and open banking APIs.
  • Ensure encrypted connections using HTTPS, OAuth2 support, and compliance with standards like GDPR, PSD2, or SOC 2.

Cost and Licensing

  • Free stock APIs work well for MVPs and early testing.
  • For production use, compare pricing based on usage, features, and geographic coverage.
  • Choose providers with transparent pricing and flexible plans.

Matching APIs to Use Cases

For developers needing real-time market data with broad asset coverage: Marketstack and CurrencyLayer remain two of the strongest all-around options in 2026. Marketstack offers extensive stock, index, and commodity data from global exchanges, while CurrencyLayer delivers reliable and accurate forex data with real-time exchange rates. Both are scalable, well-documented, and well suited for production-grade financial applications.

For startups, hobbyists, and smaller-scale projects: Exchangerate.host is an excellent choice. Its “Starter” plan is tailored for startups and early-stage projects, making it ideal for lightweight financial apps, currency converters, and proof-of-concept builds.

For quantitative analysts and financial modeling teams: APIs such as Tiingo, Quandl, and Alpha Vantage provide deep historical datasets, technical indicators, and fundamental data. These are well suited for backtesting strategies, financial research, and quantitative model development.

For AI-driven systems and MCP-based integrations: Choose APIs that offer low-latency responses, structured JSON formats, and bulk data endpoints. These capabilities are essential for feeding financial context into AI models through Model Context Protocols (MCPs), enabling real-time and intelligent decision making in systems like Claude and other LLMs. Marketstack stands out here due to its fast response times, clean JSON structure, and broad market coverage, making it highly compatible with MCP pipelines.

By aligning your technical requirements and product goals with the right API capabilities, you can confidently select a financial data API that supports your strategy, performance needs, and long-term growth in 2026.

Marketstack API: A Top All-Around Pick

When it comes to versatility, performance, and ease of integration, Marketstack continues to be one of the top financial data APIs in 2026. Its developer-friendly design and broad data coverage make it suitable for everything from fintech startups to enterprise-grade trading platforms.

Why Developers Choose Marketstack

Marketstack provides real-time and historical market data for stocks, indices, ETFs, and commodities across 70+ global exchanges. It is built for scale, offering a powerful yet intuitive RESTful API with clear documentation. Whether you are building a live stock tracker, powering a robo-advisor, or integrating financial data into an MCP-based AI workflow, Marketstack adapts easily to different use cases.

Key Features

  • Real-time market data from major global exchanges
  • Historical data spanning decades for backtesting and long-term analysis
  • Fast, lightweight JSON responses ideal for low-latency applications
  • Flexible endpoints for tickers, intraday pricing, historical data, and exchange metadata
  • Simple API key-based authentication with transparent usage tiers
  • Reliable infrastructure suitable for production systems and live trading dashboards
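A hedged example of the EOD endpoint in practice. The /v1/eod path, access_key parameter, and the "data" array in the response reflect Marketstack's documented REST API as best understood here; treat them as assumptions and confirm against the current documentation.

```python
# Hedged sketch of pulling end-of-day bars from Marketstack. The /v1/eod
# endpoint, access_key parameter, and "data" response field are assumptions
# based on Marketstack's documented REST API; confirm against current docs.
import requests

MARKETSTACK_KEY = "YOUR_MARKETSTACK_KEY"

def latest_eod(symbol: str, limit: int = 5):
    resp = requests.get(
        "https://api.marketstack.com/v1/eod",
        params={"access_key": MARKETSTACK_KEY, "symbols": symbol, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

if __name__ == "__main__":
    for bar in latest_eod("AAPL"):
        print(bar["date"], bar["close"])
```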

Ideal Use Cases

  • Financial dashboards tracking global equity markets
  • Trading bots and algorithmic strategies requiring real-time stock prices
  • Investment research tools focused on historical price trends and volume analysis

Pricing Tiers

Marketstack offers multiple pricing plans designed to support everything from small projects to high-volume production environments, allowing teams to scale as their data needs grow.

Marketstack’s strong balance of speed, structure, and scalability makes it one of the most recommended financial data APIs for developers building real-time and data-driven financial applications in 2026.

Exchangerate.host API: Best for Startups and Indie Developers

If you are just getting started with financial market APIs or building a lightweight currency application, Exchangerate.host remains one of the most developer-friendly and cost-effective options in 2026. It delivers reliable exchange rate data with transparent pricing and generous free and starter tiers.

Why It’s Ideal for Beginners and Indie Developers

Exchangerate.host is built with simplicity and accessibility in mind. It supports real-time and historical exchange rates across hundreds of currency pairs, along with crypto conversions and regular updates. With fully usable free and starter plans, it is a strong choice for prototyping, side projects, and early-stage fintech products.

The real value comes from the Starter Plan, which allows developers to scale beyond hobby use without committing to a large budget.

Starter Plan Highlights

  • 5,000 requests per month
  • Hourly data updates
  • Access to historical exchange rates
  • Standard support
  • HTTPS encryption
  • Currency conversion
  • Source currency switching
  • Affordable pricing at $7.99 per month
  • Additional requests priced at $0.003196 per request

Ideal Use Cases

  • Currency converters and travel budgeting apps
  • MVPs for global payment and remittance tools
  • Early-stage forex dashboard prototypes
  • Educational or no-code financial automation projects
  • AI or chatbot tools experimenting with MCP-based currency queries

Exchangerate.host is well suited for developers who need clean, consistent, and flexible access to foreign exchange data without overpaying or facing strict rate limits. It provides a smooth entry point into financial data APIs, with the flexibility to grow as your application scales in 2026.
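A quick sketch of a conversion call. The /convert endpoint and its from/to/amount parameters follow the interface exchangerate.host has historically documented; current plans may also require an access_key, so verify the exact parameters against the latest docs before relying on them.

```python
# Hedged sketch of a currency conversion call against exchangerate.host.
# The /convert endpoint and from/to/amount parameters follow the service's
# historically documented interface; current plans may also require an
# access_key, so verify against the latest docs before relying on this.
import requests

def convert(amount: float, from_ccy: str, to_ccy: str) -> float:
    resp = requests.get(
        "https://api.exchangerate.host/convert",
        params={"from": from_ccy, "to": to_ccy, "amount": amount,
                "access_key": "YOUR_ACCESS_KEY"},   # required on keyed plans (assumption)
        timeout=10,
    )
    resp.raise_for_status()
    return float(resp.json()["result"])

if __name__ == "__main__":
    print(convert(100, "USD", "EUR"))
```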

Pricing Tiers

CurrencyLayer API: Best for Multi-Currency Platforms

CurrencyLayer is widely recognized as one of the most reliable APIs for real-time and historical foreign exchange data. In 2026, it continues to be trusted by thousands of businesses and developers worldwide, offering institutional-grade accuracy powered by data from banks and financial institutions.

Why Developers Trust CurrencyLayer

Built specifically for forex-focused use cases, CurrencyLayer delivers high-frequency updates, precise exchange rates, and robust endpoints through a clean, RESTful API. It is an excellent choice for teams building currency conversion tools, international pricing systems, or forex trading dashboards where real-time accuracy is critical.

Key Features

  • Real-time exchange rates for 168+ global currencies
  • High-accuracy data sourced from commercial banks and financial data aggregators
  • Historical forex data with 19+ years of coverage
  • Currency conversion and multi-currency support in a single API call
  • HTTPS encryption for secure data transmission
  • Scalable pricing plans with tiered request volumes
  • Easy integration with popular development stacks

Ideal Use Cases

  • Forex trading platforms and signal tools
  • International e-commerce pricing and checkout systems
  • Budgeting and expense tracking apps for frequent travelers
  • AI-powered financial advisors with real-time currency conversion
  • MCP-integrated bots requiring accurate currency context

CurrencyLayer subscription plans are available in Basic, Professional, and Enterprise tiers for forex data access.

Whether you are building a global application that converts prices in real time or training models on historical forex trends, CurrencyLayer delivers the speed, accuracy, and reliability needed to support modern multi-currency financial platforms in 2026.
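A hedged sketch of fetching live quotes. The /live endpoint, the access_key and currencies parameters, and the concatenated-pair keys in the quotes object (for example "USDEUR") are assumptions based on CurrencyLayer's documented API; confirm before relying on them.

```python
# Hedged sketch of fetching live quotes from CurrencyLayer. The /live endpoint,
# access_key/currencies parameters, and the concatenated-pair "quotes" keys
# (e.g. "USDEUR") are assumptions based on CurrencyLayer's documented API.
import requests

def live_quotes(currencies=("EUR", "GBP", "JPY")):
    resp = requests.get(
        "https://api.currencylayer.com/live",
        params={"access_key": "YOUR_CURRENCYLAYER_KEY",
                "currencies": ",".join(currencies)},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    # Quotes are keyed as source+target, e.g. "USDEUR" when the source is USD.
    return payload.get("quotes", {})

if __name__ == "__main__":
    for pair, rate in live_quotes().items():
        print(pair, rate)
```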

Fixer.io API: Most Trusted for Currency Conversion Accuracy

When accuracy and trust are critical, Fixer.io remains one of the most reliable exchange rate APIs in 2026. Backed by official sources such as the European Central Bank, it delivers highly dependable forex data with both real-time and historical coverage.

Why Fixer.io Appeals to Precision-Focused Projects

Fixer.io is designed for teams that need trustworthy currency conversion in applications where data accuracy matters most, including accounting software, payment gateways, and financial dashboards. Its intuitive setup, stable infrastructure, and consistent data quality make it a dependable choice across industries.

Key Features

  • Real-time foreign exchange rates for 170+ global currencies
  • Data sourced from central banks and leading financial institutions
  • Historical exchange rate data dating back to 1999
  • Secure HTTPS connections with reliable daily updates
  • Simple RESTful API with clear documentation
  • Scalable pricing plans for projects of all sizes

Ideal Use Cases

  • Financial reporting platforms requiring precise currency conversion
  • Accounting and invoicing tools supporting multiple currencies
  • Payment systems needing accurate exchange rates at checkout
  • International SaaS products calculating cross-border pricing
  • AI workflows where clean and reliable currency inputs are essential

Pricing Tiers

Fixer.io offers flexible pricing tiers designed to support everything from small applications to enterprise-level financial platforms.

Fixer.io is a strong choice when your application’s credibility depends on delivering exchange rate data that is accurate, auditable, and built for production use in 2026.
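A minimal, hedged request sketch. The data.fixer.io/api/latest endpoint, the access_key and symbols parameters, and the base/rates response fields are assumptions drawn from Fixer's documented API; entry plans typically quote against a EUR base.

```python
# Hedged sketch of pulling the latest rates from Fixer.io. The data.fixer.io
# /api/latest endpoint, access_key and symbols parameters, and the "rates"
# response field are assumptions based on Fixer's documented API; on entry
# plans the base currency is typically EUR.
import requests

def latest_rates(symbols=("USD", "GBP", "JPY")):
    resp = requests.get(
        "https://data.fixer.io/api/latest",
        params={"access_key": "YOUR_FIXER_KEY", "symbols": ",".join(symbols)},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    return payload["base"], payload["rates"]

if __name__ == "__main__":
    base, rates = latest_rates()
    print(f"Base currency: {base}")
    for ccy, rate in rates.items():
        print(ccy, rate)
```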

Financial Modeling Prep API: Best for Deep Financial Data

For developers and analysts who need in-depth access to company fundamentals, earnings reports, and structured financial statements, Financial Modeling Prep (FMP) remains a powerful all-in-one API platform in 2026. With over 30 years of financial history combined with real-time market data, FMP delivers the depth, accuracy, and flexibility required for advanced financial modeling, research, and analysis.

Why It’s a Go-To for Financial Analysts and Builders

Financial Modeling Prep (FMP) delivers a rich and structured data stream for U.S. and global equities. It combines live stock prices with balance sheets, cash flow statements, income statements, valuation metrics, and more. With RESTful endpoints and data available in both JSON and CSV formats, FMP supports everyone from solo quant developers to enterprise-grade financial platforms.

Key Features

  • Real-time stock prices and intraday market data
  • 30+ years of historical financial statements and ratios
  • Company fundamentals including margins, debt ratios, ROE, ROA, and more
  • Access to macroeconomic indicators and forex rates
  • Market news, press releases, and analyst sentiment
  • Structured quarterly and annual financial statement endpoints
  • Developer-friendly documentation with code samples
  • CSV download support for spreadsheet-based workflows

Ideal Use Cases

  • Stock screeners and equity research platforms
  • Automated valuation models and DCF analysis
  • Financial report automation for SaaS companies and investment firms
  • MCP-integrated workflows requiring structured financial context, such as balance sheet comparisons
  • Custom dashboards combining fundamentals, ratios, and technical indicators

Pricing Tiers (Billed Annually)

FMP is designed for teams that need clarity and control over corporate financial data. If your product depends on fundamentals rather than short-term price movements, this API provides the historical depth and data structure needed to support confident, data-driven decisions in 2026.

Alpha Vantage API: Best for Quantitative and Technical Analysis

Alpha Vantage continues to be a popular choice in 2026 among quants, algorithmic traders, and developers building tools that depend heavily on technical indicators and time-series data. With support for equities, forex, cryptocurrencies, and technical overlays, it is built for flexibility and analytical precision.

Why It’s Popular Among Quantitative Developers

Alpha Vantage offers one of the largest libraries of ready-to-use technical indicators, from moving averages and RSI to MACD and Bollinger Bands. This allows developers to quickly prototype, test, and deploy signal-based trading systems or build analytics dashboards that surface actionable insights.

Key Features

  • Real-time and historical market data for stocks, forex, and crypto
  • 50+ built-in technical indicators including RSI, MACD, Bollinger Bands, and SMA
  • Intraday, daily, weekly, and monthly time-series data
  • Fundamental data for U.S. equities
  • JSON and CSV response formats
  • Free API key with generous usage limits
  • Well-supported Python, JavaScript, and Excel integrations

Ideal Use Cases

  • Backtesting strategies using historical technical patterns
  • Generating buy and sell signals with minimal custom logic
  • Building low-code or no-code trading tools with embedded analytics
  • Educational platforms and tutorials for learning technical analysis
  • MCP-enhanced modeling systems needing fast access to indicators

Whether you are experimenting with a new trading strategy or enhancing a research platform, Alpha Vantage offers a fast, reliable, and accessible entry point into quantitative financial data in 2026.
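As a quick illustration, the sketch below pulls a daily time series and computes a simple moving average locally. The query URL, the TIME_SERIES_DAILY function, and the response keys follow Alpha Vantage's widely used query API, but treat them as assumptions and check the official documentation for current parameter names and rate limits.

import requests

ALPHA_VANTAGE_KEY = "your-api-key"  # placeholder; use your real key

def daily_closes(symbol):
    # Endpoint and parameters assumed from Alpha Vantage's public query API.
    resp = requests.get(
        "https://www.alphavantage.co/query",
        params={"function": "TIME_SERIES_DAILY", "symbol": symbol, "apikey": ALPHA_VANTAGE_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    series = resp.json().get("Time Series (Daily)", {})
    # Oldest to newest closing prices.
    return [float(day["4. close"]) for _, day in sorted(series.items())]

def sma(values, window=20):
    # Simple moving average over the trailing window.
    return sum(values[-window:]) / min(window, len(values)) if values else None

closes = daily_closes("IBM")
value = sma(closes)
print(f"Latest 20-day SMA: {value:.2f}" if value is not None else "No data returned")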

Twelve Data API: Best for Comprehensive Market Coverage

Twelve Data continues to stand out in 2026 as a full-spectrum financial data API, offering real-time and historical data across a wide range of asset classes. Whether you are working with global stocks, ETFs, indices, forex, or cryptocurrencies, Twelve Data delivers broad coverage, advanced analytics, and high-frequency updates through a single, unified API.

Why Developers Love Twelve Data

With support for 100,000+ financial instruments across 120+ countries, Twelve Data is well suited for developers building scalable platforms that span multiple asset classes. Its flexible endpoints, update intervals as fast as one second, and developer-first approach make it a strong choice for both retail and institutional-grade applications.

Key Features

  • Real-time and historical price data for stocks, ETFs, forex, crypto, and commodities
  • 100+ built-in technical indicators including RSI, EMA, MACD, and Bollinger Bands
  • WebSocket support for live market data streaming
  • Batch requests, candlestick pattern recognition, and global time-series data
  • SDKs for Python, JavaScript, PHP, and more
  • Excel and Google Sheets add-ons for spreadsheet workflows
  • JSON-formatted responses for easy integration

Pricing Overview

Basic (Free):

  • 800 API requests per day
  • U.S. stocks, forex, and crypto data
  • Access to technical indicators

Grow – $79 per month:

  • 377 API credits per month
  • No daily request limit
  • Access to 24+ markets and selected fundamentals

Pro – $229 per month:

  • 1,597 API credits per month
  • 47 additional markets
  • Batch requests, market movers, extended fundamentals

Ultra – $999 per month:

  • Full market and fundamentals access
  • Mutual fund and ETF breakdowns
  • Advanced analysis data and priority support

Enterprise – $1,999 per month:

  • Enterprise-grade security
  • Dedicated support
  • 99.99% SLA
  • 17,000+ API credits per month

Ideal Use Cases

  • Multi-asset trading dashboards
  • Global portfolio monitoring platforms
  • Market data infrastructure for financial applications
  • Algorithmic trading systems using multiple data feeds
  • AI or MCP-based tools analyzing diverse instruments in real time

If you are building a financial product that requires broad market coverage without compromising on speed or accuracy, Twelve Data offers a scalable and powerful solution ready for 2026 growth.

Finnhub API: Best for Fast Prototyping and Broad Market Coverage

Finnhub continues to be a strong choice in 2026 for developers who need quick access to real-time financial data across multiple asset classes. Its generous free tier, global coverage, and well-structured documentation make it ideal for fast prototyping and scalable product development.

Why Developers Choose Finnhub

Unlike unofficial wrappers or scraped data sources, Finnhub is an officially maintained API platform. It provides real-time stock quotes, forex rates, crypto data, economic indicators, and company fundamentals through secure and scalable REST endpoints. Developers also gain access to market news, earnings calendars, sentiment analysis, analyst ratings, and ESG scores from a single API.

Key Features

  • Real-time and historical stock price data
  • Forex, crypto, and international market coverage
  • Company fundamentals, earnings data, and key financial ratios
  • Economic indicators and macroeconomic data
  • Market news, analyst ratings, and sentiment scores
  • 60 API calls per minute on the free plan
  • SDKs for Python, JavaScript, and other major languages
  • WebSocket support for real-time streaming on paid plans

Ideal Use Cases

  • Financial dashboards and mobile investment apps
  • No-code and low-code finance tools with rapid refresh needs
  • Early-stage robo-advisors and portfolio tracking platforms
  • Educational applications focused on trading and market analysis

Finnhub API Pricing Overview

Finnhub offers multiple pricing tiers based on data type and usage requirements:

  • All-in-One: Free to $2,500
  • Fundamental Data: $50 per month per market to $200 per month per market
  • ETFs, Funds, and Indices: $500 per month to $1,000 per month
  • U.S. and Global Market Data: $49.99 per month to $200 per month
  • Crypto Data: $49.99 per month to $129.99 per month

Finnhub is a strong option for teams that want fast setup, broad market access, and the flexibility to scale from prototype to production in 2026.

Tiingo API: Best for Historical Accuracy and Backtesting

When your application relies on clean, accurate, and highly granular historical data, Tiingo remains one of the most respected providers in 2026. Built with quantitative developers and institutional users in mind, Tiingo places strong emphasis on data integrity and long-term consistency across time-series datasets.

Why Quant Developers Choose Tiingo

Tiingo is known for its high-quality historical price feeds and carefully adjusted datasets. It provides access to equities, ETFs, mutual funds, and alternative data such as news and sentiment. This makes it especially popular with backtesting teams and quantitative researchers who require confidence in data accuracy over long time horizons.

Key Features

  • End-of-day and intraday historical price data with 30+ years of coverage
  • Real-time pricing for stocks, ETFs, mutual funds, and crypto
  • High-resolution time-series data optimized for backtesting workflows
  • News sentiment API with up to 3 months of queryable history
  • Optional access to fundamental data with commercial licensing
  • Developer-first approach with flat-rate pricing and strong documentation

Pricing Overview

Starter (Free):

  • 1,000 API requests per day
  • Up to 500 symbols per month
  • 1 GB monthly bandwidth

Power Plan – $30 per month or $300 per year:

  • 100,000 API requests per day
  • 96,000+ symbols per month
  • 40 GB monthly bandwidth

Business Plan – $50 per month or $499 per year:

  • Same access as Power Plan
  • Licensed for internal commercial use

Ideal Use Cases

  • Backtesting long and short equity strategies
  • Portfolio and factor model research using historical data
  • Academic and financial research projects
  • Risk modeling systems requiring precise time alignment

Tiingo is a strong choice when your work depends on premium-quality historical data and long-term accuracy. For institutional analytics, quantitative research, or rigorous backtesting, it delivers the structured and reliable datasets advanced strategies need in 2026.

Polygon.io API: Best for High-Frequency Trading and Real-Time U.S. Markets

Polygon.io remains a top choice in 2026 for developers who need speed, scale, and ultra-low-latency access to U.S. market data. It is purpose-built for trading systems, real-time dashboards, and execution platforms where tick-level precision and WebSocket streaming are essential.

Why Traders and Engineers Choose Polygon.io

Designed for algorithmic trading, brokerage platforms, and institutional-grade systems, Polygon delivers real-time market data with deep access to trades, quotes, aggregates, and market depth. It supports U.S. equities, options, forex, and crypto through a consistent API structure backed by high-performance infrastructure.

Key Features

  • Real-time data for U.S. stocks, options, forex, and crypto
  • Tick-level access to trades, quotes, and market depth
  • WebSocket support for streaming live market updates
  • Aggregated OHLC data at 1-minute, hourly, daily, or custom intervals
  • Historical trades and quotes with 20+ years of coverage
  • Market snapshots, reference data, corporate actions, and fundamentals
  • RESTful API with detailed and well-maintained documentation

Pricing Overview

Individual Plans (Monthly)

Basic – Free:

  • 5 API calls per minute
  • Up to 2 years of historical data
  • End-of-day prices, reference, and fundamental data

Starter – $29 per month:

  • Unlimited API calls
  • 5-year historical data
  • 15-minute delayed market data
  • Minute-level aggregates

Developer – $79 per month:

  • 10-year historical data
  • Second-level aggregates
  • WebSocket and trade-level access

Advanced – $199 per month:

  • Real-time market data
  • 20+ years of historical data
  • Tick-level trades and quotes
  • Unlimited API access

Business Plans

Business – $1,999 per month:

  • Unlimited API access
  • Real-time fair market value data
  • 20+ years of history
  • No exchange fees
  • WebSocket streaming
  • Full trade and quote access
  • Commercial usage license

Enterprise – Custom Pricing:

  • Dedicated SLAs
  • Custom data feeds
  • Slack support
  • Implementation services and team onboarding

Ideal Use Cases

  • High-frequency trading bots and execution platforms
  • Real-time stock and options dashboards
  • Market surveillance and dark pool monitoring tools
  • Advanced volatility modeling and tick-level backtesting

If your application demands ultra-fast execution, granular U.S. market data, and institutional-grade reliability, Polygon.io is one of the strongest API choices for 2026.

Intrinio API: Best for Financial Statements and Valuation Metrics

Intrinio is built for fintech developers and enterprises that need reliable access to structured financial statements, valuation metrics, and SEC filings. In 2026, it remains a strong choice for teams building investment platforms, analytical dashboards, and custom financial models.

Why Fintech Platforms Prefer Intrinio

Intrinio stands out for its clean, normalized financial data. It provides full income statements, balance sheets, and cash flow reports, along with pre-calculated valuation ratios such as EV/EBITDA and P/E. This significantly reduces the effort required to build investor-grade analytics and valuation tools.

Beyond core financials, Intrinio also offers advanced modules like peer comparison data, real-time IEX stock prices, and institutional-grade ESG scores, making it suitable for both enterprise applications and investment-focused platforms.

Key Features

  • Standardized financial statements including 10-Q and 10-K filings for thousands of U.S. companies
  • Valuation metrics covering price ratios, profitability, liquidity, efficiency, and leverage
  • Direct access to SEC filings via EDGAR
  • Real-time and end-of-day stock prices from IEX
  • Peer group analysis and comparison tools
  • Developer-friendly API with detailed documentation and SDKs
  • Enterprise licensing and custom data packages available

Ideal Use Cases

  • Equity research platforms and valuation modeling tools
  • Robo-advisors calculating fair value and market comparables
  • Credit scoring and financial health analysis systems
  • Compliance monitoring tools and audit dashboards

Pricing Tiers

Intrinio offers flexible pricing tiers and custom enterprise plans based on data modules and usage requirements.

Intrinio delivers the structure, depth, and flexibility required for advanced financial analysis. For teams focused on equity research, valuation modeling, or embedded financial insights, it provides a dependable data foundation for 2026 and beyond.


Quandl API: Best for Economic and Alternative Financial Data

Quandl, a subsidiary of Nasdaq, remains a trusted source in 2026 for alternative financial data, macroeconomic indicators, and premium institutional datasets. Its API is widely used by analysts, economists, data scientists, and financial researchers who need insights beyond standard market prices.

Why Researchers and Analysts Choose Quandl

Quandl stands out for its access to unique and hard-to-find datasets, including government macroeconomic releases, ESG scores, central bank forecasts, and industry-specific indexes. With coverage spanning commodities, real estate, energy, employment, and global economic indicators, it enables deeper, multi-dimensional analysis that market price data alone cannot provide.

Key Features

  • Extensive macroeconomic and alternative financial datasets
  • Premium databases from providers like Nasdaq, Zacks, CoreLogic, and others
  • Coverage across equities, commodities, futures, options, and global economics
  • Data available in JSON, CSV, and XML formats
  • Native integrations for Python, R, and Excel
  • Freemium access to open datasets with subscription-based premium data
  • Documentation designed for analysts, researchers, and academic use

Ideal Use Cases

  • Economic forecasting dashboards using long-term time-series data
  • Quantitative research combining financial and non-financial signals
  • Macro-driven or fundamental portfolio models
  • ESG scoring and corporate responsibility analysis
  • MCP-powered AI systems synthesizing multi-source financial context

Quandl is well-suited for teams that want to look beyond charts and price movements and focus on the macroeconomic and structural forces shaping financial markets.

Conclusion: Making the Right Financial Data API Choice in 2026

Choosing the right financial data API in 2026 is a strategic decision, not just a technical one. The APIs you integrate directly affect the accuracy, speed, and intelligence of your product, whether you are building a trading system, investment platform, AI assistant, or enterprise analytics solution.

From ultra-low-latency tick-level feeds to deep fundamental and macroeconomic datasets, modern APIs now deliver capabilities that were once limited to hedge funds and large institutions. Today, startups, solo developers, and researchers can access the same infrastructure, often with free tiers and scalable enterprise plans.

Final Takeaways

  • For a reliable, all-around stock market data solution, Marketstack remains a strong choice, especially for global equity applications.
  • For currency data and exchange rates, ExchangeRate.host, Fixer.io, and CurrencyLayer offer dependable, scalable solutions with generous entry-level plans.
  • For AI-driven systems, financial modeling, and deep analytics, platforms like Financial Modeling Prep, Intrinio, Twelve Data, Polygon.io, and Quandl provide the structure and depth advanced products require.
  • As your application grows, compare enterprise-grade APIs based on latency, coverage, support, and licensing to ensure long-term scalability.

No matter where you are in your development journey, choosing the right financial data API enables smarter decisions, faster execution, and more trustworthy user experiences. Build carefully, scale confidently, and innovate boldly.

FAQs

What is the best financial data API for global stock market coverage?

Marketstack is one of the best all-around options for global equity data. It supports 70+ exchanges, offers real-time and historical data, and provides a clean RESTful API suitable for both startups and enterprise platforms.

How do I choose between free and paid financial data APIs?

Start by matching your technical needs, such as real-time access, asset coverage, and update frequency, with what free tiers offer. As usage grows, look for APIs with transparent pricing, scalable limits, and strong developer support.

What’s the best API for financial statements and valuation metrics?

Intrinio and Financial Modeling Prep both provide structured access to income statements, balance sheets, cash flow data, and valuation ratios, making them ideal for equity research and automated valuation models.

What’s the difference between tick-level, intraday, and end-of-day data?

  • Tick-level data captures every trade or quote
  • Intraday data aggregates prices at intervals like 1-minute or 15-minute
  • End-of-day data provides daily open, high, low, and close prices

Tick-level data suits HFT and backtesting, while EOD data is better for long-term analysis.

Which APIs are best for currency conversion tools?

Fixer.io and ExchangeRate.host are popular choices due to their simple integration, broad currency coverage, and affordable pricing. Both support real-time and historical forex data for global applications.

From LLMs to Intelligent Agents: How Memory and Planning Turn Chatbots into Doers

2026-02-17 18:49:00

The Day Your LLM Stops Talking and Starts Doing

There’s a moment in every LLM project where you realize the “chat” part is the easy bit.

The hard part is everything that happens between the user request and the final output:

  • gathering missing facts,
  • choosing which tools to call (and in what order),
  • handling failures,
  • remembering prior decisions,
  • and not spiraling into confident nonsense when the world refuses to match the model’s assumptions.

That’s the moment you’re no longer building “an LLM app.”

You’re building an agent.

In software terms, an agent is not a magical model upgrade. It’s a system design pattern:

Agent = LLM + tools + a loop + state

Once you see it this way, “memory” and “planning” stop being buzzwords and become engineering decisions you can reason about, test, and improve.

Let’s break down how it works.

1) What Is an LLM Agent, Actually?

A classic LLM app looks like this:

user_input -> prompt -> model -> answer

An agent adds a control loop:

user_input
  -> (state) -> model -> action
  -> tool/environment -> observation
  -> (state update) -> model -> action
  -> ... repeat ...
  -> final answer

The difference is subtle but massive:

  • A chatbot generates text.
  • An agent executes a policy over time.

The model is the policy engine; the loop is the runtime.

This means agents are fundamentally about systems: orchestration, state, observability, guardrails, and evaluation.

2) Memory: The Two Buckets You Can’t Avoid

Human-like “memory” in agents usually becomes two concrete buckets:

2.1 Short-Term Memory (Working Memory)

Short-term memory is whatever you stuff into the model’s current context:

  • the current conversation (or the relevant slice of it),
  • tool results you just fetched,
  • intermediate notes (“scratchpad”),
  • temporary constraints (deadlines, budgets, requirements).

Engineering reality check: short-term memory is limited by your context window and by model behavior.

Two classic failure modes show up in production:

  1. Context trimming: you cut earlier messages to save tokens → the agent “forgets” key constraints.
  2. Recency bias: even with long contexts, models over-weight what’s near the end → old-but-important details get ignored.

If you’ve ever watched an agent re-ask for information it already has, you’ve seen both.

2.2 Long-Term Memory (Persistent Memory)

Long-term memory is stored outside the model:

  • vector DB embeddings,
  • document stores,
  • user profiles/preferences (only if you can do it safely and legally),
  • task history and decisions,
  • structured records (tickets, orders, logs, CRM entries).

The mainstream pattern is: retrieve → inject → reason.

If that sounds like RAG (Retrieval-Augmented Generation), that’s because it is. Agents just make RAG operational: retrieval isn’t only for answering questions—it’s for deciding what to do next.

The part people miss: memory needs structure

A pile of vector chunks is not “memory.” It’s a landfill.

Practical long-term memory works best when you store:

  • semantic content (embedding),
  • metadata (timestamp, source, permissions, owner, reliability score),
  • a policy for when to write/read (what gets saved, what gets ignored),
  • decay or TTL for things that stop mattering.

If you don’t design write/read policies, you’ll build an agent that remembers the wrong things forever.
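To make that concrete, here is a minimal sketch of a long-term store with a reliability threshold on writes and TTL-based decay on reads. The field names and thresholds are illustrative assumptions, not a prescribed schema; in production the list would be a vector database plus a metadata store.

import time

class LongTermMemory:
    def __init__(self, min_reliability=0.6, default_ttl=30 * 24 * 3600):
        self.items = []                     # stand-in for a vector DB plus metadata store
        self.min_reliability = min_reliability
        self.default_ttl = default_ttl      # seconds before an item is considered stale

    def write(self, text, source, reliability, ttl=None):
        # Write policy: ignore low-confidence facts instead of hoarding them forever.
        if reliability < self.min_reliability:
            return False
        self.items.append({
            "text": text, "source": source, "reliability": reliability,
            "ts": time.time(), "ttl": ttl or self.default_ttl,
        })
        return True

    def read(self, keyword, k=3):
        # Read policy: drop expired items, then prefer reliable and recent matches.
        now = time.time()
        self.items = [m for m in self.items if now - m["ts"] < m["ttl"]]
        hits = [m for m in self.items if keyword.lower() in m["text"].lower()]
        hits.sort(key=lambda m: (m["reliability"], m["ts"]), reverse=True)
        return hits[:k]

memory = LongTermMemory()
memory.write("User prefers concise bullet-point answers", source="chat", reliability=0.9)
memory.write("Unverified rumor about a vendor outage", source="chat", reliability=0.3)
print(memory.read("concise"))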


3) Planning: From Decomposition to Search

Planning sounds philosophical, but it maps to one question:

How does the agent choose the next action?

In real tasks, “next action” is rarely obvious. That’s why we plan: to reduce a big problem into smaller moves with checkpoints.

3.1 Task Decomposition: Why It’s Not Optional

When you ask an agent to “plan,” you’re buying:

  • controllability: you can inspect steps and constraints,
  • debuggability: you can see where it went wrong,
  • tool alignment: each step can map to a tool call,
  • lower hallucination risk: fewer leaps, more verification.

But planning can be cheap or expensive depending on the technique.

3.2 CoT: Linear Reasoning as a Control Interface

Chain-of-Thought style prompting nudges the model to produce intermediate reasoning before the final output.

From an engineering perspective, the key benefit is not “the model becomes smarter.” It’s that the model becomes more steerable:

  • it externalizes intermediate state,
  • it decomposes implicitly into substeps,
  • and you can gate or validate those steps.

CoT ≠ show-the-user-everything

In production, you often want the opposite: use structured reasoning internally, then output a crisp answer.

This is both a UX decision (nobody wants a wall of text) and a safety decision (you don’t want to leak internal deliberations, secrets, or tool inputs).

3.3 ToT: When Reasoning Becomes Search

Linear reasoning fails when:

  • there are multiple plausible paths,
  • early choices are hard to reverse,
  • you need lookahead (trade-offs, planning, puzzles, strategy).

Tree-of-Thought style reasoning turns “thinking” into search:

  • expand: propose multiple candidate thoughts/steps,
  • evaluate: score candidates (by heuristics, constraints, or another model call),
  • select: continue exploring the best branches,
  • optionally backtrack if a branch collapses.

If CoT is “one good route,” ToT is “try a few routes, keep the ones that look promising.”
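A lightweight way to implement this is a small beam search over candidate thoughts: expand a few continuations per path, score them, and keep only the most promising branches. In the sketch below, propose and score are stand-ins for LLM calls; the toy versions at the bottom exist only so the function runs.

def tree_of_thought(task, propose, score, beam_width=2, depth=3):
    # Each frontier entry is a partial reasoning path (a list of thoughts).
    frontier = [[]]
    for _ in range(depth):
        candidates = []
        for path in frontier:
            # Expand: propose several next thoughts for this path (an LLM call in practice).
            for thought in propose(task, path):
                candidates.append(path + [thought])
        # Evaluate and select: keep only the best beam_width branches.
        candidates.sort(key=lambda p: score(task, p), reverse=True)
        frontier = candidates[:beam_width]
    return max(frontier, key=lambda p: score(task, p))

# Toy stand-ins so the function runs; real versions are model calls with structured prompts.
propose = lambda task, path: [f"step {len(path) + 1}: option A", f"step {len(path) + 1}: option B"]
score = lambda task, path: sum(1 for t in path if "option A" in t)

print(tree_of_thought("plan a data migration", propose, score))

The beam_width and depth parameters are exactly where the trade-off discussed next shows up.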

The cost: token burn

Search is expensive. If you expand branches without discipline, cost grows fast.

So ToT tends to shine in:

  • high-value tasks,
  • problems with clear evaluation signals,
  • situations where being wrong is more expensive than being slow.

3.4 GoT: The Engineering Upgrade (Reuse, Merge, Backtrack)

Tree search wastes work when branches overlap.

Graph-of-Thoughts takes a practical step:

  • treat intermediate reasoning as states in a directed graph,
  • allow merging equivalent states (reuse),
  • support backtracking to arbitrary nodes,
  • apply pruning more aggressively.

If ToT is a tree, GoT is a graph with memory: you don’t re-derive what you already know.

This matters in production where repeated tool calls and repeated reasoning are the real cost drivers.

3.5 XoT: “Everything of Thoughts” as Research Direction

XoT-style approaches try to unify thought paradigms and inject external knowledge and search methods (think: MCTS-style exploration + domain guidance).

It’s promising, but the engineering bar is high:

  • you need reliable evaluation,
  • tight budgets,
  • and a clear “why” for using something heavier than a well-designed plan loop.

In practice, many teams implement a lightweight ToT/GoT hybrid without the full research stack.


4) ReAct: The Loop That Makes Agents Feel Real

Planning is what the agent intends to do.

ReAct is what the agent actually does:

  1. Reason about what’s missing / what to do next
  2. Act by calling a tool
  3. Observe the result
  4. Reflect and adjust

Repeat until done.

This solves three real problems:

  • incomplete information: the agent can fetch what it doesn’t know,
  • verification: it can check assumptions against reality,
  • error recovery: it can reroute after failures.

If you’ve ever debugged a hallucination, you already know why this matters: a believable explanation isn’t the same thing as a correct answer.
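The loop itself fits in a few lines; the real work is in the tool registry, the stop conditions, and the decide step, which here is a stub standing in for the model call.

def react_loop(task, decide, tools, max_steps=5):
    history = [("task", task)]
    for _ in range(max_steps):
        # Reason: the model (stubbed here as `decide`) picks the next action from state.
        action, arg = decide(history)
        if action == "finish":
            return arg
        # Act: call the chosen tool. Observe: append the result to state.
        observation = tools[action](arg)
        history.append((action, observation))
    return "Stopped: step budget exhausted."

# Toy wiring: a lookup tool and a decision stub that finishes after one lookup.
tools = {"lookup": lambda q: f"[facts about {q}]"}

def decide(history):
    return ("finish", f"Answer based on {history[-1][1]}") if history[-1][0] == "lookup" \
        else ("lookup", history[0][1])

print(react_loop("What is ReAct?", decide, tools))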


5) A Minimal Agent With Memory + Planning (Practical Version)

Below is a deliberately “boring” agent loop. That’s the point.

Most production agents are not sci-fi. They’re well-instrumented control loops with strict budgets.

from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
import time

# --- Tools (stubs) ---------------------------------------------------------

def web_search(query: str) -> str:
    # Replace with your search API call + caching.
    return f"[search-results for: {query}]"

def calc(expression: str) -> str:
    # Replace with a safe evaluator.
    return str(eval(expression, {"__builtins__": {}}, {}))

# --- Memory ----------------------------------------------------------------

@dataclass
class MemoryItem:
    text: str
    ts: float = field(default_factory=lambda: time.time())
    meta: Dict[str, Any] = field(default_factory=dict)

@dataclass
class MemoryStore:
    short_term: List[MemoryItem] = field(default_factory=list)
    long_term: List[MemoryItem] = field(default_factory=list)  # stand-in for vector DB

    def remember_short(self, text: str, **meta):
        self.short_term.append(MemoryItem(text=text, meta=meta))

    def remember_long(self, text: str, **meta):
        self.long_term.append(MemoryItem(text=text, meta=meta))

    def retrieve_long(self, hint: str, k: int = 3) -> List[MemoryItem]:
        # Dummy retrieval: filter by substring.
        hits = [m for m in self.long_term if hint.lower() in m.text.lower()]
        return sorted(hits, key=lambda m: m.ts, reverse=True)[:k]

# --- Planner (very small ToT-ish idea) ------------------------------------

def propose_plans(task: str) -> List[str]:
    # In reality: this is an LLM call producing multiple plan candidates.
    return [
        f"Search key facts about: {task}",
        f"Break task into steps, then execute step-by-step: {task}",
        f"Ask a clarifying question if constraints are missing: {task}",
    ]

def score_plan(plan: str) -> int:
    # Heuristic scoring: prefer plans that verify facts.
    if "Search" in plan:
        return 3
    if "Break task" in plan:
        return 2
    return 1

# --- Agent Loop ------------------------------------------------------------

def run_agent(task: str, memory: MemoryStore, max_steps: int = 6) -> str:
    # 1) Retrieve long-term memory if relevant.
    recalled = memory.retrieve_long(hint=task)
    for item in recalled:
        memory.remember_short(f"Recalled: {item.text}", source="long_term")

    # 2) Plan (cheap multi-candidate selection).
    plans = propose_plans(task)
    plan = max(plans, key=score_plan)
    memory.remember_short(f"Chosen plan: {plan}")

    # 3) Execute loop.
    for step in range(max_steps):
        # In reality: this is an LLM call that decides "next tool" based on state.
        if "Search" in plan and step == 0:
            obs = web_search(task)
            memory.remember_short(f"Observation: {obs}", tool="web_search")
            continue

        # Example: do a small computation if the task contains a calc hint.
        if "calculate" in task.lower() and step == 1:
            obs = calc("6 * 7")
            memory.remember_short(f"Observation: {obs}", tool="calc")
            continue

        # Stop condition (simplified).
        if step >= 2:
            break

    # 4) Final answer: summarise short-term state.
    notes = "\n".join([f"- {m.text}" for m in memory.short_term[-8:]])
    return f"Task: {task}\n\nWhat I did:\n{notes}\n\nFinal: (produce a user-facing answer here.)"

# Demo usage:
mem = MemoryStore()
mem.remember_long("User prefers concise outputs with clear bullets.", tag="preference")
print(run_agent("Write a short guide on LLM agents with memory and planning", mem))

What this toy example demonstrates (and why it matters)

  • Memory is state, not vibe. It’s read/write with policy.
  • Planning can be multi-candidate without going full ToT. Generate a few, pick one, move on.
  • Tool calls are first-class. Observations update state, not just the transcript.
  • Budgets exist. max_steps is a real safety and cost control.


6) Production Notes: Where Agents Actually Fail

If you want this to work outside demos, you’ll spend most of your time on these five areas.

6.1 Tool reliability beats prompt cleverness

Tools fail. Time out. Rate limit. Return weird formats.

Your agent loop needs:

  • retries with backoff,
  • strict schemas,
  • parsing + validation,
  • and fallback strategies.

A “smart” agent without robust I/O is just a creative writer with API keys.
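As a sketch of the first two bullets above, here is a generic retry wrapper with exponential backoff and a minimal response-shape check. The call_tool argument is any callable you supply; nothing here is tied to a specific API client.

import random
import time

def call_with_retries(call_tool, payload, required_keys, max_attempts=4):
    # Retries with exponential backoff and jitter, plus a minimal response-shape check.
    for attempt in range(max_attempts):
        try:
            result = call_tool(payload)
            missing = [k for k in required_keys if k not in result]
            if missing:
                raise ValueError(f"tool response missing keys: {missing}")
            return result
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Short delays keep the demo quick; scale these up for real tools.
            time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))

# Demo: a flaky tool that fails twice, then returns a valid response.
attempts = {"n": 0}
def flaky_tool(payload):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("simulated timeout")
    return {"status": "ok", "data": payload}

print(call_with_retries(flaky_tool, {"query": "price"}, required_keys=["status", "data"]))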

6.2 Memory needs permissions and hygiene

If you store user data, you need:

  • clear consent and retention rules,
  • permission checks at retrieval time,
  • deletion pathways,
  • and safe defaults.

In regulated environments, long-term memory is often the highest-risk component.

6.3 Planning needs evaluation signals

Search-based planning is only as good as its scoring.

You’ll likely need:

  • constraint checkers,
  • unit tests for tool outputs,
  • or a separate “critic” model call that can reject bad steps.
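Even a simple pre-execution constraint checker catches a surprising share of bad branches. The checks below (an allowlist of tools and a spend ceiling) are illustrative assumptions about what your constraints might be.

def check_step(step, allowed_tools, max_cost_usd, spent_usd):
    # Returns (ok, reason); a planner or critic calls this before executing a step.
    if step["tool"] not in allowed_tools:
        return False, f"tool '{step['tool']}' is not allowlisted"
    if spent_usd + step.get("est_cost_usd", 0.0) > max_cost_usd:
        return False, "estimated cost exceeds budget"
    return True, "ok"

plan = [
    {"tool": "web_search", "est_cost_usd": 0.01},
    {"tool": "delete_database", "est_cost_usd": 0.0},
]
for step in plan:
    ok, reason = check_step(step, allowed_tools={"web_search", "calc"}, max_cost_usd=1.0, spent_usd=0.0)
    print(step["tool"], "->", "accepted" if ok else f"rejected ({reason})")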

6.4 Observability is not optional

If you can’t trace:

  • which tool was called,
  • with what inputs,
  • what it returned,
  • and how it changed the plan,

you can’t debug. You also can’t measure improvements.

Log everything. Then decide what to retain.

6.5 Security: agents amplify blast radius

When a model can take actions, mistakes become incidents.

Guardrails look like:

  • allowlists (tools, domains, actions),
  • spend limits,
  • step limits,
  • sandboxing,
  • and human-in-the-loop gates for high-impact actions.


7) The Real “Agent Upgrade”: A Better Mental Model

If you remember one thing, make it this:

An agent is an LLM inside a state machine.

  • Memory = state
  • Planning = policy shaping
  • Tools = actuators
  • Observations = state transitions
  • Reflection = error-correcting feedback

Once you build agents this way, you stop chasing “the perfect prompt” and start shipping systems that can survive reality.

And reality is the only benchmark that matters.

How I Built a Hot-Swappable Backend Proxy for Claude Code

2026-02-17 18:31:09

Claude Code is my daily driver for coding. It's an agentic CLI tool from Anthropic that reads your codebase, edits files, runs commands — and it's genuinely good at it. But it has one limitation that kept bothering me: it only talks to one API backend at a time.

That's fine until Anthropic is rate-limited. Or down. Or you want to try a cheaper provider for routine tasks. Or you want teammates on a different backend than your main agent. Every time you need to switch, it's the same ritual: edit environment variables, change config files, restart the session, lose your flow.

I built AnyClaude to fix this. It's a TUI wrapper that sits between Claude Code and the API, letting you hot-swap backends with a single hotkey — no restarts, no config edits, no interruptions. Press Ctrl+B, pick a provider, keep working.

Sounds simple. It wasn't.

The Architecture

The core idea is straightforward: run a local HTTP proxy, point Claude Code at it via ANTHROPIC_BASE_URL, and route requests to whichever backend is currently active.

┌─────────────────────────────┐
│        AnyClaude TUI        │
└──────────────┬──────────────┘
               │
        ┌──────▼──────┐
        │ Claude Code │ (main agent + teammate agents)
        └──────┬──────┘
               │ ANTHROPIC_BASE_URL
        ┌──────▼──────┐
        │ Local Proxy │
        └──┬───────┬──┘
           │       │
      /v1/*│       │/teammate/v1/*
           │       │
     ┌─────▼──┐  ┌─▼──────────┐
     │ Active │  │  Teammate   │
     │Backend │  │  Backend    │
     └────────┘  └─────────────┘

AnyClaude starts a local proxy (port auto-assigned), sets ANTHROPIC_BASE_URL, and spawns Claude Code in an embedded terminal powered by alacritty_terminal. All API requests flow through the proxy, which applies transformations and forwards them to the active backend.

The proxy is built with axum and reqwest. Backends are defined in a TOML config:

[[backends]]
name = "anthropic"
display_name = "Anthropic"
base_url = "https://api.anthropic.com"
auth_type = "passthrough"

[[backends]]
name = "alternative"
display_name = "Alternative Provider"
base_url = "https://your-provider.com/api"
auth_type = "bearer"
api_key = "your-api-key"

Switching backends is just updating an atomic pointer. The next request goes to the new backend. No connection draining, no session restart — the Anthropic API is stateless, so Claude Code sends the full conversation history with every request and context carries over automatically.

Request lifecycle

Every request from Claude Code goes through a pipeline of middleware before reaching the backend:

  1. Routing. The proxy inspects the request path. /v1/* goes to the main pipeline, /teammate/v1/* to the teammate pipeline. The routing decision is attached to the request as an extension so downstream middleware knows which backend to target.


  2. Authentication. The proxy rewrites auth headers based on the target backend's config. Three modes: passthrough forwards Claude Code's original headers (useful for Anthropic's OAuth), bearer replaces with Authorization: Bearer <key>, and api_key sets x-api-key.


  3. Thinking pipeline. This is where thinking block filtering and adaptive thinking conversion happen. The middleware deserializes the request body, strips foreign thinking blocks from conversation history, and optionally converts the thinking format. It also initializes a ThinkingSession for tracking new thinking blocks in the response.

    Before (what Claude Code sends):

   {
     "model": "claude-opus-4-6",
     "thinking": {"type": "adaptive"},
     "messages": [
       {"role": "assistant", "content": [
         {"type": "thinking", "thinking": "Let me analyze...", "signature": "backend-A-sig"},
         {"type": "text", "text": "Here's my analysis..."}
       ]},
       {"role": "user", "content": "Continue"}
     ]
   }

After (what the backend receives — switched to Backend B with thinking_compat):

   {
     "model": "claude-opus-4-6",
     "thinking": {"type": "enabled", "budget_tokens": 10000},
     "messages": [
       {"role": "assistant", "content": [
         {"type": "text", "text": "Here's my analysis..."}
       ]},
       {"role": "user", "content": "Continue"}
     ]
   }

The thinking block from Backend A is stripped entirely, and adaptive is converted to enabled with an explicit budget.


  4. Model mapping. If the target backend has model family mappings configured, the middleware rewrites the model name in the request body and stashes the original name for reverse mapping in the response.

    Before:

   {"model": "claude-opus-4-6", ...}

After (backend has model_opus = "provider-large"):

   {"model": "provider-large", ...}

The original name claude-opus-4-6 is saved in request extensions so the reverse mapper can rewrite it back in the response stream.


  5. Upstream forwarding. The proxy builds a new request to the target backend, copies relevant headers, sets timeouts, and sends it via a shared reqwest client with connection pooling. For streaming responses, it wraps the response body in an ObservedStream that monitors thinking blocks and applies reverse model mapping as chunks flow through.

Each step is a separate axum middleware or extractor, so pipelines can be composed differently. The teammate pipeline skips thinking block filtering entirely — teammates are on a fixed backend, so there's nothing to filter.

The TUI layer

The proxy is only half of AnyClaude. The other half is a terminal multiplexer — the TUI that hosts Claude Code and provides the interactive controls.

Claude Code runs inside a pseudo-terminal (PTY) managed by the portable-pty crate. The PTY output feeds into an alacritty_terminal emulator, which maintains the terminal grid state — cells, colors, cursor position, scrollback buffer. The TUI renders this grid using ratatui, overlaying popup dialogs for backend switching, status metrics, and settings.

Terminal input was one of the harder parts to get right (Challenge 6 below). The input system needs to simultaneously handle raw PTY input, hotkey detection for the TUI, and mouse events for text selection — without any of these interfering with each other. Mouse tracking mode from Claude Code complicates things further: when Claude Code enables mouse tracking, the TUI must forward mouse events to the PTY instead of handling them as selection.

That's the easy part. Here's where it gets interesting.

Challenge 1: Thinking Block Signatures

Claude models produce "thinking blocks" — internal reasoning visible in the API response. Each provider signs these blocks with cryptographic signatures tied to their infrastructure. The signatures are opaque to the client, but the API validates them on the next request when they appear in conversation history.

Here's the problem: you start a session on Backend A. Claude produces several responses with thinking blocks, each signed by Backend A. You switch to Backend B mid-conversation. Claude Code sends the next request with the full conversation history — including all of Backend A's signed thinking blocks. Backend B sees foreign signatures, can't validate them, returns 400. Your session is broken.

The proxy solves this with session-aware tracking. Each backend switch starts a new "thinking session". The proxy observes response streams as they flow through, hashing thinking block content in real-time and associating each block with the current session. When a request comes in, the proxy checks each thinking block in the conversation history against its registry. Only blocks from the current session pass through. Everything else — blocks from previous sessions, regardless of which backend produced them — is stripped entirely from the request, as if that turn had no thinking.

This means switching from A to B and back to A doesn't restore old blocks. The signatures are tied not just to the provider but to the session context, so previously seen blocks aren't guaranteed to be valid even on the same backend. A clean session on each switch is the only safe approach.

This works automatically for all backends with no configuration. The proxy never modifies thinking blocks in responses — it only filters them in requests, and only after a backend switch has occurred.
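The bookkeeping itself is small. AnyClaude is written in Rust; the Python sketch below only illustrates the idea, and the names are mine rather than the project's actual types: hash each thinking block observed in the current session's responses, and strip any block in an outgoing request whose hash is not in that set.

import hashlib

class ThinkingSessionSketch:
    def __init__(self):
        self.current = set()  # hashes of thinking blocks seen since the last backend switch

    def _key(self, block):
        return hashlib.sha256(block["thinking"].encode()).hexdigest()

    def observe_response_block(self, block):
        # Called as thinking blocks stream back from the active backend.
        self.current.add(self._key(block))

    def on_backend_switch(self):
        # A switch starts a clean session; previously seen blocks are no longer trusted.
        self.current = set()

    def filter_request(self, messages):
        # Strip thinking blocks in history that did not originate in the current session.
        for msg in messages:
            if isinstance(msg.get("content"), list):
                msg["content"] = [
                    b for b in msg["content"]
                    if b.get("type") != "thinking" or self._key(b) in self.current
                ]
        return messages

session = ThinkingSessionSketch()
session.observe_response_block({"type": "thinking", "thinking": "current-session reasoning"})
history = [{"role": "assistant", "content": [
    {"type": "thinking", "thinking": "reasoning signed by a previous backend"},
    {"type": "text", "text": "Here's my analysis..."},
]}]
print(session.filter_request(history))  # the foreign thinking block is stripped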

Challenge 2: Adaptive Thinking Compatibility

Anthropic recently introduced adaptive thinking for Opus 4.6 — instead of a fixed token budget, the model decides when and how much to think on its own. Claude Code uses this by default, sending "thinking": {"type": "adaptive"} in requests.

The problem: not all third-party backends support adaptive thinking yet. Some still require the explicit "thinking": {"type": "enabled", "budget_tokens": N} with a fixed budget.

For non-Anthropic backends, AnyClaude converts on the fly:

  • Request body: adaptive → enabled with a configurable token budget
  • Header: anthropic-beta: adaptive-thinking-* → interleaved-thinking-2025-05-14
[[backends]]
name = "alternative"
base_url = "https://your-provider.com/api"
auth_type = "bearer"
api_key = "your-api-key"
thinking_compat = true
thinking_budget_tokens = 10000

This is a per-backend flag. Anthropic's own API handles adaptive thinking natively — you only enable thinking_compat for third-party backends that don't support it yet.
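The conversion itself is a small request transform. This Python sketch mirrors the two bullets above; the header value is taken from this article's description rather than verified against provider docs, and the real middleware lives in Rust.

def apply_thinking_compat(body: dict, headers: dict, budget_tokens: int = 10000):
    # Convert adaptive thinking to the explicit-budget form for backends that lack support.
    if body.get("thinking", {}).get("type") == "adaptive":
        body["thinking"] = {"type": "enabled", "budget_tokens": budget_tokens}
    # Swap the beta header as described above (value not verified against provider docs).
    if headers.get("anthropic-beta", "").startswith("adaptive-thinking"):
        headers["anthropic-beta"] = "interleaved-thinking-2025-05-14"
    return body, headers

body = {"model": "claude-opus-4-6", "thinking": {"type": "adaptive"}, "messages": []}
headers = {"anthropic-beta": "adaptive-thinking-placeholder"}  # illustrative value only
print(apply_thinking_compat(body, headers))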

Challenge 3: Routing Agent Teams

Claude Code has an experimental Agent Teams feature where the main agent spawns teammate agents — independent Claude instances that work in parallel on subtasks, coordinating through a shared task list and direct messaging. One session acts as the team lead, others work on assigned tasks and communicate with each other.

I wanted to route teammates to a different backend than the main agent. The use case: main agent on a premium provider for complex reasoning, teammates on something cheaper for parallel subtasks. Agent Teams can use significant tokens — each teammate has its own context window — so cost control matters.

The challenge: Claude Code spawns teammates as child processes. There's no hook, no callback, no plugin system to intercept their API target. They inherit the parent's environment, including ANTHROPIC_BASE_URL, which points at AnyClaude's proxy — but they all hit the same route. From the proxy's perspective, a request from the main agent looks identical to a request from a teammate.

I explored several approaches. Trying to distinguish agents by request content (model name, headers) was fragile — Claude Code doesn't mark teammate requests differently. Modifying Claude Code itself was out of scope. The environment variable is the only control point, and it's set once at process spawn.

The solution came in two parts.

PATH shim. AnyClaude generates a tmux wrapper script at startup and places it in a temporary directory ahead of the real tmux binary in PATH. In split-pane mode, Claude Code spawns teammates via tmux — it exec's what it thinks is tmux, but it's actually a shim. The shim rewrites ANTHROPIC_BASE_URL to point at a different proxy route (/teammate/v1/* instead of /v1/*) and then exec's the real tmux binary. The teammate process has no idea it's been redirected. This required studying how Claude Code actually spawns tmux sessions — the exact flags and environment propagation varied between display modes.
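Conceptually the shim is tiny: adjust the environment, then hand control to the real binary. The actual project generates a shell wrapper; this Python sketch only illustrates the mechanism, and the /teammate suffix and the tmux path are assumptions for the example.

#!/usr/bin/env python3
# Illustrative shim: rewrite ANTHROPIC_BASE_URL, then exec the real tmux.
import os
import sys

base_url = os.environ.get("ANTHROPIC_BASE_URL", "")
if base_url:
    # Point teammate traffic at the proxy's teammate route (suffix is an assumption).
    os.environ["ANTHROPIC_BASE_URL"] = base_url.rstrip("/") + "/teammate"

# exec replaces this process with the real tmux, preserving argv and the modified env.
os.execv("/usr/bin/tmux", ["/usr/bin/tmux"] + sys.argv[1:])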

Nested pipelines. The proxy uses axum's nested routing to separate traffic. Requests to /v1/* go through the main pipeline (active backend, switchable via Ctrl+B). Requests to /teammate/v1/* go through a fixed teammate pipeline that always routes to the configured teammate backend. Each pipeline has its own middleware stack — the teammate pipeline skips thinking block filtering entirely since teammates are on a fixed backend and never experience a switch.

[agent_teams]
teammate_backend = "alternative"

You can enable Claude Code's Agent Teams feature directly from AnyClaude's settings menu (Ctrl+E) — no need to manually edit Claude Code's config files.

Challenge 4: Thinking Pipeline Isolation

The thinking block filter from Challenge 1 uses a registry of seen thinking blocks to decide what to strip. But who owns that registry?

In the initial implementation, it was a single shared structure behind a mutex. One registry for the entire proxy. This worked fine with a single agent — but Agent Teams broke it immediately.

The scenario: main agent is on Backend A, teammate is on Backend B. Both are making requests concurrently. The shared registry accumulates thinking blocks from both backends, tagged by backend name. Now the main agent switches to Backend C. The filter sees Backend B's thinking blocks in the registry and flags them as foreign — but those blocks belong to the teammate's conversation, not the main agent's. The teammate's next request gets its own valid thinking blocks stripped, and the session breaks.

The fundamental problem: thinking state is per-session, but the registry was global. The main agent and each teammate have independent conversation histories with independent thinking blocks, and they switch backends independently (or in the case of teammates, not at all).

The solution: ThinkingSession as a per-request handle. Instead of one global registry, each logical agent session gets its own isolated thinking block tracker. The proxy creates a ThinkingSession for the main agent and separate ones for each teammate, attached to requests via axum's request extensions. The main agent's backend switch only affects the main agent's ThinkingSession. Teammates' sessions are completely isolated — they never see filtering triggered by the main agent's actions.

Challenge 5: Model Mapping in Both Directions

Different providers use different model names. Anthropic has claude-opus-4-6, your provider might call it provider-large. AnyClaude remaps model names per backend:

[[backends]]
name = "my-provider"
model_opus = "provider-large"
model_sonnet = "provider-medium"
model_haiku = "provider-small"

Request rewriting is straightforward — match the model name against family keywords (opus, sonnet, haiku), substitute the configured name, done. Only configured families are remapped; omitted ones pass through unchanged.

The interesting part is reverse mapping in responses. The backend returns its own model name (e.g. provider-large) in the response body. If Claude Code sees a different model name than what it sent, it gets confused about which model it's talking to. So the proxy needs to rewrite it back to the original Anthropic name.

For non-streaming JSON responses, this is straightforward — parse the entire body, replace the model field, serialize back.

But most Claude Code interactions use streaming SSE, where the response arrives as a series of data: {...} events over a chunked HTTP connection. The model name appears in the message_start event — the very first SSE event in the stream.

AnyClaude handles this with a ChunkRewriter — a stateful closure plugged into the ObservedStream that wraps the response body. Each chunk passes through the rewriter as it arrives. The rewriter first does a fast byte-level check for the string "message_start" — if not present, the chunk passes through untouched (zero-copy). When the target event is found, the rewriter converts the chunk to text, splits into lines, parses only the data: lines as JSON, rewrites the message.model field, and re-serializes. After the first successful rewrite, the rewriter flips a done flag and becomes a no-op for all remaining chunks — the model name only needs to be rewritten once.
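Here is the same idea as a Python sketch (the real rewriter is a Rust closure inside the response stream): a cheap byte-level scan for the marker, a single rewrite of the model field in the message_start event, and pass-through for every other chunk. The event shape follows the SSE examples earlier in this article, and the class name is mine.

import json

class ChunkRewriterSketch:
    def __init__(self, original_model: str):
        self.original_model = original_model
        self.done = False  # flips after the single required rewrite

    def rewrite(self, chunk: bytes) -> bytes:
        # Fast path: already rewritten, or the marker is not in this chunk.
        if self.done or b"message_start" not in chunk:
            return chunk
        lines = chunk.decode("utf-8", errors="replace").split("\n")
        for i, line in enumerate(lines):
            if not line.startswith("data: "):
                continue
            try:
                event = json.loads(line[len("data: "):])
            except json.JSONDecodeError:
                continue  # leave unparseable data lines untouched
            if event.get("type") == "message_start":
                # Map the backend's model name back to the name Claude Code sent.
                event.setdefault("message", {})["model"] = self.original_model
                lines[i] = "data: " + json.dumps(event)
                self.done = True
                break
        return "\n".join(lines).encode("utf-8")

rewriter = ChunkRewriterSketch("claude-opus-4-6")
sse = b'event: message_start\ndata: {"type": "message_start", "message": {"model": "provider-large"}}\n\n'
print(rewriter.rewrite(sse).decode())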

Challenge 6: Terminal Input

This one surprised me. I was using crossterm to capture terminal input events and forward them to the PTY running Claude Code. Certain key combinations — Option+Backspace, Ctrl+Arrow, Alt+Arrow — simply didn't work.

The root cause: crossterm parses raw terminal input into structured events, then re-encodes them back into escape sequences for the PTY. But the re-encoding doesn't perfectly round-trip. Some escape sequences that terminals emit don't have a crossterm event representation, so they're silently dropped.

The fix was writing a new crate (term_input) that forwards raw bytes directly to the PTY. No parsing, no re-encoding, no information loss. For special key detection (Option+Backspace, Shift+Enter), it uses macOS CGEvent APIs to check modifier state without interfering with the byte stream.

Getting Started

cargo install --path .

Create ~/.config/anyclaude/config.toml:

[defaults]
active = "anthropic"

[[backends]]
name = "anthropic"
display_name = "Anthropic"
base_url = "https://api.anthropic.com"
auth_type = "passthrough"

[[backends]]
name = "alternative"
display_name = "Alternative Provider"
base_url = "https://your-provider.com/api"
auth_type = "bearer"
api_key = "your-api-key"
thinking_compat = true

Run anyclaude. Press Ctrl+B to switch backends. That's it.

| Key | Action |
|----|----|
| Ctrl+B | Backend switcher |
| Ctrl+S | Status/metrics |
| Ctrl+H | Switch history |
| Ctrl+E | Settings |
| Ctrl+Q | Quit |

What's Next

AnyClaude is open source. It only supports Anthropic API-compatible backends — if a provider speaks the same protocol, it works. The project is actively developed and I use it daily.

If you're using Claude Code with multiple providers — or want to start — give it a try and let me know what breaks.

GitHub: github.com/arttttt/AnyClaude

MEXC Releases February Proof of Reserve Report, BTC Coverage Rises to 267%

2026-02-17 18:30:04

Victoria, Seychelles, February 16, 2026

MEXC, the world's fastest-growing digital asset exchange and a pioneer of true zero-fee trading, released its February Proof of Reserve report, confirming that all major assets maintained reserve ratios above 100%. BTC reserve coverage rose to 267%, demonstrating the platform's continued commitment to transparency and user asset protection.

The February report shows reserve ratios of 267% for BTC, 112% for ETH, 117% for USDT, and 124% for USDC. MEXC wallet assets total 12,003.98 BTC, 73,433.86 ETH, $1.82 billion USDT, and $93.5 million USDC. BTC reserve coverage rose notably from January's 158% to 267%, with wallet assets nearly doubling from 6,172.88 BTC to 12,003.98 BTC. ETH reserve coverage increased from 107% to 112%, with reserves expanding from 61,729.67 ETH to 73,433.86 ETH. All reserve ratios remained well above the 1:1 backing standard.

MEXC updates its Proof of Reserve snapshots monthly, with independent audit reports published by Hacken, a leading blockchain security and audit firm. The Proof of Reserve framework utilizes Merkle Tree technology, enabling users to verify their balances while maintaining account privacy. Committed to a user-first approach, MEXC maintains ample reserves and conducts monthly independent audits to ensure all user assets remain fully protected and transparent. MEXC will maintain this monthly reporting practice, upholding industry-leading transparency standards and ensuring continued user confidence.

To view the latest Proof of Reserve snapshot and audit report, please visit MEXC's Proof of Reserves page.

About MEXC

Founded in 2018, MEXC is committed to being "Your Easiest Way to Crypto." Serving over 40 million users across 170+ countries, MEXC is known for its broad selection of trending tokens, everyday airdrop opportunities, and low trading fees. Our user-friendly platform is designed to support both new traders and experienced investors, offering secure and efficient access to digital assets. MEXC prioritizes simplicity and innovation, making crypto trading more accessible and rewarding.

MEXC Official Website | X | Telegram | How to Sign Up on MEXC

For media inquiries, please contact MEXC PR team: [email protected]

Source


:::warning Risk Disclaimer:

This content does not constitute investment advice. Given the highly volatile nature of the cryptocurrency market, investors are encouraged to carefully assess market fluctuations, project fundamentals, and potential financial risks before making any trading decisions.

:::
