2026-03-06 06:59:59
A practical “cookbook” for vision-language-action models: which backbones, perception pipelines, and action predictors actually work for robots.
2026-03-06 06:03:30
Let’s be brutally honest about something for a moment.
\ Most tech companies don’t have a content strategy problem. They have a discipline problem. Here’s what I mean.
\ Product gets the red carpet treatment. Roadmaps are debated for days on end. Edge cases are heavily documented. Deployments are watched like a hawk. If something breaks, there’s a postmortem.
\ Content is a different story altogether. It’s whatever someone can squeeze in at the last minute before the next sprint planning meeting.
\ That disconnect is what’s costing these companies.
\ Bursts don’t accrue. Authority only grows over time.
\ And the uncomfortable truth most won’t touch with a ten-foot pole is that the market can feel when your presence is inconsistent. When you publish three sharp pieces of content in a month and then disappear for six weeks, it signals distraction. Perhaps even doubt.
\ Trust doesn’t grow where there are gaps.
\ Heads would roll if tech founders launched a product without infrastructure. Yet they still expect thought leadership to emerge from inspiration alone. That’s not even remotely realistic.
\ Respectfully, treat content like an asset if you want it to behave like one. You can’t improvise your way there. Not with brainstorming sessions, but with a solid plan and a repeatable system to put it into motion.
\
\ Now, let’s talk about content performance, shall we?
\ If your team is celebrating likes from founders while your sales team has never mentioned a single post, something is horribly off.
\ The real question is not how many people it reached but how much it resonates.
\ For instance:
\ That’s where to focus.
\ And here’s another inconvenient truth: distribution is not a growth hack. It’s simply operational hygiene.
\ Publishing and praying for a major organic lift is like pushing code without checking it first. You might get lucky. Or you may not notice a problem until it’s too late to do anything about it.
\ Serious companies build distribution into their workflow. It’s what common sense dictates.
\ Repurposing your content breathes new life into ideas that still have value, transforming what already works into a foundation that reinforces your message across every format and audience.
\ Syndication is less about flooding the internet with your name and more about strategically placing your work in front of people who were never going to stumble upon it otherwise, turning visibility into genuine reach.
\ When you reach out directly to someone, you're not scrambling for attention out of desperation but rather making a deliberate, targeted choice to connect with exactly the right person at exactly the right moment.
\ Founders who achieve lasting success don't win by being the loudest voice in the room but through building systems that keep them consistently visible, relevant, and impossible to ignore over time.
\ Month after month, they allow their thinking to mature and shift in public, giving their audience a front-row seat to an evolving mind rather than a polished, static persona.
\ Over time, prospects begin to trace a throughline — basically a coherent, recognizable thread of reasoning that signals this person doesn't just have opinions, but a genuine and considered point of view.
\ That kind of consistency quietly does something most marketing tactics never can, gradually dissolving the skepticism that stands between a prospect and their decision to trust you.
\ So, if your product is engineered for scale but your narrative isn’t, you’ve built asymmetry into your company. One side compounds while the other resets every quarter.
\ Before hiring another agency or redesigning your website, answer this honestly:
\ Would your content process survive a sprint review?
\ If not, refactor it. Seriously, though.
2026-03-06 01:44:26
LODZ, Poland, March 5th, 2026/Chainwire/--BTCC, the world’s longest-serving cryptocurrency exchange, today announced that its recently launched TradFi product has surpassed $200 million in cumulative trading volume since going live on February 10, 2026. To celebrate this milestone, BTCC is introducing a zero-fee trading campaign for the XAU and XAG pairs, where participants can earn up to 10 grams of gold through a tiered volume bonus program.
The $200 million milestone demonstrates strong demand for traditional market access among crypto traders. BTCC TradFi, launched in February 2026, enables users to trade traditional financial instruments, including forex, commodities, indices, and equities, directly on the BTCC platform using USDT as margin and settlement currency. TradFi aims to remove barriers for crypto traders to gain exposure to the global traditional financial markets.
Running from March 5 to March 19, 2026, the zero-fee campaign waives all trading fees on XAU and XAG pairs. Alongside the 0-fee promotion, users can also participate in a tiered bonus program based on the total TradFi trading volume during the campaign. Participants can earn up to 10g of gold by reaching the milestone of 5,000,000 USDT in cumulative trading volume across all TradFi pairs.
Precious metals have been among the most actively traded asset classes on BTCC's platform. In 2025, tokenized gold on BTCC recorded $5.72 billion in trading volume, with Q4 volume surging 809% over Q1, underscoring sustained user interest in precious metals. This momentum sets the stage for the zero-fee campaign on XAU and XAG, giving both new and existing users a cost-free entry point into one of the platform's most in-demand markets.
For traders seeking traditional market exposure without leaving the crypto ecosystem, BTCC TradFi offers a seamless platform that bridges cryptocurrency and traditional assets. The zero-fee campaign is an opportunity to explore gold and silver trading at zero trading cost on the BTCC platform. For campaign details and eligibility requirements, users may visit BTCC’s 0-fee campaign page.
Founded in 2011, BTCC is a leading global cryptocurrency exchange serving over 11 million users across 100+ countries. Partnered with 2023 Defensive Player of the Year and 2x NBA All-Star Jaren Jackson Jr. as global brand ambassador, BTCC delivers secure, accessible crypto trading services with an unmatched user experience.
Official website: https://www.btcc.com/en-US
X: https://x.com/BTCCexchange
Contact: [email protected]
Aaryn Ling
:::tip This story was published as a press release by Chainwire under HackerNoon’s Business Blogging Program
:::
Disclaimer:
This article is for informational purposes only and does not constitute investment advice. Cryptocurrencies are speculative, complex, and involve high risks. This can mean high price volatility and potential loss of your initial investment. You should consider your financial situation, investment purposes, and consult with a financial advisor before making any investment decisions. The HackerNoon editorial team has only verified the story for grammatical accuracy and does not endorse or guarantee the accuracy, reliability, or completeness of the information stated in this article. #DYOR
\
2026-03-06 01:38:18
Miami, FL, March 5th, 2026/Chainwire/--Everstake, one of the largest global non-custodial staking and yield infrastructure providers for institutional and retail clients, and Midas, a platform for on-chain investment products, announced the official launch of mEVUSD.
Developed in collaboration with and managed by Apollo Crypto, mEVUSD is a regulatory-compliant, USDC-denominated tokenized yield strategy available to institutional clients in the European Union and selected other jurisdictions, in compliance with applicable local requirements.
The launch introduces a professionally curated, market-neutral yield strategy targeting indicative annual returns of 7%–12% on stable assets, depending on market conditions. By generating yield primarily from financing and interest rate spreads rather than crypto price movements, mEVUSD offers banks, asset managers, and corporate treasuries a way to capture onchain rewards while aiming to reduce directional market exposure.
mEVUSD directly addresses the growing "Yield Gap" in the digital asset market, where money markets and treasury bills no longer satisfy institutional demand for higher onchain returns. Market data from EY highlights this shift, reporting that 84% of institutions are already utilizing or interested in stablecoins, while 76% of firms intend to invest in tokenized assets by 2026 for portfolio diversification.
mEVUSD meets this demand by transforming idle stablecoin balances into productive digital holdings. As a tokenized strategy, it provides a secure entry point into diversified, delta-neutral strategies, offering the regulatory comfort and professional risk management required by traditional non-crypto-native firms to enter the DeFi ecosystem.
The mEVUSD product introduces a comprehensive three-layered ecosystem for institutional DeFi:
“We’re seeing a structural shift in how institutions approach stablecoin capital. Passive yield is no longer sufficient — treasury teams are seeking controlled, compliant frameworks to enhance returns,” said David Kinitsky, Chief Corporate Development Officer at Everstake. “Everstake provides the underlying infrastructure layer, enabling strategy providers like Apollo Crypto to curate risk, while Midas facilitates regulated distribution. The result is streamlined access to advanced yield strategies for institutions through a single API, aligned with regulatory standards.”
To safeguard institutional capital, the strategy operates under strict risk parameters monitored by Apollo Crypto. Strategies are restricted to over-collateralized lending and basis trades on highly liquid, blue-chip DeFi protocols.
Apollo's dedicated risk management framework employs real-time monitoring of Loan-to-Value (LTV) ratios, supported by deleveraging triggers to proactively address market volatility and smart contract risk.
“Institutional-grade DeFi requires professional oversight and a clear regulatory home,” said Henrik Andersson at Apollo Crypto. “Our role is to curate the most efficient yield strategies while maintaining a thorough risk framework, ensuring that institutions can access elevated rewards without compromising on security or regulatory alignment.”
\
Dennis Dinkelmeyer, CEO of Midas added: “By partnering with Everstake and Apollo Crypto to launch mEVUSD, we have built a regulatory-compliant environment that finally aligns decentralized efficiency with institutional standards. Midas’s role is to provide the secure, regulated rails that make sophisticated strategies accessible to investors who previously lacked a clear entry point, prioritizing legal clarity, absolute transparency, and rigorously managed performance.”
The product is offered to institutional clients across the European Union and selected other jurisdictions in compliance with applicable local requirements, providing a transparent, audited pathway to digital asset yield. Persons and entities in the U.S., U.K., Canada, China, Australia, and Iran, as well as those in sanctioned jurisdictions, are excluded.
Everstake is the largest global non-custodial staking and yield infrastructure provider serving institutional and retail clients, trusted by over 1,600,000 users across 130+ Proof-of-Stake networks. Founded in 2018 by blockchain engineers, the company supports $7+ billion in staked assets, delivering institutional-grade infrastructure with 99.98% uptime.
Supporting asset managers, custodians, wallets, exchanges, and protocols, Everstake offers API-first, compliant infrastructure backed by SOC 2 Type II, ISO 27001:2022, and NIST CSF certifications, as well as GDPR and CCPA compliance, and regular smart contract audits. Its globally distributed team of 100+ professionals is committed to making staking accessible to everyone while strengthening the foundations of decentralized finance.
All metrics, including without limitation the value of staked assets, total number of active users, reward rates, and networks supported, are historical figures and may not reflect real-time data.
Everstake, Inc. or any of its affiliates is a software platform that provides infrastructure tools and resources for users but does not offer investment advice or investment opportunities, manage funds, facilitate collective investment schemes, provide financial services or take custody of, or otherwise hold or manage, customer assets.
Everstake, Inc. or any of its affiliates does not conduct any independent diligence on or substantive review of any blockchain asset, digital currency, cryptocurrency or associated funds. The provision by Everstake, Inc. or any of its affiliates of technology services allowing a user to stake digital assets is not an endorsement or a recommendation of any digital assets. Users are fully and solely responsible for evaluating whether to stake digital assets.
Midas is a platform for composable onchain investment products. It enables investors to access strategies from institutional asset managers via regulatory-compliant tokens (mTokens) that offer full transparency, instant redemptions, and native composability across DeFi protocols like Morpho and Pendle.
Founded by Dennis Dinkelmeyer (formerly Goldman Sachs), Fabrice Grinda (FJ Labs), and Romain Bourgois (formerly Ondo Finance), Midas is backed by leading investors including RRE, Framework Ventures, Creandum, HV Capital, Strobe Ventures, Ledger Cathay and Coinbase Ventures. To date, Midas has powered over $1.7B in asset issuance and paid out $37M in yield.
Apollo Crypto is an award-winning, multi-strategy digital asset manager with a distinguished eight-year track record of market outperformance. We specialize in identifying high risk-reward investments across the blockchain ecosystem, managing three liquid funds with a strategic focus on Layer 1 & 2 Blockchains, Decentralized Finance (DeFi), Real-World Assets (RWA), and early-stage pre-token projects.
Leveraging deep on-chain expertise and an extensive global network, Apollo Crypto has been at the forefront of the industry as an active DeFi investor and participant since its inception.
With more than eight years of consistent performance through multiple market cycles—including periods of extreme volatility—Apollo Crypto has delivered resilient, strong risk-adjusted returns that have distinguished it among peers.
PR Manager
Annabella-Nikol Lapshyna
Everstake
:::tip This story was published as a press release by Chainwire under HackerNoon’s Business Blogging Program
:::
Disclaimer:
This article is for informational purposes only and does not constitute investment advice. Cryptocurrencies are speculative, complex, and involve high risks. This can mean high price volatility and potential loss of your initial investment. You should consider your financial situation, investment purposes, and consult with a financial advisor before making any investment decisions. The HackerNoon editorial team has only verified the story for grammatical accuracy and does not endorse or guarantee the accuracy, reliability, or completeness of the information stated in this article. #DYOR
\
2026-03-06 01:01:30
I've spent over a decade doing offensive security work. Breaking into organizations professionally: banks, critical infrastructure, tech companies, the Fortune 500. I've seen attack chains that are genuinely sophisticated. Zero-days. Custom implants. Months of patient lateral movement.
But you know what works embarrassingly often? Typing "admin/admin" into a login page.
Default credentials are not a new vulnerability. They're one of the oldest in the book. Every compliance framework mentions them. Every hardening guide says to change them. And yet on a meaningful percentage of the engagements my team runs, default or known-compromised credentials give us access to systems that organizations have invested millions in protecting.
The problem isn't awareness. The problem is scale.
A large enterprise can have hundreds of thousands of hosts on its internal network. Servers, databases, network appliances, printers, IoT devices, monitoring tools, backup systems, development environments, and the ghost infrastructure that nobody remembers deploying. Every one of those systems potentially shipped with vendor defaults. Checking every door is the kind of work that sounds simple until you actually try to do it with the tooling that exists today.
So I built something better.
The go-to tool for credential testing for years has been THC Hydra. It works. But "it works" comes with asterisks.
You need to compile it with specific system libraries for each protocol: libssh-dev for SSH, libmysqlclient-dev for MySQL, and so on. On a stripped-down jump box or a minimal container, that means fighting with dependencies before you've tested a single credential. I've watched operators burn an hour on compilation issues at the start of engagements more times than I can count.
Then there's the output. Hydra was designed for humans reading a terminal. When you need to process results programmatically (feed them into a report, a database, another tool) you write parsing scripts. Different parsing scripts for different engagements because the context is always slightly different.
And the pipeline problem. Modern recon tools like naabu and fingerprintx speak JSON and chain together cleanly. Hydra doesn't fit into that world. You end up writing glue code to translate between formats, and that glue code becomes its own maintenance burden.
Every engagement, same problems. I got tired of it.
Brutus is a multi-protocol credential testing tool written in Go. Single binary. Zero external dependencies. Download it and run it. It takes fingerprintx and naabu output directly and produces structured JSON.
The workflow I wanted to enable:
```bash
naabu -host 10.0.0.0/8 -p 22,3306,5432,8080 -silent | \
  fingerprintx --json | \
  brutus -u admin -p password123
```
Port scan to service identification to credential testing. One pipeline. JSON in, JSON out. No intermediate files, no format conversion, no bash gymnastics.
Go was the obvious language choice because the deployment story is the entire point. Static binary. No runtime. No shared libraries. No package manager needed on the target machine. Hydra's protocol support depends on dynamically linked C libraries. Brutus implements everything in pure Go: SSH from the standard library ecosystem, database drivers without CGo dependencies. One artifact, runs everywhere.
But the feature I think matters most isn't the pipeline or the protocol support. It's the SSH keys.
Here's something that comes up on engagements more than anyone wants to admit.
The security community has catalogued a large number of publicly known, compromised SSH keys. Rapid7 maintains the ssh-badkeys repository. HashiCorp's Vagrant ships with its well-known insecure key. Appliance vendors like F5 BIG-IP, ExaGrid, and Ceragon FibeAir have shipped products with embedded keys that anyone can download from GitHub and use to log into your infrastructure.
Testing for these should be trivial. It's a known set of keys against a known set of services. But in practice, you have to track down the key collections, manage the files on disk, write scripts to iterate through them, and handle SSH connection logic. It's tedious enough that it gets done inconsistently, which means known-compromised keys sit in production environments waiting to be found.
Brutus embeds every one of these key collections directly into the binary using Go's embed package. When it encounters an SSH service, it tests every known-bad key automatically. No configuration needed. Each key carries metadata: the expected default username, the associated vendor, the CVE or advisory. The output tells you exactly what you found, not just that a key worked.
```bash
cat recon_output.json | brutus
```
That's it. If the service is SSH, bad keys get tested. No flags, no key files, no chance of forgetting a collection.
Here's a real scenario that illustrates why this matters.
On an engagement, my team compromised virtual machines running vulnerability scanners. Each scanner had its own SSH private key for authenticating to the hosts it was responsible for scanning. The environment was segmented: different scanners covered different network zones, and each key only worked within its assigned scope.
We had multiple keys from multiple compromised scanners. We needed to map which key unlocked which hosts across which network segments. Without purpose-built tooling, this is a bash scripting nightmare. Managing connection timeouts, parsing output, tracking which key you're testing against which range.
With Brutus:
```bash
naabu -host 10.1.0.0/24 -p 22 -silent | \
  fingerprintx --json | \
  brutus -u nessus -k /path/to/scanner1_key
```
Same pipeline for each compromised scanner. Different key, different target range. JSON output makes it straightforward to compare access across segments and map the full lateral movement picture.
This pattern applies every time you recover a private key on an engagement, whether it's from an automation server, a CI/CD pipeline, or a backup system. The question is always the same: where does this key work? Brutus makes answering that question repeatable.
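Because the results are structured JSON, building the key-to-host map is a few lines of Go rather than a parsing script. A hedged sketch follows; the result schema (host, key, success fields) is an assumption for illustration, not Brutus's documented output format.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Result is a hypothetical per-attempt record in Brutus's JSON output.
type Result struct {
	Host    string `json:"host"`
	Key     string `json:"key"`
	Success bool   `json:"success"`
}

// groupByKey maps each private key to the hosts it unlocked.
func groupByKey(raw []byte) (map[string][]string, error) {
	var results []Result
	if err := json.Unmarshal(raw, &results); err != nil {
		return nil, err
	}
	access := map[string][]string{}
	for _, r := range results {
		if r.Success {
			access[r.Key] = append(access[r.Key], r.Host)
		}
	}
	return access, nil
}

func main() {
	raw := []byte(`[
		{"host":"10.1.0.5","key":"scanner1_key","success":true},
		{"host":"10.1.0.9","key":"scanner1_key","success":true},
		{"host":"10.2.0.7","key":"scanner2_key","success":false}
	]`)
	access, _ := groupByKey(raw)
	fmt.Println(access["scanner1_key"]) // hosts scanner1's key unlocked
}
```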
Brutus also has LLM-assisted features, and this is the part I want to be transparent about: they are experimental. They work in the scenarios we've tested. They also depend on external APIs, add latency and cost, and inherit all the non-determinism that comes with language models.
The problem they solve is real though. On internal assessments, you find dozens of HTTP login pages on non-standard ports. Management interfaces for switches, storage controllers, IPMI consoles, printer admin panels. Each one probably has default credentials, but you need to identify the product first, then research its defaults. Doing that manually across fifty services burns hours.
Brutus has two approaches. The first captures the HTTP response (headers, body, server signatures) and sends it to an LLM to identify the application and suggest vendor-specific defaults. It's surprisingly good at this. It'll identify a Dell iDRAC from CSS class names and JavaScript bundle structure even when "iDRAC" appears nowhere in visible text.
The second uses headless Chrome and vision analysis for JavaScript-rendered pages that break traditional form-filling tools. Screenshot the rendered page, identify the appliance visually, get defaults, fill the form, compare page state before and after to determine success.
Both features are promising. Neither is reliable enough for fully automated sweeps where you need deterministic results. The identification step alone saves real time even if you end up testing credentials manually. I think this pattern (multimodal LLMs for service identification in security tooling) is going to develop significantly, but we're in the early innings.
If you know our tooling, you know we tend to name things after Roman emperors. Brutus breaks that pattern because Marcus Junius Brutus was never an emperor. He's remembered for walking into the Senate on the Ides of March and putting a dagger in the back of the most powerful man in the world.
That felt right for a credential testing tool. It doesn't build empires. It tests whether the ones you've built will let a stranger walk through the front door.
And "Et tu, default creds?" was too good to pass up.
Brutus is open source under Apache 2.0. The GitHub repo has everything you need to get started, including a demo lab for hands-on testing.
The highest-impact contributions are new protocol plugins, additional bad key collections from appliances and vendor products, and real-world testing feedback. The plugin architecture makes adding new protocols straightforward: implement the auth logic, register it, compile.
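As a sketch of what that plugin shape can look like in Go (interface, registry, and function names here are my own for illustration, not Brutus's actual API):

```go
package main

import "fmt"

// Plugin is a hypothetical protocol plugin: one auth attempt per call.
type Plugin interface {
	Name() string
	// Auth tries a single credential against host and reports success.
	Auth(host, user, pass string) (bool, error)
}

// registry maps protocol names to their implementations.
var registry = map[string]Plugin{}

// Register adds a plugin; called once per protocol at startup.
func Register(p Plugin) { registry[p.Name()] = p }

// demoPlugin "authenticates" against a fixed credential for illustration.
type demoPlugin struct{}

func (demoPlugin) Name() string { return "demo" }
func (demoPlugin) Auth(host, user, pass string) (bool, error) {
	return user == "admin" && pass == "admin", nil
}

func main() {
	Register(demoPlugin{})
	ok, _ := registry["demo"].Auth("10.0.0.1:22", "admin", "admin")
	fmt.Println(ok)
}
```

A real plugin would replace the comparison with actual protocol logic, but the registration and dispatch shape stays the same.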
If you've ever stared at a spreadsheet of thousands of hosts wondering how you're going to test credentials against all of them efficiently, give it a shot.
\ \
2026-03-06 01:00:09
\
Codata types. Codata types were first introduced by Hagino [Hagino 1987, 1989]. The original interpretation of codata types stems from coalgebras in category theory. An overview of the history of codata types as coalgebras is given by Setzer [Setzer 2012]. We have discussed the relation between codata types and OOP in Section 1.1.
\
The expression problem. The expression problem poses the challenge for statically typed languages to create a type which can be extended by both new producers and new consumers. Based on earlier observations, Wadler [1998] formulated the problem and gave it its current name.
\ The expression problem for proofs is recognized as an important challenge in the verification community, but there are fewer proposed and implemented solutions than in the programming world. One popular solution in the programming world is Swierstra [2008]’s “Data Types à la carte” approach. In that approach, a type is defined as the fixpoint of a coproduct of functors which can be extended by new functors in a modular way. Most proposed solutions for dependent types are based on Swierstra [2008]’s approach and extend it to dependent types. Delaware et al. [2013a,b]; Delaware [2013] as well as Keuchel and Schrijvers [2013] implemented this approach for the Coq proof assistant and Schwaab and Siek [2013] implemented it for Agda.
\ A system for writing modular proofs in the Isabelle proof assistant has been described by Molitor [2015]. The most recent adaptation of the idea is by Forster and Stark [2020], who give an excellent presentation of this line of work in their related work section. In this article, our focus was not to propose a solution to the expression problem for proofs, since the defunctionalization and refunctionalization algorithms are whole-program transformations. Instead, our approach distills the essence of the expression problem for dependent types: Neither the functional nor the object-oriented decomposition solves the expression problem, since data types cannot be easily extended by new constructors, and codata types cannot be easily extended by new destructors.
\
Dependently-typed object-oriented programming. In Section 2 we presented our perspective on dependently-typed object-oriented programming. But we are not the first to think about this design space. Jacobs [1995] proposes using coalgebras to express object-oriented classes with coalgebraic specifications. His concept is based on three main components: objects, class implementations, and class specifications. The latter are used to specify a set of methods on an abstract state space as well as a set of assertions that define the behavior of these methods. Such a specification can then be implemented by a class.
\ A class gives a carrier set as a concrete interpretation for the state space and a coalgebra that implements the specified methods. An object is then just an element of the carrier set. In our system, we can express specifications similar to Jacobs’ proposal using self-parameters on destructors (see Section 2.2). Rather than having separate notions of specifications, classes, and objects, our system has a singular notion of codata types. Jacobs separates these notions to construct a model in which objects are indistinguishable if they are bisimilar according to their specification.
\ In contrast, in our system, we have a full syntactic duality between data and codata types through de- and refunctionalization. Hence, we need to decouple codata types from the semantics that are usually associated with them, including behavioral equality such as bisimilarity.
\
Setzer [2006] conceived of dependently-typed object-oriented programming by specifying interfaces and having interactive programs as objects implementing these interfaces. The interfaces contain a command type, which represents the method signatures of an interface. Interactive programs are programs that react to incoming method calls by producing a return value and a new object.
\
Dependent type theories with definable Π-type and Σ-type. In Section 2.3 we demonstrated that the programmer can define both the Π-type and the Σ-type in our system, whereas in most proof assistants only the Σ-type can be defined. This is a generalization of the observation that programmers can’t define the function type in most functional programming languages, but that the function type can be defined in object-oriented languages [Setzer 2003]. Apart from this paper, the only other dependent type theory that doesn’t presuppose a built-in Π-type is by Basold [2018]; Basold and Geuvers [2016]. Like us, they give an explicit definition of the Π-type in their system.
\ Their definition, however, is slightly different from ours, since they have a more expressive core system. In their system, parameterized type constructors and type variables don’t have to be fully applied. Partially applied type constructors have a special type Γ ⇒ ∗, and they specify a sort of simply-typed lambda calculus which governs the rules for abstracting over, and partially applying, type constructors to arguments. As a result, their definition of the Π- and Σ-type is a bit simpler: we have to use a previously-defined non-dependent function type 𝐴 → Type to represent the type family that the Σ- and Π-types are indexed over, while they use a partially applied type variable 𝑋 : 𝐴 ⇒ ∗. Another difference is that our system is not consistent, whereas theirs is: they prove both subject reduction and strong normalization.
\
Dependent pattern matching. The traditional primitive elimination forms in dependent type theories are eliminators. The eliminator for natural numbers, for example, has the type ∀(P : N → Type), P 0 → (∀n : N, P n → P (S n)) → ∀n : N, P n. They are suitable for studying the metatheory of dependent types, but programming with them isn’t very ergonomic. A more convenient alternative to eliminators is dependent pattern matching, a generalization of ordinary pattern matching to dependent types, which was first proposed by Coquand [1992].
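To see why, here is the natural-number eliminator put to work in Lean 4 (`Nat.rec` is Lean's primitive recursor; this is an illustration, not code from the paper):

```lean
-- Addition written against the eliminator rather than by pattern matching.
-- `motive` is the P from the eliminator's type; the remaining arguments
-- are the base case (P 0) and the step case (∀ n, P n → P (S n)).
def add (m : Nat) : Nat → Nat :=
  Nat.rec (motive := fun _ => Nat) m (fun _ ih => ih.succ)

#eval add 2 3  -- 5
```

Even for addition, the recursive structure is buried in positional arguments; dependent pattern matching recovers the clausal style programmers expect.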
\ While ordinary pattern matching can be compiled to eliminators and is therefore nothing more than syntactic sugar, the compilation of dependent pattern matches additionally requires Streicher [1993]’s axiom K. Hofmann and Streicher [1994] proved that this axiom does not follow from the standard elimination rules for the identity type. Since the K axiom is sometimes undesirable—it is incompatible with other principles such as univalence—a variant of dependent pattern matching which does not rely on axiom K was developed by Cockx et al. [2014]. We use a variant of dependent pattern and copattern matching which requires axiom K if we want to compile it to eliminators, but we could get rid of this dependency by applying the three restrictions presented in Cockx’ thesis [Cockx 2017, p.55].
\
Copattern matching. Copattern matching as a dual concept to pattern matching was first proposed by Abel et al. [2013]. Their work was motivated by the deficiencies of previous approaches which used constructors to represent infinite objects. For instance, the coinductive types originally introduced in Coq broke subject reduction, as noted by Giménez [1996] and Oury [2008]. Even simple infinite objects such as streams cannot be represented using constructors and pattern matching in a sensible way. This follows from the observation of Berger and Setzer [2018] that there exists no decidable equality for streams which admits a one-step expansion of a stream 𝑠 to a stream (cons 𝑛 𝑠′ ).
\
Inconsistent dependent type theories. The type theory presented in this paper is inconsistent, i.e. every type is inhabited by some term, a property it shares with most programming languages but not with proof assistants. However, the inconsistency of the theory does not imply that the properties expressed by the dependent types are meaningless. We can compare the situation to the programming language Haskell, where it is already possible to write dependent programs by using several language extensions and programming tricks [Eisenberg and Weirich 2012; Lindley and McBride 2013].
\ Instead of relying on these tricks, a more ergonomic and complete design of dependent types in Haskell has been the subject of various articles [Weirich et al. 2019, 2017] and PhD theses [Eisenberg 2016; Gundry 2013]. Their main insight also applies to our system: the central property of an inconsistent dependent type theory is type soundness [Wright and Felleisen 1994]. For example, every term of type Vec(5) can only evaluate to a vector containing five elements or diverge; it cannot evaluate to a vector of six elements. But they also show that inconsistency has downsides, especially for optimization: In a consistent theory every term of type Eq(𝑠, 𝑡) must evaluate to the term refl, and can therefore be erased during compilation. In an inconsistent theory, we cannot erase the equality witness, since we could otherwise write a terminating unsafe coercion between arbitrary types, which would violate type soundness.
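The non-erasability point can be illustrated in Haskell, whose type system is likewise "inconsistent" because of general recursion. The following is our sketch; the equality type mirrors `Data.Type.Equality` from base.

```haskell
{-# LANGUAGE GADTs, TypeOperators #-}

-- Propositional equality as a GADT; Refl is its only constructor.
data a :~: b where
  Refl :: a :~: a

-- General recursion inhabits every type, including absurd equalities.
-- This "witness" type-checks but diverges when forced.
bogus :: Int :~: Bool
bogus = bogus

-- Coercion must pattern match on (i.e. force) the witness. Soundness
-- survives because forcing 'bogus' loops forever; if the match were
-- erased, 'castWith bogus' would become a genuine Int-to-Bool coercion.
castWith :: a :~: b -> a -> b
castWith Refl x = x
```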
\
Defunctionalization and refunctionalization. The related work on defunctionalization and refunctionalization can be partitioned into two groups: The first group only considers defunctionalization and refunctionalization for the function type, while the second group generalizes them to transformations between arbitrary data and codata types. De/Refunctionalization of the function type has a long history, which starts with the seminal paper by Reynolds [1972] and the later work of Danvy and Millikin [2009]; Danvy and Nielsen [2001]. That the defunctionalization of polymorphic functions requires GADTs was first observed by Pottier and Gauthier [2006].
\ In a recent paper, Huang and Yallop [2023] describe the defunctionalization of dependent functions, and in particular how to correctly deal with type universes and positivity restrictions, but they don't consider the general case of indexed data and codata types. In fact, they do not use data types at all; instead they introduce the construct of first-class function labels, which enables them to avoid problems arising from the expressivity of data type definitions, such as recursive types. The generalization of defunctionalization from functions to arbitrary codata types was first described by Rendel et al. [2015] for a simply typed system without local lambda abstractions or local pattern matches.
\ That the generalization to polymorphic data and codata types then also requires GAcoDTs has been described by Ostermann and Jabs [2018]. How to treat local pattern and copattern matches in such a way as to preserve the invertibility of defunctionalization and refunctionalization has been described by Binder et al. [2019]. Recently, Zhang et al. [2022] implemented defunctionalization and refunctionalization for the programming language Scala, and used these transformations for some larger case studies. In this article, we describe the generalization to indexed data and codata types, but in distinction to Huang and Yallop [2023] we circumvent the problems of type universes and positivity restrictions by working in an inconsistent type theory.
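For readers unfamiliar with the basic transformation, here is a minimal Reynolds-style sketch in simply typed Haskell (ours, not taken from the cited works): every lambda in the original program becomes a constructor of a first-order data type, and a single `apply` function dispatches on it.

```haskell
-- Defunctionalized function space Int -> Int: one constructor per
-- lambda occurring in the original program, closing over its free
-- variables.
data Fun = AddN Int | MulN Int

-- The single top-level dispatcher replacing all call sites.
apply :: Fun -> Int -> Int
apply (AddN n) x = x + n
apply (MulN n) x = x * n

-- Higher-order code now passes first-order Fun values around.
pipeline :: [Fun] -> Int -> Int
pipeline fs x = foldl (flip apply) x fs
```

Refunctionalization is the inverse direction: each `Fun` value becomes the closure `\x -> apply f x`, and the data type disappears.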
\
\ CONCLUSION
\ Most dependently typed programming languages don't support programming with codata as well as they support programming with data. The main reason some proof assistants support codata types at all is that some support was necessary for the convenient formalization of theorems about infinite and coalgebraic objects. But codata types are useful for more than just representing infinite objects like streams; they represent an orthogonal way to structure programs and proofs, with different extensibility properties and reasoning principles.
\ In this paper we have presented a vision of how programming can look in a dependently typed language in which the data and codata sides are completely symmetric and treated with equal care. By implementing this language and testing it on a case study we have demonstrated that this style of purely functional, dependently typed object-oriented programming does work. We think that this way of systematic language design, in place of ad-hoc extensions, provides a good case study on how the design of dependently typed languages and proof assistants should be approached.
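The different extensibility properties of data and codata are the two halves of the expression problem [Wadler 1998], visible already in simply typed Haskell. A sketch of ours:

```haskell
-- Data view: a fixed set of constructors. Adding a new operation
-- (say, perimeter) is a local change; adding a new shape requires
-- editing every existing match.
data Shape = Circle Double | Square Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Square s) = s * s

-- Codata/object view: a fixed set of observations (here just one).
-- Adding a new shape is a local change; adding a new observation
-- requires editing every existing object.
newtype ShapeObj = ShapeObj { objArea :: Double }

circle :: Double -> ShapeObj
circle r = ShapeObj (pi * r * r)

square :: Double -> ShapeObj
square s = ShapeObj (s * s)
```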
\ DATA-AVAILABILITY STATEMENT
This article is accompanied by an online IDE available at polarity-lang.github.io/oopsla24, where the examples discussed in this paper can be selected and loaded. The online IDE consists of a static website hosted on GitHub Pages, with all the code running in the browser on the client side. Should the hosted website, despite our best efforts, no longer be available, it can be recreated locally using the archived version available at Zenodo [Binder et al. 2024].
\ REFERENCES
\ Andreas Abel, Brigitte Pientka, David Thibodeau, and Anton Setzer. 2013. Copatterns: Programming Infinite Structures by Observations. In Proceedings of the 40th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (Rome, Italy) (POPL ’13). Association for Computing Machinery, New York, NY, USA, 27–38. https://doi.org/10.1145/2480359.2429075
\ Thorsten Altenkirch and Conor McBride. 2006. Towards Observational Type Theory. http://www.strictlypositive.org/ott.pdf
\ Henning Basold. 2018. Mixed Inductive-Coinductive Reasoning: Types, Programs and Logic. Ph. D. Dissertation. Radboud University. https://hdl.handle.net/2066/190323
\ Henning Basold and Herman Geuvers. 2016. Type Theory Based on Dependent Inductive and Coinductive Types. In Proceedings of the Symposium on Logic in Computer Science (New York). Association for Computing Machinery, New York, NY, USA, 327–336. https://doi.org/10.1145/2933575.2934514
\ Ulrich Berger and Anton Setzer. 2018. Undecidability of Equality for Codata Types. In Coalgebraic Methods in Computer Science, Corina Cîrstea (Ed.). Springer, 34–55. https://doi.org/10.1007/978-3-030-00389-0_4
\ David Binder, Julian Jabs, Ingo Skupin, and Klaus Ostermann. 2019. Decomposition Diversity with Symmetric Data and Codata. Proc. ACM Program. Lang. 4, POPL, Article 30 (2019), 28 pages. https://doi.org/10.1145/3371098
\ David Binder, Ingo Skupin, Tim Süberkrüb, and Klaus Ostermann. 2024. Deriving Dependently-Typed OOP from First Principles. https://doi.org/10.5281/zenodo.10779424 Archived version of the submitted artefact.
\ Ranald Clouston, Aleš Bizjak, Hans Bugge Grathwohl, and Lars Birkedal. 2017. The Guarded Lambda-Calculus: Programming and Reasoning with Guarded Recursion for Coinductive Types. Logical Methods in Computer Science 12 (April 2017). Issue 3. https://doi.org/10.2168/LMCS-12(3:7)2016
\ Jesper Cockx. 2017. Dependent Pattern Matching and Proof-Relevant Unification. Ph. D. Dissertation. KU Leuven.
\ Jesper Cockx, Dominique Devriese, and Frank Piessens. 2014. Pattern Matching without K. In International Conference on Functional Programming. Association for Computing Machinery, New York, NY, USA, 257–268. https://doi.org/10.1145/2628136.2628139
\ William R. Cook. 1990. Object-Oriented Programming versus Abstract Data Types. In Proceedings of the REX Workshop / School on the Foundations of Object-Oriented Languages. Springer, 151–178. https://doi.org/10.1007/BFb0019443
\ William R. Cook. 2009. On Understanding Data Abstraction, Revisited. In Proceedings of the Conference on Object-Oriented Programming, Systems, Languages and Applications: Onward! Essays (Orlando). Association for Computing Machinery, New York, NY, USA, 557–572. https://doi.org/10.1145/1640089.1640133
\ Thierry Coquand. 1992. Pattern Matching With Dependent Types. In Proceedings of the 1992 Workshop on Types for Proofs and Programs (Bastad, Sweden), Bengt Nordström, Kent Pettersson, and Gordon Plotkin (Eds.). 66–79.
\ Olivier Danvy and Kevin Millikin. 2009. Refunctionalization at Work. Science of Computer Programming 74, 8 (2009), 534–549. https://doi.org/10.1016/j.scico.2007.10.007
\ Olivier Danvy and Lasse R. Nielsen. 2001. Defunctionalization at Work. In Proceedings of the Conference on Principles and Practice of Declarative Programming (Florence). 162–174. https://doi.org/10.1145/773184.773202
\ Benjamin Delaware, Bruno C. d. S. Oliveira, and Tom Schrijvers. 2013a. Meta-Theory à La Carte. In Symposium on Principles of Programming Languages (Rome) (POPL ’13). Association for Computing Machinery, New York, NY, USA, 207–218. https://doi.org/10.1145/2429069.2429094
\ Benjamin Delaware, Steven Keuchel, Tom Schrijvers, and Bruno C.d.S. Oliveira. 2013b. Modular Monadic Meta-Theory. In Proceedings of the 18th ACM SIGPLAN International Conference on Functional Programming (Boston, Massachusetts, USA) (ICFP ’13). Association for Computing Machinery, New York, NY, USA, 319–330. https://doi.org/10.1145/2500365.2500587
\ Benjamin James Delaware. 2013. Feature Modularity in Mechanized Reasoning. Ph. D. Dissertation. The University of Texas at Austin.
\ Paul Downen, Zachary Sullivan, Zena M. Ariola, and Simon Peyton Jones. 2019. Codata in Action. In European Symposium on Programming (ESOP ’19). Springer, 119–146. https://doi.org/10.1007/978-3-030-17184-1_5
\ Richard Eisenberg. 2016. Dependent Types in Haskell: Theory and Practice. Ph. D. Dissertation. University of Pennsylvania.
\ Richard A. Eisenberg, Guillaume Duboc, Stephanie Weirich, and Daniel Lee. 2021. An Existential Crisis Resolved: Type Inference for First-Class Existential Types. Proc. ACM Program. Lang. 5, ICFP, Article 64 (aug 2021), 29 pages. https://doi.org/10.1145/3473569
\ Richard A. Eisenberg and Stephanie Weirich. 2012. Dependently Typed Programming with Singletons. In Proceedings of the Haskell Symposium (Copenhagen, Denmark) (Haskell ’12). Association for Computing Machinery, New York, NY, USA, 117–130. https://doi.org/10.1145/2364506.2364522
\ Matthias Felleisen and Robert Hieb. 1992. The Revised Report on the Syntactic Theories of Sequential Control and State. Theoretical Computer Science 103, 2 (1992), 235–271. https://doi.org/10.1016/0304-3975(92)90014-7
\ Yannick Forster and Kathrin Stark. 2020. Coq à La Carte: A Practical Approach to Modular Syntax with Binders. In Proceedings of the Conference on Certified Programs and Proofs (New Orleans) (CPP 2020). Association for Computing Machinery, New York, NY, USA, 186–200. https://doi.org/10.1145/3372885.3373817
\ Peng Fu and Aaron Stump. 2014. Self Types for Dependently Typed Lambda Encodings. In International Conference on Rewriting Techniques and Applications, Gilles Dowek (Ed.). Springer, 224–239. https://doi.org/10.1007/978-3-319-08918-8_16
\ Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. 1995. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Publishing Co., Boston.
\ Richard Garner. 2009. On the Strength of Dependent Products in the Type Theory of Martin-Löf. Annals of Pure and Applied Logic 160, 1 (2009), 1–12. https://doi.org/10.1016/j.apal.2008.12.003
\ Herman Geuvers. 2001. Induction Is Not Derivable in Second Order Dependent Type Theory. In Typed Lambda Calculi and Applications, Samson Abramsky (Ed.). Springer, Berlin, Heidelberg, 166–181. https://doi.org/10.1007/3-540-45413-6_16
\ Herman Geuvers. 2014. The Church-Scott Representation of Inductive and Coinductive Data. https://www.cs.vu.nl/~femke/courses/ep/slides/herman-data-types.pdf Presented at the TYPES 2014 meeting.
\ Eduardo Giménez. 1996. Un Calcul de Constructions Infinies et son application a la vérification de systemes communicants. Ph. D. Dissertation. Lyon, École normale supérieure (sciences).
\ Jean-Yves Girard. 1972. Interprétation fonctionelle et élimination des coupures de l’arithmétique d’ordre supérieur. Thése de Doctorat d’Etat. Université de Paris VII.
\ Brian Goetz et al. 2014. JSR 335: Lambda Expressions for the Java Programming Language. https://jcp.org/en/jsr/detail?id=335
\ Adam Gundry. 2013. Type Inference, Haskell and Dependent Types. Ph. D. Dissertation. University of Strathclyde.
\ Tatsuya Hagino. 1987. A Categorical Programming Language. Ph. D. Dissertation. University of Edinburgh. https://doi.org/10.48550/arXiv.2010.05167
\ Tatsuya Hagino. 1989. Codatatypes in ML. Journal of Symbolic Computation 8, 6 (1989), 629–650. https://doi.org/10.1016/S0747-7171(89)80065-3
\ Martin Hofmann. 1997. Syntax and Semantics of Dependent Types. In Extensional Constructs in Intensional Type Theory. Springer, London, 13–54. https://doi.org/10.1007/978-1-4471-0963-1_2
\ Martin Hofmann and Thomas Streicher. 1994. The Groupoid Model Refutes Uniqueness of Identity Proofs. In Proceedings Ninth Annual IEEE Symposium on Logic in Computer Science. IEEE, 208–212. https://doi.org/10.1109/LICS.1994.316071
\ William Alvin Howard. 1980. The Formulae-as-Types Notion of Construction. In To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus, and Formalism, J. Roger Hindley and Jonathan P. Seldin (Eds.). Academic Press.
\ Yulong Huang and Jeremy Yallop. 2023. Defunctionalization with Dependent Types. Proc. ACM Program. Lang. 7, PLDI, Article 127 (jun 2023), 23 pages. https://doi.org/10.1145/3591241
\ Antonius J. C. Hurkens. 1995. A Simplification of Girard’s Paradox. In Proceedings of the Conference on Typed Lambda Calculi and Applications. Springer, London, 266–278. https://doi.org/10.1007/BFb0014058
\ Bart Jacobs. 1995. Objects and Classes, Coalgebraically. In Object Orientation with Parallelism and Persistence, Burkhard Freitag, Cliff B. Jones, Christian Lengauer, and Hans-Jörg Schek (Eds.). Springer, 83–103. https://doi.org/10.1007/978-1-4613-1437-0_5
\ Steven Keuchel and Tom Schrijvers. 2013. Generic Datatypes à La Carte. In Workshop on Generic Programming (Boston) (WGP ’13). Association for Computing Machinery, New York, NY, USA, 13–24. https://doi.org/10.1145/2502488.2502491
\ Pieter Koopman, Rinus Plasmeijer, and Jan Martin Jansen. 2014. Church Encoding of Data Types Considered Harmful for Implementations: Functional Pearl. In Proceedings of the 26th 2014 International Symposium on Implementation and Application of Functional Languages (IFL ’14). Association for Computing Machinery, New York, NY, USA, Article 4, 12 pages. https://doi.org/10.1145/2746325.2746330
\ Sam Lindley and Conor McBride. 2013. Hasochism: The Pleasure and Pain of Dependently Typed Haskell Programming. In Proceedings of the Haskell Symposium (Boston) (Haskell ’13). Association for Computing Machinery, New York, NY, USA, 81–92. https://doi.org/10.1145/2503778.2503786
\ Richard Molitor. 2015. Open Inductive Predicates. Master’s thesis. Karlsruher Institut für Technologie (KIT).
\ Hiroshi Nakano. 2000. A Modality for Recursion. In Proceedings of the Symposium on Logic in Computer Science. 255–266. https://doi.org/10.1109/LICS.2000.855774
\ Klaus Ostermann and Julian Jabs. 2018. Dualizing Generalized Algebraic Data Types by Matrix Transposition. In European Symposium on Programming. Springer, 60–85. https://doi.org/10.1007/978-3-319-89884-1_3
\ Nicolas Oury. 2008. Coinductive Types and Type Preservation. Message on the coq-club mailing list (2008). https://sympa.inria.fr/sympa/arc/coq-club/2008-06/msg00022.html
\ François Pottier and Nadji Gauthier. 2006. Polymorphic Typed Defunctionalization and Concretization. Higher-Order and Symbolic Computation 19, 1 (3 2006), 125–162. https://doi.org/10.1007/s10990-006-8611-7
\ Tillmann Rendel, Julia Trieflinger, and Klaus Ostermann. 2015. Automatic Refunctionalization to a Language with Copattern Matching: With Applications to the Expression Problem. In Proceedings of the 20th ACM SIGPLAN International Conference on Functional Programming (Vancouver, BC, Canada) (ICFP 2015). Association for Computing Machinery, New York, NY, USA, 269–279. https://doi.org/10.1145/2784731.2784763
\ John Charles Reynolds. 1972. Definitional Interpreters for Higher-Order Programming Languages. In Proceedings of the ACM Annual Conference (Boston). Association for Computing Machinery, New York, NY, USA, 717–740. https://doi.org/10.1145/800194.805852
\ Christopher Schwaab and Jeremy G. Siek. 2013. Modular Type-Safety Proofs in Agda. In Proceedings of the 7th Workshop on Programming Languages Meets Program Verification (Rome, Italy) (PLPV ’13). Association for Computing Machinery, New York, NY, USA, 3–12. https://doi.org/10.1145/2428116.2428120
\ Anton Setzer. 2003. Java as a Functional Programming Language. In Types for Proofs and Programs, Herman Geuvers and Freek Wiedijk (Eds.). Springer, Berlin, Heidelberg, 279–298. https://doi.org/10.1007/3-540-39185-1_16
\ Anton Setzer. 2006. Object-Oriented Programming in Dependent Type Theory. Trends in Functional Programming 7 (2006), 91–108.
\ Anton Setzer. 2012. Coalgebras as Types Determined by Their Elimination Rules. In Epistemology versus Ontology. Essays on the Philosophy and Foundations of Mathematics in Honour of Per Martin-Löf, Peter Dybjer, Sten Lindström, Erik Palmgren, and Göran Sundholm (Eds.). Logic, Epistemology, and the Unity of Science, Vol. 27. Springer, Dordrecht, 351–369. https://doi.org/10.1007/978-94-007-4435-6_16
\ Thomas Streicher. 1993. Investigations into Intensional Type Theory. Habilitationsschrift, Ludwig-Maximilians-Universität München.
\ Wouter Swierstra. 2008. Data Types à la Carte. Journal of Functional Programming 18, 4 (2008), 423–436. https://doi.org/10.1017/S0956796808006758
\ Neil Tennant. 1982. Proof and Paradox. Dialectica 36, 2-3 (1982), 265–296. https://doi.org/10.1111/j.1746-8361.1982.tb00820.x
\ David Thibodeau, Andrew Cave, and Brigitte Pientka. 2016. Indexed Codata Types. In Proceedings of the International Conference on Functional Programming (Nara, Japan) (ICFP 2016). Association for Computing Machinery, New York, NY, USA, 351–363. https://doi.org/10.1145/2951913.2951929
\ Philip Wadler. 1998. The Expression Problem. (11 1998). https://homepages.inf.ed.ac.uk/wadler/papers/expression/expression.txt Note to Java Genericity mailing list.
\ Stephanie Weirich, Pritam Choudhury, Antoine Voizard, and Richard A. Eisenberg. 2019. A Role for Dependent Types in Haskell. Proc. ACM Program. Lang. 3, ICFP, Article 101 (jul 2019), 29 pages. https://doi.org/10.1145/3341705
\ Stephanie Weirich, Antoine Voizard, Pedro Henrique Azevedo de Amorim, and Richard A. Eisenberg. 2017. A Specification for Dependent Types in Haskell. Proceedings of the ACM on Programming Languages 1, ICFP (8 2017). https://doi.org/10.1145/3110275
\ Andrew K. Wright and Matthias Felleisen. 1994. A Syntactic Approach to Type Soundness. Information and Computation 115, 1 (11 1994), 38–94. https://doi.org/10.1006/inco.1994.1093
\ Weixin Zhang, Cristina David, and Meng Wang. 2022. Decomposition Without Regret. arXiv preprint arXiv:2204.10411 (2022). https://doi.org/10.48550/ARXIV.2204.10411
\
:::info Authors: David Binder, Ingo Skupin, Tim Süberkrüb, and Klaus Ostermann
:::
:::info This paper is available on arxiv under CC 4.0 license.
:::
\