Bitcoin has firmly established itself as digital gold, the apex store of value in the cryptocurrency ecosystem. Adoption has reached Wall Street: banks are expanding their crypto services and offering direct BTC exposure via ETFs. With this level of institutional integration, the next pressing question becomes: how do you generate yield on BTC holdings? Making things more interesting, institutions will focus on solutions that optimize for Security, Yield, and Liquidity.
This poses a fundamental challenge for any Bitcoin L2 solution (and staking): since Bitcoin lacks native yield (unless you run a miner) and serves primarily as a store of value, any yield generated in another asset faces selling pressure if the ultimate goal is to accumulate more BTC.
When Bitcoiners participate in any ecosystem – whether it's an L2, DeFi protocol, or alternative chain – their end goal remains simple: stack more sats. This creates inherent selling pressure for any token used to pay staking rewards or security budgets. While teams are developing interesting utility for alternative tokens, the reality is that without a thriving ecosystem, sustainable yield remains a pipe dream. Teams are mainly forced to bootstrap network effects via points or other incentives.
This brings us to a critical point: Bitcoin L2s' main competition isn't other Bitcoin L2s or BTCfi, but established ecosystems like Solana and Ethereum. The sustainability of yield within a Bitcoin L2 cannot be achieved until a sufficiently robust ecosystem exists within that L2 – and this remains the central challenge. Interesting new ZK rollup providers like Alpen Labs and Starknet claim they can import network effects by offering EVM compatibility on Bitcoin while enhancing security. As Bitcoin's tenure as a store of value grows, making it increasingly gold-like, monetization schemes for the asset will become more common.
However, we need to face reality – with 86% of VC funding for these L2s allocated post-2024, we're still years away from maturity. Is it too late for Bitcoin L2s to catch up?
Security alone is no longer a sufficient differentiator. Solana and Ethereum have proven resilient enough to earn institutional trust, while Bitcoin L2s must justify their additional complexity, particularly around smart contract risk when interacting with UTXOs.
Being EVM-compatible does not automatically create network effects. It might help bring developers and dapps over, but creating a winning ecosystem flywheel will only become tougher with time. In fact, the winners of this cycle have differentiated with a product-first approach (Hyperliquid, pump.fun, Ethena…), not a VM or tech stack. As such, providing extra BTC economic security or alignment won't be enough in the long run without a killer product.
Incremental security improvements alone aren't the most compelling selling point – we've seen re-staking initiatives like EigenLayer struggle with this exact issue. AVSs generally aren't willing to pay extra for security (especially since they've had it for free); selling security is hard. We've seen the same promise of cryptoeconomic security fail before with Cosmos ICS and Polkadot parachains.
That said, Bitcoin L2s do have a compelling security advantage. They inherit Bitcoin's massive $1.2T+ security budget (hashrate), far exceeding what Solana or Ethereum can offer. For institutions prioritizing safety over yield size, this edge might matter – even if yields are somewhat lower. Bitcoin Timestamping could create a completely new market. Can L2s tap into this extra economic security and liquidity while 10x’ing product experience? Again, if your security is higher but the product is not great, it won’t matter.
BTC whales aren't primarily interested in bridging assets; they want to accumulate more Bitcoin. This raises an important question: from their perspective, is there a meaningful difference between locking BTC in an L2 versus in Solana?
Perceived risk is the key factor here. An institution might actually prefer Coinbase custody over a decentralized signer set where they might not know the operators, weighing legal risk against technical risk. This perception is heavily influenced by user experience – if a product isn't intuitive, the risk is perceived as higher. A degen whale, on the other hand, might be comfortable bridging into Solayer to farm the airdrop or 'staking' into Bitlayer for yield.
At Chorus One we’ve classified every staking offering to better inform our institutional clients who are interested in putting their BTC to work, following the guidance of our friends at Bitcoin Layers.
Want to dive deeper into the staking offerings available through Bitcoin Layers? Shoot our analyst Luis Nuñez (the author of this paper) a DM on X!
Since risk is a matter of perception, and depending on your yield, security, and liquidity preferences, your ideal option might look like this:
And still be super convenient. We're in an interesting period where Bitcoin TVL, or BTCfi, is increasing dramatically (led by Babylon), while the percentage of BTC that has remained idle for at least a year keeps rising, now at 60%. This tells us that Bitcoin dominance is growing thanks to institutional adoption, but that there are no compelling yield solutions yet to activate that BTC.
Institutions have historically preferred lending BTC over exploring L2/DeFi solutions, primarily due to familiarity (Coinbase, Cantor). According to Binance, only 0.79% of BTC is locked in DeFi, meaning DeFi lending (e.g. Aave) is not as popular. Even so, wrapped BTC in DeFi is still around five times larger than the amount of BTC in staking protocols.
Staking in Bitcoin Layers requires significant education. L2s like Stacks and CoreDAO use their proximity to miners to secure the system and tap into liquidity, providing incentives for contribution or merge mining. Operations more akin to TradFi might be an interesting differentiator for a BTC L2; we've seen significant institutional engagement in basis trades in the past, earning up to 5% yield with Deribit and other brokers.
However, lending's reputation has suffered severely post-2022. The collapses of BlockFi, Celsius, and Voyager exposed substantial custodial and counterparty risks, damaging institutional trust. As mentioned, Bitcoin L2s like Stacks offer an alternative by avoiding traditional custody while giving other parties, like miners, a role in providing yield via staking. For those with a more passive appetite, staking can be the ideal route to yield. Today, however, staking solutions are early and offer just points with the promise of a future airdrop, with the exception of CoreDAO.
Staking in Bitcoin L2s is very different. Typically, we see a multi-sig of operators that orders L2 transactions and timestamps a hashed representation of each block into Bitcoin. This allows the L2's state to be recreated at any point in time if the L2 is compromised; essentially, these designs use Bitcoin for data availability (DA). Consensus, however, still depends on the multi-sig operators, who could collude. Innovations with ZK (Alpen Labs, Citrea), UTXO-to-smart-contract (Arch, Stacks), and BitVM (BOB) are all trying to improve these security guarantees.
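To make the data-availability idea concrete, here is a minimal Python sketch of how a hypothetical L2 might commit a state root to Bitcoin via an OP_RETURN output. The toy Merkle construction and transaction contents are illustrative assumptions, not any specific L2's scheme:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Collapse a list of transaction payloads into a single root hash."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical L2 block: the operators hash its contents...
l2_txs = [b"tx1: alice->bob 0.1", b"tx2: bob->carol 0.05"]
state_commitment = merkle_root(l2_txs)

# ...and embed the 32-byte commitment in a Bitcoin transaction output.
# OP_RETURN (0x6a) + push-32 (0x20) + payload: provably unspendable,
# but permanently timestamped by Bitcoin's proof-of-work.
op_return_script = bytes([0x6A, 0x20]) + state_commitment
print(op_return_script.hex())
```

Anyone holding the full L2 data can later recompute the root and check it against the on-chain commitment, which is what makes state recreation possible even if the L2's own infrastructure disappears.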
In Ethereum, leading L2s typically have a single sequencer (vs. a multi-sig) to settle transactions to the L1. Critically however, Ethereum L1 has the capability to do fraud proofs allowing for block reorgs if there's a malicious transaction. In Bitcoin, the L1 doesn’t have verification capabilities, so this is not possible… until BitVM?
BitVM aims to allow fraud proofs on the Bitcoin L1. BitVM potentially offers a 10x improvement in security for Bitcoin L2s, but it comes with significant operational challenges.
BitVM is a magnificent project where leaders from every ecosystem are collaborating to make it a reality. We’ve seen potentially drastic improvements between BitVM1 and BitVM2:
BitVM allows fraud proofs to happen through a sequence of standard Bitcoin transactions with carefully crafted scripts. At its core, verification in BitVM works as follows:
1. Program Decomposition
Before any transactions occur, the program to be verified (such as a SNARK verifier) is split into sub-programs that fit in a Bitcoin block:
2. Operator Claim
The operator executes the entire program off-chain and claims:
They commit to all these values using cryptographic commitments in their on-chain transactions.
3. Challenge Initiation
When a challenger believes the operator is lying:
4. The Critical On-Chain Execution
Here's where Bitcoin nodes perform the actual verification:
The challenger creates a "Disprove" transaction that:
5. Bitcoin Consensus in Action
When nodes process this transaction:
The Bitcoin network reaches consensus on this result just like it does with any transaction's validity. The technology enables Bitcoin-native verification of arbitrary computations without changing Bitcoin's consensus rules. This opens the door for more sophisticated smart contracts secured directly by Bitcoin, but the implementation hurdles are substantial, since operators need to front the liquidity and face several risks.
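To make the challenge flow above concrete, here is a heavily simplified toy model in Python. Real BitVM commits to intermediate states with cryptographic commitments inside Bitcoin Script; the function names and the integer "state" here are purely illustrative assumptions:

```python
# Toy model of BitVM-style optimistic verification (illustrative only).

def operator_claim(chunks, initial_state, cheat_at=None):
    """Operator executes off-chain and commits to every intermediate state."""
    states = [initial_state]
    for i, chunk_fn in enumerate(chunks):
        out = chunk_fn(states[-1])
        if cheat_at == i:
            out += 999            # a lie in one intermediate commitment
        states.append(out)
    return states                 # these commitments go on-chain

def challenger_disprove(chunks, committed):
    """Find one chunk whose re-execution contradicts the committed states."""
    for i, chunk_fn in enumerate(chunks):
        if chunk_fn(committed[i]) != committed[i + 1]:
            return i              # the 'Disprove' transaction targets chunk i
    return None                   # honest operator: no disprove possible

chunks = [lambda s: s + 1, lambda s: s * 2, lambda s: s - 3]
claim = operator_claim(chunks, initial_state=5, cheat_at=1)
print(f"disprove chunk: {challenger_disprove(chunks, claim)}")  # -> 1
```

The key property the sketch shows: only the single disputed sub-program ever needs to be re-executed on-chain, so a lie anywhere in a large computation is punishable with one compact transaction.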
Given these risks, incentives to operate the bridge will have to be quite attractive. If the risks can be mitigated, security will be significantly enhanced and might even enable interoperability between different layers, which could unlock interesting use cases while retaining Bitcoin proximity. Will this proximity allow for the creation of killer products and real yields?
For a Bitcoin L2 to succeed, it must offer products unavailable elsewhere or provide substantially better user experiences. The previously mentioned Bitcoin proximity has to be exploited for differentiation.
The jury is still out on whether ZK rollup initiatives can bootstrap meaningful network effects. These rollups will ultimately need a killer app to thrive, or to port one over from the EVM with the promise of Bitcoin liquidity. Otherwise, why would dapps choose to settle on Bitcoin?
The winning strategy for Bitcoin L2s involves:
Below, we’ll dive into some of my top institutional picks, a few of which we’ve invested in.
Babylon’s main value-add is to provide Bitcoin economic security. As we’ve mentioned several times, this offering alone will not be enough, and the team is well aware. Personally, I'm bullish on the app-chain approach, following models like Avalanche or Cosmos, but simply using BTC for the initial bootstrap of security and liquidity.
While the app-chain thesis represents the endgame, reaching network effects requires 10x the effort since everything is naturally fragmented. Success demands an extremely robust supporting framework – something only Cosmos has arguably achieved with sufficient decentralization (and it has suffered the consequences). Avalanche, by contrast, provides the centralized support needed to unify a fragmented ecosystem.
The ideal endgame resembles apps in the App Store – distinct from each other but with clear commonalities. In this analogy, Bitcoin serves as the iPhone – the trusted foundation for distribution.
Mezo (investor)
Mezo's approach with mUSD is particularly interesting as it reduces token selling pressure if mUSD gains significant utility. Their focus on "real world" applications could drive mainstream adoption, with Bitcoin-backed loans as the centerpiece. Offering fixed rates as low as 1% unlocks interesting DeFi use cases around looping with reduced risk, while undercutting costs compared to Coinbase + Morpho BTC lending offerings (at around 5%).
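As a rough sketch of the looping idea, consider this toy calculation. Only the 1% fixed borrow rate comes from above; the 5% external yield, 50% loan-to-value, and loop count are hypothetical assumptions for illustration, not Mezo parameters:

```python
# Illustrative loop economics (hypothetical numbers, not financial advice).
# Deposit collateral, borrow mUSD at a fixed 1%, deploy into a yield venue,
# and repeat at a conservative loan-to-value.

borrow_rate = 0.01     # fixed mUSD borrow rate (as low as 1%, per above)
yield_rate = 0.05      # assumed external yield on deployed capital
ltv = 0.50             # assumed conservative loan-to-value per loop

exposure, collateral = 0.0, 1.0
for _ in range(5):                 # five deposit -> borrow loops
    borrowed = collateral * ltv
    exposure += borrowed
    collateral = borrowed          # redeploy borrowed value as new collateral

net = exposure * (yield_rate - borrow_rate)
print(f"leverage ~{1 + exposure:.2f}x, net carry ~{net:.2%} on initial capital")
```

The low fixed rate is what makes the spread positive; at a 5% borrow cost the same loop would carry zero or negative yield.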
Plasma (investor)
Purpose-built for stablecoin usage. Zero-fee USDT transfers, parallel execution, and strong distribution strategies position Plasma well in the ecosystem. Other features include confidential transactions and high customization around gas and fees.
Arch is following the MegaETH approach of curating a "mafia" ecosystem, with a parallel execution environment and close ties to Solana. In Arch, users send assets directly to smart contracts using native Bitcoin transactions.
Stacks has a very interesting setup since there's no selling pressure for stakers (they earn BTC rather than STX). As the oldest and most recognized Bitcoin L2 brand, they have significant advantages. While Clarity presents challenges, this may be changing with innovations like smart-contract-to-Bitcoin-transaction capabilities in development, alongside support for other programming languages. StackingDAO (investor) is the leading LST in the ecosystem and provides interesting yield opportunities in both liquid STX and liquid sBTC.
Looking to stake your STX? Click here!
BOB (Building on Bitcoin)
BOB is at the forefront of BitVM development (targeting mainnet in 2025) and is looking to use Babylon for security bootstrapping. The team is doing a fantastic job of exploiting BTC proximity with BitVM while developing institutional-grade products.
CoreDAO features strong LST adoption tailored for institutions and is the only staking mechanism that is live and paying real yield today. CoreDAO Ventures is doing a great job backing teams early in their development.
Botanix is the leading multi-sig setup with its Spiderchain, where each BTC bridged to the chain is managed by a new, randomized multi-sig, increasing robustness by providing 'forward security'. Interestingly, Botanix will not have its own token (at least initially) and will only use BTC and pBTC, meaning rewards and fees will be paid in BTC.
For retail users, four standout solutions I like:
Bitcoin L2s face significant challenges in their quest for adoption and sustainability. The inherent tension between Bitcoin's store-of-value proposition and the yield-generating mechanisms of L2s creates fundamental hurdles. However, projects that can offer unique capabilities, seamless user experiences, and compelling institutional cases have the potential to overcome these obstacles and carve out valuable niches in the expanding Bitcoin ecosystem.
The key to success lies not in merely replicating what Ethereum or Solana already offer, but in leveraging Bitcoin's unique strengths to create complementary solutions that expand the utility of the world's leading cryptocurrency without compromising its fundamental value proposition. Adoption is one killer product away.
Want to learn more about yield opportunities on Bitcoin? Reach out to us at research@chorus.one and let’s chat!
In the world of blockchain technology, where every millisecond counts, the speed of light isn’t just a scientific constant—it’s a hard limit that defines the boundaries of performance. As Kevin Bowers highlighted in his article Jump Vs. the Speed of Light, the ultimate bottleneck for globally distributed systems, like those used in trading and blockchain, is the physical constraint of how fast information can travel.
To put this into perspective, light travels at approximately 299,792 km/s in a vacuum, but in fiber optic cables (the backbone of internet communication), it slows to about 200,000 km/s due to the medium's refractive index. This might sound fast, but when you consider the distances involved in a global network, delays become significant. For example, even the best case between two antipodal points on Earth (~20,000 km of fiber) is roughly 100 ms one-way, before any routing or processing overhead.
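A quick back-of-the-envelope calculation makes the point (the distances are rough great-circle estimates):

```python
# One-way propagation delays in fiber at ~200,000 km/s.
SPEED_IN_FIBER_KM_S = 200_000

routes_km = {                      # approximate great-circle distances
    "New York -> London": 5_600,
    "New York -> Singapore": 15_300,
    "antipodal points": 20_000,
}
for route, km in routes_km.items():
    one_way_ms = km / SPEED_IN_FIBER_KM_S * 1_000
    print(f"{route}: ~{one_way_ms:.0f} ms one-way, "
          f"~{2 * one_way_ms:.0f} ms round-trip")
```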
For applications like high-frequency trading or blockchain consensus mechanisms, this delay is simply too long. In decentralized systems, the problem worsens because nodes must exchange multiple messages to reach agreement (e.g., propagating a block and confirming it). Each round-trip adds to the latency, making the speed of light a "frustrating constraint" when near-instant coordination is the goal.
Beyond the physical delay imposed by the speed of light, blockchain networks face an additional challenge rooted in information theory: the Shannon Capacity Theorem. This theorem defines the maximum rate at which data can be reliably transmitted over a communication channel. It's expressed as:

C = B · log₂(1 + S/N)
where C is the channel capacity (bits per second), B is the bandwidth (in hertz), and S/N is the signal-to-noise ratio. In simpler terms, the theorem tells us that even with a perfect, lightspeed connection, there’s a ceiling on how much data a network can handle, determined by its bandwidth and the quality of the signal.
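As a worked example with illustrative numbers, take a hypothetical 1 GHz channel at a 30 dB signal-to-noise ratio:

```python
import math

B = 1e9                     # bandwidth in hertz (assumed)
snr_db = 30                 # signal-to-noise ratio in dB (assumed)
snr = 10 ** (snr_db / 10)   # 30 dB -> S/N = 1000

C = B * math.log2(1 + snr)  # Shannon capacity in bits per second
print(f"C ~= {C / 1e9:.2f} Gbit/s")   # ~9.97 Gbit/s, a hard ceiling
```

No engineering effort can push reliable throughput past that ceiling without more bandwidth or a cleaner signal.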
For blockchain systems, this is a critical limitation because they rely on broadcasting large volumes of transaction data to many nodes simultaneously. So, even if we could magically eliminate latency, the Shannon Capacity Theorem reminds us that the network’s ability to move data is still finite. For blockchains aiming for mass adoption—like Solana, which targets thousands of transactions per second—this dual constraint of light speed and channel capacity is a formidable hurdle.
In a computing landscape where recent technological advances have prioritized fitting more cores into a CPU rather than making them faster, and where the speed of light emerges as the ultimate bottleneck, the Jump team refuses to settle for off-the-shelf solutions or the short-term fix of buying more hardware. Instead, it reimagines existing solutions to extract maximum performance from the network layer, optimizing data transmission, reducing latency, and enhancing reliability to combat the "noise" of packet loss, congestion, and global delays.
The Firedancer project is about tailoring this concept for a blockchain world where every microsecond matters, breaking the paralysis in decision-making that arises when systems have many unoptimized components.
Firedancer is a high-performance validator client for the Solana blockchain, written in C and developed by Jump Crypto, a division of Jump Trading focused on advancing blockchain technologies. Unlike traditional validator clients that rely on generic software stacks and incremental hardware upgrades, Firedancer is a ground-up reengineering of how a blockchain node operates. Its mission is to push the Solana network to the very limits of what's physically possible, addressing the dual constraints of light speed and channel capacity head-on.
At its core, Firedancer is designed to optimize every layer of the system, from data transmission to transaction processing. It proposes a major rewrite of the three functional components of the Agave client: networking, runtime, and consensus mechanism.
Firedancer is a big project, and for this reason it is being developed incrementally. The first Firedancer validator is nicknamed Frankendancer. It is Firedancer’s networking layer grafted onto the Agave runtime and consensus code. Precisely, Frankendancer has implemented the following parts:
All other functionality is retained by Agave, including the runtime itself which tracks account state and executes transactions.
In this article, we'll dive into on-chain data to compare the performance of the Agave client with Frankendancer. Through data-driven analysis, we quantify whether these advancements are visible on-chain in Solana's performance. Note that not all improvements can be detected this way.
You can walk through all the data used in this analysis via our dedicated dashboard.
While the signature verification and block distribution engines are difficult to track using on-chain data, studying the dynamic behaviour of transactions can provide useful information about the QUIC implementation and block packing logic.
Transactions on Solana are encoded and sent in QUIC streams into validators from clients, cfr. here. QUIC is relevant during the FetchStage, where incoming packets are batched (up to 128 per batch) and prepared for further processing. It operates at the kernel level, ensuring efficient network input handling. This makes QUIC a relevant piece of the Transaction Processing Unit (TPU) on Solana, which represents the logic of the validator responsible for block production. Improving QUIC means ultimately having control on transaction propagation. In this section we are going to compare the Agave QUIC implementation with the Frankendancer fd_quic—the C implementation of QUIC by Jump Crypto.
The first difference lies in connection management. Agave utilizes a connection cache to manage connections, implemented via the solana_connection_cache module, meaning there is a lookup mechanism for reusing or tracking existing connections. It also employs an AsyncTaskSemaphore to limit the number of asynchronous tasks (set to a maximum of 2,000 tasks by default). This semaphore ensures the system does not spawn excessive tasks, providing a basic form of concurrency control.
Frankendancer implements a more explicit and granular connection management system using a free list (state->free_conn_list) and a connection map (fd_quic_conn_map) based on connection IDs. This allows precise tracking and allocation of connection resources. It also leverages receive-side scaling and kernel bypass technologies like XDP/AF_XDP to distribute incoming traffic across CPU cores with minimal overhead, enhancing scalability and performance, cfr. here. It does not rely on semaphores for task limiting; instead, it uses a service queue (svc_queue) with scheduling logic (fd_quic_svc_schedule) to manage connection lifecycle events, indicating a more sophisticated event-driven approach.
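As an illustrative model of the free-list-plus-connection-map pattern: the real fd_quic is preallocated C code and far more involved, but this Python toy shows why the approach gives bounded, O(1) resource management:

```python
# Toy model of free-list + connection-map resource management,
# the pattern fd_quic uses (the real code preallocates everything in C).

class ConnPool:
    def __init__(self, max_conns: int):
        self.free = list(range(max_conns))   # preallocated slots
        self.by_id: dict[bytes, int] = {}    # connection-ID -> slot

    def open(self, conn_id: bytes) -> int | None:
        if not self.free:
            return None                      # bounded: excess conns refused
        slot = self.free.pop()
        self.by_id[conn_id] = slot
        return slot

    def close(self, conn_id: bytes) -> None:
        slot = self.by_id.pop(conn_id)
        self.free.append(slot)               # slot returns to the free list

pool = ConnPool(max_conns=2)
pool.open(b"conn-a")
pool.open(b"conn-b")
print(pool.open(b"conn-c"))     # None: pool exhausted, O(1) rejection
pool.close(b"conn-a")
print(pool.open(b"conn-c"))     # freed slot is reused
```

Because every resource is accounted for up front, there is no unbounded task spawning to throttle with a semaphore; capacity limits are structural.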
Frankendancer also implements a stream handling pipeline. Precisely, fd_quic provides explicit stream management with functions like fd_quic_conn_new_stream() for creation, fd_quic_stream_send() for sending data, and fd_quic_tx_stream_free() for cleanup. Streams are tracked using a fd_quic_stream_map indexed by stream IDs.
Finally, for packet processing, Agave's approach focuses on basic packet sending and receiving, with asynchronous methods like send_data_async() and send_data_batch_async().
Frankendancer implements detailed packet processing with specific handlers for different packet types: fd_quic_handle_v1_initial(), fd_quic_handle_v1_handshake(), fd_quic_handle_v1_retry(), and fd_quic_handle_v1_one_rtt(). These functions parse and process packets according to their QUIC protocol roles.
Differences in QUIC implementation can be seen on-chain at the transaction level. Indeed, a more sophisticated QUIC implementation means better packet handling and ultimately more room for optimization when handing transactions to the block packing logic.
After the FetchStage and the SigVerifyStage—which verifies the cryptographic signatures of transactions to ensure they are valid and authorized—there is the Banking stage. Here verified transactions are processed.
At the core of the Banking stage is the scheduler. It represents a critical component of any validator client, as it determines the order and priority of transaction processing for block producers.
Agave implements a central scheduler, introduced in v2.18. Its main purpose is to loop, constantly checking the incoming queue of transactions and processing them as they arrive, routing each to an appropriate thread for further processing. It prioritizes transactions according to a formula based on their fees and requested compute units.
The scheduler is responsible for pulling transactions from the receiver channel, and sending them to the appropriate worker thread based on priority and conflict resolution. The scheduler maintains a view of which account locks are in-use by which threads, and is able to determine which threads a transaction can be queued on. Each worker thread will process batches of transactions, in the received order, and send a message back to the scheduler upon completion of each batch. These messages back to the scheduler allow the scheduler to update its view of the locks, and thus determine which future transactions can be scheduled, cfr. here.
Frankendancer implements its own scheduler in fd_pack. Within fd_pack, transactions are prioritized based on their reward-to-compute ratio—calculated as fees (in lamports) divided by estimated CUs—favoring those offering higher rewards per resource consumed. This prioritization happens within treaps, a blend of binary search trees and heaps, providing O(log n) access to the highest-priority transactions. Three treaps—pending (regular transactions), pending_votes (votes), and pending_bundles (bundled transactions)—segregate types, with votes balanced via reserved capacity and bundles ordered using a mathematical encoding of rewards to enforce FIFO sequencing without altering the treap’s comparison logic.
Scheduling, driven by fd_pack_schedule_next_microblock, pulls transactions from these treaps to build microblocks for banking tiles, respecting limits on CUs, bytes, and microblock counts. It ensures votes get fair representation while filling remaining space with high-priority non-votes, tracking usage via cumulative_block_cost and data_bytes_consumed.
To resolve conflicts, it uses bitsets—a container that represents a fixed-size sequence of bits—which are like quick-reference maps. Bitsets—rw_bitset (read/write) and w_bitset (write-only)—map account usage to bits, enabling O(1) intersection checks against global bitset_rw_in_use and bitset_w_in_use. Overlaps signal conflicts (e.g., write-write or read-write clashes), skipping the transaction. For heavily contested accounts (exceeding PENALTY_TREAP_THRESHOLD of 64 references), fd_pack diverts transactions to penalty treaps, delaying them until the account frees up, then promoting the best candidate back to pending upon microblock completion. A slow-path check via acct_in_use—a map of account locks per bank tile—ensures precision when bitsets flag potential issues.
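A toy Python model of the bitset idea may help; the account-to-bit mapping and names here are simplified stand-ins for fd_pack's actual data structures:

```python
# Toy model of fd_pack-style bitset conflict detection. Illustrative only:
# the real code maps accounts to bits via hashing and falls back to a
# slow-path acct_in_use check when the bitsets flag a potential clash.

NUM_BITS = 64
_bits: dict[str, int] = {}

def account_bit(account: str) -> int:
    """Assign each account a bit (the real implementation hashes instead)."""
    if account not in _bits:
        _bits[account] = 1 << (len(_bits) % NUM_BITS)
    return _bits[account]

block_rw_in_use = 0   # every account touched by scheduled transactions
block_w_in_use = 0    # accounts write-locked by scheduled transactions

def try_schedule(reads: list[str], writes: list[str]) -> bool:
    global block_rw_in_use, block_w_in_use
    rw = w = 0
    for a in reads + writes:
        rw |= account_bit(a)          # rw_bitset: everything touched
    for a in writes:
        w |= account_bit(a)           # w_bitset: writes only
    # O(1) test: we write what someone touches, or touch what someone writes
    if (w & block_rw_in_use) or (rw & block_w_in_use):
        return False                  # conflict: skip or divert to penalty treap
    block_rw_in_use |= rw
    block_w_in_use |= w
    return True

print(try_schedule(reads=["alice"], writes=["amm_pool"]))   # True
print(try_schedule(reads=["bob"], writes=["amm_pool"]))     # False: write-write
print(try_schedule(reads=["amm_pool"], writes=["carol"]))   # False: read-write
```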
Vote fees on Solana are a vital economic element of its consensus mechanism, ensuring network security and encouraging validator participation. In Solana’s delegated Proof of Stake (dPoS) system, each active validator submits one vote transaction per slot to confirm the leader’s proposed block, with an optimal delay of one slot. Delays, however, can shift votes into subsequent slots, causing the number of vote transactions per slot to exceed the active validator count. Under the current implementation, vote transactions compete with regular transactions for Compute Unit (CU) allocation within a block, influencing resource distribution.
Data reveals that the Frankendancer client includes more vote transactions than the Agave client, resulting in greater CU allocation to votes. To evaluate this difference, a dynamic Kolmogorov-Smirnov (KS) test can be applied. This non-parametric test compares two distributions by calculating the maximum difference between their Cumulative Distribution Functions (CDFs), assessing whether they originate from the same population. Unlike parametric tests with specific distributional assumptions, the KS-test’s flexibility suits diverse datasets, making it ideal for detecting behavioral shifts in dynamic systems. The test yields a p-value, where a low value (less than 0.05) indicates a significant difference between distributions.
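A minimal sketch of how such a test can be run, assuming two arrays of per-block CU usage; the data below is randomly generated placeholder data, not the on-chain sample used in this analysis:

```python
import numpy as np
from scipy.stats import ks_2samp

# Placeholder per-block CU-usage samples for the two clients.
rng = np.random.default_rng(7)
agave_cu = rng.normal(38e6, 4e6, size=2_000)
franken_cu = rng.normal(36e6, 4e6, size=2_000)

stat, p_value = ks_2samp(agave_cu, franken_cu)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2e}")
if p_value < 0.05:
    print("the two distributions differ significantly")
```

Running the test on a sliding window of blocks ("dynamic" KS) turns it into a time series of p-values, which is how behavioral shifts between client versions show up.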
When comparing CU usage for non-vote transactions between Agave (Version 2.1.14) and Frankendancer (Version 0.406.20113), the KS-test shows that Agave’s CDF frequently lies below Frankendancer’s (visualized as blue dots). This suggests that Agave blocks tend to allocate more CUs to non-vote transactions compared to Frankendancer. Specifically, the probability of observing a block with lower CU usage for non-votes is higher in Frankendancer relative to Agave.
Interestingly, this does not correspond to a lower overall count of non-vote transactions; Frankendancer appears to outperform Agave in including non-vote transactions as well. Together, these findings imply that Frankendancer validators achieve higher rewards, driven by increased vote transaction inclusion and efficient CU utilization for non-vote transactions.
Frankendancer's ability to include more vote transactions may stem from the fact that on Agave there is a maximum number of QUIC connections that can be established between a client (identified by IP address and node pubkey) and the server, ensuring network stability. The number of streams a client can open per connection is directly tied to its stake. Higher-stake validators can open more streams, allowing them to process more transactions concurrently, cfr. here. During high network load, lower-stake validators might face throttling, potentially missing vote opportunities, while higher-stake validators, with better bandwidth, can maintain consistent voting, indirectly affecting their influence in consensus. Frankendancer doesn't seem to suffer from the same restriction.
Although the inclusion of vote transactions plays a relevant role in Solana consensus, two other metrics are worth exploring: Skip Rate and Validator Uptime.
Skip Rate measures whether a validator actually proposes a block when selected as leader. A high skip rate means lower total rewards, mainly due to missed MEV and priority fee opportunities. Missing a high number of slots also reduces total TPS, worsening the end-user experience.
Validator Uptime impacts vote latency and, consequently, final staking rewards. This metric is estimated via Timely Vote Credits (TVC), which indirectly measure how many slots a validator takes to land its votes. 100% TVC effectiveness means a validator lands its votes in fewer than 2 slots.
As we can see, there are no major differences pre-epoch 755. Data shows a recent elevated Skip Rate for Frankendancer and a corresponding low TVC effectiveness. However, since these metrics are based on averages, and considering that a smaller share of stake runs Frankendancer, small fluctuations in Frankendancer's performance take more time to be reabsorbed.
The scheduler plays a critical role in optimizing transaction processing during block production. Its primary task is to balance transaction prioritization—based on priority fees and compute units—with conflict resolution, ensuring that transactions modifying the same account are processed without inconsistencies. The scheduler orders transactions by priority, then groups them into conflict-free batches for parallel execution by worker threads, aiming to maximize throughput while maintaining state coherence. This balancing act often results in deviations from the ideal priority order due to conflicts.
To evaluate this efficiency, we introduced a dissipation metric, D, that quantifies the distance between a transaction's optimal position o(i)—based on priority and dependent on the scheduler—and its actual position in the block a(i), defined as

D = (1/N) · Σᵢ |o(i) − a(i)|
where N is the number of transactions in the considered block.
This metric reveals how well the scheduler adheres to the priority order amidst conflict constraints. A lower dissipation score indicates better alignment with the ideal order. Note that the dissipation D has an intrinsic component that accounts for account congestion and for the time-dependence of transaction arrival. In an ideal case, these factors should be equal for all schedulers.
Given this intrinsic component, the numerical value of the estimator doesn't carry much meaning on its own. However, when comparing the results for two schedulers, we can infer which one resolves conflicts better: a higher dissipation value indicates a preference for conflict resolution over strict transaction prioritization.
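For concreteness, here is a minimal implementation of the dissipation metric as defined above; the block contents are hypothetical:

```python
# Minimal implementation of the dissipation metric D defined above.

def dissipation(block_txs: list[tuple[str, int]]) -> float:
    """D = (1/N) * sum_i |o(i) - a(i)|.

    block_txs: (tx_id, priority) pairs in actual block order a(i);
    o(i) is the position each tx would hold in pure priority order.
    """
    n = len(block_txs)
    by_priority = sorted(block_txs, key=lambda t: -t[1])
    optimal = {tx: pos for pos, (tx, _) in enumerate(by_priority)}
    return sum(abs(optimal[tx] - actual)
               for actual, (tx, _) in enumerate(block_txs)) / n

# Hypothetical block: tx "c" was demoted one place by a conflict.
block = [("a", 90), ("b", 80), ("d", 60), ("c", 70)]
print(dissipation(block))   # 0.5: "c" and "d" are each one slot off-priority
```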
Comparing the Frankendancer and Agave schedulers shows that dissipation is higher for Frankendancer, independently of the version. This is clearer in the dynamic KS test: only in very few instances did the Agave scheduler show a higher dissipation with statistically significant evidence.
Whether this conflict resolution—and thus parallelization—is due to the scheduler implementation or to the QUIC implementation is hard to tell from these data. Indeed, better conflict resolution can also be achieved simply by having more transactions to select from.
Finally, comparing the percentiles of priority fees also hints at different conflict-resolution behaviour in Frankendancer: despite a higher overall number of transactions (both vote and non-vote) and higher extracted value than Agave, the median priority fee is lower.
In this article we provide a detailed comparison of the Agave and Frankendancer validator clients on the Solana blockchain, focusing on on-chain performance metrics to quantify their differences. Frankendancer, the initial iteration of Jump Crypto’s Firedancer project, integrates an advanced networking layer—including a high-performance QUIC implementation and kernel bypass—onto Agave’s runtime and consensus code. This hybrid approach aims to optimize transaction processing, and the data reveals its impact.
On-chain data shows Frankendancer includes more vote transactions per block than Agave, resulting in greater compute unit (CU) allocation to votes, a critical factor in Solana’s consensus mechanism. This efficiency ties to Frankendancer’s QUIC and scheduler enhancements. Its fd_quic implementation, with granular connection management and kernel bypass, processes packets more effectively than Agave’s simpler, semaphore-limited approach, enabling better transaction propagation.
The scheduler, fd_pack, prioritizes transactions by reward-to-compute ratio using treaps, contrasting Agave’s priority formula based on fees and compute requests. To quantify how well each scheduler adheres to ideal priority order amidst conflicts we developed a dissipation metric. Frankendancer’s higher dissipation, confirmed by KS-test significance, shows it prioritizes conflict resolution over strict prioritization, boosting parallel execution and throughput. This is further highlighted by Frankendancer’s median priority fees being lower.
A lower median for Priority Fees and higher extracted value indicates more efficient transaction processing. For validators and delegators, this translates to increased revenue. For users, it means a better overall experience. Additionally, more votes for validators and delegators lead to higher revenues from SOL issuance, while for users, this results in a more stable consensus.
The analysis, supported by the Flipside Crypto dashboard, underscores Frankendancer’s data-driven edge in transaction processing, CU efficiency, and reward potential.
Nillion has officially launched its mainnet, ushering in a new era of private, decentralized computation. Chorus One has supported the network since early days, including the Genesis Sprint and Catalyst Convergence phases. With the mainnet launch, we are now proud to join the network as a Genesis Validator, and support $NIL staking from day one!
If you're looking for a trusted validator, backed by a team of 35+ engineers committed to delivering a best-in-class staking experience, select the Chorus One validator and start staking with us today!
The rapid expansion of AI-driven applications and platforms in recent years has revolutionized everything from email composition to the rise of virtual influencers. AI has permeated countless aspects of our daily lives, offering unprecedented convenience and capabilities. However, with this explosive growth comes an increasingly urgent question: How can we enjoy the benefits of AI without compromising our privacy? This concern extends beyond AI to other domains where sensitive data exchange is critical, such as healthcare, identity verification, and trading. While privacy is often viewed as an impediment to these use cases, Nillion posits that it can actually be an enabler. In this article, we'll delve into the current challenges surrounding private data exchange, how Nillion addresses these issues, and explore the potential it unlocks.
Privacy in blockchain technology is not a novel concept. Over the years, several protocols have emerged, offering solutions like private transactions and obfuscation of user identities. However, privacy extends far beyond financial transactions. It could be argued that privacy has the potential to unlock a multitude of non-financial use cases—if only we could compute on private data without compromising its confidentiality. Feeding private data into generative AI platforms or allowing them to train on user-generated content raises significant privacy concerns.
Every day, we unknowingly share fragments of our data through various channels. This data can be categorized into three broad types:
The publicly shared data has fueled the growth of social media and the internet, generating billions of dollars in economic value and creating jobs. Companies have capitalized on this data to improve algorithms and enhance targeted advertising, leading to a concentration of data within a few powerful entities, as evidenced by scandals like Cambridge Analytica. Users, often unaware of the implications, continue to feed these data monopolies, further entrenching their dominance. With the rise of AI wearables, the potential for privacy invasion only increases.
As awareness of the importance of privacy grows, it becomes clear that while people are generally comfortable with their data being used, they want its contents to remain confidential. This desire for privacy presents a significant challenge: how can we allow services to use data without revealing the underlying information? Traditional encryption methods require decryption before computation, which introduces security vulnerabilities and increases the risk of data misuse.
Another critical issue is the concentration of sensitive data. Ideally, high-value data should be decentralized to avoid central points of failure, but sharing data across multiple parties or nodes raises concerns about efficiency and consistent security standards.
This is where Nillion comes in. While blockchains have decentralized transactions, Nillion seeks to decentralize high-value data itself.
Nillion is a secure computation network designed to decentralize trust for high-value data. It addresses privacy challenges by leveraging Privacy-Enhancing Technologies (PETs), particularly Multi-Party Computation (MPC). These PETs enable users to securely store high-value data on Nillion's peer-to-peer network of nodes and allow computations to be executed on the masked data itself. This approach eliminates the need to decrypt data prior to computation, thereby enhancing the security of sensitive information.
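As a toy illustration of the simplest MPC primitive, additive secret sharing, here is how a cluster of nodes can compute on data no single node ever sees. Nillion's actual protocols are considerably more sophisticated; the modulus and node count below are arbitrary assumptions:

```python
import secrets

P = 2**61 - 1          # a public prime modulus (arbitrary choice)

def share(secret: int, n_nodes: int) -> list[int]:
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_nodes - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Two users store masked values on a 3-node cluster.
a_shares = share(25, 3)          # no single node learns 25
b_shares = share(17, 3)          # no single node learns 17

# Each node adds the two shares it holds, locally: compute on masked data.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))   # 42, with no node ever seeing the inputs
```

Each share on its own is a uniformly random number, so individual nodes learn nothing, yet the cluster as a whole can still produce the correct result.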
The Nillion network enables computations on hidden data, unlocking new possibilities across various sectors. Early adopters in the Nillion community are already building tools for private predictive AI, secure storage and compute solutions for healthcare, password management, and trading data. Developers can create applications and services that utilize PETs like MPC to perform blind computations on private user data without revealing it to the network or other users.
The Nillion Network operates through two interdependent layers: the Coordination Layer and the Petnet, its privacy-enhancing compute network.
When decentralized applications (dApps) or other blockchain networks require privacy-enhanced data (e.g., blind computations), they must pay in $NIL, the network's native token. The Coordination Layer's nodes manage the payments between the dApp and the Petnet, while infrastructure providers on the Petnet are rewarded in $NIL for securely storing data and performing computations.
The Coordination Layer functions as a Cosmos chain, with infrastructure providers staking $NIL to secure the network, just like in other Cosmos-based chains. This dual-layer architecture ensures that Nillion can scale effectively while maintaining robust security and privacy standards.
At the heart of Nillion's architecture is the concept of clustering. Each cluster consists of a variable number of nodes tailored to meet specific security, cost, and performance requirements. Unlike traditional blockchains, Nillion's compute network does not rely on a global shared state, allowing it to scale both vertically and horizontally. As demand for storage or compute power increases, clusters can scale up their infrastructure or new clusters of nodes can be added.
Clusters can be specialized to handle different types of requests, such as provisioning large amounts of storage for secrets or utilizing specific hardware to accelerate particular computations. This flexibility enables the Nillion network to adapt to various use cases and workloads.
$NIL is the governance and staking token of the Nillion network, playing a crucial role in securing and managing the network. Its primary functions include:
Nillion's advanced data privacy capabilities open up a wide range of potential use cases, both within and beyond the crypto space:
Chorus One is a genesis validator on the Nillion mainnet, and is officially supporting $NIL staking. To stake your $NIL with us, select the Chorus One validator at the link below, and begin staking with us today!
At Chorus One, we aim to provide users with a best-in-class experience across a wide variety of networks. To maintain this standard, we periodically assess our supported networks for current and future viability. In light of market conditions and lower network activity, we have made the decision to stop supporting the networks below at the end of this month. These include:
These changes are part of an ongoing effort to streamline our focus and dedicate resources to networks with stronger long-term growth potential.
We are proud to have supported these networks and their users. However, there are a few trends we have observed that have led to our decision:
If you’re currently staking tokens on any of these networks, we kindly ask that you migrate them to a different validator by March 31, 2025. After this date, staking rewards from our public nodes will no longer be guaranteed. Please ensure your tokens are unstaked or re-delegated before then.
To view all current supported networks, node addresses, and APY, click here.
This decision allows us to allocate more resources and attention to the networks that show the most promise in terms of activity, user growth, and long-term sustainability. As we continue to grow and evolve, we remain committed to offering the best staking services and supporting the most innovative and active networks in the industry.
If you have any questions or need assistance with unstaking your tokens, our support team is here to help. Feel free to reach out to us via support@chorus.one.
Chorus One is one of the largest institutional staking providers globally, operating infrastructure for over 60 Proof-of-Stake (PoS) networks, including Ethereum, Cosmos, Solana, Avalanche, Near, and others. Since 2018, we have been at the forefront of the PoS industry, offering easy-to-use, enterprise-grade staking solutions, conducting industry-leading research, and investing in innovative protocols through Chorus One Ventures. As an ISO 27001 certified provider, Chorus One also offers slashing and double-signing insurance to its institutional clients. For more information, visit chorus.one or follow us on LinkedIn, X (formerly Twitter), and Telegram.