Contemporary Advances and Opportunities for Abundant Verified Computation

By: Neel Boronia

“ZK” is now an industry term that encompasses many subcategories of encrypted computation: true ZK, multiparty computation (MPC), fully homomorphic encryption (FHE), trusted execution environments (TEEs), and more. For all intents and purposes, the following overview is limited to ZK, though the other categories are worth further exploration.

Within ZK specifically, we suspect people are fundamentally misunderstanding (and therefore mispricing) the value proposition. This misunderstanding, coupled with accelerating performance and decreasing cost curves, should lead to the creation of de novo value in the near future (a 1-3 year time horizon). While the current market for ZK proofs (ZKPs) seems to be relevant mostly to the blockchain space, it’s possible that the addressable market for ZKPs extends far beyond crypto applications. In this piece, we’ll explore the mechanics and economics of ZKPs and the areas where Tower Research Ventures is interested in exploring applications.

What is a ZKP?

Many misconceptions about the value of ZKPs stem from a misunderstanding of what they fundamentally are. At a high level, every ZKP is built on an ordinary (non-ZK) computation: a series of inputs (known as arguments), operations (known as functions), and output(s).

For example, if we want to create a simple add function, in Rust it might look like this:
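```rust
// A simple computation: two inputs (arguments), one operation, one output.
fn add(x: u64, y: u64) -> u64 {
    x + y
}

fn main() {
    // Inputs 3 and 5 produce the output 8.
    assert_eq!(add(3, 5), 8);
}
```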

ZKPs offer benefits in terms of verification and information sharing. If Alice wants Bob to be able to verify each step of the process – the initial inputs of 3 and 5, the adding function, and the output of 8 – she can simply send Bob the inputs and the function. Assuming the function is deterministic, Bob can run it to verify the results. But if Alice wants to hide any of this information – either some or all of the inputs, or the function itself – while still proving to Bob that the output is 8, she will need to use some kind of ZKP. Formally, Alice would have to create a “proof” of her computation to submit to the verifier without revealing information about the “witness” (the set of objects she wants to hide).

Technically, witnesses or proofs can accommodate any type or amount of information: the user can hide an argument or set of arguments, any number of functions, or even the output. The only limiting factor is the complexity of the proof itself, which scales with the number of operations the entire process takes.

Note that ZK is different from encryption. Encryption usually entails securely transferring information under some trust assumptions, whether about the parties at either end of the transfer or about the method of transfer itself. In contrast, ZK proofs entail transferring knowledge of something. This is a critical distinction: encryption offers one main affordance (privacy/security), while ZK offers two (privacy/security and succinctness). This second affordance is novel – and it’s the main thing people are underestimating about ZKPs.

  • Privacy: This affordance is fairly straightforward. The ability to hide information (either via encryption or inside a witness) supports the privacy guarantees required by technologies like payments processing, messaging, etc.
  • Succinctness: This means you don’t have to redo the computation behind a proof you receive (privacy means you can’t redo it). Picture a relay race. For Alice to hand off a baton to Bob, she has to do work (running up to Bob), then Bob has to do work (running the next leg), but he doesn’t have to redo Alice’s work, even though she isn’t running in private. This is where we expect to find all the value in ZKPs. What compression algorithms are to minimizing data footprint and loss, ZKPs are to minimizing repeated calculations – compression for computation instead of data. Even if the inputs and function are public, ZKPs can still be valuable as long as verifying a proof is cheaper than redoing the initial computation. As a basic example, this means that if 100 people are given a riddle and one person proves an answer, the remaining 99 don’t have to also solve the riddle. This is particularly valuable for computations that require collective action.

Why aren’t ZKPs everywhere already?

If ZKPs are really a form of computational compression, this is presumably important infrastructure for all types of devs, not just people pushing around magic internet money. It’s worth asking why ZKPs don’t show up literally everywhere (much in the way standard encryption has become ubiquitous). The catch with ZKPs is that they are computationally expensive and complex to create. Think of the normal process of generating a proof as three stages: an initial computation, an expensive proving step, and a cheap verification step.
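A toy sketch of that shape (the types and function names here are purely illustrative, not any real proving system’s API):

```rust
// Illustrative stand-ins only: a real proof is an opaque cryptographic object,
// and a real verifier does NOT redo the computation. This models the cost
// shape of the pipeline: compute (cheap) -> prove (expensive) -> verify (cheap).

struct Proof {
    claimed_output: u64, // what the prover asserts the computation returned
}

// Stage 1: the initial computation.
fn compute(x: u64, y: u64) -> u64 {
    x + y
}

// Stage 2: proving. In a real system this step dominates the total cost.
fn prove(x: u64, y: u64) -> Proof {
    Proof { claimed_output: compute(x, y) }
}

// Stage 3: verification. In a real system this is trivially cheap.
fn verify(proof: &Proof, expected_output: u64) -> bool {
    proof.claimed_output == expected_output
}

fn main() {
    let proof = prove(3, 5);
    assert!(verify(&proof, 8));
}
```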

The cost of the initial computation is fixed, whether one’s process ends there or they choose to feed the computation into a ZKP. The cost to verify the proof is also trivial. The main reason we don’t just use ZKPs for everything is the proving step. Thus far, this step has been both too expensive and too complicated to serve as a useful primitive. However, a few recent advancements in the ecosystem have led to progress on both factors:

  • No longer complicated: In the last year, groups like Succinct, Risc0, Jolt, and others have launched ZK Virtual Machines (zkVMs) that enable you to take any “normal” code written in Rust and compile it into a ZKP. Prior to these products, proving a function like the addition above required you to “write a circuit” in a very esoteric language like circom. This was both time consuming and niche. Now the type of people able to make ZKPs should expand from cryptographers to anyone who can write Rust, similar to how GPT-3’s API increased the surface area of ML from researchers to less specialized devs.
  • No longer expensive(ish): This factor is likely more important, but harder to measure. Today, the computational difference between a standard operation and a ZKP of that operation is many, many orders of magnitude – but that premium is a fixed, one-time cost. This means it is only economically viable to generate the proof when its cost, amortized over the number of verifications, is cheaper than serial recomputation. As a basic example, if some computation costs $20, generating a proof costs $1,000, and verification costs $1, it only makes sense to generate a proof if verification will occur more than approximately 50 times (1,000 / 20); a worked sketch follows this list.
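To make that arithmetic concrete, here is a minimal sketch of the break-even calculation, using only the hypothetical dollar figures above:

```rust
// Break-even point for proving once vs. everyone redoing the computation.
// Each verification saves (compute_cost - verify_cost), so proving pays off
// once n * (compute_cost - verify_cost) > prove_cost.
fn break_even(compute_cost: f64, prove_cost: f64, verify_cost: f64) -> f64 {
    prove_cost / (compute_cost - verify_cost)
}

fn main() {
    // The hypothetical figures from above: $20 compute, $1,000 prove, $1 verify.
    let n = break_even(20.0, 1_000.0, 1.0);
    // ~52.6 exactly, which the rougher 1,000 / 20 estimate rounds to ~50.
    println!("Proving pays off after ~{} verifications", n.ceil());
}
```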

This ratio of proving cost to computing cost is what Wei Dai (an investor at 1kx) refers to as K(appa) in his Cost of Verifiability talk. This metric has not yet achieved industry-wide recognition, but we think it’s a critical concept that will gain increasingly broad adoption. Currently, Wei Dai’s estimate is that K ≈ 10,000, meaning the cost premium for a ZKP is 10,000x. Technically, each computation may present a different proving complexity depending on its specific operations (it might be more expensive to prove multiplication than addition, for example), which means that K is more of a distribution than a hard number. Furthermore, much as in the GPU ecosystem, you can achieve efficiencies through the hardware you prove on, as well as through software acceleration of that hardware (as CUDA does for GPUs). Snarkify, a Tower Research Ventures portfolio company, is a leader in high-performance software optimization for ZK proving. In the latest ZPrize, Snarkify achieved a 900x speedup in end-to-end proof generation using GPUs, driving significant progress in reducing the cost of verifiability. Ingonyama, a company developing ASICs for ZKPs, estimates a 10x-100x possible improvement through pure hardware optimization.

Now, let’s turn our attention to the actual proof supply chain. The “cost of a proof” (its K, the time to make it, and the financial cost to generate it) is split across the many parties in the chain, and each party can contribute differently to bringing down this total cost.

The ZK proof supply chain

Let’s zoom in on the proving step.

The steps in this process are as follows (a toy sketch in code follows the list):

  1. Alice wants a proof for a computation. Alice can either choose to build the entire ZKP herself or outsource it to a proof provider or marketplace of providers.
  2. Alice decides to send her proof request to a marketplace to be filled. The major marketplaces today are run by zkVM developers: Succinct, Risc0, and Gevulot, to name a few.
  3. The marketplace searches for a prover to fill the order.
    1. Depending on the size of the computation, the prover network may not be able to find a single prover to handle the whole job.
    2. If the computation is divided into smaller jobs, the network uses proprietary software to determine how to best divide the large proof into sub-proofs, which it then parcels out to multiple provers.
  4. Those provers will use a given zkVM architecture to take the computation and create its corresponding proof.
    1. This proof computation could run on a standard CPU, a GPU with ZK acceleration (this is what Snarkify makes), or custom FPGAs for ZKPs, which is what Irreducible is making.
    2. If the proof was initially split up, an aggregator rolls up the individual proofs until it gets to a parent proof of the initial computation. Nebra is a leading player in this space.
  5. The final, aggregated proof is delivered back to Alice, fulfilling her order.
  6. Alice can now hand over the proof to Bob, who can trivially verify it.
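Putting the pieces together, here is a toy, end-to-end sketch of that flow – split, prove, aggregate, fulfill – where every type, function, and threshold is invented purely for illustration:

```rust
// Toy model of the proof supply chain’s shape; nothing here is a real API.

struct Computation {
    num_ops: u64, // proof complexity scales with the number of operations
}

struct Proof {
    covers_ops: u64, // how much of the computation this proof attests to
}

// Step 4: an individual prover proves one job (stand-in for a zkVM backend).
fn prove_job(job: &Computation) -> Proof {
    Proof { covers_ops: job.num_ops }
}

// Step 3: the marketplace divides a large computation into smaller jobs
// that can be parceled out to multiple provers.
fn split(comp: &Computation, max_job_ops: u64) -> Vec<Computation> {
    (0..comp.num_ops)
        .step_by(max_job_ops as usize)
        .map(|start| Computation {
            num_ops: max_job_ops.min(comp.num_ops - start),
        })
        .collect()
}

// Step 4 (continued): an aggregator rolls sub-proofs up into one parent proof.
fn aggregate(sub_proofs: Vec<Proof>) -> Proof {
    Proof {
        covers_ops: sub_proofs.iter().map(|p| p.covers_ops).sum(),
    }
}

// Steps 2-5: fill Alice’s order end to end.
fn fulfill(comp: Computation, max_job_ops: u64) -> Proof {
    let jobs = split(&comp, max_job_ops);
    let sub_proofs: Vec<Proof> = jobs.iter().map(prove_job).collect();
    aggregate(sub_proofs)
}

fn main() {
    let comp = Computation { num_ops: 1_000_000 };
    let parent = fulfill(comp, 250_000); // parceled out to four provers
    assert_eq!(parent.covers_ops, 1_000_000);
}
```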

Who makes the money in the supply chain?

Another way to visualize the proof process is as a stack of layers: the zkVMs, the prover marketplaces, the aggregators, and the individual provers themselves.

Today, all the major zkVMs are free and open-source, mainly as a way for their developers to gain market share for their core products. Given this, zkVMs themselves are hard to monetize – a new closed entrant would have to overcome the efficiency of the current leaders, who can subsidize future OSS development with other business lines like the prover marketplaces they operate. As such, we’re currently unsure whether zkVMs alone can capture economic value, though it’s possible they will monetize in a way we haven’t considered.

Let’s turn to the marketplace layer. While marketplaces are generally attractive business models given their moats around liquidity and their ability to compound supply and demand, it’s an open question how interchangeable the proof marketplaces are. What prevents the demand side (the proof requesters) from requesting across multiple marketplaces? What prevents the supply side (the provers) from listing their services across multiple marketplaces? Lock-in could result from specializing in certain types of hardware or accelerator supply, or from building out specific value-adds for certain demand types. Snarkify’s GPU acceleration software is a great example of this, demonstrating how deep expertise in hardware optimization can create a competitive advantage in proof generation. Scroll, a prominent blockchain-scaling solution, relies on Snarkify’s Elastic Prover for nearly half of its proving volume. As with the zkVM layer, it’s entirely possible that value can be captured in this layer, but it is not immediately obvious how. The possibility of marketplace lock-in will become clearer as the major marketplaces’ roadmaps are made public.

The aggregation layer also seems precarious. Wrapping proofs into other proofs should be a standard feature of a robust prover or marketplace – it’s not a detail the initial client cares about, and therefore not one they should pay a separate party for.

Individual provers will probably be very valuable, but this seems like a space ripe for arbitrageurs. Similar to how people build specialized mining rigs or access cheaper energy for Bitcoin mining, we anticipate that certain players will be able to build custom hardware or locate their rigs in optimal localities. Solutions in this slice will have to consciously identify their ideal customer profile (ICP). Is it someone specifically interested in generating ZKPs, or someone comparing the economic value of generating ZKPs against their other avenues of monetizing computation?

All that said, some companies seem to be vertically integrating, running their own hardware, marketplace, and VM. Maybe no individual slice is too valuable, and therefore the value is in evolving into a general store like AWS. Succinct’s prover network is in private beta, as are Risc0’s and Gevulot’s. As these systems go live in the coming months, we’ll have to watch (1) the notional volume and value of the proof requests that flow into these systems, and (2) the relative composition of both customers and provers.

What to track

Quantitatively:

  • What is the level and rate of change of K? Surprisingly, there’s no public dashboard for this. Will it be driven down primarily by hardware acceleration, software acceleration, or proof architecture acceleration?
  • What will be the notional volume of the major prover marketplaces once they are live? What will be the rate of change?

Qualitatively:

  • What are the demand drivers for ZKP generation on the app layer beyond ZK rollups?
  • How will marketplaces aim to differentiate supply and demand?

The answers to these questions may provide some insight into the direction ZKPs will take in the near future – it’s still unclear what the next big category to enjoy succinctness will be beyond ZK rollups. While it’s always hard to predict new streams of value before they’re created, we can speculate where interesting applications will emerge based on the underlying need for the affordances of ZKPs, namely privacy and succinctness. Industries like healthcare, finance, and civic tech rely on a base amount of privacy and succinctness, and as such might be exciting verticals for ZKP-powered applications, assuming the cost of ZKP generation is comparable to or cheaper than their current privacy solutions.

Here are a handful of ideas that might benefit from verified computation.

Finance: Privacy and succinctness are incredibly well suited to the adversarial nature of financial markets. Dark pools already command a growing share of equity trading volume – what can they become if they gain the ability to offer strong ZK guarantees on activity without revealing specific trades to the operator? How can we accelerate clearing and transferring large amounts of capital across banks, brokers, and custodians once we can provide succinct verification of stocks and flows? As an example, ING previously published a paper highlighting their research on applying zero-knowledge range proofs (ZKRPs, which prove that a value lies within a range without revealing it) to KYC/AML needs without having to look at underlying personal information.

Healthcare: Medical research in the US must constantly balance scientific experimentation and progress with heavy regulation of trials and even access to patient data. Current solutions require investment in becoming HIPAA-compliant (for everything from creation to storage and retrieval of records), passing an Institutional Review Board, and/or using centralized government databases. ZK and its sibling technology, fully homomorphic encryption, offer elegant tooling to enable researchers to compute within and across datasets without revealing patient information. The Mayo Clinic wrote a paper proposing Health-zkIDM, a way for patients and providers to create ZKPs of health records, making them interoperable across departments and other providers.

Civic Tech: Governments understandably need high-fidelity access to information about their citizens (income for tax purposes, demographics for census purposes, ID and beliefs for voting purposes, etc.), and many citizens understandably want to minimize the amount of data they provide. ZK’s affordances in privacy can open interesting opportunities for corporate third parties to build products that satisfy both counterparties (secure voting booths, zk-IDs for buying liquor, etc.). The Government of the City of Buenos Aires recently launched QuarkID, a decentralized ID system that lets citizens store personal information on their phones (age, diploma, gym membership, etc.), lets third-party organizations issue and verify those credentials, and lets businesses or the government receive encrypted credentials.

Zero-Source Software (ZSS): Every layer of the modern computing stack (from databases and operating systems to browsers and apps) has become a battleground for open-source and closed-source products to compete. Open-source software providers cite transparency, cost, and editorial freedom to promote the adoption of their tools, while closed-source providers cite the need for stable value capture and IP protection. This is arguably best displayed by the current discourse around foundational AI models. OpenAI will never want to show you the model, but you may want proof that it never used Mickey Mouse to train. Currently, many groups have counter-positioned against OpenAI by saying “we’ll just show you the inner workings of the model so you can prove it to yourself,” but if OpenAI were to generate a ZKP of its model, users may no longer care about the inner workings. As the proving costs of software collapse, what does the future of open-source software look like in comparison to the future of zero-source software?

If you’re a cryptographer, developer, or entrepreneur building in the above areas or actively exploring the frontier of how computation can be made private and succinct, Tower Research Ventures would love to speak with you about it. Please contact ventures@tower-research.com.


The views expressed herein are solely the views of the author(s), are as of the date they were originally posted, and are not necessarily the views of Tower Research Ventures LLC, or any of its affiliates. They are not intended to provide, and should not be relied upon for, investment advice, nor is any information herein any offer to buy or sell any security or intended as the basis for the purchase or sale of any investment. The information herein has not been and will not be updated or otherwise revised to reflect information that subsequently becomes available, or circumstances existing or changes occurring after the date of preparation. Certain information contained herein is based on published and unpublished sources. The information has not been independently verified by TRV or its representatives, and the accuracy or completeness of such information is not guaranteed. Your linking to or use of any third-party websites is at your own risk. Tower Research Ventures disclaims any responsibility for the products or services offered or the information contained on any third-party websites.