Overview of the Restaking Ecosystem in 2024
The restaking narrative first came to public attention at DevConnect 2023, and since then, its adoption has skyrocketed. The restaking industry has grown exponentially, from a single company, EigenLayer, to a thriving ecosystem with numerous restaking platform providers, operators, liquidity restaking protocols, and risk experts across various crypto networks.
● Restaking is still a new industry, and the community should pay close attention to second-order effects, market dynamics, and challenges that may be encountered. It is now clear that restaking has developed into a major sub-industry, and there is a lot of room for competition in the restaking space. From core platforms such as EigenLayer, Symbiotic, Babylon, and Jito, to liquidity restaking protocols and DeFi derivatives, each company has its own unique approach and vision. The restaking market proves that this is not a "winner takes all" market.
● Liquid restaking tokens (LRTs) are often compared to liquid staking tokens (LSTs), but they are fundamentally different. LSTs carry homogeneous economic risk tied to a single underlying asset (i.e., ETH), while LRTs carry heterogeneous risk: they are exposed to AVS-specific factors (such as inflation, slashing conditions, and technical risk) while supporting various collateral types and handling payments in multiple currencies.
● The entire year of 2024 may be seen as the year of the "Bitcoin Renaissance", with many teams enabling Bitcoin holders to expand Bitcoin's economic potential to protect the security of other networks without relying on third-party trust or bridging other chains. Babylon is leading this trend, unleashing the cryptoeconomic power of Bitcoin through strong technical expertise. A growing ecosystem has formed around Babylon, including emerging Bitcoin liquidity staking players such as Lombard, Solv Protocol, PumpBTC, etc.
● The Solana staking industry has drawn less attention than Ethereum's, given the latter's widespread adoption, but it is growing steadily around a brand-new set of concepts. Restaking has gained traction on Solana, with Jito Network leading the way and teams such as Solayer, Cambrian, and Picasso also building shared-security projects. These initiatives are intended to fill some of the gaps on the path to full decentralization of the Solana native protocol.
● Oracles play a critical role in the restaking space on multiple levels. They can be part of the core design of a restaking platform while also addressing the growing need for accurate pricing of new restaked crypto assets with differing economic and technical characteristics. Additionally, oracle networks provide one of the most compelling use cases for shared security. Restaked collateral allows for innovations beyond traditional oracle design, such as increasing network resilience and service quality by raising the cost of data manipulation, or creating new price-feed models powered by a cost-effective restaking-secured data availability layer.
Zero-knowledge proof paradigm: What is zkVM
“In the next 5 years, we will be talking about the adoption of zero-knowledge protocols as much as we are about the adoption of blockchain protocols. The potential unlocked by the breakthroughs of the past few years will sweep the crypto mainstream.”
— Jill, CSO of Espresso Systems, May 2021
Since 2021, the zero-knowledge proof (ZK) landscape has evolved into a diverse ecosystem of primitives, networks, and applications across multiple domains. However, while ZK is gradually gaining momentum, with the launch of ZK-powered rollups like Starknet and zkSync Era marking the latest advances in the space, much of ZK remains a mystery to ZK users and the crypto space as a whole.
But times are changing. We believe that zero-knowledge crypto is a powerful, pervasive tool for scaling and securing software. Simply put, ZK is the bridge to crypto mass adoption. To quote Jill again, anything involving zero-knowledge proofs (ZKPs) will create tremendous value (both fundamental and speculative) in both web2 and web3. The best minds in crypto are working hard to iterate and make ZK economically viable and production-ready. Even so, there is still much that needs to be done before the model we envision becomes a reality.
Compare ZK adoption to Bitcoin adoption: one reason Bitcoin evolved from an internet currency on fringe enthusiast forums to “digital gold” approved by BlackRock was the proliferation of developer- and community-generated content that fostered interest. For now, ZK exists in a bubble within a bubble. Information is fragmented and polarized, with articles either filled with arcane terms or too layman-like to convey any meaningful information beyond repetitive examples. It seems that everyone (experts and laymen alike) knows what zero-knowledge proofs are, but no one can describe how they actually work.
As one of the teams contributing to the zero-knowledge paradigm, we hope to demystify our work and help a wider audience establish a canonical foundation for understanding and analyzing ZK systems and applications, in order to promote education and discussion among relevant parties and enable the spread of relevant information.
In this article, we will introduce the basics of zero-knowledge proofs and zero-knowledge virtual machines, provide a high-level summary of the operation process of zkVM, and finally analyze the evaluation criteria of zkVM.
1. Zero-knowledge proof basics
What is a zero-knowledge proof (ZKP)?
In short, a ZKP enables one party (the prover) to prove to another party (the verifier) that they know something without revealing the specific content of that thing or any other information. More specifically, a ZKP proves knowledge of a piece of data or the result of a calculation without revealing that data or the input. The process of creating a zero-knowledge proof involves a series of mathematical models that convert the result of a calculation into otherwise meaningless information that proves that the code was successfully executed, which will be verified later.
In some cases, the amount of work required to verify a proof that has been constructed through multiple rounds of algebraic transformations and cryptography is less than the amount of work required to run the calculation. It is this unique combination of security and scalability that makes zero-knowledge cryptography such a powerful tool.
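Although not a zero-knowledge proof itself, Freivalds' classic probabilistic check of a matrix product is a simple, runnable illustration of this idea: verifying a claimed result can cost far less than recomputing it, at the price of a tiny, tunable error probability. The Python sketch below is purely illustrative and is not part of any zkVM.

```python
import random

def freivalds_check(a, b, c, trials=20):
    """Probabilistically check whether a @ b == c without recomputing the product.

    Each trial multiplies by a random 0/1 vector, costing O(n^2) work instead of
    the O(n^3) needed to recompute a @ b. A wrong c is accepted with probability
    at most 2**-trials."""
    n = len(a)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        br = [sum(b[i][j] * r[j] for j in range(n)) for i in range(n)]    # B*r
        abr = [sum(a[i][j] * br[j] for j in range(n)) for i in range(n)]  # A*(B*r)
        cr = [sum(c[i][j] * r[j] for j in range(n)) for i in range(n)]    # C*r
        if abr != cr:
            return False  # definitely not the product
    return True           # the product, with overwhelming probability

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
c = [[19, 22], [43, 50]]          # the true product of a and b
print(freivalds_check(a, b, c))   # True
```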
zkSNARK: Zero-Knowledge Succinct Non-Interactive Argument of Knowledge
· Relies on an initial (trusted or untrusted) setup process to establish parameters for verification
· Originally derived from interactive protocols between the prover and verifier; in deployment the interaction is removed (e.g., via the setup's common reference string or the Fiat-Shamir transform), so the proof itself is non-interactive
· Proofs are small and easy to verify
· Rollups like zkSync, Scroll, and Linea use SNARK-based proofs
zkSTARK: Zero-Knowledge Scalable Transparent Argument of Knowledge
· No trusted setup required
· Provides high transparency by using publicly verifiable randomness to create a trustless verifiable system, i.e. generating provable random parameters for proofs and verification.
· Highly scalable: proof generation and verification remain fast (though not in every case) even when the underlying witness (the data) is large.
· No interaction is required between the prover and verifier
· The trade-off is that STARKs produce larger proofs, which are typically more expensive to verify than SNARK proofs (harder to verify than some zkSNARK proofs, though easier than others).
· Starknet and zkVMs such as Lita, Risc Zero, and Succinct Labs all use STARKs.
(Note: Succinct bridge uses SNARKs, but SP1 is a STARK-based protocol)
It is worth noting that all STARKs are SNARKs, but not all SNARKs are STARKs.
2. What is zkVM?
A virtual machine (VM) is a program that runs programs. In this context, a zkVM is a virtual computer implemented as a system, a general-purpose circuit, or a tool for generating zero-knowledge proofs: it can generate a ZKP for any program or computation.
A zkVM spares developers from having to learn complex mathematics and cryptography in order to design and code ZK circuits: it lets any developer execute programs written in their preferred language and generate ZKPs, making it much easier to integrate with and build on zero-knowledge. Broadly speaking, "zkVM" usually refers not only to the virtual machine that executes the program, but also to the compiler toolchain and proof system attached to it. Below, we summarize the main components of a zkVM and their functions.
The design and implementation of each component are governed by the choice of proof system (SNARK or STARK) and instruction set architecture (ISA) for the zkVM. Traditionally, an ISA specifies the capabilities of a CPU (data types, registers, memory, etc.) and the sequence of actions the CPU performs when executing a program. In this context, the ISA determines the machine code that the VM can interpret and execute. The choice of ISA can make a fundamental difference to the accessibility and usability of a zkVM, as well as to the speed and efficiency of proof generation, and it underpins the construction of any zkVM.
Below are some examples of zkVMs and their components for reference only.
For now, we will focus on the high-level interactions between each component to provide a framework for understanding the algebraic and cryptographic processes and design trade-offs of zkVM in later articles.
3. Abstract zkVM flow
The following figure is an abstract and generalized zkVM flow chart, splitting and classifying the format (input/output) as the program moves between zkVM components.
The general process of zkVM is as follows:
(1) Compilation phase
The compiler first compiles the program written in traditional languages (C, C++, Rust, Solidity) into machine code. The format of the machine code is determined by the selected ISA.
(2) VM phase
The VM executes the machine code and generates an execution trace: the series of steps taken by the underlying program. Its format is determined by the choice of arithmetization and the set of polynomial constraints. Common arithmetization schemes include R1CS as in Groth16, PLONKish arithmetization as in halo2, and AIR as in plonky2 and plonky3.
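As a toy illustration only (not tied to any particular ISA or arithmetization), an execution trace for a small iterative Fibonacci program might look like the rows below: one tuple of register values per VM step, which the prover will later encode as polynomials.

```python
def fib_trace(n):
    """Toy execution trace: one row (step, a, b) per step of an iterative
    Fibonacci program. Real traces also record the program counter, memory
    accesses, etc., in the format fixed by the ISA and the arithmetization."""
    a, b = 0, 1
    trace = [(0, a, b)]
    for step in range(1, n + 1):
        a, b = b, a + b   # transition constraint between rows: a' = b, b' = a + b
        trace.append((step, a, b))
    return trace

for row in fib_trace(5):
    print(row)   # (0, 0, 1), (1, 1, 1), (2, 1, 2), (3, 2, 3), (4, 3, 5), (5, 5, 8)
```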
(3) Prover phase
The prover receives the trace and represents it as a set of polynomials subject to a set of constraints, essentially converting the computation into algebra by mapping facts about the execution onto mathematical statements.
The prover commits to these polynomials using a polynomial commitment scheme (PCS). A commitment scheme is a protocol that allows the prover to create a fingerprint of some data X, called a commitment to X, and then later prove facts about X using that commitment without revealing the content of X. The commitment is the fingerprint: a "preprocessed", succinct version of the constraints on the computation. It allows the prover to use random values that the verifier proposes in later steps to prove facts about the computation, which is now represented by polynomial equations.
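As a minimal sketch of the fingerprint idea, the toy below commits to a polynomial by hashing its coefficients. This is illustrative only: real polynomial commitment schemes (e.g., KZG or FRI-based ones) additionally let the prover produce opening proofs for evaluations at points chosen later, which a bare hash cannot do.

```python
import hashlib

MOD = 2**61 - 1   # a prime modulus standing in for the proof system's field

def commit(coeffs):
    """Toy 'fingerprint' of a polynomial: a hash of its coefficient list."""
    return hashlib.sha256(",".join(str(c) for c in coeffs).encode()).hexdigest()

def evaluate(coeffs, x):
    """Evaluate the polynomial at x over the field, using Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % MOD
    return acc

trace_column = [3, 0, 2, 5]        # 3 + 2x^2 + 5x^3, standing in for an encoded trace column
print(commit(trace_column))        # short fingerprint sent to the verifier
print(evaluate(trace_column, 10))  # 5*1000 + 2*100 + 3 = 5203
```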
The prover runs a Polynomial Interactive Oracle Proof (PIOP) to prove that the submitted polynomial represents an execution trace that satisfies the given constraints. PIOP is an interactive proof protocol where the prover sends a commitment to a polynomial, the verifier responds with random field values, and the prover provides an evaluation of the polynomial, similar to "solving" a polynomial equation using random values to convince the verifier in a probabilistic manner.
The prover then applies the Fiat-Shamir heuristic to run the PIOP in a non-interactive mode, in which the verifier's role is reduced to supplying pseudo-random challenge points, derived in practice by hashing the transcript. In cryptography, the Fiat-Shamir heuristic converts an interactive proof of knowledge into a non-interactive one (classically, into a digital signature that can be verified). This step removes the interaction between prover and verifier.
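The following toy sketch combines the last two steps: a pseudo-random challenge point is derived by hashing the commitments (Fiat-Shamir), and the claim that two committed polynomials are equal is checked by comparing their evaluations at that point (the Schwartz-Zippel idea behind PIOPs). It is a didactic sketch, not any production proof system.

```python
import hashlib

MOD = 2**61 - 1

def commit(coeffs):
    return hashlib.sha256(",".join(str(c) for c in coeffs).encode()).hexdigest()

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % MOD
    return acc

def fiat_shamir_challenge(*transcript):
    """Derive the 'verifier challenge' by hashing the transcript so far,
    instead of waiting for an actual verifier message."""
    digest = hashlib.sha256("|".join(transcript).encode()).digest()
    return int.from_bytes(digest, "big") % MOD

# The prover claims two committed polynomials are equal (e.g., a constraint holds).
p = [3, 0, 2, 5]
q = [3, 0, 2, 5]
transcript = (commit(p), commit(q))

r = fiat_shamir_challenge(*transcript)   # pseudo-random challenge point
# Distinct low-degree polynomials agree at a random field point only with
# negligible probability, so matching evaluations convince the verifier.
print(evaluate(p, r) == evaluate(q, r))  # True
```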
The prover must convince the verifier that the polynomial evaluations it sends are correct with respect to the polynomial commitments it sent earlier. To do this, the prover produces an "evaluation" or "opening" proof, which is supplied by the polynomial commitment scheme (the fingerprint).
(4) Verifier phase
The verifier checks the proof by following the verification protocol of the proof system, either using constraints or commitments. The verifier accepts or rejects the result based on the validity of the proof.
In summary, a zkVM proof can prove that for a given program, a given result, and a given initial condition, there exists some input that causes the program to produce the given result when executed from the given initial condition. We can combine this statement with the flow to get the following description of zkVM.
A zkVM proof will prove that for a given VM program and a given output, there exists some input that causes the given program to produce the given output when executed on the VM.
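Putting the phases together, here is a schematic of the data that flows between the components. Every function body is a stand-in stub with hypothetical names; it is not the API of any real zkVM, only a sketch of the pipeline's shape.

```python
def compile_program(source):
    """Compiler: high-level source code -> machine code in the chosen ISA (stub)."""
    return f"machine_code({source})"

def execute(machine_code, program_input):
    """VM: run the machine code on the input and record an execution trace (stub)."""
    trace = [f"step_{i}" for i in range(3)]
    output = "claimed_result"
    return trace, output

def prove(trace, output):
    """Prover: arithmetize the trace, commit to it, run the Fiat-Shamir'd PIOP (stub)."""
    return {"commitments": "...", "openings": "...", "claimed_output": output}

def verify(proof):
    """Verifier: check the openings and constraints, then accept or reject (stub)."""
    return proof["claimed_output"] == "claimed_result"

machine_code = compile_program("fib.rs")
trace, output = execute(machine_code, program_input=5)
proof = prove(trace, output)
print(verify(proof))   # True
```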
4. Evaluating zkVM
What is the criterion for evaluating zkVM? In other words, under what circumstances should we say that one zkVM is better than another? In practice, the answer depends on the use case.
Lita's market research shows that for most commercial use cases, among speed, efficiency, and simplicity, the most important attribute is either speed or core-time efficiency, depending on the application. Some applications are price-sensitive and want the proving process to be low-energy and low-cost; for these, core-time efficiency may be the most important metric to optimize. Other applications, especially those related to finance or trading, are highly latency-sensitive and need to optimize for speed.
Most public performance comparisons focus only on speed, which is certainly important, but is not a comprehensive measure of performance. There are also several important properties that measure the reliability of zkVM, most of which are not up to production standards, even for market-leading incumbents.
We recommend evaluating zkVMs on the following criteria, divided into two subcategories:
Baseline: used to measure the reliability of zkVM
· Correctness
· Security
· Trust assumptions
Performance: used to measure the capabilities of zkVM
· Efficiency
· Speed
· Simplicity
(1) Baseline: Correctness, Security, and Trust Assumptions
Correctness and security should be used as baselines when evaluating zkVM for mission-critical applications. There needs to be sufficient reason to be confident in the correctness, and the security claims need to be strong enough. In addition, the trust assumptions need to be weak enough for the application.
Without these properties, zkVM may be worse than useless for the application, as it may not perform as specified and expose users to hacker attacks and exploits.
Correctness
· The VM must perform the computation as expected
· The proof system must satisfy the security properties it claims
Correctness contains three major properties:
· Soundness: The proof system is truthful, so everything it proves is true. The verifier rejects proofs of false statements; it only accepts a computational result if the inputs actually produce that result.
· Completeness: The proof system is complete, able to prove all true statements. If the prover claims that it can prove the result of a computation, it must be able to produce a proof acceptable to the verifier.
· Zero-knowledge: Possessing a proof reveals no more about the inputs to the computation than knowing the result itself does.
You can have completeness without soundness: if a proof system proves everything, including false statements, it is obviously complete but not sound. Conversely, you can have soundness without completeness: if a proof system can prove that a program exists but cannot prove its computations, it is obviously sound (after all, it never proves anything false) but not complete.
Security
· Related to tolerances of soundness, completeness, and zero-knowledge proofs
In practice, all three correctness properties have non-zero tolerances. This means that every proof gives a statistical probability of correctness, not absolute certainty. A tolerance is the maximum tolerable probability that a property fails. Zero tolerance is of course the ideal, but in practice zkVMs do not achieve zero tolerance on all of these properties: perfect soundness and completeness appear to be incompatible with simplicity, and there is no known way to achieve perfect zero-knowledge. A common way to measure security is in bits of security, where a tolerance of 1/(2^n) is referred to as n-bit security. More bits means better security.
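To give a concrete sense of scale, the snippet below prints the failure probabilities corresponding to a few common bit-security levels, and shows how independent repetitions of a check with soundness error 1/2 accumulate bits (illustrative arithmetic only).

```python
# n-bit security means the property fails with probability at most 2^-n.
for n in (40, 80, 100, 128):
    print(f"{n}-bit security  ->  failure probability <= {2.0 ** -n:.3e}")

# A check that a cheating prover passes with probability 1/2 gains one bit
# of soundness per independent repetition:
repetitions = 100
print(f"{repetitions} repetitions of a 1/2-error check ~ {repetitions}-bit soundness")
```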
Even if a zkVM is correct, that does not necessarily mean it is reliable. Correctness only means that the zkVM satisfies its security properties to within its claimed tolerances; it does not mean those claimed tolerances are low enough to be market-ready. Conversely, a zkVM being sufficiently secure does not mean it is correct: security refers to the claimed tolerances, not the tolerances actually achieved. Only when a zkVM is both correct and sufficiently secure can it be said to be reliable at its claimed tolerances.
Trust Assumptions
· The assumptions about the honesty of the prover and verifier that are needed to conclude that the zkVM operates reliably.
When a zkVM has trust assumptions, they usually take the form of a trusted setup process. The setup process of a ZK proof system is run once, before the first proof is generated, to produce information called the "setup data". In a trusted setup, one or more individuals generate some randomness that is incorporated into the setup data, and one must assume that at least one of those individuals deleted the randomness they contributed.
There are two common trust assumption models in practice.
The "honest majority" trust assumption states that more than half of a group of N people behave honestly in certain specific interactions with the system, which is a trust assumption commonly used in blockchains.
The "1/N" trust assumption states that at least one of a group of N people behaves honestly in certain specific interactions with the system, which is a trust assumption commonly used by MPC-based tools and applications.
It is generally believed that zkVM without trust assumptions is more secure than zkVM with trust assumptions, all other things being equal.
(2) zkVM trilemma: the balance between speed, efficiency, and simplicity in zkVM
Speed, efficiency, and simplicity are all sliding-scale properties rather than binary ones, and all of them contribute to the end-user cost of a zkVM. How to weigh them in an evaluation depends on the application. In general, the fastest solution is not the most efficient or the most concise, the most concise solution is not the fastest or the most efficient, and so on. Before explaining how they relate, let's define each property.
Speed
· How fast the prover can generate a proof
· Measured in wall-clock time, i.e., the time it takes to compute from start to finish
Speed should be defined and measured based on the specific test program, input, and system to ensure that it can be quantitatively evaluated. This metric is critical for latency-sensitive applications where timely availability of proofs is essential, but it also comes with higher overhead and larger proofs.
Efficiency
· The resources consumed by the prover; the fewer, the better
· Approximated by user time, i.e., the CPU time spent executing the program's code
The prover consumes two resources: core time and space. Efficiency can therefore be broken down into core-time efficiency and space efficiency.
Core-time efficiency: the average time the prover runs across all cores, multiplied by the number of cores running the prover.
For a single-core prover, core-time consumption and speed are the same thing. For a multi-core-capable prover running in multi-core mode on a multi-core system, they are not. If a program fully utilizes 5 cores or threads for 5 seconds, that is 25 seconds of user time and 5 seconds of wall-clock time.
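The sketch below illustrates the distinction using Python's standard library: it runs CPU-bound work on several processes, measures wall-clock time (speed), and approximates core time as workers x wall-clock on the assumption that every worker stays busy. It is a rough measurement harness, not a prover benchmark.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    """CPU-bound busy work standing in for one core's share of a proving job."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    workers = 4
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(burn, [5_000_000] * workers))
    wall = time.perf_counter() - start   # speed: wall-clock time
    core_time = workers * wall           # core-time proxy, assuming all workers stay busy
    print(f"wall-clock: {wall:.2f} s, approximate core time: {core_time:.2f} s")
    # cf. the example above: 5 fully-used cores for 5 s = 25 s of user time
```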
Space efficiency: refers to the amount of storage capacity used, such as RAM.
It is very interesting to use user time as a proxy for the energy consumed to run a computation. In the case where almost all cores are fully utilized, the energy consumption of the CPU should remain relatively constant. In this case, the user time spent on a CPU-bound, mostly user-mode code execution should be roughly linearly proportional to the watt-hours (i.e. energy) consumed by the code execution.
For any proving operation of sufficient scale, saving on energy and computing resources matters, because the energy bill (or cloud computing bill) for proving is a significant operating cost. For these reasons, user time is an interesting metric: lower proving costs allow service providers to pass lower proving prices on to cost-sensitive customers.
Both forms of efficiency relate to the energy consumed by the proving process and the hardware it uses, which in turn relate to the financial cost of proving. For a definition of efficiency to be operational, it must be stated relative to one or more test programs, one or more test inputs per program, and one or more test systems.
Simplicity
· Size of the proofs generated and the complexity of verifying them
Simplicity is a combination of three different metrics, further broken down by the complexity of proof verification:
· Proof size: The physical size of the proof, typically in kilobytes.
· Proof verification time: The time required to verify the proof.
· Proof verification space: The memory usage during proof verification.
Verification is typically a single core operation, so speed and core time efficiency are often the same thing in this context. As with speed and efficiency, a definition of simplicity requires specifying the test program, test inputs, and test system.
Now that each performance attribute is defined, let's look at the impact of optimizing one attribute at the expense of the others.
· Speed: optimizing for fast proof generation yields larger proofs that are slower to verify, and consumes more resources, which reduces efficiency.
· Simplicity: optimizing for small proofs requires the prover to spend extra time compressing them; proof verification becomes fast, but the more concise the proof, the more computational overhead the prover bears.
· Efficiency: minimizing resource usage slows down proof generation and reduces the conciseness of the proof.
Generally, optimizing for one aspect means not optimizing for another, so a multi-dimensional analysis is needed to select the best solution on a case-by-case basis.
A good way to weigh these attributes in an evaluation might be to define an acceptable level for each attribute and then determine which attributes are the most important. The most important attributes should be optimized while maintaining a good enough level on all other attributes.
Below we summarize the attributes and their key considerations:
Consensys CEO Joe Lubin said that Consensys hopes to attract public investment through crypto-native methods and that Consensys is looking for acquisition opportunities.
When asked how Consensys might go public, Lubin said: we have talked about this for a long time; in our ecosystem there are different ways to do it publicly. You can launch a protocol, you can build value into that protocol, and you can externalize projects.
Consensys is working with the audit firm KPMG but declined to provide specific details. Lubin made clear that Consensys would choose to go public through blockchain channels rather than listing on Nasdaq or another stock exchange.
Dust on crust has returned to Hokkaido, Japan. During the day the sun shines bright and it is pleasantly warm, but at night it is bitterly cold. This weather pattern produces the sorry snow condition known as dust on crust: beneath what appears to be beautiful, untracked powder lurk ice and crusty snow. It is something to be hated.
As winter speeds into spring, I want to revisit "Dust on Crust", the article I published a year ago. In it, I proposed how to create a synthetic, fully-backed fiat stablecoin that does not depend on the TradFi banking system. My idea was to combine a long cryptocurrency position with a short perpetual swap position to create a synthetic fiat currency unit. I named it the NakaDollar, because I imagined using Bitcoin and a short XBTUSD "perpetual" swap as the way to create a synthetic US dollar. I closed the article by promising to support a credible team that would try to turn this idea into reality.
What a difference a year makes. Guy is the founder of Ethena. Before Ethena, Guy worked at a $60 billion hedge fund investing in special situations such as credit, private equity, and real estate. He caught the shitcoin bug during the summer of 2020 and never looked back. After reading "Dust on Crust", he was inspired to launch his own synthetic dollar. But, as all great entrepreneurs do, he wanted to improve on my original idea: rather than using Bitcoin, he would create a synthetic US dollar stablecoin using Ethereum.
Guy chose Ethereum because the Ethereum network offers a native yield. To provide security and process transactions, Ethereum validators are paid a small amount of ETH directly by the protocol for each block; this is what I call the ETH staking yield. In addition, because ETH is now a deflationary currency, ETH/USD forward, futures, and perpetual swap prices persistently trade at a premium to spot. This is the root cause of the yield: holders of short perpetual swaps capture that premium. Combining physically staked ETH with a short ETH/USD perpetual swap position creates a high-yielding synthetic US dollar. As of this week, staked Ethena USD (sUSDe) earns an annualized yield of roughly 50%.
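A toy calculation makes the delta-neutral construction concrete: holding ETH while shorting the same amount of an ETH/USD perpetual keeps the dollar value of the combined book constant as the price moves (funding and staking yield ignored; the numbers are illustrative, not Ethena's actual parameters).

```python
def synthetic_dollar_value(entry_price, current_price, eth_amount=1.0):
    """Toy delta-neutral book: long ETH spot, short the same size of an ETH/USD perp.
    Spot gains are offset by perp losses (and vice versa), so the USD value of the
    combined position stays pinned at the entry value."""
    spot_value = eth_amount * current_price
    short_perp_pnl = eth_amount * (entry_price - current_price)
    return spot_value + short_perp_pnl

for price in (1500, 2000, 3000, 4500):
    print(price, synthetic_dollar_value(entry_price=3000, current_price=price))
# Every line prints 3000.0: the book is worth $3,000 regardless of the ETH price.
```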
No matter how good an idea is, it is worthless without a team that can execute. Guy named his synthetic dollar Ethena and assembled a rock-star team to launch the protocol quickly and safely. Maelstrom became a founding advisor in May 2023, and in exchange for our services we received governance tokens. I have worked with many high-quality teams in the past, and Ethena's people get the job done without cutting corners. Twelve months later, Ethena's stablecoin USDe is live, and only three weeks after its mainnet launch its circulating supply is already close to one billion units (a TVL of $1 billion; 1 USDe = 1 USD).
I believe Ethena can overtake Tether to become the largest stablecoin. It will take many years for that prophecy to play out. First, though, I want to explain why Tether is both the best and the worst stablecoin in crypto. It is the best because it may be the most profitable financial intermediary in either TradFi or crypto. It is the worst because Tether exists only to keep its poorer TradFi banking partners happy. The envy of the banks, and the problems Tether creates for the guardians of the US "Pax Americana" financial system, could bring about Tether's end almost immediately.
To all the misguided Tether FUDsters, let me be clear: Tether is not a financial fraud, nor is Tether lying about its reserves. Moreover, I have the utmost respect for Tether's founders and operators. But, with all due respect, Ethena is going to rock Tether.
This article is divided into two parts. First, I will explain why the Fed, the US Treasury, and politically connected large US banks want to destroy Tether. Second, I will dig into Ethena: I will briefly outline how Ethena is constructed, how it maintains its peg to the US dollar, and its risk factors. Finally, I will offer a valuation model for Ethena's governance token.
After reading this article, you will understand why I believe Ethena is the best option for a synthetic US dollar in the crypto ecosystem.
Note: a fiat-backed stablecoin is a token whose issuer holds fiat currency in a bank account, e.g., Tether, Circle, First Digital. A synthetic fiat stablecoin is a token whose issuer holds cryptocurrency hedged with a short derivatives position, e.g., Ethena.
Green with Envy
Tether (ticker: USDT) is the largest stablecoin by circulating supply. 1 USDT = 1 USD. USDT is sent between wallets on public chains such as Ethereum. To maintain the peg, Tether holds $1 in a bank account for every USDT in circulation.
Without USD bank accounts, Tether cannot perform its functions of creating USDT, custodying the dollars that back USDT, and redeeming USDT.
Creation: without a bank account, USDT cannot be created, because traders have nowhere to send their dollars.
Dollar custody: without a bank account, there is nowhere to hold the dollars that back USDT.
Redemption: without a bank account, USDT cannot be redeemed, because there is no account from which to send dollars to the redeemer.
Having a bank account is not enough to guarantee success, because not all banks are equal. Thousands of banks around the world can accept US dollar deposits, but only some hold a master account with the Federal Reserve. Any bank that wants to clear dollars through the Fed and act as a correspondent bank for dollars must hold a master account, and the Fed retains complete discretion over which banks it grants one to.
Let me quickly explain how correspondent banking works.
There are three banks: A, B, and C. Banks A and B are located in two non-US jurisdictions. Bank C is a US bank with a master account. Banks A and B want to move US dollars within the fiat financial system, so each applies to use Bank C as its correspondent bank. Bank C evaluates their customer bases and approves them.
Suppose Bank A needs to remit $1,000 to Bank B. The funds simply move from Bank A's account at Bank C to Bank B's account at Bank C.
Now change the example slightly by adding Bank D, another US bank with a master account. Bank A uses Bank C as its correspondent bank, while Bank B uses Bank D. What happens when Bank A wants to remit $1,000 to Bank B? Bank C transfers $1,000 from its account at the Fed to Bank D's account at the Fed, and Bank D then credits $1,000 to Bank B's account.
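A toy ledger makes the two flows above explicit. The account names and balances are invented for illustration; real correspondent banking involves many more compliance and messaging steps.

```python
# Toy ledger for the two correspondent-banking flows described above.
accounts = {
    ("C", "A"): 5_000,     # Bank A's dollar account held at correspondent Bank C
    ("C", "B"): 5_000,     # Bank B's dollar account held at correspondent Bank C
    ("D", "B"): 5_000,     # Bank B's dollar account held at correspondent Bank D
    ("Fed", "C"): 50_000,  # Bank C's master account at the Fed
    ("Fed", "D"): 50_000,  # Bank D's master account at the Fed
}

def transfer(src, dst, amount):
    accounts[src] -= amount
    accounts[dst] += amount

# Case 1: A and B share correspondent C, so $1,000 moves as a book entry inside C.
transfer(("C", "A"), ("C", "B"), 1_000)

# Case 2: A uses C and B uses D, so the payment settles across Fed master accounts.
accounts[("C", "A")] -= 1_000                 # C debits Bank A's account
transfer(("Fed", "C"), ("Fed", "D"), 1_000)   # C and D settle at the Fed
accounts[("D", "B")] += 1_000                 # D credits Bank B's account

for account, balance in accounts.items():
    print(account, balance)
```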
In general, banks outside the United States use correspondent banks to move US dollars around the world, because once dollars flow between jurisdictions, they must ultimately be cleared directly through the Fed.
I have been involved in crypto since 2013. Usually, the bank where you deposit fiat at a crypto exchange is not a US-registered bank, which means it relies on a US bank with a master account to process fiat deposits and withdrawals. These smaller non-US banks are eager for deposits from crypto companies because they can charge hefty fees and pay nothing on the deposits. Globally, banks are usually desperate for cheap US dollar funding, because the dollar is the global reserve currency. However, these smaller foreign banks must go through their correspondent bank to handle dollar flows outside their home jurisdiction. Although the correspondent banks tolerate these crypto-related fiat flows, for whatever reason, certain crypto customers are sometimes pushed out of the smaller banks at the correspondent's request. If a smaller bank does not comply, it loses its correspondent banking relationship and, with it, the ability to move dollars internationally. A bank that loses the ability to move dollars is a dead bank walking. Therefore, if their correspondent bank demands it, the smaller banks will always drop crypto customers.
This correspondent-bank dynamic is essential when we analyze the strength of Tether's banking partners.
Tether's banking partners: Britannia Bank & Trust, Cantor Fitzgerald, Capital Union, Ansbacher, Deltec Bank and Trust.
Of the five banks listed, only Cantor Fitzgerald is registered in the United States, yet none of the five holds a Fed master account. Cantor Fitzgerald is a primary dealer that helps the Fed conduct open market operations, such as buying and selling bonds. Tether's ability to move and hold dollars therefore depends entirely on its correspondent banks. Given the scale of Tether's US Treasury portfolio, I think its partnership with Cantor is essential to its continued access to that market.
If the CEOs of these banks have not negotiated Tether equity in exchange for providing banking services, they are fools. When I present Tether's revenue per employee later, you will understand why.
This should give you a sense of why Tether's banking partners are subpar. Next, I want to explain why the Fed dislikes Tether's business model, and why, fundamentally, this has nothing to do with crypto and everything to do with how the US dollar money markets work.
Full Reserve Bank
Viewed through a TradFi lens, Tether is a full-reserve bank, also known as a narrow bank. A full-reserve bank takes deposits and does not lend them out. The only service it provides is moving money back and forth. Because depositors bear essentially no risk, it pays little or no interest on deposits. If every depositor asked for their money back at the same time, the bank could meet the demand immediately; hence the name, fully reserved. Contrast that with a fractional-reserve bank, whose loan book is larger than its deposit base: if every depositor asked a fractional-reserve bank for their money back at once, the bank would collapse. Fractional-reserve banks pay interest to attract deposits, but depositors bear risk.
Tether is essentially a full-reserve US dollar bank offering dollar transaction services powered by public blockchains. No loans, no funny business.