A note to Eth2 validators: how to determine attestation effectiveness

How can Eth2 validators earn higher rewards? It turns out that the earlier an attestation is included in the chain, the higher the reward the validator receives. Using the key metric of inclusion distance, this article helps validators assess the effectiveness of both individual attestations and aggregate attestations.

An attestation is a vote a validator casts on the current state of the Eth2 blockchain. Each active validator produces one attestation per epoch, composed of the following elements:

One of the interesting parts of the process is the chain head vote, in which the validator attests to the latest valid block, i.e. the head of the chain. The composition of the chain head vote is shown in the following figure:

Here, the slot gives the current position of the chain head, and the hash identifies the block at that position. Together the two uniquely identify a point on the blockchain, and once enough votes are obtained, the network reaches consensus on the state of the chain.

Although the data in each attestation is relatively small, it grows rapidly when thousands of validators participate. Since this data is stored on chain permanently, it is important to reduce its size, which is achieved through the aggregation process.

An aggregate combines multiple attestations that all cast the same votes, including the chain head vote and the finality votes, from the same committee; these are merged into a single aggregate attestation.

An aggregate attestation differs from a simple attestation in two respects. First, an aggregate attestation contains multiple validators. Second, its signature is an aggregate signature, composed from the signatures of the matching simple attestations. Aggregation is very good for storage, but it brings additional communication and computation costs.
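The merging step described above can be sketched in Python. This is a minimal model, not a real client implementation: the class names are hypothetical, and the string "signature" stands in for the BLS aggregate signature that Eth2 actually uses.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class AttestationData:
    """The vote itself: a slot plus the hash (block root) of the chain head."""
    slot: int
    head_root: str

@dataclass
class Attestation:
    data: AttestationData
    committee_bits: List[bool]  # which committee members cast this vote
    signature: str              # stand-in for a BLS signature

def aggregate(attestations: List[Attestation]) -> Attestation:
    """Merge attestations carrying the identical vote into one aggregate.

    Real clients aggregate BLS signatures; here we simply join the
    stand-in strings to keep the sketch self-contained.
    """
    data = attestations[0].data
    assert all(a.data == data for a in attestations), "votes must match"
    size = len(attestations[0].committee_bits)
    bits = [any(a.committee_bits[i] for a in attestations) for i in range(size)]
    sig = "+".join(a.signature for a in attestations)
    return Attestation(data, bits, sig)
```

For example, two attestations from a four-member committee, each with one bit set, combine into a single aggregate with two bits set, illustrating why an aggregate stores far less data than the attestations it replaces.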
If every validator were required to aggregate all attestations, the information in each attestation would have to be relayed to every validator, and the total communication volume would quickly overload the network. Conversely, if aggregation were optional, validators would be unwilling to spend their own resources on it. Instead, the network selects a subset of validators to perform the aggregation duty. These validators are willing to do the work because an aggregate attestation containing more validators is more likely to be included in the chain, which means the validators are more likely to receive their rewards.

Eth2 uses the inclusion distance metric to calculate the reward for a validator's attestation. The inclusion distance is the difference between the slot in which the attestation is made and the slot in which it is first included in a block. For example, if an attestation is made in slot s and included in a block at slot s+1, its inclusion distance is 1; if it is included at slot s+5, its inclusion distance is 5.

In Eth2, the value of an attestation depends on its inclusion distance: the shorter the distance, the better. This is because the sooner information is available to the network, the more useful it is.

To reflect the relative value of attestations, validators are rewarded according to the inclusion distance of their attestations. Specifically, the reward is multiplied by 1/d, where d is the inclusion distance.

If the network is running well, the inclusion distance for every attestation is 1. Such an attestation has maximum effectiveness and accordingly earns the maximum reward. If the attestation is delayed, the validator's reward is reduced accordingly.

If an aggregate attestation has not yet been added to the chain, the proposer of any later block can include it in that block.
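The 1/d scaling can be made concrete with a short sketch. The `base_reward` parameter is a stand-in for whatever full reward the protocol would pay at distance 1; the function names are illustrative, not part of any client API.

```python
from fractions import Fraction

def inclusion_distance(attestation_slot: int, inclusion_slot: int) -> int:
    """Difference between the slot attested and the slot of first inclusion."""
    d = inclusion_slot - attestation_slot
    # An attestation can be included in the next slot at the earliest.
    assert d >= 1, "inclusion distance must be at least 1"
    return d

def inclusion_reward(base_reward: int, distance: int) -> Fraction:
    """Reward scaled by 1/d, as described above."""
    return Fraction(base_reward, distance)
```

With a base reward of 100, an attestation included at distance 1 earns the full 100, while one delayed to distance 5 earns only 20.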
When the inclusion distance of an attestation exceeds 1, the cause needs to be identified. There are several contributing factors.

The validator may suffer from attestation generation delay. For example, its view of the chain state may be out of date, or the validator may lack the resources to generate and sign the attestation quickly. Whatever the reason, a delayed attestation has knock-on effects on the rest of the process.

Once a validator generates an attestation, it must be broadcast to the network's aggregators. The purpose of this step is for aggregators to receive attestations as early as possible, so they can aggregate them before broadcasting to the whole network. A validator should connect to as many other validators as possible to ensure its attestation reaches the aggregators quickly.

The aggregation process itself may also be delayed. One of the most common causes is node overload from the volume of generated attestations. When a large number of attestations must be aggregated, the speed of the aggregation algorithm can also introduce a significant delay.

For an attestation to become part of the on-chain data, it must be included in a block. Block production can fail, however: if the proposing validator is offline or has failed to synchronize with the other validators in the network, the invalid data it produces will be rejected by the chain. A failed block has a further effect:

Because previously valid attestations were not included in the missing block, the next block produced must take on more attestation data. Proposers are more likely to include the attestations that pay the most, so the remaining attestations carry smaller and smaller inclusion rewards, and may miss the best block and subsequent blocks as well.
Because block production is affected by the state of the validators, we define the earliest inclusion slot: the first slot after the attestation's slot in which a valid block is produced. This definition accounts for the fact that attestations cannot be included in blocks that do not exist, and it is not distorted by the effectiveness of other validators.

However, a malicious validator could refuse to aggregate any given attestation, or refuse to include aggregates in the blocks it proposes. The former is addressed by assigning multiple aggregators to each attestation group; the latter by penalizing the refusal to include aggregate attestations in blocks. However, if the penalty for refusing inclusion is compensated financially elsewhere, or if the act is more valuable politically, the attesting validator cannot take any action to force the block-producing validator to include its attestation.

For a single attestation, computing effectiveness may be mildly interesting, but the value by itself does not mean much. The effectiveness of aggregate attestations gives a better picture of the overall effectiveness of a group of validators. The effectiveness of an aggregate is the average of the effectiveness of its individual attestations. For example, the effectiveness of all validators in a given group can be recorded over 7 days and averaged.

After Eth2 launches, thousands of nodes will locate each other and begin proposing and attesting to blocks. Like all young networks, there are still many problems to solve before nodes are as effective as possible. As described in this article, a clear metric for tracking node efficiency is attestation effectiveness. A validator who wants to maximize rewards can use attestation effectiveness to judge its own overall performance.
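The earliest-inclusion-slot definition and the averaging rule above can be combined into a small sketch. The formula for single-attestation effectiveness used here, the ratio of the earliest possible inclusion distance to the actual one, is an assumption consistent with the text (1.0 means the attestation was included as early as the chain allowed); the function names are hypothetical.

```python
from statistics import mean
from typing import List, Set

def earliest_inclusion_slot(attestation_slot: int, missed_slots: Set[int]) -> int:
    """First slot after the attestation's slot that actually has a block,
    skipping slots whose blocks were never produced."""
    slot = attestation_slot + 1
    while slot in missed_slots:
        slot += 1
    return slot

def effectiveness(attestation_slot: int, inclusion_slot: int,
                  missed_slots: Set[int]) -> float:
    """Assumed formula: earliest possible distance / actual distance.
    Not penalized for blocks that were never produced."""
    earliest = earliest_inclusion_slot(attestation_slot, missed_slots)
    return (earliest - attestation_slot) / (inclusion_slot - attestation_slot)

def aggregate_effectiveness(individual: List[float]) -> float:
    """The effectiveness of an aggregate is the average of its members'."""
    return mean(individual)
```

For instance, an attestation from slot 10 included at slot 11 scores 1.0; if slot 11's block was never produced and inclusion happened at slot 13, the score is 2/3 rather than 1/3, since the missing block was outside the attester's control.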

Author: zmhuaxia