As with the merge, the surge has more than one step to it, but we're only really interested in one important part: Sharding.
Now, before we understand what Sharding is, we have to understand why Ethereum is so expensive and slow to use in the first place.
I expect that a lot of users here already know how Ethereum's bidding system works, so feel free to skip this section if you do.
– Why is Ethereum so inefficient?
Ethereum currently runs on PoW (proof of work), meaning that all transactions are validated by miners who solve very complicated computational problems, for a fee of course.
As a user, you obviously want your transaction to go through fast, but Ethereum is a very popular chain and there are a lot of people ahead of you. To try and "skip ahead" of them, you basically "tip" the miners a little bit more than everyone else, and boom! You become their priority. So it's basically a bidding game: whoever is willing to "tip" more gets their transaction validated faster.
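To make that bidding game concrete, here's a toy Python sketch of a mempool where whoever tips the most gets picked first. The `Mempool` class and its method names are made up for illustration; real Ethereum clients are far more complex than this.

```python
import heapq

# Toy model of the "bidding game": a pool of pending transactions where
# the highest tipper is always validated first.

class Mempool:
    def __init__(self):
        self._heap = []  # min-heap; tips are negated so the biggest pops first

    def submit(self, sender, tip_gwei):
        heapq.heappush(self._heap, (-tip_gwei, sender))

    def next_for_block(self):
        neg_tip, sender = heapq.heappop(self._heap)
        return sender, -neg_tip

pool = Mempool()
pool.submit("alice", 2)
pool.submit("bob", 10)   # bob "tips" the most...
pool.submit("carol", 5)

print(pool.next_for_block())  # ('bob', 10) -- highest tip goes first
```

Everyone else waits their turn behind bigger tippers, which is exactly why fees climb when the chain is busy.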
– Why does the merge come before the surge?
Think of Ethereum as a long stretch of road that has one single lane. It works well sometimes but when it gets congested, IT GETS CONGESTED…
This is exactly why the merge comes before all other upgrades.
Developers realized that Ethereum is in huge demand and that the "road" they created (which represents the blockchain) needs a ton of fixes. But fixing this "road" means that traffic would come to a complete halt, and so would hundreds of billions of dollars in assets.
Therefore, they created a completely different "road", the Beacon chain, which can undergo upgrades and development without having to stop traffic.
– What is Sharding and how does it work?
What Sharding will do is the equivalent of creating many more lanes (64, to be exact) on a brand-new road (the Beacon chain).
Ethereum can only process around 15 transactions per second, and has maxed out at around 16 TPS, so it's safe to say Ethereum isn't the fastest of blockchains. That's why developers came up with Sharding as a solution.
Sharding is an algorithmic process of splitting a blockchain up into many smaller pieces, called shards, that work in parallel. It is done in a way that allows anyone to verify their own shard and still fully trust the rest of the blockchain.
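As a rough illustration of "splitting into pieces", here's a toy sketch of deterministically mapping keys to one of 64 shards by hashing. This is just the general idea of partitioning, not Ethereum's actual shard-assignment mechanism.

```python
import hashlib

NUM_SHARDS = 64  # the planned shard count

def shard_for(key: str) -> int:
    # Hash the key and take it modulo the shard count, so every key
    # deterministically lands in exactly one shard.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest, "big") % NUM_SHARDS

# Different keys may land in different shards, but the same key
# always maps to the same shard, so anyone can re-check the mapping.
print(shard_for("account-1"), shard_for("account-2"))
```

Because the mapping is deterministic, anyone can independently verify which shard a given piece of data belongs to.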
– Initial plans for Sharding and how they changed
The initial plan was to implement Sharding across the entire Ethereum blockchain (the EVM, data, smart contracts, accounts, and everything else on the chain). However, Vitalik and the rest of the developers decided that it would be a better idea to implement Sharding on the data alone. This is why rollups, and not the base blockchain itself, will take over execution scaling.
– The importance of L2s for the future of Ethereum
As I mentioned in the previous section, the plans for Sharding changed, and the responsibility for scalability now falls on L2s (rollups, basically).
Scaling solutions like Optimism, Immutable X, Arbitrum, Loopring, and many more do all of their computation and storage off-chain and post data back to Ethereum.
Without getting into a lot of technical details, Sharding will basically put these scaling solutions on steroids.
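To picture what a rollup does, here's a toy sketch: execute a batch of transfers off-chain, then post only a tiny summary (a state root and a count) back to the main chain. All names here (`apply_transfers`, `post_batch`) are illustrative, not any real rollup's API.

```python
import hashlib
import json

def apply_transfers(balances, transfers):
    # Off-chain execution: update balances for every transfer in the batch.
    new = dict(balances)
    for sender, recipient, amount in transfers:
        new[sender] -= amount
        new[recipient] = new.get(recipient, 0) + amount
    return new

def post_batch(balances, transfers):
    # What lands on the main chain: a small commitment to the new state
    # plus some compressed data, not the full execution of each transaction.
    new_state = apply_transfers(balances, transfers)
    state_root = hashlib.sha256(
        json.dumps(new_state, sort_keys=True).encode()
    ).hexdigest()
    return {"state_root": state_root, "tx_count": len(transfers)}

batch = [("alice", "bob", 5), ("bob", "carol", 2)]
receipt = post_batch({"alice": 10, "bob": 3}, batch)
print(receipt["tx_count"])  # 2
```

The more data room Ethereum gives rollups for those posted summaries, the cheaper each bundled transaction gets, which is what data Sharding is for.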
I mentioned in my previous post that I am in fact biased towards Polygon, and some people in the comments genuinely asked why. Here is exactly why (by the way, this does not mean I think other scaling solutions are bad; I'm fond of many of them and I always welcome competition).
The most important feature of Sharding is data availability.
Right now there are Ethereum nodes: essentially a bunch of computers that each keep a fully comprehensive record of the entire blockchain. As you can imagine, this takes up A TON of precious and expensive storage space.
Well, it just so happens that Polygon already has a solution for that in the form of Polygon Avail. No other scaling solution currently has comparable technology in that area.
What Polygon Avail essentially does is solve the problem of data availability by storing data off-chain and presenting only the data that's needed, when it's needed.
It basically shows Ethereum proof that "yes, the entire data set is safe and sound; here is the small piece of data you need right now", without burdening Ethereum with massive amounts of data, most of which would never be used and would only slow the network down.
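The general idea of "prove the data exists without posting all of it" can be sketched with a plain Merkle tree: commit to all the chunks with one small root, then prove any single chunk against that root. (Avail's real design uses fancier tools like erasure coding and polynomial commitments; this is just the intuition.)

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash the chunks pairwise, level by level, down to one root.
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Collect the sibling hash at each level: a proof of one chunk
    # is only log2(n) hashes, not the whole data set.
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

chunks = [f"chunk-{i}".encode() for i in range(8)]
root = merkle_root(chunks)             # only this tiny commitment goes on-chain
proof = merkle_proof(chunks, 5)        # prove chunk 5 without sending the rest
print(verify(root, chunks[5], proof))  # True
```

The chain only has to store the 32-byte root, yet anyone can be convinced that a specific chunk really belongs to the committed data set.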
Again, I think competition is an amazing thing to have especially in a tech market like this one so I support all scaling solutions but as I said, I personally think Polygon has an edge over others.
– Some cool features that come with Sharding
This post is starting to get long, so I'll talk about just one feature that I think is very cool and also extremely important.
Sharding will actually change the way that nodes are selected to validate each shard.
I’ll throw in an example given by Vitalik.
Imagine there are 6,400 nodes and 64 shards (the planned number). Every time a shard block needs verification, the Beacon chain (the new PoS chain) will select 100 nodes at random to validate it, and the same will happen for the next block, and the one after that, and so on (all at random).
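Here's a toy model of that selection in Python. The real Beacon chain derives its committee shuffling from verifiable on-chain randomness (RANDAO); the seeded shuffle below just illustrates the idea of a fresh random assignment every slot.

```python
import random

NUM_NODES = 6_400
NUM_SHARDS = 64
COMMITTEE_SIZE = NUM_NODES // NUM_SHARDS  # 100 nodes per shard

def committees_for_slot(seed):
    # Shuffle all node ids using the slot's randomness, then deal them out
    # COMMITTEE_SIZE at a time -- one committee per shard, fresh every slot.
    rng = random.Random(seed)
    nodes = list(range(NUM_NODES))
    rng.shuffle(nodes)
    return [nodes[i * COMMITTEE_SIZE:(i + 1) * COMMITTEE_SIZE]
            for i in range(NUM_SHARDS)]

slot_1 = committees_for_slot(seed=1)
slot_2 = committees_for_slot(seed=2)  # next block: a brand new assignment
print(len(slot_1), len(slot_1[0]))    # 64 100
```

Because the assignment is reshuffled every slot, an attacker can't camp on one shard and wait for their nodes to end up together.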
The cool and important part of all this lies in security. An attacker will now find it much harder to attack the network, because they would need to control a large portion of the entire network, and for a much longer time.
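To see why random committees help, here's a rough back-of-the-envelope calculation: the chance that an attacker's nodes make up a two-thirds majority of one randomly drawn 100-node committee. The 2/3 threshold and the node counts are assumptions for illustration, not figures from the Ethereum spec.

```python
from math import comb

def p_committee_captured(attacker_nodes, total_nodes,
                         committee=100, threshold=67):
    # Probability that a randomly drawn committee contains at least
    # `threshold` attacker-controlled nodes (a hypergeometric tail).
    p = 0.0
    for k in range(threshold, committee + 1):
        p += (comb(attacker_nodes, k)
              * comb(total_nodes - attacker_nodes, committee - k)
              / comb(total_nodes, committee))
    return p

# An attacker holding 30% of 6,400 nodes almost never captures
# two thirds of any single randomly sampled 100-node committee.
print(p_committee_captured(1920, 6400) < 1e-9)  # True
```

So even a very large attacker is overwhelmingly likely to fall short in any given committee, and the reshuffling means they never get to keep a lucky draw.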
I hope this post was helpful and I’d love to answer any questions in the comments!
I enjoyed answering questions on the previous post and would love to make another post about the upgrade after the surge (the verge, the Verkle trees update) if there's enough demand!