The 7-transactions-per-second limit of BTC is completely artificial. It stems from the maximum block size being capped at 1 MB. Satoshi Nakamoto introduced this cap back in 2010 as an easy way to keep spam from flooding the blockchain. Keep in mind that back then Bitcoin was basically worthless, so anyone could have spammed the network with millions of transactions at very low cost, rendering it unusable for everybody.
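To see where the ~7 TPS figure comes from, here is a back-of-envelope sketch; the ~250-byte average transaction size is an assumed typical value, not a protocol constant:

```python
# Back-of-envelope throughput estimate (assumed figures, not protocol constants)
BLOCK_SIZE_BYTES = 1_000_000      # the 1 MB block size cap
AVG_TX_SIZE_BYTES = 250           # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600      # target 10-minute block interval

txs_per_block = BLOCK_SIZE_BYTES / AVG_TX_SIZE_BYTES
tps = txs_per_block / BLOCK_INTERVAL_SECONDS
print(f"{tps:.1f} transactions per second")  # roughly 6.7
```

Raising the block size cap raises this ceiling proportionally, which is the whole point of the debate below.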
Satoshi introduced the 1 MB block size limit as a temporary measure, meant to be easily lifted as transaction volume and the Bitcoin price increased [1]. However, this didn't happen: Satoshi Nakamoto disappeared shortly after, and his appointed successor, Gavin Andresen, was ousted from the project and had his commit rights revoked in 2016.
Many people tried to raise the block size, and all of them failed because the developers who took over Bitcoin Core consistently refused. Since consensus on raising the block size could not be reached, the Bitcoin ABC implementation split from BTC to create Bitcoin Cash (BCH) on August 1st, 2017. Bitcoin Cash initially supported blocks of up to 8 MB, a limit later raised to 32 MB. Mainnet stress tests have shown that the network works fine with blocks of up to 20 MB, making clear that the 1 MB limit of BTC is as artificial as it sounds. Because the larger blocks accommodate more transactions, fees in BCH are consistently under one cent ($0.01), and the roadmap [2] includes features (such as fractional satoshis) to keep fees low even as the BCH price increases.
I highly recommend this article [3] as the most comprehensive summary of the Bitcoin scaling debate.
So the final question is: are 32 MB blocks the best we can do? The answer is a clear no. There is much room for improvement, and the limits are far higher. I recommend this talk by Amaury Séchet [4] and this article on terabyte blocks by Johannes Vermorel [5] to learn more.
One of the main goals of Bitcoin was to be decentralized by allowing most computers to download and verify the entire history of Bitcoin transactions within reasonable space and time. With a 32 MB limit, that would amount to many TB of data rather than the current 250 GB, and many weeks of verification instead of a few days.
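A quick sketch of the growth rate implied above, assuming every block were completely full (a worst case, not observed throughput):

```python
# Worst-case annual chain growth with 32 MB blocks (assumed full blocks)
block_size_mb = 32
blocks_per_day = 144                 # one block every 10 minutes
growth_gb_per_year = block_size_mb * blocks_per_day * 365 / 1024
print(f"~{growth_gb_per_year:.0f} GB per year")  # about 1.6 TB per year
```

So a fully-used 32 MB chain grows by roughly 1.6 TB per year, which is the cost side of the trade-off discussed next.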
Decentralization doesn't mean that everybody has to run their own node, but rather that anybody can do it without asking permission from a central authority. It was always the plan that, as the network grew, the cost of running a node would increase and nodes would be run in server farms. In the words of Satoshi Nakamoto [1]:
> The current system where every user is a network node is not the intended configuration for large scale. That would be like every Usenet user runs their own NNTP server. The design supports letting users just be users. The more burden it is to run a node, the fewer nodes there will be. Those few nodes will be big server farms. The rest will be client nodes that only do transactions and don't generate.
Keeping the block size down just to allow underpowered nodes on the network has the negative effects of hampering scalability, increasing transaction fees, and keeping BTC from fulfilling its intended design as P2P electronic cash.
Note that the size of the blockchain is not an issue for users, who can rely on the SPV protocol to verify transactions without downloading the full blockchain and without blindly trusting a third party. SPV is secure as long as the blockchain is secure (i.e. there are no 51% attacks).
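The core of SPV is checking a Merkle branch: the wallet holds only block headers and verifies that a transaction hashes up to the Merkle root in a header. A minimal sketch, assuming Bitcoin's double SHA-256 and a simplified proof format (a real client also handles the hash byte order and duplicated last leaves):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(tx_hash: bytes, proof: list, merkle_root: bytes) -> bool:
    """Walk a Merkle branch from leaf to root.

    `proof` is a list of (sibling_hash, sibling_is_left) pairs, ordered
    from the leaf level upward. Illustrative sketch, not a full SPV client.
    """
    h = tx_hash
    for sibling, sibling_is_left in proof:
        h = dsha256(sibling + h) if sibling_is_left else dsha256(h + sibling)
    return h == merkle_root

# Toy example: a two-transaction block.
tx_a, tx_b = dsha256(b"tx-a"), dsha256(b"tx-b")
root = dsha256(tx_a + tx_b)
assert verify_merkle_proof(tx_a, [(tx_b, False)], root)
```

The proof size grows logarithmically with the number of transactions in a block, which is why SPV stays cheap even with very large blocks.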
I also want to mention that in the Bitcoin whitepaper, "full nodes" refers to the miners, i.e. nodes that generate blocks. Validating blocks without mining serves no purpose in the whitepaper's design.
Downloading years-old data is pointless. Add a hash of all the existing public keys with balances (i.e. the ledger) every week, and you can get away with downloading just two weeks of data plus that ledger. For people who want to data-mine it, a measly 8 TB dataset is tiny compared to plenty of public datasets that are hundreds of times that size: https://nasa.github.io/data-nasa-gov-frontpage/
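The weekly ledger hash described above is just a deterministic commitment to the set of (address, balance) pairs. A minimal sketch of the idea; a real design would use a Merkle tree over the entries so individual balances can be proven, but a flat hash already shows the commitment property:

```python
import hashlib

def ledger_hash(balances: dict) -> str:
    """Deterministic commitment to a (address -> balance) snapshot.

    Entries are sorted so the hash depends only on the ledger's contents,
    not on insertion order. Illustrative sketch only.
    """
    h = hashlib.sha256()
    for addr, bal in sorted(balances.items()):
        h.update(f"{addr}:{bal}".encode())
    return h.hexdigest()

snap_a = ledger_hash({"alice": 5, "bob": 3})
snap_b = ledger_hash({"bob": 3, "alice": 5})
assert snap_a == snap_b  # same ledger, same commitment, regardless of order
```

Any node that independently replays the chain and reaches the same snapshot hash knows it agrees on the full ledger, so new nodes only need the latest committed snapshot plus the blocks after it.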
[1] https://bitcointalk.org/index.php?topic=1347.msg15366#msg153...
[2] https://www.bitcoincash.org/roadmap.html
[3] https://medium.com/hackernoon/the-great-bitcoin-scaling-deba...
[4] https://www.youtube.com/watch?v=Z0rplj8wSR4
[5] http://blog.vermorel.com/journal/2017/12/17/terabyte-blocks-...