add swip-35: negative incentives epic WIP #69

---
SWIP: 35
title: Negative Incentives (Epic)
author: Viktor Trón <viktor@ethswarm.org> (@zelig)
discussions-to: discord SWIP channel, https://discord.gg/Q6BvSkCv
status: Draft
type: Standards Track
category: Core
created: 2025-03-09
---

# Advanced storage guarantees

## Abstract

This SWIP proposes an entire epic about adding further layers of incentives on top of the basic positive incentives in Swarm. By adopting the SWIP, Swarm will offer its users a wider range of constructions for securing the persistence of chunks and their metadata, and at the same time provide a low-barrier-of-entry service through which node operators with spare storage, or providers of BZZ futures, can increase their revenue.

## Motivation

The current storage incentive system has a few shortcomings, mainly related to the problem that postage batches are difficult to interpret. This interpretation problem leads to quite a bit of misunderstanding and misguided (and potentially unmet) expectations regarding both the amount stored and the storage period.

These issues are coupled with known privacy concerns regarding the shared ownership of chunks, which is transparent to third parties. Effectively, information about the publisher, as well as about the chunks constituting the publication, leaks meta-data in the system, somewhat overshadowing the otherwise rather strong privacy claims of Swarm.

The latter issue is solved if the stamps on individual chunks are signed by other nodes. We will introduce a *stamp mixer service* of sorts, in such a way that the system does not lose its capacity to prevent doubly sold postage slots.

Last, but not least, enforcing payment for each upload does not really make sense. Firstly, users had some difficulty grasping why, given that the free tier of the bandwidth incentives allows free download and upload with throttled throughput, they still need to pay for storage. This problem was most obvious for really short-lived messages such as those used in chat or notifications: since, in these cases, preservation is not meaningful or is even actively undesired, paying for postage sounds not only unnecessary but outright wasteful. Secondly, the fact that expired chunks are not deleted but are put into the *cache*[^5] poses the question of why there is no direct entry to it.

The most natural way to pay for storage is by *amount* and *time period*. Surely, the unit price of storage (per chunk, per block) already suggests this. However, for most users, a deal (through an act of purchase) entails that the counterparty can be held to the terms *precisely*: i.e., you bought a fixed quota (an amount of bytes) for a fixed period (a duration starting now, expiring after a fixed period or on a fixed date). Experience shows that any deviation from this expectation comes as a surprise to the user.

The current primitive of storage compensation, the postage batch, falls short on these expectations. The primary reason is that it was designed with simplicity in mind, pertaining to its integration into the redistribution system.

Let `s` and `e` be the start and end dates of storage; the period is `e-s` in the units used in the oracle price. Consider the following scenario: owner `A` buys a postage batch `B` for `2^C` chunks and escrows amount `D` onto `B`. Now, if the price of storage per chunk per time unit at `s` is `p_s`, the naively projected expiry is `e' = s + D/(2^C * p_s)`; if `A` bought the batch expecting it to last until some `e > e'`, then `A` is assuming the price will decrease. However, if the average price increases, then the effective period ends `d` units sooner, where `d` is the smallest number such that the cumulative rent over the period `[s, e-d]` reaches the escrow: `\sum_{i=s}^{e-d} p_i * 2^C >= D`.

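To make the expiry dynamics concrete, here is a minimal numeric sketch of the calculation above. It assumes a hypothetical per-chunk, per-time-unit price series; `effectiveExpiry` and all names are illustrative, not part of any Swarm API.

```go
package main

import "fmt"

// effectiveExpiry returns how many time units a batch with escrowed
// amount d and 2^c chunk slots survives under a given per-chunk,
// per-time-unit price series (illustrative only, not a Swarm API).
func effectiveExpiry(d uint64, c uint, prices []uint64) int {
	slots := uint64(1) << c // 2^C postage slots
	var rent uint64
	for i, p := range prices {
		rent += p * slots // cumulative rent: sum of p_i * 2^C
		if rent > d {
			return i // escrow exhausted during time unit i
		}
	}
	return len(prices) // escrow outlasts the whole price series
}

func main() {
	const c = 16        // batch depth: 2^16 chunk slots
	const d = 100 << 16 // escrow sized for exactly 100 units at price 1
	flat := make([]uint64, 120)
	rising := make([]uint64, 120)
	for i := range flat {
		flat[i] = 1
		rising[i] = 1 + uint64(i)/50 // price steps up over time
	}
	// At constant price the naive estimate D/(2^C*p_s) = 100 holds;
	// with a rising price the effective period ends sooner.
	fmt.Println(effectiveExpiry(d, c, flat))   // 100
	fmt.Println(effectiveExpiry(d, c, rising)) // 75
}
```
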
As long as 3 is undifferentiating due to a large number of (near) simultaneous requests,
1 and 2 will not help align identity to chunk, or even chunk to chunk, easily. This is why push-syncing all the chunks in the batch can be delayed: exactly in order to ensure that a larger number of simultaneous requests get published together. Ideally, the delay coincides with the time of batch completion, i.e., whenever the batch (prepared to cover the time for which the original uploaders want the chunks to be stored) is fully utilised, it is closed and all the chunks are uploaded. The problem is that if the volume of traffic is not great, especially traffic relating to (more expensive) long-term storage, publishing may be delayed until after the chunks of the file have already disappeared from Swarm's cache. These chunks therefore need to be first uploaded with a rather short period just to survive, and may later also have their subscription renewed. Such a strategy creates more usage of the short periods, which in turn makes it possible to fill bigger batches faster and more efficiently: the bigger the batch, the bigger the anonymity set, and thus the better the anonymity you get.

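As a rough illustration of this delayed-publication strategy, the following sketch buffers stamped chunks and releases them for push-syncing only when the batch is fully utilised, or when waiting longer would let the buffered chunks drop out of the cache. All names (`batchBuffer`, the deadline logic) are hypothetical; this is not the bee implementation.

```go
package main

import (
	"fmt"
	"time"
)

// batchBuffer accumulates stamped chunks and releases them for
// push-syncing together, to maximise the anonymity set. All names
// here are hypothetical; this is not the bee implementation.
type batchBuffer struct {
	slots    int       // total postage slots in the batch
	buffered []string  // chunk addresses awaiting publication
	deadline time.Time // latest safe time before cached chunks expire
}

// add buffers a chunk and reports whether the whole batch should be
// published now: either it is full (maximal anonymity set) or the
// cache-survival deadline has been reached.
func (b *batchBuffer) add(addr string, now time.Time) bool {
	b.buffered = append(b.buffered, addr)
	return len(b.buffered) == b.slots || !now.Before(b.deadline)
}

func main() {
	now := time.Now()
	b := &batchBuffer{slots: 4, deadline: now.Add(time.Hour)}
	for i := 0; i < 4; i++ {
		if b.add(fmt.Sprintf("chunk-%d", i), now) {
			fmt.Printf("publishing %d chunks together\n", len(b.buffered))
		}
	}
}
```
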
Even though treating the postage batch as a hot wallet has quite a few advantages, such an insurance scheme brings improvements in several areas:

- exact byte amount
- users get all the assurance they need from just knowing that a file/root was insured/kept alive
- challenges with inclusion proofs etc. (see the sketch below)

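For the last point, the shape such a challenge could take is a standard Merkle inclusion proof. Below is a generic sketch over SHA-256; Swarm's actual BMT proofs differ in detail, so this only illustrates the challenge/response mechanics under assumed names.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// verifyInclusion checks a generic binary-Merkle inclusion proof:
// given a leaf, its index, and the sibling hashes up to the root,
// it recomputes the root. Swarm's BMT proofs differ in detail; this
// is only meant to illustrate the challenge/response shape.
func verifyInclusion(root, leaf []byte, index int, siblings [][]byte) bool {
	h := sha256.Sum256(leaf)
	node := h[:]
	for _, sib := range siblings {
		var pair []byte
		if index%2 == 0 {
			pair = append(append(pair, node...), sib...) // node is a left child
		} else {
			pair = append(append(pair, sib...), node...) // node is a right child
		}
		d := sha256.Sum256(pair)
		node = d[:]
		index /= 2
	}
	return bytes.Equal(node, root)
}

func main() {
	// Build a two-leaf tree by hand and prove inclusion of leaf 0.
	l0, l1 := sha256.Sum256([]byte("a")), sha256.Sum256([]byte("b"))
	root := sha256.Sum256(append(l0[:], l1[:]...))
	ok := verifyInclusion(root[:], []byte("a"), 0, [][]byte{l1[:]})
	fmt.Println(ok) // true
}
```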