@@ -16,7 +16,7 @@
"\n",
"### First We Create a Simple Database\n",
"\n",
"Step one is to create our database - we're going to do this by initializing a random list of 1s and 0s (which are the entries in our database). Note - the number of entries directly corresponds to the number of people in our database."
"Step one is to create our database - we're going to do this by initializing a random list of Trues and Falses (which are the entries in our database). Note - the number of entries directly corresponds to the number of people in our database."
]
},
{
@@ -27,7 +27,7 @@
{
"data": {
"text/plain": [
"tensor([1, 0, 0, ..., 1, 1, 1], dtype=torch.uint8)"
"tensor([False, False, False, ..., False, True, True])"
]
},
"execution_count": 1,
@@ -381,7 +381,7 @@
"\n",
"Thus, each person is now protected with \"plausible deniability\". If they answer \"Yes\" to the question \"have you committed X crime?\", then it might becasue they actually did, or it might be becasue they are answering according to a random coin flip. Each person has a high degree of protection. Furthermore, we can recover the underlying statistics with some accuracy, as the \"true statistics\" are simply averaged with a 50% probability. Thus, if we collect a bunch of samples and it turns out that 60% of people answer yes, then we know that the TRUE distribution is actually centered around 70%, because 70% averaged wtih 50% (a coin flip) is 60% which is the result we obtained. \n",
"\n",
"However, it should be noted that, especially when we only have a few samples, the this comes at the cost of accuracy. This tradeoff exists across all of Differential Privacy. The greater the privacy protection (plausible deniability) the less accurate the results. \n",
"However, it should be noted that, especially when we only have a few samples, this comes at the cost of accuracy. This tradeoff exists across all of Differential Privacy. The greater the privacy protection (plausible deniability) the less accurate the results. \n",
"\n",
"Let's implement this local DP for our database before!"
]
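The coin-flip recovery argument in this cell can be sketched in code. This is a hedged illustration rather than the notebook's own implementation; the function names `randomized_response` and `deskew` are invented for this sketch:

```python
import random

def randomized_response(truth: bool) -> bool:
    """Local DP via two coin flips: answer honestly half the time,
    otherwise answer according to a second fair coin flip."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def deskew(observed_mean: float) -> float:
    """Invert the averaging: observed = 0.5*true + 0.5*0.5,
    so true = 2*observed - 0.5."""
    return 2 * observed_mean - 0.5

# The worked example from the text: 60% "yes" answers imply a true
# rate of about 70%, since 70% averaged with 50% is 60%.
print(deskew(0.60))  # 0.7

# Simulate: with a true rate of 70%, the observed mean hovers near 60%,
# and de-skewing recovers roughly 70%.
true_db = [random.random() < 0.70 for _ in range(100_000)]
noisy = [randomized_response(t) for t in true_db]
observed = sum(noisy) / len(noisy)
print(deskew(observed))  # close to 0.7
```

With only a handful of samples the de-skewed estimate is noisy, which is exactly the privacy/accuracy tradeoff the cell describes.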
@@ -1186,7 +1186,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.1"
"version": "3.7.3"
}
},
"nbformat": 4,
6 changes: 3 additions & 3 deletions Section 1 - Differential Privacy.ipynb
@@ -16,7 +16,7 @@
"\n",
"### First We Create a Simple Database\n",
"\n",
"Step one is to create our database - we're going to do this by initializing a random list of 1s and 0s (which are the entries in our database). Note - the number of entries directly corresponds to the number of people in our database."
"Step one is to create our database - we're going to do this by initializing a random list of Trues and Falses (which are the entries in our database). Note - the number of entries directly corresponds to the number of people in our database."
]
},
{
@@ -27,7 +27,7 @@
{
"data": {
"text/plain": [
"tensor([1, 0, 0, ..., 1, 1, 1], dtype=torch.uint8)"
"tensor([False, False, False, ..., False, True, True])"
]
},
"execution_count": 1,
@@ -1186,7 +1186,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
"version": "3.7.3"
}
},
"nbformat": 4,