11 changes: 5 additions & 6 deletions docs/client-api/faq/what-is-a-collection.mdx
Original file line number Diff line number Diff line change
@@ -17,8 +17,7 @@ import LanguageContent from "@site/src/components/LanguageContent";
* **A collection** in RavenDB is a set of documents tagged with the same `@collection` metadata property.
Every document belongs to exactly one collection.

* Being a schemaless database, there is no requirement that documents in the same collection will share the same structure,
although typically, a collection holds similarly structured documents based on the entity type of the document.
* Being a schemaless database, there is no requirement that documents in the same collection will share the same structure, although typically, a collection holds similarly structured documents based on the entity type of the document.

* The collection is just a virtual concept.
There is no influence on how or where documents within the same collection are physically stored.
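The point above can be sketched in a few lines (hypothetical documents in plain Python, not RavenDB client code): every document carries exactly one `@collection` metadata tag, and documents in the same collection need not share a structure.

```python
# Hypothetical documents: two differently-shaped docs share the "Orders" collection.
docs = [
    {"Id": "orders/1-A", "Lines": 3, "@metadata": {"@collection": "Orders"}},
    {"Id": "orders/2-A", "Freight": 12.5, "@metadata": {"@collection": "Orders"}},
    {"Id": "employees/1-A", "Name": "Nancy", "@metadata": {"@collection": "Employees"}},
]

def group_by_collection(documents):
    """Group document IDs by the @collection metadata tag (one tag per document)."""
    collections = {}
    for doc in documents:
        tag = doc["@metadata"]["@collection"]
        collections.setdefault(tag, []).append(doc["Id"])
    return collections

print(group_by_collection(docs))
# {'Orders': ['orders/1-A', 'orders/2-A'], 'Employees': ['employees/1-A']}
```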
@@ -63,8 +62,8 @@ import LanguageContent from "@site/src/components/LanguageContent";
This internal index is used to query the database and retrieve only documents from a specified collection.

* **In Indexing**
* Each [Map Index](../../indexes/map-indexes.mdx) is built against a single collection (or muliple collections when using a [Multi-Map Index](../../indexes/multi-map-indexes.mdx).
During the indexing process, the index function iterates only over the documents that belong to the specified collection(s).
* Each [Map Index](../../indexes/map-indexes.mdx) is built against a single collection, or multiple collections when using a [Multi-Map Index](../../indexes/multi-map-indexes.mdx).
During the indexing process, the index function iterates only over the documents that belong to the specified collection(s).

* **In Revisions**
* Document [Revisions](../../document-extensions/revisions/overview.mdx) can be defined per collection.
@@ -74,10 +73,10 @@ import LanguageContent from "@site/src/components/LanguageContent";

* **The @hilo Collection**
* The ranges of available IDs values returned by [HiLo algorithm](../../client-api/document-identifiers/hilo-algorithm.mdx) are per collection name.
Learn more in: [The @hilo Collection](../../studio/database/documents/documents-and-collections.mdx#the-@hilo-collection)
Learn more in: [The @hilo Collection](../../studio/database/documents/documents-and-collections.mdx#the-hilo-collection)
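The per-collection ranges can be pictured with a minimal sketch (illustrative only; this is not the client's actual HiLo implementation, and the range size is an assumed value):

```python
class HiLoAllocator:
    """Illustrative HiLo: each collection name gets its own independent ID ranges."""

    def __init__(self, range_size=32):  # range size is an assumed illustrative value
        self.range_size = range_size
        self.next_range_start = {}  # next unreserved range start, per collection
        self.current = {}           # next ID to hand out, per collection
        self.range_end = {}         # exclusive end of the currently reserved range

    def next_id(self, collection):
        if self.current.get(collection, 0) >= self.range_end.get(collection, 0):
            # Simulates the one server round-trip that reserves a fresh range.
            start = self.next_range_start.get(collection, 1)
            self.next_range_start[collection] = start + self.range_size
            self.current[collection] = start
            self.range_end[collection] = start + self.range_size
        value = self.current[collection]
        self.current[collection] += 1
        return f"{collection.lower()}/{value}"

hilo = HiLoAllocator()
print(hilo.next_id("Orders"))    # orders/1
print(hilo.next_id("Orders"))    # orders/2
print(hilo.next_id("Products"))  # products/1 -- an independent per-collection range
```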

* **The @empty Collection**
* Learn more in: [The @empty Collection](../../studio/database/documents/documents-and-collections.mdx#the-@empty-collection)
* Learn more in: [The @empty Collection](../../studio/database/documents/documents-and-collections.mdx#the-empty-collection)



2 changes: 1 addition & 1 deletion docs/client-api/rest-api/rest-api-intro.mdx
@@ -142,7 +142,7 @@ files with the `--cert` and `--key` options respectively:
</CodeBlock>
</TabItem>

These files can be found in the configuration Zip package you recieved at the end of the setup wizard. You can download this Zip package
These files can be found in the configuration Zip package you received at the end of the setup wizard. You can download this Zip package
again by going to this endpoint: `<the URL of your server>/admin/debug/cluster-info-package`. The certificate and key are found at
the root of the package with the names: `admin.client.certificate.<your domain name>.crt`, and
`admin.client.certificate.<your domain name>.key` respectively.
4 changes: 2 additions & 2 deletions docs/indexes/content/_indexing-spatial-data-csharp.mdx
@@ -30,7 +30,7 @@ import CodeBlock from '@theme/CodeBlock';

* A spatial index can also be defined from [Studio](../../studio/database/indexes/create-map-index.mdx#spatial-field-options).

#### Exmaple:
#### Example:

<Tabs groupId='languageSyntax'>
<TabItem value="Indexing_coordinates" label="Indexing_coordinates">
@@ -152,7 +152,7 @@ object CreateSpatialField(string shapeWkt); // Shape in WKT string form

* Note: Modifying the strategy after the index has been created & deployed will trigger re-indexing.

#### Exmaple:
#### Example:

<Tabs groupId='languageSyntax'>
<TabItem value="Index" label="Index">
4 changes: 2 additions & 2 deletions docs/indexes/content/_indexing-spatial-data-nodejs.mdx
@@ -30,7 +30,7 @@ import CodeBlock from '@theme/CodeBlock';

* A spatial index can also be defined from [Studio](../../studio/database/indexes/create-map-index.mdx#spatial-field-options).

#### Exmaple:
#### Example:

<Tabs groupId='languageSyntax'>
<TabItem value="Indexing_coordinates" label="Indexing_coordinates">
@@ -145,7 +145,7 @@ createSpatialField(wkt);

* Note: Modifying the strategy after the index has been created & deployed will trigger re-indexing.

#### Exmaple:
#### Example:

<TabItem value="spatial_index_3" label="spatial_index_3">
<CodeBlock language="js">
4 changes: 2 additions & 2 deletions docs/indexes/content/_indexing-spatial-data-php.mdx
@@ -30,7 +30,7 @@ import CodeBlock from '@theme/CodeBlock';

* A spatial index can also be defined from [Studio](../../studio/database/indexes/create-map-index.mdx#spatial-field-options).

#### Exmaple:
#### Example:

<Tabs groupId='languageSyntax'>
<TabItem value="Indexing_coordinates" label="Indexing_coordinates">
@@ -209,7 +209,7 @@ object CreateSpatialField(string shapeWkt); // Shape in WKT string form

* Note: Modifying the strategy after the index has been created & deployed will trigger re-indexing.

#### Exmaple:
#### Example:

<Tabs groupId='languageSyntax'>
<TabItem value="Index" label="Index">
4 changes: 2 additions & 2 deletions docs/indexes/content/_indexing-spatial-data-python.mdx
@@ -30,7 +30,7 @@ import CodeBlock from '@theme/CodeBlock';

* A spatial index can also be defined from [Studio](../../studio/database/indexes/create-map-index.mdx#spatial-field-options).

#### Exmaple:
#### Example:

<Tabs groupId='languageSyntax'>
<TabItem value="Indexing_coordinates" label="Indexing_coordinates">
@@ -138,7 +138,7 @@ class WktField(DynamicSpatialField): # Shape in WKT string format

* Note: Modifying the strategy after the index has been created & deployed will trigger re-indexing.

#### Exmaple:
#### Example:

<Tabs groupId='languageSyntax'>
<TabItem value="Index" label="Index">
2 changes: 1 addition & 1 deletion docs/indexes/querying/content/_sorting-java.mdx
@@ -338,7 +338,7 @@ order by Name as alphanumeric

If your data contains geographical locations, you might want to sort the query result by distance from a given point.

This can be achived by using the `orderByDistance` and `orderByDistanceDescending` methods (API reference [here](../../../client-api/session/querying/how-to-make-a-spatial-query.mdx)):
This can be achieved by using the `orderByDistance` and `orderByDistanceDescending` methods (API reference [here](../../../client-api/session/querying/how-to-make-a-spatial-query.mdx)):
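Conceptually, the server orders results by great-circle distance from the given point; a sketch of that computation (an illustrative haversine formula, not the client's internal code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (latitude, longitude) points."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

# Sorting hypothetical documents by ascending distance from a given point:
point = (47.623, -122.335)
companies = [
    {"name": "B", "lat": 47.750, "lon": -122.400},
    {"name": "A", "lat": 47.590, "lon": -122.330},
]
companies.sort(key=lambda c: haversine_km(point[0], point[1], c["lat"], c["lon"]))
print([c["name"] for c in companies])  # ['A', 'B'] -- A is the closer document
```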

<Tabs groupId='languageSyntax'>
<TabItem value="Query" label="Query">
2 changes: 1 addition & 1 deletion docs/server/administration/monitoring/prometheus.mdx
@@ -218,7 +218,7 @@ to the Prometheus `yml` configuration file.
* Use the search bar to search for relevant metrics.
Typing **raven** will display a list of metrics provided by the endpoint.

* Metrics can also be found in RavenDB's enpoint output, using the browser.
* Metrics can also be found in RavenDB's endpoint output, using the browser.
In the following screenshot, for example, we can see that the priority of one of the indexes was updated to 2 (high).

![RavenDB Endpoint Output: Index Priority](./assets/RavenDB_changed-index-priority.png)
@@ -20,7 +20,7 @@ import LanguageContent from "@site/src/components/LanguageContent";
* Updating a `Rehab` or a `Promotable` [Database Node](../../../server/clustering/distribution/distributed-database.mdx#database-topology)

* There is no single coordinator handing out tasks to a specific node.
Instaed, each node decides on its own if it is the [Reponsible Node](../../../server/clustering/distribution/highly-available-tasks.mdx#responsible-node) of the task.
Instead, each node decides on its own if it is the [Responsible Node](../../../server/clustering/distribution/highly-available-tasks.mdx#responsible-node) of the task.

* Each node will re-evaluate its responsibilities with every change made to the [Database Record](../../../client-api/operations/server-wide/create-database.mdx),
such as defining a new _index_, configuring or modifying an _Ongoing Task_, any _Database Topology_ change, etc.
@@ -46,7 +46,7 @@ to verify whether the Highly Available Tasks feature is activated in your databa
* Data subscription
* All ETL types

* If your license does **not** provide highly available tasks, the responsibilites of a failed node will be
* If your license does **not** provide highly available tasks, the responsibilities of a failed node will be
resumed when the node comes back online.

* Scenarios [below](../../../server/clustering/distribution/highly-available-tasks.mdx#tasks-relocation)
2 changes: 1 addition & 1 deletion docs/server/clustering/rachis/consensus-operations.mdx
@@ -32,7 +32,7 @@ Since getting a consensus is an expensive operation, it is limited to the follow

* Creating / Deleting a database
* Adding / Removing node to / from a Database Group
* Changing database settings (e.g. revisions configuraton , conflict resolving)
* Changing database settings (e.g. revisions configuration, conflict resolving)
* Creating / Deleting Indexes (static and auto indexes)
* Configuring the [Ongoing Tasks](../../../studio/database/tasks/ongoing-tasks/general-info.mdx)
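Since consensus requires acknowledgment from a majority of the cluster, the required quorum size is easy to compute (a sketch of the standard Raft-style majority rule, not RavenDB code):

```python
def majority(cluster_size: int) -> int:
    """Smallest number of nodes forming a majority (quorum) in a Raft-style cluster."""
    return cluster_size // 2 + 1

# A 3-node cluster needs 2 acknowledgments and tolerates 1 failure;
# a 5-node cluster needs 3 and tolerates 2.
for n in (1, 3, 5):
    print(f"nodes={n} quorum={majority(n)} tolerated_failures={n - majority(n)}")
```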

2 changes: 1 addition & 1 deletion docs/server/clustering/rachis/what-is-rachis.mdx
@@ -42,7 +42,7 @@ import LanguageContent from "@site/src/components/LanguageContent";

* Support for in-memory and persistent large multi-task state machines.
* Reliably committing updates to a distributed set of state machines.
* Support for voting & non-voting cluster members, see [Cluster Toplogy](../../../server/clustering/rachis/cluster-topology.mdx).
* Support for voting & non-voting cluster members, see [Cluster Topology](../../../server/clustering/rachis/cluster-topology.mdx).
* Dynamic topology, nodes can be added and removed from the cluster on the fly.
* Managing situations such as handling a Leader timeout and forcing a leader to step down.
* ACID local log using the Voron Storage Engine.
2 changes: 1 addition & 1 deletion docs/server/configuration/memory-configuration.mdx
@@ -25,7 +25,7 @@ The minimum amount of available memory RavenDB will attempt to achieve (free mem

## Memory.LowMemoryCommitLimitInMb

The minimum amount of available commited memory RavenDB will attempt to achieve (free commited memory lower than this value will trigger low memory behavior). Value is in MB.
The minimum amount of available committed memory RavenDB will attempt to achieve (free committed memory lower than this value will trigger low memory behavior). Value is in MB.

- **Type**: `int`
- **Default**: `512`
2 changes: 1 addition & 1 deletion docs/server/configuration/tombstone-configuration.mdx
@@ -24,7 +24,7 @@ Time (in minutes) between tombstone cleanups.

## Tombstones.RetentionTimeWithReplicationHubInHrs

Time (in hours) to save tombsones from deletion if this server is defined
Time (in hours) to save tombstones from deletion if this server is defined
as a replication hub.

- **Type**: `TimeUnit.Hours`
6 changes: 3 additions & 3 deletions docs/server/content/_embedded-java.mdx
@@ -24,9 +24,9 @@ try (IDocumentStore store = EmbeddedServer.INSTANCE.getDocumentStore("Embedded")

## Prerequisites

There is one prerequsite and one recommendation for the Embedded package:
There is one prerequisite and one recommendation for the Embedded package:

Prerequsite:
Prerequisite:

- **.NET Core runtime** must be installed manually

@@ -180,7 +180,7 @@ The URL can be used for example for creating a custom document store, omitting t
## Remarks

* You can have only one instance of `EmbeddedServer`
* Method `EmbeddedServer.INTANCE.openStudioInBrowser()` can be used to open an browser instance with Studio
* Method `EmbeddedServer.INSTANCE.openStudioInBrowser()` can be used to open a browser instance with Studio



2 changes: 1 addition & 1 deletion docs/server/kb/linux-setting-limits.mdx
@@ -15,7 +15,7 @@ import LanguageContent from "@site/src/components/LanguageContent";
Linux security limits may degrade RavenDB performance even when physical resources would allow more, and with an encrypted database they can even prevent normal operation. Additionally, debugging may be affected (i.e. core dump creation).

Setting these limits in a persistant way can be achived by editing `/etc/security/limits.conf` to recommended values:
Setting these limits in a persistent way can be achieved by editing `/etc/security/limits.conf` to recommended values:
```
* soft core unlimited
* hard core unlimited
```
4 changes: 2 additions & 2 deletions docs/server/kb/linux-setting-memlock.mdx
@@ -13,15 +13,15 @@ import LanguageContent from "@site/src/components/LanguageContent";

# Linux: Setting memlock when using encrypted database
An encrypted database makes extensive use of the sodium library, which requires high locked-memory limits.
`memlock` refers to memory that will not be paged out, and it's limit can be viewed usign `ulimit -l`.
`memlock` refers to memory that will not be paged out, and its limit can be viewed using `ulimit -l`.
The `memlock` limit of a running session can be modified with `prlimit`:

Example, for a 1MB limit:
```
prlimit -p pid --memlock 1MB:1MB
```
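The effective limit can also be inspected from Python's standard library (a convenience sketch; values are reported in bytes, with `RLIM_INFINITY` meaning unlimited):

```python
import resource  # POSIX-only standard library module

# Read the current soft and hard RLIMIT_MEMLOCK values for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)

def fmt(value):
    return "unlimited" if value == resource.RLIM_INFINITY else f"{value} bytes"

print("memlock soft limit:", fmt(soft))
print("memlock hard limit:", fmt(hard))
```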

Persistant settings can be achieved by adding to `/etc/security/limits.conf` the following:
Persistent settings can be achieved by adding to `/etc/security/limits.conf` the following:
```
* soft memlock 1000
* hard memlock 1000
```
2 changes: 1 addition & 1 deletion docs/server/ongoing-tasks/etl/elasticsearch.mdx
@@ -192,7 +192,7 @@ to **omit** _delete_by_query commands and so refrain from deleting documents bef

## Elasticsearch Index Definition

* When the Elasticsearch ETL task runs for the very first time, it will create any Elsasticsearch index defined in
* When the Elasticsearch ETL task runs for the very first time, it will create any Elasticsearch index defined in
the task that doesn't exist yet.

* When the index is created, the document property that holds the RavenDB document ID will be defined
4 changes: 2 additions & 2 deletions docs/server/ongoing-tasks/etl/queue-etl/amazon-sqs.mdx
@@ -81,7 +81,7 @@ that messages would arrive in the same order they were sent or prevent their dup

<Admonition type="info" title="">
Use standard queueing when quick delivery takes precedence over message order and
distinctness or the recepient can make up for them.
distinctness or the recipient can make up for them.
</Admonition>
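One way a recipient can "make up" for standard queueing's at-least-once delivery is an idempotent consumer that remembers processed message IDs (an illustrative sketch, independent of any particular queue client):

```python
processed_ids = set()

def handle(message):
    """Process a message once; skip duplicates a standard queue may redeliver."""
    if message["id"] in processed_ids:
        return False  # duplicate delivery -- already handled, ignore
    processed_ids.add(message["id"])
    # ... actual message processing would go here ...
    return True

deliveries = [{"id": "m-1"}, {"id": "m-2"}, {"id": "m-1"}]  # "m-1" arrives twice
print([handle(m) for m in deliveries])  # [True, True, False]
```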
#### FIFO queueing

@@ -266,7 +266,7 @@ public class Basic
* Extract source documents from the "Orders" collection in RavenDB.
* Process each "Order" document using a defined script that creates a new `orderData` object.
* Load the `orderData` object to the "OrdersQueue" queue on an SQS destination.
* For more details about the script and the `loadTo` method, see the transromation script section below.
* For more details about the script and the `loadTo` method, see the transformation script section below.
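The transformation itself runs as a JavaScript script inside RavenDB; the shape of the step can be mirrored in a short sketch (hypothetical fields, not the actual task code):

```python
def transform_order(doc):
    """Mirror of the ETL step: build the message object that is loaded to the queue."""
    return {
        "Id": doc["Id"],
        "OrderLinesCount": len(doc["Lines"]),
        "TotalCost": sum(l["PricePerUnit"] * l["Quantity"] for l in doc["Lines"]),
    }

order = {
    "Id": "orders/1-A",
    "Lines": [
        {"PricePerUnit": 10.0, "Quantity": 2},
        {"PricePerUnit": 5.0, "Quantity": 1},
    ],
}
print(transform_order(order))
# {'Id': 'orders/1-A', 'OrderLinesCount': 2, 'TotalCost': 25.0}
```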

<TabItem value="add_amazon_sqs_etl_task" label="add_amazon_sqs_etl_task">
<CodeBlock language="csharp">
2 changes: 1 addition & 1 deletion docs/server/ongoing-tasks/etl/queue-etl/azure-queue.mdx
@@ -172,7 +172,7 @@ public class Passwordless
* Extract source documents from the "Orders" collection in RavenDB.
* Process each "Order" document using a defined script that creates a new `orderData` object.
* Load the `orderData` object to the "OrdersQueue" in an Azure Queue Storage.
* For more details about the script and the `loadTo` method, see the [transromation script](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue.mdx#the-transformation-script) section below.
* For more details about the script and the `loadTo` method, see the [transformation script](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue.mdx#the-transformation-script) section below.

<TabItem value="add_azure_etl_task" label="add_azure_etl_task">
<CodeBlock language="csharp">
4 changes: 2 additions & 2 deletions docs/server/ongoing-tasks/etl/queue-etl/kafka.mdx
@@ -32,7 +32,7 @@ import LanguageContent from "@site/src/components/LanguageContent";

* In this page:
* [Add a Kafka connection string](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx#add-a-kafka-connection-string)
* [Exmaple](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx#example)
* [Example](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx#example)
* [Syntax](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx#syntax)
* [Add a Kafka ETL task](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx#add-a-kafka-etl-task)
* [Example - basic](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx#example-basic)
@@ -131,7 +131,7 @@ var res = store.Maintenance.Send(
* Extract source documents from the "Orders" collection in RavenDB.
* Process each "Order" document using a defined script that creates a new `orderData` object.
* Load the `orderData` object to the "OrdersTopic" in a Kafka broker.
* For more details about the script and the `loadTo` method, see the [transromation script](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx#the-transformation-script) section below.
* For more details about the script and the `loadTo` method, see the [transformation script](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx#the-transformation-script) section below.

<TabItem value="add_kafka_etl_task" label="add_kafka_etl_task">
<CodeBlock language="csharp">
4 changes: 2 additions & 2 deletions docs/server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx
@@ -31,7 +31,7 @@ import LanguageContent from "@site/src/components/LanguageContent";

* In this page:
* [Add a RabbitMQ connection string](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx#add-a-rabbitmq-connection-string)
* [Exmaple](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx#example)
* [Example](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx#example)
* [Syntax](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx#syntax)
* [Add a RabbitMQ ETL task](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx#add-a-rabbitmq-etl-task)
* [Example - basic](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx#example-basic)
@@ -126,7 +126,7 @@ var res = store.Maintenance.Send(
* Extract source documents from the "Orders" collection in RavenDB.
* Process each "Order" document using a defined script that creates a new `orderData` object.
* Load the `orderData` object to the "OrdersExchange" in a RabbitMQ broker.
* For more details about the script and the `loadTo` method overloads, see the [transromation script](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx#the-transformation-script) section below.
* For more details about the script and the `loadTo` method overloads, see the [transformation script](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx#the-transformation-script) section below.

<TabItem value="add_rabbitMq_etl_task" label="add_rabbitMq_etl_task">
<CodeBlock language="csharp">
2 changes: 1 addition & 1 deletion docs/server/ongoing-tasks/etl/snowflake.mdx
@@ -66,7 +66,7 @@ A Snowflake ETL task can be created using **Code** or via **Studio**.

![Add New Snowflake Task](./assets/snowflake_etl_new_task.png)

* Use the New Snowfake ETL view to define and save the new task.
* Use the New Snowflake ETL view to define and save the new task.

![Define Snowflake Task](./assets/snowflake-etl-setup.png)
