From b926d3f9349a749273ec0f2a810a87c3511cc2f9 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 09:49:08 -0700 Subject: [PATCH 001/146] Create ddl-limitations.md --- docs/content/stable/api/ysql/ddl-limitations.md | 1 + 1 file changed, 1 insertion(+) create mode 100644 docs/content/stable/api/ysql/ddl-limitations.md diff --git a/docs/content/stable/api/ysql/ddl-limitations.md b/docs/content/stable/api/ysql/ddl-limitations.md new file mode 100644 index 000000000000..8b137891791f --- /dev/null +++ b/docs/content/stable/api/ysql/ddl-limitations.md @@ -0,0 +1 @@ + From 9446fa75d3a6ae3e4dbd0ab31734a7128eb41778 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 09:56:30 -0700 Subject: [PATCH 002/146] Update ddl-limitations.md --- .../stable/api/ysql/ddl-limitations.md | 24 +++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/docs/content/stable/api/ysql/ddl-limitations.md b/docs/content/stable/api/ysql/ddl-limitations.md index 8b137891791f..50b4b69f7aa1 100644 --- a/docs/content/stable/api/ysql/ddl-limitations.md +++ b/docs/content/stable/api/ysql/ddl-limitations.md @@ -1 +1,25 @@ +--- +title: Behavior of DDL statements [YSQL] +headerTitle: Behavior of DDL statements +linkTitle: Behavior of DDL statements +description: Explains how the behavior of DDL statements works in YugabyteDB YSQL and documents differences from Postgres behavior. [YSQL]. +menu: + stable_api: + identifier: ddl-limitations + parent: api-ysql + weight: 20 +type: docs +--- + +This section describes how DDL statements work in YSQL and documents the difference in YugabyteDB behavior from PostgreSQL. + +## Concurrent DML during DDL operations + +In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE .. ADD COLUMN` DDL statement may add a new column while a `SELECT * from
` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing (support for similar behavior in YugabyteDB is being tracked in [github issue]) + +In YugabyteDB, DML that run concurrently with a DDL may see one of the following results: +1. Operate with the old schema prior to the DDL. +2. Operate with the new schema after the DDL completes. +3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to retry such operations whenever possible. +4. From e9cfdcfe0adaaf03c79a8ec81bd4622d66539bf6 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 10:01:28 -0700 Subject: [PATCH 003/146] Update ddl-limitations.md --- docs/content/stable/api/ysql/ddl-limitations.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/docs/content/stable/api/ysql/ddl-limitations.md b/docs/content/stable/api/ysql/ddl-limitations.md index 50b4b69f7aa1..67e7a1259abd 100644 --- a/docs/content/stable/api/ysql/ddl-limitations.md +++ b/docs/content/stable/api/ysql/ddl-limitations.md @@ -14,7 +14,7 @@ type: docs This section describes how DDL statements work in YSQL and documents the difference in YugabyteDB behavior from PostgreSQL. -## Concurrent DML during DDL operations +## Concurrent DML during a DDL operation In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE
<table> .. ADD COLUMN` DDL statement may add a new column while a `SELECT * from <table>
` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing (support for similar behavior in YugabyteDB is being tracked in [github issue]) @@ -22,4 +22,9 @@ In YugabyteDB, DML that run concurrently with a DDL may see one of the following 1. Operate with the old schema prior to the DDL. 2. Operate with the new schema after the DDL completes. 3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to retry such operations whenever possible. -4. + +Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [some specific DDL statements](../the-sql-language/statements/ddl_alter_table.md#alter-type-with-table-rewrite) can take a long time to execute, so it is important to be aware of this limitation in this case. + +## Concurrent DDL during a DDL operation + +TODO: add details From 98141ac5b90febde5bd38e9c13729d0ccb9f0c09 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 10:04:44 -0700 Subject: [PATCH 004/146] Update ddl-limitations.md --- docs/content/stable/api/ysql/ddl-limitations.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/stable/api/ysql/ddl-limitations.md b/docs/content/stable/api/ysql/ddl-limitations.md index 67e7a1259abd..698b49e1ace9 100644 --- a/docs/content/stable/api/ysql/ddl-limitations.md +++ b/docs/content/stable/api/ysql/ddl-limitations.md @@ -23,7 +23,7 @@ In YugabyteDB, DML that run concurrently with a DDL may see one of the following 2. Operate with the new schema after the DDL completes. 3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to retry such operations whenever possible. -Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [some specific DDL statements](../the-sql-language/statements/ddl_alter_table.md#alter-type-with-table-rewrite) can take a long time to execute, so it is important to be aware of this limitation in this case. +Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table.md#alter-type-with-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements during the `ALTER TABLE` as the effect of the DML may not be reflected in the copied table after the DDL is complete. ## Concurrent DDL during a DDL operation From bc873fee77fe5335d07e5cda60728325cc38872a Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 10:09:27 -0700 Subject: [PATCH 005/146] Update ddl_alter_table.md --- .../api/ysql/the-sql-language/statements/ddl_alter_table.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index fb1aa204cc21..054583f92370 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -46,6 +46,12 @@ These variants are useful only when at least one other table inherits `t`. 
But a ## Semantics +{{< note >}} + +Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table to be made (similar to PostgreSQL) and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". ALTER TABLE statements that involve a table rewrite are called out specifically in [this section](#alter-type-with-table-rewrite). Note that, in addition to the longer execution time, it is also note safe to execute concurrent DML during a table rewrite. For more details and possible workarounds, see [../../ddl-limitations.md] + +{{< /note >}} + ### *alter_table_action* Specify one of the following actions. From 8e20677808b42b9ba16fa45f496cf4deaed6e2c7 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 10:18:38 -0700 Subject: [PATCH 006/146] Update ddl_alter_table.md --- .../statements/ddl_alter_table.md | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index 054583f92370..9e23ea778d2f 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -48,9 +48,7 @@ These variants are useful only when at least one other table inherits `t`. But a {{< note >}} -Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table to be made (similar to PostgreSQL) and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". ALTER TABLE statements that involve a table rewrite are called out specifically in [this section](#alter-type-with-table-rewrite). Note that, in addition to the longer execution time, it is also note safe to execute concurrent DML during a table rewrite. For more details and possible workarounds, see [../../ddl-limitations.md] - -{{< /note >}} +Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table to be made (similar to PostgreSQL) and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". . Note that, in addition to the longer execution time, it is also note safe to execute concurrent DML during a table rewrite. For more details and possible workarounds, see [../../ddl-limitations.md]. ALTER TABLE statements that involve a table rewrite are called out specifically in the section [#alter-table-rewrite-list]. ### *alter_table_action* @@ -60,6 +58,8 @@ Specify one of the following actions. Add the specified column with the specified data type and constraint. +TODO: add details on volatile column + #### RENAME TO *table_name* Rename the table to the specified table name. @@ -253,6 +253,7 @@ Change the type of an existing column. 
The following semantics apply: If the change doesn't require data on disk to change, concurrent DMLs to the table can be safely performed as shown in the following example: +TODO: add specific list ```sql CREATE TABLE test (id BIGSERIAL PRIMARY KEY, a VARCHAR(50)); @@ -265,6 +266,8 @@ If the change requires data on disk to change, a full table rewrite will be done - The action creates an entirely new table under the hood, and concurrent DMLs may not be reflected in the new table which can lead to correctness issues. - The operation preserves split properties for hash-partitioned tables and hash-partitioned secondary indexes. For range-partitioned tables (and secondary indexes), split properties are only preserved if the altered column is not part of the table's (or secondary index's) range key. +TODO: add specific list + Following is an example of alter type with table rewrite: ```sql @@ -374,6 +377,16 @@ Constraints marked as `INITIALLY IMMEDIATE` will be checked after every row with Constraints marked as `INITIALLY DEFERRED` will be checked at the end of the transaction. +## Alter table operations that involve a table rewrite. + +The following alter table operations involve making a full copy of the underlying table and associated index tables. +1. Changing the primary key of a table + 1. [#add-primary-key] + 2. [#drop-primary-key] +2. Adding a column with a (volatile) default value + 1. TODO: links +4. Changing the type of a column + 1. TODO: links to above ## See also - [`CREATE TABLE`](../ddl_create_table) From d18e8f812e6bac740e128b0d6a99ffc91e558347 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 10:25:37 -0700 Subject: [PATCH 007/146] Update ddl-limitations.md --- docs/content/stable/api/ysql/ddl-limitations.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/stable/api/ysql/ddl-limitations.md b/docs/content/stable/api/ysql/ddl-limitations.md index 698b49e1ace9..01433d2cd642 100644 --- a/docs/content/stable/api/ysql/ddl-limitations.md +++ b/docs/content/stable/api/ysql/ddl-limitations.md @@ -23,7 +23,7 @@ In YugabyteDB, DML that run concurrently with a DDL may see one of the following 2. Operate with the new schema after the DDL completes. 3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to retry such operations whenever possible. -Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table.md#alter-type-with-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements during the `ALTER TABLE` as the effect of the DML may not be reflected in the copied table after the DDL is complete. +Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table.md/#alter-type-with-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements during the `ALTER TABLE` as the effect of the DML may not be reflected in the copied table after the DDL is complete. 
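For illustration, the following sketch shows two statements of the kind this paragraph describes. The `orders` table and its columns are hypothetical and the snippet is only a sketch of the pattern, not content from this series; both ALTER statements copy the whole table under the hood, so concurrent DML against `orders` should be avoided while they run.

```sql
-- Hypothetical table, used only to illustrate rewrite-triggering ALTERs.
CREATE TABLE orders (id bigint, total numeric(10,2));

-- Adding a primary key copies the table (rows are re-clustered on the new key).
ALTER TABLE orders ADD PRIMARY KEY (id);

-- Changing a column to an incompatible on-disk type also copies the table.
ALTER TABLE orders ALTER COLUMN total TYPE text USING total::text;
```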
## Concurrent DDL during a DDL operation From a7320cfb71fdeb59de6fb4ba7b24781277314221 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 10:27:50 -0700 Subject: [PATCH 008/146] Update and rename ddl-limitations.md to ddl-behavior.md --- .../stable/api/ysql/{ddl-limitations.md => ddl-behavior.md} | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename docs/content/stable/api/ysql/{ddl-limitations.md => ddl-behavior.md} (79%) diff --git a/docs/content/stable/api/ysql/ddl-limitations.md b/docs/content/stable/api/ysql/ddl-behavior.md similarity index 79% rename from docs/content/stable/api/ysql/ddl-limitations.md rename to docs/content/stable/api/ysql/ddl-behavior.md index 01433d2cd642..2d57229514d8 100644 --- a/docs/content/stable/api/ysql/ddl-limitations.md +++ b/docs/content/stable/api/ysql/ddl-behavior.md @@ -5,7 +5,7 @@ linkTitle: Behavior of DDL statements description: Explains how the behavior of DDL statements works in YugabyteDB YSQL and documents differences from Postgres behavior. [YSQL]. menu: stable_api: - identifier: ddl-limitations + identifier: ddl-behavior parent: api-ysql weight: 20 type: docs @@ -23,7 +23,7 @@ In YugabyteDB, DML that run concurrently with a DDL may see one of the following 2. Operate with the new schema after the DDL completes. 3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to retry such operations whenever possible. -Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table.md/#alter-type-with-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements during the `ALTER TABLE` as the effect of the DML may not be reflected in the copied table after the DDL is complete. +Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](/the-sql-language/statements/ddl_alter_table.md/#alter-type-with-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements during the `ALTER TABLE` as the effect of the DML may not be reflected in the copied table after the DDL is complete. ## Concurrent DDL during a DDL operation From 8cab64b4a774f49dc3e38daa4f118d469fc748d1 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 10:29:43 -0700 Subject: [PATCH 009/146] Update ddl_alter_table.md --- .../api/ysql/the-sql-language/statements/ddl_alter_table.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index 9e23ea778d2f..ee0fc8d9c0eb 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -387,6 +387,7 @@ The following alter table operations involve making a full copy of the underlyin 1. TODO: links 4. Changing the type of a column 1. 
TODO: links to above + ## See also - [`CREATE TABLE`](../ddl_create_table) From 5e57d57fbe7ed38bf88c3dc7ef6c1801dd25afce Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 10:34:07 -0700 Subject: [PATCH 010/146] Update ddl_alter_table.md --- .../api/ysql/the-sql-language/statements/ddl_alter_table.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index ee0fc8d9c0eb..3df56c1e4e22 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -46,9 +46,9 @@ These variants are useful only when at least one other table inherits `t`. But a ## Semantics -{{< note >}} - +{{< note title="Table rewrites during an ALTER TABLE" >}} Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table to be made (similar to PostgreSQL) and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". . Note that, in addition to the longer execution time, it is also note safe to execute concurrent DML during a table rewrite. For more details and possible workarounds, see [../../ddl-limitations.md]. ALTER TABLE statements that involve a table rewrite are called out specifically in the section [#alter-table-rewrite-list]. +{{< /note >}} ### *alter_table_action* From ba7204849f1687ebbe1763c32ccd50c0c9846956 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 10:43:11 -0700 Subject: [PATCH 011/146] Update ddl-behavior.md --- docs/content/stable/api/ysql/ddl-behavior.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/stable/api/ysql/ddl-behavior.md b/docs/content/stable/api/ysql/ddl-behavior.md index 2d57229514d8..d5695c6bfd72 100644 --- a/docs/content/stable/api/ysql/ddl-behavior.md +++ b/docs/content/stable/api/ysql/ddl-behavior.md @@ -23,7 +23,7 @@ In YugabyteDB, DML that run concurrently with a DDL may see one of the following 2. Operate with the new schema after the DDL completes. 3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to retry such operations whenever possible. -Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](/the-sql-language/statements/ddl_alter_table.md/#alter-type-with-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements during the `ALTER TABLE` as the effect of the DML may not be reflected in the copied table after the DDL is complete. +Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table/#alter-table-operations-that-involve-a-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements during the `ALTER TABLE` as the effect of the DML may not be reflected in the copied table after the DDL is complete. 
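One of the operations that requires such a full copy is adding a column whose `DEFAULT` is a volatile expression such as `gen_random_uuid()` (covered in a later commit in this series). As a possible workaround, shown here only as a sketch with a hypothetical `events` table and `id`/`request_id` columns, the column can be added without the volatile default and backfilled in batches, which avoids the copy:

```sql
-- Instead of a single rewrite:
--   ALTER TABLE events ADD COLUMN request_id uuid DEFAULT gen_random_uuid();

-- 1. Add the column with no default: a metadata-only change, no table copy.
ALTER TABLE events ADD COLUMN request_id uuid;

-- 2. Backfill existing rows in batches sized for the workload; repeat until 0 rows are updated.
UPDATE events
SET request_id = gen_random_uuid()
WHERE id IN (SELECT id FROM events WHERE request_id IS NULL LIMIT 10000);

-- 3. Optionally set the default so that new rows get a value automatically.
ALTER TABLE events ALTER COLUMN request_id SET DEFAULT gen_random_uuid();
```

The trade-off is that existing rows briefly have a NULL `request_id`; if that is not acceptable, the single-statement rewrite (run at a quiet time, with DML paused) may be preferable.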
## Concurrent DDL during a DDL operation From 5de2e8f1641b16f01e03b20908aed66b24ec4bdc Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 30 Apr 2025 10:43:49 -0700 Subject: [PATCH 012/146] Update ddl_alter_table.md --- .../api/ysql/the-sql-language/statements/ddl_alter_table.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index 3df56c1e4e22..914293eb0981 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -47,7 +47,7 @@ These variants are useful only when at least one other table inherits `t`. But a ## Semantics {{< note title="Table rewrites during an ALTER TABLE" >}} -Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table to be made (similar to PostgreSQL) and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". . Note that, in addition to the longer execution time, it is also note safe to execute concurrent DML during a table rewrite. For more details and possible workarounds, see [../../ddl-limitations.md]. ALTER TABLE statements that involve a table rewrite are called out specifically in the section [#alter-table-rewrite-list]. +Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table to be made (similar to PostgreSQL) and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". . Note that, in addition to the longer execution time, it is also note safe to execute concurrent DML during a table rewrite. For more details and possible workarounds, see [../../ddl-limitations.md]. ALTER TABLE statements that involve a table rewrite are called out specifically in the section [#alter-table-operations-that-involve-a-table-rewrite]. {{< /note >}} ### *alter_table_action* From 50a7912cc8b440cff4d0ae499da2727ae6de0397 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Mon, 5 May 2025 17:47:18 -0700 Subject: [PATCH 013/146] Update ddl_alter_table.md --- .../statements/ddl_alter_table.md | 139 ++++++++++-------- 1 file changed, 80 insertions(+), 59 deletions(-) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index 914293eb0981..ec6124a1c2fa 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -46,9 +46,6 @@ These variants are useful only when at least one other table inherits `t`. But a ## Semantics -{{< note title="Table rewrites during an ALTER TABLE" >}} -Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table to be made (similar to PostgreSQL) and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". . 
Note that, in addition to the longer execution time, it is also note safe to execute concurrent DML during a table rewrite. For more details and possible workarounds, see [../../ddl-limitations.md]. ALTER TABLE statements that involve a table rewrite are called out specifically in the section [#alter-table-operations-that-involve-a-table-rewrite]. -{{< /note >}} ### *alter_table_action* @@ -58,7 +55,19 @@ Specify one of the following actions. Add the specified column with the specified data type and constraint. -TODO: add details on volatile column +##### Table rewrites + +ADD COLUMN … DEFAULT statements require a [table rewrite](#alter-table-operations-that-involve-a-table-rewrite) when the default value uses *volatile* functions. [Volatile functions](https://www.postgresql.org/docs/current/xfunc-volatility.html#XFUNC-VOLATILITY) can return different results for different rows, so a table rewrite is required to fill in values for existing rows. For non-volatile functions, no table rewrite is required. + +Examples of volatile expressions + +- ALTER TABLE … ADD COLUMN v1 INT DEFAULT random()  +- ALTER TABLE .. ADD COLUMN v2 UUID DEFAULT gen_random_uuid(); + +Examples of non-volatile expressions (no table rewrite)  + +- ALTER TABLE … ADD COLUMN nv1 INT DEFAULT 5 +- ALTER TABLE … ADD COLUMN nv2 timestamp DEFAULT now() -- uses the same timestamp now() for all existing rows #### RENAME TO *table_name* @@ -228,20 +237,16 @@ It quietly succeeds. Now `\d children` shows that the foreign key constraint `ch Add the specified constraint to the table. +##### Table rewrites + +Adding a `PRIMARY KEY` constraint results in a full table rewrite of the main table and all associated indexes, which can be a potentially expensive operation. For more details about [table rewrites, see this section](#alter-table-operations-that-involve-a-table-rewrite). + +The reason for the table rewrite is the clustered storage by primary key that YugabyteDB uses to store rows and indexes. Tables without a `PRIMARY KEY` have a hidden one underneath and rows are stored clustered on it. These rows need to be rewritten to use the newly added primary key column. -{{< warning >}} -Adding a `PRIMARY KEY` constraint results in a full table rewrite and full rewrite of all indexes associated with the table. -This happens because of the clustered storage by primary key that YugabyteDB uses to store rows and indexes. -Tables without a `PRIMARY KEY` have a hidden one underneath and rows are stored clustered on it. The secondary indexes of the table -link to this hidden `PRIMARY KEY`. -While the tables and indexes are being rewritten, you may lose any modifications made to the table. -For reference, the same semantics as [Alter type with table rewrite](#alter-type-with-table-rewrite) apply. -{{< /warning >}} #### ALTER [ COLUMN ] *column_name* [ SET DATA ] TYPE *data_type* [ COLLATE *collation* ] [ USING *expression* ] Change the type of an existing column. The following semantics apply: -- If data on disk is required to change, a full table rewrite is needed. - If the optional `COLLATE` clause is not specified, the default collation for the new column type will be used. - If the optional `USING` clause is not provided, the default conversion for the new column value will be the same as an assignment cast from the old type to the new type. - A `USING` clause must be included when there is no implicit assignment cast available from the old type to the new type. @@ -249,50 +254,51 @@ Change the type of an existing column. 
The following semantics apply: - Alter type is not supported for tables with rules (limitation inherited from PostgreSQL). - Alter type is not supported for tables with CDC streams, or xCluster replication when it requires data on disk to change. See [#16625](https://github.com/yugabyte/yugabyte-db/issues/16625). -##### Alter type without table-rewrite +##### Table rewrites -If the change doesn't require data on disk to change, concurrent DMLs to the table can be safely performed as shown in the following example: +Altering a column’s type requires a [full table rewrite](#alter-table-operations-that-involve-a-table-rewrite) of the table and any indexes that contain this column when the underlying storage format changes or if the data changes. -TODO: add specific list +The following type changes are common cases where we require a table rewrite: -```sql -CREATE TABLE test (id BIGSERIAL PRIMARY KEY, a VARCHAR(50)); -ALTER TABLE test ALTER COLUMN a TYPE VARCHAR(51); -``` +| From | To | Reason for table rewrite | +| ------------ | -------------- | --------------------------------------------------------------------- | +| INTEGER | TEXT | Different storage formats. | +| TEXT | INTEGER | Needs parsing and validation. | +| JSON | JSONB | Different internal representation. | +| UUID | TEXT | Different binary format. | +| BYTEA | TEXT | Different encoding. | +| TIMESTAMP | DATE | Loses time info; storage changes. | +| BOOLEAN | INTEGER | Different sizes and encoding. | +| REAL | NUMERIC | Different precision and format. | +| NUMERIC(p,s) | NUMERIC(p2,s2) | Requires data changes if scale is changed or if precision is smaller. | -##### Alter type with table rewrite +The following type changes do not require a table rewrite: -If the change requires data on disk to change, a full table rewrite will be done and the following semantics apply: -- The action creates an entirely new table under the hood, and concurrent DMLs may not be reflected in the new table which can lead to correctness issues. -- The operation preserves split properties for hash-partitioned tables and hash-partitioned secondary indexes. For range-partitioned tables (and secondary indexes), split properties are only preserved if the altered column is not part of the table's (or secondary index's) range key. +| From | To | Notes | +| ------------ | ------------------ | ------------------------------------------------------ | +| VARCHAR(n) | VARCHAR(m) (m > n) | Length increase is compatible. | +| VARCHAR(n) | TEXT | Always compatible. | +| SERIAL | INTEGER | Underlying type is INTEGER; usually OK. | +| NUMERIC(p,s) | NUMERIC(p2,s2) | If new precision is larger and scale remains the same. | +| CHAR(n) | CHAR(m) (m > n) | PG stores it as padded TEXT, so often fine. | +| Domain types | Their base type | Compatible, unless additional constraints exist. | -TODO: add specific list +Altering a column with a (non-trivial) USING clause always requires a rewrite. -Following is an example of alter type with table rewrite: +The table rewrite operation preserves split properties for hash-partitioned tables and hash-partitioned secondary indexes. For range-partitioned tables (and secondary indexes), split properties are only preserved if the altered column is not part of the table's (or secondary index's) range key. 
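As a rough way to anticipate whether a given type change may avoid a rewrite, you can check in the catalog whether the cast between the two types is binary coercible. This is only a PostgreSQL-level heuristic (the type names below are just examples, and typmod changes such as shrinking a `NUMERIC` precision can still force a rewrite even when a compatible cast exists):

```sql
-- castmethod = 'b' (binary coercible) generally means the stored data does not change;
-- castmethod = 'f' (conversion function) generally implies a rewrite.
SELECT castcontext, castmethod
FROM pg_cast
WHERE castsource = 'varchar'::regtype
  AND casttarget = 'text'::regtype;
```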
-```sql -CREATE TABLE test (id BIGSERIAL PRIMARY KEY, a VARCHAR(50)); -INSERT INTO test(a) VALUES ('1234555'); -ALTER TABLE test ALTER COLUMN a TYPE VARCHAR(40); --- try to change type to BIGINT -ALTER TABLE test ALTER COLUMN a TYPE BIGINT; -ERROR: column "a" cannot be cast automatically to type bigint -HINT: You might need to specify "USING a::bigint". --- use USING clause to cast the values -ALTER TABLE test ALTER COLUMN a SET DATA TYPE BIGINT USING a::BIGINT; -``` +Examples of ALTER TYPE that cause a table rewrite -Another option is to use a custom function as follows: +- ALTER TABLE foo + ALTER COLUMN foo_timestamp TYPE timestamp with time zone + USING + timestamp with time zone 'epoch' + foo_timestamp * interval '1 second'; +- ALTER TABLE t ALTER COLUMN t_num1 TYPE NUMERIC(9,5) -- from NUMERIC(6,1); +- ALTER TABLE test ALTER COLUMN a SET DATA TYPE BIGINT USING a::BIGINT; -- from INT -```sql -CREATE OR REPLACE FUNCTION myfunc(text) RETURNS BIGINT - AS 'select $1::BIGINT;' - LANGUAGE SQL - IMMUTABLE - RETURNS NULL ON NULL INPUT; +Examples of ALTER TYPE that do not cause a table rewrite -ALTER TABLE test ALTER COLUMN a SET DATA TYPE BIGINT USING myfunc(a); -``` +- ALTER TABLE test ALTER COLUMN a TYPE VARCHAR(51); -- from VARCHAR(50) #### DROP CONSTRAINT *constraint_name* [ RESTRICT | CASCADE ] @@ -302,12 +308,9 @@ Drop the named constraint from the table. - `RESTRICT` — Remove only the specified constraint. - `CASCADE` — Remove the specified constraint and any dependent objects. -{{< warning >}} -Dropping the `PRIMARY KEY` constraint results in a full table rewrite and full rewrite of all indexes associated with the table. -This happens because of the clustered storage by primary key that YugabyteDB uses to store rows and indexes. -While the tables and indexes are being rewritten, you may lose any modifications made to the table. -For reference, the same semantics as [Alter type with table rewrite](#alter-type-with-table-rewrite) apply. -{{< /warning >}} +##### Table rewrites + +Dropping the `PRIMARY KEY` constraint results in a full table rewrite and full rewrite of all indexes associated with the table, which is a potentially expensive operation. More details and common limitations of table rewrites [are described in this section](#alter-table-operations-that-involve-a-table-rewrite). #### RENAME [ COLUMN ] *column_name* TO *column_name* @@ -377,16 +380,34 @@ Constraints marked as `INITIALLY IMMEDIATE` will be checked after every row with Constraints marked as `INITIALLY DEFERRED` will be checked at the end of the transaction. -## Alter table operations that involve a table rewrite. +## Alter table operations that involve a table rewrite + +Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table (and associated index tables, in some cases) to be made and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". This behavior is [similar to PostgreSQL](https://www.crunchydata.com/blog/when-does-alter-table-require-a-rewrite), though the exact scenarios when a rewrite is triggered may differ between PostgreSQL and YugabyteDB. + +It is not safe to execute concurrent DML on the table during a table rewrite because the results of any concurrent DML are not guaranteed to be reflected in the copy of the table being made. 
This behavior is also similar to PostgreSQL, where a table lock typically excludes concurrent DML on the table during the rewrite. + +When such expensive rewrites have to be performed, it is recommended to combine them into a single ALTER TABLE as shown below to avoid multiple expensive rewrites. + +``` +ALTER TABLE t ADD COLUMN c6 UUID DEFAULT gen_random_uuid(), ALTER COLUMN c8 TYPE TEXT +``` + + ALTER TABLE statements that involve a table rewrite are called out specifically in the following sections. The following alter table operations involve making a full copy of the underlying table and associated index tables. 1. Changing the primary key of a table - 1. [#add-primary-key] - 2. [#drop-primary-key] -2. Adding a column with a (volatile) default value - 1. TODO: links -4. Changing the type of a column - 1. TODO: links to above + 2. [#add-primary-key] + 3. [#drop-primary-key] +4. Adding a column with a (volatile) default value + 5. TODO: links +6. Changing the type of a column + 7. TODO: links to above + 8. + +The following alter table operations involve making a full copy of the underlying table (and possibly associated index tables). +1. [Adding](#add-alter-table-constraint-constraints) or [dropping](#drop-constraint-constraint-name-restrict-cascade) the primary key of a table. +2. [Adding a column with a (volatile) default value](#add-column-if-not-exists-column-name-data-type-constraint-constraints). +4. [Changing the type of a column](#alter-column-column-name-set-data-type-data-type-collate-collation-using-expression). ## See also From 9bed5f3ec6c68862c54835642e54a2da12d1506d Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Mon, 5 May 2025 18:14:00 -0700 Subject: [PATCH 014/146] Update ddl-behavior.md --- docs/content/stable/api/ysql/ddl-behavior.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/docs/content/stable/api/ysql/ddl-behavior.md b/docs/content/stable/api/ysql/ddl-behavior.md index d5695c6bfd72..1504cb85ac9d 100644 --- a/docs/content/stable/api/ysql/ddl-behavior.md +++ b/docs/content/stable/api/ysql/ddl-behavior.md @@ -18,13 +18,14 @@ This section describes how DDL statements work in YSQL and documents the differe In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE
<table> .. ADD COLUMN` DDL statement may add a new column while a `SELECT * from <table>
` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing (support for similar behavior in YugabyteDB is being tracked in [github issue]) -In YugabyteDB, DML that run concurrently with a DDL may see one of the following results: -1. Operate with the old schema prior to the DDL. -2. Operate with the new schema after the DDL completes. -3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to retry such operations whenever possible. +In YugabyteDB, when a DDL modifies the schema of tables that are accessed by concurrent DML statements, the DML statement may +1. Operate with the old schema prior to the DDL, or +2. Operate with the new schema after the DDL completes, or +3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to [retry such operations](https://www.yugabyte.com/blog/retry-mechanism-spring-boot-app/) whenever possible. -Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table/#alter-table-operations-that-involve-a-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements during the `ALTER TABLE` as the effect of the DML may not be reflected in the copied table after the DDL is complete. +Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table/#alter-table-operations-that-involve-a-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements on the table being modified by the `ALTER TABLE` as the effect of such concurrent DML may not be reflected in the table copy. ## Concurrent DDL during a DDL operation -TODO: add details +DDL statements that affect entities in different databases can be run concurrently. However, you cannot concurrently execute two DDL statements that affect entities in the same database. DDL that relate to shared objects like roles, tablespaces are considered as affecting all databases in the cluster, so they cannot be run concurrently with any per-database DDL. + From c085f7e8bad8c4eaea2b62f4e15accdb8491c558 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Mon, 5 May 2025 18:14:50 -0700 Subject: [PATCH 015/146] Update ddl_alter_table.md --- .../the-sql-language/statements/ddl_alter_table.md | 13 ------------- 1 file changed, 13 deletions(-) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index ec6124a1c2fa..1db470246aca 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -46,7 +46,6 @@ These variants are useful only when at least one other table inherits `t`. But a ## Semantics - ### *alter_table_action* Specify one of the following actions. 
@@ -392,18 +391,6 @@ When such expensive rewrites have to be performed, it is recommended to combine ALTER TABLE t ADD COLUMN c6 UUID DEFAULT gen_random_uuid(), ALTER COLUMN c8 TYPE TEXT ``` - ALTER TABLE statements that involve a table rewrite are called out specifically in the following sections. - -The following alter table operations involve making a full copy of the underlying table and associated index tables. -1. Changing the primary key of a table - 2. [#add-primary-key] - 3. [#drop-primary-key] -4. Adding a column with a (volatile) default value - 5. TODO: links -6. Changing the type of a column - 7. TODO: links to above - 8. - The following alter table operations involve making a full copy of the underlying table (and possibly associated index tables). 1. [Adding](#add-alter-table-constraint-constraints) or [dropping](#drop-constraint-constraint-name-restrict-cascade) the primary key of a table. 2. [Adding a column with a (volatile) default value](#add-column-if-not-exists-column-name-data-type-constraint-constraints). From ae7d99b08160ca610b33cc2d62b9b62d6859f30f Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Mon, 5 May 2025 18:21:00 -0700 Subject: [PATCH 016/146] Update ddl_alter_table.md --- .../api/ysql/the-sql-language/statements/ddl_alter_table.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index 1db470246aca..63fed114e236 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -271,7 +271,7 @@ The following type changes are common cases where we require a table rewrite: | REAL | NUMERIC | Different precision and format. | | NUMERIC(p,s) | NUMERIC(p2,s2) | Requires data changes if scale is changed or if precision is smaller. | -The following type changes do not require a table rewrite: +The following type changes do not require a rewrite when there is no associated index table on the column. When there is an associated index table on the column, a rewrite is performed on the index table alone but not on the main table. | From | To | Notes | | ------------ | ------------------ | ------------------------------------------------------ | From 123b0679e5c51563a5449562556fa7a62b1202c2 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Wed, 7 May 2025 16:22:10 -0700 Subject: [PATCH 017/146] Update ddl_alter_table.md --- .../api/ysql/the-sql-language/statements/ddl_alter_table.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index 63fed114e236..dc133c6144cf 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -56,7 +56,7 @@ Add the specified column with the specified data type and constraint. ##### Table rewrites -ADD COLUMN … DEFAULT statements require a [table rewrite](#alter-table-operations-that-involve-a-table-rewrite) when the default value uses *volatile* functions. [Volatile functions](https://www.postgresql.org/docs/current/xfunc-volatility.html#XFUNC-VOLATILITY) can return different results for different rows, so a table rewrite is required to fill in values for existing rows. 
For non-volatile functions, no table rewrite is required. +ADD COLUMN … DEFAULT statements require a [table rewrite](#alter-table-operations-that-involve-a-table-rewrite) when the default value is a *volatile* expression. [Volatile expressions](https://www.postgresql.org/docs/current/xfunc-volatility.html#XFUNC-VOLATILITY) can return different results for different rows, so a table rewrite is required to fill in values for existing rows. For non-volatile expressions, no table rewrite is required. Examples of volatile expressions From 7ddccc5e8008158adcccba1c59688dafb56caec8 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Thu, 8 May 2025 10:24:29 -0700 Subject: [PATCH 018/146] Update ddl-behavior.md --- docs/content/stable/api/ysql/ddl-behavior.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/content/stable/api/ysql/ddl-behavior.md b/docs/content/stable/api/ysql/ddl-behavior.md index 1504cb85ac9d..15fe6121d4e5 100644 --- a/docs/content/stable/api/ysql/ddl-behavior.md +++ b/docs/content/stable/api/ysql/ddl-behavior.md @@ -12,7 +12,7 @@ type: docs --- -This section describes how DDL statements work in YSQL and documents the difference in YugabyteDB behavior from PostgreSQL. +This section describes how DDL statements work in YSQL and documents the differences between YugabyteDB and PostgreSQL. ## Concurrent DML during a DDL operation @@ -27,5 +27,5 @@ Most DDL statements complete quickly, so this is typically not a significant iss ## Concurrent DDL during a DDL operation -DDL statements that affect entities in different databases can be run concurrently. However, you cannot concurrently execute two DDL statements that affect entities in the same database. DDL that relate to shared objects like roles, tablespaces are considered as affecting all databases in the cluster, so they cannot be run concurrently with any per-database DDL. +DDL statements that affect entities in different databases can be run concurrently. However, for DDL statements that impact the same database, it is recommended to execute them sequentially. DDL that relate to shared objects like roles, tablespaces are considered as affecting all databases in the cluster, so they should be run sequentially as well. From 1e896e1930bab5c45db88f4b161924d2bf8c298e Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Thu, 8 May 2025 10:27:56 -0700 Subject: [PATCH 019/146] Update ddl_alter_table.md --- .../api/ysql/the-sql-language/statements/ddl_alter_table.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index dc133c6144cf..9a4e0d8a091a 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -383,7 +383,7 @@ Constraints marked as `INITIALLY DEFERRED` will be checked at the end of the tra Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table (and associated index tables, in some cases) to be made and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". 
This behavior is [similar to PostgreSQL](https://www.crunchydata.com/blog/when-does-alter-table-require-a-rewrite), though the exact scenarios when a rewrite is triggered may differ between PostgreSQL and YugabyteDB. -It is not safe to execute concurrent DML on the table during a table rewrite because the results of any concurrent DML are not guaranteed to be reflected in the copy of the table being made. This behavior is also similar to PostgreSQL, where a table lock typically excludes concurrent DML on the table during the rewrite. +It is not safe to execute concurrent DML on the table during a table rewrite because the results of any concurrent DML are not guaranteed to be reflected in the copy of the table being made. This restriction is similar to PostgresSQL, which explicitly prevents concurrent DML during a table rewrite by acquiring an ACCESS EXCLUSIVE table lock. When such expensive rewrites have to be performed, it is recommended to combine them into a single ALTER TABLE as shown below to avoid multiple expensive rewrites. From 16ff94f18b6510d3fb7ba464fec4817547481848 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Thu, 8 May 2025 21:22:35 -0700 Subject: [PATCH 020/146] Apply suggestions from code review Address review suggestions. Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> --- docs/content/stable/api/ysql/ddl-behavior.md | 10 ++++++---- .../statements/ddl_alter_table.md | 19 ++++++++++--------- 2 files changed, 16 insertions(+), 13 deletions(-) diff --git a/docs/content/stable/api/ysql/ddl-behavior.md b/docs/content/stable/api/ysql/ddl-behavior.md index 15fe6121d4e5..d07692942eb6 100644 --- a/docs/content/stable/api/ysql/ddl-behavior.md +++ b/docs/content/stable/api/ysql/ddl-behavior.md @@ -16,16 +16,18 @@ This section describes how DDL statements work in YSQL and documents the differe ## Concurrent DML during a DDL operation -In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE
<table> .. ADD COLUMN` DDL statement may add a new column while a `SELECT * from <table>
` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing (support for similar behavior in YugabyteDB is being tracked in [github issue]) +In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE
<table> .. ADD COLUMN` DDL statement may add a new column while a `SELECT * from <table>
` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing. (Support for similar behavior in YugabyteDB is being tracked in [this github issue](https://github.com/yugabyte/yugabyte-db/issues/11571).) -In YugabyteDB, when a DDL modifies the schema of tables that are accessed by concurrent DML statements, the DML statement may +In YugabyteDB, when a DDL modifies the schema of tables that are accessed by concurrent DML statements, the DML statement may do one of the following: 1. Operate with the old schema prior to the DDL, or 2. Operate with the new schema after the DDL completes, or 3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to [retry such operations](https://www.yugabyte.com/blog/retry-mechanism-spring-boot-app/) whenever possible. -Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table/#alter-table-operations-that-involve-a-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements on the table being modified by the `ALTER TABLE` as the effect of such concurrent DML may not be reflected in the table copy. +Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table/#alter-table-operations-that-involve-a-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements on the table being modified by the `ALTER TABLE`, as the effect of such concurrent DML may not be reflected in the table copy. ## Concurrent DDL during a DDL operation -DDL statements that affect entities in different databases can be run concurrently. However, for DDL statements that impact the same database, it is recommended to execute them sequentially. DDL that relate to shared objects like roles, tablespaces are considered as affecting all databases in the cluster, so they should be run sequentially as well. +DDL statements that affect entities in different databases can be run concurrently. However, for DDL statements that impact the same database, it is recommended to execute them sequentially. + +DDL statements that relate to shared objects, such as roles or tablespaces, are considered as affecting all databases in the cluster, so they should also be run sequentially. diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index 9a4e0d8a091a..53b0511a5775 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -238,9 +238,9 @@ Add the specified constraint to the table. ##### Table rewrites -Adding a `PRIMARY KEY` constraint results in a full table rewrite of the main table and all associated indexes, which can be a potentially expensive operation. For more details about [table rewrites, see this section](#alter-table-operations-that-involve-a-table-rewrite). 
+Adding a `PRIMARY KEY` constraint results in a full table rewrite of the main table and all associated indexes, which can be a potentially expensive operation. For more details about table rewrites, see [Alter table operations that involve a table rewrite](#alter-table-operations-that-involve-a-table-rewrite). -The reason for the table rewrite is the clustered storage by primary key that YugabyteDB uses to store rows and indexes. Tables without a `PRIMARY KEY` have a hidden one underneath and rows are stored clustered on it. These rows need to be rewritten to use the newly added primary key column. +The table rewrite is needed because of how YugabyteDB stores rows and indexes. In YugabyteDB, data is distributed based on the primary key; when a table does not have an explicit primary key assigned, YugabyteDB automatically creates an internal row ID to use as the table's primary key. As a result, these rows need to be rewritten to use the newly added primary key column. For more information, refer to [Primary keys](../../../../../develop/data-modeling/primary-keys-ysql). #### ALTER [ COLUMN ] *column_name* [ SET DATA ] TYPE *data_type* [ COLLATE *collation* ] [ USING *expression* ] @@ -255,9 +255,9 @@ Change the type of an existing column. The following semantics apply: ##### Table rewrites -Altering a column’s type requires a [full table rewrite](#alter-table-operations-that-involve-a-table-rewrite) of the table and any indexes that contain this column when the underlying storage format changes or if the data changes. +Altering a column's type requires a [full table rewrite](#alter-table-operations-that-involve-a-table-rewrite) of the table, and any indexes that contain this column when the underlying storage format changes or if the data changes. -The following type changes are common cases where we require a table rewrite: +The following type changes commonly require a table rewrite: | From | To | Reason for table rewrite | | ------------ | -------------- | --------------------------------------------------------------------- | @@ -286,7 +286,7 @@ Altering a column with a (non-trivial) USING clause always requires a rewrite. The table rewrite operation preserves split properties for hash-partitioned tables and hash-partitioned secondary indexes. For range-partitioned tables (and secondary indexes), split properties are only preserved if the altered column is not part of the table's (or secondary index's) range key. -Examples of ALTER TYPE that cause a table rewrite +For example, the following ALTER TYPE statements would cause a table rewrite: - ALTER TABLE foo ALTER COLUMN foo_timestamp TYPE timestamp with time zone @@ -295,7 +295,7 @@ Examples of ALTER TYPE that cause a table rewrite - ALTER TABLE t ALTER COLUMN t_num1 TYPE NUMERIC(9,5) -- from NUMERIC(6,1); - ALTER TABLE test ALTER COLUMN a SET DATA TYPE BIGINT USING a::BIGINT; -- from INT -Examples of ALTER TYPE that do not cause a table rewrite +The following ALTER TYPE statement does not cause a table rewrite: - ALTER TABLE test ALTER COLUMN a TYPE VARCHAR(51); -- from VARCHAR(50) @@ -309,7 +309,7 @@ Drop the named constraint from the table. ##### Table rewrites -Dropping the `PRIMARY KEY` constraint results in a full table rewrite and full rewrite of all indexes associated with the table, which is a potentially expensive operation. More details and common limitations of table rewrites [are described in this section](#alter-table-operations-that-involve-a-table-rewrite). 
+Dropping the `PRIMARY KEY` constraint results in a full table rewrite and full rewrite of all indexes associated with the table, which is a potentially expensive operation. For more details and common limitations of table rewrites, refer to [Alter table operations that involve a table rewrite](#alter-table-operations-that-involve-a-table-rewrite). #### RENAME [ COLUMN ] *column_name* TO *column_name* @@ -385,13 +385,14 @@ Most ALTER TABLE statements only involve a schema modification and complete quic It is not safe to execute concurrent DML on the table during a table rewrite because the results of any concurrent DML are not guaranteed to be reflected in the copy of the table being made. This restriction is similar to PostgresSQL, which explicitly prevents concurrent DML during a table rewrite by acquiring an ACCESS EXCLUSIVE table lock. -When such expensive rewrites have to be performed, it is recommended to combine them into a single ALTER TABLE as shown below to avoid multiple expensive rewrites. +If you need to perform one of these expensive rewrites, it is recommended to combine them into a single ALTER TABLE statement to avoid multiple expensive rewrites. For example: ``` ALTER TABLE t ADD COLUMN c6 UUID DEFAULT gen_random_uuid(), ALTER COLUMN c8 TYPE TEXT ``` -The following alter table operations involve making a full copy of the underlying table (and possibly associated index tables). +The following ALTER TABLE operations involve making a full copy of the underlying table (and possibly associated index tables): + 1. [Adding](#add-alter-table-constraint-constraints) or [dropping](#drop-constraint-constraint-name-restrict-cascade) the primary key of a table. 2. [Adding a column with a (volatile) default value](#add-column-if-not-exists-column-name-data-type-constraint-constraints). 4. [Changing the type of a column](#alter-column-column-name-set-data-type-data-type-collate-collation-using-expression). From d8b296fe1ef1852bae2b8bcbc8143fa63bdcacb4 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Fri, 9 May 2025 16:30:51 -0700 Subject: [PATCH 021/146] Update docs/content/stable/api/ysql/ddl-behavior.md Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> --- docs/content/stable/api/ysql/ddl-behavior.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/stable/api/ysql/ddl-behavior.md b/docs/content/stable/api/ysql/ddl-behavior.md index d07692942eb6..a9afbce3a568 100644 --- a/docs/content/stable/api/ysql/ddl-behavior.md +++ b/docs/content/stable/api/ysql/ddl-behavior.md @@ -16,7 +16,7 @@ This section describes how DDL statements work in YSQL and documents the differe ## Concurrent DML during a DDL operation -In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE
<table> .. ADD COLUMN` DDL statement may add a new column while a `SELECT * from <table>` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing. (Support for similar behavior in YugabyteDB is being tracked in [this github issue](https://github.com/yugabyte/yugabyte-db/issues/11571).)
+In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE <table> .. ADD COLUMN` DDL statement may add a new column while a `SELECT * from <table>
` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing. (Support for similar behavior in YugabyteDB is being tracked in issue {{}}.) In YugabyteDB, when a DDL modifies the schema of tables that are accessed by concurrent DML statements, the DML statement may do one of the following: 1. Operate with the old schema prior to the DDL, or From 462ef9a622b968aab529a6b82bfbcda92f49e84f Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Mon, 28 Apr 2025 00:00:39 -0700 Subject: [PATCH 022/146] move location --- .../api/ysql/{ddl-behavior.md => ddl-behavior/_index.md} | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename docs/content/stable/api/ysql/{ddl-behavior.md => ddl-behavior/_index.md} (89%) diff --git a/docs/content/stable/api/ysql/ddl-behavior.md b/docs/content/stable/api/ysql/ddl-behavior/_index.md similarity index 89% rename from docs/content/stable/api/ysql/ddl-behavior.md rename to docs/content/stable/api/ysql/ddl-behavior/_index.md index a9afbce3a568..6019205de65c 100644 --- a/docs/content/stable/api/ysql/ddl-behavior.md +++ b/docs/content/stable/api/ysql/ddl-behavior/_index.md @@ -2,7 +2,7 @@ title: Behavior of DDL statements [YSQL] headerTitle: Behavior of DDL statements linkTitle: Behavior of DDL statements -description: Explains how the behavior of DDL statements works in YugabyteDB YSQL and documents differences from Postgres behavior. [YSQL]. +description: Explains specific aspects of DDL statement behavior in YugabyteDB, contrasting it with Postgres behavior. [YSQL]. menu: stable_api: identifier: ddl-behavior @@ -12,7 +12,7 @@ type: docs --- -This section describes how DDL statements work in YSQL and documents the differences between YugabyteDB and PostgreSQL. +This section describes specific concurrency-related aspects of DDL statement behavior in YugabyteDB. ## Concurrent DML during a DDL operation From b3eeacfdc1ac75a8963177b7d587f2f31a6bfc15 Mon Sep 17 00:00:00 2001 From: Bvsk Patnaik Date: Wed, 7 May 2025 19:10:02 +0000 Subject: [PATCH 023/146] [#22245] YSQL: Increase ysql_output_buffer_size to 1MiB MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Summary: ### Motivation YugabyteDB currently uses a 256KiB YSQL output buffer, compared to PostgreSQL’s default of 8KiB. A larger buffer allows YSQL to retry queries internally in the event of serialization failures. This is crucial because once YSQL sends partial results to the client, it cannot safely retry the query—doing so risks emitting duplicate results such as: ``` id ---- (0 rows) id ---- 1 (1 row) ``` In YSQL, while retries are best-effort in REPEATABLE READ, they are essential in READ COMMITTED to ensure serialization errors are not thrown to the user. Also, PostgreSQL is not subject to read restart errors because it is a single node system. In contrast, YSQL relies on retries to avoid throwing read restart errors. However, the current 256KiB buffer is often insufficient. Large SELECT queries commonly exceed this threshold. These same queries are also more likely to encounter restart errors due to read/write timestamp conflicts. As a result, increasing the output buffer size is a frequent operational change. Raise the default buffer size to 1MiB, a common recommendation, to reduce friction and improve out-of-the-box reliability. ### Impact Analysis **Q.** Do small queries incur increased memory usage? No. 
Although each backend allocates a 1MiB buffer space, the OS does not actually reserve this memory unless a large query requires it. This behavior can be observed using the following script to track proportional set size (PSS): ``` #!/bin/bash peak_pss=0 while true; do total_pss=0 for pid in $(ps -eo pid,comm | grep '[p]ostgres' | awk '{print $1}'); do pss=$(awk '/Pss:/ {total += $2} END {print total}' /proc/$pid/smaps 2>/dev/null) total_pss=$((total_pss + pss)) done if (( total_pss > peak_pss )); then peak_pss=$total_pss fi echo "Current PSS: ${total_pss} KB, Peak PSS: ${peak_pss} KB" sleep 1 done ``` Test Setup: ``` CREATE TABLE kv(k INT PRIMARY KEY, v INT); INSERT INTO kv SELECT i, i FROM GENERATE_SERIES(1, 100000) i; ``` `SELECT * FROM kv LIMIT 1000` → ~131 MiB PSS `SELECT * FROM kv` → ~132 MiB PSS This provides evidence that the memory usage is incremental and the cost of 1 MiB buffer size is not payed unless there is a query with a large output. **Q:** What about large queries? * With 256KiB buffer: PSS increase ~3MiB * With 1MiB buffer: PSS increase ~4MiB The incremental cost is acceptable. **Q:** How does this affect real-world workloads? Ran TPC-H (analytical workload) via BenchBase against a replication factor 1 cluster: * Idle PSS: ~120MiB * Peak PSS (with and without buffer change): ~220MiB The buffer size change had minimal impact on peak memory usage; other query-related allocations dominate. ### Caveats 1. Once allocated, buffer memory is not released until the connection closes. 2. First-row latency of large SELECT queries may increase due to buffering. This is an intentional tradeoff to reduce serialization failures. Jira: DB-11163 Test Plan: Jenkins Close: #22245 Backport-through: 2024.2 Reviewers: pjain, smishra, #db-approvers Reviewed By: pjain Subscribers: svc_phabricator, yql Differential Revision: https://phorge.dev.yugabyte.com/D43805 --- src/postgres/src/common/pg_yb_common.c | 4 ++-- src/yb/yql/pggate/pggate_flags.cc | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/postgres/src/common/pg_yb_common.c b/src/postgres/src/common/pg_yb_common.c index 1635723dd227..cf102100c723 100644 --- a/src/postgres/src/common/pg_yb_common.c +++ b/src/postgres/src/common/pg_yb_common.c @@ -252,9 +252,9 @@ YBGetYsqlOutputBufferSize() /* * Shouldn't reach here. But even if we do, instead of failing in a release - * build, we return 256KB as a default. + * build, we return 1 MiB as a default. */ - return 256 * 1024; + return 1024 * 1024; } diff --git a/src/yb/yql/pggate/pggate_flags.cc b/src/yb/yql/pggate/pggate_flags.cc index 6096a7b98fae..aa00a5a25708 100644 --- a/src/yb/yql/pggate/pggate_flags.cc +++ b/src/yb/yql/pggate/pggate_flags.cc @@ -65,7 +65,7 @@ DEFINE_test_flag(int64, inject_delay_between_prepare_ybctid_execute_batch_ybctid DEFINE_test_flag(bool, index_read_multiple_partitions, false, "Test flag used to simulate tablet spliting by joining tables' partitions."); -DEFINE_NON_RUNTIME_int32(ysql_output_buffer_size, 262144, +DEFINE_NON_RUNTIME_int32(ysql_output_buffer_size, 1024 * 1024, "Size of postgres-level output buffer, in bytes. 
" "While fetched data resides within this buffer and hasn't been flushed to client yet, " "we're free to transparently restart operation in case of restart read error."); From 030b072fce5ac7ab323776b9cc82e0afd9186787 Mon Sep 17 00:00:00 2001 From: Sergei Politov Date: Fri, 9 May 2025 19:45:06 +0300 Subject: [PATCH 024/146] [#27075] DocDB: Implement block cache for YbHnsw Summary: YbHnsw organizes data into blocks, which are loaded on demand. When new search operations require different blocks, previously loaded ones may need to be unloaded. This diff implements block cache to manage: - Dynamic loading of required blocks - Unloading of inactive blocks when new blocks are needed This ensures efficient memory usage while maintaining fast access to frequently used data. Jira: DB-16564 Test Plan: HnswTest.Cache HnswTest.ConcurrentCache Reviewers: arybochkin Reviewed By: arybochkin Subscribers: ybase Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43752 --- src/yb/hnsw/hnsw-test.cc | 82 ++++++++-- src/yb/hnsw/hnsw.cc | 161 +++++++++++--------- src/yb/hnsw/hnsw.h | 57 ++++--- src/yb/hnsw/hnsw_block_cache.cc | 256 +++++++++++++++++++++++++++++--- src/yb/hnsw/hnsw_block_cache.h | 60 ++++++-- src/yb/hnsw/types.h | 10 ++ src/yb/util/metrics.h | 4 - src/yb/util/metrics_fwd.h | 4 +- 8 files changed, 494 insertions(+), 140 deletions(-) diff --git a/src/yb/hnsw/hnsw-test.cc b/src/yb/hnsw/hnsw-test.cc index 37d82ba3a8b8..79b2a2cfed6f 100644 --- a/src/yb/hnsw/hnsw-test.cc +++ b/src/yb/hnsw/hnsw-test.cc @@ -14,14 +14,24 @@ #include "yb/hnsw/hnsw.h" #include "yb/hnsw/hnsw_block_cache.h" +#include "yb/rocksdb/cache.h" + +#include "yb/util/metrics.h" #include "yb/util/random_util.h" +#include "yb/util/size_literals.h" #include "yb/util/test_util.h" +#include "yb/util/thread_holder.h" #include "yb/util/tsan_util.h" #include "yb/vector_index/vector_index_fwd.h" #include "yb/vector_index/distance.h" #include "yb/vector_index/usearch_include_wrapper_internal.h" +using namespace std::chrono_literals; +using namespace yb::size_literals; + +METRIC_DEFINE_entity(table); + namespace yb::hnsw { using IndexImpl = unum::usearch::index_dense_gt; @@ -79,10 +89,14 @@ class YbHnswTest : public YBTest { } } - void VerifySearch(const Vector& query_vector, size_t max_results) { + void VerifySearch( + const Vector& query_vector, size_t max_results, YbHnswSearchContext* context = nullptr) { + if (!context) { + context = &context_; + } vector_index::VectorFilter filter = AcceptAllVectors(); auto usearch_results = index_.filtered_search(query_vector.data(), max_results, filter); - auto yb_hnsw_results = yb_hnsw_.Search(query_vector.data(), max_results, filter, context_); + auto yb_hnsw_results = yb_hnsw_.Search(query_vector.data(), max_results, filter, *context); ASSERT_EQ(usearch_results.count, yb_hnsw_results.size()); for (size_t j = 0; j != usearch_results.count; ++j) { std::decay_t expected( @@ -91,18 +105,26 @@ class YbHnswTest : public YBTest { } } - std::vector PrepareRandom(size_t num_vectors, size_t num_searches); + std::vector PrepareRandom(bool load, size_t num_vectors, size_t num_searches); Status InitYbHnsw(bool load); void TestPerf(); void TestSimple(bool load); + void TestRandom(bool load, size_t background_threads); size_t dimensions_ = 8; size_t max_vectors_ = 65536; std::mt19937_64 rng_{42}; unum::usearch::metric_punned_t metric_; IndexImpl index_; - BlockCachePtr block_cache_ = std::make_shared(*Env::Default()); + std::unique_ptr metric_registry_ = std::make_unique(); + MetricEntityPtr 
metric_entity_ = METRIC_ENTITY_table.Instantiate(metric_registry_.get(), "test"); + BlockCachePtr block_cache_ = std::make_shared( + *Env::Default(), + MemTracker::GetRootTracker()->FindOrCreateTracker(1_GB, "block_cache"), + metric_entity_, + 8_MB, + 4); YbHnsw yb_hnsw_; YbHnswSearchContext context_; }; @@ -144,10 +166,11 @@ TEST_F(YbHnswTest, Persistence) { TestSimple(/* load= */ true); } -std::vector YbHnswTest::PrepareRandom(size_t num_vectors, size_t num_searches) { +std::vector YbHnswTest::PrepareRandom( + bool load, size_t num_vectors, size_t num_searches) { EXPECT_LE(num_vectors, max_vectors_); InsertRandomVectors(num_vectors); - EXPECT_OK(InitYbHnsw(false)); + EXPECT_OK(InitYbHnsw(load)); std::vector query_vectors(num_searches); for (auto& vector : query_vectors) { @@ -156,16 +179,49 @@ std::vector YbHnswTest::PrepareRandom(size_t num_vectors, size_t num_sea return query_vectors; } -TEST_F(YbHnswTest, Random) { - constexpr size_t kNumVectors = 16384; +void YbHnswTest::TestRandom(bool load, size_t background_threads = 0) { + constexpr size_t kNumVectors = 65535; constexpr size_t kNumSearches = 1024; constexpr size_t kMaxResults = 20; - auto query_vectors = PrepareRandom(kNumVectors, kNumSearches); - - for (const auto& query_vector : query_vectors) { - ASSERT_NO_FATALS(VerifySearch(query_vector, kMaxResults)); + auto query_vectors = PrepareRandom(load, kNumVectors, kNumSearches); + + if (background_threads) { + ThreadHolder threads; + for (size_t i = 0; i < background_threads; ++i) { + threads.AddThread([this, &stop = threads.stop_flag(), &query_vectors] { + YbHnswSearchContext context; + while (!stop.load()) { + size_t index = RandomUniformInt(0, query_vectors.size() - 1); + ASSERT_NO_FATALS(VerifySearch(query_vectors[index], kMaxResults, &context)); + } + }); + } + threads.WaitAndStop(10s); + } else { + for (const auto& query_vector : query_vectors) { + ASSERT_NO_FATALS(VerifySearch(query_vector, kMaxResults)); + } } + + LOG(INFO) << "Hit: " << block_cache_->metrics().hit->value(); + LOG(INFO) << "Queries: " << block_cache_->metrics().query->value(); + LOG(INFO) << "Read bytes: " << block_cache_->metrics().read->value(); + LOG(INFO) << "Evicted bytes: " << block_cache_->metrics().evict->value(); + LOG(INFO) << "Added bytes: " << block_cache_->metrics().add->value(); + LOG(INFO) << "Removed bytes: " << block_cache_->metrics().remove->value(); +} + +TEST_F(YbHnswTest, Random) { + TestRandom(false); +} + +TEST_F(YbHnswTest, Cache) { + TestRandom(true); +} + +TEST_F(YbHnswTest, ConcurrentCache) { + TestRandom(true, 4); } void YbHnswTest::TestPerf() { @@ -176,7 +232,7 @@ void YbHnswTest::TestPerf() { max_vectors_ = num_vectors; - auto query_vectors = PrepareRandom(num_vectors, num_searches); + auto query_vectors = PrepareRandom(false, num_vectors, num_searches); YbHnswSearchContext context; vector_index::VectorFilter filter = AcceptAllVectors(); MonoTime start = MonoTime::Now(); diff --git a/src/yb/hnsw/hnsw.cc b/src/yb/hnsw/hnsw.cc index 744b504ef4c6..bbc571cbd724 100644 --- a/src/yb/hnsw/hnsw.cc +++ b/src/yb/hnsw/hnsw.cc @@ -21,6 +21,7 @@ #include "yb/util/cast.h" #include "yb/util/env.h" #include "yb/util/flags.h" +#include "yb/util/scope_exit.h" #include "yb/util/size_literals.h" using namespace yb::size_literals; @@ -90,15 +91,10 @@ class YbHnswBuilder { const std::string& path) : index_(index), block_cache_(block_cache), path_(path) {} - Result> Build() { + Result> Build() { header_.Init(index_); PrepareVectors(); - VLOG_WITH_FUNC(4) - << "Size: " << index_.size() << ", max 
level: " << header_.max_level << ", dimensions: " - << header_.dimensions << ", connectivity: " << header_.config.connectivity - << ", connectivity_base: " << header_.config.connectivity_base - << ", expansion_search: " << header_.config.expansion_search << ", layers: " - << AsString(header_.layers); + VLOG_WITH_FUNC(4) << "Size: " << index_.size() << ", header: " << header_.ToString(); auto tmp_path = path_ + ".tmp"; RETURN_NOT_OK(block_cache_.env().NewWritableFile(tmp_path, &out_)); @@ -113,10 +109,9 @@ class YbHnswBuilder { RETURN_NOT_OK(block_cache_.env().RenameFile(tmp_path, path_)); std::unique_ptr file; RETURN_NOT_OK(block_cache_.env().NewRandomAccessFile(path_, &file)); - auto file_block_cache = std::make_unique(std::move(file), &builder_); - auto result = file_block_cache.get(); - block_cache_.Register(std::move(file_block_cache)); - return std::pair(result, header_); + auto file_block_cache = std::make_unique( + block_cache_, std::move(file), &builder_); + return std::pair(std::move(file_block_cache), header_); } private: @@ -366,6 +361,11 @@ void Header::Init(const unum::usearch::index_dense_gt& i max_vectors_per_non_base_block = CalcMaxVectorsPerLayerBlock(max_block_size, config.connectivity); } +YbHnsw::YbHnsw(Metric& metric) : metric_(metric) { +} + +YbHnsw::~YbHnsw() = default; + Status YbHnsw::Import( const unum::usearch::index_dense_gt& index, const std::string& path, BlockCachePtr block_cache) { @@ -379,17 +379,18 @@ Status YbHnsw::Init(const std::string& path, BlockCachePtr block_cache) { block_cache_ = std::move(block_cache); std::unique_ptr file; RETURN_NOT_OK(block_cache_->env().NewRandomAccessFile(path, &file)); - auto file_block_cache = std::make_unique(std::move(file)); - header_ = VERIFY_RESULT(file_block_cache->Load()); - file_block_cache_ = file_block_cache.get(); - block_cache_->Register(std::move(file_block_cache)); + file_block_cache_ = std::make_unique(*block_cache_, std::move(file)); + header_ = VERIFY_RESULT(file_block_cache_->Load()); return Status::OK(); } YbHnsw::SearchResult YbHnsw::Search( const std::byte* query_vector, size_t max_results, const vector_index::VectorFilter& filter, YbHnswSearchContext& context) const { - auto [best_vector, best_dist] = SearchInNonBaseLayers(query_vector); + context.search_cache.Bind(header_, *file_block_cache_); + auto se = ScopeExit([&context] { context.search_cache.Release(); }); + auto [best_vector, best_dist] = SearchInNonBaseLayers( + query_vector, context.search_cache); SearchInBaseLayer(query_vector, best_vector, best_dist, max_results, filter, context); return MakeResult(max_results, context); } @@ -402,23 +403,23 @@ YbHnsw::SearchResult YbHnsw::MakeResult(size_t max_results, YbHnswSearchContext& SearchResult result; result.reserve(top.size()); for (auto [distance, vector] : top) { - result.emplace_back(GetVectorData(vector), distance); + result.emplace_back(context.search_cache.GetVectorData(vector), distance); } return result; } std::pair YbHnsw::SearchInNonBaseLayers( - const std::byte* query_vector) const { + const std::byte* query_vector, SearchCache& cache) const { auto best_vector = header_.entry; - auto best_dist = Distance(query_vector, best_vector); + auto best_dist = Distance(query_vector, best_vector, cache); VLOG_WITH_FUNC(4) << "best_vector: " << best_vector << ", best_dist: " << best_dist; for (auto level = header_.max_level; level > 0;) { auto updated = false; VLOG_WITH_FUNC(4) << "level: " << level << ", best_vector: " << best_vector << ", best_dist: " << best_dist; - for (auto neighbor : 
GetNeighborsInNonBaseLayer(level, best_vector)) { - auto neighbor_dist = Distance(query_vector, neighbor); + for (auto neighbor : cache.GetNeighborsInNonBaseLayer(level, best_vector)) { + auto neighbor_dist = Distance(query_vector, neighbor, cache); VLOG_WITH_FUNC(4) << "level: " << level << ", neighbor: " << neighbor << ", neighbor_dist: " << neighbor_dist; @@ -447,6 +448,7 @@ void YbHnsw::SearchInBaseLayer( visited.clear(); auto& next = context.next; next.clear(); + auto& cache = context.search_cache; // We will visit at least entry vector and its neighbors. // So could use the following as initial capacity for visited. @@ -456,7 +458,7 @@ void YbHnsw::SearchInBaseLayer( auto extra_top_limit = std::max( header_.config.expansion_search, max_results) - max_results; next.push({best_dist, best_vector}); - if (!filter || filter(GetVectorData(best_vector))) { + if (!filter || filter(cache.GetVectorData(best_vector))) { top.push({best_dist, best_vector}); } visited.set(best_vector); @@ -468,19 +470,19 @@ void YbHnsw::SearchInBaseLayer( break; } next.pop(); - auto neighbors = GetNeighborsInBaseLayer(vector); + auto neighbors = cache.GetNeighborsInBaseLayer(vector); visited.reserve(visited.size() + std::ranges::size(neighbors)); for (auto neighbor : neighbors) { if (visited.set(neighbor)) { continue; } - auto neighbor_dist = Distance(query_vector, neighbor); + auto neighbor_dist = Distance(query_vector, neighbor, cache); if (top.size() < top_limit || extra_top.size() < extra_top_limit || neighbor_dist < best_dist) { next.push({neighbor_dist, neighbor}); - if (!filter || filter(GetVectorData(neighbor))) { + if (!filter || filter(cache.GetVectorData(neighbor))) { if (top.size() == top_limit) { auto extra_push = top.top().first; if (neighbor_dist < extra_push) { @@ -503,11 +505,56 @@ void YbHnsw::SearchInBaseLayer( } } -boost::iterator_range> YbHnsw::GetNeighborsInBaseLayer( - size_t vector) const { +YbHnsw::DistanceType YbHnsw::Distance(const std::byte* lhs, const std::byte* rhs) const { + using unum::usearch::byte_t; + return metric_(pointer_cast(lhs), pointer_cast(rhs)); +} + +YbHnsw::DistanceType YbHnsw::Distance( + const std::byte* lhs, size_t vector, SearchCache& cache) const { + return Distance(lhs, cache.CoordinatesPtr(vector)); +} + +boost::iterator_range> YbHnsw::MakeCoordinates( + const std::byte* ptr) const { + auto start = MisalignedPtr(ptr); + return boost::make_iterator_range(start, start + header_.dimensions); +} + +boost::iterator_range> YbHnsw::Coordinates( + size_t vector, SearchCache& cache) const { + return MakeCoordinates(cache.CoordinatesPtr(vector)); +} + +const std::byte* SearchCache::Data(size_t index) { + auto& block = blocks_[index]; + if (block) { + return block; + } + auto data = CHECK_RESULT(file_block_cache_->Take(index)); + used_blocks_.push_back(index); + return block = data; +} + +void SearchCache::Bind(std::reference_wrapper header, FileBlockCache& cache) { + DCHECK(used_blocks_.empty()); + header_ = &header.get(); + file_block_cache_ = &cache; + blocks_.resize(cache.size()); +} + +void SearchCache::Release() { + for (auto block : used_blocks_) { + blocks_[block] = nullptr; + file_block_cache_->Release(block); + } + used_blocks_.clear(); +} + +boost::iterator_range> SearchCache::GetNeighborsInBaseLayer( + size_t vector) { auto vector_data = VectorHeader(vector); - auto base_ptr = file_block_cache_->Data(*YB_MISALIGNED_PTR( - vector_data, VectorData, base_layer_neighbors_block)); + auto base_ptr = Data(*YB_MISALIGNED_PTR(vector_data, VectorData, 
base_layer_neighbors_block)); auto begin = *YB_MISALIGNED_PTR(vector_data, VectorData, base_layer_neighbors_begin); auto end = *YB_MISALIGNED_PTR(vector_data, VectorData, base_layer_neighbors_end); return boost::make_iterator_range( @@ -515,13 +562,19 @@ boost::iterator_range> YbHnsw::GetNeighborsInBaseL MisalignedPtr(base_ptr + end * kNeighborSize)); } -boost::iterator_range> YbHnsw::GetNeighborsInNonBaseLayer( - size_t level, size_t vector) const { - auto max_vectors_per_block = header_.max_vectors_per_non_base_block; +MisalignedPtr SearchCache::VectorHeader(size_t vector) { + return MisalignedPtr(BlockPtr( + header_->vector_data_block, header_->vector_data_amount_per_block, vector, + header_->vector_data_size)); +} + +boost::iterator_range> SearchCache::GetNeighborsInNonBaseLayer( + size_t level, size_t vector) { + auto max_vectors_per_block = header_->max_vectors_per_non_base_block; auto block_index = vector / max_vectors_per_block; vector %= max_vectors_per_block; - auto& layer = header_.layers[level]; - auto base_ptr = file_block_cache_->Data(layer.block + block_index); + auto& layer = header_->layers[level]; + auto base_ptr = Data(layer.block + block_index); auto finish = Load(base_ptr + vector * kNeighborsRefSize); auto start = vector > 0 ? Load(base_ptr + (vector - 1) * kNeighborsRefSize) : 0; @@ -540,54 +593,28 @@ boost::iterator_range> YbHnsw::GetNeighborsInNonBa return result; } -const std::byte* YbHnsw::BlockPtr( - size_t block, size_t entries_per_block, size_t entry, size_t entry_size) const { +const std::byte* SearchCache::BlockPtr( + size_t block, size_t entries_per_block, size_t entry, size_t entry_size) { block += entry / entries_per_block; entry %= entries_per_block; - return file_block_cache_->Data(block) + entry * entry_size; + return Data(block) + entry * entry_size; } -Slice YbHnsw::GetVectorDataSlice(size_t vector) const { +Slice SearchCache::GetVectorDataSlice(size_t vector) { auto vector_data = VectorHeader(vector); - auto base_ptr = file_block_cache_->Data( + auto base_ptr = Data( *YB_MISALIGNED_PTR(vector_data, VectorData, aux_data_block)); auto begin = *YB_MISALIGNED_PTR(vector_data, VectorData, aux_data_begin); auto end = *YB_MISALIGNED_PTR(vector_data, VectorData, aux_data_end); return Slice(base_ptr + begin, base_ptr + end); } -vector_index::VectorId YbHnsw::GetVectorData(size_t vector) const { +vector_index::VectorId SearchCache::GetVectorData(size_t vector) { return vector_index::TryFullyDecodeVectorId(GetVectorDataSlice(vector)); } -YbHnsw::DistanceType YbHnsw::Distance(const std::byte* lhs, const std::byte* rhs) const { - using unum::usearch::byte_t; - return metric_(pointer_cast(lhs), pointer_cast(rhs)); -} - -YbHnsw::DistanceType YbHnsw::Distance(const std::byte* lhs, size_t vector) const { - return Distance(lhs, CoordinatesPtr(vector)); -} - -MisalignedPtr YbHnsw::VectorHeader(size_t vector) const { - return MisalignedPtr(BlockPtr( - header_.vector_data_block, header_.vector_data_amount_per_block, vector, - header_.vector_data_size)); -} - -const std::byte* YbHnsw::CoordinatesPtr(size_t vector) const { +const std::byte* SearchCache::CoordinatesPtr(size_t vector) { return VectorHeader(vector).raw() + offsetof(VectorData, coordinates); } -boost::iterator_range> YbHnsw::MakeCoordinates( - const std::byte* ptr) const { - auto start = MisalignedPtr(ptr); - return boost::make_iterator_range(start, start + header_.dimensions); -} - -boost::iterator_range> YbHnsw::Coordinates( - size_t vector) const { - return MakeCoordinates(CoordinatesPtr(vector)); -} - } 
// namespace yb::hnsw diff --git a/src/yb/hnsw/hnsw.h b/src/yb/hnsw/hnsw.h index a1e0408332e9..40f16d46b9ee 100644 --- a/src/yb/hnsw/hnsw.h +++ b/src/yb/hnsw/hnsw.h @@ -38,6 +38,34 @@ namespace yb::hnsw { struct YbHnswVectorData; +// Provides access to a raw bytes data for a single search. +// Could be reused between searches using Bind/Release method. +class SearchCache { + public: + const std::byte* Data(size_t index); + + void Bind(std::reference_wrapper header, FileBlockCache& cache); + void Release(); + + boost::iterator_range> GetNeighborsInNonBaseLayer( + size_t level, size_t vector); + MisalignedPtr VectorHeader(size_t vector); + boost::iterator_range> GetNeighborsInBaseLayer( + size_t vector); + vector_index::VectorId GetVectorData(size_t vector); + const std::byte* CoordinatesPtr(size_t vector); + + private: + Slice GetVectorDataSlice(size_t vector); + const std::byte* BlockPtr( + size_t block, size_t entries_per_block, size_t entry, size_t entry_size); + + const Header* header_ = nullptr; + FileBlockCache* file_block_cache_ = nullptr; + std::vector blocks_; + std::vector used_blocks_; +}; + struct YbHnswSearchContext { using HeapEntry = std::pair; @@ -56,6 +84,7 @@ struct YbHnswSearchContext { Top top; ExtraTop extra_top; NextQueue next; + SearchCache search_cache; }; class YbHnsw { @@ -65,7 +94,8 @@ class YbHnsw { using Metric = unum::usearch::metric_punned_t; using SearchResult = std::vector>; - explicit YbHnsw(Metric& metric) : metric_(metric) {} + explicit YbHnsw(Metric& metric); + ~YbHnsw(); // Imports specified index to YbHnsw structure, also storing this structure to disk. Status Import( @@ -86,36 +116,25 @@ class YbHnsw { } private: - std::pair SearchInNonBaseLayers(const std::byte* query_vector) const; + std::pair SearchInNonBaseLayers( + const std::byte* query_vector, SearchCache& cache) const; void SearchInBaseLayer( const std::byte* query_vector, VectorNo best_vector, DistanceType best_dist, size_t max_results, const vector_index::VectorFilter& filter, YbHnswSearchContext& context) const; SearchResult MakeResult(size_t max_results, YbHnswSearchContext& context) const; - boost::iterator_range> GetNeighborsInNonBaseLayer( - size_t level, size_t vector) const; - - boost::iterator_range> GetNeighborsInBaseLayer( - size_t vector) const; - - const std::byte* BlockPtr( - size_t block, size_t entries_per_block, size_t entry, size_t entry_size) const; - - Slice GetVectorDataSlice(size_t vector) const; - vector_index::VectorId GetVectorData(size_t vector) const; DistanceType Distance(const std::byte* lhs, const std::byte* rhs) const; - DistanceType Distance(const std::byte* lhs, size_t vector) const; - MisalignedPtr VectorHeader(size_t vector) const; - const std::byte* CoordinatesPtr(size_t vector) const; + DistanceType Distance(const std::byte* lhs, size_t vector, SearchCache& cache) const; boost::iterator_range> MakeCoordinates( const std::byte* ptr) const; - boost::iterator_range> Coordinates(size_t vector) const; + boost::iterator_range> Coordinates( + size_t vector, SearchCache& cache) const; Metric& metric_; Header header_; - std::shared_ptr block_cache_; - FileBlockCache* file_block_cache_ = nullptr; + BlockCachePtr block_cache_; + FileBlockCachePtr file_block_cache_; }; } // namespace yb::hnsw diff --git a/src/yb/hnsw/hnsw_block_cache.cc b/src/yb/hnsw/hnsw_block_cache.cc index 9f6c9ce0034d..cfa297a7ab7a 100644 --- a/src/yb/hnsw/hnsw_block_cache.cc +++ b/src/yb/hnsw/hnsw_block_cache.cc @@ -13,13 +13,42 @@ #include "yb/hnsw/hnsw_block_cache.h" +#include + #include 
"yb/hnsw/block_writer.h" #include "yb/util/crc.h" +#include "yb/util/metrics.h" #include "yb/util/size_literals.h" using namespace yb::size_literals; +METRIC_DEFINE_counter(table, vector_index_cache_hit, "Vector index block cache hits", + yb::MetricUnit::kCacheHits, + "Number of hits of vector index block cache"); + +METRIC_DEFINE_counter(table, vector_index_cache_query, "Vector index block cache query", + yb::MetricUnit::kCacheQueries, + "Number of queries of vector index block cache"); + +METRIC_DEFINE_counter(table, vector_index_read, "Vector index block read bytes", + yb::MetricUnit::kBytes, + "Number of bytes read by vector index block cache"); + +METRIC_DEFINE_counter(table, vector_index_cache_add, "Vector index block bytes added to cache", + yb::MetricUnit::kBytes, + "Number of bytes added to vector index block cache"); + +METRIC_DEFINE_counter(table, vector_index_cache_evict, + "Vector index block bytes evicted from cache", + yb::MetricUnit::kBytes, + "Number of bytes evicted from vector index block cache"); + +METRIC_DEFINE_counter(table, vector_index_cache_remove, + "Vector index block bytes removed from cache", + yb::MetricUnit::kBytes, + "Number of bytes removed from vector index block cache"); + namespace yb::hnsw { namespace { @@ -151,9 +180,168 @@ void Deserialize(size_t version, Out& out, Args&&... args) { } // namespace -void BlockCache::Register(FileBlockCachePtr&& file_block_cache) { - std::lock_guard guard(mutex_); - files_.push_back(std::move(file_block_cache)); +BlockCacheMetrics::BlockCacheMetrics(const MetricEntityPtr& entity) + : hit(METRIC_vector_index_cache_hit.Instantiate(entity)), + query(METRIC_vector_index_cache_query.Instantiate(entity)), + read(METRIC_vector_index_read.Instantiate(entity)), + add(METRIC_vector_index_cache_add.Instantiate(entity)), + evict(METRIC_vector_index_cache_evict.Instantiate(entity)), + remove(METRIC_vector_index_cache_remove.Instantiate(entity)) { +} + +struct CachedBlock : boost::intrusive::list_base_hook<> { + BlockCacheShard* shard; + size_t end; + size_t size; + // Guarded by BlockCacheShard mutex. + int64_t use_count; + std::atomic data{nullptr}; + std::mutex load_mutex; + DataBlock content GUARDED_BY(load_mutex); + + Result Load( + RandomAccessFile& file, MemTracker& mem_tracker, BlockCacheMetrics& metrics) { + std::lock_guard lock(load_mutex); + auto result = data.load(std::memory_order_acquire); + if (result) { + return result; + } + if (!content.empty()) { + // It could happen that block is in the process of eviction, in this case we could just + // restore data pointing to content instead of reloading it + data.store(result = content.data(), std::memory_order_release); + return result; + } + mem_tracker.Consume(size); + content.resize(size); + Slice read_result; + RETURN_NOT_OK(file.Read(end - size, size, &read_result, content.data())); + RSTATUS_DCHECK_EQ( + read_result.size(), content.size(), Corruption, + Format("Wrong number of read bytes in block $0 - $1", end - size, size)); + metrics.read->IncrementBy(read_result.size()); + data.store(result = content.data(), std::memory_order_release); + return result; + } + + void Unload(MemTracker& mem_tracker) { + std::lock_guard lock(load_mutex); + if (data.load(std::memory_order_acquire)) { + // The Load was called during eviction, no need to unload data. 
+ return; + } + content = {}; + mem_tracker.Release(size); + } +}; + +class alignas(CACHELINE_SIZE) BlockCacheShard { + public: + void Init(size_t capacity, BlockCache& block_cache) { + capacity_ = capacity; + block_cache_ = &block_cache; + } + + Result Take(CachedBlock& block, RandomAccessFile& file) { + block_cache_->metrics().query->Increment(); + { + std::lock_guard lock(mutex_); + ++block.use_count; + // Remove block from LRU while it is used. + if (block.is_linked()) { + DCHECK_EQ(block.use_count, 1); + RemoveBlockFromLRU(block); + } + + auto data = block.data.load(std::memory_order_acquire); + if (data) { + block_cache_->metrics().hit->Increment(); + return data; + } + } + return VERIFY_RESULT( + block.Load(file, *block_cache_->mem_tracker(), block_cache_->metrics())); + } + + void Release(CachedBlock& block) { + boost::container::small_vector evicted_blocks; + { + std::lock_guard lock(mutex_); + DCHECK(!block.is_linked()); + if (--block.use_count != 0) { + return; + } + Evict(block.size, evicted_blocks); + block_cache_->metrics().add->IncrementBy(block.size); + consumption_ += block.size; + lru_.push_back(block); + } + size_t evicted_bytes = 0; + for (auto* evicted_block : evicted_blocks) { + evicted_block->Unload(*block_cache_->mem_tracker()); + evicted_bytes += evicted_block->size; + } + if (evicted_bytes) { + block_cache_->metrics().evict->IncrementBy(evicted_bytes); + } + } + + void Remove(CachedBlock& block) { + std::lock_guard lock(mutex_); + DCHECK_EQ(block.use_count, 0); + if (block.is_linked()) { + RemoveBlockFromLRU(block); + } + } + + private: + using Blocks = boost::intrusive::list; + + void RemoveBlockFromLRU(CachedBlock& block) REQUIRES(mutex_) { + block_cache_->metrics().remove->IncrementBy(block.size); + lru_.erase(lru_.iterator_to(block)); + consumption_ -= block.size; + } + + void Evict( + size_t space_required, + boost::container::small_vector_base& evicted_blocks) REQUIRES(mutex_) { + space_required = std::min(space_required, capacity_); + auto it = lru_.begin(); + while (consumption_ + space_required > capacity_ && it != lru_.end()) { + auto& block = *it; + ++it; + block.data.store(nullptr, std::memory_order_release); + lru_.erase(lru_.iterator_to(block)); + consumption_ -= block.size; + evicted_blocks.push_back(&block); + } + } + + size_t capacity_ = 0; + BlockCache* block_cache_ = nullptr; + std::mutex mutex_; + Blocks lru_ GUARDED_BY(mutex_); + size_t consumption_ GUARDED_BY(mutex_) = 0; +}; + +BlockCache::BlockCache( + Env& env, const MemTrackerPtr& mem_tracker, const MetricEntityPtr& metric_entity, + size_t capacity, size_t num_shard_bits) + : env_(env), + mem_tracker_(mem_tracker), + metrics_(std::make_unique(metric_entity)), + shards_mask_((1ULL << num_shard_bits) - 1), + shards_(std::make_unique(shards_mask_ + 1)) { + for (size_t i = 0; i <= shards_mask_; ++i) { + shards_[i].Init(capacity >> num_shard_bits, *this); + } +} + +BlockCache::~BlockCache() = default; + +BlockCacheShard& BlockCache::NextShard() { + return shards_[(next_shard_++) & shards_mask_]; } DataBlock FileBlockCacheBuilder::MakeFooter(const Header& header) const { @@ -177,28 +365,44 @@ DataBlock FileBlockCacheBuilder::MakeFooter(const Header& header) const { } FileBlockCache::FileBlockCache( - std::unique_ptr file, FileBlockCacheBuilder* builder) - : file_(std::move(file)) { + BlockCache& block_cache, std::unique_ptr file, + FileBlockCacheBuilder* builder) + : block_cache_(block_cache), file_(std::move(file)) { if (!builder) { return; } auto& blocks = builder->blocks(); - 
blocks_.reserve(blocks.size()); size_t total_size = 0; - for (auto& block : blocks) { - total_size += block.size(); - blocks_.emplace_back(BlockInfo { - .end = total_size, - .content = std::move(block), - }); + size_t index = 0; + AllocateBlocks(blocks.size()); + for (auto& data : blocks) { + total_size += data.size(); + auto& block = blocks_[index]; + block.shard = &block_cache_.NextShard(); + block.end = total_size; + block.size = data.size(); + block.content = std::move(data); + block_cache.mem_tracker()->Consume(block.size); + block.use_count = 1; + block.shard->Release(block); + ++index; } blocks.clear(); } -FileBlockCache::~FileBlockCache() = default; +FileBlockCache::~FileBlockCache() { + for (size_t i = 0; i != size_; ++i) { + blocks_[i].shard->Remove(blocks_[i]); + } +} + +void FileBlockCache::AllocateBlocks(size_t size) { + size_ = size; + blocks_.reset(new CachedBlock[size]); +} Result
FileBlockCache::Load() { - DCHECK(blocks_.empty()); + DCHECK_EQ(blocks_, nullptr); using FooterSizeType = uint64_t; auto file_size = VERIFY_RESULT(file_->Size()); @@ -226,20 +430,26 @@ Result
FileBlockCache::Load() { SliceReader reader(footer_data); size_t version = reader.Read(); Deserialize(version, header, reader); - blocks_.resize(reader.Left() / sizeof(uint64_t)); + AllocateBlocks(reader.Left() / sizeof(uint64_t)); size_t prev_end = 0; - for (size_t i = 0; i < blocks_.size(); i++) { + for (size_t i = 0; i < size_; i++) { + blocks_[i].shard = &block_cache_.NextShard(); blocks_[i].end = reader.Read(); - blocks_[i].content.resize(blocks_[i].end - prev_end); - Slice read_result; - RETURN_NOT_OK(file_->Read( - prev_end, blocks_[i].content.size(), &read_result, blocks_[i].content.data())); - RSTATUS_DCHECK_EQ( - read_result.size(), blocks_[i].content.size(), Corruption, - Format("Wrong number of read bytes in block $0", i)); + blocks_[i].size = blocks_[i].end - prev_end; + blocks_[i].use_count = 0; prev_end = blocks_[i].end; } return header; } +Result FileBlockCache::Take(size_t index) { + auto& block = blocks_[index]; + return block.shard->Take(block, *file_); +} + +void FileBlockCache::Release(size_t index) { + auto& block = blocks_[index]; + block.shard->Release(block); +} + } // namespace yb::hnsw diff --git a/src/yb/hnsw/hnsw_block_cache.h b/src/yb/hnsw/hnsw_block_cache.h index 4142aeb575be..c179cf9de264 100644 --- a/src/yb/hnsw/hnsw_block_cache.h +++ b/src/yb/hnsw/hnsw_block_cache.h @@ -16,6 +16,8 @@ #include "yb/hnsw/types.h" #include "yb/util/env.h" +#include "yb/util/metrics_fwd.h" +#include "yb/util/mem_tracker.h" namespace yb::hnsw { @@ -35,41 +37,73 @@ class FileBlockCacheBuilder { std::vector blocks_; }; +class BlockCacheShard; +struct CachedBlock; + +struct BlockCacheMetrics { + explicit BlockCacheMetrics(const MetricEntityPtr& entity); + + CounterPtr hit; + CounterPtr query; + CounterPtr read; + CounterPtr add; + CounterPtr evict; + CounterPtr remove; +}; + class FileBlockCache { public: - explicit FileBlockCache( - std::unique_ptr file, FileBlockCacheBuilder* builder = nullptr); + FileBlockCache( + BlockCache& block_cache, std::unique_ptr file, + FileBlockCacheBuilder* builder = nullptr); ~FileBlockCache(); Result
Load(); - const std::byte* Data(size_t index) { - return blocks_[index].content.data(); + size_t size() const { + return size_; } + Result Take(size_t index); + void Release(size_t index); + private: + void AllocateBlocks(size_t size); + + BlockCache& block_cache_; std::unique_ptr file_; - struct BlockInfo { - size_t end; - DataBlock content; - }; - std::vector blocks_; + std::unique_ptr blocks_; + size_t size_ = 0; }; class BlockCache { public: - explicit BlockCache(Env& env) : env_(env) {} + BlockCache( + Env& env, const MemTrackerPtr& mem_tracker, const MetricEntityPtr& metric_entity, + size_t capacity, size_t num_shard_bits); + ~BlockCache(); - void Register(FileBlockCachePtr&& file_block_cache); + BlockCacheShard& NextShard(); Env& env() const { return env_; } + const MemTrackerPtr& mem_tracker() const { + return mem_tracker_; + } + + BlockCacheMetrics& metrics() const { + return *metrics_; + } + private: Env& env_; - std::mutex mutex_; - std::vector files_ GUARDED_BY(mutex_); + const MemTrackerPtr mem_tracker_; + std::unique_ptr metrics_; + const size_t shards_mask_; + std::atomic next_shard_ = 0; + std::unique_ptr shards_; }; Status WriteFooter(); diff --git a/src/yb/hnsw/types.h b/src/yb/hnsw/types.h index b406835ec4c9..da88ac6171dc 100644 --- a/src/yb/hnsw/types.h +++ b/src/yb/hnsw/types.h @@ -35,6 +35,10 @@ struct Config { uint64_t connectivity_base = 0; uint64_t connectivity = 0; uint64_t expansion_search = 0; // TODO(vector_index) Don't need to store it. + + std::string ToString() const { + return YB_STRUCT_TO_STRING(connectivity_base, connectivity, expansion_search); + } }; struct LayerInfo { @@ -62,6 +66,12 @@ struct Header { std::vector layers; void Init(const unum::usearch::index_dense_gt& index); + + std::string ToString() const { + return YB_STRUCT_TO_STRING( + dimensions, vector_data_size, entry, max_level, config, max_block_size, + max_vectors_per_non_base_block, vector_data_block, vector_data_amount_per_block, layers); + } }; } // namespace yb::hnsw diff --git a/src/yb/util/metrics.h b/src/yb/util/metrics.h index bec437511525..5e101bb756d0 100644 --- a/src/yb/util/metrics.h +++ b/src/yb/util/metrics.h @@ -1003,8 +1003,6 @@ class Counter : public Metric { DISALLOW_COPY_AND_ASSIGN(Counter); }; -using CounterPtr = scoped_refptr; - class MillisLagPrototype : public MetricPrototype { public: explicit MillisLagPrototype(const MetricPrototype::CtorArgs& args) : MetricPrototype(args) { @@ -1288,8 +1286,6 @@ class EventStats : public BaseStats { DISALLOW_COPY_AND_ASSIGN(EventStats); }; -using EventStatsPtr = scoped_refptr; - template inline void IncrementStats(const scoped_refptr& stats, int64_t value) { if (stats) { diff --git a/src/yb/util/metrics_fwd.h b/src/yb/util/metrics_fwd.h index 0890c895c7a4..f0ec471c4f81 100644 --- a/src/yb/util/metrics_fwd.h +++ b/src/yb/util/metrics_fwd.h @@ -30,6 +30,7 @@ class HistogramPrototype; class HistogramSnapshotPB; class HdrHistogram; class Metric; +class MetricEntity; class MetricEntityPrototype; class MetricPrototype; class MetricRegistry; @@ -43,7 +44,8 @@ class StatsOnlyHistogram; struct MetricJsonOptions; struct MetricPrometheusOptions; -class MetricEntity; +using CounterPtr = scoped_refptr; +using EventStatsPtr = scoped_refptr; using MetricEntityPtr = scoped_refptr; template From 651493417759810c6620f38c993d1386eef7bc95 Mon Sep 17 00:00:00 2001 From: Sergei Politov Date: Thu, 8 May 2025 07:50:20 +0300 Subject: [PATCH 025/146] [#26565] DocDB: Fix alter table and master snapshot coordinator deadlock Summary: The following lock order 
inversion could happen: CatalogManager::AlterTable acquired catalog manager lock, then tries to replicate altered table information, which requires raft replica lock. MasterSnapshotCoordinator::CreateReplicated is invoked when replica lock is held by apply thread. Then it tries to get tablets info to schedule operations. But it is necessary to acquire catalog manager lock to obtain tablets info. This deadlock is auto resolved via timeout in alter table. But for this period of time all heartbeats and other operations that require catalog manager lock are blocked. Fixed by using separate thread pool to schedule tablet operations. Jira: DB-15933 Test Plan: ./yb_build.sh fastdebug --gcc11 --cxx-test yb-admin-snapshot-schedule-test --gtest_filter YbAdminSnapshotScheduleTestWithYsqlColocationRestoreParam.PgsqlSequenceVerifyPartialRestore/DBColocated_Clone -n 40 -- -p 8 Reviewers: mhaddad Reviewed By: mhaddad Subscribers: ybase Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43681 --- src/yb/master/master_snapshot_coordinator.cc | 13 +++++++++-- .../tools/yb-admin-snapshot-schedule-test.cc | 22 +++++++++++++++---- 2 files changed, 29 insertions(+), 6 deletions(-) diff --git a/src/yb/master/master_snapshot_coordinator.cc b/src/yb/master/master_snapshot_coordinator.cc index 9d3bc50d0c94..ea75b072d777 100644 --- a/src/yb/master/master_snapshot_coordinator.cc +++ b/src/yb/master/master_snapshot_coordinator.cc @@ -336,7 +336,7 @@ class MasterSnapshotCoordinator::Impl { RETURN_NOT_OK(tablet->snapshots().Create(*sys_catalog_snapshot_data)); } - ScheduleOperations(operations, leader_term); + PostScheduleOperations(std::move(operations), leader_term); if (leader_term >= 0 && snapshot_empty) { // There could be snapshot for 0 tables, so they should be marked as complete right after @@ -542,7 +542,7 @@ class MasterSnapshotCoordinator::Impl { RETURN_NOT_OK(tablet->ApplyOperation( operation, /* batch_idx= */ -1, *rpc::CopySharedMessage(write_batch))); - ScheduleOperations(operations, leader_term); + PostScheduleOperations(std::move(operations), leader_term); return Status::OK(); } @@ -1379,6 +1379,14 @@ class MasterSnapshotCoordinator::Impl { MasterError(MasterErrorPB::SNAPSHOT_NOT_FOUND)); } + template + void PostScheduleOperations(Operations&& operations, int64_t leader_term) { + context_.Scheduler().io_service().post( + [this, operations = std::move(operations), leader_term] { + ScheduleOperations(operations, leader_term); + }); + } + template void ScheduleOperation(const Operation& operation, const TabletInfoPtr& tablet_info, int64_t leader_term); @@ -1436,6 +1444,7 @@ class MasterSnapshotCoordinator::Impl { if (!l.IsInitializedAndIsLeader()) { return; } + LongOperationTracker long_operation_tracker("Poll", 1s); VLOG(4) << __func__ << "()"; std::vector cleanup_snapshots; TabletSnapshotOperations operations; diff --git a/src/yb/tools/yb-admin-snapshot-schedule-test.cc b/src/yb/tools/yb-admin-snapshot-schedule-test.cc index fe2ee0188e03..a1a722a8f4ba 100644 --- a/src/yb/tools/yb-admin-snapshot-schedule-test.cc +++ b/src/yb/tools/yb-admin-snapshot-schedule-test.cc @@ -262,8 +262,12 @@ class YbAdminSnapshotScheduleTest : public AdminTestBase { std::string seq_no{VERIFY_RESULT(GetMemberAsStr(out, "seq_no"))}; return WaitFor([&]() -> Result { - auto out = VERIFY_RESULT(CallJsonAdmin("list_clones", source_namespace_id, seq_no)); - const auto entries = out.GetArray(); + auto out = CallJsonAdmin("list_clones", source_namespace_id, seq_no); + if (!out.ok()) { + LOG(WARNING) << "Failed 
to list clones: " << out.status(); + return false; + } + const auto entries = out->GetArray(); SCHECK_EQ(entries.Size(), 1, IllegalState, "Wrong number of entries. Expected 1"); auto state = master::SysCloneStatePB::CLONE_SCHEMA_STARTED; master::SysCloneStatePB::State_Parse( @@ -1295,14 +1299,24 @@ class YbAdminSnapshotScheduleTestWithYsqlColocationRestoreParam: } }; +namespace { + +std::string TestParamToString(const testing::TestParamInfo& param_info) { + return AsString(get<0>(param_info.param)).substr(1) + "_" + + AsString(get<1>(param_info.param)).substr(1); +} + +} // namespace + INSTANTIATE_TEST_CASE_P( - ColocationAndRestoreType, YbAdminSnapshotScheduleTestWithYsqlColocationRestoreParam, + , YbAdminSnapshotScheduleTestWithYsqlColocationRestoreParam, ::testing::Values( ScheduleRestoreTestParams(YsqlColocationConfig::kNotColocated, RestoreType::kPITR), ScheduleRestoreTestParams(YsqlColocationConfig::kDBColocated, RestoreType::kPITR), ScheduleRestoreTestParams(YsqlColocationConfig::kTablegroup, RestoreType::kPITR), ScheduleRestoreTestParams(YsqlColocationConfig::kNotColocated, RestoreType::kClone), - ScheduleRestoreTestParams(YsqlColocationConfig::kDBColocated, RestoreType::kClone))); + ScheduleRestoreTestParams(YsqlColocationConfig::kDBColocated, RestoreType::kClone)), + TestParamToString); TEST_P(YbAdminSnapshotScheduleTestWithYsqlColocationRestoreParam, Pgsql) { auto schedule_id = ASSERT_RESULT(PreparePgWithColocatedParam()); From 66fb37b97b86e01751a0c0bafb8806f55d4455fb Mon Sep 17 00:00:00 2001 From: Arpit Nabaria Date: Thu, 8 May 2025 05:43:07 +0000 Subject: [PATCH 026/146] [PLAT-17562]Mask internal runtimeconfig keys from the user when configuring LDAP Summary: Earlier, we were showing the runtimeKeys to the user when running ldap configure. Since these keys are internal, we should avoid exposing internal implementation to an end user. Adding yba ldap describe ``` ./yba ldap describe -h Describe LDAP configuration for YBA Usage: yba ldap describe [flags] Aliases: describe, show, get Examples: yba ldap describe Flags: --user-set-only [Optional] Only show the fields that were set by the user explicitly. -h, --help help for describe ``` Test Plan: Tested locally ``` /yba ldap configure --ldap-host 10.23.16.4 --ldap-port 636 --ldap-ssl-protocol ldaps --ldap-tls-version "TLSv1_2" --base-dn '"CN=Users,CN=MRS,DC=LDAP,DC=COM"' --dn-prefix '"CN="' --service-account-dn '"CN=service_account,CN=MRS,DC=LDAP,DC=COM"' --service-account-password '"Service@123"' --group-member-attribute 'groupName' LDAP configuration updated successfully. 
LDAP configuration: Key Value base-dn CN=Users,CN=MRS,DC=LDAP,DC=COM search-filter default-role ReadOnly ldap-host 10.23.16.4 group-use-role-mapping true group-search-base group-use-query false ldap-port 636 ldaps-enabled true search-and-bind-enabled false service-account-dn CN=service_account,CN=MRS,DC=LDAP,DC=COM group-attribute groupName start-tls-enabled false group-search-filter search-attribute tls-version TLSv1_2 ldap-enabled true service-account-password ******** dn-prefix CN= group-search-scope SUBTREE ``` ``` ./yba ldap describe anabaria@dev-server-anabaria 05:37:59 LDAP configuration: Key Value base-dn CN=Users,CN=MRS,DC=LDAP,DC=COM search-filter default-role ReadOnly ldap-host 10.23.16.4 group-use-role-mapping true group-search-base group-use-query false ldap-port 636 ldaps-enabled true search-and-bind-enabled false service-account-dn CN=service_account,CN=MRS,DC=LDAP,DC=COM group-attribute groupName start-tls-enabled false group-search-filter search-attribute tls-version TLSv1_2 ldap-enabled true service-account-password ******** dn-prefix CN= group-search-scope SUBTREE ``` Reviewers: dkumar Reviewed By: dkumar Differential Revision: https://phorge.dev.yugabyte.com/D43825 --- .../yba-cli/cmd/auth/ldap/configure_ldap.go | 2 +- .../yba-cli/cmd/auth/ldap/describe_ldap.go | 29 ++++ managed/yba-cli/cmd/auth/ldap/disable_ldap.go | 32 ++++- managed/yba-cli/cmd/auth/ldap/ldap.go | 1 + .../cmd/auth/ldap/ldap_runtime_keys.go | 29 ---- managed/yba-cli/cmd/auth/ldap/ldap_util.go | 135 ++++++++++++------ .../oidc/{get_oidc.go => describe_oidc.go} | 18 +-- managed/yba-cli/cmd/auth/oidc/disable_oidc.go | 6 +- managed/yba-cli/cmd/auth/oidc/oidc.go | 2 +- managed/yba-cli/cmd/auth/oidc/oidc_util.go | 12 ++ managed/yba-cli/cmd/util/ldap_runtime_keys.go | 91 ++++++++++++ managed/yba-cli/docs/yba_ldap.md | 1 + managed/yba-cli/docs/yba_ldap_configure.md | 2 +- managed/yba-cli/docs/yba_ldap_describe.md | 46 ++++++ managed/yba-cli/docs/yba_ldap_disable.md | 13 +- managed/yba-cli/docs/yba_oidc.md | 2 +- .../{yba_oidc_get.md => yba_oidc_describe.md} | 12 +- managed/yba-cli/docs/yba_oidc_disable.md | 6 +- .../yba-cli/internal/formatter/ldap/ldap.go | 104 ++++++++++++++ 19 files changed, 447 insertions(+), 96 deletions(-) create mode 100644 managed/yba-cli/cmd/auth/ldap/describe_ldap.go delete mode 100644 managed/yba-cli/cmd/auth/ldap/ldap_runtime_keys.go rename managed/yba-cli/cmd/auth/oidc/{get_oidc.go => describe_oidc.go} (50%) create mode 100644 managed/yba-cli/cmd/util/ldap_runtime_keys.go create mode 100644 managed/yba-cli/docs/yba_ldap_describe.md rename managed/yba-cli/docs/{yba_oidc_get.md => yba_oidc_describe.md} (91%) create mode 100644 managed/yba-cli/internal/formatter/ldap/ldap.go diff --git a/managed/yba-cli/cmd/auth/ldap/configure_ldap.go b/managed/yba-cli/cmd/auth/ldap/configure_ldap.go index 47b170296947..312adba57e8f 100644 --- a/managed/yba-cli/cmd/auth/ldap/configure_ldap.go +++ b/managed/yba-cli/cmd/auth/ldap/configure_ldap.go @@ -134,7 +134,7 @@ func init() { configureLDAPCmd.Flags().String("ldap-tls-version", "TLSv1_2", "[Optional] LDAP TLS version. Allowed values (case sensitive): TLSv1, TLSv1_1 and TLSv1_2.") configureLDAPCmd.Flags().StringP("base-dn", "b", "", - "[Optional] Seach base DN for LDAP. Must be enclosed in double quotes.") + "[Optional] Search base DN for LDAP. 
Must be enclosed in double quotes.") configureLDAPCmd.Flags().String("dn-prefix", "CN=", "[Optional] Prefix to be appended to the username for LDAP search.\n"+ " Must be enclosed in double quotes.") diff --git a/managed/yba-cli/cmd/auth/ldap/describe_ldap.go b/managed/yba-cli/cmd/auth/ldap/describe_ldap.go new file mode 100644 index 000000000000..784c282113d2 --- /dev/null +++ b/managed/yba-cli/cmd/auth/ldap/describe_ldap.go @@ -0,0 +1,29 @@ +/* + * Copyright (c) YugaByte, Inc. + */ + +package ldap + +import ( + "github.com/spf13/cobra" + "github.com/yugabyte/yugabyte-db/managed/yba-cli/cmd/util" +) + +// describeLDAPCmd is used to get LDAP authentication configuration for YBA +var describeLDAPCmd = &cobra.Command{ + Use: "describe", + Aliases: []string{"show", "get"}, + Short: "Describe LDAP configuration for YBA", + Long: "Describe LDAP configuration for YBA", + Example: `yba ldap describe`, + Run: func(cmd *cobra.Command, args []string) { + showInherited := !util.MustGetFlagBool(cmd, "user-set-only") + getLDAPConfig(showInherited /*inherited*/) + }, +} + +func init() { + describeLDAPCmd.Flags().SortFlags = false + describeLDAPCmd.Flags().Bool("user-set-only", false, + "[Optional] Only show the fields that were set by the user explicitly.") +} diff --git a/managed/yba-cli/cmd/auth/ldap/disable_ldap.go b/managed/yba-cli/cmd/auth/ldap/disable_ldap.go index 3d855550e60f..baacd385a72d 100644 --- a/managed/yba-cli/cmd/auth/ldap/disable_ldap.go +++ b/managed/yba-cli/cmd/auth/ldap/disable_ldap.go @@ -5,7 +5,10 @@ package ldap import ( + "github.com/sirupsen/logrus" "github.com/spf13/cobra" + "github.com/yugabyte/yugabyte-db/managed/yba-cli/cmd/util" + "github.com/yugabyte/yugabyte-db/managed/yba-cli/internal/formatter" ) // disableLDAPCmd is used to disable LDAP authentication for YBA @@ -14,11 +17,38 @@ var disableLDAPCmd = &cobra.Command{ Aliases: []string{"delete"}, Short: "Disable LDAP authentication for YBA", Long: "Disable LDAP authentication for YBA", + Example: `yba ldap disable --reset-fields ldap-host,ldap-port`, Run: func(cmd *cobra.Command, args []string) { - disableLDAP() + // Reset all LDAP fields to default values + resetKeys := util.MaybeGetFlagStringSlice(cmd, "reset-fields") + runtimeKeysToDelete := map[string]bool{ + util.ToggleLDAPKey: true, + } + for _, key := range resetKeys { + if key == "ldap-ssl-protocol" { + runtimeKeysToDelete[util.UseLDAPSKey] = true + runtimeKeysToDelete[util.UseStartTLSKey] = true + } else if runtimeKey, ok := util.GetLDAPKeyForResetFlag(key); ok { + runtimeKeysToDelete[runtimeKey] = true + } else { + logrus.Warn(formatter.Colorize( + "Unrecognized key: "+key+". Skipping it.\n", formatter.YellowColor)) + } + } + disableLDAP(util.MustGetFlagBool(cmd, "reset-all"), runtimeKeysToDelete) }, } func init() { disableLDAPCmd.Flags().SortFlags = false + disableLDAPCmd.Flags().Bool("reset-all", false, + "[Optional] Reset all LDAP fields to default values") + disableLDAPCmd.Flags().StringSlice("reset-fields", []string{}, + "[Optional] Reset specific LDAP fields to default values. "+ + "Comma separated list of fields. 
Example: --reset-fields ,\n"+ + " Allowed values: ldap-host, ldap-port, ldap-ssl-protocol,"+ + "tls-version, base-dn, dn-prefix, customer-uuid, search-and-bind-enabled, \n"+ + " search-attribute, search-filter, service-account-dn, default-role,"+ + "service-account-password, group-attribute, group-search-filter, \n"+ + " group-search-base, group-search-scope, group-use-query, group-use-role-mapping") } diff --git a/managed/yba-cli/cmd/auth/ldap/ldap.go b/managed/yba-cli/cmd/auth/ldap/ldap.go index 205404688ebe..53da938ea863 100644 --- a/managed/yba-cli/cmd/auth/ldap/ldap.go +++ b/managed/yba-cli/cmd/auth/ldap/ldap.go @@ -22,4 +22,5 @@ func init() { LdapCmd.Flags().SortFlags = false LdapCmd.AddCommand(configureLDAPCmd) LdapCmd.AddCommand(disableLDAPCmd) + LdapCmd.AddCommand(describeLDAPCmd) } diff --git a/managed/yba-cli/cmd/auth/ldap/ldap_runtime_keys.go b/managed/yba-cli/cmd/auth/ldap/ldap_runtime_keys.go deleted file mode 100644 index f7a725d8c321..000000000000 --- a/managed/yba-cli/cmd/auth/ldap/ldap_runtime_keys.go +++ /dev/null @@ -1,29 +0,0 @@ -/* - * Copyright (c) YugaByte, Inc. - */ - -package ldap - -const ( - toggleLDAPKey = "yb.security.ldap.use_ldap" - ldapHostKey = "yb.security.ldap.ldap_url" - ldapPortKey = "yb.security.ldap.ldap_port" - useLDAPSKey = "yb.security.ldap.enable_ldaps" - useStartTLSKey = "yb.security.ldap.enable_ldap_start_tls" - ldapTLSVersionKey = "yb.security.ldap.ldap_tls_protocol" - ldapBaseDNKey = "yb.security.ldap.ldap_basedn" - ldapDNPrefixKey = "yb.security.ldap.ldap_dn_prefix" - ldapCustomerUUIDKey = "yb.security.ldap.ldap_customer_uuid" - ldapSearchAndBindKey = "yb.security.ldap.use_search_and_bind" - ldapSearchAttributeKey = "yb.security.ldap.ldap_search_attribute" - ldapSearchFilterKey = "yb.security.ldap.ldap_search_filter" - ldapServiceAccountDNKey = "yb.security.ldap.ldap_service_account_distinguished_name" - ldapServiceAccountPasswordKey = "yb.security.ldap.ldap_service_account_password" - ldapDefaultRoleKey = "yb.security.ldap.ldap_default_role" - ldapGroupAttributeKey = "yb.security.ldap.ldap_group_member_of_attribute" - ldapGroupSearchFilterKey = "yb.security.ldap.ldap_group_search_filter" - ldapGroupSearchBaseKey = "yb.security.ldap.ldap_group_search_base_dn" - ldapGroupSearchScopeKey = "yb.security.ldap.ldap_group_search_scope" - ldapGroupUseQueryKey = "yb.security.ldap.ldap_group_use_query" - ldapGroupUseRoleMapping = "yb.security.ldap.ldap_group_use_role_mapping" -) diff --git a/managed/yba-cli/cmd/auth/ldap/ldap_util.go b/managed/yba-cli/cmd/auth/ldap/ldap_util.go index 2a6a781d80fd..2a4c68d3fdb1 100644 --- a/managed/yba-cli/cmd/auth/ldap/ldap_util.go +++ b/managed/yba-cli/cmd/auth/ldap/ldap_util.go @@ -5,6 +5,7 @@ package ldap import ( + "fmt" "os" "strings" @@ -15,7 +16,7 @@ import ( "github.com/yugabyte/yugabyte-db/managed/yba-cli/cmd/util" ybaAuthClient "github.com/yugabyte/yugabyte-db/managed/yba-cli/internal/client" "github.com/yugabyte/yugabyte-db/managed/yba-cli/internal/formatter" - "github.com/yugabyte/yugabyte-db/managed/yba-cli/internal/formatter/scope" + "github.com/yugabyte/yugabyte-db/managed/yba-cli/internal/formatter/ldap" ) type configureLDAPParams struct { @@ -39,52 +40,101 @@ type configureLDAPParams struct { GroupSearchScope string } -func disableLDAP() { +func disableLDAP(resetAll bool, keysToReset map[string]bool) { authAPI := ybaAuthClient.NewAuthAPIClientAndCustomer() - key.DeleteGlobalKey(authAPI, toggleLDAPKey) + ldapConfig := getScopedConfigWithLDAPKeys(authAPI, false /*inherited*/) + // Delete the toggle key 
if present in the config + if len(ldapConfig) == 0 { + logrus.Warn(formatter.Colorize("No LDAP configuration found.\n", formatter.YellowColor)) + return + } + if resetAll { + for _, keyConfig := range ldapConfig { + logrus.Info( + formatter.Colorize( + fmt.Sprintf("Deleting key: %s\n", util.LDAPKeyToFlagMap[keyConfig.GetKey()]), + formatter.GreenColor, + ), + ) + key.DeleteGlobalKey(authAPI, keyConfig.GetKey()) + } + } else { + for _, keyConfig := range ldapConfig { + if _, exists := keysToReset[keyConfig.GetKey()]; exists { + logrus.Info( + formatter.Colorize( + fmt.Sprintf("Deleting key: %s\n", util.LDAPKeyToFlagMap[keyConfig.GetKey()]), + formatter.GreenColor, + ), + ) + key.DeleteGlobalKey(authAPI, keyConfig.GetKey()) + } + } + } + logrus.Info( + formatter.Colorize("LDAP configuration deleted successfully.\n", formatter.GreenColor)) } func configureLDAP(params configureLDAPParams) { authAPI := ybaAuthClient.NewAuthAPIClientAndCustomer() - key.CheckAndSetGlobalKey(authAPI, toggleLDAPKey, "true") - key.CheckAndSetGlobalKey(authAPI, ldapHostKey, params.Host) - key.CheckAndSetGlobalKey(authAPI, ldapPortKey, params.Port) - key.CheckAndSetGlobalKey(authAPI, ldapTLSVersionKey, params.LdapTLSVersion) + key.CheckAndSetGlobalKey(authAPI, util.ToggleLDAPKey, "true") + key.CheckAndSetGlobalKey(authAPI, util.LDAPHostKey, params.Host) + key.CheckAndSetGlobalKey(authAPI, util.LDAPPortKey, params.Port) + key.CheckAndSetGlobalKey(authAPI, util.LDAPTLSVersionKey, params.LdapTLSVersion) if strings.ToLower(params.LdapSSLProtocol) == util.LDAPWithSSL { - key.CheckAndSetGlobalKey(authAPI, useLDAPSKey, "true") - key.CheckAndSetGlobalKey(authAPI, useStartTLSKey, "false") + key.CheckAndSetGlobalKey(authAPI, util.UseLDAPSKey, "true") + key.CheckAndSetGlobalKey(authAPI, util.UseStartTLSKey, "false") } else if strings.ToLower(params.LdapSSLProtocol) == util.LDAPWithStartTLS { - key.CheckAndSetGlobalKey(authAPI, useStartTLSKey, "true") - key.CheckAndSetGlobalKey(authAPI, useLDAPSKey, "false") + key.CheckAndSetGlobalKey(authAPI, util.UseStartTLSKey, "true") + key.CheckAndSetGlobalKey(authAPI, util.UseLDAPSKey, "false") } else if strings.ToLower(params.LdapSSLProtocol) == util.LDAPWithoutSSL { - key.CheckAndSetGlobalKey(authAPI, useLDAPSKey, "false") - key.CheckAndSetGlobalKey(authAPI, useStartTLSKey, "false") + key.CheckAndSetGlobalKey(authAPI, util.UseLDAPSKey, "false") + key.CheckAndSetGlobalKey(authAPI, util.UseStartTLSKey, "false") } - key.CheckAndSetGlobalKey(authAPI, ldapBaseDNKey, params.BaseDN) - key.CheckAndSetGlobalKey(authAPI, ldapDNPrefixKey, params.DNPrefix) - key.CheckAndSetGlobalKey(authAPI, ldapCustomerUUIDKey, params.CustomerUUID) - key.CheckAndSetGlobalKey(authAPI, ldapSearchAndBindKey, params.SearchAndBind) - key.CheckAndSetGlobalKey(authAPI, ldapSearchAttributeKey, params.LdapSearchAttribute) - key.CheckAndSetGlobalKey(authAPI, ldapSearchFilterKey, params.LdapSearchFilter) - key.CheckAndSetGlobalKey(authAPI, ldapServiceAccountDNKey, params.ServiceAccountDN) - key.CheckAndSetGlobalKey(authAPI, ldapServiceAccountPasswordKey, params.ServiceAccountPassword) + key.CheckAndSetGlobalKey(authAPI, util.LDAPBaseDNKey, params.BaseDN) + key.CheckAndSetGlobalKey(authAPI, util.LDAPDNPrefixKey, params.DNPrefix) + key.CheckAndSetGlobalKey(authAPI, util.LDAPCustomerUUIDKey, params.CustomerUUID) + key.CheckAndSetGlobalKey(authAPI, util.LDAPSearchAndBindKey, params.SearchAndBind) + key.CheckAndSetGlobalKey(authAPI, util.LDAPSearchAttributeKey, params.LdapSearchAttribute) + key.CheckAndSetGlobalKey(authAPI, 
util.LDAPSearchFilterKey, params.LdapSearchFilter) + key.CheckAndSetGlobalKey(authAPI, util.LDAPServiceAccountDNKey, params.ServiceAccountDN) + key.CheckAndSetGlobalKey( + authAPI, + util.LDAPServiceAccountPasswordKey, + params.ServiceAccountPassword, + ) // Group mapping params - key.CheckAndSetGlobalKey(authAPI, ldapGroupUseRoleMapping, "true") - key.CheckAndSetGlobalKey(authAPI, ldapDefaultRoleKey, params.DefaultRole) - key.CheckAndSetGlobalKey(authAPI, ldapGroupAttributeKey, params.GroupAttribute) - key.CheckAndSetGlobalKey(authAPI, ldapGroupUseQueryKey, params.GroupUseQuery) - key.CheckAndSetGlobalKey(authAPI, ldapGroupSearchFilterKey, params.GroupSearchFilter) - key.CheckAndSetGlobalKey(authAPI, ldapGroupSearchBaseKey, params.GroupSearchBase) - key.CheckAndSetGlobalKey(authAPI, ldapGroupSearchScopeKey, params.GroupSearchScope) + key.CheckAndSetGlobalKey(authAPI, util.LDAPGroupUseRoleMapping, "true") + key.CheckAndSetGlobalKey(authAPI, util.LDAPDefaultRoleKey, params.DefaultRole) + key.CheckAndSetGlobalKey(authAPI, util.LDAPGroupAttributeKey, params.GroupAttribute) + key.CheckAndSetGlobalKey(authAPI, util.LDAPGroupUseQueryKey, params.GroupUseQuery) + key.CheckAndSetGlobalKey(authAPI, util.LDAPGroupSearchFilterKey, params.GroupSearchFilter) + key.CheckAndSetGlobalKey(authAPI, util.LDAPGroupSearchBaseKey, params.GroupSearchBase) + key.CheckAndSetGlobalKey(authAPI, util.LDAPGroupSearchScopeKey, params.GroupSearchScope) logrus.Info( - formatter.Colorize("LDAP has been configured successfully.\n", formatter.GreenColor), + formatter.Colorize("LDAP configuration updated successfully.\n", formatter.GreenColor), ) - getLDAPConfig(authAPI) + getLDAPConfig(true /*inherited*/) } -func getLDAPConfig(authAPI *ybaAuthClient.AuthAPIClient) { - r, response, err := authAPI.GetConfig(util.GlobalScopeUUID).IncludeInherited(true).Execute() +func getLDAPConfig(inherited bool) { + authAPI := ybaAuthClient.NewAuthAPIClientAndCustomer() + ldapConfig := getScopedConfigWithLDAPKeys(authAPI, inherited) + if len(ldapConfig) == 0 { + logrus.Info(formatter.Colorize("No LDAP configuration found.\n", formatter.YellowColor)) + return + } + writeLDAPConfig(ldapConfig) +} + +func getScopedConfigWithLDAPKeys( + authAPI *ybaAuthClient.AuthAPIClient, + inherited bool, +) []ybaclient.ConfigEntry { + r, response, err := authAPI.GetConfig(util.GlobalScopeUUID). + IncludeInherited(inherited). 
+ Execute() if err != nil { errMessage := util.ErrorFromHTTPResponse( response, @@ -92,17 +142,22 @@ func getLDAPConfig(authAPI *ybaAuthClient.AuthAPIClient) { "LDAP config", "Get") logrus.Fatal(formatter.Colorize(errMessage.Error()+"\n", formatter.RedColor)) } - ldapKeys := []ybaclient.ConfigEntry{} + ldapKeys := make([]ybaclient.ConfigEntry, 0, len(r.GetConfigEntries())) // Filter out the keys that are not related to LDAP for _, keyConfig := range r.GetConfigEntries() { - if strings.HasPrefix(keyConfig.GetKey(), "yb.security.ldap") { + if util.IsLDAPKey(keyConfig.GetKey()) { ldapKeys = append(ldapKeys, keyConfig) } } - r.ConfigEntries = &ldapKeys - fullScopeContext := *scope.NewFullScopeContext() - fullScopeContext.Output = os.Stdout - fullScopeContext.Format = scope.NewFullScopeFormat(viper.GetString("output")) - fullScopeContext.SetFullScope(r) - fullScopeContext.Write() + return ldapKeys +} + +func writeLDAPConfig(ldapConfig []ybaclient.ConfigEntry) { + logrus.Info(formatter.Colorize("LDAP configuration:\n", formatter.GreenColor)) + ldapConfigCtx := formatter.Context{ + Command: "list", + Output: os.Stdout, + Format: ldap.NewLDAPFormat(viper.GetString("output")), + } + ldap.Write(ldapConfigCtx, ldapConfig) } diff --git a/managed/yba-cli/cmd/auth/oidc/get_oidc.go b/managed/yba-cli/cmd/auth/oidc/describe_oidc.go similarity index 50% rename from managed/yba-cli/cmd/auth/oidc/get_oidc.go rename to managed/yba-cli/cmd/auth/oidc/describe_oidc.go index 75bfce29897f..829989cd300a 100644 --- a/managed/yba-cli/cmd/auth/oidc/get_oidc.go +++ b/managed/yba-cli/cmd/auth/oidc/describe_oidc.go @@ -9,13 +9,13 @@ import ( "github.com/yugabyte/yugabyte-db/managed/yba-cli/cmd/util" ) -// getOIDCCmd is used to get OIDC authentication configuration for YBA -var getOIDCCmd = &cobra.Command{ - Use: "get", - Aliases: []string{"show", "describe"}, - Short: "Get OIDC configuration for YBA", - Long: "Get OIDC configuration for YBA", - Example: `yba oidc get`, +// describeOIDCCmd is used to get OIDC authentication configuration for YBA +var describeOIDCCmd = &cobra.Command{ + Use: "describe", + Aliases: []string{"show", "get"}, + Short: "Describe OIDC configuration for YBA", + Long: "Describe OIDC configuration for YBA", + Example: `yba oidc describe`, Run: func(cmd *cobra.Command, args []string) { showInherited := !util.MustGetFlagBool(cmd, "user-set-only") getOIDCConfig(showInherited /*inherited*/) @@ -23,7 +23,7 @@ var getOIDCCmd = &cobra.Command{ } func init() { - getOIDCCmd.Flags().SortFlags = false - getOIDCCmd.Flags().Bool("user-set-only", false, + describeOIDCCmd.Flags().SortFlags = false + describeOIDCCmd.Flags().Bool("user-set-only", false, "[Optional] Only show the attributes that were set by the user explicitly.") } diff --git a/managed/yba-cli/cmd/auth/oidc/disable_oidc.go b/managed/yba-cli/cmd/auth/oidc/disable_oidc.go index 740c4f7a9cda..eb3991917c60 100644 --- a/managed/yba-cli/cmd/auth/oidc/disable_oidc.go +++ b/managed/yba-cli/cmd/auth/oidc/disable_oidc.go @@ -17,7 +17,7 @@ var disableOIDCCmd = &cobra.Command{ Aliases: []string{"delete"}, Short: "Disable OIDC configuration for YBA", Long: "Disable OIDC configuration for YBA", - Example: `yba oidc disable --reset-configs `, + Example: `yba oidc disable --reset-fields client-id,client-secret`, Run: func(cmd *cobra.Command, args []string) { resetKeys := util.MaybeGetFlagStringSlice(cmd, "reset-fields") runtimeKeysToDelete := map[string]bool{ @@ -42,9 +42,9 @@ func init() { "[Optional] Reset all OIDC fields to default values") 
disableOIDCCmd.Flags().StringSlice("reset-fields", []string{}, "[Optional] Reset specific OIDC fields to default values. "+ - "Comma separated list of fields. Example: --reset-fields \n"+ + "Comma separated list of fields. Example: --reset-fields ,\n"+ formatter.Colorize( - " Available fields: client-id, client-secret, discovery-url, scope, email-attribute, default-role \n"+ + " Allowed values: client-id, client-secret, discovery-url, scope, email-attribute, default-role \n"+ " refresh-token-endpoint, provider-configuration, auto-create-user, group-claim", formatter.GreenColor, )) diff --git a/managed/yba-cli/cmd/auth/oidc/oidc.go b/managed/yba-cli/cmd/auth/oidc/oidc.go index 495f42555268..a464dd41c020 100644 --- a/managed/yba-cli/cmd/auth/oidc/oidc.go +++ b/managed/yba-cli/cmd/auth/oidc/oidc.go @@ -23,5 +23,5 @@ func init() { OIDCCmd.Flags().SortFlags = false OIDCCmd.AddCommand(configureOIDCCmd) OIDCCmd.AddCommand(disableOIDCCmd) - OIDCCmd.AddCommand(getOIDCCmd) + OIDCCmd.AddCommand(describeOIDCCmd) } diff --git a/managed/yba-cli/cmd/auth/oidc/oidc_util.go b/managed/yba-cli/cmd/auth/oidc/oidc_util.go index 2f2199b5a6a1..c3a8e24f76bc 100644 --- a/managed/yba-cli/cmd/auth/oidc/oidc_util.go +++ b/managed/yba-cli/cmd/auth/oidc/oidc_util.go @@ -69,11 +69,23 @@ func disableOIDC(resetAll bool, keysToReset map[string]bool) { } if resetAll { for _, keyConfig := range oidcConfig { + logrus.Info( + formatter.Colorize( + fmt.Sprintf("Deleting key: %s\n", util.OidcKeyToFlagMap[keyConfig.GetKey()]), + formatter.GreenColor, + ), + ) key.DeleteGlobalKey(authAPI, keyConfig.GetKey()) } } else { for _, keyConfig := range oidcConfig { if _, exists := keysToReset[keyConfig.GetKey()]; exists { + logrus.Info( + formatter.Colorize( + fmt.Sprintf("Deleting key: %s\n", util.OidcKeyToFlagMap[keyConfig.GetKey()]), + formatter.GreenColor, + ), + ) key.DeleteGlobalKey(authAPI, keyConfig.GetKey()) } } diff --git a/managed/yba-cli/cmd/util/ldap_runtime_keys.go b/managed/yba-cli/cmd/util/ldap_runtime_keys.go new file mode 100644 index 000000000000..7df09aaf6efc --- /dev/null +++ b/managed/yba-cli/cmd/util/ldap_runtime_keys.go @@ -0,0 +1,91 @@ +/* + * Copyright (c) YugaByte, Inc. 
+ */ + +package util + +// LDAP keys for YBA +const ( + ToggleLDAPKey = "yb.security.ldap.use_ldap" + LDAPHostKey = "yb.security.ldap.ldap_url" + LDAPPortKey = "yb.security.ldap.ldap_port" + UseLDAPSKey = "yb.security.ldap.enable_ldaps" + UseStartTLSKey = "yb.security.ldap.enable_ldap_start_tls" + LDAPTLSVersionKey = "yb.security.ldap.ldap_tls_protocol" + LDAPBaseDNKey = "yb.security.ldap.ldap_basedn" + LDAPDNPrefixKey = "yb.security.ldap.ldap_dn_prefix" + LDAPCustomerUUIDKey = "yb.security.ldap.ldap_customer_uuid" + LDAPSearchAndBindKey = "yb.security.ldap.use_search_and_bind" + LDAPSearchAttributeKey = "yb.security.ldap.ldap_search_attribute" + LDAPSearchFilterKey = "yb.security.ldap.ldap_search_filter" + LDAPServiceAccountDNKey = "yb.security.ldap.ldap_service_account_distinguished_name" + LDAPServiceAccountPasswordKey = "yb.security.ldap.ldap_service_account_password" + LDAPDefaultRoleKey = "yb.security.ldap.ldap_default_role" + LDAPGroupAttributeKey = "yb.security.ldap.ldap_group_member_of_attribute" + LDAPGroupSearchFilterKey = "yb.security.ldap.ldap_group_search_filter" + LDAPGroupSearchBaseKey = "yb.security.ldap.ldap_group_search_base_dn" + LDAPGroupSearchScopeKey = "yb.security.ldap.ldap_group_search_scope" + LDAPGroupUseQueryKey = "yb.security.ldap.ldap_group_use_query" + LDAPGroupUseRoleMapping = "yb.security.ldap.ldap_group_use_role_mapping" +) + +// LDAPKeyToFlagMap is a map of LDAP keys to their corresponding flags +var LDAPKeyToFlagMap = map[string]string{ + ToggleLDAPKey: "ldap-enabled", + LDAPHostKey: "ldap-host", + LDAPPortKey: "ldap-port", + UseLDAPSKey: "ldaps-enabled", + UseStartTLSKey: "start-tls-enabled", + LDAPTLSVersionKey: "tls-version", + LDAPBaseDNKey: "base-dn", + LDAPDNPrefixKey: "dn-prefix", + LDAPCustomerUUIDKey: "customer-uuid", + LDAPSearchAndBindKey: "search-and-bind-enabled", + LDAPSearchAttributeKey: "search-attribute", + LDAPSearchFilterKey: "search-filter", + LDAPServiceAccountDNKey: "service-account-dn", + LDAPServiceAccountPasswordKey: "service-account-password", + LDAPDefaultRoleKey: "default-role", + LDAPGroupAttributeKey: "group-attribute", + LDAPGroupSearchFilterKey: "group-search-filter", + LDAPGroupSearchBaseKey: "group-search-base", + LDAPGroupSearchScopeKey: "group-search-scope", + LDAPGroupUseQueryKey: "group-use-query", + LDAPGroupUseRoleMapping: "group-use-role-mapping", +} + +// IsLDAPKey checks if the given key is an LDAP key +func IsLDAPKey(key string) bool { + _, exists := LDAPKeyToFlagMap[key] + return exists +} + +// ResetFlagToLDAPKeyMap is a map of LDAP keys to their corresponding reset flags +var ResetFlagToLDAPKeyMap = map[string]string{ + "ldap-host": LDAPHostKey, + "ldap-port": LDAPPortKey, + "ldaps-enabled": UseLDAPSKey, + "start-tls-enabled": UseStartTLSKey, + "tls-version": LDAPTLSVersionKey, + "base-dn": LDAPBaseDNKey, + "dn-prefix": LDAPDNPrefixKey, + "customer-uuid": LDAPCustomerUUIDKey, + "search-and-bind-enabled": LDAPSearchAndBindKey, + "search-attribute": LDAPSearchAttributeKey, + "search-filter": LDAPSearchFilterKey, + "service-account-dn": LDAPServiceAccountDNKey, + "service-account-password": LDAPServiceAccountPasswordKey, + "default-role": LDAPDefaultRoleKey, + "group-attribute": LDAPGroupAttributeKey, + "group-search-filter": LDAPGroupSearchFilterKey, + "group-search-base": LDAPGroupSearchBaseKey, + "group-search-scope": LDAPGroupSearchScopeKey, + "group-use-query": LDAPGroupUseQueryKey, + "group-use-role-mapping": LDAPGroupUseRoleMapping, +} + +// GetLDAPKeyForResetFlag returns the LDAP key for the given reset 
flag +func GetLDAPKeyForResetFlag(flag string) (string, bool) { + key, exists := ResetFlagToLDAPKeyMap[flag] + return key, exists +} diff --git a/managed/yba-cli/docs/yba_ldap.md b/managed/yba-cli/docs/yba_ldap.md index d6130c3de2c3..e075440cd926 100644 --- a/managed/yba-cli/docs/yba_ldap.md +++ b/managed/yba-cli/docs/yba_ldap.md @@ -37,5 +37,6 @@ yba ldap [flags] * [yba](yba.md) - yba - Command line tools to manage your YugabyteDB Anywhere (Self-managed Database-as-a-Service) resources. * [yba ldap configure](yba_ldap_configure.md) - Configure LDAP authentication for YBA +* [yba ldap describe](yba_ldap_describe.md) - Describe LDAP configuration for YBA * [yba ldap disable](yba_ldap_disable.md) - Disable LDAP authentication for YBA diff --git a/managed/yba-cli/docs/yba_ldap_configure.md b/managed/yba-cli/docs/yba_ldap_configure.md index 45d56e9d4ad4..ece0c5ece9aa 100644 --- a/managed/yba-cli/docs/yba_ldap_configure.md +++ b/managed/yba-cli/docs/yba_ldap_configure.md @@ -23,7 +23,7 @@ yba ldap configure --ldap-host --ldap-port --base-dn '"< --ldap-port int [Optional] LDAP server port (default 389) --ldap-ssl-protocol string [Optional] LDAP SSL protocol. Allowed values: none, ldaps, starttls. (default "none") --ldap-tls-version string [Optional] LDAP TLS version. Allowed values (case sensitive): TLSv1, TLSv1_1 and TLSv1_2. (default "TLSv1_2") - -b, --base-dn string [Optional] Seach base DN for LDAP. Must be enclosed in double quotes. + -b, --base-dn string [Optional] Search base DN for LDAP. Must be enclosed in double quotes. --dn-prefix string [Optional] Prefix to be appended to the username for LDAP search. Must be enclosed in double quotes. (default "CN=") --customer-uuid string [Optional] YBA Customer UUID for LDAP authentication (Only for multi-tenant YBA) diff --git a/managed/yba-cli/docs/yba_ldap_describe.md b/managed/yba-cli/docs/yba_ldap_describe.md new file mode 100644 index 000000000000..2f04ce6e49b2 --- /dev/null +++ b/managed/yba-cli/docs/yba_ldap_describe.md @@ -0,0 +1,46 @@ +## yba ldap describe + +Describe LDAP configuration for YBA + +### Synopsis + +Describe LDAP configuration for YBA + +``` +yba ldap describe [flags] +``` + +### Examples + +``` +yba ldap describe +``` + +### Options + +``` + --user-set-only [Optional] Only show the fields that were set by the user explicitly. + -h, --help help for describe +``` + +### Options inherited from parent commands + +``` + -a, --apiToken string YugabyteDB Anywhere api token. + --ca-cert string CA certificate file path for secure connection to YugabyteDB Anywhere. Required when the endpoint is https and --insecure is not set. + --config string Full path to a specific configuration file for YBA CLI. If provided, this takes precedence over the directory specified via --directory, and the generated files are added to the same path. If not provided, the CLI will look for '.yba-cli.yaml' in the directory specified by --directory. Defaults to '$HOME/.yba-cli/.yba-cli.yaml'. + --debug Use debug mode, same as --logLevel debug. + --directory string Directory containing YBA CLI configuration and generated files. If specified, the CLI will look for a configuration file named '.yba-cli.yaml' in this directory. Defaults to '$HOME/.yba-cli/'. + --disable-color Disable colors in output. (default false) + -H, --host string YugabyteDB Anywhere Host (default "http://localhost:9000") + --insecure Allow insecure connections to YugabyteDB Anywhere. Value ignored for http endpoints. Defaults to false for https. 
+ -l, --logLevel string Select the desired log level format. Allowed values: debug, info, warn, error, fatal. (default "info") + -o, --output string Select the desired output format. Allowed values: table, json, pretty. (default "table") + --timeout duration Wait command timeout, example: 5m, 1h. (default 168h0m0s) + --wait Wait until the task is completed, otherwise it will exit immediately. (default true) +``` + +### SEE ALSO + +* [yba ldap](yba_ldap.md) - Configure LDAP authentication for YBA + diff --git a/managed/yba-cli/docs/yba_ldap_disable.md b/managed/yba-cli/docs/yba_ldap_disable.md index 46c7875db919..a09356a93437 100644 --- a/managed/yba-cli/docs/yba_ldap_disable.md +++ b/managed/yba-cli/docs/yba_ldap_disable.md @@ -10,10 +10,21 @@ Disable LDAP authentication for YBA yba ldap disable [flags] ``` +### Examples + +``` +yba ldap disable --reset-fields ldap-host,ldap-port +``` + ### Options ``` - -h, --help help for disable + --reset-all [Optional] Reset all LDAP fields to default values + --reset-fields strings [Optional] Reset specific LDAP fields to default values. Comma separated list of fields. Example: --reset-fields , + Allowed values: ldap-host, ldap-port, ldap-ssl-protocol,tls-version, base-dn, dn-prefix, customer-uuid, search-and-bind-enabled, + search-attribute, search-filter, service-account-dn, default-role,service-account-password, group-attribute, group-search-filter, + group-search-base, group-search-scope, group-use-query, group-use-role-mapping + -h, --help help for disable ``` ### Options inherited from parent commands diff --git a/managed/yba-cli/docs/yba_oidc.md b/managed/yba-cli/docs/yba_oidc.md index 5478bec41bdd..0d59ab5c7acc 100644 --- a/managed/yba-cli/docs/yba_oidc.md +++ b/managed/yba-cli/docs/yba_oidc.md @@ -37,6 +37,6 @@ yba oidc [flags] * [yba](yba.md) - yba - Command line tools to manage your YugabyteDB Anywhere (Self-managed Database-as-a-Service) resources. * [yba oidc configure](yba_oidc_configure.md) - Configure OIDC configuration for YBA +* [yba oidc describe](yba_oidc_describe.md) - Describe OIDC configuration for YBA * [yba oidc disable](yba_oidc_disable.md) - Disable OIDC configuration for YBA -* [yba oidc get](yba_oidc_get.md) - Get OIDC configuration for YBA diff --git a/managed/yba-cli/docs/yba_oidc_get.md b/managed/yba-cli/docs/yba_oidc_describe.md similarity index 91% rename from managed/yba-cli/docs/yba_oidc_get.md rename to managed/yba-cli/docs/yba_oidc_describe.md index 15589438d0a6..37f0d69a55a5 100644 --- a/managed/yba-cli/docs/yba_oidc_get.md +++ b/managed/yba-cli/docs/yba_oidc_describe.md @@ -1,26 +1,26 @@ -## yba oidc get +## yba oidc describe -Get OIDC configuration for YBA +Describe OIDC configuration for YBA ### Synopsis -Get OIDC configuration for YBA +Describe OIDC configuration for YBA ``` -yba oidc get [flags] +yba oidc describe [flags] ``` ### Examples ``` -yba oidc get +yba oidc describe ``` ### Options ``` --user-set-only [Optional] Only show the attributes that were set by the user explicitly. 
- -h, --help help for get + -h, --help help for describe ``` ### Options inherited from parent commands diff --git a/managed/yba-cli/docs/yba_oidc_disable.md b/managed/yba-cli/docs/yba_oidc_disable.md index 7f0eff11ad99..046b5f7b9a19 100644 --- a/managed/yba-cli/docs/yba_oidc_disable.md +++ b/managed/yba-cli/docs/yba_oidc_disable.md @@ -13,15 +13,15 @@ yba oidc disable [flags] ### Examples ``` -yba oidc disable --reset-configs +yba oidc disable --reset-fields client-id,client-secret ``` ### Options ``` --reset-all [Optional] Reset all OIDC fields to default values - --reset-fields strings [Optional] Reset specific OIDC fields to default values. Comma separated list of fields. Example: --reset-fields - Available fields: client-id, client-secret, discovery-url, scope, email-attribute, default-role + --reset-fields strings [Optional] Reset specific OIDC fields to default values. Comma separated list of fields. Example: --reset-fields , + Allowed values: client-id, client-secret, discovery-url, scope, email-attribute, default-role refresh-token-endpoint, provider-configuration, auto-create-user, group-claim -h, --help help for disable ``` diff --git a/managed/yba-cli/internal/formatter/ldap/ldap.go b/managed/yba-cli/internal/formatter/ldap/ldap.go new file mode 100644 index 000000000000..efac9d0b7450 --- /dev/null +++ b/managed/yba-cli/internal/formatter/ldap/ldap.go @@ -0,0 +1,104 @@ +/* + * Copyright (c) YugaByte, Inc. + */ + +package ldap + +import ( + "encoding/json" + + "github.com/sirupsen/logrus" + ybaclient "github.com/yugabyte/platform-go-client" + "github.com/yugabyte/yugabyte-db/managed/yba-cli/cmd/util" + "github.com/yugabyte/yugabyte-db/managed/yba-cli/internal/formatter" +) + +const ( + defaultLDAPConfigListing = "table {{.Key}}\t{{.Value}}" + configHeader = "Key" + valueHeader = "Value" +) + +// Context for user outputs +type Context struct { + formatter.HeaderContext + formatter.Context + configEntry ybaclient.ConfigEntry +} + +// NewLDAPFormat for formatting output +func NewLDAPFormat(source string) formatter.Format { + switch source { + case formatter.TableFormatKey, "": + format := defaultLDAPConfigListing + return formatter.Format(format) + default: // custom format or json or pretty + return formatter.Format(source) + } +} + +// Write renders the context for a list of ldap config entries +func Write(ctx formatter.Context, ldapConfig []ybaclient.ConfigEntry) error { + // Check if the format is JSON or Pretty JSON + if (ctx.Format.IsJSON() || ctx.Format.IsPrettyJSON()) && ctx.Command.IsListCommand() { + // Marshal the slice of ldapConfig into JSON + var output []byte + var err error + + if ctx.Format.IsPrettyJSON() { + output, err = json.MarshalIndent(ldapConfig, "", " ") + } else { + output, err = json.Marshal(ldapConfig) + } + + if err != nil { + logrus.Errorf("Error marshaling ldap config entries to json: %v\n", err) + return err + } + + // Write the JSON output to the context + _, err = ctx.Output.Write(output) + return err + } + + // Existing logic for table and other formats + render := func(format func(subContext formatter.SubContext) error) error { + for _, configEntry := range ldapConfig { + err := format(&Context{configEntry: configEntry}) + if err != nil { + logrus.Debugf("Error rendering user: %v\n", err) + return err + } + } + return nil + } + return ctx.Write(NewLDAPContext(), render) +} + +// NewLDAPContext creates a new context for rendering user +func NewLDAPContext() *Context { + ldapConfigEntryCtx := Context{} + ldapConfigEntryCtx.Header = 
formatter.SubHeaderContext{ + "Key": configHeader, + "Value": valueHeader, + } + return &ldapConfigEntryCtx +} + +// Key fetches Key Name +func (c *Context) Key() string { + if key, exists := util.LDAPKeyToFlagMap[c.configEntry.GetKey()]; exists { + return key + } + return c.configEntry.GetKey() +} + +// Value fetches Key Value +func (c *Context) Value() string { + return c.configEntry.GetValue() +} + +// MarshalJSON function +func (c *Context) MarshalJSON() ([]byte, error) { + return json.Marshal(c.configEntry) +} From 9c70e1ec3e1cfa273258f6ea010c4b387e88b002 Mon Sep 17 00:00:00 2001 From: Sami Ahmed Siddiqui Date: Mon, 12 May 2025 13:13:38 +0500 Subject: [PATCH 027/146] Sticky right navigation (#27139) --- docs/assets/scss/_yb_container.scss | 9 +-------- docs/assets/scss/_yb_headings.scss | 3 --- 2 files changed, 1 insertion(+), 11 deletions(-) diff --git a/docs/assets/scss/_yb_container.scss b/docs/assets/scss/_yb_container.scss index bdf994562dc5..e16ed6e90ca8 100644 --- a/docs/assets/scss/_yb_container.scss +++ b/docs/assets/scss/_yb_container.scss @@ -183,11 +183,11 @@ html { @media (max-width: 1199px) { .td-main { .content-parent { - // max-width: calc(100% - 250px);max-width justify-content: center; padding-left: 20px; padding-right: 20px; margin-left: 0; + overflow: hidden; .content-child { max-width: 100%; @@ -196,7 +196,6 @@ html { main { width: 100%; - // max-width: 828px;max-width } } } @@ -259,12 +258,6 @@ html { } } -@media (max-width: 1300px) and(min-width: 1200px) { - .td-main .content-parent { - // padding: 0 85px;padding - } -} - .dragging { .td-main .content-parent .content-child, .td-main aside.td-sidebar .left-sidebar-wrap, diff --git a/docs/assets/scss/_yb_headings.scss b/docs/assets/scss/_yb_headings.scss index 84daf2013d6e..2994deb0351c 100644 --- a/docs/assets/scss/_yb_headings.scss +++ b/docs/assets/scss/_yb_headings.scss @@ -241,6 +241,3 @@ .td-searchpage .td-content h1 { margin-bottom: 16px; } -.content-parent { - overflow: hidden; -} From f9f7e50b87207cdcfdd049848239422ed9a75206 Mon Sep 17 00:00:00 2001 From: timothy-e Date: Mon, 12 May 2025 07:23:23 -0400 Subject: [PATCH 028/146] [#27112] YSQL: fix TestPgRegressProc on Mac Summary: `yb.orig.get_current_transaction_priority` has been failing on Mac for a long time, due to some small differences in output ```lang=diff < 0.400000000 (High priority transaction) --- > 0.400000095 (High priority transaction) 177c177 < 7 | 0.400000000 (High priority transaction) --- > 7 | 0.400000095 (High priority transaction) 191c191 < 0.400000000 (High priority transaction) --- > 0.400000095 (High priority transaction) 204,205c204,205 < 7 | 0.400000000 (High priority transaction) < 8 | 0.400000000 (High priority transaction) --- > 7 | 0.400000095 (High priority transaction) > 8 | 0.400000095 (High priority transaction) ``` The solution is to split the output into two fields, one for the number and one for the comment, allowing the number to be presented in whatever way we choose. 2 decimal places suffices to show the meaning of the test without also testing floating point representation. Create the test function `yb_get_current_transaction_priority_platform_independent` to do this, and then modify the test to use this function. The function is slightly complex because it needs to handle the case where `yb_get_current_transaction_priority()` returns just `(Highest Priority Transaction)`. 
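As a minimal stand-alone illustration (not part of the patch itself), the split can be applied to a sample raw value as below; the literal `0.400000095 (High priority transaction)` is only an assumed example input taken from the failing output above:

```lang=sql
-- Hypothetical sketch of the same string split the helper performs:
-- the numeric prefix is rounded to two decimals, the remainder becomes the category.
SELECT
  substring(v, 1, position(' ' IN v) - 1)::NUMERIC(3, 2) AS priority,  -- 0.40
  substring(v, position(' ' IN v) + 1)                   AS category   -- (High priority transaction)
FROM (SELECT '0.400000095 (High priority transaction)'::TEXT AS v) AS s;
```
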
Jira: DB-16598 Test Plan: Jenkins: test regex: .*TestPgRegressProc.* ``` ./yb_build.sh --java-test TestPgRegressProc ``` Reviewers: kramanathan Reviewed By: kramanathan Subscribers: svc_phabricator, yql Differential Revision: https://phorge.dev.yugabyte.com/D43854 --- ....orig.get_current_transaction_priority.out | 235 ++++++++++-------- ....orig.get_current_transaction_priority.sql | 59 +++-- 2 files changed, 166 insertions(+), 128 deletions(-) diff --git a/src/postgres/src/test/regress/expected/yb.orig.get_current_transaction_priority.out b/src/postgres/src/test/regress/expected/yb.orig.get_current_transaction_priority.out index 60bf6197e114..ed9309bdac74 100644 --- a/src/postgres/src/test/regress/expected/yb.orig.get_current_transaction_priority.out +++ b/src/postgres/src/test/regress/expected/yb.orig.get_current_transaction_priority.out @@ -45,164 +45,183 @@ -- NOTE: As an exception, if a transaction is assigned the highest priority possible -- i.e., kHighPriTxnUpperBound, then a single value "Highest priority transaction" is returned -- without any float. +CREATE FUNCTION yb_get_current_transaction_priority_platform_independent() +RETURNS TABLE (priority NUMERIC(3, 2), category TEXT) +LANGUAGE SQL +AS $$ + SELECT + CASE + WHEN v::TEXT ~ '^\d+(\.\d+)? ' THEN + substring(v::TEXT, 1, position(' ' IN v::TEXT) - 1)::NUMERIC(3, 2) + ELSE + NULL + END AS priority, + CASE + WHEN v::TEXT ~ '^\d+(\.\d+)? ' THEN + substring(v::TEXT, position(' ' IN v::TEXT) + 1) + ELSE + v::TEXT + END AS category + FROM yb_get_current_transaction_priority() AS v; +$$; SET yb_transaction_priority_lower_bound = 0.4; NOTICE: priorities don't exist for read committed isolation transations, the transaction will wait for conflicting transactions to commit before proceeding DETAIL: This also applies to other isolation levels if using Wait-on-Conflict concurrency control. SET yb_transaction_priority_upper_bound = 0.4; NOTICE: priorities don't exist for read committed isolation transations, the transaction will wait for conflicting transactions to commit before proceeding DETAIL: This also applies to other isolation levels if using Wait-on-Conflict concurrency control. -CREATE TABLE test (k int primary key, v varchar(100)); -INSERT INTO test values (1, '1'); +CREATE TABLE test (k INT PRIMARY KEY, priority NUMERIC(3, 2), category VARCHAR(100)); +INSERT INTO test (k, priority) VALUES (1, '1'); -- (1) Check that transaction priority is 0 until a distributed txn is started. 
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ; -SELECT yb_get_current_transaction_priority(); -- 0 since a distributed transaction hasn't started - yb_get_current_transaction_priority -------------------------------------------- - 0.000000000 (Normal priority transaction) +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); -- 0 since a distributed transaction hasn't started + priority | category +----------+------------------------------- + 0.00 | (Normal priority transaction) (1 row) SELECT * FROM test; -- this is a read-only operation and doesn't start a distributed transaction - k | v ----+--- - 1 | 1 + k | priority | category +---+----------+---------- + 1 | 1.00 | (1 row) -SELECT yb_get_current_transaction_priority(); -- still 0 - yb_get_current_transaction_priority -------------------------------------------- - 0.000000000 (Normal priority transaction) +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); -- still 0 + priority | category +----------+------------------------------- + 0.00 | (Normal priority transaction) (1 row) INSERT INTO test VALUES (2, '2'); -- start a distributed txn -SELECT yb_get_current_transaction_priority(); -- non-zero now - yb_get_current_transaction_priority -------------------------------------------- - 0.400000000 (Normal priority transaction) +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); -- non-zero now + priority | category +----------+------------------------------- + 0.40 | (Normal priority transaction) (1 row) COMMIT; BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; -SELECT yb_get_current_transaction_priority(); -- 0 since a distributed transaction hasn't started - yb_get_current_transaction_priority -------------------------------------------- - 0.000000000 (Normal priority transaction) +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); -- 0 since a distributed transaction hasn't started + priority | category +----------+------------------------------- + 0.00 | (Normal priority transaction) (1 row) SELECT * FROM test; -- reads start a distributed txn in serializable isolation level - k | v ----+--- - 1 | 1 - 2 | 2 + k | priority | category +---+----------+---------- + 1 | 1.00 | + 2 | 2.00 | (2 rows) -SELECT yb_get_current_transaction_priority(); - yb_get_current_transaction_priority -------------------------------------------- - 0.400000000 (Normal priority transaction) +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); + priority | category +----------+------------------------------- + 0.40 | (Normal priority transaction) (1 row) COMMIT; -- (2) Showing yb_transaction_priority outside a transaction block -SELECT yb_get_current_transaction_priority(); - yb_get_current_transaction_priority -------------------------------------------- - 0.000000000 (Normal priority transaction) +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); + priority | category +----------+------------------------------- + 0.00 | (Normal priority transaction) (1 row) -- (3) Normal priority BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ; -INSERT INTO test (k, v) SELECT 3, yb_get_current_transaction_priority(); -- starts a distributed transaction -SELECT yb_get_current_transaction_priority(); - yb_get_current_transaction_priority -------------------------------------------- - 0.400000000 (Normal priority transaction) +INSERT INTO test (k, priority, category) SELECT 3, * FROM 
yb_get_current_transaction_priority_platform_independent(); -- starts a distributed transaction +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); + priority | category +----------+------------------------------- + 0.40 | (Normal priority transaction) (1 row) -INSERT INTO test (k, v) SELECT 4, yb_get_current_transaction_priority(); +INSERT INTO test SELECT 4, * FROM yb_get_current_transaction_priority_platform_independent(); SELECT * FROM test ORDER BY k; - k | v ----+------------------------------------------- - 1 | 1 - 2 | 2 - 3 | 0.000000000 (Normal priority transaction) - 4 | 0.400000000 (Normal priority transaction) + k | priority | category +---+----------+------------------------------- + 1 | 1.00 | + 2 | 2.00 | + 3 | 0.00 | (Normal priority transaction) + 4 | 0.40 | (Normal priority transaction) (4 rows) COMMIT; BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; -INSERT INTO test (k, v) SELECT 5, yb_get_current_transaction_priority(); -- starts a distributed transaction -SELECT yb_get_current_transaction_priority(); - yb_get_current_transaction_priority -------------------------------------------- - 0.400000000 (Normal priority transaction) +INSERT INTO test SELECT 5, * FROM yb_get_current_transaction_priority_platform_independent(); -- starts a distributed transaction +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); + priority | category +----------+------------------------------- + 0.40 | (Normal priority transaction) (1 row) -INSERT INTO test (k, v) SELECT 6, yb_get_current_transaction_priority(); +INSERT INTO test SELECT 6, * FROM yb_get_current_transaction_priority_platform_independent(); SELECT * FROM test ORDER BY k; - k | v ----+------------------------------------------- - 1 | 1 - 2 | 2 - 3 | 0.000000000 (Normal priority transaction) - 4 | 0.400000000 (Normal priority transaction) - 5 | 0.000000000 (Normal priority transaction) - 6 | 0.400000000 (Normal priority transaction) + k | priority | category +---+----------+------------------------------- + 1 | 1.00 | + 2 | 2.00 | + 3 | 0.00 | (Normal priority transaction) + 4 | 0.40 | (Normal priority transaction) + 5 | 0.00 | (Normal priority transaction) + 6 | 0.40 | (Normal priority transaction) (6 rows) COMMIT; -- (4) High priority BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ; SELECT * FROM test WHERE k = 1 FOR UPDATE; -- starts a distributed transaction in high pri bucket - k | v ----+--- - 1 | 1 + k | priority | category +---+----------+---------- + 1 | 1.00 | (1 row) -SELECT yb_get_current_transaction_priority(); - yb_get_current_transaction_priority ------------------------------------------ - 0.400000000 (High priority transaction) +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); + priority | category +----------+----------------------------- + 0.40 | (High priority transaction) (1 row) -INSERT INTO test (k, v) SELECT 7, yb_get_current_transaction_priority(); +INSERT INTO test SELECT 7, * FROM yb_get_current_transaction_priority_platform_independent(); SELECT * FROM test ORDER BY k; - k | v ----+------------------------------------------- - 1 | 1 - 2 | 2 - 3 | 0.000000000 (Normal priority transaction) - 4 | 0.400000000 (Normal priority transaction) - 5 | 0.000000000 (Normal priority transaction) - 6 | 0.400000000 (Normal priority transaction) - 7 | 0.400000000 (High priority transaction) + k | priority | category +---+----------+------------------------------- + 1 | 1.00 | + 2 | 2.00 | + 3 | 0.00 | (Normal priority transaction) + 4 | 0.40 | 
(Normal priority transaction) + 5 | 0.00 | (Normal priority transaction) + 6 | 0.40 | (Normal priority transaction) + 7 | 0.40 | (High priority transaction) (7 rows) COMMIT; BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; SELECT * FROM test WHERE k = 1 FOR UPDATE; -- starts a distributed transaction in high pri bucket - k | v ----+--- - 1 | 1 + k | priority | category +---+----------+---------- + 1 | 1.00 | (1 row) -SELECT yb_get_current_transaction_priority(); - yb_get_current_transaction_priority ------------------------------------------ - 0.400000000 (High priority transaction) +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); + priority | category +----------+----------------------------- + 0.40 | (High priority transaction) (1 row) -INSERT INTO test (k, v) SELECT 8, yb_get_current_transaction_priority(); +INSERT INTO test SELECT 8, * FROM yb_get_current_transaction_priority_platform_independent(); SELECT * FROM test ORDER BY k; - k | v ----+------------------------------------------- - 1 | 1 - 2 | 2 - 3 | 0.000000000 (Normal priority transaction) - 4 | 0.400000000 (Normal priority transaction) - 5 | 0.000000000 (Normal priority transaction) - 6 | 0.400000000 (Normal priority transaction) - 7 | 0.400000000 (High priority transaction) - 8 | 0.400000000 (High priority transaction) + k | priority | category +---+----------+------------------------------- + 1 | 1.00 | + 2 | 2.00 | + 3 | 0.00 | (Normal priority transaction) + 4 | 0.40 | (Normal priority transaction) + 5 | 0.00 | (Normal priority transaction) + 6 | 0.40 | (Normal priority transaction) + 7 | 0.40 | (High priority transaction) + 8 | 0.40 | (High priority transaction) (8 rows) COMMIT; @@ -215,29 +234,29 @@ NOTICE: priorities don't exist for read committed isolation transations, the tr DETAIL: This also applies to other isolation levels if using Wait-on-Conflict concurrency control. BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ; SELECT * FROM test WHERE k = 1 FOR UPDATE; - k | v ----+--- - 1 | 1 + k | priority | category +---+----------+---------- + 1 | 1.00 | (1 row) -SELECT yb_get_current_transaction_priority(); - yb_get_current_transaction_priority -------------------------------------- - Highest priority transaction +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); + priority | category +----------+------------------------------ + | Highest priority transaction (1 row) COMMIT; BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; SELECT * FROM test WHERE k = 1 FOR UPDATE; - k | v ----+--- - 1 | 1 + k | priority | category +---+----------+---------- + 1 | 1.00 | (1 row) -SELECT yb_get_current_transaction_priority(); - yb_get_current_transaction_priority -------------------------------------- - Highest priority transaction +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); + priority | category +----------+------------------------------ + | Highest priority transaction (1 row) COMMIT; diff --git a/src/postgres/src/test/regress/sql/yb.orig.get_current_transaction_priority.sql b/src/postgres/src/test/regress/sql/yb.orig.get_current_transaction_priority.sql index 33b91e060899..895dd3f0cbc3 100644 --- a/src/postgres/src/test/regress/sql/yb.orig.get_current_transaction_priority.sql +++ b/src/postgres/src/test/regress/sql/yb.orig.get_current_transaction_priority.sql @@ -46,57 +46,76 @@ -- i.e., kHighPriTxnUpperBound, then a single value "Highest priority transaction" is returned -- without any float. 
+CREATE FUNCTION yb_get_current_transaction_priority_platform_independent() +RETURNS TABLE (priority NUMERIC(3, 2), category TEXT) +LANGUAGE SQL +AS $$ + SELECT + CASE + WHEN v::TEXT ~ '^\d+(\.\d+)? ' THEN + substring(v::TEXT, 1, position(' ' IN v::TEXT) - 1)::NUMERIC(3, 2) + ELSE + NULL + END AS priority, + CASE + WHEN v::TEXT ~ '^\d+(\.\d+)? ' THEN + substring(v::TEXT, position(' ' IN v::TEXT) + 1) + ELSE + v::TEXT + END AS category + FROM yb_get_current_transaction_priority() AS v; +$$; SET yb_transaction_priority_lower_bound = 0.4; SET yb_transaction_priority_upper_bound = 0.4; -CREATE TABLE test (k int primary key, v varchar(100)); -INSERT INTO test values (1, '1'); +CREATE TABLE test (k INT PRIMARY KEY, priority NUMERIC(3, 2), category VARCHAR(100)); +INSERT INTO test (k, priority) VALUES (1, '1'); -- (1) Check that transaction priority is 0 until a distributed txn is started. BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ; -SELECT yb_get_current_transaction_priority(); -- 0 since a distributed transaction hasn't started +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); -- 0 since a distributed transaction hasn't started SELECT * FROM test; -- this is a read-only operation and doesn't start a distributed transaction -SELECT yb_get_current_transaction_priority(); -- still 0 +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); -- still 0 INSERT INTO test VALUES (2, '2'); -- start a distributed txn -SELECT yb_get_current_transaction_priority(); -- non-zero now +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); -- non-zero now COMMIT; BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; -SELECT yb_get_current_transaction_priority(); -- 0 since a distributed transaction hasn't started +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); -- 0 since a distributed transaction hasn't started SELECT * FROM test; -- reads start a distributed txn in serializable isolation level -SELECT yb_get_current_transaction_priority(); +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); COMMIT; -- (2) Showing yb_transaction_priority outside a transaction block -SELECT yb_get_current_transaction_priority(); +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); -- (3) Normal priority BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ; -INSERT INTO test (k, v) SELECT 3, yb_get_current_transaction_priority(); -- starts a distributed transaction -SELECT yb_get_current_transaction_priority(); -INSERT INTO test (k, v) SELECT 4, yb_get_current_transaction_priority(); +INSERT INTO test (k, priority, category) SELECT 3, * FROM yb_get_current_transaction_priority_platform_independent(); -- starts a distributed transaction +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); +INSERT INTO test SELECT 4, * FROM yb_get_current_transaction_priority_platform_independent(); SELECT * FROM test ORDER BY k; COMMIT; BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; -INSERT INTO test (k, v) SELECT 5, yb_get_current_transaction_priority(); -- starts a distributed transaction -SELECT yb_get_current_transaction_priority(); -INSERT INTO test (k, v) SELECT 6, yb_get_current_transaction_priority(); +INSERT INTO test SELECT 5, * FROM yb_get_current_transaction_priority_platform_independent(); -- starts a distributed transaction +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); +INSERT INTO test SELECT 6, * FROM 
yb_get_current_transaction_priority_platform_independent(); SELECT * FROM test ORDER BY k; COMMIT; -- (4) High priority BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ; SELECT * FROM test WHERE k = 1 FOR UPDATE; -- starts a distributed transaction in high pri bucket -SELECT yb_get_current_transaction_priority(); -INSERT INTO test (k, v) SELECT 7, yb_get_current_transaction_priority(); +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); +INSERT INTO test SELECT 7, * FROM yb_get_current_transaction_priority_platform_independent(); SELECT * FROM test ORDER BY k; COMMIT; BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; SELECT * FROM test WHERE k = 1 FOR UPDATE; -- starts a distributed transaction in high pri bucket -SELECT yb_get_current_transaction_priority(); -INSERT INTO test (k, v) SELECT 8, yb_get_current_transaction_priority(); +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); +INSERT INTO test SELECT 8, * FROM yb_get_current_transaction_priority_platform_independent(); SELECT * FROM test ORDER BY k; COMMIT; @@ -105,10 +124,10 @@ SET yb_transaction_priority_upper_bound = 1; SET yb_transaction_priority_lower_bound = 1; BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ; SELECT * FROM test WHERE k = 1 FOR UPDATE; -SELECT yb_get_current_transaction_priority(); +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); COMMIT; BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; SELECT * FROM test WHERE k = 1 FOR UPDATE; -SELECT yb_get_current_transaction_priority(); +SELECT * FROM yb_get_current_transaction_priority_platform_independent(); COMMIT; From d8bca34ffc1206ee9c43ded7fa2a5cfce885ff15 Mon Sep 17 00:00:00 2001 From: Karthik Ramanathan Date: Thu, 8 May 2025 12:04:13 -0400 Subject: [PATCH 029/146] [#11554, #11555] YSQL: Fix DDL statements in postgres_fdw test MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Summary: The postgres_fdw extension transparently manages a YSQL connection to the foreign (remote) server. Queries to foreign tables are received by the extension over a “local” connection, translated into a remote query and sent over the “remote” connection to the foreign server. Consider the following example: ```sql CREATE TABLE t1 (k INT, v INT); CREATE FOREIGN TABLE ft1 (a INT OPTIONS (column_name 'k'), b INT OPTIONS (column_name 'v')); ``` A query which references columns `ft1.a, ft1.b` is translated into `t1.a, t1.b` as follows: ```sql SELECT a, b FROM ft1; -- local connection -- is translated into SELECT k, v FROM t1; -- remote connection ``` The postgres_fdw regress test uses a loopback interface to test the foreign server. In other words, the test reuses the local DB node as a foreign server. Therefore, both the local connection and the remote connection point to the same physical objects, despite referencing different logical objects. This is problematic for DDLs as the catalog versions in the local and remote connection can go out of sync, causing `Catalog Version Mismatch` errors. The regress test currently works around this error by sleeping for 1 second after every batch of DDLs which allows for the new catalog version to propagate via tserver <--> master heartbeat. The DDLs can broadly be put into 3 buckets: - The DDL only touches the foreign table entry (ft1 in the above example) AND the queries that follow it do not error out. The remote connection will always deal with the local table (t1) and there will be no mismatch. - Nothing needs to be done in such scenarios. 
- The DDL only touches the foreign table entry AND the queries that follow it produce a warning/error. - This can be worked around by waiting for the next tserver <--> master heartbeat. - This is necessary to ensure that the remote connection "sees" the new catalog version and uses it for the query. - In the absence of this, "Catalog Version Mismatch" errors are produced. This is not of concern in practical scenarios where postgres_fdw will be used to connect distinct clusters. - A necessary condition to encounter spurious "Catalog Version Mismatch" errors is that the query does not do any catalog look ups in the planning phase. - The local table (t1) is altered by the local connection. The remote connection needs to know about this before query execution. - This can be worked around by forcing a catalog refresh via a breaking change or by waiting out a tserver <--> master heartbeat interval. With D43651 / a260932fe7aba6653b17812c7a8f8b6b6b2e4633, backends now update the catalog version in shared memory after concluding a DDL. As a result, other backends can learn about the bump in catalog version without having to obtain it from the tserver. This makes waiting for a tserver <--> master heartbeat redundant in cases where both the local and remote connection are to the same DB node (ie. share memory). Therefore, this revision now removes all instances of sleeps after DDLs. The above analysis is provided as a reference for the future, when a variant of the same regress test may connect to a different node rather than use the loopback interface. Jira: DB-1239, DB-1301 Test Plan: Run the following test: ``` ./yb_build.sh --java-test 'org.yb.pgsql.TestPgRegressContribPostgresFdw#schedule' ``` Reviewers: kfranz, myang, #db-approvers, smishra Reviewed By: myang, #db-approvers, smishra Subscribers: svc_phabricator, jason, smishra, yql Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43708 --- .../expected/yb.port.postgres_fdw.out | 203 ------------------ .../postgres_fdw/sql/yb.port.postgres_fdw.sql | 58 ----- 2 files changed, 261 deletions(-) diff --git a/src/postgres/contrib/postgres_fdw/expected/yb.port.postgres_fdw.out b/src/postgres/contrib/postgres_fdw/expected/yb.port.postgres_fdw.out index 8552475ff82f..853f410ff530 100644 --- a/src/postgres/contrib/postgres_fdw/expected/yb.port.postgres_fdw.out +++ b/src/postgres/contrib/postgres_fdw/expected/yb.port.postgres_fdw.out @@ -2393,13 +2393,6 @@ ALTER VIEW v4 OWNER TO regress_view_owner; -- cleanup DROP OWNED BY regress_view_owner; DROP ROLE regress_view_owner; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- =================================================================== -- Aggregate and grouping queries -- =================================================================== @@ -3111,13 +3104,6 @@ select count(t1.c3) from ft2 t1 left join ft2 t2 on (t1.c1 = random() * t2.c2); Remote SQL: SELECT c2 FROM "S 1"."T 1" (13 rows) --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- Subquery in FROM clause having aggregate explain (verbose, costs off) select count(*), x.b from ft1, (select c2 a, sum(c1) b from ft1 group by c2) x where ft1.c2 = x.a group by x.b order by 1, 2; @@ -3963,13 +3949,6 @@ DROP FUNCTION f_test(int); -- conversion error -- =================================================================== ALTER FOREIGN 
TABLE ft1 ALTER COLUMN c8 TYPE int; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - SELECT * FROM ft1 WHERE c1 = 1; -- ERROR ERROR: invalid input syntax for type integer: "foo" CONTEXT: column "c8" of foreign table "ft1" @@ -3983,13 +3962,6 @@ SELECT sum(c2), array_agg(c8) FROM ft1 GROUP BY c8; -- ERROR ERROR: invalid input syntax for type integer: "foo" CONTEXT: processing expression at position 2 in select list ALTER FOREIGN TABLE ft1 ALTER COLUMN c8 TYPE user_enum; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- =================================================================== -- subtransaction -- + local/remote error doesn't break cursor @@ -4038,13 +4010,6 @@ COMMIT; create table loct3 (f1 text collate "C" unique, f2 text, f3 varchar(10) unique); create foreign table ft3 (f1 text collate "C", f2 text, f3 varchar(10)) server loopback options (table_name 'loct3', use_remote_estimate 'true'); --- YB note: foreign table does not exist, remove when #11684 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- can be sent to remote explain (verbose, costs off) select * from ft3 where f1 = 'foo'; QUERY PLAN @@ -5262,13 +5227,6 @@ CONTEXT: remote SQL command: EXPLAIN SELECT ctid FROM "S 1"."T 1" WHERE (("C 1" DELETE FROM ft2 WHERE c1 = 1200 RETURNING tableoid::regclass; ERROR: system column "ctid" is not supported yet CONTEXT: remote SQL command: EXPLAIN SELECT ctid FROM "S 1"."T 1" WHERE (("C 1" = 1200)) FOR UPDATE --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- Test UPDATE/DELETE with RETURNING on a three-table join INSERT INTO ft2 (c1,c2,c3) SELECT id, id - 1200, to_char(id, 'FM00000') FROM generate_series(1201, 1300) id; @@ -5465,13 +5423,6 @@ SELECT * FROM cte ORDER BY c1; -- Test errors thrown on remote side during update ALTER TABLE "S 1"."T 1" ADD CONSTRAINT c2positive CHECK (c2 >= 0); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - INSERT INTO ft1(c1, c2) VALUES(11, 12); -- duplicate key ERROR: duplicate key value violates unique constraint "t1_pkey" CONTEXT: remote SQL command: INSERT INTO "S 1"."T 1"("C 1", c2, c3, c4, c5, c6, c7, c8) VALUES ($1, $2, $3, $4, $5, $6, $7, $8) @@ -7118,13 +7069,6 @@ SELECT * FROM ft1 ORDER BY c6 ASC NULLS FIRST, c1 OFFSET 15 LIMIT 10; -- =================================================================== -- Consistent check constraints provide consistent results ALTER FOREIGN TABLE ft1 ADD CONSTRAINT ft1_c2positive CHECK (c2 >= 0); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 WHERE c2 < 0; QUERY PLAN ----------------------------------------------------------------- @@ -7157,13 +7101,6 @@ SELECT count(*) FROM ft1 WHERE c2 < 0; (1 row) RESET constraint_exclusion; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- check constraint is enforced on the remote side, not locally INSERT INTO ft1(c1, c2) VALUES(1111, -2); -- c2positive ERROR: new row for relation "T 1" violates check constraint "c2positive" @@ -7176,13 
+7113,6 @@ CONTEXT: remote SQL command: UPDATE "S 1"."T 1" SET c2 = (- c2) WHERE (("C 1" = ALTER FOREIGN TABLE ft1 DROP CONSTRAINT ft1_c2positive; -- But inconsistent check constraints provide inconsistent results ALTER FOREIGN TABLE ft1 ADD CONSTRAINT ft1_c2negative CHECK (c2 < 0); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 WHERE c2 >= 0; QUERY PLAN ------------------------------------------------------------------ @@ -7215,13 +7145,6 @@ SELECT count(*) FROM ft1 WHERE c2 >= 0; (1 row) RESET constraint_exclusion; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- local check constraint is not actually enforced INSERT INTO ft1(c1, c2) VALUES(1111, 2); UPDATE ft1 SET c2 = c2 + 1 WHERE c1 = 1; @@ -7433,13 +7356,6 @@ $$ language plpgsql; CREATE TRIGGER trig_row_before_insupd BEFORE INSERT OR UPDATE ON rem1 FOR EACH ROW EXECUTE PROCEDURE trig_row_before_insupdate(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- The new values should have 'triggered' appended INSERT INTO rem1 values(1, 'insert'); SELECT * from loc1; @@ -7503,13 +7419,6 @@ DELETE FROM rem1; CREATE TRIGGER trig_row_before_insupd2 BEFORE INSERT OR UPDATE ON rem1 FOR EACH ROW EXECUTE PROCEDURE trig_row_before_insupdate(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - INSERT INTO rem1 values(1, 'insert'); SELECT * from loc1; f1 | f2 @@ -7563,13 +7472,6 @@ $$ language plpgsql; CREATE TRIGGER trig_null BEFORE INSERT OR UPDATE OR DELETE ON rem1 FOR EACH ROW EXECUTE PROCEDURE trig_null(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- Nothing should have changed. INSERT INTO rem1 VALUES (2, 'test2'); SELECT * from loc1; @@ -7610,13 +7512,6 @@ FOR EACH ROW EXECUTE PROCEDURE trigger_data(23,'skidoo'); ERROR: function trigger_data() does not exist CREATE TRIGGER trig_local_before BEFORE INSERT OR UPDATE ON loc1 FOR EACH ROW EXECUTE PROCEDURE trig_row_before_insupdate(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - INSERT INTO rem1(f2) VALUES ('test'); UPDATE rem1 SET f2 = 'testo'; -- Test returning a system attribute @@ -8142,13 +8037,6 @@ create trigger loct1_br_insert_trigger before insert on loct1 for each row execute procedure br_insert_trigfunc(); create trigger loct2_br_insert_trigger before insert on loct2 for each row execute procedure br_insert_trigfunc(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- The new values are concatenated with ' triggered !' 
insert into itrtest values (1, 'foo') returning *; a | b @@ -8188,13 +8076,6 @@ create foreign table remp (a int check (a in (1)), b text) server loopback optio create table locp (a int check (a in (2)), b text); alter table utrtest attach partition remp for values in (1); alter table utrtest attach partition locp for values in (2); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - insert into utrtest values (1, 'foo'); insert into utrtest values (2, 'qux'); select tableoid::regclass, * FROM utrtest; @@ -8329,13 +8210,6 @@ create table loc2 (f1 int, f2 text); alter table loc2 set (autovacuum_enabled = 'false'); NOTICE: storage parameters are currently ignored in YugabyteDB create foreign table rem2 (f1 int, f2 text) server loopback options(table_name 'loc2'); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- Test basic functionality copy rem2 from stdin; WARNING: batched COPY is not supported on foreign tables @@ -8352,13 +8226,6 @@ delete from rem2; -- Test check constraints alter table loc2 add constraint loc2_f1positive check (f1 >= 0); alter foreign table rem2 add constraint rem2_f1positive check (f1 >= 0); --- YB note: constraints don't work immediately, remove pg_sleeps when issue #11598 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- check constraint is enforced on the remote side, not locally copy rem2 from stdin; WARNING: batched COPY is not supported on foreign tables @@ -8381,13 +8248,6 @@ select * from rem2; alter foreign table rem2 drop constraint rem2_f1positive; alter table loc2 drop constraint loc2_f1positive; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - delete from rem2; -- Test local triggers create trigger trig_stmt_before before insert on rem2 @@ -8402,13 +8262,6 @@ ERROR: function trigger_data() does not exist create trigger trig_row_after after insert on rem2 for each row execute procedure trigger_data(23,'skidoo'); ERROR: function trigger_data() does not exist --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - copy rem2 from stdin; WARNING: batched COPY is not supported on foreign tables DETAIL: Defaulting to using one transaction for the entire copy. @@ -8432,13 +8285,6 @@ delete from rem2; -- YB note: The following line is changed in the upstream PG 15 test create trigger trig_row_before_insert before insert on rem2 for each row execute procedure trig_row_before_insupdate(); -- YB note: see above --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- The new values are concatenated with ' triggered !' 
copy rem2 from stdin; WARNING: batched COPY is not supported on foreign tables @@ -8455,13 +8301,6 @@ drop trigger trig_row_before_insert on rem2; delete from rem2; create trigger trig_null before insert on rem2 for each row execute procedure trig_null(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- Nothing happens copy rem2 from stdin; WARNING: batched COPY is not supported on foreign tables @@ -8477,13 +8316,6 @@ delete from rem2; -- Test remote triggers create trigger trig_row_before_insert before insert on loc2 for each row execute procedure trig_row_before_insupdate(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- The new values are concatenated with ' triggered !' copy rem2 from stdin; WARNING: batched COPY is not supported on foreign tables @@ -8500,13 +8332,6 @@ drop trigger trig_row_before_insert on loc2; delete from rem2; create trigger trig_null before insert on loc2 for each row execute procedure trig_null(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(2); - pg_sleep ----------- - -(1 row) - -- Nothing happens copy rem2 from stdin; WARNING: batched COPY is not supported on foreign tables @@ -8528,13 +8353,6 @@ create trigger rem2_trig_row_after after insert on rem2 ERROR: function trigger_data() does not exist create trigger loc2_trig_row_before_insert before insert on loc2 for each row execute procedure trig_row_before_insupdate(); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - copy rem2 from stdin; WARNING: batched COPY is not supported on foreign tables DETAIL: Defaulting to using one transaction for the entire copy. 
@@ -8819,13 +8637,6 @@ CREATE SCHEMA import_dest5; BEGIN; DROP TYPE "Colors" CASCADE; NOTICE: drop cascades to column Col of table import_source.t5 --- YB note: type dropping in transaction is not respected, should error when issue #11742 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - IMPORT FOREIGN SCHEMA import_source LIMIT TO (t5) FROM SERVER loopback INTO import_dest5; -- ERROR ROLLBACK; @@ -8927,23 +8738,9 @@ CREATE FOREIGN TABLE ftprt2_p1 (b int, c varchar, a int) ALTER TABLE fprt2 ATTACH PARTITION ftprt2_p1 FOR VALUES FROM (0) TO (250); CREATE FOREIGN TABLE ftprt2_p2 PARTITION OF fprt2 FOR VALUES FROM (250) TO (500) SERVER loopback OPTIONS (table_name 'fprt2_p2', use_remote_estimate 'true'); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - ANALYZE fprt2; ANALYZE fprt2_p1; ANALYZE fprt2_p2; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); - pg_sleep ----------- - -(1 row) - -- inner join three tables EXPLAIN (COSTS OFF) SELECT t1.a,t2.b,t3.c FROM fprt1 t1 INNER JOIN fprt2 t2 ON (t1.a = t2.b) INNER JOIN fprt1 t3 ON (t2.b = t3.a) WHERE t1.a % 25 =0 ORDER BY 1,2,3; diff --git a/src/postgres/contrib/postgres_fdw/sql/yb.port.postgres_fdw.sql b/src/postgres/contrib/postgres_fdw/sql/yb.port.postgres_fdw.sql index d6cb0dfa7ccc..c1dabdb5fd06 100644 --- a/src/postgres/contrib/postgres_fdw/sql/yb.port.postgres_fdw.sql +++ b/src/postgres/contrib/postgres_fdw/sql/yb.port.postgres_fdw.sql @@ -629,8 +629,6 @@ ALTER VIEW v4 OWNER TO regress_view_owner; -- cleanup DROP OWNED BY regress_view_owner; DROP ROLE regress_view_owner; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); -- =================================================================== @@ -879,8 +877,6 @@ drop operator public.<^(int, int); explain (verbose, costs off) select count(t1.c3) from ft2 t1 left join ft2 t2 on (t1.c1 = random() * t2.c2); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); -- Subquery in FROM clause having aggregate explain (verbose, costs off) select count(*), x.b from ft1, (select c2 a, sum(c1) b from ft1 group by c2) x where ft1.c2 = x.a group by x.b order by 1, 2; @@ -1083,15 +1079,11 @@ DROP FUNCTION f_test(int); -- conversion error -- =================================================================== ALTER FOREIGN TABLE ft1 ALTER COLUMN c8 TYPE int; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); SELECT * FROM ft1 WHERE c1 = 1; -- ERROR SELECT ft1.c1, ft2.c2, ft1.c8 FROM ft1, ft2 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = 1; -- ERROR SELECT ft1.c1, ft2.c2, ft1 FROM ft1, ft2 WHERE ft1.c1 = ft2.c1 AND ft1.c1 = 1; -- ERROR SELECT sum(c2), array_agg(c8) FROM ft1 GROUP BY c8; -- ERROR ALTER FOREIGN TABLE ft1 ALTER COLUMN c8 TYPE user_enum; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); -- =================================================================== -- subtransaction @@ -1117,8 +1109,6 @@ COMMIT; create table loct3 (f1 text collate "C" unique, f2 text, f3 varchar(10) unique); create foreign table ft3 (f1 text collate "C", f2 text, f3 varchar(10)) server loopback options (table_name 'loct3', use_remote_estimate 'true'); --- YB note: foreign table does not exist, remove when #11684 is fixed -select pg_sleep(1); -- can 
be sent to remote explain (verbose, costs off) select * from ft3 where f1 = 'foo'; @@ -1174,8 +1164,6 @@ UPDATE ft2 SET c3 = 'bar' WHERE c1 = 1200 RETURNING tableoid::regclass; EXPLAIN (verbose, costs off) DELETE FROM ft2 WHERE c1 = 1200 RETURNING tableoid::regclass; -- can be pushed down DELETE FROM ft2 WHERE c1 = 1200 RETURNING tableoid::regclass; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); -- Test UPDATE/DELETE with RETURNING on a three-table join INSERT INTO ft2 (c1,c2,c3) @@ -1260,8 +1248,6 @@ SELECT * FROM cte ORDER BY c1; -- Test errors thrown on remote side during update ALTER TABLE "S 1"."T 1" ADD CONSTRAINT c2positive CHECK (c2 >= 0); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); INSERT INTO ft1(c1, c2) VALUES(11, 12); -- duplicate key INSERT INTO ft1(c1, c2) VALUES(11, 12) ON CONFLICT DO NOTHING; -- works INSERT INTO ft1(c1, c2) VALUES(11, 12) ON CONFLICT (c1, c2) DO NOTHING; -- unsupported @@ -1333,16 +1319,12 @@ SELECT * FROM ft1 ORDER BY c6 ASC NULLS FIRST, c1 OFFSET 15 LIMIT 10; -- Consistent check constraints provide consistent results ALTER FOREIGN TABLE ft1 ADD CONSTRAINT ft1_c2positive CHECK (c2 >= 0); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 WHERE c2 < 0; SELECT count(*) FROM ft1 WHERE c2 < 0; SET constraint_exclusion = 'on'; EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 WHERE c2 < 0; SELECT count(*) FROM ft1 WHERE c2 < 0; RESET constraint_exclusion; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); -- check constraint is enforced on the remote side, not locally INSERT INTO ft1(c1, c2) VALUES(1111, -2); -- c2positive UPDATE ft1 SET c2 = -c2 WHERE c1 = 1; -- c2positive @@ -1350,16 +1332,12 @@ ALTER FOREIGN TABLE ft1 DROP CONSTRAINT ft1_c2positive; -- But inconsistent check constraints provide inconsistent results ALTER FOREIGN TABLE ft1 ADD CONSTRAINT ft1_c2negative CHECK (c2 < 0); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 WHERE c2 >= 0; SELECT count(*) FROM ft1 WHERE c2 >= 0; SET constraint_exclusion = 'on'; EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 WHERE c2 >= 0; SELECT count(*) FROM ft1 WHERE c2 >= 0; RESET constraint_exclusion; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); -- local check constraint is not actually enforced INSERT INTO ft1(c1, c2) VALUES(1111, 2); UPDATE ft1 SET c2 = c2 + 1 WHERE c1 = 1; @@ -1564,8 +1542,6 @@ $$ language plpgsql; CREATE TRIGGER trig_row_before_insupd BEFORE INSERT OR UPDATE ON rem1 FOR EACH ROW EXECUTE PROCEDURE trig_row_before_insupdate(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); -- The new values should have 'triggered' appended INSERT INTO rem1 values(1, 'insert'); @@ -1584,8 +1560,6 @@ DELETE FROM rem1; CREATE TRIGGER trig_row_before_insupd2 BEFORE INSERT OR UPDATE ON rem1 FOR EACH ROW EXECUTE PROCEDURE trig_row_before_insupdate(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); INSERT INTO rem1 values(1, 'insert'); SELECT * from loc1; @@ -1613,8 +1587,6 @@ $$ language plpgsql; CREATE TRIGGER trig_null BEFORE INSERT OR UPDATE OR 
DELETE ON rem1 FOR EACH ROW EXECUTE PROCEDURE trig_null(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); -- Nothing should have changed. INSERT INTO rem1 VALUES (2, 'test2'); @@ -1646,8 +1618,6 @@ FOR EACH ROW EXECUTE PROCEDURE trigger_data(23,'skidoo'); CREATE TRIGGER trig_local_before BEFORE INSERT OR UPDATE ON loc1 FOR EACH ROW EXECUTE PROCEDURE trig_row_before_insupdate(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); INSERT INTO rem1(f2) VALUES ('test'); UPDATE rem1 SET f2 = 'testo'; @@ -2019,8 +1989,6 @@ create trigger loct1_br_insert_trigger before insert on loct1 for each row execute procedure br_insert_trigfunc(); create trigger loct2_br_insert_trigger before insert on loct2 for each row execute procedure br_insert_trigfunc(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); -- The new values are concatenated with ' triggered !' insert into itrtest values (1, 'foo') returning *; @@ -2042,8 +2010,6 @@ create foreign table remp (a int check (a in (1)), b text) server loopback optio create table locp (a int check (a in (2)), b text); alter table utrtest attach partition remp for values in (1); alter table utrtest attach partition locp for values in (2); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); insert into utrtest values (1, 'foo'); insert into utrtest values (2, 'qux'); @@ -2150,8 +2116,6 @@ drop table loct2; create table loc2 (f1 int, f2 text); alter table loc2 set (autovacuum_enabled = 'false'); create foreign table rem2 (f1 int, f2 text) server loopback options(table_name 'loc2'); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); -- Test basic functionality copy rem2 from stdin; @@ -2165,8 +2129,6 @@ delete from rem2; -- Test check constraints alter table loc2 add constraint loc2_f1positive check (f1 >= 0); alter foreign table rem2 add constraint rem2_f1positive check (f1 >= 0); --- YB note: constraints don't work immediately, remove pg_sleeps when issue #11598 is fixed -select pg_sleep(1); -- check constraint is enforced on the remote side, not locally copy rem2 from stdin; @@ -2180,8 +2142,6 @@ select * from rem2; alter foreign table rem2 drop constraint rem2_f1positive; alter table loc2 drop constraint loc2_f1positive; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); delete from rem2; @@ -2194,8 +2154,6 @@ create trigger trig_row_before before insert on rem2 for each row execute procedure trigger_data(23,'skidoo'); create trigger trig_row_after after insert on rem2 for each row execute procedure trigger_data(23,'skidoo'); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); copy rem2 from stdin; 1 foo 2 bar @@ -2212,8 +2170,6 @@ delete from rem2; -- YB note: The following line is changed in the upstream PG 15 test create trigger trig_row_before_insert before insert on rem2 for each row execute procedure trig_row_before_insupdate(); -- YB note: see above --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); -- The new values are concatenated with ' triggered !' 
copy rem2 from stdin; @@ -2228,8 +2184,6 @@ delete from rem2; create trigger trig_null before insert on rem2 for each row execute procedure trig_null(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); -- Nothing happens copy rem2 from stdin; @@ -2245,8 +2199,6 @@ delete from rem2; -- Test remote triggers create trigger trig_row_before_insert before insert on loc2 for each row execute procedure trig_row_before_insupdate(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(1); -- The new values are concatenated with ' triggered !' copy rem2 from stdin; @@ -2261,8 +2213,6 @@ delete from rem2; create trigger trig_null before insert on loc2 for each row execute procedure trig_null(); --- YB note: triggers don't work immediately, remove once #11555 is fixed -select pg_sleep(2); -- Nothing happens copy rem2 from stdin; @@ -2282,8 +2232,6 @@ create trigger rem2_trig_row_after after insert on rem2 for each row execute procedure trigger_data(23,'skidoo'); create trigger loc2_trig_row_before_insert before insert on loc2 for each row execute procedure trig_row_before_insupdate(); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); copy rem2 from stdin; 1 foo @@ -2370,8 +2318,6 @@ CREATE TABLE import_source.t5 (c1 int, c2 text collate "C", "Col" "Colors"); CREATE SCHEMA import_dest5; BEGIN; DROP TYPE "Colors" CASCADE; --- YB note: type dropping in transaction is not respected, should error when issue #11742 is fixed -select pg_sleep(1); IMPORT FOREIGN SCHEMA import_source LIMIT TO (t5) FROM SERVER loopback INTO import_dest5; -- ERROR @@ -2452,14 +2398,10 @@ CREATE FOREIGN TABLE ftprt2_p1 (b int, c varchar, a int) ALTER TABLE fprt2 ATTACH PARTITION ftprt2_p1 FOR VALUES FROM (0) TO (250); CREATE FOREIGN TABLE ftprt2_p2 PARTITION OF fprt2 FOR VALUES FROM (250) TO (500) SERVER loopback OPTIONS (table_name 'fprt2_p2', use_remote_estimate 'true'); --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); ANALYZE fprt2; ANALYZE fprt2_p1; ANALYZE fprt2_p2; --- YB note: catalog snapshot invalidated, remove pg_sleeps when issue #11554 is fixed -select pg_sleep(1); -- inner join three tables EXPLAIN (COSTS OFF) SELECT t1.a,t2.b,t3.c FROM fprt1 t1 INNER JOIN fprt2 t2 ON (t1.a = t2.b) INNER JOIN fprt1 t3 ON (t2.b = t3.a) WHERE t1.a % 25 =0 ORDER BY 1,2,3; From 3656e66a52ca5f02bd09b6ec74804f3f41f3ce63 Mon Sep 17 00:00:00 2001 From: William McKenna Date: Sun, 20 Apr 2025 08:20:51 -0700 Subject: [PATCH 030/146] [#26670] YSQL: Do not prune joins at a level unless a leading hint exists at the same level Summary: Pruning joins can be unsafe unless we are sure the pruned plan cannot participate in the final plan. In this case, there is a mix of inner and outer joins, no Leading hint is present, and a join needed for higher levels is incorrectly pruned. If there is a Leading hint at a level then it is safe to prune non-hinted joins. 
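
For illustration, a minimal sketch of the rule above in pg_hint_plan syntax (the tables a, b, c and their join columns are hypothetical and not part of this change):

/*+ Leading(((a b) c)) */
SELECT count(*) FROM a JOIN b ON a.x = b.x JOIN c ON b.x = c.x;
-- A Leading hint fixes the join order at each level, so any join at that
-- level that does not match the hint cannot appear in the final plan and
-- can be pruned safely.

SELECT count(*) FROM a LEFT JOIN b ON a.x = b.x JOIN c ON b.x = c.x;
-- Without a Leading hint, and with inner and outer joins mixed, a join that
-- looks expensive or disabled at one level may still be needed to build a
-- join at a higher level, so it must not be pruned.
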
Jira: DB-16050 Test Plan: TestPgRegressExtension (including new test) TestPgRegressJoin TestPgRegressPlanner TestPgRegressThirdPartyExtensionsPgHintPlan TestPgRegressJoin TestPgRegressPgIndex TestPgRegressAggregates TestPgRegressPlanner TestPgRegressTAQO TestPgRegressParallel TestPgRegressPgStatStatements TestPgRegressPgSelect TestPgRegressPartitions TestPgRegressPgStat TestPgRegressPgMatview TestPgExplainAnalyzeJoins TestPgRegressPgDml TestPgCardinalityEstimation Reviewers: mihnea, mtakahara Reviewed By: mtakahara Subscribers: jason, yql Differential Revision: https://phorge.dev.yugabyte.com/D43371 --- .../src/backend/optimizer/path/allpaths.c | 90 +++++-- .../src/backend/optimizer/plan/planner.c | 22 ++ .../src/backend/optimizer/util/pathnode.c | 7 +- .../regress/expected/yb.orig.extensions.out | 44 +-- .../test/regress/expected/yb.orig.hints.out | 253 ++++++++++++------ .../test/regress/sql/yb.orig.extensions.sql | 9 + .../src/test/regress/sql/yb.orig.hints.sql | 25 ++ .../pg_hint_plan/core.c | 90 +++++-- 8 files changed, 394 insertions(+), 146 deletions(-) diff --git a/src/postgres/src/backend/optimizer/path/allpaths.c b/src/postgres/src/backend/optimizer/path/allpaths.c index c9fd7506213f..b029118afa81 100644 --- a/src/postgres/src/backend/optimizer/path/allpaths.c +++ b/src/postgres/src/backend/optimizer/path/allpaths.c @@ -3603,42 +3603,89 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels) #endif } - if (IsYugaByteEnabled()) + if (IsYugaByteEnabled() && root->ybHintedJoinsOuter != NULL) { /* - * Sweep all joins at this level and look for disabled join - * and non-disabled joins. + * There is a Leading hint so sweep all joins at this level + * and look for disabled and non-disabled joins. Also determine + * if some join at this level has been hinted. If so, it is safe to + * prune non-hinted joins. */ - List *levelJoinRels = NIL; - bool foundDisabledRel = false; + List *ybLevelJoinRels = NIL; + bool ybFoundDisabledRel = false; + bool ybFoundHintedJoin = false; + ListCell *lc2; foreach(lc2, root->join_rel_level[lev]) { - RelOptInfo *rel = (RelOptInfo *) lfirst(lc2); - if (rel->cheapest_total_path->total_cost < disable_cost || - rel->cheapest_total_path->ybIsHinted || - rel->cheapest_total_path->ybHasHintedUid) + RelOptInfo *ybRel = (RelOptInfo *) lfirst(lc2); + Assert(IS_JOIN_REL(ybRel)); + + bool ybIsJoinPath; + /* + * Could we have a non-join path type here (e.g., an Append)? + * Check that we have a join path. + */ + switch (ybRel->cheapest_total_path->type) + { + case T_NestPath: + ybIsJoinPath = true; + break; + case T_MergePath: + ybIsJoinPath = true; + break; + case T_HashPath: + ybIsJoinPath = true; + break; + default: + ybIsJoinPath = false; + break; + } + + /* + * Assuming that only join paths exist in the space + * of enumerated joins. If this is found to not be the case, + * the next 2 IFs need to check for a join path, and a non-join + * path, respectively. + */ + Assert(ybIsJoinPath); + + if (ybRel->cheapest_total_path->ybIsHinted || + ybRel->cheapest_total_path->ybHasHintedUid) + { + ybFoundHintedJoin = true; + } + + if (ybRel->cheapest_total_path->total_cost < disable_cost || + ybRel->cheapest_total_path->ybIsHinted || + ybRel->cheapest_total_path->ybHasHintedUid) { /* - * Found a join with cost < disable cost. Or cost could be - * >= disable cost (because the join is really expensive) - * but it is in a Leading hint. 
+ * Found a join with cost < disable cost, + * or whose cost could be >= disable cost because the join is + * really expensive. But it is in a Leading hint, or + * has been hinted using its UID so add it to the list + * of joins we want to keep at this level. */ - levelJoinRels = lappend(levelJoinRels, rel); + ybLevelJoinRels = lappend(ybLevelJoinRels, ybRel); } else { /* - * Found a path that has been disabled via hints. + * Found a join that has been disabled, + * or that perhaps has a "true" cost > disable cost. + * It is a join and is not hinted so set a flag so we can + * try pruning below. */ - foundDisabledRel = true; + ybFoundDisabledRel = true; } } /* - * Now look for a mix of enabled and disabled join paths at this level. + * Now look for a mix of enabled and disabled join paths at this level, + * but only do this if some join at this level has been hinted. */ - if (levelJoinRels != NIL && foundDisabledRel) + if (ybLevelJoinRels != NIL && ybFoundDisabledRel && ybFoundHintedJoin) { if (yb_enable_planner_trace) { @@ -3653,13 +3700,13 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels) foreach(lc2, root->join_rel_level[lev]) { RelOptInfo *rel = (RelOptInfo *) lfirst(lc2); - if (!list_member_ptr(levelJoinRels, rel)) + if (!list_member_ptr(ybLevelJoinRels, rel)) { ybTraceRelOptInfo(root, rel, dropMsg.data); } } - foreach(lc2, levelJoinRels) + foreach(lc2, ybLevelJoinRels) { RelOptInfo *rel = (RelOptInfo *) lfirst(lc2); ybTraceRelOptInfo(root, rel, keepMsg.data); @@ -3670,9 +3717,10 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels) } /* - * Keep only the non-disabled joins since the disabled ones cannot be part of the best plan. + * Keep only the non-disabled joins since the disabled ones + * cannot be part of the best plan. */ - root->join_rel_level[lev] = levelJoinRels; + root->join_rel_level[lev] = ybLevelJoinRels; } } } diff --git a/src/postgres/src/backend/optimizer/plan/planner.c b/src/postgres/src/backend/optimizer/plan/planner.c index 4a84a90f7186..45822abb811f 100644 --- a/src/postgres/src/backend/optimizer/plan/planner.c +++ b/src/postgres/src/backend/optimizer/plan/planner.c @@ -8397,6 +8397,13 @@ ybGenerateHintStringNode(PlannedStmt *plannedStmt, Plan *plan, StringInfoData *l { Append *append = (Append *) plan; + char *ybHintAlias = plan->ybHintAlias; + if (ybHintAlias != NULL) + { + ybAppendHintNameDisplayText(ybHintAlias, leadingBuf); + *scanList = lappend(*scanList, ybHintAlias); + } + /* * Recurse on the input blocks. */ @@ -8405,6 +8412,17 @@ ybGenerateHintStringNode(PlannedStmt *plannedStmt, Plan *plan, StringInfoData *l { Plan *subPlan = (Plan *) lfirst(lc); + if (ybHintAlias != NULL && subPlan->ybHintAlias != NULL && + strcmp(ybHintAlias, subPlan->ybHintAlias) == 0) + { + /* + * This can happen if we have a partitioned table and are + * appending results from scanning partitions since the + * partitions would have the same alias as the Append. 
+ */ + continue; + } + char *subPlanHintString = ybGenerateHintStringBlock(plannedStmt, subPlan, maxBlockScanCnt); if (subPlanHintString != NULL) @@ -8440,6 +8458,10 @@ ybGenerateHintStringNode(PlannedStmt *plannedStmt, Plan *plan, StringInfoData *l recurse = false; } + else + { + generatedHintString = false; + } } break; case T_RecursiveUnion: diff --git a/src/postgres/src/backend/optimizer/util/pathnode.c b/src/postgres/src/backend/optimizer/util/pathnode.c index 1a955a5a1bb5..bfab505953a2 100644 --- a/src/postgres/src/backend/optimizer/util/pathnode.c +++ b/src/postgres/src/backend/optimizer/util/pathnode.c @@ -5538,7 +5538,12 @@ yb_assign_unique_path_node_id(PlannerInfo *root, Path *path) * set_dummy_rel_pathlist((). mark_dummy_rel() also creates an Append path * without a PlannerInfo instance. */ - if (root != NULL) + if (root == NULL && path->parent != NULL) + { + root = path->parent->ybRoot; + } + + if (root != NULL && root->glob != NULL) { path->ybUniqueId = ybGetNextNodeUid(root->glob); diff --git a/src/postgres/src/test/regress/expected/yb.orig.extensions.out b/src/postgres/src/test/regress/expected/yb.orig.extensions.out index f1d4a5ba0339..57f963ad5f7f 100644 --- a/src/postgres/src/test/regress/expected/yb.orig.extensions.out +++ b/src/postgres/src/test/regress/expected/yb.orig.extensions.out @@ -1,3 +1,7 @@ +SET client_min_messages = warning; +DROP DATABASE if exists test_yb_extensions; +CREATE DATABASE test_yb_extensions; +\c test_yb_extensions -- Testing pgcrypto. create extension pgcrypto; select digest('xyz', 'sha1'); @@ -58,36 +62,36 @@ select pg_stat_statements_reset(); select pg_get_userbyid(userid),datname,query,calls,rows,shared_blks_hit,shared_blks_read,shared_blks_dirtied,shared_blks_written,local_blks_hit,local_blks_read,local_blks_dirtied,local_blks_written,temp_blks_read,temp_blks_written,blk_read_time from pg_stat_statements join pg_database on dbid = oid order by query; - pg_get_userbyid | datname | query | calls | rows | shared_blks_hit | shared_blks_read | shared_blks_dirtied | shared_blks_written | local_blks_hit | local_blks_read | local_blks_dirtied | local_blks_written | temp_blks_read | temp_blks_written | blk_read_time ------------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+------+-----------------+------------------+---------------------+---------------------+----------------+-----------------+--------------------+--------------------+----------------+-------------------+--------------- - yugabyte | yugabyte | select pg_get_userbyid(userid),datname,query,calls,rows,shared_blks_hit,shared_blks_read,shared_blks_dirtied,shared_blks_written,local_blks_hit,local_blks_read,local_blks_dirtied,local_blks_written,temp_blks_read,temp_blks_written,blk_read_time+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 - | | from pg_stat_statements join pg_database on dbid = oid order by query | | | | | | | | | | | | | - yugabyte | yugabyte | select pg_stat_statements_reset() | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 + pg_get_userbyid | datname | query | calls | rows | shared_blks_hit | shared_blks_read | shared_blks_dirtied | shared_blks_written | local_blks_hit | local_blks_read | local_blks_dirtied | local_blks_written | temp_blks_read | temp_blks_written | blk_read_time 
+-----------------+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+------+-----------------+------------------+---------------------+---------------------+----------------+-----------------+--------------------+--------------------+----------------+-------------------+--------------- + yugabyte | test_yb_extensions | select pg_get_userbyid(userid),datname,query,calls,rows,shared_blks_hit,shared_blks_read,shared_blks_dirtied,shared_blks_written,local_blks_hit,local_blks_read,local_blks_dirtied,local_blks_written,temp_blks_read,temp_blks_written,blk_read_time+| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 + | | from pg_stat_statements join pg_database on dbid = oid order by query | | | | | | | | | | | | | + yugabyte | test_yb_extensions | select pg_stat_statements_reset() | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 (2 rows) create table test(a int, b float); insert into test(a,b) values (5,10); select pg_get_userbyid(userid),datname,query,calls,rows,shared_blks_hit,shared_blks_read,shared_blks_dirtied,shared_blks_written,local_blks_hit,local_blks_read,local_blks_dirtied,local_blks_written,temp_blks_read,temp_blks_written,blk_read_time from pg_stat_statements join pg_database on dbid = oid order by query; - pg_get_userbyid | datname | query | calls | rows | shared_blks_hit | shared_blks_read | shared_blks_dirtied | shared_blks_written | local_blks_hit | local_blks_read | local_blks_dirtied | local_blks_written | temp_blks_read | temp_blks_written | blk_read_time ------------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+------+-----------------+------------------+---------------------+---------------------+----------------+-----------------+--------------------+--------------------+----------------+-------------------+--------------- - yugabyte | yugabyte | create table test(a int, b float) | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 - yugabyte | yugabyte | insert into test(a,b) values ($1,$2) | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 - yugabyte | yugabyte | select pg_get_userbyid(userid),datname,query,calls,rows,shared_blks_hit,shared_blks_read,shared_blks_dirtied,shared_blks_written,local_blks_hit,local_blks_read,local_blks_dirtied,local_blks_written,temp_blks_read,temp_blks_written,blk_read_time+| 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 - | | from pg_stat_statements join pg_database on dbid = oid order by query | | | | | | | | | | | | | - yugabyte | yugabyte | select pg_stat_statements_reset() | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 + pg_get_userbyid | datname | query | calls | rows | shared_blks_hit | shared_blks_read | shared_blks_dirtied | shared_blks_written | local_blks_hit | local_blks_read | local_blks_dirtied | local_blks_written | temp_blks_read | temp_blks_written | blk_read_time 
+-----------------+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+------+-----------------+------------------+---------------------+---------------------+----------------+-----------------+--------------------+--------------------+----------------+-------------------+--------------- + yugabyte | test_yb_extensions | create table test(a int, b float) | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 + yugabyte | test_yb_extensions | insert into test(a,b) values ($1,$2) | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 + yugabyte | test_yb_extensions | select pg_get_userbyid(userid),datname,query,calls,rows,shared_blks_hit,shared_blks_read,shared_blks_dirtied,shared_blks_written,local_blks_hit,local_blks_read,local_blks_dirtied,local_blks_written,temp_blks_read,temp_blks_written,blk_read_time+| 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 + | | from pg_stat_statements join pg_database on dbid = oid order by query | | | | | | | | | | | | | + yugabyte | test_yb_extensions | select pg_stat_statements_reset() | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 (4 rows) insert into test(a,b) values (15,20); select pg_get_userbyid(userid),datname,query,calls,rows,shared_blks_hit,shared_blks_read,shared_blks_dirtied,shared_blks_written,local_blks_hit,local_blks_read,local_blks_dirtied,local_blks_written,temp_blks_read,temp_blks_written,blk_read_time from pg_stat_statements join pg_database on dbid = oid order by query; - pg_get_userbyid | datname | query | calls | rows | shared_blks_hit | shared_blks_read | shared_blks_dirtied | shared_blks_written | local_blks_hit | local_blks_read | local_blks_dirtied | local_blks_written | temp_blks_read | temp_blks_written | blk_read_time ------------------+----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+------+-----------------+------------------+---------------------+---------------------+----------------+-----------------+--------------------+--------------------+----------------+-------------------+--------------- - yugabyte | yugabyte | create table test(a int, b float) | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 - yugabyte | yugabyte | insert into test(a,b) values ($1,$2) | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 - yugabyte | yugabyte | select pg_get_userbyid(userid),datname,query,calls,rows,shared_blks_hit,shared_blks_read,shared_blks_dirtied,shared_blks_written,local_blks_hit,local_blks_read,local_blks_dirtied,local_blks_written,temp_blks_read,temp_blks_written,blk_read_time+| 2 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 - | | from pg_stat_statements join pg_database on dbid = oid order by query | | | | | | | | | | | | | - yugabyte | yugabyte | select pg_stat_statements_reset() | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 + pg_get_userbyid | datname | query | calls | rows | shared_blks_hit | shared_blks_read | shared_blks_dirtied | shared_blks_written | local_blks_hit | local_blks_read | local_blks_dirtied | local_blks_written | temp_blks_read | temp_blks_written | blk_read_time 
+-----------------+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+------+-----------------+------------------+---------------------+---------------------+----------------+-----------------+--------------------+--------------------+----------------+-------------------+--------------- + yugabyte | test_yb_extensions | create table test(a int, b float) | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 + yugabyte | test_yb_extensions | insert into test(a,b) values ($1,$2) | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 + yugabyte | test_yb_extensions | select pg_get_userbyid(userid),datname,query,calls,rows,shared_blks_hit,shared_blks_read,shared_blks_dirtied,shared_blks_written,local_blks_hit,local_blks_read,local_blks_dirtied,local_blks_written,temp_blks_read,temp_blks_written,blk_read_time+| 2 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 + | | from pg_stat_statements join pg_database on dbid = oid order by query | | | | | | | | | | | | | + yugabyte | test_yb_extensions | select pg_stat_statements_reset() | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 (4 rows) -- SeqScan forces YbSeqScan node with NodeTag near the end of the list in nodes.h @@ -127,3 +131,5 @@ CREATE SCHEMA has$dollar; CREATE EXTENSION yb_test_extension SCHEMA has$dollar; ERROR: invalid character in extension "yb_test_extension" schema: must not contain any of ""$'\" CREATE EXTENSION yb_test_extension SCHEMA "has space"; +\c yugabyte +DROP DATABASE test_yb_extensions WITH (FORCE); diff --git a/src/postgres/src/test/regress/expected/yb.orig.hints.out b/src/postgres/src/test/regress/expected/yb.orig.hints.out index df8bdd956167..9993e13bec01 100644 --- a/src/postgres/src/test/regress/expected/yb.orig.hints.out +++ b/src/postgres/src/test/regress/expected/yb.orig.hints.out @@ -624,100 +624,99 @@ where unn1 < 4 and ch1 > ch2; -- Complex query; explain (hints on, costs off) select count(*) from t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, (select a1 x from t1, t2, t3, t4, t5, t6, t7, t8, t9, t10 where a1=a2 and a1=a3 and a1=a4 and a1=a5 and a5=a6 and a5=a7 and a5=a8 and a5=a9 and b7=1) dt where a1=a2 and a1=a3 and a1=a4 and a1=a5 and a5=a6 and a5=a7 and a5=a8 and a5=a9 and b7=1 and a1=x; - QUERY PLAN 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Aggregate - -> Hash Join - Hash Cond: (t1.a1 = t1_1.a1) + -> YB Batched Nested Loop Join + Join Filter: (t2.a2 = 
t1.a1) -> YB Batched Nested Loop Join - Join Filter: (t2.a2 = t1.a1) - -> YB Batched Nested Loop Join - Join Filter: (t3.a3 = t2.a2) - -> Hash Join - Hash Cond: (t3.a3 = t9.a9) + Join Filter: (t3.a3 = t2.a2) + -> Hash Join + Hash Cond: (t1_1.a1 = t3.a3) + -> Nested Loop + -> Seq Scan on t10 + -> YB Batched Nested Loop Join + Join Filter: (t2_1.a2 = t1_1.a1) + -> YB Batched Nested Loop Join + Join Filter: (t3_1.a3 = t2_1.a2) + -> Hash Join + Hash Cond: (t3_1.a3 = t9_1.a9) + -> Hash Join + Hash Cond: (t3_1.a3 = t4_1.a4) + -> Hash Join + Hash Cond: (t3_1.a3 = t8_1.a8) + -> Hash Join + Hash Cond: (t3_1.a3 = t6_1.a6) + -> Hash Join + Hash Cond: (t7_1.a7 = t3_1.a3) + -> Seq Scan on t7 t7_1 + Storage Filter: (b7 = 1) + -> Hash + -> Seq Scan on t3 t3_1 + -> Hash + -> Seq Scan on t6 t6_1 + -> Hash + -> Seq Scan on t8 t8_1 + -> Hash + -> Hash Join + Hash Cond: (t5_1.a5 = t4_1.a4) + -> Seq Scan on t5 t5_1 + -> Hash + -> Seq Scan on t4 t4_1 + -> Hash + -> Nested Loop + -> Seq Scan on t10 t10_1 + -> Materialize + -> Seq Scan on t9 t9_1 + -> Memoize + Cache Key: t9_1.a9 + Cache Mode: logical + -> Index Only Scan using t2_a2_idx on t2 t2_1 + Index Cond: (a2 = ANY (ARRAY[t9_1.a9, $1, $2, ..., $1023])) + -> Memoize + Cache Key: t9_1.a9 + Cache Mode: logical + -> Index Only Scan using t1_a1_desc_idx on t1 t1_1 + Index Cond: (a1 = ANY (ARRAY[t9_1.a9, $1025, $1026, ..., $2047])) + -> Hash -> Hash Join Hash Cond: (t3.a3 = t4.a4) -> Hash Join - Hash Cond: (t3.a3 = t8.a8) + Hash Cond: (t3.a3 = t5.a5) -> Hash Join - Hash Cond: (t3.a3 = t6.a6) + Hash Cond: (t3.a3 = t8.a8) -> Hash Join - Hash Cond: (t7.a7 = t3.a3) - -> Seq Scan on t7 - Storage Filter: (b7 = 1) + Hash Cond: (t9.a9 = t3.a3) + -> Seq Scan on t9 -> Hash -> Seq Scan on t3 -> Hash - -> Seq Scan on t6 + -> Seq Scan on t8 -> Hash - -> Seq Scan on t8 + -> Hash Join + Hash Cond: (t6.a6 = t5.a5) + -> Seq Scan on t6 + -> Hash + -> Seq Scan on t5 -> Hash -> Hash Join - Hash Cond: (t5.a5 = t4.a4) - -> Seq Scan on t5 + Hash Cond: (t7.a7 = t4.a4) + -> Seq Scan on t7 + Storage Filter: (b7 = 1) -> Hash -> Seq Scan on t4 - -> Hash - -> Nested Loop - -> Seq Scan on t10 - -> Materialize - -> Seq Scan on t9 - -> Memoize - Cache Key: t9.a9 - Cache Mode: logical - -> Index Only Scan using t2_a2_idx on t2 - Index Cond: (a2 = ANY (ARRAY[t9.a9, $1, $2, ..., $1023])) -> Memoize Cache Key: t9.a9 Cache Mode: logical - -> Index Only Scan using t1_a1_desc_idx on t1 - Index Cond: (a1 = ANY (ARRAY[t9.a9, $1025, $1026, ..., $2047])) - -> Hash - -> YB Batched Nested Loop Join - Join Filter: (t2_1.a2 = t1_1.a1) - -> YB Batched Nested Loop Join - Join Filter: (t3_1.a3 = t2_1.a2) - -> Hash Join - Hash Cond: (t3_1.a3 = t9_1.a9) - -> Hash Join - Hash Cond: (t3_1.a3 = t4_1.a4) - -> Hash Join - Hash Cond: (t3_1.a3 = t8_1.a8) - -> Hash Join - Hash Cond: (t3_1.a3 = t6_1.a6) - -> Hash Join - Hash Cond: (t7_1.a7 = t3_1.a3) - -> Seq Scan on t7 t7_1 - Storage Filter: (b7 = 1) - -> Hash - -> Seq Scan on t3 t3_1 - -> Hash - -> Seq Scan on t6 t6_1 - -> Hash - -> Seq Scan on t8 t8_1 - -> Hash - -> Hash Join - Hash Cond: (t5_1.a5 = t4_1.a4) - -> Seq Scan on t5 t5_1 - -> Hash - -> Seq Scan on t4 t4_1 - -> Hash - -> Nested Loop - -> Seq Scan on t10 t10_1 - -> Materialize - -> Seq Scan on t9 t9_1 - -> Memoize - Cache Key: t9_1.a9 - Cache Mode: logical - -> Index Only Scan using t2_a2_idx on t2 t2_1 - Index Cond: (a2 = ANY (ARRAY[t9_1.a9, $2049, $2050, ..., $3071])) - -> Memoize - Cache Key: t9_1.a9 - Cache Mode: logical - -> Index Only Scan using t1_a1_desc_idx on t1 t1_1 - Index Cond: (a1 = ANY 
(ARRAY[t9_1.a9, $3073, $3074, ..., $4095])) - Generated hints: /*+ Leading(((((((((t7 t3) t6) t8) (t5 t4)) (t10 t9)) t2) t1) (((((((t7_1 t3_1) t6_1) t8_1) (t5_1 t4_1)) (t10_1 t9_1)) t2_1) t1_1))) SeqScan(t7) SeqScan(t3) HashJoin(t3 t7) SeqScan(t6) HashJoin(t3 t6 t7) SeqScan(t8) HashJoin(t3 t6 t7 t8) SeqScan(t5) SeqScan(t4) HashJoin(t4 t5) HashJoin(t3 t4 t5 t6 t7 t8) SeqScan(t10) SeqScan(t9) NestLoop(t10 t9) HashJoin(t10 t3 t4 t5 t6 t7 t8 t9) IndexOnlyScan(t2 t2_a2_idx) YbBatchedNL(t10 t2 t3 t4 t5 t6 t7 t8 t9) IndexOnlyScan(t1 t1_a1_desc_idx) YbBatchedNL(t1 t10 t2 t3 t4 t5 t6 t7 t8 t9) SeqScan(t7_1) SeqScan(t3_1) HashJoin(t3_1 t7_1) SeqScan(t6_1) HashJoin(t3_1 t6_1 t7_1) SeqScan(t8_1) HashJoin(t3_1 t6_1 t7_1 t8_1) SeqScan(t5_1) SeqScan(t4_1) HashJoin(t4_1 t5_1) HashJoin(t3_1 t4_1 t5_1 t6_1 t7_1 t8_1) SeqScan(t10_1) SeqScan(t9_1) NestLoop(t10_1 t9_1) HashJoin(t10_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) IndexOnlyScan(t2_1 t2_a2_idx) YbBatchedNL(t10_1 t2_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) IndexOnlyScan(t1_1 t1_a1_desc_idx) YbBatchedNL(t10_1 t1_1 t2_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) HashJoin(t1 t10 t10_1 t1_1 t2 t2_1 t3 t3_1 t4 t4_1 t5 t5_1 t6 t6_1 t7 t7_1 t8 t8_1 t9 t9_1) Set(yb_enable_optimizer_statistics off) Set(yb_enable_base_scans_cost_model off) Set(enable_hashagg on) Set(enable_material on) Set(enable_memoize on) Set(enable_sort on) Set(enable_incremental_sort on) Set(max_parallel_workers_per_gather 2) Set(parallel_tuple_cost 0.10) Set(parallel_setup_cost 1000.00) Set(min_parallel_table_scan_size 1024) Set(yb_prefer_bnl on) Set(yb_bnl_batch_size 1024) Set(yb_fetch_row_limit 1024) Set(from_collapse_limit 20) Set(join_collapse_limit 20) Set(geqo false) */ -(91 rows) + -> Index Only Scan using t2_a2_idx on t2 + Index Cond: (a2 = ANY (ARRAY[t9.a9, $2049, $2050, ..., $3071])) + -> Memoize + Cache Key: t9.a9 + Cache Mode: logical + -> Index Only Scan using t1_a1_desc_idx on t1 + Index Cond: (a1 = ANY (ARRAY[t9.a9, $3073, $3074, ..., $4095])) + Generated hints: /*+ Leading(((((t10 (((((((t7_1 t3_1) t6_1) t8_1) (t5_1 t4_1)) (t10_1 t9_1)) t2_1) t1_1)) ((((t9 t3) t8) (t6 t5)) (t7 t4))) t2) t1)) SeqScan(t10) SeqScan(t7_1) SeqScan(t3_1) HashJoin(t3_1 t7_1) SeqScan(t6_1) HashJoin(t3_1 t6_1 t7_1) SeqScan(t8_1) HashJoin(t3_1 t6_1 t7_1 t8_1) SeqScan(t5_1) SeqScan(t4_1) HashJoin(t4_1 t5_1) HashJoin(t3_1 t4_1 t5_1 t6_1 t7_1 t8_1) SeqScan(t10_1) SeqScan(t9_1) NestLoop(t10_1 t9_1) HashJoin(t10_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) IndexOnlyScan(t2_1 t2_a2_idx) YbBatchedNL(t10_1 t2_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) IndexOnlyScan(t1_1 t1_a1_desc_idx) YbBatchedNL(t10_1 t1_1 t2_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) NestLoop(t10 t10_1 t1_1 t2_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) SeqScan(t9) SeqScan(t3) HashJoin(t3 t9) SeqScan(t8) HashJoin(t3 t8 t9) SeqScan(t6) SeqScan(t5) HashJoin(t5 t6) HashJoin(t3 t5 t6 t8 t9) SeqScan(t7) SeqScan(t4) HashJoin(t4 t7) HashJoin(t3 t4 t5 t6 t7 t8 t9) HashJoin(t10 t10_1 t1_1 t2_1 t3 t3_1 t4 t4_1 t5 t5_1 t6 t6_1 t7 t7_1 t8 t8_1 t9 t9_1) IndexOnlyScan(t2 t2_a2_idx) YbBatchedNL(t10 t10_1 t1_1 t2 t2_1 t3 t3_1 t4 t4_1 t5 t5_1 t6 t6_1 t7 t7_1 t8 t8_1 t9 t9_1) IndexOnlyScan(t1 t1_a1_desc_idx) YbBatchedNL(t1 t10 t10_1 t1_1 t2 t2_1 t3 t3_1 t4 t4_1 t5 t5_1 t6 t6_1 t7 t7_1 t8 t8_1 t9 t9_1) Set(yb_enable_optimizer_statistics off) Set(yb_enable_base_scans_cost_model off) Set(enable_hashagg on) Set(enable_material on) Set(enable_memoize on) Set(enable_sort on) Set(enable_incremental_sort on) Set(max_parallel_workers_per_gather 2) Set(parallel_tuple_cost 0.10) Set(parallel_setup_cost 
1000.00) Set(min_parallel_table_scan_size 1024) Set(yb_prefer_bnl on) Set(yb_bnl_batch_size 1024) Set(yb_fetch_row_limit 1024) Set(from_collapse_limit 20) Set(join_collapse_limit 20) Set(geqo false) */ +(90 rows) /*+ Leading(((((t10 (((((((t7_1 t3_1) t6_1) t8_1) (t5_1 t4_1)) (t10_1 t9_1)) t2_1) t1_1)) ((((t9 t3) t8) (t6 t5)) (t7 t4))) t2) t1)) SeqScan(t10) SeqScan(t7_1) SeqScan(t3_1) HashJoin(t3_1 t7_1) SeqScan(t6_1) HashJoin(t3_1 t6_1 t7_1) SeqScan(t8_1) HashJoin(t3_1 t6_1 t7_1 t8_1) SeqScan(t5_1) SeqScan(t4_1) HashJoin(t4_1 t5_1) HashJoin(t3_1 t4_1 t5_1 t6_1 t7_1 t8_1) SeqScan(t10_1) SeqScan(t9_1) NestLoop(t10_1 t9_1) HashJoin(t10_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) IndexOnlyScan(t2_1 t2_a2_idx) YbBatchedNL(t10_1 t2_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) IndexOnlyScan(t1_1 t1_a1_asc_idx) YbBatchedNL(t10_1 t1_1 t2_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) NestLoop(t10 t10_1 t1_1 t2_1 t3_1 t4_1 t5_1 t6_1 t7_1 t8_1 t9_1) SeqScan(t9) SeqScan(t3) HashJoin(t3 t9) SeqScan(t8) HashJoin(t3 t8 t9) SeqScan(t6) SeqScan(t5) HashJoin(t5 t6) HashJoin(t3 t5 t6 t8 t9) SeqScan(t7) SeqScan(t4) HashJoin(t4 t7) HashJoin(t3 t4 t5 t6 t7 t8 t9) HashJoin(t10 t10_1 t1_1 t2_1 t3 t3_1 t4 t4_1 t5 t5_1 t6 t6_1 t7 t7_1 t8 t8_1 t9 t9_1) IndexOnlyScan(t2 t2_a2_idx) YbBatchedNL(t10 t10_1 t1_1 t2 t2_1 t3 t3_1 t4 t4_1 t5 t5_1 t6 t6_1 t7 t7_1 t8 t8_1 t9 t9_1) IndexOnlyScan(t1 t1_a1_asc_idx) YbBatchedNL(t1 t10 t10_1 t1_1 t2 t2_1 t3 t3_1 t4 t4_1 t5 t5_1 t6 t6_1 t7 t7_1 t8 t8_1 t9 t9_1) Set(enable_hashagg on) Set(enable_material on) Set(enable_memoize on) Set(enable_sort on) Set(enable_incremental_sort on) Set(max_parallel_workers_per_gather 2) Set(parallel_tuple_cost 0.10) Set(parallel_setup_cost 1000.00) Set(min_parallel_table_scan_size 1024) Set(yb_prefer_bnl on) Set(yb_bnl_batch_size 1024) Set(yb_fetch_row_limit 1024) Set(from_collapse_limit 20) Set(join_collapse_limit 20) Set(geqo false) */ explain (hints on, costs off) select count(*) from t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, (select a1 x from t1, t2, t3, t4, t5, t6, t7, t8, t9, t10 where a1=a2 and a1=a3 and a1=a4 and a1=a5 and a5=a6 and a5=a7 and a5=a8 and a5=a9 and b7=1) dt where a1=a2 and a1=a3 and a1=a4 and a1=a5 and a5=a6 and a5=a7 and a5=a8 and a5=a9 and b7=1 and a1=x; QUERY PLAN @@ -1087,6 +1086,30 @@ explain (hints on, costs off) select 1 from t2, t3, t1 where a1=a2 and a1=a3 and Generated hints: /*+ Leading(((t3 t1) t2)) SeqScan(t3) IndexScan(t1 t1_a1_desc_idx) YbBatchedNL(t1 t3) IndexOnlyScan(t2 t2_a2_idx) YbBatchedNL(t1 t2 t3) Set(yb_enable_optimizer_statistics off) Set(yb_enable_base_scans_cost_model off) Set(enable_hashagg on) Set(enable_material on) Set(enable_memoize on) Set(enable_sort on) Set(enable_incremental_sort on) Set(max_parallel_workers_per_gather 2) Set(parallel_tuple_cost 0.10) Set(parallel_setup_cost 1000.00) Set(min_parallel_table_scan_size 1024) Set(yb_prefer_bnl on) Set(yb_bnl_batch_size 1024) Set(yb_fetch_row_limit 1024) Set(from_collapse_limit 8) Set(join_collapse_limit 8) Set(geqo false) */ (17 rows) +-- Check hint generation for partitioned tables. 
+explain (hints on, costs off) select count(*) from prt1 p1 join prt2 p2 on p1.a=p2.a; + QUERY PLAN +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + Aggregate + -> YB Batched Nested Loop Join + Join Filter: (p1.a = p2.a) + -> Append + -> Seq Scan on prt2_p1 p2_1 + -> Seq Scan on prt2_p2 p2_2 + -> Seq Scan on prt2_p3 p2_3 + -> Memoize + Cache Key: p2.a + Cache Mode: logical + -> Append + -> Index Only Scan using iprt1_p1_a_asc on prt1_p1 p1_1 + Index Cond: (a = ANY (ARRAY[p2.a, $1, $2, ..., $1023])) + -> Index Only Scan using iprt1_p2_a on prt1_p2 p1_2 + Index Cond: (a = ANY (ARRAY[p2.a, $1, $2, ..., $1023])) + -> Index Only Scan using iprt1_p3_a on prt1_p3 p1_3 + Index Cond: (a = ANY (ARRAY[p2.a, $1, $2, ..., $1023])) + Generated hints: /*+ Leading((p2 p1)) YbBatchedNL(p1 p2) Set(yb_enable_optimizer_statistics off) Set(yb_enable_base_scans_cost_model off) Set(enable_hashagg on) Set(enable_material on) Set(enable_memoize on) Set(enable_sort on) Set(enable_incremental_sort on) Set(max_parallel_workers_per_gather 2) Set(parallel_tuple_cost 0.10) Set(parallel_setup_cost 1000.00) Set(min_parallel_table_scan_size 1024) Set(yb_prefer_bnl on) Set(yb_bnl_batch_size 1024) Set(yb_fetch_row_limit 1024) Set(from_collapse_limit 8) Set(join_collapse_limit 8) Set(geqo false) */ +(18 rows) + -- Partitioned table where all partition-wise joins are forced to be merge joins. Should give no warnings/errors. SET enable_partitionwise_join to true; /*+ Mergejoin(t1 t2) */ explain (hints on, costs off) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; @@ -1153,8 +1176,8 @@ SET enable_partitionwise_join to true; -- Turn off join partitioning and try hinting. Should work fine. 
SET enable_partitionwise_join to false; /*+ Leading((t2 t1)) HashJoin(t1 t2) */ explain (hints on, costs off) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; - QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + QUERY PLAN +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort Sort Key: t1.a -> Hash Join @@ -1171,7 +1194,30 @@ SET enable_partitionwise_join to false; Storage Filter: (b = 0) -> Seq Scan on prt1_p3 t1_3 Storage Filter: (b = 0) - Generated hints: /*+ HashJoin() SeqScan(t2) SeqScan(t2) SeqScan(t2) SeqScan(t1) SeqScan(t1) SeqScan(t1) Set(yb_enable_optimizer_statistics off) Set(yb_enable_base_scans_cost_model off) Set(enable_hashagg on) Set(enable_material on) Set(enable_memoize on) Set(enable_sort on) Set(enable_incremental_sort on) Set(max_parallel_workers_per_gather 2) Set(parallel_tuple_cost 0.10) Set(parallel_setup_cost 1000.00) Set(min_parallel_table_scan_size 1024) Set(yb_prefer_bnl on) Set(yb_bnl_batch_size 1024) Set(yb_fetch_row_limit 1024) Set(from_collapse_limit 8) Set(join_collapse_limit 8) Set(geqo false) */ + Generated hints: /*+ Leading((t2 t1)) HashJoin(t1 t2) Set(yb_enable_optimizer_statistics off) Set(yb_enable_base_scans_cost_model off) Set(enable_hashagg on) Set(enable_material on) Set(enable_memoize on) Set(enable_sort on) Set(enable_incremental_sort on) Set(max_parallel_workers_per_gather 2) Set(parallel_tuple_cost 0.10) Set(parallel_setup_cost 1000.00) Set(min_parallel_table_scan_size 1024) Set(yb_prefer_bnl on) Set(yb_bnl_batch_size 1024) Set(yb_fetch_row_limit 1024) Set(from_collapse_limit 8) Set(join_collapse_limit 8) Set(geqo false) */ +(17 rows) + +-- Make sure the internal hint test passes. 
+explain (hints on, costs off) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; + QUERY PLAN +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + Sort + Sort Key: t1.a + -> Hash Join + Hash Cond: (t1.a = t2.b) + -> Append + -> Seq Scan on prt1_p1 t1_1 + Storage Filter: (b = 0) + -> Seq Scan on prt1_p2 t1_2 + Storage Filter: (b = 0) + -> Seq Scan on prt1_p3 t1_3 + Storage Filter: (b = 0) + -> Hash + -> Append + -> Seq Scan on prt2_p1 t2_1 + -> Seq Scan on prt2_p2 t2_2 + -> Seq Scan on prt2_p3 t2_3 + Generated hints: /*+ Leading((t1 t2)) HashJoin(t1 t2) Set(yb_enable_optimizer_statistics off) Set(yb_enable_base_scans_cost_model off) Set(enable_hashagg on) Set(enable_material on) Set(enable_memoize on) Set(enable_sort on) Set(enable_incremental_sort on) Set(max_parallel_workers_per_gather 2) Set(parallel_tuple_cost 0.10) Set(parallel_setup_cost 1000.00) Set(min_parallel_table_scan_size 1024) Set(yb_prefer_bnl on) Set(yb_bnl_batch_size 1024) Set(yb_fetch_row_limit 1024) Set(from_collapse_limit 8) Set(join_collapse_limit 8) Set(geqo false) */ (17 rows) -- Test hint table using query id instead of query text. @@ -2017,10 +2063,49 @@ drop cascades to view v2 drop cascades to view v3 drop cascades to view v4 drop cascades to view v5 -ERROR: relation "command_tag" does not exist -LINE 1: INSERT INTO command_tag(tag) VALUES (TG_TAG) - ^ -QUERY: INSERT INTO command_tag(tag) VALUES (TG_TAG) -CONTEXT: PL/pgSQL function "has space".record_drop_command() line 3 at SQL statement +-- Test fix for incorrect pruning of joins. +drop schema if exists yb26670 cascade; +NOTICE: schema "yb26670" does not exist, skipping +create schema yb26670; +set search_path to yb26670; +create table t0(c0 int4range , c1 BIT VARYING(40) ); +create table t1(c0 DECIMAL ); +create table t2(c0 bytea , c1 REAL ); +create table t3(c0 inet , c1 int4range ) WITHOUT OIDS ; +create table t4(c0 TEXT ); +create temporary view v6(c0) AS (SELECT '132.63.53.50' FROM t2*, t1*, t3, t4*, t0* WHERE lower_inf(((((t0.c0)*(t3.c1)))+(((t3.c1)+(t3.c1))))) LIMIT 2444285747789238479); +-- Can't generate hints here because the final plan has a join replaced by a RESULT node. 
+explain (hints on) SELECT MAX((0.6002056)::MONEY) FROM t1*, ONLY t0, ONLY v6 LEFT OUTER JOIN t4* ON TRUE RIGHT OUTER JOIN t2* ON FALSE GROUP BY - (+ (strpos(t4.c0, v6.c0))); + QUERY PLAN +--------------------------------------------------------------------------------- + HashAggregate (cost=27512815.00..27513515.00 rows=40000 width=12) + Group Key: (- (+ strpos(c0, c0))) + -> Nested Loop (cost=0.00..20012815.00 rows=1000000000 width=4) + -> Nested Loop (cost=0.00..12712.50 rows=1000000 width=64) + -> Nested Loop Left Join (cost=0.00..110.00 rows=1000 width=64) + Join Filter: false + -> Seq Scan on t2 (cost=0.00..100.00 rows=1000 width=0) + -> Result (cost=0.00..0.00 rows=0 width=64) + One-Time Filter: false + -> Materialize (cost=0.00..105.00 rows=1000 width=0) + -> Seq Scan on t1 (cost=0.00..100.00 rows=1000 width=0) + -> Materialize (cost=0.00..105.00 rows=1000 width=0) + -> Seq Scan on t0 (cost=0.00..100.00 rows=1000 width=0) + Generated hints: none +(14 rows) + +SELECT MAX((0.6002056)::MONEY) FROM t1*, ONLY t0, ONLY v6 LEFT OUTER JOIN t4* ON TRUE RIGHT OUTER JOIN t2* ON FALSE GROUP BY - (+ (strpos(t4.c0, v6.c0))); + max +----- +(0 rows) + +drop schema yb26670 cascade; +NOTICE: drop cascades to 6 other objects +DETAIL: drop cascades to table t0 +drop cascades to table t1 +drop cascades to table t2 +drop cascades to table t3 +drop cascades to table t4 +drop cascades to view v6 \set ECHO none WARNING: there is no transaction in progress diff --git a/src/postgres/src/test/regress/sql/yb.orig.extensions.sql b/src/postgres/src/test/regress/sql/yb.orig.extensions.sql index b8bc4aba92ee..61fba9c8722f 100644 --- a/src/postgres/src/test/regress/sql/yb.orig.extensions.sql +++ b/src/postgres/src/test/regress/sql/yb.orig.extensions.sql @@ -1,3 +1,8 @@ +SET client_min_messages = warning; +DROP DATABASE if exists test_yb_extensions; +CREATE DATABASE test_yb_extensions; +\c test_yb_extensions + -- Testing pgcrypto. create extension pgcrypto; @@ -60,3 +65,7 @@ CREATE SCHEMA has$dollar; CREATE EXTENSION yb_test_extension SCHEMA has$dollar; CREATE EXTENSION yb_test_extension SCHEMA "has space"; + +\c yugabyte + +DROP DATABASE test_yb_extensions WITH (FORCE); diff --git a/src/postgres/src/test/regress/sql/yb.orig.hints.sql b/src/postgres/src/test/regress/sql/yb.orig.hints.sql index aa7557710345..f96a2bfc221c 100644 --- a/src/postgres/src/test/regress/sql/yb.orig.hints.sql +++ b/src/postgres/src/test/regress/sql/yb.orig.hints.sql @@ -245,6 +245,9 @@ explain (hints on, costs off) select f2 from t1, t2, t3, func2(1, 2) funky where -- Make sure uniqueness can be proved using indices for hinted query generated for internal hint test. explain (hints on, costs off) select 1 from t2, t3, t1 where a1=a2 and a1=a3 and unn1=1; +-- Check hint generation for partitioned tables. +explain (hints on, costs off) select count(*) from prt1 p1 join prt2 p2 on p1.a=p2.a; + -- Partitioned table where all partition-wise joins are forced to be merge joins. Should give no warnings/errors. SET enable_partitionwise_join to true; /*+ Mergejoin(t1 t2) */ explain (hints on, costs off) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; @@ -257,6 +260,9 @@ SET enable_partitionwise_join to true; SET enable_partitionwise_join to false; /*+ Leading((t2 t1)) HashJoin(t1 t2) */ explain (hints on, costs off) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; +-- Make sure the internal hint test passes. 
+explain (hints on, costs off) SELECT t1.a, t1.c, t2.b, t2.c FROM prt1 t1, prt2 t2 WHERE t1.a = t2.b AND t1.b = 0 ORDER BY t1.a, t2.b; + -- Test hint table using query id instead of query text. create extension if not exists pg_hint_plan; set pg_hint_plan.enable_hint_table to on; @@ -413,4 +419,23 @@ set pg_hint_plan.yb_bad_hint_mode to warn; /*+ noNestLoop(t1 t2) noNestLoop(t1 t3) */ explain (costs off, uids on) select max(a1) from t1 join t2 on a1a3; drop schema yb_hints cascade; + +-- Test fix for incorrect pruning of joins. +drop schema if exists yb26670 cascade; +create schema yb26670; +set search_path to yb26670; + +create table t0(c0 int4range , c1 BIT VARYING(40) ); +create table t1(c0 DECIMAL ); +create table t2(c0 bytea , c1 REAL ); +create table t3(c0 inet , c1 int4range ) WITHOUT OIDS ; +create table t4(c0 TEXT ); +create temporary view v6(c0) AS (SELECT '132.63.53.50' FROM t2*, t1*, t3, t4*, t0* WHERE lower_inf(((((t0.c0)*(t3.c1)))+(((t3.c1)+(t3.c1))))) LIMIT 2444285747789238479); + +-- Can't generate hints here because the final plan has a join replaced by a RESULT node. +explain (hints on) SELECT MAX((0.6002056)::MONEY) FROM t1*, ONLY t0, ONLY v6 LEFT OUTER JOIN t4* ON TRUE RIGHT OUTER JOIN t2* ON FALSE GROUP BY - (+ (strpos(t4.c0, v6.c0))); +SELECT MAX((0.6002056)::MONEY) FROM t1*, ONLY t0, ONLY v6 LEFT OUTER JOIN t4* ON TRUE RIGHT OUTER JOIN t2* ON FALSE GROUP BY - (+ (strpos(t4.c0, v6.c0))); + +drop schema yb26670 cascade; + \set ECHO none diff --git a/src/postgres/third-party-extensions/pg_hint_plan/core.c b/src/postgres/third-party-extensions/pg_hint_plan/core.c index 297b20a1b01c..8333b2332aaf 100644 --- a/src/postgres/third-party-extensions/pg_hint_plan/core.c +++ b/src/postgres/third-party-extensions/pg_hint_plan/core.c @@ -193,42 +193,89 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels) #endif } - if (IsYugaByteEnabled()) + if (IsYugaByteEnabled() && root->ybHintedJoinsOuter != NULL) { /* - * Sweep all joins at this level and look for disabled join - * and non-disabled joins. + * There is a Leading hint so sweep all joins at this level + * and look for disabled and non-disabled joins. Also determine + * if some join at this level has been hinted. If so, it is safe to + * prune non-hinted joins. */ - List *levelJoinRels = NIL; - bool foundDisabledRel = false; + List *ybLevelJoinRels = NIL; + bool ybFoundDisabledRel = false; + bool ybFoundHintedJoin = false; + ListCell *lc2; foreach(lc2, root->join_rel_level[lev]) { - RelOptInfo *rel = (RelOptInfo *) lfirst(lc2); - if (rel->cheapest_total_path->total_cost < disable_cost || - rel->cheapest_total_path->ybIsHinted || - rel->cheapest_total_path->ybHasHintedUid) + RelOptInfo *ybRel = (RelOptInfo *) lfirst(lc2); + Assert(IS_JOIN_REL(ybRel)); + + bool ybIsJoinPath; + /* + * Could have a non-join path type here (e.g., an Append) + * so need to check that we have a join path. + */ + switch (ybRel->cheapest_total_path->type) + { + case T_NestPath: + ybIsJoinPath = true; + break; + case T_MergePath: + ybIsJoinPath = true; + break; + case T_HashPath: + ybIsJoinPath = true; + break; + default: + ybIsJoinPath = false; + break; + } + + /* + * Assuming that only join paths exist in the space + * of enumerated joins. If this is found to not be the case, + * the next 2 IFs need to check for a join path, and a non-join + * path, respectively. 
+ */ + Assert(ybIsJoinPath); + + if (ybRel->cheapest_total_path->ybIsHinted || + ybRel->cheapest_total_path->ybHasHintedUid) + { + ybFoundHintedJoin = true; + } + + if (ybRel->cheapest_total_path->total_cost < disable_cost || + ybRel->cheapest_total_path->ybIsHinted || + ybRel->cheapest_total_path->ybHasHintedUid) { /* - * Found a join with cost < disable cost. Or cost could be - * >= disable cost (because the join is really expensive) - * but it is in a Leading hint. + * Found a join with cost < disable cost, + * or whose cost could be >= disable cost because the join is + * really expensive. But it is in a Leading hint, or + * has been hinted using its UID so add it to the list + * of joins we want to keep at this level. */ - levelJoinRels = lappend(levelJoinRels, rel); + ybLevelJoinRels = lappend(ybLevelJoinRels, ybRel); } else { /* - * Found a path that has been disabled via hints. + * Found a join that has been disabled, + * or that perhaps has a "true" cost > disable cost. + * It is a join and is not hinted so set a flag so we can + * try pruning below. */ - foundDisabledRel = true; + ybFoundDisabledRel = true; } } /* - * Now look for a mix of enabled and disabled join paths at this level. + * Now look for a mix of enabled and disabled join paths at this level, + * but only do this if some join at this level has been hinted. */ - if (levelJoinRels != NIL && foundDisabledRel) + if (ybLevelJoinRels != NIL && ybFoundDisabledRel && ybFoundHintedJoin) { if (yb_enable_planner_trace) { @@ -243,13 +290,13 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels) foreach(lc2, root->join_rel_level[lev]) { RelOptInfo *rel = (RelOptInfo *) lfirst(lc2); - if (!list_member_ptr(levelJoinRels, rel)) + if (!list_member_ptr(ybLevelJoinRels, rel)) { ybTraceRelOptInfo(root, rel, dropMsg.data); } } - foreach(lc2, levelJoinRels) + foreach(lc2, ybLevelJoinRels) { RelOptInfo *rel = (RelOptInfo *) lfirst(lc2); ybTraceRelOptInfo(root, rel, keepMsg.data); @@ -260,9 +307,10 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels) } /* - * Keep only the non-disabled joins since the disabled ones cannot be part of the best plan. + * Keep only the non-disabled joins since the disabled ones + * cannot be part of the best plan. */ - root->join_rel_level[lev] = levelJoinRels; + root->join_rel_level[lev] = ybLevelJoinRels; } } } From 1473d72c34307b651c7c6bc083999dcf0119e52a Mon Sep 17 00:00:00 2001 From: Fizaa Luthra Date: Thu, 8 May 2025 14:35:59 -0400 Subject: [PATCH 031/146] [#27141] YSQL: Add cube, earthdistance to the list of supported extensions for YSQL major upgrade Summary: Add cube and earthdistance to the list of supported extensions in yb_check_installed_extensions(). 
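For context, a minimal usage sketch of the two newly allow-listed extensions (the coordinates below are made up for illustration). earthdistance depends on cube, which is why both are added together and why the test below installs cube first:

```sql
-- earthdistance builds on cube, so cube must be created first.
CREATE EXTENSION cube;
CREATE EXTENSION earthdistance;

-- Approximate great-circle distance in meters between two latitude/longitude points.
SELECT earth_distance(ll_to_earth(37.39, -122.08),
                      ll_to_earth(40.71, -74.01));
```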
Jira: DB-16621 Test Plan: ./yb_build.sh release --cxx-test ysql_major_extension_upgrade-test --gtest-filter YsqlMajorExtensionUpgradeTest.Simple Reviewers: hsunder Reviewed By: hsunder Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D43870 --- src/postgres/src/bin/pg_upgrade/check.c | 4 +++- .../upgrade-tests/ysql_major_extension_upgrade-test.cc | 2 ++ 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/src/postgres/src/bin/pg_upgrade/check.c b/src/postgres/src/bin/pg_upgrade/check.c index 4a8dca929fe4..367155ea1457 100644 --- a/src/postgres/src/bin/pg_upgrade/check.c +++ b/src/postgres/src/bin/pg_upgrade/check.c @@ -1828,7 +1828,9 @@ yb_check_installed_extensions() "'pg_cron'," "'pg_partman'," "'plpgsql'," - "'anon')"); + "'anon'," + "'cube'," + "'earthdistance')"); ntups = PQntuples(res); i_name = PQfnumber(res, "extname"); diff --git a/src/yb/integration-tests/upgrade-tests/ysql_major_extension_upgrade-test.cc b/src/yb/integration-tests/upgrade-tests/ysql_major_extension_upgrade-test.cc index 48cc5c871420..54b1cf7c5564 100644 --- a/src/yb/integration-tests/upgrade-tests/ysql_major_extension_upgrade-test.cc +++ b/src/yb/integration-tests/upgrade-tests/ysql_major_extension_upgrade-test.cc @@ -43,6 +43,8 @@ TEST_F(YsqlMajorExtensionUpgradeTest, Simple) { ASSERT_OK(ExecuteStatement(Format("CREATE EXTENSION pg_partman"))); ASSERT_OK(ExecuteStatement(Format("CREATE EXTENSION pg_cron"))); ASSERT_OK(ExecuteStatement(Format("CREATE EXTENSION pgaudit"))); + ASSERT_OK(ExecuteStatement(Format("CREATE EXTENSION cube"))); + ASSERT_OK(ExecuteStatement(Format("CREATE EXTENSION earthdistance"))); ASSERT_OK(UpgradeClusterToCurrentVersion()); } From 44f8d7ca57dbc1f75d08a96ffb6ef8c427bf472d Mon Sep 17 00:00:00 2001 From: Andrei Martsinchyk Date: Fri, 9 May 2025 10:02:32 -0700 Subject: [PATCH 032/146] [#27114] YSQL: Error out of ANALYZE in mixed mode Summary: ANALYZE command can't work in the mixed PG11 to PG15 mode for two main reasons: 1. The random number generator has changed between PG11 and PG15 and its state went from 48 to 128 bits. There's no way to convert one into other. In our implementation, sinse these states are the different protobuf fields, when sampling state is passed between different base Postgres versions the receiving side reads uninitialized random number generator state. 2. We prohibit metadata updates in mixed mode, so ANALYZE can't record collected statistics, even if it succeeds. Hence we do not try to make ANALYZE command working, but error out early and gracefully when mixed mode is detected. In this diff we are adding a validation, whether the PG15 random number generator state presents in the incoming sampling state data. If not, that indicates that the state is coming from incompatible node. We add this validation in two places, in DocDB when it receives the request from PgGate, and in PgGate where it receives response from DocDB. It is hard to add equivalent validation to PG11. We have no control, from what PG11 based version user upgrades to PG15. It may happen that the version is too old and has no validation. Apparently, it is not a problem. If PG11 PgGate sends request to PG15 DocDB, it returns an error. If PG15 PgGate sends request to PG11 DocDB it proceeds with uninitialized random number generator state, however it is resilient to that. It collects potentially incorrect portion of the sample, but the response is rejected by PG15, so wasted work on PG11 node is the only impact. 
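A hedged sketch of the user-visible effect described above and exercised in the test plan below (the table name is hypothetical, and the exact error text depends on the upgrade phase):

```sql
-- During the major-version upgrade ANALYZE is expected to fail early, either
-- because catalog/statistics updates are blocked or, in mixed mode, because
-- the PG15 random-number-generator state is missing from the sampling state.
ANALYZE sample_table;   -- fails until the upgrade is finalized

-- Once the upgrade has been finalized on all nodes:
ANALYZE sample_table;   -- succeeds
```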
**Upgrade/Rollback safety:** Comment only change to a .proto file Jira: DB-16600 Test Plan: Manual. Start from a PG11 based cluster. Start upgrade. - ANALYZE fails because it can't update statistics Upgrade some tservers - ANALYZE fails because of missing random state both when ysqlsh is connected to PG11 or PG15 node. Upgrade remaining tservers - ANALYZE fails because it can't update statistics Finalize upgrade - ANALYZE succeeds Unit: ./yb_build.sh --gcc13 --cxx-test ysql_major_upgrade-test --gtest_filter YsqlMajorUpgradeTest.Analyze Reviewers: aagrawal, telgersma, timur Reviewed By: telgersma Subscribers: yql Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43863 --- src/yb/common/pgsql_protocol.proto | 1 - src/yb/docdb/pgsql_operation.cc | 2 + .../upgrade-tests/ysql_major_upgrade-test.cc | 37 +++++++++++++++++++ src/yb/yql/pggate/pg_sample.cc | 6 ++- 4 files changed, 43 insertions(+), 3 deletions(-) diff --git a/src/yb/common/pgsql_protocol.proto b/src/yb/common/pgsql_protocol.proto index f1035c3059c0..89843746378a 100644 --- a/src/yb/common/pgsql_protocol.proto +++ b/src/yb/common/pgsql_protocol.proto @@ -148,7 +148,6 @@ message PgsqlPartitionBound { optional bool is_inclusive = 2; } -// YB_TODO(upgrade): Handle in pg15 upgrade path. message PgsqlRandState { required uint64 s0 = 1; required uint64 s1 = 2; diff --git a/src/yb/docdb/pgsql_operation.cc b/src/yb/docdb/pgsql_operation.cc index 633664f60849..1325c053b9aa 100644 --- a/src/yb/docdb/pgsql_operation.cc +++ b/src/yb/docdb/pgsql_operation.cc @@ -2221,6 +2221,8 @@ Result> PgsqlReadOperation::ExecuteSample() { // Current number of rows to skip before collecting next one for sample double rowstoskip = sampling_state.rowstoskip(); // Variables for the random numbers generator + SCHECK(sampling_state.has_rand_state(), InvalidArgument, + "Invalid sampling state, random state is missing"); YbgPrepareMemoryContext(); YbgReservoirState rstate = NULL; YbgSamplerCreate( diff --git a/src/yb/integration-tests/upgrade-tests/ysql_major_upgrade-test.cc b/src/yb/integration-tests/upgrade-tests/ysql_major_upgrade-test.cc index e59f7d6368f4..b44702720797 100644 --- a/src/yb/integration-tests/upgrade-tests/ysql_major_upgrade-test.cc +++ b/src/yb/integration-tests/upgrade-tests/ysql_major_upgrade-test.cc @@ -1651,4 +1651,41 @@ TEST_F(YsqlMajorUpgradeTest, YbSuperuserRole) { ASSERT_OK(UpgradeClusterToCurrentVersion(kNoDelayBetweenNodes)); } +TEST_F(YsqlMajorUpgradeTest, Analyze) { + constexpr std::string_view kStatsUpdateError = + "YSQL DDLs, and catalog modifications are not allowed during a major YSQL upgrade"; + constexpr std::string_view kNoRandStateError = "Invalid sampling state, random state is missing"; + using ExpectedErrors = std::optional>; + auto check_analyze = [this](std::optional server, ExpectedErrors expected_errors) { + auto conn = ASSERT_RESULT(CreateConnToTs(server)); + auto status = conn.ExecuteFormat("ANALYZE $0", kSimpleTableName); + if (!expected_errors) { + ASSERT_OK(status); + } else { + ASSERT_NOK(status); + for (const auto& err : *expected_errors) { + if (status.ToString().find(err) != std::string::npos) { + return; + } + } + FAIL() << "Unexpected error " << status.ToString(); + } + }; + ASSERT_OK(CreateSimpleTable()); + check_analyze(kAnyTserver, std::nullopt); + ASSERT_OK(RestartAllMastersInCurrentVersion(kNoDelayBetweenNodes)); + ASSERT_OK(PerformYsqlMajorCatalogUpgrade()); + check_analyze(kAnyTserver, {{kStatsUpdateError}}); + LOG(INFO) << "Restarting yb-tserver " << kMixedModeTserverPg15 << 
" in current version"; + auto mixed_mode_pg15_tserver = cluster_->tablet_server(kMixedModeTserverPg15); + ASSERT_OK(RestartTServerInCurrentVersion( + *mixed_mode_pg15_tserver, /*wait_for_cluster_to_stabilize=*/true)); + check_analyze(kMixedModeTserverPg11, {{kNoRandStateError, kStatsUpdateError}}); + check_analyze(kMixedModeTserverPg15, {{kNoRandStateError, kStatsUpdateError}}); + ASSERT_OK(UpgradeAllTserversFromMixedMode()); + check_analyze(kAnyTserver, {{kStatsUpdateError}}); + ASSERT_OK(FinalizeUpgrade()); + check_analyze(kAnyTserver, std::nullopt); +} + } // namespace yb diff --git a/src/yb/yql/pggate/pg_sample.cc b/src/yb/yql/pggate/pg_sample.cc index a0bfa94ce17f..9a326e079cd9 100644 --- a/src/yb/yql/pggate/pg_sample.cc +++ b/src/yb/yql/pggate/pg_sample.cc @@ -156,14 +156,16 @@ class PgDocSampleOp : public PgDocReadOp { auto* sampling_state = res.mutable_sampling_state(); VLOG_WITH_PREFIX_AND_FUNC(1) << "Received sampling state: " << sampling_state->ShortDebugString(); + SCHECK(sampling_state->has_rand_state(), InvalidArgument, + "Invalid sampling state, random state is missing"); sampling_stats_ = { .num_blocks_processed = sampling_state->num_blocks_processed(), .num_blocks_collected = sampling_state->num_blocks_collected(), .num_rows_processed = sampling_state->samplerows(), .num_rows_collected = sampling_state->numrows(), - }; + }; - RETURN_NOT_OK(PgDocReadOp::CompleteProcessResponse()); + RETURN_NOT_OK(PgDocReadOp::CompleteProcessResponse()); if (active_op_count_ > 0) { auto& next_active_op = GetReadOp(0); From ad8d14ccefb8c5fc9ae891e1d12ec0f9294503d4 Mon Sep 17 00:00:00 2001 From: Basava Date: Mon, 12 May 2025 10:05:30 -0700 Subject: [PATCH 033/146] [#27035] DocDB: Fix interim pg_locks failure when one node is unavailable Summary: When a node goes down, master removes it from the list of live tservers after it fails to receive any heartbeat for a default of 60s - controlled by `FLAGS_tserver_unresponsive_timeout_ms`. Until then, the other tserver too treat this tserver as active. While executing the pg_locks query, we fetch the list of status tablets on each tserver by querying the master. Post that, we try fetching the old transactions at all of the tservers considered live, and fail the query if we don't hear back from any tserver. **Problem** If a node goes down, the pg_locks query might fail for 60 secs post that. **Solution** Ideally, we could log and ignore this failure as we have a subsequent check where we fail the query if we haven't heard back a valid response for all of the status tablets. So it shouldn't be a problem even if a node goes down as the leaders switch to live nodes, and the pg_locks query can be served. 
Jira: DB-16509 Test Plan: Jenkins ./yb_build.sh --cxx-test='TEST_F(PgGetLockStatusTestRF3, TestPgLocksAfterTserverShutdown)' Reviewers: yyan, rthallam Reviewed By: yyan Subscribers: ybase, yql Differential Revision: https://phorge.dev.yugabyte.com/D43668 --- src/yb/tserver/pg_client_service.cc | 36 +++++++----- .../yql/pgwrapper/pg_get_lock_status-test.cc | 58 ++++++++++++++++--- 2 files changed, 71 insertions(+), 23 deletions(-) diff --git a/src/yb/tserver/pg_client_service.cc b/src/yb/tserver/pg_client_service.cc index f047f8edf638..1cacac76e1f3 100644 --- a/src/yb/tserver/pg_client_service.cc +++ b/src/yb/tserver/pg_client_service.cc @@ -890,7 +890,7 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv std::future> DoGetOldTransactionsForTablet( const uint32_t min_txn_age_ms, const uint32_t max_num_txns, - const std::shared_ptr& proxy, const TabletId& tablet_id) { + const RemoteTabletServerPtr& remote_ts, const TabletId& tablet_id) { auto req = std::make_shared(); req->set_tablet_id(tablet_id); req->set_min_txn_age_ms(min_txn_age_ms); @@ -899,13 +899,14 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv return MakeFuture>([&](auto callback) { auto resp = std::make_shared(); std::shared_ptr controller = std::make_shared(); - proxy->GetOldTransactionsAsync( + remote_ts->proxy()->GetOldTransactionsAsync( *req.get(), resp.get(), controller.get(), - [req, callback, controller, resp] { + [req, callback, controller, resp, remote_ts] { auto s = controller->status(); if (!s.ok()) { - s = s.CloneAndPrepend( - Format("GetOldTransactions request for tablet $0 failed: ", req->tablet_id())); + s = s.CloneAndPrepend(Format( + "GetOldTransactions request for tablet $0 to tserver $1 failed: ", + req->tablet_id(), remote_ts->permanent_uuid())); return callback(s); } callback(OldTxnsRespInfo { @@ -918,7 +919,7 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv std::future> DoGetOldSingleShardWaiters( const uint32_t min_txn_age_ms, const uint32_t max_num_txns, - const std::shared_ptr& proxy) { + const RemoteTabletServerPtr& remote_ts) { auto req = std::make_shared(); req->set_min_txn_age_ms(min_txn_age_ms); req->set_max_num_txns(max_num_txns); @@ -926,12 +927,14 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv return MakeFuture>([&](auto callback) { auto resp = std::make_shared(); std::shared_ptr controller = std::make_shared(); - proxy->GetOldSingleShardWaitersAsync( + remote_ts->proxy()->GetOldSingleShardWaitersAsync( *req.get(), resp.get(), controller.get(), - [req, callback, controller, resp] { + [req, callback, controller, resp, remote_ts] { auto s = controller->status(); if (!s.ok()) { - s = s.CloneAndPrepend("GetOldSingleShardWaiters request failed: "); + s = s.CloneAndPrepend(Format( + "GetOldSingleShardWaiters request to tserver $0 failed: ", + remote_ts->permanent_uuid())); return callback(s); } callback(OldTxnsRespInfo { @@ -1061,17 +1064,17 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv auto proxy = remote_tserver->proxy(); for (const auto& tablet : txn_status_tablets.global_tablets) { res_futures.push_back( - DoGetOldTransactionsForTablet(min_txn_age_ms, max_num_txns, proxy, tablet)); + DoGetOldTransactionsForTablet(min_txn_age_ms, max_num_txns, remote_tserver, tablet)); status_tablet_ids.insert(tablet); } for (const auto& tablet : txn_status_tablets.placement_local_tablets) { res_futures.push_back( - 
DoGetOldTransactionsForTablet(min_txn_age_ms, max_num_txns, proxy, tablet)); + DoGetOldTransactionsForTablet(min_txn_age_ms, max_num_txns, remote_tserver, tablet)); status_tablet_ids.insert(tablet); } // Query for oldest single shard waiting transactions as well. res_futures.push_back( - DoGetOldSingleShardWaiters(min_txn_age_ms, max_num_txns, proxy)); + DoGetOldSingleShardWaiters(min_txn_age_ms, max_num_txns, remote_tserver)); } // Limit num transactions to max_num_txns for which lock status is being queried. // @@ -1083,10 +1086,13 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv OldTxnMetadataVariantComparator> old_txns_pq; StatusToPB(Status::OK(), resp->mutable_status()); for (auto it = res_futures.begin(); - it != res_futures.end() && resp->status().code() == AppStatusPB::OK; ) { + it != res_futures.end() && resp->status().code() == AppStatusPB::OK; ++it) { auto res = it->get(); if (!res.ok()) { - return res.status(); + // A node could be unavailable. We need not fail the pg_locks query if we see at least one + // response for all of the status tablets. + LOG(INFO) << res.status(); + continue; } std::visit([&](auto&& old_txns_resp) { @@ -1094,7 +1100,6 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv // Ignore leadership and NOT_FOUND errors as we broadcast the request to all tservers. if (old_txns_resp->error().code() == TabletServerErrorPB::NOT_THE_LEADER || old_txns_resp->error().code() == TabletServerErrorPB::TABLET_NOT_FOUND) { - it = res_futures.erase(it); return; } const auto& s = StatusFromPB(old_txns_resp->error().status()); @@ -1114,7 +1119,6 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv old_txns_pq.pop(); } } - it++; }, res->resp_ptr); } if (resp->status().code() != AppStatusPB::OK) { diff --git a/src/yb/yql/pgwrapper/pg_get_lock_status-test.cc b/src/yb/yql/pgwrapper/pg_get_lock_status-test.cc index 60bcc92be3f5..bf1b3f4bfb59 100644 --- a/src/yb/yql/pgwrapper/pg_get_lock_status-test.cc +++ b/src/yb/yql/pgwrapper/pg_get_lock_status-test.cc @@ -38,6 +38,9 @@ DECLARE_bool(enable_wait_queues); DECLARE_bool(TEST_skip_returning_old_transactions); DECLARE_uint64(force_single_shard_waiter_retry_ms); DECLARE_int32(tserver_unresponsive_timeout_ms); +DECLARE_double(leader_failure_max_missed_heartbeat_periods); +DECLARE_int32(raft_heartbeat_interval_ms); +DECLARE_int32(leader_lease_duration_ms); using namespace std::literals; using std::string; @@ -47,6 +50,9 @@ namespace pgwrapper { YB_STRONGLY_TYPED_BOOL(RequestSpecifiedTxnIds); +constexpr auto kPgLocksDistTxnsQuery = + "SELECT COUNT(DISTINCT(ybdetails->>'transactionid')) FROM pg_locks"; + struct TestTxnLockInfo { TestTxnLockInfo() {} explicit TestTxnLockInfo(int num_locks) : num_locks(num_locks) {} @@ -240,8 +246,7 @@ TEST_F(PgGetLockStatusTest, TestLocksFromWaitQueue) { // Assert that locks corresponding to the waiter txn as well are returned in pg_locks; SleepFor(MonoDelta::FromSeconds(2 * kTimeMultiplier)); - auto num_txns = ASSERT_RESULT(session.conn->FetchRow( - "SELECT COUNT(DISTINCT(ybdetails->>'transactionid')) FROM pg_locks")); + auto num_txns = ASSERT_RESULT(session.conn->FetchRow(kPgLocksDistTxnsQuery)); ASSERT_EQ(num_txns, 2); ASSERT_OK(session.conn->Execute("COMMIT")); @@ -887,8 +892,7 @@ TEST_F(PgGetLockStatusTest, TestLockStatusRespHasHostNodeSet) { constexpr int kMinTxnAgeMs = 1; // All distributed txns returned as part of pg_locks should have the host node uuid set. 
const auto kPgLocksQuery = - Format("SELECT COUNT(DISTINCT(ybdetails->>'transactionid')) FROM pg_locks " - "WHERE NOT fastpath AND ybdetails->>'node' IS NULL"); + Format("$0 WHERE NOT fastpath AND ybdetails->>'node' IS NULL", kPgLocksDistTxnsQuery); const auto table = "foo"; const auto key = "1"; @@ -1148,8 +1152,6 @@ TEST_F(PgGetLockStatusTest, TestPgLocksOutputAfterNodeOperations) { // tombstoned amidst the two passes. #ifndef NDEBUG TEST_F(PgGetLockStatusTest, FetchLocksAmidstTransactionCommit) { - const auto kPgLocksQuery = "SELECT COUNT(DISTINCT(ybdetails->>'transactionid')) FROM pg_locks"; - auto setup_conn = ASSERT_RESULT(Connect()); ASSERT_OK(setup_conn.Execute("CREATE TABLE foo(k INT PRIMARY KEY, v INT) SPLIT INTO 1 TABLETS")); ASSERT_OK(setup_conn.Execute("INSERT INTO foo SELECT generate_series(1, 10), 0")); @@ -1171,7 +1173,7 @@ TEST_F(PgGetLockStatusTest, FetchLocksAmidstTransactionCommit) { auto result_future = std::async(std::launch::async, [&]() -> Result { auto conn = VERIFY_RESULT(Connect()); - return conn.FetchRow(kPgLocksQuery); + return conn.FetchRow(kPgLocksDistTxnsQuery); }); // Wait for the lock status request to scan the transaction reverse index section and store @@ -1187,5 +1189,47 @@ TEST_F(PgGetLockStatusTest, FetchLocksAmidstTransactionCommit) { } #endif // NDEBUG +class PgGetLockStatusTestFastElection : public PgGetLockStatusTestRF3 { + protected: + void SetUp() override { + ANNOTATE_UNPROTECTED_WRITE(FLAGS_leader_failure_max_missed_heartbeat_periods) = 4; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_raft_heartbeat_interval_ms) = 100; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_leader_lease_duration_ms) = 400; + PgGetLockStatusTestRF3::SetUp(); + } +}; + +TEST_F_EX( + PgGetLockStatusTestRF3, TestPgLocksAfterTserverShutdown, PgGetLockStatusTestFastElection) { + const auto kTable = "foo"; + auto setup_conn = ASSERT_RESULT(Connect()); + ASSERT_OK(setup_conn.ExecuteFormat("CREATE TABLE $0(k INT, v INT) SPLIT INTO 1 TABLETS", kTable)); + ASSERT_OK(setup_conn.ExecuteFormat("INSERT INTO $0 SELECT generate_series(1, 10), 0", kTable)); + + auto conn = ASSERT_RESULT(Connect()); + ASSERT_OK(conn.StartTransaction(IsolationLevel::READ_COMMITTED)); + ASSERT_OK(conn.ExecuteFormat("UPDATE $0 SET v=v+1 WHERE k=1", kTable)); + SleepFor(FLAGS_heartbeat_interval_ms * 2ms * kTimeMultiplier); + ASSERT_EQ(ASSERT_RESULT(setup_conn.FetchRow(kPgLocksDistTxnsQuery)), 1); + + const auto table_id = ASSERT_RESULT(GetTableIDFromTableName(kTable)); + auto leader_peers = ListTableActiveTabletLeadersPeers(cluster_.get(), table_id); + ASSERT_EQ(leader_peers.size(), 1); + auto& leader_peer = leader_peers[0]; + + auto* leader_ts = cluster_->find_tablet_server(leader_peer->permanent_uuid()); + if (leader_ts == cluster_->mini_tablet_server(kPgTsIndex)) { + leader_ts = cluster_->mini_tablet_server((kPgTsIndex + 1) % cluster_->num_tablet_servers()); + ASSERT_OK(StepDown(leader_peer, leader_ts->server()->permanent_uuid(), ForceStepDown::kTrue)); + } + ASSERT_NE(leader_ts, cluster_->mini_tablet_server(kPgTsIndex)); + leader_ts->Shutdown(); + ASSERT_OK(WaitForTableLeaders( + cluster_.get(), ASSERT_RESULT(GetTableIDFromTableName("transactions")), + 5s * kTimeMultiplier)); + ASSERT_OK(WaitForTableLeaders(cluster_.get(), table_id, 5s * kTimeMultiplier)); + ASSERT_EQ(ASSERT_RESULT(setup_conn.FetchRow(kPgLocksDistTxnsQuery)), 1); +} + } // namespace pgwrapper } // namespace yb From 8474efe990a86d7a2016087a053824f53f0c6a54 Mon Sep 17 00:00:00 2001 From: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> Date: Mon, 12 May 
2025 14:31:39 -0400 Subject: [PATCH 034/146] minor fix to command (#27143) --- .../install-software/installer.md | 8 +++++--- .../install-software/installer.md | 8 +++++--- .../install-software/installer.md | 8 +++++--- .../install-software/installer.md | 8 +++++--- 4 files changed, 20 insertions(+), 12 deletions(-) diff --git a/docs/content/preview/yugabyte-platform/install-yugabyte-platform/install-software/installer.md b/docs/content/preview/yugabyte-platform/install-yugabyte-platform/install-software/installer.md index abdfcc60fada..2dd6aae2716c 100644 --- a/docs/content/preview/yugabyte-platform/install-yugabyte-platform/install-software/installer.md +++ b/docs/content/preview/yugabyte-platform/install-yugabyte-platform/install-software/installer.md @@ -282,7 +282,7 @@ To use the data disk with a new installation, do the following: ### Reconfigure -You can use YBA Installer to reconfigure an installed YBA instance. +You can use YBA Installer to make changes to an installed YBA instance. To reconfigure an installation, edit the configuration file with your changes, and then run the command as follows: @@ -297,8 +297,8 @@ For more information, refer to [Configuration options](#configuration-options). YBA Installer provides basic service management, with `start`, `stop`, and `restart` commands. Each of these can be performed for all the services (`platform`, `postgres`, and `prometheus`), or any individual service. ```sh -sudo yba-ctl [start, stop, reconfigure] -sudo yba-ctl [start, stop, reconfigure] prometheus +sudo yba-ctl [start, stop, restart] +sudo yba-ctl [start, stop, restart] prometheus ``` In addition to the state changing operations, you can use the `status` command to show the status of all YugabyteDB Anywhere services, in addition to other information such as the log and configuration location, versions of each service, and the URL to access the YugabyteDB Anywhere UI. @@ -437,6 +437,8 @@ YBA Installer [automatically generates](#configure-yba-installer) the file when | sudo | opt/yba-ctl/ | | non-sudo | ~/opt/yba-ctl/ | +To make changes to an existing installation, edit the configuration file with your changes and run the [reconfigure](#reconfigure) command. Note that some settings (marked with {{}}) cannot be changed after installation. + Note that the file must include all fields. Optional fields may be left blank. ### Configure YBA Installer diff --git a/docs/content/stable/yugabyte-platform/install-yugabyte-platform/install-software/installer.md b/docs/content/stable/yugabyte-platform/install-yugabyte-platform/install-software/installer.md index 39326f8a9acd..f8bcc0f3ceb9 100644 --- a/docs/content/stable/yugabyte-platform/install-yugabyte-platform/install-software/installer.md +++ b/docs/content/stable/yugabyte-platform/install-yugabyte-platform/install-software/installer.md @@ -279,7 +279,7 @@ To use the data disk with a new installation, do the following: ### Reconfigure -You can use YBA Installer to reconfigure an installed YBA instance. +You can use YBA Installer to make changes to an installed YBA instance. To reconfigure an installation, edit the configuration file with your changes, and then run the command as follows: @@ -294,8 +294,8 @@ For more information, refer to [Configuration options](#configuration-options). YBA Installer provides basic service management, with `start`, `stop`, and `restart` commands. Each of these can be performed for all the services (`platform`, `postgres`, and `prometheus`), or any individual service. 
```sh -sudo yba-ctl [start, stop, reconfigure] -sudo yba-ctl [start, stop, reconfigure] prometheus +sudo yba-ctl [start, stop, restart] +sudo yba-ctl [start, stop, restart] prometheus ``` In addition to the state changing operations, you can use the `status` command to show the status of all YugabyteDB Anywhere services, in addition to other information such as the log and configuration location, versions of each service, and the URL to access the YugabyteDB Anywhere UI. @@ -434,6 +434,8 @@ YBA Installer [automatically generates](#configure-yba-installer) the file when | sudo | opt/yba-ctl/ | | non-sudo | ~/opt/yba-ctl/ | +To make changes to an existing installation, edit the configuration file with your changes and run the [reconfigure](#reconfigure) command. Note that some settings (marked with {{}}) cannot be changed after installation. + Note that the file must include all fields. Optional fields may be left blank. ### Configure YBA Installer diff --git a/docs/content/v2.20/yugabyte-platform/install-yugabyte-platform/install-software/installer.md b/docs/content/v2.20/yugabyte-platform/install-yugabyte-platform/install-software/installer.md index 31179c3005e9..76ea23f41934 100644 --- a/docs/content/v2.20/yugabyte-platform/install-yugabyte-platform/install-software/installer.md +++ b/docs/content/v2.20/yugabyte-platform/install-yugabyte-platform/install-software/installer.md @@ -276,7 +276,7 @@ To use the data disk with a new installation, do the following: ### Reconfigure -You can use YBA Installer to reconfigure an installed YBA instance. +You can use YBA Installer to make changes to an installed YBA instance. To reconfigure an installation, edit the configuration file with your changes, and then run the command as follows: @@ -291,8 +291,8 @@ For more information, refer to [Configuration options](#configuration-options). YBA Installer provides basic service management, with `start`, `stop`, and `restart` commands. Each of these can be performed for all the services (`platform`, `postgres`, and `prometheus`), or any individual service. ```sh -sudo yba-ctl [start, stop, reconfigure] -sudo yba-ctl [start, stop, reconfigure] prometheus +sudo yba-ctl [start, stop, restart] +sudo yba-ctl [start, stop, restart] prometheus ``` In addition to the state changing operations, you can use the `status` command to show the status of all YugabyteDB Anywhere services, in addition to other information such as the log and configuration location, versions of each service, and the URL to access the YugabyteDB Anywhere UI. @@ -433,6 +433,8 @@ YBA Installer [automatically generates](#configure-yba-installer) the file when | sudo | opt/yba-ctl/ | | non-sudo | ~/opt/yba-ctl/ | +To make changes to an existing installation, edit the configuration file with your changes and run the [reconfigure](#reconfigure) command. Note that some settings (marked with {{}}) cannot be changed after installation. + Note that the file must include all fields. Optional fields may be left blank. 
### Configure YBA Installer diff --git a/docs/content/v2024.1/yugabyte-platform/install-yugabyte-platform/install-software/installer.md b/docs/content/v2024.1/yugabyte-platform/install-yugabyte-platform/install-software/installer.md index 3aa51a3bd18c..0e943d88ebde 100644 --- a/docs/content/v2024.1/yugabyte-platform/install-yugabyte-platform/install-software/installer.md +++ b/docs/content/v2024.1/yugabyte-platform/install-yugabyte-platform/install-software/installer.md @@ -275,7 +275,7 @@ To use the data disk with a new installation, do the following: ### Reconfigure -You can use YBA Installer to reconfigure an installed YBA instance. +You can use YBA Installer to make changes to an installed YBA instance. To reconfigure an installation, edit the configuration file with your changes, and then run the command as follows: @@ -290,8 +290,8 @@ For more information, refer to [Configuration options](#configuration-options). YBA Installer provides basic service management, with `start`, `stop`, and `restart` commands. Each of these can be performed for all the services (`platform`, `postgres`, and `prometheus`), or any individual service. ```sh -sudo yba-ctl [start, stop, reconfigure] -sudo yba-ctl [start, stop, reconfigure] prometheus +sudo yba-ctl [start, stop, restart] +sudo yba-ctl [start, stop, restart] prometheus ``` In addition to the state changing operations, you can use the `status` command to show the status of all YugabyteDB Anywhere services, in addition to other information such as the log and configuration location, versions of each service, and the URL to access the YugabyteDB Anywhere UI. @@ -432,6 +432,8 @@ YBA Installer [automatically generates](#configure-yba-installer) the file when | sudo | opt/yba-ctl/ | | non-sudo | ~/opt/yba-ctl/ | +To make changes to an existing installation, edit the configuration file with your changes and run the [reconfigure](#reconfigure) command. Note that some settings (marked with {{}}) cannot be changed after installation. + Note that the file must include all fields. Optional fields may be left blank. ### Configure YBA Installer From db8fd52c5d3ebf0fef2c8ba5b7f2af0f2cbcc4ac Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Mon, 12 May 2025 13:01:28 -0700 Subject: [PATCH 035/146] Update _index.md --- docs/content/stable/api/ysql/ddl-behavior/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/content/stable/api/ysql/ddl-behavior/_index.md b/docs/content/stable/api/ysql/ddl-behavior/_index.md index 6019205de65c..bf5d47a11f07 100644 --- a/docs/content/stable/api/ysql/ddl-behavior/_index.md +++ b/docs/content/stable/api/ysql/ddl-behavior/_index.md @@ -2,7 +2,7 @@ title: Behavior of DDL statements [YSQL] headerTitle: Behavior of DDL statements linkTitle: Behavior of DDL statements -description: Explains specific aspects of DDL statement behavior in YugabyteDB, contrasting it with Postgres behavior. [YSQL]. +description: Explains specific aspects of DDL statement behavior in YugabyteDB. [YSQL]. 
menu: stable_api: identifier: ddl-behavior From 3972a7ad025443018608e97e6786d3e96f2ae974 Mon Sep 17 00:00:00 2001 From: Sudhanshu Prajapati Date: Tue, 13 May 2025 03:19:50 +0530 Subject: [PATCH 036/146] update instructions for contribute docs (#27138) --- docs/content/preview/contribute/docs/macos.md | 4 ++-- docs/content/preview/contribute/docs/ubuntu.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/content/preview/contribute/docs/macos.md b/docs/content/preview/contribute/docs/macos.md index f97885cc1df2..f5322c712404 100644 --- a/docs/content/preview/contribute/docs/macos.md +++ b/docs/content/preview/contribute/docs/macos.md @@ -18,11 +18,11 @@ private=true Recent versions of macOS have only a `python3` executable, as does the Homebrew install. You can use [pyenv](https://github.com/pyenv/pyenv) to manage multiple versions of python on your system. Make sure to point to Python v3.10 or earlier. -* **Hugo**: Install Hugo v0.143.1 Follow these steps: +* **Hugo**: Install Hugo v0.145.0 Follow these steps: * Unpin Hugo to stop its formula from being updated - `brew unpin hugo` * Uninstall any older version if installed - `brew uninstall hugo` - * Download the v0.143.1 formula file from [Homebrew's repository](https://github.com/Homebrew/homebrew-core/blob/8dda2dcd7a7e2cec735942ef69879cfba621f7b8/Formula/h/hugo.rb). + * Download the v0.145.0 formula file from [Homebrew's repository](https://github.com/Homebrew/homebrew-core/blob/f55947be0ab55cfa5274d7232608d87b0e2ebf94/Formula/h/hugo.rb). * Install the downloaded formula - `brew install hugo.rb` * Lastly, prevent automatic updates of Hugo version - `brew pin hugo` diff --git a/docs/content/preview/contribute/docs/ubuntu.md b/docs/content/preview/contribute/docs/ubuntu.md index 75aff168bed7..2312f88863ec 100644 --- a/docs/content/preview/contribute/docs/ubuntu.md +++ b/docs/content/preview/contribute/docs/ubuntu.md @@ -8,7 +8,7 @@ private=true Recent versions of Ubuntu default to `python` and point to Python 3. If not, you can install a new version of Python, and use [pyenv](https://github.com/pyenv/pyenv) to manage multiple Python versions. Ensure you're using Python 3.10 or earlier. -* **Hugo**: Install the `hugo_extended_0.143.1` version from the [official Hugo releases](https://github.com/gohugoio/hugo/releases) for your Linux machine architecture. Make sure the `hugo` binary is available in the shell path. +* **Hugo**: Install the `hugo_extended_0.145.0` version from the [official Hugo releases](https://github.com/gohugoio/hugo/releases) for your Linux machine architecture. Make sure the `hugo` binary is available in the shell path. * **Go**: Install latest Go from the [official Go website](https://golang.org/dl/). 
From 46a531d975145bb3e7ef21ed5f6fba1b1b2d2717 Mon Sep 17 00:00:00 2001 From: Sami Ahmed Siddiqui Date: Tue, 13 May 2025 02:55:58 +0500 Subject: [PATCH 037/146] [Docs] Upgrade Hugo, Docsy, Node and its dependencies (#26970) * Upgrade Hugo, Docsy, Node and its dependencies * Remove js debugging code --------- Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> --- docs/.eslintrc | 91 - docs/config/_default/hugo.toml | 2 +- docs/eslint.config.mjs | 38 + docs/go.mod | 2 + docs/go.sum | 3 + docs/netlify.toml | 6 +- docs/package-lock.json | 3268 +++++++++----------------------- docs/package.json | 35 +- 8 files changed, 980 insertions(+), 2465 deletions(-) delete mode 100644 docs/.eslintrc create mode 100644 docs/eslint.config.mjs diff --git a/docs/.eslintrc b/docs/.eslintrc deleted file mode 100644 index 65d33041f737..000000000000 --- a/docs/.eslintrc +++ /dev/null @@ -1,91 +0,0 @@ -{ - "root": true, - "parser": "@babel/eslint-parser", - "env": { - "browser": true, - "node": true - }, - "plugins": [ - "jsx-a11y", - "import" - ], - "extends": [ - "airbnb", - "eslint:recommended", - "xo-space" - ], - "rules": { - "arrow-parens": 0, - "camelcase": 0, - "comma-dangle": [ - 1, - "always-multiline" - ], - "complexity": [ - "error", - { - "max": 25 - } - ], - "import/no-extraneous-dependencies": 0, - "import/prefer-default-export": 0, - "key-spacing": [ - "error", - { - "multiLine": { - "mode": "minimum" - } - } - ], - "new-cap": [ - "error", - { - "capIsNewExceptions": [ - "DrawerNavigator", - "StackNavigator", - "TabNavigator" - ] - } - ], - "no-cond-assign": [ - 2, - "except-parens" - ], - "no-console": 1, - "no-debugger": 1, - "no-multi-assign": 0, - "no-return-assign": [ - 2, - "except-parens" - ], - "no-unused-vars": 1, - "no-use-before-define": 0, - "no-warning-comments": 0, - "object-curly-spacing": 0, - "one-var": 0, - "one-var-declaration-per-line": 0, - "react/forbid-prop-types": 0, - "react/jsx-closing-bracket-location": 0, - "react/jsx-first-prop-new-line": 0, - "react/prefer-stateless-function": 0, - "react/require-default-props": 0, - "react/jsx-filename-extension": [ - 1, - { - "extensions": [ - ".js", - ".jsx" - ] - } - ] - }, - "settings": { - "import/resolver": { - "node": { - "paths": [ - "./src" - ] - } - } - } -} diff --git a/docs/config/_default/hugo.toml b/docs/config/_default/hugo.toml index e79b2ebf4e07..65230491826c 100644 --- a/docs/config/_default/hugo.toml +++ b/docs/config/_default/hugo.toml @@ -18,7 +18,7 @@ enableGitInfo = true [module] [module.hugoVersion] extended = true - min = "0.104.3" + min = "0.145.0" [[module.imports]] path = "github.com/google/docsy" disable = false diff --git a/docs/eslint.config.mjs b/docs/eslint.config.mjs new file mode 100644 index 000000000000..84ec92ac48de --- /dev/null +++ b/docs/eslint.config.mjs @@ -0,0 +1,38 @@ +import { defineConfig } from "eslint/config"; +import _import from "eslint-plugin-import"; +import stylistic from "@stylistic/eslint-plugin"; +import path from "node:path"; +import { fileURLToPath } from "node:url"; + +export default defineConfig([ + { + languageOptions: { + parserOptions: { + ecmaVersion: "latest", + sourceType: "module", + ecmaFeatures: { + jsx: true, + }, + }, + }, + + plugins: { + import: _import, + stylistic, + }, + + settings: { + "import/resolver": { + node: { + paths: ["./src"], + }, + }, + }, + + rules: { + "no-console": "warn", + "no-debugger": "warn", + "no-unused-vars": "warn", + }, + }, +]); diff --git a/docs/go.mod b/docs/go.mod index cd1fdaac7b57..e3be6e940265 100644 --- 
a/docs/go.mod +++ b/docs/go.mod @@ -3,7 +3,9 @@ module github.com/yugabyte/yugabyte-db/docs go 1.20 require ( + github.com/FortAwesome/Font-Awesome v0.0.0-20240716171331-37eff7fa00de // indirect github.com/google/docsy v0.11.0 // indirect github.com/google/docsy/dependencies v0.7.2 // indirect github.com/trunkcode/hugo-seo v0.2.2 // indirect + github.com/twbs/bootstrap v5.3.5+incompatible // indirect ) diff --git a/docs/go.sum b/docs/go.sum index bf0dd7236abd..b16713a2f9f9 100644 --- a/docs/go.sum +++ b/docs/go.sum @@ -1,5 +1,6 @@ github.com/FortAwesome/Font-Awesome v0.0.0-20230327165841-0698449d50f2/go.mod h1:IUgezN/MFpCDIlFezw3L8j83oeiIuYoj28Miwr/KUYo= github.com/FortAwesome/Font-Awesome v0.0.0-20240402185447-c0f460dca7f7/go.mod h1:IUgezN/MFpCDIlFezw3L8j83oeiIuYoj28Miwr/KUYo= +github.com/FortAwesome/Font-Awesome v0.0.0-20240716171331-37eff7fa00de h1:JvHOfdSqvArF+7cffH9oWU8oLhn6YFYI60Pms8M/6tI= github.com/FortAwesome/Font-Awesome v0.0.0-20240716171331-37eff7fa00de/go.mod h1:IUgezN/MFpCDIlFezw3L8j83oeiIuYoj28Miwr/KUYo= github.com/google/docsy v0.10.0 h1:6tMDacPwAyRWNCfvsn/9qGOZDQ8b0aRzjRZvnZPY5dg= github.com/google/docsy v0.10.0/go.mod h1:c0nIAqmRTOuJ01F85U/wJPQtc3Zj9N58Kea9bOT2AJc= @@ -11,3 +12,5 @@ github.com/trunkcode/hugo-seo v0.2.2 h1:ywfWzmde21QktGKxs5hfdbMXhAEY0cANP/oaXKDg github.com/trunkcode/hugo-seo v0.2.2/go.mod h1:L66E4t0yxaJE8YyS97iCHYwDYkapDuIBaGwCpvC5bWM= github.com/twbs/bootstrap v5.2.3+incompatible/go.mod h1:fZTSrkpSf0/HkL0IIJzvVspTt1r9zuf7XlZau8kpcY0= github.com/twbs/bootstrap v5.3.3+incompatible/go.mod h1:fZTSrkpSf0/HkL0IIJzvVspTt1r9zuf7XlZau8kpcY0= +github.com/twbs/bootstrap v5.3.5+incompatible h1:6XrrFNMsiTTFcVTBf2886FO2XUNtwSE+QPv1os0uAA4= +github.com/twbs/bootstrap v5.3.5+incompatible/go.mod h1:fZTSrkpSf0/HkL0IIJzvVspTt1r9zuf7XlZau8kpcY0= diff --git a/docs/netlify.toml b/docs/netlify.toml index ef5a60c73689..a92c4fd562fc 100644 --- a/docs/netlify.toml +++ b/docs/netlify.toml @@ -12,21 +12,21 @@ # all api_keys were generated from btoa(orig_key) [context.deploy-preview.environment] GO_VERSION = "1.20" - HUGO_VERSION = "0.143.1" + HUGO_VERSION = "0.145.0" NODE_VERSION = "22" CXXFLAGS = "-std=c++17" RUDDERSTACK_API_KEY = "Mmo5Zmp5M1lONWFLM2xYS044N3k5cGhic3R4" [context.branch-deploy.environment] GO_VERSION = "1.20" - HUGO_VERSION = "0.143.1" + HUGO_VERSION = "0.145.0" NODE_VERSION = "22" CXXFLAGS = "-std=c++17" RUDDERSTACK_API_KEY = "Mmo5Zmp5M1lONWFLM2xYS044N3k5cGhic3R4" [context.production.environment] GO_VERSION = "1.20" - HUGO_VERSION = "0.143.1" + HUGO_VERSION = "0.145.0" NODE_VERSION = "22" CXXFLAGS = "-std=c++17" RUDDERSTACK_API_KEY = "Mmo5Y252MjZ6R290ZjhPa3BROUx4cFl6VFVK" diff --git a/docs/package-lock.json b/docs/package-lock.json index ed30db353963..a3d5dd5c86a9 100644 --- a/docs/package-lock.json +++ b/docs/package-lock.json @@ -9,36 +9,31 @@ "version": "1.3.0", "license": "Apache License 2.0", "dependencies": { - "@babel/core": "7.26.8", - "@babel/eslint-parser": "7.26.8", + "@babel/core": "7.26.10", "@babel/plugin-proposal-decorators": "7.25.9", - "@babel/preset-env": "7.26.8", + "@babel/preset-env": "7.26.9", "@fortawesome/fontawesome-pro": "6.7.2", + "@stylistic/eslint-plugin": "4.2.0", "algoliasearch": "4.23.3", - "babel-loader": "9.2.1", + "babel-loader": "10.0.0", "clipboard": "2.0.11", "detect-external-link": "2.0.1", - "eslint": "8.56.0", - "eslint-config-airbnb": "19.0.4", - "eslint-config-xo-space": "0.35.0", + "eslint": "9.25.1", "eslint-plugin-import": "2.31.0", - "eslint-plugin-jsx-a11y": "6.10.2", - "eslint-webpack-plugin": "4.2.0", - 
"hugo-algolia": "1.2.14", - "npm-run-all": "4.1.5", - "react-dev-tools": "0.0.1", - "react-dev-utils": "12.0.1", - "run-p": "0.0.0", - "sass": "1.84.0", - "webpack": "5.97.1", + "eslint-webpack-plugin": "5.0.1", + "globals": "16.0.0", + "webpack": "5.99.7", "webpack-cli": "6.0.1", - "webpack-dev-server": "5.2.0", + "webpack-dev-server": "5.2.1", "yb-rrdiagram": "0.0.7" }, "devDependencies": { - "autoprefixer": "10.4.20", - "postcss": "8.5.1", - "postcss-cli": "11.0.0" + "autoprefixer": "10.4.21", + "npm-run-all": "4.1.5", + "postcss": "8.5.3", + "postcss-cli": "11.0.1", + "run-p": "0.0.0", + "sass": "1.87.0" } }, "node_modules/@algolia/cache-browser-local-storage": { @@ -226,22 +221,21 @@ } }, "node_modules/@babel/core": { - "version": "7.26.8", - "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.26.8.tgz", - "integrity": "sha512-l+lkXCHS6tQEc5oUpK28xBOZ6+HwaH7YwoYQbLFiYb4nS2/l1tKnZEtEWkD0GuiYdvArf9qBS0XlQGXzPMsNqQ==", + "version": "7.26.10", + "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.26.10.tgz", + "integrity": "sha512-vMqyb7XCDMPvJFFOaT9kxtiRh42GwlZEg1/uIgtZshS5a/8OaduUfCi7kynKgc3Tw/6Uo2D+db9qBttghhmxwQ==", "license": "MIT", "dependencies": { "@ampproject/remapping": "^2.2.0", "@babel/code-frame": "^7.26.2", - "@babel/generator": "^7.26.8", + "@babel/generator": "^7.26.10", "@babel/helper-compilation-targets": "^7.26.5", "@babel/helper-module-transforms": "^7.26.0", - "@babel/helpers": "^7.26.7", - "@babel/parser": "^7.26.8", - "@babel/template": "^7.26.8", - "@babel/traverse": "^7.26.8", - "@babel/types": "^7.26.8", - "@types/gensync": "^1.0.0", + "@babel/helpers": "^7.26.10", + "@babel/parser": "^7.26.10", + "@babel/template": "^7.26.9", + "@babel/traverse": "^7.26.10", + "@babel/types": "^7.26.10", "convert-source-map": "^2.0.0", "debug": "^4.1.0", "gensync": "^1.0.0-beta.2", @@ -256,32 +250,14 @@ "url": "https://opencollective.com/babel" } }, - "node_modules/@babel/eslint-parser": { - "version": "7.26.8", - "resolved": "https://registry.npmjs.org/@babel/eslint-parser/-/eslint-parser-7.26.8.tgz", - "integrity": "sha512-3tBctaHRW6xSub26z7n8uyOTwwUsCdvIug/oxBH9n6yCO5hMj2vwDJAo7RbBMKrM7P+W2j61zLKviJQFGOYKMg==", - "license": "MIT", - "dependencies": { - "@nicolo-ribaudo/eslint-scope-5-internals": "5.1.1-v1", - "eslint-visitor-keys": "^2.1.0", - "semver": "^6.3.1" - }, - "engines": { - "node": "^10.13.0 || ^12.13.0 || >=14.0.0" - }, - "peerDependencies": { - "@babel/core": "^7.11.0", - "eslint": "^7.5.0 || ^8.0.0 || ^9.0.0" - } - }, "node_modules/@babel/generator": { - "version": "7.26.8", - "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.26.8.tgz", - "integrity": "sha512-ef383X5++iZHWAXX0SXQR6ZyQhw/0KtTkrTz61WXRhFM6dhpHulO/RJz79L8S6ugZHJkOOkUrUdxgdF2YiPFnA==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.27.0.tgz", + "integrity": "sha512-VybsKvpiN1gU1sdMZIp7FcqphVVKEwcuj02x73uvcHE0PTihx1nlBcowYWhDwjpoAXRv43+gDzyggGnn1XZhVw==", "license": "MIT", "dependencies": { - "@babel/parser": "^7.26.8", - "@babel/types": "^7.26.8", + "@babel/parser": "^7.27.0", + "@babel/types": "^7.27.0", "@jridgewell/gen-mapping": "^0.3.5", "@jridgewell/trace-mapping": "^0.3.25", "jsesc": "^3.0.2" @@ -303,12 +279,12 @@ } }, "node_modules/@babel/helper-compilation-targets": { - "version": "7.26.5", - "resolved": "https://registry.npmjs.org/@babel/helper-compilation-targets/-/helper-compilation-targets-7.26.5.tgz", - "integrity": 
"sha512-IXuyn5EkouFJscIDuFF5EsiSolseme1s0CZB+QxVugqJLYmKdxI1VfIBOst0SUu4rnk2Z7kqTwmoO1lp3HIfnA==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/helper-compilation-targets/-/helper-compilation-targets-7.27.0.tgz", + "integrity": "sha512-LVk7fbXml0H2xH34dFzKQ7TDZ2G4/rVTOrq9V+icbbadjbVxxeFeDsNHv2SrZeWoA+6ZiTyWYWtScEIW07EAcA==", "license": "MIT", "dependencies": { - "@babel/compat-data": "^7.26.5", + "@babel/compat-data": "^7.26.8", "@babel/helper-validator-option": "^7.25.9", "browserslist": "^4.24.0", "lru-cache": "^5.1.1", @@ -319,17 +295,17 @@ } }, "node_modules/@babel/helper-create-class-features-plugin": { - "version": "7.25.9", - "resolved": "https://registry.npmjs.org/@babel/helper-create-class-features-plugin/-/helper-create-class-features-plugin-7.25.9.tgz", - "integrity": "sha512-UTZQMvt0d/rSz6KI+qdu7GQze5TIajwTS++GUozlw8VBJDEOAqSXwm1WvmYEZwqdqSGQshRocPDqrt4HBZB3fQ==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/helper-create-class-features-plugin/-/helper-create-class-features-plugin-7.27.0.tgz", + "integrity": "sha512-vSGCvMecvFCd/BdpGlhpXYNhhC4ccxyvQWpbGL4CWbvfEoLFWUZuSuf7s9Aw70flgQF+6vptvgK2IfOnKlRmBg==", "license": "MIT", "dependencies": { "@babel/helper-annotate-as-pure": "^7.25.9", "@babel/helper-member-expression-to-functions": "^7.25.9", "@babel/helper-optimise-call-expression": "^7.25.9", - "@babel/helper-replace-supers": "^7.25.9", + "@babel/helper-replace-supers": "^7.26.5", "@babel/helper-skip-transparent-expression-wrappers": "^7.25.9", - "@babel/traverse": "^7.25.9", + "@babel/traverse": "^7.27.0", "semver": "^6.3.1" }, "engines": { @@ -340,9 +316,9 @@ } }, "node_modules/@babel/helper-create-regexp-features-plugin": { - "version": "7.26.3", - "resolved": "https://registry.npmjs.org/@babel/helper-create-regexp-features-plugin/-/helper-create-regexp-features-plugin-7.26.3.tgz", - "integrity": "sha512-G7ZRb40uUgdKOQqPLjfD12ZmGA54PzqDFUv2BKImnC9QIfGhIHKvVML0oN8IUiDq4iRqpq74ABpvOaerfWdong==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/helper-create-regexp-features-plugin/-/helper-create-regexp-features-plugin-7.27.0.tgz", + "integrity": "sha512-fO8l08T76v48BhpNRW/nQ0MxfnSdoSKUJBMjubOAYffsVuGG5qOfMq7N6Es7UJvi7Y8goXXo07EfcHZXDPuELQ==", "license": "MIT", "dependencies": { "@babel/helper-annotate-as-pure": "^7.25.9", @@ -357,9 +333,9 @@ } }, "node_modules/@babel/helper-define-polyfill-provider": { - "version": "0.6.3", - "resolved": "https://registry.npmjs.org/@babel/helper-define-polyfill-provider/-/helper-define-polyfill-provider-0.6.3.tgz", - "integrity": "sha512-HK7Bi+Hj6H+VTHA3ZvBis7V/6hu9QuTrnMXNybfUf2iiuU/N97I8VjB+KbhFF8Rld/Lx5MzoCwPCpPjfK+n8Cg==", + "version": "0.6.4", + "resolved": "https://registry.npmjs.org/@babel/helper-define-polyfill-provider/-/helper-define-polyfill-provider-0.6.4.tgz", + "integrity": "sha512-jljfR1rGnXXNWnmQg2K3+bvhkxB51Rl32QRaOTuwwjviGrHzIbSc8+x9CpraDtbT7mfyjXObULP4w/adunNwAw==", "license": "MIT", "dependencies": { "@babel/helper-compilation-targets": "^7.22.6", @@ -525,25 +501,25 @@ } }, "node_modules/@babel/helpers": { - "version": "7.26.7", - "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.26.7.tgz", - "integrity": "sha512-8NHiL98vsi0mbPQmYAGWwfcFaOy4j2HY49fXJCfuDcdE7fMIsH9a7GdaeXpIBsbT7307WU8KCMp5pUVDNL4f9A==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.27.0.tgz", + "integrity": "sha512-U5eyP/CTFPuNE3qk+WZMxFkp/4zUzdceQlfzf7DdGdhp+Fezd7HD+i8Y24ZuTMKX3wQBld449jijbGq6OdGNQg==", "license": 
"MIT", "dependencies": { - "@babel/template": "^7.25.9", - "@babel/types": "^7.26.7" + "@babel/template": "^7.27.0", + "@babel/types": "^7.27.0" }, "engines": { "node": ">=6.9.0" } }, "node_modules/@babel/parser": { - "version": "7.26.8", - "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.26.8.tgz", - "integrity": "sha512-TZIQ25pkSoaKEYYaHbbxkfL36GNsQ6iFiBbeuzAkLnXayKR1yP1zFe+NxuZWWsUyvt8icPU9CCq0sgWGXR1GEw==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.27.0.tgz", + "integrity": "sha512-iaepho73/2Pz7w2eMS0Q5f83+0RKI7i4xmiYeBmDzfRVbQtTOG7Ts0S4HzJVsTMGI9keU8rNfuZr8DKfSt7Yyg==", "license": "MIT", "dependencies": { - "@babel/types": "^7.26.8" + "@babel/types": "^7.27.0" }, "bin": { "parser": "bin/babel-parser.js" @@ -786,12 +762,12 @@ } }, "node_modules/@babel/plugin-transform-block-scoping": { - "version": "7.25.9", - "resolved": "https://registry.npmjs.org/@babel/plugin-transform-block-scoping/-/plugin-transform-block-scoping-7.25.9.tgz", - "integrity": "sha512-1F05O7AYjymAtqbsFETboN1NvBdcnzMerO+zlMyJBEz6WkMdejvGWw9p05iTSjC85RLlBseHHQpYaM4gzJkBGg==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-block-scoping/-/plugin-transform-block-scoping-7.27.0.tgz", + "integrity": "sha512-u1jGphZ8uDI2Pj/HJj6YQ6XQLZCNjOlprjxB5SVz6rq2T6SwAR+CdrWK0CP7F+9rDVMXdB0+r6Am5G5aobOjAQ==", "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.25.9" + "@babel/helper-plugin-utils": "^7.26.5" }, "engines": { "node": ">=6.9.0" @@ -852,6 +828,15 @@ "@babel/core": "^7.0.0-0" } }, + "node_modules/@babel/plugin-transform-classes/node_modules/globals": { + "version": "11.12.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-11.12.0.tgz", + "integrity": "sha512-WOBp/EEGUiIsJSp7wcv/y6MO+lV9UoncWqxuFfm8eBwzWNgyfBd6Gz+IeKQ9jCmyhoH99g15M3T+QaVHFjizVA==", + "license": "MIT", + "engines": { + "node": ">=4" + } + }, "node_modules/@babel/plugin-transform-computed-properties": { "version": "7.25.9", "resolved": "https://registry.npmjs.org/@babel/plugin-transform-computed-properties/-/plugin-transform-computed-properties-7.25.9.tgz", @@ -976,12 +961,12 @@ } }, "node_modules/@babel/plugin-transform-for-of": { - "version": "7.25.9", - "resolved": "https://registry.npmjs.org/@babel/plugin-transform-for-of/-/plugin-transform-for-of-7.25.9.tgz", - "integrity": "sha512-LqHxduHoaGELJl2uhImHwRQudhCM50pT46rIBNvtT/Oql3nqiS3wOwP+5ten7NpYSXrrVLgtZU3DZmPtWZo16A==", + "version": "7.26.9", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-for-of/-/plugin-transform-for-of-7.26.9.tgz", + "integrity": "sha512-Hry8AusVm8LW5BVFgiyUReuoGzPUpdHQQqJY5bZnbbf+ngOHWuCuYFKw/BqaaWlvEUrF91HMhDtEaI1hZzNbLg==", "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.25.9", + "@babel/helper-plugin-utils": "^7.26.5", "@babel/helper-skip-transparent-expression-wrappers": "^7.25.9" }, "engines": { @@ -1323,12 +1308,12 @@ } }, "node_modules/@babel/plugin-transform-regenerator": { - "version": "7.25.9", - "resolved": "https://registry.npmjs.org/@babel/plugin-transform-regenerator/-/plugin-transform-regenerator-7.25.9.tgz", - "integrity": "sha512-vwDcDNsgMPDGP0nMqzahDWE5/MLcX8sv96+wfX7as7LoF/kr97Bo/7fI00lXY4wUXYfVmwIIyG80fGZ1uvt2qg==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-regenerator/-/plugin-transform-regenerator-7.27.0.tgz", + "integrity": "sha512-LX/vCajUJQDqE7Aum/ELUMZAY19+cDpghxrnyt5I1tV6X5PyC86AOoWXWFYFeIvauyeSA6/ktn4tQVn/3ZifsA==", "license": 
"MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.25.9", + "@babel/helper-plugin-utils": "^7.26.5", "regenerator-transform": "^0.15.2" }, "engines": { @@ -1431,9 +1416,9 @@ } }, "node_modules/@babel/plugin-transform-typeof-symbol": { - "version": "7.26.7", - "resolved": "https://registry.npmjs.org/@babel/plugin-transform-typeof-symbol/-/plugin-transform-typeof-symbol-7.26.7.tgz", - "integrity": "sha512-jfoTXXZTgGg36BmhqT3cAYK5qkmqvJpvNrPhaK/52Vgjhw4Rq29s9UqpWWV0D6yuRmgiFH/BUVlkl96zJWqnaw==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-typeof-symbol/-/plugin-transform-typeof-symbol-7.27.0.tgz", + "integrity": "sha512-+LLkxA9rKJpNoGsbLnAgOCdESl73vwYn+V6b+5wHbrE7OGKVDPHIQvbFSzqE6rwqaCw2RE+zdJrlLkcf8YOA0w==", "license": "MIT", "dependencies": { "@babel/helper-plugin-utils": "^7.26.5" @@ -1509,9 +1494,9 @@ } }, "node_modules/@babel/preset-env": { - "version": "7.26.8", - "resolved": "https://registry.npmjs.org/@babel/preset-env/-/preset-env-7.26.8.tgz", - "integrity": "sha512-um7Sy+2THd697S4zJEfv/U5MHGJzkN2xhtsR3T/SWRbVSic62nbISh51VVfU9JiO/L/Z97QczHTaFVkOU8IzNg==", + "version": "7.26.9", + "resolved": "https://registry.npmjs.org/@babel/preset-env/-/preset-env-7.26.9.tgz", + "integrity": "sha512-vX3qPGE8sEKEAZCWk05k3cpTAE3/nOYca++JA+Rd0z2NCNzabmYvEiSShKzm10zdquOIAVXsy2Ei/DTW34KlKQ==", "license": "MIT", "dependencies": { "@babel/compat-data": "^7.26.8", @@ -1543,7 +1528,7 @@ "@babel/plugin-transform-dynamic-import": "^7.25.9", "@babel/plugin-transform-exponentiation-operator": "^7.26.3", "@babel/plugin-transform-export-namespace-from": "^7.25.9", - "@babel/plugin-transform-for-of": "^7.25.9", + "@babel/plugin-transform-for-of": "^7.26.9", "@babel/plugin-transform-function-name": "^7.25.9", "@babel/plugin-transform-json-strings": "^7.25.9", "@babel/plugin-transform-literals": "^7.25.9", @@ -1606,9 +1591,9 @@ } }, "node_modules/@babel/runtime": { - "version": "7.26.7", - "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.26.7.tgz", - "integrity": "sha512-AOPI3D+a8dXnja+iwsUqGRjr1BbZIe771sXdapOtYI531gSqpi92vXivKcq2asu/DFpdl1ceFAKZyRzK2PCVcQ==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.27.0.tgz", + "integrity": "sha512-VtPOkrdPHZsKc/clNqyi9WUA8TINkZ4cGk63UUE3u4pmB2k+ZMQRDuIOagv8UVd6j7k0T3+RRIb7beKTebNbcw==", "license": "MIT", "dependencies": { "regenerator-runtime": "^0.14.0" @@ -1618,30 +1603,30 @@ } }, "node_modules/@babel/template": { - "version": "7.26.8", - "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.26.8.tgz", - "integrity": "sha512-iNKaX3ZebKIsCvJ+0jd6embf+Aulaa3vNBqZ41kM7iTWjx5qzWKXGHiJUW3+nTpQ18SG11hdF8OAzKrpXkb96Q==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.27.0.tgz", + "integrity": "sha512-2ncevenBqXI6qRMukPlXwHKHchC7RyMuu4xv5JBXRfOGVcTy1mXCD12qrp7Jsoxll1EV3+9sE4GugBVRjT2jFA==", "license": "MIT", "dependencies": { "@babel/code-frame": "^7.26.2", - "@babel/parser": "^7.26.8", - "@babel/types": "^7.26.8" + "@babel/parser": "^7.27.0", + "@babel/types": "^7.27.0" }, "engines": { "node": ">=6.9.0" } }, "node_modules/@babel/traverse": { - "version": "7.26.8", - "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.26.8.tgz", - "integrity": "sha512-nic9tRkjYH0oB2dzr/JoGIm+4Q6SuYeLEiIiZDwBscRMYFJ+tMAz98fuel9ZnbXViA2I0HVSSRRK8DW5fjXStA==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.27.0.tgz", + "integrity": 
"sha512-19lYZFzYVQkkHkl4Cy4WrAVcqBkgvV2YM2TU3xG6DIwO7O3ecbDPfW3yM3bjAGcqcQHi+CCtjMR3dIEHxsd6bA==", "license": "MIT", "dependencies": { "@babel/code-frame": "^7.26.2", - "@babel/generator": "^7.26.8", - "@babel/parser": "^7.26.8", - "@babel/template": "^7.26.8", - "@babel/types": "^7.26.8", + "@babel/generator": "^7.27.0", + "@babel/parser": "^7.27.0", + "@babel/template": "^7.27.0", + "@babel/types": "^7.27.0", "debug": "^4.3.1", "globals": "^11.1.0" }, @@ -1649,10 +1634,19 @@ "node": ">=6.9.0" } }, + "node_modules/@babel/traverse/node_modules/globals": { + "version": "11.12.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-11.12.0.tgz", + "integrity": "sha512-WOBp/EEGUiIsJSp7wcv/y6MO+lV9UoncWqxuFfm8eBwzWNgyfBd6Gz+IeKQ9jCmyhoH99g15M3T+QaVHFjizVA==", + "license": "MIT", + "engines": { + "node": ">=4" + } + }, "node_modules/@babel/types": { - "version": "7.26.8", - "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.26.8.tgz", - "integrity": "sha512-eUuWapzEGWFEpHFxgEaBG8e3n6S8L3MSu0oda755rOfabWPnh0Our1AozNFVUxGFIhbKgd1ksprsoDGMinTOTA==", + "version": "7.27.0", + "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.27.0.tgz", + "integrity": "sha512-H45s8fVLYjbhFH62dIJ3WtmJ6RSPt/3DRO0ZcT2SUiYiQyz3BLVb9ADEnLl91m74aQPS3AzzeajZHYOalWe3bg==", "license": "MIT", "dependencies": { "@babel/helper-string-parser": "^7.25.9", @@ -1672,9 +1666,9 @@ } }, "node_modules/@eslint-community/eslint-utils": { - "version": "4.4.1", - "resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.4.1.tgz", - "integrity": "sha512-s3O3waFUrMV8P/XaF/+ZTp1X9XBZW1a4B97ZnjQF2KYWaFD2A8KyFBsrsfSjEmjn3RGWAIuvlneuZm3CUK3jbA==", + "version": "4.6.1", + "resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.6.1.tgz", + "integrity": "sha512-KTsJMmobmbrFLe3LDh0PC2FXpcSYJt/MLjlkh/9LEnmKYLSYmT/0EW9JWANjeoemiuZrmogti0tW5Ch+qNUYDw==", "license": "MIT", "dependencies": { "eslint-visitor-keys": "^3.4.3" @@ -1710,16 +1704,51 @@ "node": "^12.0.0 || ^14.0.0 || >=16.0.0" } }, + "node_modules/@eslint/config-array": { + "version": "0.20.0", + "resolved": "https://registry.npmjs.org/@eslint/config-array/-/config-array-0.20.0.tgz", + "integrity": "sha512-fxlS1kkIjx8+vy2SjuCB94q3htSNrufYTXubwiBFeaQHbH6Ipi43gFJq2zCMt6PHhImH3Xmr0NksKDvchWlpQQ==", + "license": "Apache-2.0", + "dependencies": { + "@eslint/object-schema": "^2.1.6", + "debug": "^4.3.1", + "minimatch": "^3.1.2" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/config-helpers": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/@eslint/config-helpers/-/config-helpers-0.2.1.tgz", + "integrity": "sha512-RI17tsD2frtDu/3dmI7QRrD4bedNKPM08ziRYaC5AhkGrzIAJelm9kJU1TznK+apx6V+cqRz8tfpEeG3oIyjxw==", + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/core": { + "version": "0.13.0", + "resolved": "https://registry.npmjs.org/@eslint/core/-/core-0.13.0.tgz", + "integrity": "sha512-yfkgDw1KR66rkT5A8ci4irzDysN7FRpq3ttJolR88OqQikAWqwA8j5VZyas+vjyBNFIJ7MfybJ9plMILI2UrCw==", + "license": "Apache-2.0", + "dependencies": { + "@types/json-schema": "^7.0.15" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, "node_modules/@eslint/eslintrc": { - "version": "2.1.4", - "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-2.1.4.tgz", - "integrity": "sha512-269Z39MS6wVJtsoUl10L60WdkhJVdPG24Q4eZTH3nnF6lpvSShEK3wQjDX9JRWAUPvPh7COouPpU9IrqaZFvtQ==", + 
"version": "3.3.1", + "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-3.3.1.tgz", + "integrity": "sha512-gtF186CXhIl1p4pJNGZw8Yc6RlshoePRvE0X91oPGb3vZ8pM3qOS9W9NGPat9LziaBV7XrJWGylNQXkGcnM3IQ==", "license": "MIT", "dependencies": { "ajv": "^6.12.4", "debug": "^4.3.2", - "espree": "^9.6.0", - "globals": "^13.19.0", + "espree": "^10.0.1", + "globals": "^14.0.0", "ignore": "^5.2.0", "import-fresh": "^3.2.1", "js-yaml": "^4.1.0", @@ -1727,34 +1756,53 @@ "strip-json-comments": "^3.1.1" }, "engines": { - "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" }, "funding": { "url": "https://opencollective.com/eslint" } }, "node_modules/@eslint/eslintrc/node_modules/globals": { - "version": "13.24.0", - "resolved": "https://registry.npmjs.org/globals/-/globals-13.24.0.tgz", - "integrity": "sha512-AhO5QUcj8llrbG09iWhPU2B204J1xnPeL8kQmVorSsy+Sjj1sk8gIyh6cUocGmH4L0UuhAJy+hJMRA4mgA4mFQ==", + "version": "14.0.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-14.0.0.tgz", + "integrity": "sha512-oahGvuMGQlPw/ivIYBjVSrWAfWLBeku5tpPE2fOPLi+WHffIWbuh2tCjhyQhTBPMf5E9jDEH4FOmTYgYwbKwtQ==", "license": "MIT", - "dependencies": { - "type-fest": "^0.20.2" - }, "engines": { - "node": ">=8" + "node": ">=18" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/@eslint/js": { - "version": "8.56.0", - "resolved": "https://registry.npmjs.org/@eslint/js/-/js-8.56.0.tgz", - "integrity": "sha512-gMsVel9D7f2HLkBma9VbtzZRehRogVRfbr++f06nL2vnCGCNlzOD+/MUov/F4p8myyAHspEhVobgjpX64q5m6A==", + "version": "9.25.1", + "resolved": "https://registry.npmjs.org/@eslint/js/-/js-9.25.1.tgz", + "integrity": "sha512-dEIwmjntEx8u3Uvv+kr3PDeeArL8Hw07H9kyYxCjnM9pBjfEhk6uLXSchxxzgiwtRhhzVzqmUSDFBOi1TuZ7qg==", "license": "MIT", "engines": { - "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/object-schema": { + "version": "2.1.6", + "resolved": "https://registry.npmjs.org/@eslint/object-schema/-/object-schema-2.1.6.tgz", + "integrity": "sha512-RBMg5FRL0I0gs51M/guSAj5/e14VQ4tpZnQNWwuDT66P14I43ItmPfIZRhO9fUVIPOAQXU47atlywZ/czoqFPA==", + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/plugin-kit": { + "version": "0.2.8", + "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.2.8.tgz", + "integrity": "sha512-ZAoA40rNMPwSm+AeHpCq8STiNAwzWLJuP8Xv4CHIc9wv/PSuExjMrmjfYNj682vW0OOiZ1HKxzvjQr9XZIisQA==", + "license": "Apache-2.0", + "dependencies": { + "@eslint/core": "^0.13.0", + "levn": "^0.4.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" } }, "node_modules/@fortawesome/fontawesome-pro": { @@ -1766,19 +1814,39 @@ "node": ">=6" } }, - "node_modules/@humanwhocodes/config-array": { - "version": "0.11.14", - "resolved": "https://registry.npmjs.org/@humanwhocodes/config-array/-/config-array-0.11.14.tgz", - "integrity": "sha512-3T8LkOmg45BV5FICb15QQMsyUSWrQ8AygVfC7ZG32zOalnqrilm018ZVCw0eapXux8FtA33q8PSRSstjee3jSg==", - "deprecated": "Use @eslint/config-array instead", + "node_modules/@humanfs/core": { + "version": "0.19.1", + "resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz", + "integrity": "sha512-5DyQ4+1JEUzejeK1JGICcideyfUbGixgS9jNgex5nqkW+cY7WZhxBigmieN5Qnw9ZosSNVC9KQKyb+GUaGyKUA==", + "license": "Apache-2.0", + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanfs/node": { + "version": "0.16.6", + "resolved": 
"https://registry.npmjs.org/@humanfs/node/-/node-0.16.6.tgz", + "integrity": "sha512-YuI2ZHQL78Q5HbhDiBA1X4LmYdXCKCMQIfw0pw7piHJwyREFebJUvrQN4cMssyES6x+vfUbx1CIpaQUKYdQZOw==", "license": "Apache-2.0", "dependencies": { - "@humanwhocodes/object-schema": "^2.0.2", - "debug": "^4.3.1", - "minimatch": "^3.0.5" + "@humanfs/core": "^0.19.1", + "@humanwhocodes/retry": "^0.3.0" }, "engines": { - "node": ">=10.10.0" + "node": ">=18.18.0" + } + }, + "node_modules/@humanfs/node/node_modules/@humanwhocodes/retry": { + "version": "0.3.1", + "resolved": "https://registry.npmjs.org/@humanwhocodes/retry/-/retry-0.3.1.tgz", + "integrity": "sha512-JBxkERygn7Bv/GbN5Rv8Ul6LVknS+5Bp6RgDC/O8gEBU/yeH5Ui5C/OlWrTb6qct7LjjfT6Re2NxB0ln0yYybA==", + "license": "Apache-2.0", + "engines": { + "node": ">=18.18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" } }, "node_modules/@humanwhocodes/module-importer": { @@ -1794,12 +1862,18 @@ "url": "https://github.com/sponsors/nzakas" } }, - "node_modules/@humanwhocodes/object-schema": { - "version": "2.0.3", - "resolved": "https://registry.npmjs.org/@humanwhocodes/object-schema/-/object-schema-2.0.3.tgz", - "integrity": "sha512-93zYdMES/c1D69yZiKDBj0V24vqNzB/koF26KPaagAfd3P/4gUlh3Dys5ogAK+Exi9QyzlD8x/08Zt7wIKcDcA==", - "deprecated": "Use @eslint/object-schema instead", - "license": "BSD-3-Clause" + "node_modules/@humanwhocodes/retry": { + "version": "0.4.2", + "resolved": "https://registry.npmjs.org/@humanwhocodes/retry/-/retry-0.4.2.tgz", + "integrity": "sha512-xeO57FpIu4p1Ri3Jq/EXq4ClRm86dVF2z/+kvFnyqVYRavTZmaFaUBbWCOuuTh0o/g7DSsk6kc2vrS4Vl5oPOQ==", + "license": "Apache-2.0", + "engines": { + "node": ">=18.18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" + } }, "node_modules/@jest/schemas": { "version": "29.6.3", @@ -1905,9 +1979,9 @@ } }, "node_modules/@jsonjoy.com/json-pack": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/@jsonjoy.com/json-pack/-/json-pack-1.1.1.tgz", - "integrity": "sha512-osjeBqMJ2lb/j/M8NCPjs1ylqWIcTRTycIhVB5pt6LgzgeRSb0YRZ7j9RfA8wIUrsr/medIuhVyonXRZWLyfdw==", + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/@jsonjoy.com/json-pack/-/json-pack-1.2.0.tgz", + "integrity": "sha512-io1zEbbYcElht3tdlqEOFxZ0dMTYrHz9iMf0gqn1pPjZFTCgM5R4R5IMA20Chb2UPYYsxjzs8CgZ7Nb5n2K2rA==", "license": "Apache-2.0", "dependencies": { "@jsonjoy.com/base64": "^1.1.1", @@ -1948,15 +2022,6 @@ "integrity": "sha512-Vo+PSpZG2/fmgmiNzYK9qWRh8h/CHrwD0mo1h1DzL4yzHNSfWYujGTYsWGreD000gcgmZ7K4Ys6Tx9TxtsKdDw==", "license": "MIT" }, - "node_modules/@nicolo-ribaudo/eslint-scope-5-internals": { - "version": "5.1.1-v1", - "resolved": "https://registry.npmjs.org/@nicolo-ribaudo/eslint-scope-5-internals/-/eslint-scope-5-internals-5.1.1-v1.tgz", - "integrity": "sha512-54/JRvkLIzzDWshCWfuhadfrfZVPiElY8Fcgmg1HroEly/EDSszzhBAsarCux+D/kOslTRquNzuyGSmUSTTHGg==", - "license": "MIT", - "dependencies": { - "eslint-scope": "5.1.1" - } - }, "node_modules/@nodelib/fs.scandir": { "version": "2.1.5", "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz", @@ -1996,6 +2061,7 @@ "version": "2.5.1", "resolved": "https://registry.npmjs.org/@parcel/watcher/-/watcher-2.5.1.tgz", "integrity": "sha512-dfUnCxiN9H4ap84DvD2ubjw+3vUNpstxa0TneY/Paat8a3R4uQZDLSvWjmznAY/DoahqTHl9V46HF/Zs3F29pg==", + "dev": true, "hasInstallScript": true, "license": "MIT", "optional": true, @@ -2035,6 +2101,7 @@ "cpu": [ "arm64" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2055,6 
+2122,7 @@ "cpu": [ "arm64" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2075,6 +2143,7 @@ "cpu": [ "x64" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2095,6 +2164,7 @@ "cpu": [ "x64" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2115,6 +2185,7 @@ "cpu": [ "arm" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2135,6 +2206,7 @@ "cpu": [ "arm" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2155,6 +2227,7 @@ "cpu": [ "arm64" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2175,6 +2248,7 @@ "cpu": [ "arm64" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2195,6 +2269,7 @@ "cpu": [ "x64" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2215,6 +2290,7 @@ "cpu": [ "x64" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2235,6 +2311,7 @@ "cpu": [ "arm64" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2255,6 +2332,7 @@ "cpu": [ "ia32" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2275,6 +2353,7 @@ "cpu": [ "x64" ], + "dev": true, "license": "MIT", "optional": true, "os": [ @@ -2300,17 +2379,56 @@ "integrity": "sha512-+Fj43pSMwJs4KRrH/938Uf+uAELIgVBmQzg/q1YG10djyfA3TnrU8N8XzqCh/okZdszqBQTZf96idMfE5lnwTA==", "license": "MIT" }, - "node_modules/@sindresorhus/merge-streams": { - "version": "2.3.0", - "resolved": "https://registry.npmjs.org/@sindresorhus/merge-streams/-/merge-streams-2.3.0.tgz", - "integrity": "sha512-LtoMMhxAlorcGhmFYI+LhPgbPZCkgP6ra1YL604EeF6U98pLlQ3iWIGMdWSC+vWmPBWBNgmDBAhnAobLROJmwg==", - "dev": true, + "node_modules/@stylistic/eslint-plugin": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/@stylistic/eslint-plugin/-/eslint-plugin-4.2.0.tgz", + "integrity": "sha512-8hXezgz7jexGHdo5WN6JBEIPHCSFyyU4vgbxevu4YLVS5vl+sxqAAGyXSzfNDyR6xMNSH5H1x67nsXcYMOHtZA==", "license": "MIT", + "dependencies": { + "@typescript-eslint/utils": "^8.23.0", + "eslint-visitor-keys": "^4.2.0", + "espree": "^10.3.0", + "estraverse": "^5.3.0", + "picomatch": "^4.0.2" + }, "engines": { - "node": ">=18" + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "peerDependencies": { + "eslint": ">=9.0.0" + } + }, + "node_modules/@stylistic/eslint-plugin/node_modules/eslint-visitor-keys": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-4.2.0.tgz", + "integrity": "sha512-UyLnSehNt62FFhSwjZlHmeokpRK59rcz29j+F1/aDgbkbRTk7wIc9XzdoasMUbRNKDM0qQt/+BJ4BrpFeABemw==", + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" }, "funding": { - "url": "https://github.com/sponsors/sindresorhus" + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@stylistic/eslint-plugin/node_modules/estraverse": { + "version": "5.3.0", + "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz", + "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", + "license": "BSD-2-Clause", + "engines": { + "node": ">=4.0" + } + }, + "node_modules/@stylistic/eslint-plugin/node_modules/picomatch": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.2.tgz", + "integrity": "sha512-M7BAV6Rlcy5u+m6oPhAPFgJTzAioX/6B0DxyvDlo9l8+T3nLKbrczg2WLUyzd45L8RqfUMyGPzekbMvX2Ldkwg==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" } }, 
"node_modules/@types/body-parser": { @@ -2352,9 +2470,9 @@ } }, "node_modules/@types/eslint": { - "version": "8.56.12", - "resolved": "https://registry.npmjs.org/@types/eslint/-/eslint-8.56.12.tgz", - "integrity": "sha512-03ruubjWyOHlmljCVoxSuNDdmfZDzsrrz0P2LeJsOXr+ZwFQ+0yQIwNCwt/GYhV7Z31fgtXJTAEs+FYlEL851g==", + "version": "9.6.1", + "resolved": "https://registry.npmjs.org/@types/eslint/-/eslint-9.6.1.tgz", + "integrity": "sha512-FXx2pKgId/WyYo2jXw63kk7/+TY7u7AziEJxJAnSFzHlqTAS3Ync6SvgYAN/k4/PQpnnVuzoMuVnByKK2qp0ag==", "license": "MIT", "dependencies": { "@types/estree": "*", @@ -2372,9 +2490,9 @@ } }, "node_modules/@types/estree": { - "version": "1.0.6", - "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.6.tgz", - "integrity": "sha512-AYnb1nQyY49te+VRAVgmzfcgjYS91mY5P0TKUDCLEM+gNnA+3T6rWITXRLYCpahpqSQbN5cE+gHpnPyXjHWxcw==", + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.7.tgz", + "integrity": "sha512-w28IoSUCJpidD/TGviZwwMJckNESJZXFu7NBZ5YJ4mEUnNraUn9Pm8HSZm/jDF1pDWYKspWE7oVphigUPRakIQ==", "license": "MIT" }, "node_modules/@types/express": { @@ -2390,18 +2508,6 @@ } }, "node_modules/@types/express-serve-static-core": { - "version": "5.0.6", - "resolved": "https://registry.npmjs.org/@types/express-serve-static-core/-/express-serve-static-core-5.0.6.tgz", - "integrity": "sha512-3xhRnjJPkULekpSzgtoNYYcTWgEZkp4myc+Saevii5JPnHNvHMRlBSHDbs7Bh1iPPoVTERHEZXyhyLbMEsExsA==", - "license": "MIT", - "dependencies": { - "@types/node": "*", - "@types/qs": "*", - "@types/range-parser": "*", - "@types/send": "*" - } - }, - "node_modules/@types/express/node_modules/@types/express-serve-static-core": { "version": "4.19.6", "resolved": "https://registry.npmjs.org/@types/express-serve-static-core/-/express-serve-static-core-4.19.6.tgz", "integrity": "sha512-N4LZ2xG7DatVqhCZzOGb1Yi5lMbXSZcmdLDe9EzSndPV2HpWYWzRbaerl2n27irrm94EPpprqa8KpskPT085+A==", @@ -2413,12 +2519,6 @@ "@types/send": "*" } }, - "node_modules/@types/gensync": { - "version": "1.0.4", - "resolved": "https://registry.npmjs.org/@types/gensync/-/gensync-1.0.4.tgz", - "integrity": "sha512-C3YYeRQWp2fmq9OryX+FoDy8nXS6scQ7dPptD8LnFDAUNcKWJjXQKDNJD3HVm+kOUsXhTOkpi69vI4EuAr95bA==", - "license": "MIT" - }, "node_modules/@types/http-errors": { "version": "2.0.4", "resolved": "https://registry.npmjs.org/@types/http-errors/-/http-errors-2.0.4.tgz", @@ -2477,12 +2577,12 @@ "license": "MIT" }, "node_modules/@types/node": { - "version": "22.13.1", - "resolved": "https://registry.npmjs.org/@types/node/-/node-22.13.1.tgz", - "integrity": "sha512-jK8uzQlrvXqEU91UxiK5J7pKHyzgnI1Qnl0QDHIgVGuolJhRb9EEl28Cj9b3rGR8B2lhFCtvIm5os8lFnO/1Ew==", + "version": "22.15.3", + "resolved": "https://registry.npmjs.org/@types/node/-/node-22.15.3.tgz", + "integrity": "sha512-lX7HFZeHf4QG/J7tBZqrCAXwz9J5RD56Y6MpP0eJkka8p+K0RY/yBTW7CYFJ4VGCclxqOLKmiGP5juQc6MKgcw==", "license": "MIT", "dependencies": { - "undici-types": "~6.20.0" + "undici-types": "~6.21.0" } }, "node_modules/@types/node-forge": { @@ -2494,12 +2594,6 @@ "@types/node": "*" } }, - "node_modules/@types/parse-json": { - "version": "4.0.2", - "resolved": "https://registry.npmjs.org/@types/parse-json/-/parse-json-4.0.2.tgz", - "integrity": "sha512-dISoDXWWQwUquiKsyZ4Ng+HX2KsPL7LyHKHQwgGFEA3IaKac4Obd+h2a/a6waisAoepJlBcx9paWqjA8/HVjCw==", - "license": "MIT" - }, "node_modules/@types/qs": { "version": "6.9.18", "resolved": "https://registry.npmjs.org/@types/qs/-/qs-6.9.18.tgz", @@ -2558,9 +2652,9 @@ } }, "node_modules/@types/ws": { - "version": "8.5.14", - 
"resolved": "https://registry.npmjs.org/@types/ws/-/ws-8.5.14.tgz", - "integrity": "sha512-bd/YFLW+URhBzMXurx7lWByOu+xzU9+kb3RboOteXYDfW+tr+JZa99OyNmPINEGB/ahzKrEuc8rcv4gnpJmxTw==", + "version": "8.18.1", + "resolved": "https://registry.npmjs.org/@types/ws/-/ws-8.18.1.tgz", + "integrity": "sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg==", "license": "MIT", "dependencies": { "@types/node": "*" @@ -2581,72 +2675,210 @@ "integrity": "sha512-I4q9QU9MQv4oEOz4tAHJtNz1cwuLxn2F3xcc2iV5WdqLPpUnj30aUuxt1mAxYTG+oe8CZMV/+6rU4S4gRDzqtQ==", "license": "MIT" }, - "node_modules/@ungap/structured-clone": { - "version": "1.3.0", - "resolved": "https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.3.0.tgz", - "integrity": "sha512-WmoN8qaIAo7WTYWbAZuG8PYEhn5fkz7dZrqTBZ7dtt//lL2Gwms1IcnQ5yHqjDfX8Ft5j4YzDM23f87zBfDe9g==", - "license": "ISC" - }, - "node_modules/@webassemblyjs/ast": { - "version": "1.14.1", - "resolved": "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.14.1.tgz", - "integrity": "sha512-nuBEDgQfm1ccRp/8bCQrx1frohyufl4JlbMMZ4P1wpeOfDhF6FQkxZJ1b/e+PLwr6X1Nhw6OLme5usuBWYBvuQ==", + "node_modules/@typescript-eslint/scope-manager": { + "version": "8.31.0", + "resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-8.31.0.tgz", + "integrity": "sha512-knO8UyF78Nt8O/B64i7TlGXod69ko7z6vJD9uhSlm0qkAbGeRUSudcm0+K/4CrRjrpiHfBCjMWlc08Vav1xwcw==", "license": "MIT", "dependencies": { - "@webassemblyjs/helper-numbers": "1.13.2", - "@webassemblyjs/helper-wasm-bytecode": "1.13.2" + "@typescript-eslint/types": "8.31.0", + "@typescript-eslint/visitor-keys": "8.31.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" } }, - "node_modules/@webassemblyjs/floating-point-hex-parser": { - "version": "1.13.2", - "resolved": "https://registry.npmjs.org/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.13.2.tgz", - "integrity": "sha512-6oXyTOzbKxGH4steLbLNOu71Oj+C8Lg34n6CqRvqfS2O71BxY6ByfMDRhBytzknj9yGUPVJ1qIKhRlAwO1AovA==", - "license": "MIT" - }, - "node_modules/@webassemblyjs/helper-api-error": { - "version": "1.13.2", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-api-error/-/helper-api-error-1.13.2.tgz", - "integrity": "sha512-U56GMYxy4ZQCbDZd6JuvvNV/WFildOjsaWD3Tzzvmw/mas3cXzRJPMjP83JqEsgSbyrmaGjBfDtV7KDXV9UzFQ==", - "license": "MIT" - }, - "node_modules/@webassemblyjs/helper-buffer": { - "version": "1.14.1", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-buffer/-/helper-buffer-1.14.1.tgz", - "integrity": "sha512-jyH7wtcHiKssDtFPRB+iQdxlDf96m0E39yb0k5uJVhFGleZFoNw1c4aeIcVUPPbXUVJ94wwnMOAqUHyzoEPVMA==", - "license": "MIT" + "node_modules/@typescript-eslint/types": { + "version": "8.31.0", + "resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-8.31.0.tgz", + "integrity": "sha512-Ch8oSjVyYyJxPQk8pMiP2FFGYatqXQfQIaMp+TpuuLlDachRWpUAeEu1u9B/v/8LToehUIWyiKcA/w5hUFRKuQ==", + "license": "MIT", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } }, - "node_modules/@webassemblyjs/helper-numbers": { - "version": "1.13.2", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-numbers/-/helper-numbers-1.13.2.tgz", - "integrity": 
"sha512-FE8aCmS5Q6eQYcV3gI35O4J789wlQA+7JrqTTpJqn5emA4U2hvwJmvFRC0HODS+3Ye6WioDklgd6scJ3+PLnEA==", + "node_modules/@typescript-eslint/typescript-estree": { + "version": "8.31.0", + "resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-8.31.0.tgz", + "integrity": "sha512-xLmgn4Yl46xi6aDSZ9KkyfhhtnYI15/CvHbpOy/eR5NWhK/BK8wc709KKwhAR0m4ZKRP7h07bm4BWUYOCuRpQQ==", "license": "MIT", "dependencies": { - "@webassemblyjs/floating-point-hex-parser": "1.13.2", - "@webassemblyjs/helper-api-error": "1.13.2", - "@xtuc/long": "4.2.2" + "@typescript-eslint/types": "8.31.0", + "@typescript-eslint/visitor-keys": "8.31.0", + "debug": "^4.3.4", + "fast-glob": "^3.3.2", + "is-glob": "^4.0.3", + "minimatch": "^9.0.4", + "semver": "^7.6.0", + "ts-api-utils": "^2.0.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "typescript": ">=4.8.4 <5.9.0" } }, - "node_modules/@webassemblyjs/helper-wasm-bytecode": { - "version": "1.13.2", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.13.2.tgz", - "integrity": "sha512-3QbLKy93F0EAIXLh0ogEVR6rOubA9AoZ+WRYhNbFyuB70j3dRdwH9g+qXhLAO0kiYGlg3TxDV+I4rQTr/YNXkA==", - "license": "MIT" - }, - "node_modules/@webassemblyjs/helper-wasm-section": { - "version": "1.14.1", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.14.1.tgz", - "integrity": "sha512-ds5mXEqTJ6oxRoqjhWDU83OgzAYjwsCV8Lo/N+oRsNDmx/ZDpqalmrtgOMkHwxsG0iI//3BwWAErYRHtgn0dZw==", + "node_modules/@typescript-eslint/typescript-estree/node_modules/brace-expansion": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.1.tgz", + "integrity": "sha512-XnAIvQ8eM+kC6aULx6wuQiwVsnzsi9d3WxzV3FpWTGA19F621kwdbsAcFKXgKUHZWsy+mY6iL1sHTxWEFCytDA==", "license": "MIT", "dependencies": { - "@webassemblyjs/ast": "1.14.1", - "@webassemblyjs/helper-buffer": "1.14.1", - "@webassemblyjs/helper-wasm-bytecode": "1.13.2", - "@webassemblyjs/wasm-gen": "1.14.1" + "balanced-match": "^1.0.0" } }, - "node_modules/@webassemblyjs/ieee754": { - "version": "1.13.2", - "resolved": "https://registry.npmjs.org/@webassemblyjs/ieee754/-/ieee754-1.13.2.tgz", + "node_modules/@typescript-eslint/typescript-estree/node_modules/minimatch": { + "version": "9.0.5", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.5.tgz", + "integrity": "sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==", + "license": "ISC", + "dependencies": { + "brace-expansion": "^2.0.1" + }, + "engines": { + "node": ">=16 || 14 >=14.17" + }, + "funding": { + "url": "https://github.com/sponsors/isaacs" + } + }, + "node_modules/@typescript-eslint/typescript-estree/node_modules/semver": { + "version": "7.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.1.tgz", + "integrity": "sha512-hlq8tAfn0m/61p4BVRcPzIGr6LKiMwo4VM6dGi6pt4qcRkmNzTcWq6eCEjEh+qXjkMDvPlOFFSGwQjoEa6gyMA==", + "license": "ISC", + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/@typescript-eslint/utils": { + "version": "8.31.0", + "resolved": "https://registry.npmjs.org/@typescript-eslint/utils/-/utils-8.31.0.tgz", + "integrity": "sha512-qi6uPLt9cjTFxAb1zGNgTob4x9ur7xC6mHQJ8GwEzGMGE9tYniublmJaowOJ9V2jUzxrltTPfdG2nKlWsq0+Ww==", + "license": "MIT", + "dependencies": { 
+ "@eslint-community/eslint-utils": "^4.4.0", + "@typescript-eslint/scope-manager": "8.31.0", + "@typescript-eslint/types": "8.31.0", + "@typescript-eslint/typescript-estree": "8.31.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + }, + "peerDependencies": { + "eslint": "^8.57.0 || ^9.0.0", + "typescript": ">=4.8.4 <5.9.0" + } + }, + "node_modules/@typescript-eslint/visitor-keys": { + "version": "8.31.0", + "resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-8.31.0.tgz", + "integrity": "sha512-QcGHmlRHWOl93o64ZUMNewCdwKGU6WItOU52H0djgNmn1EOrhVudrDzXz4OycCRSCPwFCDrE2iIt5vmuUdHxuQ==", + "license": "MIT", + "dependencies": { + "@typescript-eslint/types": "8.31.0", + "eslint-visitor-keys": "^4.2.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/typescript-eslint" + } + }, + "node_modules/@typescript-eslint/visitor-keys/node_modules/eslint-visitor-keys": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-4.2.0.tgz", + "integrity": "sha512-UyLnSehNt62FFhSwjZlHmeokpRK59rcz29j+F1/aDgbkbRTk7wIc9XzdoasMUbRNKDM0qQt/+BJ4BrpFeABemw==", + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@webassemblyjs/ast": { + "version": "1.14.1", + "resolved": "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.14.1.tgz", + "integrity": "sha512-nuBEDgQfm1ccRp/8bCQrx1frohyufl4JlbMMZ4P1wpeOfDhF6FQkxZJ1b/e+PLwr6X1Nhw6OLme5usuBWYBvuQ==", + "license": "MIT", + "dependencies": { + "@webassemblyjs/helper-numbers": "1.13.2", + "@webassemblyjs/helper-wasm-bytecode": "1.13.2" + } + }, + "node_modules/@webassemblyjs/floating-point-hex-parser": { + "version": "1.13.2", + "resolved": "https://registry.npmjs.org/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.13.2.tgz", + "integrity": "sha512-6oXyTOzbKxGH4steLbLNOu71Oj+C8Lg34n6CqRvqfS2O71BxY6ByfMDRhBytzknj9yGUPVJ1qIKhRlAwO1AovA==", + "license": "MIT" + }, + "node_modules/@webassemblyjs/helper-api-error": { + "version": "1.13.2", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-api-error/-/helper-api-error-1.13.2.tgz", + "integrity": "sha512-U56GMYxy4ZQCbDZd6JuvvNV/WFildOjsaWD3Tzzvmw/mas3cXzRJPMjP83JqEsgSbyrmaGjBfDtV7KDXV9UzFQ==", + "license": "MIT" + }, + "node_modules/@webassemblyjs/helper-buffer": { + "version": "1.14.1", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-buffer/-/helper-buffer-1.14.1.tgz", + "integrity": "sha512-jyH7wtcHiKssDtFPRB+iQdxlDf96m0E39yb0k5uJVhFGleZFoNw1c4aeIcVUPPbXUVJ94wwnMOAqUHyzoEPVMA==", + "license": "MIT" + }, + "node_modules/@webassemblyjs/helper-numbers": { + "version": "1.13.2", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-numbers/-/helper-numbers-1.13.2.tgz", + "integrity": "sha512-FE8aCmS5Q6eQYcV3gI35O4J789wlQA+7JrqTTpJqn5emA4U2hvwJmvFRC0HODS+3Ye6WioDklgd6scJ3+PLnEA==", + "license": "MIT", + "dependencies": { + "@webassemblyjs/floating-point-hex-parser": "1.13.2", + "@webassemblyjs/helper-api-error": "1.13.2", + "@xtuc/long": "4.2.2" + } + }, + "node_modules/@webassemblyjs/helper-wasm-bytecode": { + "version": "1.13.2", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.13.2.tgz", + 
"integrity": "sha512-3QbLKy93F0EAIXLh0ogEVR6rOubA9AoZ+WRYhNbFyuB70j3dRdwH9g+qXhLAO0kiYGlg3TxDV+I4rQTr/YNXkA==", + "license": "MIT" + }, + "node_modules/@webassemblyjs/helper-wasm-section": { + "version": "1.14.1", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.14.1.tgz", + "integrity": "sha512-ds5mXEqTJ6oxRoqjhWDU83OgzAYjwsCV8Lo/N+oRsNDmx/ZDpqalmrtgOMkHwxsG0iI//3BwWAErYRHtgn0dZw==", + "license": "MIT", + "dependencies": { + "@webassemblyjs/ast": "1.14.1", + "@webassemblyjs/helper-buffer": "1.14.1", + "@webassemblyjs/helper-wasm-bytecode": "1.13.2", + "@webassemblyjs/wasm-gen": "1.14.1" + } + }, + "node_modules/@webassemblyjs/ieee754": { + "version": "1.13.2", + "resolved": "https://registry.npmjs.org/@webassemblyjs/ieee754/-/ieee754-1.13.2.tgz", "integrity": "sha512-4LtOzh58S/5lX4ITKxnAK2USuNEvpdVV9AlgGQb8rJDHaLeHciwG4zlGr0j/SNWlr7x3vO1lDEsuePvtcDNCkw==", "license": "MIT", "dependencies": { @@ -2812,9 +3044,9 @@ } }, "node_modules/acorn": { - "version": "8.14.0", - "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.14.0.tgz", - "integrity": "sha512-cl669nCJTZBsL97OF4kUQm5g5hC2uihk0NxY3WENAC0TYdILVkAyHymAntgxGkl7K+t0cXIrH5siy5S4XkFycA==", + "version": "8.14.1", + "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.14.1.tgz", + "integrity": "sha512-OvQ/2pUDKmgfCg++xsTX1wGxfTaszcHVcTctW4UJB4hibJx2HXxxO5UmVgyjMa+ZDsiaf5wWLXYpRWMmBI0QHg==", "license": "MIT", "bin": { "acorn": "bin/acorn" @@ -2832,24 +3064,6 @@ "acorn": "^6.0.0 || ^7.0.0 || ^8.0.0" } }, - "node_modules/address": { - "version": "1.2.2", - "resolved": "https://registry.npmjs.org/address/-/address-1.2.2.tgz", - "integrity": "sha512-4B/qKCfeE/ODUaAUpSwfzazo5x29WD4r3vXiWsB7I2mSDAihwEqKO+g8GELZUQSSAo5e1XTYh3ZVfLyxBc12nA==", - "license": "MIT", - "engines": { - "node": ">= 10.0.0" - } - }, - "node_modules/agentkeepalive": { - "version": "2.2.0", - "resolved": "https://registry.npmjs.org/agentkeepalive/-/agentkeepalive-2.2.0.tgz", - "integrity": "sha512-TnB6ziK363p7lR8QpeLC8aMr8EGYBKZTpgzQLfqTs3bR0Oo5VbKdwKf8h0dSzsYrB7lSCgfJnMZKqShvlq5Oyg==", - "license": "MIT", - "engines": { - "node": ">= 0.10.0" - } - }, "node_modules/ajv": { "version": "6.12.6", "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", @@ -2905,15 +3119,6 @@ "integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==", "license": "MIT" }, - "node_modules/ajv-keywords": { - "version": "3.5.2", - "resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz", - "integrity": "sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==", - "license": "MIT", - "peerDependencies": { - "ajv": "^6.9.1" - } - }, "node_modules/algoliasearch": { "version": "4.23.3", "resolved": "https://registry.npmjs.org/algoliasearch/-/algoliasearch-4.23.3.tgz", @@ -2953,6 +3158,7 @@ "version": "5.0.1", "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true, "license": "MIT", "engines": { "node": ">=8" @@ -2992,15 +3198,6 @@ "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==", "license": "Python-2.0" }, - "node_modules/aria-query": { - "version": "5.3.2", - "resolved": "https://registry.npmjs.org/aria-query/-/aria-query-5.3.2.tgz", - "integrity": 
"sha512-COROpnaoap1E2F000S62r6A60uHZnmlvomhfyT2DlTcrY1OrBKn2UhH7qn5wTC9zMvD0AY7csdPSNwKP+7WiQw==", - "license": "Apache-2.0", - "engines": { - "node": ">= 0.4" - } - }, "node_modules/array-buffer-byte-length": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/array-buffer-byte-length/-/array-buffer-byte-length-1.0.2.tgz", @@ -3043,48 +3240,19 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/array-union": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/array-union/-/array-union-2.1.0.tgz", - "integrity": "sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw==", - "license": "MIT", - "engines": { - "node": ">=8" - } - }, - "node_modules/array.prototype.findlast": { - "version": "1.2.5", - "resolved": "https://registry.npmjs.org/array.prototype.findlast/-/array.prototype.findlast-1.2.5.tgz", - "integrity": "sha512-CVvd6FHg1Z3POpBLxO6E6zr+rSKEQ9L6rZHAaY7lLfhKsWYUBBOuMs0e9o24oopj6H+geRCX0YJ+TJLBK2eHyQ==", - "license": "MIT", - "peer": true, - "dependencies": { - "call-bind": "^1.0.7", - "define-properties": "^1.2.1", - "es-abstract": "^1.23.2", - "es-errors": "^1.3.0", - "es-object-atoms": "^1.0.0", - "es-shim-unscopables": "^1.0.2" - }, - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, "node_modules/array.prototype.findlastindex": { - "version": "1.2.5", - "resolved": "https://registry.npmjs.org/array.prototype.findlastindex/-/array.prototype.findlastindex-1.2.5.tgz", - "integrity": "sha512-zfETvRFA8o7EiNn++N5f/kaCw221hrpGsDmcpndVupkPzEc1Wuf3VgC0qby1BbHs7f5DVYjgtEU2LLh5bqeGfQ==", + "version": "1.2.6", + "resolved": "https://registry.npmjs.org/array.prototype.findlastindex/-/array.prototype.findlastindex-1.2.6.tgz", + "integrity": "sha512-F/TKATkzseUExPlfvmwQKGITM3DGTK+vkAsCZoDc5daVygbJBnjEUCbgkAvVFsgfXfX4YIqZ/27G3k3tdXrTxQ==", "license": "MIT", "dependencies": { - "call-bind": "^1.0.7", + "call-bind": "^1.0.8", + "call-bound": "^1.0.4", "define-properties": "^1.2.1", - "es-abstract": "^1.23.2", + "es-abstract": "^1.23.9", "es-errors": "^1.3.0", - "es-object-atoms": "^1.0.0", - "es-shim-unscopables": "^1.0.2" + "es-object-atoms": "^1.1.1", + "es-shim-unscopables": "^1.1.0" }, "engines": { "node": ">= 0.4" @@ -3129,23 +3297,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/array.prototype.tosorted": { - "version": "1.1.4", - "resolved": "https://registry.npmjs.org/array.prototype.tosorted/-/array.prototype.tosorted-1.1.4.tgz", - "integrity": "sha512-p6Fx8B7b7ZhL/gmUsAy0D15WhvDccw3mnGNbZpi3pmeJdxtWsj2jEaI4Y6oo3XiHfzuSgPwKc04MYt6KgvC/wA==", - "license": "MIT", - "peer": true, - "dependencies": { - "call-bind": "^1.0.7", - "define-properties": "^1.2.1", - "es-abstract": "^1.23.3", - "es-errors": "^1.3.0", - "es-shim-unscopables": "^1.0.2" - }, - "engines": { - "node": ">= 0.4" - } - }, "node_modules/arraybuffer.prototype.slice": { "version": "1.0.4", "resolved": "https://registry.npmjs.org/arraybuffer.prototype.slice/-/arraybuffer.prototype.slice-1.0.4.tgz", @@ -3167,12 +3318,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/ast-types-flow": { - "version": "0.0.8", - "resolved": "https://registry.npmjs.org/ast-types-flow/-/ast-types-flow-0.0.8.tgz", - "integrity": "sha512-OH/2E5Fg20h2aPrbe+QL8JZQFko0YZaF+j4mnQ7BGhfavO7OpSLa8a0y9sBwomHdSbkhTS8TQNayBfnW5DwbvQ==", - "license": "MIT" - }, "node_modules/async-function": { "version": "1.0.0", "resolved": 
"https://registry.npmjs.org/async-function/-/async-function-1.0.0.tgz", @@ -3182,19 +3327,10 @@ "node": ">= 0.4" } }, - "node_modules/at-least-node": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/at-least-node/-/at-least-node-1.0.0.tgz", - "integrity": "sha512-+q/t7Ekv1EDY2l6Gda6LLiX14rU9TV20Wa3ofeQmwPFZbOMo9DXrLbOjFaaclkXKWidIaopwAObQDqwWtGUjqg==", - "license": "ISC", - "engines": { - "node": ">= 4.0.0" - } - }, "node_modules/autoprefixer": { - "version": "10.4.20", - "resolved": "https://registry.npmjs.org/autoprefixer/-/autoprefixer-10.4.20.tgz", - "integrity": "sha512-XY25y5xSv/wEoqzDyXXME4AFfkZI0P23z6Fs3YgymDnKJkCGOnkL0iTxCa85UTqaSgfcqyf3UA6+c7wUvx/16g==", + "version": "10.4.21", + "resolved": "https://registry.npmjs.org/autoprefixer/-/autoprefixer-10.4.21.tgz", + "integrity": "sha512-O+A6LWV5LDHSJD3LjHYoNi4VLsj/Whi7k6zG12xTYaU4cQ8oxQGckXNX8cRHK5yOZ/ppVHe0ZBXGzSV9jXdVbQ==", "dev": true, "funding": [ { @@ -3212,11 +3348,11 @@ ], "license": "MIT", "dependencies": { - "browserslist": "^4.23.3", - "caniuse-lite": "^1.0.30001646", + "browserslist": "^4.24.4", + "caniuse-lite": "^1.0.30001702", "fraction.js": "^4.3.7", "normalize-range": "^0.1.2", - "picocolors": "^1.0.1", + "picocolors": "^1.1.1", "postcss-value-parser": "^4.2.0" }, "bin": { @@ -3244,49 +3380,30 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/axe-core": { - "version": "4.10.2", - "resolved": "https://registry.npmjs.org/axe-core/-/axe-core-4.10.2.tgz", - "integrity": "sha512-RE3mdQ7P3FRSe7eqCWoeQ/Z9QXrtniSjp1wUjt5nRC3WIpz5rSCve6o3fsZ2aCpJtrZjSZgjwXAoTO5k4tEI0w==", - "license": "MPL-2.0", - "engines": { - "node": ">=4" - } - }, - "node_modules/axobject-query": { - "version": "4.1.0", - "resolved": "https://registry.npmjs.org/axobject-query/-/axobject-query-4.1.0.tgz", - "integrity": "sha512-qIj0G9wZbMGNLjLmg1PT6v2mE9AH2zlnADJD/2tC6E00hgmhUOfEB6greHPAfLRSufHqROIUTkw6E+M3lH0PTQ==", - "license": "Apache-2.0", - "engines": { - "node": ">= 0.4" - } - }, "node_modules/babel-loader": { - "version": "9.2.1", - "resolved": "https://registry.npmjs.org/babel-loader/-/babel-loader-9.2.1.tgz", - "integrity": "sha512-fqe8naHt46e0yIdkjUZYqddSXfej3AHajX+CSO5X7oy0EmPc6o5Xh+RClNoHjnieWz9AW4kZxW9yyFMhVB1QLA==", + "version": "10.0.0", + "resolved": "https://registry.npmjs.org/babel-loader/-/babel-loader-10.0.0.tgz", + "integrity": "sha512-z8jt+EdS61AMw22nSfoNJAZ0vrtmhPRVi6ghL3rCeRZI8cdNYFiV5xeV3HbE7rlZZNmGH8BVccwWt8/ED0QOHA==", "license": "MIT", "dependencies": { - "find-cache-dir": "^4.0.0", - "schema-utils": "^4.0.0" + "find-up": "^5.0.0" }, "engines": { - "node": ">= 14.15.0" + "node": "^18.20.0 || ^20.10.0 || >=22.0.0" }, "peerDependencies": { "@babel/core": "^7.12.0", - "webpack": ">=5" + "webpack": ">=5.61.0" } }, "node_modules/babel-plugin-polyfill-corejs2": { - "version": "0.4.12", - "resolved": "https://registry.npmjs.org/babel-plugin-polyfill-corejs2/-/babel-plugin-polyfill-corejs2-0.4.12.tgz", - "integrity": "sha512-CPWT6BwvhrTO2d8QVorhTCQw9Y43zOu7G9HigcfxvepOU6b8o3tcWad6oVgZIsZCTt42FFv97aA7ZJsbM4+8og==", + "version": "0.4.13", + "resolved": "https://registry.npmjs.org/babel-plugin-polyfill-corejs2/-/babel-plugin-polyfill-corejs2-0.4.13.tgz", + "integrity": "sha512-3sX/eOms8kd3q2KZ6DAhKPc0dgm525Gqq5NtWKZ7QYYZEv57OQ54KtblzJzH1lQF/eQxO8KjWGIK9IPUJNus5g==", "license": "MIT", "dependencies": { "@babel/compat-data": "^7.22.6", - "@babel/helper-define-polyfill-provider": "^0.6.3", + "@babel/helper-define-polyfill-provider": "^0.6.4", "semver": "^6.3.1" }, "peerDependencies": { @@ -3307,12 +3424,12 @@ 
} }, "node_modules/babel-plugin-polyfill-regenerator": { - "version": "0.6.3", - "resolved": "https://registry.npmjs.org/babel-plugin-polyfill-regenerator/-/babel-plugin-polyfill-regenerator-0.6.3.tgz", - "integrity": "sha512-LiWSbl4CRSIa5x/JAU6jZiG9eit9w6mz+yVMFwDE83LAWvt0AfGBoZ7HS/mkhrKuh2ZlzfVZYKoLjXdqw6Yt7Q==", + "version": "0.6.4", + "resolved": "https://registry.npmjs.org/babel-plugin-polyfill-regenerator/-/babel-plugin-polyfill-regenerator-0.6.4.tgz", + "integrity": "sha512-7gD3pRadPrbjhjLyxebmx/WrFYcuSjZ0XbdUujQMZ/fcE9oeewk2U/7PCvez84UeuK3oSjmPZ0Ch0dlupQvGzw==", "license": "MIT", "dependencies": { - "@babel/helper-define-polyfill-provider": "^0.6.3" + "@babel/helper-define-polyfill-provider": "^0.6.4" }, "peerDependencies": { "@babel/core": "^7.4.0 || ^8.0.0-0 <8.0.0" @@ -3494,9 +3611,9 @@ } }, "node_modules/call-bind-apply-helpers": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.1.tgz", - "integrity": "sha512-BhYE+WDaywFg2TBWYNXAE+8B1ATnThNBqXHP5nQu0jWJdVvY2hvkpyB3qOmtmDePiS5/BDQ8wASEWGMWRG148g==", + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", + "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", "license": "MIT", "dependencies": { "es-errors": "^1.3.0", @@ -3507,13 +3624,13 @@ } }, "node_modules/call-bound": { - "version": "1.0.3", - "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.3.tgz", - "integrity": "sha512-YTd+6wGlNlPxSuri7Y6X8tY2dmm12UMH66RpKMhiX6rsk5wXXnYgbUcOt8kiS31/AjfoTOvCsE+w8nZQLQnzHA==", + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", + "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", "license": "MIT", "dependencies": { - "call-bind-apply-helpers": "^1.0.1", - "get-intrinsic": "^1.2.6" + "call-bind-apply-helpers": "^1.0.2", + "get-intrinsic": "^1.3.0" }, "engines": { "node": ">= 0.4" @@ -3532,9 +3649,9 @@ } }, "node_modules/caniuse-lite": { - "version": "1.0.30001699", - "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001699.tgz", - "integrity": "sha512-b+uH5BakXZ9Do9iK+CkDmctUSEqZl+SP056vc5usa0PL+ev5OHw003rZXcnjNDv3L8P5j6rwT6C0BPKSikW08w==", + "version": "1.0.30001715", + "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001715.tgz", + "integrity": "sha512-7ptkFGMm2OAOgvZpwgA4yjQ5SQbrNVGdRjzH0pBdy1Fasvcr+KAeECmbCAECzTuDuoX0FCY8KzUxjf9+9kfZEw==", "funding": [ { "type": "opencollective", @@ -3706,12 +3823,6 @@ "integrity": "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==", "license": "MIT" }, - "node_modules/common-path-prefix": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/common-path-prefix/-/common-path-prefix-3.0.0.tgz", - "integrity": "sha512-QE33hToZseCH3jS0qN96O/bSh3kaw/h+Tq7ngyY9eWDUnTlTNUyqfqvCXioLe5Na5jFsL78ra/wuBU4iuEgd4w==", - "license": "ISC" - }, "node_modules/compressible": { "version": "2.0.18", "resolved": "https://registry.npmjs.org/compressible/-/compressible-2.0.18.tgz", @@ -3725,9 +3836,9 @@ } }, "node_modules/compression": { - "version": "1.7.5", - "resolved": "https://registry.npmjs.org/compression/-/compression-1.7.5.tgz", - "integrity": "sha512-bQJ0YRck5ak3LgtnpKkiabX5pNF7tMUh1BSy2ZBOTh0Dim0BUu6aPPwByIns6/A5Prh8PufSPerMDUklpzes2Q==", + "version": "1.8.0", + "resolved": 
"https://registry.npmjs.org/compression/-/compression-1.8.0.tgz", + "integrity": "sha512-k6WLKfunuqCYD3t6AsuPGvQWaKwuLLh2/xHNcX4qE+vIfDNXpSqnrhwA7O53R7WVQUnt8dVAIW+YHr7xTgOgGA==", "license": "MIT", "dependencies": { "bytes": "3.1.2", @@ -3763,12 +3874,6 @@ "integrity": "sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==", "license": "MIT" }, - "node_modules/confusing-browser-globals": { - "version": "1.0.11", - "resolved": "https://registry.npmjs.org/confusing-browser-globals/-/confusing-browser-globals-1.0.11.tgz", - "integrity": "sha512-JsPKdmh8ZkmnHxDk55FZ1TqVLvEQTvoByJZRN9jzI0UjxK/QgAmsphz7PGtqgPieQZ/CQcHWXCR7ATDNhGe+YA==", - "license": "MIT" - }, "node_modules/connect-history-api-fallback": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/connect-history-api-fallback/-/connect-history-api-fallback-2.0.0.tgz", @@ -3821,12 +3926,12 @@ "license": "MIT" }, "node_modules/core-js-compat": { - "version": "3.40.0", - "resolved": "https://registry.npmjs.org/core-js-compat/-/core-js-compat-3.40.0.tgz", - "integrity": "sha512-0XEDpr5y5mijvw8Lbc6E5AkjrHfp7eEoPlu36SWeAbcL8fn1G1ANe8DBlo2XoNN89oVpxWwOjYIPVzR4ZvsKCQ==", + "version": "3.41.0", + "resolved": "https://registry.npmjs.org/core-js-compat/-/core-js-compat-3.41.0.tgz", + "integrity": "sha512-RFsU9LySVue9RTwdDVX/T0e2Y6jRYWXERKElIjpuEOEnxaXffI0X7RUwVzfYLfzuLXSNJDYoRYUAmRUcyln20A==", "license": "MIT", "dependencies": { - "browserslist": "^4.24.3" + "browserslist": "^4.24.4" }, "funding": { "type": "opencollective", @@ -3839,40 +3944,6 @@ "integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==", "license": "MIT" }, - "node_modules/cosmiconfig": { - "version": "6.0.0", - "resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-6.0.0.tgz", - "integrity": "sha512-xb3ZL6+L8b9JLLCx3ZdoZy4+2ECphCMo2PwqgP1tlfVq6M6YReyzBJtvWWtbDSpNr9hn96pkCiZqUcFEc+54Qg==", - "license": "MIT", - "dependencies": { - "@types/parse-json": "^4.0.0", - "import-fresh": "^3.1.0", - "parse-json": "^5.0.0", - "path-type": "^4.0.0", - "yaml": "^1.7.2" - }, - "engines": { - "node": ">=8" - } - }, - "node_modules/cosmiconfig/node_modules/path-type": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz", - "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==", - "license": "MIT", - "engines": { - "node": ">=8" - } - }, - "node_modules/cosmiconfig/node_modules/yaml": { - "version": "1.10.2", - "resolved": "https://registry.npmjs.org/yaml/-/yaml-1.10.2.tgz", - "integrity": "sha512-r3vXyErRCYJ7wg28yvBY5VSoAF8ZvlcW9/BwUzEtUsjvX/DKs24dIkuwjtuprwJJHsbyUbLApepYTR1BN4uHrg==", - "license": "ISC", - "engines": { - "node": ">= 6" - } - }, "node_modules/cross-spawn": { "version": "7.0.6", "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz", @@ -3887,12 +3958,6 @@ "node": ">= 8" } }, - "node_modules/damerau-levenshtein": { - "version": "1.0.8", - "resolved": "https://registry.npmjs.org/damerau-levenshtein/-/damerau-levenshtein-1.0.8.tgz", - "integrity": "sha512-sdQSFB7+llfUcQHUQO3+B8ERRj0Oa4w9POWMI/puGtuf7gFywGmkaLCElnudfTiKZV+NvHqL0ifzdrI8Ro7ESA==", - "license": "BSD-2-Clause" - }, "node_modules/data-view-buffer": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/data-view-buffer/-/data-view-buffer-1.0.2.tgz", @@ -3967,15 +4032,6 @@ "integrity": 
"sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==", "license": "MIT" }, - "node_modules/deepmerge": { - "version": "4.3.1", - "resolved": "https://registry.npmjs.org/deepmerge/-/deepmerge-4.3.1.tgz", - "integrity": "sha512-3sUqbMEc77XqpdNO7FRyRog+eW3ph+GYCbj+rK+uYyRMuwsVy0rMiVtPn+QJlKFvWP/1PYpapqYn0Me2knFn+A==", - "license": "MIT", - "engines": { - "node": ">=0.10.0" - } - }, "node_modules/default-browser": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/default-browser/-/default-browser-5.2.1.tgz", @@ -4021,15 +4077,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/define-lazy-prop": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-2.0.0.tgz", - "integrity": "sha512-Ds09qNh8yw3khSjiJjiUInaGX9xlqZDY7JVryGxdxV7NPeuqQfplOpQ66yJFZut3jLa5zOwkXw1g9EI2uKh4Og==", - "license": "MIT", - "engines": { - "node": ">=8" - } - }, "node_modules/define-properties": { "version": "1.2.1", "resolved": "https://registry.npmjs.org/define-properties/-/define-properties-1.2.1.tgz", @@ -4063,13 +4110,13 @@ } }, "node_modules/dependency-graph": { - "version": "0.11.0", - "resolved": "https://registry.npmjs.org/dependency-graph/-/dependency-graph-0.11.0.tgz", - "integrity": "sha512-JeMq7fEshyepOWDfcfHK06N3MhyPhz++vtqWhMT5O9A3K42rdsEDpfdVqjaqaAhsw6a+ZqeDvQVtD0hFHQWrzg==", + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/dependency-graph/-/dependency-graph-1.0.0.tgz", + "integrity": "sha512-cW3gggJ28HZ/LExwxP2B++aiKxhJXMSIt9K48FOXQkm+vuG5gyatXnLsONRJdzO/7VfjDIiaOOa/bs4l464Lwg==", "dev": true, "license": "MIT", "engines": { - "node": ">= 0.6.0" + "node": ">=4" } }, "node_modules/destroy": { @@ -4092,6 +4139,7 @@ "version": "1.0.3", "resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-1.0.3.tgz", "integrity": "sha512-pGjwhsmsp4kL2RTz08wcOlGN83otlqHeD/Z5T8GXZB+/YcpQ/dgo+lbU8ZsGxV0HIvqqxo9l7mqYwyYMD9bKDg==", + "dev": true, "license": "Apache-2.0", "optional": true, "bin": { @@ -4107,88 +4155,30 @@ "integrity": "sha512-T0NIuQpnTvFDATNuHN5roPwSBG83rFsuO+MXXH9/3N1eFbn4wcPjttvjMLEPWJ0RGUYgQE7cGgS3tNxbqCGM7g==", "license": "MIT" }, - "node_modules/detect-port-alt": { - "version": "1.1.6", - "resolved": "https://registry.npmjs.org/detect-port-alt/-/detect-port-alt-1.1.6.tgz", - "integrity": "sha512-5tQykt+LqfJFBEYaDITx7S7cR7mJ/zQmLXZ2qt5w04ainYZw6tBf9dBunMjVeVOdYVRUzUOE4HkY5J7+uttb5Q==", + "node_modules/dns-packet": { + "version": "5.6.1", + "resolved": "https://registry.npmjs.org/dns-packet/-/dns-packet-5.6.1.tgz", + "integrity": "sha512-l4gcSouhcgIKRvyy99RNVOgxXiicE+2jZoNmaNmZ6JXiGajBOJAesk1OBlJuM5k2c+eudGdLxDqXuPCKIj6kpw==", "license": "MIT", "dependencies": { - "address": "^1.0.1", - "debug": "^2.6.0" - }, - "bin": { - "detect": "bin/detect-port", - "detect-port": "bin/detect-port" + "@leichtgewicht/ip-codec": "^2.0.1" }, "engines": { - "node": ">= 4.2.1" - } - }, - "node_modules/detect-port-alt/node_modules/debug": { - "version": "2.6.9", - "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", - "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", - "license": "MIT", - "dependencies": { - "ms": "2.0.0" - } - }, - "node_modules/detect-port-alt/node_modules/ms": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", - "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", - "license": "MIT" - }, - 
"node_modules/dir-glob": { - "version": "3.0.1", - "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz", - "integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==", - "license": "MIT", - "dependencies": { - "path-type": "^4.0.0" - }, - "engines": { - "node": ">=8" - } - }, - "node_modules/dir-glob/node_modules/path-type": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz", - "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==", - "license": "MIT", - "engines": { - "node": ">=8" - } - }, - "node_modules/dns-packet": { - "version": "5.6.1", - "resolved": "https://registry.npmjs.org/dns-packet/-/dns-packet-5.6.1.tgz", - "integrity": "sha512-l4gcSouhcgIKRvyy99RNVOgxXiicE+2jZoNmaNmZ6JXiGajBOJAesk1OBlJuM5k2c+eudGdLxDqXuPCKIj6kpw==", - "license": "MIT", - "dependencies": { - "@leichtgewicht/ip-codec": "^2.0.1" - }, - "engines": { - "node": ">=6" + "node": ">=6" } }, "node_modules/doctrine": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/doctrine/-/doctrine-3.0.0.tgz", - "integrity": "sha512-yS+Q5i3hBf7GBkd4KG8a7eBNNWNGLTaEwwYWUijIYM7zrlYDM0BFXHjjPWlWZ1Rg7UaddZeIDmi9jF3HmqiQ2w==", + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/doctrine/-/doctrine-2.1.0.tgz", + "integrity": "sha512-35mSku4ZXK0vfCuHEDAwt55dg2jNajHZ1odvF+8SSr82EsZY4QmXfuWso8oEd8zRhVObSN18aM0CjSdoBX7zIw==", "license": "Apache-2.0", "dependencies": { "esutils": "^2.0.2" }, "engines": { - "node": ">=6.0.0" + "node": ">=0.10.0" } }, - "node_modules/dom-walk": { - "version": "0.1.2", - "resolved": "https://registry.npmjs.org/dom-walk/-/dom-walk-0.1.2.tgz", - "integrity": "sha512-6QvTW9mrGeIegrFXdtQi9pk7O/nSK6lSdXW2eqUspN5LWD7UTji2Fqw5V2YLjBpHEoU9Xl/eUWNpDeZvoyOv2w==" - }, "node_modules/dunder-proto": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz", @@ -4203,12 +4193,6 @@ "node": ">= 0.4" } }, - "node_modules/duplexer": { - "version": "0.1.2", - "resolved": "https://registry.npmjs.org/duplexer/-/duplexer-0.1.2.tgz", - "integrity": "sha512-jtD6YG370ZCIi/9GTaJKQxWTZD045+4R4hTk/x1UyoqadyJ9x9CgSi1RlVDQF8U2sxLLSnFkCaMihqljHIWgMg==", - "license": "MIT" - }, "node_modules/ee-first": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", @@ -4216,17 +4200,11 @@ "license": "MIT" }, "node_modules/electron-to-chromium": { - "version": "1.5.96", - "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.5.96.tgz", - "integrity": "sha512-8AJUW6dh75Fm/ny8+kZKJzI1pgoE8bKLZlzDU2W1ENd+DXKJrx7I7l9hb8UWR4ojlnb5OlixMt00QWiYJoVw1w==", + "version": "1.5.143", + "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.5.143.tgz", + "integrity": "sha512-QqklJMOFBMqe46k8iIOwA9l2hz57V2OKMmP5eSWcUvwx+mASAsbU+wkF1pHjn9ZVSBPrsYWr4/W/95y5SwYg2g==", "license": "ISC" }, - "node_modules/emoji-regex": { - "version": "9.2.2", - "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-9.2.2.tgz", - "integrity": "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==", - "license": "MIT" - }, "node_modules/encodeurl": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz", @@ -4258,19 +4236,6 @@ "node": ">=6" } }, - "node_modules/envify": { - "version": "4.1.0", - "resolved": "https://registry.npmjs.org/envify/-/envify-4.1.0.tgz", 
- "integrity": "sha512-IKRVVoAYr4pIx4yIWNsz9mOsboxlNXiu7TNBnem/K/uTHdkyzXWDzHCK7UTolqBbgaBz0tQHsD3YNls0uIIjiw==", - "license": "MIT", - "dependencies": { - "esprima": "^4.0.0", - "through": "~2.3.4" - }, - "bin": { - "envify": "bin/envify" - } - }, "node_modules/envinfo": { "version": "7.14.0", "resolved": "https://registry.npmjs.org/envinfo/-/envinfo-7.14.0.tgz", @@ -4287,6 +4252,7 @@ "version": "1.3.2", "resolved": "https://registry.npmjs.org/error-ex/-/error-ex-1.3.2.tgz", "integrity": "sha512-7dFHNmqeFSEt2ZBsCriorKnn3Z2pj+fd9kmI6QoWw4//DL+icEBfc0U7qJCisqrTsKTjw4fNFy2pW9OqStD84g==", + "dev": true, "license": "MIT", "dependencies": { "is-arrayish": "^0.2.1" @@ -4375,38 +4341,10 @@ "node": ">= 0.4" } }, - "node_modules/es-iterator-helpers": { - "version": "1.2.1", - "resolved": "https://registry.npmjs.org/es-iterator-helpers/-/es-iterator-helpers-1.2.1.tgz", - "integrity": "sha512-uDn+FE1yrDzyC0pCo961B2IHbdM8y/ACZsKD4dG6WqrjV53BADjwa7D+1aom2rsNVfLyDgU/eigvlJGJ08OQ4w==", - "license": "MIT", - "peer": true, - "dependencies": { - "call-bind": "^1.0.8", - "call-bound": "^1.0.3", - "define-properties": "^1.2.1", - "es-abstract": "^1.23.6", - "es-errors": "^1.3.0", - "es-set-tostringtag": "^2.0.3", - "function-bind": "^1.1.2", - "get-intrinsic": "^1.2.6", - "globalthis": "^1.0.4", - "gopd": "^1.2.0", - "has-property-descriptors": "^1.0.2", - "has-proto": "^1.2.0", - "has-symbols": "^1.1.0", - "internal-slot": "^1.1.0", - "iterator.prototype": "^1.1.4", - "safe-array-concat": "^1.1.3" - }, - "engines": { - "node": ">= 0.4" - } - }, "node_modules/es-module-lexer": { - "version": "1.6.0", - "resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-1.6.0.tgz", - "integrity": "sha512-qqnD1yMU6tk/jnaMosogGySTZP8YtUgAffA9nMN+E/rjxcfRQ6IEk7IiozUjgxKoFHBGjTLnrHB/YC45r/59EQ==", + "version": "1.7.0", + "resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-1.7.0.tgz", + "integrity": "sha512-jEQoCwk8hyb2AZziIOLhDqpm5+2ww5uIE6lkO/6jcOCusfk6LhMHpXXfBLXTZ7Ydyt0j4VoUQv6uGNYbdW+kBA==", "license": "MIT" }, "node_modules/es-object-atoms": { @@ -4437,12 +4375,15 @@ } }, "node_modules/es-shim-unscopables": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/es-shim-unscopables/-/es-shim-unscopables-1.0.2.tgz", - "integrity": "sha512-J3yBRXCzDu4ULnQwxyToo/OjdMx6akgVC7K6few0a7F/0wLtmKKN7I73AH5T2836UuXRqN7Qg+IIUw/+YJksRw==", + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/es-shim-unscopables/-/es-shim-unscopables-1.1.0.tgz", + "integrity": "sha512-d9T8ucsEhh8Bi1woXCf+TIKDIROLG5WCkxg8geBCbvk22kzwC5G2OnXVMO6FUsvQlgUUXQ2itephWDLqDzbeCw==", "license": "MIT", "dependencies": { - "hasown": "^2.0.0" + "hasown": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" } }, "node_modules/es-to-primitive": { @@ -4462,12 +4403,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/es6-promise": { - "version": "4.2.8", - "resolved": "https://registry.npmjs.org/es6-promise/-/es6-promise-4.2.8.tgz", - "integrity": "sha512-HJDGx5daxeIvxdBxvG2cb9g4tEvwIk3i8+nhX0yGrYmZUzbkdg8QbDevheDB8gd0//uPj4c1EQua8Q+MViT0/w==", - "license": "MIT" - }, "node_modules/escalade": { "version": "3.2.0", "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz", @@ -4496,135 +4431,63 @@ } }, "node_modules/eslint": { - "version": "8.56.0", - "resolved": "https://registry.npmjs.org/eslint/-/eslint-8.56.0.tgz", - "integrity": "sha512-Go19xM6T9puCOWntie1/P997aXxFsOi37JIHRWI514Hc6ZnaHGKY9xFhrU65RT6CcBEzZoGG1e6Nq+DT04ZtZQ==", - "deprecated": "This version is no longer supported. 
Please see https://eslint.org/version-support for other options.", + "version": "9.25.1", + "resolved": "https://registry.npmjs.org/eslint/-/eslint-9.25.1.tgz", + "integrity": "sha512-E6Mtz9oGQWDCpV12319d59n4tx9zOTXSTmc8BLVxBx+G/0RdM5MvEEJLU9c0+aleoePYYgVTOsRblx433qmhWQ==", "license": "MIT", "dependencies": { "@eslint-community/eslint-utils": "^4.2.0", - "@eslint-community/regexpp": "^4.6.1", - "@eslint/eslintrc": "^2.1.4", - "@eslint/js": "8.56.0", - "@humanwhocodes/config-array": "^0.11.13", + "@eslint-community/regexpp": "^4.12.1", + "@eslint/config-array": "^0.20.0", + "@eslint/config-helpers": "^0.2.1", + "@eslint/core": "^0.13.0", + "@eslint/eslintrc": "^3.3.1", + "@eslint/js": "9.25.1", + "@eslint/plugin-kit": "^0.2.8", + "@humanfs/node": "^0.16.6", "@humanwhocodes/module-importer": "^1.0.1", - "@nodelib/fs.walk": "^1.2.8", - "@ungap/structured-clone": "^1.2.0", + "@humanwhocodes/retry": "^0.4.2", + "@types/estree": "^1.0.6", + "@types/json-schema": "^7.0.15", "ajv": "^6.12.4", "chalk": "^4.0.0", - "cross-spawn": "^7.0.2", + "cross-spawn": "^7.0.6", "debug": "^4.3.2", - "doctrine": "^3.0.0", "escape-string-regexp": "^4.0.0", - "eslint-scope": "^7.2.2", - "eslint-visitor-keys": "^3.4.3", - "espree": "^9.6.1", - "esquery": "^1.4.2", + "eslint-scope": "^8.3.0", + "eslint-visitor-keys": "^4.2.0", + "espree": "^10.3.0", + "esquery": "^1.5.0", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", - "file-entry-cache": "^6.0.1", + "file-entry-cache": "^8.0.0", "find-up": "^5.0.0", "glob-parent": "^6.0.2", - "globals": "^13.19.0", - "graphemer": "^1.4.0", "ignore": "^5.2.0", "imurmurhash": "^0.1.4", "is-glob": "^4.0.0", - "is-path-inside": "^3.0.3", - "js-yaml": "^4.1.0", "json-stable-stringify-without-jsonify": "^1.0.1", - "levn": "^0.4.1", "lodash.merge": "^4.6.2", "minimatch": "^3.1.2", "natural-compare": "^1.4.0", - "optionator": "^0.9.3", - "strip-ansi": "^6.0.1", - "text-table": "^0.2.0" + "optionator": "^0.9.3" }, "bin": { "eslint": "bin/eslint.js" }, "engines": { - "node": "^12.22.0 || ^14.17.0 || >=16.0.0" - }, - "funding": { - "url": "https://opencollective.com/eslint" - } - }, - "node_modules/eslint-config-airbnb": { - "version": "19.0.4", - "resolved": "https://registry.npmjs.org/eslint-config-airbnb/-/eslint-config-airbnb-19.0.4.tgz", - "integrity": "sha512-T75QYQVQX57jiNgpF9r1KegMICE94VYwoFQyMGhrvc+lB8YF2E/M/PYDaQe1AJcWaEgqLE+ErXV1Og/+6Vyzew==", - "license": "MIT", - "dependencies": { - "eslint-config-airbnb-base": "^15.0.0", - "object.assign": "^4.1.2", - "object.entries": "^1.1.5" - }, - "engines": { - "node": "^10.12.0 || ^12.22.0 || ^14.17.0 || >=16.0.0" - }, - "peerDependencies": { - "eslint": "^7.32.0 || ^8.2.0", - "eslint-plugin-import": "^2.25.3", - "eslint-plugin-jsx-a11y": "^6.5.1", - "eslint-plugin-react": "^7.28.0", - "eslint-plugin-react-hooks": "^4.3.0" - } - }, - "node_modules/eslint-config-airbnb-base": { - "version": "15.0.0", - "resolved": "https://registry.npmjs.org/eslint-config-airbnb-base/-/eslint-config-airbnb-base-15.0.0.tgz", - "integrity": "sha512-xaX3z4ZZIcFLvh2oUNvcX5oEofXda7giYmuplVxoOg5A7EXJMrUyqRgR+mhDhPK8LZ4PttFOBvCYDbX3sUoUig==", - "license": "MIT", - "dependencies": { - "confusing-browser-globals": "^1.0.10", - "object.assign": "^4.1.2", - "object.entries": "^1.1.5", - "semver": "^6.3.0" - }, - "engines": { - "node": "^10.12.0 || >=12.0.0" - }, - "peerDependencies": { - "eslint": "^7.32.0 || ^8.2.0", - "eslint-plugin-import": "^2.25.2" - } - }, - "node_modules/eslint-config-xo": { - "version": "0.44.0", - "resolved": 
"https://registry.npmjs.org/eslint-config-xo/-/eslint-config-xo-0.44.0.tgz", - "integrity": "sha512-YG4gdaor0mJJi8UBeRJqDPO42MedTWYMaUyucF5bhm2pi/HS98JIxfFQmTLuyj6hGpQlAazNfyVnn7JuDn+Sew==", - "license": "MIT", - "dependencies": { - "confusing-browser-globals": "1.0.11" - }, - "engines": { - "node": ">=18" + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" }, "funding": { - "url": "https://github.com/sponsors/sindresorhus" + "url": "https://eslint.org/donate" }, "peerDependencies": { - "eslint": ">=8.56.0" - } - }, - "node_modules/eslint-config-xo-space": { - "version": "0.35.0", - "resolved": "https://registry.npmjs.org/eslint-config-xo-space/-/eslint-config-xo-space-0.35.0.tgz", - "integrity": "sha512-+79iVcoLi3PvGcjqYDpSPzbLfqYpNcMlhsCBRsnmDoHAn4npJG6YxmHpelQKpXM7v/EeZTUKb4e1xotWlei8KA==", - "license": "MIT", - "dependencies": { - "eslint-config-xo": "^0.44.0" + "jiti": "*" }, - "engines": { - "node": ">=12" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - }, - "peerDependencies": { - "eslint": ">=8.56.0" + "peerDependenciesMeta": { + "jiti": { + "optional": true + } } }, "node_modules/eslint-import-resolver-node": { @@ -4715,124 +4578,6 @@ "ms": "^2.1.1" } }, - "node_modules/eslint-plugin-import/node_modules/doctrine": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/doctrine/-/doctrine-2.1.0.tgz", - "integrity": "sha512-35mSku4ZXK0vfCuHEDAwt55dg2jNajHZ1odvF+8SSr82EsZY4QmXfuWso8oEd8zRhVObSN18aM0CjSdoBX7zIw==", - "license": "Apache-2.0", - "dependencies": { - "esutils": "^2.0.2" - }, - "engines": { - "node": ">=0.10.0" - } - }, - "node_modules/eslint-plugin-jsx-a11y": { - "version": "6.10.2", - "resolved": "https://registry.npmjs.org/eslint-plugin-jsx-a11y/-/eslint-plugin-jsx-a11y-6.10.2.tgz", - "integrity": "sha512-scB3nz4WmG75pV8+3eRUQOHZlNSUhFNq37xnpgRkCCELU3XMvXAxLk1eqWWyE22Ki4Q01Fnsw9BA3cJHDPgn2Q==", - "license": "MIT", - "dependencies": { - "aria-query": "^5.3.2", - "array-includes": "^3.1.8", - "array.prototype.flatmap": "^1.3.2", - "ast-types-flow": "^0.0.8", - "axe-core": "^4.10.0", - "axobject-query": "^4.1.0", - "damerau-levenshtein": "^1.0.8", - "emoji-regex": "^9.2.2", - "hasown": "^2.0.2", - "jsx-ast-utils": "^3.3.5", - "language-tags": "^1.0.9", - "minimatch": "^3.1.2", - "object.fromentries": "^2.0.8", - "safe-regex-test": "^1.0.3", - "string.prototype.includes": "^2.0.1" - }, - "engines": { - "node": ">=4.0" - }, - "peerDependencies": { - "eslint": "^3 || ^4 || ^5 || ^6 || ^7 || ^8 || ^9" - } - }, - "node_modules/eslint-plugin-react": { - "version": "7.37.4", - "resolved": "https://registry.npmjs.org/eslint-plugin-react/-/eslint-plugin-react-7.37.4.tgz", - "integrity": "sha512-BGP0jRmfYyvOyvMoRX/uoUeW+GqNj9y16bPQzqAHf3AYII/tDs+jMN0dBVkl88/OZwNGwrVFxE7riHsXVfy/LQ==", - "license": "MIT", - "peer": true, - "dependencies": { - "array-includes": "^3.1.8", - "array.prototype.findlast": "^1.2.5", - "array.prototype.flatmap": "^1.3.3", - "array.prototype.tosorted": "^1.1.4", - "doctrine": "^2.1.0", - "es-iterator-helpers": "^1.2.1", - "estraverse": "^5.3.0", - "hasown": "^2.0.2", - "jsx-ast-utils": "^2.4.1 || ^3.0.0", - "minimatch": "^3.1.2", - "object.entries": "^1.1.8", - "object.fromentries": "^2.0.8", - "object.values": "^1.2.1", - "prop-types": "^15.8.1", - "resolve": "^2.0.0-next.5", - "semver": "^6.3.1", - "string.prototype.matchall": "^4.0.12", - "string.prototype.repeat": "^1.0.0" - }, - "engines": { - "node": ">=4" - }, - "peerDependencies": { - "eslint": "^3 || ^4 || ^5 || ^6 || ^7 || ^8 || ^9.7" - } - }, - 
"node_modules/eslint-plugin-react-hooks": { - "version": "4.6.2", - "resolved": "https://registry.npmjs.org/eslint-plugin-react-hooks/-/eslint-plugin-react-hooks-4.6.2.tgz", - "integrity": "sha512-QzliNJq4GinDBcD8gPB5v0wh6g8q3SUi6EFF0x8N/BL9PoVs0atuGc47ozMRyOWAKdwaZ5OnbOEa3WR+dSGKuQ==", - "license": "MIT", - "peer": true, - "engines": { - "node": ">=10" - }, - "peerDependencies": { - "eslint": "^3.0.0 || ^4.0.0 || ^5.0.0 || ^6.0.0 || ^7.0.0 || ^8.0.0-0" - } - }, - "node_modules/eslint-plugin-react/node_modules/doctrine": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/doctrine/-/doctrine-2.1.0.tgz", - "integrity": "sha512-35mSku4ZXK0vfCuHEDAwt55dg2jNajHZ1odvF+8SSr82EsZY4QmXfuWso8oEd8zRhVObSN18aM0CjSdoBX7zIw==", - "license": "Apache-2.0", - "peer": true, - "dependencies": { - "esutils": "^2.0.2" - }, - "engines": { - "node": ">=0.10.0" - } - }, - "node_modules/eslint-plugin-react/node_modules/resolve": { - "version": "2.0.0-next.5", - "resolved": "https://registry.npmjs.org/resolve/-/resolve-2.0.0-next.5.tgz", - "integrity": "sha512-U7WjGVG9sH8tvjW5SmGbQuui75FiyjAX72HX15DwBBwF9dNiQZRQAg9nnPhYy+TUnE0+VcrttuvNI8oSxZcocA==", - "license": "MIT", - "peer": true, - "dependencies": { - "is-core-module": "^2.13.0", - "path-parse": "^1.0.7", - "supports-preserve-symlinks-flag": "^1.0.0" - }, - "bin": { - "resolve": "bin/resolve" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, "node_modules/eslint-scope": { "version": "5.1.1", "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-5.1.1.tgz", @@ -4846,38 +4591,20 @@ "node": ">=8.0.0" } }, - "node_modules/eslint-scope/node_modules/estraverse": { - "version": "4.3.0", - "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-4.3.0.tgz", - "integrity": "sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==", - "license": "BSD-2-Clause", - "engines": { - "node": ">=4.0" - } - }, - "node_modules/eslint-visitor-keys": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-2.1.0.tgz", - "integrity": "sha512-0rSmRBzXgDzIsD6mGdJgevzgezI534Cer5L/vyMX0kHzT/jiB43jRhd9YUlMGYLQy2zprNmoT8qasCGtY+QaKw==", - "license": "Apache-2.0", - "engines": { - "node": ">=10" - } - }, "node_modules/eslint-webpack-plugin": { - "version": "4.2.0", - "resolved": "https://registry.npmjs.org/eslint-webpack-plugin/-/eslint-webpack-plugin-4.2.0.tgz", - "integrity": "sha512-rsfpFQ01AWQbqtjgPRr2usVRxhWDuG0YDYcG8DJOteD3EFnpeuYuOwk0PQiN7PRBTqS6ElNdtPZPggj8If9WnA==", + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/eslint-webpack-plugin/-/eslint-webpack-plugin-5.0.1.tgz", + "integrity": "sha512-Ur100Vi+z0uP7j4Z8Ccah0pXmNHhl3f7P2hCYZj3mZCOSc33G5c1R/vZ4KCapwWikPgRyD4dkangx6JW3KaVFQ==", "license": "MIT", "dependencies": { - "@types/eslint": "^8.56.10", + "@types/eslint": "^9.6.1", "jest-worker": "^29.7.0", - "micromatch": "^4.0.5", + "micromatch": "^4.0.8", "normalize-path": "^3.0.0", - "schema-utils": "^4.2.0" + "schema-utils": "^4.3.0" }, "engines": { - "node": ">= 14.15.0" + "node": ">= 18.12.0" }, "funding": { "type": "opencollective", @@ -4889,90 +4616,71 @@ } }, "node_modules/eslint/node_modules/eslint-scope": { - "version": "7.2.2", - "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-7.2.2.tgz", - "integrity": "sha512-dOt21O7lTMhDM+X9mB4GX+DZrZtCUJPL/wlcTqxyrx5IvO0IYtILdtrQGQp+8n5S0gwSVmOf9NQrjMOgfQZlIg==", + "version": "8.3.0", + "resolved": 
"https://registry.npmjs.org/eslint-scope/-/eslint-scope-8.3.0.tgz", + "integrity": "sha512-pUNxi75F8MJ/GdeKtVLSbYg4ZI34J6C0C7sbL4YOp2exGwen7ZsuBqKzUhXd0qMQ362yET3z+uPwKeg/0C2XCQ==", "license": "BSD-2-Clause", "dependencies": { "esrecurse": "^4.3.0", "estraverse": "^5.2.0" }, "engines": { - "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" }, "funding": { "url": "https://opencollective.com/eslint" } }, "node_modules/eslint/node_modules/eslint-visitor-keys": { - "version": "3.4.3", - "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-3.4.3.tgz", - "integrity": "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag==", + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-4.2.0.tgz", + "integrity": "sha512-UyLnSehNt62FFhSwjZlHmeokpRK59rcz29j+F1/aDgbkbRTk7wIc9XzdoasMUbRNKDM0qQt/+BJ4BrpFeABemw==", "license": "Apache-2.0", "engines": { - "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" }, "funding": { "url": "https://opencollective.com/eslint" } }, - "node_modules/eslint/node_modules/globals": { - "version": "13.24.0", - "resolved": "https://registry.npmjs.org/globals/-/globals-13.24.0.tgz", - "integrity": "sha512-AhO5QUcj8llrbG09iWhPU2B204J1xnPeL8kQmVorSsy+Sjj1sk8gIyh6cUocGmH4L0UuhAJy+hJMRA4mgA4mFQ==", - "license": "MIT", - "dependencies": { - "type-fest": "^0.20.2" - }, + "node_modules/eslint/node_modules/estraverse": { + "version": "5.3.0", + "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz", + "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", + "license": "BSD-2-Clause", "engines": { - "node": ">=8" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" + "node": ">=4.0" } }, "node_modules/espree": { - "version": "9.6.1", - "resolved": "https://registry.npmjs.org/espree/-/espree-9.6.1.tgz", - "integrity": "sha512-oruZaFkjorTpF32kDSI5/75ViwGeZginGGy2NoOSg3Q9bnwlnmDm4HLnkl0RE3n+njDXR037aY1+x58Z/zFdwQ==", + "version": "10.3.0", + "resolved": "https://registry.npmjs.org/espree/-/espree-10.3.0.tgz", + "integrity": "sha512-0QYC8b24HWY8zjRnDTL6RiHfDbAWn63qb4LMj1Z4b076A4une81+z03Kg7l7mn/48PUTqoLptSXez8oknU8Clg==", "license": "BSD-2-Clause", "dependencies": { - "acorn": "^8.9.0", + "acorn": "^8.14.0", "acorn-jsx": "^5.3.2", - "eslint-visitor-keys": "^3.4.1" + "eslint-visitor-keys": "^4.2.0" }, "engines": { - "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" }, "funding": { "url": "https://opencollective.com/eslint" } }, "node_modules/espree/node_modules/eslint-visitor-keys": { - "version": "3.4.3", - "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-3.4.3.tgz", - "integrity": "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag==", + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-4.2.0.tgz", + "integrity": "sha512-UyLnSehNt62FFhSwjZlHmeokpRK59rcz29j+F1/aDgbkbRTk7wIc9XzdoasMUbRNKDM0qQt/+BJ4BrpFeABemw==", "license": "Apache-2.0", "engines": { - "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" }, "funding": { "url": "https://opencollective.com/eslint" } }, - "node_modules/esprima": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/esprima/-/esprima-4.0.1.tgz", - "integrity": 
"sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==", - "license": "BSD-2-Clause", - "bin": { - "esparse": "bin/esparse.js", - "esvalidate": "bin/esvalidate.js" - }, - "engines": { - "node": ">=4" - } - }, "node_modules/esquery": { "version": "1.6.0", "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.6.0.tgz", @@ -4985,6 +4693,15 @@ "node": ">=0.10" } }, + "node_modules/esquery/node_modules/estraverse": { + "version": "5.3.0", + "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz", + "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", + "license": "BSD-2-Clause", + "engines": { + "node": ">=4.0" + } + }, "node_modules/esrecurse": { "version": "4.3.0", "resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz", @@ -4997,7 +4714,7 @@ "node": ">=4.0" } }, - "node_modules/estraverse": { + "node_modules/esrecurse/node_modules/estraverse": { "version": "5.3.0", "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz", "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", @@ -5006,6 +4723,15 @@ "node": ">=4.0" } }, + "node_modules/estraverse": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-4.3.0.tgz", + "integrity": "sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==", + "license": "BSD-2-Clause", + "engines": { + "node": ">=4.0" + } + }, "node_modules/esutils": { "version": "2.0.3", "resolved": "https://registry.npmjs.org/esutils/-/esutils-2.0.3.tgz", @@ -5100,18 +4826,6 @@ "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", "license": "MIT" }, - "node_modules/extend-shallow": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", - "integrity": "sha512-zCnTtlxNoAiDc3gqY2aYAWFx7XWWiasuF2K8Me5WbN8otHKTUKBwjPtNpRs/rbUZm7KxWAaNj7P1a/p52GbVug==", - "license": "MIT", - "dependencies": { - "is-extendable": "^0.1.0" - }, - "engines": { - "node": ">=0.10.0" - } - }, "node_modules/fast-deep-equal": { "version": "3.1.3", "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", @@ -5184,9 +4898,9 @@ } }, "node_modules/fastq": { - "version": "1.19.0", - "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.19.0.tgz", - "integrity": "sha512-7SFSRCNjBQIZH/xZR3iy5iQYR8aGBE0h3VG6/cwlbrpdciNYBMotQav8c1XI3HjHH+NikUpP53nPdlZSdWmFzA==", + "version": "1.19.1", + "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.19.1.tgz", + "integrity": "sha512-GwLTyxkCXjXbxqIhTsMI2Nui8huMPtnxg7krajPJAjnEG/iiOS7i+zCtWGZR9G0NBKbXKh6X9m9UIsYX/N6vvQ==", "license": "ISC", "dependencies": { "reusify": "^1.0.4" @@ -5205,24 +4919,15 @@ } }, "node_modules/file-entry-cache": { - "version": "6.0.1", - "resolved": "https://registry.npmjs.org/file-entry-cache/-/file-entry-cache-6.0.1.tgz", - "integrity": "sha512-7Gps/XWymbLk2QLYK4NzpMOrYjMhdIxXuIvy2QBsLE6ljuodKvdkWs/cpyJJ3CVIVpH0Oi1Hvg1ovbMzLdFBBg==", + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/file-entry-cache/-/file-entry-cache-8.0.0.tgz", + "integrity": "sha512-XXTUwCvisa5oacNGRP9SfNtYBNAMi+RPwBFmblZEF7N7swHYQS6/Zfk7SRwx4D5j3CH211YNRco1DEMNVfZCnQ==", "license": "MIT", "dependencies": { - "flat-cache": "^3.0.4" + "flat-cache": "^4.0.0" }, "engines": { - "node": "^10.12.0 || >=12.0.0" - } - }, - 
"node_modules/filesize": { - "version": "8.0.7", - "resolved": "https://registry.npmjs.org/filesize/-/filesize-8.0.7.tgz", - "integrity": "sha512-pjmC+bkIF8XI7fWaH8KxHcZL3DPybs1roSKP4rKDvy20tAWwIObE4+JIseG2byfGKhud5ZnM4YSGKBz7Sh0ndQ==", - "license": "BSD-3-Clause", - "engines": { - "node": ">= 0.4.0" + "node": ">=16.0.0" } }, "node_modules/fill-range": { @@ -5270,22 +4975,6 @@ "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", "license": "MIT" }, - "node_modules/find-cache-dir": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/find-cache-dir/-/find-cache-dir-4.0.0.tgz", - "integrity": "sha512-9ZonPT4ZAK4a+1pUPVPZJapbi7O5qbbJPdYw/NOQWZZbVLdDTYM3A4R9z/DpAM08IDaFGsvPgiGZ82WEwUDWjg==", - "license": "MIT", - "dependencies": { - "common-path-prefix": "^3.0.0", - "pkg-dir": "^7.0.0" - }, - "engines": { - "node": ">=14.16" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, "node_modules/find-up": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/find-up/-/find-up-5.0.0.tgz", @@ -5312,23 +5001,22 @@ } }, "node_modules/flat-cache": { - "version": "3.2.0", - "resolved": "https://registry.npmjs.org/flat-cache/-/flat-cache-3.2.0.tgz", - "integrity": "sha512-CYcENa+FtcUKLmhhqyctpclsq7QF38pKjZHsGNiSQF5r4FtoKDWabFDl3hzaEQMvT1LHEysw5twgLvpYYb4vbw==", + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/flat-cache/-/flat-cache-4.0.1.tgz", + "integrity": "sha512-f7ccFPK3SXFHpx15UIGyRJ/FJQctuKZ0zVuN3frBo4HnK3cay9VEW0R6yPYFHC0AgqhukPzKjq22t5DmAyqGyw==", "license": "MIT", "dependencies": { "flatted": "^3.2.9", - "keyv": "^4.5.3", - "rimraf": "^3.0.2" + "keyv": "^4.5.4" }, "engines": { - "node": "^10.12.0 || >=12.0.0" + "node": ">=16" } }, "node_modules/flatted": { - "version": "3.3.2", - "resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.2.tgz", - "integrity": "sha512-AiwGJM8YcNOaobumgtng+6NHuOqC3A7MixFeDafM3X9cIUM+xUXoS5Vfgf+OihAYe20fxqNM9yPBXJzRtZ/4eA==", + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.3.tgz", + "integrity": "sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==", "license": "ISC" }, "node_modules/follow-redirects": { @@ -5352,9 +5040,9 @@ } }, "node_modules/for-each": { - "version": "0.3.4", - "resolved": "https://registry.npmjs.org/for-each/-/for-each-0.3.4.tgz", - "integrity": "sha512-kKaIINnFpzW6ffJNDjjyjrk21BkDx38c0xa/klsT8VzLCaMEefv4ZTacrcVR4DmgTeBra++jMDAfS/tS799YDw==", + "version": "0.3.5", + "resolved": "https://registry.npmjs.org/for-each/-/for-each-0.3.5.tgz", + "integrity": "sha512-dKx12eRCVIzqCxFGplyFKJMPvLEWgmNtUrpTiJIR5u97zEhRG8ySrtboPHZXx7daLxQVrl643cTzbab2tkQjxg==", "license": "MIT", "dependencies": { "is-callable": "^1.2.7" @@ -5366,96 +5054,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/foreach": { - "version": "2.0.6", - "resolved": "https://registry.npmjs.org/foreach/-/foreach-2.0.6.tgz", - "integrity": "sha512-k6GAGDyqLe9JaebCsFCoudPPWfihKu8pylYXRlqP1J7ms39iPoTtk2fviNglIeQEwdh0bQeKJ01ZPyuyQvKzwg==", - "license": "MIT" - }, - "node_modules/fork-ts-checker-webpack-plugin": { - "version": "6.5.3", - "resolved": "https://registry.npmjs.org/fork-ts-checker-webpack-plugin/-/fork-ts-checker-webpack-plugin-6.5.3.tgz", - "integrity": "sha512-SbH/l9ikmMWycd5puHJKTkZJKddF4iRLyW3DeZ08HTI7NGyLS38MXd/KGgeWumQO7YNQbW2u/NtPT2YowbPaGQ==", - "license": "MIT", - "dependencies": { - "@babel/code-frame": "^7.8.3", - "@types/json-schema": "^7.0.5", - 
"chalk": "^4.1.0", - "chokidar": "^3.4.2", - "cosmiconfig": "^6.0.0", - "deepmerge": "^4.2.2", - "fs-extra": "^9.0.0", - "glob": "^7.1.6", - "memfs": "^3.1.2", - "minimatch": "^3.0.4", - "schema-utils": "2.7.0", - "semver": "^7.3.2", - "tapable": "^1.0.0" - }, - "engines": { - "node": ">=10", - "yarn": ">=1.0.0" - }, - "peerDependencies": { - "eslint": ">= 6", - "typescript": ">= 2.7", - "vue-template-compiler": "*", - "webpack": ">= 4" - }, - "peerDependenciesMeta": { - "eslint": { - "optional": true - }, - "vue-template-compiler": { - "optional": true - } - } - }, - "node_modules/fork-ts-checker-webpack-plugin/node_modules/fs-extra": { - "version": "9.1.0", - "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.1.0.tgz", - "integrity": "sha512-hcg3ZmepS30/7BSFqRvoo3DOMQu7IjqxO5nCDt+zM9XWjb33Wg7ziNT+Qvqbuc3+gWpzO02JubVyk2G4Zvo1OQ==", - "license": "MIT", - "dependencies": { - "at-least-node": "^1.0.0", - "graceful-fs": "^4.2.0", - "jsonfile": "^6.0.1", - "universalify": "^2.0.0" - }, - "engines": { - "node": ">=10" - } - }, - "node_modules/fork-ts-checker-webpack-plugin/node_modules/schema-utils": { - "version": "2.7.0", - "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-2.7.0.tgz", - "integrity": "sha512-0ilKFI6QQF5nxDZLFn2dMjvc4hjg/Wkg7rHd3jK6/A4a1Hl9VFdQWvgB1UMGoU94pad1P/8N7fMcEnLnSiju8A==", - "license": "MIT", - "dependencies": { - "@types/json-schema": "^7.0.4", - "ajv": "^6.12.2", - "ajv-keywords": "^3.4.1" - }, - "engines": { - "node": ">= 8.9.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/webpack" - } - }, - "node_modules/fork-ts-checker-webpack-plugin/node_modules/semver": { - "version": "7.7.1", - "resolved": "https://registry.npmjs.org/semver/-/semver-7.7.1.tgz", - "integrity": "sha512-hlq8tAfn0m/61p4BVRcPzIGr6LKiMwo4VM6dGi6pt4qcRkmNzTcWq6eCEjEh+qXjkMDvPlOFFSGwQjoEa6gyMA==", - "license": "ISC", - "bin": { - "semver": "bin/semver.js" - }, - "engines": { - "node": ">=10" - } - }, "node_modules/forwarded": { "version": "0.2.0", "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", @@ -5503,18 +5101,6 @@ "node": ">=14.14" } }, - "node_modules/fs-monkey": { - "version": "1.0.6", - "resolved": "https://registry.npmjs.org/fs-monkey/-/fs-monkey-1.0.6.tgz", - "integrity": "sha512-b1FMfwetIKymC0eioW7mTywihSQE4oLzQn1dB6rZB5fx/3NpNEdAWeCSMB+60/AeT0TCXsxzAlcYVEFCTAksWg==", - "license": "Unlicense" - }, - "node_modules/fs.realpath": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz", - "integrity": "sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw==", - "license": "ISC" - }, "node_modules/fsevents": { "version": "2.3.3", "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz", @@ -5587,17 +5173,17 @@ } }, "node_modules/get-intrinsic": { - "version": "1.2.7", - "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.2.7.tgz", - "integrity": "sha512-VW6Pxhsrk0KAOqs3WEd0klDiF/+V7gQOpAvY1jVU/LHmaD/kQO4523aiJuikX/QAKYiW6x8Jh+RJej1almdtCA==", + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz", + "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", "license": "MIT", "dependencies": { - "call-bind-apply-helpers": "^1.0.1", + "call-bind-apply-helpers": "^1.0.2", "es-define-property": "^1.0.1", "es-errors": "^1.3.0", - "es-object-atoms": "^1.0.0", + "es-object-atoms": "^1.1.1", 
"function-bind": "^1.1.2", - "get-proto": "^1.0.0", + "get-proto": "^1.0.1", "gopd": "^1.2.0", "has-symbols": "^1.1.0", "hasown": "^2.0.2", @@ -5623,19 +5209,6 @@ "node": ">= 0.4" } }, - "node_modules/get-stdin": { - "version": "9.0.0", - "resolved": "https://registry.npmjs.org/get-stdin/-/get-stdin-9.0.0.tgz", - "integrity": "sha512-dVKBjfWisLAicarI2Sf+JuBE/DghV4UzNAVe9yhEJuzeREd3JhOTE9cUaJTeSa77fsbQUK3pcOpJfM59+VKZaA==", - "dev": true, - "license": "MIT", - "engines": { - "node": ">=12" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, "node_modules/get-symbol-description": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/get-symbol-description/-/get-symbol-description-1.1.0.tgz", @@ -5653,27 +5226,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/glob": { - "version": "7.2.3", - "resolved": "https://registry.npmjs.org/glob/-/glob-7.2.3.tgz", - "integrity": "sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==", - "deprecated": "Glob versions prior to v9 are no longer supported", - "license": "ISC", - "dependencies": { - "fs.realpath": "^1.0.0", - "inflight": "^1.0.4", - "inherits": "2", - "minimatch": "^3.1.1", - "once": "^1.3.0", - "path-is-absolute": "^1.0.0" - }, - "engines": { - "node": "*" - }, - "funding": { - "url": "https://github.com/sponsors/isaacs" - } - }, "node_modules/glob-parent": { "version": "6.0.2", "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz", @@ -5692,70 +5244,16 @@ "integrity": "sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw==", "license": "BSD-2-Clause" }, - "node_modules/global": { - "version": "4.4.0", - "resolved": "https://registry.npmjs.org/global/-/global-4.4.0.tgz", - "integrity": "sha512-wv/LAoHdRE3BeTGz53FAamhGlPLhlssK45usmGFThIi4XqnBmjKQ16u+RNbP7WvigRZDxUsM0J3gcQ5yicaL0w==", - "license": "MIT", - "dependencies": { - "min-document": "^2.19.0", - "process": "^0.11.10" - } - }, - "node_modules/global-modules": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/global-modules/-/global-modules-2.0.0.tgz", - "integrity": "sha512-NGbfmJBp9x8IxyJSd1P+otYK8vonoJactOogrVfFRIAEY1ukil8RSKDz2Yo7wh1oihl51l/r6W4epkeKJHqL8A==", - "license": "MIT", - "dependencies": { - "global-prefix": "^3.0.0" - }, - "engines": { - "node": ">=6" - } - }, - "node_modules/global-prefix": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/global-prefix/-/global-prefix-3.0.0.tgz", - "integrity": "sha512-awConJSVCHVGND6x3tmMaKcQvwXLhjdkmomy2W+Goaui8YPgYgXJZewhg3fWC+DlfqqQuWg8AwqjGTD2nAPVWg==", - "license": "MIT", - "dependencies": { - "ini": "^1.3.5", - "kind-of": "^6.0.2", - "which": "^1.3.1" - }, - "engines": { - "node": ">=6" - } - }, - "node_modules/global-prefix/node_modules/kind-of": { - "version": "6.0.3", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz", - "integrity": "sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw==", - "license": "MIT", - "engines": { - "node": ">=0.10.0" - } - }, - "node_modules/global-prefix/node_modules/which": { - "version": "1.3.1", - "resolved": "https://registry.npmjs.org/which/-/which-1.3.1.tgz", - "integrity": "sha512-HxJdYWq1MTIQbJ3nw0cqssHoTNU267KlrDuGZ1WYlxDStUtKUhOaJmh112/TZmHxxUfuJqPXSOm7tDyas0OSIQ==", - "license": "ISC", - "dependencies": { - "isexe": "^2.0.0" - }, - "bin": { - "which": "bin/which" - } - }, "node_modules/globals": { - "version": "11.12.0", - "resolved": 
"https://registry.npmjs.org/globals/-/globals-11.12.0.tgz", - "integrity": "sha512-WOBp/EEGUiIsJSp7wcv/y6MO+lV9UoncWqxuFfm8eBwzWNgyfBd6Gz+IeKQ9jCmyhoH99g15M3T+QaVHFjizVA==", + "version": "16.0.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-16.0.0.tgz", + "integrity": "sha512-iInW14XItCXET01CQFqudPOWP2jYMl7T+QRQT+UNcR/iQncN/F0UNpgd76iFkBPgNQb4+X3LV9tLJYzwh+Gl3A==", "license": "MIT", "engines": { - "node": ">=4" + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/globalthis": { @@ -5774,122 +5272,33 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/globby": { - "version": "14.1.0", - "resolved": "https://registry.npmjs.org/globby/-/globby-14.1.0.tgz", - "integrity": "sha512-0Ia46fDOaT7k4og1PDW4YbodWWr3scS2vAr2lTbsplOt2WkKp0vQbkI9wKis/T5LV/dqPjO3bpS/z6GTJB82LA==", - "dev": true, - "license": "MIT", - "dependencies": { - "@sindresorhus/merge-streams": "^2.1.0", - "fast-glob": "^3.3.3", - "ignore": "^7.0.3", - "path-type": "^6.0.0", - "slash": "^5.1.0", - "unicorn-magic": "^0.3.0" - }, - "engines": { - "node": ">=18" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, - "node_modules/globby/node_modules/ignore": { - "version": "7.0.3", - "resolved": "https://registry.npmjs.org/ignore/-/ignore-7.0.3.tgz", - "integrity": "sha512-bAH5jbK/F3T3Jls4I0SO1hmPR0dKU0a7+SY6n1yzRtG54FLO8d6w/nxLFX2Nb7dBu6cCWXPaAME6cYqFUMmuCA==", - "dev": true, - "license": "MIT", - "engines": { - "node": ">= 4" - } - }, "node_modules/good-listener": { "version": "1.2.2", "resolved": "https://registry.npmjs.org/good-listener/-/good-listener-1.2.2.tgz", - "integrity": "sha512-goW1b+d9q/HIwbVYZzZ6SsTr4IgE+WA44A0GmPIQstuOrgsFcT7VEJ48nmr9GaRtNu0XTKacFLGnBPAM6Afouw==", - "license": "MIT", - "dependencies": { - "delegate": "^3.1.2" - } - }, - "node_modules/gopd": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", - "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", - "license": "MIT", - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, - "node_modules/graceful-fs": { - "version": "4.2.11", - "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz", - "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==", - "license": "ISC" - }, - "node_modules/graphemer": { - "version": "1.4.0", - "resolved": "https://registry.npmjs.org/graphemer/-/graphemer-1.4.0.tgz", - "integrity": "sha512-EtKwoO6kxCL9WO5xipiHTZlSzBm7WLT627TqC/uVRd0HKmq8NXyebnNYxDoBi7wt8eTWrUrKXCOVaFq9x1kgag==", - "license": "MIT" - }, - "node_modules/gray-matter": { - "version": "3.1.1", - "resolved": "https://registry.npmjs.org/gray-matter/-/gray-matter-3.1.1.tgz", - "integrity": "sha512-nZ1qjLmayEv0/wt3sHig7I0s3/sJO0dkAaKYQ5YAOApUtYEOonXSFdWvL1khvnZMTvov4UufkqlFsilPnejEXA==", - "license": "MIT", - "dependencies": { - "extend-shallow": "^2.0.1", - "js-yaml": "^3.10.0", - "kind-of": "^5.0.2", - "strip-bom-string": "^1.0.0" - }, - "engines": { - "node": ">=0.10.0" - } - }, - "node_modules/gray-matter/node_modules/argparse": { - "version": "1.0.10", - "resolved": "https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz", - "integrity": "sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==", - "license": "MIT", - "dependencies": { - "sprintf-js": "~1.0.2" - } - }, - 
"node_modules/gray-matter/node_modules/js-yaml": { - "version": "3.14.1", - "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.14.1.tgz", - "integrity": "sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g==", + "integrity": "sha512-goW1b+d9q/HIwbVYZzZ6SsTr4IgE+WA44A0GmPIQstuOrgsFcT7VEJ48nmr9GaRtNu0XTKacFLGnBPAM6Afouw==", "license": "MIT", "dependencies": { - "argparse": "^1.0.7", - "esprima": "^4.0.0" - }, - "bin": { - "js-yaml": "bin/js-yaml.js" + "delegate": "^3.1.2" } }, - "node_modules/gzip-size": { - "version": "6.0.0", - "resolved": "https://registry.npmjs.org/gzip-size/-/gzip-size-6.0.0.tgz", - "integrity": "sha512-ax7ZYomf6jqPTQ4+XCpUGyXKHk5WweS+e05MBO4/y3WJ5RkmPXNKvX+bx1behVILVwr6JSQvZAku021CHPXG3Q==", + "node_modules/gopd": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", + "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", "license": "MIT", - "dependencies": { - "duplexer": "^0.1.2" - }, "engines": { - "node": ">=10" + "node": ">= 0.4" }, "funding": { - "url": "https://github.com/sponsors/sindresorhus" + "url": "https://github.com/sponsors/ljharb" } }, + "node_modules/graceful-fs": { + "version": "4.2.11", + "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz", + "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==", + "license": "ISC" + }, "node_modules/handle-thing": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/handle-thing/-/handle-thing-2.0.1.tgz", @@ -5987,6 +5396,7 @@ "version": "2.8.9", "resolved": "https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.8.9.tgz", "integrity": "sha512-mxIDAb9Lsm6DoOJ7xH+5+X4y1LU/4Hi50L9C5sIswK3JzULS4bwk1FvjdBgvYR4bzT4tuUQiC15FE2f5HbLvYw==", + "dev": true, "license": "ISC" }, "node_modules/hpack.js": { @@ -6060,9 +5470,9 @@ } }, "node_modules/http-parser-js": { - "version": "0.5.9", - "resolved": "https://registry.npmjs.org/http-parser-js/-/http-parser-js-0.5.9.tgz", - "integrity": "sha512-n1XsPy3rXVxlqxVioEWdC+0+M+SQw0DpJynwtOPo1X+ZlvdzTLtDBIJJlDQTnwZIFJrZSzSGmIOUdP8tu+SgLw==", + "version": "0.5.10", + "resolved": "https://registry.npmjs.org/http-parser-js/-/http-parser-js-0.5.10.tgz", + "integrity": "sha512-Pysuw9XpUq5dVc/2SMHpuTY01RFl8fttgcyunjL7eEMhGM3cI4eOmiCycJDVCo/7O7ClfQD3SaI6ftDzqOXYMA==", "license": "MIT" }, "node_modules/http-proxy": { @@ -6080,9 +5490,9 @@ } }, "node_modules/http-proxy-middleware": { - "version": "2.0.7", - "resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.7.tgz", - "integrity": "sha512-fgVY8AV7qU7z/MmXJ/rxwbrtQH4jBQ9m7kp3llF0liB7glmFeVZFBepQb32T3y8n8k2+AEYuMPCpinYW+/CuRA==", + "version": "2.0.9", + "resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.9.tgz", + "integrity": "sha512-c1IyJYLYppU574+YI7R4QyX2ystMtVXZwIdzazUIPIJsHuWNd+mho2j+bKoHftndicGj9yh+xjd+l0yj7VeT1Q==", "license": "MIT", "dependencies": { "@types/http-proxy": "^1.17.8", @@ -6103,89 +5513,6 @@ } } }, - "node_modules/hugo-algolia": { - "version": "1.2.14", - "resolved": "https://registry.npmjs.org/hugo-algolia/-/hugo-algolia-1.2.14.tgz", - "integrity": "sha512-VHDnKmvWZRQ/MGgWFFDlEGO+8arfL4X3kP6xWCY637/GHuc9Y1dlZ1F5GrFa+LAJ/5JiFo0e8E002SKXVv9CNQ==", - "license": "ISC", - "dependencies": { - "algoliasearch": "^3.24.1", - "bytes": "^3.0.0", - "commander": "^2.11.0", - "glob": "^7.1.2", - "gray-matter": "^3.0.2", - 
"lodash": "^4.17.11", - "pos": "^0.4.2", - "remove-markdown": "^0.2.0", - "stopword": "^0.1.8", - "striptags": "^3.0.1", - "to-snake-case": "^1.0.0", - "toml": "^2.3.2", - "truncate-utf8-bytes": "^1.0.2" - }, - "bin": { - "hugo-algolia": "bin/index.js" - } - }, - "node_modules/hugo-algolia/node_modules/algoliasearch": { - "version": "3.35.1", - "resolved": "https://registry.npmjs.org/algoliasearch/-/algoliasearch-3.35.1.tgz", - "integrity": "sha512-K4yKVhaHkXfJ/xcUnil04xiSrB8B8yHZoFEhWNpXg23eiCnqvTZw1tn/SqvdsANlYHLJlKl0qi3I/Q2Sqo7LwQ==", - "license": "MIT", - "dependencies": { - "agentkeepalive": "^2.2.0", - "debug": "^2.6.9", - "envify": "^4.0.0", - "es6-promise": "^4.1.0", - "events": "^1.1.0", - "foreach": "^2.0.5", - "global": "^4.3.2", - "inherits": "^2.0.1", - "isarray": "^2.0.1", - "load-script": "^1.0.0", - "object-keys": "^1.0.11", - "querystring-es3": "^0.2.1", - "reduce": "^1.0.1", - "semver": "^5.1.0", - "tunnel-agent": "^0.6.0" - }, - "engines": { - "node": ">=0.8" - } - }, - "node_modules/hugo-algolia/node_modules/debug": { - "version": "2.6.9", - "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", - "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", - "license": "MIT", - "dependencies": { - "ms": "2.0.0" - } - }, - "node_modules/hugo-algolia/node_modules/events": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/events/-/events-1.1.1.tgz", - "integrity": "sha512-kEcvvCBByWXGnZy6JUlgAp2gBIUjfCAV6P6TgT1/aaQKcmuAEC4OZTV1I4EWQLz2gxZw76atuVyvHhTxvi0Flw==", - "license": "MIT", - "engines": { - "node": ">=0.4.x" - } - }, - "node_modules/hugo-algolia/node_modules/ms": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", - "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", - "license": "MIT" - }, - "node_modules/hugo-algolia/node_modules/semver": { - "version": "5.7.2", - "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.2.tgz", - "integrity": "sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==", - "license": "ISC", - "bin": { - "semver": "bin/semver" - } - }, "node_modules/hyperdyperid": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/hyperdyperid/-/hyperdyperid-1.2.0.tgz", @@ -6216,20 +5543,11 @@ "node": ">= 4" } }, - "node_modules/immer": { - "version": "9.0.21", - "resolved": "https://registry.npmjs.org/immer/-/immer-9.0.21.tgz", - "integrity": "sha512-bc4NBHqOqSfRW7POMkHd51LvClaeMXpm8dx0e8oE2GORbq5aRK7Bxl4FyzVLdGtLmvLKL7BTDBG5ACQm4HWjTA==", - "license": "MIT", - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/immer" - } - }, "node_modules/immutable": { - "version": "5.0.3", - "resolved": "https://registry.npmjs.org/immutable/-/immutable-5.0.3.tgz", - "integrity": "sha512-P8IdPQHq3lA1xVeBRi5VPqUm5HDgKnx0Ru51wZz5mjxHr5n3RWhjIpOFU7ybkUxfB+5IToy+OLaHYDBIWsv+uw==", + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/immutable/-/immutable-5.1.1.tgz", + "integrity": "sha512-3jatXi9ObIsPGr3N5hGw/vWWcTkq6hUYhpQz4k0wLC+owqWi/LiugIw9x0EdNZ2yGedKN/HzePiBvaJRXa0Ujg==", + "dev": true, "license": "MIT" }, "node_modules/import-fresh": { @@ -6267,70 +5585,6 @@ "url": "https://github.com/sponsors/sindresorhus" } }, - "node_modules/import-local/node_modules/find-up": { - "version": "4.1.0", - "resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz", - "integrity": 
"sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==", - "license": "MIT", - "dependencies": { - "locate-path": "^5.0.0", - "path-exists": "^4.0.0" - }, - "engines": { - "node": ">=8" - } - }, - "node_modules/import-local/node_modules/locate-path": { - "version": "5.0.0", - "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz", - "integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==", - "license": "MIT", - "dependencies": { - "p-locate": "^4.1.0" - }, - "engines": { - "node": ">=8" - } - }, - "node_modules/import-local/node_modules/p-limit": { - "version": "2.3.0", - "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz", - "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==", - "license": "MIT", - "dependencies": { - "p-try": "^2.0.0" - }, - "engines": { - "node": ">=6" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, - "node_modules/import-local/node_modules/p-locate": { - "version": "4.1.0", - "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz", - "integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==", - "license": "MIT", - "dependencies": { - "p-limit": "^2.2.0" - }, - "engines": { - "node": ">=8" - } - }, - "node_modules/import-local/node_modules/pkg-dir": { - "version": "4.2.0", - "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz", - "integrity": "sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==", - "license": "MIT", - "dependencies": { - "find-up": "^4.0.0" - }, - "engines": { - "node": ">=8" - } - }, "node_modules/imurmurhash": { "version": "0.1.4", "resolved": "https://registry.npmjs.org/imurmurhash/-/imurmurhash-0.1.4.tgz", @@ -6340,29 +5594,12 @@ "node": ">=0.8.19" } }, - "node_modules/inflight": { - "version": "1.0.6", - "resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz", - "integrity": "sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA==", - "deprecated": "This module is not supported, and leaks memory. Do not use it. 
Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.", - "license": "ISC", - "dependencies": { - "once": "^1.3.0", - "wrappy": "1" - } - }, "node_modules/inherits": { "version": "2.0.4", "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", "license": "ISC" }, - "node_modules/ini": { - "version": "1.3.8", - "resolved": "https://registry.npmjs.org/ini/-/ini-1.3.8.tgz", - "integrity": "sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew==", - "license": "ISC" - }, "node_modules/internal-slot": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/internal-slot/-/internal-slot-1.1.0.tgz", @@ -6416,6 +5653,7 @@ "version": "0.2.1", "resolved": "https://registry.npmjs.org/is-arrayish/-/is-arrayish-0.2.1.tgz", "integrity": "sha512-zz06S8t0ozoDXMG+ube26zeCTNXcKIPJZJi8hBrF4idCLms4CG9QtK7qBl1boi5ODzFpjswb5JPmHCbMpjaYzg==", + "dev": true, "license": "MIT" }, "node_modules/is-async-function": { @@ -6540,30 +5778,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/is-docker": { - "version": "2.2.1", - "resolved": "https://registry.npmjs.org/is-docker/-/is-docker-2.2.1.tgz", - "integrity": "sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ==", - "license": "MIT", - "bin": { - "is-docker": "cli.js" - }, - "engines": { - "node": ">=8" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, - "node_modules/is-extendable": { - "version": "0.1.1", - "resolved": "https://registry.npmjs.org/is-extendable/-/is-extendable-0.1.1.tgz", - "integrity": "sha512-5BMULNob1vgFX6EjQw5izWDxrecWK9AM72rugNr0TFldMOi0fj6Jk+zeKIt0xGj4cEfQIJth4w3OKWOJ4f+AFw==", - "license": "MIT", - "engines": { - "node": ">=0.10.0" - } - }, "node_modules/is-extglob": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", @@ -6710,15 +5924,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/is-path-inside": { - "version": "3.0.3", - "resolved": "https://registry.npmjs.org/is-path-inside/-/is-path-inside-3.0.3.tgz", - "integrity": "sha512-Fd4gABb+ycGAmKou8eMftCupSir5lRxqf4aD/vd0cD2qc4HL07OjCeuHMr8Ro4CoMaeCKDB0/ECBOVWjTwUvPQ==", - "license": "MIT", - "engines": { - "node": ">=8" - } - }, "node_modules/is-plain-obj": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-3.0.0.tgz", @@ -6761,15 +5966,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/is-root": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/is-root/-/is-root-2.1.0.tgz", - "integrity": "sha512-AGOriNp96vNBd3HtU+RzFEc75FfR5ymiYv8E553I71SCeXBiMsVDUtdio1OEFvrPyLIQ9tVR5RxXIFe5PUFjMg==", - "license": "MIT", - "engines": { - "node": ">=6" - } - }, "node_modules/is-set": { "version": "2.0.3", "resolved": "https://registry.npmjs.org/is-set/-/is-set-2.0.3.tgz", @@ -6888,18 +6084,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/is-wsl": { - "version": "2.2.0", - "resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-2.2.0.tgz", - "integrity": "sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==", - "license": "MIT", - "dependencies": { - "is-docker": "^2.0.0" - }, - "engines": { - "node": ">=8" - } - }, "node_modules/isarray": { "version": 
"2.0.5", "resolved": "https://registry.npmjs.org/isarray/-/isarray-2.0.5.tgz", @@ -6921,24 +6105,6 @@ "node": ">=0.10.0" } }, - "node_modules/iterator.prototype": { - "version": "1.1.5", - "resolved": "https://registry.npmjs.org/iterator.prototype/-/iterator.prototype-1.1.5.tgz", - "integrity": "sha512-H0dkQoCa3b2VEeKQBOxFph+JAbcrQdE7KC0UkqwpLmv2EC4P41QXP+rqo9wYodACiG5/WM5s9oDApTU8utwj9g==", - "license": "MIT", - "peer": true, - "dependencies": { - "define-data-property": "^1.1.4", - "es-object-atoms": "^1.0.0", - "get-intrinsic": "^1.2.6", - "get-proto": "^1.0.0", - "has-symbols": "^1.1.0", - "set-function-name": "^2.0.2" - }, - "engines": { - "node": ">= 0.4" - } - }, "node_modules/jest-util": { "version": "29.7.0", "resolved": "https://registry.npmjs.org/jest-util/-/jest-util-29.7.0.tgz", @@ -7026,6 +6192,7 @@ "version": "1.0.2", "resolved": "https://registry.npmjs.org/json-parse-better-errors/-/json-parse-better-errors-1.0.2.tgz", "integrity": "sha512-mrqyZKfX5EhL7hvqcV6WG1yYjnjeuYDzDhhcAAUrq8Po85NBQBJP+ZDUT75qZQ98IkUoBqdkExkukOU7Ts2wrw==", + "dev": true, "license": "MIT" }, "node_modules/json-parse-even-better-errors": { @@ -7062,6 +6229,7 @@ "version": "6.1.0", "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.1.0.tgz", "integrity": "sha512-5dgndWOriYSm5cnYaJNhalLNDKOqFwyDB/rr1E9ZsGciGvKPs8R2xYGCacuf3z6K1YKDz182fd+fY3cn3pMqXQ==", + "dev": true, "license": "MIT", "dependencies": { "universalify": "^2.0.0" @@ -7070,21 +6238,6 @@ "graceful-fs": "^4.1.6" } }, - "node_modules/jsx-ast-utils": { - "version": "3.3.5", - "resolved": "https://registry.npmjs.org/jsx-ast-utils/-/jsx-ast-utils-3.3.5.tgz", - "integrity": "sha512-ZZow9HBI5O6EPgSJLUb8n2NKgmVWTwCvHGwFuJlMjvLFqlGG6pjirPhtdsseaLZjSibD8eegzmYpUZwoIlj2cQ==", - "license": "MIT", - "dependencies": { - "array-includes": "^3.1.6", - "array.prototype.flat": "^1.3.1", - "object.assign": "^4.1.4", - "object.values": "^1.1.6" - }, - "engines": { - "node": ">=4.0" - } - }, "node_modules/keyv": { "version": "4.5.4", "resolved": "https://registry.npmjs.org/keyv/-/keyv-4.5.4.tgz", @@ -7094,46 +6247,10 @@ "json-buffer": "3.0.1" } }, - "node_modules/kind-of": { - "version": "5.1.0", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-5.1.0.tgz", - "integrity": "sha512-NGEErnH6F2vUuXDh+OlbcKW7/wOcfdRHaZ7VWtqCztfHri/++YKmP51OdWeGPuqCOba6kk2OTe5d02VmTB80Pw==", - "license": "MIT", - "engines": { - "node": ">=0.10.0" - } - }, - "node_modules/kleur": { - "version": "3.0.3", - "resolved": "https://registry.npmjs.org/kleur/-/kleur-3.0.3.tgz", - "integrity": "sha512-eTIzlVOSUR+JxdDFepEYcBMtZ9Qqdef+rnzWdRZuMbOywu5tO2w2N7rqjoANZ5k9vywhL6Br1VRjUIgTQx4E8w==", - "license": "MIT", - "engines": { - "node": ">=6" - } - }, - "node_modules/language-subtag-registry": { - "version": "0.3.23", - "resolved": "https://registry.npmjs.org/language-subtag-registry/-/language-subtag-registry-0.3.23.tgz", - "integrity": "sha512-0K65Lea881pHotoGEa5gDlMxt3pctLi2RplBb7Ezh4rRdLEOtgi7n4EwK9lamnUCkKBqaeKRVebTq6BAxSkpXQ==", - "license": "CC0-1.0" - }, - "node_modules/language-tags": { - "version": "1.0.9", - "resolved": "https://registry.npmjs.org/language-tags/-/language-tags-1.0.9.tgz", - "integrity": "sha512-MbjN408fEndfiQXbFQ1vnd+1NoLDsnQW41410oQBXiyXDMYH5z505juWa4KUE1LqxRC7DgOgZDbKLxHIwm27hA==", - "license": "MIT", - "dependencies": { - "language-subtag-registry": "^0.3.20" - }, - "engines": { - "node": ">=0.10" - } - }, "node_modules/launch-editor": { - "version": "2.9.1", - "resolved": "https://registry.npmjs.org/launch-editor/-/launch-editor-2.9.1.tgz", - 
"integrity": "sha512-Gcnl4Bd+hRO9P9icCP/RVVT2o8SFlPXofuCxvA2SaZuH45whSvf5p8x5oih5ftLiVhEI4sp5xDY+R+b3zJBh5w==", + "version": "2.10.0", + "resolved": "https://registry.npmjs.org/launch-editor/-/launch-editor-2.10.0.tgz", + "integrity": "sha512-D7dBRJo/qcGX9xlvt/6wUYzQxjh5G1RvZPgPv8vi4KRU99DVQL/oW7tnVOCCTm2HGeo3C5HvGE5Yrh6UBoZ0vA==", "license": "MIT", "dependencies": { "picocolors": "^1.0.0", @@ -7166,16 +6283,11 @@ "url": "https://github.com/sponsors/antonk52" } }, - "node_modules/lines-and-columns": { - "version": "1.2.4", - "resolved": "https://registry.npmjs.org/lines-and-columns/-/lines-and-columns-1.2.4.tgz", - "integrity": "sha512-7ylylesZQ/PV29jhEDl3Ufjo6ZX7gCqJr5F7PKrqc93v7fzSymt1BpwEU8nAUXs8qzzvqhbjhK5QZg6Mt/HkBg==", - "license": "MIT" - }, "node_modules/load-json-file": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/load-json-file/-/load-json-file-4.0.0.tgz", "integrity": "sha512-Kx8hMakjX03tiGTLAIdJ+lL0htKnXjEZN6hk/tozf/WOuYGdZBJrZ+rCJRbVCugsjB3jMLn9746NsQIf5VjBMw==", + "dev": true, "license": "MIT", "dependencies": { "graceful-fs": "^4.1.2", @@ -7187,34 +6299,16 @@ "node": ">=4" } }, - "node_modules/load-json-file/node_modules/parse-json": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/parse-json/-/parse-json-4.0.0.tgz", - "integrity": "sha512-aOIos8bujGN93/8Ox/jPLh7RwVnPEysynVFE+fQZyg6jKELEHwzgKdLRFHUgXJL6kylijVSBC4BvN9OmsB48Rw==", - "license": "MIT", - "dependencies": { - "error-ex": "^1.3.1", - "json-parse-better-errors": "^1.0.1" - }, - "engines": { - "node": ">=4" - } - }, "node_modules/load-json-file/node_modules/pify": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz", "integrity": "sha512-C3FsVNH1udSEX48gGX1xfvwTWfsYWj5U+8/uK15BGzIGrKoUpghX8hWZwa/OFnakBiiVNmBvemTJR5mcy7iPcg==", + "dev": true, "license": "MIT", "engines": { "node": ">=4" } }, - "node_modules/load-script": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/load-script/-/load-script-1.0.0.tgz", - "integrity": "sha512-kPEjMFtZvwL9TaZo0uZ2ml+Ye9HUMmPwbYRJ324qF9tqMejwykJ5ggTyvzmrbBeapCAbk98BSbTeovHEEP1uCA==", - "license": "MIT" - }, "node_modules/loader-runner": { "version": "4.3.0", "resolved": "https://registry.npmjs.org/loader-runner/-/loader-runner-4.3.0.tgz", @@ -7224,15 +6318,6 @@ "node": ">=6.11.5" } }, - "node_modules/loader-utils": { - "version": "3.3.1", - "resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-3.3.1.tgz", - "integrity": "sha512-FMJTLMXfCLMLfJxcX9PFqX5qD88Z5MRGaZCVzfuqeZSPsyiBzs+pahDQjbIWz2QIzPZz0NX9Zy4FX3lmK6YHIg==", - "license": "MIT", - "engines": { - "node": ">= 12.13.0" - } - }, "node_modules/locate-path": { "version": "6.0.0", "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-6.0.0.tgz", @@ -7248,12 +6333,6 @@ "url": "https://github.com/sponsors/sindresorhus" } }, - "node_modules/lodash": { - "version": "4.17.21", - "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz", - "integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==", - "license": "MIT" - }, "node_modules/lodash.debounce": { "version": "4.0.8", "resolved": "https://registry.npmjs.org/lodash.debounce/-/lodash.debounce-4.0.8.tgz", @@ -7266,19 +6345,6 @@ "integrity": "sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==", "license": "MIT" }, - "node_modules/loose-envify": { - "version": "1.4.0", - "resolved": "https://registry.npmjs.org/loose-envify/-/loose-envify-1.4.0.tgz", - "integrity": 
"sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==", - "license": "MIT", - "peer": true, - "dependencies": { - "js-tokens": "^3.0.0 || ^4.0.0" - }, - "bin": { - "loose-envify": "cli.js" - } - }, "node_modules/lru-cache": { "version": "5.1.1", "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-5.1.1.tgz", @@ -7306,22 +6372,11 @@ "node": ">= 0.6" } }, - "node_modules/memfs": { - "version": "3.5.3", - "resolved": "https://registry.npmjs.org/memfs/-/memfs-3.5.3.tgz", - "integrity": "sha512-UERzLsxzllchadvbPs5aolHh65ISpKpM+ccLbOJ8/vvpBKmAWf+la7dXFy7Mr0ySHbdHrFv5kGFCUHHe6GFEmw==", - "license": "Unlicense", - "dependencies": { - "fs-monkey": "^1.0.4" - }, - "engines": { - "node": ">= 4.0.0" - } - }, "node_modules/memorystream": { "version": "0.3.1", "resolved": "https://registry.npmjs.org/memorystream/-/memorystream-0.3.1.tgz", "integrity": "sha512-S3UwM3yj5mtUSEfP41UZmt/0SCoVYUcU1rkXv+BQ5Ig8ndL4sPoJNBUJERafdPb5jjHJGuMgytgKvKIf58XNBw==", + "dev": true, "engines": { "node": ">= 0.10.0" } @@ -7405,14 +6460,6 @@ "node": ">= 0.6" } }, - "node_modules/min-document": { - "version": "2.19.0", - "resolved": "https://registry.npmjs.org/min-document/-/min-document-2.19.0.tgz", - "integrity": "sha512-9Wy1B3m3f66bPPmU5hdA4DR4PB2OfDU/+GS3yAB7IQozE3tqXaVv2zOjgla7MEGSRv95+ILmOuvhLkOK6wJtCQ==", - "dependencies": { - "dom-walk": "^0.1.0" - } - }, "node_modules/minimalistic-assert": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz", @@ -7460,9 +6507,9 @@ } }, "node_modules/nanoid": { - "version": "3.3.8", - "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.8.tgz", - "integrity": "sha512-WNLf5Sd8oZxOm+TzppcYk8gVOgP+l58xNy58D0nbUnOxOWRWvlcCV4kUF7ltmI6PsrLl/BgKEyS4mqsGChFN0w==", + "version": "3.3.11", + "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.11.tgz", + "integrity": "sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w==", "dev": true, "funding": [ { @@ -7503,12 +6550,14 @@ "version": "1.0.5", "resolved": "https://registry.npmjs.org/nice-try/-/nice-try-1.0.5.tgz", "integrity": "sha512-1nh45deeb5olNY7eX82BkPO7SSxR5SSYJiPTrTdFUVYwAl8CKMA5N9PjTYkHiRjisVcxcQ1HXdLhx2qxxJzLNQ==", + "dev": true, "license": "MIT" }, "node_modules/node-addon-api": { "version": "7.1.1", "resolved": "https://registry.npmjs.org/node-addon-api/-/node-addon-api-7.1.1.tgz", "integrity": "sha512-5m3bsyrjFWE1xf7nz7YXdN4udnVtXK6/Yfgn5qnahL6bCkf2yKt4k3nuTKAtT4r3IG8JNR2ncsIMdZuAzJjHQQ==", + "dev": true, "license": "MIT", "optional": true }, @@ -7531,6 +6580,7 @@ "version": "2.5.0", "resolved": "https://registry.npmjs.org/normalize-package-data/-/normalize-package-data-2.5.0.tgz", "integrity": "sha512-/5CMN3T0R4XTj4DcGaexo+roZSdSFW/0AOOTROrjxzCG1wrWXEsGbRKevjlIL+ZDE4sZlJr5ED4YW0yqmkK+eA==", + "dev": true, "license": "BSD-2-Clause", "dependencies": { "hosted-git-info": "^2.1.4", @@ -7543,6 +6593,7 @@ "version": "5.7.2", "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.2.tgz", "integrity": "sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==", + "dev": true, "license": "ISC", "bin": { "semver": "bin/semver" @@ -7571,6 +6622,7 @@ "version": "4.1.5", "resolved": "https://registry.npmjs.org/npm-run-all/-/npm-run-all-4.1.5.tgz", "integrity": "sha512-Oo82gJDAVcaMdi3nuoKFavkIHBRVqQ1qvMb+9LHk/cF4P6B2m8aP04hGf7oL6wZ9BuGwX1onlLhpuoofSyoQDQ==", + "dev": true, "license": "MIT", "dependencies": { "ansi-styles": "^3.2.1", @@ 
-7596,6 +6648,7 @@ "version": "3.2.1", "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz", "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==", + "dev": true, "license": "MIT", "dependencies": { "color-convert": "^1.9.0" @@ -7608,6 +6661,7 @@ "version": "2.4.2", "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, "license": "MIT", "dependencies": { "ansi-styles": "^3.2.1", @@ -7622,6 +6676,7 @@ "version": "1.9.3", "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz", "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==", + "dev": true, "license": "MIT", "dependencies": { "color-name": "1.1.3" @@ -7631,12 +6686,14 @@ "version": "1.1.3", "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz", "integrity": "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==", + "dev": true, "license": "MIT" }, "node_modules/npm-run-all/node_modules/cross-spawn": { "version": "6.0.6", "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-6.0.6.tgz", "integrity": "sha512-VqCUuhcd1iB+dsv8gxPttb5iZh/D0iubSP21g36KXdEuf6I5JiioesUVjpCdHV9MZRUfVFlvwtIUyPfxo5trtw==", + "dev": true, "license": "MIT", "dependencies": { "nice-try": "^1.0.4", @@ -7653,6 +6710,7 @@ "version": "1.0.5", "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz", "integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==", + "dev": true, "license": "MIT", "engines": { "node": ">=0.8.0" @@ -7662,6 +6720,7 @@ "version": "3.0.0", "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", "integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==", + "dev": true, "license": "MIT", "engines": { "node": ">=4" @@ -7671,6 +6730,7 @@ "version": "2.0.1", "resolved": "https://registry.npmjs.org/path-key/-/path-key-2.0.1.tgz", "integrity": "sha512-fEHGKCSmUSDPv4uoj8AlD+joPlq3peND+HRYyxFz4KPw4z926S/b8rIuFs2FYJg3BwsxJf6A9/3eIdLaYC+9Dw==", + "dev": true, "license": "MIT", "engines": { "node": ">=4" @@ -7680,6 +6740,7 @@ "version": "5.7.2", "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.2.tgz", "integrity": "sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==", + "dev": true, "license": "ISC", "bin": { "semver": "bin/semver" @@ -7689,6 +6750,7 @@ "version": "1.2.0", "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-1.2.0.tgz", "integrity": "sha512-EV3L1+UQWGor21OmnvojK36mhg+TyIKDh3iFBKBohr5xeXIhNBcx8oWdgkTEEQ+BEFFYdLRuqMfd5L84N1V5Vg==", + "dev": true, "license": "MIT", "dependencies": { "shebang-regex": "^1.0.0" @@ -7701,6 +6763,7 @@ "version": "1.0.0", "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-1.0.0.tgz", "integrity": "sha512-wpoSFAxys6b2a2wHZ1XpDSgD7N9iVjg29Ph9uV/uaP9Ex/KXlkTZTeddxDPSYQpgvzKLGJke2UU0AzoGCjNIvQ==", + "dev": true, "license": "MIT", "engines": { "node": ">=0.10.0" @@ -7710,6 +6773,7 @@ "version": "5.5.0", "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": 
true, "license": "MIT", "dependencies": { "has-flag": "^3.0.0" @@ -7722,6 +6786,7 @@ "version": "1.3.1", "resolved": "https://registry.npmjs.org/which/-/which-1.3.1.tgz", "integrity": "sha512-HxJdYWq1MTIQbJ3nw0cqssHoTNU267KlrDuGZ1WYlxDStUtKUhOaJmh112/TZmHxxUfuJqPXSOm7tDyas0OSIQ==", + "dev": true, "license": "ISC", "dependencies": { "isexe": "^2.0.0" @@ -7730,16 +6795,6 @@ "which": "bin/which" } }, - "node_modules/object-assign": { - "version": "4.1.1", - "resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz", - "integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==", - "license": "MIT", - "peer": true, - "engines": { - "node": ">=0.10.0" - } - }, "node_modules/object-inspect": { "version": "1.13.4", "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", @@ -7781,20 +6836,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/object.entries": { - "version": "1.1.8", - "resolved": "https://registry.npmjs.org/object.entries/-/object.entries-1.1.8.tgz", - "integrity": "sha512-cmopxi8VwRIAw/fkijJohSfpef5PdN0pMQJN6VC/ZKvn0LIknWD8KtgY6KlQdEc4tIjcQ3HxSMmnvtzIscdaYQ==", - "license": "MIT", - "dependencies": { - "call-bind": "^1.0.7", - "define-properties": "^1.2.1", - "es-object-atoms": "^1.0.0" - }, - "engines": { - "node": ">= 0.4" - } - }, "node_modules/object.fromentries": { "version": "2.0.8", "resolved": "https://registry.npmjs.org/object.fromentries/-/object.fromentries-2.0.8.tgz", @@ -7872,32 +6913,6 @@ "node": ">= 0.8" } }, - "node_modules/once": { - "version": "1.4.0", - "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", - "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", - "license": "ISC", - "dependencies": { - "wrappy": "1" - } - }, - "node_modules/open": { - "version": "8.4.2", - "resolved": "https://registry.npmjs.org/open/-/open-8.4.2.tgz", - "integrity": "sha512-7x81NCL719oNbsq/3mh+hVrAWmFuEYUqrq/Iw3kUzH8ReypT9QQ0BLoJS7/G9k6N81XjW4qHWtjWwe/9eLy1EQ==", - "license": "MIT", - "dependencies": { - "define-lazy-prop": "^2.0.0", - "is-docker": "^2.1.1", - "is-wsl": "^2.2.0" - }, - "engines": { - "node": ">=12" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, "node_modules/optionator": { "version": "0.9.4", "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz", @@ -8001,21 +7016,17 @@ } }, "node_modules/parse-json": { - "version": "5.2.0", - "resolved": "https://registry.npmjs.org/parse-json/-/parse-json-5.2.0.tgz", - "integrity": "sha512-ayCKvm/phCGxOkYRSCM82iDwct8/EonSEgCSxWxD7ve6jHggsFl4fZVQBPRNgQoKiuV/odhFrGzQXZwbifC8Rg==", + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/parse-json/-/parse-json-4.0.0.tgz", + "integrity": "sha512-aOIos8bujGN93/8Ox/jPLh7RwVnPEysynVFE+fQZyg6jKELEHwzgKdLRFHUgXJL6kylijVSBC4BvN9OmsB48Rw==", + "dev": true, "license": "MIT", "dependencies": { - "@babel/code-frame": "^7.0.0", "error-ex": "^1.3.1", - "json-parse-even-better-errors": "^2.3.0", - "lines-and-columns": "^1.1.6" + "json-parse-better-errors": "^1.0.1" }, "engines": { - "node": ">=8" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" + "node": ">=4" } }, "node_modules/parseurl": { @@ -8036,15 +7047,6 @@ "node": ">=8" } }, - "node_modules/path-is-absolute": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz", - "integrity": 
"sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg==", - "license": "MIT", - "engines": { - "node": ">=0.10.0" - } - }, "node_modules/path-key": { "version": "3.1.1", "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", @@ -8067,16 +7069,26 @@ "license": "MIT" }, "node_modules/path-type": { - "version": "6.0.0", - "resolved": "https://registry.npmjs.org/path-type/-/path-type-6.0.0.tgz", - "integrity": "sha512-Vj7sf++t5pBD637NSfkxpHSMfWaeig5+DKWLhcqIYx6mWQz5hdJTGDVMQiJcw1ZYkhs7AazKDGpRVji1LJCZUQ==", + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/path-type/-/path-type-3.0.0.tgz", + "integrity": "sha512-T2ZUsdZFHgA3u4e5PfPbjd7HDDpxPnQb5jN0SrDsjNSuVXHJqtwTnWqG0B1jZrgmJ/7lj1EmVIByWt1gxGkWvg==", "dev": true, "license": "MIT", - "engines": { - "node": ">=18" + "dependencies": { + "pify": "^3.0.0" }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" + "engines": { + "node": ">=4" + } + }, + "node_modules/path-type/node_modules/pify": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz", + "integrity": "sha512-C3FsVNH1udSEX48gGX1xfvwTWfsYWj5U+8/uK15BGzIGrKoUpghX8hWZwa/OFnakBiiVNmBvemTJR5mcy7iPcg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4" } }, "node_modules/picocolors": { @@ -8101,6 +7113,7 @@ "version": "0.3.1", "resolved": "https://registry.npmjs.org/pidtree/-/pidtree-0.3.1.tgz", "integrity": "sha512-qQbW94hLHEqCg7nhby4yRC7G2+jYHY4Rguc2bjw7Uug4GIJuu1tvf2uHaZv5Q8zdt+WKJ6qK1FOI6amaWUo5FA==", + "dev": true, "license": "MIT", "bin": { "pidtree": "bin/pidtree.js" @@ -8120,140 +7133,43 @@ } }, "node_modules/pkg-dir": { - "version": "7.0.0", - "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-7.0.0.tgz", - "integrity": "sha512-Ie9z/WINcxxLp27BKOCHGde4ITq9UklYKDzVo1nhk5sqGEXU3FpkwP5GM2voTGJkGd9B3Otl+Q4uwSOeSUtOBA==", + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz", + "integrity": "sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==", "license": "MIT", "dependencies": { - "find-up": "^6.3.0" + "find-up": "^4.0.0" }, "engines": { - "node": ">=14.16" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" + "node": ">=8" } }, "node_modules/pkg-dir/node_modules/find-up": { - "version": "6.3.0", - "resolved": "https://registry.npmjs.org/find-up/-/find-up-6.3.0.tgz", - "integrity": "sha512-v2ZsoEuVHYy8ZIlYqwPe/39Cy+cFDzp4dXPaxNvkEuouymu+2Jbz0PxpKarJHYJTmv2HWT3O382qY8l4jMWthw==", + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz", + "integrity": "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==", "license": "MIT", "dependencies": { - "locate-path": "^7.1.0", - "path-exists": "^5.0.0" + "locate-path": "^5.0.0", + "path-exists": "^4.0.0" }, "engines": { - "node": "^12.20.0 || ^14.13.1 || >=16.0.0" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" + "node": ">=8" } }, "node_modules/pkg-dir/node_modules/locate-path": { - "version": "7.2.0", - "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-7.2.0.tgz", - "integrity": "sha512-gvVijfZvn7R+2qyPX8mAuKcFGDf6Nc61GdvGafQsHL0sBIxfKzA+usWn4GFC/bk+QdwPUD4kWFJLhElipq+0VA==", - "license": "MIT", - "dependencies": { - "p-locate": "^6.0.0" - }, - "engines": { - "node": "^12.20.0 || ^14.13.1 || >=16.0.0" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, - 
"node_modules/pkg-dir/node_modules/p-limit": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-4.0.0.tgz", - "integrity": "sha512-5b0R4txpzjPWVw/cXXUResoD4hb6U/x9BH08L7nw+GN1sezDzPdxeRvpc9c433fZhBan/wusjbCsqwqm4EIBIQ==", - "license": "MIT", - "dependencies": { - "yocto-queue": "^1.0.0" - }, - "engines": { - "node": "^12.20.0 || ^14.13.1 || >=16.0.0" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, - "node_modules/pkg-dir/node_modules/p-locate": { - "version": "6.0.0", - "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-6.0.0.tgz", - "integrity": "sha512-wPrq66Llhl7/4AGC6I+cqxT07LhXvWL08LNXz1fENOw0Ap4sRZZ/gZpTTJ5jpurzzzfS2W/Ge9BY3LgLjCShcw==", - "license": "MIT", - "dependencies": { - "p-limit": "^4.0.0" - }, - "engines": { - "node": "^12.20.0 || ^14.13.1 || >=16.0.0" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, - "node_modules/pkg-dir/node_modules/path-exists": { "version": "5.0.0", - "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-5.0.0.tgz", - "integrity": "sha512-RjhtfwJOxzcFmNOi6ltcbcu4Iu+FL3zEj83dk4kAS+fVpTxXLO1b38RvJgT/0QwvV/L3aY9TAnyv0EOqW4GoMQ==", - "license": "MIT", - "engines": { - "node": "^12.20.0 || ^14.13.1 || >=16.0.0" - } - }, - "node_modules/pkg-dir/node_modules/yocto-queue": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/yocto-queue/-/yocto-queue-1.1.1.tgz", - "integrity": "sha512-b4JR1PFR10y1mKjhHY9LaGo6tmrgjit7hxVIeAmyMw3jegXR4dhYqLaQF5zMXZxY7tLpMyJeLjr1C4rLmkVe8g==", - "license": "MIT", - "engines": { - "node": ">=12.20" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, - "node_modules/pkg-up": { - "version": "3.1.0", - "resolved": "https://registry.npmjs.org/pkg-up/-/pkg-up-3.1.0.tgz", - "integrity": "sha512-nDywThFk1i4BQK4twPQ6TA4RT8bDY96yeuCVBWL3ePARCiEKDRSrNGbFIgUJpLp+XeIR65v8ra7WuJOFUBtkMA==", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz", + "integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==", "license": "MIT", "dependencies": { - "find-up": "^3.0.0" + "p-locate": "^4.1.0" }, "engines": { "node": ">=8" } }, - "node_modules/pkg-up/node_modules/find-up": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/find-up/-/find-up-3.0.0.tgz", - "integrity": "sha512-1yD6RmLI1XBfxugvORwlck6f75tYL+iR0jqwsOrOxMZyGYqUuDhJ0l4AXdO1iX/FTs9cBAMEk1gWSEx1kSbylg==", - "license": "MIT", - "dependencies": { - "locate-path": "^3.0.0" - }, - "engines": { - "node": ">=6" - } - }, - "node_modules/pkg-up/node_modules/locate-path": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-3.0.0.tgz", - "integrity": "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==", - "license": "MIT", - "dependencies": { - "p-locate": "^3.0.0", - "path-exists": "^3.0.0" - }, - "engines": { - "node": ">=6" - } - }, - "node_modules/pkg-up/node_modules/p-limit": { + "node_modules/pkg-dir/node_modules/p-limit": { "version": "2.3.0", "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz", "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==", @@ -8268,33 +7184,16 @@ "url": "https://github.com/sponsors/sindresorhus" } }, - "node_modules/pkg-up/node_modules/p-locate": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-3.0.0.tgz", - "integrity": 
"sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==", + "node_modules/pkg-dir/node_modules/p-locate": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz", + "integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==", "license": "MIT", "dependencies": { - "p-limit": "^2.0.0" + "p-limit": "^2.2.0" }, "engines": { - "node": ">=6" - } - }, - "node_modules/pkg-up/node_modules/path-exists": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-3.0.0.tgz", - "integrity": "sha512-bpC7GYwiDYQ4wYLe+FA8lhRjhQCMcQGuSgGGqDkg/QerRWw9CmGRT0iSOVRSZJ29NMLZgIzqaljJ63oaL4NIJQ==", - "license": "MIT", - "engines": { - "node": ">=4" - } - }, - "node_modules/pos": { - "version": "0.4.2", - "resolved": "https://registry.npmjs.org/pos/-/pos-0.4.2.tgz", - "integrity": "sha512-5HtivCe1HaOqjQZZNhtKrIR1zBvm2FLVVGl4b1poHPZDbXq1BEqYOlmWmetbzqrkRFITxPbEpVgpB03qNS4cSw==", - "engines": { - "node": ">=0" + "node": ">=8" } }, "node_modules/possible-typed-array-names": { @@ -8307,9 +7206,9 @@ } }, "node_modules/postcss": { - "version": "8.5.1", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.1.tgz", - "integrity": "sha512-6oz2beyjc5VMn/KV1pPw8fliQkhBXrVn1Z3TVyqZxU8kZpzEKhBdmCFqI6ZbmGtamQvQGuU1sgPTk8ZrXDD7jQ==", + "version": "8.5.3", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.3.tgz", + "integrity": "sha512-dle9A3yYxlBSrt8Fu+IpjGT8SY8hN0mlaA6GY8t0P5PjIOZemULz/E2Bnm/2dcUOena75OTNkHI76uZBNUUq3A==", "dev": true, "funding": [ { @@ -8336,23 +7235,22 @@ } }, "node_modules/postcss-cli": { - "version": "11.0.0", - "resolved": "https://registry.npmjs.org/postcss-cli/-/postcss-cli-11.0.0.tgz", - "integrity": "sha512-xMITAI7M0u1yolVcXJ9XTZiO9aO49mcoKQy6pCDFdMh9kGqhzLVpWxeD/32M/QBmkhcGypZFFOLNLmIW4Pg4RA==", + "version": "11.0.1", + "resolved": "https://registry.npmjs.org/postcss-cli/-/postcss-cli-11.0.1.tgz", + "integrity": "sha512-0UnkNPSayHKRe/tc2YGW6XnSqqOA9eqpiRMgRlV1S6HdGi16vwJBx7lviARzbV1HpQHqLLRH3o8vTcB0cLc+5g==", "dev": true, "license": "MIT", "dependencies": { "chokidar": "^3.3.0", - "dependency-graph": "^0.11.0", + "dependency-graph": "^1.0.0", "fs-extra": "^11.0.0", - "get-stdin": "^9.0.0", - "globby": "^14.0.0", "picocolors": "^1.0.0", "postcss-load-config": "^5.0.0", "postcss-reporter": "^7.0.0", "pretty-hrtime": "^1.0.3", "read-cache": "^1.0.0", "slash": "^5.0.0", + "tinyglobby": "^0.2.12", "yargs": "^17.0.0" }, "bin": { @@ -8458,46 +7356,12 @@ "node": ">= 0.8" } }, - "node_modules/process": { - "version": "0.11.10", - "resolved": "https://registry.npmjs.org/process/-/process-0.11.10.tgz", - "integrity": "sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A==", - "license": "MIT", - "engines": { - "node": ">= 0.6.0" - } - }, "node_modules/process-nextick-args": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==", "license": "MIT" }, - "node_modules/prompts": { - "version": "2.4.2", - "resolved": "https://registry.npmjs.org/prompts/-/prompts-2.4.2.tgz", - "integrity": "sha512-NxNv/kLguCA7p3jE8oL2aEBsrJWgAakBpgmgK6lpPWV+WuOmY6r2/zbAVnP+T8bQlA0nzHXSJSJW0Hq7ylaD2Q==", - "license": "MIT", - "dependencies": { - "kleur": "^3.0.3", - "sisteransi": "^1.0.5" - }, - "engines": { - "node": ">= 6" - } - }, - 
"node_modules/prop-types": { - "version": "15.8.1", - "resolved": "https://registry.npmjs.org/prop-types/-/prop-types-15.8.1.tgz", - "integrity": "sha512-oj87CgZICdulUohogVAR7AjlC0327U4el4L6eAvOqCeudMDVU0NThNaV+b9Df4dXgSP1gXMTnPdhfe/2qDH5cg==", - "license": "MIT", - "peer": true, - "dependencies": { - "loose-envify": "^1.4.0", - "object-assign": "^4.1.1", - "react-is": "^16.13.1" - } - }, "node_modules/proxy-addr": { "version": "2.0.7", "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz", @@ -8544,14 +7408,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/querystring-es3": { - "version": "0.2.1", - "resolved": "https://registry.npmjs.org/querystring-es3/-/querystring-es3-0.2.1.tgz", - "integrity": "sha512-773xhDQnZBMFobEiztv8LIl70ch5MSF/jUQVlhwFyBILqq96anmoctVIYz+ZRp0qbCKATTn6ev02M3r7Ga5vqA==", - "engines": { - "node": ">=0.4.x" - } - }, "node_modules/queue-microtask": { "version": "1.2.3", "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz", @@ -8585,108 +7441,26 @@ "version": "1.2.1", "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", - "license": "MIT", - "engines": { - "node": ">= 0.6" - } - }, - "node_modules/raw-body": { - "version": "2.5.2", - "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.5.2.tgz", - "integrity": "sha512-8zGqypfENjCIqGhgXToC8aB2r7YrBX+AQAfIPs/Mlk+BtPTztOvTS01NRW/3Eh60J+a48lt8qsCzirQ6loCVfA==", - "license": "MIT", - "dependencies": { - "bytes": "3.1.2", - "http-errors": "2.0.0", - "iconv-lite": "0.4.24", - "unpipe": "1.0.0" - }, - "engines": { - "node": ">= 0.8" - } - }, - "node_modules/react-dev-tools": { - "version": "0.0.1", - "resolved": "https://registry.npmjs.org/react-dev-tools/-/react-dev-tools-0.0.1.tgz", - "integrity": "sha512-V82SL/Y3/YLRZHIqzyBEauIFASeTOD4ZllvGhhG0Q4Npc06ElbMFPFgDYt+M9uW+CJrGoufS4kOQ6GSzTf3AJw==" - }, - "node_modules/react-dev-utils": { - "version": "12.0.1", - "resolved": "https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-12.0.1.tgz", - "integrity": "sha512-84Ivxmr17KjUupyqzFode6xKhjwuEJDROWKJy/BthkL7Wn6NJ8h4WE6k/exAv6ImS+0oZLRRW5j/aINMHyeGeQ==", - "license": "MIT", - "dependencies": { - "@babel/code-frame": "^7.16.0", - "address": "^1.1.2", - "browserslist": "^4.18.1", - "chalk": "^4.1.2", - "cross-spawn": "^7.0.3", - "detect-port-alt": "^1.1.6", - "escape-string-regexp": "^4.0.0", - "filesize": "^8.0.6", - "find-up": "^5.0.0", - "fork-ts-checker-webpack-plugin": "^6.5.0", - "global-modules": "^2.0.0", - "globby": "^11.0.4", - "gzip-size": "^6.0.0", - "immer": "^9.0.7", - "is-root": "^2.1.0", - "loader-utils": "^3.2.0", - "open": "^8.4.0", - "pkg-up": "^3.1.0", - "prompts": "^2.4.2", - "react-error-overlay": "^6.0.11", - "recursive-readdir": "^2.2.2", - "shell-quote": "^1.7.3", - "strip-ansi": "^6.0.1", - "text-table": "^0.2.0" - }, + "license": "MIT", "engines": { - "node": ">=14" + "node": ">= 0.6" } }, - "node_modules/react-dev-utils/node_modules/globby": { - "version": "11.1.0", - "resolved": "https://registry.npmjs.org/globby/-/globby-11.1.0.tgz", - "integrity": "sha512-jhIXaOzy1sb8IyocaruWSn1TjmnBVs8Ayhcy83rmxNJ8q2uWKCAj3CnJY+KpGSXCueAPc0i05kVvVKtP1t9S3g==", + "node_modules/raw-body": { + "version": "2.5.2", + "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.5.2.tgz", + "integrity": 
"sha512-8zGqypfENjCIqGhgXToC8aB2r7YrBX+AQAfIPs/Mlk+BtPTztOvTS01NRW/3Eh60J+a48lt8qsCzirQ6loCVfA==", "license": "MIT", "dependencies": { - "array-union": "^2.1.0", - "dir-glob": "^3.0.1", - "fast-glob": "^3.2.9", - "ignore": "^5.2.0", - "merge2": "^1.4.1", - "slash": "^3.0.0" - }, - "engines": { - "node": ">=10" + "bytes": "3.1.2", + "http-errors": "2.0.0", + "iconv-lite": "0.4.24", + "unpipe": "1.0.0" }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, - "node_modules/react-dev-utils/node_modules/slash": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/slash/-/slash-3.0.0.tgz", - "integrity": "sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q==", - "license": "MIT", "engines": { - "node": ">=8" + "node": ">= 0.8" } }, - "node_modules/react-error-overlay": { - "version": "6.0.11", - "resolved": "https://registry.npmjs.org/react-error-overlay/-/react-error-overlay-6.0.11.tgz", - "integrity": "sha512-/6UZ2qgEyH2aqzYZgQPxEnz33NJ2gNsnHA2o5+o4wW9bLM/JYQitNP9xPhsXwC08hMMovfGe/8retsdDsczPRg==", - "license": "MIT" - }, - "node_modules/react-is": { - "version": "16.13.1", - "resolved": "https://registry.npmjs.org/react-is/-/react-is-16.13.1.tgz", - "integrity": "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ==", - "license": "MIT", - "peer": true - }, "node_modules/read-cache": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/read-cache/-/read-cache-1.0.0.tgz", @@ -8701,6 +7475,7 @@ "version": "3.0.0", "resolved": "https://registry.npmjs.org/read-pkg/-/read-pkg-3.0.0.tgz", "integrity": "sha512-BLq/cCO9two+lBgiTYNqD6GdtK8s4NpaWrl6/rCO9w0TUS8oJl7cmToOZfRYllKTISY6nt1U7jQ53brmKqY6BA==", + "dev": true, "license": "MIT", "dependencies": { "load-json-file": "^4.0.0", @@ -8711,27 +7486,6 @@ "node": ">=4" } }, - "node_modules/read-pkg/node_modules/path-type": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/path-type/-/path-type-3.0.0.tgz", - "integrity": "sha512-T2ZUsdZFHgA3u4e5PfPbjd7HDDpxPnQb5jN0SrDsjNSuVXHJqtwTnWqG0B1jZrgmJ/7lj1EmVIByWt1gxGkWvg==", - "license": "MIT", - "dependencies": { - "pify": "^3.0.0" - }, - "engines": { - "node": ">=4" - } - }, - "node_modules/read-pkg/node_modules/pify": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz", - "integrity": "sha512-C3FsVNH1udSEX48gGX1xfvwTWfsYWj5U+8/uK15BGzIGrKoUpghX8hWZwa/OFnakBiiVNmBvemTJR5mcy7iPcg==", - "license": "MIT", - "engines": { - "node": ">=4" - } - }, "node_modules/readable-stream": { "version": "3.6.2", "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.2.tgz", @@ -8770,30 +7524,6 @@ "node": ">= 10.13.0" } }, - "node_modules/recursive-readdir": { - "version": "2.2.3", - "resolved": "https://registry.npmjs.org/recursive-readdir/-/recursive-readdir-2.2.3.tgz", - "integrity": "sha512-8HrF5ZsXk5FAH9dgsx3BlUer73nIhuj+9OrQwEbLTPOBzGkL1lsFCR01am+v+0m2Cmbs1nP12hLDl5FA7EszKA==", - "license": "MIT", - "dependencies": { - "minimatch": "^3.0.5" - }, - "engines": { - "node": ">=6.0.0" - } - }, - "node_modules/reduce": { - "version": "1.0.3", - "resolved": "https://registry.npmjs.org/reduce/-/reduce-1.0.3.tgz", - "integrity": "sha512-0Dtt3Bgj34/yKFzE5N9V6/HYyP3gb+E3TLs/hMr/wGgkCIzYa+7G4hNrE/P+en52OJT+pLUgmba9DQF3AB+2LQ==", - "license": "MIT", - "dependencies": { - "object-keys": "^1.1.1" - }, - "engines": { - "node": ">= 0.4" - } - }, "node_modules/reflect.getprototypeof": { "version": "1.0.10", "resolved": 
"https://registry.npmjs.org/reflect.getprototypeof/-/reflect.getprototypeof-1.0.10.tgz", @@ -8916,12 +7646,6 @@ "node": ">=6" } }, - "node_modules/remove-markdown": { - "version": "0.2.2", - "resolved": "https://registry.npmjs.org/remove-markdown/-/remove-markdown-0.2.2.tgz", - "integrity": "sha512-jwgEf3Yh/xi4WodWi/vPlasa9C9pMv1kz5ITOIAGjBW7PeZ/CHZCdBfJzQnn2VX2cBvf1xCuJv0tUJqn/FCMNA==", - "license": "MIT" - }, "node_modules/require-directory": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", @@ -9007,31 +7731,15 @@ } }, "node_modules/reusify": { - "version": "1.0.4", - "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz", - "integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==", + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.1.0.tgz", + "integrity": "sha512-g6QUff04oZpHs0eG5p83rFLhHeV00ug/Yf9nZM6fLeUrPguBTkTQOdpAWWspMh55TZfVQDPaN3NQJfbVRAxdIw==", "license": "MIT", "engines": { "iojs": ">=1.0.0", "node": ">=0.10.0" } }, - "node_modules/rimraf": { - "version": "3.0.2", - "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz", - "integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==", - "deprecated": "Rimraf versions prior to v4 are no longer supported", - "license": "ISC", - "dependencies": { - "glob": "^7.1.3" - }, - "bin": { - "rimraf": "bin.js" - }, - "funding": { - "url": "https://github.com/sponsors/isaacs" - } - }, "node_modules/run-applescript": { "version": "7.0.0", "resolved": "https://registry.npmjs.org/run-applescript/-/run-applescript-7.0.0.tgz", @@ -9048,6 +7756,7 @@ "version": "0.0.0", "resolved": "https://registry.npmjs.org/run-p/-/run-p-0.0.0.tgz", "integrity": "sha512-ZLiUUVOXJcM/S1hMnm6Ooc1zAgAx98Mmn1qyA+y3WNeK7hOTGAusVR5r3uOQJ0NuUxZt7J9vNusYNNVgKPSbww==", + "dev": true, "license": "MIT" }, "node_modules/run-parallel": { @@ -9152,9 +7861,10 @@ "license": "MIT" }, "node_modules/sass": { - "version": "1.84.0", - "resolved": "https://registry.npmjs.org/sass/-/sass-1.84.0.tgz", - "integrity": "sha512-XDAbhEPJRxi7H0SxrnOpiXFQoUJHwkR2u3Zc4el+fK/Tt5Hpzw5kkQ59qVDfvdaUq6gCrEZIbySFBM2T9DNKHg==", + "version": "1.87.0", + "resolved": "https://registry.npmjs.org/sass/-/sass-1.87.0.tgz", + "integrity": "sha512-d0NoFH4v6SjEK7BoX810Jsrhj7IQSYHAHLi/iSpgqKc7LaIDshFRlSg5LOymf9FqQhxEHs2W5ZQXlvy0KD45Uw==", + "dev": true, "license": "MIT", "dependencies": { "chokidar": "^4.0.0", @@ -9175,6 +7885,7 @@ "version": "4.0.3", "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz", "integrity": "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA==", + "dev": true, "license": "MIT", "dependencies": { "readdirp": "^4.0.1" @@ -9187,9 +7898,10 @@ } }, "node_modules/sass/node_modules/readdirp": { - "version": "4.1.1", - "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.1.tgz", - "integrity": "sha512-h80JrZu/MHUZCyHu5ciuoI0+WxsCxzxJTILn6Fs8rxSnFPh+UVHYfeIxK1nVGugMqkfC4vJcBOYbkfkwYK0+gw==", + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz", + "integrity": "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg==", + "dev": true, "license": "MIT", "engines": { "node": ">= 14.18.0" @@ -9200,9 +7912,9 @@ } }, "node_modules/schema-utils": { - "version": "4.3.0", - "resolved": 
"https://registry.npmjs.org/schema-utils/-/schema-utils-4.3.0.tgz", - "integrity": "sha512-Gf9qqc58SpCA/xdziiHz35F4GNIWYWZrEshUc/G/r5BnLph6xpKuLeoJoQuj5WfBIx/eQLf+hmVPYHaxJu7V2g==", + "version": "4.3.2", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-4.3.2.tgz", + "integrity": "sha512-Gn/JaSk/Mt9gYubxTtSn/QCV4em9mpAPiR1rqy/Ocu19u/G9J5WWdNoUT4SiV6mFC3y6cxyFcFwdzPM3FgxGAQ==", "license": "MIT", "dependencies": { "@types/json-schema": "^7.0.9", @@ -9614,12 +8326,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/sisteransi": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz", - "integrity": "sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg==", - "license": "MIT" - }, "node_modules/slash": { "version": "5.1.0", "resolved": "https://registry.npmjs.org/slash/-/slash-5.1.0.tgz", @@ -9657,6 +8363,7 @@ "version": "1.2.1", "resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz", "integrity": "sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==", + "dev": true, "license": "BSD-3-Clause", "engines": { "node": ">=0.10.0" @@ -9676,6 +8383,7 @@ "version": "3.2.0", "resolved": "https://registry.npmjs.org/spdx-correct/-/spdx-correct-3.2.0.tgz", "integrity": "sha512-kN9dJbvnySHULIluDHy32WHRUu3Og7B9sbY7tsFLctQkIqnMh3hErYgdMjTYuqmcXX+lK5T1lnUt3G7zNswmZA==", + "dev": true, "license": "Apache-2.0", "dependencies": { "spdx-expression-parse": "^3.0.0", @@ -9686,12 +8394,14 @@ "version": "2.5.0", "resolved": "https://registry.npmjs.org/spdx-exceptions/-/spdx-exceptions-2.5.0.tgz", "integrity": "sha512-PiU42r+xO4UbUS1buo3LPJkjlO7430Xn5SVAhdpzzsPHsjbYVflnnFdATgabnLude+Cqu25p6N+g2lw/PFsa4w==", + "dev": true, "license": "CC-BY-3.0" }, "node_modules/spdx-expression-parse": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/spdx-expression-parse/-/spdx-expression-parse-3.0.1.tgz", "integrity": "sha512-cbqHunsQWnJNE6KhVSMsMeH5H/L9EpymbzqTQ3uLwNCLZ1Q481oWaofqH7nO6V07xlXwY6PhQdQ2IedWx/ZK4Q==", + "dev": true, "license": "MIT", "dependencies": { "spdx-exceptions": "^2.1.0", @@ -9702,6 +8412,7 @@ "version": "3.0.21", "resolved": "https://registry.npmjs.org/spdx-license-ids/-/spdx-license-ids-3.0.21.tgz", "integrity": "sha512-Bvg/8F5XephndSK3JffaRqdT+gyhfqIPwDHpX80tJrF8QQRYMo8sNMeaZ2Dp5+jhwKnUmIOyFFQfHRkjJm5nXg==", + "dev": true, "license": "CC0-1.0" }, "node_modules/spdy": { @@ -9734,12 +8445,6 @@ "wbuf": "^1.7.3" } }, - "node_modules/sprintf-js": { - "version": "1.0.3", - "resolved": "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.0.3.tgz", - "integrity": "sha512-D9cPgkvLlV3t3IzL0D0YLvGA9Ahk4PcvVwUbN0dSGr1aP0Nrt4AEnTUbuGvquEC0mA64Gqt1fzirlRs5ibXx8g==", - "license": "BSD-3-Clause" - }, "node_modules/statuses": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz", @@ -9749,12 +8454,6 @@ "node": ">= 0.8" } }, - "node_modules/stopword": { - "version": "0.1.19", - "resolved": "https://registry.npmjs.org/stopword/-/stopword-0.1.19.tgz", - "integrity": "sha512-oKkl/LClyJ2YLWm2xZvIiCUGiTsggj+BPOQyt3IKtPUJZj43jYxFJEmXvP1VZQvMuexdodMBshL4sVUSPURmwg==", - "license": "MIT" - }, "node_modules/string_decoder": { "version": "1.3.0", "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.3.0.tgz", @@ -9786,52 +8485,11 @@ "dev": true, "license": "MIT" }, - "node_modules/string.prototype.includes": { - "version": "2.0.1", - "resolved": 
"https://registry.npmjs.org/string.prototype.includes/-/string.prototype.includes-2.0.1.tgz", - "integrity": "sha512-o7+c9bW6zpAdJHTtujeePODAhkuicdAryFsfVKwA+wGw89wJ4GTY484WTucM9hLtDEOpOvI+aHnzqnC5lHp4Rg==", - "license": "MIT", - "dependencies": { - "call-bind": "^1.0.7", - "define-properties": "^1.2.1", - "es-abstract": "^1.23.3" - }, - "engines": { - "node": ">= 0.4" - } - }, - "node_modules/string.prototype.matchall": { - "version": "4.0.12", - "resolved": "https://registry.npmjs.org/string.prototype.matchall/-/string.prototype.matchall-4.0.12.tgz", - "integrity": "sha512-6CC9uyBL+/48dYizRf7H7VAYCMCNTBeM78x/VTUe9bFEaxBepPJDa1Ow99LqI/1yF7kuy7Q3cQsYMrcjGUcskA==", - "license": "MIT", - "peer": true, - "dependencies": { - "call-bind": "^1.0.8", - "call-bound": "^1.0.3", - "define-properties": "^1.2.1", - "es-abstract": "^1.23.6", - "es-errors": "^1.3.0", - "es-object-atoms": "^1.0.0", - "get-intrinsic": "^1.2.6", - "gopd": "^1.2.0", - "has-symbols": "^1.1.0", - "internal-slot": "^1.1.0", - "regexp.prototype.flags": "^1.5.3", - "set-function-name": "^2.0.2", - "side-channel": "^1.1.0" - }, - "engines": { - "node": ">= 0.4" - }, - "funding": { - "url": "https://github.com/sponsors/ljharb" - } - }, "node_modules/string.prototype.padend": { "version": "3.1.6", "resolved": "https://registry.npmjs.org/string.prototype.padend/-/string.prototype.padend-3.1.6.tgz", "integrity": "sha512-XZpspuSB7vJWhvJc9DLSlrXl1mcA2BdoY5jjnS135ydXqLoqhs96JjDtCkjJEQHvfqZIp9hBuBMgI589peyx9Q==", + "dev": true, "license": "MIT", "dependencies": { "call-bind": "^1.0.7", @@ -9846,17 +8504,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/string.prototype.repeat": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/string.prototype.repeat/-/string.prototype.repeat-1.0.0.tgz", - "integrity": "sha512-0u/TldDbKD8bFCQ/4f5+mNRrXwZ8hg2w7ZR8wa16e8z9XpePWl3eGEcUD0OXpEH/VJH/2G3gjUtR3ZOiBe2S/w==", - "license": "MIT", - "peer": true, - "dependencies": { - "define-properties": "^1.1.3", - "es-abstract": "^1.17.5" - } - }, "node_modules/string.prototype.trim": { "version": "1.2.10", "resolved": "https://registry.npmjs.org/string.prototype.trim/-/string.prototype.trim-1.2.10.tgz", @@ -9917,6 +8564,7 @@ "version": "6.0.1", "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, "license": "MIT", "dependencies": { "ansi-regex": "^5.0.1" @@ -9934,15 +8582,6 @@ "node": ">=4" } }, - "node_modules/strip-bom-string": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/strip-bom-string/-/strip-bom-string-1.0.0.tgz", - "integrity": "sha512-uCC2VHvQRYu+lMh4My/sFNmF2klFymLX1wHJeXnbEJERpV/ZsVuonzerjfrGpIGF7LBVa1O7i9kjiWvJiFck8g==", - "license": "MIT", - "engines": { - "node": ">=0.10.0" - } - }, "node_modules/strip-json-comments": { "version": "3.1.1", "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz", @@ -9955,12 +8594,6 @@ "url": "https://github.com/sponsors/sindresorhus" } }, - "node_modules/striptags": { - "version": "3.2.0", - "resolved": "https://registry.npmjs.org/striptags/-/striptags-3.2.0.tgz", - "integrity": "sha512-g45ZOGzHDMe2bdYMdIvdAfCQkCTDMGBazSw1ypMowwGIee7ZQ5dU0rBJ8Jqgl+jAKIv4dbeE1jscZq9wid1Tkw==", - "license": "MIT" - }, "node_modules/supports-color": { "version": "7.2.0", "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", @@ -9985,19 +8618,10 @@ "url": 
"https://github.com/sponsors/ljharb" } }, - "node_modules/tapable": { - "version": "1.1.3", - "resolved": "https://registry.npmjs.org/tapable/-/tapable-1.1.3.tgz", - "integrity": "sha512-4WK/bYZmj8xLr+HUCODHGF1ZFzsYffasLUgEiMBY4fgtltdO6B4WJtlSbPaDTLpYTcGVwM2qLnFTICEcNxs3kA==", - "license": "MIT", - "engines": { - "node": ">=6" - } - }, "node_modules/terser": { - "version": "5.38.1", - "resolved": "https://registry.npmjs.org/terser/-/terser-5.38.1.tgz", - "integrity": "sha512-GWANVlPM/ZfYzuPHjq0nxT+EbOEDDN3Jwhwdg1D8TU8oSkktp8w64Uq4auuGLxFSoNTRDncTq2hQHX1Ld9KHkA==", + "version": "5.39.0", + "resolved": "https://registry.npmjs.org/terser/-/terser-5.39.0.tgz", + "integrity": "sha512-LBAhFyLho16harJoWMg/nZsQYgTrg5jXOn2nCYjRUcZZEdE3qa2zb8QEDRUGVZBW4rlazf2fxkg8tztybTaqWw==", "license": "BSD-2-Clause", "dependencies": { "@jridgewell/source-map": "^0.3.3", @@ -10013,9 +8637,9 @@ } }, "node_modules/terser-webpack-plugin": { - "version": "5.3.11", - "resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-5.3.11.tgz", - "integrity": "sha512-RVCsMfuD0+cTt3EwX8hSl2Ks56EbFHWmhluwcqoPKtBnfjiT6olaq7PRIRfhyU8nnC2MrnDrBLfrD/RGE+cVXQ==", + "version": "5.3.14", + "resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-5.3.14.tgz", + "integrity": "sha512-vkZjpUjb6OMS7dhV+tILUW6BhpDR7P2L/aQSAv+Uwk+m8KATX9EccViHTJR2qDtACKPIYndLGCyl3FMo+r2LMw==", "license": "MIT", "dependencies": { "@jridgewell/trace-mapping": "^0.3.25", @@ -10075,12 +8699,6 @@ "url": "https://github.com/chalk/supports-color?sponsor=1" } }, - "node_modules/text-table": { - "version": "0.2.0", - "resolved": "https://registry.npmjs.org/text-table/-/text-table-0.2.0.tgz", - "integrity": "sha512-N+8UisAXDGk8PFXP4HAzVR9nbfmVJ3zYLAWiTIoqC5v5isinhr+r5uaO8+7r3BMfuNIufIsA7RdpVgacC2cSpw==", - "license": "MIT" - }, "node_modules/thenby": { "version": "1.3.4", "resolved": "https://registry.npmjs.org/thenby/-/thenby-1.3.4.tgz", @@ -10100,12 +8718,6 @@ "tslib": "^2" } }, - "node_modules/through": { - "version": "2.3.8", - "resolved": "https://registry.npmjs.org/through/-/through-2.3.8.tgz", - "integrity": "sha512-w89qg7PI8wAdvX60bMDP+bFoD5Dvhm9oLheFp5O4a2QF0cSBGsBX4qZmadPMvVqlLJBBci+WqGGOAPvcDeNSVg==", - "license": "MIT" - }, "node_modules/thunky": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/thunky/-/thunky-1.1.0.tgz", @@ -10118,40 +8730,61 @@ "integrity": "sha512-NB6Dk1A9xgQPMoGqC5CVXn123gWyte215ONT5Pp5a0yt4nlEoO1ZWeCwpncaekPHXO60i47ihFnZPiRPjRMq4Q==", "license": "MIT" }, - "node_modules/to-no-case": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/to-no-case/-/to-no-case-1.0.2.tgz", - "integrity": "sha512-Z3g735FxuZY8rodxV4gH7LxClE4H0hTIyHNIHdk+vpQxjLm0cwnKXq/OFVZ76SOQmto7txVcwSCwkU5kqp+FKg==", - "license": "MIT" - }, - "node_modules/to-regex-range": { - "version": "5.0.1", - "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", - "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", + "node_modules/tinyglobby": { + "version": "0.2.13", + "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.13.tgz", + "integrity": "sha512-mEwzpUgrLySlveBwEVDMKk5B57bhLPYovRfPAXD5gA/98Opn0rCDj3GtLwFvCvH5RK9uPCExUROW5NjDwvqkxw==", + "dev": true, "license": "MIT", "dependencies": { - "is-number": "^7.0.0" + "fdir": "^6.4.4", + "picomatch": "^4.0.2" }, "engines": { - "node": ">=8.0" + "node": ">=12.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/SuperchupuDev" } }, - 
"node_modules/to-snake-case": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/to-snake-case/-/to-snake-case-1.0.0.tgz", - "integrity": "sha512-joRpzBAk1Bhi2eGEYBjukEWHOe/IvclOkiJl3DtA91jV6NwQ3MwXA4FHYeqk8BNp/D8bmi9tcNbRu/SozP0jbQ==", + "node_modules/tinyglobby/node_modules/fdir": { + "version": "6.4.4", + "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.4.4.tgz", + "integrity": "sha512-1NZP+GK4GfuAv3PqKvxQRDMjdSRZjnkq7KfhlNrCNNlZ0ygQFpebfrnfnq/W7fpUnAv9aGWmY1zKx7FYL3gwhg==", + "dev": true, "license": "MIT", - "dependencies": { - "to-space-case": "^1.0.0" + "peerDependencies": { + "picomatch": "^3 || ^4" + }, + "peerDependenciesMeta": { + "picomatch": { + "optional": true + } } }, - "node_modules/to-space-case": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/to-space-case/-/to-space-case-1.0.0.tgz", - "integrity": "sha512-rLdvwXZ39VOn1IxGL3V6ZstoTbwLRckQmn/U8ZDLuWwIXNpuZDhQ3AiRUlhTbOXFVE9C+dR51wM0CBDhk31VcA==", + "node_modules/tinyglobby/node_modules/picomatch": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.2.tgz", + "integrity": "sha512-M7BAV6Rlcy5u+m6oPhAPFgJTzAioX/6B0DxyvDlo9l8+T3nLKbrczg2WLUyzd45L8RqfUMyGPzekbMvX2Ldkwg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/to-regex-range": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", + "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", "license": "MIT", "dependencies": { - "to-no-case": "^1.0.0" + "is-number": "^7.0.0" + }, + "engines": { + "node": ">=8.0" } }, "node_modules/toidentifier": { @@ -10163,12 +8796,6 @@ "node": ">=0.6" } }, - "node_modules/toml": { - "version": "2.3.6", - "resolved": "https://registry.npmjs.org/toml/-/toml-2.3.6.tgz", - "integrity": "sha512-gVweAectJU3ebq//Ferr2JUY4WKSDe5N+z0FvjDncLGyHmIDoxgY/2Ie4qfEIDm4IS7OA6Rmdm7pdEEdMcV/xQ==", - "license": "MIT" - }, "node_modules/tree-dump": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/tree-dump/-/tree-dump-1.0.2.tgz", @@ -10185,13 +8812,16 @@ "tslib": "2" } }, - "node_modules/truncate-utf8-bytes": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/truncate-utf8-bytes/-/truncate-utf8-bytes-1.0.2.tgz", - "integrity": "sha512-95Pu1QXQvruGEhv62XCMO3Mm90GscOCClvrIUwCM0PYOXK3kaF3l3sIHxx71ThJfcbM2O5Au6SO3AWCSEfW4mQ==", - "license": "WTFPL", - "dependencies": { - "utf8-byte-length": "^1.0.1" + "node_modules/ts-api-utils": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/ts-api-utils/-/ts-api-utils-2.1.0.tgz", + "integrity": "sha512-CUgTZL1irw8u29bzrOD/nH85jqyc74D6SshFgujOIA7osm2Rz7dYH77agkx7H4FBNxDq7Cjf+IjaX/8zwFW+ZQ==", + "license": "MIT", + "engines": { + "node": ">=18.12" + }, + "peerDependencies": { + "typescript": ">=4.8.4" } }, "node_modules/tsconfig-paths": { @@ -10224,18 +8854,6 @@ "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==", "license": "0BSD" }, - "node_modules/tunnel-agent": { - "version": "0.6.0", - "resolved": "https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.6.0.tgz", - "integrity": "sha512-McnNiV1l8RYeY8tBgEpuodCC1mLUdbSN+CYBL7kJsJNInOP8UjDDEwdk6Mw60vdLLrr5NHKZhMAOSrR2NZuQ+w==", - "license": "Apache-2.0", - "dependencies": { - "safe-buffer": "^5.0.1" - }, - "engines": { - "node": "*" - } - }, "node_modules/type-check": { 
"version": "0.4.0", "resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz", @@ -10248,18 +8866,6 @@ "node": ">= 0.8.0" } }, - "node_modules/type-fest": { - "version": "0.20.2", - "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.20.2.tgz", - "integrity": "sha512-Ne+eE4r0/iWnpAxD852z3A+N0Bt5RN//NjJwRd2VFHEmrywxf5vsZlh4R6lixl6B+wz/8d+maTSAkN1FIkI3LQ==", - "license": "(MIT OR CC0-1.0)", - "engines": { - "node": ">=10" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, "node_modules/type-is": { "version": "1.6.18", "resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz", @@ -10348,9 +8954,9 @@ } }, "node_modules/typescript": { - "version": "5.7.3", - "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.7.3.tgz", - "integrity": "sha512-84MVSjMEHP+FQRPy3pX9sTVV/INIex71s9TL2Gm5FG/WG1SqXeKyZ0k7/blY/4FdOzI12CBy1vGc4og/eus0fw==", + "version": "5.8.3", + "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.8.3.tgz", + "integrity": "sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ==", "license": "Apache-2.0", "peer": true, "bin": { @@ -10380,9 +8986,9 @@ } }, "node_modules/undici-types": { - "version": "6.20.0", - "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.20.0.tgz", - "integrity": "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg==", + "version": "6.21.0", + "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz", + "integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==", "license": "MIT" }, "node_modules/unicode-canonical-property-names-ecmascript": { @@ -10425,23 +9031,11 @@ "node": ">=4" } }, - "node_modules/unicorn-magic": { - "version": "0.3.0", - "resolved": "https://registry.npmjs.org/unicorn-magic/-/unicorn-magic-0.3.0.tgz", - "integrity": "sha512-+QBBXBCvifc56fsbuxZQ6Sic3wqqc3WWaqxs58gvJrcOuN83HGTCwz3oS5phzU9LthRNE9VrJCFCLUgHeeFnfA==", - "dev": true, - "license": "MIT", - "engines": { - "node": ">=18" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, "node_modules/universalify": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.1.tgz", "integrity": "sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==", + "dev": true, "license": "MIT", "engines": { "node": ">= 10.0.0" @@ -10457,9 +9051,9 @@ } }, "node_modules/update-browserslist-db": { - "version": "1.1.2", - "resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.1.2.tgz", - "integrity": "sha512-PPypAm5qvlD7XMZC3BujecnaOxwhrtoFR+Dqkk5Aa/6DssiH0ibKoketaj9w8LP7Bont1rYeoV5plxD7RTEPRg==", + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.1.3.tgz", + "integrity": "sha512-UxhIZQ+QInVdunkDAaiazvvT/+fXL5Osr0JZlJulepYu6Jd7qJtDZjlur0emRlT71EN3ScPoE7gvsuIKKNavKw==", "funding": [ { "type": "opencollective", @@ -10495,12 +9089,6 @@ "punycode": "^2.1.0" } }, - "node_modules/utf8-byte-length": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/utf8-byte-length/-/utf8-byte-length-1.0.5.tgz", - "integrity": "sha512-Xn0w3MtiQ6zoz2vFyUVruaCL53O/DwUvkEeOvj+uulMm0BkUGYWmBYVyElqZaSLhY6ZD0ulfU3aBra2aVT4xfA==", - "license": "(WTFPL OR MIT)" - }, "node_modules/util-deprecate": { "version": "1.0.2", "resolved": 
"https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", @@ -10529,6 +9117,7 @@ "version": "3.0.4", "resolved": "https://registry.npmjs.org/validate-npm-package-license/-/validate-npm-package-license-3.0.4.tgz", "integrity": "sha512-DpKm2Ui/xN7/HQKCtpZxoRWBhZ9Z0kqtygG8XCgNQ8ZlDnxuQmWhj566j8fN4Cu3/JmbhsDo7fcAJq4s9h27Ew==", + "dev": true, "license": "Apache-2.0", "dependencies": { "spdx-correct": "^3.0.0", @@ -10567,13 +9156,14 @@ } }, "node_modules/webpack": { - "version": "5.97.1", - "resolved": "https://registry.npmjs.org/webpack/-/webpack-5.97.1.tgz", - "integrity": "sha512-EksG6gFY3L1eFMROS/7Wzgrii5mBAFe4rIr3r2BTfo7bcc+DWwFZ4OJ/miOuHJO/A85HwyI4eQ0F6IKXesO7Fg==", + "version": "5.99.7", + "resolved": "https://registry.npmjs.org/webpack/-/webpack-5.99.7.tgz", + "integrity": "sha512-CNqKBRMQjwcmKR0idID5va1qlhrqVUKpovi+Ec79ksW8ux7iS1+A6VqzfZXgVYCFRKl7XL5ap3ZoMpwBJxcg0w==", "license": "MIT", "dependencies": { "@types/eslint-scope": "^3.7.7", "@types/estree": "^1.0.6", + "@types/json-schema": "^7.0.15", "@webassemblyjs/ast": "^1.14.1", "@webassemblyjs/wasm-edit": "^1.14.1", "@webassemblyjs/wasm-parser": "^1.14.1", @@ -10590,9 +9180,9 @@ "loader-runner": "^4.2.0", "mime-types": "^2.1.27", "neo-async": "^2.6.2", - "schema-utils": "^3.2.0", + "schema-utils": "^4.3.2", "tapable": "^2.1.1", - "terser-webpack-plugin": "^5.3.10", + "terser-webpack-plugin": "^5.3.11", "watchpack": "^2.4.1", "webpack-sources": "^3.2.3" }, @@ -10712,14 +9302,15 @@ } }, "node_modules/webpack-dev-server": { - "version": "5.2.0", - "resolved": "https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-5.2.0.tgz", - "integrity": "sha512-90SqqYXA2SK36KcT6o1bvwvZfJFcmoamqeJY7+boioffX9g9C0wjjJRGUrQIuh43pb0ttX7+ssavmj/WN2RHtA==", + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-5.2.1.tgz", + "integrity": "sha512-ml/0HIj9NLpVKOMq+SuBPLHcmbG+TGIjXRHsYfZwocUBIqEvws8NnS/V9AFQ5FKP+tgn5adwVwRrTEpGL33QFQ==", "license": "MIT", "dependencies": { "@types/bonjour": "^3.5.13", "@types/connect-history-api-fallback": "^1.5.4", "@types/express": "^4.17.21", + "@types/express-serve-static-core": "^4.17.21", "@types/serve-index": "^1.9.4", "@types/serve-static": "^1.15.5", "@types/sockjs": "^0.3.36", @@ -10795,9 +9386,9 @@ } }, "node_modules/webpack-dev-server/node_modules/open": { - "version": "10.1.0", - "resolved": "https://registry.npmjs.org/open/-/open-10.1.0.tgz", - "integrity": "sha512-mnkeQ1qP5Ue2wd+aivTD3NHd/lZ96Lu0jgf0pwktLPtx6cTZiH7tyeGRRHs0zX0rbrahXPnXlUnbeXyaBBuIaw==", + "version": "10.1.1", + "resolved": "https://registry.npmjs.org/open/-/open-10.1.1.tgz", + "integrity": "sha512-zy1wx4+P3PfhXSEPJNtZmJXfhkkIaxU1VauWIrDZw1O7uJRDRJtKr9n3Ic4NgbA16KyOxOXO2ng9gYwCdXuSXA==", "license": "MIT", "dependencies": { "default-browser": "^5.2.1", @@ -10835,24 +9426,6 @@ "node": ">=10.13.0" } }, - "node_modules/webpack/node_modules/schema-utils": { - "version": "3.3.0", - "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-3.3.0.tgz", - "integrity": "sha512-pN/yOAvcC+5rQ5nERGuwrjLlYvLTbCibnZ1I7B1LaiAz9BRBlE9GMgE/eqV30P7aJQUf7Ddimy/RsbYO/GrVGg==", - "license": "MIT", - "dependencies": { - "@types/json-schema": "^7.0.8", - "ajv": "^6.12.5", - "ajv-keywords": "^3.5.2" - }, - "engines": { - "node": ">= 10.13.0" - }, - "funding": { - "type": "opencollective", - "url": "https://opencollective.com/webpack" - } - }, "node_modules/webpack/node_modules/tapable": { "version": "2.2.1", "resolved": "https://registry.npmjs.org/tapable/-/tapable-2.2.1.tgz", @@ 
-10965,15 +9538,16 @@ } }, "node_modules/which-typed-array": { - "version": "1.1.18", - "resolved": "https://registry.npmjs.org/which-typed-array/-/which-typed-array-1.1.18.tgz", - "integrity": "sha512-qEcY+KJYlWyLH9vNbsr6/5j59AXk5ni5aakf8ldzBvGde6Iz4sxZGkJyWSAueTG7QhOvNRYb1lDdFmL5Td0QKA==", + "version": "1.1.19", + "resolved": "https://registry.npmjs.org/which-typed-array/-/which-typed-array-1.1.19.tgz", + "integrity": "sha512-rEvr90Bck4WZt9HHFC4DJMsjvu7x+r6bImz0/BrbWb7A2djJ8hnZMrWnHo9F8ssv0OMErasDhftrfROTyqSDrw==", "license": "MIT", "dependencies": { "available-typed-arrays": "^1.0.7", "call-bind": "^1.0.8", - "call-bound": "^1.0.3", - "for-each": "^0.3.3", + "call-bound": "^1.0.4", + "for-each": "^0.3.5", + "get-proto": "^1.0.1", "gopd": "^1.2.0", "has-tostringtag": "^1.0.2" }, @@ -11017,16 +9591,10 @@ "url": "https://github.com/chalk/wrap-ansi?sponsor=1" } }, - "node_modules/wrappy": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", - "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==", - "license": "ISC" - }, "node_modules/ws": { - "version": "8.18.0", - "resolved": "https://registry.npmjs.org/ws/-/ws-8.18.0.tgz", - "integrity": "sha512-8VbfWfHLbbwu3+N6OKsOMpBdT4kXPDDB9cJk2bJ6mh9ucxdlnNvH1e+roYkKmN9Nxw2yjz7VzeO9oOz2zJ04Pw==", + "version": "8.18.1", + "resolved": "https://registry.npmjs.org/ws/-/ws-8.18.1.tgz", + "integrity": "sha512-RKW2aJZMXeMxVpnZ6bck+RswznaxmzdULiBr6KY7XkTnW8uvt0iT9H5DkHUChXrc+uurzwa0rVI16n/Xzjdz1w==", "license": "MIT", "engines": { "node": ">=10.0.0" @@ -11061,9 +9629,9 @@ "license": "ISC" }, "node_modules/yaml": { - "version": "2.7.0", - "resolved": "https://registry.npmjs.org/yaml/-/yaml-2.7.0.tgz", - "integrity": "sha512-+hSoy/QHluxmC9kCIJyL/uyFmLmc+e5CFR5Wa+bpIhIj85LVb9ZH2nVnqrHoSvKogwODv0ClqZkmiSSaIH5LTA==", + "version": "2.7.1", + "resolved": "https://registry.npmjs.org/yaml/-/yaml-2.7.1.tgz", + "integrity": "sha512-10ULxpnOCQXxJvBgxsn9ptjq6uviG/htZKk9veJGhlqn3w/DxQ631zFF+nlQXLwmImeS5amR2dl2U8sg6U9jsQ==", "dev": true, "license": "ISC", "bin": { diff --git a/docs/package.json b/docs/package.json index a64c6c5b16ac..e637059c57ee 100644 --- a/docs/package.json +++ b/docs/package.json @@ -27,35 +27,30 @@ "license": "Apache License 2.0", "homepage": "https://github.com/yugabyte/yugabyte-db/docs#readme", "dependencies": { - "@babel/core": "7.26.8", - "@babel/eslint-parser": "7.26.8", + "@babel/core": "7.26.10", "@babel/plugin-proposal-decorators": "7.25.9", - "@babel/preset-env": "7.26.8", + "@babel/preset-env": "7.26.9", "@fortawesome/fontawesome-pro": "6.7.2", + "@stylistic/eslint-plugin": "4.2.0", "algoliasearch": "4.23.3", - "babel-loader": "9.2.1", + "babel-loader": "10.0.0", "clipboard": "2.0.11", "detect-external-link": "2.0.1", - "eslint": "8.56.0", - "eslint-config-airbnb": "19.0.4", - "eslint-config-xo-space": "0.35.0", + "eslint": "9.25.1", "eslint-plugin-import": "2.31.0", - "eslint-plugin-jsx-a11y": "6.10.2", - "eslint-webpack-plugin": "4.2.0", - "hugo-algolia": "1.2.14", - "npm-run-all": "4.1.5", - "react-dev-tools": "0.0.1", - "react-dev-utils": "12.0.1", - "run-p": "0.0.0", - "sass": "1.84.0", - "webpack": "5.97.1", + "eslint-webpack-plugin": "5.0.1", + "globals": "16.0.0", + "sass": "1.87.0", + "webpack": "5.99.7", "webpack-cli": "6.0.1", - "webpack-dev-server": "5.2.0", + "webpack-dev-server": "5.2.1", "yb-rrdiagram": "0.0.7" }, "devDependencies": { - "autoprefixer": "10.4.20", - "postcss": "8.5.1", - "postcss-cli": "11.0.0" + "autoprefixer": 
"10.4.21", + "npm-run-all": "4.1.5", + "postcss": "8.5.3", + "postcss-cli": "11.0.1", + "run-p": "0.0.0" } } From 30079f0eb0b7a84ea721d94d9fb312fd9f0299f9 Mon Sep 17 00:00:00 2001 From: Jethro Mak <88681329+Jethro-M@users.noreply.github.com> Date: Fri, 9 May 2025 19:21:35 -0400 Subject: [PATCH 038/146] [PLAT-17437] Add enable node agent migration flow to YBA UI Summary: This diff adds a banner to the dashboard page which appears whenever the user does not have the runtime config `yb.node_agent.enabler.run_installer` set to `true`. This banner serves to nudge the user into enabling this runtime config flag so YBA is able to automatically install Node Agent on all existing universes without Node Agent. Here is the related backend change whcih introduced the runtime config flag for automatically installing Node Agent: e872607bacf8895b6b966507fc4d6228bfdd72ad / D43566 The change was introduced to let users opt-in to the migration at a time which is convenient for them instead of pushing the migration onto users as soon as they upgrade their YBA version. It is worth noting that the `nodeAgentMissing` field does not get updated if a universe is paused. Test Plan: Verify the Node Agent banner text. Verify the `Enable Automatic Node Agent Migration` button opens a modal for the user to confirm their intent to enable the migration. Verify the modal appearance is as expected. Verify the modal submission modifies the runtime config. {F353917} **Case 1: Automatic Node Agent migration disabled & no installation failures** {F355715} **Case 2: Automatic Node Agent migration disabled & has installation failures** {F355714} **Case 3: Automatic Node Agent migration enabled & no installation failures** {F355712} **Case 4: Automatic Node Agent migration enabled & has installation failures** {F355713} Verify that `nodeAgentMissing` gets set to true if the user manually remove node agent from a universe node. If a universe is paused, the field does not get updated until the universe is resumed. 
Reviewers: nsingh, rmadhavan, nbhatia Reviewed By: nsingh Differential Revision: https://phorge.dev.yugabyte.com/D43602 --- .../UniverseDisplayPanel.tsx | 18 +++- .../UniverseDetail/UniverseDetail.js | 9 +- ...bleAutomaticNodeAgentInstallationModal.tsx | 85 ++++++++++++++++ .../InstallNodeAgentReminderBanner.tsx | 99 +++++++++++++++---- .../universe/universe-form/utils/dto.ts | 2 + managed/ui/src/redesign/helpers/constants.ts | 3 +- managed/ui/src/translations/en.json | 21 +++- 7 files changed, 206 insertions(+), 31 deletions(-) create mode 100644 managed/ui/src/redesign/features/NodeAgent/EnableAutomaticNodeAgentInstallationModal.tsx diff --git a/managed/ui/src/components/panels/UniverseDisplayPanel/UniverseDisplayPanel.tsx b/managed/ui/src/components/panels/UniverseDisplayPanel/UniverseDisplayPanel.tsx index 6144e9a29f67..e665c33c4432 100644 --- a/managed/ui/src/components/panels/UniverseDisplayPanel/UniverseDisplayPanel.tsx +++ b/managed/ui/src/components/panels/UniverseDisplayPanel/UniverseDisplayPanel.tsx @@ -4,7 +4,6 @@ import { Link } from 'react-router'; import { Row, Col } from 'react-bootstrap'; import { UniverseCard } from './UniverseCard'; import { useQuery } from 'react-query'; -import { makeStyles } from '@material-ui/core'; import { RbacValidator } from '../../../redesign/features/rbac/common/RbacApiPermValidator'; import { ApiPermissionMap } from '../../../redesign/features/rbac/ApiAndUserPermMapping'; @@ -85,14 +84,22 @@ export const UniverseDisplayPanel = ({ ); }); } + const hasUniverseMissingNodeAgent = universeList.data.some( + (universe: Universe) => universe.universeDetails.nodeAgentMissing === true + ); const nodeAgentEnablerScanInterval = globalRuntimeConfigQuery.data?.configEntries?.find( (configEntry: RunTimeConfigEntry) => configEntry.key === RuntimeConfigKey.NODE_AGENT_ENABLER_SCAN_INTERVAL )?.value ?? ''; const isNodeAgentEnabled = getIsNodeAgentEnabled(nodeAgentEnablerScanInterval); + const isAutoNodeAgentInstallationEnabled = + globalRuntimeConfigQuery.data?.configEntries?.find( + (configEntry: RunTimeConfigEntry) => + configEntry.key === RuntimeConfigKey.ENABLE_AUTO_NODE_AGENT_INSTALLATION + )?.value === 'true' ?? false; - const showNodeAgentBanner = isNodeAgentEnabled && hasNodeAgentFailures; + const showNodeAgentInstallReminderBanner = isNodeAgentEnabled && hasUniverseMissingNodeAgent; return (
@@ -120,7 +127,12 @@ export const UniverseDisplayPanel = ({ )} - {showNodeAgentBanner && } + {showNodeAgentInstallReminderBanner && ( + + )} {universeDisplayList}
); diff --git a/managed/ui/src/components/universes/UniverseDetail/UniverseDetail.js b/managed/ui/src/components/universes/UniverseDetail/UniverseDetail.js index dc5c358573e9..a3ea9794af48 100644 --- a/managed/ui/src/components/universes/UniverseDetail/UniverseDetail.js +++ b/managed/ui/src/components/universes/UniverseDetail/UniverseDetail.js @@ -401,8 +401,7 @@ class UniverseDetail extends Component { const providerUUID = primaryCluster?.userIntent?.provider; const provider = providers.data.find((provider) => provider.uuid === providerUUID); const isProviderNodeAgentEnabled = provider?.details?.enableNodeAgent; - const isNodeAgentInstallationPending = - universe?.currentUniverse?.data?.universeDetails?.installNodeAgent; + const isNodeAgentMissing = universe?.currentUniverse?.data?.universeDetails?.nodeAgentMissing; let onPremSkipProvisioning = false; if (provider && provider.code === 'onprem') { @@ -1522,9 +1521,7 @@ class UniverseDetail extends Component { onClick={showInstallNodeAgentModal} > - {isNodeAgentInstallationPending - ? 'Install Node Agent' - : 'Reinstall Node Agent'} + {isNodeAgentMissing ? 'Install Node Agent' : 'Reinstall Node Agent'} @@ -1798,7 +1795,7 @@ class UniverseDetail extends Component { universeUuid={currentUniverse.data.universeUUID} nodeNames={nodeNames} isUniverseAction={true} - isReinstall={!isNodeAgentInstallationPending} + isReinstall={!isNodeAgentMissing} /> diff --git a/managed/ui/src/redesign/features/NodeAgent/EnableAutomaticNodeAgentInstallationModal.tsx b/managed/ui/src/redesign/features/NodeAgent/EnableAutomaticNodeAgentInstallationModal.tsx new file mode 100644 index 000000000000..a502e9207567 --- /dev/null +++ b/managed/ui/src/redesign/features/NodeAgent/EnableAutomaticNodeAgentInstallationModal.tsx @@ -0,0 +1,85 @@ +import { Box, Typography, useTheme } from '@material-ui/core'; +import { AxiosError } from 'axios'; +import { useState } from 'react'; +import { Trans, useTranslation } from 'react-i18next'; +import { useMutation } from 'react-query'; + +import { YBInput, YBModal, YBModalProps } from '../../components'; +import { RuntimeConfigKey } from '../../helpers/constants'; +import { api } from '../universe/universe-form/utils/api'; +import { handleServerError } from '../../../utils/errorHandlingUtils'; + +interface EnableAutomaticNodeAgentInstallationModalProps { + modalProps: YBModalProps; +} + +const TRANSLATION_KEY_PREFIX = 'nodeAgent.enableAutomaticNodeAgentInstallationModal'; +const COMPONENT_NAME = 'EnableAutomaticNodeAgentInstallationModal'; +const ACCEPTED_CONFIRMATION_TEXT = 'YES'; +export const EnableAutomaticNodeAgentInstallationModal = ({ + modalProps +}: EnableAutomaticNodeAgentInstallationModalProps) => { + const [confirmationText, setConfirmationText] = useState(''); + const [isSubmitting, setIsSubmitting] = useState(false); + const { t } = useTranslation('translation', { keyPrefix: TRANSLATION_KEY_PREFIX }); + const theme = useTheme(); + const enableAutomaticNodeAgentInstallation = useMutation( + () => + api.setRunTimeConfig({ + key: RuntimeConfigKey.ENABLE_AUTO_NODE_AGENT_INSTALLATION, + value: true + }), + { + onSuccess: () => { + modalProps.onClose(); + }, + onError: (error: AxiosError | Error) => { + handleServerError(error, { + customErrorLabel: 'Failed to enable automatic node agent installation.' 
+ }); + } + } + ); + + const resetModal = () => { + setIsSubmitting(false); + setConfirmationText(''); + }; + const onSubmit = () => { + setIsSubmitting(true); + enableAutomaticNodeAgentInstallation.mutate(undefined /* variables */, { + onSettled: () => resetModal() + }); + }; + + const isFormDisabled = isSubmitting || confirmationText !== ACCEPTED_CONFIRMATION_TEXT; + + return ( + + + }} /> + + + {t('confirmationText')} + setConfirmationText(event.target.value)} + /> + + + ); +}; diff --git a/managed/ui/src/redesign/features/NodeAgent/InstallNodeAgentReminderBanner.tsx b/managed/ui/src/redesign/features/NodeAgent/InstallNodeAgentReminderBanner.tsx index b5c4efab3043..65e3ec2f7919 100644 --- a/managed/ui/src/redesign/features/NodeAgent/InstallNodeAgentReminderBanner.tsx +++ b/managed/ui/src/redesign/features/NodeAgent/InstallNodeAgentReminderBanner.tsx @@ -1,5 +1,5 @@ import { useState } from 'react'; -import { Box, Collapse, makeStyles, Typography, useTheme, withWidth } from '@material-ui/core'; +import { Box, Collapse, makeStyles, Typography, useTheme } from '@material-ui/core'; import { Trans, useTranslation } from 'react-i18next'; import { browserHistory } from 'react-router'; @@ -8,6 +8,12 @@ import { NODE_AGENT_FAQ_DOCS_URL, NODE_AGENT_PREREQ_DOCS_URL } from './constants import { YBButton } from '../../components/YBButton/YBButton'; import { YBExternalLink } from '../../components/YBLink/YBExternalLink'; import { getStoredBooleanValue } from '../../helpers/utils'; +import { EnableAutomaticNodeAgentInstallationModal } from './EnableAutomaticNodeAgentInstallationModal'; + +interface InstallNodeAgentReminderBannerProps { + isAutoNodeAgentInstallationEnabled: boolean; + hasNodeAgentFailures: boolean; +} const useStyles = makeStyles((theme) => ({ banner: { @@ -65,13 +71,21 @@ const useStyles = makeStyles((theme) => ({ } })); -const TRANSLATION_KEY_PREFIX = 'dashboard.nodeAgentReminderBanner'; +const TRANSLATION_KEY_PREFIX = 'dashboard.installNodeAgentReminderBanner'; const IS_BANNER_EXPANDED_LOCAL_STORAGE_KEY = 'isInstallNodeAgentReminderBannerExpanded'; +const TEST_COMPONENT_NAME = 'InstallNodeAgentReminderBanner'; -export const InstallNodeAgentReminderBanner = () => { +export const InstallNodeAgentReminderBanner = ({ + isAutoNodeAgentInstallationEnabled, + hasNodeAgentFailures +}: InstallNodeAgentReminderBannerProps) => { const [isBannerExpanded, setIsBannerExpanded] = useState(() => getStoredBooleanValue(IS_BANNER_EXPANDED_LOCAL_STORAGE_KEY, true) ); + const [ + isEnableAutomaticNodeAgentInstallationModalOpen, + setIsEnableAutomaticNodeAgentInstallationModalOpen + ] = useState(false); const classes = useStyles(); const theme = useTheme(); const { t } = useTranslation('translation', { keyPrefix: TRANSLATION_KEY_PREFIX }); @@ -82,6 +96,7 @@ export const InstallNodeAgentReminderBanner = () => { localStorage.setItem(IS_BANNER_EXPANDED_LOCAL_STORAGE_KEY, newValue.toString()); }; const redirectToNodeAgentPage = () => browserHistory.push('/nodeAgent'); + const shouldShowViewNodeAgentButton = isAutoNodeAgentInstallationEnabled || hasNodeAgentFailures; return (
@@ -112,29 +127,79 @@ export const InstallNodeAgentReminderBanner = () => { - + + + {t('nodeAgentMustBeInstallToUpgradeYba')} + + {isAutoNodeAgentInstallationEnabled && !hasNodeAgentFailures ? ( + <> + {t('automaticInstallationsInProgress')} + {t('manuallyUnregisteredNodeAgent')} + + ) : ( + <> + {!isAutoNodeAgentInstallationEnabled && ( + <> + {t('waysToInstallNodeAgent')} + + {t('howToManuallyInstallNodeAgent')} + + + )} + {t('manuallyUnregisteredNodeAgent')} + {hasNodeAgentFailures && ( + + + ) + }} + /> + + )} + + )} , - nodeAgentPrereqDocsLink: , - paragraph:

+ nodeAgentFaqDocsLink: }} /> - - - - {t('viewNodeAgentsButton')} - + + + {shouldShowViewNodeAgentButton && ( + + {t('viewNodeAgentsButton')} + + )} + {!isAutoNodeAgentInstallationEnabled && ( + setIsEnableAutomaticNodeAgentInstallationModalOpen(true)} + > + {t('enableAutomaticNodeAgentInstallationButton')} + + )}

+ {isEnableAutomaticNodeAgentInstallationModalOpen && ( + setIsEnableAutomaticNodeAgentInstallationModalOpen(false) + }} + /> + )}
); }; diff --git a/managed/ui/src/redesign/features/universe/universe-form/utils/dto.ts b/managed/ui/src/redesign/features/universe/universe-form/utils/dto.ts index 371c0c1933e8..4228aadf34d3 100644 --- a/managed/ui/src/redesign/features/universe/universe-form/utils/dto.ts +++ b/managed/ui/src/redesign/features/universe/universe-form/utils/dto.ts @@ -248,6 +248,8 @@ export interface UniverseDetails { prevYBSoftwareConfig: { softwareVersion: string }; universePaused: boolean; xclusterInfo: any; + installNodeAgent: boolean; + nodeAgentMissing: boolean; runOnlyPrechecks?: boolean; } diff --git a/managed/ui/src/redesign/helpers/constants.ts b/managed/ui/src/redesign/helpers/constants.ts index 6c50e453b20f..e71ef9d78ffe 100644 --- a/managed/ui/src/redesign/helpers/constants.ts +++ b/managed/ui/src/redesign/helpers/constants.ts @@ -82,7 +82,8 @@ export const RuntimeConfigKey = { NODE_AGENT_CLIENT_ENABLE: 'yb.node_agent.client.enabled', NODE_AGENT_ENABLER_SCAN_INTERVAL: 'yb.node_agent.enabler.scan_interval', HYPERDISKS_STORAGE_TYPE: 'yb.gcp.show_hyperdisks_storage_type', - CIPHERTRUST_KMS_ENABLE: 'yb.kms.allow_ciphertrust' + CIPHERTRUST_KMS_ENABLE: 'yb.kms.allow_ciphertrust', + ENABLE_AUTO_NODE_AGENT_INSTALLATION: 'yb.node_agent.enabler.run_installer' } as const; /** diff --git a/managed/ui/src/translations/en.json b/managed/ui/src/translations/en.json index b33ba6c9c6ff..8d78386d37dd 100644 --- a/managed/ui/src/translations/en.json +++ b/managed/ui/src/translations/en.json @@ -99,10 +99,17 @@ "yba_ha_last_backup_size_mb": "HA Last Backup Size (MB)" }, "dashboard": { - "nodeAgentReminderBanner": { - "primaryText": "Automatic installation of node agent failed on some nodes.", - "additionalDetailText": "Node agent manages communication between universe nodes and YugabyteDB Anywhere, and must be installed on all nodes before you can upgrade to future versions of YugabyteDB Anywhere.Click View Node Agents to find the problem nodes, verify they fulfill the node prerequisites and retry the installation.Learn more about Node Agent.", - "viewNodeAgentsButton": "View Node Agents" + "installNodeAgentReminderBanner": { + "primaryText": "Node agent needs to be installed on some nodes", + "nodeAgentMustBeInstallToUpgradeYba": "Node agent manages communication between universe nodes and YugabyteDB Anywhere, and must be installed on all nodes before you can upgrade to future versions of YugabyteDB Anywhere.", + "waysToInstallNodeAgent": "YugabyteDB Anywhere can automatically install node agent on universes that require it, or you can install node agent manually.", + "howToManuallyInstallNodeAgent": "To manually install node agent, navigate to the universe and choose Actions>More>Install Node Agent.", + "automaticInstallationsInProgress": "Automatic installation of node agent is in progress. Click View Node Agents to see which nodes have node agent installed.", + "manuallyUnregisteredNodeAgent": "If you manually removed node agent on any nodes, then you must also manually reinstall by navigating to the universe and choosing Actions>More>Install Node Agent. The automatic installation of node agent will not reinstall node agent on those nodes for you.", + "someNodeAgentsAreNotFunctional": "We found some universes where node agent is not functional. 
Click View Node Agents to find the problem nodes, verify they fulfill the node prerequisites and retry the installation by choosing Actions>Reinstall Node Agent.", + "learnMoreText": "Learn more about Node Agent.", + "viewNodeAgentsButton": "View Node Agents", + "enableAutomaticNodeAgentInstallationButton": " Automatically Install Node Agents" } }, "universeActions": { @@ -1748,6 +1755,12 @@ "TIMED_OUT": "Timed Out", "universePaused": "Universe Paused" }, + "enableAutomaticNodeAgentInstallationModal": { + "title": "Automatically Install Node Agent", + "infoText": "YugabyteDB Anywhere can automatically install node agent on universes that require it. Installation is quick and does not impact application or database performance.Some universe tasks will be unavailable while installation is in progress. If other tasks are in progress (for example, backups), wait until they complete before starting automatic installation.", + "confirmationText": "Type YES to proceeed.", + "submitLabel": "Start" + }, "installNodeAgentModal": { "title": { "install": "Install Node Agent", From f4a8deeca2af1860416a5879578bb68c2b51610b Mon Sep 17 00:00:00 2001 From: Zachary Drudi Date: Mon, 12 May 2025 10:52:16 -0400 Subject: [PATCH 039/146] [#27140] docdb: Fix broken ash test. Summary: WaitStateITest/AshTestVerifyPgOccurrence.VerifyWaitStateEntered/kWaitingOnTServer was broken on asan and tsan builds by 53146f9da4877244f931761920a5159a5b8da412. Jira: DB-16620 Test Plan: ``` % ./yb_build.sh tsan --cxx-test integration-tests_wait_states-itest --gtest_filter WaitStateITest/AshTestVerifyPgOccurrence.VerifyWaitStateEntered/kWaitingOnTServer -n 10 --stop-at-failure % ./yb_build.sh asan --cxx-test integration-tests_wait_states-itest --gtest_filter WaitStateITest/AshTestVerifyPgOccurrence.VerifyWaitStateEntered/kWaitingOnTServer -n 10 --stop-at-failure Reviewers: amitanand Reviewed By: amitanand Subscribers: rthallam, slingam, ybase Differential Revision: https://phorge.dev.yugabyte.com/D43920 --- src/yb/integration-tests/wait_states-itest.cc | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/src/yb/integration-tests/wait_states-itest.cc b/src/yb/integration-tests/wait_states-itest.cc index 8d6b187b8a85..174bda091b94 100644 --- a/src/yb/integration-tests/wait_states-itest.cc +++ b/src/yb/integration-tests/wait_states-itest.cc @@ -1035,6 +1035,11 @@ class AshTestVerifyPgOccurrence : public AshTestVerifyPgOccurrenceBase, public ::testing::WithParamInterface { public: AshTestVerifyPgOccurrence() : AshTestVerifyPgOccurrenceBase(GetParam()) {} + + protected: + void OverrideMiniClusterOptions(MiniClusterOptions* options) override { + options->wait_for_pg = false; + } }; INSTANTIATE_TEST_SUITE_P( From 34a34b881c12ac98062070a2eb6efae302d6fd9a Mon Sep 17 00:00:00 2001 From: Hemant Bhanawat Date: Mon, 12 May 2025 17:35:31 +0530 Subject: [PATCH 040/146] [#26986] YSQL: ASH: Rename wait_event_type "Network" to "RPCWait" Summary: In all the occurrences of "Network" wait type, we are actually waiting for a RPC to complete and not only the network. So it is appropriate to rename wait_event_type "Network" to "RPCWait". Jira: DB-16459 Test Plan: Only a rename. Testing is manual. 
Reviewers: amitanand, asaha Reviewed By: amitanand, asaha Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D43694 --- src/yb/ash/wait_state.cc | 10 +++++----- src/yb/ash/wait_state.h | 2 +- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/src/yb/ash/wait_state.cc b/src/yb/ash/wait_state.cc index 00b842522aec..35fcf50e3870 100644 --- a/src/yb/ash/wait_state.cc +++ b/src/yb/ash/wait_state.cc @@ -504,7 +504,7 @@ WaitStateType GetWaitStateType(WaitStateCode code) { case WaitStateCode::kIndexWrite: case WaitStateCode::kTableWrite: case WaitStateCode::kWaitingOnTServer: - return WaitStateType::kNetwork; + return WaitStateType::kRPCWait; case WaitStateCode::kOnCpu_Active: case WaitStateCode::kOnCpu_Passive: @@ -527,7 +527,7 @@ WaitStateType GetWaitStateType(WaitStateCode code) { return WaitStateType::kDiskIO; case WaitStateCode::kTransactionStatusCache_DoGetCommitData: - return WaitStateType::kNetwork; + return WaitStateType::kRPCWait; case WaitStateCode::kWaitForYSQLBackendsCatalogVersion: return WaitStateType::kWaitOnCondition; @@ -541,7 +541,7 @@ WaitStateType GetWaitStateType(WaitStateCode code) { return WaitStateType::kWaitOnCondition; case WaitStateCode::kConflictResolution_ResolveConficts: - return WaitStateType::kNetwork; + return WaitStateType::kRPCWait; case WaitStateCode::kLockedBatchEntry_Lock: case WaitStateCode::kConflictResolution_WaitOnConflictingTxns: @@ -550,7 +550,7 @@ WaitStateType GetWaitStateType(WaitStateCode code) { case WaitStateCode::kRaft_WaitingForReplication: case WaitStateCode::kRemoteBootstrap_StartRemoteSession: case WaitStateCode::kRemoteBootstrap_FetchData: - return WaitStateType::kNetwork; + return WaitStateType::kRPCWait; case WaitStateCode::kRaft_ApplyingEdits: return WaitStateType::kCpu; @@ -593,7 +593,7 @@ WaitStateType GetWaitStateType(WaitStateCode code) { case WaitStateCode::kYBClient_WaitingOnDocDB: case WaitStateCode::kYBClient_LookingUpTablet: case WaitStateCode::kYBClient_WaitingOnMaster: - return WaitStateType::kNetwork; + return WaitStateType::kRPCWait; } FATAL_INVALID_ENUM_VALUE(WaitStateCode, code); } diff --git a/src/yb/ash/wait_state.h b/src/yb/ash/wait_state.h index 99d2e4254e28..29ca6738b034 100644 --- a/src/yb/ash/wait_state.h +++ b/src/yb/ash/wait_state.h @@ -211,7 +211,7 @@ YB_DEFINE_TYPED_ENUM(FixedQueryId, uint8_t, YB_DEFINE_TYPED_ENUM(WaitStateType, uint8_t, (kCpu) (kDiskIO) - (kNetwork) + (kRPCWait) (kWaitOnCondition) (kLock) ); From b163f141eaa5b87af72c2a10f7112239d7c1d68e Mon Sep 17 00:00:00 2001 From: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> Date: Mon, 12 May 2025 23:24:09 -0400 Subject: [PATCH 041/146] [doc] Read committed GA on preview (#27157) * Read committed GA on preview * more tags --- docs/content/preview/architecture/design-goals.md | 4 ++-- .../architecture/transactions/isolation-levels.md | 4 ++-- .../architecture/transactions/read-committed.md | 2 -- .../architecture/transactions/read-restart-error.md | 6 ++++++ .../learn/transactions/acid-transactions-ysql.md | 2 +- .../preview/develop/postgresql-compatibility.md | 6 +++--- .../preview/explore/transactions/isolation-levels.md | 10 +++++----- .../preview/reference/configuration/yb-tserver.md | 10 +++++----- .../administer-yugabyte-platform/anywhere-rbac.md | 4 ++-- .../alerts-monitoring/anywhere-export-configuration.md | 2 +- .../content/stable/develop/postgresql-compatibility.md | 4 ++-- .../v2024.1/develop/postgresql-compatibility.md | 4 ++-- .../v2024.1/reference/configuration/yb-tserver.md | 6 +++--- 13 files 
changed, 34 insertions(+), 30 deletions(-) diff --git a/docs/content/preview/architecture/design-goals.md b/docs/content/preview/architecture/design-goals.md index 644d9e07c06e..f0bc51ec2d9b 100644 --- a/docs/content/preview/architecture/design-goals.md +++ b/docs/content/preview/architecture/design-goals.md @@ -56,7 +56,7 @@ YugabyteDB supports single-row linearizable writes. Linearizability is one of th YugabyteDB supports multi-row transactions with three isolation levels: Serializable, Snapshot (also known as repeatable read), and Read Committed isolation. -- The [YSQL API](../../api/ysql/) supports Serializable, Snapshot (default), and Read Committed isolation {{}} using the PostgreSQL isolation level syntax of `SERIALIZABLE`, `REPEATABLE READ`, and `READ COMMITTED` respectively. For more details, see [Isolation levels](#transaction-isolation-levels). +- The [YSQL API](../../api/ysql/) supports Serializable, Snapshot (default), and Read Committed isolation using the PostgreSQL isolation level syntax of `SERIALIZABLE`, `REPEATABLE READ`, and `READ COMMITTED` respectively. For more details, see [Isolation levels](#transaction-isolation-levels). - The [YCQL API](../../api/ycql/) supports only Snapshot isolation (default) using the [BEGIN TRANSACTION](../../api/ycql/dml_transaction/) syntax. ## Partition Tolerance - CAP @@ -97,7 +97,7 @@ Depending on the use case, the database may need to support diverse workloads, s Transaction isolation is foundational to handling concurrent transactions in databases. YugabyteDB supports three strict transaction isolation levels in [YSQL](../../api/ysql/). -- [Read Committed](../transactions/read-committed/) {{}}, which maps to the SQL isolation level of the same name +- [Read Committed](../transactions/read-committed/), which maps to the SQL isolation level of the same name - [Serializable](../../explore/transactions/isolation-levels/#serializable-isolation), which maps to the SQL isolation level of the same name - [Snapshot](../../explore/transactions/isolation-levels/#snapshot-isolation), which maps to the SQL Repeatable Read isolation level diff --git a/docs/content/preview/architecture/transactions/isolation-levels.md b/docs/content/preview/architecture/transactions/isolation-levels.md index 057977327d99..c5b25a084490 100644 --- a/docs/content/preview/architecture/transactions/isolation-levels.md +++ b/docs/content/preview/architecture/transactions/isolation-levels.md @@ -15,13 +15,13 @@ Transaction isolation is foundational to handling concurrent transactions in dat YugabyteDB supports the following three strictest transaction isolation levels: -1. Read Committed {{}}, which maps to the SQL isolation level of the same name. This isolation level guarantees that each statement sees all data that has been committed before it is issued (this implicitly also means that the statement sees a consistent snapshot). In addition, this isolation level internally handles read restart and conflict errors. In other words, the client does not see read restart and conflict errors (barring an exception). +1. Read Committed, which maps to the SQL isolation level of the same name. This isolation level guarantees that each statement sees all data that has been committed before it is issued (this implicitly also means that the statement sees a consistent snapshot). In addition, this isolation level internally handles read restart and conflict errors. In other words, the client does not see read restart and conflict errors (barring an exception). 2. 
Serializable, which maps to the SQL isolation level of the same name. This isolation level guarantees that transactions run in a way equivalent to a serial (sequential) schedule. 3. Snapshot, which maps to the SQL Repeatable Read isolation level. This isolation level guarantees that all reads made in a transaction see a consistent snapshot of the database, and the transaction itself can successfully commit only if no updates it has made conflict with any concurrent updates made by transactions that committed after that snapshot. Transaction isolation level support differs between the YSQL and YCQL APIs: -- [YSQL](../../../api/ysql/) supports Serializable, Snapshot, and Read Committed {{}} isolation levels. +- [YSQL](../../../api/ysql/) supports Serializable, Snapshot, and Read Committed isolation levels. - [YCQL](../../../api/ycql/dml_transaction/) supports only Snapshot isolation using the `BEGIN TRANSACTION` syntax. Similarly to PostgreSQL, you can specify Read Uncommitted for YSQL, but it behaves the same as Read Committed. diff --git a/docs/content/preview/architecture/transactions/read-committed.md b/docs/content/preview/architecture/transactions/read-committed.md index f6eea11c8b1f..6709ad122066 100644 --- a/docs/content/preview/architecture/transactions/read-committed.md +++ b/docs/content/preview/architecture/transactions/read-committed.md @@ -3,8 +3,6 @@ title: Read Committed isolation level headerTitle: Read Committed isolation level linkTitle: Read Committed description: Details about the Read Committed isolation level -tags: - feature: early-access menu: preview: identifier: architecture-read-committed diff --git a/docs/content/preview/architecture/transactions/read-restart-error.md b/docs/content/preview/architecture/transactions/read-restart-error.md index 2bf9c52d0450..aaad015c7d97 100644 --- a/docs/content/preview/architecture/transactions/read-restart-error.md +++ b/docs/content/preview/architecture/transactions/read-restart-error.md @@ -16,6 +16,7 @@ rightNav: The distributed nature of YugabyteDB means that clock skew can be present between different physical nodes in the database cluster. Given that YugabyteDB is a multi-version concurrency control (MVCC) database, this clock skew can sometimes result in an unresolvable ambiguity of whether a version of data should, or not be part of a read in snapshot-based transaction isolations (that is, repeatable read and read committed). There are multiple solutions for this problem, [each with their own challenges](https://www.yugabyte.com/blog/evolving-clock-sync-for-distributed-databases/). PostgreSQL doesn't require defining semantics around read restart errors because it is a single-node database without clock skew. Read restart errors are raised to maintain the _read-after-commit-visibility_ guarantee: any read query should see all data that was committed before the read query was issued (even in the presence of clock skew between nodes). In other words, read restart errors prevent the following stale read anomaly: + 1. First, user X commits some data, for which the database picks a commit timestamp, say commit_time. 2. Next, user X informs user Y about the commit via a channel outside the database, say a phone call. 3. Then, user Y issues a read that picks a read time, which is less than the prior commit_time due to clock skew. @@ -32,11 +33,13 @@ The following scenario describes how clock skew can result in the above mentione * Tokens 17, 29 are inserted into an empty tokens table. 
Then, all the tokens from the table are retrieved. The SQL commands for the scenario are as follows: + ```sql INSERT INTO tokens VALUES (17); INSERT INTO tokens VALUES (29); SELECT * FROM tokens; ``` + * The SELECT must return both 17 and 29. * However, due to clock skew, the INSERT operation picks a commit time higher than the reference time, while the SELECT picks a lower read time and thus omits the prior INSERT from the result set. @@ -85,17 +88,20 @@ You can handle and mitigate read restart errors using the following techniques: Examples: Set transaction properties at the session level. + ```sql SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE; SELECT * FROM large_table; ``` Enclose the offending query within a transaction block. + ```sql BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE; SELECT * FROM large_table; COMMIT; ``` + - Using read only, deferrable transactions is not always feasible, either because the query is not read only, or the query is part of a read-write transaction, or because an additional 500ms of latency is not acceptable. In these cases, try increasing the value of `ysql_output_buffer_size`. This will enable YugabyteDB to retry the query internally on behalf of the user. As long as the output of a statement hasn't crossed ysql_output_buffer_size to result in flushing partial data to the external client, the YSQL query layer retries read restart errors for all statements in a Read Committed transaction block, for the first statement in a Repeatable Read transaction block, and for any standalone statement outside a transaction block. As a tradeoff, increasing the buffer size also increases the memory consumed by the YSQL backend processes, resulting in a higher risk of out-of-memory errors. diff --git a/docs/content/preview/develop/learn/transactions/acid-transactions-ysql.md b/docs/content/preview/develop/learn/transactions/acid-transactions-ysql.md index 7af80637c90c..e1792a29dceb 100644 --- a/docs/content/preview/develop/learn/transactions/acid-transactions-ysql.md +++ b/docs/content/preview/develop/learn/transactions/acid-transactions-ysql.md @@ -73,7 +73,7 @@ YugabyteDB supports three kinds of isolation levels to support different applica | Level | Description | | :---- | :---------- | | [Repeatable Read (Snapshot)](../../../../explore/transactions/isolation-levels/#snapshot-isolation) | Only the data that is committed before the transaction began is visible to the transaction. Effectively, the transaction sees the snapshot of the database as of the start of the transaction. {{}}Applications using this isolation level should be designed to [retry](../transactions-retries-ysql#client-side-retry) on serialization failures.{{}} | -| [Read Committed](../../../../explore/transactions/isolation-levels/#read-committed-isolation){{}} | Each statement of the transaction sees the latest data committed by any concurrent transaction just before the execution of the statement. If another transaction has modified a row related to the current transaction, the current transaction waits for the other transaction to commit or rollback its changes. 
{{}} The server internally waits and retries on conflicts, so applications [need not retry](../transactions-retries-ysql#automatic-retries) on serialization failures.{{}} | +| [Read Committed](../../../../explore/transactions/isolation-levels/#read-committed-isolation) | Each statement of the transaction sees the latest data committed by any concurrent transaction just before the execution of the statement. If another transaction has modified a row related to the current transaction, the current transaction waits for the other transaction to commit or rollback its changes. {{}} The server internally waits and retries on conflicts, so applications [need not retry](../transactions-retries-ysql#automatic-retries) on serialization failures.{{}} | | [Serializable](../../../../explore/transactions/isolation-levels/#serializable-isolation) | This is the strictest isolation level and has the effect of all transactions being executed in a serial manner, one after the other rather than in parallel. {{}} Applications using this isolation level should be designed to [retry](../transactions-retries-ysql/#client-side-retry) on serialization failures.{{}} | {{}} diff --git a/docs/content/preview/develop/postgresql-compatibility.md b/docs/content/preview/develop/postgresql-compatibility.md index 84a85ea7a459..78f6b4ab4464 100644 --- a/docs/content/preview/develop/postgresql-compatibility.md +++ b/docs/content/preview/develop/postgresql-compatibility.md @@ -17,11 +17,11 @@ rightNav: YugabyteDB is a [PostgreSQL-compatible](https://www.yugabyte.com/tech/postgres-compatibility/) distributed database that supports the majority of PostgreSQL syntax. YugabyteDB is methodically expanding its features to deliver PostgreSQL-compatible performance that can substantially improve your application's efficiency. -To test and take advantage of features developed for enhanced PostgreSQL compatibility in YugabyteDB that are currently in {{}}, you can enable Enhanced PostgreSQL Compatibility Mode (EPCM). When this mode is turned on, YugabyteDB is configured to use all the latest features developed for feature and performance parity. EPCM is available in [v2024.1](/preview/releases/ybdb-releases/v2024.1/) and later. Here are the features that are part of the EPCM mode. +To test and take advantage of features developed for enhanced PostgreSQL compatibility in YugabyteDB that are currently in {{}}, you can enable Enhanced PostgreSQL Compatibility Mode (EPCM). When this mode is turned on, YugabyteDB is configured to use all the latest features developed for feature and performance parity. EPCM is available in [v2024.1](/preview/releases/ybdb-releases/v2024.1/) and later. The following features are part of EPCM. 
| Feature | Flag/Configuration Parameter | EA | GA | | :--- | :--- | :--- | :--- | -| [Read committed](#read-committed) | [yb_enable_read_committed_isolation](../../reference/configuration/yb-tserver/#ysql-default-transaction-isolation) | {{}} | | +| [Read committed](#read-committed) | [yb_enable_read_committed_isolation](../../reference/configuration/yb-tserver/#ysql-default-transaction-isolation) | {{}} | {{}} | | [Wait-on-conflict](#wait-on-conflict-concurrency) | [enable_wait_queues](../../reference/configuration/yb-tserver/#enable-wait-queues) | {{}} | {{}} | | [Cost based optimizer](#cost-based-optimizer) | [yb_enable_base_scans_cost_model](../../reference/configuration/yb-tserver/#yb-enable-base-scans-cost-model) | {{}} | | | [Batch nested loop join](#batched-nested-loop-join) | [yb_enable_batchednl](../../reference/configuration/yb-tserver/#yb-enable-batchednl) | {{}} | {{}} | @@ -110,7 +110,7 @@ Default ascending indexing provides feature compatibility and is the default in Configuration parameter: `yb_enable_bitmapscan=true` -Bitmap scans use multiple indexes to answer a query, with only one scan of the main table. Each index produces a "bitmap" indicating which rows of the main table are interesting. Bitmap scans can improve the performance of queries containing AND and OR conditions across several index scans. YugabyteDB bitmap scan provides feature compatibility and improved performance parity. For YugabyteDB relations to use a bitmap scan, the PostgreSQL parameter `enable_bitmapscan` must also be true (the default). +Bitmap scans use multiple indexes to answer a query, with only one scan of the main table. Each index produces a "bitmap" indicating which rows of the main table are interesting. Bitmap scans can improve the performance of queries containing `AND` and `OR` conditions across several index scans. YugabyteDB bitmap scan provides feature compatibility and improved performance parity. For YugabyteDB relations to use a bitmap scan, the PostgreSQL parameter `enable_bitmapscan` must also be true (the default). ### Efficient communication between PostgreSQL and DocDB diff --git a/docs/content/preview/explore/transactions/isolation-levels.md b/docs/content/preview/explore/transactions/isolation-levels.md index bb3e036c3374..22fab24ba2fd 100644 --- a/docs/content/preview/explore/transactions/isolation-levels.md +++ b/docs/content/preview/explore/transactions/isolation-levels.md @@ -35,11 +35,11 @@ YugabyteDB supports three isolation levels in the transactional layer: - Serializable - Snapshot -- Read committed {{}} +- Read committed The default isolation level for the YSQL API is effectively Snapshot (that is, the same as PostgreSQL's `REPEATABLE READ`) because, by default, Read committed, which is the YSQL API and PostgreSQL _syntactic_ default, maps to Snapshot isolation. -To enable Read committed (currently in [Early Access](/preview/releases/versioning/#feature-maturity)), you must set the YB-TServer flag `yb_enable_read_committed_isolation` to `true`. By default this flag is `false` and the Read committed isolation level of the YugabyteDB transactional layer falls back to the stricter Snapshot isolation (in which case `READ COMMITTED` and `READ UNCOMMITTED` of YSQL also in turn use Snapshot isolation). +To enable Read committed, you must set the YB-TServer flag `yb_enable_read_committed_isolation` to `true`. 
By default this flag is `false` and the Read committed isolation level of the YugabyteDB transactional layer falls back to the stricter Snapshot isolation (in which case `READ COMMITTED` and `READ UNCOMMITTED` of YSQL also in turn use Snapshot isolation). {{< tip title="Tip" >}} @@ -57,8 +57,8 @@ The following table shows the mapping between the PostgreSQL isolation levels in | PostgreSQL Isolation | YugabyteDB Equivalent | Dirty Read | Non-repeatable Read | Phantom Read | Serialization Anomaly | | :------------------- | :------------------------ | :--------- | :------------------ | :----------- | :-------------------- | -| Read uncommitted | Read Committed {{}} | Allowed, but not in YSQL | Possible | Possible | Possible | -| Read committed | Read Committed {{}} | Not possible | Possible | Possible | Possible | +| Read uncommitted | Read Committed | Allowed, but not in YSQL | Possible | Possible | Possible | +| Read committed | Read Committed | Not possible | Possible | Possible | Possible | | Repeatable read | Snapshot | Not possible | Not possible | Allowed, but not in YSQL | Possible | | Serializable | Serializable | Not possible | Not possible | Not possible | Not possible | @@ -352,7 +352,7 @@ SELECT * FROM example; ## Read committed isolation -{{}}Read committed isolation is the same as Snapshot isolation, except that every statement in the transaction is aware of all data that has been committed before it has been issued (this implicitly means that the statement will see a consistent snapshot). In other words, each statement works on a new snapshot of the database that includes everything that has been committed before the statement is issued. Conflict detection is the same as in Snapshot isolation. +Read committed isolation is the same as Snapshot isolation, except that every statement in the transaction is aware of all data that has been committed before it has been issued (this implicitly means that the statement will see a consistent snapshot). In other words, each statement works on a new snapshot of the database that includes everything that has been committed before the statement is issued. Conflict detection is the same as in Snapshot isolation. Consider an example of transactions' behavior under the Read committed isolation level. diff --git a/docs/content/preview/reference/configuration/yb-tserver.md b/docs/content/preview/reference/configuration/yb-tserver.md index e7c37a824d67..9b70c69a78a7 100644 --- a/docs/content/preview/reference/configuration/yb-tserver.md +++ b/docs/content/preview/reference/configuration/yb-tserver.md @@ -767,13 +767,13 @@ Specifies the default transaction isolation level. Valid values: `SERIALIZABLE`, `REPEATABLE READ`, `READ COMMITTED`, and `READ UNCOMMITTED`. -Default: `READ COMMITTED` {{}} +Default: `READ COMMITTED` -Read Committed support is currently in [Early Access](/preview/releases/versioning/#feature-maturity). [Read Committed Isolation](../../../explore/transactions/isolation-levels/) is supported only if the YB-TServer flag `yb_enable_read_committed_isolation` is set to `true`. By default this flag is `false` and in this case the Read Committed isolation level of the YugabyteDB transactional layer falls back to the stricter Snapshot Isolation (in which case `READ COMMITTED` and `READ UNCOMMITTED` of YSQL also in turn use Snapshot Isolation). +[Read Committed Isolation](../../../explore/transactions/isolation-levels/) is supported only if the YB-TServer flag `yb_enable_read_committed_isolation` is set to `true`. 
By default this flag is `false` and in this case the Read Committed isolation level of the YugabyteDB transactional layer falls back to the stricter Snapshot Isolation (in which case `READ COMMITTED` and `READ UNCOMMITTED` of YSQL also in turn use Snapshot Isolation). ##### --yb_enable_read_committed_isolation -{{}} Enables Read Committed Isolation. By default this flag is false and in this case `READ COMMITTED` (and `READ UNCOMMITTED`) isolation level of YSQL fall back to the stricter [Snapshot Isolation](../../../explore/transactions/isolation-levels/). See [--ysql_default_transaction_isolation](#ysql-default-transaction-isolation) flag for more details. +Enables Read Committed Isolation. By default this flag is false and in this case `READ COMMITTED` (and `READ UNCOMMITTED`) isolation level of YSQL fall back to the stricter [Snapshot Isolation](../../../explore/transactions/isolation-levels/). See [--ysql_default_transaction_isolation](#ysql-default-transaction-isolation) flag for more details. Default: `false` @@ -1812,13 +1812,13 @@ Default: 1024 ##### yb_enable_batchednl -{{}} Enable or disable the query planner's use of batched nested loop join. +Enable or disable the query planner's use of batched nested loop join. Default: true ##### yb_enable_base_scans_cost_model -{{}} Enables the YugabyteDB cost model for Sequential and Index scans. When enabling this parameter, you must run ANALYZE on user tables to maintain up-to-date statistics. +{{}} Enables the YugabyteDB cost model for Sequential and Index scans. When enabling this parameter, you must run ANALYZE on user tables to maintain up-to-date statistics. When enabling the cost based optimizer, ensure that [packed row](../../../architecture/docdb/packed-rows) for colocated tables is enabled by setting `ysql_enable_packed_row_for_colocated_table = true`. diff --git a/docs/content/preview/yugabyte-platform/administer-yugabyte-platform/anywhere-rbac.md b/docs/content/preview/yugabyte-platform/administer-yugabyte-platform/anywhere-rbac.md index 9255636e000a..571f7ad0611f 100644 --- a/docs/content/preview/yugabyte-platform/administer-yugabyte-platform/anywhere-rbac.md +++ b/docs/content/preview/yugabyte-platform/administer-yugabyte-platform/anywhere-rbac.md @@ -14,11 +14,11 @@ type: docs YugabyteDB Anywhere uses a role-based access control (RBAC) model to manage access to your YugabyteDB Anywhere instance (whether via the UI or the REST API). Using roles, you can enforce the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) (PoLP) by ensuring that users have the precise permissions needed to fulfill their roles while mitigating the risk of unauthorized access or accidental breaches. A role defines a set of permissions that determine what features can be accessed by account users who have been assigned that role. -RBAC is also available with fine-grained control over access to universes. Fine-grained RBAC is {{}}; during Early Access, by default fine-grained RBAC is not enabled. See [Manage users](#manage-users). +RBAC is also available with fine-grained control over access to universes. Fine-grained RBAC is {{}}; during Early Access, by default fine-grained RBAC is not enabled. See [Manage users](#manage-users). ## Users and roles -As a Super Admin or Admin, you can invite new users and manage existing users for your YugabyteDB Anywhere instance. How you add and modify users varies depending on whether you have enabled [fine-grained RBAC](#fine-grained-rbac) {{}}. 
You can only assign, create, and modify custom roles if fine-grained RBAC is enabled. +As a Super Admin or Admin, you can invite new users and manage existing users for your YugabyteDB Anywhere instance. How you add and modify users varies depending on whether you have enabled [fine-grained RBAC](#fine-grained-rbac) {{}}. You can only assign, create, and modify custom roles if fine-grained RBAC is enabled. A user can interact with a YugabyteDB Anywhere instance via the UI or [REST API](../../anywhere-automation/anywhere-api/). diff --git a/docs/content/preview/yugabyte-platform/alerts-monitoring/anywhere-export-configuration.md b/docs/content/preview/yugabyte-platform/alerts-monitoring/anywhere-export-configuration.md index 5a8cbe619286..a309e1868bbd 100644 --- a/docs/content/preview/yugabyte-platform/alerts-monitoring/anywhere-export-configuration.md +++ b/docs/content/preview/yugabyte-platform/alerts-monitoring/anywhere-export-configuration.md @@ -30,7 +30,7 @@ For information on how to export logs from a universe using an export configurat ## Prerequisites -Export configuration is {{}}. To enable export configuration management, set the **Enable DB Audit Logging** Global Configuration option (config key `yb.universe.audit_logging_enabled`) to true. Refer to [Manage runtime configuration settings](../../administer-yugabyte-platform/manage-runtime-config/). Note that only a Super Admin user can modify Global configuration settings. The flag can't be turned off if audit logging is enabled on a universe. +Export configuration is {{}}. To enable export configuration management, set the **Enable DB Audit Logging** Global Configuration option (config key `yb.universe.audit_logging_enabled`) to true. Refer to [Manage runtime configuration settings](../../administer-yugabyte-platform/manage-runtime-config/). Note that only a Super Admin user can modify Global configuration settings. The flag can't be turned off if audit logging is enabled on a universe. ## Best practices diff --git a/docs/content/stable/develop/postgresql-compatibility.md b/docs/content/stable/develop/postgresql-compatibility.md index 958b5cf134bc..5763d4c9a1e3 100644 --- a/docs/content/stable/develop/postgresql-compatibility.md +++ b/docs/content/stable/develop/postgresql-compatibility.md @@ -14,7 +14,7 @@ rightNav: YugabyteDB is a [PostgreSQL-compatible](https://www.yugabyte.com/tech/postgres-compatibility/) distributed database that supports the majority of PostgreSQL syntax. YugabyteDB is methodically expanding its features to deliver PostgreSQL-compatible performance that can substantially improve your application's efficiency. -To test and take advantage of features developed for enhanced PostgreSQL compatibility in YugabyteDB that are currently in {{}}, you can enable Enhanced PostgreSQL Compatibility Mode (EPCM). When this mode is turned on, YugabyteDB is configured to use all the latest features developed for feature and performance parity. EPCM is available in [v2024.1](/preview/releases/ybdb-releases/v2024.1/) and later. Here are the features that are part of the EPCM mode. +To test and take advantage of features developed for enhanced PostgreSQL compatibility in YugabyteDB that are currently in {{}}, you can enable Enhanced PostgreSQL Compatibility Mode (EPCM). When this mode is turned on, YugabyteDB is configured to use all the latest features developed for feature and performance parity. EPCM is available in [v2024.1](/preview/releases/ybdb-releases/v2024.1/) and later. The following features are part of EPCM. 
| Feature | Flag/Configuration Parameter | EA | GA | | :--- | :--- | :--- | :--- | @@ -107,7 +107,7 @@ Default ascending indexing provides feature compatibility and is the default in Configuration parameter: `yb_enable_bitmapscan=true` -Bitmap scans use multiple indexes to answer a query, with only one scan of the main table. Each index produces a "bitmap" indicating which rows of the main table are interesting. Bitmap scans can improve the performance of queries containing AND and OR conditions across several index scans. YugabyteDB bitmap scan provides feature compatibility and improved performance parity. For YugabyteDB relations to use a bitmap scan, the PostgreSQL parameter `enable_bitmapscan` must also be true (the default). +Bitmap scans use multiple indexes to answer a query, with only one scan of the main table. Each index produces a "bitmap" indicating which rows of the main table are interesting. Bitmap scans can improve the performance of queries containing `AND` and `OR` conditions across several index scans. YugabyteDB bitmap scan provides feature compatibility and improved performance parity. For YugabyteDB relations to use a bitmap scan, the PostgreSQL parameter `enable_bitmapscan` must also be true (the default). ### Efficient communication between PostgreSQL and DocDB diff --git a/docs/content/v2024.1/develop/postgresql-compatibility.md b/docs/content/v2024.1/develop/postgresql-compatibility.md index 20a87d4f7978..024c51991513 100644 --- a/docs/content/v2024.1/develop/postgresql-compatibility.md +++ b/docs/content/v2024.1/develop/postgresql-compatibility.md @@ -14,7 +14,7 @@ rightNav: YugabyteDB is a [PostgreSQL-compatible](https://www.yugabyte.com/tech/postgres-compatibility/) distributed database that supports the majority of PostgreSQL syntax. YugabyteDB is methodically expanding its features to deliver PostgreSQL-compatible performance that can substantially improve your application's efficiency. -To test and take advantage of features developed for enhanced PostgreSQL compatibility in YugabyteDB that are currently in {{}}, you can enable Enhanced PostgreSQL Compatibility Mode (EPCM). When this mode is turned on, YugabyteDB is configured to use all the latest features developed for feature and performance parity. EPCM is available in [v2024.1](/preview/releases/ybdb-releases/v2024.1/) and later. Here are the features that are part of the EPCM mode. +To test and take advantage of features developed for enhanced PostgreSQL compatibility in YugabyteDB that are currently in {{}}, you can enable Enhanced PostgreSQL Compatibility Mode (EPCM). When this mode is turned on, YugabyteDB is configured to use all the latest features developed for feature and performance parity. EPCM is available in [v2024.1](/preview/releases/ybdb-releases/v2024.1/) and later. The following features are part of EPCM. | Feature | Flag/Configuration Parameter | EA | GA | | :--- | :--- | :--- | :--- | @@ -107,7 +107,7 @@ Default ascending indexing provides feature compatibility and is the default in Configuration parameter: `yb_enable_bitmapscan=true` -Bitmap scans use multiple indexes to answer a query, with only one scan of the main table. Each index produces a "bitmap" indicating which rows of the main table are interesting. Bitmap scans can improve the performance of queries containing AND and OR conditions across several index scans. YugabyteDB bitmap scan provides feature compatibility and improved performance parity. 
For YugabyteDB relations to use a bitmap scan, the PostgreSQL parameter `enable_bitmapscan` must also be true (the default). +Bitmap scans use multiple indexes to answer a query, with only one scan of the main table. Each index produces a "bitmap" indicating which rows of the main table are interesting. Bitmap scans can improve the performance of queries containing `AND` and `OR` conditions across several index scans. YugabyteDB bitmap scan provides feature compatibility and improved performance parity. For YugabyteDB relations to use a bitmap scan, the PostgreSQL parameter `enable_bitmapscan` must also be true (the default). ### Efficient communication between PostgreSQL and DocDB diff --git a/docs/content/v2024.1/reference/configuration/yb-tserver.md b/docs/content/v2024.1/reference/configuration/yb-tserver.md index 160e34a67639..bcd11068c755 100644 --- a/docs/content/v2024.1/reference/configuration/yb-tserver.md +++ b/docs/content/v2024.1/reference/configuration/yb-tserver.md @@ -1730,7 +1730,7 @@ Default: `1GB` PostgreSQL parameter to enable or disable the query planner's use of bitmap-scan plan types. -Bitmap Scans use multiple indexes to answer a query, with only one scan of the main table. Each index produces a "bitmap" indicating which rows of the main table are interesting. Multiple bitmaps can be combined with AND or OR operators to create a final bitmap that is used to collect rows from the main table. +Bitmap Scans use multiple indexes to answer a query, with only one scan of the main table. Each index produces a "bitmap" indicating which rows of the main table are interesting. Multiple bitmaps can be combined with `AND` or `OR` operators to create a final bitmap that is used to collect rows from the main table. Bitmap scans follow the same `work_mem` behavior as PostgreSQL: each individual bitmap is bounded by `work_mem`. If there are n bitmaps, it means we may use `n * work_mem` memory. @@ -1759,13 +1759,13 @@ Default: 1024 ##### yb_enable_batchednl -{{}} Enable or disable the query planner's use of batched nested loop join. +Enable or disable the query planner's use of batched nested loop join. Default: true ##### yb_enable_base_scans_cost_model -{{}} Enables the YugabyteDB cost model for sequential and index scans. When enabling this parameter, you must run ANALYZE on user tables to maintain up-to-date statistics. +{{}} Enables the YugabyteDB cost model for sequential and index scans. When enabling this parameter, you must run ANALYZE on user tables to maintain up-to-date statistics. When enabling the cost based optimizer, ensure that [packed row](../../../architecture/docdb/packed-rows) for colocated tables is enabled by setting `ysql_enable_packed_row_for_colocated_table = true`. From a33dc4e310ce8be2e138c962202fc3123397e1be Mon Sep 17 00:00:00 2001 From: Deepti-yb Date: Mon, 12 May 2025 17:34:07 +0000 Subject: [PATCH 042/146] [PLAT-17581][CLI]KMS config needs to be passed separately in universe creation even if it exists in the config template Summary: The KMS config name was still reliant on `cmd.Flags().GetString()` even after the switch to config template support. 
This diff fixes the same Test Plan: Test KMS config passing via template file, via flag, and via flag overriding the template Reviewers: skurapati Reviewed By: skurapati Differential Revision: https://phorge.dev.yugabyte.com/D43926 --- managed/yba-cli/cmd/auth/authutil.go | 2 +- .../yba-cli/cmd/universe/build_universe.go | 4 +-- .../yba-cli/cmd/universe/create_universe.go | 34 +++++++++++++++---- 3 files changed, 31 insertions(+), 9 deletions(-) diff --git a/managed/yba-cli/cmd/auth/authutil.go b/managed/yba-cli/cmd/auth/authutil.go index cbc57e8b3688..bb0872516529 100644 --- a/managed/yba-cli/cmd/auth/authutil.go +++ b/managed/yba-cli/cmd/auth/authutil.go @@ -49,7 +49,7 @@ func authWriteConfigFile(r ybaclient.SessionInfo) { } logrus.Infof( formatter.Colorize( - fmt.Sprintf("Configuration file '%v' sucessfully updated.\n", + fmt.Sprintf("Configuration file '%v' successfully updated.\n", configFileUsed), formatter.GreenColor)) sessionCtx := formatter.Context{ diff --git a/managed/yba-cli/cmd/universe/build_universe.go b/managed/yba-cli/cmd/universe/build_universe.go index 4a9ce5181bcd..b1ebc8028c92 100644 --- a/managed/yba-cli/cmd/universe/build_universe.go +++ b/managed/yba-cli/cmd/universe/build_universe.go @@ -85,7 +85,7 @@ func buildClusters( logrus.Fatalf(formatter.Colorize(errMessage.Error()+"\n", formatter.RedColor)) } if len(providerListResponse) < 1 { - return nil, fmt.Errorf("no provider found") + return nil, fmt.Errorf("no provider with name %s found", providerName) } providerUsed := providerListResponse[0] providerUUID := providerUsed.GetUuid() @@ -170,7 +170,7 @@ func buildClusters( imageBundleUUIDs = append(imageBundleUUIDs, "") } - logrus.Info("Using image bundles: ", imageBundleUUIDs, "\n") + logrus.Info("Using linux versions: ", imageBundleUUIDs, "\n") dedicatedNodes := v1.GetBool("dedicated-nodes") diff --git a/managed/yba-cli/cmd/universe/create_universe.go b/managed/yba-cli/cmd/universe/create_universe.go index 2eaaf53dce1f..20cc78bf9a9b 100644 --- a/managed/yba-cli/cmd/universe/create_universe.go +++ b/managed/yba-cli/cmd/universe/create_universe.go @@ -59,7 +59,16 @@ var createUniverseCmd = &cobra.Command{ } enableVolumeEncryption := v1.GetBool("enable-volume-encryption") if enableVolumeEncryption { - cmd.MarkFlagRequired("kms-config") + kmsConfigName := v1.GetString("kms-config") + if len(strings.TrimSpace(kmsConfigName)) == 0 { + cmd.Help() + logrus.Fatalln( + formatter.Colorize( + "No kms config name found while enabling volume encryption\n", + formatter.RedColor, + ), + ) + } } }, Run: func(cmd *cobra.Command, args []string) { @@ -110,6 +119,15 @@ var createUniverseCmd = &cobra.Command{ formatter.Colorize(clientRootCACertUUID, formatter.GreenColor)), "\n") } } + if len(clientRootCACertUUID) == 0 { + logrus.Fatalf(formatter.Colorize( + fmt.Sprintf( + "Client root certificate %s not found\n", + clientRootCA, + ), + formatter.RedColor, + )) + } } rootCACertUUID := "" @@ -117,7 +135,6 @@ var createUniverseCmd = &cobra.Command{ // find the root certficate UUID from the name if len(rootCA) != 0 { - for _, c := range certs { if strings.Compare(c.GetLabel(), rootCA) == 0 { rootCACertUUID = c.GetUuid() @@ -127,6 +144,10 @@ var createUniverseCmd = &cobra.Command{ formatter.Colorize(rootCACertUUID, formatter.GreenColor)), "\n") } } + if len(rootCACertUUID) == 0 { + logrus.Fatalf(formatter.Colorize( + fmt.Sprintf("Root certificate %s not found\n", rootCA), formatter.RedColor)) + } } kmsConfigUUID := "" @@ -135,10 +156,7 @@ var createUniverseCmd = &cobra.Command{ if 
enableVolumeEncryption { opType = util.EnableOpType - kmsConfigName, err := cmd.Flags().GetString("kms-config") - if err != nil { - logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) - } + kmsConfigName := v1.GetString("kms-config") // find kmsConfigUUID from the name kmsConfigs, response, err := authAPI.ListKMSConfigs().Execute() if err != nil { @@ -163,6 +181,10 @@ var createUniverseCmd = &cobra.Command{ } } } + if len(kmsConfigUUID) == 0 { + logrus.Fatalf(formatter.Colorize( + fmt.Sprintf("KMS config %s not found\n", kmsConfigName), formatter.RedColor)) + } } cpuArch := v1.GetString("cpu-architecture") From 37788cce11a0311128d12518a83b32d2716a3b85 Mon Sep 17 00:00:00 2001 From: Arpan Agrawal Date: Mon, 12 May 2025 23:21:26 +0530 Subject: [PATCH 043/146] [#25949] YSQL: Clean up TestPg15Regress Summary: Cleanup TestPg15Regress, which was temporarily added while stabilising the pg15 branch. These tests are duplicate subsets of other regression tests and are no longer required. There are 15 YB_TODOs in this test file. - 13 of these are related to test tracking. All these are obsolete given that all the tests are already being tracked via detective. - The remaining two (described below) are non-reproducible anymore and hence, presumed to be fixed over the course of pg15 stabilisation. One of the YB_TODO is related to failure (segmentation fault AFAIR) on changing the order of the tests. ``` -- Joins (YB_TODO: if I move it below pushdown test, the test fails) ``` The other YB_TODO is related to the difference in JOIN plan. ``` -- YB_TODO: pg15 used merge join, whereas hash join is expected. ``` As mentioned above, both of these are no longer reproducible. Jira: DB-15268 Test Plan: jenkins: compile-only Close: #25949 Reviewers: telgersma Reviewed By: telgersma Subscribers: svc_phabricator, yql Differential Revision: https://phorge.dev.yugabyte.com/D43704 --- .../java/org/yb/pgsql/TestPg15Regress.java | 50 - .../test/regress/expected/yb.orig.pg15.out | 1203 ----------------- .../src/test/regress/sql/yb.orig.pg15.sql | 579 -------- src/postgres/src/test/regress/yb_pg15 | 6 - 4 files changed, 1838 deletions(-) delete mode 100644 java/yb-pgsql/src/test/java/org/yb/pgsql/TestPg15Regress.java delete mode 100644 src/postgres/src/test/regress/expected/yb.orig.pg15.out delete mode 100644 src/postgres/src/test/regress/sql/yb.orig.pg15.sql delete mode 100644 src/postgres/src/test/regress/yb_pg15 diff --git a/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPg15Regress.java b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPg15Regress.java deleted file mode 100644 index 06868481e542..000000000000 --- a/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPg15Regress.java +++ /dev/null @@ -1,50 +0,0 @@ -// Copyright (c) YugaByte, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except -// in compliance with the License. You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software distributed under the License -// is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express -// or implied. See the License for the specific language governing permissions and limitations -// under the License. 
-// -package org.yb.pgsql; - -import java.util.Map; - -import org.junit.Test; -import org.junit.runner.RunWith; -import org.yb.util.BuildTypeUtil; -import org.yb.util.YBTestRunnerNonTsanOnly; - -// Runs the pg_regress test suite on YB code. -@RunWith(value = YBTestRunnerNonTsanOnly.class) -public class TestPg15Regress extends BasePgRegressTest { - @Override - public int getTestMethodTimeoutSec() { - return BuildTypeUtil.nonSanitizerVsSanitizer(2100, 2700); - } - - @Override - protected Map getTServerFlags() { - Map flagMap = super.getTServerFlags(); - flagMap.put("allowed_preview_flags_csv", "ysql_yb_enable_replication_commands"); - flagMap.put("ysql_yb_enable_replication_commands", "true"); - return flagMap; - } - - @Override - protected Map getMasterFlags() { - Map flagMap = super.getMasterFlags(); - flagMap.put("allowed_preview_flags_csv", "ysql_yb_enable_replication_commands"); - flagMap.put("ysql_yb_enable_replication_commands", "true"); - return flagMap; - } - - @Test - public void testPg15Regress() throws Exception { - runPgRegressTest("yb_pg15"); - } -} diff --git a/src/postgres/src/test/regress/expected/yb.orig.pg15.out b/src/postgres/src/test/regress/expected/yb.orig.pg15.out deleted file mode 100644 index 851ac89e37bd..000000000000 --- a/src/postgres/src/test/regress/expected/yb.orig.pg15.out +++ /dev/null @@ -1,1203 +0,0 @@ --- --- Tests for pg15 branch stability. --- --- Basics -create table t1 (id int, name text); -create table t2 (id int primary key, name text); -explain (COSTS OFF) insert into t2 values (1); - QUERY PLAN ------------------------ - Insert on t2 - -> Result *RESULT* -(2 rows) - -insert into t2 values (1); -explain (COSTS OFF) insert into t2 values (2), (3); - QUERY PLAN ---------------------------------- - Insert on t2 - -> Values Scan on "*VALUES*" -(2 rows) - -insert into t2 values (2), (3); -explain (COSTS OFF) select * from t2 where id = 1; - QUERY PLAN --------------------------------- - Index Scan using t2_pkey on t2 - Index Cond: (id = 1) -(2 rows) - -select * from t2 where id = 1; - id | name -----+------ - 1 | -(1 row) - -explain (COSTS OFF) select * from t2 where id > 1; - QUERY PLAN ----------------------------- - Seq Scan on t2 - Storage Filter: (id > 1) -(2 rows) - -select * from t2 where id > 1; - id | name -----+------ - 2 | - 3 | -(2 rows) - -explain (COSTS OFF) update t2 set name = 'John' where id = 1; - QUERY PLAN ------------------ - Update on t2 - -> Result t2 -(2 rows) - -update t2 set name = 'John' where id = 1; -explain (COSTS OFF) update t2 set name = 'John' where id > 1; - QUERY PLAN ----------------------------------- - Update on t2 - -> Seq Scan on t2 - Storage Filter: (id > 1) -(3 rows) - -update t2 set name = 'John' where id > 1; -explain (COSTS OFF) update t2 set id = id + 4 where id = 1; - QUERY PLAN --------------------------------------- - Update on t2 - -> Index Scan using t2_pkey on t2 - Index Cond: (id = 1) -(3 rows) - -update t2 set id = id + 4 where id = 1; -explain (COSTS OFF) update t2 set id = id + 4 where id > 1; - QUERY PLAN ----------------------------------- - Update on t2 - -> Seq Scan on t2 - Storage Filter: (id > 1) -(3 rows) - -update t2 set id = id + 4 where id > 1; -explain (COSTS OFF) delete from t2 where id = 1; - QUERY PLAN ------------------ - Delete on t2 - -> Result t2 -(2 rows) - -delete from t2 where id = 1; -explain (COSTS OFF) delete from t2 where id > 1; - QUERY PLAN ----------------------------------- - Delete on t2 - -> Seq Scan on t2 - Storage Filter: (id > 1) -(3 rows) - -delete from t2 where 
id > 1; --- Before update trigger test. -alter table t2 add column count int; -insert into t2 values (1, 'John', 0); -CREATE OR REPLACE FUNCTION update_count() RETURNS trigger LANGUAGE plpgsql AS -$func$ -BEGIN - NEW.count := NEW.count+1; - RETURN NEW; -END -$func$; -CREATE TRIGGER update_count_trig BEFORE UPDATE ON t2 FOR ROW EXECUTE PROCEDURE update_count(); -update t2 set name = 'Jane' where id = 1; -select * from t2; - id | name | count -----+------+------- - 1 | Jane | 1 -(1 row) - --- CREATE INDEX -CREATE INDEX myidx on t2(name); --- Insert with on conflict -insert into t2 values (1, 'foo') on conflict ON CONSTRAINT t2_pkey do update set id = t2.id+1; -select * from t2; - id | name | count -----+------+------- - 2 | Jane | 2 -(1 row) - --- Joins (YB_TODO: if I move it below pushdown test, the test fails) -CREATE TABLE p1 (a int, b int, c varchar, primary key(a,b)); -INSERT INTO p1 SELECT i, i % 25, to_char(i, 'FM0000') FROM generate_series(0, 599) i WHERE i % 2 = 0; -CREATE TABLE p2 (a int, b int, c varchar, primary key(a,b)); -INSERT INTO p2 SELECT i, i % 25, to_char(i, 'FM0000') FROM generate_series(0, 599) i WHERE i % 3 = 0; --- Merge join -EXPLAIN (COSTS OFF) SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; - QUERY PLAN ------------------------------------------- - Merge Join - Merge Cond: (t1.a = t2.a) - -> Sort - Sort Key: t1.a - -> Seq Scan on p1 t1 - Storage Filter: (a <= 100) - -> Sort - Sort Key: t2.a - -> Seq Scan on p2 t2 - Storage Filter: (a <= 100) -(10 rows) - -SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; - a | b | c | a | b | c -----+----+------+----+----+------ - 0 | 0 | 0000 | 0 | 0 | 0000 - 6 | 6 | 0006 | 6 | 6 | 0006 - 12 | 12 | 0012 | 12 | 12 | 0012 - 18 | 18 | 0018 | 18 | 18 | 0018 - 24 | 24 | 0024 | 24 | 24 | 0024 - 30 | 5 | 0030 | 30 | 5 | 0030 - 36 | 11 | 0036 | 36 | 11 | 0036 - 42 | 17 | 0042 | 42 | 17 | 0042 - 48 | 23 | 0048 | 48 | 23 | 0048 - 54 | 4 | 0054 | 54 | 4 | 0054 - 60 | 10 | 0060 | 60 | 10 | 0060 - 66 | 16 | 0066 | 66 | 16 | 0066 - 72 | 22 | 0072 | 72 | 22 | 0072 - 78 | 3 | 0078 | 78 | 3 | 0078 - 84 | 9 | 0084 | 84 | 9 | 0084 - 90 | 15 | 0090 | 90 | 15 | 0090 - 96 | 21 | 0096 | 96 | 21 | 0096 -(17 rows) - --- Hash join -SET enable_mergejoin = off; -EXPLAIN (COSTS OFF) SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; - QUERY PLAN ------------------------------------------- - Hash Join - Hash Cond: (t1.a = t2.a) - -> Seq Scan on p1 t1 - Storage Filter: (a <= 100) - -> Hash - -> Seq Scan on p2 t2 - Storage Filter: (a <= 100) -(7 rows) - -SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; - a | b | c | a | b | c -----+----+------+----+----+------ - 78 | 3 | 0078 | 78 | 3 | 0078 - 90 | 15 | 0090 | 90 | 15 | 0090 - 12 | 12 | 0012 | 12 | 12 | 0012 - 6 | 6 | 0006 | 6 | 6 | 0006 - 96 | 21 | 0096 | 96 | 21 | 0096 - 42 | 17 | 0042 | 42 | 17 | 0042 - 48 | 23 | 0048 | 48 | 23 | 0048 - 60 | 10 | 0060 | 60 | 10 | 0060 - 72 | 22 | 0072 | 72 | 22 | 0072 - 36 | 11 | 0036 | 36 | 11 | 0036 - 54 | 4 | 0054 | 54 | 4 | 0054 - 18 | 18 | 0018 | 18 | 18 | 0018 - 66 | 16 | 0066 | 66 | 16 | 0066 - 30 | 5 | 0030 | 30 | 5 | 0030 - 84 | 9 | 0084 | 84 | 9 | 0084 - 0 | 0 | 0000 | 0 | 0 | 0000 - 24 | 24 | 0024 | 24 | 24 | 0024 -(17 rows) - --- Batched nested loop join -ANALYZE p1; -ANALYZE p2; -SET enable_hashjoin = off; -SET enable_seqscan = off; -SET enable_material = off; -SET yb_bnl_batch_size = 3; -EXPLAIN (COSTS OFF) SELECT * FROM p1 t1 JOIN p2 t2 
ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; - QUERY PLAN ------------------------------------------------------ - YB Batched Nested Loop Join - Join Filter: (t1.a = t2.a) - -> Seq Scan on p2 t2 - Storage Filter: (a <= 100) - -> Index Scan using p1_pkey on p1 t1 - Index Cond: (a = ANY (ARRAY[t2.a, $1, $2])) - Storage Filter: (a <= 100) -(7 rows) - -SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; - a | b | c | a | b | c -----+----+------+----+----+------ - 78 | 3 | 0078 | 78 | 3 | 0078 - 90 | 15 | 0090 | 90 | 15 | 0090 - 12 | 12 | 0012 | 12 | 12 | 0012 - 6 | 6 | 0006 | 6 | 6 | 0006 - 96 | 21 | 0096 | 96 | 21 | 0096 - 42 | 17 | 0042 | 42 | 17 | 0042 - 48 | 23 | 0048 | 48 | 23 | 0048 - 60 | 10 | 0060 | 60 | 10 | 0060 - 72 | 22 | 0072 | 72 | 22 | 0072 - 36 | 11 | 0036 | 36 | 11 | 0036 - 54 | 4 | 0054 | 54 | 4 | 0054 - 18 | 18 | 0018 | 18 | 18 | 0018 - 66 | 16 | 0066 | 66 | 16 | 0066 - 30 | 5 | 0030 | 30 | 5 | 0030 - 84 | 9 | 0084 | 84 | 9 | 0084 - 0 | 0 | 0000 | 0 | 0 | 0000 - 24 | 24 | 0024 | 24 | 24 | 0024 -(17 rows) - -SET enable_mergejoin = on; -SET enable_hashjoin = on; -SET enable_seqscan = on; -SET enable_material = on; --- Update pushdown test. -CREATE TABLE single_row_decimal (k int PRIMARY KEY, v1 decimal, v2 decimal(10,2), v3 int); -CREATE FUNCTION next_v3(int) returns int language sql as $$ - SELECT v3 + 1 FROM single_row_decimal WHERE k = $1; -$$; -INSERT INTO single_row_decimal(k, v1, v2, v3) values (1,1.5,1.5,1), (2,2.5,2.5,2), (3,null, null,null); -SELECT * FROM single_row_decimal ORDER BY k; - k | v1 | v2 | v3 ----+-----+------+---- - 1 | 1.5 | 1.50 | 1 - 2 | 2.5 | 2.50 | 2 - 3 | | | -(3 rows) - -UPDATE single_row_decimal SET v1 = v1 + 1.555, v2 = v2 + 1.555, v3 = v3 + 1 WHERE k = 1; --- v2 should be rounded to 2 decimals. -SELECT * FROM single_row_decimal ORDER BY k; - k | v1 | v2 | v3 ----+-------+------+---- - 1 | 3.055 | 3.06 | 2 - 2 | 2.5 | 2.50 | 2 - 3 | | | -(3 rows) - -UPDATE single_row_decimal SET v1 = v1 + 1.555, v2 = v2 + 1.555, v3 = 3 WHERE k = 1; -SELECT * FROM single_row_decimal ORDER BY k; - k | v1 | v2 | v3 ----+------+------+---- - 1 | 4.61 | 4.62 | 3 - 2 | 2.5 | 2.50 | 2 - 3 | | | -(3 rows) - -UPDATE single_row_decimal SET v1 = v1 + 1.555, v2 = v2 + 1.555, v3 = next_v3(1) WHERE k = 1; -SELECT * FROM single_row_decimal ORDER BY k; - k | v1 | v2 | v3 ----+-------+------+---- - 1 | 6.165 | 6.18 | 4 - 2 | 2.5 | 2.50 | 2 - 3 | | | -(3 rows) - --- Delete with returning -insert into t2 values (4), (5), (6); -delete from t2 where id > 2 returning id, name; - id | name -----+------ - 5 | - 6 | - 4 | -(3 rows) - --- COPY FROM -CREATE TABLE myemp (id int primary key, name text); -COPY myemp FROM stdin; -SELECT * from myemp; - id | name -----+------ - 1 | a - 2 | b -(2 rows) - -CREATE TABLE myemp2(id int primary key, name text) PARTITION BY range(id); -CREATE TABLE myemp2_1_100 PARTITION OF myemp2 FOR VALUES FROM (1) TO (100); -CREATE TABLE myemp2_101_200 PARTITION OF myemp2 FOR VALUES FROM (101) TO (200); -COPY myemp2 FROM stdin; -SELECT * from myemp2_1_100; - id | name -----+------ - 1 | a -(1 row) - -SELECT * from myemp2_101_200; - id | name ------+------ - 102 | b -(1 row) - --- Adding PK -create table test (id int); -insert into test values (1); -ALTER TABLE test ENABLE ROW LEVEL SECURITY; -CREATE POLICY test_policy ON test FOR SELECT USING (true); -alter table test add primary key (id); -NOTICE: table rewrite may lead to inconsistencies -DETAIL: Concurrent DMLs may not be reflected in the new table. 
-HINT: See https://github.com/yugabyte/yugabyte-db/issues/19860. Set 'ysql_suppress_unsafe_alter_notice' yb-tserver gflag to true to suppress this notice. -create table test2 (id int); -insert into test2 values (1), (1); -alter table test2 add primary key (id); -NOTICE: table rewrite may lead to inconsistencies -DETAIL: Concurrent DMLs may not be reflected in the new table. -HINT: See https://github.com/yugabyte/yugabyte-db/issues/19860. Set 'ysql_suppress_unsafe_alter_notice' yb-tserver gflag to true to suppress this notice. -ERROR: duplicate key value violates unique constraint "test2" --- Creating partitioned table -create table emp_par1(id int primary key, name text) partition by range(id); -CREATE TABLE emp_par1_1_100 PARTITION OF emp_par1 FOR VALUES FROM (1) TO (100); -create table emp_par2(id int primary key, name text) partition by list(id); -create table emp_par3(id int primary key, name text) partition by hash(id); --- Adding FK -create table emp(id int unique); -create table address(emp_id int, addr text); -insert into address values (1, 'a'); -ALTER TABLE address ADD FOREIGN KEY(emp_id) REFERENCES emp(id); -ERROR: insert or update on table "address" violates foreign key constraint "address_emp_id_fkey" -DETAIL: Key (emp_id)=(1) is not present in table "emp". -insert into emp values (1); -ALTER TABLE address ADD FOREIGN KEY(emp_id) REFERENCES emp(id); --- Adding PK with pre-existing FK constraint -alter table emp add primary key (id); -NOTICE: table rewrite may lead to inconsistencies -DETAIL: Concurrent DMLs may not be reflected in the new table. -HINT: See https://github.com/yugabyte/yugabyte-db/issues/19860. Set 'ysql_suppress_unsafe_alter_notice' yb-tserver gflag to true to suppress this notice. -alter table address add primary key (emp_id); -NOTICE: table rewrite may lead to inconsistencies -DETAIL: Concurrent DMLs may not be reflected in the new table. -HINT: See https://github.com/yugabyte/yugabyte-db/issues/19860. Set 'ysql_suppress_unsafe_alter_notice' yb-tserver gflag to true to suppress this notice. --- Add primary key with with pre-existing FK where confdelsetcols non nul -create table emp2 (id int, name text, primary key (id, name)); -create table address2 (id int, name text, addr text, FOREIGN KEY (id, name) REFERENCES emp2 ON DELETE SET NULL (name)); -insert into emp2 values (1, 'a'), (2, 'b'); -insert into address2 values (1, 'a', 'a'), (2, 'b', 'b'); -delete from emp2 where id = 1; -select * from address2 order by id; - id | name | addr -----+------+------ - 1 | | a - 2 | b | b -(2 rows) - -alter table address2 add primary key (id); -NOTICE: table rewrite may lead to inconsistencies -DETAIL: Concurrent DMLs may not be reflected in the new table. -HINT: See https://github.com/yugabyte/yugabyte-db/issues/19860. Set 'ysql_suppress_unsafe_alter_notice' yb-tserver gflag to true to suppress this notice. 
-delete from emp2 where id = 2; -select * from address2 order by id; - id | name | addr -----+------+------ - 1 | | a - 2 | | b -(2 rows) - --- create database -CREATE DATABASE mytest; --- drop database -DROP DATABASE mytest; -create table fastpath (a int, b text, c numeric); -insert into fastpath select y.x, 'b' || (y.x/10)::text, 100 from (select generate_series(1,10000) as x) y; -select md5(string_agg(a::text, b order by a, b asc)) from fastpath - where a >= 1000 and a < 2000 and b > 'b1' and b < 'b3'; - md5 ----------------------------------- - 2ca216010a558a52d7df12f76dfc77ab -(1 row) - --- Index scan test row comparison expressions -CREATE TABLE pk_range_int_asc (r1 INT, r2 INT, r3 INT, v INT, PRIMARY KEY(r1 asc, r2 asc, r3 asc)); -INSERT INTO pk_range_int_asc SELECT i/25, (i/5) % 5, i % 5, i FROM generate_series(1, 125) AS i; -EXPLAIN (COSTS OFF, TIMING OFF, SUMMARY OFF, ANALYZE) SELECT * FROM pk_range_int_asc WHERE (r1, r2, r3) <= (2,3,2); - QUERY PLAN -------------------------------------------------------------------------------------- - Index Scan using pk_range_int_asc_pkey on pk_range_int_asc (actual rows=67 loops=1) - Index Cond: (ROW(r1, r2, r3) <= ROW(2, 3, 2)) -(2 rows) - -SELECT * FROM pk_range_int_asc WHERE (r1, r2, r3) <= (2,3,2); - r1 | r2 | r3 | v -----+----+----+---- - 0 | 0 | 1 | 1 - 0 | 0 | 2 | 2 - 0 | 0 | 3 | 3 - 0 | 0 | 4 | 4 - 0 | 1 | 0 | 5 - 0 | 1 | 1 | 6 - 0 | 1 | 2 | 7 - 0 | 1 | 3 | 8 - 0 | 1 | 4 | 9 - 0 | 2 | 0 | 10 - 0 | 2 | 1 | 11 - 0 | 2 | 2 | 12 - 0 | 2 | 3 | 13 - 0 | 2 | 4 | 14 - 0 | 3 | 0 | 15 - 0 | 3 | 1 | 16 - 0 | 3 | 2 | 17 - 0 | 3 | 3 | 18 - 0 | 3 | 4 | 19 - 0 | 4 | 0 | 20 - 0 | 4 | 1 | 21 - 0 | 4 | 2 | 22 - 0 | 4 | 3 | 23 - 0 | 4 | 4 | 24 - 1 | 0 | 0 | 25 - 1 | 0 | 1 | 26 - 1 | 0 | 2 | 27 - 1 | 0 | 3 | 28 - 1 | 0 | 4 | 29 - 1 | 1 | 0 | 30 - 1 | 1 | 1 | 31 - 1 | 1 | 2 | 32 - 1 | 1 | 3 | 33 - 1 | 1 | 4 | 34 - 1 | 2 | 0 | 35 - 1 | 2 | 1 | 36 - 1 | 2 | 2 | 37 - 1 | 2 | 3 | 38 - 1 | 2 | 4 | 39 - 1 | 3 | 0 | 40 - 1 | 3 | 1 | 41 - 1 | 3 | 2 | 42 - 1 | 3 | 3 | 43 - 1 | 3 | 4 | 44 - 1 | 4 | 0 | 45 - 1 | 4 | 1 | 46 - 1 | 4 | 2 | 47 - 1 | 4 | 3 | 48 - 1 | 4 | 4 | 49 - 2 | 0 | 0 | 50 - 2 | 0 | 1 | 51 - 2 | 0 | 2 | 52 - 2 | 0 | 3 | 53 - 2 | 0 | 4 | 54 - 2 | 1 | 0 | 55 - 2 | 1 | 1 | 56 - 2 | 1 | 2 | 57 - 2 | 1 | 3 | 58 - 2 | 1 | 4 | 59 - 2 | 2 | 0 | 60 - 2 | 2 | 1 | 61 - 2 | 2 | 2 | 62 - 2 | 2 | 3 | 63 - 2 | 2 | 4 | 64 - 2 | 3 | 0 | 65 - 2 | 3 | 1 | 66 - 2 | 3 | 2 | 67 -(67 rows) - --- SERIAL type -CREATE TABLE serial_test (k int, v SERIAL); -INSERT INTO serial_test VALUES (1), (1), (1); -SELECT * FROM serial_test ORDER BY v; - k | v ----+--- - 1 | 1 - 1 | 2 - 1 | 3 -(3 rows) - -SELECT last_value, is_called FROM public.serial_test_v_seq; - last_value | is_called -------------+----------- - 100 | t -(1 row) - --- lateral join -CREATE TABLE tlateral1 (a int, b int, c varchar); -INSERT INTO tlateral1 SELECT i, i % 25, to_char(i % 4, 'FM0000') FROM generate_series(0, 599, 2) i; -CREATE TABLE tlateral2 (a int, b int, c varchar); -INSERT INTO tlateral2 SELECT i % 25, i, to_char(i % 4, 'FM0000') FROM generate_series(0, 599, 3) i; -ANALYZE tlateral1, tlateral2; --- YB_TODO: pg15 used merge join, whereas hash join is expected. 
--- EXPLAIN (COSTS FALSE) SELECT * FROM tlateral1 t1 LEFT JOIN LATERAL (SELECT t2.a AS t2a, t2.c AS t2c, t2.b AS t2b, t3.b AS t3b, least(t1.a,t2.a,t3.b) FROM tlateral1 t2 JOIN tlateral2 t3 ON (t2.a = t3.b AND t2.c = t3.c)) ss ON t1.a = ss.t2a WHERE t1.b = 0 ORDER BY t1.a; -SELECT * FROM tlateral1 t1 LEFT JOIN LATERAL (SELECT t2.a AS t2a, t2.c AS t2c, t2.b AS t2b, t3.b AS t3b, least(t1.a,t2.a,t3.b) FROM tlateral1 t2 JOIN tlateral2 t3 ON (t2.a = t3.b AND t2.c = t3.c)) ss ON t1.a = ss.t2a WHERE t1.b = 0 ORDER BY t1.a; - a | b | c | t2a | t2c | t2b | t3b | least ------+---+------+-----+------+-----+-----+------- - 0 | 0 | 0000 | 0 | 0000 | 0 | 0 | 0 - 50 | 0 | 0002 | | | | | - 100 | 0 | 0000 | | | | | - 150 | 0 | 0002 | 150 | 0002 | 0 | 150 | 150 - 200 | 0 | 0000 | | | | | - 250 | 0 | 0002 | | | | | - 300 | 0 | 0000 | 300 | 0000 | 0 | 300 | 300 - 350 | 0 | 0002 | | | | | - 400 | 0 | 0000 | | | | | - 450 | 0 | 0002 | 450 | 0002 | 0 | 450 | 450 - 500 | 0 | 0000 | | | | | - 550 | 0 | 0002 | | | | | -(12 rows) - --- Test FailedAssertion("BufferIsValid(bsrcslot->buffer) failure from ExecCopySlot in ExecMergeJoin. -CREATE TABLE mytest1(h int, r int, v1 int, v2 int, v3 int, primary key(h HASH, r ASC)); -INSERT INTO mytest1 VALUES (1,2,4,9,2), (2,3,2,4,6); -CREATE TABLE mytest2(h int, r int, v1 int, v2 int, v3 int, primary key(h ASC, r ASC)); -INSERT INTO mytest2 VALUES (1,2,4,5,7), (1,3,8,6,1), (4,3,7,3,2); -SET enable_hashjoin = off; -SET enable_nestloop = off; -explain SELECT * FROM mytest1 t1 JOIN mytest2 t2 on t1.h = t2.h WHERE t2.r = 2; - QUERY PLAN ------------------------------------------------------------------------------------------ - Merge Join (cost=149.83..175.33 rows=500 width=40) - Merge Cond: (t1.h = t2.h) - -> Sort (cost=149.83..152.33 rows=1000 width=20) - Sort Key: t1.h - -> Seq Scan on mytest1 t1 (cost=0.00..100.00 rows=1000 width=20) - -> Index Scan using mytest2_pkey on mytest2 t2 (cost=0.00..15.25 rows=100 width=20) - Index Cond: (r = 2) -(7 rows) - -SELECT * FROM mytest1 t1 JOIN mytest2 t2 on t1.h = t2.h WHERE t2.r = 2; - h | r | v1 | v2 | v3 | h | r | v1 | v2 | v3 ----+---+----+----+----+---+---+----+----+---- - 1 | 2 | 4 | 9 | 2 | 1 | 2 | 4 | 5 | 7 -(1 row) - -SET enable_hashjoin = on; -SET enable_nestloop = on; --- Insert with on conflict on temp table -create temporary table mytmp (id int primary key, name text, count int); -insert into mytmp values (1, 'foo', 0); -insert into mytmp values (1, 'foo') on conflict ON CONSTRAINT mytmp_pkey do update set id = mytmp.id+1; -select * from mytmp; - id | name | count -----+------+------- - 2 | foo | 0 -(1 row) - -CREATE OR REPLACE FUNCTION update_count() RETURNS trigger LANGUAGE plpgsql AS -$func$ -BEGIN - NEW.count := NEW.count+1; - RETURN NEW; -END -$func$; -CREATE TRIGGER update_count_trig BEFORE UPDATE ON mytmp FOR ROW EXECUTE PROCEDURE update_count(); -insert into mytmp values (2, 'foo') on conflict ON CONSTRAINT mytmp_pkey do update set id = mytmp.id+1; -select * from mytmp; - id | name | count -----+------+------- - 3 | foo | 1 -(1 row) - -create view myview as select * from mytmp; -NOTICE: view "myview" will be a temporary view -insert into myview values (3, 'foo') on conflict (id) do update set id = myview.id + 1; -select * from myview; - id | name | count -----+------+------- - 4 | foo | 2 -(1 row) - --- YB batched nested loop join -CREATE TABLE p3 (a int, b int, c varchar, primary key(a,b)); -INSERT INTO p3 SELECT i, i % 25, to_char(i, 'FM0000') FROM generate_series(0, 599) i WHERE i % 5 = 0; -ANALYZE p3; -CREATE INDEX 
p1_b_idx ON p1 (b ASC); -SET enable_hashjoin = off; -SET enable_mergejoin = off; -SET enable_seqscan = off; -SET enable_material = off; -SET yb_bnl_batch_size = 3; -SELECT * FROM p1 JOIN p2 ON p1.a = p2.b AND p2.a = p1.b; - a | b | c | a | b | c -----+----+------+----+----+------ - 0 | 0 | 0000 | 0 | 0 | 0000 - 6 | 6 | 0006 | 6 | 6 | 0006 - 12 | 12 | 0012 | 12 | 12 | 0012 - 18 | 18 | 0018 | 18 | 18 | 0018 - 24 | 24 | 0024 | 24 | 24 | 0024 -(5 rows) - -SELECT * FROM p3 t3 RIGHT OUTER JOIN (SELECT t1.a as a FROM p1 t1 JOIN p2 t2 ON t1.a = t2.b WHERE t1.b <= 10 AND t2.b <= 15) s ON t3.a = s.a; - a | b | c | a -----+----+------+---- - 10 | 10 | 0010 | 10 - | | | 6 - | | | 6 - 10 | 10 | 0010 | 10 - 10 | 10 | 0010 | 10 - | | | 8 - 10 | 10 | 0010 | 10 - | | | 2 - | | | 2 - 0 | 0 | 0000 | 0 - 0 | 0 | 0000 | 0 - | | | 4 - 0 | 0 | 0000 | 0 - | | | 4 - | | | 8 - 10 | 10 | 0010 | 10 - | | | 4 - | | | 6 - 10 | 10 | 0010 | 10 - | | | 4 - | | | 4 - | | | 2 - | | | 8 - | | | 6 - 10 | 10 | 0010 | 10 - | | | 2 - | | | 8 - 0 | 0 | 0000 | 0 - | | | 4 - | | | 6 - 0 | 0 | 0000 | 0 - | | | 4 - | | | 8 - 0 | 0 | 0000 | 0 - 0 | 0 | 0000 | 0 - | | | 2 - | | | 2 - | | | 4 - | | | 8 - | | | 2 - | | | 8 - | | | 6 - 0 | 0 | 0000 | 0 - | | | 8 - | | | 6 - 10 | 10 | 0010 | 10 - | | | 2 - | | | 6 -(48 rows) - -CREATE TABLE m1 (a money, primary key(a asc)); -INSERT INTO m1 SELECT i*2 FROM generate_series(1, 2000) i; -CREATE TABLE m2 (a money, primary key(a asc)); -INSERT INTO m2 SELECT i*5 FROM generate_series(1, 2000) i; -SELECT * FROM m1 t1 JOIN m2 t2 ON t1.a = t2.a WHERE t1.a <= 50::money; - a | a ---------+-------- - $10.00 | $10.00 - $20.00 | $20.00 - $30.00 | $30.00 - $40.00 | $40.00 - $50.00 | $50.00 -(5 rows) - --- Index on tmp table -create temp table prtx2 (a integer, b integer, c integer); -insert into prtx2 select 1 + i%10, i, i from generate_series(1,5000) i, generate_series(1,10) j; -create index on prtx2 (c); --- testing yb_hash_code pushdown on a secondary index with a text hash column -CREATE TABLE text_table (hr text, ti text, tj text, i int, j int, primary key (hr)); -INSERT INTO text_table SELECT i::TEXT, i::TEXT, i::TEXT, i, i FROM generate_series(1,10000) i; -CREATE INDEX textidx ON text_table (tj); -SELECT tj FROM text_table WHERE yb_hash_code(tj) <= 63; - tj ------- - 4999 - 8300 - 6918 - 912 - 8646 - 4946 - 6920 - 6785 - 5659 - 1363 -(10 rows) - --- Row locking -CREATE TABLE t(h INT, r INT, PRIMARY KEY(h, r)); -INSERT INTO t VALUES(1, 1), (1, 3); -SELECT * FROM t WHERE h = 1 AND r in(1, 3) FOR KEY SHARE; - h | r ----+--- - 1 | 1 - 1 | 3 -(2 rows) - -DROP TABLE t; --- Test for ItemPointerIsValid assertion failure -CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple'); --- Aggregate pushdown -SELECT COUNT(*) FROM pg_enum WHERE enumtypid = 'rainbow'::regtype; - count -------- - 6 -(1 row) - --- IndexOnlyScan -SELECT enumlabel FROM pg_enum WHERE enumtypid = 'rainbow'::regtype; - enumlabel ------------ - blue - green - orange - purple - red - yellow -(6 rows) - --- Cleanup -DROP TABLE IF EXISTS address, address2, emp, emp2, emp_par1, emp_par1_1_100, emp_par2, emp_par3, - fastpath, myemp, myemp2, myemp2_101_200, myemp2_1_100, p1, p2, pk_range_int_asc, - single_row_decimal, t1, t2, test, test2, serial_test, tlateral1, tlateral2, mytest1, mytest2 CASCADE; --- insert into temp table in function body -create temp table compos (f1 int, f2 text); -create function fcompos1(v compos) returns void as $$ -insert into compos values (v.*); -$$ language sql; -select 
fcompos1(row(1,'one')); - fcompos1 ----------- - -(1 row) - --- very basic REINDEX -CREATE TABLE yb (i int PRIMARY KEY, j int); -CREATE INDEX NONCONCURRENTLY ON yb (j); -UPDATE pg_index SET indisvalid = false - WHERE indexrelid = 'yb_j_idx'::regclass; -\c -REINDEX INDEX yb_j_idx; -UPDATE pg_index SET indisvalid = false - WHERE indexrelid = 'yb_j_idx'::regclass; -\c -\set VERBOSITY terse -REINDEX(verbose) INDEX yb_j_idx; -INFO: index "yb_j_idx" was reindexed -\set VERBOSITY default --- internal collation -create table texttab (t text); -insert into texttab values ('a'); -select count(*) from texttab group by t; - count -------- - 1 -(1 row) - --- ALTER TABLE ADD COLUMN DEFAULT with pre-existing rows -CREATE TABLE mytable (pk INT NOT NULL PRIMARY KEY); -INSERT INTO mytable SELECT * FROM generate_series(1, 10) a; -ALTER TABLE mytable ADD COLUMN c_bigint BIGINT NOT NULL DEFAULT -1; -SELECT c_bigint FROM mytable WHERE c_bigint = -1 LIMIT 1; - c_bigint ----------- - -1 -(1 row) - -DROP TABLE mytable; --- Update partitioned table with multiple partitions -CREATE TABLE t(id int) PARTITION BY range(id); -CREATE TABLE t_1_100 PARTITION OF t FOR VALUES FROM (1) TO (100); -CREATE TABLE t_101_200 PARTITION OF t FOR VALUES FROM (101) TO (200); -INSERT INTO t VALUES (1); -UPDATE t SET id = 2; -SELECT * FROM t; - id ----- - 2 -(1 row) - -DROP TABLE t; --- Update partitioned table with multiple partitions and secondary index -CREATE TABLE t3(id int primary key, name int, add int, unique(id, name)) PARTITION BY range(id); -CREATE TABLE t3_1_100 partition of t3 FOR VALUES FROM (1) TO (100); -CREATE TABLE t3_101_200 partition of t3 FOR VALUES FROM (101) TO (200); -INSERT INTO t3 VALUES (1, 1, 1); -UPDATE t3 SET ADD = 2; -SELECT * from t3; - id | name | add -----+------+----- - 1 | 1 | 2 -(1 row) - -DROP TABLE t3; --- Test whether single row optimization is invoked when --- only one partition is being updated. 
-CREATE TABLE list_parted (a int, b int, c int, primary key(a,b)) PARTITION BY list (a); -CREATE TABLE sub_parted PARTITION OF list_parted for VALUES in (1) PARTITION BY list (b); -CREATE TABLE sub_part1 PARTITION OF sub_parted for VALUES in (1); -INSERT INTO list_parted VALUES (1, 1, 1); -EXPLAIN (COSTS OFF) UPDATE list_parted SET c = 2 WHERE a = 1 and b = 1; - QUERY PLAN ------------------------------------ - Update on list_parted - Update on sub_part1 list_parted - -> Result list_parted -(3 rows) - -UPDATE list_parted SET c = 2 WHERE a = 1 and b = 1; -SELECT * FROM list_parted; - a | b | c ----+---+--- - 1 | 1 | 2 -(1 row) - -EXPLAIN (COSTS OFF) DELETE FROM list_parted WHERE a = 1 and b = 1; - QUERY PLAN ------------------------------------ - Delete on list_parted - Delete on sub_part1 list_parted - -> Result list_parted -(3 rows) - -DELETE FROM list_parted WHERE a = 1 and b = 1; -SELECT * FROM list_parted; - a | b | c ----+---+--- -(0 rows) - -DROP TABLE list_parted; --- Cross partition UPDATE with nested loop join (multiple matches) -CREATE TABLE list_parted (a int, b int) PARTITION BY list (a); -CREATE TABLE sub_part1 PARTITION OF list_parted for VALUES in (1); -CREATE TABLE sub_part2 PARTITION OF list_parted for VALUES in (2); -INSERT into list_parted VALUES (1, 2); -CREATE TABLE non_parted (id int); -INSERT into non_parted VALUES (1), (1), (1); -UPDATE list_parted t1 set a = 2 FROM non_parted t2 WHERE t1.a = t2.id and a = 1; -SELECT * FROM list_parted; - a | b ----+--- - 2 | 2 -(1 row) - -DROP TABLE list_parted; -DROP TABLE non_parted; --- Test no segmentation fault in YbSeqscan with row marks -CREATE TABLE main_table (a int) partition by range(a); -CREATE TABLE main_table_1_100 partition of main_table FOR VALUES FROM (1) TO (100); -INSERT INTO main_table VALUES (1); -BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; -SELECT * FROM main_table; - a ---- - 1 -(1 row) - -SELECT * FROM main_table FOR KEY SHARE; - a ---- - 1 -(1 row) - -COMMIT; --- YB_TODO: begin: remove this after tracking yb.port.partition_prune --- Test partition pruning -create table rlp (a int, b varchar) partition by range (a); -create table rlp_default partition of rlp default partition by list (a); -create table rlp_default_default partition of rlp_default default; -create table rlp_default_10 partition of rlp_default for values in (10); -create table rlp_default_30 partition of rlp_default for values in (30); -create table rlp_default_null partition of rlp_default for values in (null); -create table rlp1 partition of rlp for values from (minvalue) to (1); -create table rlp2 partition of rlp for values from (1) to (10); -create table rlp3 (b varchar, a int) partition by list (b varchar_ops); -create table rlp3_default partition of rlp3 default; -create table rlp3abcd partition of rlp3 for values in ('ab', 'cd'); -create table rlp3efgh partition of rlp3 for values in ('ef', 'gh'); -create table rlp3nullxy partition of rlp3 for values in (null, 'xy'); -alter table rlp attach partition rlp3 for values from (15) to (20); -create table rlp4 partition of rlp for values from (20) to (30) partition by range (a); -create table rlp4_default partition of rlp4 default; -create table rlp4_1 partition of rlp4 for values from (20) to (25); -create table rlp4_2 partition of rlp4 for values from (25) to (29); -create table rlp5 partition of rlp for values from (31) to (maxvalue) partition by range (a); -create table rlp5_default partition of rlp5 default; -create table rlp5_1 partition of rlp5 for values from (31) to (40); 
-explain (costs off) select * from rlp where a = 1 or b = 'ab'; - QUERY PLAN ---------------------------------------------------------------- - Append - -> Seq Scan on rlp1 rlp_1 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp2 rlp_2 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp3abcd rlp_3 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp4_1 rlp_4 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp4_2 rlp_5 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp4_default rlp_6 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp5_1 rlp_7 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp5_default rlp_8 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp_default_10 rlp_9 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp_default_30 rlp_10 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp_default_null rlp_11 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) - -> Seq Scan on rlp_default_default rlp_12 - Storage Filter: ((a = 1) OR ((b)::text = 'ab'::text)) -(25 rows) - --- YB_TODO: end --- suppress warning that depends on wal_level -SET client_min_messages = 'ERROR'; -CREATE PUBLICATION p; -RESET client_min_messages; -ALTER PUBLICATION p ADD TABLES IN SCHEMA public; -ERROR: CREATE/ALTER PUBLICATION SCHEMA not supported yet -LINE 1: ALTER PUBLICATION p ADD TABLES IN SCHEMA public; - ^ -HINT: Please report the issue on https://github.com/YugaByte/yugabyte-db/issues -ALTER PUBLICATION p DROP TABLES IN SCHEMA CURRENT_SCHEMA; -ERROR: CREATE/ALTER PUBLICATION SCHEMA not supported yet -LINE 1: ALTER PUBLICATION p DROP TABLES IN SCHEMA CURRENT_SCHEMA; - ^ -HINT: Please report the issue on https://github.com/YugaByte/yugabyte-db/issues -ALTER PUBLICATION p SET CURRENT_SCHEMA; -ERROR: CREATE/ALTER PUBLICATION SCHEMA not supported yet -LINE 1: ALTER PUBLICATION p SET CURRENT_SCHEMA; - ^ -HINT: Please report the issue on https://github.com/YugaByte/yugabyte-db/issues --- YB_TODO: begin: remove after tracking yb.orig.join_batching -SET enable_hashjoin = off; -SET enable_mergejoin = off; -SET enable_seqscan = off; -SET enable_material = off; -SET yb_prefer_bnl = on; -SET yb_bnl_batch_size = 3; -CREATE TABLE t10 (r1 int, r2 int, r3 int, r4 int); -INSERT INTO t10 - SELECT DISTINCT - i1, i2+5, i3, i4 - FROM generate_series(1, 5) i1, - generate_series(1, 5) i2, - generate_series(1, 5) i3, - generate_series(1, 10) i4; -CREATE index i_t ON t10 (r1 ASC, r2 ASC, r3 ASC, r4 ASC); -CREATE TABLE t11 (c1 int, c3 int, x int); -INSERT INTO t11 VALUES (1,2,0), (1,3,0), (5,2,0), (5,3,0), (5,4,0); -CREATE TABLE t12 (c4 int, c2 int, y int); -INSERT INTO t12 VALUES (3,7,0),(6,9,0),(9,7,0),(4,9,0); -EXPLAIN (COSTS OFF) /*+ Leading((t12 (t11 t10))) Set(enable_seqscan true) */ SELECT t10.* FROM t12, t11, t10 WHERE x = y AND c1 = r1 AND c2 = r2 AND c3 = r3 AND c4 = r4 order by c1, c2, c3, c4; - QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------------- - Sort - Sort Key: t10.r1, t10.r2, t10.r3, t10.r4 - -> Nested Loop - -> Seq Scan on t12 - -> YB Batched Nested Loop Join - Join Filter: ((t12.y = t11.x) AND (t11.c1 = t10.r1) AND (t11.c3 = t10.r3)) - -> Seq Scan on t11 - -> Index Only Scan using i_t on t10 - Index Cond: ((ROW(r1, r3) = ANY (ARRAY[ROW(t11.c1, t11.c3), 
ROW($1, $4), ROW($2, $5)])) AND (r2 = t12.c2) AND (r4 = t12.c4)) -(9 rows) - -/*+ Leading((t12 (t11 t10))) Set(enable_seqscan true) */ SELECT t10.* FROM t12, t11, t10 WHERE x = y AND c1 = r1 AND c2 = r2 AND c3 = r3 AND c4 = r4 order by c1, c2, c3, c4; - r1 | r2 | r3 | r4 -----+----+----+---- - 1 | 7 | 2 | 3 - 1 | 7 | 2 | 9 - 1 | 7 | 3 | 3 - 1 | 7 | 3 | 9 - 1 | 9 | 2 | 4 - 1 | 9 | 2 | 6 - 1 | 9 | 3 | 4 - 1 | 9 | 3 | 6 - 5 | 7 | 2 | 3 - 5 | 7 | 2 | 9 - 5 | 7 | 3 | 3 - 5 | 7 | 3 | 9 - 5 | 7 | 4 | 3 - 5 | 7 | 4 | 9 - 5 | 9 | 2 | 4 - 5 | 9 | 2 | 6 - 5 | 9 | 3 | 4 - 5 | 9 | 3 | 6 - 5 | 9 | 4 | 4 - 5 | 9 | 4 | 6 -(20 rows) - -DROP TABLE t10; -DROP TABLE t11; -DROP TABLE t12; -RESET enable_hashjoin; -RESET enable_mergejoin; -RESET enable_seqscan; -RESET enable_material; -RESET yb_prefer_bnl; -RESET yb_bnl_batch_size; --- YB_TODO: end --- YB_TODO: begin: remove after tracking yb.port.foreign_key --- (the following test is a simplified version of the test under --- 'Foreign keys and partitioned tables' section there). -CREATE TABLE fk_notpartitioned_pk (a int, b int,PRIMARY KEY (a, b)); -INSERT INTO fk_notpartitioned_pk VALUES (2501, 2503); -CREATE TABLE fk_partitioned_fk (a int, b int) PARTITION BY HASH (a); -ALTER TABLE fk_partitioned_fk ADD FOREIGN KEY (a, b) REFERENCES fk_notpartitioned_pk; -CREATE TABLE fk_partitioned_fk_0 PARTITION OF fk_partitioned_fk FOR VALUES WITH (MODULUS 5, REMAINDER 0); -CREATE TABLE fk_partitioned_fk_1 PARTITION OF fk_partitioned_fk FOR VALUES WITH (MODULUS 5, REMAINDER 1); -INSERT INTO fk_partitioned_fk (a,b) VALUES (2501, 2503); --- this update fails because there is no referenced row -UPDATE fk_partitioned_fk SET a = a + 1 WHERE a = 2501; -ERROR: insert or update on table "fk_partitioned_fk_1" violates foreign key constraint "fk_partitioned_fk_a_b_fkey" -DETAIL: Key (a, b)=(2502, 2503) is not present in table "fk_notpartitioned_pk". --- but we can fix it thusly: -INSERT INTO fk_notpartitioned_pk (a,b) VALUES (2502, 2503); -UPDATE fk_partitioned_fk SET a = a + 1 WHERE a = 2501; --- YB_TODO: end --- YB_TODO: begin: remove after tracking postgres_fdw -CREATE EXTENSION postgres_fdw; -CREATE SERVER testserver1 FOREIGN DATA WRAPPER postgres_fdw; -NOTICE: no server_type specified. Defaulting to PostgreSQL. -HINT: Use "ALTER SERVER ... OPTIONS (ADD server_type '')" to explicitly set server_type. -DO $d$ - BEGIN - EXECUTE $$CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw - OPTIONS (dbname '$$||current_database()||$$', - host '$$||current_setting('listen_addresses')||$$', - port '$$||current_setting('port')||$$' - )$$; - END; -$d$; -NOTICE: no server_type specified. Defaulting to PostgreSQL. -HINT: Use "ALTER SERVER ... OPTIONS (ADD server_type '')" to explicitly set server_type. 
-CREATE USER MAPPING FOR public SERVER testserver1 - OPTIONS (user 'value', password 'value'); -CREATE USER MAPPING FOR CURRENT_USER SERVER loopback; -create table ctrtest (a int, b text) partition by list (a); -create table loct1 (a int check (a in (1)), b text); -create foreign table remp1 (a int check (a in (1)), b text) server loopback options (table_name 'loct1'); -alter table ctrtest attach partition remp1 for values in (1); -copy ctrtest from stdin; -select * from ctrtest; - a | b ----+----- - 1 | foo -(1 row) - -create table return_test (a int, b int, c int); -insert into return_test values (1, 1, 1); -update return_test set c = c returning *; - a | b | c ----+---+--- - 1 | 1 | 1 -(1 row) - -update return_test set c = c returning c; - c ---- - 1 -(1 row) - -update return_test set c = c returning a,b; - a | b ----+--- - 1 | 1 -(1 row) - --- YB_TODO: end --- YB_TODO: begin: remove after generated is tracked -CREATE TABLE sales ( - id SERIAL PRIMARY KEY, - product_id INT NOT NULL, - quantity INT NOT NULL, - price_per_unit NUMERIC(10, 2) NOT NULL, - total_price NUMERIC GENERATED ALWAYS AS (quantity * price_per_unit) STORED -); -INSERT INTO sales (product_id, quantity, price_per_unit) VALUES (1, 10, 100), (2, 10, 200); -SELECT * FROM SALES; - id | product_id | quantity | price_per_unit | total_price -----+------------+----------+----------------+------------- - 1 | 1 | 10 | 100.00 | 1000 - 2 | 2 | 10 | 200.00 | 2000 -(2 rows) - -UPDATE sales set quantity = quantity + 1; -SELECT * FROM SALES; - id | product_id | quantity | price_per_unit | total_price -----+------------+----------+----------------+------------- - 1 | 1 | 11 | 100.00 | 1100 - 2 | 2 | 11 | 200.00 | 2200 -(2 rows) - -UPDATE sales set price_per_unit = price_per_unit + 100; -SELECT * FROM SALES; - id | product_id | quantity | price_per_unit | total_price -----+------------+----------+----------------+------------- - 1 | 1 | 11 | 200.00 | 2200 - 2 | 2 | 11 | 300.00 | 3300 -(2 rows) - -UPDATE sales SET total_price = 0; -ERROR: column "total_price" can only be updated to DEFAULT -DETAIL: Column "total_price" is a generated column. 
-UPDATE sales SET product_id = product_id + 10; -SELECT * FROM SALES; - id | product_id | quantity | price_per_unit | total_price -----+------------+----------+----------------+------------- - 1 | 11 | 11 | 200.00 | 2200 - 2 | 12 | 11 | 300.00 | 3300 -(2 rows) - -CREATE OR REPLACE FUNCTION update_count() RETURNS trigger LANGUAGE plpgsql AS -$func$ -BEGIN - NEW.count := NEW.count+1; - RETURN NEW; -END -$func$; -CREATE TABLE test(a int, b int, count int, double_count int GENERATED ALWAYS AS (2 * count) STORED); -CREATE TRIGGER update_count_test_trig BEFORE UPDATE OF a ON test FOR ROW EXECUTE PROCEDURE update_count(); -INSERT INTO test(a, b, count) values (1, 1, 1), (2, 2, 1); -SELECT * FROM test ORDER BY a DESC; - a | b | count | double_count ----+---+-------+-------------- - 2 | 2 | 1 | 2 - 1 | 1 | 1 | 2 -(2 rows) - -UPDATE test set a = a + 5; -SELECT * FROM test ORDER BY a DESC; - a | b | count | double_count ----+---+-------+-------------- - 7 | 2 | 2 | 4 - 6 | 1 | 2 | 4 -(2 rows) - -UPDATE test set b = b + 5; -SELECT * FROM test ORDER BY a DESC; - a | b | count | double_count ----+---+-------+-------------- - 7 | 7 | 2 | 4 - 6 | 6 | 2 | 4 -(2 rows) - --- YB_TODO: end --- YB_TODO: begin: remove once subselect is tracked -create temp table outer_text (f1 text, f2 text); -insert into outer_text values ('a', 'a'); -insert into outer_text values ('b', 'a'); -insert into outer_text values ('a', null); -insert into outer_text values ('b', null); -create temp table inner_text (c1 text, c2 text); -insert into inner_text values ('a', null); -insert into inner_text values ('123', '456'); -select * from outer_text where (f1, f2) not in (select * from inner_text) order by f2; - f1 | f2 -----+---- - b | a - b | -(2 rows) - -drop table outer_text, inner_text; -create table outer_text (f1 text, f2 text); -insert into outer_text values ('a', 'a'); -insert into outer_text values ('b', 'a'); -insert into outer_text values ('a', null); -insert into outer_text values ('b', null); -create table inner_text (c1 text, c2 text); -insert into inner_text values ('a', null); -insert into inner_text values ('123', '456'); -select * from outer_text where (f1, f2) not in (select * from inner_text) order by f2; - f1 | f2 -----+---- - b | a - b | -(2 rows) - --- YB_TODO: begin: remove after tracking yb.orig.tablespaces -CREATE TABLESPACE y WITH (replica_placement='{"num_replicas":3, "placement_blocks":[{"cloud":"cloud1","region":"r1","zone":"z1","min_num_replicas":1},{"cloud":"cloud2","region":"r2", "zone":"z2", "min_num_replicas":1}]}'); --- YB_TODO: end diff --git a/src/postgres/src/test/regress/sql/yb.orig.pg15.sql b/src/postgres/src/test/regress/sql/yb.orig.pg15.sql deleted file mode 100644 index 18f1b2b25220..000000000000 --- a/src/postgres/src/test/regress/sql/yb.orig.pg15.sql +++ /dev/null @@ -1,579 +0,0 @@ --- --- Tests for pg15 branch stability. 
--- --- Basics -create table t1 (id int, name text); - -create table t2 (id int primary key, name text); - -explain (COSTS OFF) insert into t2 values (1); -insert into t2 values (1); - -explain (COSTS OFF) insert into t2 values (2), (3); -insert into t2 values (2), (3); - -explain (COSTS OFF) select * from t2 where id = 1; -select * from t2 where id = 1; - -explain (COSTS OFF) select * from t2 where id > 1; -select * from t2 where id > 1; - -explain (COSTS OFF) update t2 set name = 'John' where id = 1; -update t2 set name = 'John' where id = 1; - -explain (COSTS OFF) update t2 set name = 'John' where id > 1; -update t2 set name = 'John' where id > 1; - -explain (COSTS OFF) update t2 set id = id + 4 where id = 1; -update t2 set id = id + 4 where id = 1; - -explain (COSTS OFF) update t2 set id = id + 4 where id > 1; -update t2 set id = id + 4 where id > 1; - -explain (COSTS OFF) delete from t2 where id = 1; -delete from t2 where id = 1; - -explain (COSTS OFF) delete from t2 where id > 1; -delete from t2 where id > 1; - --- Before update trigger test. - -alter table t2 add column count int; - -insert into t2 values (1, 'John', 0); - -CREATE OR REPLACE FUNCTION update_count() RETURNS trigger LANGUAGE plpgsql AS -$func$ -BEGIN - NEW.count := NEW.count+1; - RETURN NEW; -END -$func$; - -CREATE TRIGGER update_count_trig BEFORE UPDATE ON t2 FOR ROW EXECUTE PROCEDURE update_count(); - -update t2 set name = 'Jane' where id = 1; - -select * from t2; - --- CREATE INDEX -CREATE INDEX myidx on t2(name); - --- Insert with on conflict -insert into t2 values (1, 'foo') on conflict ON CONSTRAINT t2_pkey do update set id = t2.id+1; - -select * from t2; - --- Joins (YB_TODO: if I move it below pushdown test, the test fails) - -CREATE TABLE p1 (a int, b int, c varchar, primary key(a,b)); -INSERT INTO p1 SELECT i, i % 25, to_char(i, 'FM0000') FROM generate_series(0, 599) i WHERE i % 2 = 0; - -CREATE TABLE p2 (a int, b int, c varchar, primary key(a,b)); -INSERT INTO p2 SELECT i, i % 25, to_char(i, 'FM0000') FROM generate_series(0, 599) i WHERE i % 3 = 0; - --- Merge join -EXPLAIN (COSTS OFF) SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; -SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; - --- Hash join -SET enable_mergejoin = off; -EXPLAIN (COSTS OFF) SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; -SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; - --- Batched nested loop join -ANALYZE p1; -ANALYZE p2; -SET enable_hashjoin = off; -SET enable_seqscan = off; -SET enable_material = off; -SET yb_bnl_batch_size = 3; - -EXPLAIN (COSTS OFF) SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; -SELECT * FROM p1 t1 JOIN p2 t2 ON t1.a = t2.a WHERE t1.a <= 100 AND t2.a <= 100; - -SET enable_mergejoin = on; -SET enable_hashjoin = on; -SET enable_seqscan = on; -SET enable_material = on; --- Update pushdown test. - -CREATE TABLE single_row_decimal (k int PRIMARY KEY, v1 decimal, v2 decimal(10,2), v3 int); -CREATE FUNCTION next_v3(int) returns int language sql as $$ - SELECT v3 + 1 FROM single_row_decimal WHERE k = $1; -$$; - -INSERT INTO single_row_decimal(k, v1, v2, v3) values (1,1.5,1.5,1), (2,2.5,2.5,2), (3,null, null,null); -SELECT * FROM single_row_decimal ORDER BY k; -UPDATE single_row_decimal SET v1 = v1 + 1.555, v2 = v2 + 1.555, v3 = v3 + 1 WHERE k = 1; --- v2 should be rounded to 2 decimals. 
-SELECT * FROM single_row_decimal ORDER BY k; - -UPDATE single_row_decimal SET v1 = v1 + 1.555, v2 = v2 + 1.555, v3 = 3 WHERE k = 1; -SELECT * FROM single_row_decimal ORDER BY k; -UPDATE single_row_decimal SET v1 = v1 + 1.555, v2 = v2 + 1.555, v3 = next_v3(1) WHERE k = 1; -SELECT * FROM single_row_decimal ORDER BY k; - --- Delete with returning -insert into t2 values (4), (5), (6); -delete from t2 where id > 2 returning id, name; - --- COPY FROM -CREATE TABLE myemp (id int primary key, name text); -COPY myemp FROM stdin; -1 a -2 b -\. -SELECT * from myemp; - -CREATE TABLE myemp2(id int primary key, name text) PARTITION BY range(id); -CREATE TABLE myemp2_1_100 PARTITION OF myemp2 FOR VALUES FROM (1) TO (100); -CREATE TABLE myemp2_101_200 PARTITION OF myemp2 FOR VALUES FROM (101) TO (200); -COPY myemp2 FROM stdin; -1 a -102 b -\. -SELECT * from myemp2_1_100; -SELECT * from myemp2_101_200; --- Adding PK -create table test (id int); -insert into test values (1); -ALTER TABLE test ENABLE ROW LEVEL SECURITY; -CREATE POLICY test_policy ON test FOR SELECT USING (true); -alter table test add primary key (id); - -create table test2 (id int); -insert into test2 values (1), (1); -alter table test2 add primary key (id); - --- Creating partitioned table -create table emp_par1(id int primary key, name text) partition by range(id); -CREATE TABLE emp_par1_1_100 PARTITION OF emp_par1 FOR VALUES FROM (1) TO (100); -create table emp_par2(id int primary key, name text) partition by list(id); -create table emp_par3(id int primary key, name text) partition by hash(id); - --- Adding FK -create table emp(id int unique); -create table address(emp_id int, addr text); -insert into address values (1, 'a'); -ALTER TABLE address ADD FOREIGN KEY(emp_id) REFERENCES emp(id); -insert into emp values (1); -ALTER TABLE address ADD FOREIGN KEY(emp_id) REFERENCES emp(id); - --- Adding PK with pre-existing FK constraint -alter table emp add primary key (id); -alter table address add primary key (emp_id); - --- Add primary key with with pre-existing FK where confdelsetcols non nul -create table emp2 (id int, name text, primary key (id, name)); -create table address2 (id int, name text, addr text, FOREIGN KEY (id, name) REFERENCES emp2 ON DELETE SET NULL (name)); -insert into emp2 values (1, 'a'), (2, 'b'); -insert into address2 values (1, 'a', 'a'), (2, 'b', 'b'); -delete from emp2 where id = 1; -select * from address2 order by id; -alter table address2 add primary key (id); -delete from emp2 where id = 2; -select * from address2 order by id; - --- create database -CREATE DATABASE mytest; - --- drop database -DROP DATABASE mytest; - -create table fastpath (a int, b text, c numeric); -insert into fastpath select y.x, 'b' || (y.x/10)::text, 100 from (select generate_series(1,10000) as x) y; -select md5(string_agg(a::text, b order by a, b asc)) from fastpath - where a >= 1000 and a < 2000 and b > 'b1' and b < 'b3'; - --- Index scan test row comparison expressions -CREATE TABLE pk_range_int_asc (r1 INT, r2 INT, r3 INT, v INT, PRIMARY KEY(r1 asc, r2 asc, r3 asc)); -INSERT INTO pk_range_int_asc SELECT i/25, (i/5) % 5, i % 5, i FROM generate_series(1, 125) AS i; -EXPLAIN (COSTS OFF, TIMING OFF, SUMMARY OFF, ANALYZE) SELECT * FROM pk_range_int_asc WHERE (r1, r2, r3) <= (2,3,2); -SELECT * FROM pk_range_int_asc WHERE (r1, r2, r3) <= (2,3,2); - --- SERIAL type -CREATE TABLE serial_test (k int, v SERIAL); -INSERT INTO serial_test VALUES (1), (1), (1); -SELECT * FROM serial_test ORDER BY v; -SELECT last_value, is_called FROM 
public.serial_test_v_seq; - --- lateral join -CREATE TABLE tlateral1 (a int, b int, c varchar); -INSERT INTO tlateral1 SELECT i, i % 25, to_char(i % 4, 'FM0000') FROM generate_series(0, 599, 2) i; -CREATE TABLE tlateral2 (a int, b int, c varchar); -INSERT INTO tlateral2 SELECT i % 25, i, to_char(i % 4, 'FM0000') FROM generate_series(0, 599, 3) i; -ANALYZE tlateral1, tlateral2; --- YB_TODO: pg15 used merge join, whereas hash join is expected. --- EXPLAIN (COSTS FALSE) SELECT * FROM tlateral1 t1 LEFT JOIN LATERAL (SELECT t2.a AS t2a, t2.c AS t2c, t2.b AS t2b, t3.b AS t3b, least(t1.a,t2.a,t3.b) FROM tlateral1 t2 JOIN tlateral2 t3 ON (t2.a = t3.b AND t2.c = t3.c)) ss ON t1.a = ss.t2a WHERE t1.b = 0 ORDER BY t1.a; -SELECT * FROM tlateral1 t1 LEFT JOIN LATERAL (SELECT t2.a AS t2a, t2.c AS t2c, t2.b AS t2b, t3.b AS t3b, least(t1.a,t2.a,t3.b) FROM tlateral1 t2 JOIN tlateral2 t3 ON (t2.a = t3.b AND t2.c = t3.c)) ss ON t1.a = ss.t2a WHERE t1.b = 0 ORDER BY t1.a; - --- Test FailedAssertion("BufferIsValid(bsrcslot->buffer) failure from ExecCopySlot in ExecMergeJoin. -CREATE TABLE mytest1(h int, r int, v1 int, v2 int, v3 int, primary key(h HASH, r ASC)); -INSERT INTO mytest1 VALUES (1,2,4,9,2), (2,3,2,4,6); - -CREATE TABLE mytest2(h int, r int, v1 int, v2 int, v3 int, primary key(h ASC, r ASC)); -INSERT INTO mytest2 VALUES (1,2,4,5,7), (1,3,8,6,1), (4,3,7,3,2); - -SET enable_hashjoin = off; -SET enable_nestloop = off; -explain SELECT * FROM mytest1 t1 JOIN mytest2 t2 on t1.h = t2.h WHERE t2.r = 2; -SELECT * FROM mytest1 t1 JOIN mytest2 t2 on t1.h = t2.h WHERE t2.r = 2; -SET enable_hashjoin = on; -SET enable_nestloop = on; --- Insert with on conflict on temp table -create temporary table mytmp (id int primary key, name text, count int); -insert into mytmp values (1, 'foo', 0); -insert into mytmp values (1, 'foo') on conflict ON CONSTRAINT mytmp_pkey do update set id = mytmp.id+1; -select * from mytmp; - -CREATE OR REPLACE FUNCTION update_count() RETURNS trigger LANGUAGE plpgsql AS -$func$ -BEGIN - NEW.count := NEW.count+1; - RETURN NEW; -END -$func$; - -CREATE TRIGGER update_count_trig BEFORE UPDATE ON mytmp FOR ROW EXECUTE PROCEDURE update_count(); -insert into mytmp values (2, 'foo') on conflict ON CONSTRAINT mytmp_pkey do update set id = mytmp.id+1; -select * from mytmp; - -create view myview as select * from mytmp; -insert into myview values (3, 'foo') on conflict (id) do update set id = myview.id + 1; -select * from myview; - --- YB batched nested loop join -CREATE TABLE p3 (a int, b int, c varchar, primary key(a,b)); -INSERT INTO p3 SELECT i, i % 25, to_char(i, 'FM0000') FROM generate_series(0, 599) i WHERE i % 5 = 0; -ANALYZE p3; - -CREATE INDEX p1_b_idx ON p1 (b ASC); -SET enable_hashjoin = off; -SET enable_mergejoin = off; -SET enable_seqscan = off; -SET enable_material = off; -SET yb_bnl_batch_size = 3; - -SELECT * FROM p1 JOIN p2 ON p1.a = p2.b AND p2.a = p1.b; - -SELECT * FROM p3 t3 RIGHT OUTER JOIN (SELECT t1.a as a FROM p1 t1 JOIN p2 t2 ON t1.a = t2.b WHERE t1.b <= 10 AND t2.b <= 15) s ON t3.a = s.a; - -CREATE TABLE m1 (a money, primary key(a asc)); -INSERT INTO m1 SELECT i*2 FROM generate_series(1, 2000) i; - -CREATE TABLE m2 (a money, primary key(a asc)); -INSERT INTO m2 SELECT i*5 FROM generate_series(1, 2000) i; -SELECT * FROM m1 t1 JOIN m2 t2 ON t1.a = t2.a WHERE t1.a <= 50::money; --- Index on tmp table -create temp table prtx2 (a integer, b integer, c integer); -insert into prtx2 select 1 + i%10, i, i from generate_series(1,5000) i, generate_series(1,10) j; -create index on prtx2 
(c); - --- testing yb_hash_code pushdown on a secondary index with a text hash column -CREATE TABLE text_table (hr text, ti text, tj text, i int, j int, primary key (hr)); -INSERT INTO text_table SELECT i::TEXT, i::TEXT, i::TEXT, i, i FROM generate_series(1,10000) i; -CREATE INDEX textidx ON text_table (tj); -SELECT tj FROM text_table WHERE yb_hash_code(tj) <= 63; - --- Row locking -CREATE TABLE t(h INT, r INT, PRIMARY KEY(h, r)); -INSERT INTO t VALUES(1, 1), (1, 3); -SELECT * FROM t WHERE h = 1 AND r in(1, 3) FOR KEY SHARE; -DROP TABLE t; - --- Test for ItemPointerIsValid assertion failure -CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple'); --- Aggregate pushdown -SELECT COUNT(*) FROM pg_enum WHERE enumtypid = 'rainbow'::regtype; --- IndexOnlyScan -SELECT enumlabel FROM pg_enum WHERE enumtypid = 'rainbow'::regtype; - --- Cleanup -DROP TABLE IF EXISTS address, address2, emp, emp2, emp_par1, emp_par1_1_100, emp_par2, emp_par3, - fastpath, myemp, myemp2, myemp2_101_200, myemp2_1_100, p1, p2, pk_range_int_asc, - single_row_decimal, t1, t2, test, test2, serial_test, tlateral1, tlateral2, mytest1, mytest2 CASCADE; - --- insert into temp table in function body -create temp table compos (f1 int, f2 text); -create function fcompos1(v compos) returns void as $$ -insert into compos values (v.*); -$$ language sql; -select fcompos1(row(1,'one')); - --- very basic REINDEX -CREATE TABLE yb (i int PRIMARY KEY, j int); -CREATE INDEX NONCONCURRENTLY ON yb (j); -UPDATE pg_index SET indisvalid = false - WHERE indexrelid = 'yb_j_idx'::regclass; -\c -REINDEX INDEX yb_j_idx; -UPDATE pg_index SET indisvalid = false - WHERE indexrelid = 'yb_j_idx'::regclass; -\c -\set VERBOSITY terse -REINDEX(verbose) INDEX yb_j_idx; -\set VERBOSITY default - --- internal collation -create table texttab (t text); -insert into texttab values ('a'); -select count(*) from texttab group by t; - --- ALTER TABLE ADD COLUMN DEFAULT with pre-existing rows -CREATE TABLE mytable (pk INT NOT NULL PRIMARY KEY); -INSERT INTO mytable SELECT * FROM generate_series(1, 10) a; -ALTER TABLE mytable ADD COLUMN c_bigint BIGINT NOT NULL DEFAULT -1; -SELECT c_bigint FROM mytable WHERE c_bigint = -1 LIMIT 1; -DROP TABLE mytable; - --- Update partitioned table with multiple partitions -CREATE TABLE t(id int) PARTITION BY range(id); -CREATE TABLE t_1_100 PARTITION OF t FOR VALUES FROM (1) TO (100); -CREATE TABLE t_101_200 PARTITION OF t FOR VALUES FROM (101) TO (200); -INSERT INTO t VALUES (1); -UPDATE t SET id = 2; -SELECT * FROM t; -DROP TABLE t; - --- Update partitioned table with multiple partitions and secondary index -CREATE TABLE t3(id int primary key, name int, add int, unique(id, name)) PARTITION BY range(id); -CREATE TABLE t3_1_100 partition of t3 FOR VALUES FROM (1) TO (100); -CREATE TABLE t3_101_200 partition of t3 FOR VALUES FROM (101) TO (200); -INSERT INTO t3 VALUES (1, 1, 1); -UPDATE t3 SET ADD = 2; -SELECT * from t3; -DROP TABLE t3; - --- Test whether single row optimization is invoked when --- only one partition is being updated. 
-CREATE TABLE list_parted (a int, b int, c int, primary key(a,b)) PARTITION BY list (a); -CREATE TABLE sub_parted PARTITION OF list_parted for VALUES in (1) PARTITION BY list (b); -CREATE TABLE sub_part1 PARTITION OF sub_parted for VALUES in (1); -INSERT INTO list_parted VALUES (1, 1, 1); -EXPLAIN (COSTS OFF) UPDATE list_parted SET c = 2 WHERE a = 1 and b = 1; -UPDATE list_parted SET c = 2 WHERE a = 1 and b = 1; -SELECT * FROM list_parted; - -EXPLAIN (COSTS OFF) DELETE FROM list_parted WHERE a = 1 and b = 1; -DELETE FROM list_parted WHERE a = 1 and b = 1; -SELECT * FROM list_parted; - -DROP TABLE list_parted; --- Cross partition UPDATE with nested loop join (multiple matches) -CREATE TABLE list_parted (a int, b int) PARTITION BY list (a); -CREATE TABLE sub_part1 PARTITION OF list_parted for VALUES in (1); -CREATE TABLE sub_part2 PARTITION OF list_parted for VALUES in (2); -INSERT into list_parted VALUES (1, 2); - -CREATE TABLE non_parted (id int); -INSERT into non_parted VALUES (1), (1), (1); -UPDATE list_parted t1 set a = 2 FROM non_parted t2 WHERE t1.a = t2.id and a = 1; -SELECT * FROM list_parted; -DROP TABLE list_parted; -DROP TABLE non_parted; --- Test no segmentation fault in YbSeqscan with row marks -CREATE TABLE main_table (a int) partition by range(a); -CREATE TABLE main_table_1_100 partition of main_table FOR VALUES FROM (1) TO (100); -INSERT INTO main_table VALUES (1); -BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; -SELECT * FROM main_table; -SELECT * FROM main_table FOR KEY SHARE; -COMMIT; - --- YB_TODO: begin: remove this after tracking yb.port.partition_prune --- Test partition pruning -create table rlp (a int, b varchar) partition by range (a); -create table rlp_default partition of rlp default partition by list (a); -create table rlp_default_default partition of rlp_default default; -create table rlp_default_10 partition of rlp_default for values in (10); -create table rlp_default_30 partition of rlp_default for values in (30); -create table rlp_default_null partition of rlp_default for values in (null); -create table rlp1 partition of rlp for values from (minvalue) to (1); -create table rlp2 partition of rlp for values from (1) to (10); - -create table rlp3 (b varchar, a int) partition by list (b varchar_ops); -create table rlp3_default partition of rlp3 default; -create table rlp3abcd partition of rlp3 for values in ('ab', 'cd'); -create table rlp3efgh partition of rlp3 for values in ('ef', 'gh'); -create table rlp3nullxy partition of rlp3 for values in (null, 'xy'); -alter table rlp attach partition rlp3 for values from (15) to (20); - -create table rlp4 partition of rlp for values from (20) to (30) partition by range (a); -create table rlp4_default partition of rlp4 default; -create table rlp4_1 partition of rlp4 for values from (20) to (25); -create table rlp4_2 partition of rlp4 for values from (25) to (29); - -create table rlp5 partition of rlp for values from (31) to (maxvalue) partition by range (a); -create table rlp5_default partition of rlp5 default; -create table rlp5_1 partition of rlp5 for values from (31) to (40); - -explain (costs off) select * from rlp where a = 1 or b = 'ab'; --- YB_TODO: end - --- suppress warning that depends on wal_level -SET client_min_messages = 'ERROR'; -CREATE PUBLICATION p; -RESET client_min_messages; -ALTER PUBLICATION p ADD TABLES IN SCHEMA public; -ALTER PUBLICATION p DROP TABLES IN SCHEMA CURRENT_SCHEMA; -ALTER PUBLICATION p SET CURRENT_SCHEMA; - --- YB_TODO: begin: remove after tracking yb.orig.join_batching -SET 
enable_hashjoin = off; -SET enable_mergejoin = off; -SET enable_seqscan = off; -SET enable_material = off; -SET yb_prefer_bnl = on; -SET yb_bnl_batch_size = 3; -CREATE TABLE t10 (r1 int, r2 int, r3 int, r4 int); - -INSERT INTO t10 - SELECT DISTINCT - i1, i2+5, i3, i4 - FROM generate_series(1, 5) i1, - generate_series(1, 5) i2, - generate_series(1, 5) i3, - generate_series(1, 10) i4; - -CREATE index i_t ON t10 (r1 ASC, r2 ASC, r3 ASC, r4 ASC); - -CREATE TABLE t11 (c1 int, c3 int, x int); -INSERT INTO t11 VALUES (1,2,0), (1,3,0), (5,2,0), (5,3,0), (5,4,0); - -CREATE TABLE t12 (c4 int, c2 int, y int); -INSERT INTO t12 VALUES (3,7,0),(6,9,0),(9,7,0),(4,9,0); - -EXPLAIN (COSTS OFF) /*+ Leading((t12 (t11 t10))) Set(enable_seqscan true) */ SELECT t10.* FROM t12, t11, t10 WHERE x = y AND c1 = r1 AND c2 = r2 AND c3 = r3 AND c4 = r4 order by c1, c2, c3, c4; - -/*+ Leading((t12 (t11 t10))) Set(enable_seqscan true) */ SELECT t10.* FROM t12, t11, t10 WHERE x = y AND c1 = r1 AND c2 = r2 AND c3 = r3 AND c4 = r4 order by c1, c2, c3, c4; - -DROP TABLE t10; -DROP TABLE t11; -DROP TABLE t12; -RESET enable_hashjoin; -RESET enable_mergejoin; -RESET enable_seqscan; -RESET enable_material; -RESET yb_prefer_bnl; -RESET yb_bnl_batch_size; --- YB_TODO: end - --- YB_TODO: begin: remove after tracking yb.port.foreign_key --- (the following test is a simplified version of the test under --- 'Foreign keys and partitioned tables' section there). -CREATE TABLE fk_notpartitioned_pk (a int, b int,PRIMARY KEY (a, b)); -INSERT INTO fk_notpartitioned_pk VALUES (2501, 2503); - -CREATE TABLE fk_partitioned_fk (a int, b int) PARTITION BY HASH (a); -ALTER TABLE fk_partitioned_fk ADD FOREIGN KEY (a, b) REFERENCES fk_notpartitioned_pk; - -CREATE TABLE fk_partitioned_fk_0 PARTITION OF fk_partitioned_fk FOR VALUES WITH (MODULUS 5, REMAINDER 0); -CREATE TABLE fk_partitioned_fk_1 PARTITION OF fk_partitioned_fk FOR VALUES WITH (MODULUS 5, REMAINDER 1); -INSERT INTO fk_partitioned_fk (a,b) VALUES (2501, 2503); - --- this update fails because there is no referenced row -UPDATE fk_partitioned_fk SET a = a + 1 WHERE a = 2501; --- but we can fix it thusly: -INSERT INTO fk_notpartitioned_pk (a,b) VALUES (2502, 2503); -UPDATE fk_partitioned_fk SET a = a + 1 WHERE a = 2501; --- YB_TODO: end - --- YB_TODO: begin: remove after tracking postgres_fdw -CREATE EXTENSION postgres_fdw; -CREATE SERVER testserver1 FOREIGN DATA WRAPPER postgres_fdw; -DO $d$ - BEGIN - EXECUTE $$CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw - OPTIONS (dbname '$$||current_database()||$$', - host '$$||current_setting('listen_addresses')||$$', - port '$$||current_setting('port')||$$' - )$$; - END; -$d$; - -CREATE USER MAPPING FOR public SERVER testserver1 - OPTIONS (user 'value', password 'value'); -CREATE USER MAPPING FOR CURRENT_USER SERVER loopback; - -create table ctrtest (a int, b text) partition by list (a); -create table loct1 (a int check (a in (1)), b text); -create foreign table remp1 (a int check (a in (1)), b text) server loopback options (table_name 'loct1'); -alter table ctrtest attach partition remp1 for values in (1); - -copy ctrtest from stdin; -1 foo -\. 
-select * from ctrtest; - -create table return_test (a int, b int, c int); -insert into return_test values (1, 1, 1); -update return_test set c = c returning *; -update return_test set c = c returning c; -update return_test set c = c returning a,b; --- YB_TODO: end - --- YB_TODO: begin: remove after generated is tracked -CREATE TABLE sales ( - id SERIAL PRIMARY KEY, - product_id INT NOT NULL, - quantity INT NOT NULL, - price_per_unit NUMERIC(10, 2) NOT NULL, - total_price NUMERIC GENERATED ALWAYS AS (quantity * price_per_unit) STORED -); -INSERT INTO sales (product_id, quantity, price_per_unit) VALUES (1, 10, 100), (2, 10, 200); -SELECT * FROM SALES; -UPDATE sales set quantity = quantity + 1; -SELECT * FROM SALES; -UPDATE sales set price_per_unit = price_per_unit + 100; -SELECT * FROM SALES; -UPDATE sales SET total_price = 0; -UPDATE sales SET product_id = product_id + 10; -SELECT * FROM SALES; - -CREATE OR REPLACE FUNCTION update_count() RETURNS trigger LANGUAGE plpgsql AS -$func$ -BEGIN - NEW.count := NEW.count+1; - RETURN NEW; -END -$func$; -CREATE TABLE test(a int, b int, count int, double_count int GENERATED ALWAYS AS (2 * count) STORED); -CREATE TRIGGER update_count_test_trig BEFORE UPDATE OF a ON test FOR ROW EXECUTE PROCEDURE update_count(); -INSERT INTO test(a, b, count) values (1, 1, 1), (2, 2, 1); -SELECT * FROM test ORDER BY a DESC; -UPDATE test set a = a + 5; -SELECT * FROM test ORDER BY a DESC; -UPDATE test set b = b + 5; -SELECT * FROM test ORDER BY a DESC; - --- YB_TODO: end --- YB_TODO: begin: remove once subselect is tracked -create temp table outer_text (f1 text, f2 text); -insert into outer_text values ('a', 'a'); -insert into outer_text values ('b', 'a'); -insert into outer_text values ('a', null); -insert into outer_text values ('b', null); - -create temp table inner_text (c1 text, c2 text); -insert into inner_text values ('a', null); -insert into inner_text values ('123', '456'); - -select * from outer_text where (f1, f2) not in (select * from inner_text) order by f2; - -drop table outer_text, inner_text; - -create table outer_text (f1 text, f2 text); -insert into outer_text values ('a', 'a'); -insert into outer_text values ('b', 'a'); -insert into outer_text values ('a', null); -insert into outer_text values ('b', null); - -create table inner_text (c1 text, c2 text); -insert into inner_text values ('a', null); -insert into inner_text values ('123', '456'); - -select * from outer_text where (f1, f2) not in (select * from inner_text) order by f2; --- YB_TODO: begin: remove after tracking yb.orig.tablespaces -CREATE TABLESPACE y WITH (replica_placement='{"num_replicas":3, "placement_blocks":[{"cloud":"cloud1","region":"r1","zone":"z1","min_num_replicas":1},{"cloud":"cloud2","region":"r2", "zone":"z2", "min_num_replicas":1}]}'); --- YB_TODO: end diff --git a/src/postgres/src/test/regress/yb_pg15 b/src/postgres/src/test/regress/yb_pg15 deleted file mode 100644 index 704ad6211e68..000000000000 --- a/src/postgres/src/test/regress/yb_pg15 +++ /dev/null @@ -1,6 +0,0 @@ -# src/test/regress/yb_pg15 -# -################################################################################ -# YB PG15 Testsuite: Yugabyte-owned (non-ported) temporary tests for pg15 branch stability. 
-################################################################################ -test: yb.orig.pg15 From 6979797f3e2e7de0f6018b9b643ce99422445030 Mon Sep 17 00:00:00 2001 From: sushil-yb Date: Mon, 12 May 2025 14:47:10 +0530 Subject: [PATCH 044/146] [CLOUDGA-27496] Remove tailscale from YNP Summary: Remoove tailscale module for YNP Test Plan: Manually tried running YNP Reviewers: anijhawan, svarshney Reviewed By: svarshney Subscribers: svarshney, anijhawan Differential Revision: https://phorge.dev.yugabyte.com/D43916 --- managed/node-agent/resources/ynp/configs/config.j2 | 1 - .../ynp/modules/provision/tailscale/__init__.py | 0 .../ynp/modules/provision/tailscale/tailscale.py | 5 ----- .../provision/tailscale/templates/precheck.j2 | 10 ---------- .../modules/provision/tailscale/templates/run.j2 | 13 ------------- 5 files changed, 29 deletions(-) delete mode 100644 managed/node-agent/resources/ynp/modules/provision/tailscale/__init__.py delete mode 100644 managed/node-agent/resources/ynp/modules/provision/tailscale/tailscale.py delete mode 100644 managed/node-agent/resources/ynp/modules/provision/tailscale/templates/precheck.j2 delete mode 100644 managed/node-agent/resources/ynp/modules/provision/tailscale/templates/run.j2 diff --git a/managed/node-agent/resources/ynp/configs/config.j2 b/managed/node-agent/resources/ynp/configs/config.j2 index c0d475de3f6b..5de2eb0f55d8 100644 --- a/managed/node-agent/resources/ynp/configs/config.j2 +++ b/managed/node-agent/resources/ynp/configs/config.j2 @@ -114,5 +114,4 @@ earlyoom_args = {{ ynp_config['ynp'].earlyoom_args | default('') }} {%- if 'extra' in ynp_config and ynp_config.extra.is_ybm %} [ConfigureYBMAMI] -[ConfigureTailscale] {%- endif %} diff --git a/managed/node-agent/resources/ynp/modules/provision/tailscale/__init__.py b/managed/node-agent/resources/ynp/modules/provision/tailscale/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/managed/node-agent/resources/ynp/modules/provision/tailscale/tailscale.py b/managed/node-agent/resources/ynp/modules/provision/tailscale/tailscale.py deleted file mode 100644 index 7e34fc51c834..000000000000 --- a/managed/node-agent/resources/ynp/modules/provision/tailscale/tailscale.py +++ /dev/null @@ -1,5 +0,0 @@ -from ...base_module import BaseYnpModule - - -class ConfigureTailscale(BaseYnpModule): - pass diff --git a/managed/node-agent/resources/ynp/modules/provision/tailscale/templates/precheck.j2 b/managed/node-agent/resources/ynp/modules/provision/tailscale/templates/precheck.j2 deleted file mode 100644 index 7a38d95ccae7..000000000000 --- a/managed/node-agent/resources/ynp/modules/provision/tailscale/templates/precheck.j2 +++ /dev/null @@ -1,10 +0,0 @@ -# Check if tailscale binary exists -if command -v tailscale &>/dev/null; then - tailscale_version=$(tailscale version | head -n1 | awk '{print $1}') - echo "Tailscale is installed: $tailscale_version" - add_result "Tailscale Binary Check" "PASS" \ - "Tailscale is installed: $tailscale_version" -else - echo "Tailscale is not installed" - add_result "Tailscale Binary Check" "FAIL" "Tailscale is not installed" -fi diff --git a/managed/node-agent/resources/ynp/modules/provision/tailscale/templates/run.j2 b/managed/node-agent/resources/ynp/modules/provision/tailscale/templates/run.j2 deleted file mode 100644 index 0f0e4d83fd88..000000000000 --- a/managed/node-agent/resources/ynp/modules/provision/tailscale/templates/run.j2 +++ /dev/null @@ -1,13 +0,0 @@ -# --- Reset hostname --- -echo "Resetting hostname..." 
-hostnamectl set-hostname "" - -# --- Add Tailscale repository --- -echo "Adding Tailscale repository..." -curl -fsSL https://pkgs.tailscale.com/stable/fedora/tailscale.repo \ - -o /etc/yum.repos.d/tailscale.repo -# --- Install Tailscale --- -echo "Installing Tailscale..." -dnf install -y tailscale - -echo "Tailscale installation complete." From 891f548b0330f54e148140d5e1cb7fdde17a28e1 Mon Sep 17 00:00:00 2001 From: Sami Ahmed Siddiqui Date: Tue, 13 May 2025 14:19:03 +0500 Subject: [PATCH 045/146] [Docs] Redesigning the YB tserver page (#27126) * Redesigning the YB tserver page * Change H5 font, increase spacing before H2/H3 and remove divider from the first H5 * Add extra spacing before an H5 heading based on rule * Remove content changes * remove duplicated lines from the PR --- docs/assets/scss/_styles_project.scss | 7 +++ docs/assets/scss/_yb_headings.scss | 47 ++++++++++++++++++ docs/assets/scss/_yb_tags.scss | 31 +++++++++++- .../shortcodes/tags/feature/deprecated.html | 1 + .../tags/feature/restart-needed.html | 1 + .../shortcodes/tags/feature/t-server.html | 1 + docs/layouts/shortcodes/tags/wrap.html | 1 + docs/src/index.js | 22 ++++++++ .../static/fonts/sf-mono/SFMonoSemibold.woff2 | Bin 0 -> 44824 bytes docs/static/icons/t-server.svg | 3 ++ 10 files changed, 113 insertions(+), 1 deletion(-) create mode 100644 docs/layouts/shortcodes/tags/feature/deprecated.html create mode 100644 docs/layouts/shortcodes/tags/feature/restart-needed.html create mode 100644 docs/layouts/shortcodes/tags/feature/t-server.html create mode 100644 docs/layouts/shortcodes/tags/wrap.html create mode 100644 docs/static/fonts/sf-mono/SFMonoSemibold.woff2 create mode 100644 docs/static/icons/t-server.svg diff --git a/docs/assets/scss/_styles_project.scss b/docs/assets/scss/_styles_project.scss index 4ab64ea4d640..aae48e96e73c 100644 --- a/docs/assets/scss/_styles_project.scss +++ b/docs/assets/scss/_styles_project.scss @@ -31,6 +31,13 @@ @import "./_yb_tags.scss"; @import "./_yb_container.scss"; +@font-face { + font-family: 'SFMonoSemibold'; + src: url('/fonts/sf-mono/SFMonoSemibold.woff2') format('woff2'); + font-weight: normal; + font-style: normal; +} + html { padding: 0 !important; } diff --git a/docs/assets/scss/_yb_headings.scss b/docs/assets/scss/_yb_headings.scss index 2994deb0351c..8ebed43f627f 100644 --- a/docs/assets/scss/_yb_headings.scss +++ b/docs/assets/scss/_yb_headings.scss @@ -232,6 +232,53 @@ } } +body.configuration { + .td-content { + h2, + .h2, + h3, + .h3 { + margin-top: 96px; + } + + h5:not(:first-child) { + font-family: 'SFMonoSemibold'; + font-size: 16px; + font-weight: normal; + line-height: 24px; + position: relative; + margin-top: 64px; + + &::after { + position: absolute; + top: calc(7.5rem + 20px - 32px); + content: ""; + display: block; + width: 100%; + height: 1px; + background: #D7DEE4; + } + } + + h5.first-h5, + header + h5:not(:first-child), + .main-heading-with-version + h5:not(:first-child), + h2 + h5:not(:first-child), + h3 + h5:not(:first-child), + h4 + h5:not(:first-child) { + margin-top: 32px; + + &::after { + display: none; + } + } + + h5.first-h5 { + margin-top: 96px; + } + } +} + @media (hover: none) and (pointer: coarse), (max-width: 576px) { .td-heading-self-link { visibility: hidden; diff --git a/docs/assets/scss/_yb_tags.scss b/docs/assets/scss/_yb_tags.scss index 2d45287736fe..9507788c9c86 100644 --- a/docs/assets/scss/_yb_tags.scss +++ b/docs/assets/scss/_yb_tags.scss @@ -36,6 +36,28 @@ color: #097345; } + &.restart-needed { + background: #E8E9FE; + color: 
#4F4FA4; + } + + &.t-server { + background: #E5EDFF; + color: #2B59C3; + + &::before { + content: ""; + background: url(/icons/t-server.svg) center no-repeat; + width: 18px; + height: 18px; + margin-right: 4px; + } + } + + &.deprecated { + background: #FEEDED; + color: #DA1515; + } &.ysql { background: #CBCCFB; @@ -51,6 +73,13 @@ text-decoration: none !important; } } + .tags-row{ + margin-bottom: 16px; + display: flex; + flex-flow: wrap; + gap: 6px; + font-size: 0; + } } .tag.release { @@ -112,7 +141,7 @@ white-space: normal; text-transform: none; - &:before { + &::before { position: absolute; left: -13px; content: ""; diff --git a/docs/layouts/shortcodes/tags/feature/deprecated.html b/docs/layouts/shortcodes/tags/feature/deprecated.html new file mode 100644 index 000000000000..97c20dcfba64 --- /dev/null +++ b/docs/layouts/shortcodes/tags/feature/deprecated.html @@ -0,0 +1 @@ +Deprecated diff --git a/docs/layouts/shortcodes/tags/feature/restart-needed.html b/docs/layouts/shortcodes/tags/feature/restart-needed.html new file mode 100644 index 000000000000..79fc999ab75f --- /dev/null +++ b/docs/layouts/shortcodes/tags/feature/restart-needed.html @@ -0,0 +1 @@ +Restart Needed diff --git a/docs/layouts/shortcodes/tags/feature/t-server.html b/docs/layouts/shortcodes/tags/feature/t-server.html new file mode 100644 index 000000000000..09c7d3f7b74e --- /dev/null +++ b/docs/layouts/shortcodes/tags/feature/t-server.html @@ -0,0 +1 @@ +T-Server / Master Match diff --git a/docs/layouts/shortcodes/tags/wrap.html b/docs/layouts/shortcodes/tags/wrap.html new file mode 100644 index 000000000000..70d511aa0e63 --- /dev/null +++ b/docs/layouts/shortcodes/tags/wrap.html @@ -0,0 +1 @@ +
{{ .Inner }}
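The tag shortcodes above only define the badge markup; no content page in this patch invokes them yet. As a minimal usage sketch (an assumed example for illustration, not taken from this diff), a flag entry on a configuration reference page could group the feature tags with the wrap shortcode:

    {{< tags/wrap >}}{{< tags/feature/t-server >}}{{< tags/feature/restart-needed >}}{{< /tags/wrap >}}

The wrap shortcode renders its inner content inside a div with the tags-row class, so the flex layout and spacing added in _yb_tags.scss apply to whichever feature tags are placed inside it.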
diff --git a/docs/src/index.js b/docs/src/index.js index 4ddbfc2a366a..b89f2ef5c553 100644 --- a/docs/src/index.js +++ b/docs/src/index.js @@ -348,6 +348,28 @@ $(document).ready(() => { }); })(); + /** + * Check immediate heading before H5 on particular pages to apply divider on them. + * Like `/preview/reference/configuration/yb-tserver/`. + */ + (() => { + if (document.body.classList.contains('configuration')) { + const headings = document.querySelectorAll('.configuration h2, .configuration h3, .configuration h4, .configuration h5'); + let checkH5 = false; + + headings.forEach(heading => { + const tag = heading.tagName; + + if (tag === 'H2' || tag === 'H3' || tag === 'H4') { + checkH5 = true; + } else if (tag === 'H5' && checkH5) { + heading.classList.add('first-h5'); + checkH5 = false; + } + }); + } + })(); + /** * Add Image Popup. */ diff --git a/docs/static/fonts/sf-mono/SFMonoSemibold.woff2 b/docs/static/fonts/sf-mono/SFMonoSemibold.woff2 new file mode 100644 index 0000000000000000000000000000000000000000..ef083fc716bacd3a27ade7e8a1211e4370d16bc7 GIT binary patch literal 44824 zcmV)2K+L~)Pew8T0RR910IwJT4FCWD0)6xV0Is|M0RR9100000000000000000000 z0000#Mn+Uk92yt~U;v6(5eN#1fNX}1Pz$XL00A}vBmQ1_yzM4_oT&kp{}f zwyTODSrL_PLm)DFb6!nh4xq=jhm_8PB%C}mu|~Xo6u2Ec{O2yi$o~KT|NsBoWirOP z4fxvzpr~n8_5Xs3!3-4c5eQU7ctgO0op9mS&_=DO5k$mN*V(q-Q})@G?&2u#rk*|e zHP#j5r@7~CXGg0_6CH=3@QjaaPuwsXVc4dRRdoHOPO7Gp$p?mbi~5#BSb?=_S68lD z(vm)k{m2&+PbLy4N!64azKQW9WzaxUAt?zGBqfQwXD} zBq)f-`-?>k@teLW1D=tI*B{JjXm>yQfxt5xnH-z?wq2|C&dX>L;$9?=nZ@(}t8LEl!egQreBsodrt&JzU3Jx1prw0xOeI;!GB9=-l!z81 zvhC{rcyay!cKdhkxU+89$w};@WHiwF#>K7_CGum}MoU)4JY(!sLUP8L`#5BbU5P?N zu+7dXXafc&M$g8=XjH5KMX?c7M8U$yDPYG+#ah&x3pcl^b93R$g>LVRweLIKe*5Bi zzWrGD%^&nr0wF@E-tcN38i_+BLH4zfB#Sk%5;pGXHWo{b+E6QsR)G-`0xNhl(e{K* z@jj1(|L-q$&inT@QO#sYprJ-l#nzg&{4nyb8_*bO5gMMPSr!*fnT*b1E@TO|wH**@b*(P^tvo4U zHL3LuswK(p0>!A59M}`&^5fqE{oq=Aq*|ikC6Y6t(>m!?x1ET;`H-6lp$e&p&z_DZ z^WE`cyog=voA}%=`-^B#p^^lv==3%)r|zIl5E>$^z7XyiuPqN5b7*oO0RgeY(h!J& z3;+-qxR&VSnWxZYqjfY~{(7 z9u>tirU3hW6!?6mtI{qsVqh&=&+HjEJ9EyZYop6{v#9!4RX0`DC83tYpcXJlj1yvZ zKyZ7QHEJZ77O>8kmjh#&^RwrCWNqq#=)!2Dtodcwb7^^4Bw_n|Nx~jYKvG1>}pX&Pk|JGjZ%qFb% zCo01IxI{P-Hcs3>q1_Np7&}R6e*uL7lq5`T`GqG{uYf9mF5eZLsgd;(RgY)*ucov6 z=xc2*gB59)03W&(^i+asmv7t}<)RRUlHQL*`&4uTBJ|Wge40W94-lJzE*TF5=KnYK zla`s$+uf83184xhnynDZqbZK0XZ<4CX{Fey&qgDqvprz}EP#ZA0FGnNntfS-p!qjX z8w03Fnz_xU!+no2^}i=s^ZBe3I`{&pn)BbsXjL~&gQG((GTo8!ZdD*mo zY|O6Rx8C`$twn-p5~7em1oppd`#-!NRR$m0M;tk zAK-gP-h!HM9%E>bz7~ii0J)$M%YPjNIynHuDgni($VJyoqFcH||;dRM!9gql3#egk8BGaG6m-wp`raoLC)xuB0t zruxMO$DMb}-4ZNqu0FV_1_vMqk+_7$FeL0m@W|M>GJx7ida4@VO0C&O)xm)BBSTO% zQMFdZzIFGQLQ{G$b+rZ??HJV0>en|Cb0*MwzAdK?Ny_|tzm@-A*hE`i&->)9>jRxN zx!W@KZH}Gf|8j_}%waab;WZ-w4yWi;p)!>gg%^l9JRJPgX`gmzlh&j{w>A4O0AS{o zlt_TMNQ8s~%miTi!ukOK08IU#vIR``6L$a;W8+@{V`ty<0Ia_NfN(PQ!GhtgdUB#~ zbp*h@;-yB@RmXx(-p9&GRaYH=c(D!QrBo{q^~!4CUG6WI)x73vy`kOh8IGs(AK7(p?dASs&17ceX*6p1BL znOvb%sWn=i-e5GDEmoV|;Y^I|*mK~>i8B{iuH3l8@!-jeHy^(I_zMsyNH9Et5TS^| zgd>R%DN3{$u>g1|#l}^qS*J5jJF7>xUVR4i8#Ls+b1t~(va7DRX4r4WjhS%0TZf4( zUShXTrc+YUsvPN(sy<~}t0ZYk`jJ7EQPrav4Sz<8WZBt*XHswN1puP$YW{K$t01?jsBX7;6O4khMJknGQEhz}U z5*zYHMGRMubaux+7LqQ35LFf~CnN|E@=lA!nU`IKRuws~N|lpPr_U*ITV+>-2m&iS zLY{M0ImM5PIv}_OP$(Xj3tP%>V=;xg_9R5ht0Z0a@}4Uuf`o)~bzWsssLMw#Ps@(JwLvT|#ynz8XIUM0_p)_}zxR6a`1=WUGfi?1nZ zI4sAloS?WoHt4o+qO6&~v|!~fO@489mGG3wqlrfTS>r_|Oke1tG<{_(SHFf027wLEy~Lk2aPfu^vhcJTY1-b?UP-Y{t48ws2(I=CChs=yWrvhQmnHQa z2F4o`b2JgG^w--aZE9PzsAZI*xHP{z{uc!NCH~ z{pSX38_(LdW$&56r)`3(P#pZInpWHIj$(BOO%O6~+7FAz 
[GIT binary patch payload for docs/static/fonts/sf-mono/SFMonoSemibold.woff2 (literal 44824) omitted: base85-encoded font data, not human-readable]
zcs3;#_?&~-?>pD``&8?a0o6tkvDix06#~5<+}kYmS0Vn|{+amu9m)Jr4C5T|H9yhF zfQr=Hg}zzG7=`v!p~~@6DP{1Lm*=Vde`5;Lnq$~N@wqfc>VxDgSG8b-YYgtN*8lpkcZr(U8U~zn zA%H}cz-hb8VL-f|bRm;gV#58afwy7mywE8<`c@jXzKN~eFwjU7moa1#`SQ4Gdv&8Z4+ZgTMpc`Dt0FuBsuP{clb7j*ACbj=}^-ub~hh=-Gqe0nkN zV<|`q^qVXzYUn!#X(zRJIEkwhX#p!02|R!jmJ?#jA_Q?4Q^)jqL-h}7E^snNYK!^q z*VLyNV@cf~(^QXSa?Q7cm*TtFZRKgl3kAO995ex^oef~Im|2kCq9mWFOw6>IMVl9-vUIAN zdVpbAdj?6vWHk7>N)8YOfw+XeKuMq^w0x7;bF}v6RntVaz;*{+5wvvj8{BD(43Q1B z{(S{84azX>PHn+@_?Qvrtg1qP`~7NHw*1^2V8`6Fxbdac4RbOrqfg_>MkL_B_LT>N zK<@`8`kH4n!_VM(EdmH0>Eol~!2=N4Y`ur9-;(-&k@)bjr9Ul+8)CVs zRev8BX*ok&Ejy*VX{-=L6wr+6(2a+xDkF9KHdGMRsc&m{$@9F0tv6HqXkKP_N12Z} zXTy=i_?Bz$bY$a3yp`G0jAFJY%$?_|4JleWY7edE*2l`pY(95%I-o>wyzBRjZtU;& zm%n=vY&at3G&NIwxpE-O%_=#~C5Pi}&*@WAdz(xww;JgEO!9{(HEi2-Q zrq;ZG#?ZO0F6L=jb~Nc$M>E&tk|;UWvPXtJ?O=qP{?M^x(QczH`#TjVrpY~MSM*Oa z810&E%-?*82sze>n4$@Ar;9^&`(+vA14YPHsF(p8mU0=DV69s1NYsuatVe7XJGBH& zB2%R?VL57axckLgpS^vRYRtAy`in{GrR=tc-h1cFd^=Cupp<0lvvio9)5>RZdlz3bX{;dlh|OBwG8O~0+D7aPp$63+4+^75 zruY=0Fhf`D4@ztC`3eiH886a-v|C?&Ejr5y9*YNV_dnM*?YW4B|Gs{+SH qV=YT3KqpfvF@wLid78^qq?z(a3!khT%TT9zqGt;__zV~b=zjot + + \ No newline at end of file From 9ff8aa4489c122063cd51780d5b09babc7f03e31 Mon Sep 17 00:00:00 2001 From: Deepti-yb Date: Thu, 8 May 2025 10:15:23 +0000 Subject: [PATCH 046/146] [PLAT-17555][YBA CLI]Make storage-config as optional so that xcluster can be created without bootstrap Summary: Some customers would want to skip full copy of tables while creating/editing replication. Storage configurations are only required when tables need full copy. Setting the flag as optional and introducing `skip-full-copy-tables` to fix the same Test Plan: Create 2 universes A (source) and B (target) and create tables with the same name in both Add entries in the table in universe A Create xCluster between A and B with `--skip-full-copy-tables` enabled Add more entries in the table in universe A Verify that table in universe B contains only the entries made in table from A after replication is enabled Reviewers: hzare Reviewed By: hzare Differential Revision: https://phorge.dev.yugabyte.com/D43841 --- .../yba-cli/cmd/xcluster/create_xcluster.go | 217 ++++++++++-------- .../yba-cli/cmd/xcluster/update_xcluster.go | 198 ++++++++-------- managed/yba-cli/docs/yba_xcluster_create.md | 11 +- managed/yba-cli/docs/yba_xcluster_update.md | 9 +- 4 files changed, 239 insertions(+), 196 deletions(-) diff --git a/managed/yba-cli/cmd/xcluster/create_xcluster.go b/managed/yba-cli/cmd/xcluster/create_xcluster.go index 0f0d5a27ee3d..328856a54930 100644 --- a/managed/yba-cli/cmd/xcluster/create_xcluster.go +++ b/managed/yba-cli/cmd/xcluster/create_xcluster.go @@ -45,15 +45,20 @@ var createXClusterCmd = &cobra.Command{ ) } + skipBootstrap, err := cmd.Flags().GetBool("skip-full-copy-tables") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) + } + storageConfigNameFlag, err := cmd.Flags().GetString("storage-config-name") if err != nil { logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) } - if len(strings.TrimSpace(storageConfigNameFlag)) == 0 { + if len(strings.TrimSpace(storageConfigNameFlag)) == 0 && !skipBootstrap { cmd.Help() logrus.Fatalln( formatter.Colorize( - "No storage config name found to take a backup\n", + "No storage config name found to take a backup for replication\n", formatter.RedColor, ), ) @@ -127,79 +132,6 @@ var createXClusterCmd = &cobra.Command{ logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) } - 
tableNeedBootstrapUUIDsString, err := cmd.Flags().GetString("tables-need-full-copy-uuids") - if err != nil { - logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) - } - - parallelism, err := cmd.Flags().GetInt("parallelism") - if err != nil { - logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) - } - - storageConfigName, err := cmd.Flags().GetString("storage-config-name") - if err != nil { - logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) - } - - allowBootstrap, err := cmd.Flags().GetBool("allow-bootstrap") - if err != nil { - logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) - } - - configType, err := cmd.Flags().GetString("config-type") - if err != nil { - logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) - } - configType = strings.ToLower(configType) - switch configType { - case "basic": - configType = util.BasicXClusterConfigType - case "txn": - configType = util.TxnXClusterConfigType - case "db": - configType = util.DBXClusterConfigType - default: - configType = util.BasicXClusterConfigType - } - - storageConfigListRequest := authAPI.GetListOfCustomerConfig() - rStorageConfigList, response, err := storageConfigListRequest.Execute() - if err != nil { - errMessage := util.ErrorFromHTTPResponse( - response, err, "Backup", "Create - Get Storage Configuration") - logrus.Fatalf(formatter.Colorize(errMessage.Error()+"\n", formatter.RedColor)) - } - - storageConfigs := make([]ybaclient.CustomerConfigUI, 0) - for _, s := range rStorageConfigList { - if strings.Compare(s.GetType(), util.StorageCustomerConfigType) == 0 { - storageConfigs = append(storageConfigs, s) - } - } - storageConfigsName := make([]ybaclient.CustomerConfigUI, 0) - for _, s := range storageConfigs { - if strings.Compare(s.GetConfigName(), storageConfigName) == 0 { - storageConfigsName = append(storageConfigsName, s) - } - } - rStorageConfigList = storageConfigsName - - if len(rStorageConfigList) < 1 { - logrus.Fatalf( - formatter.Colorize( - fmt.Sprintf("No storage configurations with name: %s found\n", - storageConfigName), - formatter.RedColor, - )) - return - } - - var storageUUID string - if len(rStorageConfigList) > 0 { - storageUUID = rStorageConfigList[0].GetConfigUUID() - } - tableUUIDsString = strings.TrimSpace(tableUUIDsString) tableUUIDs := make([]string, 0) if len(tableUUIDsString) != 0 { @@ -238,13 +170,30 @@ var createXClusterCmd = &cobra.Command{ } } - tableNeedBootstrapUUIDsString = strings.TrimSpace(tableNeedBootstrapUUIDsString) - tableNeedBootstrapUUIDs := make([]string, 0) - if len(tableNeedBootstrapUUIDsString) != 0 { - tableNeedBootstrapUUIDs = strings.Split(tableNeedBootstrapUUIDsString, ",") - } else { - allowBootstrap = true - tableNeedBootstrapUUIDs = tableUUIDs + parallelism, err := cmd.Flags().GetInt("parallelism") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) + } + + configType, err := cmd.Flags().GetString("config-type") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) + } + configType = strings.ToLower(configType) + switch configType { + case "basic": + configType = util.BasicXClusterConfigType + case "txn": + configType = util.TxnXClusterConfigType + case "db": + configType = util.DBXClusterConfigType + default: + configType = util.BasicXClusterConfigType + } + + skipBootstrap, err := cmd.Flags().GetBool("skip-full-copy-tables") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", 
formatter.RedColor)) } req := ybaclient.XClusterConfigCreateFormData{ @@ -254,14 +203,80 @@ var createXClusterCmd = &cobra.Command{ SourceUniverseUUID: sourceUniverseUUID, TargetUniverseUUID: targetUniverseUUID, ConfigType: util.GetStringPointer(configType), - BootstrapParams: &ybaclient.BootstrapParams{ + } + + if !skipBootstrap { + + storageConfigName, err := cmd.Flags().GetString("storage-config-name") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) + } + + allowBootstrap, err := cmd.Flags().GetBool("allow-full-copy-tables") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) + } + + tableNeedBootstrapUUIDsString, err := cmd.Flags(). + GetString("tables-need-full-copy-uuids") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) + } + + storageConfigListRequest := authAPI.GetListOfCustomerConfig() + rStorageConfigList, response, err := storageConfigListRequest.Execute() + if err != nil { + errMessage := util.ErrorFromHTTPResponse( + response, err, "Backup", "Create - Get Storage Configuration") + logrus.Fatalf(formatter.Colorize(errMessage.Error()+"\n", formatter.RedColor)) + } + + storageConfigs := make([]ybaclient.CustomerConfigUI, 0) + for _, s := range rStorageConfigList { + if strings.Compare(s.GetType(), util.StorageCustomerConfigType) == 0 { + storageConfigs = append(storageConfigs, s) + } + } + storageConfigsName := make([]ybaclient.CustomerConfigUI, 0) + for _, s := range storageConfigs { + if strings.Compare(s.GetConfigName(), storageConfigName) == 0 { + storageConfigsName = append(storageConfigsName, s) + } + } + rStorageConfigList = storageConfigsName + + if len(rStorageConfigList) < 1 { + logrus.Fatalf( + formatter.Colorize( + fmt.Sprintf("No storage configurations with name: %s found\n", + storageConfigName), + formatter.RedColor, + )) + return + } + + var storageUUID string + if len(rStorageConfigList) > 0 { + storageUUID = rStorageConfigList[0].GetConfigUUID() + } + + tableNeedBootstrapUUIDsString = strings.TrimSpace(tableNeedBootstrapUUIDsString) + tableNeedBootstrapUUIDs := make([]string, 0) + if len(tableNeedBootstrapUUIDsString) != 0 { + tableNeedBootstrapUUIDs = strings.Split(tableNeedBootstrapUUIDsString, ",") + } else { + allowBootstrap = true + tableNeedBootstrapUUIDs = tableUUIDs + } + + req.BootstrapParams = &ybaclient.BootstrapParams{ BackupRequestParams: ybaclient.BootstrapBackupParams{ StorageConfigUUID: storageUUID, Parallelism: util.GetInt32Pointer(int32(parallelism)), }, Tables: util.StringSliceFromString(tableNeedBootstrapUUIDs), AllowBootstrap: util.GetBoolPointer(allowBootstrap), - }, + } } rTask, response, err := authAPI.CreateXClusterConfig(). @@ -358,6 +373,10 @@ func init() { formatter.Colorize("Required when table-uuids is not specified", formatter.GreenColor))) + createXClusterCmd.Flags().String("config-type", "basic", + "[Optional] Scope of the xcluster config to create. "+ + "Allowed values: basic, txn, db.") + createXClusterCmd.Flags().String("table-uuids", "", "[Optional] Comma separated list of source universe table IDs/UUIDs. "+ "All tables must be of the same type. "+ @@ -365,29 +384,35 @@ func init() { " to check the list of tables that can be added for asynchronous replication. If left empty, "+ "all tables of specified table-type will be added for asynchronous replication.") + createXClusterCmd.Flags().Bool("skip-full-copy-tables", false, + "[Optional] Skip taking a backup for replication. 
(default false)") + createXClusterCmd.Flags().String("storage-config-name", "", - "[Required] Storage config to be used for taking the backup for replication. ") - createXClusterCmd.MarkFlagRequired("storage-config-name") + fmt.Sprintf( + "[Optional] Storage config to be used for taking the backup for replication. %s", + formatter.Colorize( + "Required when tables require full copy. Ignored when skip-full-copy-tables is set to true.", + formatter.GreenColor, + ), + )) createXClusterCmd.Flags().String("tables-need-full-copy-uuids", "", "[Optional] Comma separated list of source universe table IDs/UUIDs that are allowed to be "+ "full-copied to the target universe. Must be a subset of table-uuids. If left empty,"+ - " allow-bootstrap is set to true so full-copy can be done for all the tables passed "+ + " allow-full-copy-tables is set to true so full-copy can be done for all the tables passed "+ "in to be in replication. Run \"yba xcluster needs-full-copy-tables --source-universe-name"+ " --target-universe-name --table-uuids"+ - " \" to check the list of tables that need bootstrapping.") - - createXClusterCmd.Flags().Int("parallelism", 8, - "[Optional] Number of concurrent commands to run on nodes over SSH via \"yb_backup\" script.") + " \" to check the list of tables that need full copy. "+ + "Ignored when skip-full-copy-tables is set to true.") - createXClusterCmd.Flags().Bool("allow-bootstrap", false, + createXClusterCmd.Flags().Bool("allow-full-copy-tables", false, "[Optional] Allow full copy on all the tables being added to the replication. "+ "The same as passing the same set passed to table-uuids to "+ - "tables-need-full-copy-uuids. (default false)") + "tables-need-full-copy-uuids. Ignored when skip-full-copy-tables is set to true. (default false)") - createXClusterCmd.Flags().String("config-type", "basic", - "[Optional] Scope of the xcluster config to create. "+ - "Allowed values: basic, txn, db.") + createXClusterCmd.Flags().Int("parallelism", 8, + "[Optional] Number of concurrent commands to run on nodes over SSH via \"yb_backup\" script. "+ + "Ignored when skip-full-copy-tables is set to true. (default 8)") createXClusterCmd.Flags().Bool("dry-run", false, "[Optional] Run the pre-checks without actually running the subtasks. 
(default false)") diff --git a/managed/yba-cli/cmd/xcluster/update_xcluster.go b/managed/yba-cli/cmd/xcluster/update_xcluster.go index 51a50a4e7ba2..374f8c200d9e 100644 --- a/managed/yba-cli/cmd/xcluster/update_xcluster.go +++ b/managed/yba-cli/cmd/xcluster/update_xcluster.go @@ -116,11 +116,6 @@ var updateXClusterCmd = &cobra.Command{ logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) } - tableNeedBootstrapUUIDsString, err := cmd.Flags().GetString("tables-need-full-copy-uuids") - if err != nil { - logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) - } - formTableUUIDs := make([]string, 0) removeTableUUIDsString = strings.TrimSpace(removeTableUUIDsString) if len(removeTableUUIDsString) != 0 { @@ -141,6 +136,11 @@ var updateXClusterCmd = &cobra.Command{ allowBoostrap := false + skipBootstrap, err := cmd.Flags().GetBool("skip-full-copy-tables") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) + } + addTableUUIDsString = strings.TrimSpace(addTableUUIDsString) addTableUUIDs := make([]string, 0) if len(addTableUUIDsString) != 0 { @@ -150,99 +150,109 @@ var updateXClusterCmd = &cobra.Command{ logrus.Debug("Adding tables to XCluster") req.SetTables(formTableUUIDs) - tableNeedBootstrapUUIDsString = strings.TrimSpace(tableNeedBootstrapUUIDsString) - tableNeedBootstrapUUIDs := make([]string, 0) - if len(tableNeedBootstrapUUIDsString) != 0 { - tableNeedBootstrapUUIDs = strings.Split(tableNeedBootstrapUUIDsString, ",") - } else { - tableNeedBootstrapUUIDs = addTableUUIDs - allowBoostrap = true - } + if !skipBootstrap { + tableNeedBootstrapUUIDsString, err := cmd.Flags(). + GetString("tables-need-full-copy-uuids") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) + } - if len(tableNeedBootstrapUUIDs) > 0 || allowBoostrap { - logrus.Debug("Updating tables needing bootstrap\n") - bootstrapParams := ybaclient.BootstrapParams{ - Tables: util.StringSliceFromString(tableNeedBootstrapUUIDs), + tableNeedBootstrapUUIDsString = strings.TrimSpace(tableNeedBootstrapUUIDsString) + tableNeedBootstrapUUIDs := make([]string, 0) + if len(tableNeedBootstrapUUIDsString) != 0 { + tableNeedBootstrapUUIDs = strings.Split(tableNeedBootstrapUUIDsString, ",") + } else { + tableNeedBootstrapUUIDs = addTableUUIDs + allowBoostrap = true } - if allowBoostrap { - logrus.Debug("Updating allow bootstrap to true\n") - bootstrapParams.SetAllowBootstrap(allowBoostrap) - } else if cmd.Flags().Changed("allow-bootstrap") { - allowBootstrap, err := cmd.Flags().GetBool("allow-bootstrap") + if len(tableNeedBootstrapUUIDs) > 0 || allowBoostrap { + logrus.Debug("Updating tables needing bootstrap\n") + bootstrapParams := ybaclient.BootstrapParams{ + Tables: util.StringSliceFromString(tableNeedBootstrapUUIDs), + } + + if allowBoostrap { + logrus.Debug("Updating allow bootstrap to true\n") + bootstrapParams.SetAllowBootstrap(allowBoostrap) + } else if cmd.Flags().Changed("allow-full-copy-tables") { + allowBootstrap, err := cmd.Flags().GetBool("allow-full-copy-tables") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) + } + logrus.Debug("Updating allow bootstrap\n") + bootstrapParams.SetAllowBootstrap(allowBootstrap) + } + + backupBootstrapParams := ybaclient.BootstrapBackupParams{} + + parallelism, err := cmd.Flags().GetInt("parallelism") if err != nil { logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) } - logrus.Debug("Updating allow bootstrap\n") - 
bootstrapParams.SetAllowBootstrap(allowBootstrap) - } + logrus.Debug("Updating parallelism\n") + backupBootstrapParams.SetParallelism(int32(parallelism)) - backupBootstrapParams := ybaclient.BootstrapBackupParams{} - - parallelism, err := cmd.Flags().GetInt("parallelism") - if err != nil { - logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) - } - logrus.Debug("Updating parallelism\n") - backupBootstrapParams.SetParallelism(int32(parallelism)) + storageConfigName, err := cmd.Flags().GetString("storage-config-name") + if err != nil { + logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) + } - storageConfigName, err := cmd.Flags().GetString("storage-config-name") - if err != nil { - logrus.Fatalf(formatter.Colorize(err.Error()+"\n", formatter.RedColor)) - } + if len(strings.TrimSpace(storageConfigName)) == 0 { + logrus.Fatalf( + formatter.Colorize( + "Storage configuration must be provided since a tables need bootstrap\n", + formatter.RedColor, + ), + ) + } - if len(strings.TrimSpace(storageConfigName)) == 0 { - logrus.Fatalf( - formatter.Colorize( - "Storage configuration must be provided since a tables need bootstrap\n", - formatter.RedColor, - ), - ) - } + storageConfigListRequest := authAPI.GetListOfCustomerConfig() + rStorageConfigList, response, err := storageConfigListRequest.Execute() + if err != nil { + errMessage := util.ErrorFromHTTPResponse( + response, err, "Backup", "Update - Get Storage Configuration") + logrus.Fatalf( + formatter.Colorize(errMessage.Error()+"\n", formatter.RedColor), + ) + } - storageConfigListRequest := authAPI.GetListOfCustomerConfig() - rStorageConfigList, response, err := storageConfigListRequest.Execute() - if err != nil { - errMessage := util.ErrorFromHTTPResponse( - response, err, "Backup", "Update - Get Storage Configuration") - logrus.Fatalf(formatter.Colorize(errMessage.Error()+"\n", formatter.RedColor)) - } + storageConfigs := make([]ybaclient.CustomerConfigUI, 0) + for _, s := range rStorageConfigList { + if strings.Compare(s.GetType(), util.StorageCustomerConfigType) == 0 { + storageConfigs = append(storageConfigs, s) + } + } + storageConfigsName := make([]ybaclient.CustomerConfigUI, 0) + for _, s := range storageConfigs { + if strings.Compare(s.GetConfigName(), storageConfigName) == 0 { + storageConfigsName = append(storageConfigsName, s) + } + } + rStorageConfigList = storageConfigsName + + if len(rStorageConfigList) < 1 { + logrus.Fatalf( + formatter.Colorize( + fmt.Sprintf("No storage configurations with name: %s found\n", + storageConfigName), + formatter.RedColor, + )) + return + } - storageConfigs := make([]ybaclient.CustomerConfigUI, 0) - for _, s := range rStorageConfigList { - if strings.Compare(s.GetType(), util.StorageCustomerConfigType) == 0 { - storageConfigs = append(storageConfigs, s) + var storageUUID string + if len(rStorageConfigList) > 0 { + storageUUID = rStorageConfigList[0].GetConfigUUID() } - } - storageConfigsName := make([]ybaclient.CustomerConfigUI, 0) - for _, s := range storageConfigs { - if strings.Compare(s.GetConfigName(), storageConfigName) == 0 { - storageConfigsName = append(storageConfigsName, s) + if len(storageUUID) != 0 { + logrus.Debugf("Adding storage config for bootstrap\n") + backupBootstrapParams.SetStorageConfigUUID(storageUUID) } - } - rStorageConfigList = storageConfigsName - - if len(rStorageConfigList) < 1 { - logrus.Fatalf( - formatter.Colorize( - fmt.Sprintf("No storage configurations with name: %s found\n", - storageConfigName), - formatter.RedColor, - )) 
- return - } - var storageUUID string - if len(rStorageConfigList) > 0 { - storageUUID = rStorageConfigList[0].GetConfigUUID() - } - if len(storageUUID) != 0 { - logrus.Debugf("Adding storage config for bootstrap\n") - backupBootstrapParams.SetStorageConfigUUID(storageUUID) + bootstrapParams.SetBackupRequestParams(backupBootstrapParams) + req.SetBootstrapParams(bootstrapParams) } - - bootstrapParams.SetBackupRequestParams(backupBootstrapParams) - req.SetBootstrapParams(bootstrapParams) } } @@ -338,30 +348,36 @@ func init() { "Run \"yba xcluster describe --uuid \""+ " to check the list of tables that can be removed from asynchronous replication.") + updateXClusterCmd.Flags().Bool("skip-full-copy-tables", false, + "[Optional] Skip taking a backup for replication. (default false)") + updateXClusterCmd.Flags().String("storage-config-name", "", fmt.Sprintf( "[Optional] Storage config to be used for taking the backup for replication. %s", - formatter.Colorize("Required when tables require bootstrapping.", + formatter.Colorize("Required when tables require bootstrapping. "+ + "Ignored when skip-full-copy-tables is set to true.", formatter.GreenColor))) updateXClusterCmd.Flags().String("tables-need-full-copy-uuids", "", "[Optional] Comma separated list of source universe table IDs/UUIDs "+ "that are allowed to be full-copied"+ " to the target universe. Must be a subset of table-uuids. "+ - "If left empty, allow-bootstrap is set to true so full-copy can be done for "+ + "If left empty, allow-full-copy-tables is set to true so full-copy can be done for "+ "all the tables passed in to be in replication. Run \"yba xcluster needs-full-copy-tables"+ " --source-universe-name --target-universe-name "+ "--table-uuids \" to check the list of tables that need "+ - "bootstrapping.") - - updateXClusterCmd.Flags().Int("parallelism", 8, - "[Optional] Number of concurrent commands to run on nodes over SSH via \"yb_backup\" script.") + "bootstrapping. Ignored when skip-full-copy-tables is set to true.") - updateXClusterCmd.Flags().Bool("allow-bootstrap", false, + updateXClusterCmd.Flags().Bool("allow-full-copy-tables", false, "[Optional] Allow full copy on all the tables being added to the replication. "+ - "The same as passing the same set passed to table-uuids to tables-need-full-copy-uuids."+ + "The same as passing the same set passed to table-uuids to tables-need-full-copy-uuids. "+ + "Ignored when skip-full-copy-tables is set to true."+ " (default false)") + updateXClusterCmd.Flags().Int("parallelism", 8, + "[Optional] Number of concurrent commands to run on nodes over SSH via \"yb_backup\" script. "+ + "Ignored when skip-full-copy-tables is set to true.") + updateXClusterCmd.Flags().Bool("dry-run", false, "[Optional] Run the pre-checks without actually running the subtasks. (default false)") diff --git a/managed/yba-cli/docs/yba_xcluster_create.md b/managed/yba-cli/docs/yba_xcluster_create.md index 23f1058e332c..4f3e71564baf 100644 --- a/managed/yba-cli/docs/yba_xcluster_create.md +++ b/managed/yba-cli/docs/yba_xcluster_create.md @@ -27,12 +27,13 @@ yba xcluster create --name \ --source-universe-name string [Required] The name of the source universe for the xcluster config. --target-universe-name string [Required] The name of the target universe for the xcluster config. --table-type string [Optional] Table type. Required when table-uuids is not specified. Allowed values: ysql, ycql - --table-uuids string [Optional] Comma separated list of source universe table IDs/UUIDs. All tables must be of the same type. 
Run "yba universe table list --name --xcluster-supported-only" to check the list of tables that can be added for asynchronous replication. If left empty, all tables of specified table-type will be added for asynchronous replication. - --storage-config-name string [Required] Storage config to be used for taking the backup for replication. - --tables-need-full-copy-uuids string [Optional] Comma separated list of source universe table IDs/UUIDs that are allowed to be full-copied to the target universe. Must be a subset of table-uuids. If left empty, allow-bootstrap is set to true so full-copy can be done for all the tables passed in to be in replication. Run "yba xcluster needs-full-copy-tables --source-universe-name --target-universe-name --table-uuids " to check the list of tables that need bootstrapping. - --parallelism int [Optional] Number of concurrent commands to run on nodes over SSH via "yb_backup" script. (default 8) - --allow-bootstrap [Optional] Allow full copy on all the tables being added to the replication. The same as passing the same set passed to table-uuids to tables-need-full-copy-uuids. (default false) --config-type string [Optional] Scope of the xcluster config to create. Allowed values: basic, txn, db. (default "basic") + --table-uuids string [Optional] Comma separated list of source universe table IDs/UUIDs. All tables must be of the same type. Run "yba universe table list --name --xcluster-supported-only" to check the list of tables that can be added for asynchronous replication. If left empty, all tables of specified table-type will be added for asynchronous replication. + --skip-full-copy-tables [Optional] Skip taking a backup for replication. (default false) + --storage-config-name string [Optional] Storage config to be used for taking the backup for replication. Required when tables require full copy. Ignored when skip-full-copy-tables is set to true. + --tables-need-full-copy-uuids string [Optional] Comma separated list of source universe table IDs/UUIDs that are allowed to be full-copied to the target universe. Must be a subset of table-uuids. If left empty, allow-full-copy-tables is set to true so full-copy can be done for all the tables passed in to be in replication. Run "yba xcluster needs-full-copy-tables --source-universe-name --target-universe-name --table-uuids " to check the list of tables that need full copy. Ignored when skip-full-copy-tables is set to true. + --allow-full-copy-tables [Optional] Allow full copy on all the tables being added to the replication. The same as passing the same set passed to table-uuids to tables-need-full-copy-uuids. Ignored when skip-full-copy-tables is set to true. (default false) + --parallelism int [Optional] Number of concurrent commands to run on nodes over SSH via "yb_backup" script. Ignored when skip-full-copy-tables is set to true. (default 8) (default 8) --dry-run [Optional] Run the pre-checks without actually running the subtasks. (default false) -h, --help help for create ``` diff --git a/managed/yba-cli/docs/yba_xcluster_update.md b/managed/yba-cli/docs/yba_xcluster_update.md index aed9e78ed389..1b171d0af2cb 100644 --- a/managed/yba-cli/docs/yba_xcluster_update.md +++ b/managed/yba-cli/docs/yba_xcluster_update.md @@ -26,10 +26,11 @@ yba xcluster update --uuid \ --target-role string [Optional] The role that the target universe should have in the xCluster config. Allowed values: active, standby, unrecognized. --add-table-uuids string [Optional] Comma separated list of source universe table IDs/UUIDs. 
All tables must be of the same type. Run "yba universe table list --name --xcluster-supported-only" to check the list of tables that can be added for asynchronous replication. --remove-table-uuids string [Optional] Comma separated list of source universe table IDs/UUIDs. Run "yba xcluster describe --uuid " to check the list of tables that can be removed from asynchronous replication. - --storage-config-name string [Optional] Storage config to be used for taking the backup for replication. Required when tables require bootstrapping. - --tables-need-full-copy-uuids string [Optional] Comma separated list of source universe table IDs/UUIDs that are allowed to be full-copied to the target universe. Must be a subset of table-uuids. If left empty, allow-bootstrap is set to true so full-copy can be done for all the tables passed in to be in replication. Run "yba xcluster needs-full-copy-tables --source-universe-name --target-universe-name --table-uuids " to check the list of tables that need bootstrapping. - --parallelism int [Optional] Number of concurrent commands to run on nodes over SSH via "yb_backup" script. (default 8) - --allow-bootstrap [Optional] Allow full copy on all the tables being added to the replication. The same as passing the same set passed to table-uuids to tables-need-full-copy-uuids. (default false) + --skip-full-copy-tables [Optional] Skip taking a backup for replication. (default false) + --storage-config-name string [Optional] Storage config to be used for taking the backup for replication. Required when tables require bootstrapping. Ignored when skip-full-copy-tables is set to true. + --tables-need-full-copy-uuids string [Optional] Comma separated list of source universe table IDs/UUIDs that are allowed to be full-copied to the target universe. Must be a subset of table-uuids. If left empty, allow-full-copy-tables is set to true so full-copy can be done for all the tables passed in to be in replication. Run "yba xcluster needs-full-copy-tables --source-universe-name --target-universe-name --table-uuids " to check the list of tables that need bootstrapping. Ignored when skip-full-copy-tables is set to true. + --allow-full-copy-tables [Optional] Allow full copy on all the tables being added to the replication. The same as passing the same set passed to table-uuids to tables-need-full-copy-uuids. Ignored when skip-full-copy-tables is set to true. (default false) + --parallelism int [Optional] Number of concurrent commands to run on nodes over SSH via "yb_backup" script. Ignored when skip-full-copy-tables is set to true. (default 8) --dry-run [Optional] Run the pre-checks without actually running the subtasks. (default false) --auto-include-index-tables [Optional] Whether or not YBA should also include all index tables from any provided main tables. (default false) -h, --help help for update From 6af35186607383e17d52f615499cbd444664d83d Mon Sep 17 00:00:00 2001 From: Minghui Yang Date: Thu, 8 May 2025 16:44:50 +0000 Subject: [PATCH 047/146] [#27148] YSQL: Make commit statement a DDL when running YSQL upgrade scripts Summary: Currently, if a DDL statement writes to PG catalog tables that generates invalidation messages, we often will increment the catalog version in `pg_yb_catalog_version` table and insert the captured invalidation messages into `pg_yb_invalidation_messages` table. The new version and associated invalidation messages will be broadcast to all PG backends on all the nodes via tserver master heartbeat mechanism. 
Such a DDL statement is called a `new-version` DDL because it increments the catalog version. Some DDL statements do not increment the catalog version (for example, a simple CREATE TABLE statement); we call these `same-version` DDLs, as opposed to the `new-version` DDLs whose behavior is described above. Such a DDL does write to PG catalog tables and generates invalidation messages, but we assume that those invalidation messages only serve to invalidate negative cache entries, because these DDL statements add new database objects rather than altering or dropping existing ones. YSQL has disabled negative caching for such cases, and therefore those invalidation messages would be a no-op even if we tried to apply them, because the negative cache entries they are meant to remove do not exist. There is a special case in YSQL upgrade mode where we allow DML statements to directly write to PG catalog tables under the GUC `yb_non_ddl_txn_for_sys_tables_allowed=1`. These DML statements also generate invalidation messages, but they do not cause the catalog version to increase. If no DDL statement follows these DML statements, their messages are simply dropped, the same way as for a `same-version` DDL. But if they are followed by a `new-version` DDL statement, this can cause a difference between a full catalog cache refresh and an incremental catalog cache refresh: * A full catalog cache refresh takes care of those dropped invalidation messages because the entire catalog cache is refreshed. * An incremental catalog cache refresh only broadcasts and applies the invalidation messages generated by the DDL statement; the messages generated by the DMLs are ignored, which can cause a stale-cache problem. In addition, YSQL does allow negative caching for some system catalog entries. The assumption is that during normal operation the system portion of the catalog tables does not change, so it is more efficient to allow negative caching for them. During YSQL upgrade, however, that assumption no longer holds: a YSQL migrate script can actually alter those system catalog entries, and therefore we should not ignore invalidation messages generated by DMLs. In fact, a simple CREATE TABLE statement is turned into a `new-version` DDL if the GUC `ysql_upgrade_mode` is set to true, which means we do need to invalidate negative system cache entries during YSQL upgrade. To fix this YSQL upgrade problem in incremental refresh mode, I changed the COMMIT statement to be classified as a DDL statement that increments the catalog version (but is non-breaking). It is assumed that migrate scripts are well-formed so that each block of DML statements is properly terminated by a COMMIT statement. In order to capture all the invalidation messages, I also made a change to use snapshot isolation during YSQL upgrade. This is needed as a simple workaround so that we do not start subtransactions during YSQL upgrade; otherwise the COMMIT would only capture the invalidation messages generated by the last DML statement preceding it. Note that this problem is more general, because it can appear outside the YSQL upgrade context. The fix here will not work universally beyond the YSQL upgrade context, where we assume simple use cases and can use snapshot isolation. We will likely need transactional DDL support, at which point we can come back and fix the more general problem.
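For illustration, here is a minimal sketch of the well-formed migration-block shape this change relies on, together with how the captured messages can be inspected afterwards. The catalog rows touched by the `UPDATE` mirror the new unit tests added later in this patch and are only placeholders for whatever a real migration script writes.

```
SET ysql_upgrade_mode TO true;
SET DEFAULT_TRANSACTION_ISOLATION TO "REPEATABLE READ";

BEGIN;
SET LOCAL yb_non_ddl_txn_for_sys_tables_allowed TO true;
-- DML writes to catalog tables generate invalidation messages but do not,
-- by themselves, bump the catalog version.
UPDATE pg_class SET relam = 2 WHERE oid = 8010;
-- With this change, COMMIT in YSQL upgrade mode acts as a non-breaking
-- new-version DDL, so the messages generated above are captured and broadcast.
COMMIT;

-- Each captured message is 24 raw bytes, i.e. 48 characters in hex encoding.
SELECT current_version, encode(messages, 'hex') AS messages_hex
  FROM pg_yb_invalidation_messages;
```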
At present, if customers need to use `yb_non_ddl_txn_for_sys_tables_allowed=1` to directly write to catalog tables, they need to explicitly increment the catalog version to trigger a full catalog cache refresh in order to fix the stale cache created by the catalog writes. I picked a few representative YSQL migrate scripts and put them into new unit tests. These new unit tests would fail before this diff. Jira: DB-16631 Test Plan: (1) ./yb_build.sh --cxx-test pg_catalog_version-test --gtest_filter PgCatalogVersionTest.InvalMessageYsqlUpgradeCommit* (2) Manually backport the code change to 2024.2 and run ``` ./yb_build.sh --java-test 'org.yb.pgsql.TestYsqlUpgrade' ``` Verify the tests passed and the test logs do not have the error `invalid transaction termination`. Make a change to introduce `COMMIT` inside an anonymous block ``` diff --git a/src/yb/yql/pgwrapper/ysql_migrations/V59.6__21851__yb_backend_heap_snapshot.sql b/src/yb/yql/pgwrapper/ysql_migrations/V59.6__21851__yb_backend_heap_snapshot.sql index 3c90be8758..32602d5a93 100644 --- a/src/yb/yql/pgwrapper/ysql_migrations/V59.6__21851__yb_backend_heap_snapshot.sql +++ b/src/yb/yql/pgwrapper/ysql_migrations/V59.6__21851__yb_backend_heap_snapshot.sql @@ -55,6 +55,7 @@ BEGIN; ) VALUES (0, 0, 0, 1255, 8086, 0, 'p'); END IF; + COMMIT; END $$; COMMIT; ``` Run ``` ./yb_build.sh --java-test 'org.yb.pgsql.TestYsqlUpgrade#migratingIsEquivalentToReinitdb' ``` Verify the test failed and the following error appeared in test logs: ``` java.lang.AssertionError: Process exited with code 1, message: kind == TRANS_STMT_COMMIT && + /* + * A COMMIT statement itself does not ensure a successful + * commit. If the current transaction is already aborted, + * it is equivalent to a ROLLBACK statement. + */ + IsTransactionState() && + YbIsInvalidationMessageEnabled() && + /* + * When we have ddl transaction block support, we do not need + * this special case code for YSQL upgrade. + */ + !*YBCGetGFlags()->TEST_ysql_yb_ddl_transaction_block_enabled) + { + /* + * We assume YSQL upgrade only makes simple use of COMMIT + * so that we can handle invalidation messages correctly. + */ + if (is_top_level) + is_breaking_change = false; + else + elog(ERROR, "improper nesting level of COMMIT in YSQL upgrade"); + } + else + is_ddl = false; + break; + } + default: /* Not a DDL operation. */ is_ddl = false; @@ -7496,7 +7537,9 @@ YbRefreshMatviewInPlace() static bool YbHasDdlMadeChanges() { - return YBCPgHasWriteOperationsInDdlTxnMode() || ddl_transaction_state.force_send_inval_messages; + return YBCPgHasWriteOperationsInDdlTxnMode() || + ddl_transaction_state.original_node_tag == T_TransactionStmt || + ddl_transaction_state.force_send_inval_messages; } void diff --git a/src/yb/yql/pgwrapper/pg_catalog_version-test.cc b/src/yb/yql/pgwrapper/pg_catalog_version-test.cc index 5789e15c732e..b9428b1e1ca4 100644 --- a/src/yb/yql/pgwrapper/pg_catalog_version-test.cc +++ b/src/yb/yql/pgwrapper/pg_catalog_version-test.cc @@ -14,6 +14,7 @@ #include "yb/gutil/strings/util.h" #include "yb/tserver/tserver_service.proxy.h" #include "yb/tserver/tserver_shared_mem.h" +#include "yb/util/env_util.h" #include "yb/util/path_util.h" #include "yb/util/scope_exit.h" #include "yb/util/string_util.h" @@ -579,6 +580,21 @@ class PgCatalogVersionTest : public LibPqTestBase { ASSERT_EQ(count, 2); } + // This function is extracted and adapted from ysql_upgrade.cc. 
+ std::string ReadMigrationFile(const string& migration_file) { + const char* kStaticDataParentDir = "share"; + const char* kMigrationsDir = "ysql_migrations"; + const std::string search_for_dir = JoinPathSegments(kStaticDataParentDir, kMigrationsDir); + const std::string root_dir = env_util::GetRootDir(search_for_dir); + CHECK(!root_dir.empty()); + const std::string migrations_dir = + JoinPathSegments(root_dir, kStaticDataParentDir, kMigrationsDir); + faststring migration_content; + CHECK_OK(ReadFileToString(Env::Default(), + JoinPathSegments(migrations_dir, migration_file), + &migration_content)); + return migration_content.ToString(); + } }; TEST_F(PgCatalogVersionTest, DBCatalogVersion) { @@ -2626,5 +2642,232 @@ SET yb_non_ddl_txn_for_sys_tables_allowed=1; ASSERT_EQ(expected, result); } +// Test YSQL upgrade where we can directly write to catalog tables using DML +// statements under the GUC yb_non_ddl_txn_for_sys_tables_allowed=1. These +// DML statements do generate invalidation messages. We make the COMMIT statement +// in a YSQL migrate script to be a DDL so that we can capture the messages +// generated by these DML statements. +TEST_F(PgCatalogVersionTest, InvalMessageYsqlUpgradeCommit1) { + RestartClusterWithInvalMessageEnabled(); + auto conn_yugabyte = ASSERT_RESULT(Connect()); + ASSERT_OK(conn_yugabyte.Execute("SET log_min_messages = DEBUG1")); + // Use snapshot isolation mode during YSQL upgrade. This is needed as a simple work + // around so that we do not start subtransactions during YSQL upgrade. Otherwise the + // COMMIT will only capture the invalidation messages generated by the last DML statement + // preceding the COMMIT statement. + ASSERT_OK(conn_yugabyte.Execute("SET DEFAULT_TRANSACTION_ISOLATION TO \"REPEATABLE READ\"")); + auto v = ASSERT_RESULT(GetCatalogVersion(&conn_yugabyte)); + ASSERT_EQ(v, 1); + string migrate_sql = "SET yb_non_ddl_txn_for_sys_tables_allowed=1;\n"; + // We directly make an update to pg_class that will generate 1 invalidation message. + // Write for a random number of times, and verify we have captured the same number + // of messages by the COMMIT statement. + const auto inval_message_count = RandomUniformInt(1, 100); + LOG(INFO) << "inval_message_count: " << inval_message_count; + for (int i = 0; i < inval_message_count; ++i) { + // The nested BEGIN; does not have any effect other than causing a warning messages + // WARNING: there is already a transaction in progress + // However if we allow YSQL upgrade to run in read committed isolation, then + // each statement will start a subtransaction which prevents the final COMMIT + // statement to catpure all the invalidation messages. For now we disallow YSQL + // upgrade to run in read committed isolation to avoid that. + migrate_sql += "BEGIN;\nUPDATE pg_class SET relam = 2 WHERE oid = 8010;\n"; + } + migrate_sql += "COMMIT;\n"; + ASSERT_OK(conn_yugabyte.Execute("SET ysql_upgrade_mode TO true")); + ASSERT_OK(conn_yugabyte.Execute(migrate_sql)); + // The migrate sql is run under YSQL upgrade mode. Therefore the COMMIT is + // considered as a DDL and causes catalog version to increment. 
+ v = ASSERT_RESULT(GetCatalogVersion(&conn_yugabyte)); + ASSERT_EQ(v, 2); + const auto count = ASSERT_RESULT(conn_yugabyte.FetchRow( + "SELECT COUNT(*) FROM pg_yb_invalidation_messages")); + ASSERT_EQ(count, 1); + auto query = "SELECT encode(messages, 'hex') FROM pg_yb_invalidation_messages " + "WHERE current_version=$0"s; + auto result2 = ASSERT_RESULT(conn_yugabyte.FetchAllAsString(Format(query, 2))); + // Each invalidation messages is 24 bytes, in hex is 48 bytes. + ASSERT_EQ(result2.size(), inval_message_count * 48U); + // Make sure we only have simple usage of COMMIT in a migration script. PG allows + // COMMIT inside a an anonymous code block, in YSQL upgrade we do not allow. + migrate_sql = + R"#( +DO $$ +BEGIN + UPDATE pg_class SET relam = 2 WHERE oid = 8010; + COMMIT; +END$$; + )#"; + auto status = conn_yugabyte.Execute(migrate_sql); + ASSERT_TRUE(status.IsNetworkError()) << status; + ASSERT_STR_CONTAINS(status.ToString(), "invalid transaction termination"); + ASSERT_OK(conn_yugabyte.Execute("ROLLBACK")); + // PG also allows COMMIT inside a procedure that is invoked via CALL statement. + // In YSQL upgrade we do not allow. + migrate_sql = + R"#( +CREATE OR REPLACE PROCEDURE myproc() AS +$$ +BEGIN + UPDATE pg_class SET relam = 2 WHERE oid = 8010; + COMMIT; +END $$ LANGUAGE 'plpgsql'; +CALL myproc(); + )#"; + status = conn_yugabyte.Execute(migrate_sql); + ASSERT_TRUE(status.IsNetworkError()) << status; + ASSERT_STR_CONTAINS(status.ToString(), "invalid transaction termination"); +} + +TEST_F(PgCatalogVersionTest, InvalMessageYsqlUpgradeCommit2) { + RestartClusterWithInvalMessageEnabled(); + // Prepare the test setup by reverting + // V75__26335__pg_set_relation_stats.sql + auto conn_yugabyte = ASSERT_RESULT(Connect()); + ASSERT_OK(conn_yugabyte.Execute("SET log_min_messages = DEBUG1")); + ASSERT_OK(conn_yugabyte.Execute("SET DEFAULT_TRANSACTION_ISOLATION TO \"REPEATABLE READ\"")); + auto v = ASSERT_RESULT(GetCatalogVersion(&conn_yugabyte)); + ASSERT_EQ(v, 1); + const string setup_sql = + R"#( +BEGIN; +SET LOCAL yb_non_ddl_txn_for_sys_tables_allowed TO true; +DELETE FROM pg_catalog.pg_proc WHERE oid in (8091, 8092, 8093, 8094); +DELETE FROM pg_catalog.pg_description WHERE objoid in (8091, 8092, 8093, 8094) + AND classoid = 1255 AND objsubid = 0; +COMMIT; + )#"; + ASSERT_OK(conn_yugabyte.Execute(setup_sql)); + // The setup sql is not run under YSQL upgrade mode. Therefore its COMMIT is + // considered as a DML and does not cause catalog version to increment. + v = ASSERT_RESULT(GetCatalogVersion(&conn_yugabyte)); + ASSERT_EQ(v, 1); + + // Now run the migrate sql under YSQL upgrade mode: + // V75__26335__pg_set_relation_stats.sql + const string migrate_sql = + ReadMigrationFile("V75__26335__pg_set_relation_stats.sql"); + ASSERT_OK(conn_yugabyte.Execute("SET ysql_upgrade_mode TO true")); + ASSERT_OK(conn_yugabyte.Execute(migrate_sql)); + // The migrate sql is run under YSQL upgrade mode. Therefore each COMMIT is + // considered as a DDL and causes catalog version to increment. + v = ASSERT_RESULT(GetCatalogVersion(&conn_yugabyte)); + ASSERT_EQ(v, 5); + const auto count = ASSERT_RESULT(conn_yugabyte.FetchRow( + "SELECT COUNT(*) FROM pg_yb_invalidation_messages")); + ASSERT_EQ(count, 4); + auto query = "SELECT encode(messages, 'hex') FROM pg_yb_invalidation_messages " + "WHERE current_version=$0"s; + + // version 2 messages. + auto result2 = ASSERT_RESULT(conn_yugabyte.FetchAllAsString(Format(query, 2))); + ASSERT_EQ(result2.size(), 144U); + + // version 3 messages. 
+ auto result3 = ASSERT_RESULT(conn_yugabyte.FetchAllAsString(Format(query, 3))); + ASSERT_EQ(result3.size(), 144U); + + // version 4 messages. + auto result4 = ASSERT_RESULT(conn_yugabyte.FetchAllAsString(Format(query, 4))); + ASSERT_EQ(result4.size(), 144U); + + // version 5 messages. + auto result5 = ASSERT_RESULT(conn_yugabyte.FetchAllAsString(Format(query, 5))); + ASSERT_EQ(result5.size(), 144U); +} + +TEST_F(PgCatalogVersionTest, InvalMessageYsqlUpgradeCommit3) { + RestartClusterWithInvalMessageEnabled(); + auto conn_yugabyte = ASSERT_RESULT(Connect()); + ASSERT_OK(conn_yugabyte.Execute("SET log_min_messages = DEBUG1")); + ASSERT_OK(conn_yugabyte.Execute("SET DEFAULT_TRANSACTION_ISOLATION TO \"REPEATABLE READ\"")); + auto v = ASSERT_RESULT(GetCatalogVersion(&conn_yugabyte)); + ASSERT_EQ(v, 1); + + // Now run the migrate sql under YSQL upgrade mode. + // V77__26590__query_id_yb_terminated_queries_view.sql + const string migrate_sql = + ReadMigrationFile("V77__26590__query_id_yb_terminated_queries_view.sql"); + ASSERT_OK(conn_yugabyte.Execute("SET ysql_upgrade_mode TO true")); + ASSERT_OK(conn_yugabyte.Execute(migrate_sql)); + // The migrate sql is run under YSQL upgrade mode. Therefore its COMMIT is + // considered as a DDL. There are two COMMIT statements. The first COMMIT + // has got invalidation messages so it causes catalog version to increment + // from 1 to 2. Then the DROP VIEW statement causes catalog version to + // increment from 2 to 3, the next CREATE OR REPLACE VIEW statement causes + // catalog version to increment from 3 to 4. The last COMMIT statement got + // 1 invalidation messages because even though there is no catalog table + // writes between the CREATE OR REPLACE VIEW and the last COMMIT, the call + // to increment catalog version does generate one message that is not + // captured by the call itself. Therefore the last COMMIT still causes + // catalog version to increment. + v = ASSERT_RESULT(GetCatalogVersion(&conn_yugabyte)); + ASSERT_EQ(v, 5); + const auto count = ASSERT_RESULT(conn_yugabyte.FetchRow( + "SELECT COUNT(*) FROM pg_yb_invalidation_messages")); + ASSERT_EQ(count, 4); + auto query = "SELECT encode(messages, 'hex') FROM pg_yb_invalidation_messages " + "WHERE current_version=$0"s; + + // version 2 messages. + auto result2 = ASSERT_RESULT(conn_yugabyte.FetchAllAsString(Format(query, 2))); + ASSERT_EQ(result2.size(), 144U); + + // version 3 messages. + auto result3 = ASSERT_RESULT(conn_yugabyte.FetchAllAsString(Format(query, 3))); + ASSERT_EQ(result3.size(), 1248U); + + // version 4 messages. + auto result4 = ASSERT_RESULT(conn_yugabyte.FetchAllAsString(Format(query, 4))); + ASSERT_EQ(result4.size(), 1344U); + + // version 5 messages. 
+ auto result5 = ASSERT_RESULT(conn_yugabyte.FetchAllAsString(Format(query, 5))); + ASSERT_EQ(result5.size(), 48U); +} + +TEST_F(PgCatalogVersionTest, InvalMessageYsqlUpgradeCommit4) { + RestartClusterWithInvalMessageEnabled(); + // Prepare the test setup by reverting + // V78__26645__yb_binary_upgrade_set_next_pg_enum_sortorder.sql + auto conn_yugabyte = ASSERT_RESULT(Connect()); + ASSERT_OK(conn_yugabyte.Execute("SET log_min_messages = DEBUG1")); + ASSERT_OK(conn_yugabyte.Execute("SET DEFAULT_TRANSACTION_ISOLATION TO \"REPEATABLE READ\"")); + auto v = ASSERT_RESULT(GetCatalogVersion(&conn_yugabyte)); + ASSERT_EQ(v, 1); + const string setup_sql = + R"#( +BEGIN; +SET LOCAL yb_non_ddl_txn_for_sys_tables_allowed TO true; +DELETE FROM pg_catalog.pg_proc WHERE oid = 8095; +DELETE FROM pg_catalog.pg_description WHERE objoid = 8095 AND classoid = 1255 AND objsubid = 0; +COMMIT; + )#"; + ASSERT_OK(conn_yugabyte.Execute(setup_sql)); + // The setup sql is not run under YSQL upgrade mode. Therefore its COMMIT is + // considered as a DML and does not cause catalog version to increment. + v = ASSERT_RESULT(GetCatalogVersion(&conn_yugabyte)); + ASSERT_EQ(v, 1); + + // Now run the migrate sql under YSQL upgrade mode: + // V78__26645__yb_binary_upgrade_set_next_pg_enum_sortorder.sql + const string migrate_sql = + ReadMigrationFile("V78__26645__yb_binary_upgrade_set_next_pg_enum_sortorder.sql"); + ASSERT_OK(conn_yugabyte.Execute("SET ysql_upgrade_mode TO true")); + ASSERT_OK(conn_yugabyte.Execute(migrate_sql)); + // The migrate sql is run under YSQL upgrade mode. Therefore its COMMIT is + // considered as a DDL and causes catalog version to increment. + v = ASSERT_RESULT(GetCatalogVersion(&conn_yugabyte)); + ASSERT_EQ(v, 2); + auto query = "SELECT encode(messages, 'hex') FROM pg_yb_invalidation_messages"s; + auto result = ASSERT_RESULT(conn_yugabyte.FetchAllAsString(query)); + // The migrate script has generated 3 messages: + // 1 SharedInvalCatcacheMsg for PROCNAMEARGSNSP + // 1 SharedInvalCatcacheMsg for PROCOID + // 1 SharedInvalSnapshotMsg for pg_description + // each messages is 24 raw bytes and 48 bytes in 'hex' (48 * 3 = 144). + ASSERT_EQ(result.size(), 144U); +} + } // namespace pgwrapper } // namespace yb diff --git a/src/yb/yql/pgwrapper/ysql_upgrade.cc b/src/yb/yql/pgwrapper/ysql_upgrade.cc index f7e890e96ed8..7a3587bfd6d3 100644 --- a/src/yb/yql/pgwrapper/ysql_upgrade.cc +++ b/src/yb/yql/pgwrapper/ysql_upgrade.cc @@ -215,6 +215,11 @@ class YsqlUpgradeHelper::DatabaseEntry { Result> MakeConnection() { auto pgconn = std::make_shared(VERIFY_RESULT(conn_builder_.Connect())); RETURN_NOT_OK(pgconn->Execute("SET ysql_upgrade_mode TO true;")); + // Use snapshot isolation mode during YSQL upgrade. This is needed as a simple work + // around so that we do not start subtransactions during YSQL upgrade. Otherwise the + // COMMIT will only capture the invalidation messages generated by the last DML statement + // preceding the COMMIT statement. + RETURN_NOT_OK(pgconn->Execute("SET DEFAULT_TRANSACTION_ISOLATION TO \"REPEATABLE READ\"")); return pgconn; } From 440157a7ae65632dc147bf6732543dea371a0a81 Mon Sep 17 00:00:00 2001 From: Abhinab Saha Date: Wed, 9 Apr 2025 17:37:00 +0530 Subject: [PATCH 048/146] [#26086] DocDB: Capture and return per RPC metrics for writes with protobuf response Summary: This diff captures the rocksdb and tablet metrics for writes and returns them with the response. A future diff will use this to display these metrics in the EXPLAIN (ANALYZE, DIST, DEBUG) output. 
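For context, here is a hedged sketch of how these captured metrics are expected to surface once that follow-up diff lands; the table name `t` is a placeholder and the exact output shape is not defined by this patch.

```
-- Assumes the ysql_analyze_dump_metrics flag is enabled and the write request
-- carries metrics_capture = PGSQL_METRICS_CAPTURE_ALL; the per-RPC rocksdb and
-- tablet metric deltas returned with the write response are intended to appear
-- in the DEBUG section of the distributed EXPLAIN output.
EXPLAIN (ANALYZE, DIST, DEBUG) INSERT INTO t (k, v) VALUES (1, 'x');
```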
A single instance of `WriteQuery` can contain a batch of write operations. If the `batch_tablet_metrics_update` gflag is true, docdb and tablet metrics for all the write operations will be accumulated and stored in the `WriteQuery` object. The accumulated metrics will be returned with the response PB of the first write operation in the `WriteQuery` object. Storing the metrics in the `WriteQuery` object ensures that we also capture the tablet level metrics which are updated outside the scope of executing the write operations. We pass the metrics from the WriteQuery object to other places like WaitOnConflictResolver while executing the query. In case of errors like deadlock error, it's possible for the WriteQuery object to be destroyed before WaitOnConflictResolver, in that case, it results in a use-after-free bug. We solve this by using a shared pointer of the TabletMetrics pointer. The destructor of the WriteQuery will update the TabletMetrics pointer to the global TabletMetrics pointer. So, any metrics updated after the WriteQuery object is destroyed will directly be updated in the global tablet metrics. In this way, even after the WriteQuery object is destroyed, the shared pointer is still alive until all the objects which store the metrics object are destroyed. Perf report with this change {F354351} **Upgrade/rollback safety:** This diff adds an optional metrics_capture field to PgsqlWriteRequestPB but does not require any flags to maintain upgrade/downgrade safety. If the field is not set, the default value (PGSQL_METRICS_CAPTURE_NONE) results in no metric changes being sent with the response/metric changes missing in the output, but it is acceptable for this feature to not report all metric changes during a rolling upgrade/downgrade, and the issue will resolve automatically when it finishes (all nodes sending a value after upgrade/value not checked at all after downgrade). Jira: DB-15412 Test Plan: Jenkins Reviewers: kramanathan, esheng Reviewed By: esheng Subscribers: svc_phabricator, yql, ybase Differential Revision: https://phorge.dev.yugabyte.com/D41947 --- src/yb/common/pgsql_protocol.proto | 3 ++ src/yb/docdb/conflict_resolution.cc | 35 +++++++++-------- src/yb/docdb/conflict_resolution.h | 4 +- src/yb/docdb/docdb.cc | 6 +-- src/yb/docdb/docdb.h | 2 +- src/yb/tablet/write_query.cc | 61 ++++++++++++++++++++++------- src/yb/tablet/write_query.h | 19 +++++++++ src/yb/tserver/tablet_service.cc | 12 ++++++ src/yb/yql/pggate/pg_dml_write.cc | 2 + 9 files changed, 107 insertions(+), 37 deletions(-) diff --git a/src/yb/common/pgsql_protocol.proto b/src/yb/common/pgsql_protocol.proto index 89843746378a..9f7617a3c65e 100644 --- a/src/yb/common/pgsql_protocol.proto +++ b/src/yb/common/pgsql_protocol.proto @@ -315,6 +315,9 @@ message PgsqlWriteRequestPB { // data. Bypass this check when we know its safe. // Valid cases are InitDB, pg_upgrade, and temp tables. optional bool force_catalog_modifications = 27; + + // What metric changes to return in response. 
+ optional PgsqlMetricsCaptureType metrics_capture = 28; } //-------------------------------------------------------------------------------------------------- diff --git a/src/yb/docdb/conflict_resolution.cc b/src/yb/docdb/conflict_resolution.cc index beecb5278765..1ad92e88159d 100644 --- a/src/yb/docdb/conflict_resolution.cc +++ b/src/yb/docdb/conflict_resolution.cc @@ -89,8 +89,9 @@ using TransactionConflictInfoMap = std::unordered_map; Status MakeConflictStatus(const TransactionId& our_id, const TransactionId& other_id, - const char* reason, tablet::TabletMetrics* tablet_metrics) { - tablet_metrics->Increment(tablet::TabletCounters::kTransactionConflicts); + const char* reason, + const std::shared_ptr& tablet_metrics) { + (*tablet_metrics)->Increment(tablet::TabletCounters::kTransactionConflicts); return (STATUS(TryAgain, Format("$0 conflicts with $1 transaction: $2", our_id, reason, other_id), Slice(), TransactionError(TransactionErrorCode::kConflict))); } @@ -126,7 +127,7 @@ class ConflictResolverContext { virtual int64_t GetTxnStartUs() const = 0; - virtual tablet::TabletMetrics* GetTabletMetrics() = 0; + virtual const std::shared_ptr& GetTabletMetrics() = 0; virtual bool IgnoreConflictsWith(const TransactionId& other) = 0; @@ -664,7 +665,7 @@ class WaitOnConflictResolver : public ConflictResolver { if (wait_start_time_.Initialized()) { const MonoDelta elapsed_time = MonoTime::Now().GetDeltaSince(wait_start_time_); - context_->GetTabletMetrics()->Increment( + (*context_->GetTabletMetrics())->Increment( tablet::TabletEventStats::kTotalWaitQueueTime, make_unsigned(elapsed_time.ToMicroseconds())); } @@ -907,12 +908,12 @@ class StrongConflictChecker { StrongConflictChecker(const TransactionId& transaction_id, HybridTime read_time, ConflictResolver* resolver, - tablet::TabletMetrics* tablet_metrics, + const std::shared_ptr& tablet_metrics, KeyBytes* buffer) : transaction_id_(transaction_id), read_time_(read_time), resolver_(*resolver), - tablet_metrics_(*tablet_metrics), + tablet_metrics_(tablet_metrics), buffer_(*buffer) {} @@ -984,7 +985,7 @@ class StrongConflictChecker { return STATUS(InternalError, "Skip locking since entity was modified in regular db", TransactionError(TransactionErrorCode::kSkipLocking)); } else { - tablet_metrics_.Increment(tablet::TabletCounters::kTransactionConflicts); + (*tablet_metrics_)->Increment(tablet::TabletCounters::kTransactionConflicts); return STATUS_EC_FORMAT( TryAgain, TransactionError(TransactionErrorCode::kConflict), "$0 conflict with concurrently committed data. Value write after transaction start: " @@ -1010,7 +1011,7 @@ class StrongConflictChecker { const TransactionId& transaction_id_; const HybridTime read_time_; ConflictResolver& resolver_; - tablet::TabletMetrics& tablet_metrics_; + std::shared_ptr tablet_metrics_; KeyBytes& buffer_; // RocksDb iterator with bloom filter can be reused in case keys has same hash component. 
@@ -1022,12 +1023,12 @@ class ConflictResolverContextBase : public ConflictResolverContext { ConflictResolverContextBase(const DocOperations& doc_ops, HybridTime resolution_ht, int64_t txn_start_us, - tablet::TabletMetrics* tablet_metrics, + const std::shared_ptr& tablet_metrics, ConflictManagementPolicy conflict_management_policy) : doc_ops_(doc_ops), resolution_ht_(resolution_ht), txn_start_us_(txn_start_us), - tablet_metrics_(*tablet_metrics), + tablet_metrics_(tablet_metrics), conflict_management_policy_(conflict_management_policy) { } @@ -1047,8 +1048,8 @@ class ConflictResolverContextBase : public ConflictResolverContext { return txn_start_us_; } - tablet::TabletMetrics* GetTabletMetrics() override { - return &tablet_metrics_; + const std::shared_ptr& GetTabletMetrics() override { + return tablet_metrics_; } ConflictManagementPolicy GetConflictManagementPolicy() const override { @@ -1103,7 +1104,7 @@ class ConflictResolverContextBase : public ConflictResolverContext { bool fetched_metadata_for_transactions_ = false; - tablet::TabletMetrics& tablet_metrics_; + std::shared_ptr tablet_metrics_; const ConflictManagementPolicy conflict_management_policy_; }; @@ -1116,7 +1117,7 @@ class TransactionConflictResolverContext : public ConflictResolverContextBase { HybridTime resolution_ht, HybridTime read_time, int64_t txn_start_us, - tablet::TabletMetrics* tablet_metrics, + const std::shared_ptr& tablet_metrics, ConflictManagementPolicy conflict_management_policy) : ConflictResolverContextBase( doc_ops, resolution_ht, txn_start_us, tablet_metrics, conflict_management_policy), @@ -1357,7 +1358,7 @@ class OperationConflictResolverContext : public ConflictResolverContextBase { OperationConflictResolverContext(const DocOperations* doc_ops, HybridTime resolution_ht, int64_t txn_start_us, - tablet::TabletMetrics* tablet_metrics, + const std::shared_ptr& tablet_metrics, ConflictManagementPolicy conflict_management_policy) : ConflictResolverContextBase( *doc_ops, resolution_ht, txn_start_us, tablet_metrics, conflict_management_policy) { @@ -1445,7 +1446,7 @@ Status ResolveTransactionConflicts(const DocOperations& doc_ops, const DocDB& doc_db, PartialRangeKeyIntents partial_range_key_intents, TransactionStatusManager* status_manager, - tablet::TabletMetrics* tablet_metrics, + const std::shared_ptr& tablet_metrics, LockBatch* lock_batch, WaitQueue* wait_queue, bool is_advisory_lock_request, @@ -1491,7 +1492,7 @@ Status ResolveOperationConflicts(const DocOperations& doc_ops, const DocDB& doc_db, PartialRangeKeyIntents partial_range_key_intents, TransactionStatusManager* status_manager, - tablet::TabletMetrics* tablet_metrics, + const std::shared_ptr& tablet_metrics, LockBatch* lock_batch, WaitQueue* wait_queue, CoarseTimePoint deadline, diff --git a/src/yb/docdb/conflict_resolution.h b/src/yb/docdb/conflict_resolution.h index ae0977f6b513..62b0b683f429 100644 --- a/src/yb/docdb/conflict_resolution.h +++ b/src/yb/docdb/conflict_resolution.h @@ -100,7 +100,7 @@ Status ResolveTransactionConflicts(const DocOperations& doc_ops, const DocDB& doc_db, dockv::PartialRangeKeyIntents partial_range_key_intents, TransactionStatusManager* status_manager, - tablet::TabletMetrics* tablet_metrics, + const std::shared_ptr& tablet_metrics, LockBatch* lock_batch, WaitQueue* wait_queue, bool is_advisory_lock_request, @@ -128,7 +128,7 @@ Status ResolveOperationConflicts(const DocOperations& doc_ops, const DocDB& doc_db, dockv::PartialRangeKeyIntents partial_range_key_intents, TransactionStatusManager* status_manager, - 
tablet::TabletMetrics* tablet_metrics, + const std::shared_ptr& tablet_metrics, LockBatch* lock_batch, WaitQueue* wait_queue, CoarseTimePoint deadline, diff --git a/src/yb/docdb/docdb.cc b/src/yb/docdb/docdb.cc index c4963d76f3ca..6c03213c675b 100644 --- a/src/yb/docdb/docdb.cc +++ b/src/yb/docdb/docdb.cc @@ -191,7 +191,7 @@ Result> DetermineKeysToLock( Result PrepareDocWriteOperation( const std::vector>& doc_write_ops, const ArenaList& read_pairs, - tablet::TabletMetrics* tablet_metrics, + const std::shared_ptr& tablet_metrics, IsolationLevel isolation_level, RowMarkType row_mark_type, bool transactional_table, @@ -221,14 +221,14 @@ Result PrepareDocWriteOperation( auto lock_status = result.lock_batch.status(); if (!lock_status.ok()) { if (tablet_metrics != nullptr) { - tablet_metrics->Increment(tablet::TabletCounters::kFailedBatchLock); + (*tablet_metrics)->Increment(tablet::TabletCounters::kFailedBatchLock); } return lock_status.CloneAndAppend( Format("Timeout: $0", deadline - ToCoarse(start_time))); } if (tablet_metrics != nullptr) { const MonoDelta elapsed_time = MonoTime::Now().GetDeltaSince(start_time); - tablet_metrics->Increment( + (*tablet_metrics)->Increment( tablet::TabletEventStats::kWriteLockLatency, make_unsigned(elapsed_time.ToMicroseconds())); } diff --git a/src/yb/docdb/docdb.h b/src/yb/docdb/docdb.h index 2dac9832443f..cfd0bcc57822 100644 --- a/src/yb/docdb/docdb.h +++ b/src/yb/docdb/docdb.h @@ -110,7 +110,7 @@ struct PrepareDocWriteOperationResult { Result PrepareDocWriteOperation( const std::vector>& doc_write_ops, const ArenaList& read_pairs, - tablet::TabletMetrics* tablet_metrics, + const std::shared_ptr& tablet_metrics, IsolationLevel isolation_level, RowMarkType row_mark_type, bool transactional_table, diff --git a/src/yb/tablet/write_query.cc b/src/yb/tablet/write_query.cc index f747abc1dc88..bcbfb746c359 100644 --- a/src/yb/tablet/write_query.cc +++ b/src/yb/tablet/write_query.cc @@ -45,6 +45,7 @@ #include "yb/tserver/tserver.pb.h" +#include "yb/util/debug-util.h" #include "yb/util/logging.h" #include "yb/util/metrics.h" #include "yb/util/scope_exit.h" @@ -63,6 +64,9 @@ TAG_FLAG(disable_alter_vs_write_mutual_exclusion, advanced); DEFINE_test_flag(bool, writequery_stuck_from_callback_leak, false, "Simulate WriteQuery stuck because of the update index flushed rpc call back leak"); +DECLARE_bool(batch_tablet_metrics_update); +DECLARE_bool(ysql_analyze_dump_metrics); + namespace yb { namespace tablet { @@ -203,6 +207,14 @@ WriteQuery::WriteQuery( start_time_(MonoTime::Now()), execute_mode_(ExecuteMode::kSimple) { IncrementActiveWriteQueryObjectsBy(1); + auto res = tablet_safe(); + if (res.ok()) { + global_tablet_metrics_ = (*res)->metrics(); + } + + metrics_ = std::make_shared( + GetAtomicFlag(&FLAGS_batch_tablet_metrics_update) + ? &scoped_tablet_metrics_ : global_tablet_metrics_); } LWWritePB& WriteQuery::request() { @@ -281,6 +293,21 @@ void WriteQuery::Release() { WriteQuery::~WriteQuery() { IncrementActiveWriteQueryObjectsBy(-1); + + // Any metrics updated after destroying the WriteQuery + // object cannot be sent with the response PB. So, update + // global tablet metrics directly from now. 
+ *metrics_ = global_tablet_metrics_; + + if (global_tablet_metrics_) { + scoped_tablet_metrics_.MergeAndClear(global_tablet_metrics_); + } + auto tablet_result = tablet_safe(); + if (tablet_result.ok()) { + scoped_statistics_.MergeAndClear( + (*tablet_result)->regulardb_statistics().get(), + (*tablet_result)->intentsdb_statistics().get()); + } } void WriteQuery::set_client_request(std::reference_wrapper req) { @@ -309,12 +336,11 @@ void WriteQuery::Finished(WriteOperation* operation, const Status& status) { auto tablet = *tablet_result; if (status.ok()) { - TabletMetrics* metrics = tablet->metrics(); - if (metrics) { + if (metrics_) { auto op_duration_usec = make_unsigned(MonoDelta(MonoTime::Now() - start_time_).ToMicroseconds()); - metrics->Increment(tablet::TabletEventStats::kQlWriteLatency, op_duration_usec); + (*metrics_)->Increment(tablet::TabletEventStats::kQlWriteLatency, op_duration_usec); } } @@ -715,7 +741,7 @@ Status WriteQuery::DoExecute() { dockv::PartialRangeKeyIntents partial_range_key_intents(metadata.UsePartialRangeKeyIntents()); prepare_result_ = VERIFY_RESULT(docdb::PrepareDocWriteOperation( - doc_ops_, write_batch.read_pairs(), tablet->metrics(), isolation_level_, row_mark_type, + doc_ops_, write_batch.read_pairs(), metrics_, isolation_level_, row_mark_type, transactional_table, write_batch.has_transaction(), deadline(), partial_range_key_intents, tablet->shared_lock_manager())); @@ -772,7 +798,7 @@ Status WriteQuery::DoExecute() { return docdb::ResolveOperationConflicts( doc_ops_, conflict_management_policy, now, write_batch.transaction().pg_txn_start_us(), request_start_us(), request_id, tablet->doc_db(), partial_range_key_intents, - transaction_participant, tablet->metrics(), &prepare_result_.lock_batch, wait_queue, + transaction_participant, metrics_, &prepare_result_.lock_batch, wait_queue, deadline(), [this, now](const Result& result) { if (!result.ok()) { @@ -808,7 +834,7 @@ Status WriteQuery::DoExecute() { doc_ops_, conflict_management_policy, write_batch, tablet->clock()->Now(), read_time_ ? read_time_.read : HybridTime::kMax, write_batch.transaction().pg_txn_start_us(), request_start_us(), request_id, tablet->doc_db(), partial_range_key_intents, - transaction_participant, tablet->metrics(), + transaction_participant, metrics_, &prepare_result_.lock_batch, wait_queue, is_advisory_lock_request, deadline(), [this](const Result& result) { if (!result.ok()) { @@ -851,7 +877,7 @@ Status WriteQuery::DoTransactionalConflictsResolved() { safe_time = VERIFY_RESULT(tablet->SafeTime(RequireLease::kTrue)); read_time_ = ReadHybridTime::FromHybridTimeRange( {safe_time, tablet->clock()->NowRange().second}); - tablet->metrics()->Increment(tablet::TabletCounters::kPickReadTimeOnDocDB); + (*metrics_)->Increment(tablet::TabletCounters::kPickReadTimeOnDocDB); } else if (prepare_result_.need_read_snapshot && isolation_level_ == IsolationLevel::SERIALIZABLE_ISOLATION) { return STATUS_FORMAT( @@ -872,7 +898,7 @@ Status WriteQuery::DoCompleteExecute(HybridTime safe_time) { auto tablet = VERIFY_RESULT(tablet_safe()); if (prepare_result_.need_read_snapshot && !read_time_) { // A read_time will be picked by the below ScopedReadOperation::Create() call. 
- tablet->metrics()->Increment(tablet::TabletCounters::kPickReadTimeOnDocDB); + (*metrics_)->Increment(tablet::TabletCounters::kPickReadTimeOnDocDB); } // For WriteQuery requests with execution mode kCql and kPgsql, we perform schema version checks // in two places: @@ -903,18 +929,13 @@ Status WriteQuery::DoCompleteExecute(HybridTime safe_time) { read_time_)) : ScopedReadOperation(); - docdb::DocDBStatistics statistics; - auto scope_exit = ScopeExit([&statistics, tablet] { - statistics.MergeAndClear( - tablet->regulardb_statistics().get(), tablet->intentsdb_statistics().get()); - }); docdb::ReadOperationData read_operation_data { .deadline = deadline(), .read_time = prepare_result_.need_read_snapshot ? read_op.read_time() // When need_read_snapshot is false, this time is used only to write TTL field of record. : ReadHybridTime::SingleTime(tablet->clock()->Now()), - .statistics = &statistics, + .statistics = &scoped_statistics_, }; // We expect all read operations for this transaction to be done in AssembleDocWriteBatch. Once @@ -1423,5 +1444,17 @@ void WriteQuery::IncrementActiveWriteQueryObjectsBy(int64_t value) { } } +PgsqlResponsePB* WriteQuery::GetPgsqlResponseForMetricsCapture() const { + if (!pgsql_write_ops_.empty()) { + auto& write_op = pgsql_write_ops_.at(0); + if (GetAtomicFlag(&FLAGS_ysql_analyze_dump_metrics) && + write_op->request().metrics_capture() == + PgsqlMetricsCaptureType::PGSQL_METRICS_CAPTURE_ALL) { + return write_op->response(); + } + } + return nullptr; +} + } // namespace tablet } // namespace yb diff --git a/src/yb/tablet/write_query.h b/src/yb/tablet/write_query.h index 8291f2736c82..ad5fb9ac178a 100644 --- a/src/yb/tablet/write_query.h +++ b/src/yb/tablet/write_query.h @@ -18,12 +18,14 @@ #include "yb/docdb/docdb_fwd.h" #include "yb/docdb/docdb.h" #include "yb/docdb/doc_operation.h" +#include "yb/docdb/docdb_statistics.h" #include "yb/docdb/lock_batch.h" #include "yb/rpc/rpc_context.h" #include "yb/tablet/tablet_fwd.h" +#include "yb/tablet/tablet_metrics.h" #include "yb/tserver/tserver.fwd.h" #include "yb/util/operation_counter.h" @@ -119,6 +121,12 @@ class WriteQuery { uint64_t request_start_us() const { return request_start_us_; } + std::shared_ptr metrics() { return metrics_; } + + PgsqlResponsePB* GetPgsqlResponseForMetricsCapture() const; + ScopedTabletMetrics scoped_tablet_metrics() { return scoped_tablet_metrics_; } + docdb::DocDBStatistics scoped_statistics() { return scoped_statistics_; } + private: friend struct UpdateQLIndexesTask; enum class ExecuteMode; @@ -243,6 +251,17 @@ class WriteQuery { // Stores the start time of the underlying rpc request that created this WriteQuery. // The field is consistent across failed ReadRpc/WriteRpc retries. uint64_t request_start_us_ = 0; + + // Metrics that are stored for the lifetime of a WriteQuery object and returned + // with PgsqlResponsePB. + ScopedTabletMetrics scoped_tablet_metrics_; + TabletMetrics* global_tablet_metrics_ = nullptr; + + // Stores either scoped_tablet_metrics_ or global_tablet_metrics_ + // depending on the batch_tablet_metrics_update gflag. This points + // to global_tablet_metrics_ once the WriteQuery object is destroyed. 
+ std::shared_ptr metrics_; + docdb::DocDBStatistics scoped_statistics_; }; } // namespace tablet diff --git a/src/yb/tserver/tablet_service.cc b/src/yb/tserver/tablet_service.cc index f64066086962..9039f470578f 100644 --- a/src/yb/tserver/tablet_service.cc +++ b/src/yb/tserver/tablet_service.cc @@ -514,6 +514,8 @@ class WriteQueryCompletionCallback { TRACE("Write completing with status $0", yb::ToString(status)); + CopyMetricsToPgsqlResponse(); + if (!status.ok()) { if (leader_term_set_in_request_ && status.IsAborted() && status.message().Contains("Operation submitted in term")) { @@ -556,6 +558,16 @@ class WriteQueryCompletionCallback { return response_->mutable_error(); } + void CopyMetricsToPgsqlResponse() const { + auto tablet_metrics = query_->scoped_tablet_metrics(); + auto statistics = query_->scoped_statistics(); + + if (auto* resp = query_->GetPgsqlResponseForMetricsCapture()) { + tablet_metrics.CopyToPgsqlResponse(resp); + statistics.CopyToPgsqlResponse(resp); + } + } + tablet::TabletPeerPtr tablet_peer_; const std::shared_ptr context_; WriteResponsePB* const response_; diff --git a/src/yb/yql/pggate/pg_dml_write.cc b/src/yb/yql/pggate/pg_dml_write.cc index 8bd5da5fed34..3cd27f289996 100644 --- a/src/yb/yql/pggate/pg_dml_write.cc +++ b/src/yb/yql/pggate/pg_dml_write.cc @@ -51,6 +51,8 @@ Status PgDmlWrite::Prepare(const PgObjectId& table_id, bool is_region_local) { write_req_->dup_table_id(table_id.GetYbTableId()); write_req_->set_schema_version(target_->schema_version()); write_req_->set_stmt_id(reinterpret_cast(write_req_.get())); + // TODO(#26086): Capture and display metrics in EXPLAIN output + write_req_->set_metrics_capture(PgsqlMetricsCaptureType::PGSQL_METRICS_CAPTURE_NONE); doc_op_ = std::make_shared(pg_session_, &target_, std::move(write_op)); PrepareColumns(); From a5a8e8febd98f2338575e664f5a86b47b1041f19 Mon Sep 17 00:00:00 2001 From: asharma Date: Fri, 9 May 2025 09:54:47 +0000 Subject: [PATCH 049/146] [PLAT-17587] Fix dump entities missing on TLS enabled k8s universe Summary: With this db [[ https://github.com/yugabyte/yugabyte-db/commit/b737017d49a1257146f7af5b663d4edf8fe12ceb | commit]] , http requests are always redirected to https when required. We can always make http request when collecting dump entities. This is also how its done everywhere else in YBA. Test Plan: Manually verified that dump entities is present on TLS enabled VM and k8s universes. Reviewers: skurapati Reviewed By: skurapati Subscribers: svc_phabricator Differential Revision: https://phorge.dev.yugabyte.com/D43891 --- .../yw/common/supportbundle/TabletReportComponent.java | 7 +------ 1 file changed, 1 insertion(+), 6 deletions(-) diff --git a/managed/src/main/java/com/yugabyte/yw/common/supportbundle/TabletReportComponent.java b/managed/src/main/java/com/yugabyte/yw/common/supportbundle/TabletReportComponent.java index 90bcfa741833..5030e13dc641 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/supportbundle/TabletReportComponent.java +++ b/managed/src/main/java/com/yugabyte/yw/common/supportbundle/TabletReportComponent.java @@ -69,12 +69,7 @@ public void downloadComponent( try { String masterLeaderHost = universe.getMasterLeaderHostText(); int masterHttpPort = universe.getMasterLeaderNode().masterHttpPort; - String protocol = - universe.getUniverseDetails().getPrimaryCluster().userIntent.enableClientToNodeEncrypt - ? 
"https" - : "http"; - String url = - String.format("%s://%s:%d/dump-entities", protocol, masterLeaderHost, masterHttpPort); + String url = String.format("http://%s:%d/dump-entities", masterLeaderHost, masterHttpPort); log.info("Querying url {} for dump entities.", url); JsonNode response = apiHelper.getRequest(url); if (response.has("error")) { From a822676a3c14a160966648e12cf1e72d3995e68e Mon Sep 17 00:00:00 2001 From: Kai Franz Date: Tue, 13 May 2025 13:55:57 -0700 Subject: [PATCH 050/146] [#27151] YSQL: Add GUC to disable hint table cache Summary: Currently, we don't have a way to disable the PG backend hint table cache added in D43933. If a customer runs into an issue with it or if it turns out to have a bug in it, we would like to have a way to disable it. This diff introduces a new GUC, `pg_hint_plan.yb_enable_hint_table_cache`, which is `true` by default. Setting this GUC to `false` will disable the hint cache completely, causing `pg_hint_plan` to fall back to the previous behavior where it reads hints from the hint table during query planning. **We make the following behavioral changes when the GUC is set to `false`:** 1. During startup, we don't install the relcache invalidation hook used for invalidating the hint cache. 2. The trigger on the hint cache table remains installed, since it gets installed when the user runs `CREATE EXTENSION pg_hint_plan`. But the trigger is configured to do nothing when the GUC is set to false. 3. During query planning, when looking up the hint for a query, we skip over the hint cache and go straight to the hint table in `get_hints_from_table`. When the GUC is disabled, this function operates identically to how it did before D43933. NOTE: This GUC is `PGC_BACKEND`, meaning that it can only be set from the `ysql_pg_conf_csv` file or by client request in the connection startup packet. This is done to prevent attempts to change the GUC on a running backend, which could result in a corrupted state. Jira: DB-16635 Test Plan: ``` ./yb_build.sh release --sj --cxx-test pg_hint_table-test ``` Added a new test to this file that sets the GUC to false and checks that the hint cache is not being used. Reviewers: myang Reviewed By: myang Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D43933 --- .../pg_hint_plan/pg_hint_plan.c | 23 +++- src/yb/yql/pgwrapper/pg_hint_table-test.cc | 109 ++++++++++++------ 2 files changed, 93 insertions(+), 39 deletions(-) diff --git a/src/postgres/third-party-extensions/pg_hint_plan/pg_hint_plan.c b/src/postgres/third-party-extensions/pg_hint_plan/pg_hint_plan.c index caedd6ff998f..f7e36ee471cb 100644 --- a/src/postgres/third-party-extensions/pg_hint_plan/pg_hint_plan.c +++ b/src/postgres/third-party-extensions/pg_hint_plan/pg_hint_plan.c @@ -618,6 +618,7 @@ static bool yb_enable_internal_hint_test = false; static bool yb_internal_hint_test_fail = false; static bool yb_use_generated_hints_for_plan = false; static bool yb_use_query_id_for_hinting = false; +static bool yb_enable_hint_table_cache = true; static int plpgsql_recurse_level = 0; /* PLpgSQL recursion level */ static int recurse_level = 0; /* recursion level incl. direct SPI calls */ @@ -761,7 +762,8 @@ _PG_init(void) { PLpgSQL_plugin **var_ptr; - CacheRegisterRelcacheCallback(YbInvalidateHintCacheCallback, (Datum) 0); + if (yb_enable_hint_table_cache) + CacheRegisterRelcacheCallback(YbInvalidateHintCacheCallback, (Datum) 0); /* Define custom GUC variables. 
*/ DefineCustomBoolVariable("pg_hint_plan.enable_hint", @@ -889,6 +891,17 @@ _PG_init(void) NULL, NULL); + DefineCustomBoolVariable("pg_hint_plan.yb_enable_hint_table_cache", + "Enables per-session caching for the hint table.", + NULL, + &yb_enable_hint_table_cache, + true, + PGC_BACKEND, + 0, + NULL, + NULL, + NULL); + EmitWarningsOnPlaceholders("pg_hint_plan"); /* Install hooks. */ @@ -2111,7 +2124,7 @@ get_hints_from_table(const char *client_query, const char *client_application) text *qry; text *app; - if (IsYugaByteEnabled()) + if (IsYugaByteEnabled() && yb_enable_hint_table_cache) { bool found; elog(DEBUG5, "Looking up hints cache for query: %s, application: '%s'", client_query, client_application); @@ -6794,6 +6807,12 @@ PG_FUNCTION_INFO_V1(yb_hint_plan_cache_invalidate); Datum yb_hint_plan_cache_invalidate(PG_FUNCTION_ARGS) { + if (!yb_enable_hint_table_cache) + { + elog(DEBUG3, "Hint cache is disabled, skipping cache invalidation"); + PG_RETURN_DATUM(PointerGetDatum(NULL)); + } + if (!CALLED_AS_TRIGGER(fcinfo)) { ereport(ERROR, (errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED), diff --git a/src/yb/yql/pgwrapper/pg_hint_table-test.cc b/src/yb/yql/pgwrapper/pg_hint_table-test.cc index d339fb8649f4..c9f0b992fa56 100644 --- a/src/yb/yql/pgwrapper/pg_hint_table-test.cc +++ b/src/yb/yql/pgwrapper/pg_hint_table-test.cc @@ -39,9 +39,9 @@ class PgHintTableTest : public LibPqTestBase { } static Result ExecuteExplainAndGetJoinType( - PGConn& conn, + PGConn *conn, const std::string& explain_query) { - auto explain_str = VERIFY_RESULT(conn.FetchRow(explain_query)); + auto explain_str = VERIFY_RESULT(conn->FetchRow(explain_query)); return GetJoinType(explain_str); } @@ -62,6 +62,45 @@ class PgHintTableTest : public LibPqTestBase { RETURN_NOT_OK(conn.Execute("SET pg_hint_plan.yb_use_query_id_for_hinting TO on")); return conn; } + + Result> InsertHintsAndRunQueries() { + // ------------------------------------------------------------------------------------------ + // 1. Setup connections + // ------------------------------------------------------------------------------------------ + auto conn_query = VERIFY_RESULT(ConnectWithHintTable()); + auto conn_hint = VERIFY_RESULT(ConnectWithHintTable()); + + // ------------------------------------------------------------------------------------------ + // 2. 
Insert hints and run queries to force hint table lookups + // ------------------------------------------------------------------------------------------ + const int num_queries = 100; + for (int i = 0; i < num_queries; i++) { + std::string whitespace(i * 1000, ' '); + auto hint_value = Format("YbBatchedNL(pg_class $0 pg_attribute)", whitespace); + + RETURN_NOT_OK(conn_hint.ExecuteFormat( + "INSERT INTO hint_plan.hints (norm_query_string, application_name, hints) " + "VALUES ('$0', '', '$1')", + query_id + i, hint_value)); + + // Execute the query to force hint cache lookups and refreshes + auto join_type = VERIFY_RESULT( + ExecuteExplainAndGetJoinType(&conn_query, "EXPLAIN (ANALYZE, FORMAT JSON) " + query)); + SCHECK_EQ(std::string("YB Batched Nested Loop"), join_type, IllegalState, + Format("Unexpected join type: %s", join_type)); + } + LOG(INFO) << "Completed " << num_queries << " queries"; + return std::make_pair(std::move(conn_query), std::move(conn_hint)); + } +}; + +class PgHintTableTestWithoutHintCache : public PgHintTableTest { + public: + void UpdateMiniClusterOptions(ExternalMiniClusterOptions* options) override { + PgHintTableTest::UpdateMiniClusterOptions(options); + options->extra_tserver_flags.push_back( + "--ysql_pg_conf_csv=pg_hint_plan.yb_enable_hint_table_cache=off"); + } }; const std::string PgHintTableTest::query = @@ -108,7 +147,7 @@ TEST_F(PgHintTableTest, ForceBatchedNestedLoop) { // 4. Re-run the query on the first connection and verify it uses a Batched Nested Loop now // ---------------------------------------------------------------------------------------------- ASSERT_STR_EQ("YB Batched Nested Loop", - ASSERT_RESULT(ExecuteExplainAndGetJoinType(conn1, explain_query))); + ASSERT_RESULT(ExecuteExplainAndGetJoinType(&conn1, explain_query))); // ---------------------------------------------------------------------------------------------- // 5. Delete the hint from the hint table @@ -122,7 +161,7 @@ TEST_F(PgHintTableTest, ForceBatchedNestedLoop) { // ---------------------------------------------------------------------------------------------- // 6. 
Re-run the query on the first connection and verify it's back to the original plan // ---------------------------------------------------------------------------------------------- - ASSERT_STR_EQ("Hash Join", ASSERT_RESULT(ExecuteExplainAndGetJoinType(conn1, explain_query))); + ASSERT_STR_EQ("Hash Join", ASSERT_RESULT(ExecuteExplainAndGetJoinType(&conn1, explain_query))); } TEST_F(PgHintTableTest, SimpleConcurrencyTest) { @@ -171,7 +210,7 @@ TEST_F(PgHintTableTest, SimpleConcurrencyTest) { &iterations]() { while (!stop_threads) { std::string join_type = ASSERT_RESULT( - ExecuteExplainAndGetJoinType(conn_explain, explain_query)); + ExecuteExplainAndGetJoinType(&conn_explain, explain_query)); LOG(INFO) << "Observed join type: " << join_type; { @@ -328,7 +367,7 @@ TEST_F(PgHintTableTest, PreparedStatementHintCacheRefresh) { int64_t custom_plan_refreshes = 0; for (int i = 0; i < 6; i++) { auto join_type = ASSERT_RESULT(ExecuteExplainAndGetJoinType( - conn_pstmt, Format("EXPLAIN (ANALYZE, FORMAT JSON) EXECUTE test_stmt($0)", i))); + &conn_pstmt, Format("EXPLAIN (ANALYZE, FORMAT JSON) EXECUTE test_stmt($0)", i))); ASSERT_STR_EQ("YB Batched Nested Loop", join_type); // Verify that we got 5 misses and no hits auto metrics_custom_plan = GetPrometheusMetrics(); @@ -352,7 +391,7 @@ TEST_F(PgHintTableTest, PreparedStatementHintCacheRefresh) { int64_t generic_plan_refreshes = 0; for (int i = 7; i < 12; i++) { auto join_type = ASSERT_RESULT(ExecuteExplainAndGetJoinType( - conn_pstmt, Format("EXPLAIN (ANALYZE, FORMAT JSON) EXECUTE test_stmt($0)", i))); + &conn_pstmt, Format("EXPLAIN (ANALYZE, FORMAT JSON) EXECUTE test_stmt($0)", i))); ASSERT_STR_EQ("YB Batched Nested Loop", join_type); // Verify that we got no additional hits/misses @@ -400,7 +439,7 @@ TEST_F(PgHintTableTest, PreparedStatementHintCacheRefresh) { // used // ---------------------------------------------------------------------------------------------- auto join_type = ASSERT_RESULT(ExecuteExplainAndGetJoinType( - conn_pstmt, "EXPLAIN (ANALYZE, FORMAT JSON) EXECUTE test_stmt(10)")); + &conn_pstmt, "EXPLAIN (ANALYZE, FORMAT JSON) EXECUTE test_stmt(10)")); ASSERT_STR_EQ("Merge Join", join_type); auto new_hint_metrics = GetPrometheusMetrics(); @@ -427,7 +466,7 @@ TEST_F(PgHintTableTest, InvalidHint) { // Execute the prepared statement once to establish a baseline auto initial_join_type = ASSERT_RESULT(ExecuteExplainAndGetJoinType( - conn_explain, "EXPLAIN (ANALYZE, FORMAT JSON) " + query)); + &conn_explain, "EXPLAIN (ANALYZE, FORMAT JSON) " + query)); LOG(INFO) << "Initial join type: " << initial_join_type; // ---------------------------------------------------------------------------------------------- @@ -450,7 +489,7 @@ TEST_F(PgHintTableTest, InvalidHint) { // ---------------------------------------------------------------------------------------------- for (int i = 0; i < 10; i++) { auto join_type_after_invalid = ASSERT_RESULT(ExecuteExplainAndGetJoinType( - conn_explain, "EXPLAIN (ANALYZE, FORMAT JSON) " + query)); + &conn_explain, "EXPLAIN (ANALYZE, FORMAT JSON) " + query)); // The join type should remain the same as the initial one // since the invalid hint should be ignored ASSERT_STR_EQ(initial_join_type, join_type_after_invalid); @@ -463,36 +502,12 @@ TEST_F(PgHintTableTest, InvalidHint) { */ TEST_F(PgHintTableTest, HintCacheMemoryLeakTest) { // ---------------------------------------------------------------------------------------------- - // 1. 
Setup connections - // ---------------------------------------------------------------------------------------------- - auto conn_query = ASSERT_RESULT(ConnectWithHintTable()); - auto conn_hint = ASSERT_RESULT(ConnectWithHintTable()); - - ASSERT_OK(conn_hint.Execute("SET yb_tcmalloc_sample_period = 1")); - ASSERT_OK(conn_query.Execute("SET yb_tcmalloc_sample_period = 1")); - - // ---------------------------------------------------------------------------------------------- - // 2. Insert hints and run queries to force hint cache lookups & refreshes + // 1. Setup connections, add hints and check that the hints are being followed // ---------------------------------------------------------------------------------------------- - const int num_queries = 100; - for (int i = 0; i < num_queries; i++) { - std::string whitespace(i * 1000, ' '); - auto hint_value = Format("YbBatchedNL(pg_class $0 pg_attribute)", whitespace); - - ASSERT_OK(conn_hint.ExecuteFormat( - "INSERT INTO hint_plan.hints (norm_query_string, application_name, hints) " - "VALUES ('$0', '', '$1')", - query_id + i, hint_value)); - - // Execute the query to force hint cache lookups and refreshes - auto join_type = ASSERT_RESULT( - ExecuteExplainAndGetJoinType(conn_query, "EXPLAIN (ANALYZE, FORMAT JSON) " + query)); - ASSERT_STR_EQ("YB Batched Nested Loop", join_type); - } - LOG(INFO) << "Completed " << num_queries << " queries"; + auto [conn_query, conn_hint] = ASSERT_RESULT(InsertHintsAndRunQueries()); // ---------------------------------------------------------------------------------------------- - // 3. Check size of YbHintCacheContext and the total size of all memory contexts + // 2. Check size of YbHintCacheContext and the total size of all memory contexts // ---------------------------------------------------------------------------------------------- for (PGConn* conn : {&conn_query, &conn_hint}) { ASSERT_TRUE(ASSERT_RESULT( @@ -521,4 +536,24 @@ TEST_F(PgHintTableTest, HintCacheMemoryLeakTest) { } } +// Test that the hint cache is disabled when the yb_enable_hint_table_cache flag is set to off. +TEST_F(PgHintTableTestWithoutHintCache, HintCacheDisabled) { + // ---------------------------------------------------------------------------------------------- + // 1. Setup connections, add hints and check that the hints are being followed + // ---------------------------------------------------------------------------------------------- + auto [conn_query, conn_hint] = ASSERT_RESULT(InsertHintsAndRunQueries()); + + // ---------------------------------------------------------------------------------------------- + // 2. 
Check that the YbHintCacheContext does not exist on all connections + // ---------------------------------------------------------------------------------------------- + auto count_query = ASSERT_RESULT(conn_query.FetchRow( + "SELECT COUNT(*) FROM pg_backend_memory_contexts " + "WHERE name = 'YbHintCacheContext'")); + ASSERT_EQ(count_query, 0); + auto count_hint = ASSERT_RESULT(conn_hint.FetchRow( + "SELECT COUNT(*) FROM pg_backend_memory_contexts " + "WHERE name = 'YbHintCacheContext'")); + ASSERT_EQ(count_hint, 0); +} + } // namespace yb::pgwrapper From c031504855bd5104848c2062cc2094a52ff8c882 Mon Sep 17 00:00:00 2001 From: Arpan Agrawal Date: Tue, 13 May 2025 20:51:47 +0530 Subject: [PATCH 051/146] [#26772] YSQL: Fix foreign key that references partitioned relations Summary: This revision fixes two bugs in foreign keys that reference partitioned tables when the root and the leaf PK relations have different column orders. The details of the bugs are as follows: # When the foreign key is created using ALTER TABLE, the cleanup post the constraint validation is not done properly, leading to `TupleDesc reference leak` warnings. Specifically, the tuples in `EState.es_tupleTable` are not reset. It contains the ResultRelInfo.ri_PartitionTupleSlot when the root and leaf relations have different column orders. # When the root and leaf relations have different column orders, ExecGetChildToRootMap() allocates the tuple conversion map, if not already done. `YbFindReferencedPartition()` incorrectly calls this function in the current memory context; rather, it should be called in the query context. This leads to use-after-free when the foreign key constraint is in DEFERRED mode. Below is a detailed explanation: When inserting into a relation with fk constraint, YbFindReferencedPartition() is invoked in two code paths with different memory context in each case: invocation #1: AfterTriggerSaveEvent -> YbAddTriggerFKReferenceIntent -> YBCBuildYBTupleIdDescriptor -> YbFindReferencedPartition: In this case, it is invoked in per-query memory context. invocation #2: afterTriggerInvokeEvents -> AfterTriggerExecute -> ExecCallTriggerFunc -> RI_FKey_check_ins -> RI_FKey_check -> YBCBuildYBTupleIdDescriptor -> YbFindReferencedPartition: In this case, it is invoked in per-tuple memory context. In the IMMEDIATE mode, the same executor state is used in both invocations. The invocation #1 allocates the map in per-query memory context, and invocation #2 just reads it, without needing to reallocate it. In the DEFERRED case, the executor states are different. afterTriggerInvokeEvents() creates a new local executor state to execute the triggers. Hence, the invocation #2 needs to allocate the map, but it ends up doing it in the (incorrect) per-tuple memory context. Consider the case when two (or more) tuples are batch-inserted in the FK relation. The first trigger execution invokes ExecGetChildToRootMap(), which sets ri_ChildToRootMapValid as valid and allocates the map in the per-tuple context. This context is reset before the next trigger execution. The second trigger execution seg faults because ri_ChildToRootMapValid indicates the map is valid, but that is not the case. In passing, also initialise estate->yb_es_pk_proutes to NIL to keep the behaviour the same as other fields of list type in EState. 
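For illustration, a minimal scenario that exercises the deferred path, taken from the regression test added in this revision: root and leaf PK partitions with different column orders, a DEFERRABLE INITIALLY DEFERRED foreign key, and a batched insert.

```sql
CREATE TABLE pk(a INT, b INT, c INT, d INT, PRIMARY KEY(a, c)) PARTITION BY RANGE(a);
-- Leaf partition declared with a different column order than the root.
CREATE TABLE pk_1_100(a INT NOT NULL, c INT NOT NULL, d INT, b INT);
ALTER TABLE pk ATTACH PARTITION pk_1_100 FOR VALUES FROM (1) TO (100);
INSERT INTO pk VALUES (1, 100, 20, 150), (1, 100, 21, 150);

CREATE TABLE fk2(a INT, c INT,
  FOREIGN KEY (a, c) REFERENCES pk(a, c) DEFERRABLE INITIALLY DEFERRED);
-- With the deferred constraint, both FK check triggers fire at commit in a
-- fresh executor state; the second execution previously crashed because the
-- child-to-root conversion map had been allocated in the already-reset
-- per-tuple memory context. With this fix, both rows are inserted.
INSERT INTO fk2 VALUES (1, 20), (1, 21);
```
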
Jira: DB-16152 Test Plan: ./yb_build.sh --java-test 'org.yb.pgsql.TestPgRegressForeignKey' Close: #26772 Reviewers: dmitry, kramanathan, #db-approvers Reviewed By: kramanathan, #db-approvers Subscribers: svc_phabricator, yql Differential Revision: https://phorge.dev.yugabyte.com/D43893 --- src/postgres/src/backend/commands/tablecmds.c | 16 +++++++++ src/postgres/src/backend/executor/execUtils.c | 1 + .../src/backend/utils/adt/ri_triggers.c | 2 ++ .../regress/expected/yb.orig.foreign_key.out | 34 +++++++++++++++++-- .../test/regress/sql/yb.orig.foreign_key.sql | 32 +++++++++++++++-- 5 files changed, 79 insertions(+), 6 deletions(-) diff --git a/src/postgres/src/backend/commands/tablecmds.c b/src/postgres/src/backend/commands/tablecmds.c index 9336bac131f9..41aaa66be159 100644 --- a/src/postgres/src/backend/commands/tablecmds.c +++ b/src/postgres/src/backend/commands/tablecmds.c @@ -12730,6 +12730,22 @@ static void YbFKTriggerScanEnd(YbFKTriggerScanDesc descr) { Assert(descr); + + /* + * destroy the executor's tuple table. Actually we only care about + * releasing buffer pins and tupdesc refcounts; there's no need to pfree + * the TupleTableSlots, since the containing memory context is about to go + * away anyway. + */ + ExecResetTupleTable(descr->estate->es_tupleTable, false); + + /* + * No relations should be opened in this estate, still, be conservative and + * call the relation closing functions. They should be no-op. + */ + ExecCloseResultRelations(descr->estate); + ExecCloseRangeTableRelations(descr->estate); + if (descr->estate) FreeExecutorState(descr->estate); pfree(descr); diff --git a/src/postgres/src/backend/executor/execUtils.c b/src/postgres/src/backend/executor/execUtils.c index 41bdfc890bd7..da68d12841e7 100644 --- a/src/postgres/src/backend/executor/execUtils.c +++ b/src/postgres/src/backend/executor/execUtils.c @@ -203,6 +203,7 @@ CreateExecutorState(void) estate->yb_exec_params.yb_fetch_size_limit = yb_fetch_size_limit; estate->yb_exec_params.yb_index_check = false; + estate->yb_es_pk_proutes = NIL; return estate; } diff --git a/src/postgres/src/backend/utils/adt/ri_triggers.c b/src/postgres/src/backend/utils/adt/ri_triggers.c index 992c15964d50..f7819f0cd2f5 100644 --- a/src/postgres/src/backend/utils/adt/ri_triggers.c +++ b/src/postgres/src/backend/utils/adt/ri_triggers.c @@ -364,7 +364,9 @@ YbFindReferencedPartition(EState *estate, const RI_ConstraintInfo *riinfo, ResultRelInfo *pk_part_rri = ExecFindPartition(&mtstate, &pk_root_rri, proute, pkslot, estate); + MemoryContext oldcxt = MemoryContextSwitchTo(estate->es_query_cxt); *leaf_root_conversion_map = ExecGetChildToRootMap(pk_part_rri); + MemoryContextSwitchTo(oldcxt); if (!using_index) referenced_rel = pk_part_rri->ri_RelationDesc; diff --git a/src/postgres/src/test/regress/expected/yb.orig.foreign_key.out b/src/postgres/src/test/regress/expected/yb.orig.foreign_key.out index 50d6aba22bae..cd6d03fbd1a7 100644 --- a/src/postgres/src/test/regress/expected/yb.orig.foreign_key.out +++ b/src/postgres/src/test/regress/expected/yb.orig.foreign_key.out @@ -705,7 +705,7 @@ SELECT * from fk; (1 row) DROP TABLE pk, fk; --- Test foreign key referencing partitioned table +--- Test foreign key referencing partitioned table -- Base case CREATE TABLE pk(id INT PRIMARY KEY) PARTITION BY RANGE(id); CREATE TABLE pk_1_100 PARTITION OF pk FOR VALUES FROM (1) TO (100); @@ -758,7 +758,7 @@ NOTICE: table rewrite may lead to inconsistencies DETAIL: Concurrent DMLs may not be reflected in the new table. 
HINT: See https://github.com/yugabyte/yugabyte-db/issues/19860. Set 'ysql_suppress_unsafe_alter_notice' yb-tserver gflag to true to suppress this notice. CREATE TABLE fk(a INT, c INT, FOREIGN KEY (a, c) REFERENCES pk(a, c)); -INSERT INTO pk VALUES (1, 100, 20, 150); +INSERT INTO pk VALUES (1, 100, 20, 150), (1, 100, 21, 150); INSERT INTO fk VALUES (1, 20); INSERT INTO fk VALUES (150, 20); -- should fail ERROR: insert or update on table "fk" violates foreign key constraint "fk_a_c_fkey" @@ -769,7 +769,16 @@ SELECT * FROM fk; 1 | 20 (1 row) -DROP TABLE pk, fk; +CREATE TABLE fk2(a INT, c INT, FOREIGN KEY (a, c) REFERENCES pk(a, c) DEFERRABLE INITIALLY DEFERRED); +INSERT INTO fk2 VALUES (1, 20), (1, 21); +SELECT * FROM fk2; + a | c +---+---- + 1 | 20 + 1 | 21 +(2 rows) + +DROP TABLE pk, fk, fk2; -- Using index, PK root and leaf partition have different column orders CREATE TABLE pk(a INT, b INT, c INT, d INT, PRIMARY KEY(a, b), UNIQUE(a, c)) PARTITION BY RANGE(a); CREATE TABLE pk_1_100(a INT NOT NULL, c INT NOT NULL, d INT, b INT NOT NULL); @@ -840,6 +849,25 @@ SELECT * from fk; (1 row) DROP TABLE pk, pk2, fk; +-- Test foreign key constraint validation at the time of constraint creation +CREATE TABLE pk(a INT, b INT, c INT, d INT, PRIMARY KEY(a, c)) PARTITION BY RANGE(a); +CREATE TABLE pk_1_100 PARTITION OF pk FOR VALUES FROM (1) TO (100); +INSERT INTO pk VALUES (1, 100, 20, 150); +CREATE TABLE fk(a INT, c INT); +INSERT INTO fk VALUES (1, 20); +ALTER TABLE fk ADD FOREIGN KEY (a, c) REFERENCES pk(a, c); +DROP TABLE pk, fk; +CREATE TABLE pk(a INT, b INT, c INT, d INT, PRIMARY KEY(a, c)) PARTITION BY RANGE(a); +CREATE TABLE pk_1_100(a INT NOT NULL, c INT NOT NULL, d INT, b INT); +ALTER TABLE pk ATTACH PARTITION pk_1_100 FOR VALUES FROM (1) TO (100); +NOTICE: table rewrite may lead to inconsistencies +DETAIL: Concurrent DMLs may not be reflected in the new table. +HINT: See https://github.com/yugabyte/yugabyte-db/issues/19860. Set 'ysql_suppress_unsafe_alter_notice' yb-tserver gflag to true to suppress this notice. +INSERT INTO pk VALUES (1, 100, 20, 150); +CREATE TABLE fk(a INT, c INT); +INSERT INTO fk VALUES (1, 20); +ALTER TABLE fk ADD FOREIGN KEY (a, c) REFERENCES pk(a, c); +DROP TABLE pk, fk; -- test updating a subset of foreign constraint keys. 
CREATE TABLE pk(a INT, b INT, PRIMARY KEY (a, b)); CREATE TABLE fk(a INT, b INT, FOREIGN KEY(a, b) REFERENCES pk); diff --git a/src/postgres/src/test/regress/sql/yb.orig.foreign_key.sql b/src/postgres/src/test/regress/sql/yb.orig.foreign_key.sql index 198e781e9e0e..44e1e918d924 100644 --- a/src/postgres/src/test/regress/sql/yb.orig.foreign_key.sql +++ b/src/postgres/src/test/regress/sql/yb.orig.foreign_key.sql @@ -444,7 +444,7 @@ INSERT INTO fk VALUES (500, 1); -- should fail SELECT * from fk; DROP TABLE pk, fk; --- Test foreign key referencing partitioned table +--- Test foreign key referencing partitioned table -- Base case CREATE TABLE pk(id INT PRIMARY KEY) PARTITION BY RANGE(id); @@ -482,12 +482,16 @@ CREATE TABLE pk_1_100(a INT NOT NULL, c INT NOT NULL, d INT, b INT); ALTER TABLE pk ATTACH PARTITION pk_1_100 FOR VALUES FROM (1) TO (100); CREATE TABLE fk(a INT, c INT, FOREIGN KEY (a, c) REFERENCES pk(a, c)); -INSERT INTO pk VALUES (1, 100, 20, 150); +INSERT INTO pk VALUES (1, 100, 20, 150), (1, 100, 21, 150); INSERT INTO fk VALUES (1, 20); INSERT INTO fk VALUES (150, 20); -- should fail SELECT * FROM fk; -DROP TABLE pk, fk; +CREATE TABLE fk2(a INT, c INT, FOREIGN KEY (a, c) REFERENCES pk(a, c) DEFERRABLE INITIALLY DEFERRED); +INSERT INTO fk2 VALUES (1, 20), (1, 21); +SELECT * FROM fk2; + +DROP TABLE pk, fk, fk2; -- Using index, PK root and leaf partition have different column orders CREATE TABLE pk(a INT, b INT, c INT, d INT, PRIMARY KEY(a, b), UNIQUE(a, c)) PARTITION BY RANGE(a); @@ -539,6 +543,28 @@ SELECT * from fk; DROP TABLE pk, pk2, fk; +-- Test foreign key constraint validation at the time of constraint creation +CREATE TABLE pk(a INT, b INT, c INT, d INT, PRIMARY KEY(a, c)) PARTITION BY RANGE(a); +CREATE TABLE pk_1_100 PARTITION OF pk FOR VALUES FROM (1) TO (100); +INSERT INTO pk VALUES (1, 100, 20, 150); + +CREATE TABLE fk(a INT, c INT); +INSERT INTO fk VALUES (1, 20); +ALTER TABLE fk ADD FOREIGN KEY (a, c) REFERENCES pk(a, c); + +DROP TABLE pk, fk; + +CREATE TABLE pk(a INT, b INT, c INT, d INT, PRIMARY KEY(a, c)) PARTITION BY RANGE(a); +CREATE TABLE pk_1_100(a INT NOT NULL, c INT NOT NULL, d INT, b INT); +ALTER TABLE pk ATTACH PARTITION pk_1_100 FOR VALUES FROM (1) TO (100); +INSERT INTO pk VALUES (1, 100, 20, 150); + +CREATE TABLE fk(a INT, c INT); +INSERT INTO fk VALUES (1, 20); +ALTER TABLE fk ADD FOREIGN KEY (a, c) REFERENCES pk(a, c); + +DROP TABLE pk, fk; + -- test updating a subset of foreign constraint keys. CREATE TABLE pk(a INT, b INT, PRIMARY KEY (a, b)); CREATE TABLE fk(a INT, b INT, FOREIGN KEY(a, b) REFERENCES pk); From df7ac494182c4ce6f82d2f06c824fb8c706147c0 Mon Sep 17 00:00:00 2001 From: Yury Shchetinin Date: Fri, 11 Apr 2025 11:22:29 +0300 Subject: [PATCH 052/146] [PLAT-17159] Capture process memory usage of top "n" non-YB processes on nodes Summary: Added ability to capture memory metric for topK non-yba processes. The number of processes is configurable (by default this feature is disabled as the number is set to 0) Also there is a threshold for minimal percent of memory occupied by process to be included into result. 
This metrics are kept apart from per-process metrics for yba processes (and not included into total) and are distinguished by tag subtype=other Test Plan: create universe - verify health script is successfull modify respective config value to 5 - see 5 metrics with subtype=other modify threshold to 50 - see no "other" metrics (as there is no such memory-consuming process) Reviewers: amalyshev Reviewed By: amalyshev Subscribers: daniel, yugaware Differential Revision: https://phorge.dev.yugabyte.com/D43248 --- .../yw/commissioner/HealthChecker.java | 11 ++ .../yw/common/config/UniverseConfKeys.java | 16 +++ .../resources/health/node_health.py.template | 105 ++++++++++++++++-- managed/src/main/resources/reference.conf | 2 + 4 files changed, 123 insertions(+), 11 deletions(-) diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/HealthChecker.java b/managed/src/main/java/com/yugabyte/yw/commissioner/HealthChecker.java index 48b16c504d6c..e631d3b857e6 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/HealthChecker.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/HealthChecker.java @@ -739,6 +739,12 @@ public void checkSingleUniverse(CheckSingleUniverseParams params) { details.additionalServicesStateData != null && details.additionalServicesStateData.getEarlyoomConfig() != null && details.additionalServicesStateData.getEarlyoomConfig().isEnabled(); + int topKOtherProcesses = + confGetter.getConfForScope( + params.universe, UniverseConfKeys.healthCollectTopKOtherProcessesCount); + int topKMemThresholdPercent = + confGetter.getConfForScope( + params.universe, UniverseConfKeys.healthCollectTopKOtherProcessesMemThreshold); for (NodeDetails nodeDetails : sortedDetails) { NodeInstance nodeInstance = nodeInstanceMap.get(nodeDetails.getNodeUuid()); String nodeIdentifier = StringUtils.EMPTY; @@ -767,6 +773,8 @@ public void checkSingleUniverse(CheckSingleUniverseParams params) { .setTestCqlshConnectivity(testCqlshConnectivity) .setUniverseUuid(params.universe.getUniverseUUID()) .setEarlyoomEnabled(earlyoomEnabled) + .setTopKOtherProcesses(topKOtherProcesses) + .setTopKMemThresholdPercent(topKMemThresholdPercent) .setNodeDetails(nodeDetails); if (nodeDetails.isMaster) { nodeInfo @@ -1312,6 +1320,9 @@ public static class NodeInfo { private boolean otelCollectorEnabled; private boolean clockSyncServiceRequired = true; private boolean clockboundEnabled = false; + + private int topKOtherProcesses; + private int topKMemThresholdPercent; @JsonIgnore @EqualsAndHashCode.Exclude private NodeDetails nodeDetails; private boolean earlyoomEnabled = false; } diff --git a/managed/src/main/java/com/yugabyte/yw/common/config/UniverseConfKeys.java b/managed/src/main/java/com/yugabyte/yw/common/config/UniverseConfKeys.java index b84890f3b33d..8bef508a46ce 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/config/UniverseConfKeys.java +++ b/managed/src/main/java/com/yugabyte/yw/common/config/UniverseConfKeys.java @@ -1638,4 +1638,20 @@ public class UniverseConfKeys extends RuntimeConfigKeysModule { "Have YBC ignore errors during restore. 
When false, can be overwritten via API", ConfDataType.BooleanType, ImmutableList.of(ConfKeyTags.INTERNAL)); + public static final ConfKeyInfo healthCollectTopKOtherProcessesCount = + new ConfKeyInfo<>( + "yb.health_checks.collect_other_processes_memory_count", + ScopeType.UNIVERSE, + "Number of non-yba managed processes to collect memory metrics for", + "Number of non-yba managed processes to collect memory metrics for", + ConfDataType.IntegerType, + ImmutableList.of(ConfKeyTags.INTERNAL)); + public static final ConfKeyInfo healthCollectTopKOtherProcessesMemThreshold = + new ConfKeyInfo<>( + "yb.health_checks.other_processes_memory_threshold_percent", + ScopeType.UNIVERSE, + "Threshold of memory percent for non-yba processes", + "Threshold of memory percent for non-yba processes", + ConfDataType.IntegerType, + ImmutableList.of(ConfKeyTags.INTERNAL)); } diff --git a/managed/src/main/resources/health/node_health.py.template b/managed/src/main/resources/health/node_health.py.template index 73cb4af179c2..4881fb0e0d51 100755 --- a/managed/src/main/resources/health/node_health.py.template +++ b/managed/src/main/resources/health/node_health.py.template @@ -564,7 +564,7 @@ class NodeChecker(): is_ybc_enabled, ybc_port, time_drift_wrn_threshold, time_drift_err_threshold, otel_enabled, temp_output_file, ddl_atomicity_check, master_leader_url, master_rpc_port, tserver_rpc_port, verbose, clock_service_required, - enable_earlyoom): + enable_earlyoom, topk_other_processes, topk_mem_threshold_percent): self.node = node self.node_name = node_name self.node_identifier = node_identifier @@ -603,6 +603,8 @@ class NodeChecker(): self.master_leader_url = master_leader_url self.enable_earlyoom = enable_earlyoom self.verbose = verbose + self.topk_other_processes = topk_other_processes + self.topk_mem_threshold_percent = topk_mem_threshold_percent self.prev_process_results = self._load_previous_per_process_results(temp_output_file) self.current_process_results = {} @@ -1964,6 +1966,31 @@ class NodeChecker(): return e.fill_and_return_entry(["Failed to parse process stats: {}".format(str(ex))], has_error=True, metrics = metrics) + + def check_other_process_stats(self): + metric = Metric.from_definition(YB_PROCESS_MEMORY_KB) + e = self._new_metric_entry("Per-process topK other check") + top_k = self.topk_other_processes + threshold = self.topk_mem_threshold_percent + try: + res = self._load_top_k_by_mem_pids(top_k, threshold) + other_label = Label("subtype", "other") + virtual_label = Label('type', 'virtual') + proportional_label = Label('type', 'proportional') + resident_label = Label('type', 'resident') + for process_name in res.keys(): + stat = res[process_name]['total_stat'] + process = Label("process", process_name) + metric.add_value(stat['resident_memory_kb'], labels=[process, other_label, + resident_label]) + metric.add_value(stat['virtual_memory_kb'], labels=[process, other_label, + virtual_label]) + return e.fill_and_return_entry([], has_error=False, metrics=[metric]) + except Exception as ex: + return e.fill_and_return_entry(["Failed to parse other stats: {}".format(str(ex))], + has_error=True, metrics = [metric]) + + def check_systemd_unit_preexec_present(self): e = self._new_entry("Systemd unit preexec check") services = ["yb-master.service", "yb-tserver.service"] @@ -2073,6 +2100,42 @@ class NodeChecker(): return True return False + + def _load_top_k_by_mem_pids(self, k, threshold): + used_pids = [] + for proc in self.current_process_results.keys(): + proc_results = self.current_process_results[proc] + 
if proc != "version" and "process_map" in proc_results.keys(): + used_pids.extend(list(proc_results['process_map'].keys())) + result = {} + cmd = "ps -eo pid,ppid,comm,%mem --sort=-%mem" + output = self._check_output(cmd) + for line in output.splitlines()[1:]: + if len(result.keys()) >= k: + break + split = line.split() + pid = split[0] + ppid = split[1] + if pid in used_pids or ppid in used_pids: + continue + command = split[2] + mem_pct = float(split[3]) + if threshold > 0 and mem_pct < threshold: + break + used_pids.append(pid) + if command in result.keys(): + command = "{}_{}".format(command, pid) + self.verbose_log("checking {} with pid {}".format(command, pid)) + pidlist = self._get_subprocess_pids(pid) + pidlist.append(pid) + stats = self._load_per_process_metrics_by_pids("", pid, pidlist) + if (stats['total_stat']['resident_memory_kb'] == 0 + and stats['total_stat']['virtual_memory_kb'] == 0): + continue + result[command] = stats + return result + + def _load_per_process_metrics(self, process_name): root_pid = self.get_process_pid_by_name(process_name) if root_pid is None: @@ -2087,6 +2150,10 @@ class NodeChecker(): if postgre_pid is not None: pid_list.remove(postgre_pid) pid_list = [p for p in pid_list if p not in self._get_subprocess_pids(postgre_pid)] + return self._load_per_process_metrics_by_pids(process_name, root_pid, pid_list) + + + def _load_per_process_metrics_by_pids(self, process_name, root_pid, pid_list): total_stat = self._get_empty_proc_results() prev_process_map = {} if process_name in self.prev_process_results.keys(): @@ -2097,7 +2164,11 @@ class NodeChecker(): process_map = {} for cur_pid in pid_list: - stat = self._get_process_stats_by_pid(cur_pid) + stat = self._get_empty_proc_results() + try: + stat = self._get_process_stats_by_pid(cur_pid) + except Exception as e: + pass # Calculating the sum with current state total_stat = self._merge_proc_results(stat, total_stat) # We need to substract pid stats from prev run @@ -2168,14 +2239,21 @@ class NodeChecker(): cmd = 'cat /proc/{}/smaps'.format(pid) stat = self._check_output(cmd).strip() - stat_list = stat.split('\n') - for stat in stat_list: - lst = stat.split(' ') - if len(lst) > 0: - if lst[0] == 'Pss:': - res['proportional_memory_kb'] += int(lst[len(lst) - 2]) - if lst[0] == 'Rss:': - res['resident_memory_kb'] += int(lst[len(lst) - 2]) + if "permission denied" in stat.lower(): + # This is less precise, but we cannot use smaps for processes with diff user. 
+ psstat = self._check_output("ps -o rss -p {} --no-headers".format(pid)).strip() + lst = psstat.split(' ') + if len(lst) > 0 and lst[0].isdigit(): + res['resident_memory_kb'] += int(lst[0]) + else: + stat_list = stat.split('\n') + for stat in stat_list: + lst = stat.split(' ') + if len(lst) > 0: + if lst[0] == 'Pss:': + res['proportional_memory_kb'] += int(lst[len(lst) - 2]) + if lst[0] == 'Rss:': + res['resident_memory_kb'] += int(lst[len(lst) - 2]) cmd = 'cat /proc/{}/io'.format(pid) stat = self._check_output(cmd).strip() @@ -2544,6 +2622,8 @@ class NodeInfo: self.enable_earlyoom = data["earlyoomEnabled"] self.clockSyncServiceRequired = data.get("clockSyncServiceRequired", True) self.clockbound_enabled = data["clockboundEnabled"] + self.topk_other_processes = data["topKOtherProcesses"] + self.topk_mem_threshold_percent = data["topKMemThresholdPercent"] def main(): @@ -2593,7 +2673,7 @@ def main(): n.is_ybc_enabled, n.ybc_port, n.time_drift_wrn_threshold, n.time_drift_err_threshold, n.otel_enabled, args.temp_output_file, args.ddl_atomicity_check, args.master_leader_url, n.master_rpc_port, n.tserver_rpc_port, args.verbose, n.clockSyncServiceRequired, - n.enable_earlyoom) + n.enable_earlyoom, n.topk_other_processes, n.topk_mem_threshold_percent) coordinator.add_precheck(checker, "check_openssl_availability") coordinator.add_check(checker, "check_earlyoom", n.enable_earlyoom) @@ -2708,6 +2788,9 @@ def main(): coordinator.add_check(checker, "check_clockbound_sync_status") coordinator.add_check(checker, "check_process_stats", TOTAL) + if n.topk_other_processes > 0: + # This should be run after TOTAL is calculated (we don't want it to be included in total). + coordinator.add_check(checker, "check_other_process_stats") entries = coordinator.run() for e in entries: diff --git a/managed/src/main/resources/reference.conf b/managed/src/main/resources/reference.conf index 6d71d110d0eb..430323a366ff 100644 --- a/managed/src/main/resources/reference.conf +++ b/managed/src/main/resources/reference.conf @@ -344,6 +344,8 @@ yb { time_drift_err_threshold_ms = 400 clock_sync_service_required = true unexpected_servers_check_enabled = false + collect_other_processes_memory_count = 0 + other_processes_memory_threshold_percent = 0 } # Alerts thresholds From 9d9711fecd7ca2fbbb9b5de86cd919419aa959fa Mon Sep 17 00:00:00 2001 From: Sudhanshu Prajapati Date: Wed, 14 May 2025 18:40:19 +0530 Subject: [PATCH 053/146] [DOC-757] Revamp schema workaround page in voyager docs (#27137) * restructure and add new sub headings * Add section breaks in PostgreSQL known issues documentation * Update docs/content/preview/yugabyte-voyager/known-issues/postgresql.md Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> * Update docs/content/preview/yugabyte-voyager/known-issues/postgresql.md Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> * Standardize section headings in PostgreSQL known issues documentation * Add section breaks in PostgreSQL known issues documentation * add a description * rephrase it * move events and listen to server programming section --------- Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> --- .../known-issues/postgresql.md | 1648 ++++++++--------- 1 file changed, 817 insertions(+), 831 deletions(-) diff --git a/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md b/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md index b570d90fecca..3ff494e8a3b6 100644 --- 
a/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md +++ b/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md @@ -9,57 +9,46 @@ menu: parent: known-issues weight: 101 type: docs -rightNav: - hideH3: true --- -Review limitations and implement suggested workarounds to successfully migrate data from PostgreSQL to YugabyteDB. - -## Contents - -- [Adding primary key to a partitioned table results in an error](#adding-primary-key-to-a-partitioned-table-results-in-an-error) -- [Index creation on partitions fail for some YugabyteDB builds](#index-creation-on-partitions-fail-for-some-yugabytedb-builds) -- [Creation of certain views in the rule.sql file](#creation-of-certain-views-in-the-rule-sql-file) -- [Create or alter conversion is not supported](#create-or-alter-conversion-is-not-supported) -- [GENERATED ALWAYS AS STORED type column is not supported](#generated-always-as-stored-type-column-is-not-supported) -- [Unsupported ALTER TABLE DDL variants in source schema](#unsupported-alter-table-ddl-variants-in-source-schema) -- [Storage parameters on indexes or constraints in the source PostgreSQL](#storage-parameters-on-indexes-or-constraints-in-the-source-postgresql) -- [Foreign table in the source database requires SERVER and USER MAPPING](#foreign-table-in-the-source-database-requires-server-and-user-mapping) -- [Exclusion constraints is not supported](#exclusion-constraints-is-not-supported) -- [PostgreSQL extensions are not supported by target YugabyteDB](#postgresql-extensions-are-not-supported-by-target-yugabytedb) -- [Deferrable constraint on constraints other than foreign keys is not supported](#deferrable-constraint-on-constraints-other-than-foreign-keys-is-not-supported) -- [Data ingestion on XML data type is not supported](#data-ingestion-on-xml-data-type-is-not-supported) -- [GiST, BRIN, and SPGIST index types are not supported](#gist-brin-and-spgist-index-types-are-not-supported) -- [Indexes on some complex data types are not supported](#indexes-on-some-complex-data-types-are-not-supported) -- [Constraint trigger is not supported](#constraint-trigger-is-not-supported) -- [Table inheritance is not supported](#table-inheritance-is-not-supported) -- [%Type syntax is not supported](#type-syntax-is-not-supported) -- [GIN indexes on multiple columns are not supported](#gin-indexes-on-multiple-columns-are-not-supported) -- [Policies on users in source require manual user creation](#policies-on-users-in-source-require-manual-user-creation) -- [VIEW WITH CHECK OPTION is not supported](#view-with-check-option-is-not-supported) -- [UNLOGGED table is not supported](#unlogged-table-is-not-supported) -- [Hash-sharding with indexes on the timestamp/date columns](#hash-sharding-with-indexes-on-the-timestamp-date-columns) -- [Exporting data with names for tables/functions/procedures using special characters/whitespaces fails](#exporting-data-with-names-for-tables-functions-procedures-using-special-characters-whitespaces-fails) -- [Importing with case-sensitive schema names](#importing-with-case-sensitive-schema-names) -- [Unsupported datatypes by YugabyteDB](#unsupported-datatypes-by-yugabytedb) -- [Unsupported datatypes by Voyager during live migration](#unsupported-datatypes-by-voyager-during-live-migration) -- [XID functions is not supported](#xid-functions-is-not-supported) -- [REFERENCING clause for triggers](#referencing-clause-for-triggers) -- [BEFORE ROW triggers on partitioned tables](#before-row-triggers-on-partitioned-tables) -- [Advisory locks is not yet 
implemented](#advisory-locks-is-not-yet-implemented) -- [System columns is not yet supported](#system-columns-is-not-yet-supported) -- [XML functions is not yet supported](#xml-functions-is-not-yet-supported) -- [Large Objects and its functions are currently not supported](#large-objects-and-its-functions-are-currently-not-supported) -- [PostgreSQL 12 and later features](#postgresql-12-and-later-features) -- [MERGE command](#merge-command) -- [JSONB subscripting](#jsonb-subscripting) -- [Events Listen / Notify](#events-listen-notify) -- [Two-Phase Commit](#two-phase-commit) -- [DDL operations within the Transaction](#ddl-operations-within-the-transaction) -- [Hotspots with range-sharded timestamp/date indexes](#hotspots-with-range-sharded-timestamp-date-indexes) -- [Redundant indexes](#redundant-indexes) - -### Adding primary key to a partitioned table results in an error +When migrating data from PostgreSQL to YugabyteDB, you must address specific limitations and implement necessary workarounds. Some features, like table inheritance, certain DDL operations, and unique constraint types, are unsupported. You will also encounter compatibility issues with data types and functions. This page helps you navigate these challenges by offering advice on schema adjustments, handling unsupported features, and optimizing performance for a successful migration. + +## Data definition + +### Tables + +#### Table inheritance is not supported + +**GitHub**: [Issue #5956](https://github.com/yugabyte/yugabyte-db/issues/5956) + +**Description**: If you have table inheritance in the source database, it will error out in the target as it is not currently supported in YugabyteDB: + +```output +ERROR: INHERITS not supported yet +``` + +**Workaround**: Currently, there is no workaround. + +**Example** + +An example schema on the source database is as follows: + +```sql +CREATE TABLE public.cities ( + name text, + population real, + elevation integer +); + +CREATE TABLE public.capitals ( + state character(2) NOT NULL +) +INHERITS (public.cities); +``` + +--- + +#### Adding primary key to a partitioned table results in an error **GitHub**: [Issue #612](https://github.com/yugabyte/yb-voyager/issues/612) @@ -104,159 +93,7 @@ PARTITION BY LIST (region); --- -### Index creation on partitions fail for some YugabyteDB builds - -**GitHub**: [Issue #14529](https://github.com/yugabyte/yugabyte-db/issues/14529) - -**Description**: If you have a partitioned table with indexes on it, the migration will fail with an error for YugabyteDB `2.15` or `2.16` due to a regression. - -Note that this is fixed in release [2.17.1.0](../../../releases/ybdb-releases/end-of-life/v2.17/#v2.17.1.0). 
- -**Workaround**: N/A - -**Example** - -An example schema on the source database is as follows: - -```sql -DROP TABLE IF EXISTS list_part; - -CREATE TABLE list_part (id INTEGER, status TEXT, arr NUMERIC) PARTITION BY LIST(status); - -CREATE TABLE list_active PARTITION OF list_part FOR VALUES IN ('ACTIVE'); - -CREATE TABLE list_archived PARTITION OF list_part FOR VALUES IN ('EXPIRED'); - -CREATE TABLE list_others PARTITION OF list_part DEFAULT; - -INSERT INTO list_part VALUES (1,'ACTIVE',100), (2,'RECURRING',20), (3,'EXPIRED',38), (4,'REACTIVATED',144), (5,'ACTIVE',50); - -CREATE INDEX list_ind ON list_part(status); -``` - ---- - -### Creation of certain views in the rule.sql file - -**GitHub**: [Issue #770](https://github.com/yugabyte/yb-voyager/issues/770) - -**Description**: There may be few cases where certain exported views come under the `rule.sql` file and the `view.sql` file might contain a dummy view definition. This `pg_dump` behaviour may be due to how PostgreSQL handles views internally (via rules). - -{{< note title ="Note" >}} -This does not affect the migration as YugabyteDB Voyager takes care of the DDL creation sequence internally. -{{< /note >}} - -**Workaround**: Not required - -**Example** - -An example schema on the source database is as follows: - -```sql -CREATE TABLE foo(n1 int PRIMARY KEY, n2 int); -CREATE VIEW v1 AS - SELECT n1,n2 - FROM foo - GROUP BY n1; -``` - -The exported schema for `view.sql` is as follows: - -```sql -CREATE VIEW public.v1 AS - SELECT - NULL::integer AS n1, - NULL::integer AS n2; -``` - -The exported schema for `rule.sql` is as follows: - -```sql -CREATE OR REPLACE VIEW public.v1 AS - SELECT foo.n1,foo.n2 - FROM public.foo - GROUP BY foo.n1; -``` - -### Create or alter conversion is not supported - -**GitHub**: [Issue #10866](https://github.com/yugabyte/yugabyte-db/issues/10866) - -**Description**: If you have conversions in your PostgreSQL database, they will error out as follows as conversions are currently not supported in the target YugabyteDB: - -```output -ERROR: CREATE CONVERSION not supported yet -``` - -**Workaround**: Remove the conversions from the exported schema and modify the applications to not use these conversions before pointing them to YugabyteDB. - -**Example** - -An example schema on the source database is as follows: - -```sql -CREATE CONVERSION public.my_latin1_to_utf8 FOR 'LATIN1' TO 'UTF8' FROM public.latin1_to_utf8; - -CREATE FUNCTION public.latin1_to_utf8(src_encoding integer, dest_encoding integer, src bytea, dest bytea, len integer) RETURNS integer - LANGUAGE c - AS '/usr/lib/postgresql/12/lib/latin1_to_utf8.so', 'my_latin1_to_utf8'; -``` - ---- - -### GENERATED ALWAYS AS STORED type column is not supported - -**GitHub**: [Issue #10695](https://github.com/yugabyte/yugabyte-db/issues/10695) - -**Description**: If you have tables in the source database with columns of GENERATED ALWAYS AS STORED type (which means the data of this column is derived from some other columns of the table), it will throw a syntax error in YugabyteDB as follows: - -```output -ERROR: syntax error at or near "(" (SQLSTATE 42601) -``` - -**Workaround**: Create a trigger on this table that updates its value on any INSERT/UPDATE operation, and set a default value for this column. This provides functionality similar to PostgreSQL's GENERATED ALWAYS AS STORED columns using a trigger. - -**Fixed In**: {{}}. 
- -**Example** - -An example schema on the source database is as follows: - -```sql -CREATE TABLE people ( - name text, - height_cm numeric, - height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED -); -``` - -Suggested change to the schema is as follows: - -```sql -ALTER TABLE people - ALTER COLUMN height_in SET DEFAULT -1; - -CREATE OR REPLACE FUNCTION compute_height_in() RETURNS TRIGGER AS $$ -BEGIN - IF NEW.height_in IS DISTINCT FROM -1 THEN - RAISE EXCEPTION 'cannot insert in column "height_in"'; - ELSE - NEW.height_in := NEW.height_cm / 2.54; - END IF; - - RETURN NEW; -END; -$$ LANGUAGE plpgsql; - -CREATE TRIGGER compute_height_in_trigger - BEFORE INSERT OR UPDATE ON people - FOR EACH ROW - EXECUTE FUNCTION compute_height_in(); -``` - ---- - -### Unsupported ALTER TABLE DDL variants in source schema +#### Unsupported ALTER TABLE DDL variants in source schema **GitHub**: [Issue #1124](https://github.com/yugabyte/yugabyte-db/issues/1124) @@ -307,7 +144,43 @@ ALTER TABLE public.example --- -### Storage parameters on indexes or constraints in the source PostgreSQL +#### UNLOGGED table is not supported + +**GitHub**: [Issue #1129](https://github.com/yugabyte/yugabyte-db/issues/1129) + +**Description**: If there are UNLOGGED tables in the source schema, they will error out during the import schema with the following error as it is not supported in target YugabyteDB. + +```output +ERROR: UNLOGGED database object not supported yet +``` + +**Workaround**: Convert it to a LOGGED table. + +**Fixed In**: {{}} + +**Example** + +An example schema on the source database is as follows: + +```sql +CREATE UNLOGGED TABLE tbl_unlogged ( + id int, + val text +); +``` + +Suggested change to the schema is as follows: + +```sql +CREATE TABLE tbl_unlogged ( + id int, + val text +); +``` + +--- + +#### Storage parameters on indexes or constraints in the source PostgreSQL **GitHub**: [Issue #23467](https://github.com/yugabyte/yugabyte-db/issues/23467) @@ -359,93 +232,37 @@ CREATE INDEX abc --- -### Foreign table in the source database requires SERVER and USER MAPPING +### Constraints -**GitHub**: [Issue #1627](https://github.com/yugabyte/yb-voyager/issues/1627) +#### Exclusion constraints is not supported -**Description**: If you have foreign tables in the schema, during the export schema phase the exported schema does not include the SERVER and USER MAPPING objects. You must manually create these objects before importing schema, otherwise FOREIGN TABLE creation fails with the following error: +**GitHub**: [Issue #3944](https://github.com/yugabyte/yugabyte-db/issues/3944) + +**Description**: If you have exclusion constraints on the tables in the source database, those will error out during import schema to the target with the following error: ```output -ERROR: server "remote_server" does not exist (SQLSTATE 42704) +ERROR: EXCLUDE constraint not supported yet (SQLSTATE 0A000) ``` -**Workaround**: Create the SERVER and its USER MAPPING manually on the target YugabyteDB database. +**Workaround**: To implement exclusion constraints, follow this workaround: + +1. Create a trigger: Set up a TRIGGER for INSERT or UPDATE operations on the table. This trigger will use the specified expression to search the relevant columns for any potential violations. + +1. Add indexes: Create an INDEX on the columns involved in the expression. This helps ensure that the search operation performed by the trigger does not negatively impact performance. 
+ +Note that creating an index on the relevant columns _is essential_ for maintaining performance. Without an index, the trigger's search operation can degrade performance. + +**Caveats**: Note that there are specific issues related to creating indexes on certain data types using certain index methods in YugabyteDB. Depending on the data types or methods involved, additional workarounds may be required to ensure optimal performance for these constraints. **Example** An example schema on the source database is as follows: ```sql -CREATE EXTENSION postgres_fdw; - -CREATE SERVER remote_server - FOREIGN DATA WRAPPER postgres_fdw - OPTIONS (host '127.0.0.1', port '5432', dbname 'postgres'); - -CREATE FOREIGN TABLE foreign_table ( - id INT, - name TEXT, - data JSONB -) -SERVER remote_server -OPTIONS ( - schema_name 'public', - table_name 'remote_table' -); - -CREATE USER MAPPING FOR postgres -SERVER remote_server -OPTIONS (user 'postgres', password 'XXX'); -``` - -Exported schema only has the following: - -```sql -CREATE FOREIGN TABLE foreign_table ( - id INT, - name TEXT, - data JSONB -) -SERVER remote_server -OPTIONS ( - schema_name 'public', - table_name 'remote_table' -); -``` - -Suggested change is to manually create the SERVER and USER MAPPING on the target YugabyteDB. - ---- - -### Exclusion constraints is not supported - -**GitHub**: [Issue #3944](https://github.com/yugabyte/yugabyte-db/issues/3944) - -**Description**: If you have exclusion constraints on the tables in the source database, those will error out during import schema to the target with the following error: - -```output -ERROR: EXCLUDE constraint not supported yet (SQLSTATE 0A000) -``` - -**Workaround**: To implement exclusion constraints, follow this workaround: - -1. Create a trigger: Set up a TRIGGER for INSERT or UPDATE operations on the table. This trigger will use the specified expression to search the relevant columns for any potential violations. - -1. Add indexes: Create an INDEX on the columns involved in the expression. This helps ensure that the search operation performed by the trigger does not negatively impact performance. - -Note that creating an index on the relevant columns _is essential_ for maintaining performance. Without an index, the trigger's search operation can degrade performance. - -**Caveats**: Note that there are specific issues related to creating indexes on certain data types using certain index methods in YugabyteDB. Depending on the data types or methods involved, additional workarounds may be required to ensure optimal performance for these constraints. 
- -**Example** - -An example schema on the source database is as follows: - -```sql -CREATE TABLE public.meeting ( - id integer NOT NULL, - room_id integer NOT NULL, - time_range tsrange NOT NULL +CREATE TABLE public.meeting ( + id integer NOT NULL, + room_id integer NOT NULL, + time_range tsrange NOT NULL ); ALTER TABLE ONLY public.meeting @@ -480,29 +297,7 @@ CREATE INDEX idx_no_time_overlap on public.meeting USING gist(room_id,time_range --- -### PostgreSQL extensions are not supported by target YugabyteDB - -**Documentation**: [PostgreSQL extensions](../../../explore/ysql-language-features/pg-extensions/) - -**Description**: If you have any PostgreSQL extension that is not supported by the target YugabyteDB, they result in the following errors during import schema: - -```output -ERROR: could not open extension control file "/home/centos/yb/postgres/share/extension/.control": No such file or directory -``` - -**Workaround**: Remove the extension from the exported schema. - -**Example** - -An example schema on the source database is as follows: - -```sql -CREATE EXTENSION IF NOT EXISTS postgis WITH SCHEMA public; -``` - ---- - -### Deferrable constraint on constraints other than foreign keys is not supported +#### Deferrable constraint on constraints other than foreign keys is not supported **GitHub**: [Issue #1709](https://github.com/yugabyte/yugabyte-db/issues/1709) @@ -530,593 +325,696 @@ ALTER TABLE ONLY public.users --- -### Data ingestion on XML data type is not supported +### Columns -**GitHub**: [Issue #1043](https://github.com/yugabyte/yugabyte-db/issues/1043) +#### GENERATED ALWAYS AS STORED type column is not supported -**Description**: If you have XML datatype in the source database, it errors out in the import data to target YugabyteDB phase as data ingestion is not allowed on this data type: +**GitHub**: [Issue #10695](https://github.com/yugabyte/yugabyte-db/issues/10695) + +**Description**: If you have tables in the source database with columns of GENERATED ALWAYS AS STORED type (which means the data of this column is derived from some other columns of the table), it will throw a syntax error in YugabyteDB as follows: ```output - ERROR: unsupported XML feature (SQLSTATE 0A000) +ERROR: syntax error at or near "(" (SQLSTATE 42601) ``` -**Workaround**: To migrate the data, a workaround is to convert the type to text and import the data to target; to read the data on the target YugabyteDB, you need to create some user defined functions similar to XML functions. +**Workaround**: Create a trigger on this table that updates its value on any INSERT/UPDATE operation, and set a default value for this column. This provides functionality similar to PostgreSQL's GENERATED ALWAYS AS STORED columns using a trigger. + +**Fixed In**: {{}}. 
**Example** An example schema on the source database is as follows: ```sql -CREATE TABLE xml_example ( - id integer, - data xml +CREATE TABLE people ( + name text, + height_cm numeric, + height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED ); ``` ---- - -### GiST, BRIN, and SPGIST index types are not supported +Suggested change to the schema is as follows: -**GitHub**: [Issue #1337](https://github.com/yugabyte/yugabyte-db/issues/1337) +```sql +ALTER TABLE people + ALTER COLUMN height_in SET DEFAULT -1; -**Description**: If you have GiST, BRIN, and SPGIST indexes on the source database, it errors out in the import schema phase with the following error: +CREATE OR REPLACE FUNCTION compute_height_in() RETURNS TRIGGER AS $$ +BEGIN + IF NEW.height_in IS DISTINCT FROM -1 THEN + RAISE EXCEPTION 'cannot insert in column "height_in"'; + ELSE + NEW.height_in := NEW.height_cm / 2.54; + END IF; -```output - ERROR: index method "gist" not supported yet (SQLSTATE XX000) + RETURN NEW; +END; +$$ LANGUAGE plpgsql; +CREATE TRIGGER compute_height_in_trigger + BEFORE INSERT OR UPDATE ON people + FOR EACH ROW + EXECUTE FUNCTION compute_height_in(); ``` -**Workaround**: Currently, there is no workaround; remove the index from the exported schema. +--- -**Example** +#### System columns is not yet supported -An example schema on the source database is as follows: +**GitHub**: [Issue #24843](https://github.com/yugabyte/yugabyte-db/issues/24843) + +**Description**: System columns, including `xmin`, `xmax`, `cmin`, `cmax`, and `ctid`, are not available in YugabyteDB. Queries or applications referencing these columns will fail as per the following example: ```sql -CREATE INDEX gist_idx ON public.ts_query_table USING gist (query); +yugabyte=# SELECT xmin, xmax FROM employees where id = 100; ``` ---- - -### Indexes on some complex data types are not supported - -**GitHub**: [Issue #9698](https://github.com/yugabyte/yugabyte-db/issues/9698), [Issue #23829](https://github.com/yugabyte/yugabyte-db/issues/23829), [Issue #17017](https://github.com/yugabyte/yugabyte-db/issues/17017) - -**Description**: If you have indexes on some complex types such as TSQUERY, TSVECTOR, JSONB, ARRAYs, INET, UDTs, citext, and so on, those will error out in import schema phase with the following error: - ```output - ERROR: INDEX on column of type '' not yet supported +ERROR: System column "xmin" is not supported yet ``` -**Workaround**: Currently, there is no workaround, but you can cast these data types in the index definition to supported types, which may require adjustments on the application side when querying the column using the index. Ensure you address these changes before modifying the schema. - -**Example** - -An example schema on the source database is as follows: - -```sql -CREATE TABLE public.citext_type ( - id integer, - data public.citext -); - -CREATE TABLE public.documents ( - id integer NOT NULL, - title_tsvector tsvector, - content_tsvector tsvector -); +**Workaround**: Use the application layer to manage tracking instead of relying on system columns. 
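For example, if an application relies on `xmin` for optimistic locking, one possible replacement is to maintain an explicit version column and check it on every update. This is a minimal sketch only; the `employees` table and the `salary` and `row_version` columns are illustrative, not part of the original example:

```sql
-- Illustrative alternative to xmin-based optimistic locking.
ALTER TABLE employees ADD COLUMN row_version bigint NOT NULL DEFAULT 0;

-- Read the current version together with the row.
SELECT id, salary, row_version FROM employees WHERE id = 100;

-- Update only if the row is unchanged since it was read;
-- an update count of zero means another transaction modified it first.
UPDATE employees
SET salary = 55000,
    row_version = row_version + 1
WHERE id = 100
  AND row_version = 3;  -- version previously observed by the application
```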
-CREATE TABLE public.ts_query_table ( - id integer, - query tsquery -); +--- -CREATE TABLE public.test_json ( - id integer, - data jsonb -); +### Other objects -CREATE INDEX tsvector_idx ON public.documents (title_tsvector); +#### Large Objects and its functions are currently not supported -CREATE INDEX tsquery_idx ON public.ts_query_table (query); +**GitHub**: Issue [#25318](https://github.com/yugabyte/yugabyte-db/issues/25318) -CREATE INDEX idx_citext ON public.citext_type USING btree (data); +**Description**: If you have large objects (datatype `lo`) in the source schema and are using large object functions in queries, the migration will fail during import-schema, as large object is not supported in YugabyteDB. -CREATE INDEX idx_json ON public.test_json (data); +```sql +SELECT lo_create(''); ``` ---- - -### Constraint trigger is not supported - -**GitHub**: [Issue #4700](https://github.com/yugabyte/yugabyte-db/issues/4700) - -**Description**: If you have constraint triggers in your source database, as they are currently unsupported in YugabyteDB, and they will error out as follows: - ```output - ERROR: CREATE CONSTRAINT TRIGGER not supported yet +ERROR: Transaction for catalog table write operation 'pg_largeobject_metadata' not found ``` -**Workaround**: Currently, there is no workaround; remove the constraint trigger from the exported schema and modify the applications if they are using these triggers before pointing it to YugabyteDB. +**Workaround**: No workaround is available. **Example** An example schema on the source database is as follows: ```sql -CREATE TABLE public.users ( - id int, - email character varying(255) -); - -CREATE FUNCTION public.check_unique_username() RETURNS trigger - LANGUAGE plpgsql -AS $$ -BEGIN - IF EXISTS ( - SELECT 1 - FROM users - WHERE email = NEW.email AND id <> NEW.id - ) THEN - RAISE EXCEPTION 'Email % already exists.', NEW.email; - END IF; - RETURN NEW; -END; -$$; +CREATE TABLE image (id int, raster lo); -CREATE CONSTRAINT TRIGGER check_unique_username_trigger - AFTER INSERT OR UPDATE ON public.users - DEFERRABLE INITIALLY DEFERRED - FOR EACH ROW - EXECUTE FUNCTION public.check_unique_username(); +CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON public.image + FOR EACH ROW EXECUTE FUNCTION lo_manage(raster); ``` --- -### Table inheritance is not supported +#### VIEW WITH CHECK OPTION is not supported -**GitHub**: [Issue #5956](https://github.com/yugabyte/yugabyte-db/issues/5956) +**GitHub**: [Issue #22716](https://github.com/yugabyte/yugabyte-db/issues/22716) -**Description**: If you have table inheritance in the source database, it will error out in the target as it is not currently supported in YugabyteDB: +**Description**: If there are VIEWs with check option in the source database, they error out during the import schema phase as follows: ```output -ERROR: INHERITS not supported yet +ERROR: VIEW WITH CHECK OPTION not supported yet ``` -**Workaround**: Currently, there is no workaround. +**Workaround**: You can use a TRIGGER with INSTEAD OF clause on INSERT/UPDATE on view to achieve this functionality, but it may require application-side adjustments to handle different errors instead of constraint violations. 
**Example** An example schema on the source database is as follows: ```sql -CREATE TABLE public.cities ( - name text, - population real, - elevation integer +CREATE TABLE public.employees ( + employee_id integer NOT NULL, + employee_name text, + salary numeric ); -CREATE TABLE public.capitals ( - state character(2) NOT NULL -) -INHERITS (public.cities); +CREATE VIEW public.employees_less_than_12000 AS + SELECT + employees.employee_id, + employees.employee_name, + employees.salary + FROM + public.employees + WHERE + employees.employee_id < 12000 + WITH CASCADED CHECK OPTION; ``` ---- - -### %Type syntax is not supported +Suggested change to the schema is as follows: -**GitHub**: [Issue #23619](https://github.com/yugabyte/yugabyte-db/issues/23619) +```sql +SELECT + employees.employee_id, + employees.employee_name, + employees.salary +FROM + public.employees +WHERE + employees.employee_id < 12000; -**Description**: If you have any function, procedure, or trigger using the `%TYPE` syntax for referencing a type of a column from a table, then it errors out in YugabyteDB with the following error: +CREATE OR REPLACE FUNCTION modify_employees_less_than_12000() +RETURNS TRIGGER AS $$ +BEGIN + -- Handle INSERT operations + IF TG_OP = 'INSERT' THEN + IF NEW.employee_id < 12000 THEN + INSERT INTO employees(employee_id, employee_name, salary) + VALUES (NEW.employee_id, NEW.employee_name, NEW.salary); + RETURN NEW; + ELSE + RAISE EXCEPTION 'new row violates check option for view "employees_less_than_12000"; employee_id must be less than 12000'; + END IF; + + -- Handle UPDATE operations + ELSIF TG_OP = 'UPDATE' THEN + IF NEW.employee_id < 12000 THEN + UPDATE employees + SET employee_name = NEW.employee_name, + salary = NEW.salary + WHERE employee_id = OLD.employee_id; + RETURN NEW; + ELSE + RAISE EXCEPTION 'new row violates check option for view "employees_less_than_12000"; employee_id must be less than 12000'; + END IF; + END IF; +END; +$$ LANGUAGE plpgsql; + +CREATE TRIGGER trigger_modify_employee_12000 + INSTEAD OF INSERT OR UPDATE ON employees_less_than_12000 + FOR EACH ROW + EXECUTE FUNCTION modify_employees_less_than_12000(); +``` + +--- + +#### Create or alter conversion is not supported + +**GitHub**: [Issue #10866](https://github.com/yugabyte/yugabyte-db/issues/10866) + +**Description**: If you have conversions in your PostgreSQL database, they will error out as follows as conversions are currently not supported in the target YugabyteDB: ```output -ERROR: invalid type name "employees.salary%TYPE" (SQLSTATE 42601) +ERROR: CREATE CONVERSION not supported yet ``` -**Workaround**: Fix the syntax to include the actual type name instead of referencing the type of a column. +**Workaround**: Remove the conversions from the exported schema and modify the applications to not use these conversions before pointing them to YugabyteDB. 
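If a conversion was only used to re-encode text between character sets (as in the `LATIN1` to `UTF8` example below), the application can often switch to the built-in `convert_from()` and `convert_to()` functions instead of depending on a custom conversion object. The following is a minimal sketch, assuming a hypothetical `staging_table` with a `raw_payload bytea` column:

```sql
-- Decode LATIN1-encoded bytes into the database encoding (UTF8).
SELECT convert_from(raw_payload, 'LATIN1') AS decoded_text
FROM staging_table;

-- Re-encode text as LATIN1 bytes for an external system that still expects it.
SELECT convert_to('some legacy text', 'LATIN1') AS encoded_bytes;
```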
**Example** An example schema on the source database is as follows: ```sql -CREATE TABLE public.employees ( - employee_id integer NOT NULL, - employee_name text, - salary numeric -); +CREATE CONVERSION public.my_latin1_to_utf8 FOR 'LATIN1' TO 'UTF8' FROM public.latin1_to_utf8; +CREATE FUNCTION public.latin1_to_utf8(src_encoding integer, dest_encoding integer, src bytea, dest bytea, len integer) RETURNS integer + LANGUAGE c + AS '/usr/lib/postgresql/12/lib/latin1_to_utf8.so', 'my_latin1_to_utf8'; +``` -CREATE FUNCTION public.get_employee_salary(emp_id integer) RETURNS numeric - LANGUAGE plpgsql - AS $$ -DECLARE - emp_salary employees.salary%TYPE; -- Declare a variable with the same type as employees.salary -BEGIN - SELECT salary INTO emp_salary - FROM employees - WHERE employee_id = emp_id; +--- - RETURN emp_salary; -END; -$$; -``` +### Data types -Suggested change to CREATE FUNCTION is as follows: +#### Unsupported datatypes by YugabyteDB -```sql -CREATE FUNCTION public.get_employee_salary(emp_id integer) RETURNS numeric - LANGUAGE plpgsql - AS $$ -DECLARE - Emp_salary NUMERIC; -- Declare a variable with the same type as employees.salary -BEGIN - SELECT salary INTO emp_salary - FROM employees - WHERE employee_id = emp_id; +**GitHub**: [Issue 11323](https://github.com/yugabyte/yugabyte-db/issues/11323), [Issue 1731](https://github.com/yugabyte/yb-voyager/issues/1731) - RETURN emp_salary; -END; -$$; +**Description**: The migration skips databases that have the following data types on any column: `GEOMETRY`, `GEOGRAPHY`, `BOX2D`, `BOX3D`, `TOPOGEOMETRY`, `RASTER`, `PG_LSN`, or `TXID_SNAPSHOT`. + +**Workaround**: None. + +**Example** + +An example schema on the source database is as follows: + +```sql +CREATE TABLE public.locations ( + id integer NOT NULL, + name character varying(100), + geom geometry(Point,4326) + ); ``` --- -### GIN indexes on multiple columns are not supported +## Data manipulation -**GitHub**: [Issue #724](https://github.com/yugabyte/yb-voyager/issues/724) +### MERGE command -**Description**: If there are GIN indexes in the source schema on multiple columns, they result in an error during import schema as follows: +**GitHub**: Issue [#25574](https://github.com/yugabyte/yugabyte-db/issues/25574) + +**Description**: If you are using a Merge query to conditionally insert, update, or delete rows on a table on your source database, then this query will fail once you migrate your apps to YugabyteDB as it is a PostgreSQL 15 feature, and not supported yet. ```output -ERROR: access method "ybgin" does not support multicolumn indexes (SQLSTATE 0A000) +ERROR: syntax error at or near "MERGE" ``` -**Workaround**: Currently, as there is no workaround, modify the schema to not include such indexes. +**Workaround**: Use the PL/pgSQL function to implement similar functionality on the database. 
**Example** An example schema on the source database is as follows: ```sql -CREATE TABLE public.test_gin_json ( - id integer, - text jsonb, - text1 jsonb +CREATE TABLE customer_account ( + customer_id INT PRIMARY KEY, + balance NUMERIC(10, 2) NOT NULL ); -CREATE INDEX gin_multi_on_json - ON public.test_gin_json USING gin (text, text1); +INSERT INTO customer_account (customer_id, balance) +VALUES + (1, 100.00), + (2, 200.00), + (3, 300.00); + +CREATE TABLE recent_transactions ( + transaction_id SERIAL PRIMARY KEY, + customer_id INT NOT NULL, + transaction_value NUMERIC(10, 2) NOT NULL +); +INSERT INTO recent_transactions (customer_id, transaction_value) +VALUES + (1, 50.00), + (3, -25.00), + (4, 150.00); + +MERGE INTO customer_account ca +USING recent_transactions t +ON t.customer_id = ca.customer_id +WHEN MATCHED THEN + UPDATE SET balance = balance + transaction_value +WHEN NOT MATCHED THEN + INSERT (customer_id, balance) + VALUES (t.customer_id, t.transaction_value); +``` + +Suggested schema change is to replace the MERGE command with a PL/pgSQL function similar to the following: + +```sql +CREATE OR REPLACE FUNCTION merge_customer_account() +RETURNS void AS $$ +BEGIN + -- Insert new rows or update existing rows in customer_account + INSERT INTO customer_account (customer_id, balance) + SELECT customer_id, transaction_value + FROM recent_transactions + ON CONFLICT (customer_id) + DO UPDATE + SET balance = customer_account.balance + EXCLUDED.balance; +END; +$$ LANGUAGE plpgsql; ``` --- -### Policies on users in source require manual user creation +## Functions and operators -**GitHub**: [Issue #1655](https://github.com/yugabyte/yb-voyager/issues/1655) +### XID functions is not supported -**Description**: If there are policies in the source schema for USERs in the database, the USERs have to be created manually on the target YugabyteDB, as currently the migration of USER/GRANT is not supported. Skipping the manual user creation will return an error during import schema as follows: +**GitHub**: [Issue #15638](https://github.com/yugabyte/yugabyte-db/issues/15638) + +**Description**: If you have XID datatypes in the source database, its functions, such as, `txid_current()` are not yet supported in YugabyteDB and will result in an error in the target as follows: ```output -ERROR: role "" does not exist (SQLSTATE 42704) + ERROR: Yugabyte does not support xid ``` -**Workaround**: Create the USERs manually on target before import schema to create policies. +**Workaround**: None. **Example** An example schema on the source database is as follows: ```sql -CREATE TABLE public.z1 ( - a integer, - b text +CREATE TABLE xid_example ( + id integer, + tx_id xid ); -CREATE ROLE regress_rls_group; -CREATE POLICY p2 ON public.z1 TO regress_rls_group USING (((a % 2) = 1)); ``` --- -### VIEW WITH CHECK OPTION is not supported +### XML functions is not yet supported -**GitHub**: [Issue #22716](https://github.com/yugabyte/yugabyte-db/issues/22716) +**GitHub**: [Issue #1043](https://github.com/yugabyte/yugabyte-db/issues/1043) -**Description**: If there are VIEWs with check option in the source database, they error out during the import schema phase as follows: +**Description**: XML functions and the XML data type are unsupported in YugabyteDB. 
If you use functions like `xpath`, `xmlconcat`, and `xmlparse`, it will fail with an error as per the following example: + +```sql +yugabyte=# SELECT xml_is_well_formed_content('Alpha') AS is_well_formed_content; +``` ```output -ERROR: VIEW WITH CHECK OPTION not supported yet +ERROR: unsupported XML feature +DETAIL: This functionality requires the server to be built with libxml support. +HINT: You need to rebuild PostgreSQL using --with-libxml. ``` -**Workaround**: You can use a TRIGGER with INSTEAD OF clause on INSERT/UPDATE on view to achieve this functionality, but it may require application-side adjustments to handle different errors instead of constraint violations. +**Workaround**: Convert XML data to JSON format for compatibility with YugabyteDB, or handle XML processing at the application layer before inserting data. -**Example** +--- -An example schema on the source database is as follows: +### JSONB subscripting -```sql -CREATE TABLE public.employees ( - employee_id integer NOT NULL, - employee_name text, - salary numeric -); +**GitHub**: Issue [#25575](https://github.com/yugabyte/yugabyte-db/issues/25575) -CREATE VIEW public.employees_less_than_12000 AS - SELECT - employees.employee_id, - employees.employee_name, - employees.salary - FROM - public.employees - WHERE - employees.employee_id < 12000 - WITH CASCADED CHECK OPTION; +**Description**: If you are using the JSONB subscripting in app queries and in the schema (constraints or default expression) on your source database, then the app query will fail once you migrate your apps to YugabyteDB, and import-schema will fail if any DDL has this feature, as it's a PostgreSQL 15 feature. + +```output +ERROR: cannot subscript type jsonb because it is not an array ``` -Suggested change to the schema is as follows: +**Workaround**: You can use the Arrow ( `-> / ->>` ) operators to access JSONB fields. -```sql -SELECT - employees.employee_id, - employees.employee_name, - employees.salary -FROM - public.employees -WHERE - employees.employee_id < 12000; +**Fixed In**: {{}}. 
-CREATE OR REPLACE FUNCTION modify_employees_less_than_12000() -RETURNS TRIGGER AS $$ -BEGIN - -- Handle INSERT operations - IF TG_OP = 'INSERT' THEN - IF NEW.employee_id < 12000 THEN - INSERT INTO employees(employee_id, employee_name, salary) - VALUES (NEW.employee_id, NEW.employee_name, NEW.salary); - RETURN NEW; - ELSE - RAISE EXCEPTION 'new row violates check option for view "employees_less_than_12000"; employee_id must be less than 12000'; - END IF; +**Example** - -- Handle UPDATE operations - ELSIF TG_OP = 'UPDATE' THEN - IF NEW.employee_id < 12000 THEN - UPDATE employees - SET employee_name = NEW.employee_name, - salary = NEW.salary - WHERE employee_id = OLD.employee_id; - RETURN NEW; - ELSE - RAISE EXCEPTION 'new row violates check option for view "employees_less_than_12000"; employee_id must be less than 12000'; - END IF; - END IF; -END; -$$ LANGUAGE plpgsql; +An example query / DDL on the source database is as follows: + +```sql +SELECT ('{"a": {"b": {"c": "some text"}}}'::jsonb)['a']['b']['c']; + +CREATE TABLE test_jsonb_chk ( + id int, + data1 jsonb, + CHECK (data1['key']<>'{}') +); +``` -CREATE TRIGGER trigger_modify_employee_12000 - INSTEAD OF INSERT OR UPDATE ON employees_less_than_12000 - FOR EACH ROW - EXECUTE FUNCTION modify_employees_less_than_12000(); +Suggested change in query to get it working- + +```sql +SELECT ((('{"a": {"b": {"c": "some text"}}}'::jsonb)->'a')->'b')->>'c'; + +CREATE TABLE test_jsonb_chk ( + id int, + data1 jsonb, + CHECK (data1->'key'<>'{}') +); ``` --- -### UNLOGGED table is not supported +## Indexes -**GitHub**: [Issue #1129](https://github.com/yugabyte/yugabyte-db/issues/1129) +### Index creation on partitions fail for some YugabyteDB builds -**Description**: If there are UNLOGGED tables in the source schema, they will error out during the import schema with the following error as it is not supported in target YugabyteDB. +**GitHub**: [Issue #14529](https://github.com/yugabyte/yugabyte-db/issues/14529) -```output -ERROR: UNLOGGED database object not supported yet -``` +**Description**: If you have a partitioned table with indexes on it, the migration will fail with an error for YugabyteDB `2.15` or `2.16` due to a regression. -**Workaround**: Convert it to a LOGGED table. +Note that this is fixed in release [2.17.1.0](../../../releases/ybdb-releases/end-of-life/v2.17/#v2.17.1.0). 
-**Fixed In**: {{}} +**Workaround**: N/A **Example** An example schema on the source database is as follows: ```sql -CREATE UNLOGGED TABLE tbl_unlogged ( - id int, - val text -); +DROP TABLE IF EXISTS list_part; + +CREATE TABLE list_part (id INTEGER, status TEXT, arr NUMERIC) PARTITION BY LIST(status); + +CREATE TABLE list_active PARTITION OF list_part FOR VALUES IN ('ACTIVE'); + +CREATE TABLE list_archived PARTITION OF list_part FOR VALUES IN ('EXPIRED'); + +CREATE TABLE list_others PARTITION OF list_part DEFAULT; + +INSERT INTO list_part VALUES (1,'ACTIVE',100), (2,'RECURRING',20), (3,'EXPIRED',38), (4,'REACTIVATED',144), (5,'ACTIVE',50); + +CREATE INDEX list_ind ON list_part(status); ``` -Suggested change to the schema is as follows: +--- + +### GiST, BRIN, and SPGIST index types are not supported + +**GitHub**: [Issue #1337](https://github.com/yugabyte/yugabyte-db/issues/1337) + +**Description**: If you have GiST, BRIN, and SPGIST indexes on the source database, it errors out in the import schema phase with the following error: + +```output + ERROR: index method "gist" not supported yet (SQLSTATE XX000) + +``` + +**Workaround**: Currently, there is no workaround; remove the index from the exported schema. + +**Example** + +An example schema on the source database is as follows: ```sql -CREATE TABLE tbl_unlogged ( - id int, - val text -); +CREATE INDEX gist_idx ON public.ts_query_table USING gist (query); ``` --- -### Hash-sharding with indexes on the timestamp/date columns +### Indexes on some complex data types are not supported -**GitHub**: [Issue #49](https://github.com/yugabyte/yb-voyager/issues/49) -**Description**: Indexes on timestamp or date columns are commonly used in range-based queries. However, by default, indexes in YugabyteDB are hash-sharded, which is not optimal for range predicates and can impact query performance. +**GitHub**: [Issue #9698](https://github.com/yugabyte/yugabyte-db/issues/9698), [Issue #23829](https://github.com/yugabyte/yugabyte-db/issues/23829), [Issue #17017](https://github.com/yugabyte/yugabyte-db/issues/17017) -Note that range sharding is currently enabled by default only in [PostgreSQL compatibility mode](../../../develop/postgresql-compatibility/) in YugabyteDB. +**Description**: If you have indexes on some complex types such as TSQUERY, TSVECTOR, JSONB, ARRAYs, INET, UDTs, citext, and so on, those will error out in import schema phase with the following error: -**Workaround**: Explicitly configure the index to use range sharding. This ensures efficient data access with range-based queries. +```output + ERROR: INDEX on column of type '' not yet supported +``` + +**Workaround**: Currently, there is no workaround, but you can cast these data types in the index definition to supported types, which may require adjustments on the application side when querying the column using the index. Ensure you address these changes before modifying the schema. **Example** An example schema on the source database is as follows: ```sql -CREATE TABLE orders ( - order_id int PRIMARY, - ... 
- created_at timestamp +CREATE TABLE public.citext_type ( + id integer, + data public.citext ); -CREATE INDEX idx_orders_created ON orders(created_at); -``` +CREATE TABLE public.documents ( + id integer NOT NULL, + title_tsvector tsvector, + content_tsvector tsvector +); -Suggested change to the schema is to add the ASC/DESC clause as follows: +CREATE TABLE public.ts_query_table ( + id integer, + query tsquery +); -```sql -CREATE INDEX idx_orders_created ON orders(created_at DESC); +CREATE TABLE public.test_json ( + id integer, + data jsonb +); + +CREATE INDEX tsvector_idx ON public.documents (title_tsvector); + +CREATE INDEX tsquery_idx ON public.ts_query_table (query); + +CREATE INDEX idx_citext ON public.citext_type USING btree (data); + +CREATE INDEX idx_json ON public.test_json (data); ``` --- -### Exporting data with names for tables/functions/procedures using special characters/whitespaces fails +### GIN indexes on multiple columns are not supported -**GitHub**: [Issue #636](https://github.com/yugabyte/yb-voyager/issues/636), [Issue #688](https://github.com/yugabyte/yb-voyager/issues/688), [Issue #702](https://github.com/yugabyte/yb-voyager/issues/702) +**GitHub**: [Issue #724](https://github.com/yugabyte/yb-voyager/issues/724) -**Description**: If you define complex names for your source database tables/functions/procedures using backticks or double quotes for example, \`abc xyz\` , \`abc@xyz\`, or "abc@123", the migration hangs during the export data step. +**Description**: If there are GIN indexes in the source schema on multiple columns, they result in an error during import schema as follows: -**Workaround**: Rename the objects (tables/functions/procedures) on the source database to a name without special characters. +```output +ERROR: access method "ybgin" does not support multicolumn indexes (SQLSTATE 0A000) +``` + +**Workaround**: Currently, as there is no workaround, modify the schema to not include such indexes. **Example** -An example schema on the source MySQL database is as follows: +An example schema on the source database is as follows: ```sql -CREATE TABLE `xyz abc`(id int); -INSERT INTO `xyz abc` VALUES(1); -INSERT INTO `xyz abc` VALUES(2); -INSERT INTO `xyz abc` VALUES(3); +CREATE TABLE public.test_gin_json ( + id integer, + text jsonb, + text1 jsonb +); + +CREATE INDEX gin_multi_on_json + ON public.test_gin_json USING gin (text, text1); ``` -The exported schema is as follows: +--- + +## Concurrency control + +### Advisory locks is not yet implemented + +**GitHub**: [Issue #3642](https://github.com/yugabyte/yugabyte-db/issues/3642) + +**Description**: YugabyteDB does not support PostgreSQL advisory locks (for example, pg_advisory_lock, pg_try_advisory_lock). Any attempt to use advisory locks will result in a "function-not-implemented" error as per the following example: ```sql -CREATE TABLE "xyz abc" (id bigint); +yugabyte=# SELECT pg_advisory_lock(100), COUNT(*) FROM cars; ``` -The preceding example may hang or result in an error. +```output +ERROR: advisory locks feature is currently in preview +HINT: To enable this preview feature, set the GFlag ysql_yb_enable_advisory_locks to true and add it to the list of allowed preview flags i.e. GFlag allowed_preview_flags_csv. If the app doesn't need strict functionality, this error can be silenced by using the GFlag yb_silence_advisory_locks_not_supported_error. 
See https://github.com/yugabyte/yugabyte-db/issues/3642 for details +``` + +**Workaround**: Implement a custom locking mechanism in the application to coordinate actions without relying on database-level advisory locks. --- -### Importing with case-sensitive schema names +### Two-Phase Commit -**GitHub**: [Issue #422](https://github.com/yugabyte/yb-voyager/issues/422) +**GitHub**: Issue [#11084](https://github.com/yugabyte/yugabyte-db/issues/11084) -**Description**: If you migrate your database using a case-sensitive schema name, the migration will fail with a "no schema has been selected" or "schema already exists" error(s). +**Description**: If your application queries or PL/pgSQL objects rely on [Two-Phase Commit protocol](https://www.postgresql.org/docs/11/two-phase.html) that allows multiple distributed systems to work together in a transactional manner in the source PostgreSQL database, these functionalities will not work after migrating to YugabyteDB. Currently, Two-Phase Commit is not implemented in YugabyteDB and will throw the following error when you attempt to execute the commands: -**Workaround**: Currently, yb-voyager does not support case-sensitive schema names; all schema names are assumed to be case-insensitive (lower-case). If required, you may alter the schema names to a case-sensitive alternative post-migration using the ALTER SCHEMA command. +```sql +ERROR: PREPARE TRANSACTION not supported yet +``` -**Example** +**Workaround**: Currently, there is no workaround. -An example yb-voyager import-schema command with a case-sensitive schema name is as follows: +--- -```sh -yb-voyager import schema --target-db-name voyager - --target-db-hostlocalhost - --export-dir . - --target-db-password password - --target-db-user yugabyte - --target-db-schema "\"Test\"" -``` +### DDL operations within the Transaction -The preceding example will result in an error as follows: +**GitHub**: Issue [#1404](https://github.com/yugabyte/yugabyte-db/issues/1404) -```output -ERROR: no schema has been selected to create in (SQLSTATE 3F000) -``` +**Description**: If your application queries or PL/pgSQL objects runs DDL operations inside transactions in the source PostgreSQL database, this functionality will not work after migrating to YugabyteDB. Currently, DDL operations in a transaction in YugabyteDB is not supported and will not work as expected. -Suggested changes to the schema can be done using the following steps: +**Workaround**: Currently, there is no workaround. -1. Change the case sensitive schema name during schema migration as follows: +**Example:** - ```sh - yb-voyager import schema --target-db-name voyager - --target-db-hostlocalhost - --export-dir . - --target-db-password password - --target-db-user yugabyte - --target-db-schema test - ``` +```sql +yugabyte=# \d test +Did not find any relation named "test". +yugabyte=# BEGIN; +BEGIN +yugabyte=*# CREATE TABLE test(id int, val text); +CREATE TABLE +yugabyte=*# \d test + Table "public.test" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + id | integer | | | + val | text | | | +yugabyte=*# ROLLBACK; +ROLLBACK +yugabyte=# \d test + Table "public.test" + Column | Type | Collation | Nullable | Default +--------+---------+-----------+----------+--------- + id | integer | | | + val | text | | | +``` -1. 
Alter the schema name post migration as follows: +--- - ```sh - ALTER SCHEMA "test" RENAME TO "Test"; - ``` +## Extensions ---- +### PostgreSQL extensions are not supported by target YugabyteDB -### Unsupported datatypes by YugabyteDB +**Documentation**: [PostgreSQL extensions](../../../explore/ysql-language-features/pg-extensions/) -**GitHub**: [Issue 11323](https://github.com/yugabyte/yugabyte-db/issues/11323), [Issue 1731](https://github.com/yugabyte/yb-voyager/issues/1731) +**Description**: If you have any PostgreSQL extension that is not supported by the target YugabyteDB, they result in the following errors during import schema: -**Description**: The migration skips databases that have the following data types on any column: `GEOMETRY`, `GEOGRAPHY`, `BOX2D`, `BOX3D`, `TOPOGEOMETRY`, `RASTER`, `PG_LSN`, or `TXID_SNAPSHOT`. +```output +ERROR: could not open extension control file "/home/centos/yb/postgres/share/extension/.control": No such file or directory +``` -**Workaround**: None. +**Workaround**: Remove the extension from the exported schema. **Example** An example schema on the source database is as follows: ```sql -CREATE TABLE public.locations ( - id integer NOT NULL, - name character varying(100), - geom geometry(Point,4326) - ); - +CREATE EXTENSION IF NOT EXISTS postgis WITH SCHEMA public; ``` --- -### Unsupported datatypes by Voyager during live migration +## Server programming -**GitHub**: [Issue 1731](https://github.com/yugabyte/yb-voyager/issues/1731) +### Events Listen / Notify -**Description**: For live migration, the migration skips data from source databases that have the following data types on any column: `POINT`, `LINE`, `LSEG`, `BOX`, `PATH`, `POLYGON`, or `CIRCLE`. +**GitHub**: Issue [#1872](https://github.com/yugabyte/yugabyte-db/issues/1872) -For live migration with fall-forward/fall-back, the migration skips data from source databases that have the following data types on any column: `HSTORE`, `POINT`, `LINE`, `LSEG`, `BOX`, `PATH`, `POLYGON`, `TSVECTOR`, `TSQUERY`, `CIRCLE`, or `ARRAY OF ENUMS`. +**Description**: If your application queries or PL/pgSQL objects rely on **LISTEN/NOTIFY events** in the source PostgreSQL database, these functionalities will not work after migrating to YugabyteDB. Currently, LISTEN/NOTIFY events are a no-op in YugabyteDB, and any attempt to use them will trigger a warning instead of performing the expected event-driven operations: -**Workaround**: None. +```sql +WARNING: LISTEN not supported yet and will be ignored +``` -**Example** +**Workaround**: Currently, there is no workaround. 
-An example schema on the source database is as follows: +**Example:** ```sql -CREATE TABLE combined_tbl ( - id int, - l line, - ls lseg, - p point, - p1 path, - p2 polygon -); +LISTEN my_table_changes; +INSERT INTO my_table (name) VALUES ('Charlie'); +NOTIFY my_table_changes, 'New row added with name: Charlie'; ``` --- -### XID functions is not supported +### Constraint trigger is not supported -**GitHub**: [Issue #15638](https://github.com/yugabyte/yugabyte-db/issues/15638) +**GitHub**: [Issue #4700](https://github.com/yugabyte/yugabyte-db/issues/4700) -**Description**: If you have XID datatypes in the source database, its functions, such as, `txid_current()` are not yet supported in YugabyteDB and will result in an error in the target as follows: +**Description**: If you have constraint triggers in your source database, as they are currently unsupported in YugabyteDB, and they will error out as follows: ```output - ERROR: Yugabyte does not support xid + ERROR: CREATE CONSTRAINT TRIGGER not supported yet ``` -**Workaround**: None. +**Workaround**: Currently, there is no workaround; remove the constraint trigger from the exported schema and modify the applications if they are using these triggers before pointing it to YugabyteDB. **Example** An example schema on the source database is as follows: ```sql -CREATE TABLE xid_example ( - id integer, - tx_id xid +CREATE TABLE public.users ( + id int, + email character varying(255) ); + +CREATE FUNCTION public.check_unique_username() RETURNS trigger + LANGUAGE plpgsql +AS $$ +BEGIN + IF EXISTS ( + SELECT 1 + FROM users + WHERE email = NEW.email AND id <> NEW.id + ) THEN + RAISE EXCEPTION 'Email % already exists.', NEW.email; + END IF; + RETURN NEW; +END; +$$; + +CREATE CONSTRAINT TRIGGER check_unique_username_trigger + AFTER INSERT OR UPDATE ON public.users + DEFERRABLE INITIALLY DEFERRED + FOR EACH ROW + EXECUTE FUNCTION public.check_unique_username(); ``` --- @@ -1256,302 +1154,388 @@ EXECUTE FUNCTION check_and_modify_val(); --- -### Advisory locks is not yet implemented +### %Type syntax is not supported -**GitHub**: [Issue #3642](https://github.com/yugabyte/yugabyte-db/issues/3642) +**GitHub**: [Issue #23619](https://github.com/yugabyte/yugabyte-db/issues/23619) -**Description**: YugabyteDB does not support PostgreSQL advisory locks (for example, pg_advisory_lock, pg_try_advisory_lock). Any attempt to use advisory locks will result in a "function-not-implemented" error as per the following example: +**Description**: If you have any function, procedure, or trigger using the `%TYPE` syntax for referencing a type of a column from a table, then it errors out in YugabyteDB with the following error: + +```output +ERROR: invalid type name "employees.salary%TYPE" (SQLSTATE 42601) +``` + +**Workaround**: Fix the syntax to include the actual type name instead of referencing the type of a column. 
+ +**Example** + +An example schema on the source database is as follows: ```sql -yugabyte=# SELECT pg_advisory_lock(100), COUNT(*) FROM cars; +CREATE TABLE public.employees ( + employee_id integer NOT NULL, + employee_name text, + salary numeric +); + + +CREATE FUNCTION public.get_employee_salary(emp_id integer) RETURNS numeric + LANGUAGE plpgsql + AS $$ +DECLARE + emp_salary employees.salary%TYPE; -- Declare a variable with the same type as employees.salary +BEGIN + SELECT salary INTO emp_salary + FROM employees + WHERE employee_id = emp_id; + + RETURN emp_salary; +END; +$$; ``` -```output -ERROR: advisory locks feature is currently in preview -HINT: To enable this preview feature, set the GFlag ysql_yb_enable_advisory_locks to true and add it to the list of allowed preview flags i.e. GFlag allowed_preview_flags_csv. If the app doesn't need strict functionality, this error can be silenced by using the GFlag yb_silence_advisory_locks_not_supported_error. See https://github.com/yugabyte/yugabyte-db/issues/3642 for details +Suggested change to CREATE FUNCTION is as follows: + +```sql +CREATE FUNCTION public.get_employee_salary(emp_id integer) RETURNS numeric + LANGUAGE plpgsql + AS $$ +DECLARE + Emp_salary NUMERIC; -- Declare a variable with the same type as employees.salary +BEGIN + SELECT salary INTO emp_salary + FROM employees + WHERE employee_id = emp_id; + + RETURN emp_salary; +END; +$$; ``` -**Workaround**: Implement a custom locking mechanism in the application to coordinate actions without relying on database-level advisory locks. +--- + +## PostgreSQL 12 and later features + +### PostgreSQL 12 and later features + +**GitHub**: Issue [#25575](https://github.com/yugabyte/yugabyte-db/issues/25575) + +**Description**: If any of the following PostgreSQL 12 and later features are present in the source schema, the import schema step on the target YugabyteDB will fail. + +- [JSON Constructor functions](https://www.postgresql.org/about/featurematrix/detail/395/) - `JSON_ARRAY_AGG`, `JSON_ARRAY`, `JSON_OBJECT`, `JSON_OBJECT_AGG`. +- [JSON query functions](https://www.postgresql.org/docs/17/functions-json.html#FUNCTIONS-SQLJSON-TABLE) - `JSON_QUERY`, `JSON_VALUE`, `JSON_EXISTS`, `JSON_TABLE`. +- [IS JSON predicate clause](https://www.postgresql.org/about/featurematrix/detail/396/). +- Any Value [Aggregate function](https://www.postgresql.org/docs/16/functions-aggregate.html#id-1.5.8.27.5.2.4.1.1.1.1) - `any_value`. +- [COPY FROM command with ON_ERROR](https://www.postgresql.org/about/featurematrix/detail/433/) option. +- [Non-decimal integer literals](https://www.postgresql.org/about/featurematrix/detail/407/). +- [Non-deterministic collations](https://www.postgresql.org/docs/12/collation.html#COLLATION-NONDETERMINISTIC). +- [COMPRESSION clause](https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-PARMS-COMPRESSION) in TABLE Column for TOASTing method. +- [CREATE DATABASE options](https://www.postgresql.org/docs/15/sql-createdatabase.html) (locale, collation, strategy, and oid related). + +In addition, if any of the following PostgreSQL features are present in the source schema, the import schema step on the target YugabyteDB will fail, unless you are importing to YugabyteDB [v2.25](/preview/releases/ybdb-releases/v2.25) (which supports PG15) + +- [Multirange datatypes](https://www.postgresql.org/docs/current/rangetypes.html#RANGETYPES-BUILTIN). +- [UNIQUE NULLS NOT DISTINCT clause](https://www.postgresql.org/about/featurematrix/detail/392/) in constraint and index. 
+- [Range Aggregate functions](https://www.postgresql.org/docs/16/functions-aggregate.html#id-1.5.8.27.5.2.4.1.1.1.1) - `range_agg`, `range_intersect_agg`. +- [FETCH FIRST … WITH TIES in select](https://www.postgresql.org/docs/13/sql-select.html#SQL-LIMIT) statement. +- [Regex functions](https://www.postgresql.org/about/featurematrix/detail/367/) - `regexp_count`, `regexp_instr`, `regexp_like`. +- [Foreign key references](https://www.postgresql.org/about/featurematrix/detail/319/) to partitioned table. +- [Security invoker views](https://www.postgresql.org/about/featurematrix/detail/389/). +- COPY FROM command with WHERE [clause](https://www.postgresql.org/about/featurematrix/detail/330/). +- [Deterministic attribute](https://www.postgresql.org/docs/12/collation.html#COLLATION-NONDETERMINISTIC) in COLLATION objects. +- [SQL Body in Create function](https://www.postgresql.org/docs/15/sql-createfunction.html#:~:text=a%20new%20session.-,sql_body,-The%20body%20of). +- [Common Table Expressions (With queries) with MATERIALIZED clause](https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-CTE-MATERIALIZATION). --- -### System columns is not yet supported +## Migration process and tooling issues -**GitHub**: [Issue #24843](https://github.com/yugabyte/yugabyte-db/issues/24843) +### Exporting data with names for tables/functions/procedures using special characters/whitespaces fails -**Description**: System columns, including `xmin`, `xmax`, `cmin`, `cmax`, and `ctid`, are not available in YugabyteDB. Queries or applications referencing these columns will fail as per the following example: +**GitHub**: [Issue #636](https://github.com/yugabyte/yb-voyager/issues/636), [Issue #688](https://github.com/yugabyte/yb-voyager/issues/688), [Issue #702](https://github.com/yugabyte/yb-voyager/issues/702) + +**Description**: If you define complex names for your source database tables/functions/procedures using backticks or double quotes for example, \`abc xyz\` , \`abc@xyz\`, or "abc@123", the migration hangs during the export data step. + +**Workaround**: Rename the objects (tables/functions/procedures) on the source database to a name without special characters. + +**Example** + +An example schema on the source MySQL database is as follows: ```sql -yugabyte=# SELECT xmin, xmax FROM employees where id = 100; +CREATE TABLE `xyz abc`(id int); +INSERT INTO `xyz abc` VALUES(1); +INSERT INTO `xyz abc` VALUES(2); +INSERT INTO `xyz abc` VALUES(3); +``` + +The exported schema is as follows: + +```sql +CREATE TABLE "xyz abc" (id bigint); +``` + +The preceding example may hang or result in an error. + +--- + +### Importing with case-sensitive schema names + +**GitHub**: [Issue #422](https://github.com/yugabyte/yb-voyager/issues/422) + +**Description**: If you migrate your database using a case-sensitive schema name, the migration will fail with a "no schema has been selected" or "schema already exists" error(s). + +**Workaround**: Currently, yb-voyager does not support case-sensitive schema names; all schema names are assumed to be case-insensitive (lower-case). If required, you may alter the schema names to a case-sensitive alternative post-migration using the ALTER SCHEMA command. + +**Example** + +An example yb-voyager import-schema command with a case-sensitive schema name is as follows: + +```sh +yb-voyager import schema --target-db-name voyager + --target-db-hostlocalhost + --export-dir . 
+ --target-db-password password + --target-db-user yugabyte + --target-db-schema "\"Test\"" ``` +The preceding example will result in an error as follows: + ```output -ERROR: System column "xmin" is not supported yet +ERROR: no schema has been selected to create in (SQLSTATE 3F000) ``` -**Workaround**: Use the application layer to manage tracking instead of relying on system columns. +Suggested changes to the schema can be done using the following steps: + +1. Change the case sensitive schema name during schema migration as follows: + + ```sh + yb-voyager import schema --target-db-name voyager + --target-db-hostlocalhost + --export-dir . + --target-db-password password + --target-db-user yugabyte + --target-db-schema test + ``` + +1. Alter the schema name post migration as follows: + + ```sh + ALTER SCHEMA "test" RENAME TO "Test"; + ``` --- -### XML functions is not yet supported +### Foreign table in the source database requires SERVER and USER MAPPING -**GitHub**: [Issue #1043](https://github.com/yugabyte/yugabyte-db/issues/1043) +**GitHub**: [Issue #1627](https://github.com/yugabyte/yb-voyager/issues/1627) -**Description**: XML functions and the XML data type are unsupported in YugabyteDB. If you use functions like `xpath`, `xmlconcat`, and `xmlparse`, it will fail with an error as per the following example: +**Description**: If you have foreign tables in the schema, during the export schema phase the exported schema does not include the SERVER and USER MAPPING objects. You must manually create these objects before importing schema, otherwise FOREIGN TABLE creation fails with the following error: + +```output +ERROR: server "remote_server" does not exist (SQLSTATE 42704) +``` + +**Workaround**: Create the SERVER and its USER MAPPING manually on the target YugabyteDB database. + +**Example** + +An example schema on the source database is as follows: ```sql -yugabyte=# SELECT xml_is_well_formed_content('Alpha') AS is_well_formed_content; +CREATE EXTENSION postgres_fdw; + +CREATE SERVER remote_server + FOREIGN DATA WRAPPER postgres_fdw + OPTIONS (host '127.0.0.1', port '5432', dbname 'postgres'); + +CREATE FOREIGN TABLE foreign_table ( + id INT, + name TEXT, + data JSONB +) +SERVER remote_server +OPTIONS ( + schema_name 'public', + table_name 'remote_table' +); + +CREATE USER MAPPING FOR postgres +SERVER remote_server +OPTIONS (user 'postgres', password 'XXX'); ``` -```output -ERROR: unsupported XML feature -DETAIL: This functionality requires the server to be built with libxml support. -HINT: You need to rebuild PostgreSQL using --with-libxml. +Exported schema only has the following: + +```sql +CREATE FOREIGN TABLE foreign_table ( + id INT, + name TEXT, + data JSONB +) +SERVER remote_server +OPTIONS ( + schema_name 'public', + table_name 'remote_table' +); ``` -**Workaround**: Convert XML data to JSON format for compatibility with YugabyteDB, or handle XML processing at the application layer before inserting data. +Suggested change is to manually create the SERVER and USER MAPPING on the target YugabyteDB. --- -### Large Objects and its functions are currently not supported - - -**GitHub**: Issue [#25318](https://github.com/yugabyte/yugabyte-db/issues/25318) +### Unsupported datatypes by Voyager during live migration -**Description**: If you have large objects (datatype `lo`) in the source schema and are using large object functions in queries, the migration will fail during import-schema, as large object is not supported in YugabyteDB. 
+**GitHub**: [Issue 1731](https://github.com/yugabyte/yb-voyager/issues/1731) -```sql -SELECT lo_create(''); -``` +**Description**: For live migration, the migration skips data from source databases that have the following data types on any column: `POINT`, `LINE`, `LSEG`, `BOX`, `PATH`, `POLYGON`, or `CIRCLE`. -```output -ERROR: Transaction for catalog table write operation 'pg_largeobject_metadata' not found -``` +For live migration with fall-forward/fall-back, the migration skips data from source databases that have the following data types on any column: `HSTORE`, `POINT`, `LINE`, `LSEG`, `BOX`, `PATH`, `POLYGON`, `TSVECTOR`, `TSQUERY`, `CIRCLE`, or `ARRAY OF ENUMS`. -**Workaround**: No workaround is available. +**Workaround**: None. **Example** An example schema on the source database is as follows: ```sql -CREATE TABLE image (id int, raster lo); - -CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON public.image - FOR EACH ROW EXECUTE FUNCTION lo_manage(raster); +CREATE TABLE combined_tbl ( + id int, + l line, + ls lseg, + p point, + p1 path, + p2 polygon +); ``` --- -### PostgreSQL 12 and later features - -**GitHub**: Issue [#25575](https://github.com/yugabyte/yugabyte-db/issues/25575) - -**Description**: If any of these PostgreSQL features for version 12 and later are present in the source schema, the import schema step on the target YugabyteDB will fail as YugabyteDB is currently PG11 compatible. - -- [JSON Constructor functions](https://www.postgresql.org/about/featurematrix/detail/395/) - `JSON_ARRAY_AGG`, `JSON_ARRAY`, `JSON_OBJECT`, `JSON_OBJECT_AGG`. -- [JSON query functions](https://www.postgresql.org/docs/17/functions-json.html#FUNCTIONS-SQLJSON-TABLE) - `JSON_QUERY`, `JSON_VALUE`, `JSON_EXISTS`, `JSON_TABLE`. -- [IS JSON predicate clause](https://www.postgresql.org/about/featurematrix/detail/396/). -- Any Value [Aggregate function](https://www.postgresql.org/docs/16/functions-aggregate.html#id-1.5.8.27.5.2.4.1.1.1.1) - `any_value`. -- [COPY FROM command with ON_ERROR](https://www.postgresql.org/about/featurematrix/detail/433/) option. -- [Non-decimal integer literals](https://www.postgresql.org/about/featurematrix/detail/407/). -- [Non-deterministic collations](https://www.postgresql.org/docs/12/collation.html#COLLATION-NONDETERMINISTIC). -- [COMPRESSION clause](https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-PARMS-COMPRESSION) in TABLE Column for TOASTing method. -- [CREATE DATABASE options](https://www.postgresql.org/docs/15/sql-createdatabase.html) (locale, collation, strategy, and oid related). - -Apart from these, the following issues are supported in YugabyteDB [v2.25](/preview/releases/ybdb-releases/v2.25), which supports PostgreSQL 15. - -- [Multirange datatypes](https://www.postgresql.org/docs/current/rangetypes.html#RANGETYPES-BUILTIN). -- [UNIQUE NULLS NOT DISTINCT clause](https://www.postgresql.org/about/featurematrix/detail/392/) in constraint and index. -- [Range Aggregate functions](https://www.postgresql.org/docs/16/functions-aggregate.html#id-1.5.8.27.5.2.4.1.1.1.1) - `range_agg`, `range_intersect_agg`. -- [FETCH FIRST … WITH TIES in select](https://www.postgresql.org/docs/13/sql-select.html#SQL-LIMIT) statement. -- [Regex functions](https://www.postgresql.org/about/featurematrix/detail/367/) - `regexp_count`, `regexp_instr`, `regexp_like`. -- [Foreign key references](https://www.postgresql.org/about/featurematrix/detail/319/) to partitioned table. 
-- [Security invoker views](https://www.postgresql.org/about/featurematrix/detail/389/). -- COPY FROM command with WHERE [clause](https://www.postgresql.org/about/featurematrix/detail/330/). -- [Deterministic attribute](https://www.postgresql.org/docs/12/collation.html#COLLATION-NONDETERMINISTIC) in COLLATION objects. -- [SQL Body in Create function](https://www.postgresql.org/docs/15/sql-createfunction.html#:~:text=a%20new%20session.-,sql_body,-The%20body%20of). -- [Common Table Expressions (With queries) with MATERIALIZED clause](https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-CTE-MATERIALIZATION). - ---- - -### MERGE command +### Data ingestion on XML data type is not supported -**GitHub**: Issue [#25574](https://github.com/yugabyte/yugabyte-db/issues/25574) +**GitHub**: [Issue #1043](https://github.com/yugabyte/yugabyte-db/issues/1043) -**Description**: If you are using a Merge query to conditionally insert, update, or delete rows on a table on your source database, then this query will fail once you migrate your apps to YugabyteDB as it is a PostgreSQL 15 feature, and not supported yet. +**Description**: If you have XML datatype in the source database, it errors out in the import data to target YugabyteDB phase as data ingestion is not allowed on this data type: ```output -ERROR: syntax error at or near "MERGE" + ERROR: unsupported XML feature (SQLSTATE 0A000) ``` -**Workaround**: Use the PL/pgSQL function to implement similar functionality on the database. +**Workaround**: To migrate the data, a workaround is to convert the type to text and import the data to target; to read the data on the target YugabyteDB, you need to create some user defined functions similar to XML functions. **Example** An example schema on the source database is as follows: ```sql -CREATE TABLE customer_account ( - customer_id INT PRIMARY KEY, - balance NUMERIC(10, 2) NOT NULL -); - -INSERT INTO customer_account (customer_id, balance) -VALUES - (1, 100.00), - (2, 200.00), - (3, 300.00); - -CREATE TABLE recent_transactions ( - transaction_id SERIAL PRIMARY KEY, - customer_id INT NOT NULL, - transaction_value NUMERIC(10, 2) NOT NULL +CREATE TABLE xml_example ( + id integer, + data xml ); -INSERT INTO recent_transactions (customer_id, transaction_value) -VALUES - (1, 50.00), - (3, -25.00), - (4, 150.00); - -MERGE INTO customer_account ca -USING recent_transactions t -ON t.customer_id = ca.customer_id -WHEN MATCHED THEN - UPDATE SET balance = balance + transaction_value -WHEN NOT MATCHED THEN - INSERT (customer_id, balance) - VALUES (t.customer_id, t.transaction_value); -``` - -Suggested schema change is to replace the MERGE command with a PL/pgSQL function similar to the following: - -```sql -CREATE OR REPLACE FUNCTION merge_customer_account() -RETURNS void AS $$ -BEGIN - -- Insert new rows or update existing rows in customer_account - INSERT INTO customer_account (customer_id, balance) - SELECT customer_id, transaction_value - FROM recent_transactions - ON CONFLICT (customer_id) - DO UPDATE - SET balance = customer_account.balance + EXCLUDED.balance; -END; -$$ LANGUAGE plpgsql; ``` --- -### JSONB subscripting +### Policies on users in source require manual user creation -**GitHub**: Issue [#25575](https://github.com/yugabyte/yugabyte-db/issues/25575) +**GitHub**: [Issue #1655](https://github.com/yugabyte/yb-voyager/issues/1655) -**Description**: If you are using the JSONB subscripting in app queries and in the schema (constraints or default expression) on your source database, then the app 
query will fail once you migrate your apps to YugabyteDB, and import-schema will fail if any DDL has this feature, as it's a PostgreSQL 15 feature. +**Description**: If there are policies in the source schema for USERs in the database, the USERs have to be created manually on the target YugabyteDB, as currently the migration of USER/GRANT is not supported. Skipping the manual user creation will return an error during import schema as follows: ```output -ERROR: cannot subscript type jsonb because it is not an array +ERROR: role "" does not exist (SQLSTATE 42704) ``` -**Workaround**: You can use the Arrow ( `-> / ->>` ) operators to access JSONB fields. - -**Fixed In**: {{}}. +**Workaround**: Create the USERs manually on target before import schema to create policies. **Example** -An example query / DDL on the source database is as follows: +An example schema on the source database is as follows: ```sql -SELECT ('{"a": {"b": {"c": "some text"}}}'::jsonb)['a']['b']['c']; - -CREATE TABLE test_jsonb_chk ( - id int, - data1 jsonb, - CHECK (data1['key']<>'{}') +CREATE TABLE public.z1 ( + a integer, + b text ); +CREATE ROLE regress_rls_group; +CREATE POLICY p2 ON public.z1 TO regress_rls_group USING (((a % 2) = 1)); ``` -Suggested change in query to get it working- +--- -```sql -SELECT ((('{"a": {"b": {"c": "some text"}}}'::jsonb)->'a')->'b')->>'c'; +### Creation of certain views in the rule.sql file -CREATE TABLE test_jsonb_chk ( - id int, - data1 jsonb, - CHECK (data1->'key'<>'{}') -); -``` +**GitHub**: [Issue #770](https://github.com/yugabyte/yb-voyager/issues/770) ---- +**Description**: There may be few cases where certain exported views come under the `rule.sql` file and the `view.sql` file might contain a dummy view definition. This `pg_dump` behaviour may be due to how PostgreSQL handles views internally (via rules). -### Events Listen / Notify +{{< note title ="Note" >}} +This does not affect the migration as YugabyteDB Voyager takes care of the DDL creation sequence internally. +{{< /note >}} -**GitHub**: Issue [#1872](https://github.com/yugabyte/yugabyte-db/issues/1872) +**Workaround**: Not required -**Description**: If your application queries or PL/pgSQL objects rely on **LISTEN/NOTIFY events** in the source PostgreSQL database, these functionalities will not work after migrating to YugabyteDB. Currently, LISTEN/NOTIFY events are a no-op in YugabyteDB, and any attempt to use them will trigger a warning instead of performing the expected event-driven operations: +**Example** + +An example schema on the source database is as follows: ```sql -WARNING: LISTEN not supported yet and will be ignored +CREATE TABLE foo(n1 int PRIMARY KEY, n2 int); +CREATE VIEW v1 AS + SELECT n1,n2 + FROM foo + GROUP BY n1; ``` -**Workaround**: Currently, there is no workaround. 
+The exported schema for `view.sql` is as follows: -**Example:** +```sql +CREATE VIEW public.v1 AS + SELECT + NULL::integer AS n1, + NULL::integer AS n2; +``` + +The exported schema for `rule.sql` is as follows: ```sql -LISTEN my_table_changes; -INSERT INTO my_table (name) VALUES ('Charlie'); -NOTIFY my_table_changes, 'New row added with name: Charlie'; +CREATE OR REPLACE VIEW public.v1 AS + SELECT foo.n1,foo.n2 + FROM public.foo + GROUP BY foo.n1; ``` --- -### Two-Phase Commit - -**GitHub**: Issue [#11084](https://github.com/yugabyte/yugabyte-db/issues/11084) +## Performance optimizations -**Description**: If your application queries or PL/pgSQL objects rely on [Two-Phase Commit protocol](https://www.postgresql.org/docs/11/two-phase.html) that allows multiple distributed systems to work together in a transactional manner in the source PostgreSQL database, these functionalities will not work after migrating to YugabyteDB. Currently, Two-Phase Commit is not implemented in YugabyteDB and will throw the following error when you attempt to execute the commands: +### Hash-sharding with indexes on the timestamp/date columns -```sql -ERROR: PREPARE TRANSACTION not supported yet -``` +**GitHub**: [Issue #49](https://github.com/yugabyte/yb-voyager/issues/49) +**Description**: Indexes on timestamp or date columns are commonly used in range-based queries. However, by default, indexes in YugabyteDB are hash-sharded, which is not optimal for range predicates and can impact query performance. -**Workaround**: Currently, there is no workaround. +Note that range sharding is currently enabled by default only in [PostgreSQL compatibility mode](../../../develop/postgresql-compatibility/) in YugabyteDB. ---- +**Workaround**: Explicitly configure the index to use range sharding. This ensures efficient data access with range-based queries. -### DDL operations within the Transaction +**Example** -**GitHub**: Issue [#1404](https://github.com/yugabyte/yugabyte-db/issues/1404) +An example schema on the source database is as follows: -**Description**: If your application queries or PL/pgSQL objects runs DDL operations inside transactions in the source PostgreSQL database, this functionality will not work after migrating to YugabyteDB. Currently, DDL operations in a transaction in YugabyteDB is not supported and will not work as expected. +```sql +CREATE TABLE orders ( + order_id int PRIMARY, + ... + created_at timestamp +); -**Workaround**: Currently, there is no workaround. +CREATE INDEX idx_orders_created ON orders(created_at); +``` -**Example:** +Suggested change to the schema is to add the ASC/DESC clause as follows: ```sql -yugabyte=# \d test -Did not find any relation named "test". 
-yugabyte=# BEGIN; -BEGIN -yugabyte=*# CREATE TABLE test(id int, val text); -CREATE TABLE -yugabyte=*# \d test - Table "public.test" - Column | Type | Collation | Nullable | Default ---------+---------+-----------+----------+--------- - id | integer | | | - val | text | | | -yugabyte=*# ROLLBACK; -ROLLBACK -yugabyte=# \d test - Table "public.test" - Column | Type | Collation | Nullable | Default ---------+---------+-----------+----------+--------- - id | integer | | | - val | text | | | +CREATE INDEX idx_orders_created ON orders(created_at DESC); ``` --- @@ -1592,10 +1576,10 @@ This also requires modifying the range queries to include the `shard_id` in the ```sql CREATE TABLE orders ( - order_id int PRIMARY, - ..., - shard_id int DEFAULT (floor(random() * 100)::int % 16), - created_at timestamp + order_id int PRIMARY, + ..., + shard_id int DEFAULT (floor(random() * 100)::int % 16), + created_at timestamp ); CREATE INDEX idx_orders_created ON orders(shard_id HASH, created_at DESC); @@ -1603,6 +1587,8 @@ CREATE INDEX idx_orders_created ON orders(shard_id HASH, created_at DESC); SELECT * FROM orders WHERE shard_id IN (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15) AND created_at >= NOW() - INTERVAL '1 month'; -- for fetching orders of last one month ``` +--- + ### Redundant indexes **Description**: A redundant index is an index that duplicates the functionality of another index or is unnecessary because the database can use an existing index to achieve the same result. This happens when multiple indexes cover the same columns or when a subset of columns in one index is already covered by another. @@ -1628,4 +1614,4 @@ Suggested change to the schema is to remove this redundant index `idx_orders_ord ```sql CREATE INDEX idx_orders_order_id on orders(order_id); -``` +``` \ No newline at end of file From 366b9f5e3c4df3a1a17d553db41d6dc50146f488 Mon Sep 17 00:00:00 2001 From: Karthik Ramanathan Date: Tue, 13 May 2025 09:59:28 -0400 Subject: [PATCH 054/146] [#27179] YSQL: Address YB_TODOs in UPDATE/DELETE paths Summary: This revision addresses all the remaining YB_TODOs in nodeModifyTable.c: - The functionality of YbExecUpdateAct is deduplicated and merged into ExecUpdateAct. Variables introduced by YB are marked with a `yb_` prefix. - YB relations invoking ExecUpdateAct now return TM_Ok for row found and TM_Invisible for row not found. - Call to always fetch the new slot tuple via `ExecFetchSlotHeapTuple` for YB relations is now skipped. - Functions in ybOptimizeModifyTable.c that previously required the newtuple via `(oldtuple, newtuple)` args now require `(oldtuple, newtupleslot)`. - This makes it consistent with ExecUpdate* function signatures in PG 15. - Changes to ExecDelete between PG 15 and PG 11 were merged in an inconsistent fashion when compared to ExecUpdate. The following changes are made: - Tuple deletion logic for YB relations is moved into ExecDeleteAct. - Index deletion logic for YB relations is moved into ExecDeleteEpilogue (see ExecUpdateEpilogue for reference) - The return values of ExecDeleteAct have the same semantics as ExecUpdateAct. 
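To make the new return-value convention concrete, here is a minimal, illustrative C sketch (not the actual executor code): it uses a trimmed-down stand-in for PostgreSQL's TM_Result enum (the real enum has additional states) and a hypothetical helper name, and shows how a boolean "row found" result maps to TM_Ok / TM_Invisible as described above.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative subset only; PostgreSQL's real TM_Result has more states
 * (TM_SelfModified, TM_Updated, TM_Deleted, ...). */
typedef enum { TM_Ok, TM_Invisible } TM_Result;

/* Hypothetical stand-in for the YB storage call that reports whether the
 * target row existed. */
static TM_Result
yb_map_row_found(bool row_found)
{
    /* Vanilla PostgreSQL has no "no matching tuple" state, so "not found"
     * is reported as TM_Invisible and interpreted by the caller. */
    return row_found ? TM_Ok : TM_Invisible;
}

int
main(void)
{
    if (yb_map_row_found(false) == TM_Invisible)
        printf("no matching row: executor returns NULL, nothing to update/delete\n");
    if (yb_map_row_found(true) == TM_Ok)
        printf("row found: executor proceeds with the epilogue (triggers, indexes)\n");
    return 0;
}
```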
Jira: DB-16665 Test Plan: Run Jenkins Reviewers: aagrawal, amartsinchyk Reviewed By: aagrawal Subscribers: smishra, yql Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43632 --- .../src/backend/executor/nodeModifyTable.c | 475 +++++++----------- .../backend/executor/ybOptimizeModifyTable.c | 24 +- .../include/executor/ybOptimizeModifyTable.h | 2 +- 3 files changed, 199 insertions(+), 302 deletions(-) diff --git a/src/postgres/src/backend/executor/nodeModifyTable.c b/src/postgres/src/backend/executor/nodeModifyTable.c index 791b1f8efb7d..057ae25a2b57 100644 --- a/src/postgres/src/backend/executor/nodeModifyTable.c +++ b/src/postgres/src/backend/executor/nodeModifyTable.c @@ -1589,6 +1589,32 @@ ExecDeleteAct(ModifyTableContext *context, ResultRelInfo *resultRelInfo, ItemPointer tupleid, bool changingPart) { EState *estate = context->estate; + Relation resultRelationDesc = resultRelInfo->ri_RelationDesc; + + if (IsYBRelation(resultRelationDesc)) + { + bool row_found = YBCExecuteDelete(resultRelationDesc, + context->planSlot, + ((ModifyTable *) context->mtstate->ps.plan)->ybReturningColumns, + context->mtstate->yb_fetch_target_tuple, + (estate->yb_es_is_single_row_modify_txn ? + YB_SINGLE_SHARD_TRANSACTION : + YB_TRANSACTIONAL), + changingPart, + estate); + + /* + * Vanilla postgres does not have the equivalent of "no matching tuple" + * in its visibility state enum. YugabyteDB currently does not apply + * tuple visibility semantics within the same transaction + * (command counter). So, when a tuple is not found, hard code the + * return value to TM_Invisible. + */ + if (!row_found) + return TM_Invisible; + + return TM_Ok; + } return table_tuple_delete(resultRelInfo->ri_RelationDesc, tupleid, estate->es_output_cid, @@ -1614,6 +1640,15 @@ ExecDeleteEpilogue(ModifyTableContext *context, ResultRelInfo *resultRelInfo, EState *estate = context->estate; TransitionCaptureState *ar_delete_trig_tcs; + if (IsYBRelation(resultRelInfo->ri_RelationDesc) && + YBCRelInfoHasSecondaryIndices(resultRelInfo)) + { + Datum ybctid = YBCGetYBTupleIdFromSlot(context->planSlot); + + /* Delete index entries of the old tuple */ + ExecDeleteIndexTuples(resultRelInfo, ybctid, oldtuple, estate); + } + /* * If this delete is the result of a partition key update that moved the * tuple to a new partition, put this row into the transition OLD TABLE, @@ -1730,35 +1765,6 @@ ExecDelete(ModifyTableContext *context, slot->tts_tableOid = RelationGetRelid(resultRelationDesc); } - else if (IsYBRelation(resultRelationDesc)) - { - bool row_found = YBCExecuteDelete(resultRelationDesc, - context->planSlot, - ((ModifyTable *) context->mtstate->ps.plan)->ybReturningColumns, - context->mtstate->yb_fetch_target_tuple, - (estate->yb_es_is_single_row_modify_txn ? - YB_SINGLE_SHARD_TRANSACTION : - YB_TRANSACTIONAL), - changingPart, - estate); - - if (!row_found) - { - /* - * No row was found. This is possible if it's a single row txn - * and there is no row to delete (since we do not first do a scan). - */ - return NULL; - } - - if (YBCRelInfoHasSecondaryIndices(resultRelInfo)) - { - Datum ybctid = YBCGetYBTupleIdFromSlot(context->planSlot); - - /* Delete index entries of the old tuple */ - ExecDeleteIndexTuples(resultRelInfo, ybctid, oldtuple, estate); - } - } else { /* @@ -1776,6 +1782,15 @@ ldelete:; if (tmresult) *tmresult = result; + if (IsYBRelation(resultRelationDesc) && result == TM_Invisible) + { + /* + * No row was found. 
This is possible if it's a single row txn + * and there is no row to delete (since we do not first do a scan). + */ + return NULL; + } + switch (result) { case TM_SelfModified: @@ -2263,7 +2278,8 @@ ExecUpdatePrepareSlot(ResultRelInfo *resultRelInfo, static TM_Result ExecUpdateAct(ModifyTableContext *context, ResultRelInfo *resultRelInfo, ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *slot, - bool canSetTag, UpdateContext *updateCxt) + bool canSetTag, UpdateContext *updateCxt, + Bitmapset **yb_cols_marked_for_update, bool *yb_is_pk_updated) { EState *estate = context->estate; Relation resultRelationDesc = resultRelInfo->ri_RelationDesc; @@ -2308,159 +2324,6 @@ ExecUpdateAct(ModifyTableContext *context, ResultRelInfo *resultRelInfo, resultRelInfo, slot, estate); } - /* - * If a partition check failed, try to move the row into the right - * partition. - */ - if (partition_constraint_failed) - { - TupleTableSlot *inserted_tuple, - *retry_slot; - ResultRelInfo *insert_destrel = NULL; - - /* - * ExecCrossPartitionUpdate will first DELETE the row from the - * partition it's currently in and then insert it back into the root - * table, which will re-route it to the correct partition. However, - * if the tuple has been concurrently updated, a retry is needed. - */ - if (ExecCrossPartitionUpdate(context, resultRelInfo, - tupleid, oldtuple, slot, - canSetTag, updateCxt, - &result, - &retry_slot, - &inserted_tuple, - &insert_destrel)) - { - /* success! */ - updateCxt->updated = true; - updateCxt->crossPartUpdate = true; - - /* - * If the partitioned table being updated is referenced in foreign - * keys, queue up trigger events to check that none of them were - * violated. No special treatment is needed in - * non-cross-partition update situations, because the leaf - * partition's AR update triggers will take care of that. During - * cross-partition updates implemented as delete on the source - * partition followed by insert on the destination partition, - * AR-UPDATE triggers of the root table (that is, the table - * mentioned in the query) must be fired. - * - * NULL insert_destrel means that the move failed to occur, that - * is, the update failed, so no need to anything in that case. - */ - if (insert_destrel && - resultRelInfo->ri_TrigDesc && - resultRelInfo->ri_TrigDesc->trig_update_after_row) - ExecCrossPartitionUpdateForeignKey(context, - resultRelInfo, - insert_destrel, - tupleid, slot, - inserted_tuple, - NULL); - - return TM_Ok; - } - - /* - * No luck, a retry is needed. If running MERGE, we do not do so - * here; instead let it handle that on its own rules. - */ - if (context->relaction != NULL) - return result; - - /* - * ExecCrossPartitionUpdate installed an updated version of the new - * tuple in the retry slot; start over. - */ - slot = retry_slot; - goto lreplace; - } - - /* - * Check the constraints of the tuple. We've already checked the - * partition constraint above; however, we must still ensure the tuple - * passes all other constraints, so we will call ExecConstraints() and - * have it validate all remaining checks. - */ - if (resultRelationDesc->rd_att->constr) - ExecConstraints(resultRelInfo, slot, estate, context->mtstate); - - /* - * replace the heap tuple - * - * Note: if es_crosscheck_snapshot isn't InvalidSnapshot, we check that - * the row to be updated is visible to that snapshot, and throw a - * can't-serialize error if not. This is a special-case behavior needed - * for referential integrity updates in transaction-snapshot mode - * transactions. 
- */ - result = table_tuple_update(resultRelationDesc, tupleid, slot, - estate->es_output_cid, - estate->es_snapshot, - estate->es_crosscheck_snapshot, - true /* wait for commit */ , - &context->tmfd, &updateCxt->lockmode, - &updateCxt->updateIndexes); - if (result == TM_Ok) - updateCxt->updated = true; - - return result; -} - -/* YB_TODO(arpan): Deduplicate code between YBExecUpdateAct and ExecUpdateAct */ -/* YB_TODO(review) Revisit later. */ -/* YB_TODO(kramanathan): Evaluate use of ExecFetchSlotHeapTuple */ -static bool -YBExecUpdateAct(ModifyTableContext *context, ResultRelInfo *resultRelInfo, - ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *slot, - bool canSetTag, UpdateContext *updateCxt, - Bitmapset **cols_marked_for_update, bool *is_pk_updated) -{ - EState *estate = context->estate; - Relation resultRelationDesc = resultRelInfo->ri_RelationDesc; - bool partition_constraint_failed; - TM_Result result; - - updateCxt->crossPartUpdate = false; - - /* - * If we move the tuple to a new partition, we loop back here to recompute - * GENERATED values (which are allowed to be different across partitions) - * and recheck any RLS policies and constraints. We do not fire any - * BEFORE triggers of the new partition, however. - */ -yb_lreplace:; - /* Fill in GENERATEd columns */ - ExecUpdatePrepareSlot(resultRelInfo, slot, estate); - - /* ensure slot is independent, consider e.g. EPQ */ - HeapTuple tuple = ExecFetchSlotHeapTuple(slot, true /* materialize */ , NULL); - - /* - * If partition constraint fails, this row might get moved to another - * partition, in which case we should check the RLS CHECK policy just - * before inserting into the new partition, rather than doing it here. - * This is because a trigger on that partition might again change the row. - * So skip the WCO checks if the partition constraint fails. - */ - partition_constraint_failed = - resultRelationDesc->rd_rel->relispartition && - !ExecPartitionCheck(resultRelInfo, slot, estate, false /* emitError */ ); - - /* Check any RLS UPDATE WITH CHECK policies */ - if (!partition_constraint_failed && - resultRelInfo->ri_WithCheckOptions != NIL) - { - /* - * ExecWithCheckOptions() will skip any WCOs which are not of the kind - * we are looking for at this point. - */ - ExecWithCheckOptions(WCO_RLS_UPDATE_CHECK, - resultRelInfo, slot, estate); - } - /* * YB: For an ON CONFLICT DO UPDATE query with read batching enabled, insert the * keys corresponding to the arbiter indexes into the intent cache. Since @@ -2471,7 +2334,8 @@ yb_lreplace:; * arbiter index, which is notable for expression indexes. * TODO(kramanathan): Optimize this by forming the tuple ID from the slot. */ - if (resultRelInfo->ri_ybIocBatchingPossible && + if (IsYBRelation(resultRelationDesc) && + resultRelInfo->ri_ybIocBatchingPossible && context->mtstate->yb_ioc_state) { ItemPointerData unusedConflictTid; @@ -2535,25 +2399,27 @@ yb_lreplace:; inserted_tuple, oldtuple); - return true; + return TM_Ok; } -#ifdef YB_TODO - /* Handle MERGE */ /* * No luck, a retry is needed. If running MERGE, we do not do so * here; instead let it handle that on its own rules. */ if (context->relaction != NULL) + { + if (IsYBRelation(resultRelationDesc)) + elog(ERROR, "Cross partition update via MERGE is not supported"); + return result; -#endif + } /* * ExecCrossPartitionUpdate installed an updated version of the new * tuple in the retry slot; start over. 
*/ slot = retry_slot; - goto yb_lreplace; + goto lreplace; } /* @@ -2565,88 +2431,127 @@ yb_lreplace:; if (resultRelationDesc->rd_att->constr) ExecConstraints(resultRelInfo, slot, estate, context->mtstate); - bool row_found = false; - bool beforeRowUpdateTriggerFired = (resultRelInfo->ri_TrigDesc && - resultRelInfo->ri_TrigDesc->trig_update_before_row); + if (IsYBRelation(resultRelationDesc)) + { + Assert(yb_cols_marked_for_update); + Assert(yb_is_pk_updated); - /* - * A bitmapset of columns whose values are written to the main table. - * This is initialized to the set of columns marked for update at - * planning time. This set is updated as follows: - * - Columns modified by before row triggers are added. - * - Primary key columns in the setlist that are unmodified are removed. - * - * Maintaining this bitmapset allows us to continue writing out - * unmodified columns to the main table as a safety guardrail, while - * selectively skipping index updates and constraint checks. - * This guardrail may be removed in the future. This also helps avoid - * having a dependency on row locking. - */ - *cols_marked_for_update = YbFetchColumnsMarkedForUpdate(context, - resultRelInfo); + bool row_found = false; + bool beforeRowUpdateTriggerFired = (resultRelInfo->ri_TrigDesc && + resultRelInfo->ri_TrigDesc->trig_update_before_row); - if (resultRelInfo->ri_NumGeneratedNeeded > 0) - { /* - * Include any affected generated columns. Note that generated columns - * are, conceptually, updated after BEFORE triggers have run. + * A bitmapset of columns whose values are written to the main table. + * This is initialized to the set of columns marked for update at + * planning time. This set is updated as follows: + * - Columns modified by before row triggers are added. + * - Primary key columns in the setlist that are unmodified are removed. + * + * Maintaining this bitmapset allows us to continue writing out + * unmodified columns to the main table as a safety guardrail, while + * selectively skipping index updates and constraint checks. + * This guardrail may be removed in the future. This also helps avoid + * having a dependency on row locking. */ - Bitmapset *generatedCols = ExecGetExtraUpdatedCols(resultRelInfo, - estate); + *yb_cols_marked_for_update = YbFetchColumnsMarkedForUpdate(context, + resultRelInfo); - Assert(!bms_is_empty(generatedCols)); - *cols_marked_for_update = bms_union(*cols_marked_for_update, - generatedCols); - } + if (resultRelInfo->ri_NumGeneratedNeeded > 0) + { + /* + * Include any affected generated columns. Note that generated columns + * are, conceptually, updated after BEFORE triggers have run. + */ + Bitmapset *generatedCols = ExecGetExtraUpdatedCols(resultRelInfo, + estate); + + Assert(!bms_is_empty(generatedCols)); + *yb_cols_marked_for_update = bms_union(*yb_cols_marked_for_update, + generatedCols); + } - ModifyTable *plan = (ModifyTable *) context->mtstate->ps.plan; + ModifyTable *plan = (ModifyTable *) context->mtstate->ps.plan; - YbCopySkippableEntities(&estate->yb_skip_entities, plan->yb_skip_entities); + YbCopySkippableEntities(&estate->yb_skip_entities, plan->yb_skip_entities); - /* - * If an update is a "single row transaction", then we have already - * confirmed at planning time that it has no secondary indexes or - * triggers or foreign key constraints. Such an update does not - * benefit from optimizations that skip constraint checking or index - * updates. 
- * While it may seem that a single row, distributed transaction can be - * transformed into a single row, non-distributed transaction, this is - * not the case. It is likely that the row to be updated has been read - * from the storage layer already, thus violating the non-distributed - * transaction semantics. - */ - if (!estate->yb_es_is_single_row_modify_txn) - { - YbComputeModifiedColumnsAndSkippableEntities(context->mtstate, - resultRelInfo, estate, - oldtuple, tuple, - cols_marked_for_update, - beforeRowUpdateTriggerFired); + /* + * If an update is a "single row transaction", then we have already + * confirmed at planning time that it has no secondary indexes or + * triggers or foreign key constraints. Such an update does not + * benefit from optimizations that skip constraint checking or index + * updates. + * While it may seem that a single row, distributed transaction can be + * transformed into a single row, non-distributed transaction, this is + * not the case. It is likely that the row to be updated has been read + * from the storage layer already, thus violating the non-distributed + * transaction semantics. + */ + if (!estate->yb_es_is_single_row_modify_txn) + { + YbComputeModifiedColumnsAndSkippableEntities(context->mtstate, + resultRelInfo, estate, + oldtuple, slot, + yb_cols_marked_for_update, + beforeRowUpdateTriggerFired); + } + + /* + * Irrespective of whether the optimization is enabled or not, we have + * to check if the primary key is updated. It could be that the columns + * making up the primary key are not a part of the target list but are + * updated by a before row trigger. + */ + *yb_is_pk_updated = YbIsPrimaryKeyUpdated(resultRelationDesc, + *yb_cols_marked_for_update); + + if (*yb_is_pk_updated) + { + YBCExecuteUpdateReplace(resultRelationDesc, context->planSlot, slot, estate); + row_found = true; + } + else + row_found = YBCExecuteUpdate(resultRelInfo, context->planSlot, slot, + oldtuple, estate, plan, + context->mtstate->yb_fetch_target_tuple, + (estate->yb_es_is_single_row_modify_txn ? + YB_SINGLE_SHARD_TRANSACTION : + YB_TRANSACTIONAL), + *yb_cols_marked_for_update, canSetTag); + + /* + * Vanilla postgres does not have the equivalent of "no matching tuple" + * in its visibility state enum. YugabyteDB currently does not apply + * tuple visibility semantics within the same transaction + * (command counter). So, when a tuple is not found, hard code the + * return value to TM_Invisible. + */ + if (!row_found) + return TM_Invisible; + + updateCxt->updated = true; + return TM_Ok; } /* - * Irrespective of whether the optimization is enabled or not, we have - * to check if the primary key is updated. It could be that the columns - * making up the primary key are not a part of the target list but are - * updated by a before row trigger. + * replace the heap tuple + * + * Note: if es_crosscheck_snapshot isn't InvalidSnapshot, we check that + * the row to be updated is visible to that snapshot, and throw a + * can't-serialize error if not. This is a special-case behavior needed + * for referential integrity updates in transaction-snapshot mode + * transactions. 
*/ - *is_pk_updated = YbIsPrimaryKeyUpdated(resultRelationDesc, - *cols_marked_for_update); - if (*is_pk_updated) - { - YBCExecuteUpdateReplace(resultRelationDesc, context->planSlot, slot, estate); - row_found = true; - } - else - row_found = YBCExecuteUpdate(resultRelInfo, context->planSlot, slot, - oldtuple, estate, plan, - context->mtstate->yb_fetch_target_tuple, - (estate->yb_es_is_single_row_modify_txn ? - YB_SINGLE_SHARD_TRANSACTION : - YB_TRANSACTIONAL), - *cols_marked_for_update, canSetTag); - - return row_found; + result = table_tuple_update(resultRelationDesc, tupleid, slot, + estate->es_output_cid, + estate->es_snapshot, + estate->es_crosscheck_snapshot, + true /* wait for commit */ , + &context->tmfd, &updateCxt->lockmode, + &updateCxt->updateIndexes); + if (result == TM_Ok) + updateCxt->updated = true; + + return result; } /* @@ -2655,7 +2560,6 @@ yb_lreplace:; * Closing steps of updating a tuple. Must be called if ExecUpdateAct * returns indicating that the tuple was updated. */ -/* YB_TODO(review) Revisit later. */ static void ExecUpdateEpilogue(ModifyTableContext *context, UpdateContext *updateCxt, ResultRelInfo *resultRelInfo, ItemPointer tupleid, @@ -2824,7 +2728,6 @@ ExecCrossPartitionUpdateForeignKey(ModifyTableContext *context, * actually updated after EvalPlanQual. * ---------------------------------------------------------------- */ -/* YB_TODO(review) Revisit later. */ static TupleTableSlot * ExecUpdate(ModifyTableContext *context, ResultRelInfo *resultRelInfo, ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *slot, @@ -2834,8 +2737,8 @@ ExecUpdate(ModifyTableContext *context, ResultRelInfo *resultRelInfo, Relation resultRelationDesc = resultRelInfo->ri_RelationDesc; UpdateContext updateCxt = {0}; TM_Result result; - Bitmapset *cols_marked_for_update = NULL; - bool pk_is_updated = false; + Bitmapset *yb_cols_marked_for_update = NULL; + bool yb_pk_is_updated = false; /* * abort the operation if not running transactions @@ -2881,28 +2784,6 @@ ExecUpdate(ModifyTableContext *context, ResultRelInfo *resultRelInfo, */ slot->tts_tableOid = RelationGetRelid(resultRelationDesc); } - else if (IsYBRelation(resultRelationDesc)) - { - /* Fill in the slot appropriately */ - ExecUpdatePrepareSlot(resultRelInfo, slot, estate); - if (!YBExecUpdateAct(context, resultRelInfo, tupleid, oldtuple, slot, - canSetTag, &updateCxt, &cols_marked_for_update, - &pk_is_updated)) - { - /* - * No row was found. This is possible if it's a single row txn - * and there is no row to update (since we do not first do a scan). - */ - return NULL; - } - /* - * If ExecUpdateAct reports that a cross-partition update was done, - * then the RETURNING tuple (if any) has been projected and there's - * nothing else for us to do. - */ - if (updateCxt.crossPartUpdate) - return context->cpUpdateReturningSlot; - } else { ItemPointerData lockedtid; @@ -2915,9 +2796,21 @@ ExecUpdate(ModifyTableContext *context, ResultRelInfo *resultRelInfo, * to do them again.) */ redo_act: - lockedtid = *tupleid; + if (!IsYBRelation(resultRelationDesc)) + lockedtid = *tupleid; + result = ExecUpdateAct(context, resultRelInfo, tupleid, oldtuple, slot, - canSetTag, &updateCxt); + canSetTag, &updateCxt, &yb_cols_marked_for_update, + &yb_pk_is_updated); + + if (IsYBRelation(resultRelationDesc) && result == TM_Invisible) + { + /* + * No row was found. This is possible if it's a single row txn + * and there is no row to update (since we do not first do a scan). 
+ */ + return NULL; + } /* * If ExecUpdateAct reports that a cross-partition update was done, @@ -3083,10 +2976,10 @@ ExecUpdate(ModifyTableContext *context, ResultRelInfo *resultRelInfo, (estate->es_processed)++; ExecUpdateEpilogue(context, &updateCxt, resultRelInfo, tupleid, oldtuple, - slot, cols_marked_for_update, pk_is_updated); + slot, yb_cols_marked_for_update, yb_pk_is_updated); YbClearSkippableEntities(&estate->yb_skip_entities); - bms_free(cols_marked_for_update); + bms_free(yb_cols_marked_for_update); /* Process RETURNING if present */ if (resultRelInfo->ri_projectReturning) @@ -3573,7 +3466,9 @@ lmerge_matched:; break; /* concurrent update/delete */ } result = ExecUpdateAct(context, resultRelInfo, tupleid, NULL, - newslot, canSetTag, &updateCxt); + newslot, canSetTag, &updateCxt, + NULL /* yb_cols_marked_for_update */ , + NULL /* yb_is_pk_updated */ ); /* * As in ExecUpdate(), if ExecUpdateAct() reports that a diff --git a/src/postgres/src/backend/executor/ybOptimizeModifyTable.c b/src/postgres/src/backend/executor/ybOptimizeModifyTable.c index 960974c90f33..408fd8830171 100644 --- a/src/postgres/src/backend/executor/ybOptimizeModifyTable.c +++ b/src/postgres/src/backend/executor/ybOptimizeModifyTable.c @@ -214,7 +214,8 @@ YBAreDatumsStoredIdentically(Datum lhs, * ---------------------------------------------------------------------------- */ static bool -YBIsColumnModified(Relation rel, HeapTuple oldtuple, HeapTuple newtuple, +YBIsColumnModified(Relation rel, HeapTuple oldtuple, + TupleTableSlot *newtupleslot, const FormData_pg_attribute *attdesc) { const AttrNumber attnum = attdesc->attnum; @@ -225,9 +226,9 @@ YBIsColumnModified(Relation rel, HeapTuple oldtuple, HeapTuple newtuple, bool old_is_null = false; bool new_is_null = false; - TupleDesc relTupdesc = RelationGetDescr(rel); - Datum old_value = heap_getattr(oldtuple, attnum, relTupdesc, &old_is_null); - Datum new_value = heap_getattr(newtuple, attnum, relTupdesc, &new_is_null); + Datum old_value = heap_getattr(oldtuple, attnum, + RelationGetDescr(rel), &old_is_null); + Datum new_value = slot_getattr(newtupleslot, attnum, &new_is_null); return ((old_is_null != new_is_null) || (!old_is_null && @@ -248,7 +249,8 @@ YBIsColumnModified(Relation rel, HeapTuple oldtuple, HeapTuple newtuple, * ---------------------------------------------------------------- */ static void -YBComputeExtraUpdatedCols(Relation rel, HeapTuple oldtuple, HeapTuple newtuple, +YBComputeExtraUpdatedCols(Relation rel, HeapTuple oldtuple, + TupleTableSlot *newtupleslot, Bitmapset *updated_cols, Bitmapset **modified_cols, Bitmapset **unmodified_cols, bool is_update_optimization_enabled, @@ -318,7 +320,7 @@ YBComputeExtraUpdatedCols(Relation rel, HeapTuple oldtuple, HeapTuple newtuple, !bms_is_member(bms_idx, trig_cond_cols)) continue; - if (YBIsColumnModified(rel, oldtuple, newtuple, attdesc)) + if (YBIsColumnModified(rel, oldtuple, newtupleslot, attdesc)) *modified_cols = bms_add_member(*modified_cols, bms_idx); else *unmodified_cols = bms_add_member(*unmodified_cols, bms_idx); @@ -382,7 +384,7 @@ YbUpdateHandleUnmodifiedEntity(YbUpdateAffectedEntities *affected_entities, */ static YbSkippableEntities * YbComputeModifiedEntities(ResultRelInfo *resultRelInfo, HeapTuple oldtuple, - HeapTuple newtuple, Bitmapset **modified_cols, + TupleTableSlot *newtupleslot, Bitmapset **modified_cols, Bitmapset **unmodified_cols, YbUpdateAffectedEntities *affected_entities, YbSkippableEntities *skip_entities) @@ -445,7 +447,7 @@ YbComputeModifiedEntities(ResultRelInfo 
*resultRelInfo, HeapTuple oldtuple, idx); if (!YbIsColumnComparisonAllowed(*modified_cols, *unmodified_cols) || - YBIsColumnModified(rel, oldtuple, newtuple, attdesc)) + YBIsColumnModified(rel, oldtuple, newtupleslot, attdesc)) { /* * If we have already exceeded the max number of columns @@ -502,7 +504,7 @@ YbComputeModifiedColumnsAndSkippableEntities(ModifyTableState *mtstate, ResultRelInfo *resultRelInfo, EState *estate, HeapTuple oldtuple, - HeapTuple newtuple, + TupleTableSlot *newtupleslot, Bitmapset **updated_cols, bool beforeRowUpdateTriggerFired) { @@ -549,7 +551,7 @@ YbComputeModifiedColumnsAndSkippableEntities(ModifyTableState *mtstate, if (mtstate->yb_is_update_optimization_enabled) { - YbComputeModifiedEntities(resultRelInfo, oldtuple, newtuple, + YbComputeModifiedEntities(resultRelInfo, oldtuple, newtupleslot, &modified_cols, &unmodified_cols, plan->yb_update_affected_entities, &estate->yb_skip_entities); @@ -557,7 +559,7 @@ YbComputeModifiedColumnsAndSkippableEntities(ModifyTableState *mtstate, if (beforeRowUpdateTriggerFired) { - YBComputeExtraUpdatedCols(rel, oldtuple, newtuple, *updated_cols, + YBComputeExtraUpdatedCols(rel, oldtuple, newtupleslot, *updated_cols, &modified_cols, &unmodified_cols, mtstate->yb_is_update_optimization_enabled, (plan->operation == CMD_INSERT && diff --git a/src/postgres/src/include/executor/ybOptimizeModifyTable.h b/src/postgres/src/include/executor/ybOptimizeModifyTable.h index 8832685b5530..b7cc80a344d2 100644 --- a/src/postgres/src/include/executor/ybOptimizeModifyTable.h +++ b/src/postgres/src/include/executor/ybOptimizeModifyTable.h @@ -33,7 +33,7 @@ extern void YbComputeModifiedColumnsAndSkippableEntities(ModifyTableState *mtsta ResultRelInfo *resultRelInfo, EState *estate, HeapTuple oldtuple, - HeapTuple newtuple, + TupleTableSlot *newtupleslot, Bitmapset **updatedCols, bool beforeRowUpdateTriggerFired); From 3e65363168d40453016c4c8fe0fd8662de0261d8 Mon Sep 17 00:00:00 2001 From: Fizaa Luthra Date: Mon, 12 May 2025 13:27:44 -0400 Subject: [PATCH 055/146] [#27180] YSQL: Bump pg_stat_statements file header Summary: Since commit ccb70d235eb7e1ac4c3d73e5d82565a3e43d0235 adds new counters to the pg_stat_statements struct and changes the file format, we need to bump the pg_stat_statements file header to correctly handle the upgrade from 2.25 to 2.27. We will drop the existing stats after an upgrade from 2.25 to 2.27. This change will also be backported to 2025.1, so as to preserve stats between 2025.1 and future stable releases (2025.2+). Specifically: - 2024.2 -> 2025.1 - stats will not be preserved (a new PG data directory is created as part of YSQL major upgrade) - 2.25 -> 2.27 - stats will not be preserved (with this change, the old stats file will be discarded) - 2025.1 -> 2025.2 - stats will be preserved Specific changes: - Bump file header to 0x20250425 - Set extended_header_reader (the current reader) as the reader for the new version - Fix asserts in `extended_header_reader` and `read_entry_hdr` to allow the new version - Re-introduce the the check for the file header in `pgss_shmem_startup` (removed by commit c55d458af8fd9a45839f3ea9e76d1c4d016cd4b5) since we no longer want to preserve stats when the header is different (i.e., we want to reset them when upgrading to 2.27). In fact, all of the special file reading logic introduced by commit c55d458af8fd9a45839f3ea9e76d1c4d016cd4b5 can be cleaned up in a future revision, as we no longer need more than one file reader. 
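As a rough illustration of the header check this change reinstates, the following standalone C sketch (assumed file name and simplified single-field layout, not the actual pg_stat_statements reader) discards a stats file whose leading magic number does not match the current 0x20250425 value:

```c
#include <stdint.h>
#include <stdio.h>

/* Value quoted in the summary above; older files carry 0x20230330. */
static const uint32_t CURRENT_FILE_HEADER = 0x20250425;

int
main(void)
{
    /* The file name and layout here are purely illustrative. */
    FILE *f = fopen("pgss_example.stat", "rb");
    if (f == NULL)
        return 0;               /* no saved stats: nothing to load */

    uint32_t header = 0;
    if (fread(&header, sizeof(header), 1, f) != 1 ||
        header != CURRENT_FILE_HEADER)
    {
        /* Stale or unreadable format: drop the saved statistics rather than
         * misparse them, matching the upgrade behavior described above. */
        printf("discarding stats file (header 0x%08x)\n", (unsigned) header);
        fclose(f);
        return 0;
    }

    printf("header matches 0x%08x; entries would be loaded here\n", (unsigned) header);
    fclose(f);
    return 0;
}
```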
Jira: DB-16667 Test Plan: on a 2.25.0 cluster, run some queries to generate data for pg_stat_statements: ``` yugabyte=# SELECT query, yb_latency_histogram FROM pg_stat_statements; query | yb_latency_histogram -------------------------------------------------------+-------------------------------------------------------- INSERT INTO x VALUES ($1), ($2), ($3) | [{"[2.8,3.0)": 1}, {"[4.8,5.2)": 1}] DELETE FROM x | [{"[1.4,1.5)": 1}, {"[3.2,3.6)": 1}] SELECT * FROM pg_stat_statements | [{"[0.3,0.4)": 2}, {"[1.5,1.6)": 1}, {"[1.6,1.8)": 1}] CREATE TABLE x (t int) | [{"[83.2,89.6)": 1}] SELECT * FROM x | [{"[0.7,0.8)": 1}, {"[1.3,1.4)": 1}, {"[1.8,2.0)": 1}] SELECT query, total_plan_time FROM pg_stat_statements | [{"[2.0,2.2)": 1}] (6 rows) ``` restart the cluster and then upgrade to 2.27. without this patch, the file is read incorrectly and there's a junk entry: ``` yugabyte=# SELECT query, yb_latency_histogram FROM pg_stat_statements; query | yb_latency_histogram ------------------------------------------------------------+---------------------- | [] SELECT query, yb_latency_histogram FROM pg_stat_statements | [] (2 rows) ``` with this patch, the entry is discarded: ``` yugabyte=# SELECT query, yb_latency_histogram FROM pg_stat_statements; query | yb_latency_histogram ------------------------------------------------------------+---------------------- SELECT query, yb_latency_histogram FROM pg_stat_statements | [] (1 row) ``` Reviewers: kramanathan Reviewed By: kramanathan Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D43559 --- .../pg_stat_statements/pg_stat_statements.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/src/postgres/contrib/pg_stat_statements/pg_stat_statements.c b/src/postgres/contrib/pg_stat_statements/pg_stat_statements.c index 42c577a917b0..684b6d625d64 100644 --- a/src/postgres/contrib/pg_stat_statements/pg_stat_statements.c +++ b/src/postgres/contrib/pg_stat_statements/pg_stat_statements.c @@ -97,10 +97,10 @@ PG_MODULE_MAGIC; #define PGSS_TEXT_FILE PG_STAT_TMP_DIR "/pgss_query_texts.stat" /* Magic number identifying the stats file format */ -/* YB_TODO() Postgres 15 uses the following number. +/* YB: Postgres 15 uses the following number. static const uint32 PGSS_FILE_HEADER = 0x20220408; */ -static const uint32 PGSS_FILE_HEADER = 0x20230330; +static const uint32 PGSS_FILE_HEADER = 0x20250425; /* PostgreSQL major version number, changes in which invalidate all entries */ static const uint32 PGSS_PG_MAJOR_VERSION = PG_VERSION_NUM / 100; @@ -888,12 +888,15 @@ read_entry_original(int header, FILE *file, FILE *qfile, /* * Parse in post-histogram pgssEntries from disk, throw out histogram parts if * config variables have changed between restarts. + * File header version 0x20230330 can no longer be read as the file format has + * changed between 0x20230330 and 0x20250425. The stats file is discarded in + * pgss_shmem_startup() if the header is not 0x20250425. 
*/ static int read_entry_hdr(int header, FILE *file, FILE *qfile, pgssYbReaderContext *context) { - Assert(header == 0x20230330); + Assert(header == 0x20250425); /* TODO: address case where hdr_histogram size changes due to 3p update */ int prev_entry_total_size = (sizeof(pgssEntry) + @@ -949,7 +952,7 @@ static int extended_header_reader(int header, FILE *file, pgssYbReaderContext *context) { - if (header != 0x20230330) + if (header != 0x20250425) return -1; int64_t temp_yb_hdr_max_value; @@ -988,6 +991,7 @@ pgssYbReader pgssReaderList[] = { {0x20171004, NULL, read_entry_original}, {0x20230330, extended_header_reader, read_entry_hdr}, + {0x20250425, extended_header_reader, read_entry_hdr}, {pgssReaderEndMarker, NULL, NULL} }; @@ -1122,7 +1126,8 @@ pgss_shmem_startup(void) fread(&num, sizeof(int32), 1, file) != 1) goto read_error; - if (pgver != PGSS_PG_MAJOR_VERSION) + if (header != PGSS_FILE_HEADER || + pgver != PGSS_PG_MAJOR_VERSION) goto data_error; pgssYbReader *version_reader = NULL; From 771385dc6768a131c52df896e86de6baf46d4350 Mon Sep 17 00:00:00 2001 From: Timur Yusupov Date: Mon, 12 May 2025 21:11:24 +0300 Subject: [PATCH 056/146] [#26644] docdb: Fix tablet split vs tablet move race Summary: We observed the following scenario leading to one of child tablet peers to have high follower lag and never get updated: 1. Parent tablet peers 1-3 accept (but do not apply) a SPLIT_OP (op_id: 1.4). 2. Parent tablet peers 1-3 accept (but do not apply) a CHANGE_CONFIG_OP (op_id: 1.5) to add a fourth peer. 3. Parent tablet peers 1-3 started applying the SPLIT_OP (1.4) using the committed Raft config with 3 peers. 4. RBS for parent tablet peer 4 started and tablet metadata (`tablet_data_state == TABLET_DATA_READY`) is downloaded. 5. Parent tablet peers 1-3 completed applying the SPLIT_OP (1.4). 6. Parent tablet peers 1-3 applied CHANGE_CONFIG_OP (1.5) and now have committed Raft config with 4 peers. 7. RBS for parent tablet peer 4 downloaded WAL files which have CHANGE_CONFIG_OP (1.5) as committed. 8. Parent tablet peer 4 does local bootstrap and replays SPLIT_OP (1.4) as part of bootstrap. Due to `tablet_data_state` is `TABLET_DATA_READY` but not `TABLET_DATA_SPLIT_COMPLETED` replay does SPLIT_OP apply. During SPLIT_OP apply it uses the last known committed Raft config (with 4 peers) but not the one which was committed before SPLIT_OP. After that, 4th tablet peer is not a part of tablet Raft group and therefore is not receiving consensus updates from leader. Implemented the fix to disallow Raft config membership changes (but still allow leadership/role changes) on the parent tablet LEADER if LEADER has received SPLIT_OP and not yet applied it. This ensures split children tablet peers have the same Raft config. Potentially we can have similar issue when doing RBS from the follower, created https://github.com/yugabyte/yugabyte-db/issues/27056 to investigate and fix if this is a legit issue. 
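The essence of the guard is small; here is a hedged, self-contained C sketch (hypothetical names, not the RaftConsensus implementation) of the rule stated above: while a SPLIT_OP is pending on the leader, membership changes are rejected but pure role changes are still allowed.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical enum mirroring the change types mentioned in the summary. */
typedef enum { CHANGE_ROLE, ADD_SERVER, REMOVE_SERVER } ChangeConfigType;

static bool
config_change_allowed(ChangeConfigType type, bool split_op_pending)
{
    /* Adding or removing a peer while a SPLIT_OP is pending could leave the
     * split children with a different Raft config than the one committed
     * before the split, so it is rejected; role-only changes are harmless. */
    if (type != CHANGE_ROLE && split_op_pending)
        return false;
    return true;
}

int
main(void)
{
    printf("ADD_SERVER while split pending -> %s\n",
           config_change_allowed(ADD_SERVER, true) ? "allowed" : "rejected");
    printf("CHANGE_ROLE while split pending -> %s\n",
           config_change_allowed(CHANGE_ROLE, true) ? "allowed" : "rejected");
    printf("REMOVE_SERVER with no pending split -> %s\n",
           config_change_allowed(REMOVE_SERVER, false) ? "allowed" : "rejected");
    return 0;
}
```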
Jira: DB-16021 Test Plan: TabletSplitITest.SplitWithParentTabletMove Reviewers: asrivastava, bkolagani Reviewed By: asrivastava Subscribers: ybase Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43526 --- src/yb/consensus/raft_consensus.cc | 45 ++-- src/yb/consensus/replica_state.cc | 17 +- src/yb/consensus/replica_state.h | 5 + .../integration-tests/cluster_itest_util.cc | 25 ++- src/yb/integration-tests/cluster_itest_util.h | 4 +- .../external_mini_cluster.cc | 59 ++++-- .../integration-tests/external_mini_cluster.h | 9 +- src/yb/integration-tests/mini_cluster.h | 21 +- src/yb/integration-tests/mini_cluster_base.h | 2 + .../integration-tests/tablet-split-itest.cc | 193 +++++++++++++++++- 10 files changed, 316 insertions(+), 64 deletions(-) diff --git a/src/yb/consensus/raft_consensus.cc b/src/yb/consensus/raft_consensus.cc index 76587827d5ee..3e073fe280f5 100644 --- a/src/yb/consensus/raft_consensus.cc +++ b/src/yb/consensus/raft_consensus.cc @@ -746,14 +746,16 @@ string RaftConsensus::ServersInTransitionMessage() { const RaftConfigPB& committed_config = state_->GetCommittedConfigUnlocked(); auto servers_in_transition = CountServersInTransition(active_config); auto committed_servers_in_transition = CountServersInTransition(committed_config); - LOG(INFO) << Substitute("Active config has $0 and committed has $1 servers in transition.", - servers_in_transition, committed_servers_in_transition); + LOG_WITH_PREFIX(INFO) << Format( + "Active config has $0 and committed has $1 servers in transition.", servers_in_transition, + committed_servers_in_transition); if (servers_in_transition != 0 || committed_servers_in_transition != 0) { - err_msg = Substitute("Leader not ready to step down as there are $0 active config peers" - " in transition, $1 in committed. Configs:\nactive=$2\ncommit=$3", - servers_in_transition, committed_servers_in_transition, - active_config.ShortDebugString(), committed_config.ShortDebugString()); - LOG(INFO) << err_msg; + err_msg = Format( + "Leader not ready to step down as there are $0 active config peers" + " in transition, $1 in committed. Configs:\nactive=$2\ncommit=$3", + servers_in_transition, committed_servers_in_transition, active_config.ShortDebugString(), + committed_config.ShortDebugString()); + LOG_WITH_PREFIX(INFO) << err_msg; } return err_msg; } @@ -2449,18 +2451,23 @@ Status RaftConsensus::IsLeaderReadyForChangeConfigUnlocked(ChangeConfigType type // committed at least one operation in our current term as leader. // See https://groups.google.com/forum/#!topic/raft-dev/t4xj6dJTP6E // 2. Ensure there is no other pending change config. + // 3. Ensure there is no pending split operation (unless we are just changing role, not + // adding/removing Raft members). See https://github.com/yugabyte/yugabyte-db/issues/26644. if (!state_->AreCommittedAndCurrentTermsSameUnlocked() || - state_->IsConfigChangePendingUnlocked()) { - return STATUS_FORMAT(IllegalState, - "Leader is not ready for Config Change, can try again. " - "Type: $0. Has opid: $1. Committed config: $2. " - "Pending config: $3. Current term: $4. Committed op id: $5.", - ChangeConfigType_Name(type), - active_config.has_opid_index(), - state_->GetCommittedConfigUnlocked().ShortDebugString(), - state_->IsConfigChangePendingUnlocked() ? 
- state_->GetPendingConfigUnlocked().ShortDebugString() : "", - state_->GetCurrentTermUnlocked(), state_->GetCommittedOpIdUnlocked()); + state_->IsConfigChangePendingUnlocked() || + (type != CHANGE_ROLE && !state_->GetPendingSplitOpIdUnlocked().empty())) { + return STATUS_FORMAT( + IllegalState, + "Leader is not ready for Config Change, can try again. " + "Type: $0. Has opid: $1. Committed config: $2. " + "Pending config: $3. Current term: $4. Committed op id: $5. Pending split op id: $6", + ChangeConfigType_Name(type), active_config.has_opid_index(), + state_->GetCommittedConfigUnlocked().ShortDebugString(), + state_->IsConfigChangePendingUnlocked() + ? state_->GetPendingConfigUnlocked().ShortDebugString() + : "", + state_->GetCurrentTermUnlocked(), state_->GetCommittedOpIdUnlocked(), + state_->GetPendingSplitOpIdUnlocked()); } // For sys catalog tablet, additionally ensure that there are no servers currently in transition. // If not, it could lead to data loss. @@ -3560,7 +3567,7 @@ void RaftConsensus::NonTrackedRoundReplicationFinished(ConsensusRound* round, // Clear out the pending state (ENG-590). if (IsChangeConfigOperation(op_type) && state_->GetPendingConfigOpIdUnlocked() == round->id()) { - WARN_NOT_OK(state_->ClearPendingConfigUnlocked(), "Could not clear pending state"); + WARN_NOT_OK(state_->ClearPendingConfigUnlocked(), "Could not clear pending config"); } } else if (IsChangeConfigOperation(op_type)) { // Notify the TabletPeer owner object. diff --git a/src/yb/consensus/replica_state.cc b/src/yb/consensus/replica_state.cc index 42f6026eb0dd..07b3fcfb9992 100644 --- a/src/yb/consensus/replica_state.cc +++ b/src/yb/consensus/replica_state.cc @@ -756,6 +756,12 @@ Status ReplicaState::AddPendingOperation(const ConsensusRoundPtr& round, Operati SCHECK_EQ( split_request.tablet_id(), cmeta_->tablet_id(), InvalidArgument, "Received split op for a different tablet."); + SCHECK( + pending_split_op_id_.empty(), InvalidArgument, + Format( + "Received split op ($0) while having another split op pending ($1)", round->id(), + pending_split_op_id_)); + pending_split_op_id_ = round->id(); // TODO(tsplit): if we get failures past this point we can't undo the tablet state. // Might be need some tool to be able to remove SPLIT_OP from Raft log. } @@ -1500,8 +1506,17 @@ void ReplicaState::NotifyReplicationFinishedUnlocked( const ConsensusRoundPtr& round, const Status& status, int64_t leader_term, OpIds* applied_op_ids) { round->NotifyReplicationFinished(status, leader_term, applied_op_ids); - retryable_requests_.ReplicationFinished(*round->replicate_msg(), status, leader_term); + if (OpId::FromPB(round->replicate_msg()->id()) == GetPendingSplitOpIdUnlocked()) { + // There are two cases this can happen: + // 1 - status is OK, that means operation has been applied by the current peer. This relies on + // our current Raft implementation specifics where apply happens inside + // ConsensusRound::NotifyReplicationFinished. + // 2 - status is an error, that means operation has been aborted by the current peer. + // + // In both cases operation is no longer pending, so we should clear pending SPLIT_OP id. 
+ ClearPendingSplitOpIdUnlocked(); + } } consensus::LeaderState ReplicaState::RefreshLeaderStateCacheUnlocked(CoarseTimePoint& now) const { diff --git a/src/yb/consensus/replica_state.h b/src/yb/consensus/replica_state.h index 460bb5c710a1..495668604b71 100644 --- a/src/yb/consensus/replica_state.h +++ b/src/yb/consensus/replica_state.h @@ -357,6 +357,9 @@ class ReplicaState { void SetPendingElectionOpIdUnlocked(const OpId& opid) { pending_election_opid_ = opid; } void ClearPendingElectionOpIdUnlocked() { pending_election_opid_ = OpId(); } + const OpId& GetPendingSplitOpIdUnlocked() { return pending_split_op_id_; } + void ClearPendingSplitOpIdUnlocked() { pending_split_op_id_ = OpId(); } + std::string ToString() const; std::string ToStringUnlocked() const; @@ -527,6 +530,8 @@ class ReplicaState { // If set, a leader election is pending upon the specific op id commitment to this peer's log. OpId pending_election_opid_; + OpId pending_split_op_id_; + State state_ = State::kInitialized; // When a follower becomes the leader, it uses this field to wait out the old leader's lease diff --git a/src/yb/integration-tests/cluster_itest_util.cc b/src/yb/integration-tests/cluster_itest_util.cc index 82f9d9b9385e..52a1ab336387 100644 --- a/src/yb/integration-tests/cluster_itest_util.cc +++ b/src/yb/integration-tests/cluster_itest_util.cc @@ -519,13 +519,28 @@ Status WaitUntilNumberOfAliveTServersEqual(int n_tservers, n_tservers, timeout.ToMilliseconds())); } -Result CreateTabletServerMap(ExternalMiniCluster* cluster) { - auto proxy = cluster->num_masters() > 1 - ? cluster->GetLeaderMasterProxy() - : cluster->GetMasterProxy(); - return CreateTabletServerMap(proxy, &cluster->proxy_cache()); +// TODO: switch ExternalMiniCluster::GetLeaderMaster* to return error if leader is not elected to +// unify with MiniCluster. +Result GetLeaderMasterClusterProxy(MiniCluster* cluster) { + return cluster->num_masters() > 1 + ? VERIFY_RESULT(cluster->GetLeaderMasterProxy()) + : cluster->GetMasterProxy(); } +Result GetLeaderMasterClusterProxy(ExternalMiniCluster* cluster) { + return cluster->num_masters() > 1 ? cluster->GetLeaderMasterProxy() + : cluster->GetMasterProxy(); +} + +template +Result CreateTabletServerMap(MiniClusterType* cluster) { + return CreateTabletServerMap( + VERIFY_RESULT(GetLeaderMasterClusterProxy(cluster)), &cluster->proxy_cache()); +} + +template Result CreateTabletServerMap(MiniCluster* cluster); +template Result CreateTabletServerMap(ExternalMiniCluster* cluster); + Result CreateTabletServerMap( const master::MasterClusterProxy& proxy, rpc::ProxyCache* proxy_cache) { master::ListTabletServersRequestPB req; diff --git a/src/yb/integration-tests/cluster_itest_util.h b/src/yb/integration-tests/cluster_itest_util.h index 17f8de63d2a1..fe51ae526448 100644 --- a/src/yb/integration-tests/cluster_itest_util.h +++ b/src/yb/integration-tests/cluster_itest_util.h @@ -135,7 +135,9 @@ client::YBSchema SimpleIntKeyYBSchema(); // Create a populated TabletServerMap by interrogating the master. 
Result CreateTabletServerMap( const master::MasterClusterProxy& proxy, rpc::ProxyCache* cache); -Result CreateTabletServerMap(ExternalMiniCluster* cluster); + +template +Result CreateTabletServerMap(MiniClusterType* cluster); template auto GetForEachReplica(const std::vector& replicas, diff --git a/src/yb/integration-tests/external_mini_cluster.cc b/src/yb/integration-tests/external_mini_cluster.cc index 5ece1f5e011a..87621ff6dfc8 100644 --- a/src/yb/integration-tests/external_mini_cluster.cc +++ b/src/yb/integration-tests/external_mini_cluster.cc @@ -1785,21 +1785,58 @@ Result> ExternalMiniClu return result; } -Result ExternalMiniCluster::GetTabletStatus( - const ExternalTabletServer& ts, const yb::TabletId& tablet_id) { +namespace { + +rpc::RpcController DefaultRpcController() { rpc::RpcController rpc; rpc.set_timeout(kDefaultTimeout); + return rpc; +} + +Status StatusFromError(const TabletServerErrorPB& error) { + return StatusFromPB(error.status()) + .CloneAndPrepend(Format("Code $0", TabletServerErrorPB::Code_Name(error.code()))); +} + +template +concept HasTabletServerError = requires(Response response) { + { response.error() } -> std::convertible_to; +}; + +template +Result CheckedResponse(const Response& response) { + if (response.has_error()) { + return StatusFromError(response.error()); + } + return response; +} + +} // namespace + +Result ExternalMiniCluster::GetTabletStatus( + const ExternalTabletServer& ts, const yb::TabletId& tablet_id) { + auto rpc = DefaultRpcController(); tserver::GetTabletStatusRequestPB req; req.set_tablet_id(tablet_id); tserver::GetTabletStatusResponsePB resp; RETURN_NOT_OK(GetProxy(&ts).GetTabletStatus(req, &resp, &rpc)); - if (resp.has_error()) { - return StatusFromPB(resp.error().status()).CloneAndPrepend( - Format("Code $0", TabletServerErrorPB::Code_Name(resp.error().code()))); + return CheckedResponse(resp); } - return resp; + +Result ExternalMiniCluster::GetTabletPeerHealth( + const ExternalTabletServer& ts, const std::vector& tablet_ids) { + auto rpc = DefaultRpcController(); + + tserver::CheckTserverTabletHealthRequestPB req; + for (const auto& tablet_id : tablet_ids) { + *req.mutable_tablet_ids()->Add() = tablet_id; + } + + tserver::CheckTserverTabletHealthResponsePB resp; + RETURN_NOT_OK(GetProxy(&ts).CheckTserverTabletHealth(req, &resp, &rpc)); + return CheckedResponse(resp); } Result ExternalMiniCluster::GetSplitKey( @@ -1817,8 +1854,7 @@ Result ExternalMiniCluster::GetSplitKey( // There's a small chance that a leader is changed after GetTabletLeaderIndex() and before // GetSplitKey() is started, in this case we should re-attempt. 
if (response.error().code() != TabletServerErrorPB::NOT_THE_LEADER) { - return StatusFromPB(response.error().status()).CloneAndPrepend( - Format("Code $0", TabletServerErrorPB::Code_Name(response.error().code()))); + return StatusFromError(response.error()); } LOG(WARNING) << Format( @@ -1839,8 +1875,7 @@ Result ExternalMiniCluster::GetSplitKey( tserver::GetSplitKeyResponsePB resp; RETURN_NOT_OK(GetProxy(&ts).GetSplitKey(req, &resp, &rpc)); if (fail_on_response_error && resp.has_error()) { - return StatusFromPB(resp.error().status()).CloneAndPrepend( - Format("Code $0", TabletServerErrorPB::Code_Name(resp.error().code()))); + return StatusFromError(resp.error()); } return resp; } @@ -2225,9 +2260,7 @@ Status ExternalMiniCluster::StartElection(ExternalMaster* master) { rpc.set_timeout(opts_.timeout); RETURN_NOT_OK(master_proxy->RunLeaderElection(req, &resp, &rpc)); if (resp.has_error()) { - return StatusFromPB(resp.error().status()) - .CloneAndPrepend(Format("Code $0", - TabletServerErrorPB::Code_Name(resp.error().code()))); + return StatusFromError(resp.error()); } return Status::OK(); } diff --git a/src/yb/integration-tests/external_mini_cluster.h b/src/yb/integration-tests/external_mini_cluster.h index e5747d381125..64c05188c4f7 100644 --- a/src/yb/integration-tests/external_mini_cluster.h +++ b/src/yb/integration-tests/external_mini_cluster.h @@ -429,9 +429,7 @@ class ExternalMiniCluster : public MiniClusterBase { // Return the client messenger used by the ExternalMiniCluster. rpc::Messenger* messenger(); - rpc::ProxyCache& proxy_cache() { - return *proxy_cache_; - } + rpc::ProxyCache& proxy_cache() override { return *proxy_cache_; } // Get the master leader consensus proxy. consensus::ConsensusServiceProxy GetLeaderConsensusProxy(); @@ -500,7 +498,10 @@ class ExternalMiniCluster : public MiniClusterBase { Result GetSegmentCounts(ExternalTabletServer* ts); Result GetTabletStatus( - const ExternalTabletServer& ts, const yb::TabletId& tablet_id); + const ExternalTabletServer& ts, const TabletId& tablet_id); + + Result GetTabletPeerHealth( + const ExternalTabletServer& ts, const std::vector& tablet_ids); Result GetSplitKey(const yb::TabletId& tablet_id); Result GetSplitKey(const ExternalTabletServer& ts, diff --git a/src/yb/integration-tests/mini_cluster.h b/src/yb/integration-tests/mini_cluster.h index c4e825d8fb33..f7bcf4dd3604 100644 --- a/src/yb/integration-tests/mini_cluster.h +++ b/src/yb/integration-tests/mini_cluster.h @@ -52,6 +52,7 @@ #include "yb/master/master_client.fwd.h" #include "yb/master/master_cluster.proxy.h" #include "yb/master/master_fwd.h" +#include "yb/master/mini_master.h" #include "yb/master/ts_descriptor.h" #include "yb/tablet/tablet_fwd.h" @@ -68,10 +69,6 @@ using namespace std::literals; namespace yb { -namespace master { -class MiniMaster; -} - namespace server { class SkewedClockDeltaChanger; } @@ -277,7 +274,14 @@ class MiniCluster : public MiniClusterBase { Status WaitForLoadBalancerToStabilize(MonoDelta timeout); template - Result GetLeaderMasterProxy(); + Result GetLeaderMasterProxy() { + return T(proxy_cache_.get(), VERIFY_RESULT(DoGetLeaderMasterBoundRpcAddr())); + } + + template + T GetMasterProxy() { + return T(proxy_cache_.get(), mini_master()->bound_rpc_addr()); + } std::string GetClusterId() { return options_.cluster_id; } @@ -292,6 +296,8 @@ class MiniCluster : public MiniClusterBase { std::string GetTabletServerHTTPAddresses() const override; + rpc::ProxyCache& proxy_cache() override { return *proxy_cache_; } + private: void 
ConfigureClientBuilder(client::YBClientBuilder* builder) override; @@ -501,11 +507,6 @@ void DumpDocDB(MiniCluster* cluster, ListPeersFilter filter = ListPeersFilter::k std::vector DumpDocDBToStrings( MiniCluster* cluster, ListPeersFilter filter = ListPeersFilter::kLeaders); -template -Result MiniCluster::GetLeaderMasterProxy() { - return T(proxy_cache_.get(), VERIFY_RESULT(DoGetLeaderMasterBoundRpcAddr())); -} - void DisableFlushOnShutdown(MiniCluster& cluster, bool disable); } // namespace yb diff --git a/src/yb/integration-tests/mini_cluster_base.h b/src/yb/integration-tests/mini_cluster_base.h index 17706f3b2948..e47f25de843e 100644 --- a/src/yb/integration-tests/mini_cluster_base.h +++ b/src/yb/integration-tests/mini_cluster_base.h @@ -65,6 +65,8 @@ class MiniClusterBase { virtual std::string GetTabletServerHTTPAddresses() const = 0; + virtual rpc::ProxyCache& proxy_cache() = 0; + protected: virtual ~MiniClusterBase() = default; diff --git a/src/yb/integration-tests/tablet-split-itest.cc b/src/yb/integration-tests/tablet-split-itest.cc index 4e686caa317d..ded8d97b49b5 100644 --- a/src/yb/integration-tests/tablet-split-itest.cc +++ b/src/yb/integration-tests/tablet-split-itest.cc @@ -100,6 +100,7 @@ #include "yb/util/status_format.h" #include "yb/util/status_log.h" #include "yb/util/sync_point.h" +#include "yb/util/test_thread_holder.h" #include "yb/util/tsan_util.h" using std::string; @@ -154,6 +155,7 @@ DECLARE_int32(TEST_nodes_per_cloud); DECLARE_int32(replication_factor); DECLARE_int32(txn_max_apply_batch_records); DECLARE_int32(TEST_pause_and_skip_apply_intents_task_loop_ms); +DECLARE_bool(TEST_pause_rbs_before_download_wal); DECLARE_bool(TEST_pause_tserver_get_split_key); DECLARE_bool(TEST_reject_delete_not_serving_tablet_rpc); DECLARE_int32(timestamp_history_retention_interval_sec); @@ -2011,9 +2013,7 @@ class AutomaticTabletSplitAddServerITest: public AutomaticTabletSplitITest { } void BuildTServerMap() { - master::MasterClusterProxy master_proxy( - proxy_cache_.get(), cluster_->mini_master()->bound_rpc_addr()); - ts_map_ = ASSERT_RESULT(itest::CreateTabletServerMap(master_proxy, proxy_cache_.get())); + ts_map_ = ASSERT_RESULT(itest::CreateTabletServerMap(cluster_.get())); } void AddTabletToNewTServer(const TabletId& tablet_id, @@ -2574,7 +2574,7 @@ Status TabletSplitSingleServerITest::TestSplitBeforeParentDeletion(bool hide_onl } const auto split_hash_code = VERIFY_RESULT(WriteRowsAndGetMiddleHashCode(kNumRows)); - const TabletId parent_id = VERIFY_RESULT(SplitTabletAndValidate(split_hash_code, kNumRows)); + const auto parent_tablet_id = VERIFY_RESULT(SplitTabletAndValidate(split_hash_code, kNumRows)); auto child_ids = ListActiveTabletIdsForTable(cluster_.get(), table_->id()); auto resp = VERIFY_RESULT(SendMasterRpcSyncSplitTablet(*child_ids.begin())); @@ -2585,7 +2585,7 @@ Status TabletSplitSingleServerITest::TestSplitBeforeParentDeletion(bool hide_onl ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_skip_deleting_split_tablets) = false; auto catalog_mgr = VERIFY_RESULT(catalog_manager()); RETURN_NOT_OK(WaitFor([&]() -> Result { - auto parent = catalog_mgr->GetTabletInfo(parent_id); + auto parent = catalog_mgr->GetTabletInfo(parent_tablet_id); if (!parent.ok()) { if (parent.status().IsNotFound()) { return true; @@ -2857,8 +2857,7 @@ TEST_P(TabletSplitExternalMiniClusterCrashITest, CrashLeaderTest) { ASSERT_OK(WaitForTabletsExcept(2, leader_idx, tablet_id)); // Wait for both child tablets have leaders elected. 
- auto ts_map = ASSERT_RESULT(itest::CreateTabletServerMap( - cluster_->GetLeaderMasterProxy(), &cluster_->proxy_cache())); + auto ts_map = ASSERT_RESULT(itest::CreateTabletServerMap(cluster_.get())); auto tablet_ids = CHECK_RESULT(GetTestTableTabletIds()); for (const auto& id : tablet_ids) { if (id != tablet_id) { @@ -2991,8 +2990,7 @@ TEST_F_EX( const auto kWaitForTabletsRunningTimeout = 20s * kTimeMultiplier; const auto server_to_bootstrap_idx = 0; - auto ts_map = ASSERT_RESULT(itest::CreateTabletServerMap( - cluster_->GetLeaderMasterProxy(), &cluster_->proxy_cache())); + auto ts_map = ASSERT_RESULT(itest::CreateTabletServerMap(cluster_.get())); CreateSingleTablet(); const auto source_tablet_id = CHECK_RESULT(GetOnlyTestTabletId()); @@ -3495,8 +3493,7 @@ TEST_F_EX(TabletSplitITest, SplitOpApplyAfterLeaderChange, TabletSplitExternalMi ASSERT_OK(cluster_->SetFlagOnMasters("enable_load_balancing", "false")); - auto ts_map = ASSERT_RESULT(itest::CreateTabletServerMap( - cluster_->GetLeaderMasterProxy(), &cluster_->proxy_cache())); + auto ts_map = ASSERT_RESULT(itest::CreateTabletServerMap(cluster_.get())); CreateSingleTablet(); ASSERT_OK(WriteRowsAndFlush(kNumRows)); @@ -3791,4 +3788,178 @@ INSTANTIATE_TEST_CASE_P( "TEST_crash_before_source_tablet_mark_split_done", "TEST_crash_after_tablet_split_completed")); +namespace { + +constexpr auto kMaxAcceptableFollowerLagMs = 2000; + +Status CheckFollowerLag( + ExternalMiniCluster& cluster, const itest::TabletServerMap& ts_map, const TabletId& tablet_id, + const std::string& prefix) { + constexpr auto kRpcTimeout = 60s * kTimeMultiplier; + LOG(INFO) << prefix << " " << tablet_id << ":"; + for (auto& tserver : cluster.tserver_daemons()) { + const auto tserver_id = tserver->uuid(); + + consensus::ConsensusStatePB cstate; + auto status = itest::GetConsensusState( + ts_map.find(tserver_id)->second.get(), tablet_id, consensus::CONSENSUS_CONFIG_COMMITTED, + kRpcTimeout, &cstate); + + const auto tablet_peer_str = Format("$0 T $1 P $2", prefix, tablet_id, tserver_id); + if (status.ok()) { + LOG(INFO) << "Committed config for " << tablet_peer_str << " has " + << cstate.config().peers().size() + << " peers: " << cstate.config().ShortDebugString(); + + auto peer_health_result = cluster.GetTabletPeerHealth(*tserver, {tablet_id}); + if (peer_health_result.ok()) { + SCHECK_EQ(peer_health_result->tablet_healths_size(), 1, InternalError, ""); + const auto follower_lag_ms = peer_health_result->tablet_healths(0).follower_lag_ms(); + LOG(INFO) << "Follower lag for " << tablet_peer_str << ": " << follower_lag_ms; + SCHECK_LT(follower_lag_ms, kMaxAcceptableFollowerLagMs, InternalError, ""); + } else if (peer_health_result.status().IsIllegalState()) { + LOG(INFO) << "Not getting follower lag for " << tablet_peer_str << " due to " + << peer_health_result.status(); + } else { + LOG(INFO) << "Error getting follower lag for " << tablet_peer_str << ": " << status; + return peer_health_result.status(); + } + } else if (status.IsNotFound()) { + LOG(INFO) << "Raft consensus for " << tablet_peer_str << " is not found"; + } else if (status.IsIllegalState()) { + LOG(INFO) << "No raft consensus for " << tablet_peer_str << ": " << status; + } else { + LOG(INFO) << "Error getting Raft consensus for " << tablet_peer_str << ": " << status; + return status; + } + } + return Status::OK(); +} + +} // namespace + +TEST_F_EX(TabletSplitITest, SplitWithParentTabletMove, TabletSplitExternalMiniClusterITest) { + constexpr auto kTimeout = 15s * kTimeMultiplier; + 
ASSERT_OK(cluster_->SetFlagOnMasters("enable_load_balancing", "false")); + ASSERT_OK(cluster_->SetFlagOnTServers("TEST_skip_deleting_split_tablets", "true")); + + CreateSingleTablet(); + ASSERT_OK(cluster_->AddTabletServer()); + ASSERT_OK(cluster_->WaitForTabletServerCount(4, kTimeout)); + auto ts_map = ASSERT_RESULT(itest::CreateTabletServerMap(cluster_.get())); + auto* added_tserver = cluster_->tablet_server(3); + const auto added_tserver_id = added_tserver->uuid(); + + ASSERT_OK(WriteRowsAndFlush()); + const auto parent_tablet_id = ASSERT_RESULT(GetOnlyTestTabletId()); + LOG(INFO) << "Parent tablet id: " << parent_tablet_id; + + // Delay committing operations on leader. We want SPLIT_OP to be added to Raft log but not + // applied. And we can't block apply code path because it will hold ReplicaState mutex and block + // Raft functioning for parent tablet, so we won't be able to reproduce the issue because RBS + // won't be triggered. So instead we pause RaftConsensus::UpdateMajorityReplicated on parent + // tablet leader that will pause both advancing committed op id and appling operations. + const auto parent_leader_idx = CHECK_RESULT(cluster_->GetTabletLeaderIndex(parent_tablet_id)); + auto* const parent_leader_tserver = cluster_->tablet_server(parent_leader_idx); + const auto parent_leader_tserver_id = parent_leader_tserver->uuid(); + auto* const parent_leader_tserver_details = ts_map[parent_leader_tserver_id].get(); + ASSERT_OK( + cluster_->SetFlag(parent_leader_tserver, "TEST_pause_update_majority_replicated", "true")); + + OpId last_logged_op_id; + ASSERT_OK(itest::WaitForServerToBeQuiet( + kTimeout, {parent_leader_tserver_details}, parent_tablet_id, &last_logged_op_id, + itest::MustBeCommitted::kFalse)); + + ASSERT_OK(SplitTablet(parent_tablet_id)); + + // Wait for SPLIT_OP to be added to leader Raft log. + ASSERT_OK(itest::WaitForServersToAgree( + kTimeout, {parent_leader_tserver_details}, parent_tablet_id, last_logged_op_id.index + 1, + /* actual_index = */ nullptr, itest::MustBeCommitted::kFalse)); + + ASSERT_OK(cluster_->SetFlag(added_tserver, "TEST_pause_rbs_before_download_wal", "true")); + + LOG(INFO) << "Adding server " << added_tserver_id << " for parent tablet " << parent_tablet_id; + // AddServer RPC only returns when CONFIG_CHANGE_OP is majority replicaed, so we do it async to + // avoid deadlock inside test. + TestThreadHolder thread_holder; + thread_holder.AddThreadFunctor( + [&, added_tserver_details = ts_map[added_tserver_id].get()]() { + auto status = itest::AddServer( + parent_leader_tserver_details, parent_tablet_id, added_tserver_details, + consensus::PeerMemberType::PRE_VOTER, boost::none, kTimeout); + ERROR_NOT_OK(status, "AddServer error: "); + ASSERT_OK(status); + }); + + // Give some time for RBS to start and reach downloading WAL files. We can't wait for this event + // explicitly because with the fix RBS won't happen. + SleepFor(5s * kTimeMultiplier); + + // Unpause RaftConsensus::UpdateMajorityReplicated on parent tablet leader. 
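The comments above explain why `AddServer` has to run on a helper thread: the RPC only returns once the config change is majority-replicated, which cannot happen while `UpdateMajorityReplicated` is paused, so issuing it inline would deadlock the test. As a rough illustration of that pause / blocking-call-on-helper-thread / unpause shape only, here is a minimal, self-contained sketch; `paused`, `ServerLoopStep`, and `BlockingAddServer` are hypothetical stand-ins, not the test flags or RPCs used above.

```cpp
// Minimal sketch: a blocking call that depends on a paused path must run on a
// helper thread, which is unblocked and joined afterwards.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> paused{true};

// Stand-in for the paused server-side path (e.g. advancing majority-replicated).
void ServerLoopStep() {
  while (paused.load()) {
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }
  std::cout << "server path resumed\n";
}

// Stand-in for a call that only returns once the server path has progressed.
void BlockingAddServer() {
  ServerLoopStep();
  std::cout << "AddServer returned\n";
}

int main() {
  // Running the blocking call inline would deadlock: the unpause below would
  // never be reached. So it goes on a helper thread.
  std::thread helper(BlockingAddServer);

  // Give the helper time to reach the blocked state (mirrors the SleepFor above).
  std::this_thread::sleep_for(std::chrono::milliseconds(100));

  paused.store(false);  // unpause, analogous to clearing the TEST_ pause flag
  helper.join();
}
```

In the actual test the pause is a runtime test flag on the tserver process rather than an in-process atomic, but the ordering constraint is the same.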
+ ASSERT_OK( + cluster_->SetFlag(parent_leader_tserver, "TEST_pause_update_majority_replicated", "false")); + + thread_holder.JoinAll(); + NO_PENDING_FATALS(); + + ASSERT_OK(WaitFor([&] -> Result { + const auto parent_leader_idx = CHECK_RESULT(cluster_->GetTabletLeaderIndex(parent_tablet_id)); + const auto parent_leader_tserver_id = cluster_->tablet_server(parent_leader_idx)->uuid(); + + consensus::ConsensusStatePB cstate; + auto status = itest::GetConsensusState( + ts_map[parent_leader_tserver_id].get(), parent_tablet_id, + consensus::CONSENSUS_CONFIG_COMMITTED, kRpcTimeout, &cstate); + + if (!status.ok()) { + return false; + } + + return cstate.config().peers_size() == 4; + }, kTimeout, "Wait for parent tablet peer to have committed Raft config with 4 peers")); + + ASSERT_OK(cluster_->SetFlag(added_tserver, "TEST_pause_rbs_before_download_wal", "false")); + ASSERT_OK(WaitUntilTabletRunning(ts_map[added_tserver_id].get(), parent_tablet_id, kTimeout)); + LOG(INFO) << "Parent tablet peer on added tserver has completed bootstrap"; + + ASSERT_OK(CheckFollowerLag(*cluster_, ts_map, parent_tablet_id, "Parent tablet")); + + const auto test_table_id = ASSERT_RESULT(GetTestTableId()); + std::vector child_tablet_ids; + ASSERT_OK(WaitFor( + [&] -> Result { + auto tablets_resp = VERIFY_RESULT(cluster_->ListTablets(added_tserver)); + for (const auto& status_and_schema : tablets_resp.status_and_schema()) { + const auto& tablet_status = status_and_schema.tablet_status(); + if (tablet_status.table_id() != test_table_id) { + continue; + } + if (tablet_status.tablet_id() == parent_tablet_id) { + continue; + } + // Child tablet + return tablet_status.state() == tablet::RaftGroupStatePB::RUNNING; + } + return true; + }, + kTimeout, + "Wait for child tablets to become either running or deleted (won't be listed by ListTablets) " + "on added tserver")); + + SleepFor((kMaxAcceptableFollowerLagMs + 100) * 1ms); + + master::GetTableLocationsResponsePB resp; + ASSERT_OK(itest::GetTableLocations( + cluster_.get(), table_->name(), kTimeout, RequireTabletsRunning::kFalse, &resp)); + + for (const auto& tablet_loc : resp.tablet_locations()) { + const auto& tablet_id = tablet_loc.tablet_id(); + ASSERT_OK(CheckFollowerLag( + *cluster_, ts_map, tablet_id, + std::string(tablet_id == parent_tablet_id ? "Parent" : "Child") + " tablet")); + } +} + } // namespace yb From 62d49fa6d5c6ff8ce4d94d065033ebafe6668b82 Mon Sep 17 00:00:00 2001 From: Daniel Shubin Date: Tue, 13 May 2025 21:14:49 +0000 Subject: [PATCH 057/146] [PLAT-15864] node agent preflight check to handle ntpd correctly Summary: Our comparison for if the ntpd reported skew is valid was incorrect. 
I updated it to match the other preflight check script Test Plan: tested change on node-agent managed node Reviewers: nsingh Reviewed By: nsingh Differential Revision: https://phorge.dev.yugabyte.com/D43964 --- managed/node-agent/resources/preflight_check.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/managed/node-agent/resources/preflight_check.sh b/managed/node-agent/resources/preflight_check.sh index e874fc06f0e3..d46c6b3c8053 100755 --- a/managed/node-agent/resources/preflight_check.sh +++ b/managed/node-agent/resources/preflight_check.sh @@ -183,7 +183,7 @@ check_ntp_synchronization() { else update_result_json "ntp_service_status" false fi - if [[ $skew_ms -lt 400 ]]; then + if awk "BEGIN{exit !(${skew_ms} < 400)}"; then update_result_json "ntp_skew" true else update_result_json "ntp_skew" false From cd8e6fbfb660a1dd451d0988210548f465e56654 Mon Sep 17 00:00:00 2001 From: Hari Krishna Sunder Date: Tue, 13 May 2025 17:10:57 -0700 Subject: [PATCH 058/146] [#27185] YSQL: Adding gFlag to simulate the behavior of DDL blocking Summary: FLAGS_TEST_ysql_block_writes_to_catalog is added to simulate the behavior of DDLs during YSQL major upgrade. Fixes #27185 Jira: DB-16671 Test Plan: Jenkins Reviewers: dsherstobitov, telgersma Reviewed By: dsherstobitov, telgersma Subscribers: ybase Differential Revision: https://phorge.dev.yugabyte.com/D43970 --- src/yb/master/ysql/ysql_initdb_major_upgrade_handler.cc | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/src/yb/master/ysql/ysql_initdb_major_upgrade_handler.cc b/src/yb/master/ysql/ysql_initdb_major_upgrade_handler.cc index 715180a6e029..00058575b352 100644 --- a/src/yb/master/ysql/ysql_initdb_major_upgrade_handler.cc +++ b/src/yb/master/ysql/ysql_initdb_major_upgrade_handler.cc @@ -66,6 +66,9 @@ DEFINE_RUNTIME_bool(ysql_upgrade_import_stats, false, DEFINE_test_flag(bool, ysql_fail_cleanup_previous_version_catalog, false, "Fail the cleanup of the previous version ysql catalog"); +DEFINE_test_flag(bool, ysql_block_writes_to_catalog, false, + "Block writes to the catalog tables like we would during a ysql major upgrade"); + using yb::pgwrapper::PgWrapper; #define SCHECK_YSQL_ENABLED SCHECK(FLAGS_enable_ysql, IllegalState, "YSQL is not enabled") @@ -289,6 +292,10 @@ bool YsqlInitDBAndMajorUpgradeHandler::IsWriteToCatalogTableAllowed( return is_forced_update; } + if (FLAGS_TEST_ysql_block_writes_to_catalog) { + return is_forced_update; + } + // If we are not in the middle of a major upgrade then only allow updates to the current // version. 
return IsCurrentVersionYsqlCatalogTable(table_id); From d0a77bcd2b96eacd938d9c099e7a3b5e1d4f78ea Mon Sep 17 00:00:00 2001 From: Yury Shchetinin Date: Wed, 14 May 2025 21:10:27 +0300 Subject: [PATCH 059/146] [PLAT-17159] Fix tests Summary: Fix tests Test Plan: sbt tests Reviewers: hzare, nsingh Reviewed By: nsingh Subscribers: yugaware Differential Revision: https://phorge.dev.yugabyte.com/D43991 --- .../com/yugabyte/yw/commissioner/HealthCheckerTest.java | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/managed/src/test/java/com/yugabyte/yw/commissioner/HealthCheckerTest.java b/managed/src/test/java/com/yugabyte/yw/commissioner/HealthCheckerTest.java index 55ec60b1c15d..0ff53d511ef8 100644 --- a/managed/src/test/java/com/yugabyte/yw/commissioner/HealthCheckerTest.java +++ b/managed/src/test/java/com/yugabyte/yw/commissioner/HealthCheckerTest.java @@ -153,6 +153,12 @@ public void setUp() { when(mockConfGetter.getConfForScope( any(Universe.class), eq(UniverseConfKeys.ddlAtomicityIntervalSec))) .thenReturn(3600); + when(mockConfGetter.getConfForScope( + any(Universe.class), eq(UniverseConfKeys.healthCollectTopKOtherProcessesCount))) + .thenReturn(0); + when(mockConfGetter.getConfForScope( + any(Universe.class), eq(UniverseConfKeys.healthCollectTopKOtherProcessesMemThreshold))) + .thenReturn(0); when(mockConfGetter.getGlobalConf(eq(GlobalConfKeys.backwardCompatibleDate))).thenReturn(false); when(mockFileHelperService.createTempFile(anyString(), anyString())) .thenAnswer( From a0875be34887e3d876144c590e1854d8fe2a3ef5 Mon Sep 17 00:00:00 2001 From: Jethro Mak <88681329+Jethro-M@users.noreply.github.com> Date: Mon, 12 May 2025 18:14:11 -0400 Subject: [PATCH 060/146] [PLAT-17578] Move support bundle modal out of the universe action menu Summary: The universe action menu is a child of the universe details tab component. The tab component captures left and right arrow key presses to change the current tab. Since the support bundle modal sits in the universe action menu, the user's left/right keypresses in the modal get captured and result in current tab changing instead of moving the cursor left/right in the modal fields. The issue of universe action menu living inside the universe details tab is a long-standing one. I opened a separate ticket for changing how we structure the universe action menu + universe tabs: https://yugabyte.atlassian.net/browse/PLAT-17584 Previously the support bundle used react-bootstrap modal component which prevented keyboard events like left/right arrow key from impacting components outside of the modal. To resolve the immediate issue of left/right arrow keys changing tabs while support bundle modal is open, this diff moves MUI-based support bundle modal outside of the universe action menu. This will help prevent the left/right keypress events from triggering unwanted tab changes. Test Plan: Press left/right arrow key in the support bundle modal and observe no tab changes. Successfully create a support bundle. 
{F356064} Reviewers: rmadhavan, lsangappa, kkannan, asharma, skurapati Reviewed By: rmadhavan, asharma Differential Revision: https://phorge.dev.yugabyte.com/D43932 --- .../UniverseDetail/UniverseDetail.js | 45 +++--- .../SecondStep/SecondStep.js | 2 +- ...undle.js => UniverseSupportBundleModal.js} | 132 ++++++++---------- ...e.scss => UniverseSupportBundleModal.scss} | 0 4 files changed, 83 insertions(+), 96 deletions(-) rename managed/ui/src/components/universes/UniverseSupportBundle/{UniverseSupportBundle.js => UniverseSupportBundleModal.js} (69%) rename managed/ui/src/components/universes/UniverseSupportBundle/{UniverseSupportBundle.scss => UniverseSupportBundleModal.scss} (100%) diff --git a/managed/ui/src/components/universes/UniverseDetail/UniverseDetail.js b/managed/ui/src/components/universes/UniverseDetail/UniverseDetail.js index a3ea9794af48..71631b15fcfd 100644 --- a/managed/ui/src/components/universes/UniverseDetail/UniverseDetail.js +++ b/managed/ui/src/components/universes/UniverseDetail/UniverseDetail.js @@ -47,7 +47,7 @@ import { } from '../../../utils/LayoutUtils'; import { SecurityMenu } from '../SecurityModal/SecurityMenu'; import { UniverseLevelBackup } from '../../backupv2/Universe/UniverseLevelBackup'; -import { UniverseSupportBundle } from '../UniverseSupportBundle/UniverseSupportBundle'; +import { UniverseSupportBundleModal } from '../UniverseSupportBundle/UniverseSupportBundleModal'; import { XClusterReplication } from '../../xcluster/XClusterReplication'; import { EncryptionAtRest } from '../../../redesign/features/universe/universe-actions/encryption-at-rest/EncryptionAtRest'; import { EncryptionInTransit } from '../../../redesign/features/universe/universe-actions/encryption-in-transit/EncryptionInTransit'; @@ -1267,11 +1267,10 @@ class UniverseDetail extends Component { - {(featureFlags.test['supportBundle'] || - featureFlags.released['supportBundle']) && ( - <> - - {!universePaused && ( + {(featureFlags.test['supportBundle'] || featureFlags.released['supportBundle']) && + !universePaused && ( + <> + - - - Support Bundles - - - } - /> + + + Support Bundles + + - )} - - )} + + )} @@ -1798,6 +1789,12 @@ class UniverseDetail extends Component { isReinstall={!isNodeAgentMissing} /> + + { +export const UniverseSupportBundleModal = (props) => { const { currentUniverse: { universeDetails }, - button, closeModal, modal: { showModal, visibleModal } } = props; @@ -135,71 +132,64 @@ export const UniverseSupportBundle = (props) => { const isSubmitDisabled = steps === stepsObj.secondStep && payload?.components?.length === 0; return ( - - {isEmptyObject(button) ? ( - - ) : ( - button - )} - { - saveSupportBundle(universeDetails.universeUUID); - } - : undefined - } - buttonProps={{ primary: { disabled: isSubmitDisabled } }} - > -
- {steps === stepsObj.firstStep && ( - { - handleStepChange(stepsObj.secondStep); - }} - universeUUID={universeDetails.universeUUID} - /> - )} - {steps === stepsObj.secondStep && ( - { - if (selectedOptions) { - setPayload(selectedOptions); - } else { - setPayload(defaultOptions); - } - }} - payload={payload} - universeUUID={universeDetails.universeUUID} - isK8sUniverse={isK8sUniverse} - universeStatus={getUniverseStatus(props.currentUniverse)} - /> - )} - {steps === stepsObj.thirdStep && ( - - handleDownloadBundle(universeDetails.universeUUID, bundleUUID) - } - handleDeleteBundle={(bundleUUID) => - handleDeleteBundle(universeDetails.universeUUID, bundleUUID) + { + saveSupportBundle(universeDetails.universeUUID); + } + : undefined + } + buttonProps={{ primary: { disabled: isSubmitDisabled } }} + > +
+ {steps === stepsObj.firstStep && ( + { + handleStepChange(stepsObj.secondStep); + }} + universeUUID={universeDetails.universeUUID} + /> + )} + {steps === stepsObj.secondStep && ( + { + if (selectedOptions) { + setPayload(selectedOptions); + } else { + setPayload(defaultOptions); } - supportBundles={supportBundles} - onCreateSupportBundle={() => { - handleStepChange(stepsObj.secondStep); - }} - universeUUID={universeDetails.universeUUID} - /> - )} -
-
- + }} + payload={payload} + universeUUID={universeDetails.universeUUID} + isK8sUniverse={isK8sUniverse} + universeStatus={getUniverseStatus(props.currentUniverse)} + /> + )} + {steps === stepsObj.thirdStep && ( + + handleDownloadBundle(universeDetails.universeUUID, bundleUUID) + } + handleDeleteBundle={(bundleUUID) => + handleDeleteBundle(universeDetails.universeUUID, bundleUUID) + } + supportBundles={supportBundles} + onCreateSupportBundle={() => { + handleStepChange(stepsObj.secondStep); + }} + universeUUID={universeDetails.universeUUID} + /> + )} +
+
); }; @@ -242,4 +232,4 @@ function mapStateToProps(state) { }; } -export default connect(mapStateToProps)(UniverseSupportBundle); +export default connect(mapStateToProps)(UniverseSupportBundleModal); diff --git a/managed/ui/src/components/universes/UniverseSupportBundle/UniverseSupportBundle.scss b/managed/ui/src/components/universes/UniverseSupportBundle/UniverseSupportBundleModal.scss similarity index 100% rename from managed/ui/src/components/universes/UniverseSupportBundle/UniverseSupportBundle.scss rename to managed/ui/src/components/universes/UniverseSupportBundle/UniverseSupportBundleModal.scss From ae4bf8aada7931ded5b4f0e2d969e92b698d2445 Mon Sep 17 00:00:00 2001 From: Daniel Shubin Date: Tue, 6 May 2025 20:43:38 +0000 Subject: [PATCH 061/146] [PLAT-17361][PLAT-17426] Improved some yba installer config handling Summary: first, the config `platform.additional` now is correctly used when generating YBA config files. Second, as_root config now has a sane default during install. This handles the case where an existing config file is used for a new install and does not contain the as_root entry. It defaults to the user that ran the install command. Test Plan: install test without as_root in the config upgrade test Reviewers: muthu Reviewed By: muthu Differential Revision: https://phorge.dev.yugabyte.com/D43882 --- managed/yba-installer/cmd/init.go | 8 +++++++- managed/yba-installer/pkg/config/templatedConfig.go | 2 +- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/managed/yba-installer/cmd/init.go b/managed/yba-installer/cmd/init.go index 9a697404f45e..11ecc6e6acdc 100644 --- a/managed/yba-installer/cmd/init.go +++ b/managed/yba-installer/cmd/init.go @@ -138,7 +138,13 @@ func handleRootCheck(cmdName string) { } else if user.Uid != "0" && err == nil { log.Fatal("Detected root install at /opt/yba-ctl, cannot upgrade as non-root") } - log.Debug("legacy root check passed for upgrade") + log.Debug(fmt.Sprintf("legacy root check passed for %s", cmdName)) + + // Also handle the case where a config file is provided but did not include as_root + if err := common.SetYamlValue(common.InputFile(), "as_root", user.Uid == "0"); err != nil { + log.Warn("Failed to set as_root in config file, please set it manually") + log.Fatal("Failed to set as_root in config file: " + err.Error()) + } return } else if user.Uid == "0" && !viper.GetBool("as_root") { log.Fatal("running as root user with 'as_root' set to false is not supported") diff --git a/managed/yba-installer/pkg/config/templatedConfig.go b/managed/yba-installer/pkg/config/templatedConfig.go index c1ef004863e8..092ccf470a08 100644 --- a/managed/yba-installer/pkg/config/templatedConfig.go +++ b/managed/yba-installer/pkg/config/templatedConfig.go @@ -160,7 +160,7 @@ func GenerateTemplate(component common.Component) error { defer file.Close() // Add the additional raw text to yb-platform.conf if it exists. 
- additionalEntryString := strings.TrimSuffix(GetYamlPathData(".platform.additional"), "\n") + additionalEntryString := strings.TrimSuffix(GetYamlPathData("platform.additional"), "\n") log.DebugLF("Writing addition data to yb-platform config: " + additionalEntryString) if _, err := file.WriteString(additionalEntryString); err != nil { From 9f757a19122dfa681217101fdc3af8f7a97a8931 Mon Sep 17 00:00:00 2001 From: Pradeep Kumar Gayam Date: Wed, 14 May 2025 16:41:33 -0700 Subject: [PATCH 062/146] [#1348] DocDB: Add validator for leader lease to be larger than heartbeat interval (#25747) This PR brings in delayed validators to make sure raft_heartbeat_interval_ms is less than leader_lease_duration_ms. Fixes #1348 --- src/yb/consensus/raft_consensus.cc | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-) diff --git a/src/yb/consensus/raft_consensus.cc b/src/yb/consensus/raft_consensus.cc index 3e073fe280f5..640f3a4c4202 100644 --- a/src/yb/consensus/raft_consensus.cc +++ b/src/yb/consensus/raft_consensus.cc @@ -71,6 +71,7 @@ #include "yb/util/debug/trace_event.h" #include "yb/util/enums.h" #include "yb/util/flags.h" +#include "yb/util/flag_validators.h" #include "yb/util/format.h" #include "yb/util/logging.h" #include "yb/util/memory/memory.h" @@ -90,7 +91,7 @@ using namespace std::literals; using namespace std::placeholders; -DEFINE_UNKNOWN_int32(raft_heartbeat_interval_ms, yb::NonTsanVsTsan(500, 1000), +DEFINE_NON_RUNTIME_int32(raft_heartbeat_interval_ms, yb::NonTsanVsTsan(500, 1000), "The heartbeat interval for Raft replication. The leader produces heartbeats " "to followers at this interval. The followers expect a heartbeat at this interval " "and consider a leader to have failed if it misses several in a row."); @@ -190,12 +191,24 @@ METRIC_DEFINE_event_stats( yb::MetricUnit::kMicroseconds, "Microseconds spent resolving DNS requests during RaftConsensus::UpdateRaftConfig"); -DEFINE_UNKNOWN_int32(leader_lease_duration_ms, yb::consensus::kDefaultLeaderLeaseDurationMs, +DEFINE_NON_RUNTIME_int32(leader_lease_duration_ms, yb::consensus::kDefaultLeaderLeaseDurationMs, "Leader lease duration. A leader keeps establishing a new lease or extending the " "existing one with every UpdateConsensus. A new server is not allowed to serve as a " "leader (i.e. serve up-to-date read requests or acknowledge write requests) until a " "lease of this duration has definitely expired on the old leader's side."); +DEFINE_validator(leader_lease_duration_ms, + FLAG_DELAYED_COND_VALIDATOR( + FLAGS_raft_heartbeat_interval_ms < _value, + yb::Format("Must be strictly greater than raft_heartbeat_interval_ms: $0", + FLAGS_raft_heartbeat_interval_ms))); + +DEFINE_validator(raft_heartbeat_interval_ms, + FLAG_DELAYED_COND_VALIDATOR( + _value < FLAGS_leader_lease_duration_ms, + yb::Format("Must be strictly less than leader_lease_duration_ms: $0", + FLAGS_leader_lease_duration_ms))); + DEFINE_UNKNOWN_int32(ht_lease_duration_ms, 2000, "Hybrid time leader lease duration. A leader keeps establishing a new lease or " "extending the existing one with every UpdateConsensus. 
A new server is not allowed " From dfbc2a43899fd55f3005a0fda6b28a747de934e0 Mon Sep 17 00:00:00 2001 From: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> Date: Wed, 14 May 2025 19:57:25 -0400 Subject: [PATCH 063/146] [DOC-761] earthdistance and cube (#27191) * earthdistance and cube * edit --- .../explore/ysql-language-features/pg-extensions/_index.md | 2 ++ docs/content/preview/yugabyte-cloud/_index.md | 2 +- .../explore/ysql-language-features/pg-extensions/_index.md | 2 ++ .../explore/ysql-language-features/pg-extensions/_index.md | 2 ++ .../explore/ysql-language-features/pg-extensions/_index.md | 2 ++ 5 files changed, 9 insertions(+), 1 deletion(-) diff --git a/docs/content/preview/explore/ysql-language-features/pg-extensions/_index.md b/docs/content/preview/explore/ysql-language-features/pg-extensions/_index.md index c7a921bbf5ae..a607ba457f97 100644 --- a/docs/content/preview/explore/ysql-language-features/pg-extensions/_index.md +++ b/docs/content/preview/explore/ysql-language-features/pg-extensions/_index.md @@ -36,6 +36,8 @@ YugabyteDB supports the following [PostgreSQL modules](https://www.postgresql.or | Module | Description | | :----- | :---------- | | [auto_explain](extension-auto-explain/) | Provides a means for logging execution plans of slow statements automatically. | +| cube| Implements a data type cube for representing multidimensional cubes.
For more information, see [cube](https://www.postgresql.org/docs/15/cube.html) in the PostgreSQL documentation. | +| earthdistance| Provides two different approaches to calculating great circle distances on the surface of the Earth.
For more information, see [earthdistance](https://www.postgresql.org/docs/15/earthdistance.html) in the PostgreSQL documentation. | | [file_fdw](extension-file-fdw/) | Provides the foreign-data wrapper file_fdw, which can be used to access data files in the server's file system. | | [fuzzystrmatch](extension-fuzzystrmatch/) | Provides several functions to determine similarities and distance between strings. | | hstore | Implements the hstore data type for storing sets of key-value pairs in a single PostgreSQL value.
For more information, see [hstore](https://www.postgresql.org/docs/15/hstore.html) in the PostgreSQL documentation. | diff --git a/docs/content/preview/yugabyte-cloud/_index.md b/docs/content/preview/yugabyte-cloud/_index.md index add89b1373cc..098ad8ec82f1 100644 --- a/docs/content/preview/yugabyte-cloud/_index.md +++ b/docs/content/preview/yugabyte-cloud/_index.md @@ -30,7 +30,7 @@ YugabyteDB Managed is now YugabyteDB Aeon! [Learn more](https://www.yugabyte.com {{< sections/2-boxes >}} {{< sections/bottom-image-box title="Sign up to create a Sandbox cluster" - description="Sign up, log in, and follow the built-in tutorial to create your first cluster and build a sample application. No credit card required." + description="Sign up, log in, and follow the built-in tutorial to create your first cluster, and build a sample application. No credit card required." buttonText="Sign up" buttonTarget="_blank" buttonUrl="https://cloud.yugabyte.com/signup?utm_medium=direct&utm_source=docs&utm_campaign=YBM_signup" diff --git a/docs/content/stable/explore/ysql-language-features/pg-extensions/_index.md b/docs/content/stable/explore/ysql-language-features/pg-extensions/_index.md index 5cb39aa6a987..038d779cbd7b 100644 --- a/docs/content/stable/explore/ysql-language-features/pg-extensions/_index.md +++ b/docs/content/stable/explore/ysql-language-features/pg-extensions/_index.md @@ -31,6 +31,8 @@ YugabyteDB supports the following [PostgreSQL modules](https://www.postgresql.or | Module | Description | | :----- | :---------- | | [auto_explain](extension-auto-explain/) | Provides a means for logging execution plans of slow statements automatically. | +| cube| Implements a data type cube for representing multidimensional cubes.
For more information, see [cube](https://www.postgresql.org/docs/11/cube.html) in the PostgreSQL documentation. | +| earthdistance| Provides two different approaches to calculating great circle distances on the surface of the Earth.
For more information, see [earthdistance](https://www.postgresql.org/docs/11/earthdistance.html) in the PostgreSQL documentation. | | [file_fdw](extension-file-fdw/) | Provides the foreign-data wrapper file_fdw, which can be used to access data files in the server's file system. | | [fuzzystrmatch](extension-fuzzystrmatch/) | Provides several functions to determine similarities and distance between strings. | | hstore | Implements the hstore data type for storing sets of key-value pairs in a single PostgreSQL value.
For more information, see [hstore](https://www.postgresql.org/docs/11/hstore.html) in the PostgreSQL documentation. | diff --git a/docs/content/v2.20/explore/ysql-language-features/pg-extensions/_index.md b/docs/content/v2.20/explore/ysql-language-features/pg-extensions/_index.md index 46a5e1e95234..668e7dff4ae8 100644 --- a/docs/content/v2.20/explore/ysql-language-features/pg-extensions/_index.md +++ b/docs/content/v2.20/explore/ysql-language-features/pg-extensions/_index.md @@ -31,6 +31,8 @@ YugabyteDB supports the following [PostgreSQL modules](https://www.postgresql.or | Module | Description | | :----- | :---------- | | [auto_explain](extension-auto-explain/) | Provides a means for logging execution plans of slow statements automatically. | +| cube| Implements a data type cube for representing multidimensional cubes.
For more information, see [cube](https://www.postgresql.org/docs/11/cube.html) in the PostgreSQL documentation. | +| earthdistance| Provides two different approaches to calculating great circle distances on the surface of the Earth.
For more information, see [earthdistance](https://www.postgresql.org/docs/11/earthdistance.html) in the PostgreSQL documentation. | | [file_fdw](extension-file-fdw/) | Provides the foreign-data wrapper file_fdw, which can be used to access data files in the server's file system. | | [fuzzystrmatch](extension-fuzzystrmatch/) | Provides several functions to determine similarities and distance between strings. | | hstore | Implements the hstore data type for storing sets of key-value pairs in a single PostgreSQL value.
For more information, see [hstore](https://www.postgresql.org/docs/11/hstore.html) in the PostgreSQL documentation. | diff --git a/docs/content/v2024.1/explore/ysql-language-features/pg-extensions/_index.md b/docs/content/v2024.1/explore/ysql-language-features/pg-extensions/_index.md index ff9a024ad25a..64c87a390732 100644 --- a/docs/content/v2024.1/explore/ysql-language-features/pg-extensions/_index.md +++ b/docs/content/v2024.1/explore/ysql-language-features/pg-extensions/_index.md @@ -31,6 +31,8 @@ YugabyteDB supports the following [PostgreSQL modules](https://www.postgresql.or | Module | Description | | :----- | :---------- | | [auto_explain](extension-auto-explain/) | Provides a means for logging execution plans of slow statements automatically. | +| cube| Implements a data type cube for representing multidimensional cubes.
For more information, see [cube](https://www.postgresql.org/docs/11/cube.html) in the PostgreSQL documentation. | +| earthdistance| Provides two different approaches to calculating great circle distances on the surface of the Earth.
For more information, see [earthdistance](https://www.postgresql.org/docs/11/earthdistance.html) in the PostgreSQL documentation. | | [file_fdw](extension-file-fdw/) | Provides the foreign-data wrapper file_fdw, which can be used to access data files in the server's file system. | | [fuzzystrmatch](extension-fuzzystrmatch/) | Provides several functions to determine similarities and distance between strings. | | hstore | Implements the hstore data type for storing sets of key-value pairs in a single PostgreSQL value.
For more information, see [hstore](https://www.postgresql.org/docs/11/hstore.html) in the PostgreSQL documentation. | From 5a28966ead4e261dc92cace93f34292bac1cbea2 Mon Sep 17 00:00:00 2001 From: svarshney Date: Wed, 14 May 2025 23:52:51 +0530 Subject: [PATCH 064/146] [PLAT-17511] configure ybc using node-agent Summary: configure ybc using node-agent Test Plan: iTest pipeline Reviewers: nsingh Reviewed By: nsingh Subscribers: svc_phabricator Differential Revision: https://phorge.dev.yugabyte.com/D43898 --- managed/node-agent/app/server/rpc.go | 11 + .../node-agent/app/task/install_software.go | 8 +- managed/node-agent/app/task/install_ybc.go | 218 ++++++++++++++++++ managed/node-agent/proto/server.proto | 2 + managed/node-agent/proto/yb.proto | 12 + .../tasks/UniverseDefinitionTaskBase.java | 2 +- .../subtasks/AnsibleConfigureServers.java | 133 +++++++++-- .../yugabyte/yw/common/NodeAgentClient.java | 16 ++ .../com/yugabyte/yw/common/NodeManager.java | 32 +-- 9 files changed, 404 insertions(+), 30 deletions(-) create mode 100644 managed/node-agent/app/task/install_ybc.go diff --git a/managed/node-agent/app/server/rpc.go b/managed/node-agent/app/server/rpc.go index 408f7ec431ba..837a545a521c 100644 --- a/managed/node-agent/app/server/rpc.go +++ b/managed/node-agent/app/server/rpc.go @@ -350,6 +350,17 @@ func (server *RPCServer) SubmitTask( res.TaskId = taskID return res, nil } + installYbcInput := req.GetInstallYbcInput() + if installYbcInput != nil { + installYbcHandler := task.NewInstallYbcHandler(installYbcInput, username) + err := task.GetTaskManager().Submit(ctx, taskID, installYbcHandler) + if err != nil { + util.FileLogger().Errorf(ctx, "Error in running install ybc - %s", err.Error()) + return res, status.Error(codes.Internal, err.Error()) + } + res.TaskId = taskID + return res, nil + } return res, status.Error(codes.Unimplemented, "Unknown task") } diff --git a/managed/node-agent/app/task/install_software.go b/managed/node-agent/app/task/install_software.go index d123bb06aefa..1c1507efdf0b 100644 --- a/managed/node-agent/app/task/install_software.go +++ b/managed/node-agent/app/task/install_software.go @@ -72,6 +72,12 @@ func (h *InstallSoftwareHandler) Handle(ctx context.Context) (*pb.DescribeTaskRe return nil, err } + if len(h.param.GetSymLinkFolders()) == 0 { + err := errors.New("server process is required") + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + // 1) extract all the names and paths up front pkgName := filepath.Base(ybPkg) pkgFolder := helpers.ExtractArchiveFolderName(pkgName) @@ -183,7 +189,7 @@ func (h *InstallSoftwareHandler) setupSymlinks( home string, ybSoftwareDir string, ) error { - processes := []string{"master", "tserver"} + processes := h.param.GetSymLinkFolders() files, err := helpers.ListDirectoryContent(ybSoftwareDir) if err != nil { return err diff --git a/managed/node-agent/app/task/install_ybc.go b/managed/node-agent/app/task/install_ybc.go new file mode 100644 index 000000000000..ee43b141fd2b --- /dev/null +++ b/managed/node-agent/app/task/install_ybc.go @@ -0,0 +1,218 @@ +// Copyright (c) YugaByte, Inc. 
+ +package task + +import ( + "context" + "errors" + "fmt" + "node-agent/app/task/helpers" + pb "node-agent/generated/service" + "node-agent/util" + "path/filepath" +) + +type InstallYbcHandler struct { + shellTask *ShellTask + param *pb.InstallYbcInput + username string + logOut util.Buffer +} + +func NewInstallYbcHandler(param *pb.InstallYbcInput, username string) *InstallYbcHandler { + return &InstallYbcHandler{ + param: param, + username: username, + logOut: util.NewBuffer(MaxBufferCapacity), + } +} + +// helper that wraps NewShellTaskWithUser + Process + error logging +func (h *InstallYbcHandler) runShell( + ctx context.Context, + desc, shell string, + args []string, +) error { + h.logOut.WriteLine("Running install YBC phase: %s", desc) + h.shellTask = NewShellTaskWithUser(desc, h.username, shell, args) + _, err := h.shellTask.Process(ctx) + if err != nil { + util.FileLogger().Errorf(ctx, + "Install YBC failed [%s]: %s", desc, err) + return err + } + return nil +} + +// CurrentTaskStatus implements the AsyncTask method. +func (h *InstallYbcHandler) CurrentTaskStatus() *TaskStatus { + return &TaskStatus{ + Info: h.logOut, + ExitStatus: &ExitStatus{}, + } +} + +func (h *InstallYbcHandler) String() string { + return "Install YBC Task" +} + +func (h *InstallYbcHandler) execSetupYBCCommands( + ctx context.Context, + ybcPackagePath, ybcSoftwareDir, ybcControllerDir string, +) error { + /* + ybcPackagePath - Points to the current location where the YBC package is stored + ybcSoftwareDir - Points to the location where YBC package will be stored. + Example - /home/yugabyte/yb-software/ybc-2.2.0.2-b2-linux-x86_64 + ybcControllerDir - YBC directory on the node, /home/yugabyte/controller. + */ + + steps := []struct { + desc string + cmd string + }{ + {"clean-ybc-software-dir", fmt.Sprintf("rm -rf %s", ybcSoftwareDir)}, + { + "make-ybc-software-dir", + fmt.Sprintf( + "mkdir -p %s && chown %s:%s %s && chmod 0755 %s", + ybcSoftwareDir, + h.username, + h.username, + ybcSoftwareDir, + ybcSoftwareDir, + ), + }, + { + "untar-ybc-software", + fmt.Sprintf( + "tar --no-same-owner -xzvf %s --strip-components=1 -C %s", + ybcPackagePath, + ybcSoftwareDir, + ), + }, + { + "make-controller-dir", + fmt.Sprintf( + "mkdir -p %s && chown %s:%s %s && chmod 0755 %s", + ybcControllerDir, + h.username, + h.username, + ybcControllerDir, + ybcControllerDir, + ), + }, + {"remove-temp-package", fmt.Sprintf("rm -rf %s", ybcPackagePath)}, + } + + for _, step := range steps { + if err := h.runShell(ctx, step.desc, util.DefaultShell, []string{"-c", step.cmd}); err != nil { + return err + } + } + return nil +} + +func (h *InstallYbcHandler) execConfigureYBCCommands( + ctx context.Context, + ybcSoftwareDir, ybcControllerDir string, +) error { + mountPoint := "" + if len(h.param.GetMountPoints()) > 0 { + mountPoint = h.param.GetMountPoints()[0] + } + steps := []struct { + desc string + cmd string + }{ + { + "setup-bin-symlink", + fmt.Sprintf( + "ln -sf %s %s", + filepath.Join(ybcSoftwareDir, "bin"), + filepath.Join(ybcControllerDir, "bin"), + ), + }, + { + "create-ybc-logs-dir-mount-path", + fmt.Sprintf( + "mkdir -p %s && chown %s:%s %s && chmod 0755 %s", + filepath.Join(mountPoint, "ybc-data/controller/logs"), + h.username, + h.username, + filepath.Join(mountPoint, "ybc-data/controller/logs"), + filepath.Join(mountPoint, "ybc-data/controller/logs"), + ), + }, + { + "create-logs-dir-symlinks", + fmt.Sprintf( + "ln -sf %s %s", + filepath.Join(mountPoint, "ybc-data/controller/logs"), + filepath.Join(ybcControllerDir), + ), + }, + { 
+ "create-ybc-conf-dir", + fmt.Sprintf( + "mkdir -p %s && chown %s:%s %s && chmod 0755 %s", + filepath.Join(ybcControllerDir, "conf"), + h.username, + h.username, + filepath.Join(ybcControllerDir, "conf"), + filepath.Join(ybcControllerDir, "conf"), + ), + }, + } + + for _, step := range steps { + if err := h.runShell(ctx, step.desc, util.DefaultShell, []string{"-c", step.cmd}); err != nil { + return err + } + } + return nil +} + +func (h *InstallYbcHandler) Handle(ctx context.Context) (*pb.DescribeTaskResponse, error) { + util.FileLogger().Info(ctx, "Starting install YBC handler.") + + ybcPkg := h.param.GetYbcPackage() + if ybcPkg == "" { + err := errors.New("ybPackage is required") + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + // 1) extract all the names and paths up front + pkgName := filepath.Base(ybcPkg) + pkgFolder := helpers.ExtractArchiveFolderName(pkgName) + + // 2) figure out home dir + home := "" + if h.param.GetYbHomeDir() != "" { + home = h.param.GetYbHomeDir() + } else { + err := errors.New("ybHomeDir is required") + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + ybcSoftwareDir := filepath.Join(home, "yb-software", pkgFolder) + ybcControllerDir := filepath.Join(h.param.GetYbHomeDir(), "controller") + // 3) Put the ybc software at the desired location. + ybcPackagePath := filepath.Join(h.param.GetRemoteTmp(), pkgName) + err := h.execSetupYBCCommands(ctx, ybcPackagePath, ybcSoftwareDir, ybcControllerDir) + if err != nil { + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + // 4) Configure the ybc package. + err = h.execConfigureYBCCommands(ctx, ybcSoftwareDir, ybcControllerDir) + if err != nil { + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + return nil, nil +} diff --git a/managed/node-agent/proto/server.proto b/managed/node-agent/proto/server.proto index 2da935689fa0..e0c0870f51e4 100644 --- a/managed/node-agent/proto/server.proto +++ b/managed/node-agent/proto/server.proto @@ -58,6 +58,7 @@ message SubmitTaskRequest { ConfigureServiceInput configureServiceInput = 6; InstallSoftwareInput installSoftwareInput = 7; ServerGFlagsInput serverGFlagsInput = 8; + InstallYbcInput installYbcInput = 9; } } @@ -79,6 +80,7 @@ message DescribeTaskResponse { ConfigureServiceOutput configureServiceOutput = 6; InstallSoftwareOutput installSoftwareOutput = 7; ServerGFlagsOutput serverGFlagsOutput = 8; + InstallYbcOutput installYbcOutput = 9; } } diff --git a/managed/node-agent/proto/yb.proto b/managed/node-agent/proto/yb.proto index 0edd8bc7ba0b..869ee1dff97d 100644 --- a/managed/node-agent/proto/yb.proto +++ b/managed/node-agent/proto/yb.proto @@ -111,6 +111,7 @@ message InstallSoftwareInput { string iTestS3PackagePath = 9; string remoteTmp = 10; string ybHomeDir = 11; + repeated string symLinkFolders = 12; } message InstallSoftwareOutput { @@ -126,3 +127,14 @@ message ServerGFlagsInput { message ServerGFlagsOutput { } + +message InstallYbcInput { + string ybcPackage = 1; + string remoteTmp = 2; + string ybHomeDir = 3; + repeated string mountPoints = 4; +} + +message InstallYbcOutput { + int32 pid = 1; +} diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/UniverseDefinitionTaskBase.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/UniverseDefinitionTaskBase.java index b280131f2ca3..8ca88ca42338 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/UniverseDefinitionTaskBase.java +++ 
b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/UniverseDefinitionTaskBase.java @@ -2767,7 +2767,7 @@ public void createStartTserverProcessTasks( */ public void createStartYbcProcessTasks(Set nodesToBeStarted, boolean isSystemd) { // Create Start yb-controller tasks for non-systemd only - if (!isSystemd) { + if (!isSystemd || confGetter.getGlobalConf(GlobalConfKeys.nodeAgentEnableConfigureServer)) { createStartYbcTasks(nodesToBeStarted).setSubTaskGroupType(SubTaskGroupType.ConfigureUniverse); } diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java index b994abf2d9a0..411f9b9906e9 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java @@ -29,6 +29,7 @@ import com.yugabyte.yw.common.config.ProviderConfKeys; import com.yugabyte.yw.common.gflags.GFlagsUtil; import com.yugabyte.yw.common.utils.FileUtils; +import com.yugabyte.yw.common.utils.Pair; import com.yugabyte.yw.forms.CertsRotateParams.CertRotationType; import com.yugabyte.yw.forms.UniverseDefinitionTaskParams.Cluster; import com.yugabyte.yw.forms.UniverseDefinitionTaskParams.UserIntent; @@ -48,16 +49,20 @@ import com.yugabyte.yw.models.helpers.UpgradeDetails.YsqlMajorVersionUpgradeState; import com.yugabyte.yw.models.helpers.audit.AuditLogConfig; import com.yugabyte.yw.nodeagent.InstallSoftwareInput; +import com.yugabyte.yw.nodeagent.InstallYbcInput; import com.yugabyte.yw.nodeagent.ServerGFlagsInput; import java.nio.file.Path; import java.nio.file.Paths; +import java.util.Arrays; import java.util.HashMap; import java.util.HashSet; +import java.util.List; import java.util.Map; import java.util.Optional; import java.util.Set; import java.util.UUID; import java.util.function.Supplier; +import java.util.stream.Collectors; import javax.annotation.Nullable; import javax.inject.Inject; import lombok.extern.slf4j.Slf4j; @@ -74,6 +79,19 @@ protected AnsibleConfigureServers( this.releaseManager = releaseManager; } + private String getYbPackage(ReleaseContainer release, Architecture arch, Region region) { + String ybServerPackage = null; + if (release != null) { + if (arch != null) { + ybServerPackage = release.getFilePath(arch); + } else { + ybServerPackage = release.getFilePath(region); + } + } + + return ybServerPackage; + } + private InstallSoftwareInput.Builder fillYbReleaseMetadata( Universe universe, Provider provider, @@ -85,20 +103,32 @@ private InstallSoftwareInput.Builder fillYbReleaseMetadata( NodeAgent nodeAgent, String customTmpDirectory) { Map envConfig = CloudInfoInterface.fetchEnvVars(provider); - String ybServerPackage = null; ReleaseContainer release = releaseManager.getReleaseByVersion(ybSoftwareVersion); - if (release != null) { - if (arch != null) { - ybServerPackage = release.getFilePath(arch); - } else { - ybServerPackage = release.getFilePath(region); - } - } + String ybServerPackage = getYbPackage(release, arch, region); installSoftwareInputBuilder.setYbPackage(ybServerPackage); if (release.isS3Download(ybServerPackage)) { installSoftwareInputBuilder.setS3RemoteDownload(true); - installSoftwareInputBuilder.setAwsAccessKey(envConfig.get("AWS_ACCESS_KEY_ID")); - installSoftwareInputBuilder.setAwsSecretKey(envConfig.get("AWS_SECRET_ACCESS_KEY")); + String accessKey = System.getenv("AWS_ACCESS_KEY_ID"); + if 
(accessKey == null || accessKey.isEmpty()) { + // Todo: This will be removed once iTest moves to new release API. + accessKey = release.getAwsAccessKey(arch); + if (accessKey == null) { + accessKey = envConfig.get("AWS_ACCESS_KEY_ID"); + } + } + + String secretKey = System.getenv("AWS_SECRET_ACCESS_KEY"); + if (secretKey == null || secretKey.isEmpty()) { + secretKey = release.getAwsSecretKey(arch); + if (secretKey == null) { + secretKey = envConfig.get("AWS_ACCESS_KEY_ID"); + } + } + + if (accessKey != null) { + installSoftwareInputBuilder.setAwsAccessKey(accessKey); + installSoftwareInputBuilder.setAwsSecretKey(secretKey); + } } else if (release.isGcsDownload(ybServerPackage)) { installSoftwareInputBuilder.setGcsRemoteDownload(true); // Upload the Credential json to the remote host. @@ -136,6 +166,13 @@ private InstallSoftwareInput.Builder fillYbReleaseMetadata( installSoftwareInputBuilder.setYbPackage( customTmpDirectory + "/" + Paths.get(ybServerPackage).getFileName().toString()); } + if (!node.isInPlacement(universe.getUniverseDetails().getPrimaryCluster().uuid)) { + // For RR we don't setup master + installSoftwareInputBuilder.addSymLinkFolders("tserver"); + } else { + installSoftwareInputBuilder.addSymLinkFolders("tserver"); + installSoftwareInputBuilder.addSymLinkFolders("master"); + } installSoftwareInputBuilder.setRemoteTmp(customTmpDirectory); installSoftwareInputBuilder.setYbHomeDir(provider.getYbHome()); return installSoftwareInputBuilder; @@ -163,6 +200,60 @@ private InstallSoftwareInput setupInstallSoftwareBits( return installSoftwareInputBuilder.build(); } + private InstallYbcInput setupInstallYbcSoftwareBits( + Universe universe, NodeDetails nodeDetails, Params taskParams, NodeAgent nodeAgent) { + InstallYbcInput.Builder installYbcInputBuilder = InstallYbcInput.newBuilder(); + ReleaseContainer release = releaseManager.getReleaseByVersion(taskParams.ybSoftwareVersion); + String ybServerPackage = + getYbPackage(release, universe.getUniverseDetails().arch, taskParams.getRegion()); + Cluster cluster = universe.getCluster(nodeDetails.placementUuid); + Provider provider = Provider.getOrBadRequest(UUID.fromString(cluster.userIntent.provider)); + String customTmpDirectory = + confGetter.getConfForScope(provider, ProviderConfKeys.remoteTmpDirectory); + String ybcPackage = null; + Pair ybcPackageDetails = + Util.getYbcPackageDetailsFromYbServerPackage(ybServerPackage); + String stableYbc = confGetter.getGlobalConf(GlobalConfKeys.ybcStableVersion); + ReleaseManager.ReleaseMetadata releaseMetadata = + releaseManager.getYbcReleaseByVersion( + stableYbc, ybcPackageDetails.getFirst(), ybcPackageDetails.getSecond()); + if (releaseMetadata == null) { + throw new RuntimeException( + String.format("Ybc package metadata: %s cannot be empty with ybc enabled", stableYbc)); + } + if (universe.getUniverseDetails().arch != null) { + ybcPackage = releaseMetadata.getFilePath(universe.getUniverseDetails().arch); + } else { + // Fallback to region in case arch is not present + ybcPackage = releaseMetadata.getFilePath(taskParams.getRegion()); + } + if (StringUtils.isBlank(ybcPackage)) { + throw new RuntimeException("Ybc package cannot be empty with ybc enabled"); + } + installYbcInputBuilder.setYbcPackage(ybcPackage); + nodeAgentClient.uploadFile( + nodeAgent, + ybcPackage, + customTmpDirectory + "/" + Paths.get(ybcPackage).getFileName().toString(), + "yugabyte", + 0, + null); + installYbcInputBuilder.setRemoteTmp(customTmpDirectory); + installYbcInputBuilder.setYbHomeDir(provider.getYbHome()); + List 
mountPoints = Arrays.asList("/mnt/d0"); + if (taskParams.deviceInfo != null && taskParams.deviceInfo.mountPoints != null) { + String mountPointsStr = taskParams.deviceInfo.mountPoints; + Arrays.stream(mountPointsStr.split(",")) + .map(String::trim) + .filter(s -> !s.isEmpty()) + .collect(Collectors.toList()); + } + for (String mp : mountPoints) { + installYbcInputBuilder.addMountPoints(mp); + } + return installYbcInputBuilder.build(); + } + public static class Params extends NodeTaskParams { public UpgradeTaskType type = UpgradeTaskType.Everything; public String ybSoftwareVersion = null; @@ -276,7 +367,9 @@ && taskParams().isMasterInShellMode getUniverse(), nodeDetails, true /*check feature flag*/) : Optional.empty(); taskParams().skipDownloadSoftware = optional.isPresent(); - if (optional.isPresent() && taskParams().type == UpgradeTaskType.GFlags) { + if (optional.isPresent() + && (taskParams().type == UpgradeTaskType.GFlags + || taskParams().type == UpgradeTaskType.YbcGFlags)) { log.info("Updating gflags using node agent {}", optional.get()); runServerGFlagsWithNodeAgent(optional.get(), universe, nodeDetails); return; @@ -288,13 +381,22 @@ && taskParams().isMasterInShellMode .processErrors(); if (optional.isPresent() && (taskParams().type == UpgradeTaskType.Everything - || taskParams().type == UpgradeTaskType.Software - || taskParams().type == UpgradeTaskType.YbcGFlags)) { + || taskParams().type == UpgradeTaskType.Software)) { log.info("Installing software using node agent {}", optional.get()); nodeAgentClient.runInstallSoftware( optional.get(), setupInstallSoftwareBits(universe, nodeDetails, taskParams(), optional.get()), "yugabyte"); + + if (taskParams().isEnableYbc()) { + log.info("Installing YBC using node agent {}", optional.get()); + nodeAgentClient.runInstallYbcSoftware( + optional.get(), + setupInstallYbcSoftwareBits(universe, nodeDetails, taskParams(), optional.get()), + "yugabyte"); + runServerGFlagsWithNodeAgent( + optional.get(), universe, nodeDetails, ServerType.CONTROLLER.toString()); + } } if (taskParams().type == UpgradeTaskType.Everything && !taskParams().updateMasterAddrsOnly) { @@ -351,6 +453,11 @@ private void runServerGFlagsWithNodeAgent( && !processType.equals(ServerType.TSERVER.toString())) { throw new RuntimeException("Invalid processType: " + processType); } + runServerGFlagsWithNodeAgent(nodeAgent, universe, nodeDetails, processType); + } + + private void runServerGFlagsWithNodeAgent( + NodeAgent nodeAgent, Universe universe, NodeDetails nodeDetails, String processType) { String serverName = processType.toLowerCase(); String serverHome = Paths.get(nodeUniverseManager.getYbHomeDir(nodeDetails, universe), serverName).toString(); diff --git a/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java b/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java index e5d5573de80c..23ac1ab244ee 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java +++ b/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java @@ -15,6 +15,8 @@ import com.google.protobuf.Descriptors.FieldDescriptor; import com.typesafe.config.Config; import com.yugabyte.yw.commissioner.NodeAgentEnabler; +import com.yugabyte.yw.common.NodeAgentClient.ChannelFactory; +import com.yugabyte.yw.common.NodeAgentClient.GrpcClientRequestInterceptor; import com.yugabyte.yw.common.certmgmt.CertificateHelper; import com.yugabyte.yw.common.config.GlobalConfKeys; import com.yugabyte.yw.common.config.ProviderConfKeys; @@ -37,6 +39,8 @@ import 
com.yugabyte.yw.nodeagent.FileInfo; import com.yugabyte.yw.nodeagent.InstallSoftwareInput; import com.yugabyte.yw.nodeagent.InstallSoftwareOutput; +import com.yugabyte.yw.nodeagent.InstallYbcInput; +import com.yugabyte.yw.nodeagent.InstallYbcOutput; import com.yugabyte.yw.nodeagent.NodeAgentGrpc; import com.yugabyte.yw.nodeagent.NodeAgentGrpc.NodeAgentBlockingStub; import com.yugabyte.yw.nodeagent.NodeAgentGrpc.NodeAgentStub; @@ -925,6 +929,18 @@ public InstallSoftwareOutput runInstallSoftware( return runAsyncTask(nodeAgent, builder.build(), InstallSoftwareOutput.class); } + public InstallYbcOutput runInstallYbcSoftware( + NodeAgent nodeAgent, InstallYbcInput input, String user) { + SubmitTaskRequest.Builder builder = + SubmitTaskRequest.newBuilder() + .setInstallYbcInput(input) + .setTaskId(UUID.randomUUID().toString()); + if (StringUtils.isNotBlank(user)) { + builder.setUser(user); + } + return runAsyncTask(nodeAgent, builder.build(), InstallYbcOutput.class); + } + public ServerGFlagsOutput runServerGFlags( NodeAgent nodeAgent, ServerGFlagsInput input, String user) { SubmitTaskRequest.Builder builder = diff --git a/managed/src/main/java/com/yugabyte/yw/common/NodeManager.java b/managed/src/main/java/com/yugabyte/yw/common/NodeManager.java index ae6dc52b0f02..9c3ee8e5cd3b 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/NodeManager.java +++ b/managed/src/main/java/com/yugabyte/yw/common/NodeManager.java @@ -1044,15 +1044,15 @@ private List getConfigureSubCommand( if (!taskParam.skipDownloadSoftware) { subcommand.add("--package"); subcommand.add(ybServerPackage); - } - if (taskParam.isEnableYbc()) { - subcommand.add("--ybc_flags"); - subcommand.add(Json.stringify(Json.toJson(ybcFlags))); - subcommand.add("--configure_ybc"); - subcommand.add("--ybc_package"); - subcommand.add(ybcPackage); - subcommand.add("--ybc_dir"); - subcommand.add(ybcDir); + if (taskParam.isEnableYbc()) { + subcommand.add("--ybc_flags"); + subcommand.add(Json.stringify(Json.toJson(ybcFlags))); + subcommand.add("--configure_ybc"); + subcommand.add("--ybc_package"); + subcommand.add(ybcPackage); + subcommand.add("--ybc_dir"); + subcommand.add(ybcDir); + } } if (!node.isInPlacement(universe.getUniverseDetails().getPrimaryCluster().uuid)) { // For RR we don't setup master @@ -1099,9 +1099,11 @@ private List getConfigureSubCommand( if (taskParam.isEnableYbc()) { subcommand.add("--ybc_flags"); subcommand.add(Json.stringify(Json.toJson(ybcFlags))); - subcommand.add("--configure_ybc"); - subcommand.add("--ybc_package"); - subcommand.add(ybcPackage); + if (!taskParam.skipDownloadSoftware) { + subcommand.add("--configure_ybc"); + subcommand.add("--ybc_package"); + subcommand.add(ybcPackage); + } subcommand.add("--ybc_dir"); subcommand.add(ybcDir); } @@ -1406,12 +1408,12 @@ private List getConfigureSubCommand( if (!taskParam.skipDownloadSoftware) { subcommand.add("--package"); subcommand.add(ybServerPackage); + subcommand.add("--ybc_package"); + subcommand.add(ybcPackage); + subcommand.add("--configure_ybc"); } - subcommand.add("--ybc_package"); - subcommand.add(ybcPackage); subcommand.add("--ybc_flags"); subcommand.add(Json.stringify(Json.toJson(ybcFlags))); - subcommand.add("--configure_ybc"); subcommand.add("--ybc_dir"); subcommand.add(ybcDir); subcommand.add("--tags"); From e94f4eb2a8ed45d6e275edd1d231c4c68257beb0 Mon Sep 17 00:00:00 2001 From: Vipul Bansal Date: Tue, 13 May 2025 09:03:23 +0000 Subject: [PATCH 065/146] [PLAT-17244][PLAT-17586]: Store upgrade details in audit table Summary: Added changes to store the 
finalizeRequired and ysqlMajorVersionUpgrade fields in the audit table. YBM has moved from persistent disk to ephemeral volumes for YBA, which wipes out the metadata extracted from the DB package. So, if YBM still uses the precheck API, then YBA downloads the file and returns the result, but this causes a timeout in YBM. So, YBM will run the precheck/software upgrade API, and then they can extract this info. Also, added the changes to make PG upgrade check work if auth is disabled in userIntent but enabled through gflags override. Test Plan: Tested manually that requested data is available in the audit table and the PG upgrade check works if auth is enabled using gflags override. https://jenkins.dev.yugabyte.com/view/Dev/job/dev-itest-pipeline/1840/ Reviewers: daniel, amindrov, sanketh Reviewed By: amindrov Subscribers: yugaware Differential Revision: https://phorge.dev.yugabyte.com/D43954 --- .../tasks/subtasks/check/CheckUpgrade.java | 41 ++++++++++++++++++- .../subtasks/check/PGUpgradeTServerCheck.java | 30 ++++++++++++-- .../yugabyte/yw/common/gflags/GFlagsUtil.java | 12 ++++++ 3 files changed, 78 insertions(+), 5 deletions(-) diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/check/CheckUpgrade.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/check/CheckUpgrade.java index fdb5998ed2be..e330451bbcaf 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/check/CheckUpgrade.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/check/CheckUpgrade.java @@ -5,6 +5,9 @@ import static play.mvc.Http.Status.BAD_REQUEST; import static play.mvc.Http.Status.INTERNAL_SERVER_ERROR; +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.node.ObjectNode; import com.google.inject.Inject; import com.typesafe.config.Config; import com.yugabyte.yw.commissioner.BaseTaskDependencies; @@ -13,12 +16,15 @@ import com.yugabyte.yw.commissioner.tasks.subtasks.ServerSubTaskBase; import com.yugabyte.yw.common.PlatformServiceException; import com.yugabyte.yw.common.ReleaseManager; +import com.yugabyte.yw.common.SoftwareUpgradeHelper; import com.yugabyte.yw.common.Util; +import com.yugabyte.yw.common.audit.AuditService; import com.yugabyte.yw.common.gflags.AutoFlagUtil; import com.yugabyte.yw.common.gflags.GFlagsUtil; import com.yugabyte.yw.common.gflags.GFlagsValidation; import com.yugabyte.yw.forms.UniverseDefinitionTaskParams.Cluster; import com.yugabyte.yw.forms.UniverseDefinitionTaskParams.UserIntent; +import com.yugabyte.yw.models.Audit; import com.yugabyte.yw.models.Universe; import com.yugabyte.yw.models.helpers.CommonUtils; import com.yugabyte.yw.models.helpers.NodeDetails; @@ -41,6 +47,8 @@ public class CheckUpgrade extends ServerSubTaskBase { private final ReleaseManager releaseManager; private final Config appConfig; private final AutoFlagUtil autoFlagUtil; + private final AuditService auditService; + private final SoftwareUpgradeHelper softwareUpgradeHelper; @Inject protected CheckUpgrade( @@ -48,12 +56,16 @@ protected CheckUpgrade( Config appConfig, GFlagsValidation gFlagsValidation, ReleaseManager releaseManager, - AutoFlagUtil autoFlagUtil) { + AutoFlagUtil autoFlagUtil, + SoftwareUpgradeHelper softwareUpgradeHelper, + AuditService auditService) { super(baseTaskDependencies); this.appConfig = appConfig; this.gFlagsValidation = gFlagsValidation; this.releaseManager = releaseManager; this.autoFlagUtil = autoFlagUtil; + 
this.softwareUpgradeHelper = softwareUpgradeHelper; + this.auditService = auditService; } public static class Params extends ServerSubTaskParams { @@ -87,6 +99,33 @@ public void run() { // Check if YSQL major version upgrade is allowed. validateYSQLMajorUpgrade(universe, oldVersion, newVersion); + + // Update the audit details with upgrade info. + updateAuditDetails(oldVersion, newVersion); + } + + private void updateAuditDetails(String oldVersion, String newVersion) { + Audit audit = auditService.getFromTaskUUID(getUserTaskUUID()); + if (audit == null) { + log.info("Audit not found for task UUID: {}", getUserTaskUUID()); + return; + } + JsonNode auditDetails = audit.getAdditionalDetails(); + ObjectNode modifiedNode; + if (auditDetails != null) { + modifiedNode = auditDetails.deepCopy(); + } else { + ObjectMapper mapper = new ObjectMapper(); + modifiedNode = mapper.createObjectNode(); + } + modifiedNode.put( + "finalizeRequired", + String.valueOf(softwareUpgradeHelper.checkUpgradeRequireFinalize(oldVersion, newVersion))); + modifiedNode.put( + "ysqlMajorVersionUpgrade", + String.valueOf(gFlagsValidation.ysqlMajorVersionUpgrade(oldVersion, newVersion))); + + auditService.updateAdditionalDetails(getUserTaskUUID(), modifiedNode); } private void validateAutoflag(Universe universe, String newVersion) { diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/check/PGUpgradeTServerCheck.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/check/PGUpgradeTServerCheck.java index e8748ccfff8e..9995221a1c6b 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/check/PGUpgradeTServerCheck.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/check/PGUpgradeTServerCheck.java @@ -5,6 +5,7 @@ import com.fasterxml.jackson.databind.JsonNode; import com.fasterxml.jackson.databind.ObjectMapper; import com.fasterxml.jackson.databind.SerializationFeature; +import com.fasterxml.jackson.databind.node.ObjectNode; import com.google.common.collect.ImmutableList; import com.google.inject.Inject; import com.yugabyte.yw.cloud.PublicCloudConstants.Architecture; @@ -26,10 +27,12 @@ import com.yugabyte.yw.common.audit.AuditService; import com.yugabyte.yw.common.config.ProviderConfKeys; import com.yugabyte.yw.common.config.UniverseConfKeys; +import com.yugabyte.yw.common.gflags.GFlagsUtil; import com.yugabyte.yw.forms.UniverseDefinitionTaskParams; import com.yugabyte.yw.forms.UniverseDefinitionTaskParams.UserIntent; import com.yugabyte.yw.forms.UpgradeTaskParams.UpgradeTaskSubType; import com.yugabyte.yw.forms.UpgradeTaskParams.UpgradeTaskType; +import com.yugabyte.yw.models.Audit; import com.yugabyte.yw.models.AvailabilityZone; import com.yugabyte.yw.models.Provider; import com.yugabyte.yw.models.Universe; @@ -306,7 +309,8 @@ private void runCheckOnNode(Universe universe, NodeDetails node) { } command.add(pgDataDir); command.add("--old-host"); - if (primaryCluster.userIntent.enableYSQLAuth) { + boolean authEnabled = GFlagsUtil.isYsqlAuthEnabled(universe, node); + if (authEnabled) { command.add(String.format("'$(ls -d -t %s/.yb.* | head -1)'", customTmpDirectory)); } else { command.add(node.cloudInfo.private_ip); @@ -340,9 +344,9 @@ private void runCheckOnNode(Universe universe, NodeDetails node) { "Reading PG15 upgrade check logs on node: {} with command: {}", node.nodeName, command); ShellResponse readLogsResponse = nodeUniverseManager.runCommand(node, universe, readLogsCommand, context).processErrors(); - JsonNode output = 
parsePGUpgradeOutput(readLogsResponse.extractRunCommandOutput()); + ObjectNode output = parsePGUpgradeOutput(readLogsResponse.extractRunCommandOutput()); log.info("PG upgrade check output on node: {} is: {}", node.nodeName, output); - auditService.updateAdditionalDetails(getUserTaskUUID(), output); + appendAuditDetails(output); if (output != null && output.has("overallStatus") && output.get("overallStatus").asText().equals("Failure, exiting")) { @@ -354,6 +358,24 @@ private void runCheckOnNode(Universe universe, NodeDetails node) { } } + private void appendAuditDetails(ObjectNode output) { + Audit audit = auditService.getFromTaskUUID(getUserTaskUUID()); + if (audit == null) { + return; + } + JsonNode auditDetails = audit.getAdditionalDetails(); + ObjectNode modifiedNode; + if (auditDetails != null) { + modifiedNode = auditDetails.deepCopy(); + } else { + ObjectMapper mapper = new ObjectMapper(); + modifiedNode = mapper.createObjectNode(); + } + modifiedNode.setAll(output); + log.debug("Software upgrade task audit details: {}", modifiedNode); + auditService.updateAdditionalDetails(getUserTaskUUID(), modifiedNode); + } + private String extractVersionName(String ybServerPackage) { return extractPackageName(ybServerPackage).replace(".tar.gz", ""); } @@ -444,7 +466,7 @@ private AnsibleConfigureServers.Params getAnsibleConfigureServerParamsToDownload * Failure, exiting" * */ - public static JsonNode parsePGUpgradeOutput(String input) { + public static ObjectNode parsePGUpgradeOutput(String input) { Map result = new HashMap<>(); String[] lines = input.split("\n"); String title = lines[0].trim(); diff --git a/managed/src/main/java/com/yugabyte/yw/common/gflags/GFlagsUtil.java b/managed/src/main/java/com/yugabyte/yw/common/gflags/GFlagsUtil.java index 4b1df7de108e..44f6a5dc6bf9 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/gflags/GFlagsUtil.java +++ b/managed/src/main/java/com/yugabyte/yw/common/gflags/GFlagsUtil.java @@ -1883,4 +1883,16 @@ public static String getLogLinePrefix(String pgConfCsv) { } return DEFAULT_LOG_LINE_PREFIX; } + + public static boolean isYsqlAuthEnabled(Universe universe, NodeDetails node) { + Map gflags = + GFlagsUtil.getGFlagsForNode( + node, + ServerType.TSERVER, + universe.getUniverseDetails().getPrimaryCluster(), + universe.getUniverseDetails().clusters); + return universe.getUniverseDetails().getPrimaryCluster().userIntent.enableYSQLAuth + || (gflags.containsKey(YSQL_ENABLE_AUTH) + && gflags.get(YSQL_ENABLE_AUTH).equalsIgnoreCase("true")); + } } From 75645f76ad5c3ae75d82359e3311eb2cbb88e418 Mon Sep 17 00:00:00 2001 From: svarshney Date: Tue, 13 May 2025 16:47:18 +0530 Subject: [PATCH 066/146] [PLAT-17577] Implement Configure Server in node-agent Summary: This diff implements the configure server in node-agent. Two bits are left in configure server phase. 1) Install otel-collector. 2) cgroups setup. These will be implemented as separate RPCs. Test Plan: iTest pipeline. Manually verified by creating universe. 
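For illustration only, here is a minimal sketch of how the new ConfigureServerInput message could be built and handed to the node-agent task manager, following the dispatch path this diff adds to rpc.go. The field values, task ID, and username are placeholder assumptions, and submitConfigureServer is a hypothetical helper; only the message fields, NewConfigureServerHandler, and GetTaskManager().Submit come from this change.

package example

import (
	"context"

	"node-agent/app/task"
	pb "node-agent/generated/service"
)

// submitConfigureServer shows the same wiring the SubmitTask RPC performs for this input type.
func submitConfigureServer(ctx context.Context, taskID, username string) error {
	input := &pb.ConfigureServerInput{
		RemoteTmp:      "/tmp",                        // placeholder remote tmp directory
		YbHomeDir:      "/home/yugabyte",              // placeholder YB home directory
		Processes:      []string{"master", "tserver"}, // per-process conf dirs and log symlinks are created for these
		MountPoints:    []string{"/mnt/d0"},           // the first mount point hosts the cores and log directories
		NumCoresToKeep: 5,
	}
	handler := task.NewConfigureServerHandler(input, username)
	return task.GetTaskManager().Submit(ctx, taskID, handler)
}

On the node, the handler then performs the directory layout, control-script templating, and user-level systemd setup implemented in configure_server.go below.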
Reviewers: nsingh Reviewed By: nsingh Differential Revision: https://phorge.dev.yugabyte.com/D43952 --- .../opscli/ybops/cloud/common/method.py | 4 +- .../tasks/prepare-configure-server.yml | 2 + managed/node-agent/app/server/rpc.go | 11 + .../node-agent/app/task/configure_server.go | 365 +++++++++++++++ .../node-agent/app/task/helpers/yb_helper.go | 43 ++ managed/node-agent/app/task/module/systemd.go | 32 +- managed/node-agent/proto/server.proto | 2 + managed/node-agent/proto/yb.proto | 12 + .../templates/server/clean_cores.sh.j2 | 59 +++ .../templates/server/clock-sync.sh.j2 | 128 ++++++ .../server/collect_metrics_wrapper.sh.j2 | 28 ++ .../templates/server/yb-server-ctl.sh.j2 | 426 ++++++++++++++++++ .../templates/server/zip_purge_yb_logs.sh.j2 | 220 +++++++++ .../ynp/commands/provision_command.py | 2 +- .../subtasks/AnsibleConfigureServers.java | 41 ++ .../yugabyte/yw/common/NodeAgentClient.java | 14 + .../com/yugabyte/yw/common/NodeManager.java | 3 + 17 files changed, 1387 insertions(+), 5 deletions(-) create mode 100644 managed/node-agent/app/task/configure_server.go create mode 100755 managed/node-agent/resources/templates/server/clean_cores.sh.j2 create mode 100755 managed/node-agent/resources/templates/server/clock-sync.sh.j2 create mode 100755 managed/node-agent/resources/templates/server/collect_metrics_wrapper.sh.j2 create mode 100644 managed/node-agent/resources/templates/server/yb-server-ctl.sh.j2 create mode 100755 managed/node-agent/resources/templates/server/zip_purge_yb_logs.sh.j2 diff --git a/managed/devops/opscli/ybops/cloud/common/method.py b/managed/devops/opscli/ybops/cloud/common/method.py index dadfa4d6acfb..7ea1c512ed78 100644 --- a/managed/devops/opscli/ybops/cloud/common/method.py +++ b/managed/devops/opscli/ybops/cloud/common/method.py @@ -1353,6 +1353,8 @@ def prepare(self): help="Path to GCP credentials file used for logs export.") self.parser.add_argument('--ycql_audit_log_level', default=None, help="YCQL audit log level.") + self.parser.add_argument('--skip_ansible_configure_playbook', action="store_true", + help="If specified will not run the ansible playbooks.") def get_ssh_user(self): # Force the yugabyte user for configuring instances. The configure step performs YB specific @@ -1696,7 +1698,7 @@ def callback(self, args): if delete_paths: self.extra_vars["delete_paths"] = delete_paths # If we are just rotating certs, we don't need to do any configuration changes. - if not rotate_certs: + if not rotate_certs and not args.skip_ansible_configure_playbook: self.cloud.setup_ansible(args).run( "configure-{}.yml".format(args.type), self.extra_vars, host_info) diff --git a/managed/devops/roles/configure-cluster-server/tasks/prepare-configure-server.yml b/managed/devops/roles/configure-cluster-server/tasks/prepare-configure-server.yml index ce52330e1cb8..a3ab79b1fbcd 100644 --- a/managed/devops/roles/configure-cluster-server/tasks/prepare-configure-server.yml +++ b/managed/devops/roles/configure-cluster-server/tasks/prepare-configure-server.yml @@ -199,6 +199,7 @@ shell: cmd: "loginctl enable-linger {{ user_name }}" + # Todo: In node-agent. - name: Configure | Setup OpenTelemetry Collector include_role: name: manage_otel_collector @@ -311,6 +312,7 @@ when: (systemd_option and not ((ansible_os_family == 'RedHat' and ansible_distribution_major_version == '7') or (ansible_distribution == 'Amazon' and ansible_distribution_major_version == '2'))) +# Todo: In node-agent. 
- name: Configure | setup-postgres-cgroups include_role: name: setup-cgroup diff --git a/managed/node-agent/app/server/rpc.go b/managed/node-agent/app/server/rpc.go index 837a545a521c..8954cb8da985 100644 --- a/managed/node-agent/app/server/rpc.go +++ b/managed/node-agent/app/server/rpc.go @@ -361,6 +361,17 @@ func (server *RPCServer) SubmitTask( res.TaskId = taskID return res, nil } + configureServerInput := req.GetConfigureServerInput() + if configureServerInput != nil { + configureServerHandler := task.NewConfigureServerHandler(configureServerInput, username) + err := task.GetTaskManager().Submit(ctx, taskID, configureServerHandler) + if err != nil { + util.FileLogger().Errorf(ctx, "Error in running configure server - %s", err.Error()) + return res, status.Error(codes.Internal, err.Error()) + } + res.TaskId = taskID + return res, nil + } return res, status.Error(codes.Unimplemented, "Unknown task") } diff --git a/managed/node-agent/app/task/configure_server.go b/managed/node-agent/app/task/configure_server.go new file mode 100644 index 000000000000..01aa4b39ba9d --- /dev/null +++ b/managed/node-agent/app/task/configure_server.go @@ -0,0 +1,365 @@ +// Copyright (c) YugaByte, Inc. + +package task + +import ( + "context" + "errors" + "fmt" + "io/fs" + "node-agent/app/task/helpers" + "node-agent/app/task/module" + pb "node-agent/generated/service" + "node-agent/util" + "path/filepath" + "strings" +) + +const ( + SystemdUnitPath = ".config/systemd/user" + ServerTemplateSubpath = "server/" +) + +var SystemdUnits = []string{ + "yb-zip_purge_yb_logs.service", + "yb-clean_cores.service", + "yb-collect_metrics.service", + "yb-zip_purge_yb_logs.timer", + "yb-clean_cores.timer", + "yb-collect_metrics.timer", +} + +type ConfigureServerHandler struct { + shellTask *ShellTask + param *pb.ConfigureServerInput + username string + logOut util.Buffer +} + +func NewConfigureServerHandler( + param *pb.ConfigureServerInput, + username string, +) *ConfigureServerHandler { + return &ConfigureServerHandler{ + param: param, + username: username, + logOut: util.NewBuffer(MaxBufferCapacity), + } +} + +// helper that wraps NewShellTaskWithUser + Process + error logging +func (h *ConfigureServerHandler) runShell( + ctx context.Context, + desc, shell string, + args []string, +) (*TaskStatus, error) { + h.logOut.WriteLine("Running configure server phase: %s", desc) + h.shellTask = NewShellTaskWithUser(desc, h.username, shell, args) + result, err := h.shellTask.Process(ctx) + if err != nil { + util.FileLogger().Errorf(ctx, + "configure server failed [%s]: %s", desc, err) + return nil, err + } + return result, nil +} + +// CurrentTaskStatus implements the AsyncTask method. +func (h *ConfigureServerHandler) CurrentTaskStatus() *TaskStatus { + return &TaskStatus{ + Info: h.logOut, + ExitStatus: &ExitStatus{}, + } +} + +func (h *ConfigureServerHandler) String() string { + return "Configure Server Task" +} + +func (h *ConfigureServerHandler) Handle(ctx context.Context) (*pb.DescribeTaskResponse, error) { + util.FileLogger().Info(ctx, "Starting configure server handler.") + + // 0. Validate that the processes are specified. 
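+ // Each requested process (for example master or tserver) later gets its own conf directory and a logs symlink under the YB home directory.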
+ if len(h.param.GetProcesses()) == 0 { + err := errors.New("processes is required") + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + // 1) figure out home dir + home := "" + if h.param.GetYbHomeDir() != "" { + home = h.param.GetYbHomeDir() + } else { + err := errors.New("ybHomeDir is required") + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + // 2) determine yb_metric_dir + yb_metrics_dir := filepath.Join(h.param.GetRemoteTmp(), "yugabyte/metrics") + cmd := "systemctl show node_exporter | grep -oP '(?<=--collector.textfile.directory=)[^ ]+' | head -n1" + h.logOut.WriteLine("Determing the node_exporter textfile directory") + shellTask := NewShellTaskWithUser( + h.String(), + h.username, + util.DefaultShell, + []string{"-c", cmd}, + ) + status, err := shellTask.Process(ctx) + if err != nil { + util.FileLogger().Errorf(ctx, "Configure server failed in %v - %s", cmd, err.Error()) + return nil, err + } + if status.Info.String() != yb_metrics_dir { + yb_metrics_dir = filepath.Join(home, "metrics") + } + + // 3) Execute the shell commands. + err = h.execShellCommands(ctx, home) + if err != nil { + util.FileLogger().Errorf(ctx, "Configure server failed - %s", err.Error()) + return nil, err + } + + // 4) Setup the server scripts. + err = h.setupServerScript(ctx, home, yb_metrics_dir) + if err != nil { + util.FileLogger().Errorf(ctx, "Configure server failed - %s", err.Error()) + return nil, err + } + + // 5) Enable the user systemd units. + err = h.enableSystemdServices(ctx, home) + if err != nil { + util.FileLogger().Errorf(ctx, "Configure server failed - %s", err.Error()) + return nil, err + } + + for _, process := range h.param.GetProcesses() { + // 6) Configure the individual specified process. + err = h.configureProcess(ctx, home, process) + if err != nil { + util.FileLogger().Errorf(ctx, "Configure server failed - %s", err.Error()) + return nil, err + } + } + + return nil, nil +} + +func (h *ConfigureServerHandler) configureProcess(ctx context.Context, home, process string) error { + mountPoint := "" + if len(h.param.GetMountPoints()) > 0 { + mountPoint = h.param.GetMountPoints()[0] + } + + steps := []struct { + desc string + cmd string + }{ + { + fmt.Sprintf("make-yb-%s-conf-dir", process), + fmt.Sprintf("mkdir -p %s", filepath.Join(home, process, "conf")), + }, + { + "create-mount-logs-directory", + fmt.Sprintf("mkdir -p %s", filepath.Join(mountPoint, "yb-data/", process, "logs")), + }, + { + "symlink-logs-to-yb-logs", + fmt.Sprintf( + "ln -sf %s %s", + filepath.Join(mountPoint, "yb-data/", process, "logs"), + filepath.Join(home, process, "logs"), + ), + }, + } + + for _, step := range steps { + _, err := h.runShell(ctx, step.desc, util.DefaultShell, []string{"-c", step.cmd}) + if err != nil { + return err + } + } + return nil +} + +func (h *ConfigureServerHandler) enableSystemdServices(ctx context.Context, home string) error { + for _, unit := range SystemdUnits { + cmd := module.EnableSystemdUnit(h.username, unit) + h.logOut.WriteLine("Running configure server phase: %s", cmd) + + shellTask := NewShellTaskWithUser( + h.String(), + h.username, + util.DefaultShell, + []string{"-c", cmd}, + ) + util.FileLogger().Infof(ctx, "Running command %v", cmd) + _, err := shellTask.Process(ctx) + if err != nil { + util.FileLogger().Errorf(ctx, "Configure server failed in %v - %s", cmd, err.Error()) + return err + } + + if unit != "network-online.target" && unit[len(unit)-6:] == "timer" { + startCmd := module.StartSystemdUnit(h.username, unit) + 
h.logOut.WriteLine("Running configure server phase: %s", startCmd) + + shellTask = NewShellTaskWithUser( + h.String(), + h.username, + util.DefaultShell, + []string{"-c", startCmd}, + ) + util.FileLogger().Infof(ctx, "Running command %v", startCmd) + _, err := shellTask.Process(ctx) + if err != nil { + util.FileLogger(). + Errorf(ctx, "Configure server failed in %v - %s", cmd, err.Error()) + return err + } + } + } + + info, err := helpers.GetOSInfo() + if err != nil { + util.FileLogger().Errorf(ctx, "Error retreiving OS information %s", err.Error()) + return err + } + + unitDir := "/lib/systemd/system" + if strings.Contains(info.ID, "suse") || strings.Contains(info.Family, "suse") { + unitDir = "/usr/lib/systemd/system" + } + + // Link network-online.target if required + linkCmd := fmt.Sprintf("systemctl --user link %s/network-online.target", unitDir) + shellTask := NewShellTaskWithUser( + h.String(), + h.username, + util.DefaultShell, + []string{"-c", linkCmd}, + ) + h.logOut.WriteLine("Running configure server phase: %s", linkCmd) + util.FileLogger().Infof(ctx, "Running command %v", linkCmd) + _, err = shellTask.Process(ctx) + if err != nil { + util.FileLogger().Errorf(ctx, "Configure server failed in %v - %s", linkCmd, err.Error()) + return err + } + + return nil +} + +func (h *ConfigureServerHandler) setupServerScript( + ctx context.Context, + home, yb_metrics_dir string, +) error { + serverScriptContext := map[string]any{ + "mount_paths": strings.Join(h.param.GetMountPoints(), " "), + "user_name": h.username, + "yb_cores_dir": filepath.Join(home, "cores"), + "systemd_option": true, + "yb_home_dir": home, + "num_cores_to_keep": h.param.GetNumCoresToKeep(), + "yb_metrics_dir": yb_metrics_dir, + } + + // Copy yb-server-ctl.sh script. + err := module.CopyFile( + ctx, + serverScriptContext, + filepath.Join(ServerTemplateSubpath, "yb-server-ctl.sh.j2"), + filepath.Join(home, "bin", "yb-server-ctl.sh"), + fs.FileMode(0755), + ) + if err != nil { + return err + } + + // Copy clock-sync.sh script. + err = module.CopyFile( + ctx, + serverScriptContext, + filepath.Join(ServerTemplateSubpath, "clock-sync.sh.j2"), + filepath.Join(home, "bin", "clock-sync.sh"), + fs.FileMode(0755), + ) + if err != nil { + return err + } + + // Copy clean_cores.sh script. + err = module.CopyFile( + ctx, + serverScriptContext, + filepath.Join(ServerTemplateSubpath, "clean_cores.sh.j2"), + filepath.Join(home, "bin", "clean_cores.sh"), + fs.FileMode(0755), + ) + if err != nil { + return err + } + + // Copy zip_purge_yb_logs.sh.sh script. + err = module.CopyFile( + ctx, + serverScriptContext, + filepath.Join(ServerTemplateSubpath, "zip_purge_yb_logs.sh.j2"), + filepath.Join(home, "bin", "zip_purge_yb_logs.sh"), + fs.FileMode(0755), + ) + if err != nil { + return err + } + + // Copy collect_metrics_wrapper.sh script. 
+ err = module.CopyFile( + ctx, + serverScriptContext, + filepath.Join(ServerTemplateSubpath, "collect_metrics_wrapper.sh.j2"), + filepath.Join(home, "bin", "collect_metrics_wrapper.sh"), + fs.FileMode(0755), + ) + if err != nil { + return err + } + + return nil +} + +func (h *ConfigureServerHandler) execShellCommands( + ctx context.Context, + home string, +) error { + mountPoint := "" + if len(h.param.GetMountPoints()) > 0 { + mountPoint = h.param.GetMountPoints()[0] + } + + steps := []struct { + desc string + cmd string + }{ + {"make-yb-bin-dir", fmt.Sprintf("mkdir -p %s", filepath.Join(home, "bin"))}, + {"make-cores-dir", fmt.Sprintf("mkdir -p %s", filepath.Join(mountPoint, "cores"))}, + { + "symlink-cores-to-yb-cores", + fmt.Sprintf( + "ln -sf %s %s", + filepath.Join(mountPoint, "cores"), + filepath.Join(home, "cores"), + ), + }, + } + + for _, step := range steps { + _, err := h.runShell(ctx, step.desc, util.DefaultShell, []string{"-c", step.cmd}) + if err != nil { + return err + } + } + return nil +} diff --git a/managed/node-agent/app/task/helpers/yb_helper.go b/managed/node-agent/app/task/helpers/yb_helper.go index 55b7eeb68f02..4da8c69d27db 100644 --- a/managed/node-agent/app/task/helpers/yb_helper.go +++ b/managed/node-agent/app/task/helpers/yb_helper.go @@ -3,10 +3,12 @@ package helpers import ( + "bufio" "fmt" "os" "path/filepath" "regexp" + "runtime" "strings" ) @@ -16,6 +18,14 @@ type Release struct { Name string } +// OSInfo represents parsed OS release info +type OSInfo struct { + ID string // e.g., "ubuntu" + Family string // e.g., "debian" + Pretty string // e.g., "Ubuntu 22.04.4 LTS" + Arch string // e.g., "x86_64" +} + var releaseFormat = regexp.MustCompile(`yugabyte[-_]([\d]+\.[\d]+\.[\d]+\.[\d]+-[a-z0-9]+)`) // extractReleaseFromArchive parses the archive filename and returns a Release. 
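As a quick reference for the pattern above, the following self-contained sketch shows what releaseFormat captures; the archive file names are invented for the example.

package main

import (
	"fmt"
	"regexp"
)

// Same pattern as releaseFormat above; capture group 1 is the "<version>-<build>" string.
var releaseFormat = regexp.MustCompile(`yugabyte[-_]([\d]+\.[\d]+\.[\d]+\.[\d]+-[a-z0-9]+)`)

func main() {
	for _, name := range []string{
		"yugabyte-2024.1.0.0-b123-linux-x86_64.tar.gz", // hypothetical archive name
		"yugabyte_2.20.3.1-b2-el8-aarch64.tar.gz",      // hypothetical archive name
	} {
		if m := releaseFormat.FindStringSubmatch(name); m != nil {
			fmt.Printf("%s => %s\n", name, m[1]) // prints 2024.1.0.0-b123 and 2.20.3.1-b2
		}
	}
}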
@@ -66,3 +76,36 @@ func ListDirectoryContent(dirPath string) ([]string, error) { } return names, nil } + +// GetOSInfo parses /etc/os-release and returns OS info +func GetOSInfo() (*OSInfo, error) { + file, err := os.Open("/etc/os-release") + if err != nil { + return nil, fmt.Errorf("failed to open /etc/os-release: %w", err) + } + defer file.Close() + + info := &OSInfo{} + scanner := bufio.NewScanner(file) + for scanner.Scan() { + line := scanner.Text() + // Remove quotes from values + if keyVal := strings.SplitN(line, "=", 2); len(keyVal) == 2 { + key := keyVal[0] + val := strings.Trim(keyVal[1], `"`) + switch key { + case "ID": + info.ID = val + case "ID_LIKE": + info.Family = val + case "PRETTY_NAME": + info.Pretty = val + } + } + } + if err := scanner.Err(); err != nil { + return nil, fmt.Errorf("error reading /etc/os-release: %w", err) + } + info.Arch = runtime.GOARCH + return info, nil +} diff --git a/managed/node-agent/app/task/module/systemd.go b/managed/node-agent/app/task/module/systemd.go index 4bc3e41570d9..41c70e6cc774 100644 --- a/managed/node-agent/app/task/module/systemd.go +++ b/managed/node-agent/app/task/module/systemd.go @@ -16,7 +16,7 @@ func IsUserSystemd(username, serverName string) (bool, error) { if err != nil { return false, err } - if !strings.HasSuffix(serverName, ".service") { + if !strings.HasSuffix(serverName, ".service") && !strings.HasSuffix(serverName, ".timer") { serverName = serverName + ".service" } path := filepath.Join(info.User.HomeDir, ".config/systemd/user", serverName) @@ -30,17 +30,43 @@ func IsUserSystemd(username, serverName string) (bool, error) { return false, err } -func ControlServerCmd(username, serverName, controlType string) (string, error) { +func getUserOptionForUserLevel(username, serverName string) string { userOption := "" if username != "" { yes, err := IsUserSystemd(username, serverName) if err != nil { - return "", err + return "" } if yes { userOption = "--user " } } + + return userOption +} + +func EnableSystemdUnit(username, serverName string) string { + userOption := getUserOptionForUserLevel(username, serverName) + return fmt.Sprintf( + "systemctl %sdaemon-reload && systemctl %senable %s", + userOption, + userOption, + serverName, + ) +} + +func StartSystemdUnit(username, serverName string) string { + userOption := getUserOptionForUserLevel(username, serverName) + return fmt.Sprintf("systemctl %sstart %s", userOption, serverName) +} + +func StopSystemdUnit(username, serverName string) string { + userOption := getUserOptionForUserLevel(username, serverName) + return fmt.Sprintf("systemctl %s stop %s", userOption, serverName) +} + +func ControlServerCmd(username, serverName, controlType string) (string, error) { + userOption := getUserOptionForUserLevel(username, serverName) return fmt.Sprintf( "systemctl %sdaemon-reload && systemctl %senable %s && systemctl %s%s %s", userOption, diff --git a/managed/node-agent/proto/server.proto b/managed/node-agent/proto/server.proto index e0c0870f51e4..13d0fa9ca674 100644 --- a/managed/node-agent/proto/server.proto +++ b/managed/node-agent/proto/server.proto @@ -59,6 +59,7 @@ message SubmitTaskRequest { InstallSoftwareInput installSoftwareInput = 7; ServerGFlagsInput serverGFlagsInput = 8; InstallYbcInput installYbcInput = 9; + ConfigureServerInput configureServerInput = 10; } } @@ -81,6 +82,7 @@ message DescribeTaskResponse { InstallSoftwareOutput installSoftwareOutput = 7; ServerGFlagsOutput serverGFlagsOutput = 8; InstallYbcOutput installYbcOutput = 9; + ConfigureServerOutput 
configureServerOutput = 10; } } diff --git a/managed/node-agent/proto/yb.proto b/managed/node-agent/proto/yb.proto index 869ee1dff97d..6990f49d1259 100644 --- a/managed/node-agent/proto/yb.proto +++ b/managed/node-agent/proto/yb.proto @@ -138,3 +138,15 @@ message InstallYbcInput { message InstallYbcOutput { int32 pid = 1; } + +message ConfigureServerInput { + string remoteTmp = 1; + string ybHomeDir = 2; + repeated string processes = 3; + repeated string mountPoints = 4; + uint32 numCoresToKeep = 5; +} + +message ConfigureServerOutput { + int32 pid = 1; +} diff --git a/managed/node-agent/resources/templates/server/clean_cores.sh.j2 b/managed/node-agent/resources/templates/server/clean_cores.sh.j2 new file mode 100755 index 000000000000..66badb2a88da --- /dev/null +++ b/managed/node-agent/resources/templates/server/clean_cores.sh.j2 @@ -0,0 +1,59 @@ +#!/usr/bin/env bash +# +# Copyright 2019 YugaByte, Inc. and Contributors +# +# Licensed under the Polyform Free Trial License 1.0.0 (the "License"); you +# may not use this file except in compliance with the License. You +# may obtain a copy of the License at +# +# https://github.com/YugaByte/yugabyte-db/blob/master/licenses/POLYFORM-FREE-TRIAL-LICENSE-1.0.0.txt + +set -euo pipefail + +print_help() { + cat <] +Options: + -n, --num_corestokeep + number of latest core files to keep (default: 5). + -h, --help + Show usage +EOT +} + +num_cores_to_keep={{ yb_num_clean_cores_to_keep }} +YB_CRASH_DIR=({{ yb_cores_dir }}/) +while [[ $# -gt 0 ]]; do + case "$1" in + -n|--num_corestokeep) + num_cores_to_keep=$2 + shift + ;; + -h|--help) + print_help + exit 0 + ;; + *) + echo "Invalid option: $1" >&2 + print_help + exit 1 + esac + shift +done + +USER=${USER:-$(whoami)} +if [[ "$(id -u)" != "0" && "$USER" != "yugabyte" ]]; then + echo "This script must be run as root or yugabyte" + exit 1 +fi + +find_core_files="find $YB_CRASH_DIR -name 'core_*' -type f -printf '%T+\t%p\n' | sort | + awk '{print \$2}'" +num_core_files=$(eval $find_core_files | wc -l) +if [ $num_core_files -gt $num_cores_to_keep ]; then + core_files_to_delete=$(eval $find_core_files | head -n$(($num_core_files - $num_cores_to_keep))) + for file in $core_files_to_delete; do + echo "Deleting core file $file" + rm $file + done +fi diff --git a/managed/node-agent/resources/templates/server/clock-sync.sh.j2 b/managed/node-agent/resources/templates/server/clock-sync.sh.j2 new file mode 100755 index 000000000000..d019511d378a --- /dev/null +++ b/managed/node-agent/resources/templates/server/clock-sync.sh.j2 @@ -0,0 +1,128 @@ +#!/bin/bash + +SCRIPT_NAME=$(basename "$0") + +################### Config ################### +is_acceptable_clock_skew_wait_enabled="{{ is_acceptable_clock_skew_wait_enabled | default(true) }}" # Whether check clock skew +acceptable_clock_skew_sec="{{ acceptable_clock_skew_sec | default(0.5) }}" # In seconds +max_tries="{{ acceptable_clock_skew_max_tries | default(120) }}" # Maximum number of tries before returning failure +retry_wait_time_s=1 # How long waits before retry in seconds + +if [[ "$is_acceptable_clock_skew_wait_enabled" != true && "$is_acceptable_clock_skew_wait_enabled" != "True" ]]; then + echo "Wait for clock skew to go below the acceptable threshold is disabled. Returning success." 
+ exit 0 +fi + +command_exists() { + command -v "$1" >/dev/null 2>&1 +} + +readonly PYTHON_EXECUTABLES=("python" "python3" "python3.11" "python3.10" "python3.9" "python3.8" "python3.7" "python3.6" "python3.12" "python2") +PYTHON_EXECUTABLE="" +set_python_executable() { + for py_executable in "${PYTHON_EXECUTABLES[@]}"; do + if which "$py_executable" > /dev/null 2>&1; then + PYTHON_EXECUTABLE="$py_executable" + export PYTHON_EXECUTABLE + return + fi + done +} + +check_clock_sync_chrony() { + # if chrond is restarted, tracking will return all 0s + set_python_executable + chrony_tracking="$(chronyc tracking)" + if [[ $? -ne 0 ]]; then + echo "`chronyc tracking` failed to execute" + return 1 + fi + if [[ $(echo "${chrony_tracking}" | awk "/Reference ID/ {print \$4}") == "00000000" ]]; then + echo "chrony is not initialized" + return 1 + fi + local skew=$(echo "${chrony_tracking}" | awk "/System time/ {print \$4}") + local dispersion=$(echo "${chrony_tracking}" | awk "/Root dispersion/ {print \$4}") + local delay=$(echo "${chrony_tracking}" | awk "/Root delay/ {print \$4}") + local clock_error="" + if [[ -z "${PYTHON_EXECUTABLE}" ]]; then + clock_error=${skew} + else + clock_error=$(${PYTHON_EXECUTABLE} -c "print(${skew} + ${dispersion} + (0.5 * ${delay}))") + fi + + if awk "BEGIN {exit !($clock_error < $acceptable_clock_skew_sec)}"; then + echo "Clock skew is within acceptable limits: $skew ms" + return 0 + else + echo "Clock skew exceeds acceptable limits: $skew ms" + return 1 + fi +} + +check_clock_sync_ntpd() { + set_python_executable + local skew=$(ntpq -p | awk "\$1 ~ /^\*/ {print \$9}") + local acceptable_skew_ms=$(${PYTHON_EXECUTABLE} -c "print(${acceptable_clock_skew_sec} * 1000)") + + if [[ -z "$skew" ]]; then + echo "ntpd is not initialized" + return 1 + fi + + if awk "BEGIN{exit !(${skew} < ${acceptable_skew_ms})}"; then + echo "Clock skew is within acceptable limits: $skew ms" + return 0 + else + echo "Clock skew exceeds acceptable limits: $skew ms" + return 1 + fi +} + +check_clock_sync_timesyncd() { + synchronized=$(timedatectl status | grep "System clock synchronized" | awk "{print \$4}") + if [[ "${synchronized}" == "yes" ]]; then + echo "timesyncd reports clock is synchronized" + return 0 + else + echo "timesyncd clock is not synchronized" + return 1 + fi +} + +systemd_loaded() { + active=$(systemctl show --no-pager $1 | grep "ActiveState" | cut -d= -f2) + if [[ "${active}" == "active" ]]; then + return 0 + fi + return 1 +} + +iter=0 +while true; do + # If chrony is available, use it for clock sync. + if command_exists chronyc; then + check_clock_sync_chrony + res=$? + # If ntpd is available, use it for clock sync. + elif command_exists ntpd; then + check_clock_sync_ntpd + res=$? + elif systemd_loaded systemd-timesyncd; then + check_clock_sync_timesyncd + res=$? + else + echo "Chrony, NTPd, and timesyncd are not available, but required." + exit 1 + fi + ((iter++)) + if [ $res -eq 0 ]; then + echo "Success! Clock skew is within acceptable limits." + exit 0 + fi + if [ $iter -ge $max_tries ]; then + echo "Failure! Maximum number of tries reached." + exit 1 + fi + sleep "$retry_wait_time_s" +done diff --git a/managed/node-agent/resources/templates/server/collect_metrics_wrapper.sh.j2 b/managed/node-agent/resources/templates/server/collect_metrics_wrapper.sh.j2 new file mode 100755 index 000000000000..25410acda81a --- /dev/null +++ b/managed/node-agent/resources/templates/server/collect_metrics_wrapper.sh.j2 @@ -0,0 +1,28 @@ +#!/usr/bin/env bash +# +# Copyright 2021 YugaByte, Inc. 
and Contributors +# +# Licensed under the Polyform Free Trial License 1.0.0 (the "License"); you +# may not use this file except in compliance with the License. You +# may obtain a copy of the License at +# +# https://github.com/YugaByte/yugabyte-db/blob/master/licenses/POLYFORM-FREE-TRIAL-LICENSE-1.0.0.txt + +set -euo pipefail + +collect_metrics_script=({{ yb_home_dir }}/bin/collect_metrics.sh) +filename=({{ yb_metrics_dir }}/node_metrics.prom) + +USER=${USER:-$(whoami)} +if [[ "$(id -u)" != "0" && "$USER" != "yugabyte" ]]; then + echo "This script must be run as root or yugabyte" + exit 1 +fi + +# Just call a script, generated and uploaded by health check process +if [ -f $collect_metrics_script ]; then + /bin/bash $collect_metrics_script -o file -f $filename +else + echo "Metric collection script $collect_metrics_script does not exist" + exit 1 +fi diff --git a/managed/node-agent/resources/templates/server/yb-server-ctl.sh.j2 b/managed/node-agent/resources/templates/server/yb-server-ctl.sh.j2 new file mode 100644 index 000000000000..496252c2e2e1 --- /dev/null +++ b/managed/node-agent/resources/templates/server/yb-server-ctl.sh.j2 @@ -0,0 +1,426 @@ +#!/usr/bin/env bash +# +# Copyright 2019 YugaByte, Inc. and Contributors +# +# Licensed under the Polyform Free Trial License 1.0.0 (the "License"); you +# may not use this file except in compliance with the License. You +# may obtain a copy of the License at +# +# https://github.com/YugaByte/yugabyte-db/blob/master/licenses/POLYFORM-FREE-TRIAL-LICENSE-1.0.0.txt + +set -euo pipefail +# Redirect stderr to syslog. +exec 2> >(logger -t $(basename $0)) + +readonly MOUNT_PATHS=({{ mount_paths }}) +readonly EXPECTED_USERNAME=({{ user_name }}) +readonly CORES_DIR=({{ yb_cores_dir }}) +{% raw %} +readonly NUM_MOUNTS=${#MOUNT_PATHS[@]} +readonly SYSTEMD_OPTION={{systemd_option}} +{% endraw %} + +if ! [ -f /.dockerenv ] && [ "$(whoami)" != "$EXPECTED_USERNAME" ]; then + echo "Script must be run as user: $EXPECTED_USERNAME" + exit -1 +fi + +print_help() { + cat < OR + ${0##*/} clean-instance +Daemons: + master + tserver + controller + otel-collector +Commands: + create - Start the YB process on this node in cluster creation node (only applicable for + master) + status - Report the status of the YB process + start - Start the YB daemon on this node + stop - Stop the YB daemon on this node + clean - Remove all daemon data from this node + clean-no-conf - Remove all daemon data from this node, except configurations + clean-logs - Remove all daemon logs + -h, --help - Show usage +EOT +} + +check_pid_file() { + if [[ ! -f ${daemon_pid_file} ]]; then + print_err_out "Error: PID file does not exist: ${daemon_pid_file}, process is "\ + "probably not running" + exit 1 + fi +} + +exit_on_running() { + if [[ $(check_running) -eq 0 ]]; then + print_err_out "yb-$daemon already running" + exit 0 + fi +} + +NO_PID_FILE=200 +# arg1 [OPTIONAL]: proc_pid -- the PID of the process, else, defaults to contents of daemon_pid_file +check_running() { + set +e + custom_proc_pid=${1:-} + proc_pid=$custom_proc_pid + if [[ -z $proc_pid ]]; then + proc_pid=$(cat ${daemon_pid_file} 2>/dev/null) + if [[ $? -ne 0 ]]; then + echo $NO_PID_FILE + return + fi + fi + set -e + kill -0 "$proc_pid" 2>/dev/null + kill_outcome=$? 
+ # Workaround race condition between: + # 1) cron checking the file exists and succeeding + # 2) stop deleting the PID file and stopping the process + # 3) cron then trying to kill and cat the file, failing and then restarting the daemon + # If we searched for a PID file above and then we couldn't find a process to kill, then check if + # the PID file still exists: + # - No, then return 0 so we do not restart the process + # - Yes, then default to outcome of kill command. + if [[ $kill_outcome -ne 0 ]] && [[ -z $custom_proc_pid ]] && [[ ! -f ${daemon_pid_file} ]]; then + echo 0 + else + echo $kill_outcome + fi +} + +get_pid() { + cat ${daemon_pid_file} +} + +print_err_out() { + echo $1 | tee /dev/stderr +} + +# arg1: pid_to_wait_for -- the PID of the process to wait for +wait_pid() { + pid_to_wait_for=$1 + end_time=$(($SECONDS + 10)) + while [[ $SECONDS -lt $end_time ]]; do + if [[ $(check_running "$pid_to_wait_for") -eq 1 ]]; then + break + fi + print_err_out "Waiting on PID: $pid_to_wait_for" + sleep 1 + done +} + +wait_for_dir_or_exit() { + local dir_to_check=$1 + local end_time=$(($SECONDS + 10)) + while [[ $SECONDS -lt $end_time ]]; do + if test -d $dir_to_check; + then + return + else + echo "Waiting for $dir_to_check dir..." + sleep 1 + fi + done + # Exit if the directory never appeared. + exit 1 +} + +clean_data_paths() { + clean_conf_arg=${1:-true} + + set -x + for (( i=0; i
- + )"; for (const auto& [txn, txn_entry] : txn_locks_) { - std::lock_guard txn_lock(txn_entry.mutex); + UniqueLock txn_lock(txn_entry->mutex); const auto& locks = - locks_map == LocksMapType::kGranted ? txn_entry.granted_locks : txn_entry.waiting_locks; + locks_map == LocksMapType::kGranted ? txn_entry->granted_locks : txn_entry->waiting_locks; for (const auto& [subtxn_id, subtxn_locks] : locks) { for (const auto& [object_id, entry] : subtxn_locks) { out << "" @@ -518,9 +1109,9 @@ size_t ObjectLockManagerImpl::TEST_LocksSize(LocksMapType locks_map) const { std::lock_guard lock(global_mutex_); size_t size = 0; for (const auto& [txn, txn_entry] : txn_locks_) { - std::lock_guard txn_lock(txn_entry.mutex); + UniqueLock txn_lock(txn_entry->mutex); const auto& locks = - locks_map == LocksMapType::kGranted ? txn_entry.granted_locks : txn_entry.waiting_locks; + locks_map == LocksMapType::kGranted ? txn_entry->granted_locks : txn_entry->waiting_locks; for (const auto& [subtxn_id, subtxn_locks] : locks) { size += subtxn_locks.size(); } @@ -536,94 +1127,46 @@ size_t ObjectLockManagerImpl::TEST_WaitingLocksSize() const { return TEST_LocksSize(LocksMapType::kWaiting); } -void ObjectLockManagerImpl::DoReleaseTrackedLock( - const ObjectLockPrefix& object_id, const TrackedLockEntry& entry) { - // We don't pass an intents set to unlock so as to trigger notify on every lock release. It is - // necessary as two (or more) transactions could be holding a read lock and one of the txns - // could request a conflicting lock mode. And since conflicts with self should be ignored, we - // need to signal the cond variable on every release, else the lock release call from the other - // transaction wouldn't unblock the waiter. - DoUnlockSingleEntry(entry.locked_batch_entry, object_id, entry.state); - - entry.locked_batch_entry.ref_count -= entry.ref_count; - if (entry.locked_batch_entry.ref_count == 0) { - locks_.erase(object_id); - free_lock_entries_.push_back(&entry.locked_batch_entry); +std::unordered_map + ObjectLockManagerImpl::TEST_GetLockStateMapForTxn(const TransactionId& txn) const { + TrackedTxnLockEntryPtr txn_entry; + { + std::lock_guard lock(global_mutex_); + auto txn_it = txn_locks_.find(txn); + if (txn_it == txn_locks_.end()) { + return {}; + } + txn_entry = txn_it->second; } + UniqueLock txn_lock(txn_entry->mutex); + return txn_entry->existing_states; } -void ObjectLockManagerImpl::AcquiredLock( - const LockBatchEntry& lock_entry, TrackedTransactionLockEntry& txn, - SubTransactionId subtxn_id, const OwnerAsString& owner_as_string, - LocksMapType locks_map) { - TRACE_FUNC(); - VLOG_WITH_FUNC(1) << "lock_entry: " << lock_entry.ToString() - << ", owner: " << owner_as_string(); - auto delta = IntentTypeSetAdd(lock_entry.intent_types); - - std::lock_guard txn_lock(txn.mutex); - auto& locks = locks_map == LocksMapType::kGranted ? 
txn.granted_locks : txn.waiting_locks; - auto& subtxn_locks = locks[subtxn_id]; - auto it = subtxn_locks.find(lock_entry.key); - if (it == subtxn_locks.end()) { - it = subtxn_locks.emplace(lock_entry.key, TrackedLockEntry(*lock_entry.locked)).first; - } - it->second.state += delta; - ++it->second.ref_count; -} +ObjectLockManager::ObjectLockManager(ThreadPool* thread_pool, server::RpcServerBase& server) + : impl_(std::make_unique(thread_pool, server)) {} -void ObjectLockManagerImpl::ReleasedLock( - const LockBatchEntry& lock_entry, TrackedTransactionLockEntry& txn, - SubTransactionId subtxn_id, const OwnerAsString& owner_as_string, - LocksMapType locks_map) { - TRACE_FUNC(); - VLOG_WITH_FUNC(1) << "lock_entry: " << lock_entry.ToString() - << ", owner: " << owner_as_string(); - auto delta = IntentTypeSetAdd(lock_entry.intent_types); +ObjectLockManager::~ObjectLockManager() = default; - std::lock_guard txn_lock(txn.mutex); - auto& locks = locks_map == LocksMapType::kGranted ? txn.granted_locks : txn.waiting_locks; - auto subtxn_itr = locks.find(subtxn_id); - if (subtxn_itr == locks.end()) { - LOG_WITH_FUNC(DFATAL) << "No locks found for " << owner_as_string() - << ", cannot release lock on " << AsString(lock_entry.key); - return; - } - auto& subtxn_locks = subtxn_itr->second; - auto it = subtxn_locks.find(lock_entry.key); - if (it == subtxn_locks.end()) { - LOG_WITH_FUNC(DFATAL) << "No lock found for " << owner_as_string() << " on " - << AsString(lock_entry.key) << ", cannot release"; - } - auto& entry = it->second; - entry.state -= delta; - --entry.ref_count; - if (entry.state == 0) { - DCHECK_EQ(entry.ref_count, 0) - << "TrackedLockEntry::ref_count for key " << AsString(lock_entry.key) << " expected to " - << "have been 0 here. This could lead to faulty tracking of acquired/waiting object locks " - << "and also issues with garbage collection of free lock entries in ObjectLockManager."; - subtxn_locks.erase(it); - } +void ObjectLockManager::Lock(LockData&& data) { + impl_->Lock(std::move(data)); } -ObjectLockManager::ObjectLockManager() : impl_(std::make_unique()) { } - -ObjectLockManager::~ObjectLockManager() = default; +void ObjectLockManager::Unlock( + const ObjectLockOwner& object_lock_owner, Status resume_with_status) { + impl_->Unlock(object_lock_owner, resume_with_status); +} -bool ObjectLockManager::Lock( - LockBatchEntries& key_to_intent_type, CoarseTimePoint deadline, - const ObjectLockOwner& object_lock_owner) { - return impl_->Lock(key_to_intent_type, deadline, object_lock_owner); +void ObjectLockManager::Poll() { + impl_->Poll(); } -void ObjectLockManager::Unlock( - const std::vector& lock_entry_keys) { - impl_->Unlock(lock_entry_keys); +void ObjectLockManager::Start( + docdb::LocalWaitingTxnRegistry* waiting_txn_registry) { + return impl_->Start(waiting_txn_registry); } -void ObjectLockManager::Unlock(const ObjectLockOwner& object_lock_owner) { - impl_->Unlock(object_lock_owner); +void ObjectLockManager::Shutdown() { + impl_->Shutdown(); } void ObjectLockManager::DumpStatusHtml(std::ostream& out) { @@ -638,4 +1181,9 @@ size_t ObjectLockManager::TEST_WaitingLocksSize() const { return impl_->TEST_WaitingLocksSize(); } +std::unordered_map + ObjectLockManager::TEST_GetLockStateMapForTxn(const TransactionId& txn) const { + return impl_->TEST_GetLockStateMapForTxn(txn); +} + } // namespace yb::docdb diff --git a/src/yb/docdb/object_lock_manager.h b/src/yb/docdb/object_lock_manager.h index 02267f03c01c..25da8c5f024e 100644 --- a/src/yb/docdb/object_lock_manager.h +++ 
b/src/yb/docdb/object_lock_manager.h @@ -13,18 +13,27 @@ #pragma once -#include -#include -#include - #include "yb/docdb/docdb_fwd.h" -#include "yb/docdb/object_lock_data.h" +#include "yb/docdb/lock_util.h" + +#include "yb/server/server_fwd.h" + +#include "yb/util/status_callback.h" + +namespace yb { + +class ThreadPool; -#include "yb/util/monotime.h" -#include "yb/util/ref_cnt_buffer.h" -#include "yb/util/tostring.h" +namespace docdb { -namespace yb::docdb { +struct LockData { + DetermineKeysToLockResult key_to_lock; + CoarseTimePoint deadline; + ObjectLockOwner object_lock_owner; + TabletId status_tablet; + MonoTime start_time; + StdStatusCallback callback; +}; // Helper struct used for keying table/object locks of a transaction. struct TrackedLockEntryKey { @@ -50,32 +59,32 @@ class ObjectLockManagerImpl; // server maintains an instance of the ObjectLockManager. class ObjectLockManager { public: - ObjectLockManager(); + ObjectLockManager(ThreadPool* thread_pool, server::RpcServerBase& server); ~ObjectLockManager(); - // Attempt to lock a batch of keys and track the lock against the given object_lock_owner key. The - // call may be blocked waiting for other conflicting locks to be released. If the entries don't - // exist, they are created. On success, the lock state is exists in-memory until an explicit - // release is called (or the process restarts). - // - // Returns false if was not able to acquire lock until deadline. - MUST_USE_RESULT bool Lock( - LockBatchEntries& key_to_intent_type, CoarseTimePoint deadline, - const ObjectLockOwner& object_lock_owner); - - // Release the batch of locks, if they were acquired at the first place. - void Unlock(const std::vector& lock_entry_keys); + // Attempt to lock a batch of keys and track the lock against data.object_lock_owner key. The + // callback is executed with failure if the locks aren't able to be acquired within the deadline. + void Lock(LockData&& data); // Release all locks held against the given object_lock_owner. 
- void Unlock(const ObjectLockOwner& object_lock_owner); + void Unlock(const ObjectLockOwner& object_lock_owner, Status resume_with_status); + + void Poll(); + + void Start(docdb::LocalWaitingTxnRegistry* waiting_txn_registry); + + void Shutdown(); void DumpStatusHtml(std::ostream& out); size_t TEST_GrantedLocksSize() const; size_t TEST_WaitingLocksSize() const; + std::unordered_map + TEST_GetLockStateMapForTxn(const TransactionId& txn) const; private: std::unique_ptr impl_; }; -} // namespace yb::docdb +} // namespace docdb +} // namespace yb diff --git a/src/yb/docdb/shared_lock_manager-test.cc b/src/yb/docdb/shared_lock_manager-test.cc index 18a704fd7efb..9e97a30a6bf3 100644 --- a/src/yb/docdb/shared_lock_manager-test.cc +++ b/src/yb/docdb/shared_lock_manager-test.cc @@ -202,8 +202,8 @@ TEST_F(SharedLockManagerTest, DumpKeys) { ASSERT_NOK(lb2.status()); ASSERT_STR_CONTAINS( lb2.status().ToString(), - "[{ key: 666F6F intent_types: [kStrongRead, kStrongWrite] existing_state: 0 }, " - "{ key: 626172 intent_types: [kStrongRead, kStrongWrite] existing_state: 0 }]"); + "[{ key: 666F6F intent_types: [kStrongRead, kStrongWrite] }, " + "{ key: 626172 intent_types: [kStrongRead, kStrongWrite] }]"); } } // namespace docdb diff --git a/src/yb/integration-tests/object_lock-test.cc b/src/yb/integration-tests/object_lock-test.cc index 60c067bb9d99..cf7920c88765 100644 --- a/src/yb/integration-tests/object_lock-test.cc +++ b/src/yb/integration-tests/object_lock-test.cc @@ -64,6 +64,7 @@ DECLARE_uint64(ysql_lease_refresher_interval_ms); DECLARE_double(TEST_tserver_ysql_lease_refresh_failure_prob); DECLARE_bool(enable_load_balancing); DECLARE_uint64(object_lock_cleanup_interval_ms); +DECLARE_bool(TEST_olm_skip_sending_wait_for_probes); namespace yb { @@ -104,6 +105,7 @@ class ObjectLockTest : public MiniClusterTestWithClient { kDefaultYSQLLeaseRefreshIntervalMilli; ANNOTATE_UNPROTECTED_WRITE(FLAGS_enable_load_balancing) = false; ANNOTATE_UNPROTECTED_WRITE(FLAGS_object_lock_cleanup_interval_ms) = 500; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_olm_skip_sending_wait_for_probes) = true; MiniClusterTestWithClient::SetUp(); MiniClusterOptions opts; opts.num_tablet_servers = 3; @@ -379,7 +381,8 @@ std::future AcquireLockGloballyAsync( session_host_uuid, owner, database_id, object_id, TableLockType::ACCESS_EXCLUSIVE, lease_epoch, client->Clock(), opt_deadline); auto callback = [promise](const Status& s) { promise->set_value(s); }; - client->AcquireObjectLocksGlobalAsync(req, std::move(callback), rpc_timeout); + client->AcquireObjectLocksGlobalAsync( + req, std::move(callback), ToCoarse(MonoTime::Now() + rpc_timeout)); return future; } @@ -439,7 +442,8 @@ Status ReleaseLockGlobally( auto req = ReleaseRequestFor( session_host_uuid, owner, lease_epoch, client->Clock(), apply_after); Synchronizer sync; - client->ReleaseObjectLocksGlobalAsync(req, sync.AsStdStatusCallback(), rpc_timeout); + client->ReleaseObjectLocksGlobalAsync( + req, sync.AsStdStatusCallback(), ToCoarse(MonoTime::Now() + rpc_timeout)); return sync.Wait(); } @@ -921,39 +925,39 @@ TEST_F(ObjectLockTest, TServerLeaseExpiresBeforeExclusiveLockRequest) { ASSERT_OK(cluster_->mini_tablet_server(idx_to_take_down)->Start()); } -TEST_F(ObjectLockTest, TServerLeaseExpiresAfterExclusiveLockRequest) { - auto kBlockingRequestTimeout = MonoDelta::FromSeconds(20); - ASSERT_GT(kBlockingRequestTimeout.ToMilliseconds(), FLAGS_master_ysql_operation_lease_ttl_ms); - auto idx_to_take_down = 0; - auto uuid_to_take_down = TSUuid(idx_to_take_down); - { - auto* tserver0 = 
cluster_->mini_tablet_server(idx_to_take_down); - auto tserver0_proxy = TServerProxyFor(tserver0); - ASSERT_OK(AcquireLockAt( - &tserver0_proxy, uuid_to_take_down, kTxn1, kDatabaseID, kObjectId)); - } - auto master_proxy = ASSERT_RESULT(MasterLeaderProxy()); - auto future = AcquireLockGloballyAsync( - &master_proxy, TSUuid(1), kTxn2, kDatabaseID, kObjectId, kLeaseEpoch, nullptr, std::nullopt, - kBlockingRequestTimeout); - - ASSERT_OK(WaitFor( - [&]() -> bool { - return cluster_->mini_tablet_server(idx_to_take_down) - ->server() - ->ts_local_lock_manager() - ->TEST_WaitingLocksSize() > 0; - }, - kBlockingRequestTimeout, - "Timed out waiting for acquire lock request to block on the master")); - LOG(INFO) << "Shutting down tablet server " << uuid_to_take_down; - ASSERT_NOTNULL(cluster_->find_tablet_server(uuid_to_take_down))->Shutdown(); - // Now wait for the lease to expire. After the lease expires the lock acquisition should succeed. - LOG(INFO) << Format("Waiting for tablet server $0 to lose its lease", uuid_to_take_down); - ASSERT_OK(WaitForTServerLeaseToExpire(uuid_to_take_down, kBlockingRequestTimeout)); - ASSERT_OK(ResolveFutureStatus(future)); - ASSERT_OK(cluster_->mini_tablet_server(idx_to_take_down)->Start()); -} +// TODO: Enable this test once https://github.com/yugabyte/yugabyte-db/issues/27192 is addressed. +// TEST_F(ObjectLockTest, TServerLeaseExpiresAfterExclusiveLockRequest) { +// auto kBlockingRequestTimeout = MonoDelta::FromSeconds(20); +// ASSERT_GT(kBlockingRequestTimeout.ToMilliseconds(), FLAGS_master_ysql_operation_lease_ttl_ms); +// auto idx_to_take_down = 0; +// auto uuid_to_take_down = TSUuid(idx_to_take_down); +// { +// auto* tserver0 = cluster_->mini_tablet_server(idx_to_take_down); +// auto tserver0_proxy = TServerProxyFor(tserver0); +// ASSERT_OK(AcquireLockAt( +// &tserver0_proxy, uuid_to_take_down, kTxn1, kDatabaseID, kObjectId)); +// } +// auto master_proxy = ASSERT_RESULT(MasterLeaderProxy()); +// auto future = AcquireLockGloballyAsync( +// &master_proxy, TSUuid(1), kTxn2, kDatabaseID, kObjectId, kLeaseEpoch, nullptr, +// std::nullopt, kBlockingRequestTimeout); +// ASSERT_OK(WaitFor( +// [&]() -> bool { +// return cluster_->mini_tablet_server(idx_to_take_down) +// ->server() +// ->ts_local_lock_manager() +// ->TEST_WaitingLocksSize() > 0; +// }, +// kBlockingRequestTimeout, +// "Timed out waiting for acquire lock request to block on the master")); +// LOG(INFO) << "Shutting down tablet server " << uuid_to_take_down; +// ASSERT_NOTNULL(cluster_->find_tablet_server(uuid_to_take_down))->Shutdown(); +// // Now wait for the lease to expire. After that, the lock acquisition should succeed. +// LOG(INFO) << Format("Waiting for tablet server $0 to lose its lease", uuid_to_take_down); +// ASSERT_OK(WaitForTServerLeaseToExpire(uuid_to_take_down, kBlockingRequestTimeout)); +// ASSERT_OK(ResolveFutureStatus(future)); +// ASSERT_OK(cluster_->mini_tablet_server(idx_to_take_down)->Start()); +// } TEST_F(ObjectLockTest, TServerHeldExclusiveLocksReleasedAfterRestart) { // Bump up the lease lifetime to verify the lease is lost when a new tserver process registers. 
@@ -1492,13 +1496,15 @@ ExternalMiniClusterOptions ExternalObjectLockTest::MakeExternalMiniClusterOption opts.replication_factor = ReplicationFactor(); opts.enable_ysql = true; opts.extra_master_flags = { - "--TEST_enable_object_locking_for_table_locks", - Format("--master_ysql_operation_lease_ttl_ms=$0", kDefaultMasterYSQLLeaseTTLMilli), - Format("--object_lock_cleanup_interval_ms=$0", kDefaultMasterObjectLockCleanupIntervalMilli), - "--enable_load_balancing=false"}; + "--TEST_enable_object_locking_for_table_locks", + Format("--master_ysql_operation_lease_ttl_ms=$0", kDefaultMasterYSQLLeaseTTLMilli), + Format("--object_lock_cleanup_interval_ms=$0", kDefaultMasterObjectLockCleanupIntervalMilli), + "--enable_load_balancing=false", + "--TEST_olm_skip_sending_wait_for_probes=false"}; opts.extra_tserver_flags = { "--TEST_enable_object_locking_for_table_locks", - Format("--ysql_lease_refresher_interval_ms=$0", kDefaultYSQLLeaseRefreshIntervalMilli)}; + Format("--ysql_lease_refresher_interval_ms=$0", kDefaultYSQLLeaseRefreshIntervalMilli), + "--TEST_olm_skip_sending_wait_for_probes=false"}; return opts; } diff --git a/src/yb/integration-tests/tablet-split-itest.cc b/src/yb/integration-tests/tablet-split-itest.cc index ded8d97b49b5..8d4c13905563 100644 --- a/src/yb/integration-tests/tablet-split-itest.cc +++ b/src/yb/integration-tests/tablet-split-itest.cc @@ -138,6 +138,7 @@ DECLARE_int32(retryable_request_timeout_secs); DECLARE_int32(rocksdb_base_background_compactions); DECLARE_int32(rocksdb_max_background_compactions); DECLARE_int32(rocksdb_level0_file_num_compaction_trigger); +DECLARE_int32(TEST_simulate_long_remote_bootstrap_sec); DECLARE_bool(enable_automatic_tablet_splitting); DECLARE_bool(TEST_pause_rbs_before_download_wal); DECLARE_int64(tablet_split_low_phase_shard_count_per_node); diff --git a/src/yb/master/catalog_manager.cc b/src/yb/master/catalog_manager.cc index 6ec1db2fc8a3..ba45422c4277 100644 --- a/src/yb/master/catalog_manager.cc +++ b/src/yb/master/catalog_manager.cc @@ -2185,7 +2185,7 @@ void CatalogManager::CompleteShutdown() { if (async_task_pool_) { async_task_pool_->Shutdown(); } - + object_lock_info_manager_->Shutdown(); // It's OK if the visitor adds more entries even after we finish; it won't start any new tasks for // those entries. 
AbortAndWaitForAllTasks(); diff --git a/src/yb/master/master.cc b/src/yb/master/master.cc index 332e6769914e..a7ce0689694a 100644 --- a/src/yb/master/master.cc +++ b/src/yb/master/master.cc @@ -158,6 +158,7 @@ Master::Master(const MasterOptions& opts) state_(kStopped), metric_entity_cluster_( METRIC_ENTITY_cluster.Instantiate(metric_registry_.get(), "yb.cluster")), + master_tablet_server_(new MasterTabletServer(this, metric_entity())), sys_catalog_(new SysCatalogTable(this, metric_registry_.get())), ts_manager_(new TSManager(*sys_catalog_)), catalog_manager_(new CatalogManager(this, sys_catalog_.get())), @@ -175,8 +176,7 @@ Master::Master(const MasterOptions& opts) test_async_rpc_manager_(new TestAsyncRpcManager(this, catalog_manager())), init_future_(init_status_.get_future()), opts_(opts), - maintenance_manager_(new MaintenanceManager(MaintenanceManager::DEFAULT_OPTIONS)), - master_tablet_server_(new MasterTabletServer(this, metric_entity())) { + maintenance_manager_(new MaintenanceManager(MaintenanceManager::DEFAULT_OPTIONS)) { SetConnectionContextFactory(rpc::CreateConnectionContextFactory( GetAtomicFlag(&FLAGS_inbound_rpc_memory_limit), mem_tracker())); diff --git a/src/yb/master/master.h b/src/yb/master/master.h index 6e547a3d7be1..3ca054a54d03 100644 --- a/src/yb/master/master.h +++ b/src/yb/master/master.h @@ -277,6 +277,9 @@ class Master : public tserver::DbServerBase { // The metric entity for the cluster. scoped_refptr metric_entity_cluster_; + // Master's tablet server implementation used to host virtual tables like system.peers. + std::unique_ptr master_tablet_server_; + std::unique_ptr sys_catalog_; std::unique_ptr ts_manager_; std::unique_ptr catalog_manager_; @@ -307,9 +310,6 @@ class Master : public tserver::DbServerBase { // The maintenance manager for this master. std::shared_ptr maintenance_manager_; - // Master's tablet server implementation used to host virtual tables like system.peers. 
- std::unique_ptr master_tablet_server_; - std::unique_ptr cdc_state_client_init_; std::mutex master_metrics_mutex_; std::map> master_metrics_ GUARDED_BY(master_metrics_mutex_); diff --git a/src/yb/master/master_ddl.proto b/src/yb/master/master_ddl.proto index 8f2203e32975..1ac99fee8132 100644 --- a/src/yb/master/master_ddl.proto +++ b/src/yb/master/master_ddl.proto @@ -786,6 +786,7 @@ message AcquireObjectLocksGlobalRequestPB { optional fixed64 ignore_after_hybrid_time = 6; optional fixed64 propagated_hybrid_time = 7; optional AshMetadataPB ash_metadata = 8; + optional bytes status_tablet = 9; } message AcquireObjectLocksGlobalResponsePB { diff --git a/src/yb/master/object_lock_info_manager.cc b/src/yb/master/object_lock_info_manager.cc index da76f5801f2a..9b660e5500ce 100644 --- a/src/yb/master/object_lock_info_manager.cc +++ b/src/yb/master/object_lock_info_manager.cc @@ -102,6 +102,8 @@ Status ValidateLockRequest( return Status::OK(); } +constexpr auto kTserverRpcsTimeoutDefaultSecs = 60s; + } // namespace class ObjectLockInfoManager::Impl { @@ -110,9 +112,11 @@ class ObjectLockInfoManager::Impl { : master_(master), catalog_manager_(catalog_manager), clock_(master.clock()), - local_lock_manager_( - std::make_shared(clock_, master_.tablet_server())), - poller_(std::bind(&Impl::CleanupExpiredLeaseEpochs, this)) {} + poller_(std::bind(&Impl::CleanupExpiredLeaseEpochs, this)) { + CHECK_OK(ThreadPoolBuilder("object_lock_info_manager").Build(&lock_manager_thread_pool_)); + local_lock_manager_ = std::make_shared( + clock_, master_.tablet_server(), master_, lock_manager_thread_pool_.get()); + } void Start() { poller_.Start( @@ -120,6 +124,21 @@ class ObjectLockInfoManager::Impl { MonoDelta::FromMilliseconds(FLAGS_object_lock_cleanup_interval_ms)); } + void Shutdown() { + poller_.Shutdown(); + tserver::TSLocalLockManager* lock_manager = nullptr; + { + LockGuard lock(mutex_); + object_lock_infos_map_.clear(); + if (local_lock_manager_) { + lock_manager = local_lock_manager_.get(); + } + } + if (lock_manager) { + lock_manager->Shutdown(); + } + } + void LockObject( AcquireObjectLockRequestPB&& req, CoarseTimePoint deadline, StdStatusCallback&& callback); @@ -235,11 +254,12 @@ class ObjectLockInfoManager::Impl { }; std::unordered_map txn_host_info_map_ GUARDED_BY(mutex_); + rpc::Poller poller_; + std::unique_ptr lock_manager_thread_pool_; std::shared_ptr local_lock_manager_ GUARDED_BY(mutex_); // Only accessed from a single thread for now, so no need for synchronization. 
std::unordered_map> expired_lease_epoch_cleanup_tasks_; - rpc::Poller poller_; }; template @@ -340,6 +360,8 @@ class UpdateTServer : public RetrySpecificTSRpcTask { return Format("$0 for TServer: $1 ", shared_all_tservers_->LogPrefix(), permanent_uuid()); } + MonoTime ComputeDeadline() const override; + protected: void Finished(const Status& status) override; @@ -396,6 +418,7 @@ AcquireObjectLockRequestPB TserverRequestFor( if (master_request.has_ash_metadata()) { req.mutable_ash_metadata()->CopyFrom(master_request.ash_metadata()); } + req.set_status_tablet(master_request.status_tablet()); return req; } @@ -457,6 +480,10 @@ void ObjectLockInfoManager::Start() { impl_->Start(); } +void ObjectLockInfoManager::Shutdown() { + impl_->Shutdown(); +} + void ObjectLockInfoManager::LockObject( const AcquireObjectLocksGlobalRequestPB& req, AcquireObjectLocksGlobalResponsePB& resp, rpc::RpcContext rpc) { @@ -992,7 +1019,11 @@ void ObjectLockInfoManager::Impl::Clear() { catalog_manager_.AssertLeaderLockAcquiredForWriting(); LockGuard lock(mutex_); object_lock_infos_map_.clear(); - local_lock_manager_.reset(new tserver::TSLocalLockManager(clock_, master_.tablet_server())); + if (local_lock_manager_) { + local_lock_manager_->Shutdown(); + } + local_lock_manager_.reset(new tserver::TSLocalLockManager( + clock_, master_.tablet_server(), master_, lock_manager_thread_pool_.get())); } std::optional ObjectLockInfoManager::Impl::GetLeaseEpoch(const std::string& ts_uuid) { @@ -1127,6 +1158,7 @@ UpdateAllTServers::UpdateAllTServers( txn_id_(FullyDecodeTransactionId(req_.txn_id())), epoch_(std::move(leader_epoch)), callback_(std::move(callback)), + deadline_(CoarseMonoClock::Now() + kTserverRpcsTimeoutDefaultSecs), trace_(Trace::CurrentTrace()) { VLOG(3) << __PRETTY_FUNCTION__; } @@ -1406,6 +1438,11 @@ bool UpdateTServer::Sen return true; } +template +MonoTime UpdateTServer::ComputeDeadline() const { + return MonoTime(ToSteady(shared_all_tservers_->GetClientDeadline())); +} + template void UpdateTServer::HandleResponse(int attempt) { VLOG_WITH_PREFIX(3) << __func__ << " response is " << yb::ToString(resp_); diff --git a/src/yb/master/object_lock_info_manager.h b/src/yb/master/object_lock_info_manager.h index 259e542e62f4..700b9a2f981d 100644 --- a/src/yb/master/object_lock_info_manager.h +++ b/src/yb/master/object_lock_info_manager.h @@ -60,6 +60,8 @@ class ObjectLockInfoManager { void Start(); + void Shutdown(); + void LockObject( const AcquireObjectLocksGlobalRequestPB& req, AcquireObjectLocksGlobalResponsePB& resp, rpc::RpcContext rpc); diff --git a/src/yb/server/server_fwd.h b/src/yb/server/server_fwd.h index a0c461fd847c..9b85473f253d 100644 --- a/src/yb/server/server_fwd.h +++ b/src/yb/server/server_fwd.h @@ -23,6 +23,7 @@ namespace server { class Clock; class GenericServiceProxy; class MonitoredTask; +class RpcServerBase; class RunnableMonitoredTask; enum class MonitoredTaskState : int; diff --git a/src/yb/tserver/pg_client_session.cc b/src/yb/tserver/pg_client_session.cc index 374127944d49..bced09a02122 100644 --- a/src/yb/tserver/pg_client_session.cc +++ b/src/yb/tserver/pg_client_session.cc @@ -1080,13 +1080,14 @@ class TransactionProvider { } } - Result NextTxnIdForPlain(CoarseTimePoint deadline) { + Result NextTxnMetaForPlain( + CoarseTimePoint deadline, bool is_for_release = false) { + client::internal::InFlightOpsGroupsWithMetadata ops_info; if (!next_plain_) { auto txn = Build(deadline, {}); // Don't execute txn->GetMetadata() here since the transaction is not iniatialized with // its full metadata 
yet, like isolation level. Synchronizer synchronizer; - client::internal::InFlightOpsGroupsWithMetadata ops_info; if (txn->batcher_if().Prepare( &ops_info, client::ForceConsistentRead::kFalse, deadline, client::Initial::kFalse, synchronizer.AsStdStatusCallback())) { @@ -1095,7 +1096,20 @@ class TransactionProvider { RETURN_NOT_OK(synchronizer.Wait()); next_plain_.swap(txn); } - return next_plain_->id(); + // next_plain_ would be ready at this point i.e status tablet picked. + auto txn_meta_res = next_plain_->metadata(); + if (txn_meta_res.ok()) { + return txn_meta_res; + } + if (!is_for_release) { + return txn_meta_res.status(); + } + // If the transaction has already failed due to some reason, we should release the locks. + // And also reset next_plain_, so the subsequent ysql transaction would use a new docdb txn. + TransactionMetadata txn_meta_for_release; + txn_meta_for_release.transaction_id = next_plain_->id(); + next_plain_ = nullptr; + return txn_meta_for_release; } private: @@ -1195,7 +1209,7 @@ template Request AcquireRequestFor( const std::string& session_host_uuid, const TransactionId& txn_id, SubTransactionId subtxn_id, uint64_t database_id, uint64_t object_id, TableLockType lock_type, uint64_t lease_epoch, - ClockBase* clock, CoarseTimePoint deadline) { + ClockBase* clock, CoarseTimePoint deadline, const TabletId& status_tablet) { auto now = clock->Now(); Request req; if (const auto& wait_state = ash::WaitStateInfo::CurrentWaitState()) { @@ -1214,6 +1228,7 @@ Request AcquireRequestFor( lock->set_database_oid(database_id); lock->set_object_oid(object_id); lock->set_lock_type(lock_type); + req.set_status_tablet(status_tablet); return req; } @@ -1267,7 +1282,8 @@ void ReleaseWithRetries( // interval it can safely give up. The Master is responsible for cleaning up the locks for any // tserver that loses its lease. We have additional retries just to be safe. Also the timeout // used here defaults to 60s, which is much larger than the default lease interval of 15s. - auto timeout = MonoDelta::FromMilliseconds(FLAGS_tserver_yb_client_default_timeout_ms); + auto deadline = MonoTime::Now() + + MonoDelta::FromMilliseconds(FLAGS_tserver_yb_client_default_timeout_ms); if (!lease_validator.IsLeaseValid(release_req->lease_epoch())) { LOG(INFO) << "Lease epoch " << release_req->lease_epoch() << " is not valid. Will not retry " << " Release request " << (VLOG_IS_ON(2) ? release_req->ShortDebugString() : ""); @@ -1291,7 +1307,7 @@ void ReleaseWithRetries( ReleaseWithRetries(client, lease_validator, release_req, attempt + 1); } }, - timeout); + ToCoarse(deadline)); } } // namespace @@ -2323,12 +2339,13 @@ class PgClientSession::Impl { if (setup_session_result.is_plain && setup_session_result.session_data.transaction) { RETURN_NOT_OK(setup_session_result.session_data.transaction->GetMetadata(deadline).get()); } - auto& txn_id = setup_session_result.session_data.transaction - ? setup_session_result.session_data.transaction->id() - : VERIFY_RESULT_REF(transaction_provider_.NextTxnIdForPlain(deadline)); + auto txn_meta_res = setup_session_result.session_data.transaction + ? 
setup_session_result.session_data.transaction->GetMetadata(deadline).get() + : transaction_provider_.NextTxnMetaForPlain(deadline); + RETURN_NOT_OK(txn_meta_res); const auto lock_type = static_cast(req.lock_type()); VLOG_WITH_PREFIX_AND_FUNC(1) - << "txn_id " << txn_id + << "txn_id " << txn_meta_res->transaction_id << " lock_type: " << AsString(lock_type) << " req: " << req.ShortDebugString(); @@ -2337,18 +2354,18 @@ class PgClientSession::Impl { plain_session_has_exclusive_object_locks_.store(true); } auto lock_req = AcquireRequestFor( - instance_uuid(), txn_id, options.active_sub_transaction_id(), req.database_oid(), - req.object_oid(), lock_type, lease_epoch_, context_.clock.get(), deadline); + instance_uuid(), txn_meta_res->transaction_id, options.active_sub_transaction_id(), + req.database_oid(), req.object_oid(), lock_type, lease_epoch_, context_.clock.get(), + deadline, txn_meta_res->status_tablet); auto status_future = MakeFuture([&](auto callback) { - client_.AcquireObjectLocksGlobalAsync( - lock_req, callback, - MonoDelta::FromMilliseconds(FLAGS_tserver_yb_client_default_timeout_ms)); + client_.AcquireObjectLocksGlobalAsync(lock_req, callback, deadline); }); return status_future.get(); } auto lock_req = AcquireRequestFor( - instance_uuid(), txn_id, options.active_sub_transaction_id(), req.database_oid(), - req.object_oid(), lock_type, lease_epoch_, context_.clock.get(), deadline); + instance_uuid(), txn_meta_res->transaction_id, options.active_sub_transaction_id(), + req.database_oid(), req.object_oid(), lock_type, lease_epoch_, context_.clock.get(), + deadline, txn_meta_res->status_tablet); return ts_lock_manager()->AcquireObjectLocks(lock_req, deadline); } @@ -3169,9 +3186,12 @@ class PgClientSession::Impl { plain_session_has_exclusive_object_locks_.store(false); DEBUG_ONLY_TEST_SYNC_POINT("PlainTxnStateReset"); } + auto txn_meta_res = txn + ? txn->GetMetadata(deadline).get() + : transaction_provider_.NextTxnMetaForPlain(deadline, !subtxn_id); + RETURN_NOT_OK(txn_meta_res); return DoReleaseObjectLocks( - txn ? 
txn->id() : VERIFY_RESULT_REF(transaction_provider_.NextTxnIdForPlain(deadline)), - subtxn_id, deadline, has_exclusive_locks); + txn_meta_res->transaction_id, subtxn_id, deadline, has_exclusive_locks); } Status DoReleaseObjectLocks( diff --git a/src/yb/tserver/tablet_server.cc b/src/yb/tserver/tablet_server.cc index 7d68deb45ee5..c81ba3cbe0a6 100644 --- a/src/yb/tserver/tablet_server.cc +++ b/src/yb/tserver/tablet_server.cc @@ -367,12 +367,6 @@ TabletServer::TabletServer(const TabletServerOptions& opts) std::make_unique>(); ysql_db_catalog_version_index_used_->fill(false); } - if (opts.server_type == TabletServerOptions::kServerType && - PREDICT_FALSE(FLAGS_TEST_enable_object_locking_for_table_locks)) { - ts_local_lock_manager_ = std::make_shared(clock_, this); - } else { - ts_local_lock_manager_ = nullptr; - } LOG(INFO) << "yb::tserver::TabletServer created at " << this; LOG(INFO) << "yb::tserver::TSTabletManager created at " << tablet_manager_.get(); } @@ -740,6 +734,8 @@ Status TabletServer::Start() { RETURN_NOT_OK(heartbeater_->Start()); + StartTSLocalLockManager(); + if (FLAGS_tserver_enable_metrics_snapshotter) { RETURN_NOT_OK(metrics_snapshotter_->Start()); } @@ -785,6 +781,10 @@ void TabletServer::Shutdown() { "Failed to stop table mutation count sender thread"); } + if (auto local_lock_manager = ts_local_lock_manager(); local_lock_manager) { + local_lock_manager->Shutdown(); + } + client()->RequestAbortAllRpcs(); tablet_manager_->StartShutdown(); @@ -795,9 +795,24 @@ void TabletServer::Shutdown() { } tserver::TSLocalLockManagerPtr TabletServer::ResetAndGetTSLocalLockManager() { - std::lock_guard l(lock_); - ts_local_lock_manager_ = std::make_shared(clock_, this); - return ts_local_lock_manager_; + ts_local_lock_manager()->Shutdown(); + { + std::lock_guard l(lock_); + ts_local_lock_manager_.reset(); + } + StartTSLocalLockManager(); + return ts_local_lock_manager(); +} + +void TabletServer::StartTSLocalLockManager() { + if (opts_.server_type == TabletServerOptions::kServerType && + PREDICT_FALSE(FLAGS_TEST_enable_object_locking_for_table_locks)) { + std::lock_guard l(lock_); + ts_local_lock_manager_ = std::make_shared( + clock_, this /* TabletServerIf* */, *this /* RpcServerBase& */, + tablet_manager_->waiting_txn_pool()); + ts_local_lock_manager_->Start(tablet_manager_->waiting_txn_registry()); + } } bool TabletServer::HasBootstrappedLocalLockManager() const { diff --git a/src/yb/tserver/tablet_server.h b/src/yb/tserver/tablet_server.h index bf6d5969416a..dc6f245ea1a9 100644 --- a/src/yb/tserver/tablet_server.h +++ b/src/yb/tserver/tablet_server.h @@ -459,6 +459,8 @@ class TabletServer : public DbServerBase, public TabletServerIf { Result CreateInternalPGConn( const std::string& database_name, const std::optional& deadline) override; + void StartTSLocalLockManager() EXCLUDES (lock_); + std::atomic initted_{false}; // If true, all heartbeats will be seen as failed. 
diff --git a/src/yb/tserver/ts_local_lock_manager-test.cc b/src/yb/tserver/ts_local_lock_manager-test.cc index fc06ae63a7e5..1eb34f497fe8 100644 --- a/src/yb/tserver/ts_local_lock_manager-test.cc +++ b/src/yb/tserver/ts_local_lock_manager-test.cc @@ -35,12 +35,22 @@ #include "yb/util/test_util.h" #include "yb/util/tsan_util.h" +DECLARE_bool(TEST_enable_object_locking_for_table_locks); +DECLARE_bool(TEST_assert_olm_empty_locks_map); +DECLARE_bool(TEST_olm_skip_scheduling_waiter_resumption); +DECLARE_bool(TEST_olm_skip_sending_wait_for_probes); + using namespace std::literals; +using yb::docdb::IntentTypeSetAdd; +using yb::docdb::LockState; using yb::docdb::ObjectLockOwner; +using yb::docdb::ObjectLockPrefix; namespace yb::tserver { +using LockStateMap = std::unordered_map; + auto kTxn1 = ObjectLockOwner{TransactionId::GenerateRandom(), 1}; auto kTxn2 = ObjectLockOwner{TransactionId::GenerateRandom(), 1}; @@ -50,27 +60,20 @@ constexpr auto kObject1 = 1; class TSLocalLockManagerTest : public TabletServerTestBase { protected: - TSLocalLockManagerTest() { - auto ts = TabletServerTestBase::CreateMiniTabletServer(); - CHECK_OK(ts); - mini_server_.reset(ts->release()); - lm_ = std::make_unique( - new server::HybridClock(), mini_server_->server()); - lm_->TEST_MarkBootstrapped(); - } - - std::unique_ptr mini_server_; - std::unique_ptr lm_; - void SetUp() override { - YBTest::SetUp(); - ASSERT_OK(lm_->clock()->Init()); + ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_enable_object_locking_for_table_locks) = true; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_assert_olm_empty_locks_map) = true; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_olm_skip_sending_wait_for_probes) = true; + TabletServerTestBase::SetUp(); + StartTabletServer(); + lm_ = CHECK_NOTNULL(mini_server_->server()->ts_local_lock_manager()); + lm_->TEST_MarkBootstrapped(); } Status LockObjects( const ObjectLockOwner& owner, uint64_t database_id, const std::vector& object_ids, const std::vector& lock_types, - CoarseTimePoint deadline = CoarseTimePoint::max()) { + CoarseTimePoint deadline = CoarseTimePoint::max(), LockStateMap* state_map = nullptr) { SCHECK_EQ(object_ids.size(), lock_types.size(), IllegalState, "Expected equal sizes"); tserver::AcquireObjectLockRequestPB req; owner.PopulateLockRequest(&req); @@ -80,27 +83,36 @@ class TSLocalLockManagerTest : public TabletServerTestBase { lock->set_object_oid(object_ids[i]); lock->set_lock_type(lock_types[i]); } - return lm_->AcquireObjectLocks(req, deadline); + req.set_propagated_hybrid_time(MonoTime::Now().ToUint64()); + auto s = lm_->AcquireObjectLocks(req, deadline); + if (!state_map || !s.ok()) { + return s; + } + auto res = VERIFY_RESULT(DetermineObjectsToLock(req.object_locks())); + for (auto& lock_batch_entry : res.lock_batch) { + (*state_map)[lock_batch_entry.key] += IntentTypeSetAdd(lock_batch_entry.intent_types); + } + return s; } Status LockObject( const ObjectLockOwner& owner, uint64_t database_id, uint64_t object_id, - TableLockType lock_type, CoarseTimePoint deadline = CoarseTimePoint::max()) { - return LockObjects(owner, database_id, {object_id}, {lock_type}, deadline); + TableLockType lock_type, CoarseTimePoint deadline = CoarseTimePoint::max(), + LockStateMap* state_map = nullptr) { + return LockObjects(owner, database_id, {object_id}, {lock_type}, deadline, state_map); } - Status ReleaseObjectLock( + Status ReleaseLocksForSubtxn( const ObjectLockOwner& owner, CoarseTimePoint deadline = CoarseTimePoint::max()) { tserver::ReleaseObjectLockRequestPB req; - owner.PopulateLockRequest(&req); + 
owner.PopulateReleaseRequest(&req, false /* release all locks */); return lm_->ReleaseObjectLocks(req, deadline); } Status ReleaseAllLocksForTxn( const ObjectLockOwner& owner, CoarseTimePoint deadline = CoarseTimePoint::max()) { tserver::ReleaseObjectLockRequestPB req; - req.set_txn_id(owner.txn_id.data(), owner.txn_id.size()); - req.set_subtxn_id(owner.subtxn_id); + owner.PopulateReleaseRequest(&req); return lm_->ReleaseObjectLocks(req, deadline); } @@ -111,6 +123,8 @@ class TSLocalLockManagerTest : public TabletServerTestBase { size_t WaitingLocksSize() const { return lm_->TEST_WaitingLocksSize(); } + + std::shared_ptr lm_; }; TEST_F(TSLocalLockManagerTest, TestLockAndRelease) { @@ -119,7 +133,7 @@ TEST_F(TSLocalLockManagerTest, TestLockAndRelease) { ASSERT_GE(GrantedLocksSize(), 1); ASSERT_EQ(WaitingLocksSize(), 0); - ASSERT_OK(ReleaseObjectLock(kTxn1)); + ASSERT_OK(ReleaseAllLocksForTxn(kTxn1)); ASSERT_EQ(GrantedLocksSize(), 0); ASSERT_EQ(WaitingLocksSize(), 0); } @@ -175,7 +189,7 @@ TEST_F(TSLocalLockManagerTest, TestWaitersAndBlocker) { ASSERT_GE(WaitingLocksSize(), 1); for (int i = 0; i < kNumReaders; i++) { - ASSERT_OK(ReleaseObjectLock(reader_txns[i])); + ASSERT_OK(ReleaseAllLocksForTxn(reader_txns[i])); if (i + 1 < kNumReaders) { ASSERT_NE(status_future.wait_for(0s), std::future_status::ready); } @@ -192,12 +206,12 @@ TEST_F(TSLocalLockManagerTest, TestWaitersAndBlocker) { } ASSERT_EQ(waiters_blocked.count(), 5); - ASSERT_OK(ReleaseObjectLock(kTxn1)); + ASSERT_OK(ReleaseAllLocksForTxn(kTxn1)); ASSERT_TRUE(waiters_blocked.WaitFor(2s * kTimeMultiplier)); ASSERT_EQ(GrantedLocksSize(), kNumReaders); thread_holder.WaitAndStop(2s * kTimeMultiplier); for (int i = 0; i < kNumReaders; i++) { - ASSERT_OK(ReleaseObjectLock(reader_txns[i])); + ASSERT_OK(ReleaseAllLocksForTxn(reader_txns[i])); } } @@ -214,6 +228,7 @@ TEST_F(TSLocalLockManagerTest, TestSessionIgnoresLockConflictWithSelf) { // {1, kWeakObjectLock} // {1, kStrongObjectLock} ASSERT_EQ(GrantedLocksSize(), 2); + ASSERT_OK(ReleaseAllLocksForTxn(kTxn1)); } // The below test asserts that the lock manager signals the corresponding condition variable on @@ -227,8 +242,9 @@ TEST_F(TSLocalLockManagerTest, TestWaitersSignaledOnEveryRelease) { }); ASSERT_NE(status_future.wait_for(1s * kTimeMultiplier), std::future_status::ready); - ASSERT_OK(ReleaseObjectLock(kTxn2)); + ASSERT_OK(ReleaseAllLocksForTxn(kTxn2)); ASSERT_OK(status_future.get()); + ASSERT_OK(ReleaseAllLocksForTxn(kTxn1)); } #ifndef NDEBUG @@ -250,7 +266,7 @@ TEST_F(TSLocalLockManagerTest, TestFailedLockRpcSemantics) { ASSERT_EQ(GrantedLocksSize(), 4); SyncPoint::GetInstance()->LoadDependency({ - {"ObjectLockedBatchEntry::Lock", "TestFailedLockRpcSemantics"}}); + {"ObjectLockManagerImpl::DoLockSingleEntry", "TestFailedLockRpcSemantics"}}); SyncPoint::GetInstance()->ClearTrace(); SyncPoint::GetInstance()->EnableProcessing(); @@ -272,13 +288,14 @@ TEST_F(TSLocalLockManagerTest, TestFailedLockRpcSemantics) { ASSERT_OK(ReleaseAllLocksForTxn(kTxn1)); ASSERT_EQ(GrantedLocksSize(), 1); + ASSERT_OK(ReleaseAllLocksForTxn(kTxn2)); } TEST_F(TSLocalLockManagerTest, TestReleaseWaitingLocks) { ASSERT_OK(LockObject(kTxn1, kDatabase1, kObject1, TableLockType::ACCESS_SHARE)); SyncPoint::GetInstance()->LoadDependency( - {{"ObjectLockedBatchEntry::Lock", "TestReleaseWaitingLocks"}}); + {{"ObjectLockManagerImpl::DoLockSingleEntry", "TestReleaseWaitingLocks"}}); SyncPoint::GetInstance()->ClearTrace(); SyncPoint::GetInstance()->EnableProcessing(); @@ -293,6 +310,8 @@ TEST_F(TSLocalLockManagerTest, 
TestReleaseWaitingLocks) { ASSERT_TRUE(status_future.valid()); ASSERT_OK(ReleaseAllLocksForTxn(kTxn2)); ASSERT_NOK(status_future.get()); + ASSERT_OK(ReleaseAllLocksForTxn(kTxn2)); + ASSERT_OK(ReleaseAllLocksForTxn(kTxn1)); } #endif // NDEBUG @@ -321,8 +340,9 @@ TEST_F(TSLocalLockManagerTest, TestDowngradeDespiteExclusiveLockWaiter) { docdb::DocDBTableLocksConflictMatrixTest::ObjectLocksConflict(entries1, entries2)); if (is_conflicting) { - SyncPoint::GetInstance()->LoadDependency( - {{"ObjectLockedBatchEntry::Lock", "TestDowngradeDespiteExclusiveLockWaiter"}}); + SyncPoint::GetInstance()->LoadDependency({ + {"ObjectLockManagerImpl::DoLockSingleEntry", + "TestDowngradeDespiteExclusiveLockWaiter"}}); SyncPoint::GetInstance()->ClearTrace(); SyncPoint::GetInstance()->EnableProcessing(); @@ -357,4 +377,74 @@ TEST_F(TSLocalLockManagerTest, TestDowngradeDespiteExclusiveLockWaiter) { } } +TEST_F(TSLocalLockManagerTest, TestSanity) { + const auto kNumConns = 30; + const auto kNumbObjects = 5; + TestThreadHolder thread_holder; + for (int i = 0; i < kNumConns; i++) { + thread_holder.AddThreadFunctor([&, &stop = thread_holder.stop_flag()]() { + LockStateMap state_map; + auto owner = ObjectLockOwner{TransactionId::GenerateRandom(), 1}; + while (!stop) { + LockStateMap prev = state_map; + auto deadline = CoarseMonoClock::Now() + 2s; + unsigned int seed = SeedRandom(); + auto lock_type = TableLockType((rand_r(&seed) % TableLockType_MAX) + 1); + auto failed = false; + while(!failed && rand_r(&seed) % 3) { + failed = !LockObject( + owner, kDatabase1, rand_r(&seed) % kNumbObjects, lock_type, deadline, + &state_map).ok(); + } + if (failed) { + ASSERT_OK(ReleaseLocksForSubtxn(owner, deadline)); + state_map = prev; + } + owner.subtxn_id++; + auto actual_state_map = lm_->TEST_GetLockStateMapForTxn(owner.txn_id); + for (auto& [key, state] : state_map) { + auto it = actual_state_map.find(key); + ASSERT_TRUE(it != actual_state_map.end()); + ASSERT_EQ(it->second, state); + actual_state_map.erase(it); + } + for (auto& [_, state] : actual_state_map) { + ASSERT_EQ(state, 0); + } + if (rand_r(&seed) % 3) { + ASSERT_OK(ReleaseAllLocksForTxn(owner, deadline)); + state_map.clear(); + owner.subtxn_id = 1; + } + } + ASSERT_OK(ReleaseAllLocksForTxn(owner, CoarseTimePoint::max())); + }); + } + thread_holder.WaitAndStop(45s); +} + +#ifndef NDEBUG +TEST_F(TSLocalLockManagerTest, TestWaiterResetsStateDuringShutdown) { + ASSERT_OK(LockObject(kTxn1, kDatabase1, kObject1, TableLockType::ACCESS_SHARE)); + + SyncPoint::GetInstance()->LoadDependency( + {{"ObjectLockManagerImpl::DoLockSingleEntry", "TestWaiterResetsStateDuringShutdown"}}); + SyncPoint::GetInstance()->ClearTrace(); + SyncPoint::GetInstance()->EnableProcessing(); + + auto status_future = std::async(std::launch::async, [&]() { + return LockObject( + kTxn2, kDatabase1, kObject1, TableLockType::ACCESS_EXCLUSIVE, CoarseMonoClock::Now() + 10s); + }); + DEBUG_ONLY_TEST_SYNC_POINT("TestWaiterResetsStateDuringShutdown"); + + ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_olm_skip_scheduling_waiter_resumption) = true; + ASSERT_OK(ReleaseAllLocksForTxn(kTxn1, CoarseTimePoint::max())); + mini_server_->Shutdown(); + auto status = status_future.get(); + ASSERT_NOK(status); + ASSERT_STR_CONTAINS(status.ToString(), "Object Lock Manager shutting down"); +} +#endif + } // namespace yb::tserver diff --git a/src/yb/tserver/ts_local_lock_manager.cc b/src/yb/tserver/ts_local_lock_manager.cc index 737e5ca887b4..400a365916fb 100644 --- a/src/yb/tserver/ts_local_lock_manager.cc +++ 
b/src/yb/tserver/ts_local_lock_manager.cc @@ -18,6 +18,11 @@ #include "yb/docdb/docdb.h" #include "yb/docdb/docdb_fwd.h" #include "yb/docdb/object_lock_manager.h" + +#include "yb/rpc/messenger.h" +#include "yb/rpc/poller.h" + +#include "yb/server/server_base.h" #include "yb/util/backoff_waiter.h" #include "yb/util/monotime.h" #include "yb/util/scope_exit.h" @@ -27,11 +32,22 @@ using namespace std::literals; DECLARE_bool(dump_lock_keys); +DEFINE_RUNTIME_int64(olm_poll_interval_ms, 100, + "Poll interval for Object lock Manager. Waiting requests that are unblocked by other release " + "requests are independent of this interval since the release schedules unblocking of potential " + "waiters. Yet this might help release timedout requests soon and also avoid probable issues " + "with the signaling mechanism if any."); + namespace yb::tserver { class TSLocalLockManager::Impl { public: - Impl(const server::ClockPtr& clock, TabletServerIf* server) : clock_(clock), server_(server) {} + Impl( + const server::ClockPtr& clock, TabletServerIf* tablet_server, + server::RpcServerBase& messenger_server, ThreadPool* thread_pool) + : clock_(clock), server_(tablet_server), messenger_base_(messenger_server), + object_lock_manager_(thread_pool, messenger_server), + poller_("TSLocalLockManager", std::bind(&Impl::Poll, this)) {} ~Impl() = default; @@ -56,10 +72,34 @@ class TSLocalLockManager::Impl { return Status::OK(); } + Status CheckShutdown() { + return shutdown_ + ? STATUS_FORMAT(ShutdownInProgress, "Object Lock Manager Shutdown") : Status::OK(); + } + Status AcquireObjectLocks( const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, WaitForBootstrap wait) { + Synchronizer synchronizer; + DoAcquireObjectLocksAsync( + req, deadline, synchronizer.AsStdStatusCallback(), tserver::WaitForBootstrap::kFalse); + return synchronizer.Wait(); + } + + void DoAcquireObjectLocksAsync( + const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, + StdStatusCallback&& callback, WaitForBootstrap wait) { + auto s = PrepareAndExecuteAcquire(req, deadline, callback, wait); + if (!s.ok()) { + callback(s); + } + } + + Status PrepareAndExecuteAcquire( + const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, + StdStatusCallback& callback, WaitForBootstrap wait) { TRACE_FUNC(); + RETURN_NOT_OK(CheckShutdown()); auto txn = VERIFY_RESULT(FullyDecodeTransactionId(req.txn_id())); docdb::ObjectLockOwner object_lock_owner(txn, req.subtxn_id()); VLOG(3) << object_lock_owner.ToString() << " Acquiring lock : " << req.ShortDebugString(); @@ -80,17 +120,14 @@ class TSLocalLockManager::Impl { UpdateLeaseEpochIfNecessary(req.session_host_uuid(), req.lease_epoch()); auto keys_to_lock = VERIFY_RESULT(DetermineObjectsToLock(req.object_locks())); - if (object_lock_manager_.Lock(keys_to_lock.lock_batch, deadline, object_lock_owner)) { - TRACE("Successfully obtained object locks."); - return Status::OK(); - } - TRACE("Could not get the object locks."); - std::string batch_str; - if (FLAGS_dump_lock_keys) { - batch_str = Format(", batch: $0", keys_to_lock.lock_batch); - } - return STATUS_FORMAT( - TryAgain, "Failed to obtain object locks until deadline: $0$1", deadline, batch_str); + object_lock_manager_.Lock(docdb::LockData { + .key_to_lock = std::move(keys_to_lock), + .deadline = deadline, + .object_lock_owner = std::move(object_lock_owner), + .status_tablet = req.status_tablet(), + .start_time = MonoTime::FromUint64(req.propagated_hybrid_time()), + .callback = std::move(callback)}); + return 
Status::OK(); } Status WaitToApplyIfNecessary( @@ -129,6 +166,7 @@ class TSLocalLockManager::Impl { Status ReleaseObjectLocks( const tserver::ReleaseObjectLockRequestPB& req, CoarseTimePoint deadline) { + RETURN_NOT_OK(CheckShutdown()); auto txn = VERIFY_RESULT(FullyDecodeTransactionId(req.txn_id())); docdb::ObjectLockOwner object_lock_owner(txn, req.subtxn_id()); VLOG(3) << object_lock_owner.ToString() @@ -142,10 +180,37 @@ class TSLocalLockManager::Impl { if (req.has_db_catalog_version_data()) { server_->SetYsqlDBCatalogVersions(req.db_catalog_version_data()); } - object_lock_manager_.Unlock(object_lock_owner); + Status abort_status = req.has_abort_status() && req.abort_status().code() != AppStatusPB::OK + ? StatusFromPB(req.abort_status()) + : Status::OK(); + object_lock_manager_.Unlock(object_lock_owner, abort_status); return Status::OK(); } + void Poll() { + object_lock_manager_.Poll(); + } + + void Start(docdb::LocalWaitingTxnRegistry* waiting_txn_registry) { + object_lock_manager_.Start(waiting_txn_registry); + poller_.Start( + &messenger_base_.messenger()->scheduler(), 1ms * FLAGS_olm_poll_interval_ms); + } + + void Shutdown() { + shutdown_ = true; + poller_.Shutdown(); + { + yb::UniqueLock lock(mutex_); + while (!txns_in_progress_.empty()) { + WaitOnConditionVariableUntil(&cv_, &lock, CoarseMonoClock::Now() + 5s); + LOG_WITH_FUNC(WARNING) + << Format("Waiting for $0 in progress requests at the OLM", txns_in_progress_.size()); + } + } + object_lock_manager_.Shutdown(); + } + void UpdateLeaseEpochIfNecessary(const std::string& uuid, uint64_t lease_epoch) EXCLUDES(mutex_) { TRACE_FUNC(); std::lock_guard lock(mutex_); @@ -216,6 +281,11 @@ class TSLocalLockManager::Impl { return object_lock_manager_.TEST_WaitingLocksSize(); } + std::unordered_map + TEST_GetLockStateMapForTxn(const TransactionId& txn) const { + return object_lock_manager_.TEST_GetLockStateMapForTxn(txn); + } + void DumpLocksToHtml(std::ostream& out) { object_lock_manager_.DumpStatusHtml(out); } @@ -238,7 +308,6 @@ class TSLocalLockManager::Impl { private: const server::ClockPtr clock_; - docdb::ObjectLockManager object_lock_manager_; std::atomic_bool is_bootstrapped_{false}; std::unordered_map max_seen_lease_epoch_ GUARDED_BY(mutex_); std::unordered_set txns_in_progress_ GUARDED_BY(mutex_); @@ -246,13 +315,21 @@ class TSLocalLockManager::Impl { using LockType = std::mutex; LockType mutex_; TabletServerIf* server_; + server::RpcServerBase& messenger_base_; + docdb::ObjectLockManager object_lock_manager_; + std::atomic shutdown_{false}; + rpc::Poller poller_; }; -TSLocalLockManager::TSLocalLockManager(const server::ClockPtr& clock, TabletServerIf* server) - : impl_(new Impl(clock, server)) {} +TSLocalLockManager::TSLocalLockManager( + const server::ClockPtr& clock, TabletServerIf* tablet_server, + server::RpcServerBase& messenger_server, ThreadPool* thread_pool) + : impl_(new Impl( + clock, CHECK_NOTNULL(tablet_server), messenger_server, CHECK_NOTNULL(thread_pool))) {} TSLocalLockManager::~TSLocalLockManager() {} +// TODO: Remove this method and enforce callers supply a callback func. 
Status TSLocalLockManager::AcquireObjectLocks( const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, WaitForBootstrap wait) { @@ -271,6 +348,12 @@ Status TSLocalLockManager::AcquireObjectLocks( return ret; } +void TSLocalLockManager::AcquireObjectLocksAsync( + const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, + StdStatusCallback&& callback, WaitForBootstrap wait) { + impl_->DoAcquireObjectLocksAsync(req, deadline, std::move(callback), wait); +} + Status TSLocalLockManager::ReleaseObjectLocks( const tserver::ReleaseObjectLockRequestPB& req, CoarseTimePoint deadline) { if (VLOG_IS_ON(4)) { @@ -287,6 +370,15 @@ Status TSLocalLockManager::ReleaseObjectLocks( return ret; } +void TSLocalLockManager::Start( + docdb::LocalWaitingTxnRegistry* waiting_txn_registry) { + return impl_->Start(waiting_txn_registry); +} + +void TSLocalLockManager::Shutdown() { + impl_->Shutdown(); +} + void TSLocalLockManager::DumpLocksToHtml(std::ostream& out) { return impl_->DumpLocksToHtml(out); } @@ -299,6 +391,10 @@ bool TSLocalLockManager::IsBootstrapped() const { return impl_->IsBootstrapped(); } +server::ClockPtr TSLocalLockManager::clock() const { + return impl_->clock(); +} + size_t TSLocalLockManager::TEST_WaitingLocksSize() const { return impl_->TEST_WaitingLocksSize(); } @@ -311,8 +407,9 @@ void TSLocalLockManager::TEST_MarkBootstrapped() { impl_->MarkBootstrapped(); } -server::ClockPtr TSLocalLockManager::clock() const { - return impl_->clock(); +std::unordered_map + TSLocalLockManager::TEST_GetLockStateMapForTxn(const TransactionId& txn) const { + return impl_->TEST_GetLockStateMapForTxn(txn); } } // namespace yb::tserver diff --git a/src/yb/tserver/ts_local_lock_manager.h b/src/yb/tserver/ts_local_lock_manager.h index dfde6765cfec..ff52753f8b28 100644 --- a/src/yb/tserver/ts_local_lock_manager.h +++ b/src/yb/tserver/ts_local_lock_manager.h @@ -18,15 +18,21 @@ #include #include "yb/common/common_fwd.h" -#include "yb/common/transaction.pb.h" -#include "yb/docdb/object_lock_manager.h" -#include "yb/dockv/value_type.h" +#include "yb/common/transaction.h" + #include "yb/server/clock.h" +#include "yb/server/server_fwd.h" + #include "yb/tserver/tablet_server_interface.h" #include "yb/tserver/tserver.pb.h" -#include "yb/util/status.h" -namespace yb::tserver { +#include "yb/util/status_callback.h" + +namespace yb { + +class ThreadPool; + +namespace tserver { YB_STRONGLY_TYPED_BOOL(WaitForBootstrap); @@ -49,7 +55,9 @@ YB_STRONGLY_TYPED_BOOL(WaitForBootstrap); // it with all exisitng DDL (global) locks. class TSLocalLockManager { public: - TSLocalLockManager(const server::ClockPtr& clock, TabletServerIf* server); + TSLocalLockManager( + const server::ClockPtr& clock, TabletServerIf* tablet_server, + server::RpcServerBase& messenger_server, ThreadPool* thread_pool); ~TSLocalLockManager(); // Tries acquiring object locks with the specified modes and registers them against the given @@ -73,6 +81,10 @@ class TSLocalLockManager { const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, WaitForBootstrap wait = WaitForBootstrap::kTrue); + void AcquireObjectLocksAsync( + const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, + StdStatusCallback&& callback, WaitForBootstrap wait = WaitForBootstrap::kTrue); + // When subtxn id is set, releases all locks tagged against . Else releases all // object locks owned by . 
// @@ -80,19 +92,29 @@ lock modes on a key multiple times, and will unlock them all with a single unlock rpc. Status ReleaseObjectLocks( const tserver::ReleaseObjectLockRequestPB& req, CoarseTimePoint deadline); + + void Start(docdb::LocalWaitingTxnRegistry* waiting_txn_registry); + + void Shutdown(); + void DumpLocksToHtml(std::ostream& out); Status BootstrapDdlObjectLocks(const tserver::DdlLockEntriesPB& resp); bool IsBootstrapped() const; + + server::ClockPtr clock() const; + size_t TEST_GrantedLocksSize() const; size_t TEST_WaitingLocksSize() const; void TEST_MarkBootstrapped(); - server::ClockPtr clock() const; + std::unordered_map + TEST_GetLockStateMapForTxn(const TransactionId& txn) const; private: class Impl; std::unique_ptr impl_; }; -} // namespace yb::tserver +} // namespace tserver +} // namespace yb
diff --git a/src/yb/tserver/tserver.proto b/src/yb/tserver/tserver.proto index 5c876733513d..3d2f9156ac3a 100644 --- a/src/yb/tserver/tserver.proto +++ b/src/yb/tserver/tserver.proto @@ -429,6 +429,7 @@ message AcquireObjectLockRequestPB { optional fixed64 ignore_after_hybrid_time = 6; optional fixed64 propagated_hybrid_time = 7; optional AshMetadataPB ash_metadata = 8; + optional bytes status_tablet = 9; } message AcquireObjectLockResponsePB { @@ -467,6 +468,7 @@ message ReleaseObjectLockRequestPB { // Used for tracking in-progress DDL unlock requests at the master. optional uint64 request_id = 8; optional AshMetadataPB ash_metadata = 9; + optional AppStatusPB abort_status = 10; } message ReleaseObjectLockResponsePB {
diff --git a/src/yb/yql/pgwrapper/pg_object_locks-test.cc b/src/yb/yql/pgwrapper/pg_object_locks-test.cc index 194f0f244df0..720e45e95739 100644 --- a/src/yb/yql/pgwrapper/pg_object_locks-test.cc +++ b/src/yb/yql/pgwrapper/pg_object_locks-test.cc @@ -107,8 +107,9 @@ class PgObjectLocksTestRF1 : public PgMiniTestBase { ASSERT_OK(conn1.StartTransaction(IsolationLevel::SNAPSHOT_ISOLATION)); ASSERT_OK(conn1.Execute(lock_stmt_1)); - // In sync point ObjectLockedBatchEntry::Lock, the lock is in waiting state. - SyncPoint::GetInstance()->LoadDependency({{"WaitingLock", "ObjectLockedBatchEntry::Lock"}}); + // In sync point ObjectLockManagerImpl::DoLockSingleEntry, the lock is in waiting state. + SyncPoint::GetInstance()->LoadDependency( + {{"WaitingLock", "ObjectLockManagerImpl::DoLockSingleEntry"}}); SyncPoint::GetInstance()->ClearTrace(); SyncPoint::GetInstance()->EnableProcessing();
From 6316f6ceb99f5ca59336d6330f768a0bf4b50a10 Mon Sep 17 00:00:00 2001 From: Aleksandr Malyshev Date: Fri, 16 May 2025 22:45:15 +0300 Subject: [PATCH 092/146] [PLAT-11945] YBA: Make YBA FIPS compliant
Summary: See https://yugabyte.slack.com/archives/C02AL0YGD37/p1745253211441579?thread_ts=1742894930.694429&cid=C02AL0YGD37 for more details. This basically allows running YBA in FIPS compliant mode with the -Dorg.bouncycastle.fips.approved_only=true system property. It replaces regular BouncyCastle providers with BC FIPS providers + makes sure we have only BC FIPS providers + Sun provider (for secure random entropy impl) in FIPS approved mode. It also removes the PEM keystore implementation, as we don't actually use that + PEM keystore is not supported in FIPS compliant mode.
YBA truststore format for new YBA installations is changed to BCFKS. In case YBA already has a PKCS12 truststore, it continues to use that. To migrate an existing system to FIPS compliant mode we will have to convert the truststore.
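The conversion itself is not part of this patch. As an illustration only, such a migration could copy the trusted certificate entries of the existing PKCS12 store into a BCFKS store backed by the BC FIPS provider; in the Java sketch below the class name, file paths, and password are placeholders, not values YBA actually uses.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.Security;
import java.util.Enumeration;
import org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider;

// Hypothetical one-off migration: copy trusted certificates from a PKCS12
// trust store into a BCFKS trust store usable in FIPS-approved mode.
public class TrustStoreConverter {
  public static void main(String[] args) throws Exception {
    Security.addProvider(new BouncyCastleFipsProvider());
    char[] password = "truststore-password".toCharArray(); // placeholder

    // Load the legacy PKCS12 trust store.
    KeyStore pkcs12 = KeyStore.getInstance("PKCS12");
    try (FileInputStream in = new FileInputStream("yba-truststore.p12")) { // placeholder path
      pkcs12.load(in, password);
    }

    // Create an empty BCFKS store via the BC FIPS provider and copy every
    // trusted certificate entry across.
    KeyStore bcfks = KeyStore.getInstance("BCFKS", "BCFIPS");
    bcfks.load(null, null);
    Enumeration<String> aliases = pkcs12.aliases();
    while (aliases.hasMoreElements()) {
      String alias = aliases.nextElement();
      if (pkcs12.isCertificateEntry(alias)) {
        bcfks.setCertificateEntry(alias, pkcs12.getCertificate(alias));
      }
    }

    try (FileOutputStream out = new FileOutputStream("yba-truststore.bcfks")) { // placeholder path
      bcfks.store(out, password);
    }
  }
}
```

A similar conversion would apply to the BCFKS keystore used for HTTPS in the test plan below.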
Also, it changes hashing algorithm for user passwords and API keys to PBKDF2 HMAC SHA256. We continue to support BCrypt for existing users, though. To make a system FIPS compliant we'll have to implement migration for these as well - users will have to change passwords and re-generate API keys. Test Plan: Tested YBA locally with the following settings ``` -Dorg.bouncycastle.fips.approved_only=true -Dhttp.port=disabled -Dhttps.port=9000 -Dhttps.keyStore=/Users/amalysh86/certs1/localhost.bcfks -Dhttps.keyStoreType=BCFKS -Dhttps.keyStorePassword=global-truststore-password -Dhttps.keyStoreAlgorithm=PKIX ``` Also tested with ``` -Dorg.bouncycastle.fips.approved_only=true and PEM keystore for HTTPS ``` Tested operations with YBA truststore. Reviewers: anijhawan, nbhatia, nsingh, dshubin Reviewed By: nsingh Differential Revision: https://phorge.dev.yugabyte.com/D43451 --- java/interface-annotations/pom.xml | 2 +- java/pom.xml | 2 +- java/yb-cdc/pom.xml | 2 +- java/yb-cli/pom.xml | 2 +- java/yb-client/pom.xml | 2 +- .../java/org/yb/client/AsyncYBClient.java | 2 - java/yb-cql-4x/pom.xml | 2 +- java/yb-cql/pom.xml | 2 +- java/yb-jedis-tests/pom.xml | 2 +- java/yb-loadtester/pom.xml | 2 +- java/yb-multiapi/pom.xml | 2 +- java/yb-pgsql/pom.xml | 2 +- java/yb-sample/pom.xml | 2 +- java/yb-ysql-conn-mgr/pom.xml | 2 +- java/yb-yugabyted/pom.xml | 2 +- managed/build.sbt | 28 +- managed/src/main/java/MainModule.java | 28 +- .../java/com/yugabyte/yw/common/AppInit.java | 25 + .../yugabyte/yw/common/NodeAgentClient.java | 2 +- .../yw/common/TableSpaceStructures.java | 4 +- .../yugabyte/yw/common/WSClientRefresher.java | 10 +- .../yw/common/certmgmt/CertificateHelper.java | 49 +- .../castore/CustomCAStoreManager.java | 63 +-- .../castore/PemTrustStoreManager.java | 288 ------------ .../certmgmt/castore/TrustStoreManager.java | 11 +- ...Manager.java => YBATrustStoreManager.java} | 135 +++--- .../common/certmgmt/providers/VaultPKI.java | 4 +- .../common/encryption/bc/BcOpenBsdHasher.java | 10 +- .../encryption/bc/Pbkdf2HmacSha256Hasher.java | 61 +++ .../yw/forms/PerfAdvisorSettingsFormData.java | 4 +- .../yugabyte/yw/forms/RunQueryFormData.java | 4 +- .../com/yugabyte/yw/models/HealthCheck.java | 6 +- .../java/com/yugabyte/yw/models/Users.java | 39 +- .../yw/models/helpers/CommonUtils.java | 2 + .../resources/health/node_health.py.template | 1 - managed/src/main/resources/reference.conf | 4 + .../java/com/yugabyte/yw/common/UtilTest.java | 13 +- .../certmgmt/CertificateHelperTest.java | 8 +- .../certmgmt/PemTrustStoreManagerTest.java | 438 ------------------ .../yw/common/certmgmt/VaultPKITest.java | 4 +- .../ha/PlatformInstanceClientFactoryTest.java | 8 - .../yw/controllers/JWTVerifierTest.java | 4 +- 42 files changed, 318 insertions(+), 965 deletions(-) delete mode 100644 managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/PemTrustStoreManager.java rename managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/{Pkcs12TrustStoreManager.java => YBATrustStoreManager.java} (73%) create mode 100644 managed/src/main/java/com/yugabyte/yw/common/encryption/bc/Pbkdf2HmacSha256Hasher.java delete mode 100644 managed/src/test/java/com/yugabyte/yw/common/certmgmt/PemTrustStoreManagerTest.java diff --git a/java/interface-annotations/pom.xml b/java/interface-annotations/pom.xml index 99d36a9011a2..b8222e3db631 100644 --- a/java/interface-annotations/pom.xml +++ b/java/interface-annotations/pom.xml @@ -21,7 +21,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT interface-annotations diff --git 
a/java/pom.xml b/java/pom.xml index 8fe8b7019c10..0b5165245da3 100644 --- a/java/pom.xml +++ b/java/pom.xml @@ -40,7 +40,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT pom Yugabyte diff --git a/java/yb-cdc/pom.xml b/java/yb-cdc/pom.xml index 89298e30803b..dcef6a3f58c8 100644 --- a/java/yb-cdc/pom.xml +++ b/java/yb-cdc/pom.xml @@ -5,7 +5,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-cdc YB CDC Connector diff --git a/java/yb-cli/pom.xml b/java/yb-cli/pom.xml index f1b19d715510..c94607b0edc6 100644 --- a/java/yb-cli/pom.xml +++ b/java/yb-cli/pom.xml @@ -25,7 +25,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-cli diff --git a/java/yb-client/pom.xml b/java/yb-client/pom.xml index 101f813c2a46..4f307d662304 100644 --- a/java/yb-client/pom.xml +++ b/java/yb-client/pom.xml @@ -25,7 +25,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-client diff --git a/java/yb-client/src/main/java/org/yb/client/AsyncYBClient.java b/java/yb-client/src/main/java/org/yb/client/AsyncYBClient.java index dcafaf313760..76e81698ee30 100644 --- a/java/yb-client/src/main/java/org/yb/client/AsyncYBClient.java +++ b/java/yb-client/src/main/java/org/yb/client/AsyncYBClient.java @@ -108,7 +108,6 @@ import javax.net.ssl.SSLContext; import javax.net.ssl.SSLEngine; import javax.net.ssl.TrustManagerFactory; -import org.bouncycastle.jce.provider.BouncyCastleProvider; import org.bouncycastle.util.io.pem.PemObject; import org.bouncycastle.util.io.pem.PemReader; import org.slf4j.Logger; @@ -3808,7 +3807,6 @@ private void handleClose(TabletClient client, Channel channel) { private SslHandler createSslHandler() { try { - Security.addProvider(new BouncyCastleProvider()); CertificateFactory cf = CertificateFactory.getInstance("X.509"); FileInputStream fis = new FileInputStream(certFile); List cas; diff --git a/java/yb-cql-4x/pom.xml b/java/yb-cql-4x/pom.xml index 7f51b46279d6..0f7dde4b9da3 100644 --- a/java/yb-cql-4x/pom.xml +++ b/java/yb-cql-4x/pom.xml @@ -7,7 +7,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-cql-4x YB CQL Support for 4.x Driver diff --git a/java/yb-cql/pom.xml b/java/yb-cql/pom.xml index d07ed27a6e0b..b0a63e78f8ae 100644 --- a/java/yb-cql/pom.xml +++ b/java/yb-cql/pom.xml @@ -7,7 +7,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-cql YB CQL Support diff --git a/java/yb-jedis-tests/pom.xml b/java/yb-jedis-tests/pom.xml index 5ec8d99fcdb7..0dcf31dbf926 100644 --- a/java/yb-jedis-tests/pom.xml +++ b/java/yb-jedis-tests/pom.xml @@ -7,7 +7,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-jedis-tests YB Jedis Tests diff --git a/java/yb-loadtester/pom.xml b/java/yb-loadtester/pom.xml index 917095b7312f..26d02434106c 100644 --- a/java/yb-loadtester/pom.xml +++ b/java/yb-loadtester/pom.xml @@ -6,7 +6,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-loadtester diff --git a/java/yb-multiapi/pom.xml b/java/yb-multiapi/pom.xml index ec89ff388480..bb60472c8ff7 100644 --- a/java/yb-multiapi/pom.xml +++ b/java/yb-multiapi/pom.xml @@ -9,7 +9,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-multiapi diff --git a/java/yb-pgsql/pom.xml b/java/yb-pgsql/pom.xml index c372638ec1ea..30055b6ffba3 100644 --- a/java/yb-pgsql/pom.xml +++ b/java/yb-pgsql/pom.xml @@ -8,7 +8,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-pgsql YB PostgreSQL Support diff --git a/java/yb-sample/pom.xml b/java/yb-sample/pom.xml index 4d2d6ad19627..aea44a156574 100644 --- a/java/yb-sample/pom.xml +++ b/java/yb-sample/pom.xml 
@@ -8,7 +8,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-sample YB Manual Support diff --git a/java/yb-ysql-conn-mgr/pom.xml b/java/yb-ysql-conn-mgr/pom.xml index c6737ec8b7cc..318817e8c4a5 100644 --- a/java/yb-ysql-conn-mgr/pom.xml +++ b/java/yb-ysql-conn-mgr/pom.xml @@ -22,7 +22,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-ysql-conn-mgr Ysql Connection Manager Tests diff --git a/java/yb-yugabyted/pom.xml b/java/yb-yugabyted/pom.xml index 1313c1e43734..ba919ecdb6d9 100644 --- a/java/yb-yugabyted/pom.xml +++ b/java/yb-yugabyted/pom.xml @@ -10,7 +10,7 @@ org.yb yb-parent - 0.8.103-SNAPSHOT + 0.8.104-SNAPSHOT yb-yugabyted diff --git a/managed/build.sbt b/managed/build.sbt index 8da7d1e93542..abe15e814793 100644 --- a/managed/build.sbt +++ b/managed/build.sbt @@ -1,15 +1,16 @@ import jline.console.ConsoleReader import play.sbt.PlayImport.PlayKeys.{playInteractionMode, playMonitoredFiles} import play.sbt.PlayInteractionMode + import java.io.File import java.nio.charset.StandardCharsets import java.nio.file.{FileSystems, Files, Paths} import sbt.complete.Parsers.spaceDelimited -import sbt.Tests._ +import sbt.Tests.* -import scala.collection.JavaConverters._ +import scala.collection.JavaConverters.* import scala.sys.process.Process -import scala.sys.process._ +import scala.sys.process.* historyPath := Some(file(System.getenv("HOME") + "/.sbt/.yugaware-history")) @@ -171,7 +172,10 @@ libraryDependencies ++= Seq( // https://github.com/YugaByte/cassandra-java-driver/releases "com.yugabyte" % "cassandra-driver-core" % "3.8.0-yb-7", "org.yaml" % "snakeyaml" % "2.1", - "org.bouncycastle" % "bcpkix-jdk18on" % "1.80", + "org.bouncycastle" % "bc-fips" % "2.1.0", + "org.bouncycastle" % "bcpkix-fips" % "2.1.9", + "org.bouncycastle" % "bctls-fips" % "2.1.20", + "org.mindrot" % "jbcrypt" % "0.4", "org.springframework.security" % "spring-security-core" % "5.8.16", "com.amazonaws" % "aws-java-sdk-ec2" % "1.12.768", "com.amazonaws" % "aws-java-sdk-kms" % "1.12.768", @@ -241,7 +245,7 @@ libraryDependencies ++= Seq( "io.jsonwebtoken" % "jjwt-impl" % "0.11.5", "io.jsonwebtoken" % "jjwt-jackson" % "0.11.5", "io.swagger" % "swagger-annotations" % "1.6.1", // needed for annotations in prod code - "de.dentrassi.crypto" % "pem-keystore" % "2.2.1", + "de.dentrassi.crypto" % "pem-keystore" % "3.0.0", // Prod dependency temporary as we use HSQLDB as a dummy perf_advisor DB for YBM scenario // Remove once YBM starts using real PG DB. 
"org.hsqldb" % "hsqldb" % "2.7.1", @@ -924,7 +928,7 @@ runPlatform := { Project.extract(newState).runTask(runPlatformTask, newState) } -libraryDependencies += "org.yb" % "yb-client" % "0.8.103-SNAPSHOT" +libraryDependencies += "org.yb" % "yb-client" % "0.8.104-SNAPSHOT" libraryDependencies += "org.yb" % "ybc-client" % "2.2.0.2-b2" libraryDependencies += "org.yb" % "yb-perf-advisor" % "1.0.0-b35" @@ -1006,6 +1010,10 @@ dependencyOverrides ++= jacksonOverrides excludeDependencies += "org.eclipse.jetty" % "jetty-io" excludeDependencies += "org.eclipse.jetty" % "jetty-server" excludeDependencies += "commons-collections" % "commons-collections" +excludeDependencies += "org.bouncycastle" % "bcpkix-jdk15on" +excludeDependencies += "org.bouncycastle" % "bcprov-jdk15on" +excludeDependencies += "org.bouncycastle" % "bcpkix-jdk18on" +excludeDependencies += "org.bouncycastle" % "bcprov-jdk18on" Global / concurrentRestrictions := Seq(Tags.limitAll(16)) @@ -1138,7 +1146,13 @@ lazy val swagger = project dependencyOverrides ++= jacksonOverrides, dependencyOverrides += "org.scala-lang.modules" %% "scala-xml" % "2.1.0", - swaggerGen := Def.taskDyn { + excludeDependencies += "org.bouncycastle" % "bcpkix-jdk15on", + excludeDependencies += "org.bouncycastle" % "bcprov-jdk15on", + excludeDependencies += "org.bouncycastle" % "bcpkix-jdk18on", + excludeDependencies += "org.bouncycastle" % "bcprov-jdk18on", + + +swaggerGen := Def.taskDyn { // Consider generating this only in managedResources val swaggerJson = (root / Compile / resourceDirectory).value / "swagger.json" val swaggerStrictJson = (root / Compile / resourceDirectory).value / "swagger-strict.json" diff --git a/managed/src/main/java/MainModule.java b/managed/src/main/java/MainModule.java index b51fc364c551..a2989e4acee6 100644 --- a/managed/src/main/java/MainModule.java +++ b/managed/src/main/java/MainModule.java @@ -97,12 +97,14 @@ import com.yugabyte.yw.metrics.MetricQueryHelper; import com.yugabyte.yw.models.CertificateInfo; import com.yugabyte.yw.models.HealthCheck; +import com.yugabyte.yw.models.helpers.CommonUtils; import com.yugabyte.yw.models.helpers.TaskTypesModule; import com.yugabyte.yw.queries.QueryHelper; import com.yugabyte.yw.scheduler.Scheduler; import de.dentrassi.crypto.pem.PemKeyStoreProvider; import io.prometheus.client.CollectorRegistry; import java.security.KeyStore; +import java.security.Provider; import java.security.SecureRandom; import java.security.Security; import javax.net.ssl.HttpsURLConnection; @@ -112,7 +114,8 @@ import lombok.extern.slf4j.Slf4j; import org.apache.commons.lang3.StringUtils; import org.apache.commons.validator.routines.DomainValidator; -import org.bouncycastle.jce.provider.BouncyCastleProvider; +import org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider; +import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider; import org.pac4j.core.client.Clients; import org.pac4j.core.context.session.SessionStore; import org.pac4j.core.http.url.DefaultUrlResolver; @@ -157,8 +160,27 @@ public void configure() { } install(new TaskTypesModule()); - Security.addProvider(new PemKeyStoreProvider()); - Security.addProvider(new BouncyCastleProvider()); + if (!config.getBoolean(CommonUtils.FIPS_ENABLED)) { + Security.addProvider(new PemKeyStoreProvider()); + } else { + Provider[] providers = Security.getProviders(); + log.info("Removing all providers configured in JVM (java.security file)"); + if (providers != null) { + for (Provider provider : providers) { + // We have to leave SUN provider in place as it is used to 
get entropy for SecureRandom + // We'll have to fugure out how to actually provide a proper entropy source. + // See https://github.com/bcgit/bc-java/issues/1285 for more details. + if (!provider.getName().equals("SUN")) { + Security.removeProvider(provider.getName()); + } + } + } + } + log.info("Adding BC-FIPS providers"); + Security.setProperty("ssl.KeyManagerFactory.algorithm", "PKIX"); + Security.setProperty("ssl.TrustManagerFactory.algorithm", "PKIX"); + Security.insertProviderAt(new BouncyCastleFipsProvider("C:HYBRID;ENABLE{All};"), 1); + Security.insertProviderAt(new BouncyCastleJsseProvider("fips:BCFIPS"), 2); TLSConfig.modifyTLSDisabledAlgorithms(config); bind(RuntimeConfigFactory.class).to(SettableRuntimeConfigFactory.class).asEagerSingleton(); install(new CustomerConfKeys()); diff --git a/managed/src/main/java/com/yugabyte/yw/common/AppInit.java b/managed/src/main/java/com/yugabyte/yw/common/AppInit.java index 8beff69cea30..9a8c9d2134db 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/AppInit.java +++ b/managed/src/main/java/com/yugabyte/yw/common/AppInit.java @@ -50,6 +50,7 @@ import com.yugabyte.yw.models.Principal; import com.yugabyte.yw.models.Release; import com.yugabyte.yw.models.Users; +import com.yugabyte.yw.models.helpers.CommonUtils; import com.yugabyte.yw.models.rbac.ResourceGroup; import com.yugabyte.yw.models.rbac.Role; import com.yugabyte.yw.models.rbac.RoleBinding; @@ -62,6 +63,8 @@ import io.prometheus.client.CollectorRegistry; import io.prometheus.client.Gauge; import io.prometheus.client.hotspot.DefaultExports; +import java.security.Provider; +import java.security.Security; import java.util.HashSet; import java.util.List; import java.util.Map; @@ -69,6 +72,8 @@ import java.util.concurrent.atomic.AtomicBoolean; import java.util.stream.Collectors; import lombok.extern.slf4j.Slf4j; +import org.bouncycastle.crypto.CryptoServicesRegistrar; +import org.bouncycastle.crypto.fips.FipsStatus; import play.Application; import play.Environment; @@ -365,6 +370,26 @@ public AppInit( releaseManager.fixFilePaths(); } + if (config.getBoolean(CommonUtils.FIPS_ENABLED)) { + if (FipsStatus.isReady()) { + log.info("FipsStatus.isReady = true"); + } else { + throw new RuntimeException("FipsStatus.isReady = false"); + } + if (CryptoServicesRegistrar.isInApprovedOnlyMode()) { + log.info("CryptoServicesRegistrar.isInApprovedOnlyMode = true"); + } else { + throw new RuntimeException("CryptoServicesRegistrar.isInApprovedOnlyMode = false"); + } + Provider[] providers = Security.getProviders(); + log.info("Following providers are installed:"); + if (providers != null) { + for (int i = 0; i < providers.length; i++) { + log.info("{}: {}", i, providers[i].getName()); + } + } + } + log.info("AppInit completed"); } } catch (Throwable t) { diff --git a/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java b/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java index 36697ec3f0ea..1a3a49e7eddd 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java +++ b/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java @@ -392,7 +392,7 @@ public void onError(Throwable throwable) { setCorrelationId(); this.throwable = throwable; latch.countDown(); - log.error("Error encountered for {} - {}", getId(), throwable.getMessage()); + log.error("Error encountered for {} - {}", getId(), throwable.getMessage(), throwable); } @Override diff --git a/managed/src/main/java/com/yugabyte/yw/common/TableSpaceStructures.java 
b/managed/src/main/java/com/yugabyte/yw/common/TableSpaceStructures.java index 77da380716ae..6d91b83d52ac 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/TableSpaceStructures.java +++ b/managed/src/main/java/com/yugabyte/yw/common/TableSpaceStructures.java @@ -7,7 +7,7 @@ import com.fasterxml.jackson.annotation.JsonIgnoreProperties; import com.fasterxml.jackson.annotation.JsonInclude; import com.fasterxml.jackson.annotation.JsonProperty; -import com.fasterxml.jackson.databind.PropertyNamingStrategy; +import com.fasterxml.jackson.databind.PropertyNamingStrategies; import com.fasterxml.jackson.databind.annotation.JsonNaming; import com.yugabyte.yw.models.helpers.CommonUtils; import io.swagger.annotations.ApiModel; @@ -68,7 +68,7 @@ public boolean equals(Object obj) { @EqualsAndHashCode @NoArgsConstructor - @JsonNaming(PropertyNamingStrategy.SnakeCaseStrategy.class) + @JsonNaming(PropertyNamingStrategies.SnakeCaseStrategy.class) public static class PlacementBlock { @ApiModelProperty(value = "Cloud") @NotNull diff --git a/managed/src/main/java/com/yugabyte/yw/common/WSClientRefresher.java b/managed/src/main/java/com/yugabyte/yw/common/WSClientRefresher.java index 847100059170..f0eec25de3ff 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/WSClientRefresher.java +++ b/managed/src/main/java/com/yugabyte/yw/common/WSClientRefresher.java @@ -19,6 +19,7 @@ import com.yugabyte.yw.common.certmgmt.castore.CustomCAStoreManager; import com.yugabyte.yw.common.config.RuntimeConfigFactory; import java.io.IOException; +import java.util.ArrayList; import java.util.List; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; @@ -100,13 +101,8 @@ private Config getWsConfig(String ybWsConfigPath) { ConfigValue ybWsOverrides = runtimeConfigFactory.globalRuntimeConf().getValue(ybWsConfigPath); Config customWsConfig = ConfigFactory.empty().withValue("play.ws", ybWsOverrides); - // Add the custom CA truststore config if applicable. - List> ybaStoreConfig = customCAStoreManager.getPemStoreConfig(); - if (!ybaStoreConfig.isEmpty() && !customCAStoreManager.isEnabled()) { - log.warn("Skipping to add YBA's custom trust-store config as the feature is disabled"); - } - - if (!ybaStoreConfig.isEmpty() && customCAStoreManager.isEnabled()) { + List> ybaStoreConfig = new ArrayList<>(); + if (customCAStoreManager.isEnabled()) { // Add JRE default cert paths as well in this case. 
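For reference, the provider bootstrap performed by the MainModule change above can be exercised in isolation. A minimal sketch, assuming bc-fips and bctls-fips are on the classpath; the class name and printouts are illustrative, and approved-only enforcement itself comes from the org.bouncycastle.fips.approved_only JVM property rather than from this code.

import java.security.Provider;
import java.security.Security;
import org.bouncycastle.crypto.CryptoServicesRegistrar;
import org.bouncycastle.crypto.fips.FipsStatus;
import org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;

public class FipsBootstrapSketch {
  public static void main(String[] args) {
    // Drop pre-registered JCA providers, keeping SUN as the SecureRandom
    // entropy source (see the bc-java issue referenced in the comment above).
    for (Provider provider : Security.getProviders()) {
      if (!"SUN".equals(provider.getName())) {
        Security.removeProvider(provider.getName());
      }
    }
    // Point default key/trust manager lookups at the PKIX algorithm that BCJSSE serves.
    Security.setProperty("ssl.KeyManagerFactory.algorithm", "PKIX");
    Security.setProperty("ssl.TrustManagerFactory.algorithm", "PKIX");
    // Same config strings as the MainModule change: BCFIPS first, BCJSSE pinned to it.
    Security.insertProviderAt(new BouncyCastleFipsProvider("C:HYBRID;ENABLE{All};"), 1);
    Security.insertProviderAt(new BouncyCastleJsseProvider("fips:BCFIPS"), 2);

    // The same sanity checks AppInit performs on startup.
    System.out.println("FipsStatus.isReady = " + FipsStatus.isReady());
    System.out.println("approvedOnly = " + CryptoServicesRegistrar.isInApprovedOnlyMode());
  }
}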
ybaStoreConfig.add(customCAStoreManager.getJavaDefaultConfig()); ybaStoreConfig.addAll(customCAStoreManager.getYBAJavaKeyStoreConfig()); diff --git a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/CertificateHelper.java b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/CertificateHelper.java index 1b8b530343ad..5d91706c32c3 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/CertificateHelper.java +++ b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/CertificateHelper.java @@ -31,20 +31,13 @@ import java.io.StringReader; import java.io.StringWriter; import java.math.BigInteger; -import java.security.Key; -import java.security.KeyFactory; -import java.security.KeyPair; -import java.security.KeyPairGenerator; +import java.security.*; import java.security.MessageDigest; -import java.security.NoSuchAlgorithmException; -import java.security.NoSuchProviderException; -import java.security.PrivateKey; import java.security.cert.CertificateException; import java.security.cert.CertificateFactory; import java.security.cert.X509Certificate; import java.security.interfaces.RSAPrivateKey; import java.security.interfaces.RSAPublicKey; -import java.security.spec.PKCS8EncodedKeySpec; import java.time.Instant; import java.time.OffsetDateTime; import java.time.ZoneOffset; @@ -79,15 +72,16 @@ import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter; import org.bouncycastle.cert.jcajce.JcaX509ExtensionUtils; import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder; -import org.bouncycastle.jce.provider.BouncyCastleProvider; +import org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider; +import org.bouncycastle.openssl.PEMKeyPair; +import org.bouncycastle.openssl.PEMParser; +import org.bouncycastle.openssl.jcajce.JcaPEMKeyConverter; import org.bouncycastle.openssl.jcajce.JcaPEMWriter; import org.bouncycastle.operator.ContentSigner; import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder; import org.bouncycastle.pkcs.PKCS10CertificationRequest; import org.bouncycastle.pkcs.PKCS10CertificationRequestBuilder; import org.bouncycastle.pkcs.jcajce.JcaPKCS10CertificationRequestBuilder; -import org.bouncycastle.util.io.pem.PemObject; -import org.bouncycastle.util.io.pem.PemReader; import org.flywaydb.play.FileUtils; import play.libs.Json; @@ -708,28 +702,11 @@ public static X509Certificate convertStringToX509Cert(String certificate) return (X509Certificate) cf.generateCertificate(new ByteArrayInputStream(certificateData)); } - public static PrivateKey convertStringToPrivateKey(String strKey) throws Exception { - strKey = strKey.replace(System.lineSeparator(), ""); - strKey = strKey.replaceAll("^\"+|\"+$", ""); - strKey = strKey.replace("-----BEGIN PRIVATE KEY-----", ""); - strKey = strKey.replace("-----END PRIVATE KEY-----", ""); - strKey = strKey.replace("-----BEGIN RSA PRIVATE KEY-----", ""); - strKey = strKey.replace("-----END RSA PRIVATE KEY-----", ""); - - byte[] decoded = Base64.getMimeDecoder().decode(strKey); - - PKCS8EncodedKeySpec spec = new PKCS8EncodedKeySpec(decoded); - KeyFactory kf = KeyFactory.getInstance("RSA"); - return kf.generatePrivate(spec); - } - public static PrivateKey getPrivateKey(String keyContent) { - try (PemReader pemReader = new PemReader(new StringReader(keyContent))) { - PemObject pemObject = pemReader.readPemObject(); - byte[] bytes = pemObject.getContent(); - PKCS8EncodedKeySpec spec = new PKCS8EncodedKeySpec(bytes); - KeyFactory kf = KeyFactory.getInstance("RSA"); - return kf.generatePrivate(spec); + try { + PEMParser parser 
= new PEMParser(new StringReader(keyContent)); + PEMKeyPair pemKeyPair = (PEMKeyPair) parser.readObject(); + return new JcaPEMKeyConverter().getPrivateKey(pemKeyPair.getPrivateKeyInfo()); } catch (Exception e) { log.error(e.getMessage()); throw new RuntimeException("Unable to get Private Key"); @@ -820,7 +797,7 @@ public static void writeCertBundleToCertPath( public static KeyPair getKeyPairObject() throws NoSuchAlgorithmException, NoSuchProviderException { - KeyPairGenerator keypairGen = KeyPairGenerator.getInstance("RSA", "BC"); + KeyPairGenerator keypairGen = KeyPairGenerator.getInstance("RSA"); keypairGen.initialize(2048); return keypairGen.generateKeyPair(); } @@ -870,7 +847,7 @@ public X509Certificate generateCACertificate( new JcaContentSignerBuilder(CertificateHelper.SIGNATURE_ALGO).build(keyPair.getPrivate()); X509CertificateHolder holder = certGen.build(signer); JcaX509CertificateConverter converter = new JcaX509CertificateConverter(); - converter.setProvider(new BouncyCastleProvider()); + converter.setProvider(Security.getProvider(BouncyCastleFipsProvider.PROVIDER_NAME)); return converter.getCertificate(holder); } catch (Exception e) { throw new RuntimeException(e.getMessage(), e); @@ -940,10 +917,10 @@ public static X509Certificate createAndSignCertificate( X509CertificateHolder newCertHolder = newCertBuilder.build(csrContentSigner); X509Certificate newCert = new JcaX509CertificateConverter() - .setProvider(new BouncyCastleProvider()) + .setProvider(Security.getProvider(BouncyCastleFipsProvider.PROVIDER_NAME)) .getCertificate(newCertHolder); - newCert.verify(caCert.getPublicKey(), BouncyCastleProvider.PROVIDER_NAME); + newCert.verify(caCert.getPublicKey(), BouncyCastleFipsProvider.PROVIDER_NAME); return newCert; } catch (Exception e) { diff --git a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/CustomCAStoreManager.java b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/CustomCAStoreManager.java index e978c47bef61..86034a71354e 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/CustomCAStoreManager.java +++ b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/CustomCAStoreManager.java @@ -11,7 +11,6 @@ import com.yugabyte.yw.common.CustomTrustStoreListener; import com.yugabyte.yw.common.PlatformServiceException; import com.yugabyte.yw.common.certmgmt.CertificateHelper; -import com.yugabyte.yw.common.config.RuntimeConfigFactory; import com.yugabyte.yw.common.utils.FileUtils; import com.yugabyte.yw.models.CustomCaCertificateInfo; import com.yugabyte.yw.models.Customer; @@ -43,22 +42,12 @@ public class CustomCAStoreManager { // Reference to the listeners who want to get notified about updates in this custom trust-store. 
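The reworked getPrivateKey above hands PEM parsing to bcpkix. A small sketch of that parsing path, assuming the key material is unencrypted; the PrivateKeyInfo branch is an extra guard for PKCS#8 ("BEGIN PRIVATE KEY") blocks shown here only for illustration, while the patched helper itself casts the parsed object straight to PEMKeyPair.

import java.io.StringReader;
import java.security.PrivateKey;
import org.bouncycastle.asn1.pkcs.PrivateKeyInfo;
import org.bouncycastle.openssl.PEMKeyPair;
import org.bouncycastle.openssl.PEMParser;
import org.bouncycastle.openssl.jcajce.JcaPEMKeyConverter;

final class PemKeySketch {
  // PEMParser yields a PEMKeyPair for PKCS#1 ("BEGIN RSA PRIVATE KEY") blocks and a
  // PrivateKeyInfo for PKCS#8 ("BEGIN PRIVATE KEY") blocks; both convert the same way.
  static PrivateKey parse(String pem) throws Exception {
    try (PEMParser parser = new PEMParser(new StringReader(pem))) {
      Object obj = parser.readObject();
      JcaPEMKeyConverter converter = new JcaPEMKeyConverter();
      if (obj instanceof PEMKeyPair) {
        return converter.getPrivateKey(((PEMKeyPair) obj).getPrivateKeyInfo());
      }
      if (obj instanceof PrivateKeyInfo) {
        return converter.getPrivateKey((PrivateKeyInfo) obj);
      }
      throw new IllegalArgumentException("Unsupported PEM object: " + obj);
    }
  }
}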
private final List trustStoreListeners = new ArrayList<>(); - - private final PemTrustStoreManager pemTrustStoreManager; - private final Pkcs12TrustStoreManager pkcs12TrustStoreManager; - - private final RuntimeConfigFactory runtimeConfigFactory; + private final YBATrustStoreManager ybaTrustStoreManager; @Inject - public CustomCAStoreManager( - Pkcs12TrustStoreManager pkcs12TrustStoreManager, - PemTrustStoreManager pemTrustStoreManager, - RuntimeConfigFactory runtimeConfigFactory) { - this.runtimeConfigFactory = runtimeConfigFactory; - trustStoreManagers.add(pkcs12TrustStoreManager); - trustStoreManagers.add(pemTrustStoreManager); - this.pemTrustStoreManager = pemTrustStoreManager; - this.pkcs12TrustStoreManager = pkcs12TrustStoreManager; + public CustomCAStoreManager(YBATrustStoreManager ybaTrustStoreManager) { + trustStoreManagers.add(ybaTrustStoreManager); + this.ybaTrustStoreManager = ybaTrustStoreManager; } public UUID addCACert(UUID customerId, String name, String contents, String storagePath) { @@ -317,30 +306,6 @@ public CustomCaCertificateInfo get(UUID customerId, UUID certId) { return CustomCaCertificateInfo.getOrGrunt(customerId, certId); } - // -------------- PEM CA trust-store specific methods ------------ - - public List> getPemStoreConfig() { - String storagePath = AppConfigHelper.getStoragePath(); - String trustStoreHome = getTruststoreHome(storagePath); - List ybaTrustStoreConfig = new ArrayList<>(); - - if (Files.exists(Paths.get(trustStoreHome))) { - String pemStorePathStr = pemTrustStoreManager.getYbaTrustStorePath(trustStoreHome); - Path pemStorePath = Paths.get(pemStorePathStr); - if (Files.exists(pemStorePath)) { - if (!pemTrustStoreManager.isTrustStoreEmpty(pemStorePathStr, getTruststorePassword())) { - Map trustStoreMap = new HashMap<>(); - trustStoreMap.put("path", pemStorePathStr); - trustStoreMap.put("type", pemTrustStoreManager.getYbaTrustStoreType()); - ybaTrustStoreConfig.add(trustStoreMap); - } - } - } - - log.debug("YBA's custom trust store config is {}", ybaTrustStoreConfig); - return ybaTrustStoreConfig; - } - // -------------- PKCS12 CA trust-store specific methods ------------ public List> getYBAJavaKeyStoreConfig() { @@ -349,12 +314,14 @@ public List> getYBAJavaKeyStoreConfig() { List ybaJavaKeyStoreConfig = new ArrayList<>(); if (Files.exists(Paths.get(trustStoreHome))) { - String javaTrustStorePathStr = pkcs12TrustStoreManager.getYbaTrustStorePath(trustStoreHome); + TrustStoreManager.TrustStoreInfo trustStoreInfo = + ybaTrustStoreManager.getYbaTrustStoreInfo(trustStoreHome); + String javaTrustStorePathStr = trustStoreInfo.getPath(); Path javaTrustStorePath = Paths.get(javaTrustStorePathStr); if (Files.exists(javaTrustStorePath)) { Map trustStoreMap = new HashMap<>(); trustStoreMap.put("path", javaTrustStorePathStr); - trustStoreMap.put("type", pkcs12TrustStoreManager.getYbaTrustStoreType()); + trustStoreMap.put("type", trustStoreInfo.getType()); trustStoreMap.put("password", new String(getTruststorePassword())); ybaJavaKeyStoreConfig.add(trustStoreMap); } @@ -367,15 +334,21 @@ public List> getYBAJavaKeyStoreConfig() { private KeyStore getYbaKeyStore() { String storagePath = AppConfigHelper.getStoragePath(); String trustStoreHome = getTruststoreHome(storagePath); - String pkcs12StorePathStr = pkcs12TrustStoreManager.getYbaTrustStorePath(trustStoreHome); + TrustStoreManager.TrustStoreInfo trustStoreInfo = + ybaTrustStoreManager.getYbaTrustStoreInfo(trustStoreHome); + if (!Files.exists(Path.of(trustStoreInfo.getPath()))) { + // Truststore file is not 
created yet. + return null; + } KeyStore pkcs12Store = - pkcs12TrustStoreManager.maybeGetTrustStore(pkcs12StorePathStr, getTruststorePassword()); + ybaTrustStoreManager.getTrustStore( + trustStoreInfo.getPath(), getTruststorePassword(), trustStoreInfo.getType()); return pkcs12Store; } public KeyStore getYbaAndJavaKeyStore() { // Add YBA certs into this default keystore. - KeyStore ybaJavaKeyStore = pkcs12TrustStoreManager.getJavaDefaultKeystore(); + KeyStore ybaJavaKeyStore = ybaTrustStoreManager.getJavaDefaultKeystore(); try { KeyStore ybaKeyStore = getYbaKeyStore(); @@ -394,7 +367,7 @@ public KeyStore getYbaAndJavaKeyStore() { } public Map getJavaDefaultConfig() { - return pkcs12TrustStoreManager.getJavaDefaultConfig(); + return ybaTrustStoreManager.getJavaDefaultConfig(); } // ---------------- helper methods ------------------ diff --git a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/PemTrustStoreManager.java b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/PemTrustStoreManager.java deleted file mode 100644 index 272df11fc5bf..000000000000 --- a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/PemTrustStoreManager.java +++ /dev/null @@ -1,288 +0,0 @@ -// Copyright (c) YugaByte, Inc. - -package com.yugabyte.yw.common.certmgmt.castore; - -import static play.mvc.Http.Status.BAD_REQUEST; -import static play.mvc.Http.Status.INTERNAL_SERVER_ERROR; - -import com.google.common.collect.Iterators; -import com.google.inject.Singleton; -import com.yugabyte.yw.common.PlatformServiceException; -import com.yugabyte.yw.common.certmgmt.CertificateHelper; -import com.yugabyte.yw.models.CustomCaCertificateInfo; -import com.yugabyte.yw.models.FileData; -import java.io.File; -import java.io.FileInputStream; -import java.io.FileWriter; -import java.io.IOException; -import java.nio.file.Files; -import java.nio.file.Path; -import java.nio.file.Paths; -import java.security.KeyStoreException; -import java.security.cert.Certificate; -import java.security.cert.CertificateException; -import java.security.cert.CertificateFactory; -import java.security.cert.X509Certificate; -import java.util.ArrayList; -import java.util.Collections; -import java.util.Iterator; -import java.util.List; -import java.util.stream.Collectors; -import lombok.extern.slf4j.Slf4j; -import org.apache.commons.collections4.CollectionUtils; -import org.bouncycastle.openssl.jcajce.JcaPEMWriter; - -/** Relates to YBA's PEM trust store */ -@Slf4j -@Singleton -public class PemTrustStoreManager implements TrustStoreManager { - public static final String TRUSTSTORE_FILE_NAME = "yb-ca-bundle.pem"; - - /** Creates a trust-store with only custom CA certificates in PEM format. */ - public boolean addCertificate( - String certPath, - String certAlias, - String trustStoreHome, - char[] trustStorePassword, - boolean suppressErrors) - throws KeyStoreException, CertificateException, IOException, PlatformServiceException { - log.debug("Trying to update YBA's PEM truststore ..."); - - List newCerts = null; - newCerts = getX509Certificate(certPath); - if (CollectionUtils.isEmpty(newCerts)) { - throw new PlatformServiceException( - BAD_REQUEST, String.format("No new CA certificate exists at %s", certPath)); - } - - // Get the existing trust bundle. 
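getYbaAndJavaKeyStore above folds YBA's own trust store into a copy of the JRE default keystore. The merge loop itself lives in unchanged context lines, so the following is only a sketch of one way such a copy can look; the alias prefix and class name are illustrative, not taken from the patch.

import java.security.KeyStore;
import java.util.Enumeration;

final class TrustStoreMergeSketch {
  // Copy every trusted-certificate entry of 'extra' into 'base'; the alias prefix
  // keeps the copied entries from colliding with entries already in the JRE store.
  static KeyStore merge(KeyStore base, KeyStore extra) throws Exception {
    Enumeration<String> aliases = extra.aliases();
    while (aliases.hasMoreElements()) {
      String alias = aliases.nextElement();
      if (extra.isCertificateEntry(alias)) {
        base.setCertificateEntry("yba-" + alias, extra.getCertificate(alias));
      }
    }
    return base;
  }
}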
- String trustStorePath = getTrustStorePath(trustStoreHome, TRUSTSTORE_FILE_NAME); - boolean doesTrustStoreExist = new File(trustStorePath).exists(); - if (!doesTrustStoreExist) { - File sysTrustStoreFile = new File(trustStorePath); - sysTrustStoreFile.createNewFile(); - log.debug("Created an empty YBA PEM trust-store"); - } else { - List trustCerts = getCertsInTrustStore(trustStorePath, trustStorePassword); - List addedCertChain = new ArrayList(newCerts); - if (trustCerts != null) { - // Check if such an alias already exists. - for (int i = 0; i < addedCertChain.size(); i++) { - Certificate newCert = addedCertChain.get(i); - boolean exists = trustCerts.contains(newCert); - if (!exists) { - break; - } - // In case of certificate chain, we can have the same root/intermediate - // cert present in the chain. Throw error in case all of these exists. - if (exists && !suppressErrors && addedCertChain.size() - 1 == i) { - String msg = "CA certificate with same content already exists"; - log.error(msg); - throw new PlatformServiceException(BAD_REQUEST, msg); - } else if (exists && addedCertChain.size() != i) { - newCerts.remove(i); - } - } - } - } - - // Update the trust store in file system. - List x509Certificates = - newCerts.stream() - .filter(certificate -> certificate instanceof X509Certificate) - .map(certificate -> (X509Certificate) certificate) - .collect(Collectors.toList()); - CertificateHelper.writeCertBundleToCertPath(x509Certificates, trustStorePath, false, true); - log.debug("Truststore '{}' now has the cert {}", trustStorePath, certAlias); - - // Backup up YBA's PEM trust store in DB. - FileData.addToBackup(Collections.singletonList(trustStorePath)); - - log.info("Custom CA certificate {} added in PEM truststore", certAlias); - return !doesTrustStoreExist; - } - - public void replaceCertificate( - String oldCertPath, - String newCertPath, - String certAlias, - String trustStoreHome, - char[] trustStorePassword, - boolean suppressErrors) - throws CertificateException, IOException { - - // Get the existing trust bundle. - String trustStorePath = getTrustStorePath(trustStoreHome, TRUSTSTORE_FILE_NAME); - log.debug("Trying to replace cert {} in the PEM truststore {}..", certAlias, trustStorePath); - List trustCerts = getCertificates(trustStorePath); - - // Check if such a cert already exists. - List oldCerts = getX509Certificate(oldCertPath); - boolean exists = false; - for (Certificate oldCert : oldCerts) { - if (isCertificateUsedInOtherChain(oldCert)) { - log.debug("Certificate {} is part of a chain, skipping replacement.", certAlias); - continue; - } - exists = trustCerts.remove(oldCert); - if (!exists && !suppressErrors) { - String msg = String.format("Certificate '%s' doesn't exist to update", certAlias); - log.error(msg); - throw new PlatformServiceException(BAD_REQUEST, msg); - } - } - - // Update the trust store. - if (exists) { - List newCerts = getX509Certificate(newCertPath); - trustCerts.addAll(newCerts); - saveTo(trustStorePath, trustCerts); - log.info("Truststore '{}' updated with new cert at alias '{}'", trustStorePath, certAlias); - } - - // Backup up YBA's PEM trust store in DB. 
- FileData.addToBackup(Collections.singletonList(trustStorePath)); - } - - public static void saveTo(String trustStorePath, List trustCerts) - throws IOException { - try (FileWriter fWriter = new FileWriter(trustStorePath); - JcaPEMWriter pemWriter = new JcaPEMWriter(fWriter)) { - int countCerts = 0; - for (Certificate certificate : trustCerts) { - pemWriter.writeObject(certificate); - pemWriter.flush(); - countCerts += 1; - } - log.debug("Saved {} certificates to {}", countCerts, trustStorePath); - } - } - - public void remove( - String certPath, - String certAlias, - String trustStoreHome, - char[] trustStorePassword, - boolean suppressErrors) - throws CertificateException, IOException, KeyStoreException { - - log.info("Removing cert {} from PEM truststore ...", certAlias); - String trustStorePath = getTrustStorePath(trustStoreHome, TRUSTSTORE_FILE_NAME); - List trustCerts = getCertificates(trustStorePath); - // Check if such an alias already exists. - List certToRemove = getX509Certificate(certPath); - int certToRemoveCount = certToRemove.size(); - // Iterate through each certificate in certToRemove and check if it's used in any chain - boolean exists = false; - Iterator certIterator = certToRemove.iterator(); - while (certIterator.hasNext()) { - Certificate cert = certIterator.next(); - if (isCertificateUsedInOtherChain(cert)) { - // Certificate is part of a chain, do not remove it - log.debug("Certificate {} is part of a chain, skipping removal.", certAlias); - certToRemoveCount -= 1; - certIterator.remove(); - } else { - // Certificate is not part of a chain - exists = true; - } - } - - if (certToRemoveCount == 0) { - log.debug( - "Skipping removal of cert from PEM truststore, as the cert is part of other trust chain"); - return; - } - - if (!exists && !suppressErrors) { - String msg = String.format("Certificate '%s' does not exist to delete", certAlias); - log.error(msg); - throw new PlatformServiceException(BAD_REQUEST, msg); - } - - // Delete from the trust-store. - if (!certToRemove.isEmpty()) { - Iterators.removeAll(trustCerts.iterator(), certToRemove); - saveTo(trustStorePath, trustCerts); - log.debug("Certificate {} is now deleted from trust-store {}", certAlias, trustStorePath); - } - log.info("custom CA certs deleted from YBA's PEM truststore"); - } - - public List getCertsInTrustStore(String trustStorePath, char[] trustStorePassword) { - if (trustStorePath == null) { - throw new PlatformServiceException( - INTERNAL_SERVER_ERROR, "Cannot get CA certificates from empty path"); - } - - if (new File(trustStorePath).exists()) { - log.debug("YBA's truststore already exists, returning certs from it"); - } - List trustCerts = getCertificates(trustStorePath); - if (trustCerts.isEmpty()) log.info("Initiating empty YBA trust-store"); - - return trustCerts; - } - - private List getCertificates(String trustStorePath) { - List certs = new ArrayList<>(); - try (FileInputStream certStream = new FileInputStream(trustStorePath)) { - CertificateFactory certificateFactory = CertificateFactory.getInstance("X.509"); - Certificate certificate = null; - while ((certificate = certificateFactory.generateCertificate(certStream)) != null) { - certs.add(certificate); - } - } catch (IOException | CertificateException e) { - String msg = String.format("Extracted %d certs from %s.", certs.size(), trustStorePath); - if (certs.isEmpty()) { - log.warn(msg); // We expect certs to exist if a trust-store path exists. 
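The PEM manager being deleted here read its bundle by calling generateCertificate in a loop and treating the end-of-stream error as EOF. For comparison only, a sketch of the same read done in a single call; this is not part of the patch, and the class name is illustrative.

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import java.util.ArrayList;
import java.util.List;

final class PemBundleReadSketch {
  // Parse every certificate in a concatenated PEM bundle in one call;
  // generateCertificates returns a possibly empty collection, so no manual EOF handling.
  static List<Certificate> read(String pemBundle) throws Exception {
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    return new ArrayList<>(
        cf.generateCertificates(
            new ByteArrayInputStream(pemBundle.getBytes(StandardCharsets.UTF_8))));
  }
}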
- } else if (e.getMessage() != null && e.getMessage().trim().contains("Empty input")) { - log.info(msg); // Reached EOF, should be ignored. - } else { - log.error(msg, e); - throw new PlatformServiceException(INTERNAL_SERVER_ERROR, e.getLocalizedMessage()); - } - } - return certs; - } - - public String getYbaTrustStorePath(String trustStoreHome) { - // Get the existing trust bundle. - return getTrustStorePath(trustStoreHome, TRUSTSTORE_FILE_NAME); - } - - public String getYbaTrustStoreType() { - return "PEM"; - } - - public boolean isTrustStoreEmpty(String storePathStr, char[] trustStorePassword) { - Path storePath = Paths.get(storePathStr); - byte[] caBundleContent = new byte[0]; - try { - caBundleContent = Files.readAllBytes(storePath); - if (caBundleContent.length == 0) return true; - else return false; - } catch (IOException e) { - String msg = String.format("Failed to read custom trust-store %s", storePathStr); - log.error(msg); - throw new PlatformServiceException(INTERNAL_SERVER_ERROR, msg); - } - } - - private boolean isCertificateUsedInOtherChain(Certificate cert) - throws CertificateException, IOException { - // Retrieve all the certificates from `custom_ca_certificate_info` schema. - // In case the passed cert is substring in more than 1 cert than it is used - // as part of other cert chain as well. - // We will skip removing it from the trust-store. - int certChainCount = 0; - List customCACertificates = CustomCaCertificateInfo.getAll(false); - for (CustomCaCertificateInfo customCA : customCACertificates) { - List certChain = getX509Certificate(customCA.getContents()); - if (certChain.contains(cert)) { - certChainCount++; - } - } - return certChainCount > 1; - } -} diff --git a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/TrustStoreManager.java b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/TrustStoreManager.java index 88287454a8f1..49f8e4747662 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/TrustStoreManager.java +++ b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/TrustStoreManager.java @@ -11,6 +11,7 @@ import java.security.cert.CertificateFactory; import java.util.ArrayList; import java.util.List; +import lombok.Value; public interface TrustStoreManager { default String getTrustStorePath(String trustStoreHome, String trustStoreFileName) { @@ -56,9 +57,11 @@ void replaceCertificate( boolean suppressErrors) throws CertificateException, IOException, KeyStoreException, NoSuchAlgorithmException; - String getYbaTrustStorePath(String trustStoreHome); + TrustStoreInfo getYbaTrustStoreInfo(String trustStoreHome); - String getYbaTrustStoreType(); - - boolean isTrustStoreEmpty(String caStorePathStr, char[] trustStorePassword); + @Value + class TrustStoreInfo { + String path; + String type; + } } diff --git a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/Pkcs12TrustStoreManager.java b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/YBATrustStoreManager.java similarity index 73% rename from managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/Pkcs12TrustStoreManager.java rename to managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/YBATrustStoreManager.java index 1c4b5d0dc980..c91449821786 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/Pkcs12TrustStoreManager.java +++ b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/castore/YBATrustStoreManager.java @@ -17,6 +17,7 @@ import java.io.FileOutputStream; import java.io.IOException; 
import java.nio.file.Files; +import java.nio.file.Path; import java.nio.file.Paths; import java.security.KeyStore; import java.security.KeyStoreException; @@ -31,18 +32,23 @@ @Singleton @Slf4j -public class Pkcs12TrustStoreManager implements TrustStoreManager { +public class YBATrustStoreManager implements TrustStoreManager { - public static final String TRUSTSTORE_FILE_NAME = "ybPkcs12CaCerts"; + public static final String JVM_DEFAULT_KEYSTORE_TYPE = "JKS"; + + public static final String BCFKS_TRUSTSTORE_FILE_NAME = "ybBcfksCaCerts"; + + public static final String PKCS12_TRUSTSTORE_FILE_NAME = "ybPkcs12CaCerts"; private static final String YB_JAVA_HOME_PATHS = "yb.wellKnownCA.trustStore.javaHomePaths"; private final RuntimeConfGetter runtimeConfGetter; - @Inject Config config; + private final Config config; @Inject - public Pkcs12TrustStoreManager(RuntimeConfGetter runtimeConfGetter) { + public YBATrustStoreManager(RuntimeConfGetter runtimeConfGetter, Config config) { this.runtimeConfGetter = runtimeConfGetter; + this.config = config; } /** Creates a trust-store with only custom CA certificates in pkcs12 format. */ @@ -54,20 +60,20 @@ public boolean addCertificate( boolean suppressErrors) throws KeyStoreException, CertificateException, IOException, PlatformServiceException { - log.debug("Trying to update YBA's pkcs12 truststore ..."); + log.debug("Trying to update YBA's truststore ..."); // Get the existing trust bundle. - String trustStorePath = getTrustStorePath(trustStoreHome, TRUSTSTORE_FILE_NAME); - log.debug("Updating truststore {}", trustStorePath); + TrustStoreInfo trustStoreInfo = getYbaTrustStoreInfo(trustStoreHome); + log.debug("Updating truststore {}", trustStoreInfo); - boolean doesTrustStoreExist = new File(trustStorePath).exists(); + boolean doesTrustStoreExist = new File(trustStoreInfo.getPath()).exists(); KeyStore trustStore = null; if (!doesTrustStoreExist) { - File trustStoreFile = new File(trustStorePath); + File trustStoreFile = new File(trustStoreInfo.getPath()); trustStoreFile.createNewFile(); - log.debug("Created an empty YBA pkcs12 trust-store"); + log.debug("Created an empty YBA trust-store"); } - trustStore = getTrustStore(trustStorePath, trustStorePassword, !doesTrustStoreExist); + trustStore = getTrustStore(trustStoreInfo, trustStorePassword, !doesTrustStoreExist); if (trustStore == null) { String errMsg = "Truststore cannot be null"; log.error(errMsg); @@ -87,11 +93,14 @@ public boolean addCertificate( trustStore.setCertificateEntry(alias, certificates.get(i)); } // Update the trust store in file-system. - saveTrustStore(trustStorePath, trustStore, trustStorePassword); - log.debug("Truststore '{}' now has a certificate with alias '{}'", trustStorePath, certAlias); + saveTrustStore(trustStoreInfo, trustStore, trustStorePassword); + log.debug( + "Truststore '{}' now has a certificate with alias '{}'", + trustStoreInfo.getPath(), + certAlias); // Backup up YBA's pkcs12 trust store in DB. - FileData.addToBackup(Collections.singletonList(trustStorePath)); + FileData.addToBackup(Collections.singletonList(trustStoreInfo.getPath())); log.info("Custom CA certificate added in YBA's pkcs12 trust-store"); return !doesTrustStoreExist; @@ -107,9 +116,9 @@ public void replaceCertificate( throws IOException, KeyStoreException, CertificateException { // Get the existing trust bundle. 
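YBATrustStoreManager now threads the store type (PKCS12 for existing installs, BCFKS for fresh FIPS installs) alongside the path. A minimal sketch of loading a keystore by explicit type; it assumes the BCFIPS provider is already registered, since that is what supplies the BCFKS keystore type, and the class name is illustrative.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;

final class TypedKeyStoreLoadSketch {
  // Load a trust store whose type ("PKCS12" or "BCFKS") is recorded next to its path.
  static KeyStore load(String path, String type, char[] password) throws Exception {
    KeyStore store = KeyStore.getInstance(type); // "BCFKS" resolves via the BCFIPS provider
    try (InputStream in = new FileInputStream(path)) {
      store.load(in, password);
    }
    return store;
  }
}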
- String trustStorePath = getTrustStorePath(trustStoreHome, TRUSTSTORE_FILE_NAME); - log.debug("Trying to replace cert {} in YBA's pkcs12 truststore {}", certAlias, trustStorePath); - KeyStore trustStore = getTrustStore(trustStorePath, trustStorePassword, false); + TrustStoreInfo trustStoreInfo = getYbaTrustStoreInfo(trustStoreHome); + log.debug("Trying to replace cert {} in YBA's truststore {}", certAlias, trustStoreInfo); + KeyStore trustStore = getTrustStore(trustStoreInfo, trustStorePassword, false); if (trustStore == null) { String errMsg = "Truststore cannot be null"; log.error(errMsg); @@ -141,34 +150,36 @@ public void replaceCertificate( String alias = certAlias + "-" + i; trustStore.setCertificateEntry(alias, newCertificates.get(i)); } - saveTrustStore(trustStorePath, trustStore, trustStorePassword); + saveTrustStore(trustStoreInfo, trustStore, trustStorePassword); // Backup up YBA's pkcs12 trust store in DB. - FileData.addToBackup(Collections.singletonList(trustStorePath)); + FileData.addToBackup(Collections.singletonList(trustStoreInfo.getPath())); - log.info("Truststore '{}' updated with new cert at alias '{}'", trustStorePath, certAlias); + log.info( + "Truststore '{}' updated with new cert at alias '{}'", trustStoreInfo.getPath(), certAlias); } private void saveTrustStore( - String trustStorePath, KeyStore trustStore, char[] trustStorePassword) { + TrustStoreInfo trustStoreInfo, KeyStore trustStore, char[] trustStorePassword) { if (trustStore != null) { - try (FileOutputStream storeOutputStream = new FileOutputStream(trustStorePath)) { + try (FileOutputStream storeOutputStream = new FileOutputStream(trustStoreInfo.getPath())) { trustStore.store(storeOutputStream, trustStorePassword); - log.debug("Trust store written to {}", trustStorePath); + log.debug("Trust store written to {}", trustStoreInfo.getPath()); } catch (IOException | KeyStoreException | NoSuchAlgorithmException | CertificateException e) { - String msg = String.format("Failed to save certificate to %s", trustStorePath); + String msg = String.format("Failed to save certificate to %s", trustStoreInfo.getPath()); log.error(msg, e); throw new PlatformServiceException(INTERNAL_SERVER_ERROR, msg); } } } - protected KeyStore getTrustStore(String trustStorePath, char[] trustStorePassword, boolean init) { - try (FileInputStream storeInputStream = new FileInputStream(trustStorePath)) { - KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType()); + protected KeyStore getTrustStore( + TrustStoreInfo trustStoreInfo, char[] trustStorePassword, boolean init) { + try (FileInputStream storeInputStream = new FileInputStream(trustStoreInfo.getPath())) { + KeyStore trustStore = KeyStore.getInstance(trustStoreInfo.getType()); if (init) { trustStore.load(null, trustStorePassword); } else { @@ -183,13 +194,13 @@ protected KeyStore getTrustStore(String trustStorePath, char[] trustStorePasswor } } - protected KeyStore maybeGetTrustStore(String trustStorePath, char[] trustStorePassword) { + protected KeyStore getTrustStore(String trustStorePath, char[] trustStorePassword, String type) { KeyStore trustStore = null; try (FileInputStream storeInputStream = new FileInputStream(trustStorePath)) { - trustStore = KeyStore.getInstance(KeyStore.getDefaultType()); + trustStore = KeyStore.getInstance(type); trustStore.load(storeInputStream, trustStorePassword); } catch (Exception e) { - log.warn(String.format("Couldn't get pkcs12 trust store: %s", e.getLocalizedMessage())); + throw new RuntimeException("Couldn't load trust store " + 
trustStorePath, e); } return trustStore; } @@ -203,8 +214,8 @@ public void remove( throws KeyStoreException, IOException, CertificateException { log.info("Removing cert {} from YBA's pkcs12 truststore ...", certAlias); - String trustStorePath = getTrustStorePath(trustStoreHome, TRUSTSTORE_FILE_NAME); - KeyStore trustStore = getTrustStore(trustStorePath, trustStorePassword, false); + TrustStoreInfo trustStoreInfo = getYbaTrustStoreInfo(trustStoreHome); + KeyStore trustStore = getTrustStore(trustStoreInfo, trustStorePassword, false); List certificates = getX509Certificate(certPath); for (int i = 0; i < certificates.size(); i++) { String alias = certAlias + "-" + i; @@ -221,42 +232,15 @@ public void remove( trustStore.deleteEntry(alias); } } - saveTrustStore(trustStorePath, trustStore, trustStorePassword); - log.debug("Truststore '{}' now does not have a CA certificate '{}'", trustStorePath, certAlias); + saveTrustStore(trustStoreInfo, trustStore, trustStorePassword); + log.debug( + "Truststore '{}' now does not have a CA certificate '{}'", + trustStoreInfo.getPath(), + certAlias); log.info("Custom CA certs deleted in YBA's pkcs12 truststore"); } - public String getYbaTrustStorePath(String trustStoreHome) { - // Get the existing trust bundle. - return getTrustStorePath(trustStoreHome, TRUSTSTORE_FILE_NAME); - } - - public String getYbaTrustStoreType() { - String storeType = KeyStore.getDefaultType(); - log.debug("The trust-store type is {}", storeType); // pkcs12 - return storeType; - } - - public boolean isTrustStoreEmpty(String caStorePathStr, char[] trustStorePassword) { - KeyStore trustStore = maybeGetTrustStore(caStorePathStr, trustStorePassword); - if (trustStore == null) { - return true; - } else { - try { - log.debug("There are {} entries in pkcs12 trust-store", trustStore.size()); - if (trustStore.size() == 0) { - return true; - } - } catch (KeyStoreException e) { - String msg = "Failed to get size of pkcs12 trust-store"; - log.error(msg, e.getLocalizedMessage()); - throw new PlatformServiceException(INTERNAL_SERVER_ERROR, e.getLocalizedMessage()); - } - } - return false; - } - // ------------- methods for Java defaults ---------------- private Map maybeGetJavaxNetSslTrustStore() { String javaxNetSslTrustStore = @@ -296,7 +280,7 @@ protected Map getJavaDefaultConfig() { for (String javaPath : javaHomePaths) { if (Files.exists(Paths.get(javaPath))) { javaSSLConfigMap.put("path", javaPath); - javaSSLConfigMap.put("type", KeyStore.getDefaultType()); // pkcs12 + javaSSLConfigMap.put("type", JVM_DEFAULT_KEYSTORE_TYPE); // pkcs12 } } log.info("Java SSL config is:{}", javaSSLConfigMap); @@ -311,8 +295,10 @@ protected KeyStore getJavaDefaultKeystore() { // NOTE: If adding any custom path, we must add the ordered default path as well, if they exist. if (javaxNetSslMap != null) { javaStore = - maybeGetTrustStore( - javaxNetSslMap.get("path"), javaxNetSslMap.get("password").toCharArray()); + getTrustStore( + javaxNetSslMap.get("path"), + javaxNetSslMap.get("password").toCharArray(), + JVM_DEFAULT_KEYSTORE_TYPE); return javaStore; } @@ -320,9 +306,22 @@ protected KeyStore getJavaDefaultKeystore() { log.debug("Java home cert paths are {}", javaHomePaths); for (String javaPath : javaHomePaths) { if (Files.exists(Paths.get(javaPath))) { - javaStore = maybeGetTrustStore(javaPath, null); + javaStore = getTrustStore(javaPath, null, "JKS"); + break; } } return javaStore; } + + public TrustStoreInfo getYbaTrustStoreInfo(String trustStoreHome) { + // Get the existing trust bundle. 
+ String trustStorePath = getTrustStorePath(trustStoreHome, PKCS12_TRUSTSTORE_FILE_NAME); + if (Files.exists(Path.of(trustStorePath))) { + // PKSC12 bundle for backward compatibility + return new TrustStoreInfo(trustStorePath, "PKCS12"); + } + // BCFKS bundle for fresh installed YBAs to simplify FIPS migration + return new TrustStoreInfo( + getTrustStorePath(trustStoreHome, BCFKS_TRUSTSTORE_FILE_NAME), "BCFKS"); + } } diff --git a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/providers/VaultPKI.java b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/providers/VaultPKI.java index 008f95668909..2c0b1a8b3688 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/certmgmt/providers/VaultPKI.java +++ b/managed/src/main/java/com/yugabyte/yw/common/certmgmt/providers/VaultPKI.java @@ -195,7 +195,7 @@ public CertificateDetails createCertificate( // fetch key String newCertKeyStr = result.get(ISSUE_FIELD_PRV_KEY); curKeyStr = newCertKeyStr; - PrivateKey pKeyObj = CertificateHelper.convertStringToPrivateKey(newCertKeyStr); + PrivateKey pKeyObj = CertificateHelper.getPrivateKey(newCertKeyStr); // fetch issue ca cert String issuingCAStr = result.get(ISSUE_FIELD_CA); curCaCertificateStr = issuingCAStr; @@ -204,7 +204,7 @@ public CertificateDetails createCertificate( LOG.debug("Issue CA is:: {}", CertificateHelper.getCertificateProperties(issueCAcert)); LOG.debug("Certificate is:: {}", CertificateHelper.getCertificateProperties(certObj)); - certObj.verify(issueCAcert.getPublicKey(), "BC"); + certObj.verify(issueCAcert.getPublicKey()); // for client certificate: later it is read using CertificateHelper.getClientCertFile return CertificateHelper.dumpNewCertsToFile( diff --git a/managed/src/main/java/com/yugabyte/yw/common/encryption/bc/BcOpenBsdHasher.java b/managed/src/main/java/com/yugabyte/yw/common/encryption/bc/BcOpenBsdHasher.java index 1cf25b6689e8..762dea997b04 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/encryption/bc/BcOpenBsdHasher.java +++ b/managed/src/main/java/com/yugabyte/yw/common/encryption/bc/BcOpenBsdHasher.java @@ -2,7 +2,7 @@ import com.yugabyte.yw.common.encryption.HashBuilder; import java.security.SecureRandom; -import org.bouncycastle.crypto.generators.OpenBSDBCrypt; +import org.springframework.security.crypto.bcrypt.BCrypt; /** * Bouncy Castle's OpenBSD implementation based Hash generator @@ -13,19 +13,17 @@ public class BcOpenBsdHasher implements HashBuilder { private static final SecureRandom rnd = new SecureRandom(); private static final int COST = 12; // arrived after discussions - private static final byte SALT_LENGTH_BYTES = 16; @Override public String hash(String password) { if (password == null) throw new IllegalArgumentException("Password cannot be null"); - byte[] salt = new byte[SALT_LENGTH_BYTES]; - rnd.nextBytes(salt); - return OpenBSDBCrypt.generate(password.toCharArray(), salt, COST); + String salt = BCrypt.gensalt(COST, rnd); + return BCrypt.hashpw(password, salt); } @Override public boolean isValid(String password, String hash) { if (password == null) throw new IllegalArgumentException("Password cannot be null"); - return OpenBSDBCrypt.checkPassword(hash, password.toCharArray()); + return BCrypt.checkpw(password, hash); } } diff --git a/managed/src/main/java/com/yugabyte/yw/common/encryption/bc/Pbkdf2HmacSha256Hasher.java b/managed/src/main/java/com/yugabyte/yw/common/encryption/bc/Pbkdf2HmacSha256Hasher.java new file mode 100644 index 000000000000..a17cad764a3f --- /dev/null +++ 
b/managed/src/main/java/com/yugabyte/yw/common/encryption/bc/Pbkdf2HmacSha256Hasher.java @@ -0,0 +1,61 @@ +package com.yugabyte.yw.common.encryption.bc; + +import com.yugabyte.yw.common.encryption.HashBuilder; +import java.security.SecureRandom; +import java.util.Arrays; +import org.apache.commons.lang3.StringUtils; +import org.bouncycastle.crypto.PasswordBasedDeriver; +import org.bouncycastle.crypto.fips.*; +import org.bouncycastle.util.encoders.Base64; + +public class Pbkdf2HmacSha256Hasher implements HashBuilder { + + private static final int SALT_SIZE_BYTES = 32; + private static final int KEY_SIZE_BYTES = 32; + + private static final String SALT_KEY_DELIMITER = "."; + private static final String SALT_KEY_DELIMITER_REGEX = "\\" + SALT_KEY_DELIMITER; + + private static final SecureRandom rnd = new SecureRandom(); + + @Override + public String hash(String password) { + if (password == null) throw new IllegalArgumentException("Password cannot be null"); + byte[] salt = new byte[SALT_SIZE_BYTES]; + rnd.nextBytes(salt); + FipsPBKD.Parameters parameters = + FipsPBKD.PBKDF2.using(FipsSHS.Algorithm.SHA256_HMAC, password.getBytes()).withSalt(salt); + byte[] key = + new FipsPBKD.DeriverFactory() + .createDeriver(parameters) + .deriveKey(PasswordBasedDeriver.KeyType.CIPHER, KEY_SIZE_BYTES); + return "4." + Base64.toBase64String(salt) + SALT_KEY_DELIMITER + Base64.toBase64String(key); + } + + @Override + public boolean isValid(String password, String hash) { + if (password == null) throw new IllegalArgumentException("Password cannot be null"); + String[] versionSaltAndKey = hash.split(SALT_KEY_DELIMITER_REGEX); + if (versionSaltAndKey.length != 3) { + // Don't want to write hash value itself into logs. + throw new IllegalArgumentException("Invalid hash value"); + } + String version = versionSaltAndKey[0]; + if (!StringUtils.equals(version, "4")) { + throw new IllegalArgumentException("Only hash version 4 is supported"); + } + String saltBase64 = versionSaltAndKey[1]; + byte[] salt = Base64.decode(saltBase64); + String hashKeyBase64 = versionSaltAndKey[2]; + byte[] hashKey = Base64.decode(hashKeyBase64); + + FipsPBKD.Parameters parameters = + FipsPBKD.PBKDF2.using(FipsSHS.Algorithm.SHA256_HMAC, password.getBytes()).withSalt(salt); + byte[] key = + new FipsPBKD.DeriverFactory() + .createDeriver(parameters) + .deriveKey(PasswordBasedDeriver.KeyType.CIPHER, hashKey.length); + + return Arrays.equals(hashKey, key); + } +} diff --git a/managed/src/main/java/com/yugabyte/yw/forms/PerfAdvisorSettingsFormData.java b/managed/src/main/java/com/yugabyte/yw/forms/PerfAdvisorSettingsFormData.java index e694d8295409..ab195ec16e41 100644 --- a/managed/src/main/java/com/yugabyte/yw/forms/PerfAdvisorSettingsFormData.java +++ b/managed/src/main/java/com/yugabyte/yw/forms/PerfAdvisorSettingsFormData.java @@ -2,7 +2,7 @@ package com.yugabyte.yw.forms; -import com.fasterxml.jackson.databind.PropertyNamingStrategy; +import com.fasterxml.jackson.databind.PropertyNamingStrategies; import com.fasterxml.jackson.databind.annotation.JsonNaming; import io.swagger.annotations.ApiModel; import io.swagger.annotations.ApiModelProperty; @@ -14,7 +14,7 @@ @ApiModel @Data @Accessors(chain = true) -@JsonNaming(PropertyNamingStrategy.SnakeCaseStrategy.class) +@JsonNaming(PropertyNamingStrategies.SnakeCaseStrategy.class) public class PerfAdvisorSettingsFormData { @ApiModelProperty(value = "Enable/disable perf advisor runs for the universe") diff --git a/managed/src/main/java/com/yugabyte/yw/forms/RunQueryFormData.java 
b/managed/src/main/java/com/yugabyte/yw/forms/RunQueryFormData.java index 09d1032ff728..bb38974e11bd 100644 --- a/managed/src/main/java/com/yugabyte/yw/forms/RunQueryFormData.java +++ b/managed/src/main/java/com/yugabyte/yw/forms/RunQueryFormData.java @@ -2,14 +2,14 @@ package com.yugabyte.yw.forms; -import com.fasterxml.jackson.databind.PropertyNamingStrategy; +import com.fasterxml.jackson.databind.PropertyNamingStrategies; import com.fasterxml.jackson.databind.annotation.JsonNaming; import javax.validation.constraints.NotNull; import lombok.Data; import org.yb.CommonTypes.TableType; @Data -@JsonNaming(PropertyNamingStrategy.SnakeCaseStrategy.class) +@JsonNaming(PropertyNamingStrategies.SnakeCaseStrategy.class) public class RunQueryFormData { @NotNull private String query; diff --git a/managed/src/main/java/com/yugabyte/yw/models/HealthCheck.java b/managed/src/main/java/com/yugabyte/yw/models/HealthCheck.java index 32c2cc3915da..1472ae853756 100644 --- a/managed/src/main/java/com/yugabyte/yw/models/HealthCheck.java +++ b/managed/src/main/java/com/yugabyte/yw/models/HealthCheck.java @@ -3,7 +3,7 @@ package com.yugabyte.yw.models; import com.fasterxml.jackson.annotation.JsonFormat; -import com.fasterxml.jackson.databind.PropertyNamingStrategy; +import com.fasterxml.jackson.databind.PropertyNamingStrategies; import com.fasterxml.jackson.databind.annotation.JsonNaming; import io.ebean.Finder; import io.ebean.Model; @@ -33,11 +33,11 @@ public class HealthCheck extends Model { @Data @Accessors(chain = true) - @JsonNaming(PropertyNamingStrategy.SnakeCaseStrategy.class) + @JsonNaming(PropertyNamingStrategies.SnakeCaseStrategy.class) public static class Details { @Data @Accessors(chain = true) - @JsonNaming(PropertyNamingStrategy.SnakeCaseStrategy.class) + @JsonNaming(PropertyNamingStrategies.SnakeCaseStrategy.class) public static class NodeData { private String node; private String process; diff --git a/managed/src/main/java/com/yugabyte/yw/models/Users.java b/managed/src/main/java/com/yugabyte/yw/models/Users.java index 8de596e267b7..4604ea0b552f 100644 --- a/managed/src/main/java/com/yugabyte/yw/models/Users.java +++ b/managed/src/main/java/com/yugabyte/yw/models/Users.java @@ -15,6 +15,7 @@ import com.yugabyte.yw.common.config.RuntimeConfGetter; import com.yugabyte.yw.common.encryption.HashBuilder; import com.yugabyte.yw.common.encryption.bc.BcOpenBsdHasher; +import com.yugabyte.yw.common.encryption.bc.Pbkdf2HmacSha256Hasher; import com.yugabyte.yw.common.inject.StaticInjectorHolder; import io.ebean.DuplicateKeyException; import io.ebean.Finder; @@ -42,6 +43,7 @@ import lombok.RequiredArgsConstructor; import lombok.Setter; import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; import org.joda.time.DateTime; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -57,7 +59,9 @@ public class Users extends Model { public static final Logger LOG = LoggerFactory.getLogger(Users.class); - private static final HashBuilder hasher = new BcOpenBsdHasher(); + private static final HashBuilder bcryptHasher = new BcOpenBsdHasher(); + + private static final HashBuilder pbkdf2Hasher = new Pbkdf2HmacSha256Hasher(); private static final KeyLock usersLock = new KeyLock<>(); @@ -146,7 +150,7 @@ public enum UserType { private String passwordHash; public void setPassword(String password) { - this.setPasswordHash(Users.hasher.hash(password)); + this.setPasswordHash(Users.pbkdf2Hasher.hash(password)); } @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd'T'HH:mm:ss'Z'") @@ 
-359,12 +363,14 @@ public boolean delete() { */ public static Users authWithPassword(String email, String password) { Users users = Users.find.query().where().eq("email", email).findOne(); - - if (users != null && Users.hasher.isValid(password, users.getPasswordHash())) { - return users; - } else { - return null; + if (users != null && StringUtils.isNotEmpty(users.getPasswordHash())) { + HashBuilder hashBuilder = + users.getPasswordHash().startsWith("4.") ? Users.pbkdf2Hasher : Users.bcryptHasher; + if (hashBuilder.isValid(password, users.getPasswordHash())) { + return users; + } } + return null; } /** @@ -416,7 +422,7 @@ public String upsertApiToken() { } public String upsertApiToken(Long version) { - String apiTokenFormatVersion = "3"; + String apiTokenFormatVersion = "4"; UUID uuidToLock = uuid != null ? uuid : NULL_UUID; usersLock.acquireLock(uuidToLock); try { @@ -427,7 +433,7 @@ public String upsertApiToken(Long version) { throw new PlatformServiceException(BAD_REQUEST, "API token version has changed"); } String apiTokenUnhashed = UUID.randomUUID().toString(); - apiToken = Users.hasher.hash(apiTokenUnhashed); + apiToken = Users.pbkdf2Hasher.hash(apiTokenUnhashed); apiTokenVersion = apiTokenVersion == null ? 1L : apiTokenVersion + 1; save(); @@ -489,6 +495,7 @@ public static Users authWithApiToken(String apiToken) { // 1. apiToken (older format) // 2. apiTokenFormatVersion$userUUID$apiToken (version 2 $ seperated format) // 3. apiTokenFormatVersion.userUUID.apiToken (version 3 . seperated format) + // 4. apiTokenFormatVersion.userUUID.apiToken (version 4 . seperated format - using PBKDF2) // The 3rd format is better for shell based clients because dot is easier to escape than dollar // The first format would lead to performance degradation in the case of more than 10 users // Recommended to reissue the token (which will follow the third format) @@ -498,22 +505,28 @@ public static Users authWithApiToken(String apiToken) { boolean disableV1APIToken = runtimeConfGetter.getGlobalConf(GlobalConfKeys.disableV1APIToken); try { boolean isOldFormat = true; + boolean isPBKDF2 = false; String regexSeparator = ""; - if (apiToken.startsWith("2$", 0)) { + if (apiToken.startsWith("2$")) { regexSeparator = "\\$"; isOldFormat = false; - } else if (apiToken.startsWith("3.", 0)) { + } else if (apiToken.startsWith("3.")) { + regexSeparator = "\\."; + isOldFormat = false; + } else if (apiToken.startsWith("4.")) { regexSeparator = "\\."; isOldFormat = false; + isPBKDF2 = true; } if (!isOldFormat) { + HashBuilder hashBuilder = isPBKDF2 ? Users.pbkdf2Hasher : Users.bcryptHasher; // to authenticate new format of api token = apiTokenFormatVersion.userUUID.apiTokenUnhashed String[] parts = apiToken.split(regexSeparator); UUID userUUID = UUID.fromString(parts[1]); String apiTokenUnhashed = parts[2]; Users userWithToken = find.query().where().eq("uuid", userUUID).findOne(); if (userWithToken != null) { - if (Users.hasher.isValid(apiTokenUnhashed, userWithToken.getApiToken())) { + if (hashBuilder.isValid(apiTokenUnhashed, userWithToken.getApiToken())) { return userWithToken; } } @@ -529,7 +542,7 @@ public static Users authWithApiToken(String apiToken) { List usersList = find.query().where().isNotNull("apiToken").findList(); long startTime = System.currentTimeMillis(); for (Users user : usersList) { - if (Users.hasher.isValid(apiToken, user.getApiToken())) { + if (Users.bcryptHasher.isValid(apiToken, user.getApiToken())) { LOG.info( "Authentication using API token. 
Completed time: {} ms", System.currentTimeMillis() - startTime); diff --git a/managed/src/main/java/com/yugabyte/yw/models/helpers/CommonUtils.java b/managed/src/main/java/com/yugabyte/yw/models/helpers/CommonUtils.java index e4c32262a348..a97aad4486e4 100644 --- a/managed/src/main/java/com/yugabyte/yw/models/helpers/CommonUtils.java +++ b/managed/src/main/java/com/yugabyte/yw/models/helpers/CommonUtils.java @@ -180,6 +180,8 @@ public class CommonUtils { entry('9', '\''), entry(' ', '\'')); + public static final String FIPS_ENABLED = "yb.fips.enabled"; + /** * Checks whether the field name represents a field with a sensitive data or not. * diff --git a/managed/src/main/resources/health/node_health.py.template b/managed/src/main/resources/health/node_health.py.template index 4881fb0e0d51..bd5831579375 100755 --- a/managed/src/main/resources/health/node_health.py.template +++ b/managed/src/main/resources/health/node_health.py.template @@ -1627,7 +1627,6 @@ class NodeChecker(): def check_ddl_atomicity_internal(self, recheck_table_uuids): - logging.info("Checking DDL atomicity on node {}".format(self.node)) e = self._new_entry("DDL atomicity") metric = Metric.from_definition(YB_DDL_ATOMICITY_CHECK) tables_with_errors = [] diff --git a/managed/src/main/resources/reference.conf b/managed/src/main/resources/reference.conf index 430323a366ff..018418662f72 100644 --- a/managed/src/main/resources/reference.conf +++ b/managed/src/main/resources/reference.conf @@ -139,6 +139,10 @@ yb { credentials = "" } } + fips { + enabled = false + enabled = ${?org.bouncycastle.fips.approved_only} + } thirdparty.packagePath = /opt/third-party grafana.accessKey="changeme" allow_db_version_more_than_yba_version=false diff --git a/managed/src/test/java/com/yugabyte/yw/common/UtilTest.java b/managed/src/test/java/com/yugabyte/yw/common/UtilTest.java index befe129992de..9264a55e963d 100644 --- a/managed/src/test/java/com/yugabyte/yw/common/UtilTest.java +++ b/managed/src/test/java/com/yugabyte/yw/common/UtilTest.java @@ -15,6 +15,7 @@ import static org.mockito.Mockito.doReturn; import static org.mockito.Mockito.mock; +import com.cronutils.utils.StringUtils; import com.fasterxml.jackson.databind.JsonNode; import com.google.common.collect.ImmutableMap; import com.yugabyte.yw.common.utils.FileUtils; @@ -451,13 +452,15 @@ public void testBase36hash(String output, String input) { @Test public void testGetFileSize() { - int maxChars = (int) 1e6; // ~ 2 MB + int maxCharsBatch = 8192; // ~ 8 KB int minChars = 1; + StringBuilder data = new StringBuilder(StringUtils.EMPTY); + for (int i = 0; i < 100; i++) { + data.append(RandomStringUtils.randomAlphabetic(minChars, maxCharsBatch)); + } + Path tmpFilePath = Paths.get(TestHelper.createTempFile(data.toString())); - String data = RandomStringUtils.randomAlphabetic(minChars, maxChars); - Path tmpFilePath = Paths.get(TestHelper.createTempFile(data)); - - long actualFileSize = data.getBytes().length; + long actualFileSize = data.toString().getBytes().length; long fileSize = FileUtils.getFileSize(tmpFilePath.toString()); org.apache.commons.io.FileUtils.deleteQuietly(new File(tmpFilePath.toString())); diff --git a/managed/src/test/java/com/yugabyte/yw/common/certmgmt/CertificateHelperTest.java b/managed/src/test/java/com/yugabyte/yw/common/certmgmt/CertificateHelperTest.java index 3ad15c70e5cf..2d4cbcc6e117 100644 --- a/managed/src/test/java/com/yugabyte/yw/common/certmgmt/CertificateHelperTest.java +++ b/managed/src/test/java/com/yugabyte/yw/common/certmgmt/CertificateHelperTest.java @@ 
-100,7 +100,7 @@ public void testCreateRootCAWithClientCert() { FileInputStream is = new FileInputStream(certPath + String.format("/%s/yugabytedb.crt", rootCA)); X509Certificate clientCer = (X509Certificate) factory.generateCertificate(is); - clientCer.verify(cert.getPublicKey(), "BC"); + clientCer.verify(cert.getPublicKey()); } catch (Exception e) { fail(e.getMessage()); } @@ -129,7 +129,7 @@ public void testCreateClientRootCAWithClientCert() { FileInputStream is = new FileInputStream(certPath + String.format("/%s/yugabytedb.crt", clientRootCA)); X509Certificate clientCer = (X509Certificate) factory.generateCertificate(is); - clientCer.verify(cert.getPublicKey(), "BC"); + clientCer.verify(cert.getPublicKey()); } catch (Exception e) { fail(e.getMessage()); } @@ -165,7 +165,7 @@ public void testCreateCustomerCertToString() ByteArrayInputStream bytes = new ByteArrayInputStream(clientCert.getBytes()); X509Certificate clientCer = (X509Certificate) fact.generateCertificate(bytes); - clientCer.verify(cer.getPublicKey(), "BC"); + clientCer.verify(cer.getPublicKey()); } @Test @@ -199,7 +199,7 @@ public void testCreateCustomerCertToFile() is = new FileInputStream(String.format("/tmp/%s/yugabytedb.crt", rootCA)); X509Certificate clientCer = (X509Certificate) fact.generateCertificate(is); - clientCer.verify(cer.getPublicKey(), "BC"); + clientCer.verify(cer.getPublicKey()); } else { fail(); } diff --git a/managed/src/test/java/com/yugabyte/yw/common/certmgmt/PemTrustStoreManagerTest.java b/managed/src/test/java/com/yugabyte/yw/common/certmgmt/PemTrustStoreManagerTest.java deleted file mode 100644 index d1ecb96f7a4f..000000000000 --- a/managed/src/test/java/com/yugabyte/yw/common/certmgmt/PemTrustStoreManagerTest.java +++ /dev/null @@ -1,438 +0,0 @@ -/* - * Copyright 2023 YugaByte, Inc. and Contributors - * - * Licensed under the Polyform Free Trial License 1.0.0 (the "License"); you - * may not use this file except in compliance with the License. 
You - * may obtain a copy of the License at - * - * https://github.com/YugaByte/yugabyte-db/blob/master/licenses/ - * POLYFORM-FREE-TRIAL-LICENSE-1.0.0.txt - */ - -package com.yugabyte.yw.common.certmgmt; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.fail; - -import com.yugabyte.yw.common.FakeDBApplication; -import com.yugabyte.yw.common.ModelFactory; -import com.yugabyte.yw.common.PlatformServiceException; -import com.yugabyte.yw.common.certmgmt.castore.PemTrustStoreManager; -import com.yugabyte.yw.models.CustomCaCertificateInfo; -import com.yugabyte.yw.models.Customer; -import java.io.File; -import java.io.FileWriter; -import java.io.IOException; -import java.security.KeyStoreException; -import java.security.cert.Certificate; -import java.security.cert.CertificateException; -import java.util.Date; -import java.util.List; -import java.util.UUID; -import lombok.extern.slf4j.Slf4j; -import org.apache.commons.io.FileUtils; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; -import org.junit.runner.RunWith; -import org.mockito.InjectMocks; -import org.mockito.junit.MockitoJUnitRunner; - -@Slf4j -@RunWith(MockitoJUnitRunner.class) -public class PemTrustStoreManagerTest extends FakeDBApplication { - - static String TMP_PEM_STORE = "/tmp/certmgmt/"; - static String PEM_TRUST_STORE_FILE; - - @InjectMocks PemTrustStoreManager pemTrustStoreManager; - private Customer defaultCustomer; - - @Before - public void setUp() throws IOException { - defaultCustomer = ModelFactory.testCustomer(); - TMP_PEM_STORE += defaultCustomer.getUuid().toString(); - PEM_TRUST_STORE_FILE = TMP_PEM_STORE + "/" + PemTrustStoreManager.TRUSTSTORE_FILE_NAME; - new File(TMP_PEM_STORE).mkdirs(); - new File(PEM_TRUST_STORE_FILE).createNewFile(); - } - - @After - public void tearDown() throws IOException { - FileUtils.deleteDirectory(new File(TMP_PEM_STORE)); - } - - public String getCertificateChain2Content() { - String certChain = - "-----BEGIN CERTIFICATE-----\n" - + "MIICQzCCAawCCQDhN1VLJiS5JDANBgkqhkiG9w0BAQsFADBmMQswCQYDVQQGEwJJ\n" - + "TjELMAkGA1UECAwCS0ExEjAQBgNVBAcMCUJlbmdhbHVydTELMAkGA1UECgwCWUIx\n" - + "DDAKBgNVBAsMA0RldjEbMBkGA1UEAwwSQ2xpZW50IENlcnRpZmljYXRlMB4XDTIz\n" - + "MTAwNjExMTM0NFoXDTMzMTAwMzExMTM0NFowZjELMAkGA1UEBhMCSU4xCzAJBgNV\n" - + "BAgMAktBMRIwEAYDVQQHDAlCZW5nYWx1cnUxCzAJBgNVBAoMAllCMQwwCgYDVQQL\n" - + "DANEZXYxGzAZBgNVBAMMEkNsaWVudCBDZXJ0aWZpY2F0ZTCBnzANBgkqhkiG9w0B\n" - + "AQEFAAOBjQAwgYkCgYEA037hWIlJsttsV10ZIGDi0bzUdjouSirqhvjNxUZUEo0t\n" - + "CNupAdjp5M+vyJj2xnAI8DziUslpYovFpPtuwPCuB4MzqO95xVLGLhszH9cQidua\n" - + "Qu+cgFN9JqgrmHUuQJOkTZlPYAEwkE2lqFBvdhXgqICmbgOXgFVHpa8YB8f4U8UC\n" - + "AwEAATANBgkqhkiG9w0BAQsFAAOBgQBmr3fX12cDXaqLMDOqvs8VsAKS7YdOS4K8\n" - + "5RxupmOaoiw0pV2RMY0KSl6VXgcROBlGWo7WCUVVMXq6xw1mrV/RigoV8T+jZ6SQ\n" - + "MvDXgw80Ykm1o+U5tvL6xQ33jTZrlPUDEEHjMq8SRmSzBbLs/G34cuFbnFFCm8h4\n" - + "m4ZyRf8Nlw==\n" - + "-----END CERTIFICATE-----\n" - + "\n" - + "-----BEGIN CERTIFICATE-----\n" - + "MIIDFTCCAn6gAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwZjELMAkGA1UEBhMCSU4x\n" - + "CzAJBgNVBAgMAktBMRIwEAYDVQQHDAlCZW5nYWx1cnUxCzAJBgNVBAoMAllCMQww\n" - + "CgYDVQQLDANEZXYxGzAZBgNVBAMMEkNsaWVudCBDZXJ0aWZpY2F0ZTAeFw0yMzEw\n" - + "MDYxMTE0MDFaFw0yNDEwMDUxMTE0MDFaMGYxCzAJBgNVBAYTAklOMQswCQYDVQQI\n" - + "DAJLQTESMBAGA1UEBwwJQmVuZ2FsdXJ1MQswCQYDVQQKDAJZQjEMMAoGA1UECwwD\n" - + "RGV2MRswGQYDVQQDDBJDbGllbnQgQ2VydGlmaWNhdGUwgZ8wDQYJKoZIhvcNAQEB\n" - + "BQADgY0AMIGJAoGBALUPV0LrhSq7cDPJuQnt/9k26rZuhDLYsy8mjP4yHV+S3T4x\n" - + 
"TmVi3Q9VTQc16WBjGp31SLci9ZpCC79GdhmG8TuzzXe89udmvyfS24Tqxl/2jyZm\n" - + "ABUHx/v/6Uvd3Q/GkUTkoyFAPb69Ra9ZbCCNIY7p08zM2U3ILsNJMJRJZnefAgMB\n" - + "AAGjgdEwgc4wDAYDVR0TBAUwAwEB/zALBgNVHQ8EBAMCAaYwHQYDVR0OBBYEFBTQ\n" - + "4JZlqLnFw7mZpgEl7UBlujG1MIGABgNVHSMEeTB3oWqkaDBmMQswCQYDVQQGEwJJ\n" - + "TjELMAkGA1UECAwCS0ExEjAQBgNVBAcMCUJlbmdhbHVydTELMAkGA1UECgwCWUIx\n" - + "DDAKBgNVBAsMA0RldjEbMBkGA1UEAwwSQ2xpZW50IENlcnRpZmljYXRlggkA4TdV\n" - + "SyYkuSQwDwYDVR0RBAgwBocEChcQETANBgkqhkiG9w0BAQsFAAOBgQBlBoNDnIfT\n" - + "nGw0TJSsR5MXXgnckHldUlsuA+T3UWlzG8KDlVY4F2pGm0fvPqN6YPpABjeB+ue9\n" - + "W7SYPqMflpbEqAHTnC8poID91x7xx9FQzLqwOx8/fmsNZSXscJoPWRxOvqPMoEuh\n" - + "iIHq0hzJqeG6abe076ILpUX7xhVZ9OZ6dQ==\n" - + "-----END CERTIFICATE-----\n"; - - return certChain; - } - - public void addCertificateToYBA(String certPath, String certContent) throws IOException { - File file = new File(certPath); - file.createNewFile(); - FileWriter writer = new FileWriter(file); - writer.write(certContent); - writer.close(); - } - - public String getCertificateChain1Content() { - String certificates = - "-----BEGIN CERTIFICATE-----\n" - + "MIICQzCCAawCCQDhN1VLJiS5JDANBgkqhkiG9w0BAQsFADBmMQswCQYDVQQGEwJJ\n" - + "TjELMAkGA1UECAwCS0ExEjAQBgNVBAcMCUJlbmdhbHVydTELMAkGA1UECgwCWUIx\n" - + "DDAKBgNVBAsMA0RldjEbMBkGA1UEAwwSQ2xpZW50IENlcnRpZmljYXRlMB4XDTIz\n" - + "MTAwNjExMTM0NFoXDTMzMTAwMzExMTM0NFowZjELMAkGA1UEBhMCSU4xCzAJBgNV\n" - + "BAgMAktBMRIwEAYDVQQHDAlCZW5nYWx1cnUxCzAJBgNVBAoMAllCMQwwCgYDVQQL\n" - + "DANEZXYxGzAZBgNVBAMMEkNsaWVudCBDZXJ0aWZpY2F0ZTCBnzANBgkqhkiG9w0B\n" - + "AQEFAAOBjQAwgYkCgYEA037hWIlJsttsV10ZIGDi0bzUdjouSirqhvjNxUZUEo0t\n" - + "CNupAdjp5M+vyJj2xnAI8DziUslpYovFpPtuwPCuB4MzqO95xVLGLhszH9cQidua\n" - + "Qu+cgFN9JqgrmHUuQJOkTZlPYAEwkE2lqFBvdhXgqICmbgOXgFVHpa8YB8f4U8UC\n" - + "AwEAATANBgkqhkiG9w0BAQsFAAOBgQBmr3fX12cDXaqLMDOqvs8VsAKS7YdOS4K8\n" - + "5RxupmOaoiw0pV2RMY0KSl6VXgcROBlGWo7WCUVVMXq6xw1mrV/RigoV8T+jZ6SQ\n" - + "MvDXgw80Ykm1o+U5tvL6xQ33jTZrlPUDEEHjMq8SRmSzBbLs/G34cuFbnFFCm8h4\n" - + "m4ZyRf8Nlw==\n" - + "-----END CERTIFICATE-----\n" - + "-----BEGIN CERTIFICATE-----\n" - + "MIIDFTCCAn6gAwIBAgICEAIwDQYJKoZIhvcNAQELBQAwZjELMAkGA1UEBhMCSU4x\n" - + "CzAJBgNVBAgMAktBMRIwEAYDVQQHDAlCZW5nYWx1cnUxCzAJBgNVBAoMAllCMQww\n" - + "CgYDVQQLDANEZXYxGzAZBgNVBAMMEkNsaWVudCBDZXJ0aWZpY2F0ZTAeFw0yMzEw\n" - + "MTAwNTIzMzdaFw0yNDEwMDkwNTIzMzdaMGYxCzAJBgNVBAYTAklOMQswCQYDVQQI\n" - + "DAJLQTESMBAGA1UEBwwJQmVuZ2FsdXJ1MQswCQYDVQQKDAJZQjEMMAoGA1UECwwD\n" - + "RGV2MRswGQYDVQQDDBJDbGllbnQgQ2VydGlmaWNhdGUwgZ8wDQYJKoZIhvcNAQEB\n" - + "BQADgY0AMIGJAoGBAPacLQJ5kyDi37F8WAVrBgcyx+UvLR9hdmWhq+h4AyXO/Ibq\n" - + "vpocAQRXoRC8eguXDSwNNVCYzz1QqyxyYY29XHiy5Fk2ZrMdg0bHArNoOWEcxQe0\n" - + "kmH6y7yG7Y07FE11GVMMyzs/gaVv1NQGjt8IhZ7ZPpkx9X6/uoV07emWwFZNAgMB\n" - + "AAGjgdEwgc4wDAYDVR0TBAUwAwEB/zALBgNVHQ8EBAMCAaYwHQYDVR0OBBYEFNPK\n" - + "51Qbc/iCTki7ZECdR8vqoSiwMIGABgNVHSMEeTB3oWqkaDBmMQswCQYDVQQGEwJJ\n" - + "TjELMAkGA1UECAwCS0ExEjAQBgNVBAcMCUJlbmdhbHVydTELMAkGA1UECgwCWUIx\n" - + "DDAKBgNVBAsMA0RldjEbMBkGA1UEAwwSQ2xpZW50IENlcnRpZmljYXRlggkA4TdV\n" - + "SyYkuSQwDwYDVR0RBAgwBocEChcQETANBgkqhkiG9w0BAQsFAAOBgQDOg9EyCo6c\n" - + "Lg1+RDsztb+MBEYzL+X1ADNFEICLW7w7uboMow/CFIeT1wr0jdJj52a28ypJxX7v\n" - + "gN4HytXW/d7uoi0Z7XWpDliZ1/+o+71vNKIxqWh00bGqFbxYMl0Pt08YWy4RcAYQ\n" - + "TKb1mt19gJKGdjJDQz6lDSg20OqehrZmsQ==\n" - + "-----END CERTIFICATE-----\n"; - - return certificates; - } - - public String getCertificateContent() { - String certificate = - "-----BEGIN CERTIFICATE-----\n" - + "MIIDBTCCAe2gAwIBAgIQKvGH5iWg47RCxMcw/B3HfDANBgkqhkiG9w0BAQsFADAV\n" - + 
"MRMwEQYDVQQDEwppdGVzdC1sZGFwMB4XDTIzMDIxMzA5MDkzMloXDTMzMDIxMzA5\n" - + "MTkzMlowFTETMBEGA1UEAxMKaXRlc3QtbGRhcDCCASIwDQYJKoZIhvcNAQEBBQAD\n" - + "ggEPADCCAQoCggEBAM0GyAjTfFqpN9+bTiUBo4sY3v3yhyc6cfbS8K8Pt3t2D7nW\n" - + "0VnmLZOc1h7XmHv9aq67c5mjiC4q1O1zbVRh/3MofyzLywmCxfZSt8DjynN+FAxA\n" - + "g46fa+8NdQMAKbmRWG+05j1AhY6LZJr6h2J126n45BjZGZCsfnrSnHyAlIe8ZTvA\n" - + "H17WG0GKoCC84H43MYyunyaXmQZ7ImR7+lAhtIXtg9l7mNLHZTjZTI40iFxQV4T9\n" - + "M+Antrhs5E5HGoaKagHqbxk2U+sW9KFn1+bo9UpgUlVZDtKkHsLzXZO/ilXXhgyL\n" - + "HxJlxP8ed0qEoaLg3wFK3GJrzFmy4p33BXZBVzUCAwEAAaNRME8wCwYDVR0PBAQD\n" - + "AgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFAd4BgRooit2x4eJeukQXi+u\n" - + "no3XMBAGCSsGAQQBgjcVAQQDAgEAMA0GCSqGSIb3DQEBCwUAA4IBAQCZA6FfNJKo\n" - + "GlbxErYUSQc0BmlVTDP3TxriCKPzPaxl6WZ/p3bFkg8HvXLOySzl/kEAEe4KWe56\n" - + "751Iz0vuGRihPqQgkikjfPtvIuLVSytHx19a/sDTZUCQM7qyhWHtYyL7FTxtXMDG\n" - + "8nyrBRax1CNEDW/l0uwyxndopshFxaPvlLsz2SLVcP8g07DWpT9m0d+PnjHBI1Vl\n" - + "B9LKHhpy27wUjYCctLO4BzEP9URJGGgpSI4YEoJIx3HHCSZcjtFxc9zHSeDU7EdY\n" - + "1JKjULlmEPHyte03hkKh+8Ylcn/jo1UsVgbmnJpvQPhWGeI76TtnoM33LgZpnPLW\n" - + "IgS49841p/6U\n" - + "-----END CERTIFICATE-----\n"; - - return certificate; - } - - public String getCertificateChain3Content() { - String certChain = - "-----BEGIN CERTIFICATE-----\n" - + "MIIFZDCCA0ygAwIBAgIIDrYWUG22FM8wDQYJKoZIhvcNAQELBQAwMzESMBAGA1UE\n" - + "AwwJRklDQSBSb290MRAwDgYDVQQKDAdGTVIgTExDMQswCQYDVQQGEwJVUzAeFw0x\n" - + "NjA3MTkyMjAyNTVaFw00NjA3MTkyMjAyNTVaMDMxEjAQBgNVBAMMCUZJQ0EgUm9v\n" - + "dDEQMA4GA1UECgwHRk1SIExMQzELMAkGA1UEBhMCVVMwggIiMA0GCSqGSIb3DQEB\n" - + "AQUAA4ICDwAwggIKAoICAQDBQ6PhzyH9OqT05jxb7oVP0n+XsHqw3FUw7OhuVqTs\n" - + "FAYlMX8wVIsjsS/ozXMV26337yl5S7x62j9/e5KxGT6b2J1Xqfok2NUnx3pMk9/o\n" - + "Dl9+ifrUF0RPw94bxVJCx3DhAjDxiSPJs9XXlJ8+tSUzPpr+ZK+PvyvunWwB3Jr2\n" - + "ZBalKYQfSa6DWz5pIjcf+UwvMaW+NUMewOhcrWP55DTfVH3vfxh6OQkr/xWHYIYq\n" - + "TFEBg4ctEW6vYiPi7SEn27wDVRHn1E1Tno9Eq/a+2INQq1JLVMV7NnkDAx+r+Rbw\n" - + "h11lj6rkW3CKCpSv6Pu6EO03BSdResi0VzHCoa2EgrStSmjJq+DeVRQt8Bz0bP0D\n" - + "ap/u16lxKZCRzs0LE8LDKMa4D/vRZfvPAnmsWwDRvW5JieUbBLWGO7AungJqirWb\n" - + "EGqfHXVQfMWsFADROnWJpUJfvlVZ5DC4Cf9ewc8NZb5SZM0yml8KMngoPBJ1vpe4\n" - + "dgWGGeygzTEHn15Ux9GKrXQ2xye4mQIK2okFuOIQZSPJDqcR3k6Rl+Ilm5KGfqO3\n" - + "vpiKSdvmlcNhwGqKNUhDt8yUMr/9zSJV6Hxx8StAzqoX4f5RWnRiH90uSqTTS+Ui\n" - + "QVPiQoSAvwkDgADYGbj8EBZhppqgCUw14I91vF8KUGQc2HcHYCjl8tP/PB8biupz\n" - + "5wIDAQABo3wwejAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFN6nbJ4lvQ8K\n" - + "vWTSssDiysj/k3zEMBcGA1UdIAQQMA4wDAYKKwYBBAGKFB4BATAdBgNVHQ4EFgQU\n" - + "3qdsniW9Dwq9ZNKywOLKyP+TfMQwDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEB\n" - + "CwUAA4ICAQBrx1a+3wzbfqai4xrctuXkqxyAYl0BIYAli9uZA1f9+q92nOZ2ZFc9\n" - + "6BzZGZeUtxlMd5mAVW+zC2lSKn9xWfZWA7rrpCPiMprXfJdQGA+HZ/w2hNtruCPv\n" - + "tSZ/DkiQFz2DBYEnQOdPvYWZGCIcdA2IEtd/fLgSbENXBr2hoX0vHR7YnV3OExk6\n" - + "9IJjYij1CY3IGXt0Zy1eI2APYuTGogAcYwhMvK/XDhNa2oYf0T++7UbzXzkq3Zy0\n" - + "KxnOd1X3bsXjdZYGgTPPjycLTzbLxvbz14YlnoGHD0filMtfHHH0wG1lDieIU+lk\n" - + "OL8Ff8Bk71khoP9qNgkS8CkjazUaW2PczjL4yczs+jvNB69h718Sg4xS7sV8/VC9\n" - + "lmeT/glZNFhTZKNbK3LSMoumk4ElUw7MeWCuKqPD4OB5WHKMrcj0WtnZJYgagaco\n" - + "RfpRSrd7CI9iD9yu2P173En8uzgMZA2/pMDf8gH6m3b5sz5td7nzdTaXVYQjkYBq\n" - + "t8bZFHvD3hgsIeEcQ9v9XunEYVpCac6LcUDxL33c3+e4s419LrLkgQMnoFYBBezy\n" - + "0TR5XPOoWin1SVgZ29WlIz432VJpw4mnrKdW5k4LcbdHRQ9E4ORNEJC2OesRAyBW\n" - + "C5MpsYpIhsgPPL1y7HS9r83MXWReheDyOEl5SnNYMjrcOib086vCag==\n" - + "-----END CERTIFICATE-----\n" - + "\n" - + "-----BEGIN CERTIFICATE-----\n" - + "MIIF0TCCA7mgAwIBAgIIalPEPqScMu8wDQYJKoZIhvcNAQELBQAwMzESMBAGA1UE\n" - + 
"AwwJRklDQSBSb290MRAwDgYDVQQKDAdGTVIgTExDMQswCQYDVQQGEwJVUzAeFw0x\n" - + "NjA3MTkyMjE3MDhaFw00MTA3MTkyMjE3MDhaMDsxGjAYBgNVBAMMEUZJQ0EgSW50\n" - + "ZXJtZWRpYXRlMRAwDgYDVQQKDAdGTVIgTExDMQswCQYDVQQGEwJVUzCCAiIwDQYJ\n" - + "KoZIhvcNAQEBBQADggIPADCCAgoCggIBAMGGwmwlY9u9t6vESH4ehQCjge1mArZ+\n" - + "ubHkcezLlZ0jY/yMDvh2BF/VASMVMrzih84lEvdQbJYz23wXVjKTmEEZuf64LDiL\n" - + "5l8a4kMRbbeYK5Uq8KgqAY9CAfzIKNl01YR2AxzALdJ8qEptjHZYXuj3/tOSyaKz\n" - + "g5APSLeLg+CbSVTa6gzmizEFpBuI/aWuw5RIWd2+T7gBv5Tpno2wh2cYSBos7/HF\n" - + "zhJiG8ufDjo/mWro/VAq0j8NVbHELKrf61kJhwZv7Z8mIx3RXEFVdQLayFAitzPl\n" - + "9Ul2kI3fOqNJgN73XILj75CjpSLxRaiciVQa6ux0Pqd3FlaQDN43BCb57BEuOREI\n" - + "/XnxucktgwyDuBqM/POScRv6uWclTqLBIbrCaZj63/WVt3K+t3HmBWgGyX4z+N4/\n" - + "NNWncfs61aPC974Ztbh2GOReqmijIDAOND33iY4hZOZFNTiy1d8wz2sOBVOwpl5D\n" - + "e/A9/plepmblcuo3IcC6QRcFCiquD7SJ6e7UIM30bHFKyNHhvFL5k/hPZnCFEdyH\n" - + "wKKAUY8u9iIHTt5l/L7oxsAd8myP5kB49XaF+W/t2VWWOKlaMrOq78y3BSZzIrMb\n" - + "wK9aR4j6KCUBJahPDjAT/Q8qUsyebOR/fRGHy0un0TUw/sfZBYdF6VHHx1TnFrUr\n" - + "3eyy2IL2LCtvAgMBAAGjgeAwgd0wDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAW\n" - + "gBTep2yeJb0PCr1k0rLA4srI/5N8xDA/BggrBgEFBQcBAQQzMDEwLwYIKwYBBQUH\n" - + "MAGGI2h0dHA6Ly9lamJjYXZhbC5mbXIuY29tL3N0YXR1cy9vY3NwMDkGA1UdHwQy\n" - + "MDAwLqAsoCqGKGh0dHA6Ly9lamJjYXZhbC5mbXIuY29tL2NybC9maWNhcm9vdC5j\n" - + "cmwwHQYDVR0OBBYEFGVBj6AkbmRQWylNcjIALL7ypX45MA4GA1UdDwEB/wQEAwIB\n" - + "hjANBgkqhkiG9w0BAQsFAAOCAgEAmDZ2764ea1coc4ZOCobea9fUBfHCKI47IM9l\n" - + "VDkN+PlGmFCEbkBfuqxsuPHH26z2whsEwsIMr2V5Qf1bf1+yXl0Sx86o1Y8WI4AU\n" - + "AEDXnnRmF4E+YudHHbA7jzPamj3rq4+boNUtNRFhK3wW3WH8wfYhsCc9ZTz8wlFG\n" - + "kN9wTQy6HM9R+9ly+hkh9A8Tv8R+Fxvc5gwIjeURZEC19RjHfAa5lJ7VHz8Y6ZYQ\n" - + "lJo/Li2Pxo79e7mBbuBuU4FPRVCYaTEhaZTULD99eNu6pzPNluk/FIYAu/YJhhUX\n" - + "cR7xUnO8ZRRWWmYaYJSo418iAe4qhQvs4G3YRdENZEneE3l6MjiNq+GKZt/KnUMi\n" - + "yqIKM19l8pZw5onNuN44XbnGeNWMPRpS4tB2AOyuKlOZUDU7VV4mptD/parwTDLB\n" - + "dWDi+wadPpEUmZTQkG+yMg4ubOEnu81rmBBOm3CeSsJypAKVgx3sTAUqC//jfZkW\n" - + "9meeBnbn/ol/NKg+dUdMb0T8eef4S50jpNbXpy4jwTmKRRCm+ByxmGQYtADoy5c8\n" - + "UgNuzLFhezAjfnl0570Bw0qK8B8mZmW8srDdpNwMRTk+LdtEaLlv2pgdA0SbTXl+\n" - + "lv+ntTIhRNMh7fnk5EEzGfhv+HPOTBIV2Q9uy/KmxFSxPE2xvTajKxGk7lXJA1Bi\n" - + "LIqYphQ=\n" - + "-----END CERTIFICATE-----\n" - + "\n" - + "-----BEGIN CERTIFICATE-----\n" - + "MIIF1jCCA76gAwIBAgIIIv77qJI0sdUwDQYJKoZIhvcNAQELBQAwOzEaMBgGA1UE\n" - + "AwwRRklDQSBJbnRlcm1lZGlhdGUxEDAOBgNVBAoMB0ZNUiBMTEMxCzAJBgNVBAYT\n" - + "AlVTMB4XDTE2MDcyODIxMjY1MVoXDTM2MDcyODIxMjY1MVowOTEYMBYGA1UEAwwP\n" - + "RklDQSBJc3N1aW5nIEUxMRAwDgYDVQQKDAdGTVIgTExDMQswCQYDVQQGEwJVUzCC\n" - + "AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAOsnvfnZ8QICs84YDxo0DxJV\n" - + "2ZNY7CXGVOgSk+LBJdA0U9ebSenSJWUtTNznCHt5w8t/8IzTQoVuo6HfKoHynzMw\n" - + "Pe20rPQAS7Ot62ZchqrkZsnbcEjcYY1fBDWiIRqeFj53V0kCTXFq8qi9AszB/Nyu\n" - + "vt0tZMJ4Sl6TzM9B076IRtGrLtpkWMJ3TBjnpPLUKwbDqv0vg78prPEx8DoL9Dmv\n" - + "5gF8vjuyXS5KFvOREj3CAw6rW2cW0hzZV1nItAorxoeH9T3SzniH3KLMcm7KmJOi\n" - + "24xzZivETJzYypYphd6eaZ3BZg+k9CyLbbWB2QVX1/rFZAZj0cQrkkr1JqYBBvrW\n" - + "Vb1jU2zUGPlisYX+ecGZ91a2MTSGIPZPCkbq/HtwCOualwhwJV2Olu1o3PrPquI5\n" - + "FV6Q60y+icKC27Y/CKOlt4DAcNY8WK/Kgtn1IEL2AX82wlEIPnY4YtH2RHj3EG/e\n" - + "Xy9L89CITLX8EwvpkkW9KwZDT84rgrT6GVc3vD7bOMsBNhF7Lwj4Kofnc1ay/xaG\n" - + "zWphyIWBrgfD+nc+bl7zBVzILSBsnRZiMhtJfjiEDY3FiOKKuO8kn606jMntF4VW\n" - + "L8tISmTfSMEKIUU4nUjlSYPeVaOTdQnp5g6wU6YQ3soh2i97umtWPXCMnMz51VM5\n" - + "JxL3mIuWy4NxOaU8V6f3AgMBAAGjgd8wgdwwDwYDVR0TAQH/BAUwAwEB/zAfBgNV\n" - + "HSMEGDAWgBRlQY+gJG5kUFspTXIyACy+8qV+OTA/BggrBgEFBQcBAQQzMDEwLwYI\n" - + 
"KwYBBQUHMAGGI2h0dHA6Ly9lamJjYXZhbC5mbXIuY29tL3N0YXR1cy9vY3NwMDgG\n" - + "A1UdHwQxMC8wLaAroCmGJ2h0dHA6Ly9lamJjYXZhbC5mbXIuY29tL2NybC9maWNh\n" - + "aW50LmNybDAdBgNVHQ4EFgQUAfJmhr3/kwKCRJoLd04vP8EaiU8wDgYDVR0PAQH/\n" - + "BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4ICAQBT7580KPs/Xg6t5NiAnerWb6KnlHPk\n" - + "Gr2VgO4pmEDl1tBb/bbSWRQ1ByrYQspu+ZQqo5n4A4WkTYVQvhX8bqiDepig633S\n" - + "gybX2q51RWYWgKph8cbvYAtHattqLUhlmkNqMepT/3Wx7vIACAmWloZNWB20EynI\n" - + "fJUm0p1cCnJoJLaWHikq70IZX1ly3M0oSyaz5ezEHQXgpPbezQXDdtQxdoEVsrnu\n" - + "iuccjGIMyFsQwCFFia7tmx+adwrfG1/yKhvn/L6+pcMGA47fPGIppDYT47dlILJk\n" - + "zavVaik1JaoEOyMMJN6iJsypSgey7GV6KfXyclBeBU1RO0A5gPfg5pD8aZ6QPnnZ\n" - + "K1WiiCSi2vcbdLExB0rS/I/J5xBtrMGRknfYkB+VQ2Hly8+5LP1lf0cThybO0Xal\n" - + "v0Zejsg8zGXiHD5T62clfn4vrtt5D1227hc58lol1ZgdTNUd2nRwWahOTlkj3lib\n" - + "WVBbCHlAH7sVhNSX3bd0KPXzy8CWnwrwbJMeqp6lKVwjcORve4RWaugyrwkqvtXa\n" - + "dxFODwJU6TEk11yV1y+tnX4BngJouz4PoiSopdqRr92tlK2fu9lMkCmMjCZ/38tR\n" - + "37eaos6RAqVKLWGfYkwwMumwVJt889LuYgC8sKNt9pPyj/LVoHoIddUJs3Jjhfi8\n" - + "iDUYpcRmFIuvRQ==\n" - + "-----END CERTIFICATE-----\n"; - - return certChain; - } - - @Test - public void addCertificateChainToTrustStore() - throws KeyStoreException, CertificateException, IOException, PlatformServiceException { - // Add a new certificate to the YBA's PEM trust store. - UUID certUUID = UUID.randomUUID(); - String certDirectory = TMP_PEM_STORE + certUUID.toString(); - new File(certDirectory).mkdirs(); - String certPath = certDirectory + "/ca.crt"; - String certificateContent = getCertificateChain1Content(); - addCertificateToYBA(certPath, certificateContent); - - pemTrustStoreManager.addCertificate(certPath, "test-cert", TMP_PEM_STORE, null, false); - - List pemTrustStoreCertificates = - pemTrustStoreManager.getCertsInTrustStore(PEM_TRUST_STORE_FILE, null); - assertEquals(2, pemTrustStoreCertificates.size()); - - certUUID = UUID.randomUUID(); - certDirectory = TMP_PEM_STORE + certUUID.toString(); - new File(certDirectory).mkdirs(); - certPath = certDirectory + "/ca.crt"; - certificateContent = getCertificateChain2Content(); - addCertificateToYBA(certPath, certificateContent); - pemTrustStoreManager.addCertificate(certPath, "test-cert-1", TMP_PEM_STORE, null, false); - pemTrustStoreCertificates = - pemTrustStoreManager.getCertsInTrustStore(PEM_TRUST_STORE_FILE, null); - assertEquals(3, pemTrustStoreCertificates.size()); - } - - @Test - public void addCertificateToTrustStore() - throws KeyStoreException, CertificateException, IOException, PlatformServiceException { - UUID certUUID = UUID.randomUUID(); - String certDirectory = TMP_PEM_STORE + certUUID.toString(); - new File(certDirectory).mkdirs(); - String certPath = certDirectory + "/ca.crt"; - - String certificateContent = getCertificateContent(); - addCertificateToYBA(certPath, certificateContent); - pemTrustStoreManager.addCertificate(certPath, "test-cert", TMP_PEM_STORE, null, false); - - List pemTrustStoreCertificates = - pemTrustStoreManager.getCertsInTrustStore(PEM_TRUST_STORE_FILE, null); - assertEquals(1, pemTrustStoreCertificates.size()); - } - - @Test - public void removeCertificateChainFromTrustStore() - throws KeyStoreException, CertificateException, IOException, PlatformServiceException { - UUID certUUID = UUID.randomUUID(); - String certDirectory = TMP_PEM_STORE + certUUID.toString(); - new File(certDirectory).mkdirs(); - String certPath = certDirectory + "/ca.crt"; - - String certificateContent = getCertificateChain1Content(); - CustomCaCertificateInfo.create( - defaultCustomer.getUuid(), certUUID, 
"test-cert", certPath, new Date(), new Date(), true); - addCertificateToYBA(certPath, certificateContent); - pemTrustStoreManager.addCertificate(certPath, "test-cert", TMP_PEM_STORE, null, false); - - List pemTrustStoreCertificates = - pemTrustStoreManager.getCertsInTrustStore(PEM_TRUST_STORE_FILE, null); - assertEquals(2, pemTrustStoreCertificates.size()); - - certUUID = UUID.randomUUID(); - certDirectory = TMP_PEM_STORE + certUUID.toString(); - new File(certDirectory).mkdirs(); - certPath = certDirectory + "/ca.crt"; - certificateContent = getCertificateChain2Content(); - CustomCaCertificateInfo.create( - defaultCustomer.getUuid(), certUUID, "test-cert-1", certPath, new Date(), new Date(), true); - addCertificateToYBA(certPath, certificateContent); - - pemTrustStoreManager.addCertificate(certPath, "test-cert-1", TMP_PEM_STORE, null, false); - pemTrustStoreCertificates = - pemTrustStoreManager.getCertsInTrustStore(PEM_TRUST_STORE_FILE, null); - assertEquals(3, pemTrustStoreCertificates.size()); - - pemTrustStoreManager.remove(certPath, "test-cert", TMP_PEM_STORE, null, false); - pemTrustStoreCertificates = - pemTrustStoreManager.getCertsInTrustStore(PEM_TRUST_STORE_FILE, null); - assertEquals(2, pemTrustStoreCertificates.size()); - } - - @Test - public void replaceCertificateChainInTrustStore() - throws KeyStoreException, CertificateException, IOException, PlatformServiceException { - UUID certUUID = UUID.randomUUID(); - String certDirectory = TMP_PEM_STORE + certUUID.toString(); - new File(certDirectory).mkdirs(); - String oldCertPath = certDirectory + "/ca.crt"; - - String certificateContent = getCertificateChain1Content(); - CustomCaCertificateInfo.create( - defaultCustomer.getUuid(), - certUUID, - "test-cert", - oldCertPath, - new Date(), - new Date(), - true); - addCertificateToYBA(oldCertPath, certificateContent); - pemTrustStoreManager.addCertificate(oldCertPath, "test-cert", TMP_PEM_STORE, null, false); - - List pemTrustStoreCertificates = - pemTrustStoreManager.getCertsInTrustStore(PEM_TRUST_STORE_FILE, null); - assertEquals(2, pemTrustStoreCertificates.size()); - - certUUID = UUID.randomUUID(); - certDirectory = TMP_PEM_STORE + certUUID.toString(); - new File(certDirectory).mkdirs(); - String certPath = certDirectory + "/ca.crt"; - certificateContent = getCertificateChain2Content(); - CustomCaCertificateInfo.create( - defaultCustomer.getUuid(), certUUID, "test-cert-1", certPath, new Date(), new Date(), true); - addCertificateToYBA(certPath, certificateContent); - - pemTrustStoreManager.addCertificate(certPath, "test-cert-1", TMP_PEM_STORE, null, false); - pemTrustStoreCertificates = - pemTrustStoreManager.getCertsInTrustStore(PEM_TRUST_STORE_FILE, null); - assertEquals(3, pemTrustStoreCertificates.size()); - - certUUID = UUID.randomUUID(); - certDirectory = TMP_PEM_STORE + certUUID.toString(); - new File(certDirectory).mkdirs(); - String newCertPath = certDirectory + "/ca.crt"; - certificateContent = getCertificateChain3Content(); - addCertificateToYBA(newCertPath, certificateContent); - - pemTrustStoreManager.replaceCertificate( - certPath, newCertPath, "test-cert-1", TMP_PEM_STORE, null, false); - CustomCaCertificateInfo.create( - defaultCustomer.getUuid(), - certUUID, - "test-cert-1", - newCertPath, - new Date(), - new Date(), - true); - - pemTrustStoreCertificates = - pemTrustStoreManager.getCertsInTrustStore(PEM_TRUST_STORE_FILE, null); - List allCerts = pemTrustStoreManager.getX509Certificate(newCertPath); - 
allCerts.addAll(pemTrustStoreManager.getX509Certificate(oldCertPath)); - assertEquals(5, allCerts.size()); - for (Certificate cert : allCerts) { - if (!pemTrustStoreCertificates.contains(cert)) { - fail(); - } - } - assertEquals(5, pemTrustStoreCertificates.size()); - } -} diff --git a/managed/src/test/java/com/yugabyte/yw/common/certmgmt/VaultPKITest.java b/managed/src/test/java/com/yugabyte/yw/common/certmgmt/VaultPKITest.java index 41e706864d90..9b0206f10081 100644 --- a/managed/src/test/java/com/yugabyte/yw/common/certmgmt/VaultPKITest.java +++ b/managed/src/test/java/com/yugabyte/yw/common/certmgmt/VaultPKITest.java @@ -151,13 +151,13 @@ public void TestCertificateGeneration() throws Exception { X509Certificate caCert = CertificateHelper.convertStringToX509Cert(((VaultPKI) certProvider).getCACertificate()); PublicKey k = caCert.getPublicKey(); - cert.verify(k, "BC"); + cert.verify(k); // verify by fetching ca from mountPath. caCert = ((VaultPKI) certProvider).getCACertificateFromVault(); assertNotEquals("", caCert); k = caCert.getPublicKey(); - cert.verify(k, "BC"); + cert.verify(k); } @Test diff --git a/managed/src/test/java/com/yugabyte/yw/common/ha/PlatformInstanceClientFactoryTest.java b/managed/src/test/java/com/yugabyte/yw/common/ha/PlatformInstanceClientFactoryTest.java index aeb1bea95d5e..757a8748ece5 100644 --- a/managed/src/test/java/com/yugabyte/yw/common/ha/PlatformInstanceClientFactoryTest.java +++ b/managed/src/test/java/com/yugabyte/yw/common/ha/PlatformInstanceClientFactoryTest.java @@ -12,7 +12,6 @@ import static com.yugabyte.yw.models.ScopedRuntimeConfig.GLOBAL_SCOPE_UUID; import static org.junit.Assert.assertNotEquals; -import static org.junit.Assert.assertThrows; import static play.test.Helpers.fakeRequest; import com.yugabyte.yw.common.FakeApi; @@ -40,8 +39,6 @@ public class PlatformInstanceClientFactoryTest extends FakeDBApplication { private static final String REMOTE_ACME_ORG = "http://remote.acme.org"; - private static final String BAD_CA_CERT_KEY = "-----BAD CERT-----\n"; - private static final String GOOD_CA_CERT_KEY = "-----BEGIN CERTIFICATE-----\n" + "MIIDzTCCArWgAwIBAgIQCjeHZF5ftIwiTv0b7RQMPDANBgkqhkiG9w0BAQsFADBa\n" @@ -117,11 +114,6 @@ public void getCustomClient() { platformInstanceClient3.getApiHelper().getWsClient()); } - @Test - public void setWsConfig_badCert() { - assertThrows(RuntimeException.class, () -> setWsConfig(BAD_CA_CERT_KEY)); - } - private void setWsConfig(String pemKey) { String wsConfig = String.format( diff --git a/managed/src/test/java/com/yugabyte/yw/controllers/JWTVerifierTest.java b/managed/src/test/java/com/yugabyte/yw/controllers/JWTVerifierTest.java index 31ef89dbf911..822e4c35774b 100644 --- a/managed/src/test/java/com/yugabyte/yw/controllers/JWTVerifierTest.java +++ b/managed/src/test/java/com/yugabyte/yw/controllers/JWTVerifierTest.java @@ -23,7 +23,7 @@ import java.time.Instant; import java.util.Date; import java.util.UUID; -import org.bouncycastle.jce.provider.BouncyCastleProvider; +import org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider; import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; @@ -41,7 +41,7 @@ public class JWTVerifierTest { @Before public void setup() throws Exception { - Security.addProvider(new BouncyCastleProvider()); + Security.addProvider(new BouncyCastleFipsProvider()); verfier = new JWTVerifier(keyProvider); keyPair = CertificateHelper.getKeyPairObject(); } From c359d8c0cea6a03dc5c58da5b21d515a57a9b277 Mon Sep 17 00:00:00 2001 From: Dwight Hodge 
<79169168+ddhodge@users.noreply.github.com> Date: Sat, 17 May 2025 00:07:24 -0400 Subject: [PATCH 093/146] format landing page (#27240) --- docs/content/_index.md | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/content/_index.md b/docs/content/_index.md index a1ed100a7294..2424c58e35ce 100644 --- a/docs/content/_index.md +++ b/docs/content/_index.md @@ -7,7 +7,6 @@ type: indexpage layout: list breadcrumbDisable: true weight: 1 -showRightNav: true unversioned: true --- From 06990855c7b48df925e01bec6388f672747e935a Mon Sep 17 00:00:00 2001 From: Jason Kim Date: Wed, 14 May 2025 17:26:37 -0700 Subject: [PATCH 094/146] [#25892] YSQL: make dummy_seclabel in build_postgres.py Summary: Commit 57539dd6b797d99678c6f7c25296f16cb5b91dd2 adds dummy_seclabel as a subdir to build for the default make. It technically doesn't belong in the default path, and the modification to add that there increases diff with upstream postgres. Instead, run make for dummy_seclabel in build_postgres.py. From the user perspective, it still builds by default with yb_build.sh, but the intent is cleaner, and it can possibly be built when-needed in the future. Jira: DB-15199 Test Plan: On Almalinux 8: ./yb_build.sh fastdebug --gcc11 --rebuild-postgres \ --java-test TestPgRegressModulesDummySeclabel Close: #25892 Reviewers: asaha Reviewed By: asaha Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D43999 --- python/yugabyte/build_postgres.py | 2 ++ src/postgres/src/Makefile | 3 +-- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/python/yugabyte/build_postgres.py b/python/yugabyte/build_postgres.py index 1d667e952919..383da38295be 100755 --- a/python/yugabyte/build_postgres.py +++ b/python/yugabyte/build_postgres.py @@ -817,8 +817,10 @@ def make_postgres(self) -> None: work_dirs = [ self.pg_build_root, os.path.join(self.pg_build_root, 'contrib'), + os.path.join(self.pg_build_root, 'src/test/modules/dummy_seclabel'), ] + external_extension_dirs + # TODO(#27196): parallelize this for loop. for work_dir in work_dirs: # Postgresql requires MAKELEVEL to be 0 or non-set when calling its make. # But in the case where the YB project is built with make, diff --git a/src/postgres/src/Makefile b/src/postgres/src/Makefile index b45ed15b319b..79e274a4769b 100644 --- a/src/postgres/src/Makefile +++ b/src/postgres/src/Makefile @@ -29,8 +29,7 @@ SUBDIRS = \ makefiles \ test/regress \ test/isolation \ - test/perl \ - test/modules/dummy_seclabel # YB TODO (#25892): Figure out another way to run these tests + test/perl ifeq ($(with_llvm), yes) SUBDIRS += backend/jit/llvm From fdfaf1dc94ebb23b57941d5972608b62db1851f9 Mon Sep 17 00:00:00 2001 From: jhe Date: Wed, 14 May 2025 09:41:03 -0700 Subject: [PATCH 095/146] [#27066] xClusterDDLRepl: Add statement_timeout flag for ddl_queue execution Summary: Adding `xcluster_ddl_queue_statement_timeout_ms` gflag to control timeout for running DDLs on the target from ddl_queue. Setting default to 0 (no timeout). 
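For illustration only (not part of this patch): since this is a runtime gflag it can be supplied at tserver startup or, presumably, toggled on a live tserver. The server address and the 60000 ms value below are placeholder assumptions, not values from this change.

```sh
# Hypothetical sketch: give replicated DDLs from ddl_queue a 60-second statement timeout.
# At tserver startup (standard gflag syntax):
#   yb-tserver ... --xcluster_ddl_queue_statement_timeout_ms=60000
# At runtime, assuming yb-ts-cli's set_flag subcommand; 127.0.0.1:9100 is a placeholder address:
yb-ts-cli --server_address=127.0.0.1:9100 set_flag xcluster_ddl_queue_statement_timeout_ms 60000
```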
Jira: DB-16548 Test Plan: Jenkins Reviewers: xCluster, hsunder Reviewed By: hsunder Subscribers: ybase Differential Revision: https://phorge.dev.yugabyte.com/D43990 --- src/yb/tserver/xcluster_ddl_queue_handler.cc | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/src/yb/tserver/xcluster_ddl_queue_handler.cc b/src/yb/tserver/xcluster_ddl_queue_handler.cc index f864d6327a0e..90858064e5c4 100644 --- a/src/yb/tserver/xcluster_ddl_queue_handler.cc +++ b/src/yb/tserver/xcluster_ddl_queue_handler.cc @@ -35,6 +35,9 @@ DEFINE_RUNTIME_int32(xcluster_ddl_queue_max_retries_per_ddl, 5, "Maximum number of retries per DDL before we pause processing of the ddl_queue table."); +DEFINE_RUNTIME_uint32(xcluster_ddl_queue_statement_timeout_ms, 0, + "Statement timeout to use for executing DDLs from the ddl_queue table. 0 means no timeout."); + DEFINE_test_flag(bool, xcluster_ddl_queue_handler_cache_connection, true, "Whether we should cache the ddl_queue handler's connection, or always recreate it."); @@ -405,6 +408,9 @@ Status XClusterDDLQueueHandler::ProcessDDLQuery(const DDLQueryInfo& query_info) setup_query << "SET yb_test_fail_next_ddl TO true;"; } + setup_query << Format( + "SET statement_timeout TO $0;", FLAGS_xcluster_ddl_queue_statement_timeout_ms); + RETURN_NOT_OK(RunAndLogQuery(setup_query.str())); RETURN_NOT_OK(ProcessFailedDDLQuery(RunAndLogQuery(query_info.query), query_info)); RETURN_NOT_OK( From 8a160aee9ecd29c7c2ca0cba6b4694d674d43d76 Mon Sep 17 00:00:00 2001 From: Sergei Politov Date: Sat, 17 May 2025 20:12:48 +0300 Subject: [PATCH 096/146] [#27042] DocDB: Replace LOG(ERROR) with WARNING or DFATAL Summary: Per our coding style ([Log Levels](https://docs.yugabyte.com/preview/contribute/core-database/coding-style/#log-levels)), we should avoid using `LOG(ERROR)`. - For **expected failures**, use `LOG(WARNING)`. - For **unexpected failures**, prefer `LOG(DFATAL)` to ensure tests fail appropriately. Updated most `LOG(ERROR)` usages to either `LOG(WARNING)` or `LOG(DFATAL)`. Left `LOG(ERROR)` in several places where the log is triggered in some tests but should never happen in production.
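To illustrate the convention, a minimal sketch with hypothetical call sites (the function names and header choices are assumptions, not code from this diff; `Status`, `STATUS`, and the log macros are the ones used throughout this tree):

```cpp
#include <cstddef>

#include "yb/util/logging.h"
#include "yb/util/status.h"

namespace yb {

// Expected failure (e.g. a retryable RPC error or bad user input): warn and propagate the status.
Status HandleSendResult(const Status& send_status) {
  if (!send_status.ok()) {
    LOG(WARNING) << "Send failed, caller may retry: " << send_status;
  }
  return send_status;
}

// Unexpected failure (invariant violation): LOG(DFATAL) is fatal in debug/test builds,
// so tests fail loudly, but it only logs an error in release builds.
Status CheckIndex(size_t index, size_t size) {
  if (index >= size) {
    LOG(DFATAL) << "Index " << index << " out of bounds: " << size;
    return STATUS(IllegalState, "Index out of bounds");
  }
  return Status::OK();
}

}  // namespace yb
```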
Jira: DB-16515 Test Plan: Jenkins Reviewers: hsunder, mlillibridge Reviewed By: hsunder Subscribers: jason, mlillibridge, yql, ybase Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43680 --- src/yb/ash/wait_state.cc | 2 +- src/yb/bfcommon/bfunc_standard.h | 6 ++-- src/yb/client/batcher.cc | 9 +++--- src/yb/client/client-internal.cc | 8 ++--- src/yb/client/client.cc | 2 +- src/yb/client/meta_cache.cc | 2 +- src/yb/client/ql-stress-test.cc | 8 ++--- src/yb/client/table_handle.cc | 4 +-- src/yb/client/tablet_rpc.h | 4 +-- src/yb/client/transaction.cc | 14 ++++----- src/yb/client/transaction_manager.cc | 8 ++--- src/yb/common/ql_value.cc | 2 +- src/yb/common/schema.h | 2 +- src/yb/consensus/consensus_queue.cc | 11 ++++--- src/yb/consensus/log.cc | 6 ++-- src/yb/consensus/raft_consensus.cc | 4 +-- .../consensus/raft_consensus_quorum-test.cc | 11 +++---- src/yb/consensus/retryable_requests.cc | 18 ++++------- src/yb/docdb/cql_operation.cc | 7 ++--- src/yb/docdb/deadlock_detector.cc | 2 +- src/yb/docdb/docdb.cc | 10 +++---- .../docdb/docdb_compaction_filter_intents.cc | 2 +- src/yb/docdb/docdb_test_util.cc | 18 +++++------ src/yb/docdb/docdb_util.cc | 5 ++-- src/yb/docdb/iter_util.cc | 4 +-- src/yb/docdb/scan_choices.cc | 2 +- src/yb/dockv/primitive_value.cc | 4 +-- src/yb/fs/fs_manager.cc | 2 +- .../cassandra_cpp_driver-test.cc | 20 ++++++------- src/yb/integration-tests/cdcsdk_gflag-test.cc | 2 +- .../cdcsdk_ysql_test_base.cc | 26 ++++++++-------- .../integration-tests/cdcsdk_ysql_test_base.h | 4 +-- .../cql-tablet-split-test.cc | 8 ++--- src/yb/master/backfill_index.cc | 20 +++++++------ src/yb/master/catalog_manager.cc | 20 ++++++------- src/yb/master/cluster_balance.cc | 4 +-- src/yb/master/master-path-handlers.cc | 4 +-- src/yb/master/master.cc | 2 +- src/yb/master/master_heartbeat_service.cc | 26 ++++++++-------- src/yb/master/master_snapshot_coordinator.cc | 2 +- src/yb/master/master_tablet_service.cc | 6 ++-- src/yb/master/master_tserver.cc | 3 +- src/yb/master/mini_master.cc | 12 ++++---- src/yb/master/permissions_manager.cc | 6 ++-- src/yb/master/sys_catalog.cc | 4 +-- src/yb/master/table_index.cc | 2 +- src/yb/master/util/yql_vtable_helpers.cc | 6 ++-- .../xcluster/xcluster_bootstrap_helper.cc | 2 +- src/yb/master/xrepl_catalog_manager.cc | 6 ++-- src/yb/master/yql_peers_vtable.cc | 8 ++--- src/yb/master/ysql/ysql_catalog_config.cc | 2 +- .../ysql/ysql_initdb_major_upgrade_handler.cc | 4 +-- src/yb/qlexpr/ql_expr.cc | 4 +-- src/yb/qlexpr/ql_serialization.cc | 4 +-- src/yb/rocksdb/db/memtable_list.cc | 6 ++-- src/yb/rocksdb/db/version_set.cc | 3 +- src/yb/rocksdb/db/write_batch.cc | 2 +- src/yb/rocksdb/table/format.cc | 2 +- src/yb/rocksdb/table/index_reader.cc | 10 +++---- src/yb/rocksdb/util/env_posix.cc | 2 +- src/yb/rocksdb/util/posix_logger.h | 6 ++-- src/yb/rpc/acceptor.cc | 2 +- src/yb/rpc/io_thread_pool.cc | 4 +-- src/yb/rpc/messenger.cc | 4 +-- src/yb/rpc/outbound_call.cc | 2 +- src/yb/rpc/scheduler.cc | 7 +++-- src/yb/rpc/yb_rpc.cc | 3 +- src/yb/server/call_home.cc | 8 ++--- src/yb/server/generic_service.cc | 2 +- src/yb/server/total_mem_watcher.cc | 6 ++-- src/yb/server/webserver.cc | 2 +- src/yb/tablet/mvcc.cc | 4 +-- src/yb/tablet/operations/operation_driver.cc | 3 +- src/yb/tablet/tablet.cc | 2 +- src/yb/tablet/tablet_metadata.cc | 9 +++--- src/yb/tablet/tablet_peer.cc | 14 ++++----- src/yb/tablet/tablet_peer_mm_ops.cc | 2 +- src/yb/tablet/transaction_participant.cc | 4 +-- src/yb/tools/data-patcher.cc | 2 +- src/yb/tools/fs_tool.cc | 8 ++--- 
src/yb/tserver/metrics_snapshotter.cc | 4 +-- src/yb/tserver/pg_client_session.cc | 8 ++--- src/yb/tserver/pg_create_table.cc | 2 +- .../remote_bootstrap_file_downloader.cc | 8 ++--- src/yb/tserver/remote_bootstrap_service.cc | 7 +++-- src/yb/tserver/remote_bootstrap_session.cc | 6 ++-- src/yb/tserver/remote_bootstrap_snapshots.cc | 6 ++-- src/yb/tserver/service_util.h | 6 +--- .../stateful_service_base.cc | 2 +- src/yb/tserver/tablet_server.cc | 4 +-- src/yb/tserver/tablet_service.cc | 30 +++++++++---------- src/yb/tserver/tablet_validator.cc | 9 +++--- src/yb/tserver/ts_local_lock_manager.cc | 4 +-- src/yb/tserver/ts_tablet_manager.cc | 14 ++++----- src/yb/tserver/xcluster_consumer.cc | 6 ++-- src/yb/tserver/xcluster_ddl_queue_handler.cc | 4 +-- src/yb/tserver/xcluster_output_client.cc | 4 +-- src/yb/tserver/xcluster_poller.cc | 2 +- src/yb/util/allocation_tracker.cc | 8 ++--- src/yb/util/async_util.cc | 2 +- src/yb/util/env_posix.cc | 4 +-- src/yb/util/env_util.cc | 2 +- src/yb/util/file_system_posix.cc | 8 ++--- src/yb/util/logging.h | 23 +++++++++----- src/yb/util/memory/memory.h | 8 ++--- src/yb/util/net/dns_resolver-test.cc | 4 +-- src/yb/util/net/inetaddress.cc | 4 +-- src/yb/util/net/net_util.cc | 2 +- src/yb/util/physical_time.cc | 6 ++-- src/yb/util/result-test.cc | 1 - src/yb/util/shared_mem.cc | 22 +++++++------- src/yb/util/stack_trace_tracker.cc | 4 +-- src/yb/util/status_log.h | 12 +++++++- src/yb/util/trace.cc | 3 +- src/yb/util/ulimit_util.cc | 8 ++--- src/yb/vector_index/vector_lsm.cc | 2 +- src/yb/yql/cql/cqlserver/cql_processor.cc | 10 +++---- src/yb/yql/cql/cqlserver/cql_rpc.cc | 2 +- .../yql/cql/cqlserver/system_query_cache.cc | 2 +- src/yb/yql/cql/ql/parser/scanner_util.cc | 4 +-- .../yql/cql/ql/ptree/pt_dml_write_property.cc | 2 +- src/yb/yql/cql/ql/ptree/pt_table_property.cc | 4 +-- src/yb/yql/cql/ql/util/cql_message.cc | 19 ++++++------ src/yb/yql/pggate/pg_client.cc | 6 ++-- src/yb/yql/pggate/pggate_flags.cc | 12 +++++++- src/yb/yql/pggate/ybc_pggate.cc | 4 +-- .../ybc_ysql_bench_metrics_handler.cc | 2 +- src/yb/yql/pgwrapper/pg_wrapper.cc | 19 +++++------- src/yb/yql/redis/redisserver/redis_client.cc | 2 +- .../yql/redis/redisserver/redis_commands.cc | 10 +++---- src/yb/yql/redis/redisserver/redis_service.cc | 8 ++--- 131 files changed, 428 insertions(+), 430 deletions(-) diff --git a/src/yb/ash/wait_state.cc b/src/yb/ash/wait_state.cc index 35fcf50e3870..8b7359127f44 100644 --- a/src/yb/ash/wait_state.cc +++ b/src/yb/ash/wait_state.cc @@ -74,7 +74,7 @@ void MaybeSleepForTests(WaitStateInfo* state, WaitStateCode c) { } if (!state) { - YB_LOG_EVERY_N_SECS(ERROR, 5) << __func__ << " skipping sleep because WaitStateInfo is null"; + YB_LOG_EVERY_N_SECS(WARNING, 5) << __func__ << " skipping sleep because WaitStateInfo is null"; return; } diff --git a/src/yb/bfcommon/bfunc_standard.h b/src/yb/bfcommon/bfunc_standard.h index 80bcd4e06490..a95dc8dbf285 100644 --- a/src/yb/bfcommon/bfunc_standard.h +++ b/src/yb/bfcommon/bfunc_standard.h @@ -26,20 +26,20 @@ inline Result NoOp() { // ServerOperator that takes no argument and has no return value. inline Status ServerOperator() { - LOG(ERROR) << "Only tablet servers can execute this builtin call"; + LOG(DFATAL) << "Only tablet servers can execute this builtin call"; return STATUS(RuntimeError, "Only tablet servers can execute this builtin call"); } // ServerOperator that takes 1 argument and has a return value. 
inline Status ServerOperator(const BFValue& arg1, BFFactory factory) { - LOG(ERROR) << "Only tablet servers can execute this builtin call"; + LOG(DFATAL) << "Only tablet servers can execute this builtin call"; return STATUS(RuntimeError, "Only tablet servers can execute this builtin call"); } // This is not used but implemented as an example for future coding. // ServerOperator that takes 2 arguments and has a return value. inline Status ServerOperator(const BFValue& arg1, const BFValue& arg2, BFFactory factory) { - LOG(ERROR) << "Only tablet servers can execute this builtin call"; + LOG(DFATAL) << "Only tablet servers can execute this builtin call"; return STATUS(RuntimeError, "Only tablet servers can execute this builtin call"); } diff --git a/src/yb/client/batcher.cc b/src/yb/client/batcher.cc index de14f3c61b8d..2d263445b85f 100644 --- a/src/yb/client/batcher.cc +++ b/src/yb/client/batcher.cc @@ -747,10 +747,11 @@ void Batcher::ProcessWriteResponse(const WriteRpc &rpc, const Status &s) { size_t row_index = err_pb.row_index(); if (row_index >= rpc.ops().size()) { - LOG_WITH_PREFIX(ERROR) << "Received a per_row_error for an out-of-bound op index " - << row_index << " (sent only " << rpc.ops().size() << " ops)"; - LOG_WITH_PREFIX(ERROR) << "Response from tablet " << rpc.tablet().tablet_id() << ":\n" - << rpc.resp().DebugString(); + LOG_WITH_PREFIX(DFATAL) + << "Received a per_row_error for an out-of-bound op index " << row_index + << " (sent only " << rpc.ops().size() << " ops)\n" + << "Response from tablet " << rpc.tablet().tablet_id() << ":\n" + << rpc.resp().DebugString(); continue; } shared_ptr yb_op = rpc.ops()[row_index].yb_op; diff --git a/src/yb/client/client-internal.cc b/src/yb/client/client-internal.cc index db83a8aff872..0227d47e408c 100644 --- a/src/yb/client/client-internal.cc +++ b/src/yb/client/client-internal.cc @@ -575,7 +575,7 @@ Status YBClient::Data::CreateTable(YBClient* client, table_name, internal::GetSchema(schema), internal::GetSchema(info.schema)); - LOG(ERROR) << msg; + LOG(WARNING) << msg; return STATUS(AlreadyPresent, msg); } @@ -593,7 +593,7 @@ Status YBClient::Data::CreateTable(YBClient* client, table_name.ToString(), partition_schema.DebugString(internal::GetSchema(schema)), info.partition_schema.DebugString(internal::GetSchema(info.schema))); - LOG(ERROR) << msg; + LOG(WARNING) << msg; return STATUS(AlreadyPresent, msg); } } @@ -901,7 +901,7 @@ Status YBClient::Data::CreateTablegroup(YBClient* client, table_name, internal::GetSchema(ybschema), internal::GetSchema(info.schema)); - LOG(ERROR) << msg; + LOG(WARNING) << msg; return STATUS(AlreadyPresent, msg); } @@ -2827,7 +2827,7 @@ Status YBClient::Data::SetMasterAddresses(const string& addrs) { out.str(master_server_addr); out.str(" "); } - LOG(ERROR) << out.str(); + LOG(DFATAL) << out.str(); return STATUS(InvalidArgument, "master addresses cannot be empty"); } diff --git a/src/yb/client/client.cc b/src/yb/client/client.cc index 9c39f3a8730a..11132f7a773c 100644 --- a/src/yb/client/client.cc +++ b/src/yb/client/client.cc @@ -2527,7 +2527,7 @@ Status YBClient::ListMasters(CoarseTimePoint deadline, std::vector* master_uuids->clear(); for (const ServerEntryPB& master : resp.masters()) { if (master.has_error()) { - LOG_WITH_PREFIX(ERROR) << "Master " << master.ShortDebugString() << " hit error " + LOG_WITH_PREFIX(WARNING) << "Master " << master.ShortDebugString() << " hit error " << master.error().ShortDebugString(); return StatusFromPB(master.error()); } diff --git a/src/yb/client/meta_cache.cc 
b/src/yb/client/meta_cache.cc index c15132e6b9c7..714d9f8fb523 100644 --- a/src/yb/client/meta_cache.cc +++ b/src/yb/client/meta_cache.cc @@ -575,7 +575,7 @@ void RemoteTablet::GetRemoteTabletServers( const Status status = CHECK_NOTNULL(replica->ts->local_tserver())->GetTabletStatus(&req, &resp); if (!status.ok() || resp.has_error()) { - LOG_WITH_PREFIX(ERROR) + LOG_WITH_PREFIX(WARNING) << "Received error from GetTabletStatus: " << (!status.ok() ? status : StatusFromPB(resp.error().status())); continue; diff --git a/src/yb/client/ql-stress-test.cc b/src/yb/client/ql-stress-test.cc index 49260a7c9acc..c7b358b0a2da 100644 --- a/src/yb/client/ql-stress-test.cc +++ b/src/yb/client/ql-stress-test.cc @@ -326,11 +326,9 @@ bool QLStressTest::CheckRetryableRequestsCountsAndLeaders( Slice key = iter->key(); EXPECT_OK(DocHybridTime::DecodeFromEnd(&key)); auto emplace_result = keys.emplace(key.ToBuffer(), iter->key().ToBuffer()); - if (!emplace_result.second) { - LOG(ERROR) - << "Duplicate key: " << dockv::SubDocKey::DebugSliceToString(iter->key()) - << " vs " << dockv::SubDocKey::DebugSliceToString(emplace_result.first->second); - } + EXPECT_TRUE(emplace_result.second) + << "Duplicate key: " << dockv::SubDocKey::DebugSliceToString(iter->key()) + << " vs " << dockv::SubDocKey::DebugSliceToString(emplace_result.first->second); } } } diff --git a/src/yb/client/table_handle.cc b/src/yb/client/table_handle.cc index ca0459adff12..2ad94d1f48a6 100644 --- a/src/yb/client/table_handle.cc +++ b/src/yb/client/table_handle.cc @@ -353,8 +353,8 @@ bool TableIterator::IsFlushStatusOkOrHandleErrors(FlushStatus flush_status) { HandleError(flush_status.status); if (!error_handler_) { for (const auto& error : flush_status.errors) { - LOG(ERROR) << "Failed operation: " << error->failed_op().ToString() - << ", status: " << error->status(); + LOG(WARNING) << "Failed operation: " << error->failed_op().ToString() + << ", status: " << error->status(); } } return false; diff --git a/src/yb/client/tablet_rpc.h b/src/yb/client/tablet_rpc.h index 1a07644d9d66..a95926d6ddb7 100644 --- a/src/yb/client/tablet_rpc.h +++ b/src/yb/client/tablet_rpc.h @@ -101,8 +101,8 @@ inline bool CheckIfConsensusInfoUnexpectedlyMissing(const Req& request, const Re if (pb_util::ArePBsEqual(response, empty, /*diff_str=*/ nullptr)) { return true; // we haven't gotten an actual response from an RPC yet... } - LOG(ERROR) << "Detected consensus info unexpectedly missing; request: " - << request.DebugString() << ", response: " << response.DebugString(); + LOG(DFATAL) << "Detected consensus info unexpectedly missing; request: " + << request.DebugString() << ", response: " << response.DebugString(); return false; } } diff --git a/src/yb/client/transaction.cc b/src/yb/client/transaction.cc index 84ebdf3d7932..18083c84d6eb 100644 --- a/src/yb/client/transaction.cc +++ b/src/yb/client/transaction.cc @@ -492,19 +492,19 @@ class YBTransaction::Impl final : public internal::TxnBatcherIf { if (status.ok()) { if (used_read_time && metadata_.isolation != IsolationLevel::SERIALIZABLE_ISOLATION) { const bool read_point_already_set = static_cast(read_point_.GetReadTime()); -#ifndef NDEBUG if (read_point_already_set) { +#ifndef NDEBUG // Display details of operations before crashing in debug mode. 
int op_idx = 1; for (const auto& op : ops) { LOG(ERROR) << "Operation " << op_idx << ": " << op.ToString(); op_idx++; } - } #endif - LOG_IF_WITH_PREFIX(DFATAL, read_point_already_set) - << "Read time already picked (" << read_point_.GetReadTime() - << ", but server replied with used read time: " << used_read_time; + LOG_WITH_PREFIX(DFATAL) + << "Read time already picked (" << read_point_.GetReadTime() + << ", but server replied with used read time: " << used_read_time; + } // TODO: Update local limit for the tablet id which sent back the used read time read_point_.SetReadTime(used_read_time, ConsistentReadPoint::HybridTimeMap()); VLOG_WITH_PREFIX(3) @@ -708,8 +708,8 @@ class YBTransaction::Impl final : public internal::TxnBatcherIf { // that were first involved in the transaction with this batch of changes. auto status = StartPromotionToGlobal(); if (!status.ok()) { - LOG(ERROR) << "Prepare for transaction " << metadata_.transaction_id - << " rejected (promotion failed): " << status; + LOG(DFATAL) << "Prepare for transaction " << metadata_.transaction_id + << " rejected (promotion failed): " << status; return status; } return true; diff --git a/src/yb/client/transaction_manager.cc b/src/yb/client/transaction_manager.cc index 3e04ec711130..11f43eb78f3a 100644 --- a/src/yb/client/transaction_manager.cc +++ b/src/yb/client/transaction_manager.cc @@ -194,8 +194,8 @@ class LoadStatusTabletsTask { // TODO(dtxn) async auto tablets = GetTransactionStatusTablets(); if (!tablets.ok()) { - YB_LOG_EVERY_N_SECS(ERROR, 1) << "Failed to get tablets of txn status tables: " - << tablets.status(); + YB_LOG_EVERY_N_SECS(WARNING, 1) << "Failed to get tablets of txn status tables: " + << tablets.status(); if (callback_) { callback_(tablets.status()); } @@ -292,8 +292,8 @@ class TransactionManager::Impl { } if (!tasks_pool_.Enqueue(&thread_pool_, client_, &table_state_, version, std::move(cb))) { - YB_LOG_EVERY_N_SECS(ERROR, 1) << "Update tasks overflow, number of tasks: " - << tasks_pool_.size(); + YB_LOG_EVERY_N_SECS(WARNING, 1) << "Update tasks overflow, number of tasks: " + << tasks_pool_.size(); if (callback) { callback(STATUS_FORMAT(ServiceUnavailable, "Update tasks queue overflow, number of tasks: $0", diff --git a/src/yb/common/ql_value.cc b/src/yb/common/ql_value.cc index 4a867393f728..40f7d02230a4 100644 --- a/src/yb/common/ql_value.cc +++ b/src/yb/common/ql_value.cc @@ -261,7 +261,7 @@ void DoAppendToKey(const PB& value_pb, string* bytes) { case InternalType::kVirtualValue: LOG(FATAL) << "Runtime error: virtual value should not be used to construct hash key"; case InternalType::kGinNullValue: { - LOG(ERROR) << "Runtime error: gin null value should not be used to construct hash key"; + LOG(DFATAL) << "Runtime error: gin null value should not be used to construct hash key"; YBPartition::AppendIntToKey(value_pb.gin_null_value(), bytes); break; } diff --git a/src/yb/common/schema.h b/src/yb/common/schema.h index a984850a433a..3979ed8aff8a 100644 --- a/src/yb/common/schema.h +++ b/src/yb/common/schema.h @@ -701,7 +701,7 @@ class Schema : public MissingValueProvider { return &table_properties_; } - void SetDefaultTimeToLive(const uint64_t& ttl_msec) { + void SetDefaultTimeToLive(uint64_t ttl_msec) { table_properties_.SetDefaultTimeToLive(ttl_msec); } diff --git a/src/yb/consensus/consensus_queue.cc b/src/yb/consensus/consensus_queue.cc index 2ac9edbed5b6..ec454487212f 100644 --- a/src/yb/consensus/consensus_queue.cc +++ b/src/yb/consensus/consensus_queue.cc @@ -718,10 +718,9 @@ Result 
PeerMessageQueue::ReadFromLogCache( // IsIncomplete() means that we tried to read beyond the head of the log (in the future). // KUDU-1078 points to a fix of this log spew issue that we've ported. This should not // happen under normal circumstances. - LOG_WITH_PREFIX(ERROR) << "Error trying to read ahead of the log " - << "while preparing peer request: " - << s.ToString() << ". Destination peer: " - << peer_uuid; + LOG_WITH_PREFIX(DFATAL) + << "Error trying to read ahead of the log while preparing peer request: " + << s << ". Destination peer: " << peer_uuid; return s; } else { LOG_WITH_PREFIX(FATAL) << "Error reading the log while preparing peer request: " @@ -1886,7 +1885,7 @@ bool PeerMessageQueue::CanPeerBecomeLeader(const std::string& peer_uuid) const { std::lock_guard lock(queue_lock_); TrackedPeer* peer = FindPtrOrNull(peers_map_, peer_uuid); if (peer == nullptr) { - LOG(ERROR) << "Invalid peer UUID: " << peer_uuid; + LOG(WARNING) << "Invalid peer UUID: " << peer_uuid; return false; } const bool peer_can_be_leader = peer->last_received >= queue_state_.majority_replicated_op_id; @@ -1902,7 +1901,7 @@ OpId PeerMessageQueue::PeerLastReceivedOpId(const TabletServerId& uuid) const { std::lock_guard lock(queue_lock_); TrackedPeer* peer = FindPtrOrNull(peers_map_, uuid); if (peer == nullptr) { - LOG(ERROR) << "Invalid peer UUID: " << uuid; + LOG(WARNING) << "Invalid peer UUID: " << uuid; return OpId::Min(); } return peer->last_received; diff --git a/src/yb/consensus/log.cc b/src/yb/consensus/log.cc index af47223ae3e4..26d0d8faed0c 100644 --- a/src/yb/consensus/log.cc +++ b/src/yb/consensus/log.cc @@ -1560,8 +1560,8 @@ yb::OpId Log::WaitForSafeOpIdToApply(const yb::OpId& min_allowed, MonoDelta dura } // TODO(bogdan): If the log is closed at this point, consider refactoring to return status // and fail cleanly. - LOG_WITH_PREFIX(ERROR) << "Appender stack: " << appender_->GetRunThreadStack(); LOG_WITH_PREFIX(DFATAL) + << "Appender stack: " << appender_->GetRunThreadStack() << "\n" << "Long wait for safe op id: " << min_allowed << ", current: " << GetLatestEntryOpId() << ", last appended: " << last_appended_entry_op_id_ @@ -2245,8 +2245,8 @@ bool Log::HasSufficientDiskSpaceForWrite() { const auto free_space_mb = *free_space_result / 1024 / 1024; if (free_space_mb < min_allowed_disk_space_mb) { - YB_LOG_EVERY_N_SECS(ERROR, 600) << "Not enough disk space available on " << path - << ". Free space: " << *free_space_result << " bytes"; + YB_LOG_EVERY_N_SECS(WARNING, 600) << "Not enough disk space available on " << path + << ". 
Free space: " << *free_space_result << " bytes"; has_space = false; } else if (free_space_mb < min_space_to_trigger_aggressive_check_mb) { YB_LOG_EVERY_N_SECS(WARNING, 600) diff --git a/src/yb/consensus/raft_consensus.cc b/src/yb/consensus/raft_consensus.cc index 640f3a4c4202..ef2edab72a76 100644 --- a/src/yb/consensus/raft_consensus.cc +++ b/src/yb/consensus/raft_consensus.cc @@ -829,7 +829,7 @@ Status RaftConsensus::StepDown(const LeaderStepDownRequestPB* req, LeaderStepDow const auto msg = Format( "Received a leader stepdown operation for wrong tablet id: $0, must be: $1", tablet_id, this->tablet_id()); - LOG_WITH_PREFIX(ERROR) << msg; + LOG_WITH_PREFIX(DFATAL) << msg; StatusToPB(STATUS(IllegalState, msg), resp->mutable_error()->mutable_status()); return Status::OK(); } @@ -3571,7 +3571,7 @@ void RaftConsensus::NonTrackedRoundReplicationFinished(ConsensusRound* round, OperationType op_type = round->replicate_msg()->op_type(); string op_str = Format("$0 [$1]", OperationType_Name(op_type), round->id()); if (!IsConsensusOnlyOperation(op_type)) { - LOG_WITH_PREFIX(ERROR) << "Unexpected op type: " << op_str; + LOG_WITH_PREFIX(DFATAL) << "Unexpected op type: " << op_str; return; } if (!status.ok()) { diff --git a/src/yb/consensus/raft_consensus_quorum-test.cc b/src/yb/consensus/raft_consensus_quorum-test.cc index 4a833878e1f7..87d68a78599c 100644 --- a/src/yb/consensus/raft_consensus_quorum-test.cc +++ b/src/yb/consensus/raft_consensus_quorum-test.cc @@ -350,15 +350,12 @@ class RaftConsensusQuorumTest : public YBTest { backoff_exp = std::min(backoff_exp + 1, kMaxBackoffExp); } - LOG(ERROR) << "Max timeout reached (" << timeout.ToString() << ") while waiting for commit of " - << "op " << to_wait_for << " on replica. Last committed op on replica: " - << committed_op_id << ". Dumping state and quitting."; - vector lines; + LOG(WARNING) + << "Max timeout reached (" << timeout.ToString() << ") while waiting for commit of " + << "op " << to_wait_for << " on replica. Last committed op on replica: " + << committed_op_id << ". 
Dumping state and quitting."; shared_ptr leader; ASSERT_OK(peers_->GetPeerByIdx(leader_idx, &leader)); - for (const string& line : lines) { - LOG(ERROR) << line; - } // Gather the replica and leader operations for printing log::LogEntries replica_ops = GatherLogEntries(peer_idx, logs_[peer_idx]); diff --git a/src/yb/consensus/retryable_requests.cc b/src/yb/consensus/retryable_requests.cc index 3d8ac7a3c007..a4e2c4fac73c 100644 --- a/src/yb/consensus/retryable_requests.cc +++ b/src/yb/consensus/retryable_requests.cc @@ -488,10 +488,8 @@ class RetryableRequests::Impl { auto& running_indexed_by_request_id = client_retryable_requests.running->get(); auto running_it = running_indexed_by_request_id.find(data.request_id()); if (running_it == running_indexed_by_request_id.end()) { -#ifndef NDEBUG - LOG_WITH_PREFIX(ERROR) << "Running requests: " - << AsString(running_indexed_by_request_id); -#endif + LOG_WITH_PREFIX(WARNING) << "Running requests: " + << AsString(running_indexed_by_request_id); LOG_WITH_PREFIX(DFATAL) << "Replication finished for request with unknown id " << data; return; } @@ -531,10 +529,8 @@ class RetryableRequests::Impl { data.client_id(), mem_tracker_).first->second; auto& running_indexed_by_request_id = client_retryable_requests.running->get(); if (running_indexed_by_request_id.count(data.request_id()) != 0) { -#ifndef NDEBUG - LOG_WITH_PREFIX(ERROR) << "Running requests: " - << yb::ToString(running_indexed_by_request_id); -#endif + LOG_WITH_PREFIX(WARNING) << "Running requests: " + << AsString(running_indexed_by_request_id); LOG_WITH_PREFIX(DFATAL) << "Bootstrapped running request " << data; return; } @@ -622,10 +618,8 @@ class RetryableRequests::Impl { auto& replicated_indexed_by_last_id = client->replicated->get(); auto request_it = replicated_indexed_by_last_id.lower_bound(request_id); if (request_it != replicated_indexed_by_last_id.end() && request_it->first_id <= request_id) { -#ifndef NDEBUG - LOG_WITH_PREFIX(ERROR) - << "Replicated requests: " << yb::ToString(client->replicated); -#endif + LOG_WITH_PREFIX(WARNING) + << "Replicated requests: " << AsString(client->replicated); LOG_WITH_PREFIX(DFATAL) << "Request already replicated: " << data; return; diff --git a/src/yb/docdb/cql_operation.cc b/src/yb/docdb/cql_operation.cc index 9080417df251..a476c4681a5d 100644 --- a/src/yb/docdb/cql_operation.cc +++ b/src/yb/docdb/cql_operation.cc @@ -877,8 +877,7 @@ Status QLWriteOperation::ApplyForSubscriptArgs(const QLColumnValuePB& column_val break; } default: { - LOG(ERROR) << "Unexpected type for setting subcolumn: " - << column.type()->ToString(); + LOG(DFATAL) << "Unexpected type for setting subcolumn: " << column.type()->ToString(); } } return Status::OK(); @@ -1288,8 +1287,8 @@ Status QLWriteOperation::DeleteSubscriptedColumnElement( break; } default: { - LOG(ERROR) << "Unexpected type for deleting subscripted column element: " - << column_schema.type()->ToString(); + LOG(DFATAL) << "Unexpected type for deleting subscripted column element: " + << column_schema.type()->ToString(); return STATUS_FORMAT(InternalError, "Unexpected type for deleting subscripted column element: $0", *column_schema.type()); } diff --git a/src/yb/docdb/deadlock_detector.cc b/src/yb/docdb/deadlock_detector.cc index c5ef4beeb807..da90607bb37c 100644 --- a/src/yb/docdb/deadlock_detector.cc +++ b/src/yb/docdb/deadlock_detector.cc @@ -992,7 +992,7 @@ class DeadlockDetector::Impl : public std::enable_shared_from_thisdeadlock_size_->Increment(resp.deadlocked_txn_ids_size()); auto waiter_or_status = 
FullyDecodeTransactionId(resp.deadlocked_txn_ids(0)); if (!waiter_or_status.ok()) { - LOG(ERROR) << "Failed to decode transaction id in detected deadlock!"; + LOG(DFATAL) << "Failed to decode transaction id in detected deadlock!"; } else { const auto& waiter = *waiter_or_status; auto deadlock_msg = ConstructDeadlockedMessage(waiter, resp); diff --git a/src/yb/docdb/docdb.cc b/src/yb/docdb/docdb.cc index 6c03213c675b..5c3b973e8e57 100644 --- a/src/yb/docdb/docdb.cc +++ b/src/yb/docdb/docdb.cc @@ -75,8 +75,8 @@ using std::vector; using namespace std::placeholders; -DEFINE_UNKNOWN_int32(cdc_max_stream_intent_records, 1680, - "Max number of intent records allowed in single cdc batch. "); +DEFINE_RUNTIME_uint64(cdc_max_stream_intent_records, 1680, + "Max number of intent records allowed in single cdc batch."); DEFINE_RUNTIME_bool(cdc_enable_caching_db_block, true, "When set to true, cache the DB block read for CDC in block cache."); @@ -206,8 +206,8 @@ Result PrepareDocWriteOperation( transactional_table, partial_range_key_intents)); VLOG_WITH_FUNC(4) << "determine_keys_to_lock_result=" << determine_keys_to_lock_result.ToString(); if (determine_keys_to_lock_result.lock_batch.empty() && !write_transaction_metadata) { - LOG(ERROR) << "Empty lock batch, doc_write_ops: " << yb::ToString(doc_write_ops) - << ", read pairs: " << AsString(read_pairs); + LOG(DFATAL) << "Empty lock batch, doc_write_ops: " << yb::ToString(doc_write_ops) + << ", read pairs: " << AsString(read_pairs); return STATUS(Corruption, "Empty lock batch"); } result.need_read_snapshot = determine_keys_to_lock_result.need_read_snapshot; @@ -350,7 +350,7 @@ Result GetIntentsBatch( write_id = stream_state->write_id; reverse_index_iter.Next(); } - const uint64_t& max_records = FLAGS_cdc_max_stream_intent_records; + const auto max_records = FLAGS_cdc_max_stream_intent_records; uint64_t cur_records = 0; while (reverse_index_iter.Valid()) { diff --git a/src/yb/docdb/docdb_compaction_filter_intents.cc b/src/yb/docdb/docdb_compaction_filter_intents.cc index d7782d31534b..7c66b4139967 100644 --- a/src/yb/docdb/docdb_compaction_filter_intents.cc +++ b/src/yb/docdb/docdb_compaction_filter_intents.cc @@ -106,7 +106,7 @@ class DocDBIntentsCompactionFilter : public rocksdb::CompactionFilter { #define MAYBE_LOG_ERROR_AND_RETURN_KEEP(result) { \ if (!result.ok()) { \ if (num_errors_ < GetAtomicFlag(&FLAGS_intents_compaction_filter_max_errors_to_log)) { \ - LOG_WITH_PREFIX(ERROR) << StatusToString(result.status()); \ + LOG_WITH_PREFIX(DFATAL) << StatusToString(result.status()); \ } \ num_errors_++; \ return rocksdb::FilterDecision::kKeep; \ diff --git a/src/yb/docdb/docdb_test_util.cc b/src/yb/docdb/docdb_test_util.cc index 57a5035bdd84..d19f29c19f73 100644 --- a/src/yb/docdb/docdb_test_util.cc +++ b/src/yb/docdb/docdb_test_util.cc @@ -447,7 +447,7 @@ void DocDBLoadGenerator::VerifySnapshot(const InMemDocDbState& snapshot) { flashback_state.CaptureAt(doc_db(), snap_ht); const bool is_match = flashback_state.EqualsAndLogDiff(snapshot); if (!is_match) { - LOG(ERROR) << details_msg << "\nDOCDB SNAPSHOT VERIFICATION FAILED, DOCDB STATE:"; + LOG(WARNING) << details_msg << "\nDOCDB SNAPSHOT VERIFICATION FAILED, DOCDB STATE:"; fixture_->DocDBDebugDumpToConsole(); } ASSERT_TRUE(is_match) << details_msg; @@ -490,15 +490,13 @@ void DocDBRocksDBFixture::AssertDocDbDebugDumpStrEq( mismatch_line_numbers.push_back(i + 1); } } - LOG(ERROR) << "Assertion failure" - << "\nExpected DocDB contents:\n\n" << expected_str << "\n" - << "\nActual DocDB contents:\n\n" << 
debug_dump_str << "\n" - << "\nExpected # of lines: " << expected_lines.size() - << ", actual # of lines: " << actual_lines.size() - << "\nLines not matching: " << AsString(mismatch_line_numbers) - << "\nPlease check if source files have trailing whitespace and remove it."; - - FAIL(); + FAIL() << "Assertion failure" + << "\nExpected DocDB contents:\n\n" << expected_str << "\n" + << "\nActual DocDB contents:\n\n" << debug_dump_str << "\n" + << "\nExpected # of lines: " << expected_lines.size() + << ", actual # of lines: " << actual_lines.size() + << "\nLines not matching: " << AsString(mismatch_line_numbers) + << "\nPlease check if source files have trailing whitespace and remove it."; } } diff --git a/src/yb/docdb/docdb_util.cc b/src/yb/docdb/docdb_util.cc index dd8913edd5c8..cc6d53f554eb 100644 --- a/src/yb/docdb/docdb_util.cc +++ b/src/yb/docdb/docdb_util.cc @@ -237,9 +237,8 @@ Status DocDBRocksDBUtil::WriteToRocksDB( rocksdb::Status rocksdb_write_status = db->Write(write_options(), &rocksdb_write_batch); if (!rocksdb_write_status.ok()) { - LOG(ERROR) << "Failed writing to RocksDB: " << rocksdb_write_status.ToString(); - return STATUS_SUBSTITUTE(RuntimeError, - "Error writing to RocksDB: $0", rocksdb_write_status.ToString()); + LOG(DFATAL) << "Failed writing to RocksDB: " << rocksdb_write_status; + return rocksdb_write_status.CloneAndPrepend("Error writing to RocksDB"); } return Status::OK(); } diff --git a/src/yb/docdb/iter_util.cc b/src/yb/docdb/iter_util.cc index e8bf7094ff4c..b73ba264b7e6 100644 --- a/src/yb/docdb/iter_util.cc +++ b/src/yb/docdb/iter_util.cc @@ -209,8 +209,8 @@ const rocksdb::KeyValueEntry& SeekBackward(Slice upper_bound_key, rocksdb::Itera // positioned after the given key, which is confirmed by the above IsIterBeforeKey() call. DCHECK(entry.Valid()); // Maybe it's even better to put a CHECK here. if (PREDICT_FALSE(!entry.Valid())) { - LOG_WITH_FUNC(ERROR) << "Unexpected Seek() result -- invalid entry, key = '" - << upper_bound_key.ToDebugHexString() << "', status: " << iter.status(); + LOG_WITH_FUNC(DFATAL) << "Unexpected Seek() result -- invalid entry, key = '" + << upper_bound_key.ToDebugHexString() << "', status: " << iter.status(); return entry; } diff --git a/src/yb/docdb/scan_choices.cc b/src/yb/docdb/scan_choices.cc index 1c11ad79c28e..bbd87ddb1c8a 100644 --- a/src/yb/docdb/scan_choices.cc +++ b/src/yb/docdb/scan_choices.cc @@ -933,7 +933,7 @@ ScanChoicesPtr ScanChoices::Create( // hash columns in a hash partitioned table. And the hash code column cannot be skip'ed without // skip'ing all hash columns as well. 
if (prefixlen != 0 && !valid_prefixlen) { - LOG(ERROR) + LOG(DFATAL) << "Prefix length: " << prefixlen << " is invalid for schema: " << "num_hash_cols: " << num_hash_cols << ", num_key_cols: " << num_key_cols; } diff --git a/src/yb/dockv/primitive_value.cc b/src/yb/dockv/primitive_value.cc index 48caabb079b9..ab59ad335933 100644 --- a/src/yb/dockv/primitive_value.cc +++ b/src/yb/dockv/primitive_value.cc @@ -174,7 +174,7 @@ std::string VarIntToString(const std::string& str_val) { VarInt varint; auto status = varint.DecodeFromComparable(str_val); if (!status.ok()) { - LOG(ERROR) << "Unable to decode varint: " << status.message().ToString(); + LOG(DFATAL) << "Unable to decode varint: " << status.message().ToString(); return ""; } return varint.ToString(); @@ -184,7 +184,7 @@ std::string DecimalToString(const std::string& str_val) { util::Decimal decimal; auto status = decimal.DecodeFromComparable(str_val); if (!status.ok()) { - LOG(ERROR) << "Unable to decode decimal"; + LOG(DFATAL) << "Unable to decode decimal"; return ""; } return decimal.ToString(); diff --git a/src/yb/fs/fs_manager.cc b/src/yb/fs/fs_manager.cc index 30801bc9dba8..a02e3c98d2c6 100644 --- a/src/yb/fs/fs_manager.cc +++ b/src/yb/fs/fs_manager.cc @@ -921,7 +921,7 @@ void FsManager::DumpFileSystemTree(ostream& out) { std::vector objects; Status s = env_->GetChildren(root, &objects); if (!s.ok()) { - LOG(ERROR) << "Unable to list the fs-tree: " << s.ToString(); + LOG(DFATAL) << "Unable to list the fs-tree: " << s.ToString(); return; } diff --git a/src/yb/integration-tests/cassandra_cpp_driver-test.cc b/src/yb/integration-tests/cassandra_cpp_driver-test.cc index c78204bdd04f..e7f3fa4631ff 100644 --- a/src/yb/integration-tests/cassandra_cpp_driver-test.cc +++ b/src/yb/integration-tests/cassandra_cpp_driver-test.cc @@ -448,11 +448,9 @@ class Metrics { const auto result = ts.GetMetricFromHost( host_port, &METRIC_ENTITY_server, entity_id, CHECK_NOTNULL(metric_proto), "total_count"); - if (!result.ok()) { - LOG(ERROR) << "Failed to get metric " << metric_proto->name() << " from TS" - << ts_index << ": " << host_port << " with error " << result.status(); - } - ASSERT_OK(result); + ASSERT_TRUE(result.ok()) + << "Failed to get metric " << metric_proto->name() << " from TS" + << ts_index << ": " << host_port << " with error " << result.status(); *CHECK_NOTNULL(value) = *result; } @@ -1635,7 +1633,7 @@ TEST_F_EX(CppCassandraDriverTest, TestCreateUniqueIndexIntent, CppCassandraDrive "Overwrite failed"); SleepFor(MonoDelta::FromMilliseconds(kSleepTimeMs)); } else { - LOG(ERROR) << "Deleting & Inserting failed for " << i; + LOG(WARNING) << "Deleting & Inserting failed for " << i; } } @@ -1658,7 +1656,7 @@ TEST_F_EX(CppCassandraDriverTest, TestCreateUniqueIndexIntent, CppCassandraDrive "Overwrite failed"); SleepFor(MonoDelta::FromMilliseconds(kSleepTimeMs)); } else { - LOG(ERROR) << "Deleting & Inserting failed for " << i; + LOG(WARNING) << "Deleting & Inserting failed for " << i; } } @@ -1716,7 +1714,7 @@ TEST_F_EX( "Overwrite failed"); SleepFor(MonoDelta::FromMilliseconds(kSleepTimeMs)); } else { - LOG(ERROR) << "Deleting & Inserting failed for " << i; + LOG(WARNING) << "Deleting & Inserting failed for " << i; } } @@ -1740,7 +1738,7 @@ TEST_F_EX( "Overwrite failed"); SleepFor(MonoDelta::FromMilliseconds(kSleepTimeMs)); } else { - LOG(ERROR) << "Deleting & Inserting failed for " << i; + LOG(WARNING) << "Deleting & Inserting failed for " << i; } } @@ -2369,8 +2367,8 @@ void DoTestCreateUniqueIndexWithOnlineWrites(CppCassandraDriverTestIndex* 
test, if (!duplicate_insert_failed) { LOG(INFO) << "Successfully inserted the duplicate value"; } else { - LOG(ERROR) << "Giving up on inserting the duplicate value after " - << kMaxRetries << " tries."; + LOG(WARNING) << "Giving up on inserting the duplicate value after " + << kMaxRetries << " tries."; } LOG(INFO) << "Waited on the Create Index to finish. Status = " diff --git a/src/yb/integration-tests/cdcsdk_gflag-test.cc b/src/yb/integration-tests/cdcsdk_gflag-test.cc index a483d2bcfbe5..2fdc852d90cf 100644 --- a/src/yb/integration-tests/cdcsdk_gflag-test.cc +++ b/src/yb/integration-tests/cdcsdk_gflag-test.cc @@ -17,7 +17,7 @@ #include "yb/util/test_macros.h" DECLARE_int32(cdc_snapshot_batch_size); -DECLARE_int32(cdc_max_stream_intent_records); +DECLARE_uint64(cdc_max_stream_intent_records); namespace yb { namespace cdc { diff --git a/src/yb/integration-tests/cdcsdk_ysql_test_base.cc b/src/yb/integration-tests/cdcsdk_ysql_test_base.cc index 332e97d1fe1b..8bc8a684f00b 100644 --- a/src/yb/integration-tests/cdcsdk_ysql_test_base.cc +++ b/src/yb/integration-tests/cdcsdk_ysql_test_base.cc @@ -1579,7 +1579,7 @@ Result CDCSDKYsqlTest::GetUniverseId(PostgresMiniCluster* cluster) { Status CDCSDKYsqlTest::UpdatePublicationTableList( const xrepl::StreamId& stream_id, const std::vector table_ids, - const uint64_t& session_id) { + uint64_t session_id) { UpdatePublicationTableListRequestPB req; UpdatePublicationTableListResponsePB resp; @@ -1920,9 +1920,9 @@ Result CDCSDKYsqlTest::GetUniverseId(PostgresMiniCluster* cluster) { if (get_changes_result.ok()) { change_resp = *get_changes_result; } else { - LOG(ERROR) << "Encountered error while calling GetChanges on tablet: " - << tablets[tablet_idx].tablet_id() - << ", status: " << get_changes_result.status(); + LOG(WARNING) << "Encountered error while calling GetChanges on tablet: " + << tablets[tablet_idx].tablet_id() + << ", status: " << get_changes_result.status(); break; } @@ -1994,7 +1994,7 @@ Result CDCSDKYsqlTest::GetUniverseId(PostgresMiniCluster* cluster) { if (init_virtual_wal) { Status s = InitVirtualWAL(stream_id, table_ids, session_id, std::move(slot_hash_range)); if (!s.ok()) { - LOG(ERROR) << "Error while trying to initialize virtual WAL: " << s; + LOG(WARNING) << "Error while trying to initialize virtual WAL: " << s; RETURN_NOT_OK(s); } } @@ -2013,8 +2013,8 @@ Result CDCSDKYsqlTest::GetUniverseId(PostgresMiniCluster* cluster) { if (get_changes_result.ok()) { change_resp = *get_changes_result; } else { - LOG(ERROR) << "Encountered error while calling GetConsistentChanges on stream: " - << stream_id << ", status: " << get_changes_result.status(); + LOG(WARNING) << "Encountered error while calling GetConsistentChanges on stream: " + << stream_id << ", status: " << get_changes_result.status(); RETURN_NOT_OK(get_changes_result); } @@ -2055,7 +2055,7 @@ Result CDCSDKYsqlTest::GetUniverseId(PostgresMiniCluster* cluster) { auto result = UpdateAndPersistLSN(stream_id, confirmed_flush_lsn, restart_lsn, session_id); if (!result.ok()) { - LOG(ERROR) << "UpdateRestartLSN failed: " << result; + LOG(WARNING) << "UpdateRestartLSN failed: " << result; RETURN_NOT_OK(result); } } @@ -2103,9 +2103,9 @@ Result CDCSDKYsqlTest::GetUniverseId(PostgresMiniCluster* cluster) { if (get_changes_result.ok()) { change_resp = *get_changes_result; } else { - LOG(ERROR) << "Encountered error while calling GetChanges on tablet: " - << tablets[tablet_idx].tablet_id() - << ", status: " << get_changes_result.status(); + LOG(WARNING) << "Encountered error while calling 
GetChanges on tablet: " + << tablets[tablet_idx].tablet_id() + << ", status: " << get_changes_result.status(); break; } @@ -2903,8 +2903,8 @@ Result CDCSDKYsqlTest::GetUniverseId(PostgresMiniCluster* cluster) { total_seen_records += change_resp.cdc_sdk_proto_records_size(); first_iter = false; } else { - LOG(ERROR) << "Encountered error while calling GetChanges on tablet: " - << tablets[0].tablet_id(); + LOG(WARNING) << "Encountered error while calling GetChanges on tablet: " + << tablets[0].tablet_id(); break; } } diff --git a/src/yb/integration-tests/cdcsdk_ysql_test_base.h b/src/yb/integration-tests/cdcsdk_ysql_test_base.h index 4a03855115df..ba7a9b8d88df 100644 --- a/src/yb/integration-tests/cdcsdk_ysql_test_base.h +++ b/src/yb/integration-tests/cdcsdk_ysql_test_base.h @@ -84,7 +84,7 @@ DECLARE_int32(rocksdb_level0_file_num_compaction_trigger); DECLARE_int32(timestamp_history_retention_interval_sec); DECLARE_bool(tablet_enable_ttl_file_filter); DECLARE_int32(timestamp_syscatalog_history_retention_interval_sec); -DECLARE_int32(cdc_max_stream_intent_records); +DECLARE_uint64(cdc_max_stream_intent_records); DECLARE_bool(enable_single_record_update); DECLARE_bool(enable_truncate_cdcsdk_table); DECLARE_bool(enable_load_balancing); @@ -573,7 +573,7 @@ class CDCSDKYsqlTest : public CDCSDKTestBase { Status UpdatePublicationTableList( const xrepl::StreamId& stream_id, const std::vector table_ids, - const uint64_t& session_id = kVWALSessionId1); + uint64_t session_id = kVWALSessionId1); void TestIntentGarbageCollectionFlag( const uint32_t num_tservers, diff --git a/src/yb/integration-tests/cql-tablet-split-test.cc b/src/yb/integration-tests/cql-tablet-split-test.cc index affb25701dd6..30b2391c4137 100644 --- a/src/yb/integration-tests/cql-tablet-split-test.cc +++ b/src/yb/integration-tests/cql-tablet-split-test.cc @@ -306,20 +306,20 @@ load_generator::ReadStatus CqlSecondaryIndexReader::PerformRead( "for v: '$0', expected key: '$1', key_index: $2", expected_value, key_str, key_index); }; if (!iter.Next()) { - LOG(ERROR) << "No rows found " << values_formatter(); + LOG(WARNING) << "No rows found " << values_formatter(); return load_generator::ReadStatus::kNoRows; } auto row = iter.Row(); const auto k = row.Value(0).ToString(); if (k != key_str) { - LOG(ERROR) << "Invalid k " << values_formatter() << " got k: " << k; + LOG(WARNING) << "Invalid k " << values_formatter() << " got k: " << k; return load_generator::ReadStatus::kInvalidRead; } if (iter.Next()) { return load_generator::ReadStatus::kExtraRows; - LOG(ERROR) << "More than 1 row found " << values_formatter(); + LOG(WARNING) << "More than 1 row found " << values_formatter(); do { - LOG(ERROR) << "k: " << iter.Row().Value(0).ToString(); + LOG(WARNING) << "k: " << iter.Row().Value(0).ToString(); } while (iter.Next()); } return load_generator::ReadStatus::kOk; diff --git a/src/yb/master/backfill_index.cc b/src/yb/master/backfill_index.cc index b6d591bf0e13..75c325345a4f 100644 --- a/src/yb/master/backfill_index.cc +++ b/src/yb/master/backfill_index.cc @@ -841,9 +841,9 @@ Status BackfillTable::UpdateRowsProcessedForIndexTable(const uint64_t number_row Status BackfillTable::UpdateSafeTime(const Status& s, HybridTime ht) { if (!s.ok()) { // Move on to ABORTED permission. - LOG_WITH_PREFIX(ERROR) + LOG_WITH_PREFIX(DFATAL) << "Failed backfill. 
Could not compute safe time for " - << yb::ToString(indexed_table_) << " " << s; + << AsString(indexed_table_) << " " << s; if (!timestamp_chosen_.exchange(true)) { RETURN_NOT_OK(Abort()); } @@ -958,8 +958,8 @@ Status BackfillTable::DoBackfill() { Status BackfillTable::Done(const Status& s, const std::unordered_set& failed_indexes) { if (!s.ok()) { - LOG_WITH_PREFIX(ERROR) << "failed to backfill the index: " << yb::ToString(failed_indexes) - << " due to " << s; + LOG_WITH_PREFIX(WARNING) << "failed to backfill the index: " << AsString(failed_indexes) + << " due to " << s; RETURN_NOT_OK_PREPEND( MarkIndexesAsFailed(failed_indexes, s.message().ToBuffer()), "Couldn't mark indexes as failed"); @@ -1168,8 +1168,8 @@ Status BackfillTable::AllowCompactionsToGCDeleteMarkers( DVLOG(3) << __PRETTY_FUNCTION__; auto res = master_->catalog_manager()->FindTableById(index_table_id); if (!res && res.status().IsNotFound()) { - LOG(ERROR) << "Index " << index_table_id << " was not found." - << " This is ok in case somebody issued a delete index. : " << res.ToString(); + LOG(WARNING) << "Index " << index_table_id << " was not found." + << " This is ok in case somebody issued a delete index. : " << res.ToString(); return Status::OK(); } scoped_refptr index_table_info = VERIFY_RESULT_PREPEND(std::move(res), @@ -1192,8 +1192,8 @@ Status BackfillTable::AllowCompactionsToGCDeleteMarkers( auto index_table_rlock = index_table_info->LockForRead(); auto state = index_table_rlock->pb.state(); if (!index_table_rlock->is_running() || FLAGS_TEST_simulate_cannot_enable_compactions) { - LOG(ERROR) << "Index " << index_table_id << " is in state " - << SysTablesEntryPB_State_Name(state) << " : cannot enable compactions on it"; + LOG(WARNING) << "Index " << index_table_id << " is in state " + << SysTablesEntryPB_State_Name(state) << " : cannot enable compactions on it"; // Treating it as success so that we can proceed with updating other indexes. 
return Status::OK(); } @@ -1421,7 +1421,9 @@ void GetSafeTimeForTablet::UnregisterAsyncTaskCallback() { } else { safe_time = HybridTime(resp_.safe_time()); if (safe_time.is_special()) { - LOG(ERROR) << "GetSafeTime for " << tablet_->ToString() << " got " << safe_time; + status = STATUS_FORMAT( + InternalError, "GetSafeTime for $0 got $1", tablet_->ToString(), safe_time); + LOG(DFATAL) << status; } else { VLOG(3) << "GetSafeTime for " << tablet_->ToString() << " got " << safe_time; } diff --git a/src/yb/master/catalog_manager.cc b/src/yb/master/catalog_manager.cc index ba45422c4277..923f7334a988 100644 --- a/src/yb/master/catalog_manager.cc +++ b/src/yb/master/catalog_manager.cc @@ -1294,8 +1294,8 @@ void CatalogManager::ValidateIndexTablesPostLoad( IndexStatusPB::BackfillStatus backfill_status) { DCHECK(status.ok()); if (!status.ok()) { - LOG(ERROR) << "ValidateIndexTablesPostLoad: Failed to get backfill status for " - << "index table " << index_id << ": " << status; + LOG(WARNING) << "ValidateIndexTablesPostLoad: Failed to get backfill status for " + << "index table " << index_id << ": " << status; return; } @@ -7963,7 +7963,7 @@ Status CatalogManager::GetTablegroupSchema(const GetTablegroupSchemaRequestPB* r schema_req.mutable_table()->set_table_id(table_id); Status s = GetTableSchema(&schema_req, &schema_resp); if (!s.ok() || schema_resp.has_error()) { - LOG(ERROR) << "Error while getting table schema: " << s; + LOG(WARNING) << "Error while getting table schema: " << s; return SetupError(resp->mutable_error(), MasterErrorPB::OBJECT_NOT_FOUND, s); } resp->add_get_table_schema_response_pbs()->Swap(&schema_resp); @@ -7999,7 +7999,7 @@ Status CatalogManager::GetColocatedTabletSchema(const GetColocatedTabletSchemaRe listTablesReq.set_exclude_system_tables(true); Status status = ListTables(&listTablesReq, &ListTablesResp); if (!status.ok() || ListTablesResp.has_error()) { - LOG(ERROR) << "Error while listing tables: " << status; + LOG(WARNING) << "Error while listing tables: " << status; return SetupError(resp->mutable_error(), MasterErrorPB::OBJECT_NOT_FOUND, status); } @@ -8016,7 +8016,7 @@ Status CatalogManager::GetColocatedTabletSchema(const GetColocatedTabletSchemaRe schemaReq.mutable_table()->set_table_id(t.id()); status = GetTableSchema(&schemaReq, &schemaResp); if (!status.ok() || schemaResp.has_error()) { - LOG(ERROR) << "Error while getting table schema: " << status; + LOG(WARNING) << "Error while getting table schema: " << status; return SetupError(resp->mutable_error(), MasterErrorPB::OBJECT_NOT_FOUND, status); } resp->add_get_table_schema_response_pbs()->Swap(&schemaResp); @@ -9038,9 +9038,9 @@ Status CatalogManager::CheckIfDatabaseHasReplication(const scoped_refptrIsTableReplicated(table->id())) { - LOG(ERROR) << "Error deleting database: " << database->id() << ", table: " << table->id() - << " is under replication" - << ". Cannot delete a database that contains tables under replication."; + LOG(WARNING) << "Error deleting database: " << database->id() << ", table: " << table->id() + << " is under replication" + << ". Cannot delete a database that contains tables under replication."; return STATUS_FORMAT( InvalidCommand, Format( "Table: $0 is under replication. 
Cannot delete a database that " @@ -12642,10 +12642,10 @@ void CatalogManager::RebuildYQLSystemPartitions() { if (system_partitions_tablet_ != nullptr) { Status s = ResultToStatus(GetYqlPartitionsVtable().GenerateAndCacheData()); if (!s.ok()) { - LOG(ERROR) << "Error rebuilding system.partitions: " << s.ToString(); + LOG(WARNING) << "Error rebuilding system.partitions: " << s.ToString(); } } else { - LOG(ERROR) << "Error finding system.partitions vtable."; + LOG(WARNING) << "Error finding system.partitions vtable."; } } } diff --git a/src/yb/master/cluster_balance.cc b/src/yb/master/cluster_balance.cc index 2e8fe934ec8f..97c10d49cccd 100644 --- a/src/yb/master/cluster_balance.cc +++ b/src/yb/master/cluster_balance.cc @@ -1853,8 +1853,8 @@ const PlacementInfoPB& ClusterLoadBalancer::GetReadOnlyPlacementFromUuid( } } // Should never get here. - LOG(ERROR) << "Could not find read only cluster with placement uuid: " - << state_->options_->placement_uuid; + LOG(DFATAL) << "Could not find read only cluster with placement uuid: " + << state_->options_->placement_uuid; return replication_info.read_replicas(0); } diff --git a/src/yb/master/master-path-handlers.cc b/src/yb/master/master-path-handlers.cc index f06552e03d6f..3406006d29d5 100644 --- a/src/yb/master/master-path-handlers.cc +++ b/src/yb/master/master-path-handlers.cc @@ -1208,7 +1208,7 @@ void MasterPathHandlers::HandleAllTables( if (result.ok()) { table_row[kYsqlOid] = std::to_string(*result); } else { - LOG(ERROR) << "Failed to get OID of '" << table_uuid << "' ysql table"; + LOG(WARNING) << "Failed to get OID of '" << table_uuid << "' ysql table"; } const auto& schema = table_locked->schema(); @@ -1400,7 +1400,7 @@ void MasterPathHandlers::HandleAllTablesJSON( if (result.ok()) { table_row.ysql_oid = std::to_string(*result); } else { - LOG(ERROR) << "Failed to get OID of '" << table_uuid << "' ysql table"; + LOG(WARNING) << "Failed to get OID of '" << table_uuid << "' ysql table"; } const auto& schema = table_locked->schema(); diff --git a/src/yb/master/master.cc b/src/yb/master/master.cc index a7ce0689694a..8d4a3bc4dbf9 100644 --- a/src/yb/master/master.cc +++ b/src/yb/master/master.cc @@ -361,7 +361,7 @@ Status Master::StartAsync() { void Master::InitCatalogManagerTask() { Status s = InitCatalogManager(); if (!s.ok()) { - LOG(ERROR) << ToString() << ": Unable to init master catalog manager: " << s.ToString(); + LOG(WARNING) << ToString() << ": Unable to init master catalog manager: " << s; } init_status_.set_value(s); } diff --git a/src/yb/master/master_heartbeat_service.cc b/src/yb/master/master_heartbeat_service.cc index f922c859df62..7b759a68940f 100644 --- a/src/yb/master/master_heartbeat_service.cc +++ b/src/yb/master/master_heartbeat_service.cc @@ -585,7 +585,7 @@ void MasterHeartbeatServiceImpl::DeleteOrphanedTabletReplica( !catalog_manager_->IsDeletedTabletLoadedFromSysCatalog(tablet_id)) { // See the comment in deleted_tablets_loaded_from_sys_catalog_ declaration for an // explanation of this logic. 
- LOG(ERROR) << Format( + LOG(WARNING) << Format( "Skipping deletion of orphaned tablet $0, since master has never registered this " "tablet.", tablet_id); return; @@ -1149,12 +1149,12 @@ bool MasterHeartbeatServiceImpl::ProcessCommittedConsensusState( if (report.has_schema_version() && report.schema_version() != table_lock->pb.version()) { if (report.schema_version() > table_lock->pb.version()) { - LOG(ERROR) << "TS " << ts_desc->permanent_uuid() - << " has reported a schema version greater than the current one " - << " for tablet " << tablet->ToString() - << ". Expected version " << table_lock->pb.version() - << " got " << report.schema_version() - << " (corruption)"; + LOG(WARNING) << "TS " << ts_desc->permanent_uuid() + << " has reported a schema version greater than the current one " + << " for tablet " << tablet->ToString() + << ". Expected version " << table_lock->pb.version() + << " got " << report.schema_version() + << " (corruption)"; } else { // TODO: For Alter (rolling apply to tablets), this is an expected transitory state. LOG(INFO) << "TS " << ts_desc->permanent_uuid() @@ -1205,12 +1205,12 @@ bool MasterHeartbeatServiceImpl::ProcessCommittedConsensusState( continue; } if (id_to_version.second > table_lock->pb.version()) { - LOG(ERROR) << "TS " << ts_desc->permanent_uuid() - << " has reported a schema version greater than the current one " - << " for table " << id_to_version.first - << ". Expected version " << table_lock->pb.version() - << " got " << id_to_version.second - << " (corruption)"; + LOG(WARNING) << "TS " << ts_desc->permanent_uuid() + << " has reported a schema version greater than the current one " + << " for table " << id_to_version.first + << ". Expected version " << table_lock->pb.version() + << " got " << id_to_version.second + << " (corruption)"; } else { LOG(INFO) << "TS " << ts_desc->permanent_uuid() << " does not have the latest schema for table " << id_to_version.first diff --git a/src/yb/master/master_snapshot_coordinator.cc b/src/yb/master/master_snapshot_coordinator.cc index ea75b072d777..809e41cc5e47 100644 --- a/src/yb/master/master_snapshot_coordinator.cc +++ b/src/yb/master/master_snapshot_coordinator.cc @@ -991,7 +991,7 @@ class MasterSnapshotCoordinator::Impl { if (FLAGS_TEST_fatal_on_snapshot_verify) { LOG(DFATAL) << error_msg; } else { - LOG(ERROR) << error_msg; + LOG(WARNING) << error_msg; } } diff --git a/src/yb/master/master_tablet_service.cc b/src/yb/master/master_tablet_service.cc index f578dccc03ce..44714b48a7f5 100644 --- a/src/yb/master/master_tablet_service.cc +++ b/src/yb/master/master_tablet_service.cc @@ -179,8 +179,8 @@ void MasterTabletServiceImpl::Write(const tserver::WriteRequestPB* req, for (const auto db_oid : db_oids) { if (!master_->catalog_manager()->GetYsqlDBCatalogVersion(db_oid, &catalog_version, &last_breaking_version).ok()) { - LOG_WITH_FUNC(ERROR) << "failed to get db catalog version for " - << db_oid << ", ignoring"; + LOG_WITH_FUNC(DFATAL) << "failed to get db catalog version for " + << db_oid << ", ignoring"; } else { LOG_WITH_FUNC(INFO) << "db catalog version for " << db_oid << ": " << catalog_version << ", breaking version: " @@ -190,7 +190,7 @@ void MasterTabletServiceImpl::Write(const tserver::WriteRequestPB* req, } else { if (!master_->catalog_manager()->GetYsqlCatalogVersion(&catalog_version, &last_breaking_version).ok()) { - LOG_WITH_FUNC(ERROR) << "failed to get catalog version, ignoring"; + LOG_WITH_FUNC(DFATAL) << "failed to get catalog version, ignoring"; } else { LOG_WITH_FUNC(INFO) << "catalog 
version: " << catalog_version << ", breaking version: " << last_breaking_version; diff --git a/src/yb/master/master_tserver.cc b/src/yb/master/master_tserver.cc index 6363f3c99cd4..83a4ebc94e2a 100644 --- a/src/yb/master/master_tserver.cc +++ b/src/yb/master/master_tserver.cc @@ -148,8 +148,7 @@ void MasterTabletServer::get_ysql_db_catalog_version(uint32_t db_oid, master_->catalog_manager()->GetYsqlDBCatalogVersion( db_oid, current_version, last_breaking_version); if (!s.ok()) { - LOG(ERROR) << "Could not get YSQL catalog version for master's tserver API: " - << s.ToUserMessage(); + LOG(WARNING) << "Could not get YSQL catalog version for master's tserver API: " << s; fill_vers(); } } diff --git a/src/yb/master/mini_master.cc b/src/yb/master/mini_master.cc index a2e94289d816..03c958a5597a 100644 --- a/src/yb/master/mini_master.cc +++ b/src/yb/master/mini_master.cc @@ -128,13 +128,11 @@ Status MiniMaster::StartOnPorts(uint16_t rpc_port, uint16_t web_port) { } MasterOptions opts(master_addresses); - Status start_status = StartOnPorts(rpc_port, web_port, &opts); - if (!start_status.ok()) { - LOG(ERROR) << "MiniMaster failed to start on RPC port " << rpc_port - << ", web port " << web_port << ": " << start_status; - // Don't crash here. Handle the error in the caller (e.g. could retry there). - } - return start_status; + // Don't crash here. Handle the error in the caller (e.g. could retry there). + RETURN_NOT_OK_WITH_WARNING( + StartOnPorts(rpc_port, web_port, &opts), + Format("MiniMaster failed to start on RPC port $0, web port $1", rpc_port, web_port)); + return Status::OK(); } Status MiniMaster::StartOnPorts(uint16_t rpc_port, uint16_t web_port, diff --git a/src/yb/master/permissions_manager.cc b/src/yb/master/permissions_manager.cc index aca2965abd5d..1d2f08bc29c1 100644 --- a/src/yb/master/permissions_manager.cc +++ b/src/yb/master/permissions_manager.cc @@ -452,7 +452,7 @@ Status PermissionsManager::AlterRole( s = catalog_manager_->sys_catalog_->Upsert(catalog_manager_->leader_ready_term(), role); if (!s.ok()) { - LOG(ERROR) << "Unable to alter role " << req->name() << ": " << s; + LOG(WARNING) << "Unable to alter role " << req->name() << ": " << s; return s; } l.Commit(); @@ -518,8 +518,8 @@ Status PermissionsManager::DeleteRole( // Update sys-catalog with the new member_of list for this role. s = catalog_manager_->sys_catalog_->Upsert(catalog_manager_->leader_ready_term(), role); if (!s.ok()) { - LOG(ERROR) << "Unable to remove role " << req->name() - << " from member_of list for role " << role_name; + LOG(WARNING) << "Unable to remove role " << req->name() + << " from member_of list for role " << role_name; role->mutable_metadata()->AbortMutation(); } else { role->mutable_metadata()->CommitMutation(); diff --git a/src/yb/master/sys_catalog.cc b/src/yb/master/sys_catalog.cc index 751b396bee85..f970aaf53719 100644 --- a/src/yb/master/sys_catalog.cc +++ b/src/yb/master/sys_catalog.cc @@ -745,8 +745,8 @@ Status SysCatalogTable::SyncWrite(SysCatalogWriter* writer) { << "complete. Continuing to wait."; time = CoarseMonoClock::now(); if (time >= deadline) { - LOG(ERROR) << "Already waited for a total of " << ::yb::ToString(waited_so_far) << ". " - << "Returning a timeout from SyncWrite."; + LOG(WARNING) << "Already waited for a total of " << ::yb::ToString(waited_so_far) << ". 
" + << "Returning a timeout from SyncWrite."; return STATUS_FORMAT(TimedOut, "SyncWrite timed out after $0", waited_so_far); } } diff --git a/src/yb/master/table_index.cc b/src/yb/master/table_index.cc index bb9c911d1948..1cfb2a020c55 100644 --- a/src/yb/master/table_index.cc +++ b/src/yb/master/table_index.cc @@ -37,7 +37,7 @@ void TableIndex::AddOrReplace(const TableInfoPtr& table) { std::string first_id = (*pos)->id(); auto replace_successful = tables_.replace(pos, table); if (!replace_successful) { - LOG(ERROR) << Format( + LOG(WARNING) << Format( "Multiple tables prevented inserting a new table with id $0. First id was $1", table->id(), first_id); } diff --git a/src/yb/master/util/yql_vtable_helpers.cc b/src/yb/master/util/yql_vtable_helpers.cc index 90ce1e721560..cb8e01ba3385 100644 --- a/src/yb/master/util/yql_vtable_helpers.cc +++ b/src/yb/master/util/yql_vtable_helpers.cc @@ -133,7 +133,7 @@ QLValuePB GetValueHelper::Apply(const std::string& strval, const Da value_pb.set_binary_value(strval); break; default: - LOG(ERROR) << "unexpected string type " << data_type; + LOG(DFATAL) << "unexpected string type " << data_type; break; } return value_pb; @@ -150,7 +150,7 @@ QLValuePB GetValueHelper::Apply( value_pb.set_binary_value(strval, len); break; default: - LOG(ERROR) << "unexpected string type " << data_type; + LOG(DFATAL) << "unexpected string type " << data_type; break; } return value_pb; @@ -172,7 +172,7 @@ QLValuePB GetValueHelper::Apply(const int32_t intval, const DataType da value_pb.set_int8_value(intval); break; default: - LOG(ERROR) << "unexpected int type " << data_type; + LOG(DFATAL) << "unexpected int type " << data_type; break; } return value_pb; diff --git a/src/yb/master/xcluster/xcluster_bootstrap_helper.cc b/src/yb/master/xcluster/xcluster_bootstrap_helper.cc index 26192cb508ad..cd50e8974c70 100644 --- a/src/yb/master/xcluster/xcluster_bootstrap_helper.cc +++ b/src/yb/master/xcluster/xcluster_bootstrap_helper.cc @@ -239,7 +239,7 @@ void SetupUniverseReplicationWithBootstrapHelper::DoReplicationBootstrap( // First get the universe. auto bootstrap_info = catalog_manager_.GetUniverseReplicationBootstrap(replication_id); if (bootstrap_info == nullptr) { - LOG(ERROR) << "UniverseReplicationBootstrap not found: " << replication_id; + LOG(DFATAL) << "UniverseReplicationBootstrap not found: " << replication_id; return; } diff --git a/src/yb/master/xrepl_catalog_manager.cc b/src/yb/master/xrepl_catalog_manager.cc index 59425881724d..bf719d54cb77 100644 --- a/src/yb/master/xrepl_catalog_manager.cc +++ b/src/yb/master/xrepl_catalog_manager.cc @@ -238,7 +238,7 @@ class CDCStreamLoader : public Visitor { table = catalog_manager_->tables_->FindTableOrNull( xcluster::StripSequencesDataAliasIfPresent(metadata.table_id(0))); if (!table) { - LOG(ERROR) << "Invalid table ID " << metadata.table_id(0) << " for stream " << stream_id; + LOG(DFATAL) << "Invalid table ID " << metadata.table_id(0) << " for stream " << stream_id; // TODO (#2059): Potentially signals a race condition that table got deleted while stream // was being created. // Log error and continue without loading the stream. 
@@ -4752,7 +4752,7 @@ Status CatalogManager::ClearFailedReplicationBootstrap() { if (bootstrap_info == nullptr) { auto error_msg = Format("UniverseReplicationBootstrap not found: $0", replication_id.ToString()); - LOG(ERROR) << error_msg; + LOG(WARNING) << error_msg; return STATUS(NotFound, error_msg); } } @@ -4990,7 +4990,7 @@ Status CatalogManager::DoProcessCDCSDKTabletDeletion() { auto s = cdc_state_table_->DeleteEntries(entries_to_delete); if (!s.ok()) { - LOG(ERROR) << "Unable to flush operations to delete cdc streams: " << s; + LOG(WARNING) << "Unable to flush operations to delete cdc streams: " << s; return s.CloneAndPrepend("Error deleting cdc stream rows from cdc_state table"); } diff --git a/src/yb/master/yql_peers_vtable.cc b/src/yb/master/yql_peers_vtable.cc index fb29d5e28b14..cbad3e1e0c7b 100644 --- a/src/yb/master/yql_peers_vtable.cc +++ b/src/yb/master/yql_peers_vtable.cc @@ -114,15 +114,15 @@ Result PeersVTable::RetrieveData( // result, skip 'remote_endpoint' in the results. auto private_ip = entry.ts_ips.private_ip_future.get(); if (!private_ip.ok()) { - LOG(ERROR) << "Failed to get private ip from " << entry.ts_info.ShortDebugString() - << ": " << private_ip.status(); + LOG(WARNING) << "Failed to get private ip from " << entry.ts_info.ShortDebugString() + << ": " << private_ip.status(); continue; } auto public_ip = entry.ts_ips.public_ip_future.get(); if (!public_ip.ok()) { - LOG(ERROR) << "Failed to get public ip from " << entry.ts_info.ShortDebugString() - << ": " << public_ip.status(); + LOG(WARNING) << "Failed to get public ip from " << entry.ts_info.ShortDebugString() + << ": " << public_ip.status(); continue; } diff --git a/src/yb/master/ysql/ysql_catalog_config.cc b/src/yb/master/ysql/ysql_catalog_config.cc index e41737636694..64efc0f619b1 100644 --- a/src/yb/master/ysql/ysql_catalog_config.cc +++ b/src/yb/master/ysql/ysql_catalog_config.cc @@ -111,7 +111,7 @@ Status YsqlCatalogConfig::SetInitDbDone(const Status& initdb_status, const Leade if (initdb_status.ok()) { LOG(INFO) << "Global initdb completed successfully"; } else { - LOG(ERROR) << "Global initdb failed: " << initdb_status; + LOG(FATAL) << "Global initdb failed: " << initdb_status; } auto [l, pb] = LockForWrite(epoch); diff --git a/src/yb/master/ysql/ysql_initdb_major_upgrade_handler.cc b/src/yb/master/ysql/ysql_initdb_major_upgrade_handler.cc index 00058575b352..b702bb599cdb 100644 --- a/src/yb/master/ysql/ysql_initdb_major_upgrade_handler.cc +++ b/src/yb/master/ysql/ysql_initdb_major_upgrade_handler.cc @@ -359,12 +359,12 @@ void YsqlInitDBAndMajorUpgradeHandler::RunMajorVersionUpgrade(const LeaderEpoch& if (update_state_status.ok()) { LOG(INFO) << "Ysql major catalog upgrade completed successfully"; } else { - LOG(ERROR) << "Failed to set major version upgrade state: " << update_state_status; + LOG(DFATAL) << "Failed to set major version upgrade state: " << update_state_status; } return; } - LOG(ERROR) << "Ysql major catalog upgrade failed: " << status; + LOG(WARNING) << "Ysql major catalog upgrade failed: " << status; ERROR_NOT_OK( TransitionMajorCatalogUpgradeState(YsqlMajorCatalogUpgradeInfoPB::FAILED, epoch, status), "Failed to set major version upgrade state"); diff --git a/src/yb/qlexpr/ql_expr.cc b/src/yb/qlexpr/ql_expr.cc index 8705d0968bf3..ebfeee20bc41 100644 --- a/src/yb/qlexpr/ql_expr.cc +++ b/src/yb/qlexpr/ql_expr.cc @@ -490,7 +490,7 @@ Status QLExprExecutor::EvalCondition(const QLConditionPB& condition, case QL_OP_LIKE: FALLTHROUGH_INTENDED; case QL_OP_NOT_LIKE: - LOG(ERROR) << "Internal 
error: illegal or unknown operator " << condition.op(); + LOG(DFATAL) << "Internal error: illegal or unknown operator " << condition.op(); break; case QL_OP_NOOP: @@ -733,7 +733,7 @@ Status QLExprExecutor::EvalCondition( case QL_OP_LIKE: FALLTHROUGH_INTENDED; case QL_OP_NOT_LIKE: - LOG(ERROR) << "Internal error: illegal or unknown operator " << condition.op(); + LOG(DFATAL) << "Internal error: illegal or unknown operator " << condition.op(); break; case QL_OP_NOOP: diff --git a/src/yb/qlexpr/ql_serialization.cc b/src/yb/qlexpr/ql_serialization.cc index 8f239e426209..128e838eaafa 100644 --- a/src/yb/qlexpr/ql_serialization.cc +++ b/src/yb/qlexpr/ql_serialization.cc @@ -58,8 +58,8 @@ void SerializeValue( bool is_out_of_range = false; CQLEncodeBytes(decimal.EncodeToSerializedBigDecimal(&is_out_of_range), buffer); if(is_out_of_range) { - LOG(ERROR) << "Out of range: Unable to encode decimal " << decimal.ToString() - << " into a BigDecimal serialized representation"; + LOG(DFATAL) << "Out of range: Unable to encode decimal " << decimal.ToString() + << " into a BigDecimal serialized representation"; } return; } diff --git a/src/yb/rocksdb/db/memtable_list.cc b/src/yb/rocksdb/db/memtable_list.cc index 892564084b8c..2a143c28a487 100644 --- a/src/yb/rocksdb/db/memtable_list.cc +++ b/src/yb/rocksdb/db/memtable_list.cc @@ -326,9 +326,9 @@ void MemTableList::PickMemtablesToFlush( } all_memtables_logged = true; } - LOG(ERROR) << "Failed when checking if memtable can be flushed (will still flush it): " - << filter_result.status() << ". Memtable: " << m->ToString() - << ss.str(); + LOG(DFATAL) << "Failed when checking if memtable can be flushed (will still flush it): " + << filter_result.status() << ". Memtable: " << m->ToString() + << ss.str(); // Still flush the memtable so that this error does not keep occurring. 
} } diff --git a/src/yb/rocksdb/db/version_set.cc b/src/yb/rocksdb/db/version_set.cc index 4e2afbc81d3c..ff7ce86b345a 100644 --- a/src/yb/rocksdb/db/version_set.cc +++ b/src/yb/rocksdb/db/version_set.cc @@ -3132,8 +3132,7 @@ Status VersionSet::Import(const std::string& source_dir, if (!status.ok()) { for (const auto& file : revert_list) { - auto delete_status = env_->DeleteFile(file); - LOG(ERROR) << "Failed to delete file: " << file << ", status: " << delete_status.ToString(); + ERROR_NOT_OK(env_->DeleteFile(file), yb::Format("Failed to delete file $0", file)); } return status; } diff --git a/src/yb/rocksdb/db/write_batch.cc b/src/yb/rocksdb/db/write_batch.cc index e6e003676ead..2f2a95f2959e 100644 --- a/src/yb/rocksdb/db/write_batch.cc +++ b/src/yb/rocksdb/db/write_batch.cc @@ -251,7 +251,7 @@ uint32_t WriteBatch::ComputeContentFlags() const { if ((rv & ContentFlags::DEFERRED) != 0) { BatchContentClassifier classifier; auto status = Iterate(&classifier); - LOG_IF(ERROR, !status.ok()) << "Iterate failed during ComputeContentFlags: " << status; + LOG_IF(WARNING, !status.ok()) << "Iterate failed during ComputeContentFlags: " << status; rv = classifier.content_flags; // this method is conceptually const, because it is performing a lazy diff --git a/src/yb/rocksdb/table/format.cc b/src/yb/rocksdb/table/format.cc index c3cbe9e1e627..0a386ce8ce15 100644 --- a/src/yb/rocksdb/table/format.cc +++ b/src/yb/rocksdb/table/format.cc @@ -463,7 +463,7 @@ Status ReadBlockContents(RandomAccessFileReader* file, const Footer& footer, status = ReadBlock(file, footer, options, handle, &slice, used_buf); if (!status.ok()) { - LOG(ERROR) << __func__ << ": " << status << "\n" << yb::GetStackTrace(); + LOG_WITH_FUNC(WARNING) << status; return status; } diff --git a/src/yb/rocksdb/table/index_reader.cc b/src/yb/rocksdb/table/index_reader.cc index 8602fd0cc007..54a6891321cc 100644 --- a/src/yb/rocksdb/table/index_reader.cc +++ b/src/yb/rocksdb/table/index_reader.cc @@ -143,7 +143,7 @@ Result> HashIndexReader::Create( s = FindMetaBlock(meta_index_iter, kHashIndexPrefixesBlock, &prefixes_handle); if (!s.ok()) { - LOG(ERROR) << "Failed to find hash index prefixes block: " << s; + LOG(DFATAL) << "Failed to find hash index prefixes block: " << s; return index_reader; } @@ -152,7 +152,7 @@ Result> HashIndexReader::Create( s = FindMetaBlock(meta_index_iter, kHashIndexPrefixesMetadataBlock, &prefixes_meta_handle); if (!s.ok()) { - LOG(ERROR) << "Failed to find hash index prefixes metadata block: " << s; + LOG(DFATAL) << "Failed to find hash index prefixes metadata block: " << s; return index_reader; } @@ -168,7 +168,7 @@ Result> HashIndexReader::Create( &prefixes_meta_contents, env, mem_tracker, true /* do decompression */); if (!s.ok()) { - LOG(ERROR) << "Failed to read hash index prefixes metadata block: " << s; + LOG(DFATAL) << "Failed to read hash index prefixes metadata block: " << s; return index_reader; } @@ -183,7 +183,7 @@ Result> HashIndexReader::Create( index_reader->index_block_->SetBlockHashIndex(hash_index); index_reader->OwnPrefixesContents(std::move(prefixes_contents)); } else { - LOG(ERROR) << "Failed to create block hash index: " << s; + LOG(DFATAL) << "Failed to create block hash index: " << s; } } else { BlockPrefixIndex* prefix_index = nullptr; @@ -194,7 +194,7 @@ Result> HashIndexReader::Create( if (s.ok()) { index_reader->index_block_->SetBlockPrefixIndex(prefix_index); } else { - LOG(ERROR) << "Failed to create block prefix index: " << s; + LOG(DFATAL) << "Failed to create block prefix index: " << 
s; } } diff --git a/src/yb/rocksdb/util/env_posix.cc b/src/yb/rocksdb/util/env_posix.cc index 8bf9b3f72896..bb79f1e4b0aa 100644 --- a/src/yb/rocksdb/util/env_posix.cc +++ b/src/yb/rocksdb/util/env_posix.cc @@ -396,7 +396,7 @@ class PosixEnv : public Env { int fd = fileno(f); #ifdef ROCKSDB_FALLOCATE_PRESENT if (fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 4 * 1024) != 0) { - LOG(ERROR) << STATUS_IO_ERROR(fname, errno); + LOG(WARNING) << STATUS_IO_ERROR(fname, errno); } #endif SetFD_CLOEXEC(fd, nullptr); diff --git a/src/yb/rocksdb/util/posix_logger.h b/src/yb/rocksdb/util/posix_logger.h index 90306a1d3a37..304f69d50f08 100644 --- a/src/yb/rocksdb/util/posix_logger.h +++ b/src/yb/rocksdb/util/posix_logger.h @@ -165,9 +165,9 @@ class PosixLogger : public Logger { if (fallocate( fd_, FALLOC_FL_KEEP_SIZE, 0, static_cast(desired_allocation_chunk * kDebugLogChunkSize)) != 0) { - LOG(ERROR) << STATUS_IO_ERROR(fname_, errno) - << " desired_allocation_chunk: " << desired_allocation_chunk - << " kDebugLogChunkSize: " << kDebugLogChunkSize; + LOG(WARNING) << STATUS_IO_ERROR(fname_, errno) + << " desired_allocation_chunk: " << desired_allocation_chunk + << " kDebugLogChunkSize: " << kDebugLogChunkSize; } } #endif diff --git a/src/yb/rpc/acceptor.cc b/src/yb/rpc/acceptor.cc index 87e3a1189f2c..e4da0227cc51 100644 --- a/src/yb/rpc/acceptor.cc +++ b/src/yb/rpc/acceptor.cc @@ -146,7 +146,7 @@ void Acceptor::Shutdown() { void Acceptor::IoHandler(ev::io& io, int events) { auto it = sockets_.find(&io); if (it == sockets_.end()) { - LOG(ERROR) << "IoHandler for unknown socket: " << &io; + LOG(DFATAL) << "IoHandler for unknown socket: " << &io; return; } Socket& socket = it->second.socket; diff --git a/src/yb/rpc/io_thread_pool.cc b/src/yb/rpc/io_thread_pool.cc index f14d57abcea9..2e15da40bfa5 100644 --- a/src/yb/rpc/io_thread_pool.cc +++ b/src/yb/rpc/io_thread_pool.cc @@ -59,7 +59,7 @@ class IoThreadPool::Impl { auto deadline = std::chrono::steady_clock::now() + 15s; while (!io_service_.stopped()) { if (std::chrono::steady_clock::now() >= deadline) { - LOG(ERROR) << "Io service failed to stop"; + LOG(WARNING) << "Io service failed to stop"; io_service_.stop(); break; } @@ -74,7 +74,7 @@ class IoThreadPool::Impl { void Execute() { boost::system::error_code ec; io_service_.run(ec); - LOG_IF(ERROR, ec) << "Failed to run io service: " << ec; + LOG_IF(DFATAL, ec) << "Failed to run io service: " << ec; } std::string name_; diff --git a/src/yb/rpc/messenger.cc b/src/yb/rpc/messenger.cc index c7538bf55743..97f1eab60e73 100644 --- a/src/yb/rpc/messenger.cc +++ b/src/yb/rpc/messenger.cc @@ -604,8 +604,8 @@ Messenger::~Messenger() { VLOG(1) << "Messenger destructor for " << this << " called at:\n" << GetStackTrace(); #ifndef NDEBUG if (!closing_) { - LOG(ERROR) << "Messenger created here:\n" << creation_stack_trace_.Symbolize() - << "Messenger destructor for " << this << " called at:\n" << GetStackTrace(); + LOG(DFATAL) << "Messenger created here:\n" << creation_stack_trace_.Symbolize() + << "Messenger destructor for " << this << " called at:\n" << GetStackTrace(); } #endif CHECK(closing_) << "Should have already shut down"; diff --git a/src/yb/rpc/outbound_call.cc b/src/yb/rpc/outbound_call.cc index ac55648dbf05..5015fdb0ca91 100644 --- a/src/yb/rpc/outbound_call.cc +++ b/src/yb/rpc/outbound_call.cc @@ -621,7 +621,7 @@ void OutboundCall::SetFailed(const Status &status, std::unique_ptrtime(), ec); - LOG_IF(ERROR, ec) << "Reschedule timer failed: " << ec.message(); + LOG_IF(DFATAL, ec) << "Reschedule timer failed: " << 
ec.message(); ++timer_counter_; timer_.async_wait(strand_.wrap(std::bind(&Impl::HandleTimer, this, _1))); } @@ -128,7 +128,8 @@ class Scheduler::Impl { --timer_counter_; if (ec) { - LOG_IF(ERROR, ec != boost::asio::error::operation_aborted) << "Wait failed: " << ec.message(); + LOG_IF(DFATAL, ec != boost::asio::error::operation_aborted) + << "Wait failed: " << ec.message(); return; } if (closing_.load(std::memory_order_acquire)) { diff --git a/src/yb/rpc/yb_rpc.cc b/src/yb/rpc/yb_rpc.cc index b122b4b17b36..4d0f801dbbdd 100644 --- a/src/yb/rpc/yb_rpc.cc +++ b/src/yb/rpc/yb_rpc.cc @@ -281,8 +281,7 @@ void YBInboundCall::UpdateWaitStateInfo() { .method = method_name().ToBuffer(), }); } else { - LOG_IF(ERROR, GetAtomicFlag(&FLAGS_ysql_yb_enable_ash)) - << "Wait state is nullptr for " << ToString(); + LOG_IF(DFATAL, FLAGS_ysql_yb_enable_ash) << "Wait state is nullptr for " << ToString(); } } diff --git a/src/yb/server/call_home.cc b/src/yb/server/call_home.cc index 2702ec94c73e..2f72e0247050 100644 --- a/src/yb/server/call_home.cc +++ b/src/yb/server/call_home.cc @@ -151,7 +151,7 @@ class RpcsCollector : public Collector { auto url = Substitute("http://$0/rpcz", yb::ToString(*addr_)); auto status = curl_.FetchURL(url, &buf); if (!status.ok()) { - LOG(ERROR) << "Unable to read url " << url; + LOG(WARNING) << "Unable to read url " << url; return; } @@ -258,9 +258,9 @@ std::string CallHome::BuildJson() { rapidjson::Reader reader; rapidjson::StringStream ss(str.c_str()); if (!reader.Parse(ss, writer)) { - LOG(ERROR) << "Unable to parse json. Error: " << reader.GetParseErrorCode() << " at offset " - << reader.GetErrorOffset() << " in string " - << str.substr(reader.GetErrorOffset(), 10); + LOG(WARNING) << "Unable to parse json. Error: " << reader.GetParseErrorCode() << " at offset " + << reader.GetErrorOffset() << " in string " + << str.substr(reader.GetErrorOffset(), 10); return str; } diff --git a/src/yb/server/generic_service.cc b/src/yb/server/generic_service.cc index 0d162193a5e0..aa73f19cf852 100644 --- a/src/yb/server/generic_service.cc +++ b/src/yb/server/generic_service.cc @@ -171,7 +171,7 @@ void GenericServiceImpl::ReloadCertificates( rpc::RpcContext rpc) { const auto status = server_->ReloadKeysAndCertificates(); if (!status.ok()) { - LOG(ERROR) << "Reloading certificates failed: " << status; + LOG(WARNING) << "Reloading certificates failed: " << status; rpc.RespondFailure(status); return; } diff --git a/src/yb/server/total_mem_watcher.cc b/src/yb/server/total_mem_watcher.cc index 5a8784ab07b2..d2b252327043 100644 --- a/src/yb/server/total_mem_watcher.cc +++ b/src/yb/server/total_mem_watcher.cc @@ -100,9 +100,9 @@ void TotalMemWatcher::MemoryMonitoringLoop(std::function trigger_termina } std::string termination_explanation = GetTerminationExplanation(); if (!termination_explanation.empty()) { - LOG(ERROR) << "Memory usage exceeded configured limit, terminating the process: " - << termination_explanation << "\nDetails:\n" - << GetMemoryUsageDetails(); + LOG(DFATAL) << "Memory usage exceeded configured limit, terminating the process: " + << termination_explanation << "\nDetails:\n" + << GetMemoryUsageDetails(); trigger_termination_fn(); return; } diff --git a/src/yb/server/webserver.cc b/src/yb/server/webserver.cc index 6f5218103658..684761858927 100644 --- a/src/yb/server/webserver.cc +++ b/src/yb/server/webserver.cc @@ -546,7 +546,7 @@ Status Webserver::Impl::GetBoundAddresses(std::vector* addrs_ptr) cons break; } default: { - LOG(ERROR) << "Unexpected address family: " << 
sockaddrs[i]->ss_family; + LOG(DFATAL) << "Unexpected address family: " << sockaddrs[i]->ss_family; RSTATUS_DCHECK(false, IllegalState, "Unexpected address family"); break; } diff --git a/src/yb/tablet/mvcc.cc b/src/yb/tablet/mvcc.cc index 0f70848b34c3..bf6a1058652b 100644 --- a/src/yb/tablet/mvcc.cc +++ b/src/yb/tablet/mvcc.cc @@ -388,7 +388,7 @@ void MvccManager::AddPending(HybridTime ht, const OpId& op_id, bool is_follower_ sanity_check_lower_bound && sanity_check_lower_bound != HybridTime::kMax) { HybridTime incremented_hybrid_time = sanity_check_lower_bound.Incremented(); - YB_LOG_EVERY_N_SECS(ERROR, 5) << LogPrefix() + YB_LOG_EVERY_N_SECS(DFATAL, 5) << LogPrefix() << "Assigning an artificially incremented hybrid time: " << incremented_hybrid_time << ". This needs to be investigated. " << get_details_msg(/* drain_aborted */ false); ht = incremented_hybrid_time; @@ -466,7 +466,7 @@ void MvccManager::UpdatePropagatedSafeTimeOnLeader(const FixedHybridTimeLease& h #else // Do not crash in production. if (safe_time < propagated_safe_time_) { - YB_LOG_EVERY_N_SECS(ERROR, 5) << LogPrefix() + YB_LOG_EVERY_N_SECS(DFATAL, 5) << LogPrefix() << "Previously saw " << YB_EXPR_TO_STREAM(propagated_safe_time_) << ", but now safe time is " << safe_time; } else { diff --git a/src/yb/tablet/operations/operation_driver.cc b/src/yb/tablet/operations/operation_driver.cc index 09b716e0bfc7..15fbf5666268 100644 --- a/src/yb/tablet/operations/operation_driver.cc +++ b/src/yb/tablet/operations/operation_driver.cc @@ -353,8 +353,7 @@ void OperationDriver::ReplicationFinished( // the tablet. if (prepare_state_copy != PrepareState::PREPARED) { LOG(DFATAL) << "Replicating an operation that has not been prepared: " << AsString(this); - - LOG(ERROR) << "Attempting to wait for the operation to be prepared"; + LOG(WARNING) << "Attempting to wait for the operation to be prepared"; // This case should never happen, but if it happens we are trying to survive. 
for (;;) { diff --git a/src/yb/tablet/tablet.cc b/src/yb/tablet/tablet.cc index e2081541cdfe..0edfa4aa6697 100644 --- a/src/yb/tablet/tablet.cc +++ b/src/yb/tablet/tablet.cc @@ -1106,7 +1106,7 @@ Status Tablet::OpenRegularDB(const rocksdb::Options& common_options) { if (db != nullptr) { delete db; } - return STATUS(IllegalState, rocksdb_open_status.ToString()); + return rocksdb_open_status; } regular_db_.reset(db); regular_db_->ListenFilesChanged(std::bind(&Tablet::RegularDbFilesChanged, this)); diff --git a/src/yb/tablet/tablet_metadata.cc b/src/yb/tablet/tablet_metadata.cc index 4c81b5f6c59c..0f2e640da9d8 100644 --- a/src/yb/tablet/tablet_metadata.cc +++ b/src/yb/tablet/tablet_metadata.cc @@ -880,10 +880,11 @@ Status RaftGroupMetadata::DeleteTabletData(TabletDataState delete_type, const auto& rocksdb_dir = this->rocksdb_dir(); LOG_WITH_PREFIX(INFO) << "Destroying regular db at: " << rocksdb_dir; - rocksdb::Status status = rocksdb::DestroyDB(rocksdb_dir, rocksdb_options); + auto status = DestroyDB(rocksdb_dir, rocksdb_options); if (!status.ok()) { - LOG_WITH_PREFIX(ERROR) << "Failed to destroy regular DB at: " << rocksdb_dir << ": " << status; + LOG_WITH_PREFIX(WARNING) + << "Failed to destroy regular DB at: " << rocksdb_dir << ": " << status; } else { LOG_WITH_PREFIX(INFO) << "Successfully destroyed regular DB at: " << rocksdb_dir; } @@ -899,8 +900,8 @@ Status RaftGroupMetadata::DeleteTabletData(TabletDataState delete_type, status = rocksdb::DestroyDB(intents_dir, rocksdb_options); if (!status.ok()) { - LOG_WITH_PREFIX(ERROR) << "Failed to destroy provisional records DB at: " << intents_dir - << ": " << status; + LOG_WITH_PREFIX(DFATAL) << "Failed to destroy provisional records DB at: " << intents_dir + << ": " << status; } else { LOG_WITH_PREFIX(INFO) << "Successfully destroyed provisional records DB at: " << intents_dir; } diff --git a/src/yb/tablet/tablet_peer.cc b/src/yb/tablet/tablet_peer.cc index a8e7d6a83b6a..5eec3b366b51 100644 --- a/src/yb/tablet/tablet_peer.cc +++ b/src/yb/tablet/tablet_peer.cc @@ -1474,7 +1474,7 @@ Status TabletPeer::StartReplicaOperation( void TabletPeer::SetPropagatedSafeTime(HybridTime ht) { auto driver = NewReplicaOperationDriver(nullptr); if (!driver.ok()) { - LOG_WITH_PREFIX(ERROR) << "Failed to create operation driver to set propagated hybrid time"; + LOG_WITH_PREFIX(DFATAL) << "Failed to create operation driver to set propagated hybrid time"; return; } (**driver).SetPropagatedSafeTime(ht, tablet_->mvcc_manager()); @@ -1754,19 +1754,17 @@ Status TabletPeer::ChangeRole(const std::string& requestor_uuid) { } switch (peer_pb.member_type()) { - case PeerMemberType::OBSERVER: - FALLTHROUGH_INTENDED; + case PeerMemberType::OBSERVER: [[fallthrough]]; case PeerMemberType::VOTER: - LOG(ERROR) << "Peer " << peer_pb.permanent_uuid() << " is a " - << PeerMemberType_Name(peer_pb.member_type()) - << " Not changing its role after remote bootstrap"; + LOG(WARNING) << "Peer " << peer_pb.permanent_uuid() << " is a " + << PeerMemberType_Name(peer_pb.member_type()) + << " Not changing its role after remote bootstrap"; // Even though this is an error, we return Status::OK() so the remote server doesn't // tombstone its tablet. 
return Status::OK(); - case PeerMemberType::PRE_OBSERVER: - FALLTHROUGH_INTENDED; + case PeerMemberType::PRE_OBSERVER: [[fallthrough]]; case PeerMemberType::PRE_VOTER: { consensus::ChangeConfigRequestPB req; consensus::ChangeConfigResponsePB resp; diff --git a/src/yb/tablet/tablet_peer_mm_ops.cc b/src/yb/tablet/tablet_peer_mm_ops.cc index 270a585ab3f3..df50e78dd963 100644 --- a/src/yb/tablet/tablet_peer_mm_ops.cc +++ b/src/yb/tablet/tablet_peer_mm_ops.cc @@ -98,7 +98,7 @@ void LogGCOp::Perform() { Status s = tablet_peer_->RunLogGC(); if (!s.ok()) { s = s.CloneAndPrepend("Unexpected error while running Log GC from TabletPeer"); - LOG(ERROR) << s.ToString(); + LOG(DFATAL) << s.ToString(); } sem_.unlock(); diff --git a/src/yb/tablet/transaction_participant.cc b/src/yb/tablet/transaction_participant.cc index 777eb5e8e552..feffa431d4de 100644 --- a/src/yb/tablet/transaction_participant.cc +++ b/src/yb/tablet/transaction_participant.cc @@ -784,8 +784,8 @@ class TransactionParticipant::Impl auto id = FullyDecodeTransactionId(data.state.transaction_id()); if (!id.ok()) { - LOG(ERROR) << "Could not decode transaction details, whose apply record OpId was: " - << data.op_id; + LOG(DFATAL) << "Could not decode transaction details, whose apply record OpId was: " + << data.op_id << ": " << id.status(); return id.status(); } diff --git a/src/yb/tools/data-patcher.cc b/src/yb/tools/data-patcher.cc index f49ff1f58fff..3ab7b262a129 100644 --- a/src/yb/tools/data-patcher.cc +++ b/src/yb/tools/data-patcher.cc @@ -1090,7 +1090,7 @@ class ApplyPatch { LOG(INFO) << "Renaming " << src << " to " << dst; Status s = env_->RenameFile(src, dst); if (!s.ok()) { - LOG(ERROR) << "Error renaming " << src << " to " << dst << ": " << s; + LOG(DFATAL) << "Error renaming " << src << " to " << dst << ": " << s; } return s; } diff --git a/src/yb/tools/fs_tool.cc b/src/yb/tools/fs_tool.cc index 7b38c0767e05..1b3a8bda8bd0 100644 --- a/src/yb/tools/fs_tool.cc +++ b/src/yb/tools/fs_tool.cc @@ -217,12 +217,8 @@ Status FsTool::PrintLogSegmentHeader(const string& path, auto segment_result = ReadableLogSegment::Open(fs_manager_->env(), path); if (!segment_result.ok()) { auto s = segment_result.status(); - if (s.IsUninitialized()) { - LOG(ERROR) << path << " is not initialized: " << s.ToString(); - return Status::OK(); - } - if (s.IsCorruption()) { - LOG(ERROR) << path << " is corrupt: " << s.ToString(); + if (s.IsUninitialized() || s.IsCorruption()) { + LOG(DFATAL) << path << " is not valid: " << s; return Status::OK(); } return s.CloneAndPrepend("Unexpected error reading log segment " + path); diff --git a/src/yb/tserver/metrics_snapshotter.cc b/src/yb/tserver/metrics_snapshotter.cc index cd2eafb4ecf7..bcd1b689b2fb 100644 --- a/src/yb/tserver/metrics_snapshotter.cc +++ b/src/yb/tserver/metrics_snapshotter.cc @@ -504,8 +504,8 @@ Status MetricsSnapshotter::Thread::DoMetricsSnapshot() { uint64_t system_ticks = cur_ticks[2] - prev_ticks_[2]; prev_ticks_ = cur_ticks; if (total_ticks <= 0) { - YB_LOG_EVERY_N_SECS(ERROR, 120) << Format("Failed to calculate CPU usage - " - "invalid total CPU ticks: $0.", total_ticks); + YB_LOG_EVERY_N_SECS(DFATAL, 120) + << Format("Failed to calculate CPU usage - invalid total CPU ticks: $0.", total_ticks); } else { double cpu_usage_user = static_cast(user_ticks) / total_ticks; double cpu_usage_system = static_cast(system_ticks) / total_ticks; diff --git a/src/yb/tserver/pg_client_session.cc b/src/yb/tserver/pg_client_session.cc index bced09a02122..1a5efb6df03e 100644 --- a/src/yb/tserver/pg_client_session.cc 
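// [Illustrative sketch, not part of the patch] The pg_client_session.cc hunks below swap
// ERROR_NOT_OK for WARN_NOT_OK so that DDL-verification failures the YB-Master poller will
// reconcile anyway are logged at WARNING rather than ERROR. WARN_NOT_OK_SKETCH and
// SketchStatus are simplified, hypothetical stand-ins (not the real YugabyteDB macro or
// Status type), shown only to convey the evaluate-status-and-log-on-failure pattern:
#include <glog/logging.h>
#include <string>

// Minimal stand-in for a Status-like type, just to keep the sketch self-contained.
struct SketchStatus {
  bool ok_ = true;
  std::string message_;
  bool ok() const { return ok_; }
  const std::string& message() const { return message_; }
};

#define WARN_NOT_OK_SKETCH(expr, msg) do {              \
    const SketchStatus _s = (expr);                     \
    if (!_s.ok()) {                                     \
      LOG(WARNING) << (msg) << ": " << _s.message();    \
    }                                                   \
  } while (false)

// Usage (DoSomething() is a placeholder returning SketchStatus):
//   WARN_NOT_OK_SKETCH(DoSomething(), "DoSomething failed");  // logs a warning and continues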
+++ b/src/yb/tserver/pg_client_session.cc @@ -2446,7 +2446,7 @@ class PgClientSession::Impl { // If we failed to report the status of this DDL transaction, we can just log and ignore // it, as the poller in the YB-Master will figure out the status of this transaction using // the transaction status tablet and PG catalog. - ERROR_NOT_OK(client_.ReportYsqlDdlTxnStatus(*metadata, *commit), + WARN_NOT_OK(client_.ReportYsqlDdlTxnStatus(*metadata, *commit), Format("Sending ReportYsqlDdlTxnStatus call of $0 failed", *commit)); } @@ -2461,8 +2461,8 @@ class PgClientSession::Impl { // (commit.has_value() is false), the purpose is to use the side effect of // WaitForDdlVerificationToFinish to trigger the start of a background task to // complete the DDL transaction at the DocDB side. - ERROR_NOT_OK(client_.WaitForDdlVerificationToFinish(*metadata), - "WaitForDdlVerificationToFinish call failed"); + WARN_NOT_OK(client_.WaitForDdlVerificationToFinish(*metadata), + "WaitForDdlVerificationToFinish call failed"); } } } @@ -3145,7 +3145,7 @@ class PgClientSession::Impl { // collected. One way to fix this we need to add a periodic scan job in YB-Master to look // for any table/index that are involved in a DDL transaction and start a background task // to complete the DDL transaction at the DocDB side. - LOG(ERROR) << "DdlAtomicityFinishTransaction failed: " << status; + LOG(DFATAL) << "DdlAtomicityFinishTransaction failed: " << status; } return MergeStatus(std::move(commit_status), std::move(status)); } diff --git a/src/yb/tserver/pg_create_table.cc b/src/yb/tserver/pg_create_table.cc index 7793c5bf1b14..32aaba329ebc 100644 --- a/src/yb/tserver/pg_create_table.cc +++ b/src/yb/tserver/pg_create_table.cc @@ -432,7 +432,7 @@ Status CreateSequencesDataTable(client::YBClient* client, CoarseTimePoint deadli LOG(INFO) << "Table '" << table_name.ToString() << "' already exists"; } else { // If any other error, report that! - LOG(ERROR) << "Error creating table '" << table_name.ToString() << "': " << status; + LOG(WARNING) << "Error creating table '" << table_name.ToString() << "': " << status; return status; } return Status::OK(); diff --git a/src/yb/tserver/remote_bootstrap_file_downloader.cc b/src/yb/tserver/remote_bootstrap_file_downloader.cc index 5198e5720ac5..f9cfbc9ec132 100644 --- a/src/yb/tserver/remote_bootstrap_file_downloader.cc +++ b/src/yb/tserver/remote_bootstrap_file_downloader.cc @@ -113,8 +113,8 @@ Status RemoteBootstrapFileDownloader::DownloadFile( return Status::OK(); } // TODO fallback to copy. 
- LOG_WITH_PREFIX(ERROR) << "Failed to link file: " << file_path << " => " << it->second - << ": " << link_status; + LOG_WITH_PREFIX(WARNING) + << "Failed to link file: " << file_path << " => " << it->second << ": " << link_status; } } @@ -155,8 +155,8 @@ Status RemoteBootstrapFileDownloader::DownloadFile( static auto rate_updater = []() { auto remote_bootstrap_clients_started = RemoteClientBase::StartedClientsCount(); if (remote_bootstrap_clients_started < 1) { - YB_LOG_EVERY_N(ERROR, 100) << "Invalid number of remote bootstrap sessions: " - << remote_bootstrap_clients_started; + YB_LOG_EVERY_N(DFATAL, 100) << "Invalid number of remote bootstrap sessions: " + << remote_bootstrap_clients_started; return static_cast(FLAGS_remote_bootstrap_rate_limit_bytes_per_sec); } return static_cast( diff --git a/src/yb/tserver/remote_bootstrap_service.cc b/src/yb/tserver/remote_bootstrap_service.cc index 009961fff49f..c2037f0d0ed7 100644 --- a/src/yb/tserver/remote_bootstrap_service.cc +++ b/src/yb/tserver/remote_bootstrap_service.cc @@ -586,9 +586,10 @@ Status RemoteBootstrapServiceImpl::DoEndRemoteBootstrapSession( num_sessions_serving_data_->Decrement(); LOG_IF(DFATAL, nsessions_serving_data_.fetch_sub(1, std::memory_order_acq_rel) <= 0) << "found nsessions_serving_data_ <= 0 when updating rbs session " << session_id; - LOG(ERROR) << "Remote bootstrap session " << session_id << " on tablet " << session->tablet_id() - << " with peer " << session->requestor_uuid() << " failed. session_succeeded = " - << session_succeeded; + LOG(WARNING) + << "Remote bootstrap session " << session_id << " on tablet " << session->tablet_id() + << " with peer " << session->requestor_uuid() << " failed. session_succeeded = " + << session_succeeded; } return Status::OK(); diff --git a/src/yb/tserver/remote_bootstrap_session.cc b/src/yb/tserver/remote_bootstrap_session.cc index ea625e8b945e..7fcf63cb3184 100644 --- a/src/yb/tserver/remote_bootstrap_session.cc +++ b/src/yb/tserver/remote_bootstrap_session.cc @@ -656,9 +656,9 @@ void RemoteBootstrapSession::InitRateLimiter() { rate_limiter_.SetTargetRateUpdater([this]() -> uint64_t { DCHECK_GT(FLAGS_remote_bootstrap_rate_limit_bytes_per_sec, 0); if (FLAGS_remote_bootstrap_rate_limit_bytes_per_sec <= 0) { - YB_LOG_EVERY_N(ERROR, 1000) - << "Invalid value for remote_bootstrap_rate_limit_bytes_per_sec: " - << FLAGS_remote_bootstrap_rate_limit_bytes_per_sec; + YB_LOG_EVERY_N(WARNING, 1000) + << "Invalid value for remote_bootstrap_rate_limit_bytes_per_sec: " + << FLAGS_remote_bootstrap_rate_limit_bytes_per_sec; // Since the rate limiter is initialized, it's expected that the value of // FLAGS_remote_bootstrap_rate_limit_bytes_per_sec is greater than 0. Since this is not the // case, we'll log an error, and set the rate to 50 MB/s. diff --git a/src/yb/tserver/remote_bootstrap_snapshots.cc b/src/yb/tserver/remote_bootstrap_snapshots.cc index 77da53c000a4..e260b068a52e 100644 --- a/src/yb/tserver/remote_bootstrap_snapshots.cc +++ b/src/yb/tserver/remote_bootstrap_snapshots.cc @@ -151,13 +151,13 @@ Status RemoteBootstrapSnapshotsComponent::DownloadFileInto( // If we fail to fetch a snapshot file, delete the snapshot directory, log the error, // but don't fail the remote bootstrap as snapshot files are not needed for running // the tablet. 
- LOG(ERROR) << "Error downloading snapshot file " << file_path << ": " << s; + LOG(WARNING) << "Error downloading snapshot file " << file_path << ": " << s; failed_snapshot_ids->insert(file_pb.snapshot_id()); LOG(INFO) << "Deleting snapshot dir " << snapshot_dir; auto delete_status = Env::Default()->DeleteRecursively(snapshot_dir); if (!delete_status.ok()) { - LOG(ERROR) << "Error deleting corrupted snapshot directory " << snapshot_dir << ": " - << delete_status; + LOG(WARNING) << "Error deleting corrupted snapshot directory " << snapshot_dir << ": " + << delete_status; } } else { LOG(INFO) << "Downloaded file " << file_path << " for snapshot " << file_pb.snapshot_id(); diff --git a/src/yb/tserver/service_util.h b/src/yb/tserver/service_util.h index 1bf8d834cebc..964a52085617 100644 --- a/src/yb/tserver/service_util.h +++ b/src/yb/tserver/service_util.h @@ -83,11 +83,7 @@ Result CheckUuidMatch(TabletPeerLookupIf* tablet_manager, // Maintain compat in release mode, but complain. std::string msg = strings::Substitute("$0: Missing destination UUID in request from $1: $2", method_name, requestor_string, req->ShortDebugString()); -#ifdef NDEBUG - YB_LOG_EVERY_N(ERROR, 100) << msg; -#else - LOG(FATAL) << msg; -#endif + YB_LOG_EVERY_N(DFATAL, 100) << msg; return true; } if (PREDICT_FALSE(req->dest_uuid() != local_uuid)) { diff --git a/src/yb/tserver/stateful_services/stateful_service_base.cc b/src/yb/tserver/stateful_services/stateful_service_base.cc index 2f3dcb63c149..cbe6016f2cd7 100644 --- a/src/yb/tserver/stateful_services/stateful_service_base.cc +++ b/src/yb/tserver/stateful_services/stateful_service_base.cc @@ -324,7 +324,7 @@ void StatefulServiceBase::StartPeriodicTaskIfNeeded() { std::bind(&StatefulServiceBase::ProcessTaskPeriodically, this)); if (!s.ok()) { task_enqueued_ = false; - LOG(ERROR) << "Failed to schedule " << ServiceName() << " periodic task :" << s; + LOG(WARNING) << "Failed to schedule " << ServiceName() << " periodic task :" << s; } } diff --git a/src/yb/tserver/tablet_server.cc b/src/yb/tserver/tablet_server.cc index c81ba3cbe0a6..e4f4f0d2b0f2 100644 --- a/src/yb/tserver/tablet_server.cc +++ b/src/yb/tserver/tablet_server.cc @@ -1364,8 +1364,8 @@ void TabletServer::SetYsqlDBCatalogVersions( ++count; } if (shm_index == -1) { - YB_LOG_EVERY_N_SECS(ERROR, 60) << "Cannot find free db_catalog_versions_ slot, db_oid: " - << db_oid; + YB_LOG_EVERY_N_SECS(WARNING, 60) + << "Cannot find free db_catalog_versions_ slot, db_oid: " << db_oid; continue; } // update the newly inserted entry to have the allocated slot. diff --git a/src/yb/tserver/tablet_service.cc b/src/yb/tserver/tablet_service.cc index e62f831f0c72..de5a9455b6b6 100644 --- a/src/yb/tserver/tablet_service.cc +++ b/src/yb/tserver/tablet_service.cc @@ -825,7 +825,7 @@ void TabletServiceAdminImpl::BackfillIndex( tablet.tablet->SafeTime(tablet::RequireLease::kFalse, read_at, deadline); DVLOG(1) << "Got safe time " << safe_time.ToString(); if (!safe_time.ok()) { - LOG(ERROR) << "Could not get a good enough safe time " << safe_time.ToString(); + LOG(WARNING) << "Could not get a good enough safe time " << safe_time.ToString(); SetupErrorAndRespond(resp->mutable_error(), safe_time.status(), &context); return; } @@ -1063,11 +1063,11 @@ void TabletServiceAdminImpl::AlterSchema(const tablet::ChangeMetadataRequestPB* schema_version = tablet.peer->tablet_metadata()->schema_version( req->has_alter_table_id() ? 
req->alter_table_id() : ""); if (schema_version == req->schema_version()) { - LOG(ERROR) << "The current schema does not match the request schema." - << " version=" << schema_version - << " current-schema=" << tablet_schema.ToString() - << " request-schema=" << req_schema.ToString() - << " (corruption)"; + LOG(DFATAL) << "The current schema does not match the request schema." + << " version=" << schema_version + << " current-schema=" << tablet_schema.ToString() + << " request-schema=" << req_schema.ToString() + << " (corruption)"; SetupErrorAndRespond(resp->mutable_error(), STATUS(Corruption, "got a different schema for the same version number"), TabletServerErrorPB::MISMATCHED_SCHEMA, &context); @@ -1077,11 +1077,11 @@ void TabletServiceAdminImpl::AlterSchema(const tablet::ChangeMetadataRequestPB* // If the current schema is newer than the one in the request reject the request. if (schema_version > req->schema_version()) { - LOG(ERROR) << "Tablet " << req->tablet_id() << " has a newer schema" - << " version=" << schema_version - << " req->schema_version()=" << req->schema_version() - << "\n current-schema=" << tablet_schema.ToString() - << "\n request-schema=" << req_schema.ToString(); + LOG(WARNING) << "Tablet " << req->tablet_id() << " has a newer schema" + << " version=" << schema_version + << " req->schema_version()=" << req->schema_version() + << "\n current-schema=" << tablet_schema.ToString() + << "\n request-schema=" << req_schema.ToString(); SetupErrorAndRespond( resp->mutable_error(), STATUS_SUBSTITUTE( @@ -1126,7 +1126,7 @@ void TabletServiceAdminImpl::AlterSchema(const tablet::ChangeMetadataRequestPB* if (tablet.tablet->transaction_participant() == nullptr) { auto status = STATUS( IllegalState, "Transaction participant is null for tablet " + req->tablet_id()); - LOG(ERROR) << status; + LOG(DFATAL) << status; SetupErrorAndRespond( resp->mutable_error(), status, @@ -1248,7 +1248,7 @@ void TabletServiceImpl::VerifyTableRowRange( const auto safe_time = tablet->SafeTime(tablet::RequireLease::kFalse, read_at, deadline); DVLOG(1) << "Got safe time " << safe_time.ToString(); if (!safe_time.ok()) { - LOG(ERROR) << "Could not get a good enough safe time " << safe_time.ToString(); + LOG(DFATAL) << "Could not get a good enough safe time " << safe_time.ToString(); SetupErrorAndRespond(resp->mutable_error(), safe_time.status(), &context); return; } @@ -2416,7 +2416,7 @@ void TabletServiceAdminImpl::WaitForYsqlBackendsCatalogVersion( server_->GetSharedMemoryPostgresAuthKey(), modified_deadline) .Connect(); if (!res.ok()) { - LOG_WITH_PREFIX_AND_FUNC(ERROR) << "failed to connect to local postgres: " << res.status(); + LOG_WITH_PREFIX_AND_FUNC(WARNING) << "failed to connect to local postgres: " << res.status(); SetupErrorAndRespond(resp->mutable_error(), res.status(), &context); return; } @@ -2463,7 +2463,7 @@ void TabletServiceAdminImpl::WaitForYsqlBackendsCatalogVersion( LOG_WITH_PREFIX(INFO) << "Deadline reached: still waiting on " << num_lagging_backends << " backends " << db_ver_tag; } else if (!s.ok()) { - LOG_WITH_PREFIX_AND_FUNC(ERROR) << "num lagging backends query failed: " << s; + LOG_WITH_PREFIX_AND_FUNC(WARNING) << "num lagging backends query failed: " << s; SetupErrorAndRespond(resp->mutable_error(), s, &context); return; } diff --git a/src/yb/tserver/tablet_validator.cc b/src/yb/tserver/tablet_validator.cc index 4402f8645a48..35cdaf7982d1 100644 --- a/src/yb/tserver/tablet_validator.cc +++ b/src/yb/tserver/tablet_validator.cc @@ -226,7 +226,8 @@ void 
TabletMetadataValidator::Impl::DoValidate() { auto sync_result = SyncWithMaster(); if (!sync_result.ok()) { - LOG_WITH_PREFIX(ERROR) << "Failed to sync with the master, status: " << sync_result.status(); + LOG_WITH_PREFIX(WARNING) + << "Failed to sync with the master, status: " << sync_result.status(); break; } @@ -240,8 +241,8 @@ bool TabletMetadataValidator::Impl::HandleMasterResponse( VLOG_WITH_PREFIX_AND_FUNC(2) << "response: " << response.ShortDebugString(); if (response.has_error()) { - LOG_WITH_PREFIX(ERROR) << "Failed to get backfilling status from the master, " - << "error: " << response.error().ShortDebugString(); + LOG_WITH_PREFIX(WARNING) << "Failed to get backfilling status from the master, " + << "error: " << response.error().ShortDebugString(); return false; // Will try during next period. } @@ -531,7 +532,7 @@ void TabletMetadataValidator::Impl::TriggerMetadataUpdate( // has been changed from the last flush (some operation has been applied), but this cannot be // guaranteed as no raft opeation is used for retain_delete_markers recovery. auto status = tablet_meta->Flush(); - LOG_IF_WITH_PREFIX(ERROR, !status.ok()) + LOG_IF_WITH_PREFIX(WARNING, !status.ok()) << "Tablet " << index_tablet_id << " metadata flush failed: " << status; if (status.ok()) { diff --git a/src/yb/tserver/ts_local_lock_manager.cc b/src/yb/tserver/ts_local_lock_manager.cc index 400a365916fb..a3695f50b74b 100644 --- a/src/yb/tserver/ts_local_lock_manager.cc +++ b/src/yb/tserver/ts_local_lock_manager.cc @@ -237,8 +237,8 @@ class TSLocalLockManager::Impl { yb::UniqueLock lock(mutex_); while (txns_in_progress_.find(txn_id) != txns_in_progress_.end()) { if (deadline <= CoarseMonoClock::Now()) { - LOG(ERROR) << "Failed to add txn " << txn_id << " to in progress txns until deadline: " - << ToString(deadline); + LOG(WARNING) << "Failed to add txn " << txn_id << " to in progress txns until deadline: " + << ToString(deadline); TRACE("Failed to add by deadline."); return STATUS_FORMAT( TryAgain, "Failed to add txn $0 to in progress txns until deadline: $1", txn_id, diff --git a/src/yb/tserver/ts_tablet_manager.cc b/src/yb/tserver/ts_tablet_manager.cc index 0f70ddcd6619..b0b5cff57f0f 100644 --- a/src/yb/tserver/ts_tablet_manager.cc +++ b/src/yb/tserver/ts_tablet_manager.cc @@ -1998,7 +1998,7 @@ void TSTabletManager::OpenTablet(const RaftGroupMetadataPtr& meta, auto s = ConsensusMetadata::Load( meta->fs_manager(), tablet_id, meta->fs_manager()->uuid(), &cmeta); if (!s.ok()) { - LOG(ERROR) << kLogPrefix << "Tablet failed to load consensus meta data: " << s; + LOG(DFATAL) << kLogPrefix << "Tablet failed to load consensus meta data: " << s; tablet_peer->SetFailed(s); return; } @@ -2010,7 +2010,7 @@ void TSTabletManager::OpenTablet(const RaftGroupMetadataPtr& meta, tablet_id, fs_manager_, meta->wal_dir()); s = bootstrap_state_manager->Init(); if(!s.ok()) { - LOG(ERROR) << kLogPrefix << "Tablet failed to init bootstrap state manager: " << s; + LOG(DFATAL) << kLogPrefix << "Tablet failed to init bootstrap state manager: " << s; tablet_peer->SetFailed(s); return; } @@ -2042,7 +2042,7 @@ void TSTabletManager::OpenTablet(const RaftGroupMetadataPtr& meta, if (GetAtomicFlag(&FLAGS_TEST_force_single_tablet_failure) && CompareAndSetFlag(&FLAGS_TEST_force_single_tablet_failure, true /* expected */, false /* val */)) { - LOG(ERROR) << "Setting the state of a tablet to FAILED"; + LOG(WARNING) << "Setting the state of a tablet to FAILED"; tablet_peer->SetFailed(STATUS(InternalError, "Setting tablet to failed state for test", 
tablet_id)); return; @@ -2052,7 +2052,7 @@ void TSTabletManager::OpenTablet(const RaftGroupMetadataPtr& meta, // partially created tablet here? s = tablet_peer->SetBootstrapping(); if (!s.ok()) { - LOG(ERROR) << kLogPrefix << "Tablet failed to set bootstrapping: " << s; + LOG(DFATAL) << kLogPrefix << "Tablet failed to set bootstrapping: " << s; tablet_peer->SetFailed(s); return; } @@ -2156,8 +2156,7 @@ void TSTabletManager::OpenTablet(const RaftGroupMetadataPtr& meta, flush_bootstrap_state_pool()); if (!s.ok()) { - LOG(ERROR) << kLogPrefix << "Tablet failed to init: " - << s.ToString(); + LOG(DFATAL) << kLogPrefix << "Tablet failed to init: " << s.ToString(); tablet_peer->SetFailed(s); return; } @@ -2168,8 +2167,7 @@ void TSTabletManager::OpenTablet(const RaftGroupMetadataPtr& meta, TRACE("Starting tablet peer"); s = tablet_peer->Start(bootstrap_info); if (!s.ok()) { - LOG(ERROR) << kLogPrefix << "Tablet failed to start: " - << s.ToString(); + LOG(DFATAL) << kLogPrefix << "Tablet failed to start: " << s; tablet_peer->SetFailed(s); return; } diff --git a/src/yb/tserver/xcluster_consumer.cc b/src/yb/tserver/xcluster_consumer.cc index b7ec84842d4f..e0d8932ae115 100644 --- a/src/yb/tserver/xcluster_consumer.cc +++ b/src/yb/tserver/xcluster_consumer.cc @@ -525,9 +525,9 @@ void XClusterConsumer::TriggerPollForNewTablets() { // it. consumer_namespace_name = ""; } else { - LOG(ERROR) << "Malformed sequences_data alias table ID: " << consumer_table_id - << "; skipping creation of a poller for a tablet belonging to that table: " - << consumer_tablet_info.tablet_id; + LOG(DFATAL) << "Malformed sequences_data alias table ID: " << consumer_table_id + << "; skipping creation of a poller for a tablet belonging to that table: " + << consumer_tablet_info.tablet_id; continue; } } else { diff --git a/src/yb/tserver/xcluster_ddl_queue_handler.cc b/src/yb/tserver/xcluster_ddl_queue_handler.cc index 90858064e5c4..9e5fce490b8a 100644 --- a/src/yb/tserver/xcluster_ddl_queue_handler.cc +++ b/src/yb/tserver/xcluster_ddl_queue_handler.cc @@ -437,8 +437,8 @@ Status XClusterDDLQueueHandler::ProcessFailedDDLQuery( if (last_failed_query_ && last_failed_query_->MatchesQueryInfo(query_info)) { num_fails_for_this_ddl_++; if (num_fails_for_this_ddl_ >= FLAGS_xcluster_ddl_queue_max_retries_per_ddl) { - LOG_WITH_PREFIX(ERROR) << "Failed to process DDL after " << num_fails_for_this_ddl_ - << " retries. Pausing DDL replication."; + LOG_WITH_PREFIX(WARNING) << "Failed to process DDL after " << num_fails_for_this_ddl_ + << " retries. 
Pausing DDL replication."; } } else { last_failed_query_ = QueryIdentifier{query_info.ddl_end_time, query_info.query_id}; diff --git a/src/yb/tserver/xcluster_output_client.cc b/src/yb/tserver/xcluster_output_client.cc index 36bf927d04ab..96f8ba7f97f1 100644 --- a/src/yb/tserver/xcluster_output_client.cc +++ b/src/yb/tserver/xcluster_output_client.cc @@ -838,8 +838,8 @@ void XClusterOutputClient::HandleError(const Status& s) { LOG_WITH_PREFIX(WARNING) << "Retrying applying replicated record for consumer tablet: " << consumer_tablet_info_.tablet_id << ", reason: " << s; } else { - LOG_WITH_PREFIX(ERROR) << "Error while applying replicated record: " << s - << ", consumer tablet: " << consumer_tablet_info_.tablet_id; + LOG_WITH_PREFIX(WARNING) << "Error while applying replicated record: " << s + << ", consumer tablet: " << consumer_tablet_info_.tablet_id; } { ACQUIRE_MUTEX_IF_ONLINE_ELSE_RETURN; diff --git a/src/yb/tserver/xcluster_poller.cc b/src/yb/tserver/xcluster_poller.cc index 0aab3922d5a0..1caec37615ea 100644 --- a/src/yb/tserver/xcluster_poller.cc +++ b/src/yb/tserver/xcluster_poller.cc @@ -650,7 +650,7 @@ bool XClusterPoller::IsStuck() const { const auto lag = MonoTime::Now() - last_task_schedule_time_; if (lag > 1s * GetAtomicFlag(&FLAGS_xcluster_poller_task_delay_considered_stuck_secs)) { - LOG_WITH_PREFIX(ERROR) << "XCluster Poller has not executed any tasks for " << lag.ToString(); + LOG_WITH_PREFIX(DFATAL) << "XCluster Poller has not executed any tasks for " << lag.ToString(); return true; } return false; diff --git a/src/yb/util/allocation_tracker.cc b/src/yb/util/allocation_tracker.cc index 02548b0a15dd..c6f62f46d05b 100644 --- a/src/yb/util/allocation_tracker.cc +++ b/src/yb/util/allocation_tracker.cc @@ -25,13 +25,13 @@ AllocationTrackerBase::~AllocationTrackerBase() { #ifndef NDEBUG std::lock_guard lock(mutex_); for (auto& pair : objects_) { - LOG(ERROR) << "Error of type " << name_ << " not destroyed, id: " << pair.second.second - << ", created at: " << pair.second.first; + LOG(DFATAL) << "Error of type " << name_ << " not destroyed, id: " << pair.second.second + << ", created at: " << pair.second.first; } #else if (count_) { - LOG(ERROR) << "Not all objects of type " << name_ << " were destroyed, " - << count_ << " objects left"; + LOG(DFATAL) << "Not all objects of type " << name_ << " were destroyed, " + << count_ << " objects left"; } #endif } diff --git a/src/yb/util/async_util.cc b/src/yb/util/async_util.cc index 72c7f688bdeb..c328bf2eccc9 100644 --- a/src/yb/util/async_util.cc +++ b/src/yb/util/async_util.cc @@ -104,7 +104,7 @@ void Synchronizer::EnsureWaitDone() { LOG(FATAL) << kErrorMsg; #else const int kWaitSec = 10; - YB_LOG_EVERY_N_SECS(ERROR, 1) << kErrorMsg << " Waiting up to " << kWaitSec << " seconds"; + YB_LOG_EVERY_N_SECS(DFATAL, 1) << kErrorMsg << " Waiting up to " << kWaitSec << " seconds"; CHECK_OK(WaitFor(MonoDelta::FromSeconds(kWaitSec))); #endif } diff --git a/src/yb/util/env_posix.cc b/src/yb/util/env_posix.cc index 7346ff612040..cc203ef854fe 100644 --- a/src/yb/util/env_posix.cc +++ b/src/yb/util/env_posix.cc @@ -401,7 +401,7 @@ class PosixWritableFile : public WritableFile { if (sync_on_close_) { Status sync_status = Sync(); if (!sync_status.ok()) { - LOG(ERROR) << "Unable to Sync " << filename_ << ": " << sync_status.ToString(); + LOG(WARNING) << "Unable to Sync " << filename_ << ": " << sync_status; if (s.ok()) { s = sync_status; } @@ -908,7 +908,7 @@ class PosixRWFile final : public RWFile { // Virtual function call in destructor. 
s = Sync(); if (!s.ok()) { - LOG(ERROR) << "Unable to Sync " << filename_ << ": " << s.ToString(); + LOG(WARNING) << "Unable to Sync " << filename_ << ": " << s; } } diff --git a/src/yb/util/env_util.cc b/src/yb/util/env_util.cc index d03eb67fa160..15c9521b53cb 100644 --- a/src/yb/util/env_util.cc +++ b/src/yb/util/env_util.cc @@ -121,7 +121,7 @@ std::pair FindRootDir(const std::string& search_for_dir) { std::string GetRootDir(const std::string& search_for_dir) { auto [status, path] = FindRootDir(search_for_dir); if (!status.ok()) { - LOG(ERROR) << status.ToString(); + LOG(WARNING) << status; } return path; } diff --git a/src/yb/util/file_system_posix.cc b/src/yb/util/file_system_posix.cc index 299408212f8d..6d4b1c2a4128 100644 --- a/src/yb/util/file_system_posix.cc +++ b/src/yb/util/file_system_posix.cc @@ -355,7 +355,7 @@ Status PosixWritableFile::Close() { << " filesize_: " << filesize_; } if (ftruncate(fd_, filesize_) != 0) { - LOG(ERROR) << STATUS_IO_ERROR(filename_, errno) << " filesize_: " << filesize_; + LOG(WARNING) << STATUS_IO_ERROR(filename_, errno) << " filesize_: " << filesize_; } #ifdef ROCKSDB_FALLOCATE_PRESENT // in some file systems, ftruncate only trims trailing space if the @@ -374,9 +374,9 @@ Status PosixWritableFile::Close() { if (fallocate( fd_, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, filesize_, block_size * last_allocated_block - filesize_) != 0) { - LOG(ERROR) << STATUS_IO_ERROR(filename_, errno) << " block_size: " << block_size - << " last_allocated_block: " << last_allocated_block - << " filesize_: " << filesize_; + LOG(WARNING) << STATUS_IO_ERROR(filename_, errno) << " block_size: " << block_size + << " last_allocated_block: " << last_allocated_block + << " filesize_: " << filesize_; } } #endif diff --git a/src/yb/util/logging.h b/src/yb/util/logging.h index 1853e32b32c5..407624da54c7 100644 --- a/src/yb/util/logging.h +++ b/src/yb/util/logging.h @@ -138,6 +138,17 @@ enum PRIVATE_ThrottleMsg {THROTTLE_MSG}; // benign races on their internal static variables. //////////////////////////////////////////////////////////////////////////////// +#define YB_GLOG_SEVERITY_INFO google::GLOG_INFO +#define YB_GLOG_SEVERITY_WARNING google::GLOG_WARNING +#define YB_GLOG_SEVERITY_ERROR google::GLOG_ERROR +#define YB_GLOG_SEVERITY_FATAL google::GLOG_FATAL +#if DCHECK_IS_ON() +#define YB_GLOG_SEVERITY_DFATAL google::GLOG_FATAL +#else +#define YB_GLOG_SEVERITY_DFATAL google::GLOG_ERROR +#endif +#define YB_GLOG_SEVERITY(severity) BOOST_PP_CAT(YB_GLOG_SEVERITY_, severity) + // The "base" macros. 
#define YB_SOME_KIND_OF_LOG_EVERY_N(severity, n, what_to_do) \ static int LOG_OCCURRENCES = 0, LOG_OCCURRENCES_MOD_N = 0; \ @@ -147,7 +158,7 @@ enum PRIVATE_ThrottleMsg {THROTTLE_MSG}; if (++LOG_OCCURRENCES_MOD_N > n) LOG_OCCURRENCES_MOD_N -= n; \ if (LOG_OCCURRENCES_MOD_N == 1) \ google::LogMessage( \ - __FILE__, __LINE__, google::GLOG_ ## severity, LOG_OCCURRENCES, \ + __FILE__, __LINE__, YB_GLOG_SEVERITY(severity), LOG_OCCURRENCES, \ &what_to_do).stream() #define YB_SOME_KIND_OF_LOG_IF_EVERY_N(severity, condition, n, what_to_do) \ @@ -158,7 +169,7 @@ enum PRIVATE_ThrottleMsg {THROTTLE_MSG}; if (condition && \ ((LOG_OCCURRENCES_MOD_N=(LOG_OCCURRENCES_MOD_N + 1) % n) == (1 % n))) \ google::LogMessage( \ - __FILE__, __LINE__, google::GLOG_ ## severity, LOG_OCCURRENCES, \ + __FILE__, __LINE__, YB_GLOG_SEVERITY(severity), LOG_OCCURRENCES, \ &what_to_do).stream() #define YB_SOME_KIND_OF_PLOG_EVERY_N(severity, n, what_to_do) \ @@ -169,7 +180,7 @@ enum PRIVATE_ThrottleMsg {THROTTLE_MSG}; if (++LOG_OCCURRENCES_MOD_N > n) LOG_OCCURRENCES_MOD_N -= n; \ if (LOG_OCCURRENCES_MOD_N == 1) \ google::ErrnoLogMessage( \ - __FILE__, __LINE__, google::GLOG_ ## severity, LOG_OCCURRENCES, \ + __FILE__, __LINE__, YB_GLOG_SEVERITY(severity), LOG_OCCURRENCES, \ &what_to_do).stream() #define YB_SOME_KIND_OF_LOG_FIRST_N(severity, n, what_to_do) \ @@ -177,13 +188,11 @@ enum PRIVATE_ThrottleMsg {THROTTLE_MSG}; ANNOTATE_BENIGN_RACE(&LOG_OCCURRENCES, "Logging the first N is approximate"); \ if (LOG_OCCURRENCES++ < (n)) \ google::LogMessage( \ - __FILE__, __LINE__, google::GLOG_ ## severity, static_cast(LOG_OCCURRENCES), \ - &what_to_do).stream() + __FILE__, __LINE__, YB_GLOG_SEVERITY(severity), \ + static_cast(LOG_OCCURRENCES), &what_to_do).stream() // The direct user-facing macros. #define YB_LOG_EVERY_N(severity, n) \ - static_assert(google::GLOG_ ## severity < google::NUM_SEVERITIES, \ - "Invalid requested log severity"); \ YB_SOME_KIND_OF_LOG_EVERY_N(severity, (n), google::LogMessage::SendToLog) #define YB_LOG_WITH_PREFIX_EVERY_N(severity, n) YB_LOG_EVERY_N(severity, n) << LogPrefix() diff --git a/src/yb/util/memory/memory.h b/src/yb/util/memory/memory.h index ceceb71c1bb4..fa5c29e9862a 100644 --- a/src/yb/util/memory/memory.h +++ b/src/yb/util/memory/memory.h @@ -966,10 +966,10 @@ void Quota::Free(size_t amount) { usage_ -= amount; // threads allocate/free memory concurrently via the same Quota object that is // not protected with a mutex (thread_safe == false). 
- if (usage_ > (std::numeric_limits::max() - (1 << 28))) { - LOG(ERROR) << "Suspiciously big usage_ value: " << usage_ - << " (could be a result size_t wrapping around below 0, " - << "for example as a result of race condition)."; + if (usage_ > std::numeric_limits::max() / 2) { + LOG(DFATAL) << "Suspiciously big usage_ value: " << usage_ + << " (could be a result size_t wrapping around below 0, " + << "for example as a result of race condition)."; } } diff --git a/src/yb/util/net/dns_resolver-test.cc b/src/yb/util/net/dns_resolver-test.cc index ca2306816ce2..fda57c14ea31 100644 --- a/src/yb/util/net/dns_resolver-test.cc +++ b/src/yb/util/net/dns_resolver-test.cc @@ -58,7 +58,7 @@ class DnsResolverTest : public YBTest { io_thread_ = CHECK_RESULT(Thread::Make("io_thread", "io_thread", [this] { boost::system::error_code ec; io_service_.run(ec); - LOG_IF(ERROR, ec) << "Failed to run io service: " << ec; + LOG_IF(DFATAL, ec) << "Failed to run io service: " << ec; })); } @@ -72,7 +72,7 @@ class DnsResolverTest : public YBTest { auto deadline = std::chrono::steady_clock::now() + 15s; while (!io_service_.stopped()) { if (std::chrono::steady_clock::now() >= deadline) { - LOG(ERROR) << "Io service failed to stop"; + LOG(DFATAL) << "Io service failed to stop"; io_service_.stop(); break; } diff --git a/src/yb/util/net/inetaddress.cc b/src/yb/util/net/inetaddress.cc index 835bd45923f4..4b51a66bee36 100644 --- a/src/yb/util/net/inetaddress.cc +++ b/src/yb/util/net/inetaddress.cc @@ -156,8 +156,8 @@ void FilterAddresses(const string& filter_spec, vector* addresses) { if (filter_it != kFilters->end()) { filters.push_back(&filter_it->second); } else { - LOG(ERROR) << "Unknown filter spec " << filter_name << " in filter spec " - << filter_spec; + LOG(DFATAL) << "Unknown filter spec " << filter_name << " in filter spec " + << filter_spec; } } diff --git a/src/yb/util/net/net_util.cc b/src/yb/util/net/net_util.cc index f95d405c86d2..921a52d8cfbc 100644 --- a/src/yb/util/net/net_util.cc +++ b/src/yb/util/net/net_util.cc @@ -146,7 +146,7 @@ Status HostPort::RemoveAndGetHostPortList( out.str(master_server_addr); out.str(" "); } - LOG(ERROR) << out.str(); + LOG(DFATAL) << out.str(); return STATUS_SUBSTITUTE(NotFound, "Cannot find $0 in addresses: $1", diff --git a/src/yb/util/physical_time.cc b/src/yb/util/physical_time.cc index 6b7142b8df15..6d7891f8734f 100644 --- a/src/yb/util/physical_time.cc +++ b/src/yb/util/physical_time.cc @@ -81,7 +81,7 @@ Status CallAdjTime(timex* tx) { ErrnoToString(errno)); case TIME_ERROR: if (FLAGS_disable_clock_sync_error) { - YB_LOG_EVERY_N_SECS(ERROR, 15) << "Clock unsynchronized, status: " << tx->status; + YB_LOG_EVERY_N_SECS(DFATAL, 15) << "Clock unsynchronized, status: " << tx->status; return Status::OK(); } return STATUS_FORMAT( @@ -89,8 +89,8 @@ Status CallAdjTime(timex* tx) { tx->status); default: // TODO what to do about leap seconds? see KUDU-146 - YB_LOG_FIRST_N(ERROR, 1) << "Server undergoing leap second. This may cause consistency " - << "issues (rc=" << rc << ")"; + YB_LOG_FIRST_N(DFATAL, 1) << "Server undergoing leap second. 
This may cause consistency " + << "issues (rc=" << rc << ")"; return Status::OK(); } } diff --git a/src/yb/util/result-test.cc b/src/yb/util/result-test.cc index ebc4b31e6834..82a7dec2f984 100644 --- a/src/yb/util/result-test.cc +++ b/src/yb/util/result-test.cc @@ -307,7 +307,6 @@ void TestNotOk(T t) { const auto LogPrefix = []() -> std::string { return "prefix"; }; WARN_NOT_OK(t, "boo"); WARN_WITH_PREFIX_NOT_OK(t, "foo"); - ERROR_NOT_OK(t, "moo"); } } // namespace diff --git a/src/yb/util/shared_mem.cc b/src/yb/util/shared_mem.cc index 65929b6f9c78..815f7af86dee 100644 --- a/src/yb/util/shared_mem.cc +++ b/src/yb/util/shared_mem.cc @@ -95,8 +95,8 @@ std::string GetSharedMemoryDirectory() { } } } else if (errno != ENOENT) { - LOG(ERROR) << "Unexpected error when reading /proc/mounts: errno=" << errno - << ": " << ErrnoToString(errno); + LOG(DFATAL) << "Unexpected error when reading /proc/mounts: errno=" << errno + << ": " << ErrnoToString(errno); } #endif @@ -117,8 +117,8 @@ int TryMemfdCreate() { struct utsname uts_name; if (uname(&uts_name) == -1) { - LOG(ERROR) << "Failed to get kernel name information: errno=" << errno - << ": " << ErrnoToString(errno); + LOG(DFATAL) << "Failed to get kernel name information: errno=" << errno + << ": " << ErrnoToString(errno); return fd; } @@ -140,8 +140,8 @@ int TryMemfdCreate() { fd = memfd_create(); if (fd == -1) { - LOG(ERROR) << "Error creating shared memory via memfd_create: errno=" << errno - << ": " << ErrnoToString(errno); + LOG(DFATAL) << "Error creating shared memory via memfd_create: errno=" << errno + << ": " << ErrnoToString(errno); } } @@ -180,9 +180,9 @@ Result CreateTempSharedMemoryFile() { // Immediately unlink the file to so it will be removed when all file descriptors close. if (unlink(temp_file_path.c_str()) == -1) { - LOG(ERROR) << "Leaking shared memory file '" << temp_file_path - << "' after failure to unlink: errno=" << errno - << ": " << ErrnoToString(errno); + LOG(DFATAL) << "Leaking shared memory file '" << temp_file_path + << "' after failure to unlink: errno=" << errno + << ": " << ErrnoToString(errno); } break; } @@ -255,8 +255,8 @@ SharedMemorySegment::SharedMemorySegment(SharedMemorySegment&& other) SharedMemorySegment::~SharedMemorySegment() { if (base_address_ && munmap(base_address_, segment_size_) == -1) { - LOG(ERROR) << "Failed to unmap shared memory segment: errno=" << errno - << ": " << ErrnoToString(errno); + LOG(DFATAL) << "Failed to unmap shared memory segment: errno=" << errno + << ": " << ErrnoToString(errno); } if (fd_ != -1) { diff --git a/src/yb/util/stack_trace_tracker.cc b/src/yb/util/stack_trace_tracker.cc index 05a9b36ace91..dff0cc8bffef 100644 --- a/src/yb/util/stack_trace_tracker.cc +++ b/src/yb/util/stack_trace_tracker.cc @@ -66,8 +66,8 @@ class GlobalStackTraceTracker { if (entry.symbolized_trace.empty()) { auto s = StackTrace::MakeStackTrace(frames); if (!s.ok()) { - LOG(ERROR) << "Bad stack trace frames: " - << Slice(frames.data(), frames.size()).ToDebugString(); + LOG(DFATAL) << "Bad stack trace frames: " + << Slice(frames.data(), frames.size()).ToDebugString(); } entry.symbolized_trace = s->Symbolize(); } diff --git a/src/yb/util/status_log.h b/src/yb/util/status_log.h index 1325ca5fe7ec..5ccdf5f10632 100644 --- a/src/yb/util/status_log.h +++ b/src/yb/util/status_log.h @@ -49,7 +49,7 @@ #define ERROR_NOT_OK(to_call, error_prefix) do { \ auto&& _s = (to_call); \ if (PREDICT_FALSE(!_s.ok())) { \ - YB_LOG(ERROR) << (error_prefix) << ": " << StatusToString(_s); \ + YB_LOG(DFATAL) << 
(error_prefix) << ": " << StatusToString(_s); \ } \ } while (0); @@ -67,6 +67,15 @@ YB_CHECK(_s.ok()) << (msg) << ": " << StatusToString(_s); \ } while (0); +#define YB_RETURN_NOT_OK_WITH_WARNING(expr, warning_prefix) \ + do { \ + auto&& _s = (expr); \ + if (PREDICT_FALSE(!_s.ok())) { \ + YB_LOG(WARNING) << (warning_prefix) << ": " << StatusToString(_s); \ + return MoveStatus(std::move(_s)); \ + } \ + } while (0); + // If the status is bad, CHECK immediately, appending the status to the logged message. #define YB_CHECK_OK(s) YB_CHECK_OK_PREPEND(s, "Bad status") @@ -77,6 +86,7 @@ #define LOG_AND_RETURN YB_LOG_AND_RETURN #define CHECK_OK_PREPEND YB_CHECK_OK_PREPEND #define CHECK_OK YB_CHECK_OK +#define RETURN_NOT_OK_WITH_WARNING YB_RETURN_NOT_OK_WITH_WARNING // These are standard glog macros. #define YB_LOG LOG diff --git a/src/yb/util/trace.cc b/src/yb/util/trace.cc index 7851f97c0717..6218b93a74fc 100644 --- a/src/yb/util/trace.cc +++ b/src/yb/util/trace.cc @@ -389,7 +389,8 @@ TraceEntry* Trace::NewEntry( size_t size = offsetof(TraceEntry, message) + msg_len; void* dst = arena->AllocateBytesAligned(size, alignof(TraceEntry)); if (dst == nullptr) { - LOG(ERROR) << "NewEntry(msg_len, " << file_path << ", " << line_number + LOG(DFATAL) + << "NewEntry(msg_len, " << file_path << ", " << line_number << ") received nullptr from AllocateBytes.\n So far:" << DumpToString(true); return nullptr; } diff --git a/src/yb/util/ulimit_util.cc b/src/yb/util/ulimit_util.cc index 8e344a0821d0..81aab3cb2031 100644 --- a/src/yb/util/ulimit_util.cc +++ b/src/yb/util/ulimit_util.cc @@ -202,8 +202,8 @@ void UlimitUtil::InitUlimits() { const auto limits_or_status = Env::Default()->GetUlimit(resource_id); if (!limits_or_status.ok()) { - LOG(ERROR) << "Unable to fetch hard limit for resource " << resource_name - << " Skipping initialization."; + LOG(DFATAL) << "Unable to fetch hard limit for resource " << resource_name + << " Skipping initialization."; continue; } @@ -221,8 +221,8 @@ void UlimitUtil::InitUlimits() { Status set_ulim_status = Env::Default()->SetUlimit(resource_id, new_soft_limit, resource_name); if (!set_ulim_status.ok()) { - LOG(ERROR) << "Unable to set new soft limit for resource " << resource_name - << " error: " << set_ulim_status.ToString(); + LOG(DFATAL) << "Unable to set new soft limit for resource " << resource_name + << " error: " << set_ulim_status.ToString(); } } } diff --git a/src/yb/vector_index/vector_lsm.cc b/src/yb/vector_index/vector_lsm.cc index 02edcf74d311..72a91bd21d75 100644 --- a/src/yb/vector_index/vector_lsm.cc +++ b/src/yb/vector_index/vector_lsm.cc @@ -1491,7 +1491,7 @@ void VectorLSM::DeleteFile(const VectorLSMFileMetaData& if (status.ok()) { LOG_WITH_PREFIX(INFO) << "Deleted file " << path; } else { - LOG_WITH_PREFIX(ERROR) << "Failed to delete file " << path << ", status: " << status; + LOG_WITH_PREFIX(DFATAL) << "Failed to delete file " << path << ", status: " << status; } } diff --git a/src/yb/yql/cql/cqlserver/cql_processor.cc b/src/yb/yql/cql/cqlserver/cql_processor.cc index b067969aa551..7a31c8a67f47 100644 --- a/src/yb/yql/cql/cqlserver/cql_processor.cc +++ b/src/yb/yql/cql/cqlserver/cql_processor.cc @@ -338,7 +338,7 @@ bool CQLProcessor::CheckAuthentication(const CQLRequest& req) const { unique_ptr CQLProcessor::ProcessRequest(const CQLRequest& req) { if (FLAGS_use_cassandra_authentication && !CheckAuthentication(req)) { - LOG(ERROR) << "Could not execute statement by not authenticated user!"; + LOG(WARNING) << "Could not execute statement by not authenticated 
user!"; return make_unique( req, ErrorResponse::Code::SERVER_ERROR, "Could not execute statement by not authenticated user"); @@ -588,7 +588,7 @@ unique_ptr CQLProcessor::ProcessRequest(const AuthResponseRequest& "Could not prepare statement for querying user " + params.username); } if (!stmt->ExecuteAsync(this, params, statement_executed_cb_).ok()) { - LOG(ERROR) << "Could not execute prepared statement to fetch login info!"; + LOG(WARNING) << "Could not execute prepared statement to fetch login info!"; return make_unique( req, ErrorResponse::Code::SERVER_ERROR, "Could not execute prepared statement for querying roles for user " + params.username); @@ -675,7 +675,7 @@ unique_ptr CQLProcessor::ProcessError(const Status& s, } } - LOG(ERROR) << "Internal error: invalid error code " << static_cast(GetErrorCode(s)); + LOG(WARNING) << "Internal error: invalid error code " << static_cast(GetErrorCode(s)); return make_unique(*request_, ErrorResponse::Code::SERVER_ERROR, "Invalid error code"); } else if (s.IsNotAuthorized()) { @@ -908,7 +908,7 @@ Result CheckLDAPAuth(const ql::AuthResponseRequest::AuthQueryParameters& p << FLAGS_ycql_ldap_server << "': " << LDAPError(r, ldap); auto error_msg = str.str(); if (r == LDAP_INVALID_CREDENTIALS) { - LOG(ERROR) << error_msg; + LOG(WARNING) << error_msg; return false; } @@ -1069,7 +1069,7 @@ unique_ptr CQLProcessor::ProcessResult(const ExecutedResult::Shared // default: fall through. } - LOG(ERROR) << "Internal error: unknown result type " << static_cast(result->type()); + LOG(WARNING) << "Internal error: unknown result type " << static_cast(result->type()); return make_unique( *request_, ErrorResponse::Code::SERVER_ERROR, "Internal error: unknown result type"); } diff --git a/src/yb/yql/cql/cqlserver/cql_rpc.cc b/src/yb/yql/cql/cqlserver/cql_rpc.cc index 9c6fdfeb00b5..5b9f025a4f76 100644 --- a/src/yb/yql/cql/cqlserver/cql_rpc.cc +++ b/src/yb/yql/cql/cqlserver/cql_rpc.cc @@ -224,7 +224,7 @@ void CQLInboundCall::RespondFailure(rpc::ErrorStatusPB::RpcErrorCodePB error_cod case rpc::ErrorStatusPB::FATAL_VERSION_MISMATCH: FALLTHROUGH_INTENDED; case rpc::ErrorStatusPB::FATAL_UNAUTHORIZED: FALLTHROUGH_INTENDED; case rpc::ErrorStatusPB::FATAL_UNKNOWN: { - LOG(ERROR) << "Unexpected error status: " + LOG(WARNING) << "Unexpected error status: " << rpc::ErrorStatusPB::RpcErrorCodePB_Name(error_code); ErrorResponse(stream_id_, ErrorResponse::Code::SERVER_ERROR, "Server error") .Serialize(compression_scheme, &msg); diff --git a/src/yb/yql/cql/cqlserver/system_query_cache.cc b/src/yb/yql/cql/cqlserver/system_query_cache.cc index 0e4f13854846..4921d38f57a0 100644 --- a/src/yb/yql/cql/cqlserver/system_query_cache.cc +++ b/src/yb/yql/cql/cqlserver/system_query_cache.cc @@ -262,7 +262,7 @@ void SystemQueryCache::ExecuteSync(const std::string& stmt, Status* status, ExecutedResult::SharedPtr* result_ptr) { const auto processor = service_impl_->GetProcessor(); if (!processor.ok()) { - LOG(ERROR) << "Unable to get CQLProcessor for system query cache"; + LOG(DFATAL) << "Unable to get CQLProcessor for system query cache"; *status = processor.status(); return; } diff --git a/src/yb/yql/cql/ql/parser/scanner_util.cc b/src/yb/yql/cql/ql/parser/scanner_util.cc index 7dd685b25a1f..788f5ef40545 100644 --- a/src/yb/yql/cql/ql/parser/scanner_util.cc +++ b/src/yb/yql/cql/ql/parser/scanner_util.cc @@ -45,7 +45,7 @@ unsigned int hexval(unsigned char c) { if (c >= 'A' && c <= 'F') return c - 'A' + 0xA; - LOG(ERROR) << "invalid hexadecimal digit"; + LOG(DFATAL) << "invalid hexadecimal digit"; 
return 0; /* not reached */ } @@ -233,7 +233,7 @@ void report_invalid_encoding(const char *mbstr, size_t len) { p += sprintf(p, " "); // NOLINT(*) } - LOG(ERROR) << "SQL Error: " << ErrorText(ErrorCode::CHARACTER_NOT_IN_REPERTOIRE) + LOG(DFATAL) << "SQL Error: " << ErrorText(ErrorCode::CHARACTER_NOT_IN_REPERTOIRE) << ". Invalid byte sequence for UTF8 \"" << buf << "\""; } diff --git a/src/yb/yql/cql/ql/ptree/pt_dml_write_property.cc b/src/yb/yql/cql/ql/ptree/pt_dml_write_property.cc index 9f90d41dfb28..91f18adeaba7 100644 --- a/src/yb/yql/cql/ql/ptree/pt_dml_write_property.cc +++ b/src/yb/yql/cql/ql/ptree/pt_dml_write_property.cc @@ -95,7 +95,7 @@ Status PTDmlWritePropertyListNode::Analyze(SemContext *sem_context) { for (PTDmlWriteProperty::SharedPtr tnode : node_list()) { if (tnode == nullptr) { // This shouldn't happen because AppendList ignores null nodes. - LOG(ERROR) << "Invalid null property"; + LOG(DFATAL) << "Invalid null property"; continue; } switch(tnode->property_type()) { diff --git a/src/yb/yql/cql/ql/ptree/pt_table_property.cc b/src/yb/yql/cql/ql/ptree/pt_table_property.cc index 1ddca04ca7b6..30e2f3d102b0 100644 --- a/src/yb/yql/cql/ql/ptree/pt_table_property.cc +++ b/src/yb/yql/cql/ql/ptree/pt_table_property.cc @@ -274,7 +274,7 @@ Status PTTablePropertyListNode::Analyze(SemContext *sem_context) { for (PTTableProperty::SharedPtr tnode : node_list()) { if (tnode == nullptr) { // This shouldn't happen because AppendList ignores null nodes. - LOG(ERROR) << "Invalid null property"; + LOG(DFATAL) << "Invalid null property"; continue; } switch(tnode->property_type()) { @@ -382,7 +382,7 @@ Status PTTableProperty::SetTableProperty(yb::TableProperties *table_property) co case KVProperty::kCompaction: FALLTHROUGH_INTENDED; case KVProperty::kCompression: FALLTHROUGH_INTENDED; case KVProperty::kTransactions: - LOG(ERROR) << "Not primitive table property " << table_property_name; + LOG(DFATAL) << "Not primitive table property " << table_property_name; break; case KVProperty::kNumTablets: int64_t val; diff --git a/src/yb/yql/cql/ql/util/cql_message.cc b/src/yb/yql/cql/ql/util/cql_message.cc index fee6ba605810..62e9da0caa02 100644 --- a/src/yb/yql/cql/ql/util/cql_message.cc +++ b/src/yb/yql/cql/ql/util/cql_message.cc @@ -934,7 +934,7 @@ void SerializeUUID(const string& value, faststring* mesg) { if (value.size() == CQLMessage::kUUIDSize) { mesg->append(value); } else { - LOG(ERROR) << "Internal error: inconsistent UUID size: " << value.size(); + LOG(DFATAL) << "Internal error: inconsistent UUID size: " << value.size(); uint8_t empty_uuid[CQLMessage::kUUIDSize] = {0}; mesg->append(empty_uuid, sizeof(empty_uuid)); } @@ -944,7 +944,7 @@ void SerializeTimeUUID(const string& value, faststring* mesg) { if (value.size() == CQLMessage::kUUIDSize) { mesg->append(value); } else { - LOG(ERROR) << "Internal error: inconsistent TimeUUID size: " << value.size(); + LOG(DFATAL) << "Internal error: inconsistent TimeUUID size: " << value.size(); uint8_t empty_uuid[CQLMessage::kUUIDSize] = {0}; mesg->append(empty_uuid, sizeof(empty_uuid)); } @@ -1013,7 +1013,7 @@ void SerializeValue(const CQLMessage::Value& value, faststring* mesg) { break; // default: fall through } - LOG(ERROR) << "Internal error: invalid/unknown value kind " << static_cast(value.kind); + LOG(DFATAL) << "Internal error: invalid/unknown value kind " << static_cast(value.kind); SerializeInt(-1, mesg); } #endif @@ -1226,7 +1226,7 @@ ResultResponse::RowsMetadata::Type::Type(const Id id) : id(id) { // default: fall through } - LOG(ERROR) << 
"Internal error: invalid/unknown primitive type id " << static_cast(id); + FATAL_INVALID_ENUM_VALUE(Id, id); } // These union members in Type below are not initialized by default. They need to be explicitly @@ -1273,7 +1273,7 @@ ResultResponse::RowsMetadata::Type::Type(const Id id, shared_ptr ele // default: fall through } - LOG(ERROR) << "Internal error: invalid/unknown list/map type id " << static_cast(id); + FATAL_INVALID_ENUM_VALUE(Id, id); } ResultResponse::RowsMetadata::Type::Type(shared_ptr map_type) : id(Id::MAP) { @@ -1332,7 +1332,7 @@ ResultResponse::RowsMetadata::Type::Type(const Type& t) : id(t.id) { // default: fall through } - LOG(ERROR) << "Internal error: unknown type id " << static_cast(id); + FATAL_INVALID_ENUM_VALUE(Id, id); } ResultResponse::RowsMetadata::Type::Type(const shared_ptr& ql_type) { @@ -1443,7 +1443,7 @@ ResultResponse::RowsMetadata::Type::Type(const shared_ptr& ql_type) { // default: fall through } - LOG(ERROR) << "Internal error: invalid/unsupported type " << type->ToString(); + FATAL_INVALID_ENUM_VALUE(DataType, type->main()); } ResultResponse::RowsMetadata::Type::~Type() { @@ -1489,7 +1489,7 @@ ResultResponse::RowsMetadata::Type::~Type() { // default: fall through } - LOG(ERROR) << "Internal error: unknown type id " << static_cast(id); + FATAL_INVALID_ENUM_VALUE(Id, id); } ResultResponse::RowsMetadata::RowsMetadata() @@ -1576,7 +1576,8 @@ void ResultResponse::SerializeType(const RowsMetadata::Type* type, faststring* m // default: fall through } - LOG(ERROR) << "Internal error: unknown type id " << static_cast(type->id); + + FATAL_INVALID_ENUM_VALUE(RowsMetadata::Type::Id, type->id); } void ResultResponse::SerializeColSpecs( diff --git a/src/yb/yql/pggate/pg_client.cc b/src/yb/yql/pggate/pg_client.cc index 6c39c16ea224..858fdbfc532c 100644 --- a/src/yb/yql/pggate/pg_client.cc +++ b/src/yb/yql/pggate/pg_client.cc @@ -476,9 +476,9 @@ class PgClient::Impl : public BigDataFetcher { // the next user activity will trigger a FATAL anyway. This is done specifically to avoid // log spew of the warning message below in cases where the session is idle (ie. no other // RPCs are being sent to the tserver). - LOG(ERROR) << "Heartbeat failed. Connection needs to be reset. " - << "Shutting down heartbeating mechanism due to unknown session " - << session_id_; + LOG(DFATAL) << "Heartbeat failed. Connection needs to be reset. 
" + << "Shutting down heartbeating mechanism due to unknown session " + << session_id_; heartbeat_poller_.Shutdown(); return; } diff --git a/src/yb/yql/pggate/pggate_flags.cc b/src/yb/yql/pggate/pggate_flags.cc index aa00a5a25708..f6b54550b86c 100644 --- a/src/yb/yql/pggate/pggate_flags.cc +++ b/src/yb/yql/pggate/pggate_flags.cc @@ -17,8 +17,12 @@ #include "yb/util/flag_validators.h" #include "yb/util/flags.h" +#include "yb/util/size_literals.h" + #include "yb/yql/pggate/pggate_flags.h" +using namespace yb::size_literals; + DEPRECATE_FLAG(int32, pgsql_rpc_keepalive_time_ms, "02_2024"); DEFINE_UNKNOWN_int32(pggate_rpc_timeout_secs, 60, @@ -65,7 +69,13 @@ DEFINE_test_flag(int64, inject_delay_between_prepare_ybctid_execute_batch_ybctid DEFINE_test_flag(bool, index_read_multiple_partitions, false, "Test flag used to simulate tablet spliting by joining tables' partitions."); -DEFINE_NON_RUNTIME_int32(ysql_output_buffer_size, 1024 * 1024, +#if defined(__APPLE__) +constexpr int32_t kDefaultYsqlOutputBufferSize = 256_KB; +#else +constexpr int32_t kDefaultYsqlOutputBufferSize = 1_MB; +#endif + +DEFINE_NON_RUNTIME_int32(ysql_output_buffer_size, kDefaultYsqlOutputBufferSize, "Size of postgres-level output buffer, in bytes. " "While fetched data resides within this buffer and hasn't been flushed to client yet, " "we're free to transparently restart operation in case of restart read error."); diff --git a/src/yb/yql/pggate/ybc_pggate.cc b/src/yb/yql/pggate/ybc_pggate.cc index a2c846838746..596b12326551 100644 --- a/src/yb/yql/pggate/ybc_pggate.cc +++ b/src/yb/yql/pggate/ybc_pggate.cc @@ -504,7 +504,7 @@ static Result GetYbLsnTypeString( case tserver::PGReplicationSlotLsnType::ReplicationSlotLsnTypePg_HYBRID_TIME: return YBC_LSN_TYPE_HYBRID_TIME; default: - LOG(ERROR) << "Received unexpected LSN type " << yb_lsn_type << " for stream " << stream_id; + LOG(DFATAL) << "Received unexpected LSN type " << yb_lsn_type << " for stream " << stream_id; return STATUS_FORMAT( InternalError, "Received unexpected LSN type $0 for stream $1", yb_lsn_type, stream_id); } @@ -2690,7 +2690,7 @@ void YBCStoreTServerAshSamples( acquire_cb_lock_fn(true /* exclusive */); if (!result.ok()) { // We don't return error status to avoid a restart loop of the ASH collector - LOG(ERROR) << result.status(); + LOG(WARNING) << result.status(); } else { AshCopyTServerSamples(get_cb_slot_fn, result->tserver_wait_states(), sample_time); AshCopyTServerSamples(get_cb_slot_fn, result->cql_wait_states(), sample_time); diff --git a/src/yb/yql/pggate/ysql_bench_metrics_handler/ybc_ysql_bench_metrics_handler.cc b/src/yb/yql/pggate/ysql_bench_metrics_handler/ybc_ysql_bench_metrics_handler.cc index 1f3dd2f0fcd4..a911aa9a18d3 100644 --- a/src/yb/yql/pggate/ysql_bench_metrics_handler/ybc_ysql_bench_metrics_handler.cc +++ b/src/yb/yql/pggate/ysql_bench_metrics_handler/ybc_ysql_bench_metrics_handler.cc @@ -110,7 +110,7 @@ int StartWebserver(WebserverWrapper *webserver_wrapper) { "/prometheus-metrics", "Metrics", PgPrometheusMetricsHandler, false, false); auto status = WithMaskedYsqlSignals([webserver]() { return webserver->Start(); }); if (!status.ok()) { - LOG(ERROR) << "Error starting webserver: " << status.ToString(); + LOG(DFATAL) << "Error starting webserver: " << status.ToString(); return 1; } diff --git a/src/yb/yql/pgwrapper/pg_wrapper.cc b/src/yb/yql/pgwrapper/pg_wrapper.cc index 9ae2d7df9225..68d99608706e 100644 --- a/src/yb/yql/pgwrapper/pg_wrapper.cc +++ b/src/yb/yql/pgwrapper/pg_wrapper.cc @@ -807,9 +807,8 @@ Status PgWrapper::Start() { } } 
#else - if (FLAGS_yb_enable_valgrind) { - LOG(ERROR) << "yb_enable_valgrind is ON, but Yugabyte was not compiled with Valgrind support."; - } + LOG_IF(DFATAL, FLAGS_yb_enable_valgrind) + << "yb_enable_valgrind is ON, but Yugabyte was not compiled with Valgrind support."; #endif vector postgres_argv { @@ -1152,9 +1151,7 @@ Status PgWrapper::InitDbForYSQL( LOG(INFO) << "initdb took " << std::chrono::duration_cast(elapsed_time).count() << " ms"; - if (!initdb_status.ok()) { - LOG(ERROR) << "initdb failed: " << initdb_status; - } + ERROR_NOT_OK(initdb_status, "initdb failed"); return initdb_status; } @@ -1277,7 +1274,7 @@ Status PgWrapper::CleanupLockFileAndKillHungPg(const std::string& lock_file) { } if (postgres_pid == 0) { - LOG(ERROR) << strings::Substitute( + LOG(WARNING) << Format( "Error reading postgres process ID from lock file $0. $1 $2", lock_file, ErrnoToString(errno), errno); } else { @@ -1466,15 +1463,15 @@ key_t PgSupervisor::GetYsqlConnManagerStatsShmkey() { if (shmid < 0) { switch (errno) { case EACCES: - LOG(ERROR) << "Unable to create shared memory segment, not authorised to create shared " - "memory segment"; + LOG(DFATAL) << "Unable to create shared memory segment, not authorised to create shared " + "memory segment"; return -1; case ENOSPC: - LOG(ERROR) + LOG(DFATAL) << "Unable to create shared memory segment, no space left."; return -1; case ENOMEM: - LOG(ERROR) + LOG(DFATAL) << "Unable to create shared memory segment, no memory left"; return -1; default: diff --git a/src/yb/yql/redis/redisserver/redis_client.cc b/src/yb/yql/redis/redisserver/redis_client.cc index cf13730a111e..aa90b8925a9f 100644 --- a/src/yb/yql/redis/redisserver/redis_client.cc +++ b/src/yb/yql/redis/redisserver/redis_client.cc @@ -62,7 +62,7 @@ RedisReply CreateReply(redisReply* reply) { default: RedisReply result( RedisReplyType::kError, Format("Unsupported reply type: $0", reply->type)); - LOG(ERROR) << result.ToString(); + LOG(DFATAL) << result.ToString(); return result; } } diff --git a/src/yb/yql/redis/redisserver/redis_commands.cc b/src/yb/yql/redis/redisserver/redis_commands.cc index 32cd98690449..183751271395 100644 --- a/src/yb/yql/redis/redisserver/redis_commands.cc +++ b/src/yb/yql/redis/redisserver/redis_commands.cc @@ -322,7 +322,7 @@ void GetTabletLocations(LocalCommandData data, RedisArrayPB* array_response) { auto s = data.client()->GetTabletsAndUpdateCache( table_name, 0, &tablets, &partitions, &locations); if (!s.ok()) { - LOG(ERROR) << "Error getting tablets: " << s.message(); + LOG(DFATAL) << "Error getting tablets: " << s.message(); return; } vector response, ts_info; @@ -717,7 +717,7 @@ class RenameData : public std::enable_shared_from_this { session_->FlushAsync([retained_self = shared_from_this()](client::FlushStatus* flush_status) { const auto& s = flush_status->status; if (!s.ok()) { - LOG(ERROR) << "Reading from src during a Rename failed. " << s; + LOG(DFATAL) << "Reading from src during a Rename failed. " << s; retained_self->RespondWithError(s.message().ToBuffer()); } else { retained_self->BeginWriteDest(); @@ -824,7 +824,7 @@ class RenameData : public std::enable_shared_from_this { session_->FlushAsync([retained_self = shared_from_this()](client::FlushStatus* flush_status) { const auto& s = flush_status->status; if (!s.ok()) { - LOG(ERROR) << "Writing to dest during a Rename failed. " << s; + LOG(DFATAL) << "Writing to dest during a Rename failed. 
" << s; retained_self->RespondWithError(s.message().ToBuffer()); return; } @@ -844,7 +844,7 @@ class RenameData : public std::enable_shared_from_this { session_->FlushAsync([retained_self = shared_from_this()](client::FlushStatus* flush_status) { const auto& s = flush_status->status; if (!s.ok()) { - LOG(ERROR) << "Updating ttl for dest during a Rename failed. " << s; + LOG(DFATAL) << "Updating ttl for dest during a Rename failed. " << s; retained_self->RespondWithError(s.message().ToBuffer()); return; } @@ -858,7 +858,7 @@ class RenameData : public std::enable_shared_from_this { session_->FlushAsync([retained_self = shared_from_this()](client::FlushStatus* flush_status) { const auto& s = flush_status->status; if (!s.ok()) { - LOG(ERROR) << "Deleting src during a Rename failed. " << s; + LOG(DFATAL) << "Deleting src during a Rename failed. " << s; retained_self->RespondWithError(s.message().ToBuffer()); return; } diff --git a/src/yb/yql/redis/redisserver/redis_service.cc b/src/yb/yql/redis/redisserver/redis_service.cc index c5d1af07689c..b042faf35cc4 100644 --- a/src/yb/yql/redis/redisserver/redis_service.cc +++ b/src/yb/yql/redis/redisserver/redis_service.cc @@ -1219,7 +1219,7 @@ void RedisServiceImplData::ForwardToInterestedProxies( const string& channel, const string& message, const IntFunctor& f) { auto interested_servers = GetServerAddrsForChannel(channel); if (!interested_servers.ok()) { - LOG(ERROR) << "Could not get servers to forward to " << interested_servers.status(); + LOG(DFATAL) << "Could not get servers to forward to " << interested_servers.status(); return; } std::shared_ptr resp_handler = @@ -1413,7 +1413,7 @@ const RedisCommandInfo* RedisServiceImpl::Impl::FetchHandler(const RedisClientCo } auto iter = command_name_to_info_map_.find(Slice(lower_cmd, len)); if (iter == command_name_to_info_map_.end()) { - YB_LOG_EVERY_N_SECS(ERROR, 60) + YB_LOG_EVERY_N_SECS(WARNING, 60) << "Command " << cmd_name << " not yet supported. " << "Arguments: " << ToString(cmd_args) << ". " << "Raw: " << Slice(cmd_args[0].data(), cmd_args.back().end()).ToDebugString(); @@ -1498,13 +1498,13 @@ void RedisServiceImpl::Impl::Handle(rpc::InboundCallPtr call_ptr) { size_t passed_arguments = c.size() - 1; if (!exact_count && passed_arguments < arity) { // -X means that the command needs >= X arguments. - YB_LOG_EVERY_N_SECS(ERROR, 60) + YB_LOG_EVERY_N_SECS(WARNING, 60) << "Requested command " << c[0] << " does not have enough arguments." << " At least " << arity << " expected, but " << passed_arguments << " found."; RespondWithFailure(call, idx, "Too few arguments."); } else if (exact_count && passed_arguments != arity) { // X (> 0) means that the command needs exactly X arguments. - YB_LOG_EVERY_N_SECS(ERROR, 60) + YB_LOG_EVERY_N_SECS(WARNING, 60) << "Requested command " << c[0] << " has wrong number of arguments. " << arity << " expected, but " << passed_arguments << " found."; RespondWithFailure(call, idx, "Wrong number of arguments."); From 4dbb628f538b435863ee1f6de482776a3da8efab Mon Sep 17 00:00:00 2001 From: Hal Takahara <8877300+mtakahar@users.noreply.github.com> Date: Wed, 2 Apr 2025 15:09:24 -0400 Subject: [PATCH 097/146] [#26868] YSQL: add safer non-CBO mode that ignores the stats MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Summary: The current default cost model, which uses heuristic-based estimates, occasionally produces problematic execution plans when tables are analyzed. 
This poses a risk for users running `ANALYZE` in preparation for transitioning to the new CBO (Cost-Based Optimizer). The new `yb_enable_cbo` GUC parameter introduces a behavior that is similar to the current non-CBO mode but completely ignores statistics, preventing such issues, and also allows users to disable statistics only for problematic queries.

#### `yb_enable_cbo` Values:

* **legacy_mode** (default): Behaves the same as the current default.
* **off**: (New behavior) Similar to the current default but completely ignores statistics.
* **on**: Functions like `yb_enable_base_scans_cost_model=on`.
* **legacy_stats_mode**: Functions like `yb_enable_optimizer_statistics=on`.

#### Previous/New Optimizer Setting and Behavior Details:

##### A. Default, Unanalyzed → `yb_enable_cbo=off` (regardless of analyze)

* Uses the PG cost model along with YB-tuned heuristics-based default estimates.

##### B. New CBO Enabled (`yb_enable_base_scans_cost_model=on`) → `yb_enable_cbo=on`

* Uses YB-specific cost functions for YB-specific nodes, relying on full statistics (or PG defaults where stats are unavailable).

###### Less Common Patterns:

##### C. Default + `pg_class.reltuples` Manual Patch → `yb_enable_cbo=legacy_mode`

* A workaround to influence join optimization choices before YB relations supported `ANALYZE`.
* Introduced in commit [48f54177fe05ffaeb35ef5665de3948b40d76ac4](https://github.com/yugabyte/yugabyte-db/issues/7097).

##### D. `yb_enable_optimizer_statistics=on`, Analyzed → `yb_enable_cbo=legacy_stats_mode`

* A workaround to influence index selection and join optimization before a proper cost model for YB nodes was introduced (the new CBO).
* Introduced in commit [837bc5eb4a185f844b554987ffa010b3be772c7b](https://github.com/yugabyte/yugabyte-db/issues/10840).

###### Unrecommended Patterns:

##### E. Default, Analyzed → `yb_enable_cbo=legacy_mode`

* Uses the PG cost model combined with a mixture of real stats and YB-tuned heuristics-based internal default estimates. `reltuples` is used for base table accesses, and the column stats are also used for others such as joins, group by, etc.
* Behavior is unpredictable due to the inconsistent blending of heuristics, PG defaults, and real stats.

##### F. `yb_enable_optimizer_statistics=on`, Unanalyzed → `yb_enable_cbo=legacy_stats_mode`

* Uses the PG cost model with PG default estimates.
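For readers evaluating the new GUC, here is a minimal session-level sketch (illustrative only, not part of this patch); it assumes nothing beyond standard `SET`/`SHOW` syntax and the value names documented above:

```sql
-- Illustrative sketch only: switching between the modes described above.
-- The value-to-legacy-GUC mapping follows the summary in this commit message.
SET yb_enable_cbo = on;                 -- equivalent to yb_enable_base_scans_cost_model = on
SET yb_enable_cbo = legacy_stats_mode;  -- equivalent to yb_enable_optimizer_statistics = on
SET yb_enable_cbo = off;                -- legacy cost model, statistics ignored
SHOW yb_enable_cbo;
```

As the `assign_yb_enable_cbo` hook in the diff below suggests, setting the new GUC also resets the two deprecated booleans (and vice versa), so when old and new settings are mixed in one session the last assignment wins.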
Jira: DB-16284 Test Plan: ./yb_build.sh release --java-test 'org.yb.pgsql.TestPgRegressPlanner' Reviewers: mihnea, william.mckenna, gkukreja Reviewed By: gkukreja Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D43008 --- .../src/backend/optimizer/path/costsize.c | 2 + .../src/backend/optimizer/util/plancat.c | 2 +- .../src/backend/statistics/extended_stats.c | 6 + src/postgres/src/backend/utils/adt/selfuncs.c | 7 +- src/postgres/src/backend/utils/misc/guc.c | 89 +++- src/postgres/src/include/optimizer/cost.h | 11 + .../expected/yb.orig.optimizer_guc.out | 279 ++++++++++ .../expected/yb.orig.row_estimates.out | 499 +++++++++++++++++- .../regress/sql/yb.orig.optimizer_guc.sql | 82 +++ .../regress/sql/yb.orig.row_estimates.sql | 394 +++++++++++++- .../test/regress/yb_planner_serial_schedule | 1 + 11 files changed, 1337 insertions(+), 35 deletions(-) create mode 100644 src/postgres/src/test/regress/expected/yb.orig.optimizer_guc.out create mode 100644 src/postgres/src/test/regress/sql/yb.orig.optimizer_guc.sql diff --git a/src/postgres/src/backend/optimizer/path/costsize.c b/src/postgres/src/backend/optimizer/path/costsize.c index 3bdbcdcef345..4824b4a784b3 100644 --- a/src/postgres/src/backend/optimizer/path/costsize.c +++ b/src/postgres/src/backend/optimizer/path/costsize.c @@ -219,6 +219,8 @@ bool enable_async_append = true; bool yb_enable_geolocation_costing = true; bool yb_enable_batchednl = false; bool yb_enable_parallel_append = false; +YbCostModel yb_enable_cbo = YB_COST_MODEL_LEGACY; +bool yb_ignore_stats = false; extern int yb_bnl_batch_size; diff --git a/src/postgres/src/backend/optimizer/util/plancat.c b/src/postgres/src/backend/optimizer/util/plancat.c index 4c4ed0dfb722..d842feb8a14f 100644 --- a/src/postgres/src/backend/optimizer/util/plancat.c +++ b/src/postgres/src/backend/optimizer/util/plancat.c @@ -1026,7 +1026,7 @@ estimate_rel_size(Relation rel, int32 *attr_widths, */ if (IsYBRelation(rel)) { - if (rel->rd_rel->reltuples < 0) + if (rel->rd_rel->reltuples < 0 || yb_ignore_stats) { *tuples = YBC_DEFAULT_NUM_ROWS; } diff --git a/src/postgres/src/backend/statistics/extended_stats.c b/src/postgres/src/backend/statistics/extended_stats.c index 0fcd81974172..c568a4038f48 100644 --- a/src/postgres/src/backend/statistics/extended_stats.c +++ b/src/postgres/src/backend/statistics/extended_stats.c @@ -49,6 +49,9 @@ #include "utils/syscache.h" #include "utils/typcache.h" +/* YB includes */ +#include "optimizer/cost.h" + /* * To avoid consuming too much memory during analysis and/or too much space * in the resulting pg_statistic rows, we ignore varlena datums that are wider @@ -2033,6 +2036,9 @@ statext_clauselist_selectivity(PlannerInfo *root, List *clauses, int varRelid, { Selectivity sel; + if (rel->is_yb_relation && yb_ignore_stats) + return 1.0; + /* First, try estimating clauses using a multivariate MCV list. 
*/ sel = statext_mcv_clauselist_selectivity(root, clauses, varRelid, jointype, sjinfo, rel, estimatedclauses, is_or); diff --git a/src/postgres/src/backend/utils/adt/selfuncs.c b/src/postgres/src/backend/utils/adt/selfuncs.c index c8fba61c208b..44f40953d2b9 100644 --- a/src/postgres/src/backend/utils/adt/selfuncs.c +++ b/src/postgres/src/backend/utils/adt/selfuncs.c @@ -4230,7 +4230,7 @@ estimate_multivariate_ndistinct(PlannerInfo *root, RelOptInfo *rel, RangeTblEntry *rte = planner_rt_fetch(rel->relid, root); /* bail out immediately if the table has no extended statistics */ - if (!rel->statlist) + if (!rel->statlist || (rel->is_yb_relation && yb_ignore_stats)) return false; /* look for the ndistinct statistics object matching the most vars */ @@ -5360,7 +5360,7 @@ examine_variable(PlannerInfo *root, Node *node, int varRelid, vardata->atttype = exprType(node); vardata->atttypmod = exprTypmod(node); - if (onerel) + if (onerel && !(onerel->is_yb_relation && yb_ignore_stats)) { /* * We have an expression in vars of a single relation. Try to match @@ -5688,6 +5688,9 @@ examine_simple_variable(PlannerInfo *root, Var *var, } else if (rte->rtekind == RTE_RELATION) { + if (vardata->rel->is_yb_relation && yb_ignore_stats) + return; + /* * Plain table or parent of an inheritance appendrel, so look up the * column in pg_statistic diff --git a/src/postgres/src/backend/utils/misc/guc.c b/src/postgres/src/backend/utils/misc/guc.c index ae3d525a8cde..eb540c58ea4c 100644 --- a/src/postgres/src/backend/utils/misc/guc.c +++ b/src/postgres/src/backend/utils/misc/guc.c @@ -286,6 +286,9 @@ static void assign_tcmalloc_sample_period(int newval, void *extra); static void assign_yb_pg_batch_detection_mechanism(int new_value, void *extra); static void assign_ysql_upgrade_mode(bool newval, void *extra); static void check_reserved_prefixes(const char *varName); +static void assign_yb_enable_cbo(int new_value, void *extra); +static void assign_yb_enable_optimizer_statistics(bool new_value, void *extra); +static void assign_yb_enable_base_scans_cost_model(bool new_value, void *extra); static bool check_yb_enable_advisory_locks(bool *newval, void **extra, GucSource source); @@ -665,6 +668,20 @@ const struct config_enum_entry yb_sampling_algorithm_options[] = { {NULL, 0, false} }; +static const struct config_enum_entry yb_cost_model_options[] = { + {"off", YB_COST_MODEL_OFF, false}, + {"on", YB_COST_MODEL_ON, false}, + {"legacy_mode", YB_COST_MODEL_LEGACY, false}, + {"legacy_stats_mode", YB_COST_MODEL_LEGACY_STATS, false}, + {"true", YB_COST_MODEL_ON, true}, + {"false", YB_COST_MODEL_OFF, true}, + {"yes", YB_COST_MODEL_ON, true}, + {"no", YB_COST_MODEL_OFF, true}, + {"1", YB_COST_MODEL_ON, true}, + {"0", YB_COST_MODEL_OFF, true}, + {NULL, 0, false} +}; + /* * Options for enum values stored in other modules */ @@ -2831,17 +2848,22 @@ static struct config_bool ConfigureNamesBool[] = false, NULL, NULL, NULL }, + { {"yb_enable_optimizer_statistics", PGC_USERSET, QUERY_TUNING_METHOD, gettext_noop("Enables use of the PostgreSQL selectivity estimation which utilizes " "table statistics collected with ANALYZE. When disabled, a simpler heuristics based " - "selectivity estimation is used."), + "selectivity estimation is used." + " DEPRECATED: This settting is deprecated and will " + "be removed in a future release." 
+ " Use \"yb_enable_cbo\" instead."), NULL }, &yb_enable_optimizer_statistics, false, - NULL, NULL, NULL + NULL, assign_yb_enable_optimizer_statistics, NULL }, + { {"yb_enable_expression_pushdown", PGC_USERSET, QUERY_TUNING_METHOD, gettext_noop("Push supported expressions down to DocDB for evaluation."), @@ -2999,12 +3021,14 @@ static struct config_bool ConfigureNamesBool[] = { {"yb_enable_base_scans_cost_model", PGC_USERSET, QUERY_TUNING_METHOD, gettext_noop("Enables YB cost model for Sequential and Index scans. " - "This feature is currently in preview."), + " DEPRECATED: This setting is deprecated and will " + "be removed in a future release." + " Use \"yb_enable_cbo\" instead."), NULL }, &yb_enable_base_scans_cost_model, false, - NULL, NULL, NULL + NULL, assign_yb_enable_base_scans_cost_model, NULL }, { @@ -7052,6 +7076,16 @@ static struct config_enum ConfigureNamesEnum[] = NULL, NULL, NULL }, + { + {"yb_enable_cbo", PGC_USERSET, QUERY_TUNING_METHOD, + gettext_noop("Enable YB cost model."), + NULL, + GUC_EXPLAIN + }, + &yb_enable_cbo, YB_COST_MODEL_LEGACY, yb_cost_model_options, + NULL, assign_yb_enable_cbo, NULL + }, + /* End-of-list marker */ { {NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL @@ -15707,6 +15741,53 @@ assign_ysql_upgrade_mode(bool newval, void *extra) allowSystemTableMods = newval; } +static void +assign_yb_enable_cbo(int new_value, void *extra) +{ + yb_enable_base_scans_cost_model = false; + yb_enable_optimizer_statistics = false; + yb_ignore_stats = false; + + switch(new_value) + { + case YB_COST_MODEL_OFF: + yb_ignore_stats = true; + break; + + case YB_COST_MODEL_ON: + yb_enable_base_scans_cost_model = true; + break; + + case YB_COST_MODEL_LEGACY: + break; + + case YB_COST_MODEL_LEGACY_STATS: + yb_enable_optimizer_statistics = true; + break; + } +} + +static void +assign_yb_enable_optimizer_statistics(bool new_value, void *extra) +{ + yb_enable_optimizer_statistics = new_value; + yb_enable_cbo = (new_value? YB_COST_MODEL_LEGACY_STATS: + (yb_enable_base_scans_cost_model? YB_COST_MODEL_ON: + YB_COST_MODEL_LEGACY)); + yb_ignore_stats = false; +} + +static void +assign_yb_enable_base_scans_cost_model(bool new_value, void *extra) +{ + yb_enable_base_scans_cost_model = new_value; + yb_enable_cbo = (new_value? YB_COST_MODEL_ON: + (yb_enable_optimizer_statistics? 
+ YB_COST_MODEL_LEGACY_STATS: + YB_COST_MODEL_LEGACY)); + yb_ignore_stats = false; +} + static bool check_max_backoff(int *max_backoff_msecs, void **extra, GucSource source) { diff --git a/src/postgres/src/include/optimizer/cost.h b/src/postgres/src/include/optimizer/cost.h index 7c377b197446..bac9d62a196a 100644 --- a/src/postgres/src/include/optimizer/cost.h +++ b/src/postgres/src/include/optimizer/cost.h @@ -88,6 +88,15 @@ typedef enum CONSTRAINT_EXCLUSION_PARTITION /* apply c_e to otherrels only */ } ConstraintExclusionType; +/* possible values for yb_enable_cbo */ +typedef enum +{ + YB_COST_MODEL_LEGACY = -2, + YB_COST_MODEL_LEGACY_STATS = -1, + YB_COST_MODEL_OFF = 0, + YB_COST_MODEL_ON, +} YbCostModel; + /* * prototypes for costsize.c @@ -145,6 +154,8 @@ extern PGDLLIMPORT bool yb_enable_geolocation_costing; */ extern PGDLLIMPORT bool yb_enable_batchednl; extern PGDLLIMPORT bool yb_enable_parallel_append; +extern PGDLLIMPORT YbCostModel yb_enable_cbo; +extern PGDLLIMPORT bool yb_ignore_stats; extern double index_pages_fetched(double tuples_fetched, BlockNumber pages, double index_pages, PlannerInfo *root); diff --git a/src/postgres/src/test/regress/expected/yb.orig.optimizer_guc.out b/src/postgres/src/test/regress/expected/yb.orig.optimizer_guc.out new file mode 100644 index 000000000000..a3bef4f807f6 --- /dev/null +++ b/src/postgres/src/test/regress/expected/yb.orig.optimizer_guc.out @@ -0,0 +1,279 @@ +reset yb_enable_cbo; +reset yb_enable_base_scans_cost_model; +reset yb_enable_optimizer_statistics; +show yb_enable_cbo; + yb_enable_cbo +--------------- + legacy_mode +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + off +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + off +(1 row) + +-- change yb_enable_cbo +set yb_enable_cbo = on; +show yb_enable_cbo; + yb_enable_cbo +--------------- + on +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + on +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + off +(1 row) + +set yb_enable_cbo = off; +show yb_enable_cbo; + yb_enable_cbo +--------------- + off +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + off +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + off +(1 row) + +set yb_enable_cbo = legacy_mode; +show yb_enable_cbo; + yb_enable_cbo +--------------- + legacy_mode +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + off +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + off +(1 row) + +set yb_enable_cbo = legacy_stats_mode; +show yb_enable_cbo; + yb_enable_cbo +------------------- + legacy_stats_mode +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + off +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + on +(1 row) + +set yb_enable_cbo = off; +show yb_enable_cbo; + yb_enable_cbo +--------------- + off +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + off +(1 row) + +show 
yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + off +(1 row) + +-- turn on/off old parameters +set yb_enable_base_scans_cost_model = off; +show yb_enable_cbo; + yb_enable_cbo +--------------- + legacy_mode +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + off +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + off +(1 row) + +set yb_enable_optimizer_statistics = off; +show yb_enable_cbo; + yb_enable_cbo +--------------- + legacy_mode +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + off +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + off +(1 row) + +set yb_enable_optimizer_statistics = on; +show yb_enable_cbo; + yb_enable_cbo +------------------- + legacy_stats_mode +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + off +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + on +(1 row) + +set yb_enable_base_scans_cost_model = on; +show yb_enable_cbo; + yb_enable_cbo +--------------- + on +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + on +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + on +(1 row) + +set yb_enable_base_scans_cost_model = off; +show yb_enable_cbo; + yb_enable_cbo +------------------- + legacy_stats_mode +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + off +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + on +(1 row) + +set yb_enable_optimizer_statistics = off; +show yb_enable_cbo; + yb_enable_cbo +--------------- + legacy_mode +(1 row) + +show yb_enable_base_scans_cost_model; + yb_enable_base_scans_cost_model +--------------------------------- + off +(1 row) + +show yb_enable_optimizer_statistics; + yb_enable_optimizer_statistics +-------------------------------- + off +(1 row) + +-- boolean aliases +set yb_enable_cbo = true; +show yb_enable_cbo; + yb_enable_cbo +--------------- + on +(1 row) + +set yb_enable_cbo = false; +show yb_enable_cbo; + yb_enable_cbo +--------------- + off +(1 row) + +set yb_enable_cbo = yes; +show yb_enable_cbo; + yb_enable_cbo +--------------- + on +(1 row) + +set yb_enable_cbo = no; +show yb_enable_cbo; + yb_enable_cbo +--------------- + off +(1 row) + +set yb_enable_cbo = 1; +show yb_enable_cbo; + yb_enable_cbo +--------------- + on +(1 row) + +set yb_enable_cbo = 0; +show yb_enable_cbo; + yb_enable_cbo +--------------- + off +(1 row) + +-- error +set yb_enable_cbo = oui; +ERROR: invalid value for parameter "yb_enable_cbo": "oui" +HINT: Available values: off, on, legacy_mode, legacy_stats_mode. 
diff --git a/src/postgres/src/test/regress/expected/yb.orig.row_estimates.out b/src/postgres/src/test/regress/expected/yb.orig.row_estimates.out index 4f135a040058..06dcf2ef7bf0 100644 --- a/src/postgres/src/test/regress/expected/yb.orig.row_estimates.out +++ b/src/postgres/src/test/regress/expected/yb.orig.row_estimates.out @@ -1,5 +1,8 @@ -- Test row count estimates set client_min_messages = 'warning'; +drop function if exists check_estimated_rows, get_estimated_rows; +-- following plpgsql functions borrowed from vanilla pg tests +-- from gin.sql, removed guc settings and ANALYZE option create function explain_query_json(query_sql text) returns table (explain_line json) language plpgsql as @@ -8,40 +11,507 @@ begin return query execute 'EXPLAIN (FORMAT json) ' || query_sql; end; $$; +-- from stats_ext.sql, removed the analyze option. +-- check the number of estimated/actual rows in the top node +create function check_estimated_rows(text) returns table (estimated int, actual int) +language plpgsql as +$$ +declare + ln text; + tmp text[]; + first_row bool := true; +begin + for ln in + execute format('explain analyze %s', $1) + loop + if first_row then + first_row := false; + tmp := regexp_match(ln, 'rows=(\d*) .* rows=(\d*)'); + return query select tmp[1]::int, tmp[2]::int; + end if; + end loop; +end; +$$; +-- modified the above to return estimates only +create function get_estimated_rows(text) returns table (estimated int) +language plpgsql as +$$ +declare + ln text; + tmp text[]; + first_row bool := true; +begin + for ln in + execute format('explain %s', $1) + loop + if first_row then + first_row := false; + tmp := regexp_match(ln, 'rows=(\d*)'); + return query select tmp[1]::int; + end if; + end loop; +end; +$$; -- create and populate the test tables with uniformly distributed values -- to make the estimates predictable. drop table if exists r, s; -create table r (a int, b int); -create table s (x int, y int); +create table r (pk int, a int, b int, c char(10), d int, e int, v char(666), primary key (pk asc)); +create table s (x int, y int, z char(10)); create index i_r_a on r (a asc); -create index i_r_b on r (b asc); +create index i_r_a_ge_1k on r (a asc) where a >= 1000; +create unique index i_r_b on r (b asc); +create index i_r_c on r (c asc); insert into r - select i / 5, i from generate_series(1, 12345) i; + select + i, i / 10, i, + -- 'aaa', 'aab', 'aac', ... 
+ concat(chr((((i-1)/26/26) % 26) + ascii('a')), + chr((((i-1)/26) % 26) + ascii('a')), + chr(((i-1) % 26) + ascii('a'))), + i, i / 10, + sha512(('x'||i)::bytea)::bpchar||lpad(sha512((i||'y')::bytea)::bpchar, 536, '#') + from generate_series(1, 12345) i; insert into s - select i / 3, i from generate_series(1, 123) i; + select + i / 3, i, + concat(chr((((i-1)/26/26) % 26) + ascii('a')), + chr((((i-1)/26) % 26) + ascii('a')), + chr(((i-1) % 26) + ascii('a'))) + from generate_series(1, 123) i; +-- store test queries in a table +drop table if exists queries; +create table queries (id serial, hard boolean, query text, primary key (id asc)); +insert into queries values + (DEFAULT, false, 'select * from r'), + (DEFAULT, false, 'select * from r where pk = 123'), + (DEFAULT, false, 'select * from r where pk < 123'), + (DEFAULT, false, 'select * from r where pk > 123'), + (DEFAULT, false, 'select * from r where pk <= 123'), + (DEFAULT, false, 'select * from r where pk >= 123'), + (DEFAULT, false, 'select * from r where pk <> 123'), + (DEFAULT, false, 'select * from r where pk between 123 and 456'), + (DEFAULT, false, 'select * from r where a = 123'), + (DEFAULT, false, 'select * from r where a < 123'), + (DEFAULT, false, 'select * from r where a > 123'), + (DEFAULT, false, 'select * from r where a <= 123'), + (DEFAULT, false, 'select * from r where a >= 123'), + (DEFAULT, false, 'select * from r where a <> 123'), + (DEFAULT, false, 'select * from r where a between 123 and 456'), + (DEFAULT, false, 'select * from r where b = 123'), + (DEFAULT, false, 'select * from r where b < 123'), + (DEFAULT, false, 'select * from r where b > 123'), + (DEFAULT, false, 'select * from r where b <= 123'), + (DEFAULT, false, 'select * from r where b >= 123'), + (DEFAULT, false, 'select * from r where b <> 123'), + (DEFAULT, false, 'select * from r where b between 123 and 456'), + (DEFAULT, false, 'select * from r where d = 123'), + (DEFAULT, false, 'select * from r where d < 123'), + (DEFAULT, false, 'select * from r where d > 123'), + (DEFAULT, false, 'select * from r where d <= 123'), + (DEFAULT, false, 'select * from r where d >= 123'), + (DEFAULT, false, 'select * from r where d <> 123'), + (DEFAULT, false, 'select * from r where d between 123 and 456'), + (DEFAULT, false, 'select * from r where e = 123'), + (DEFAULT, false, 'select * from r where e < 123'), + (DEFAULT, false, 'select * from r where e > 123'), + (DEFAULT, false, 'select * from r where e <= 123'), + (DEFAULT, false, 'select * from r where e >= 123'), + (DEFAULT, false, 'select * from r where e <> 123'), + (DEFAULT, false, 'select * from r where e between 123 and 456'), + (DEFAULT, false, 'select * from r where c = ''abc'''), + (DEFAULT, true, 'select * from r where c < ''ab'''), + (DEFAULT, true, 'select * from r where c > ''ab'''), + (DEFAULT, true, 'select * from r where c <= ''ab'''), + (DEFAULT, true, 'select * from r where c >= ''ab'''), + (DEFAULT, false, 'select * from r where c <> ''abc'''), + (DEFAULT, true, 'select * from r where c between ''ab'' and ''bz'''), + (DEFAULT, true, 'select * from r where c like ''ab%'''), + (DEFAULT, true, 'select * from r where c not like ''ab%'''), + (DEFAULT, false, 'select count(*) from r group by pk'), + (DEFAULT, false, 'select count(*) from r group by a'), + (DEFAULT, false, 'select count(*) from r group by b'), + (DEFAULT, false, 'select count(*) from r group by c'), + (DEFAULT, false, 'select count(*) from r group by d'), + (DEFAULT, false, 'select count(*) from r group by e'), + (DEFAULT, false, 
'select distinct pk from r'), + (DEFAULT, false, 'select distinct a from r'), + (DEFAULT, false, 'select distinct b from r'), + (DEFAULT, false, 'select distinct c from r'), + (DEFAULT, false, 'select distinct d from r'), + (DEFAULT, false, 'select distinct e from r'), + (DEFAULT, true, 'select * from r, s where pk = x'), + (DEFAULT, true, 'select * from r, s where pk = y'), + (DEFAULT, true, 'select * from r, s where a = x'), + (DEFAULT, true, 'select * from r, s where a = y'), + (DEFAULT, true, 'select * from r, s where b = x'), + (DEFAULT, true, 'select * from r, s where b = y'), + (DEFAULT, true, 'select * from r, s where d = x'), + (DEFAULT, true, 'select * from r, s where d = y'), + (DEFAULT, true, 'select * from r, s where e = x'), + (DEFAULT, true, 'select * from r, s where e = y'), + (DEFAULT, true, 'select * from r, s where c = z'), + (DEFAULT, true, 'select * from r, s where pk < x'), + (DEFAULT, true, 'select * from r, s where pk < y'), + (DEFAULT, true, 'select * from r, s where a < x'), + (DEFAULT, true, 'select * from r, s where a < y'), + (DEFAULT, true, 'select * from r, s where b < x'), + (DEFAULT, true, 'select * from r, s where b < y'), + (DEFAULT, true, 'select * from r, s where d < x'), + (DEFAULT, true, 'select * from r, s where d < y'), + (DEFAULT, true, 'select * from r, s where e < x'), + (DEFAULT, true, 'select * from r, s where e < y'), + (DEFAULT, true, 'select * from r, s where c < z'), + (DEFAULT, true, 'select * from r, s where x < pk'), + (DEFAULT, true, 'select * from r, s where y < pk'), + (DEFAULT, true, 'select * from r, s where x < a'), + (DEFAULT, true, 'select * from r, s where y < a'), + (DEFAULT, true, 'select * from r, s where x < b'), + (DEFAULT, true, 'select * from r, s where y < b'), + (DEFAULT, true, 'select * from r, s where x < d'), + (DEFAULT, true, 'select * from r, s where y < d'), + (DEFAULT, true, 'select * from r, s where x < e'), + (DEFAULT, true, 'select * from r, s where y < e'), + (DEFAULT, true, 'select * from r, s where z < c'), + (DEFAULT, false, 'select * from r where a = 10 and e = 10'), + (DEFAULT, false, 'select 0 from r group by a, e'), + (DEFAULT, false, 'select distinct a, e from r'), + (DEFAULT, false, 'select * from r where b % 5 = 0'), + (DEFAULT, false, 'select distinct b % 5 from r'), + (DEFAULT, false, 'select 0 from r group by b % 5'), + (DEFAULT, false, 'select 0 from r where a between 1000 and 1200'); +-- +-- check estimates with different yb_enable_cbo values +-- +drop table if exists off_before_analyze, off_after_analyze; +drop table if exists on_before_analyze, on_after_analyze; +drop table if exists legacy_before_analyze, legacy_after_analyze; +drop table if exists legacy_stats_before_analyze, legacy_stats_after_analyze; +drop table if exists cbo_estimates; +-- save the estimates before analyze +set yb_enable_cbo = off; +create temporary table off_before_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; +set yb_enable_cbo = on; +create temporary table on_before_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; +set yb_enable_cbo = legacy_mode; +create temporary table legacy_before_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; +set yb_enable_cbo = legacy_stats_mode; +create temporary table legacy_stats_before_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; +-- create extended stats then analyze +create statistics r_a_e on a, e from r; +create statistics r_bmod5 on (b % 5) from r; 
analyze r, s; --- parameterized filter condition in Bitmap Table Scan. --- the selectivity should be close to DEFAULT_INEQ_SEL (0.3333333333333333). -set yb_enable_base_scans_cost_model = on; +-- save the estimates after analyze +set yb_enable_cbo = off; +create temporary table off_after_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; +set yb_enable_cbo = on; +create temporary table on_after_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; +set yb_enable_cbo = legacy_mode; +create temporary table legacy_after_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; +set yb_enable_cbo = legacy_stats_mode; +create temporary table legacy_stats_after_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; +-- see how the estimates change before vs. after analyze +--- no predicate --- +-- off: default -> unchanged +select b.id, b.query, b.rows before, a.rows after + from off_before_analyze b join off_after_analyze a using (id) + where id = 1 order by id; + id | query | before | after +----+-----------------+--------+------- + 1 | select * from r | 1000 | 1000 +(1 row) + +-- on, legacy, legacy_stats: default -> accurate +select b.id, b.query, b.rows before, a.rows after + from on_before_analyze b join on_after_analyze a using (id) + where id = 1 order by id; + id | query | before | after +----+-----------------+--------+------- + 1 | select * from r | 1000 | 12345 +(1 row) + +select b.id, b.query, b.rows before, a.rows after + from legacy_before_analyze b join legacy_after_analyze a using (id) + where id = 1 order by id; + id | query | before | after +----+-----------------+--------+------- + 1 | select * from r | 1000 | 12345 +(1 row) + +select b.id, b.query, b.rows before, a.rows after + from legacy_stats_before_analyze b join legacy_stats_after_analyze a using (id) + where id = 1 order by id; + id | query | before | after +----+-----------------+--------+------- + 1 | select * from r | 1000 | 12345 +(1 row) + +--- equality --- +-- off: +-- unique indexed: 1 -> unchanged +-- non-unique indexed: 1% (YBC_SINGLE_KEY_SELECTIVITY) of 1000 -> unchanged +-- unindexed: 1000 (predicate ignored) -> unchanged +select b.id, b.query, b.rows before, a.rows after + from off_before_analyze b join off_after_analyze a using (id) + where b.query ~ 'p*[kabde] = 123' order by id; + id | query | before | after +----+--------------------------------+--------+------- + 2 | select * from r where pk = 123 | 1 | 1 + 9 | select * from r where a = 123 | 10 | 10 + 16 | select * from r where b = 123 | 1 | 1 + 23 | select * from r where d = 123 | 1000 | 1000 + 30 | select * from r where e = 123 | 1000 | 1000 +(5 rows) + +-- on, legacy_stats: +-- unique indexed: 1 -> 1 (unchanged, accurate) +-- others (non-unique indexed & unindexed): 5% (DEFAULT_EQ_SEL) of 1000 -> accurate +select b.id, b.query, b.rows before, a.rows after + from on_before_analyze b join on_after_analyze a using (id) + where b.query ~ 'p*[kabde] = 123' order by id; + id | query | before | after +----+--------------------------------+--------+------- + 2 | select * from r where pk = 123 | 1 | 1 + 9 | select * from r where a = 123 | 5 | 10 + 16 | select * from r where b = 123 | 1 | 1 + 23 | select * from r where d = 123 | 5 | 1 + 30 | select * from r where e = 123 | 5 | 10 +(5 rows) + +select b.id, b.query, b.rows before, a.rows after + from legacy_stats_before_analyze b join legacy_stats_after_analyze a using (id) + where b.query ~ 'p*[kabde] = 123' 
order by id; + id | query | before | after +----+--------------------------------+--------+------- + 2 | select * from r where pk = 123 | 1 | 1 + 9 | select * from r where a = 123 | 5 | 10 + 16 | select * from r where b = 123 | 1 | 1 + 23 | select * from r where d = 123 | 5 | 1 + 30 | select * from r where e = 123 | 5 | 10 +(5 rows) + +-- legacy: +-- unique indexed: 1 -> 1 (unchanged, accurate) +-- non-unique indexed: 1% of 1000 -> 1% of 12345 (predicate ignored) +-- unindexed: 1000 -> 12345 (predicate ignored) +select b.id, b.query, b.rows before, a.rows after + from legacy_before_analyze b join legacy_after_analyze a using (id) + where b.query ~ 'p*[kabde] = 123' order by id; + id | query | before | after +----+--------------------------------+--------+------- + 2 | select * from r where pk = 123 | 1 | 1 + 9 | select * from r where a = 123 | 10 | 123 + 16 | select * from r where b = 123 | 1 | 1 + 23 | select * from r where d = 123 | 1000 | 12345 + 30 | select * from r where e = 123 | 1000 | 12345 +(5 rows) + +--- group by --- +-- off: +-- unique indexed: 1000 -> unchanged +-- others: 200 (DEFAULT_NUM_DISTINCT) -> unchanged +select b.id, b.query, b.rows before, a.rows after + from off_before_analyze b join off_after_analyze a using (id) + where b.query ~ 'group by p*[kabde]$' order by id; + id | query | before | after +----+------------------------------------+--------+------- + 46 | select count(*) from r group by pk | 1000 | 1000 + 47 | select count(*) from r group by a | 200 | 200 + 48 | select count(*) from r group by b | 1000 | 1000 + 50 | select count(*) from r group by d | 200 | 200 + 51 | select count(*) from r group by e | 200 | 200 +(5 rows) + +-- on, legacy_stats, legacy: +-- unique indexed: 1000 -> 1000 (unchanged, accurate) +-- others: 200 -> accurate +select b.id, b.query, b.rows before, a.rows after + from on_before_analyze b join on_after_analyze a using (id) + where b.query ~ 'group by p*[kabde]$' order by id; + id | query | before | after +----+------------------------------------+--------+------- + 46 | select count(*) from r group by pk | 1000 | 12345 + 47 | select count(*) from r group by a | 200 | 1235 + 48 | select count(*) from r group by b | 1000 | 12345 + 50 | select count(*) from r group by d | 200 | 12345 + 51 | select count(*) from r group by e | 200 | 1235 +(5 rows) + +select b.id, b.query, b.rows before, a.rows after + from legacy_stats_before_analyze b join legacy_stats_after_analyze a using (id) + where b.query ~ 'group by p*[kabde]$' order by id; + id | query | before | after +----+------------------------------------+--------+------- + 46 | select count(*) from r group by pk | 1000 | 12345 + 47 | select count(*) from r group by a | 200 | 1235 + 48 | select count(*) from r group by b | 1000 | 12345 + 50 | select count(*) from r group by d | 200 | 12345 + 51 | select count(*) from r group by e | 200 | 1235 +(5 rows) + +select b.id, b.query, b.rows before, a.rows after + from legacy_before_analyze b join legacy_after_analyze a using (id) + where b.query ~ 'group by p*[kabde]$' order by id; + id | query | before | after +----+------------------------------------+--------+------- + 46 | select count(*) from r group by pk | 1000 | 12345 + 47 | select count(*) from r group by a | 200 | 1235 + 48 | select count(*) from r group by b | 1000 | 12345 + 50 | select count(*) from r group by d | 200 | 12345 + 51 | select count(*) from r group by e | 200 | 1235 +(5 rows) + +--- equi-join --- +-- off: +-- unique indexed: 1000 -> unchanged +-- others: 1000 * 1000 * 
(1/200) -> unchanged +select b.id, b.query, b.rows before, a.rows after + from off_before_analyze b join off_after_analyze a using (id) + where b.query ~ 'where p*[kabde] = [xy]' order by id; + id | query | before | after +----+---------------------------------+--------+------- + 58 | select * from r, s where pk = x | 1000 | 1000 + 59 | select * from r, s where pk = y | 1000 | 1000 + 60 | select * from r, s where a = x | 5000 | 5000 + 61 | select * from r, s where a = y | 5000 | 5000 + 62 | select * from r, s where b = x | 1000 | 1000 + 63 | select * from r, s where b = y | 1000 | 1000 + 64 | select * from r, s where d = x | 5000 | 5000 + 65 | select * from r, s where d = y | 5000 | 5000 + 66 | select * from r, s where e = x | 5000 | 5000 + 67 | select * from r, s where e = y | 5000 | 5000 +(10 rows) + +-- on, legacy_stats, legacy: +-- unique indexed: 1000 -> 123 (accurate) +-- others: 1000 * 1000 * (1/200) -> 123 or 1230 (accurate) +select b.id, b.query, b.rows before, a.rows after + from on_before_analyze b join on_after_analyze a using (id) + where b.query ~ 'where p*[kabde] = [xy]' order by id; + id | query | before | after +----+---------------------------------+--------+------- + 58 | select * from r, s where pk = x | 1000 | 123 + 59 | select * from r, s where pk = y | 1000 | 123 + 60 | select * from r, s where a = x | 5000 | 1230 + 61 | select * from r, s where a = y | 5000 | 1230 + 62 | select * from r, s where b = x | 1000 | 123 + 63 | select * from r, s where b = y | 1000 | 123 + 64 | select * from r, s where d = x | 5000 | 123 + 65 | select * from r, s where d = y | 5000 | 123 + 66 | select * from r, s where e = x | 5000 | 1230 + 67 | select * from r, s where e = y | 5000 | 1230 +(10 rows) + +select b.id, b.query, b.rows before, a.rows after + from legacy_stats_before_analyze b join legacy_stats_after_analyze a using (id) + where b.query ~ 'where p*[kabde] = [xy]' order by id; + id | query | before | after +----+---------------------------------+--------+------- + 58 | select * from r, s where pk = x | 1000 | 123 + 59 | select * from r, s where pk = y | 1000 | 123 + 60 | select * from r, s where a = x | 5000 | 1230 + 61 | select * from r, s where a = y | 5000 | 1230 + 62 | select * from r, s where b = x | 1000 | 123 + 63 | select * from r, s where b = y | 1000 | 123 + 64 | select * from r, s where d = x | 5000 | 123 + 65 | select * from r, s where d = y | 5000 | 123 + 66 | select * from r, s where e = x | 5000 | 1230 + 67 | select * from r, s where e = y | 5000 | 1230 +(10 rows) + +select b.id, b.query, b.rows before, a.rows after + from legacy_before_analyze b join legacy_after_analyze a using (id) + where b.query ~ 'where p*[kabde] = [xy]' order by id; + id | query | before | after +----+---------------------------------+--------+------- + 58 | select * from r, s where pk = x | 1000 | 123 + 59 | select * from r, s where pk = y | 1000 | 123 + 60 | select * from r, s where a = x | 5000 | 1230 + 61 | select * from r, s where a = y | 5000 | 1230 + 62 | select * from r, s where b = x | 1000 | 123 + 63 | select * from r, s where b = y | 1000 | 123 + 64 | select * from r, s where d = x | 5000 | 123 + 65 | select * from r, s where d = y | 5000 | 123 + 66 | select * from r, s where e = x | 5000 | 1230 + 67 | select * from r, s where e = y | 5000 | 1230 +(10 rows) + +-- should return no row as the estimates match +select * +from legacy_before_analyze t1 join off_before_analyze t2 using (id) +where t1.rows <> t2.rows; + id | query | rows | query | rows +----+-------+------+-------+------ 
+(0 rows) + +select * +from off_before_analyze t1 join off_after_analyze t2 using (id) +where t1.rows <> t2.rows; + id | query | rows | query | rows +----+-------+------+-------+------ +(0 rows) + +-- +-- check CBO estimates for easy queries +-- +set yb_enable_cbo = on; +create temporary table cbo_estimates ( + id int, + query text, + rows int[] +); +insert into cbo_estimates + select + id, query, + string_to_array(trim(both '()' from check_estimated_rows(query)::bpchar), ',')::int[] + from queries where hard = false; +-- should return no row +select id, query, rows[1] estimated, rows[2] actual +from cbo_estimates +where abs(rows[1] - rows[2]) > 2 + and abs(rows[1] - rows[2])/least(rows[1], rows[2])::float > 0.005; + id | query | estimated | actual +----+-------+-----------+-------- +(0 rows) + +-- +-- test selectivity estimates +-- +analyze r, s; +set yb_enable_cbo = on; set yb_enable_bitmapscan = on; set enable_bitmapscan = on; -set yb_prefer_bnl = off; +-- parameterized filter condition in Bitmap Table Scan. +-- the selectivity should be close to DEFAULT_INEQ_SEL (0.3333333333333333). select bts->'Node Type' bmts, bts->'Storage Filter' bmts_filter, round((bts->'Plan Rows')::text::numeric / (bts->'Plans'->0->'Plan Rows')::text::numeric, 2) sel from - explain_query_json($$/*+ Leading((s r)) NestLoop(s r) BitmapScan(r) */select * from r, s where (a = x or b <= 300) and a + b >= y$$) js, - lateral to_json( - js.explain_line->0->'Plan'->'Plans'->1 - ) bts; + explain_query_json($$/*+ Leading((s r)) NestLoop(s r) BitmapScan(r) Set(yb_bnl_batch_size 1) */select * from r, s where (a = x or b <= 300) and a + b >= y$$) js, + lateral to_json(js.explain_line->0->'Plan'->'Plans'->1) bts; bmts | bmts_filter | sel ------------------------+--------------------+------ "YB Bitmap Table Scan" | "((a + b) >= s.y)" | 0.33 (1 row) explain (costs off) -/*+ Leading((s r)) NestLoop(s r) BitmapScan(r) */select * from r, s where (a = x or b <= 300) and a + b >= y; +/*+ Leading((s r)) NestLoop(s r) BitmapScan(r) Set(yb_bnl_batch_size 1) */select * from r, s where (a = x or b <= 300) and a + b >= y; QUERY PLAN --------------------------------------------------------- Nested Loop @@ -56,3 +526,4 @@ explain (costs off) Index Cond: (b <= 300) (10 rows) +drop table if exists r, s, queries; diff --git a/src/postgres/src/test/regress/sql/yb.orig.optimizer_guc.sql b/src/postgres/src/test/regress/sql/yb.orig.optimizer_guc.sql new file mode 100644 index 000000000000..988541e1854a --- /dev/null +++ b/src/postgres/src/test/regress/sql/yb.orig.optimizer_guc.sql @@ -0,0 +1,82 @@ +reset yb_enable_cbo; +reset yb_enable_base_scans_cost_model; +reset yb_enable_optimizer_statistics; + +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; + +-- change yb_enable_cbo + +set yb_enable_cbo = on; +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; + +set yb_enable_cbo = off; +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; + +set yb_enable_cbo = legacy_mode; +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; + +set yb_enable_cbo = legacy_stats_mode; +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; + +set yb_enable_cbo = off; +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; + + +-- turn on/off old parameters +set yb_enable_base_scans_cost_model = off; +show 
yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; +set yb_enable_optimizer_statistics = off; +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; +set yb_enable_optimizer_statistics = on; +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; +set yb_enable_base_scans_cost_model = on; +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; +set yb_enable_base_scans_cost_model = off; +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; +set yb_enable_optimizer_statistics = off; +show yb_enable_cbo; +show yb_enable_base_scans_cost_model; +show yb_enable_optimizer_statistics; + + +-- boolean aliases + +set yb_enable_cbo = true; +show yb_enable_cbo; +set yb_enable_cbo = false; +show yb_enable_cbo; +set yb_enable_cbo = yes; +show yb_enable_cbo; +set yb_enable_cbo = no; +show yb_enable_cbo; +set yb_enable_cbo = 1; +show yb_enable_cbo; +set yb_enable_cbo = 0; +show yb_enable_cbo; + + +-- error +set yb_enable_cbo = oui; + diff --git a/src/postgres/src/test/regress/sql/yb.orig.row_estimates.sql b/src/postgres/src/test/regress/sql/yb.orig.row_estimates.sql index 197c60e81090..5d4c58746169 100644 --- a/src/postgres/src/test/regress/sql/yb.orig.row_estimates.sql +++ b/src/postgres/src/test/regress/sql/yb.orig.row_estimates.sql @@ -1,7 +1,11 @@ -- Test row count estimates set client_min_messages = 'warning'; +drop function if exists check_estimated_rows, get_estimated_rows; +-- following plpgsql functions borrowed from vanilla pg tests + +-- from gin.sql, removed guc settings and ANALYZE option create function explain_query_json(query_sql text) returns table (explain_line json) language plpgsql as @@ -11,39 +15,401 @@ begin end; $$; +-- from stats_ext.sql, removed the analyze option. +-- check the number of estimated/actual rows in the top node +create function check_estimated_rows(text) returns table (estimated int, actual int) +language plpgsql as +$$ +declare + ln text; + tmp text[]; + first_row bool := true; +begin + for ln in + execute format('explain analyze %s', $1) + loop + if first_row then + first_row := false; + tmp := regexp_match(ln, 'rows=(\d*) .* rows=(\d*)'); + return query select tmp[1]::int, tmp[2]::int; + end if; + end loop; +end; +$$; + +-- modified the above to return estimates only +create function get_estimated_rows(text) returns table (estimated int) +language plpgsql as +$$ +declare + ln text; + tmp text[]; + first_row bool := true; +begin + for ln in + execute format('explain %s', $1) + loop + if first_row then + first_row := false; + tmp := regexp_match(ln, 'rows=(\d*)'); + return query select tmp[1]::int; + end if; + end loop; +end; +$$; + -- create and populate the test tables with uniformly distributed values -- to make the estimates predictable. drop table if exists r, s; -create table r (a int, b int); -create table s (x int, y int); +create table r (pk int, a int, b int, c char(10), d int, e int, v char(666), primary key (pk asc)); +create table s (x int, y int, z char(10)); create index i_r_a on r (a asc); -create index i_r_b on r (b asc); +create index i_r_a_ge_1k on r (a asc) where a >= 1000; +create unique index i_r_b on r (b asc); +create index i_r_c on r (c asc); insert into r - select i / 5, i from generate_series(1, 12345) i; + select + i, i / 10, i, + -- 'aaa', 'aab', 'aac', ... 
+ concat(chr((((i-1)/26/26) % 26) + ascii('a')), + chr((((i-1)/26) % 26) + ascii('a')), + chr(((i-1) % 26) + ascii('a'))), + i, i / 10, + sha512(('x'||i)::bytea)::bpchar||lpad(sha512((i||'y')::bytea)::bpchar, 536, '#') + from generate_series(1, 12345) i; insert into s - select i / 3, i from generate_series(1, 123) i; + select + i / 3, i, + concat(chr((((i-1)/26/26) % 26) + ascii('a')), + chr((((i-1)/26) % 26) + ascii('a')), + chr(((i-1) % 26) + ascii('a'))) + from generate_series(1, 123) i; + + +-- store test queries in a table +drop table if exists queries; +create table queries (id serial, hard boolean, query text, primary key (id asc)); +insert into queries values + (DEFAULT, false, 'select * from r'), + (DEFAULT, false, 'select * from r where pk = 123'), + (DEFAULT, false, 'select * from r where pk < 123'), + (DEFAULT, false, 'select * from r where pk > 123'), + (DEFAULT, false, 'select * from r where pk <= 123'), + (DEFAULT, false, 'select * from r where pk >= 123'), + (DEFAULT, false, 'select * from r where pk <> 123'), + (DEFAULT, false, 'select * from r where pk between 123 and 456'), + (DEFAULT, false, 'select * from r where a = 123'), + (DEFAULT, false, 'select * from r where a < 123'), + (DEFAULT, false, 'select * from r where a > 123'), + (DEFAULT, false, 'select * from r where a <= 123'), + (DEFAULT, false, 'select * from r where a >= 123'), + (DEFAULT, false, 'select * from r where a <> 123'), + (DEFAULT, false, 'select * from r where a between 123 and 456'), + (DEFAULT, false, 'select * from r where b = 123'), + (DEFAULT, false, 'select * from r where b < 123'), + (DEFAULT, false, 'select * from r where b > 123'), + (DEFAULT, false, 'select * from r where b <= 123'), + (DEFAULT, false, 'select * from r where b >= 123'), + (DEFAULT, false, 'select * from r where b <> 123'), + (DEFAULT, false, 'select * from r where b between 123 and 456'), + (DEFAULT, false, 'select * from r where d = 123'), + (DEFAULT, false, 'select * from r where d < 123'), + (DEFAULT, false, 'select * from r where d > 123'), + (DEFAULT, false, 'select * from r where d <= 123'), + (DEFAULT, false, 'select * from r where d >= 123'), + (DEFAULT, false, 'select * from r where d <> 123'), + (DEFAULT, false, 'select * from r where d between 123 and 456'), + (DEFAULT, false, 'select * from r where e = 123'), + (DEFAULT, false, 'select * from r where e < 123'), + (DEFAULT, false, 'select * from r where e > 123'), + (DEFAULT, false, 'select * from r where e <= 123'), + (DEFAULT, false, 'select * from r where e >= 123'), + (DEFAULT, false, 'select * from r where e <> 123'), + (DEFAULT, false, 'select * from r where e between 123 and 456'), + (DEFAULT, false, 'select * from r where c = ''abc'''), + (DEFAULT, true, 'select * from r where c < ''ab'''), + (DEFAULT, true, 'select * from r where c > ''ab'''), + (DEFAULT, true, 'select * from r where c <= ''ab'''), + (DEFAULT, true, 'select * from r where c >= ''ab'''), + (DEFAULT, false, 'select * from r where c <> ''abc'''), + (DEFAULT, true, 'select * from r where c between ''ab'' and ''bz'''), + (DEFAULT, true, 'select * from r where c like ''ab%'''), + (DEFAULT, true, 'select * from r where c not like ''ab%'''), + (DEFAULT, false, 'select count(*) from r group by pk'), + (DEFAULT, false, 'select count(*) from r group by a'), + (DEFAULT, false, 'select count(*) from r group by b'), + (DEFAULT, false, 'select count(*) from r group by c'), + (DEFAULT, false, 'select count(*) from r group by d'), + (DEFAULT, false, 'select count(*) from r group by e'), + (DEFAULT, false, 
'select distinct pk from r'), + (DEFAULT, false, 'select distinct a from r'), + (DEFAULT, false, 'select distinct b from r'), + (DEFAULT, false, 'select distinct c from r'), + (DEFAULT, false, 'select distinct d from r'), + (DEFAULT, false, 'select distinct e from r'), + (DEFAULT, true, 'select * from r, s where pk = x'), + (DEFAULT, true, 'select * from r, s where pk = y'), + (DEFAULT, true, 'select * from r, s where a = x'), + (DEFAULT, true, 'select * from r, s where a = y'), + (DEFAULT, true, 'select * from r, s where b = x'), + (DEFAULT, true, 'select * from r, s where b = y'), + (DEFAULT, true, 'select * from r, s where d = x'), + (DEFAULT, true, 'select * from r, s where d = y'), + (DEFAULT, true, 'select * from r, s where e = x'), + (DEFAULT, true, 'select * from r, s where e = y'), + (DEFAULT, true, 'select * from r, s where c = z'), + (DEFAULT, true, 'select * from r, s where pk < x'), + (DEFAULT, true, 'select * from r, s where pk < y'), + (DEFAULT, true, 'select * from r, s where a < x'), + (DEFAULT, true, 'select * from r, s where a < y'), + (DEFAULT, true, 'select * from r, s where b < x'), + (DEFAULT, true, 'select * from r, s where b < y'), + (DEFAULT, true, 'select * from r, s where d < x'), + (DEFAULT, true, 'select * from r, s where d < y'), + (DEFAULT, true, 'select * from r, s where e < x'), + (DEFAULT, true, 'select * from r, s where e < y'), + (DEFAULT, true, 'select * from r, s where c < z'), + (DEFAULT, true, 'select * from r, s where x < pk'), + (DEFAULT, true, 'select * from r, s where y < pk'), + (DEFAULT, true, 'select * from r, s where x < a'), + (DEFAULT, true, 'select * from r, s where y < a'), + (DEFAULT, true, 'select * from r, s where x < b'), + (DEFAULT, true, 'select * from r, s where y < b'), + (DEFAULT, true, 'select * from r, s where x < d'), + (DEFAULT, true, 'select * from r, s where y < d'), + (DEFAULT, true, 'select * from r, s where x < e'), + (DEFAULT, true, 'select * from r, s where y < e'), + (DEFAULT, true, 'select * from r, s where z < c'), + (DEFAULT, false, 'select * from r where a = 10 and e = 10'), + (DEFAULT, false, 'select 0 from r group by a, e'), + (DEFAULT, false, 'select distinct a, e from r'), + (DEFAULT, false, 'select * from r where b % 5 = 0'), + (DEFAULT, false, 'select distinct b % 5 from r'), + (DEFAULT, false, 'select 0 from r group by b % 5'), + (DEFAULT, false, 'select 0 from r where a between 1000 and 1200'); + + +-- +-- check estimates with different yb_enable_cbo values +-- + +drop table if exists off_before_analyze, off_after_analyze; +drop table if exists on_before_analyze, on_after_analyze; +drop table if exists legacy_before_analyze, legacy_after_analyze; +drop table if exists legacy_stats_before_analyze, legacy_stats_after_analyze; +drop table if exists cbo_estimates; +-- save the estimates before analyze + +set yb_enable_cbo = off; +create temporary table off_before_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; + +set yb_enable_cbo = on; +create temporary table on_before_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; + +set yb_enable_cbo = legacy_mode; +create temporary table legacy_before_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; + +set yb_enable_cbo = legacy_stats_mode; +create temporary table legacy_stats_before_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; + + +-- create extended stats then analyze + +create statistics r_a_e on a, e from r; +create statistics r_bmod5 on 
(b % 5) from r; analyze r, s; --- parameterized filter condition in Bitmap Table Scan. --- the selectivity should be close to DEFAULT_INEQ_SEL (0.3333333333333333). -set yb_enable_base_scans_cost_model = on; +-- save the estimates after analyze + +set yb_enable_cbo = off; +create temporary table off_after_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; + +set yb_enable_cbo = on; +create temporary table on_after_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; + +set yb_enable_cbo = legacy_mode; +create temporary table legacy_after_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; + +set yb_enable_cbo = legacy_stats_mode; +create temporary table legacy_stats_after_analyze as + select id, query, get_estimated_rows(query) rows + from queries q; + + +-- see how the estimates change before vs. after analyze + +--- no predicate --- +-- off: default -> unchanged +select b.id, b.query, b.rows before, a.rows after + from off_before_analyze b join off_after_analyze a using (id) + where id = 1 order by id; + +-- on, legacy, legacy_stats: default -> accurate +select b.id, b.query, b.rows before, a.rows after + from on_before_analyze b join on_after_analyze a using (id) + where id = 1 order by id; + +select b.id, b.query, b.rows before, a.rows after + from legacy_before_analyze b join legacy_after_analyze a using (id) + where id = 1 order by id; + +select b.id, b.query, b.rows before, a.rows after + from legacy_stats_before_analyze b join legacy_stats_after_analyze a using (id) + where id = 1 order by id; + + +--- equality --- +-- off: +-- unique indexed: 1 -> unchanged +-- non-unique indexed: 1% (YBC_SINGLE_KEY_SELECTIVITY) of 1000 -> unchanged +-- unindexed: 1000 (predicate ignored) -> unchanged +select b.id, b.query, b.rows before, a.rows after + from off_before_analyze b join off_after_analyze a using (id) + where b.query ~ 'p*[kabde] = 123' order by id; + +-- on, legacy_stats: +-- unique indexed: 1 -> 1 (unchanged, accurate) +-- others (non-unique indexed & unindexed): 5% (DEFAULT_EQ_SEL) of 1000 -> accurate +select b.id, b.query, b.rows before, a.rows after + from on_before_analyze b join on_after_analyze a using (id) + where b.query ~ 'p*[kabde] = 123' order by id; + +select b.id, b.query, b.rows before, a.rows after + from legacy_stats_before_analyze b join legacy_stats_after_analyze a using (id) + where b.query ~ 'p*[kabde] = 123' order by id; + +-- legacy: +-- unique indexed: 1 -> 1 (unchanged, accurate) +-- non-unique indexed: 1% of 1000 -> 1% of 12345 (predicate ignored) +-- unindexed: 1000 -> 12345 (predicate ignored) +select b.id, b.query, b.rows before, a.rows after + from legacy_before_analyze b join legacy_after_analyze a using (id) + where b.query ~ 'p*[kabde] = 123' order by id; + + +--- group by --- +-- off: +-- unique indexed: 1000 -> unchanged +-- others: 200 (DEFAULT_NUM_DISTINCT) -> unchanged +select b.id, b.query, b.rows before, a.rows after + from off_before_analyze b join off_after_analyze a using (id) + where b.query ~ 'group by p*[kabde]$' order by id; + +-- on, legacy_stats, legacy: +-- unique indexed: 1000 -> 1000 (unchanged, accurate) +-- others: 200 -> accurate +select b.id, b.query, b.rows before, a.rows after + from on_before_analyze b join on_after_analyze a using (id) + where b.query ~ 'group by p*[kabde]$' order by id; + +select b.id, b.query, b.rows before, a.rows after + from legacy_stats_before_analyze b join legacy_stats_after_analyze a using (id) + where b.query ~ 
'group by p*[kabde]$' order by id; + +select b.id, b.query, b.rows before, a.rows after + from legacy_before_analyze b join legacy_after_analyze a using (id) + where b.query ~ 'group by p*[kabde]$' order by id; + + +--- equi-join --- +-- off: +-- unique indexed: 1000 -> unchanged +-- others: 1000 * 1000 * (1/200) -> unchanged +select b.id, b.query, b.rows before, a.rows after + from off_before_analyze b join off_after_analyze a using (id) + where b.query ~ 'where p*[kabde] = [xy]' order by id; + +-- on, legacy_stats, legacy: +-- unique indexed: 1000 -> 123 (accurate) +-- others: 1000 * 1000 * (1/200) -> 123 or 1230 (accurate) +select b.id, b.query, b.rows before, a.rows after + from on_before_analyze b join on_after_analyze a using (id) + where b.query ~ 'where p*[kabde] = [xy]' order by id; + +select b.id, b.query, b.rows before, a.rows after + from legacy_stats_before_analyze b join legacy_stats_after_analyze a using (id) + where b.query ~ 'where p*[kabde] = [xy]' order by id; + +select b.id, b.query, b.rows before, a.rows after + from legacy_before_analyze b join legacy_after_analyze a using (id) + where b.query ~ 'where p*[kabde] = [xy]' order by id; + + +-- should return no row as the estimates match +select * +from legacy_before_analyze t1 join off_before_analyze t2 using (id) +where t1.rows <> t2.rows; + +select * +from off_before_analyze t1 join off_after_analyze t2 using (id) +where t1.rows <> t2.rows; + + +-- +-- check CBO estimates for easy queries +-- + +set yb_enable_cbo = on; + +create temporary table cbo_estimates ( + id int, + query text, + rows int[] +); + +insert into cbo_estimates + select + id, query, + string_to_array(trim(both '()' from check_estimated_rows(query)::bpchar), ',')::int[] + from queries where hard = false; + +-- should return no row +select id, query, rows[1] estimated, rows[2] actual +from cbo_estimates +where abs(rows[1] - rows[2]) > 2 + and abs(rows[1] - rows[2])/least(rows[1], rows[2])::float > 0.005; + + +-- +-- test selectivity estimates +-- +analyze r, s; + +set yb_enable_cbo = on; set yb_enable_bitmapscan = on; set enable_bitmapscan = on; -set yb_prefer_bnl = off; +-- parameterized filter condition in Bitmap Table Scan. +-- the selectivity should be close to DEFAULT_INEQ_SEL (0.3333333333333333). 
select bts->'Node Type' bmts, bts->'Storage Filter' bmts_filter, round((bts->'Plan Rows')::text::numeric / (bts->'Plans'->0->'Plan Rows')::text::numeric, 2) sel from - explain_query_json($$/*+ Leading((s r)) NestLoop(s r) BitmapScan(r) */select * from r, s where (a = x or b <= 300) and a + b >= y$$) js, - lateral to_json( - js.explain_line->0->'Plan'->'Plans'->1 - ) bts; + explain_query_json($$/*+ Leading((s r)) NestLoop(s r) BitmapScan(r) Set(yb_bnl_batch_size 1) */select * from r, s where (a = x or b <= 300) and a + b >= y$$) js, + lateral to_json(js.explain_line->0->'Plan'->'Plans'->1) bts; + explain (costs off) -/*+ Leading((s r)) NestLoop(s r) BitmapScan(r) */select * from r, s where (a = x or b <= 300) and a + b >= y; +/*+ Leading((s r)) NestLoop(s r) BitmapScan(r) Set(yb_bnl_batch_size 1) */select * from r, s where (a = x or b <= 300) and a + b >= y; + + +drop table if exists r, s, queries; diff --git a/src/postgres/src/test/regress/yb_planner_serial_schedule b/src/postgres/src/test/regress/yb_planner_serial_schedule index 52392e6ed768..caf22fdfafae 100644 --- a/src/postgres/src/test/regress/yb_planner_serial_schedule +++ b/src/postgres/src/test/regress/yb_planner_serial_schedule @@ -11,3 +11,4 @@ test: yb.orig.planner_heuristic_selectivity test: yb.orig.planner_base_scans_cost_model_colocated test: yb.orig.planner_base_scans_cost_model test: yb.orig.row_estimates +test: yb.orig.optimizer_guc From 909275e8b02c6ccbf3fc9e9484bdbc92c085b751 Mon Sep 17 00:00:00 2001 From: Zachary Drudi Date: Thu, 15 May 2025 10:51:38 -0400 Subject: [PATCH 098/146] [#27152] docdb: tservers include their lease epoch in ysql lease refresh requests Summary: Currently tservers do not report what they think their lease epoch is when they refresh their ysql lease with the master leader. If the response from the master leader is lost the first time a tserver acquires a new lease, then the tserver will never restart its pg sessions and update its local copy of its lease epoch. This will prevent the tserver from hosting any DDLs until it is restarted. This diff adds what the tserver thinks its current lease epoch is to the lease refresh request. The master leader will set the new_lease bit and provide the acquired locks to bootstrap both if it is giving the tserver a new lease, and also if the tserver does not supply the correct lease epoch in its lease refresh request. This way regardless of how many lease refresh requests are lost when the communication is successful the tserver will bootstrap and restart PG appropriately. This diff also removes the `needs_bootstrap` field from `RefreshYsqlLeaseRequestPB`. It is redundant with `current_lease_epoch` and removing it streamlines the logic. This removal was done unsafely because the `needs_bootstrap` field only went into the 2025.1 branch and it hasn't shipped to any customers yet. We plan to backport this change before 2025.1 is shipped to any customers. **Upgrade/Rollback safety:** This diff contains modifications to protos whose use is gated behind test flags in all shipped versions. We will backport this diff to 2025.1 before it ships. 
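As a reading aid (not part of the change itself), the master leader's decision described above can be summarized by the following sketch. The function and parameter names are illustrative and do not appear in the diff.

```cpp
#include <cstdint>
#include <optional>

// Illustrative sketch of the refresh decision described above: the master
// leader replies with new_lease=true (plus the lock entries needed to
// bootstrap) when it has just granted a lease, or when the tserver's
// reported lease epoch is missing or stale.
bool ShouldReplyWithNewLease(bool lease_newly_granted,
                             uint64_t current_lease_epoch,
                             const std::optional<uint64_t>& reported_lease_epoch) {
  return lease_newly_granted ||
         !reported_lease_epoch.has_value() ||
         *reported_lease_epoch != current_lease_epoch;
}
```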
Jira: DB-16636 Test Plan: ``` % ./yb_build.sh fastdebug --cxx-test object_lock-test --gtest_filter 'ExternalObjectLockTest.RefreshYsqlLease' % ./yb_build.sh fastdebug --cxx-test master-test --gtest_filter 'MasterTest.RefreshYsqlLease' ``` Reviewers: amitanand, #db-approvers Reviewed By: amitanand Subscribers: svc_phabricator, yql, ybase, rthallam, slingam Differential Revision: https://phorge.dev.yugabyte.com/D43963 --- src/yb/integration-tests/object_lock-test.cc | 42 +++++- src/yb/master/catalog_entity_info.cc | 6 +- src/yb/master/catalog_entity_info.h | 4 +- src/yb/master/master-test.cc | 128 ++++++++++--------- src/yb/master/master_ddl.proto | 4 +- src/yb/master/master_ddl_client.cc | 6 +- src/yb/master/master_ddl_client.h | 3 +- src/yb/master/master_tserver.cc | 2 +- src/yb/master/master_tserver.h | 2 +- src/yb/master/object_lock_info_manager.cc | 37 ++++-- src/yb/tserver/pg_client_service.cc | 9 +- src/yb/tserver/pg_client_service.h | 6 +- src/yb/tserver/tablet_server.cc | 12 +- src/yb/tserver/tablet_server.h | 2 +- src/yb/tserver/tablet_server_interface.h | 3 +- src/yb/tserver/tablet_service.cc | 9 +- src/yb/tserver/ysql_lease.h | 24 ++++ src/yb/tserver/ysql_lease_poller.cc | 8 +- 18 files changed, 205 insertions(+), 102 deletions(-) create mode 100644 src/yb/tserver/ysql_lease.h diff --git a/src/yb/integration-tests/object_lock-test.cc b/src/yb/integration-tests/object_lock-test.cc index cf7920c88765..896b21696039 100644 --- a/src/yb/integration-tests/object_lock-test.cc +++ b/src/yb/integration-tests/object_lock-test.cc @@ -31,6 +31,7 @@ #include "yb/master/master.h" #include "yb/master/master_cluster_client.h" #include "yb/master/master_ddl.proxy.h" +#include "yb/master/master_ddl_client.h" #include "yb/master/mini_master.h" #include "yb/master/object_lock_info_manager.h" #include "yb/master/test_async_rpc_manager.h" @@ -172,7 +173,7 @@ class ObjectLockTest : public MiniClusterTestWithClient { [tablet_servers]() -> Result { for (const auto& ts : tablet_servers) { auto lease_info = VERIFY_RESULT(ts->server()->GetYSQLLeaseInfo()); - if (!lease_info.is_live()) { + if (!lease_info.is_live) { return false; } } @@ -1096,6 +1097,45 @@ TEST_F(ExternalObjectLockTest, TServerCanAcquireLocksAfterLeaseExpiry) { EXPECT_THAT(status, EqualsStatus(BuildLeaseEpochMismatchErrorStatus(kLeaseEpoch, lease_epoch))); } +TEST_F(ExternalObjectLockTest, RefreshYsqlLease) { + auto ts = tablet_server(0); + auto master_proxy = cluster_->GetLeaderMasterProxy(); + + // Acquire a lock on behalf of another ts. + ASSERT_OK(AcquireLockGlobally( + &master_proxy, tablet_server(1)->uuid(), kTxn1, kDatabaseID, kObjectId, kLeaseEpoch, nullptr, + std::nullopt, kTimeout)); + + master::MasterDDLClient ddl_client{cluster_->GetLeaderMasterProxy()}; + + // Request a lease refresh on behalf of ts with no lease epoch in the request. + // Master should respond with our ts' current lease epoch, the acquired lock entries, and + // new_lease. + auto info = ASSERT_RESULT( + ddl_client.RefreshYsqlLease(ts->uuid(), ts->instance_id().instance_seqno(), {})); + ASSERT_TRUE(info.new_lease()); + ASSERT_EQ(info.lease_epoch(), kLeaseEpoch); + ASSERT_TRUE(info.has_ddl_lock_entries()); + ASSERT_GE(info.ddl_lock_entries().lock_entries_size(), 1); + + // Request a lease refresh on behalf of ts with the incorrect lease epoch in the request. + // Expect the master to respond with our ts' current lease epoch, the acquired lock entries, and + // new_lease. 
+ info = + ASSERT_RESULT(ddl_client.RefreshYsqlLease(ts->uuid(), ts->instance_id().instance_seqno(), 0)); + ASSERT_TRUE(info.new_lease()); + ASSERT_EQ(info.lease_epoch(), kLeaseEpoch); + ASSERT_TRUE(info.has_ddl_lock_entries()); + ASSERT_GE(info.ddl_lock_entries().lock_entries_size(), 1); + + // Request a lease refresh on behalf of ts with the correct lease epoch in the request. + // Expect the master to omit most information and set new_lease to false. + info = ASSERT_RESULT( + ddl_client.RefreshYsqlLease(ts->uuid(), ts->instance_id().instance_seqno(), kLeaseEpoch)); + ASSERT_FALSE(info.new_lease()); + ASSERT_FALSE(info.has_ddl_lock_entries()); +} + class MultiMasterObjectLockTest : public ObjectLockTest { protected: int num_masters() override { diff --git a/src/yb/master/catalog_entity_info.cc b/src/yb/master/catalog_entity_info.cc index 0c4116864da7..f64d82754b6c 100644 --- a/src/yb/master/catalog_entity_info.cc +++ b/src/yb/master/catalog_entity_info.cc @@ -1351,8 +1351,8 @@ std::string DdlLogEntry::id() const { // ObjectLockInfo // ================================================================================================ -std::optional ObjectLockInfo::RefreshYsqlOperationLease( - const NodeInstancePB& instance) { +std::variant +ObjectLockInfo::RefreshYsqlOperationLease(const NodeInstancePB& instance) { auto l = LockForWrite(); { std::lock_guard l(mutex_); @@ -1360,7 +1360,7 @@ std::optional ObjectLockInfo::RefreshYsqlOperationLea } if (l->pb.lease_info().live_lease() && l->pb.lease_info().instance_seqno() == instance.instance_seqno()) { - return std::nullopt; + return l->pb.lease_info(); } auto& lease_info = *l.mutable_data()->pb.mutable_lease_info(); lease_info.set_live_lease(true); diff --git a/src/yb/master/catalog_entity_info.h b/src/yb/master/catalog_entity_info.h index 91f29ff9eaaf..cdaeb412599e 100644 --- a/src/yb/master/catalog_entity_info.h +++ b/src/yb/master/catalog_entity_info.h @@ -1067,8 +1067,8 @@ class ObjectLockInfo : public MetadataCowWrapper { // Return the user defined type's ID. Does not require synchronization. 
virtual const std::string& id() const override { return ts_uuid_; } - std::optional RefreshYsqlOperationLease(const NodeInstancePB& instance) - EXCLUDES(mutex_); + std::variant + RefreshYsqlOperationLease(const NodeInstancePB& instance) EXCLUDES(mutex_); virtual void Load(const SysObjectLockEntryPB& metadata) override; diff --git a/src/yb/master/master-test.cc b/src/yb/master/master-test.cc index 79b5ab2e8e91..536b02aad970 100644 --- a/src/yb/master/master-test.cc +++ b/src/yb/master/master-test.cc @@ -120,31 +120,13 @@ class MasterTest : public MasterTestBase { Result> FindNamespaceByName( YQLDatabase db_type, const std::string& name); -}; -Result MasterTest::SendHeartbeat( - TSToMasterCommonPB common, std::optional registration, - std::optional report) { - SysClusterConfigEntryPB config = - VERIFY_RESULT(mini_master_->catalog_manager().GetClusterConfig()); - auto universe_uuid = config.universe_uuid(); + Result SendNewTSRegistrationHeartbeat(const std::string& uuid); - TSHeartbeatRequestPB req; - TSHeartbeatResponsePB resp; - req.mutable_common()->Swap(&common); - if (registration) { - req.mutable_registration()->Swap(®istration.value()); - } - if (report) { - req.mutable_tablet_report()->Swap(&report.value()); - } - req.set_universe_uuid(universe_uuid); - RETURN_NOT_OK(proxy_heartbeat_->TSHeartbeat(req, &resp, ResetAndGetController())); - if (resp.has_error()) { - return StatusFromPB(resp.error().status()); - } - return resp; -} + private: + // Used by SendNewTSRegistrationHeartbeat to avoid host port collisions. + uint32_t registered_ts_count_ = 0; +}; TEST_F(MasterTest, TestPingServer) { // Ping the server. @@ -2859,58 +2841,86 @@ TEST_F(MasterTest, TestGetClosestLiveTserver) { } TEST_F(MasterTest, RefreshYsqlLeaseWithoutRegistration) { - ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_enable_object_locking_for_table_locks) = true; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_enable_ysql_operation_lease) = true; const char* kTsUUID = "my-ts-uuid"; auto ddl_client = MasterDDLClient{std::move(*proxy_ddl_)}; - auto result = ddl_client.RefreshYsqlLease(kTsUUID, 1); + auto result = ddl_client.RefreshYsqlLease(kTsUUID, 1, {}); ASSERT_NOK(result); ASSERT_TRUE(result.status().IsNotFound()); } TEST_F(MasterTest, RefreshYsqlLease) { - ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_enable_object_locking_for_table_locks) = true; - const char *kTsUUID = "my-ts-uuid"; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_enable_ysql_operation_lease) = true; + const std::string kTsUUID1 = "ts-uuid1"; + const std::string kTsUUID2 = "ts-uuid2"; + + auto reg_resp1 = ASSERT_RESULT(SendNewTSRegistrationHeartbeat(kTsUUID1)); + ASSERT_FALSE(reg_resp1.needs_reregister()); + + auto ddl_client = MasterDDLClient{std::move(*proxy_ddl_)}; + auto info = ASSERT_RESULT(ddl_client.RefreshYsqlLease(kTsUUID1, /* instance_seqno */ 1, {})); + ASSERT_TRUE(info.new_lease()); + ASSERT_EQ(info.lease_epoch(), 1); + + // todo(zdrudi): but we need to do this and check the bootstrap entries... + // Refresh lease again. Since we omitted current lease epoch, master leader should still say this + // is a new lease. + info = ASSERT_RESULT(ddl_client.RefreshYsqlLease(kTsUUID1, /* instance_seqno */ 1, {})); + ASSERT_TRUE(info.new_lease()); + ASSERT_EQ(info.lease_epoch(), 1); + + // Refresh lease again. We included current lease epoch but it's incorrect. + info = ASSERT_RESULT(ddl_client.RefreshYsqlLease(kTsUUID1, /* instance_seqno */ 1, 0)); + ASSERT_TRUE(info.new_lease()); + ASSERT_EQ(info.lease_epoch(), 1); + + // Refresh lease again. 
Current lease epoch is correct so master leader should not set new lease + // bit. + info = ASSERT_RESULT(ddl_client.RefreshYsqlLease(kTsUUID1, /* instance_seqno */ 1, 1)); + ASSERT_FALSE(info.new_lease()); +} +Result MasterTest::SendHeartbeat( + TSToMasterCommonPB common, std::optional registration, + std::optional report) { SysClusterConfigEntryPB config = - ASSERT_RESULT(mini_master_->catalog_manager().GetClusterConfig()); + VERIFY_RESULT(mini_master_->catalog_manager().GetClusterConfig()); auto universe_uuid = config.universe_uuid(); - // Register the fake TS, without sending any tablet report. - TSRegistrationPB fake_reg; - *fake_reg.mutable_common()->add_private_rpc_addresses() = MakeHostPortPB("localhost", 1000); - *fake_reg.mutable_common()->add_http_addresses() = MakeHostPortPB("localhost", 2000); - *fake_reg.mutable_resources() = master::ResourcesPB(); + TSHeartbeatRequestPB req; + TSHeartbeatResponsePB resp; + req.mutable_common()->Swap(&common); + if (registration) { + req.mutable_registration()->Swap(®istration.value()); + } + if (report) { + req.mutable_tablet_report()->Swap(&report.value()); + } + req.set_universe_uuid(universe_uuid); + RETURN_NOT_OK(proxy_heartbeat_->TSHeartbeat(req, &resp, ResetAndGetController())); + if (resp.has_error()) { + return StatusFromPB(resp.error().status()); + } + return resp; +} + +Result MasterTest::SendNewTSRegistrationHeartbeat(const std::string& uuid) { + TSRegistrationPB reg; + *reg.mutable_common()->add_private_rpc_addresses() = + MakeHostPortPB("localhost", 1000 + registered_ts_count_); + *reg.mutable_common()->add_http_addresses() = + MakeHostPortPB("localhost", 2000 + registered_ts_count_); + *reg.mutable_resources() = master::ResourcesPB(); TSToMasterCommonPB common; - common.mutable_ts_instance()->set_permanent_uuid(kTsUUID); + common.mutable_ts_instance()->set_permanent_uuid(uuid); common.mutable_ts_instance()->set_instance_seqno(1); - { - TSHeartbeatRequestPB req; - TSHeartbeatResponsePB resp; - req.mutable_common()->CopyFrom(common); - req.mutable_registration()->CopyFrom(fake_reg); - req.set_universe_uuid(universe_uuid); - ASSERT_OK(proxy_heartbeat_->TSHeartbeat(req, &resp, ResetAndGetController())); - - ASSERT_FALSE(resp.needs_reregister()); - ASSERT_TRUE(resp.needs_full_tablet_report()); - ASSERT_TRUE(resp.has_tablet_report_limit()); + auto result = SendHeartbeat(common, reg); + if (result.ok()) { + registered_ts_count_++; } - - auto descs = mini_master_->master()->ts_manager()->GetAllDescriptors(); - ASSERT_EQ(1, descs.size()) << "Should have registered the TS"; - auto reg = descs[0]->GetTSRegistrationPB(); - ASSERT_EQ(fake_reg.DebugString(), reg.DebugString()) - << "Master got different registration"; - - auto ts_desc = ASSERT_RESULT(mini_master_->master()->ts_manager()->LookupTSByUUID(kTsUUID)); - ASSERT_EQ(ts_desc, descs[0]); - - auto ddl_client = MasterDDLClient{std::move(*proxy_ddl_)}; - auto info = ASSERT_RESULT(ddl_client.RefreshYsqlLease(kTsUUID, /* instance_seqno */1)); - ASSERT_TRUE(info.new_lease()); - ASSERT_EQ(info.lease_epoch(), 1); + return result; } } // namespace master diff --git a/src/yb/master/master_ddl.proto b/src/yb/master/master_ddl.proto index 1ac99fee8132..16d8f4478f39 100644 --- a/src/yb/master/master_ddl.proto +++ b/src/yb/master/master_ddl.proto @@ -816,7 +816,9 @@ message ReleaseObjectLocksGlobalResponsePB { message RefreshYsqlLeaseRequestPB { optional NodeInstancePB instance = 1; - optional bool needs_bootstrap = 2; + // The current lease epoch of the tserver making this request. 
+ // Unset if the tserver doesn't think it has a live lease. + optional uint64 current_lease_epoch = 2; } message RefreshYsqlLeaseInfoPB { diff --git a/src/yb/master/master_ddl_client.cc b/src/yb/master/master_ddl_client.cc index 2afd29f023cb..3faed8b9d34e 100644 --- a/src/yb/master/master_ddl_client.cc +++ b/src/yb/master/master_ddl_client.cc @@ -70,10 +70,14 @@ Status MasterDDLClient::WaitForCreateNamespaceDone(const NamespaceId& id, MonoDe } Result MasterDDLClient::RefreshYsqlLease( - const std::string& permanent_uuid, int64_t instance_seqno) { + const std::string& permanent_uuid, int64_t instance_seqno, + std::optional current_lease_epoch) { RefreshYsqlLeaseRequestPB req; req.mutable_instance()->set_permanent_uuid(permanent_uuid); req.mutable_instance()->set_instance_seqno(instance_seqno); + if (current_lease_epoch) { + req.set_current_lease_epoch(*current_lease_epoch); + } RefreshYsqlLeaseResponsePB resp; rpc::RpcController rpc; RETURN_NOT_OK(proxy_.RefreshYsqlLease(req, &resp, &rpc)); diff --git a/src/yb/master/master_ddl_client.h b/src/yb/master/master_ddl_client.h index 1802a8e67066..354269c7e306 100644 --- a/src/yb/master/master_ddl_client.h +++ b/src/yb/master/master_ddl_client.h @@ -34,7 +34,8 @@ class MasterDDLClient { Status WaitForCreateNamespaceDone(const NamespaceId& id, MonoDelta timeout); Result RefreshYsqlLease( - const std::string& permanent_uuid, int64_t instance_seqno); + const std::string& permanent_uuid, int64_t instance_seqno, + std::optional current_lease_epoch); private: MasterDdlProxy proxy_; diff --git a/src/yb/master/master_tserver.cc b/src/yb/master/master_tserver.cc index 83a4ebc94e2a..1cc75179bcc5 100644 --- a/src/yb/master/master_tserver.cc +++ b/src/yb/master/master_tserver.cc @@ -254,7 +254,7 @@ bool MasterTabletServer::SkipCatalogVersionChecks() { return master_->catalog_manager()->SkipCatalogVersionChecks(); } -Result MasterTabletServer::GetYSQLLeaseInfo() const { +Result MasterTabletServer::GetYSQLLeaseInfo() const { return STATUS(InternalError, "Unexpected call of GetYSQLLeaseInfo"); } diff --git a/src/yb/master/master_tserver.h b/src/yb/master/master_tserver.h index 1c7ef70be9c8..5784e364ef04 100644 --- a/src/yb/master/master_tserver.h +++ b/src/yb/master/master_tserver.h @@ -125,7 +125,7 @@ class MasterTabletServer : public tserver::TabletServerIf, void SetYsqlDBCatalogVersions( const tserver::DBCatalogVersionDataPB& db_catalog_version_data) override {} - Result GetYSQLLeaseInfo() const override; + Result GetYSQLLeaseInfo() const override; Status RestartPG() const override { return STATUS(NotSupported, "RestartPG not implemented for masters"); } diff --git a/src/yb/master/object_lock_info_manager.cc b/src/yb/master/object_lock_info_manager.cc index 9b660e5500ce..8970b8950132 100644 --- a/src/yb/master/object_lock_info_manager.cc +++ b/src/yb/master/object_lock_info_manager.cc @@ -887,26 +887,41 @@ void ObjectLockInfoManager::Impl::UnlockObject(const TransactionId& txn_id) { Status ObjectLockInfoManager::Impl::RefreshYsqlLease( const RefreshYsqlLeaseRequestPB& req, RefreshYsqlLeaseResponsePB& resp, rpc::RpcContext& rpc, const LeaderEpoch& epoch) { - if (!FLAGS_enable_ysql_operation_lease && - !FLAGS_TEST_enable_object_locking_for_table_locks) { - return STATUS(NotSupported, "The ysql lease is currently a test feature."); + if (!FLAGS_enable_ysql_operation_lease && !FLAGS_TEST_enable_object_locking_for_table_locks) { + return STATUS(NotSupported, "The ysql lease is currently disabled."); } // Sanity check that the tserver has already registered with 
the same instance_seqno. RETURN_NOT_OK(master_.ts_manager()->LookupTS(req.instance())); auto object_lock_info = GetOrCreateObjectLockInfo(req.instance().permanent_uuid()); - auto lock_opt = object_lock_info->RefreshYsqlOperationLease(req.instance()); - if (!lock_opt) { - resp.mutable_info()->set_new_lease(false); - if (req.needs_bootstrap()) { + auto lock_variant = object_lock_info->RefreshYsqlOperationLease(req.instance()); + if (auto* lease_info = std::get_if(&lock_variant)) { + resp.mutable_info()->set_lease_epoch(lease_info->lease_epoch()); + if (!req.has_current_lease_epoch() || lease_info->lease_epoch() != req.current_lease_epoch()) { *resp.mutable_info()->mutable_ddl_lock_entries() = ExportObjectLockInfo(); + // From the master leader's perspective this is not a new lease. But the tserver may not be + // aware it has received a new lease because it has not supplied its correct lease epoch. + LOG(INFO) << Format( + "TS $0 ($1) has provided $3 instead of its actual lease epoch $4 in its ysql op lease " + "refresh request. Marking its ysql lease as new", + req.instance().permanent_uuid(), req.instance().instance_seqno(), + req.has_current_lease_epoch() ? std::to_string(req.current_lease_epoch()) : "", + lease_info->lease_epoch()); + resp.mutable_info()->set_new_lease(true); + } else { + resp.mutable_info()->set_new_lease(false); } return Status::OK(); } + auto* lockp = std::get_if(&lock_variant); + CHECK_NOTNULL(lockp); RETURN_NOT_OK(catalog_manager_.sys_catalog()->Upsert(epoch, object_lock_info)); resp.mutable_info()->set_new_lease(true); - resp.mutable_info()->set_lease_epoch(lock_opt->mutable_data()->pb.lease_info().lease_epoch()); - lock_opt->Commit(); + resp.mutable_info()->set_lease_epoch(lockp->mutable_data()->pb.lease_info().lease_epoch()); + lockp->Commit(); *resp.mutable_info()->mutable_ddl_lock_entries() = ExportObjectLockInfo(); + LOG(INFO) << Format( + "Granting a new ysql op lease to TS $0 ($1). Lease epoch $2", req.instance().permanent_uuid(), + req.instance().instance_seqno(), resp.info().lease_epoch()); return Status::OK(); } @@ -1079,6 +1094,10 @@ void ObjectLockInfoManager::Impl::CleanupExpiredLeaseEpochs() { if (object_info_lock->pb.lease_info().live_lease() && current_time.GetDeltaSince(object_info->last_ysql_lease_refresh()) > MonoDelta::FromMilliseconds(GetAtomicFlag(&FLAGS_master_ysql_operation_lease_ttl_ms))) { + LOG(INFO) << Format( + "Tserver $0, instance seqno $1 with ysql lease epoch $2 has just lost its lease", + object_info->id(), object_info_lock->pb.lease_info().instance_seqno(), + object_info_lock->pb.lease_info().lease_epoch()); object_info_lock.mutable_data()->pb.mutable_lease_info()->set_live_lease(false); object_infos_to_write.push_back(object_info.get()); if (object_info_lock->pb.lease_epochs_size() > 0) { diff --git a/src/yb/tserver/pg_client_service.cc b/src/yb/tserver/pg_client_service.cc index 1cacac76e1f3..e0424117eae9 100644 --- a/src/yb/tserver/pg_client_service.cc +++ b/src/yb/tserver/pg_client_service.cc @@ -1919,10 +1919,11 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv void ProcessLeaseUpdate(const master::RefreshYsqlLeaseInfoPB& lease_refresh_info, MonoTime time) { std::lock_guard lock(mutex_); last_lease_refresh_time_ = time; - if (lease_refresh_info.new_lease()) { + if (lease_refresh_info.new_lease() || lease_epoch_ != lease_refresh_info.lease_epoch()) { LOG(INFO) << Format( - "Received new lease epoch $0 from the master leader. 
Clearing all pg sessions.", - lease_refresh_info.lease_epoch()); + "Received new lease epoch $0 from the master leader, old lease epoch was $1. Clearing " + "all pg sessions.", + lease_refresh_info.lease_epoch(), lease_epoch_); lease_epoch_ = lease_refresh_info.lease_epoch(); auto s = tablet_server_.RestartPG(); if (!s.ok()) { @@ -2484,7 +2485,7 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv PgTxnSnapshotManager txn_snapshot_manager_; MonoTime last_lease_refresh_time_ GUARDED_BY(mutex_); - uint64_t lease_epoch_ GUARDED_BY(mutex_); + uint64_t lease_epoch_ GUARDED_BY(mutex_) = 0; }; PgClientServiceImpl::PgClientServiceImpl( diff --git a/src/yb/tserver/pg_client_service.h b/src/yb/tserver/pg_client_service.h index 203b73bc9ba8..8a3814415663 100644 --- a/src/yb/tserver/pg_client_service.h +++ b/src/yb/tserver/pg_client_service.h @@ -29,6 +29,7 @@ #include "yb/tserver/pg_client.service.h" #include "yb/tserver/pg_txn_snapshot_manager.h" +#include "yb/tserver/ysql_lease.h" namespace yb { @@ -114,11 +115,6 @@ class TserverXClusterContextIf; /**/ -struct YSQLLeaseInfo { - bool is_live; - uint64_t lease_epoch; -}; - class PgClientServiceImpl : public PgClientServiceIf { public: explicit PgClientServiceImpl( diff --git a/src/yb/tserver/tablet_server.cc b/src/yb/tserver/tablet_server.cc index e4f4f0d2b0f2..17d8ed03e39e 100644 --- a/src/yb/tserver/tablet_server.cc +++ b/src/yb/tserver/tablet_server.cc @@ -824,7 +824,7 @@ Status TabletServer::ProcessLeaseUpdate( const master::RefreshYsqlLeaseInfoPB& lease_refresh_info, MonoTime time) { VLOG(2) << __func__; auto lock_manager = ts_local_lock_manager(); - if (lease_refresh_info.has_ddl_lock_entries() && lock_manager) { + if (lease_refresh_info.new_lease() && lock_manager) { if (lock_manager->IsBootstrapped()) { // Reset the local lock manager to bootstrap from the given DDL lock entries. 
lock_manager = ResetAndGetTSLocalLockManager(); @@ -842,7 +842,7 @@ Status TabletServer::ProcessLeaseUpdate( } -Result TabletServer::GetYSQLLeaseInfo() const { +Result TabletServer::GetYSQLLeaseInfo() const { if (!IsYsqlLeaseEnabled()) { return STATUS(NotSupported, "YSQL lease is not enabled"); } @@ -850,13 +850,7 @@ Result TabletServer::GetYSQLLeaseInfo() const { if (!pg_client_service) { RSTATUS_DCHECK(pg_client_service, InternalError, "Unable to get pg_client_service"); } - auto lease_info = pg_client_service->impl.GetYSQLLeaseInfo(); - GetYSQLLeaseInfoResponsePB resp; - resp.set_is_live(lease_info.is_live); - if (lease_info.is_live) { - resp.set_lease_epoch(lease_info.lease_epoch); - } - return resp; + return pg_client_service->impl.GetYSQLLeaseInfo(); } Status TabletServer::RestartPG() const { diff --git a/src/yb/tserver/tablet_server.h b/src/yb/tserver/tablet_server.h index dc6f245ea1a9..2e290a8a1972 100644 --- a/src/yb/tserver/tablet_server.h +++ b/src/yb/tserver/tablet_server.h @@ -209,7 +209,7 @@ class TabletServer : public DbServerBase, public TabletServerIf { Status PopulateLiveTServers(const master::TSHeartbeatResponsePB& heartbeat_resp) EXCLUDES(lock_); Status ProcessLeaseUpdate( const master::RefreshYsqlLeaseInfoPB& lease_refresh_info, MonoTime time); - Result GetYSQLLeaseInfo() const override; + Result GetYSQLLeaseInfo() const override; Status RestartPG() const override; static bool IsYsqlLeaseEnabled(); diff --git a/src/yb/tserver/tablet_server_interface.h b/src/yb/tserver/tablet_server_interface.h index 8e5da0622fc9..1fbc8fe6d963 100644 --- a/src/yb/tserver/tablet_server_interface.h +++ b/src/yb/tserver/tablet_server_interface.h @@ -32,6 +32,7 @@ #include "yb/tserver/tserver_fwd.h" #include "yb/tserver/tserver_util_fwd.h" #include "yb/tserver/local_tablet_server.h" +#include "yb/tserver/ysql_lease.h" #include "yb/util/concurrent_value.h" @@ -147,7 +148,7 @@ class TabletServerIf : public LocalTabletServer { virtual void SetYsqlDBCatalogVersions( const tserver::DBCatalogVersionDataPB& db_catalog_version_data) = 0; - virtual Result GetYSQLLeaseInfo() const = 0; + virtual Result GetYSQLLeaseInfo() const = 0; virtual Status RestartPG() const = 0; }; diff --git a/src/yb/tserver/tablet_service.cc b/src/yb/tserver/tablet_service.cc index de5a9455b6b6..eeb4dd620279 100644 --- a/src/yb/tserver/tablet_service.cc +++ b/src/yb/tserver/tablet_service.cc @@ -107,6 +107,7 @@ #include "yb/tserver/tserver_xcluster_context_if.h" #include "yb/tserver/xcluster_safe_time_map.h" #include "yb/tserver/ysql_advisory_lock_table.h" +#include "yb/tserver/ysql_lease.h" #include "yb/util/async_util.h" #include "yb/util/backoff_waiter.h" @@ -3709,7 +3710,13 @@ void TabletServiceImpl::ReleaseObjectLocks( Result TabletServiceImpl::GetYSQLLeaseInfo( const GetYSQLLeaseInfoRequestPB& req, CoarseTimePoint deadline) { - return server_->GetYSQLLeaseInfo(); + auto lease_info = VERIFY_RESULT(server_->GetYSQLLeaseInfo()); + GetYSQLLeaseInfoResponsePB resp; + resp.set_is_live(lease_info.is_live); + if (lease_info.is_live) { + resp.set_lease_epoch(lease_info.lease_epoch); + } + return resp; } void TabletServiceImpl::AdminExecutePgsql( diff --git a/src/yb/tserver/ysql_lease.h b/src/yb/tserver/ysql_lease.h new file mode 100644 index 000000000000..bc4e1e51750e --- /dev/null +++ b/src/yb/tserver/ysql_lease.h @@ -0,0 +1,24 @@ +// Copyright (c) YugabyteDB, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except +// in compliance with the License. 
You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software distributed under the License +// is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express +// or implied. See the License for the specific language governing permissions and limitations +// under the License. +// +#pragma once + +#include + +namespace yb::tserver { + +struct YSQLLeaseInfo { + bool is_live; + uint64_t lease_epoch; +}; + +} diff --git a/src/yb/tserver/ysql_lease_poller.cc b/src/yb/tserver/ysql_lease_poller.cc index 3cc244b4f98f..4abb201b652b 100644 --- a/src/yb/tserver/ysql_lease_poller.cc +++ b/src/yb/tserver/ysql_lease_poller.cc @@ -22,8 +22,9 @@ #include "yb/server/server_base.proxy.h" -#include "yb/tserver/tablet_server.h" #include "yb/tserver/master_leader_poller.h" +#include "yb/tserver/tablet_server.h" +#include "yb/tserver/ysql_lease.h" #include "yb/util/async_util.h" #include "yb/util/condition_variable.h" @@ -122,7 +123,10 @@ Status YsqlLeasePoller::Poll() { master::RefreshYsqlLeaseRequestPB req; *req.mutable_instance() = server_.instance_pb(); - req.set_needs_bootstrap(!server_.HasBootstrappedLocalLockManager()); + auto current_lease_info = VERIFY_RESULT(server_.GetYSQLLeaseInfo()); + if (current_lease_info.is_live) { + req.set_current_lease_epoch(current_lease_info.lease_epoch); + } rpc::RpcController rpc; rpc.set_timeout(timeout); master::RefreshYsqlLeaseResponsePB resp; From 459605d0c7bc4d0ae7f7e94c7e88d501b2a8676b Mon Sep 17 00:00:00 2001 From: Sergei Politov Date: Wed, 14 May 2025 10:04:22 +0300 Subject: [PATCH 099/146] [#27123] DocDB: Fix remote bootstrap over existing tablet Summary: It could happen that remote bootstrap happens over existing tablet. For instance because of crash in the middle of previous remote bootstrap. In this case moving vector index directory fails, since destination directory already exists. Fixed by deleting existing directory in this scenario. Also changed external mini cluster logic to start tablet servers in parallel, which decreased test execution time by 20 seconds. **Upgrade/Rollback safety:** Added fields that are used only in tests. 
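The fix boils down to making the vector index directory move idempotent: if a previous, interrupted bootstrap already created the destination directory, it is removed before the rename (this is what the `MoveChild` helper in `docdb_util.cc` does in this diff). The following is a minimal sketch of the same idea using `std::filesystem` rather than the tree's `Env` interface; the directory names are made up for illustration:

```
// Minimal sketch of an idempotent "move directory" helper, mirroring the idea in
// this fix: if an earlier, interrupted attempt left the destination behind,
// remove it before renaming the source into place.
#include <filesystem>
#include <iostream>
#include <system_error>

namespace fs = std::filesystem;

std::error_code MoveDirOverwriting(const fs::path& source, const fs::path& dest) {
  std::error_code ec;
  if (!fs::is_directory(source, ec)) {
    return ec;  // Nothing to move (or the existence check itself failed).
  }
  if (fs::exists(dest, ec)) {
    // Leftover from a previous attempt: delete it so the rename can succeed.
    fs::remove_all(dest, ec);
    if (ec) {
      return ec;
    }
  }
  fs::rename(source, dest, ec);
  return ec;
}

int main() {
  fs::create_directories("demo_src/vi-123");
  fs::create_directories("demo_dst/vi-123");  // Simulate a half-finished earlier move.
  if (auto ec = MoveDirOverwriting("demo_src/vi-123", "demo_dst/vi-123")) {
    std::cerr << "move failed: " << ec.message() << "\n";
    return 1;
  }
  std::cout << "moved\n";
  return 0;
}
```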
Jira: DB-16610 Test Plan: PgVectorIndexITest.CrashAfterRBSDownload Reviewers: arybochkin Reviewed By: arybochkin Subscribers: ybase, yql Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43980 --- src/yb/consensus/log.cc | 8 +- src/yb/consensus/raft_consensus.cc | 10 +- src/yb/consensus/raft_consensus.h | 24 +- src/yb/docdb/docdb_fwd.h | 3 +- src/yb/docdb/docdb_util.cc | 27 ++ src/yb/docdb/docdb_util.h | 10 + src/yb/integration-tests/external_daemon.cc | 22 +- src/yb/integration-tests/external_daemon.h | 4 + .../external_mini_cluster.cc | 126 ++++++--- .../integration-tests/external_mini_cluster.h | 33 ++- .../external_mini_cluster_secure_test.cc | 13 +- src/yb/integration-tests/mini_cluster.cc | 2 +- .../integration-tests/raft_consensus-itest.cc | 3 +- .../remote_bootstrap-itest.cc | 3 +- src/yb/integration-tests/snapshot-test.cc | 3 +- .../tablet-split-itest-base.cc | 12 +- .../tablet-split-itest-base.h | 3 - .../integration-tests/tablet-split-itest.cc | 10 +- src/yb/tablet/tablet-split-test.cc | 10 +- src/yb/tablet/tablet.cc | 10 +- src/yb/tablet/tablet.h | 8 +- src/yb/tablet/tablet.proto | 2 + src/yb/tablet/tablet_bootstrap.cc | 4 +- src/yb/tablet/tablet_metadata.cc | 5 +- src/yb/tablet/tablet_metadata.h | 1 - src/yb/tablet/tablet_peer.cc | 62 +++-- src/yb/tablet/tablet_peer.h | 2 +- src/yb/tablet/tablet_snapshots.cc | 21 +- .../yb-backup/yb-backup-cross-feature-test.cc | 18 +- src/yb/tserver/remote_bootstrap_client.cc | 14 +- src/yb/tserver/tablet_service.cc | 2 +- src/yb/tserver/tserver_admin.proto | 3 + src/yb/yql/pggate/test/pggate_test_select.cc | 3 +- src/yb/yql/pgwrapper/CMakeLists.txt | 1 + src/yb/yql/pgwrapper/pg_ash-test.cc | 3 +- .../yql/pgwrapper/pg_index_backfill-test.cc | 164 ----------- src/yb/yql/pgwrapper/pg_packed_row-test.cc | 4 +- .../yql/pgwrapper/pg_server_restart-test.cc | 3 +- src/yb/yql/pgwrapper/pg_vector_index-itest.cc | 257 ++++++++++++++++++ 39 files changed, 543 insertions(+), 370 deletions(-) create mode 100644 src/yb/yql/pgwrapper/pg_vector_index-itest.cc diff --git a/src/yb/consensus/log.cc b/src/yb/consensus/log.cc index 26d0d8faed0c..9eadb9fb38bc 100644 --- a/src/yb/consensus/log.cc +++ b/src/yb/consensus/log.cc @@ -919,7 +919,7 @@ Status Log::AsyncAppend( return Status::OK(); } -Status Log::AsyncAppendReplicates(const ReplicateMsgs& msgs, const yb::OpId& committed_op_id, +Status Log::AsyncAppendReplicates(const ReplicateMsgs& msgs, const OpId& committed_op_id, RestartSafeCoarseTimePoint batch_mono_time, const StatusCallback& callback) { auto batch = CreateBatchFromAllocatedOperations(msgs); @@ -1527,7 +1527,7 @@ uint32_t Log::wal_retention_secs() const { wal_retention_secs; } -yb::OpId Log::GetLatestEntryOpId() const { +OpId Log::GetLatestEntryOpId() const { return last_synced_entry_op_id_.load(boost::memory_order_acquire); } @@ -1535,7 +1535,7 @@ int64_t Log::GetMinReplicateIndex() const { return min_replicate_index_.load(std::memory_order_acquire); } -yb::OpId Log::WaitForSafeOpIdToApply(const yb::OpId& min_allowed, MonoDelta duration) { +OpId Log::WaitForSafeOpIdToApply(const OpId& min_allowed, MonoDelta duration) { if (FLAGS_TEST_log_consider_all_ops_safe || all_op_ids_safe_) { return min_allowed; } @@ -1556,7 +1556,7 @@ yb::OpId Log::WaitForSafeOpIdToApply(const yb::OpId& min_allowed, MonoDelta dura break; } if (duration) { - return yb::OpId(); + return OpId(); } // TODO(bogdan): If the log is closed at this point, consider refactoring to return status // and fail cleanly. 
diff --git a/src/yb/consensus/raft_consensus.cc b/src/yb/consensus/raft_consensus.cc index ef2edab72a76..12b7fc1fd4a1 100644 --- a/src/yb/consensus/raft_consensus.cc +++ b/src/yb/consensus/raft_consensus.cc @@ -2935,7 +2935,7 @@ PeerRole RaftConsensus::GetActiveRole() const { return state_->GetActiveRoleUnlocked(); } -yb::OpId RaftConsensus::GetLatestOpIdFromLog() { +OpId RaftConsensus::GetLatestOpIdFromLog() { return log_->GetLatestEntryOpId(); } @@ -3531,22 +3531,22 @@ void RaftConsensus::DoElectionCallback(const LeaderElectionData& data, } } -yb::OpId RaftConsensus::GetLastReceivedOpId() { +OpId RaftConsensus::GetLastReceivedOpId() { auto lock = state_->LockForRead(); return state_->GetLastReceivedOpIdUnlocked(); } -yb::OpId RaftConsensus::GetLastCommittedOpId() { +OpId RaftConsensus::GetLastCommittedOpId() { auto lock = state_->LockForRead(); return state_->GetCommittedOpIdUnlocked(); } -yb::OpId RaftConsensus::GetLastAppliedOpId() { +OpId RaftConsensus::GetLastAppliedOpId() { auto lock = state_->LockForRead(); return state_->GetLastAppliedOpIdUnlocked(); } -yb::OpId RaftConsensus::GetAllAppliedOpId() { +OpId RaftConsensus::GetAllAppliedOpId() { return queue_->GetAllAppliedOpId(); } diff --git a/src/yb/consensus/raft_consensus.h b/src/yb/consensus/raft_consensus.h index 723c97d74844..1902b01d675a 100644 --- a/src/yb/consensus/raft_consensus.h +++ b/src/yb/consensus/raft_consensus.h @@ -239,13 +239,13 @@ class RaftConsensus : public std::enable_shared_from_this, committed_index, last_committed_op_id); } - yb::OpId GetLastReceivedOpId() override; + OpId GetLastReceivedOpId() override; - yb::OpId GetLastCommittedOpId() override; + OpId GetLastCommittedOpId() override; - yb::OpId GetLastAppliedOpId() override; + OpId GetLastAppliedOpId() override; - yb::OpId GetAllAppliedOpId(); + OpId GetAllAppliedOpId(); Result MajorityReplicatedHtLeaseExpiration( MicrosTime min_allowed, CoarseTimePoint deadline) const override; @@ -253,7 +253,7 @@ class RaftConsensus : public std::enable_shared_from_this, // The on-disk size of the consensus metadata. uint64_t OnDiskSize() const; - yb::OpId MinRetryableRequestOpId(); + OpId MinRetryableRequestOpId(); Status StartElection(const LeaderElectionData& data) override { return DoStartElection(data, PreElected::kFalse); @@ -275,10 +275,10 @@ class RaftConsensus : public std::enable_shared_from_this, } Result ReadReplicatedMessagesForXCluster( - const yb::OpId& from, const CoarseTimePoint deadline, bool fetch_single_entry) override; + const OpId& from, const CoarseTimePoint deadline, bool fetch_single_entry) override; Result ReadReplicatedMessagesForCDC( - const yb::OpId& from, + const OpId& from, int64_t* last_replicated_opid_index, const CoarseTimePoint deadline = CoarseTimePoint::max(), const bool fetch_single_entry = false) override; @@ -299,10 +299,10 @@ class RaftConsensus : public std::enable_shared_from_this, HybridTime* consistent_stream_safe_time_footer = nullptr, bool* read_entire_wal = nullptr); - void UpdateCDCConsumerOpId(const yb::OpId& op_id) override; + void UpdateCDCConsumerOpId(const OpId& op_id) override; // Start memory tracking of following operation in case it is still present in our caches. 
- void TrackOperationMemory(const yb::OpId& op_id); + void TrackOperationMemory(const OpId& op_id); uint64_t MajorityNumSSTFiles() const { return majority_num_sst_files_.load(std::memory_order_acquire); @@ -489,7 +489,7 @@ class RaftConsensus : public std::enable_shared_from_this, LWConsensusResponsePB* response, LeaderRequest* deduped_req); // Returns the most recent OpId written to the Log. - yb::OpId GetLatestOpIdFromLog(); + OpId GetLatestOpIdFromLog(); // Begin a replica operation. If the type of message in 'msg' is not a type // that uses operations, delegates to StartConsensusOnlyRoundUnlocked(). @@ -674,7 +674,7 @@ class RaftConsensus : public std::enable_shared_from_this, LeaderRequest* deduped_req, LWConsensusResponsePB* response); // Returns last op id received from leader. - yb::OpId EnqueueWritesUnlocked(const LeaderRequest& deduped_req, WriteEmpty write_empty); + OpId EnqueueWritesUnlocked(const LeaderRequest& deduped_req, WriteEmpty write_empty); Status MarkOperationsAsCommittedUnlocked(const LWConsensusRequestPB& request, const LeaderRequest& deduped_req, OpId last_from_leader); @@ -686,7 +686,7 @@ class RaftConsensus : public std::enable_shared_from_this, // See comment for ReplicaState::CancelPendingOperation void RollbackIdAndDeleteOpId(const ReplicateMsgPtr& replicate_msg, bool should_exists); - yb::OpId WaitForSafeOpIdToApply(const yb::OpId& op_id) override; + OpId WaitForSafeOpIdToApply(const OpId& op_id) override; void AppendEmptyBatchToLeaderLog(); diff --git a/src/yb/docdb/docdb_fwd.h b/src/yb/docdb/docdb_fwd.h index 34bc24912773..af5f140e953b 100644 --- a/src/yb/docdb/docdb_fwd.h +++ b/src/yb/docdb/docdb_fwd.h @@ -96,9 +96,10 @@ using DocVectorIndexesPtr = std::shared_ptr; using DocVectorIndexInsertEntries = std::vector; using DocVectorIndexSearchResult = std::vector; +YB_STRONGLY_TYPED_BOOL(FastBackwardScan); +YB_STRONGLY_TYPED_BOOL(IncludeIntents); YB_STRONGLY_TYPED_BOOL(SkipFlush); YB_STRONGLY_TYPED_BOOL(SkipSeek); -YB_STRONGLY_TYPED_BOOL(FastBackwardScan); YB_STRONGLY_TYPED_BOOL(UseVariableBloomFilter); } // namespace yb::docdb diff --git a/src/yb/docdb/docdb_util.cc b/src/yb/docdb/docdb_util.cc index cc6d53f554eb..86e964d83aaa 100644 --- a/src/yb/docdb/docdb_util.cc +++ b/src/yb/docdb/docdb_util.cc @@ -16,6 +16,7 @@ #include "yb/common/entity_ids.h" #include "yb/docdb/consensus_frontier.h" +#include "yb/docdb/doc_vector_index.h" #include "yb/docdb/docdb.h" #include "yb/docdb/docdb.messages.h" #include "yb/docdb/docdb_debug.h" @@ -45,6 +46,8 @@ namespace yb::docdb { using dockv::DocPath; +const std::string kIntentsDirName = "intents"; + Status SetValueFromQLBinaryWrapper( QLValuePB ql_value, const int pg_data_type, const std::unordered_map& enum_oid_label_map, @@ -590,4 +593,28 @@ std::string GetStorageCheckpointDir(const std::string& data_dir, const std::stri return JoinPathSegments(data_dir, storage); } +Status MoveChild(Env& env, const std::string& data_dir, const std::string& child) { + auto source_dir = JoinPathSegments(data_dir, child); + if (!env.DirExists(source_dir)) { + return Status::OK(); + } + auto dest_dir = GetStorageDir(data_dir, child); + LOG(INFO) << "Moving " << source_dir << " => " << dest_dir; + if (env.FileExists(dest_dir)) { + RETURN_NOT_OK(env.DeleteRecursively(dest_dir)); + } + return env.RenameFile(source_dir, dest_dir); +} + +Status MoveChildren(Env& env, const std::string& db_dir, IncludeIntents include_intents) { + auto children = VERIFY_RESULT(env.GetChildren(db_dir, ExcludeDots::kTrue)); + for (const auto& child : children) { + 
if (child.starts_with(kVectorIndexDirPrefix) || + (include_intents && child == kIntentsDirName)) { + RETURN_NOT_OK(MoveChild(env, db_dir, child)); + } + } + return Status::OK(); +} + } // namespace yb::docdb diff --git a/src/yb/docdb/docdb_util.h b/src/yb/docdb/docdb_util.h index 32076365e8ac..5ee575de2ddd 100644 --- a/src/yb/docdb/docdb_util.h +++ b/src/yb/docdb/docdb_util.h @@ -27,8 +27,16 @@ #include "yb/rocksdb/compaction_filter.h" +namespace yb { + +class Env; + +} + namespace yb::docdb { +extern const std::string kIntentsDirName; + Status SetValueFromQLBinaryWrapper( QLValuePB ql_value, const int pg_data_type, @@ -279,5 +287,7 @@ class DocDBRocksDBUtil : public SchemaPackingProvider { std::string GetStorageDir(const std::string& data_dir, const std::string& storage); std::string GetStorageCheckpointDir(const std::string& data_dir, const std::string& storage); +Status MoveChild(Env& env, const std::string& data_dir, const std::string& child); +Status MoveChildren(Env& env, const std::string& db_dir, IncludeIntents include_intents); } // namespace yb::docdb diff --git a/src/yb/integration-tests/external_daemon.cc b/src/yb/integration-tests/external_daemon.cc index 626952b48577..5fa19eef763f 100644 --- a/src/yb/integration-tests/external_daemon.cc +++ b/src/yb/integration-tests/external_daemon.cc @@ -268,8 +268,7 @@ Status ExternalDaemon::StartProcess(const vector& user_flags) { argv.push_back("--log_dir="); // Tell the server to dump its port information so we can pick it up. - const string info_path = GetServerInfoPath(); - argv.push_back("--server_dump_info_path=" + info_path); + argv.push_back("--server_dump_info_path=" + GetServerInfoPath()); argv.push_back("--server_dump_info_format=pb"); // We use ephemeral ports in many tests. They don't work for production, but are OK @@ -330,7 +329,7 @@ Status ExternalDaemon::StartProcess(const vector& user_flags) { auto p = std::make_unique(exe_, argv); p->PipeParentStdout(); p->PipeParentStderr(); - auto default_output_prefix = Format("[$0]", daemon_id_); + auto default_output_prefix = DefaultOutputPrefix(); LOG(INFO) << "Running " << default_output_prefix << ": " << exe_ << "\n" << JoinStrings(argv, "\n"); if (!FLAGS_external_daemon_heap_profile_prefix.empty()) { @@ -356,6 +355,12 @@ Status ExternalDaemon::StartProcess(const vector& user_flags) { stderr_tailer_thread_->SetListener(listener); } + process_.swap(p); + return Status::OK(); +} + +Status ExternalDaemon::WaitProcessReady() { + auto p = process_.get(); // The process is now starting -- wait for the bound port info to show up. 
Stopwatch sw; sw.start(); @@ -381,14 +386,13 @@ Status ExternalDaemon::StartProcess(const vector& user_flags) { return STATUS( TimedOut, Format( "Timed out after $0s waiting for process ($1) to write info file ($2)", - kProcessStartTimeoutSeconds, exe_, info_path)); + kProcessStartTimeoutSeconds, exe_, GetServerInfoPath())); } RETURN_NOT_OK(BuildServerStateFromInfoPath()); - LOG(INFO) << "Started " << default_output_prefix << " " << exe_ << " as pid " << p->pid(); + LOG(INFO) << "Started " << DefaultOutputPrefix() << " " << exe_ << " as pid " << p->pid(); VLOG(1) << exe_ << " instance information:\n" << status_->DebugString(); - process_.swap(p); return Status::OK(); } @@ -449,7 +453,7 @@ pid_t ExternalDaemon::pid() const { } void ExternalDaemon::Shutdown(SafeShutdown safe_shutdown, RequireExitCode0 require_exit_code_0) { - if (!process_) { + if (!process_ || !status_) { return; } @@ -543,6 +547,10 @@ std::string ExternalDaemon::ProcessNameAndPidStr() { return Format("$0 with pid $1", exe_, process_->pid()); } +std::string ExternalDaemon::DefaultOutputPrefix() { + return Format("[$0]", daemon_id_); +} + HostPort ExternalDaemon::bound_rpc_hostport() const { CHECK(status_); CHECK_GE(status_->bound_rpc_addresses_size(), 1); diff --git a/src/yb/integration-tests/external_daemon.h b/src/yb/integration-tests/external_daemon.h index 0731f939285a..534dd29b4315 100644 --- a/src/yb/integration-tests/external_daemon.h +++ b/src/yb/integration-tests/external_daemon.h @@ -259,6 +259,8 @@ class ExternalDaemon : public RefCountedThreadSafe { return std::make_unique(proxy_cache_, bound_rpc_addr()); } + Status WaitProcessReady(); + protected: friend class RefCountedThreadSafe; virtual ~ExternalDaemon(); @@ -282,6 +284,8 @@ class ExternalDaemon : public RefCountedThreadSafe { std::string ProcessNameAndPidStr(); + std::string DefaultOutputPrefix(); + const std::string daemon_id_; rpc::Messenger* messenger_; rpc::ProxyCache* proxy_cache_; diff --git a/src/yb/integration-tests/external_mini_cluster.cc b/src/yb/integration-tests/external_mini_cluster.cc index 87621ff6dfc8..6e1480fa5693 100644 --- a/src/yb/integration-tests/external_mini_cluster.cc +++ b/src/yb/integration-tests/external_mini_cluster.cc @@ -401,11 +401,13 @@ Status ExternalMiniCluster::Start(rpc::Messenger* messenger) { for (size_t i = 1; i <= opts_.num_tablet_servers; i++) { RETURN_NOT_OK_PREPEND( - AddTabletServer( - ExternalMiniClusterOptions::kDefaultStartCqlProxy, {}, -1, - /* wait_for_registration */ false), + LaunchTabletServer( + ExternalMiniClusterOptions::kDefaultStartCqlProxy, {}, -1), Format("Failed starting tablet server $0", i)); } + for (const auto& ts : tablet_servers_) { + RETURN_NOT_OK(ts->WaitProcessReady()); + } RETURN_NOT_OK(WaitForTabletServerCount( opts_.num_tablet_servers, kTabletServerRegistrationTimeout)); } else { @@ -1440,6 +1442,22 @@ string ExternalMiniCluster::GetBindIpForTabletServer(size_t index) const { Status ExternalMiniCluster::AddTabletServer( bool start_cql_proxy, const std::vector& extra_flags, int num_drives, bool wait_for_registration) { + auto idx = VERIFY_RESULT(LaunchTabletServer(start_cql_proxy, extra_flags, num_drives)); + auto ts = tablet_servers_[idx]; + RETURN_NOT_OK(ts->WaitProcessReady()); + if (!wait_for_registration) { + return Status::OK(); + } + RETURN_NOT_OK(WaitForTabletServerToRegister(ts->uuid(), kTabletServerRegistrationTimeout)); + if (opts_.enable_ysql && opts_.wait_for_tservers_to_accept_ysql_connections) { + RETURN_NOT_OK(WaitForTabletServersToAcceptYSQLConnection( + {idx}, 
MonoTime::Now() + kTabletServerRegistrationTimeout)); + } + return Status::OK(); +} + +Result ExternalMiniCluster::LaunchTabletServer( + bool start_cql_proxy, const std::vector& extra_flags, int num_drives) { CHECK(GetLeaderMaster() != nullptr) << "Must have started at least 1 master before adding tablet servers"; @@ -1507,13 +1525,13 @@ Status ExternalMiniCluster::AddTabletServer( num_drives = opts_.num_drives; } - scoped_refptr ts = new ExternalTabletServer( + auto ts = make_scoped_refptr( idx, messenger_, proxy_cache_.get(), exe, GetDataPath(Format("ts-$0", idx + 1)), num_drives, GetBindIpForTabletServer(idx), ts_rpc_port, ts_http_port, redis_rpc_port, redis_http_port, cql_rpc_port, cql_http_port, pgsql_rpc_port, ysql_conn_mgr_rpc_port, pgsql_http_port, master_hostports, SubstituteInFlags(flags, idx)); - RETURN_NOT_OK(ts->Start(start_cql_proxy)); + RETURN_NOT_OK(ts->Launch(start_cql_proxy)); tablet_servers_.push_back(ts); // Add yb controller for the new ts if we already have controllers for existing TSs. @@ -1522,15 +1540,7 @@ Status ExternalMiniCluster::AddTabletServer( RETURN_NOT_OK(AddYbControllerServer(ts)); } - if (wait_for_registration) { - RETURN_NOT_OK(WaitForTabletServerToRegister(ts->uuid(), kTabletServerRegistrationTimeout)); - if (opts_.enable_ysql && opts_.wait_for_tservers_to_accept_ysql_connections) { - RETURN_NOT_OK(WaitForTabletServersToAcceptYSQLConnection( - {idx}, MonoTime::Now() + kTabletServerRegistrationTimeout)); - } - } - - return Status::OK(); + return idx; } Status ExternalMiniCluster::RemoveTabletServer(const std::string& ts_uuid, MonoTime deadline) { @@ -1814,7 +1824,7 @@ Result CheckedResponse(const Response& response) { } // namespace Result ExternalMiniCluster::GetTabletStatus( - const ExternalTabletServer& ts, const yb::TabletId& tablet_id) { + const ExternalTabletServer& ts, const TabletId& tablet_id) { auto rpc = DefaultRpcController(); tserver::GetTabletStatusRequestPB req; @@ -1840,7 +1850,7 @@ Result ExternalMiniCluster::GetTabl } Result ExternalMiniCluster::GetSplitKey( - const yb::TabletId& tablet_id) { + const TabletId& tablet_id) { size_t attempts = 50; while (attempts > 0) { --attempts; @@ -1865,7 +1875,7 @@ Result ExternalMiniCluster::GetSplitKey( } Result ExternalMiniCluster::GetSplitKey( - const ExternalTabletServer& ts, const yb::TabletId& tablet_id, bool fail_on_response_error) { + const ExternalTabletServer& ts, const TabletId& tablet_id, bool fail_on_response_error) { rpc::RpcController rpc; rpc.set_timeout(kDefaultTimeout); @@ -1881,25 +1891,18 @@ Result ExternalMiniCluster::GetSplitKey( } Status ExternalMiniCluster::FlushTabletsOnSingleTServer( - ExternalTabletServer* ts, const std::vector tablet_ids, - tserver::FlushTabletsRequestPB_Operation operation) { - tserver::FlushTabletsRequestPB req; - tserver::FlushTabletsResponsePB resp; - rpc::RpcController controller; - controller.set_timeout(10s * kTimeMultiplier); + size_t idx, const std::vector& tablet_ids) { + return tablet_servers_[idx]->FlushTablets(tablet_ids); +} - req.set_dest_uuid(ts->uuid()); - req.set_operation(operation); - for (const auto& tablet_id : tablet_ids) { - req.add_tablet_ids(tablet_id); - } - if (tablet_ids.empty()) { - req.set_all_tablets(true); - } +Status ExternalMiniCluster::CompactTabletsOnSingleTServer( + size_t idx, const std::vector& tablet_ids) { + return tablet_servers_[idx]->CompactTablets(tablet_ids); +} - auto ts_admin_service_proxy = std::make_unique( - proxy_cache_.get(), ts->bound_rpc_addr()); - return ts_admin_service_proxy->FlushTablets(req, 
&resp, &controller); +Status ExternalMiniCluster::LogGCOnSingleTServer( + size_t idx, const std::vector& tablet_ids, bool rollover) { + return tablet_servers_[idx]->LogGC(tablet_ids, rollover); } Result ExternalMiniCluster::ListTablets( @@ -2067,7 +2070,7 @@ ExternalMaster* ExternalMiniCluster::GetLeaderMaster() { } Result ExternalMiniCluster::GetTabletLeaderIndex( - const yb::TabletId& tablet_id, bool require_lease) { + const TabletId& tablet_id, bool require_lease) { for (size_t i = 0; i < num_tablet_servers(); ++i) { auto tserver = tablet_server(i); if (tserver->IsProcessAlive() && !tserver->IsProcessPaused()) { @@ -2529,7 +2532,7 @@ Status ExternalMaster::Start(bool shell_mode) { flags.Add("master_addresses", master_addrs_); } RETURN_NOT_OK(StartProcess(flags.value())); - return Status::OK(); + return WaitProcessReady(); } Status ExternalMaster::Restart() { @@ -2577,7 +2580,14 @@ ExternalTabletServer::~ExternalTabletServer() { Status ExternalTabletServer::Start( bool start_cql_proxy, bool set_proxy_addrs, - std::vector> extra_flags) { + const std::vector>& extra_flags) { + RETURN_NOT_OK(Launch(start_cql_proxy, set_proxy_addrs, extra_flags)); + return WaitProcessReady(); +} + +Status ExternalTabletServer::Launch( + bool start_cql_proxy, bool set_proxy_addrs, + const std::vector>& extra_flags) { auto dirs = FsRootDirs(root_dir_, num_drives_); for (const auto& dir : dirs) { RETURN_NOT_OK(Env::Default()->CreateDirs(dir)); @@ -2606,9 +2616,7 @@ Status ExternalTabletServer::Start( flags.Add(flag_value.first, flag_value.second); } - RETURN_NOT_OK(StartProcess(flags.value())); - - return Status::OK(); + return StartProcess(flags.value()); } Status ExternalTabletServer::BuildServerStateFromInfoPath() { @@ -2712,6 +2720,44 @@ Result ExternalTabletServer::SignalPostmaster(int signal) { return kill(postmaster_pid, signal); } +Status ExternalTabletServer::FlushTablets(const std::vector& tablet_ids) { + return ExecuteFlushTablets(tablet_ids, tserver::FlushTabletsRequestPB::FLUSH, [](auto&){}); +} + +Status ExternalTabletServer::CompactTablets(const std::vector& tablet_ids) { + return ExecuteFlushTablets(tablet_ids, tserver::FlushTabletsRequestPB::COMPACT, [](auto&){}); +} + +Status ExternalTabletServer::LogGC(const std::vector& tablet_ids, bool rollover) { + return ExecuteFlushTablets( + tablet_ids, tserver::FlushTabletsRequestPB::LOG_GC, [rollover](auto& req) { + req.set_rollover(rollover); + }); +} + +template +Status ExternalTabletServer::ExecuteFlushTablets( + const std::vector& tablet_ids, tserver::FlushTabletsRequestPB::Operation operation, + const F& f) { + tserver::FlushTabletsRequestPB req; + tserver::FlushTabletsResponsePB resp; + rpc::RpcController controller; + controller.set_timeout(10s * kTimeMultiplier); + + req.set_dest_uuid(uuid()); + req.set_operation(operation); + for (const auto& tablet_id : tablet_ids) { + req.add_tablet_ids(tablet_id); + } + if (tablet_ids.empty()) { + req.set_all_tablets(true); + } + f(req); + + auto ts_admin_service_proxy = Proxy(); + return ts_admin_service_proxy->FlushTablets(req, &resp, &controller); +} + Status RestartAllMasters(ExternalMiniCluster* cluster) { for (size_t i = 0; i != cluster->num_masters(); ++i) { cluster->master(i)->Shutdown(); diff --git a/src/yb/integration-tests/external_mini_cluster.h b/src/yb/integration-tests/external_mini_cluster.h index 64c05188c4f7..a6e395891e0f 100644 --- a/src/yb/integration-tests/external_mini_cluster.h +++ b/src/yb/integration-tests/external_mini_cluster.h @@ -503,14 +503,18 @@ class ExternalMiniCluster : 
public MiniClusterBase { Result GetTabletPeerHealth( const ExternalTabletServer& ts, const std::vector& tablet_ids); - Result GetSplitKey(const yb::TabletId& tablet_id); - Result GetSplitKey(const ExternalTabletServer& ts, - const yb::TabletId& tablet_id, bool fail_on_response_error = true); + Result GetSplitKey(const TabletId& tablet_id); + Result GetSplitKey( + const ExternalTabletServer& ts, const TabletId& tablet_id, + bool fail_on_response_error = true); // Flushes all tablets if tablets_ids is empty. Status FlushTabletsOnSingleTServer( - ExternalTabletServer* ts, const std::vector tablet_ids, - tserver::FlushTabletsRequestPB_Operation operation); + size_t idx, const std::vector& tablet_ids); + Status CompactTabletsOnSingleTServer( + size_t idx, const std::vector& tablet_ids); + Status LogGCOnSingleTServer( + size_t idx, const std::vector& tablet_ids, bool rollover); Status WaitForTSToCrash(const ExternalTabletServer* ts, const MonoDelta& timeout = MonoDelta::FromSeconds(60)); @@ -622,6 +626,9 @@ class ExternalMiniCluster : public MiniClusterBase { friend class UpgradeTestBase; FRIEND_TEST(MasterFailoverTest, TestKillAnyMaster); + Result LaunchTabletServer( + bool start_cql_proxy, const std::vector& extra_flags, int num_drives); + void ConfigureClientBuilder(client::YBClientBuilder* builder) override; Result DoGetLeaderMasterBoundRpcAddr() override; @@ -779,7 +786,12 @@ class ExternalTabletServer : public ExternalDaemon { Status Start( bool start_cql_proxy = ExternalMiniClusterOptions::kDefaultStartCqlProxy, bool set_proxy_addrs = true, - std::vector> extra_flags = {}); + const std::vector>& extra_flags = {}); + + Status Launch( + bool start_cql_proxy = ExternalMiniClusterOptions::kDefaultStartCqlProxy, + bool set_proxy_addrs = true, + const std::vector>& extra_flags = {}); void UpdateMasterAddress(const std::vector& master_addrs); @@ -848,7 +860,16 @@ class ExternalTabletServer : public ExternalDaemon { const MetricPrototype* metric_proto, const char* value_field) const; + Status FlushTablets(const std::vector& tablet_ids); + Status CompactTablets(const std::vector& tablet_ids); + Status LogGC(const std::vector& tablet_ids, bool rollover); + protected: + template + Status ExecuteFlushTablets( + const std::vector& tablet_ids, tserver::FlushTabletsRequestPB::Operation operation, + const F& f); + Status DeleteServerInfoPaths() override; bool ServerInfoPathsExist() override; diff --git a/src/yb/integration-tests/external_mini_cluster_secure_test.cc b/src/yb/integration-tests/external_mini_cluster_secure_test.cc index 7d10419b1952..4a2719c6abb7 100644 --- a/src/yb/integration-tests/external_mini_cluster_secure_test.cc +++ b/src/yb/integration-tests/external_mini_cluster_secure_test.cc @@ -20,6 +20,7 @@ #include "yb/rpc/messenger.h" #include "yb/rpc/secure_stream.h" +#include "yb/util/backoff_waiter.h" #include "yb/util/file_util.h" #include "yb/util/env_util.h" #include "yb/util/string_util.h" @@ -294,10 +295,14 @@ class ExternalMiniClusterSecureWithInterCATest : public ExternalMiniClusterSecur "-p", cluster_->ysql_hostport(0).port(), sslparam, "-c", "select now();" ); - LOG(INFO) << "Running " << ToString(ysqlsh_command); - Subprocess proc(ysqlsh_command[0], ysqlsh_command); - proc.SetEnv("PGPASSWORD", "yugabyte"); - ASSERT_OK(proc.Run()); + ASSERT_OK(WaitFor([&ysqlsh_command] { + LOG(INFO) << "Running " << ToString(ysqlsh_command); + Subprocess proc(ysqlsh_command[0], ysqlsh_command); + proc.SetEnv("PGPASSWORD", "yugabyte"); + auto status = proc.Run(); + WARN_NOT_OK(status, "Failed 
executing ysqlsh"); + return status.ok(); + }, 10s * kTimeMultiplier, "Connected to ysqlsh")); } }; diff --git a/src/yb/integration-tests/mini_cluster.cc b/src/yb/integration-tests/mini_cluster.cc index c5a182ae29b2..48c227032e6c 100644 --- a/src/yb/integration-tests/mini_cluster.cc +++ b/src/yb/integration-tests/mini_cluster.cc @@ -1650,7 +1650,7 @@ void ActivateCompactionTimeLogging(MiniCluster* cluster) { void DumpDocDB(MiniCluster* cluster, ListPeersFilter filter) { auto peers = ListTabletPeers(cluster, filter); for (const auto& peer : peers) { - peer->shared_tablet()->TEST_DocDBDumpToLog(tablet::IncludeIntents::kTrue); + peer->shared_tablet()->TEST_DocDBDumpToLog(docdb::IncludeIntents::kTrue); } } diff --git a/src/yb/integration-tests/raft_consensus-itest.cc b/src/yb/integration-tests/raft_consensus-itest.cc index b432998ed1a4..9a58f82e3d06 100644 --- a/src/yb/integration-tests/raft_consensus-itest.cc +++ b/src/yb/integration-tests/raft_consensus-itest.cc @@ -3634,8 +3634,7 @@ TEST_F(RaftConsensusITest, CatchupAfterLeaderRestarted) { LOG(INFO)<< "Written data. Flush tablet and restart the rest of the replicas"; for (size_t ts_idx = 0; ts_idx < cluster_->num_tablet_servers(); ++ts_idx) { if (ts_idx != paused_ts_idx) { - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer( - cluster_->tablet_server(ts_idx), {tablet_id_}, FlushTabletsRequestPB::FLUSH)); + ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(ts_idx, {tablet_id_})); cluster_->tablet_server(ts_idx)->Shutdown(); ASSERT_OK(cluster_->tablet_server(ts_idx)->Restart()); } diff --git a/src/yb/integration-tests/remote_bootstrap-itest.cc b/src/yb/integration-tests/remote_bootstrap-itest.cc index 910c297048bc..a0391dd5d9c2 100644 --- a/src/yb/integration-tests/remote_bootstrap-itest.cc +++ b/src/yb/integration-tests/remote_bootstrap-itest.cc @@ -1808,8 +1808,7 @@ TEST_F(RemoteBootstrapITest, TestRemoteBootstrapFromClosestPeer) { // Run Log GC on the leader peer and check that the follower is still able to serve as rbs source. // The follower would request to remotely anchor the log on the last received op id. 
auto leader_ts = cluster_->tablet_server(crash_test_leader_index_); - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer( - leader_ts, {crash_test_tablet_id_}, tserver::FlushTabletsRequestPB::LOG_GC)); + ASSERT_OK(leader_ts->LogGC({crash_test_tablet_id_}, false)); ASSERT_NE(crash_test_leader_index_, 2); AddTServerInZone("z2"); diff --git a/src/yb/integration-tests/snapshot-test.cc b/src/yb/integration-tests/snapshot-test.cc index 5d3970e16baf..1a620e253cdd 100644 --- a/src/yb/integration-tests/snapshot-test.cc +++ b/src/yb/integration-tests/snapshot-test.cc @@ -1085,8 +1085,7 @@ TEST_F_EX(SnapshotTest, CrashAfterFlushedFrontierSaved, SnapshotExternalMiniClus for (int iter = 0; iter < kNumIters; ++iter) { const auto log_prefix = Format("Iteration $0: ", iter); - ASSERT_OK( - cluster_->FlushTabletsOnSingleTServer(ts1, {}, tserver::FlushTabletsRequestPB::FLUSH)); + ASSERT_OK(ts1->FlushTablets({})); const auto snapshot_id = ASSERT_RESULT(snapshot_util.CreateSnapshot(table)); auto ts_map = ASSERT_RESULT(itest::CreateTabletServerMap(master_proxy, &client->proxy_cache())); for (const auto& tablet_id : tablet_ids) { diff --git a/src/yb/integration-tests/tablet-split-itest-base.cc b/src/yb/integration-tests/tablet-split-itest-base.cc index 269448b122fc..af13faec48c0 100644 --- a/src/yb/integration-tests/tablet-split-itest-base.cc +++ b/src/yb/integration-tests/tablet-split-itest-base.cc @@ -830,16 +830,6 @@ Status TabletSplitExternalMiniClusterITest::SplitTablet(const std::string& table return Status::OK(); } -Status TabletSplitExternalMiniClusterITest::FlushTabletsOnSingleTServer( - size_t tserver_idx, const std::vector tablet_ids, bool is_compaction) { - auto tserver = cluster_->tablet_server(tserver_idx); - auto flush_op_type = is_compaction ? - tserver::FlushTabletsRequestPB::COMPACT : - tserver::FlushTabletsRequestPB::FLUSH; - RETURN_NOT_OK(cluster_->FlushTabletsOnSingleTServer(tserver, tablet_ids, flush_op_type)); - return Status::OK(); -} - Result> TabletSplitExternalMiniClusterITest::GetTestTableTabletIds( size_t tserver_idx) { std::set tablet_ids; @@ -1032,7 +1022,7 @@ Status TabletSplitExternalMiniClusterITest::SplitTabletCrashMaster( if (change_split_boundary) { RETURN_NOT_OK(WriteRows(kNumRows * 2, kNumRows)); for (size_t i = 0; i < cluster_->num_tablet_servers(); i++) { - RETURN_NOT_OK(FlushTabletsOnSingleTServer(i, {tablet_id}, false)); + RETURN_NOT_OK(cluster_->FlushTabletsOnSingleTServer(i, {tablet_id})); } } diff --git a/src/yb/integration-tests/tablet-split-itest-base.h b/src/yb/integration-tests/tablet-split-itest-base.h index e603b9fa9c58..648f275b45eb 100644 --- a/src/yb/integration-tests/tablet-split-itest-base.h +++ b/src/yb/integration-tests/tablet-split-itest-base.h @@ -229,9 +229,6 @@ class TabletSplitExternalMiniClusterITest : public TabletSplitITestBase tablet_ids, bool is_compaction); - Result> GetTestTableTabletIds(size_t tserver_idx); Result> GetTestTableTabletIds(); diff --git a/src/yb/integration-tests/tablet-split-itest.cc b/src/yb/integration-tests/tablet-split-itest.cc index 8d4c13905563..eb28661443b3 100644 --- a/src/yb/integration-tests/tablet-split-itest.cc +++ b/src/yb/integration-tests/tablet-split-itest.cc @@ -1965,9 +1965,7 @@ TEST_F(AutomaticTabletSplitExternalMiniClusterITest, CrashedSplitIsRestarted) { std::this_thread::sleep_for(2s); // Flush to ensure SST files are generated so splitting can occur. 
for (size_t i = 0; i < cluster_->num_tablet_servers(); ++i) { - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(cluster_->tablet_server(i), - {tablet_id}, - tserver::FlushTabletsRequestPB::FLUSH)); + ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(i, {tablet_id})); } const auto kCrashTime = 10s; @@ -3038,8 +3036,7 @@ TEST_F_EX( auto* ts = cluster_->tablet_server(i); if (i != server_to_bootstrap_idx) { ASSERT_OK(cluster_->WaitForAllIntentsApplied(ts, 15s * kTimeMultiplier)); - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer( - ts, {source_tablet_id}, tserver::FlushTabletsRequestPB::FLUSH)); + ASSERT_OK(ts->FlushTablets({source_tablet_id})); // Prevent leader changes. ASSERT_OK(cluster_->SetFlag(ts, "enable_leader_failure_detection", "false")); } @@ -3169,8 +3166,7 @@ TEST_F_EX( for (size_t ts_idx = 0; ts_idx < cluster_->num_tablet_servers(); ++ts_idx) { auto* ts = cluster_->tablet_server(ts_idx); if (ts->IsProcessAlive()) { - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer( - ts, {source_tablet_id}, tserver::FlushTabletsRequestPB::FLUSH)); + ASSERT_OK(ts->FlushTablets({source_tablet_id})); ASSERT_OK(WaitForAnySstFiles(*ts, source_tablet_id)); } } diff --git a/src/yb/tablet/tablet-split-test.cc b/src/yb/tablet/tablet-split-test.cc index 15a6f66b0b63..a70c895f9580 100644 --- a/src/yb/tablet/tablet-split-test.cc +++ b/src/yb/tablet/tablet-split-test.cc @@ -134,9 +134,9 @@ TEST_F(TabletSplitTest, SplitTablet) { << docdb::DocDBDebugDumpToStr( tablet()->doc_db(), &tablet()->GetSchemaPackingProvider(), docdb::IncludeBinary::kTrue); - const auto source_docdb_dump_str = tablet()->TEST_DocDBDumpStr(IncludeIntents::kTrue); + const auto source_docdb_dump_str = tablet()->TEST_DocDBDumpStr(docdb::IncludeIntents::kTrue); std::unordered_set source_docdb_dump; - tablet()->TEST_DocDBDumpToContainer(IncludeIntents::kTrue, &source_docdb_dump); + tablet()->TEST_DocDBDumpToContainer(docdb::IncludeIntents::kTrue, &source_docdb_dump); std::unordered_set source_rows; for (const auto& row : ASSERT_RESULT(SelectAll(tablet().get()))) { @@ -183,7 +183,7 @@ TEST_F(TabletSplitTest, SplitTablet) { split_tablet->metadata()->ToSuperBlock(&super_block); ASSERT_EQ(split_tablet->tablet_id(), super_block.kv_store().kv_store_id()); } - const auto split_docdb_dump_str = split_tablet->TEST_DocDBDumpStr(IncludeIntents::kTrue); + const auto split_docdb_dump_str = split_tablet->TEST_DocDBDumpStr(docdb::IncludeIntents::kTrue); // Before compaction underlying DocDB dump should be the same. ASSERT_EQ(source_docdb_dump_str, split_docdb_dump_str); @@ -207,12 +207,12 @@ TEST_F(TabletSplitTest, SplitTablet) { ASSERT_OK(split_tablet->ForceManualRocksDBCompact()); VLOG(1) << split_tablet->tablet_id() << " compacted:" << std::endl - << split_tablet->TEST_DocDBDumpStr(IncludeIntents::kTrue); + << split_tablet->TEST_DocDBDumpStr(docdb::IncludeIntents::kTrue); // After compaction split tablets' RocksDB instances should have no overlap and no unexpected // data. 
std::unordered_set split_docdb_dump; - split_tablet->TEST_DocDBDumpToContainer(IncludeIntents::kTrue, &split_docdb_dump); + split_tablet->TEST_DocDBDumpToContainer(docdb::IncludeIntents::kTrue, &split_docdb_dump); for (const auto& entry : split_docdb_dump) { ASSERT_EQ(source_docdb_dump.erase(entry), 1); } diff --git a/src/yb/tablet/tablet.cc b/src/yb/tablet/tablet.cc index 0edfa4aa6697..bcee4b08e030 100644 --- a/src/yb/tablet/tablet.cc +++ b/src/yb/tablet/tablet.cc @@ -812,7 +812,7 @@ Status Tablet::CreateTabletDirectories(const string& db_dir, FsManager* fs) { Format("Failed to create RocksDB tablet directory $0", db_dir)); RETURN_NOT_OK_PREPEND( - fs->CreateDirIfMissingAndSync(docdb::GetStorageDir(db_dir, kIntentsDirName)), + fs->CreateDirIfMissingAndSync(docdb::GetStorageDir(db_dir, docdb::kIntentsDirName)), Format("Failed to create RocksDB tablet intents directory $0", db_dir)); RETURN_NOT_OK(snapshots_->CreateDirectories(db_dir, fs)); @@ -1122,7 +1122,7 @@ Status Tablet::OpenIntentsDB(const rocksdb::Options& common_options) { const auto& db_dir = metadata()->rocksdb_dir(); - auto intents_dir = docdb::GetStorageDir(db_dir, kIntentsDirName); + auto intents_dir = docdb::GetStorageDir(db_dir, docdb::kIntentsDirName); LOG_WITH_PREFIX(INFO) << "Opening intents DB at: " << intents_dir; rocksdb::Options intents_rocksdb_options(common_options); intents_rocksdb_options.compaction_context_factory = {}; @@ -4114,7 +4114,7 @@ Status Tablet::ForceRocksDBCompact( return Status::OK(); } -std::string Tablet::TEST_DocDBDumpStr(IncludeIntents include_intents) { +std::string Tablet::TEST_DocDBDumpStr(docdb::IncludeIntents include_intents) { if (!regular_db_) return ""; if (!include_intents) { @@ -4126,7 +4126,7 @@ std::string Tablet::TEST_DocDBDumpStr(IncludeIntents include_intents) { } void Tablet::TEST_DocDBDumpToContainer( - IncludeIntents include_intents, std::unordered_set* out) { + docdb::IncludeIntents include_intents, std::unordered_set* out) { if (!regular_db_) return; if (!include_intents) { @@ -4137,7 +4137,7 @@ void Tablet::TEST_DocDBDumpToContainer( return docdb::DocDBDebugDumpToContainer(doc_db(), &GetSchemaPackingProvider(), out); } -void Tablet::TEST_DocDBDumpToLog(IncludeIntents include_intents) { +void Tablet::TEST_DocDBDumpToLog(docdb::IncludeIntents include_intents) { if (!regular_db_) { LOG_WITH_PREFIX(INFO) << "No RocksDB to dump"; return; diff --git a/src/yb/tablet/tablet.h b/src/yb/tablet/tablet.h index e76002ee0951..312ce9a82291 100644 --- a/src/yb/tablet/tablet.h +++ b/src/yb/tablet/tablet.h @@ -98,7 +98,6 @@ namespace tablet { YB_STRONGLY_TYPED_BOOL(BlockingRocksDbShutdownStart); YB_STRONGLY_TYPED_BOOL(FlushOnShutdown); -YB_STRONGLY_TYPED_BOOL(IncludeIntents); YB_STRONGLY_TYPED_BOOL(CheckRegularDB) YB_DEFINE_ENUM(Direction, (kForward)(kBackward)); @@ -644,13 +643,14 @@ class Tablet : public AbstractTablet, // range-based partitions always matches the returned middle key. Result GetEncodedMiddleSplitKey(std::string *partition_split_key = nullptr) const; - std::string TEST_DocDBDumpStr(IncludeIntents include_intents = IncludeIntents::kFalse); + std::string TEST_DocDBDumpStr( + docdb::IncludeIntents include_intents = docdb::IncludeIntents::kFalse); void TEST_DocDBDumpToContainer( - IncludeIntents include_intents, std::unordered_set* out); + docdb::IncludeIntents include_intents, std::unordered_set* out); // Dumps DocDB contents to log, every record as a separate log message, with the given prefix. 
- void TEST_DocDBDumpToLog(IncludeIntents include_intents); + void TEST_DocDBDumpToLog(docdb::IncludeIntents include_intents); Result TEST_CountDBRecords(docdb::StorageDbType db_type); diff --git a/src/yb/tablet/tablet.proto b/src/yb/tablet/tablet.proto index 2e04d6e65e57..d3c38e7cfa0c 100644 --- a/src/yb/tablet/tablet.proto +++ b/src/yb/tablet/tablet.proto @@ -37,6 +37,7 @@ option java_package = "org.yb.tablet"; import "yb/common/common.proto"; import "yb/common/common_types.proto"; +import "yb/common/opid.proto"; import "yb/tablet/tablet_types.proto"; message TabletStatusPB { @@ -65,6 +66,7 @@ message TabletStatusPB { repeated bytes colocated_table_ids = 19; optional string pgschema_name = 20; repeated bytes vector_index_finished_backfills = 21; + optional OpIdPB last_op_id = 22; } // Used to present the maintenance manager's internal state. diff --git a/src/yb/tablet/tablet_bootstrap.cc b/src/yb/tablet/tablet_bootstrap.cc index 40a8c9fa9ae9..c750f35b029c 100644 --- a/src/yb/tablet/tablet_bootstrap.cc +++ b/src/yb/tablet/tablet_bootstrap.cc @@ -555,7 +555,7 @@ class TabletBootstrap { if (FLAGS_TEST_dump_docdb_before_tablet_bootstrap) { LOG_WITH_PREFIX(INFO) << "DEBUG: DocDB dump before tablet bootstrap:"; - tablet_->TEST_DocDBDumpToLog(IncludeIntents::kTrue); + tablet_->TEST_DocDBDumpToLog(docdb::IncludeIntents::kTrue); } const auto needs_recovery = VERIFY_RESULT(PrepareToReplay()); @@ -630,7 +630,7 @@ class TabletBootstrap { listener_->StatusMessage(message); if (FLAGS_TEST_dump_docdb_after_tablet_bootstrap) { LOG_WITH_PREFIX(INFO) << "DEBUG: DocDB debug dump after tablet bootstrap:\n"; - tablet_->TEST_DocDBDumpToLog(IncludeIntents::kTrue); + tablet_->TEST_DocDBDumpToLog(docdb::IncludeIntents::kTrue); } *rebuilt_tablet = std::move(tablet_); diff --git a/src/yb/tablet/tablet_metadata.cc b/src/yb/tablet/tablet_metadata.cc index 0f2e640da9d8..66246b686342 100644 --- a/src/yb/tablet/tablet_metadata.cc +++ b/src/yb/tablet/tablet_metadata.cc @@ -132,7 +132,6 @@ std::string MakeTableInfoLogPrefix( } // namespace const int64 kNoDurableMemStore = -1; -const std::string kIntentsDirName = "intents"; const std::string kSnapshotsDirName = "snapshots"; // ============================================================================ @@ -935,7 +934,7 @@ Status RaftGroupMetadata::DeleteTabletData(TabletDataState delete_type, bool RaftGroupMetadata::IsTombstonedWithNoRocksDBData() const { std::lock_guard lock(data_mutex_); const auto& rocksdb_dir = kv_store_.rocksdb_dir; - const auto intents_dir = docdb::GetStorageDir(rocksdb_dir, kIntentsDirName); + const auto intents_dir = docdb::GetStorageDir(rocksdb_dir, docdb::kIntentsDirName); return tablet_data_state_ == TABLET_DATA_TOMBSTONED && !fs_manager_->env()->FileExists(rocksdb_dir) && !fs_manager_->env()->FileExists(intents_dir); @@ -2446,7 +2445,7 @@ bool RaftGroupMetadata::OnPostSplitCompactionDone() { } std::string RaftGroupMetadata::intents_rocksdb_dir() const { - return docdb::GetStorageDir(kv_store_.rocksdb_dir, kIntentsDirName); + return docdb::GetStorageDir(kv_store_.rocksdb_dir, docdb::kIntentsDirName); } std::string RaftGroupMetadata::snapshots_dir() const { diff --git a/src/yb/tablet/tablet_metadata.h b/src/yb/tablet/tablet_metadata.h index fb150a6bdd35..27033eab515f 100644 --- a/src/yb/tablet/tablet_metadata.h +++ b/src/yb/tablet/tablet_metadata.h @@ -70,7 +70,6 @@ namespace yb::tablet { using TableInfoMap = std::unordered_map; extern const int64 kNoDurableMemStore; -extern const std::string kIntentsDirName; extern const std::string 
kSnapshotsDirName; const uint64_t kNoLastFullCompactionTime = HybridTime::kMin.ToUint64(); diff --git a/src/yb/tablet/tablet_peer.cc b/src/yb/tablet/tablet_peer.cc index 5eec3b366b51..7562006cd825 100644 --- a/src/yb/tablet/tablet_peer.cc +++ b/src/yb/tablet/tablet_peer.cc @@ -787,37 +787,44 @@ std::unique_ptr TabletPeer::CreateUpdateTransaction( } void TabletPeer::GetTabletStatusPB(TabletStatusPB* status_pb_out) { - std::lock_guard lock(lock_); - DCHECK(status_pb_out != nullptr); - DCHECK(status_listener_.get() != nullptr); - const auto disk_size_info = GetOnDiskSizeInfo(); - status_pb_out->set_tablet_id(status_listener_->tablet_id()); - status_pb_out->set_namespace_name(status_listener_->namespace_name()); - status_pb_out->set_table_name(status_listener_->table_name()); - status_pb_out->set_table_id(status_listener_->table_id()); - status_pb_out->set_last_status(status_listener_->last_status()); - status_listener_->partition()->ToPB(status_pb_out->mutable_partition()); - status_pb_out->set_state(state_); - status_pb_out->set_tablet_data_state(meta_->tablet_data_state()); - auto tablet = tablet_; - if (tablet) { - status_pb_out->set_table_type(tablet->table_type()); - auto vector_index_finished_backfills = tablet->vector_indexes().FinishedBackfills(); - if (vector_index_finished_backfills) { - *status_pb_out->mutable_vector_index_finished_backfills() = - std::move(*vector_index_finished_backfills); + std::shared_ptr consensus; + { + std::lock_guard lock(lock_); + DCHECK(status_pb_out != nullptr); + DCHECK(status_listener_.get() != nullptr); + const auto disk_size_info = GetOnDiskSizeInfo(); + status_pb_out->set_tablet_id(status_listener_->tablet_id()); + status_pb_out->set_namespace_name(status_listener_->namespace_name()); + status_pb_out->set_table_name(status_listener_->table_name()); + status_pb_out->set_table_id(status_listener_->table_id()); + status_pb_out->set_last_status(status_listener_->last_status()); + status_listener_->partition()->ToPB(status_pb_out->mutable_partition()); + status_pb_out->set_state(state_); + status_pb_out->set_tablet_data_state(meta_->tablet_data_state()); + auto tablet = tablet_; + if (tablet) { + status_pb_out->set_table_type(tablet->table_type()); + auto vector_index_finished_backfills = tablet->vector_indexes().FinishedBackfills(); + if (vector_index_finished_backfills) { + *status_pb_out->mutable_vector_index_finished_backfills() = + std::move(*vector_index_finished_backfills); + } } + disk_size_info.ToPB(status_pb_out); + // Set hide status of the tablet. + status_pb_out->set_is_hidden(meta_->hidden()); + status_pb_out->set_parent_data_compacted(meta_->parent_data_compacted()); + for (const auto& table : meta_->GetAllColocatedTables()) { + status_pb_out->add_colocated_table_ids(table); + } + consensus = consensus_; } - disk_size_info.ToPB(status_pb_out); - // Set hide status of the tablet. 
- status_pb_out->set_is_hidden(meta_->hidden()); - status_pb_out->set_parent_data_compacted(meta_->parent_data_compacted()); - for (const auto& table : meta_->GetAllColocatedTables()) { - status_pb_out->add_colocated_table_ids(table); + if (consensus) { + consensus->log()->GetLatestEntryOpId().ToPB(status_pb_out->mutable_last_op_id()); } } -Status TabletPeer::RunLogGC() { +Status TabletPeer::RunLogGC(bool rollover) { if (!CheckRunning().ok()) { return Status::OK(); } @@ -833,6 +840,9 @@ Status TabletPeer::RunLogGC() { } else { min_log_index = VERIFY_RESULT(GetEarliestNeededLogIndex()); } + if (rollover) { + RETURN_NOT_OK(log_->AllocateSegmentAndRollOver()); + } int32_t num_gced = 0; return log_->GC(min_log_index, &num_gced); } diff --git a/src/yb/tablet/tablet_peer.h b/src/yb/tablet/tablet_peer.h index d4090ca5e07f..335af1084705 100644 --- a/src/yb/tablet/tablet_peer.h +++ b/src/yb/tablet/tablet_peer.h @@ -393,7 +393,7 @@ class TabletPeer : public std::enable_shared_from_this, Result NewReplicaOperationDriver(std::unique_ptr* operation); // Tells the tablet's log to garbage collect. - Status RunLogGC(); + Status RunLogGC(bool rollover = false); // Register the maintenance ops associated with this peer's tablet, also invokes // Tablet::RegisterMaintenanceOps(). diff --git a/src/yb/tablet/tablet_snapshots.cc b/src/yb/tablet/tablet_snapshots.cc index 421f0021c559..0debed53e06e 100644 --- a/src/yb/tablet/tablet_snapshots.cc +++ b/src/yb/tablet/tablet_snapshots.cc @@ -527,22 +527,7 @@ Status TabletSnapshots::RestoreCheckpoint( return STATUS(IllegalState, "Unable to copy checkpoint files", s.ToString()); } - { - auto& env = this->env(); - auto children = VERIFY_RESULT(env.GetChildren(db_dir, ExcludeDots::kTrue)); - for (const auto& child : children) { - if (!child.starts_with(docdb::kVectorIndexDirPrefix)) { - continue; - } - auto source_dir = JoinPathSegments(db_dir, child); - if (!env.DirExists(source_dir)) { - continue; - } - auto dest_dir = docdb::GetStorageDir(db_dir, child); - LOG_WITH_PREFIX(INFO) << "Moving " << source_dir << " => " << dest_dir; - RETURN_NOT_OK(env.RenameFile(source_dir, dest_dir)); - } - } + RETURN_NOT_OK(MoveChildren(this->env(), db_dir, docdb::IncludeIntents::kFalse)); auto tablet_metadata_file = TabletMetadataFile(db_dir); if (env().FileExists(tablet_metadata_file)) { @@ -721,8 +706,8 @@ Status TabletSnapshots::CreateCheckpoint( Status TabletSnapshots::DoCreateCheckpoint( const std::string& dir, CreateIntentsCheckpointIn create_intents_checkpoint_in) { - auto temp_intents_dir = docdb::GetStorageDir(dir, kIntentsDirName); - auto final_intents_dir = docdb::GetStorageCheckpointDir(dir, kIntentsDirName); + auto temp_intents_dir = docdb::GetStorageDir(dir, docdb::kIntentsDirName); + auto final_intents_dir = docdb::GetStorageCheckpointDir(dir, docdb::kIntentsDirName); if (has_intents_db()) { RETURN_NOT_OK(rocksdb::checkpoint::CreateCheckpoint(&intents_db(), temp_intents_dir)); diff --git a/src/yb/tools/yb-backup/yb-backup-cross-feature-test.cc b/src/yb/tools/yb-backup/yb-backup-cross-feature-test.cc index f73e5cf62d16..20a88a6a7b91 100644 --- a/src/yb/tools/yb-backup/yb-backup-cross-feature-test.cc +++ b/src/yb/tools/yb-backup/yb-backup-cross-feature-test.cc @@ -2627,12 +2627,9 @@ TEST_P( {"--backup_location", backup_dir, "--keyspace", Format("ysql.$0", backup_db_name), "create"})); - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(cluster_->tablet_server(0), {}, - tserver::FlushTabletsRequestPB::COMPACT)); - 
ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(cluster_->tablet_server(1), {}, - tserver::FlushTabletsRequestPB::COMPACT)); - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(cluster_->tablet_server(2), {}, - tserver::FlushTabletsRequestPB::COMPACT)); + ASSERT_OK(cluster_->CompactTabletsOnSingleTServer(0, {})); + ASSERT_OK(cluster_->CompactTabletsOnSingleTServer(1, {})); + ASSERT_OK(cluster_->CompactTabletsOnSingleTServer(2, {})); ASSERT_OK(RunBackupCommand( {"--backup_location", backup_dir, "--keyspace", Format("ysql.$0", restore_db_name), @@ -2643,12 +2640,9 @@ TEST_P( ASSERT_NO_FATALS( InsertRows(Format("INSERT INTO $0 VALUES (9,9,9), (10,10,10), (11,11,11)", table_name), 3)); - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(cluster_->tablet_server(0), {}, - tserver::FlushTabletsRequestPB::COMPACT)); - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(cluster_->tablet_server(1), {}, - tserver::FlushTabletsRequestPB::COMPACT)); - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(cluster_->tablet_server(2), {}, - tserver::FlushTabletsRequestPB::COMPACT)); + ASSERT_OK(cluster_->CompactTabletsOnSingleTServer(0, {})); + ASSERT_OK(cluster_->CompactTabletsOnSingleTServer(1, {})); + ASSERT_OK(cluster_->CompactTabletsOnSingleTServer(2, {})); LOG(INFO) << "Test finished: " << CURRENT_TEST_CASE_AND_TEST_NAME_STR(); } diff --git a/src/yb/tserver/remote_bootstrap_client.cc b/src/yb/tserver/remote_bootstrap_client.cc index 2ac87c49625c..aed3a65542a7 100644 --- a/src/yb/tserver/remote_bootstrap_client.cc +++ b/src/yb/tserver/remote_bootstrap_client.cc @@ -644,19 +644,7 @@ Status RemoteBootstrapClient::DownloadRocksDBFiles() { } // To avoid adding new file type to remote bootstrap we move intents as subdir of regular DB. auto& env = this->env(); - auto children = VERIFY_RESULT(env.GetChildren(rocksdb_dir, ExcludeDots::kTrue)); - for (const auto& child : children) { - if (!child.starts_with(docdb::kVectorIndexDirPrefix) && child != tablet::kIntentsDirName) { - continue; - } - auto source_dir = JoinPathSegments(rocksdb_dir, child); - if (!env.DirExists(source_dir)) { - continue; - } - auto dest_dir = docdb::GetStorageDir(rocksdb_dir, child); - LOG_WITH_PREFIX(INFO) << "Moving " << source_dir << " => " << dest_dir; - RETURN_NOT_OK(env.RenameFile(source_dir, dest_dir)); - } + RETURN_NOT_OK(MoveChildren(env, rocksdb_dir, docdb::IncludeIntents::kTrue)); if (FLAGS_bytes_remote_bootstrap_durable_write_mb != 0) { // Persist directory so that recently downloaded files are accessible. RETURN_NOT_OK(env.SyncDir(rocksdb_dir)); diff --git a/src/yb/tserver/tablet_service.cc b/src/yb/tserver/tablet_service.cc index eeb4dd620279..1782966c3b69 100644 --- a/src/yb/tserver/tablet_service.cc +++ b/src/yb/tserver/tablet_service.cc @@ -2068,7 +2068,7 @@ void TabletServiceAdminImpl::FlushTablets(const FlushTabletsRequestPB* req, case FlushTabletsRequestPB::LOG_GC: for (const auto& tablet : tablet_peers) { resp->set_failed_tablet_id(tablet->tablet_id()); - RETURN_UNKNOWN_ERROR_IF_NOT_OK(tablet->RunLogGC(), resp, &context); + RETURN_UNKNOWN_ERROR_IF_NOT_OK(tablet->RunLogGC(req->rollover()), resp, &context); resp->clear_failed_tablet_id(); } break; diff --git a/src/yb/tserver/tserver_admin.proto b/src/yb/tserver/tserver_admin.proto index 22eb0a954233..2aef3caa8b0e 100644 --- a/src/yb/tserver/tserver_admin.proto +++ b/src/yb/tserver/tserver_admin.proto @@ -257,6 +257,9 @@ message FlushTabletsRequestPB { // Whether we want to perform operation for all vector indexes for the specified tablets. 
optional bool all_vector_indexes = 8; + + // Whether to rollover log before LOG_GC. + optional bool rollover = 9; } message FlushTabletsResponsePB { diff --git a/src/yb/yql/pggate/test/pggate_test_select.cc b/src/yb/yql/pggate/test/pggate_test_select.cc index d5236ff18e9c..98aab95446cd 100644 --- a/src/yb/yql/pggate/test/pggate_test_select.cc +++ b/src/yb/yql/pggate/test/pggate_test_select.cc @@ -530,8 +530,7 @@ TEST_F_EX(PggateTestSelect, GetColocatedTableKeyRanges, PggateTestSelectWithYsql ASSERT_OK(cluster_->WaitForAllIntentsApplied(30s * kTimeMultiplier)); for (size_t ts_idx = 0; ts_idx < cluster_->num_tablet_servers(); ++ts_idx) { - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer( - cluster_->tablet_server(ts_idx), {}, tserver::FlushTabletsRequestPB::FLUSH)); + ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(ts_idx, {})); } std::vector> min_max_keys; diff --git a/src/yb/yql/pgwrapper/CMakeLists.txt b/src/yb/yql/pgwrapper/CMakeLists.txt index cb1c45169c34..f301e3ed2b90 100644 --- a/src/yb/yql/pgwrapper/CMakeLists.txt +++ b/src/yb/yql/pgwrapper/CMakeLists.txt @@ -171,6 +171,7 @@ ADD_YB_TEST(pg_tablet_split-test) ADD_YB_TEST(pg_type-test) ADD_YB_TEST(pg_txn-test) ADD_YB_TEST(pg_txn_status-test) +ADD_YB_TEST(pg_vector_index-itest) ADD_YB_TEST(pg_vector_index-test) ADD_YB_TEST(pg_wait_on_conflict-test) ADD_YB_TEST(pg_wrapper-test) diff --git a/src/yb/yql/pgwrapper/pg_ash-test.cc b/src/yb/yql/pgwrapper/pg_ash-test.cc index a54bfe2484fa..9d31a28b2f4e 100644 --- a/src/yb/yql/pgwrapper/pg_ash-test.cc +++ b/src/yb/yql/pgwrapper/pg_ash-test.cc @@ -527,8 +527,7 @@ TEST_F_EX(PgWaitEventAuxTest, YB_DISABLE_TEST_IN_TSAN(TabletSplitRPCs), PgTablet ASSERT_OK(conn_->ExecuteFormat( "INSERT INTO $0 SELECT i, i FROM generate_series(1, 100) AS i", kTableName)); - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(cluster_->tablet_server(0), {tablet_id}, - tserver::FlushTabletsRequestPB_Operation::FlushTabletsRequestPB_Operation_FLUSH)); + ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(0, {tablet_id})); // keep running selects until GetTablePartitionList RPC is found ASSERT_OK(WaitFor([this]() -> Result { diff --git a/src/yb/yql/pgwrapper/pg_index_backfill-test.cc b/src/yb/yql/pgwrapper/pg_index_backfill-test.cc index e87e3ff7d9c0..af62c4ab2921 100644 --- a/src/yb/yql/pgwrapper/pg_index_backfill-test.cc +++ b/src/yb/yql/pgwrapper/pg_index_backfill-test.cc @@ -29,7 +29,6 @@ #include "yb/master/master_error.h" #include "yb/tserver/tserver_service.pb.h" -#include "yb/tserver/tserver_service.proxy.h" #include "yb/util/async_util.h" #include "yb/util/backoff_waiter.h" @@ -65,7 +64,6 @@ const auto kPhase = "phase"s; const auto kPhaseBackfilling = "backfilling"s; const auto kPhaseInitializing = "initializing"s; const client::YBTableName kYBTableName(YQLDatabase::YQL_DATABASE_PGSQL, kDatabaseName, kTableName); -constexpr auto kBackfillSleepSec = 10 * kTimeMultiplier; } // namespace @@ -88,8 +86,6 @@ class PgIndexBackfillTest : public LibPqTestBase, public ::testing::WithParamInt options->extra_tserver_flags.push_back("--ysql_disable_index_backfill=false"); options->extra_tserver_flags.push_back( Format("--ysql_num_shards_per_tserver=$0", kTabletsPerServer)); - options->extra_tserver_flags.push_back( - Format("--TEST_sleep_before_vector_index_backfill_seconds=$0", kBackfillSleepSec)); if (EnableTableLocks()) { options->extra_master_flags.push_back("--TEST_enable_object_locking_for_table_locks=true"); @@ -2572,166 +2568,6 @@ TEST_P(PgIndexBackfill1kRowsPerSec, ConcurrentDelete) { thread_holder_.JoinAll(); } -struct 
VectorIndexWriter { - static constexpr int kBig = 100000000; - - std::atomic counter = 0; - std::atomic extra_values_counter = kBig * 2; - std::atomic last_write; - std::atomic max_time_without_inserts = MonoDelta::FromNanoseconds(0); - std::atomic failure = false; - - void Perform(PGConn& conn) { - std::vector values; - for (int i = RandomUniformInt(3, 6); i > 0; --i) { - values.push_back(++counter); - } - size_t keep_values = values.size(); - for (int i = RandomUniformInt(0, 2); i > 0; --i) { - values.push_back(++extra_values_counter); - } - bool use_2_steps = RandomUniformBool(); - - int offset = use_2_steps ? kBig : 0; - ASSERT_NO_FATALS(Insert(conn, values, offset)); - if (use_2_steps || keep_values != values.size()) { - ASSERT_NO_FATALS(UpdateAndDelete(conn, values, keep_values)); - } - } - - void Insert(PGConn& conn, const std::vector& values, int offset) { - for (;;) { - ASSERT_OK(conn.StartTransaction(IsolationLevel::SNAPSHOT_ISOLATION)); - bool failed = false; - for (auto value : values) { - auto res = conn.ExecuteFormat( - "INSERT INTO test VALUES ($0, '[$1.0]')", value, value + offset); - if (!res.ok()) { - ASSERT_OK(conn.RollbackTransaction()); - LOG(INFO) << "Insert " << value << " failed: " << res; - ASSERT_STR_CONTAINS(res.message().ToBuffer(), "schema version mismatch"); - failed = true; - break; - } - } - if (!failed) { - ASSERT_OK(conn.CommitTransaction()); - auto now = CoarseMonoClock::Now(); - auto prev_last_write = last_write.exchange(now); - if (prev_last_write != CoarseTimePoint()) { - MonoDelta new_value(now - prev_last_write); - if (MakeAtLeast(max_time_without_inserts, new_value)) { - LOG(INFO) << "Update max time without inserts: " << new_value; - } - } - std::this_thread::sleep_for(100ms); - break; - } - } - } - - void UpdateAndDelete(PGConn& conn, const std::vector& values, size_t keep_values) { - for (;;) { - ASSERT_OK(conn.StartTransaction(IsolationLevel::SNAPSHOT_ISOLATION)); - bool failed = false; - for (size_t i = 0; i != values.size(); ++i) { - auto value = values[i]; - Status res; - if (i < keep_values) { - res = conn.ExecuteFormat( - "UPDATE test SET embedding = '[$0.0]' WHERE id = $0", value); - } else { - res = conn.ExecuteFormat("DELETE FROM test WHERE id = $0", value); - } - if (!res.ok()) { - ASSERT_OK(conn.RollbackTransaction()); - LOG(INFO) << - (i < keep_values ? "Update " : "Delete " ) << value << " failed: " << res; - ASSERT_STR_CONTAINS(res.message().ToBuffer(), "schema version mismatch"); - failed = true; - break; - } - } - if (!failed) { - ASSERT_OK(conn.CommitTransaction()); - std::this_thread::sleep_for(100ms); - break; - } - } - } - - void WaitWritten(int num_rows) { - auto limit = counter.load() + num_rows; - while (counter.load() < limit && !failure) { - std::this_thread::sleep_for(10ms); - } - } - - void Verify(PGConn& conn) { - int num_bad_results = 0; - for (int i = 2; i < counter.load(); ++i) { - auto rows = ASSERT_RESULT(conn.FetchAllAsString(Format( - "SELECT id FROM test ORDER BY embedding <-> '[$0]' LIMIT 3", i * 1.0 - 0.01))); - auto expected = Format("$0; $1; $2", i, i - 1, i + 1); - if (rows != expected) { - LOG(INFO) << "Bad result: " << rows << " vs " << expected; - ++num_bad_results; - } - } - // Expect recall 98% or better. 
- ASSERT_LE(num_bad_results, counter.load() / 50); - } -}; - -TEST_P(PgIndexBackfillTest, VectorIndex) { - ASSERT_OK(conn_->Execute("CREATE EXTENSION vector")); - ASSERT_OK(conn_->ExecuteFormat( - "CREATE TABLE test (id INT PRIMARY KEY, embedding vector(1))")); - TestThreadHolder thread_holder; - VectorIndexWriter writer; - for (int i = 0; i != 8; ++i) { - thread_holder.AddThreadFunctor( - [this, &stop_flag = thread_holder.stop_flag(), &writer] { - bool done = false; - auto se = ScopeExit([&done, &writer] { - if (!done) { - writer.failure = true; - } - }); - auto conn = ASSERT_RESULT(Connect()); - while (!stop_flag.load()) { - ASSERT_NO_FATALS(writer.Perform(conn)); - } - done = true; - }); - } - writer.WaitWritten(32); - LOG(INFO) << "Started to create index"; - // TODO(vector_index) Switch to using CONCURRENT index creation when it will be ready. - ASSERT_OK(conn_->Execute( - "CREATE INDEX ON test USING ybhnsw (embedding vector_l2_ops)")); - LOG(INFO) << "Finished to create index"; - writer.WaitWritten(32); - thread_holder.Stop(); - LOG(INFO) << "Max time without inserts: " << writer.max_time_without_inserts; - ASSERT_LT(writer.max_time_without_inserts, 1s * kBackfillSleepSec); - SCOPED_TRACE(Format("Total rows: $0", writer.counter.load())); - - // VerifyVectorIndexes does not take intents into account, so could produce false failure. - ASSERT_OK(cluster_->WaitForAllIntentsApplied(30s * kTimeMultiplier)); - - for (size_t i = 0; i != cluster_->num_tablet_servers(); ++i) { - tserver::VerifyVectorIndexesRequestPB req; - tserver::VerifyVectorIndexesResponsePB resp; - rpc::RpcController controller; - controller.set_timeout(30s); - auto proxy = cluster_->tablet_server(i)->Proxy(); - ASSERT_OK(proxy->VerifyVectorIndexes(req, &resp, &controller)); - ASSERT_FALSE(resp.has_error()) << resp.ShortDebugString(); - } - writer.Verify(*conn_); -} - class PgSerializeBackfillTest : public PgIndexBackfillTest { public: void UpdateMiniClusterOptions(ExternalMiniClusterOptions* options) override { diff --git a/src/yb/yql/pgwrapper/pg_packed_row-test.cc b/src/yb/yql/pgwrapper/pg_packed_row-test.cc index 2abe99d4605c..151358727c2a 100644 --- a/src/yb/yql/pgwrapper/pg_packed_row-test.cc +++ b/src/yb/yql/pgwrapper/pg_packed_row-test.cc @@ -309,7 +309,7 @@ TEST_P(PgPackedRowTest, Random) { continue; } std::unordered_set values; - peer->tablet()->TEST_DocDBDumpToContainer(tablet::IncludeIntents::kTrue, &values); + peer->tablet()->TEST_DocDBDumpToContainer(docdb::IncludeIntents::kTrue, &values); std::vector sorted_values(values.begin(), values.end()); std::sort(sorted_values.begin(), sorted_values.end()); for (const auto& line : sorted_values) { @@ -805,7 +805,7 @@ TEST_P(PgPackedRowTest, CleanupIntentDocHt) { if (!peer->tablet()->regular_db()) { continue; } - auto dump = peer->tablet()->TEST_DocDBDumpStr(tablet::IncludeIntents::kTrue); + auto dump = peer->tablet()->TEST_DocDBDumpStr(docdb::IncludeIntents::kTrue); LOG(INFO) << "Dump: " << dump; ASSERT_EQ(dump.find("intent doc ht"), std::string::npos); } diff --git a/src/yb/yql/pgwrapper/pg_server_restart-test.cc b/src/yb/yql/pgwrapper/pg_server_restart-test.cc index c045d2f944da..7b2932ff39e9 100644 --- a/src/yb/yql/pgwrapper/pg_server_restart-test.cc +++ b/src/yb/yql/pgwrapper/pg_server_restart-test.cc @@ -64,8 +64,7 @@ TEST_F(PgSingleServerRestartTest, GetSafeTimeBeforeConsensusStarted) { ASSERT_OK(itest::WaitForServersToAgree(10s, ts_map, tablet_id, /* minimum_index = */ 4)); SleepFor(1s); - ASSERT_OK(cluster_->FlushTabletsOnSingleTServer( - 
cluster_->tablet_server(0), {tablet_id}, tserver::FlushTabletsRequestPB::FLUSH)); + ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(0, {tablet_id})); leader->Shutdown(SafeShutdown::kFalse); ASSERT_OK(cluster_->WaitForTSToCrash(leader, 10s)); diff --git a/src/yb/yql/pgwrapper/pg_vector_index-itest.cc b/src/yb/yql/pgwrapper/pg_vector_index-itest.cc new file mode 100644 index 000000000000..dc880858e4b4 --- /dev/null +++ b/src/yb/yql/pgwrapper/pg_vector_index-itest.cc @@ -0,0 +1,257 @@ +// Copyright (c) YugabyteDB, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except +// in compliance with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software distributed under the License +// is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express +// or implied. See the License for the specific language governing permissions and limitations +// under the License. +// + +#include "yb/tserver/tserver_service.proxy.h" + +#include "yb/util/backoff_waiter.h" +#include "yb/util/scope_exit.h" +#include "yb/util/test_thread_holder.h" + +#include "yb/yql/pgwrapper/libpq_test_base.h" + +using namespace std::literals; + +namespace yb::pgwrapper { + +constexpr auto kBackfillSleepSec = 10 * kTimeMultiplier; + +class PgVectorIndexITest : public LibPqTestBase { + public: + Result ConnectAndInit(std::optional num_tablets = std::nullopt) { + auto conn = VERIFY_RESULT(Connect()); + RETURN_NOT_OK(conn.Execute("CREATE EXTENSION vector")); + std::string stmt = "CREATE TABLE test (id INT PRIMARY KEY, embedding vector(1))"; + if (num_tablets) { + stmt += Format(" SPLIT INTO $0 TABLETS", *num_tablets); + } + RETURN_NOT_OK(conn.ExecuteFormat(stmt)); + return conn; + } + + Status CreateIndex(PGConn& conn) { + // TODO(vector_index) Switch to using CONCURRENT index creation when it will be ready. + return conn.Execute( + "CREATE INDEX ON test USING ybhnsw (embedding vector_l2_ops)"); + } +}; + +struct VectorIndexWriter { + static constexpr int kBig = 100000000; + + std::atomic counter = 0; + std::atomic extra_values_counter = kBig * 2; + std::atomic last_write; + std::atomic max_time_without_inserts = MonoDelta::FromNanoseconds(0); + std::atomic failure = false; + + void Perform(PGConn& conn) { + std::vector values; + for (int i = RandomUniformInt(3, 6); i > 0; --i) { + values.push_back(++counter); + } + size_t keep_values = values.size(); + for (int i = RandomUniformInt(0, 2); i > 0; --i) { + values.push_back(++extra_values_counter); + } + bool use_2_steps = RandomUniformBool(); + + int offset = use_2_steps ? 
kBig : 0; + ASSERT_NO_FATALS(Insert(conn, values, offset)); + if (use_2_steps || keep_values != values.size()) { + ASSERT_NO_FATALS(UpdateAndDelete(conn, values, keep_values)); + } + } + + void Insert(PGConn& conn, const std::vector& values, int offset) { + for (;;) { + ASSERT_OK(conn.StartTransaction(IsolationLevel::SNAPSHOT_ISOLATION)); + bool failed = false; + for (auto value : values) { + auto res = conn.ExecuteFormat( + "INSERT INTO test VALUES ($0, '[$1.0]')", value, value + offset); + if (!res.ok()) { + ASSERT_OK(conn.RollbackTransaction()); + LOG(INFO) << "Insert " << value << " failed: " << res; + ASSERT_STR_CONTAINS(res.message().ToBuffer(), "schema version mismatch"); + failed = true; + break; + } + } + if (!failed) { + ASSERT_OK(conn.CommitTransaction()); + auto now = CoarseMonoClock::Now(); + auto prev_last_write = last_write.exchange(now); + if (prev_last_write != CoarseTimePoint()) { + MonoDelta new_value(now - prev_last_write); + if (MakeAtLeast(max_time_without_inserts, new_value)) { + LOG(INFO) << "Update max time without inserts: " << new_value; + } + } + std::this_thread::sleep_for(100ms); + break; + } + } + } + + void UpdateAndDelete(PGConn& conn, const std::vector& values, size_t keep_values) { + for (;;) { + ASSERT_OK(conn.StartTransaction(IsolationLevel::SNAPSHOT_ISOLATION)); + bool failed = false; + for (size_t i = 0; i != values.size(); ++i) { + auto value = values[i]; + Status res; + if (i < keep_values) { + res = conn.ExecuteFormat( + "UPDATE test SET embedding = '[$0.0]' WHERE id = $0", value); + } else { + res = conn.ExecuteFormat("DELETE FROM test WHERE id = $0", value); + } + if (!res.ok()) { + ASSERT_OK(conn.RollbackTransaction()); + LOG(INFO) << + (i < keep_values ? "Update " : "Delete " ) << value << " failed: " << res; + ASSERT_STR_CONTAINS(res.message().ToBuffer(), "schema version mismatch"); + failed = true; + break; + } + } + if (!failed) { + ASSERT_OK(conn.CommitTransaction()); + std::this_thread::sleep_for(100ms); + break; + } + } + } + + void WaitWritten(int num_rows) { + auto limit = counter.load() + num_rows; + while (counter.load() < limit && !failure) { + std::this_thread::sleep_for(10ms); + } + } + + void Verify(PGConn& conn) { + int num_bad_results = 0; + for (int i = 2; i < counter.load(); ++i) { + auto rows = ASSERT_RESULT(conn.FetchAllAsString(Format( + "SELECT id FROM test ORDER BY embedding <-> '[$0]' LIMIT 3", i * 1.0 - 0.01))); + auto expected = Format("$0; $1; $2", i, i - 1, i + 1); + if (rows != expected) { + LOG(INFO) << "Bad result: " << rows << " vs " << expected; + ++num_bad_results; + } + } + // Expect recall 98% or better. 
+ ASSERT_LE(num_bad_results, counter.load() / 50); + } +}; + +class PgVectorIndexBackfillITest : public PgVectorIndexITest { + public: + void UpdateMiniClusterOptions(ExternalMiniClusterOptions* options) override { + PgVectorIndexITest::UpdateMiniClusterOptions(options); + options->extra_master_flags.push_back("--ysql_disable_index_backfill=false"); + options->extra_tserver_flags.push_back( + Format("--TEST_sleep_before_vector_index_backfill_seconds=$0", kBackfillSleepSec)); + } +}; + +TEST_F_EX(PgVectorIndexITest, Backfill, PgVectorIndexBackfillITest) { + auto conn = ASSERT_RESULT(ConnectAndInit()); + TestThreadHolder thread_holder; + VectorIndexWriter writer; + for (int i = 0; i != 8; ++i) { + thread_holder.AddThreadFunctor( + [this, &stop_flag = thread_holder.stop_flag(), &writer] { + auto se = CancelableScopeExit([&writer] { + writer.failure = true; + }); + auto conn = ASSERT_RESULT(Connect()); + while (!stop_flag.load()) { + ASSERT_NO_FATALS(writer.Perform(conn)); + } + se.Cancel(); + }); + } + writer.WaitWritten(32); + LOG(INFO) << "Started to create index"; + ASSERT_OK(CreateIndex(conn)); + LOG(INFO) << "Finished to create index"; + writer.WaitWritten(32); + thread_holder.Stop(); + LOG(INFO) << "Max time without inserts: " << writer.max_time_without_inserts; + ASSERT_LT(writer.max_time_without_inserts, 1s * kBackfillSleepSec); + SCOPED_TRACE(Format("Total rows: $0", writer.counter.load())); + + // VerifyVectorIndexes does not take intents into account, so could produce false failure. + ASSERT_OK(cluster_->WaitForAllIntentsApplied(30s * kTimeMultiplier)); + + for (size_t i = 0; i != cluster_->num_tablet_servers(); ++i) { + tserver::VerifyVectorIndexesRequestPB req; + tserver::VerifyVectorIndexesResponsePB resp; + rpc::RpcController controller; + controller.set_timeout(30s); + auto proxy = cluster_->tablet_server(i)->Proxy(); + ASSERT_OK(proxy->VerifyVectorIndexes(req, &resp, &controller)); + ASSERT_FALSE(resp.has_error()) << resp.ShortDebugString(); + } + writer.Verify(conn); +} + +class PgVectorIndexRBSITest : public PgVectorIndexITest { + public: + void UpdateMiniClusterOptions(ExternalMiniClusterOptions* options) override { + options->extra_tserver_flags.push_back("--log_min_seconds_to_retain=0"); + options->extra_tserver_flags.push_back("--xcluster_checkpoint_max_staleness_secs=0"); + } +}; + +TEST_F_EX(PgVectorIndexITest, CrashAfterRBSDownload, PgVectorIndexRBSITest) { + constexpr size_t kNumRows = 5; + constexpr size_t kTsIndex = 2; + + auto conn = ASSERT_RESULT(ConnectAndInit(1)); + ASSERT_OK(CreateIndex(conn)); + auto* lagging_ts = cluster_->tablet_server(kTsIndex); + lagging_ts->Shutdown(); + + auto tablets = ASSERT_RESULT(cluster_->ListTablets(cluster_->tablet_server(0))); + ASSERT_EQ(tablets.status_and_schema().size(), 1); + auto tablet_id = tablets.status_and_schema()[0].tablet_status().tablet_id(); + + for (size_t i = 1; i <= kNumRows; ++i) { + ASSERT_OK(conn.ExecuteFormat("INSERT INTO test VALUES ($0, '[$0.0]')", i)); + for (size_t j = 0; j != 2; ++j) { + ASSERT_OK(cluster_->FlushTabletsOnSingleTServer(j, {tablet_id})); + ASSERT_OK(cluster_->LogGCOnSingleTServer(j, {tablet_id}, true)); + } + } + + ASSERT_OK(lagging_ts->Start()); + + auto good_tablets = ASSERT_RESULT(cluster_->ListTablets(cluster_->tablet_server(0))); + LOG(INFO) << "Good tablets: " << AsString(good_tablets); + ASSERT_EQ(good_tablets.status_and_schema().size(), 1); + ASSERT_OK(WaitFor([this, &good_tablets, lagging_ts]() -> Result { + auto lagging_tablets = 
VERIFY_RESULT(cluster_->ListTablets(lagging_ts)); + LOG(INFO) << "Lagging tablets: " << AsString(lagging_tablets); + if (lagging_tablets.status_and_schema().size() != 1) { + return false; + } + EXPECT_EQ(lagging_tablets.status_and_schema().size(), 1); + return lagging_tablets.status_and_schema()[0].tablet_status().last_op_id().index() >= + good_tablets.status_and_schema()[0].tablet_status().last_op_id().index(); + }, 5s, "Wait lagging TS to catch up")); +} + +} // namespace yb::pgwrapper From 95a58aeecb06cdbc2f670c837c4768eb649bac96 Mon Sep 17 00:00:00 2001 From: Gaurav Kukreja Date: Fri, 25 Apr 2025 13:56:22 +0200 Subject: [PATCH 100/146] [#26991] YSQL: Improve cost modelling for backward index scans Summary: This change improves the cost modeling of backward index scans. Previously, we used a heuristic cost factor to model the overhead of backward index scans; now we estimate the seeks and nexts that a backward scan will perform. The estimate also depends on whether the `use_fast_backward_scan` gflag is enabled or disabled. This flag is currently disabled by default in 2024.2 but enabled in later releases. In the old implementation of backward index scans, we could only build tuples from DocDB key-value pairs by reading from the top. This meant we needed to `seek` to the first key-value pair of a tuple and then perform a series of `next` operations to read all of its key-value pairs. We then had to seek back to the first key-value pair and perform a `prev` to move to the last key-value pair of the previous tuple. As a result, for each tuple we must do 2 seeks, as many nexts as the number of key-value pairs in the tuple, and 1 prev. Currently, we cannot predict the number of key-value pairs per tuple, so we assume a single next operation will be needed for each tuple; this may be improved in the future. There is additional nuance to this algorithm that has not been modeled in the cost model: we may not perform the nexts if we find a "tombstone" for the tuple, marking the tuple as deleted. This is also currently not modeled, for the sake of simplicity. In the new implementation of backward index scans, we build the tuples by reading the key-value pairs in reverse order using `prev` operations, which avoids the expensive seeks. Currently, we do not have a separate counter for `prev`, so prevs are counted as `nexts`. Once again, there is nuance here that has not been modeled in the cost model.
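(Illustration only, not part of this patch.) The following is a minimal C++ sketch of the per-tuple operation counts described above. It assumes the estimator already knows the number of matching tuples, uses the stated assumption of one key-value pair per tuple, and ignores the seeks and nexts spent evaluating the index conditions themselves; the struct and function names are made up for the example.

#include <iostream>

struct BackwardScanOps {
  double seeks = 0;
  double nexts_and_prevs = 0;  // prev operations are counted together with nexts
};

// Estimated DocDB operations to read num_tuples tuples backwards, with
// kv_pairs_per_tuple key-value pairs per tuple (assumed to be 1 above).
BackwardScanOps EstimateBackwardScanOps(double num_tuples,
                                        double kv_pairs_per_tuple,
                                        bool use_fast_backward_scan) {
  BackwardScanOps ops;
  if (use_fast_backward_scan) {
    // Fast path: tuples are assembled by iterating in reverse, so roughly one
    // prev per key-value pair and no extra per-tuple seeks.
    ops.nexts_and_prevs = num_tuples * kv_pairs_per_tuple;
  } else {
    // Old algorithm: per tuple, seek to its first key-value pair, next through
    // its key-value pairs, seek back to the first pair, then prev to reach the
    // previous tuple, i.e. 2 seeks + N nexts + 1 prev per tuple.
    ops.seeks = 2 * num_tuples;
    ops.nexts_and_prevs = num_tuples * (kv_pairs_per_tuple + 1);
  }
  return ops;
}

int main() {
  // 90000 tuples matches the 300 x 300 row table used in the new Java tests.
  auto slow = EstimateBackwardScanOps(90000, 1, false);
  auto fast = EstimateBackwardScanOps(90000, 1, true);
  std::cout << "without fast backward scan: " << slow.seeks << " seeks, "
            << slow.nexts_and_prevs << " nexts/prevs\n";
  std::cout << "with fast backward scan:    " << fast.seeks << " seeks, "
            << fast.nexts_and_prevs << " nexts/prevs\n";
  return 0;
}

These figures line up with the estimates checked in the new TestPgBackwardIndexScan (180000 seeks and 180000 nexts for a full backward scan) and TestPgFastBackwardIndexScan (90000 nexts/prevs) tests; the small number of seeks expected in the fast case comes from paging, which this sketch does not model.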
Jira: DB-16466 Test Plan: ./yb_build.sh --java-test 'org.yb.pgsql.TestPgFastBackwardIndexScan' ./yb_build.sh --java-test 'org.yb.pgsql.TestPgBackwardIndexScan' Reviewers: mihnea, arybochkin, mtakahara Reviewed By: mtakahara Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D43513 --- .../org/yb/pgsql/ExplainAnalyzeUtils.java | 5 +- .../org/yb/pgsql/TestPgBackwardIndexScan.java | 214 +++++++++++++++++ .../TestPgCostModelSeekNextEstimation.java | 20 +- .../yb/pgsql/TestPgFastBackwardIndexScan.java | 216 ++++++++++++++++++ src/postgres/src/backend/commands/explain.c | 4 +- .../src/backend/optimizer/path/costsize.c | 59 +++-- src/postgres/src/include/nodes/pathnodes.h | 2 +- src/postgres/src/include/optimizer/cost.h | 10 +- 8 files changed, 484 insertions(+), 46 deletions(-) create mode 100644 java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgBackwardIndexScan.java create mode 100644 java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgFastBackwardIndexScan.java diff --git a/java/yb-pgsql/src/test/java/org/yb/pgsql/ExplainAnalyzeUtils.java b/java/yb-pgsql/src/test/java/org/yb/pgsql/ExplainAnalyzeUtils.java index 3d673b36fcdc..8fa6e5712b51 100644 --- a/java/yb-pgsql/src/test/java/org/yb/pgsql/ExplainAnalyzeUtils.java +++ b/java/yb-pgsql/src/test/java/org/yb/pgsql/ExplainAnalyzeUtils.java @@ -35,6 +35,7 @@ public class ExplainAnalyzeUtils { public static final String NODE_HASH_JOIN = "Hash Join"; public static final String NODE_INDEX_ONLY_SCAN = "Index Only Scan"; public static final String NODE_INDEX_SCAN = "Index Scan"; + public static final String NODE_INDEX_SCAN_BACKWARD = "Index Scan Backward"; public static final String NODE_LIMIT = "Limit"; public static final String NODE_MERGE_JOIN = "Merge Join"; public static final String NODE_MODIFY_TABLE = "ModifyTable"; @@ -45,6 +46,7 @@ public class ExplainAnalyzeUtils { public static final String NODE_VALUES_SCAN = "Values Scan"; public static final String NODE_YB_BITMAP_TABLE_SCAN = "YB Bitmap Table Scan"; public static final String NODE_YB_BATCHED_NESTED_LOOP = "YB Batched Nested Loop"; + public static final String INDEX_SCAN_DIRECTION_BACKWARD = "Backward"; public static final String PLAN = "Plan"; @@ -74,6 +76,7 @@ public interface PlanCheckerBuilder extends ObjectCheckerBuilder { PlanCheckerBuilder alias(String value); PlanCheckerBuilder indexName(String value); PlanCheckerBuilder nodeType(String value); + PlanCheckerBuilder scanDirection(String value); PlanCheckerBuilder operation(String value); PlanCheckerBuilder planRows(ValueChecker checker); PlanCheckerBuilder plans(Checker... 
checker); @@ -112,7 +115,7 @@ public interface PlanCheckerBuilder extends ObjectCheckerBuilder { // Seek and Next Estimation PlanCheckerBuilder estimatedSeeks(ValueChecker checker); - PlanCheckerBuilder estimatedNexts(ValueChecker checker); + PlanCheckerBuilder estimatedNextsAndPrevs(ValueChecker checker); // Estimated Docdb Result Width PlanCheckerBuilder estimatedDocdbResultWidth(ValueChecker checker); diff --git a/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgBackwardIndexScan.java b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgBackwardIndexScan.java new file mode 100644 index 000000000000..d3c53448b4c7 --- /dev/null +++ b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgBackwardIndexScan.java @@ -0,0 +1,214 @@ +package org.yb.pgsql; + +import java.sql.Connection; +import java.sql.Statement; +import java.util.Map; + +import org.junit.After; +import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.yb.YBTestRunner; +import org.yb.pgsql.ExplainAnalyzeUtils.PlanCheckerBuilder; +import org.yb.pgsql.ExplainAnalyzeUtils.TopLevelCheckerBuilder; +import org.yb.util.json.Checkers; +import org.yb.util.json.JsonUtil; +import org.yb.util.json.ValueChecker; + +import static org.yb.pgsql.ExplainAnalyzeUtils.NODE_INDEX_ONLY_SCAN; +import static org.yb.pgsql.ExplainAnalyzeUtils.NODE_LIMIT; +import static org.yb.pgsql.ExplainAnalyzeUtils.INDEX_SCAN_DIRECTION_BACKWARD; +import static org.yb.pgsql.ExplainAnalyzeUtils.testExplainDebug; + +@RunWith(value=YBTestRunner.class) +public class TestPgBackwardIndexScan extends BasePgSQLTest { + + private static final double SEEK_FAULT_TOLERANCE_OFFSET = 1; + private static final double SEEK_FAULT_TOLERANCE_RATE = 0.2; + private static final double SEEK_LOWER_BOUND_FACTOR = 1 - SEEK_FAULT_TOLERANCE_RATE; + private static final double SEEK_UPPER_BOUND_FACTOR = 1 + SEEK_FAULT_TOLERANCE_RATE; + private static final double NEXT_FAULT_TOLERANCE_OFFSET = 2; + private static final double NEXT_FAULT_TOLERANCE_RATE = 0.5; + private static final double NEXT_LOWER_BOUND_FACTOR = 1 - NEXT_FAULT_TOLERANCE_RATE; + private static final double NEXT_UPPER_BOUND_FACTOR = 1 + NEXT_FAULT_TOLERANCE_RATE; + + private static final Logger LOG = + LoggerFactory.getLogger(TestPgBackwardIndexScan.class); + + private static TopLevelCheckerBuilder makeTopLevelBuilder() { + return JsonUtil.makeCheckerBuilder(TopLevelCheckerBuilder.class, false); + } + + private static PlanCheckerBuilder makePlanBuilder() { + return JsonUtil.makeCheckerBuilder(PlanCheckerBuilder.class, false); + } + + @Before + public void setUp() throws Exception { + try (Statement stmt = connection.createStatement()) { + stmt.execute("SET yb_enable_base_scans_cost_model = true"); + } + } + + @After + public void tearDown() throws Exception { + try (Statement stmt = connection.createStatement()) { + stmt.execute("DROP TABLE IF EXISTS test"); + } + } + + @Override + protected Map getTServerFlags() { + Map flagMap = super.getTServerFlags(); + flagMap.put("use_fast_backward_scan", "false"); + flagMap.put("ysql_enable_packed_row_for_colocated_table", "true"); + return flagMap; + } + + @Override + protected Map getMasterFlags() { + Map flagMap = super.getMasterFlags(); + flagMap.put("use_fast_backward_scan", "false"); + flagMap.put("ysql_enable_packed_row_for_colocated_table", "true"); + return flagMap; + } + + private ValueChecker expectedSeeksRange(double expected_seeks) { + double expected_lower_bound = expected_seeks * 
SEEK_LOWER_BOUND_FACTOR + - SEEK_FAULT_TOLERANCE_OFFSET; + double expected_upper_bound = expected_seeks * SEEK_UPPER_BOUND_FACTOR + + SEEK_FAULT_TOLERANCE_OFFSET; + return Checkers.closed(expected_lower_bound, expected_upper_bound); + } + + private ValueChecker expectedNextsRange(double expected_nexts) { + double expected_lower_bound = expected_nexts * NEXT_LOWER_BOUND_FACTOR + - NEXT_FAULT_TOLERANCE_OFFSET; + double expected_upper_bound = expected_nexts * NEXT_UPPER_BOUND_FACTOR + + NEXT_FAULT_TOLERANCE_OFFSET; + return Checkers.closed(expected_lower_bound, expected_upper_bound); + } + + private void testSeekAndNextEstimationIndexOnlyScanBackwardHelper( + Statement stmt, String query, + String table_name, String index_name, + double expected_seeks, + double expected_nexts) throws Exception { + try { + testExplainDebug(stmt, + String.format("/*+ Set(enable_sort off) IndexOnlyScan(%s %s) */ %s", + table_name, index_name, query), + makeTopLevelBuilder() + .plan(makePlanBuilder() + .nodeType(NODE_INDEX_ONLY_SCAN) + .scanDirection(INDEX_SCAN_DIRECTION_BACKWARD) + .relationName(table_name) + .indexName(index_name) + .estimatedSeeks(expectedSeeksRange(expected_seeks)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) + .build()) + .build()); + } + catch (AssertionError e) { + LOG.info("Failed Query: " + query); + LOG.info(e.toString()); + throw e; + } + } + + private void testSeekAndNextEstimationLimitIndexOnlyScanBackwardHelper( + Statement stmt, String query, + String table_name, String index_name, + double expected_seeks, + double expected_nexts) throws Exception { + try { + testExplainDebug(stmt, + String.format("/*+ IndexOnlyScan(%s %s) */ %s", table_name, index_name, query), + makeTopLevelBuilder() + .plan(makePlanBuilder() + .nodeType(NODE_LIMIT) + .plans(makePlanBuilder() + .nodeType(NODE_INDEX_ONLY_SCAN) + .scanDirection(INDEX_SCAN_DIRECTION_BACKWARD) + .relationName(table_name) + .indexName(index_name) + .estimatedSeeks(expectedSeeksRange(expected_seeks)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) + .build()) + .build()) + .build()); + } + catch (AssertionError e) { + LOG.info("Failed Query: " + query); + LOG.info(e.toString()); + throw e; + } + } + + @Test + public void testSeekNextEstimationIndexScan() throws Exception { + setConnMgrWarmupModeAndRestartCluster(ConnectionManagerWarmupMode.ROUND_ROBIN); + boolean isConnMgr = isTestRunningWithConnectionManager(); + if (isConnMgr) { + setUp(); + } + + try (Statement stmt = connection.createStatement()) { + stmt.execute("CREATE TABLE t1 (k1 INT, k2 INT, v1 INT)"); + stmt.execute("INSERT INTO t1 SELECT s1, s2, s2 FROM " + + "generate_series(1, 300) s1, generate_series(1, 300) s2"); + + stmt.execute("CREATE INDEX t1_idx_1 ON t1 (k1 ASC, k2 ASC)"); + stmt.execute("ANALYZE t1"); + + testSeekAndNextEstimationLimitIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 ORDER BY k1 DESC LIMIT 5000", + "t1", "t1_idx_1", 180000, 180000); + testSeekAndNextEstimationLimitIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 ORDER BY k1 DESC LIMIT 10000", + "t1", "t1_idx_1", 180000, 180000); + + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k1 < 10 ORDER BY k1 DESC", + "t1", "t1_idx_1", 4800, 4800); + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k1 > 290 ORDER BY k1 DESC", + "t1", "t1_idx_1", 6419, 6419); + + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k2 < 10 ORDER BY k1 DESC", 
+ "t1", "t1_idx_1", 4805, 5251); + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k2 > 290 ORDER BY k1 DESC", + "t1", "t1_idx_1", 6773, 6835); + + stmt.execute("DROP INDEX t1_idx_1"); + + stmt.execute("CREATE INDEX t1_idx_2 ON t1 (k1 DESC, k2 ASC)"); + stmt.execute("ANALYZE t1"); + + testSeekAndNextEstimationLimitIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 ORDER BY k1 ASC LIMIT 5000", + "t1", "t1_idx_2", 180000, 180000); + testSeekAndNextEstimationLimitIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 ORDER BY k1 ASC LIMIT 10000", + "t1", "t1_idx_2", 180000, 180000); + + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k1 < 10 ORDER BY k1 ASC", + "t1", "t1_idx_2", 4800, 4800); + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k1 > 290 ORDER BY k1 ASC", + "t1", "t1_idx_2", 6419, 6419); + + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k2 < 10 ORDER BY k1 ASC", + "t1", "t1_idx_2", 4805, 5251); + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k2 > 290 ORDER BY k1 ASC", + "t1", "t1_idx_2", 6773, 6835); + } + } +} diff --git a/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgCostModelSeekNextEstimation.java b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgCostModelSeekNextEstimation.java index 718cb95e2036..db9126c4dab8 100644 --- a/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgCostModelSeekNextEstimation.java +++ b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgCostModelSeekNextEstimation.java @@ -143,7 +143,7 @@ private void testSeekAndNextEstimationIndexScanHelper( .relationName(table_name) .indexName(index_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) - .estimatedNexts(expectedNextsRange(expected_nexts)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) .estimatedDocdbResultWidth(Checkers.equal(expected_docdb_result_width)) .metric(METRIC_NUM_DB_SEEK, expectedSeeksRange(expected_seeks)) .metric(METRIC_NUM_DB_NEXT, expectedNextsRange(expected_nexts)) @@ -165,7 +165,7 @@ private ObjectChecker makeBitmapIndexScanChecker( .nodeType(NODE_BITMAP_INDEX_SCAN) .indexName(index_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) - .estimatedNexts(expectedNextsRange(expected_nexts)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) .metric(METRIC_NUM_DB_SEEK, expectedSeeksRange(expected_seeks)) .metric(METRIC_NUM_DB_NEXT, expectedNextsRange(expected_nexts)) .build(); @@ -179,7 +179,7 @@ private ObjectChecker makeBitmapIndexScanChecker_IgnoreActualResults( .nodeType(NODE_BITMAP_INDEX_SCAN) .indexName(index_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) - .estimatedNexts(expectedNextsRange(expected_nexts)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) .build(); } @@ -197,7 +197,7 @@ private void testSeekAndNextEstimationBitmapScanHelper( .nodeType(NODE_YB_BITMAP_TABLE_SCAN) .relationName(table_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) - .estimatedNexts(expectedNextsRange(expected_nexts)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) .estimatedDocdbResultWidth(Checkers.equal(expected_docdb_result_width)) .metric(METRIC_NUM_DB_SEEK, expectedSeeksRange(expected_seeks)) .metric(METRIC_NUM_DB_NEXT, expectedNextsRange(expected_nexts)) @@ -226,7 +226,7 @@ private void testSeekAndNextEstimationBitmapScanHelper_IgnoreActualResults( .nodeType(NODE_YB_BITMAP_TABLE_SCAN) 
.relationName(table_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) - .estimatedNexts(expectedNextsRange(expected_nexts)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) .estimatedDocdbResultWidth(Checkers.equal(expected_docdb_result_width)) .plans(bitmap_index_checker) .build()) @@ -254,7 +254,7 @@ private void testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults( .relationName(table_name) .indexName(index_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) - .estimatedNexts(expectedNextsRange(expected_nexts)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) .estimatedDocdbResultWidth(Checkers.equal(expected_docdb_result_width)) .build()) .build()); @@ -278,7 +278,7 @@ private void testSeekAndNextEstimationSeqScanHelper( .nodeType(NODE_SEQ_SCAN) .relationName(table_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) - .estimatedNexts(expectedNextsRange(expected_nexts)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) .estimatedDocdbResultWidth(Checkers.equal(expected_docdb_result_width)) .metric(METRIC_NUM_DB_SEEK, expectedSeeksRange(expected_seeks)) .metric(METRIC_NUM_DB_NEXT, expectedNextsRange(expected_nexts)) @@ -304,7 +304,7 @@ private void testSeekAndNextEstimationSeqScanHelper_IgnoreActualResults( .nodeType(NODE_SEQ_SCAN) .relationName(table_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) - .estimatedNexts(expectedNextsRange(expected_nexts)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) .estimatedDocdbResultWidth(Checkers.equal(expected_docdb_result_width)) .build()) .build()); @@ -333,13 +333,13 @@ private void testSeekAndNextEstimationJoinHelper_IgnoreActualResults( .relationName(outer_table_name) .nodeType(outer_table_scan_type) .estimatedSeeks(expectedSeeksRange(outer_expected_seeks)) - .estimatedNexts(expectedNextsRange(outer_expected_nexts)) + .estimatedNextsAndPrevs(expectedNextsRange(outer_expected_nexts)) .build(), makePlanBuilder() .relationName(inner_table_name) .nodeType(inner_table_scan_type) .estimatedSeeks(expectedSeeksRange(inner_expected_seeks)) - .estimatedNexts(expectedNextsRange(inner_expected_nexts)) + .estimatedNextsAndPrevs(expectedNextsRange(inner_expected_nexts)) .build()) .build()) .build()); diff --git a/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgFastBackwardIndexScan.java b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgFastBackwardIndexScan.java new file mode 100644 index 000000000000..13ff5511f257 --- /dev/null +++ b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgFastBackwardIndexScan.java @@ -0,0 +1,216 @@ +package org.yb.pgsql; + +import java.sql.Connection; +import java.sql.Statement; +import java.util.Map; + +import org.junit.After; +import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.yb.YBTestRunner; +import org.yb.pgsql.ExplainAnalyzeUtils.PlanCheckerBuilder; +import org.yb.pgsql.ExplainAnalyzeUtils.TopLevelCheckerBuilder; +import org.yb.util.json.Checkers; +import org.yb.util.json.JsonUtil; +import org.yb.util.json.ValueChecker; + +import static org.yb.pgsql.ExplainAnalyzeUtils.NODE_INDEX_ONLY_SCAN; +import static org.yb.pgsql.ExplainAnalyzeUtils.NODE_LIMIT; +import static org.yb.pgsql.ExplainAnalyzeUtils.INDEX_SCAN_DIRECTION_BACKWARD; +import static org.yb.pgsql.ExplainAnalyzeUtils.testExplainDebug; + +@RunWith(value=YBTestRunner.class) +public class TestPgFastBackwardIndexScan extends BasePgSQLTest { + + private static final double 
SEEK_FAULT_TOLERANCE_OFFSET = 1; + private static final double SEEK_FAULT_TOLERANCE_RATE = 0.2; + private static final double SEEK_LOWER_BOUND_FACTOR = 1 - SEEK_FAULT_TOLERANCE_RATE; + private static final double SEEK_UPPER_BOUND_FACTOR = 1 + SEEK_FAULT_TOLERANCE_RATE; + private static final double NEXT_FAULT_TOLERANCE_OFFSET = 2; + private static final double NEXT_FAULT_TOLERANCE_RATE = 0.5; + private static final double NEXT_LOWER_BOUND_FACTOR = 1 - NEXT_FAULT_TOLERANCE_RATE; + private static final double NEXT_UPPER_BOUND_FACTOR = 1 + NEXT_FAULT_TOLERANCE_RATE; + + private static final Logger LOG = + LoggerFactory.getLogger(TestPgFastBackwardIndexScan.class); + + private static TopLevelCheckerBuilder makeTopLevelBuilder() { + return JsonUtil.makeCheckerBuilder(TopLevelCheckerBuilder.class, false); + } + + private static PlanCheckerBuilder makePlanBuilder() { + return JsonUtil.makeCheckerBuilder(PlanCheckerBuilder.class, false); + } + + @Before + public void setUp() throws Exception { + try (Statement stmt = connection.createStatement()) { + stmt.execute("SET yb_enable_base_scans_cost_model = true"); + } + } + + @After + public void tearDown() throws Exception { + try (Statement stmt = connection.createStatement()) { + stmt.execute("DROP TABLE IF EXISTS test"); + } + } + + @Override + protected Map getTServerFlags() { + Map flagMap = super.getTServerFlags(); + flagMap.put("use_fast_backward_scan", "true"); + flagMap.put("ysql_enable_packed_row_for_colocated_table", "true"); + return flagMap; + } + + @Override + protected Map getMasterFlags() { + Map flagMap = super.getMasterFlags(); + flagMap.put("use_fast_backward_scan", "true"); + flagMap.put("ysql_enable_packed_row_for_colocated_table", "true"); + return flagMap; + } + + private ValueChecker expectedSeeksRange(double expected_seeks) { + double expected_lower_bound = expected_seeks * SEEK_LOWER_BOUND_FACTOR + - SEEK_FAULT_TOLERANCE_OFFSET; + double expected_upper_bound = expected_seeks * SEEK_UPPER_BOUND_FACTOR + + SEEK_FAULT_TOLERANCE_OFFSET; + return Checkers.closed(expected_lower_bound, expected_upper_bound); + } + + private ValueChecker expectedNextsRange(double expected_nexts) { + double expected_lower_bound = expected_nexts * NEXT_LOWER_BOUND_FACTOR + - NEXT_FAULT_TOLERANCE_OFFSET; + double expected_upper_bound = expected_nexts * NEXT_UPPER_BOUND_FACTOR + + NEXT_FAULT_TOLERANCE_OFFSET; + return Checkers.closed(expected_lower_bound, expected_upper_bound); + } + + private void testSeekAndNextEstimationIndexOnlyScanBackwardHelper( + Statement stmt, String query, + String table_name, String index_name, + double expected_seeks, + double expected_nexts) throws Exception { + try { + testExplainDebug(stmt, + String.format("/*+ IndexOnlyScan(%s %s) */ %s", table_name, index_name, query), + makeTopLevelBuilder() + .plan(makePlanBuilder() + .nodeType(NODE_INDEX_ONLY_SCAN) + .scanDirection(INDEX_SCAN_DIRECTION_BACKWARD) + .relationName(table_name) + .indexName(index_name) + .estimatedSeeks(expectedSeeksRange(expected_seeks)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) + .build()) + .build()); + } + catch (AssertionError e) { + LOG.info("Failed Query: " + query); + LOG.info(e.toString()); + throw e; + } + } + + private void testSeekAndNextEstimationLimitIndexOnlyScanBackwardHelper( + Statement stmt, String query, + String table_name, String index_name, + double expected_seeks, + double expected_nexts) throws Exception { + try { + testExplainDebug(stmt, + String.format("/*+ IndexOnlyScan(%s %s) */ %s", table_name, index_name, 
query), + makeTopLevelBuilder() + .plan(makePlanBuilder() + .nodeType(NODE_LIMIT) + .plans(makePlanBuilder() + .nodeType(NODE_INDEX_ONLY_SCAN) + .scanDirection(INDEX_SCAN_DIRECTION_BACKWARD) + .relationName(table_name) + .indexName(index_name) + .estimatedSeeks(expectedSeeksRange(expected_seeks)) + .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) + .build()) + .build()) + .build()); + } + catch (AssertionError e) { + LOG.info("Failed Query: " + query); + LOG.info(e.toString()); + throw e; + } + } + + @Test + public void testSeekNextEstimationIndexScan() throws Exception { + setConnMgrWarmupModeAndRestartCluster(ConnectionManagerWarmupMode.ROUND_ROBIN); + boolean isConnMgr = isTestRunningWithConnectionManager(); + if (isConnMgr) { + setUp(); + } + + try (Statement stmt = connection.createStatement()) { + stmt.execute("CREATE TABLE t1 (k1 INT, k2 INT, v1 INT)"); + stmt.execute("INSERT INTO t1 SELECT s1, s2, s2 FROM " + + "generate_series(1, 300) s1, generate_series(1, 300) s2"); + + stmt.execute("CREATE INDEX t1_idx_1 ON t1 (k1 ASC, k2 ASC)"); + stmt.execute("ANALYZE t1"); + + /* Index Scan node isn't aware of LIMIT on top, expects to return all 125k rows */ + testSeekAndNextEstimationLimitIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 ORDER BY k1 DESC LIMIT 5000", + "t1", "t1_idx_1", 89, 90000); + testSeekAndNextEstimationLimitIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 ORDER BY k1 DESC LIMIT 10000", + "t1", "t1_idx_1", 89, 90000); + + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k1 < 100 ORDER BY k1 DESC", + "t1", "t1_idx_1",30, 30000); + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k1 > 200 ORDER BY k1 DESC", + "t1", "t1_idx_1",30, 30000); + + /* When filter on k2, additional seeks are needed for skip scan */ + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k2 < 100 ORDER BY k1 DESC", + "t1", "t1_idx_1",330, 30000); + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k2 > 200 ORDER BY k1 DESC", + "t1", "t1_idx_1",330, 30000); + + stmt.execute("DROP INDEX t1_idx_1"); + + stmt.execute("CREATE INDEX t1_idx_2 ON t1 (k1 DESC, k2 ASC)"); + stmt.execute("ANALYZE t1"); + + /* Index Scan node isn't aware of LIMIT on top, expects to return all 125k rows */ + testSeekAndNextEstimationLimitIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 ORDER BY k1 ASC LIMIT 5000", + "t1", "t1_idx_2", 89, 125000); + testSeekAndNextEstimationLimitIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 ORDER BY k1 ASC LIMIT 10000", + "t1", "t1_idx_2", 89, 125000); + + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k1 < 100 ORDER BY k1 ASC", + "t1", "t1_idx_2", 30, 30000); + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k1 > 200 ORDER BY k1 ASC", + "t1", "t1_idx_2", 30, 30000); + + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k2 < 100 ORDER BY k1 ASC", + "t1", "t1_idx_2", 330, 30000); + testSeekAndNextEstimationIndexOnlyScanBackwardHelper(stmt, + "SELECT k1, k2 FROM t1 WHERE k2 > 200 ORDER BY k1 ASC", + "t1", "t1_idx_2", 330, 30000); + } + } +} diff --git a/src/postgres/src/backend/commands/explain.c b/src/postgres/src/backend/commands/explain.c index 5e2368eedb1e..156786bbc65f 100644 --- a/src/postgres/src/backend/commands/explain.c +++ 
b/src/postgres/src/backend/commands/explain.c @@ -4858,8 +4858,8 @@ show_yb_planning_stats(YbPlanInfo *planinfo, ExplainState *es) { ExplainPropertyFloat("Estimated Seeks", NULL, planinfo->estimated_num_seeks, 0, es); - ExplainPropertyFloat("Estimated Nexts", NULL, - planinfo->estimated_num_nexts, 0, es); + ExplainPropertyFloat("Estimated Nexts And Prevs", NULL, + planinfo->estimated_num_nexts_prevs, 0, es); ExplainPropertyInteger("Estimated Docdb Result Width", NULL, planinfo->estimated_docdb_result_width, es); } diff --git a/src/postgres/src/backend/optimizer/path/costsize.c b/src/postgres/src/backend/optimizer/path/costsize.c index 4824b4a784b3..b5d93aade708 100644 --- a/src/postgres/src/backend/optimizer/path/costsize.c +++ b/src/postgres/src/backend/optimizer/path/costsize.c @@ -173,8 +173,6 @@ double yb_seq_block_cost = DEFAULT_SEQ_PAGE_COST; double yb_random_block_cost = DEFAULT_RANDOM_PAGE_COST; double yb_docdb_next_cpu_cycles = YB_DEFAULT_DOCDB_NEXT_CPU_CYCLES; double yb_seek_cost_factor = YB_DEFAULT_SEEK_COST_FACTOR; -double yb_backward_seek_cost_factor = YB_DEFAULT_BACKWARD_SEEK_COST_FACTOR; -double yb_fast_backward_seek_cost_factor = YB_DEFAULT_FAST_BACKWARD_SEEK_COST_FACTOR; int yb_docdb_merge_cpu_cycles = YB_DEFAULT_DOCDB_MERGE_CPU_CYCLES; int yb_docdb_remote_filter_overhead_cycles = YB_DEFAULT_DOCDB_REMOTE_FILTER_OVERHEAD_CYCLES; @@ -7389,7 +7387,7 @@ yb_cost_seqscan(Path *path, PlannerInfo *root, RelOptInfo *baserel, num_seeks = num_result_pages; num_nexts = (num_result_pages - 1) + (adjusted_baserel_tuples - 1); - path->yb_plan_info.estimated_num_nexts = num_nexts; + path->yb_plan_info.estimated_num_nexts_prevs = num_nexts; path->yb_plan_info.estimated_num_seeks = num_seeks; run_cost += (num_seeks * per_seek_cost) + (num_nexts * per_next_cost); @@ -7952,7 +7950,7 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, int num_sst_files_baserel = YB_DEFAULT_NUM_SST_FILES_PER_TABLE; Cost per_next_cost; double num_seeks; - double num_nexts; + double num_nexts_prevs; QualCost qual_cost; List *base_table_pushed_down_filters = NIL; List *base_table_colrefs = NIL; @@ -8138,13 +8136,13 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, * scan. */ num_seeks = 0; - num_nexts = 0; + num_nexts_prevs = 0; if (yb_exist_conditions_on_all_hash_keys_) { yb_estimate_seeks_nexts_in_index_scan(root, index, baserel, baserel_oid, index_conditions_selectivity, index_conditions_on_each_column, - &num_seeks, &num_nexts); + &num_seeks, &num_nexts_prevs); } else { @@ -8156,7 +8154,31 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, * find the next keys. */ num_seeks = 1; - num_nexts = index->tuples; + num_nexts_prevs = index->tuples; + } + + if (path->indexscandir == BackwardScanDirection && !YbUseFastBackwardScan()) + { + /* + * In case of backward index scan without the fast backward scan + * optimization, we need to do additional seeks and nexts. + * + * The seek and next estimates to look up the index for the index + * conditions are the same, but for each row that matches, we must + * first seek to the first key-value pair in DocDB and then do nexts + * to build the tuple. + * + * However, for each tuple we must first seek to the first key-value + * pair for the tuple and then do nexts to build the tuple. At the + * end, we must seek back to the first key-value pair for the tuple + * and do prev to find the last key-value pair of the previous + * tuple. This is why we must add 2 seeks and 1 prev for each tuple. 
+ * + * There is additional logic in this implementation that has not + * been modeled here for the sake of simplicity. + */ + num_seeks += (index_conditions_selectivity * adjusted_index_tuples * 2); + num_nexts_prevs += (index_conditions_selectivity * adjusted_index_tuples); } pfree(index_conditions_on_each_column); @@ -8188,23 +8210,17 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, per_next_cost = ((yb_docdb_next_cpu_cycles * cpu_operator_cost) + per_merge_cost); - if (path->indexscandir == BackwardScanDirection) - { - per_next_cost *= (YbUseFastBackwardScan() ? - yb_fast_backward_seek_cost_factor : - yb_backward_seek_cost_factor); - } - /* Adjust costing for parallelism, if used. */ if (path->path.parallel_workers > 0) { parallel_divisor = get_parallel_divisor(&path->path); adjusted_index_tuples = index->tuples / parallel_divisor; num_seeks = ceil(num_seeks / parallel_divisor); - num_nexts = ceil(num_nexts / parallel_divisor); + num_nexts_prevs = ceil(num_nexts_prevs / parallel_divisor); } - run_cost += num_seeks * index_per_seek_cost + num_nexts * per_next_cost; + run_cost += (num_seeks * index_per_seek_cost) + + (num_nexts_prevs * per_next_cost); /* Estimate the cost of checking the index conditions and filters */ List *index_conditions_and_filters = NIL; @@ -8304,7 +8320,7 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, YB_DEFAULT_DOCDB_BLOCK_SIZE); index_pages_fetched = clamp_row_est(index_selectivity * index_total_pages); index_random_pages_fetched = - ceil(num_seeks / (num_seeks + num_nexts)) * index_pages_fetched; + ceil(num_seeks / (num_seeks + num_nexts_prevs)) * index_pages_fetched; index_sequential_pages_fetched = index_pages_fetched - index_random_pages_fetched; @@ -8543,7 +8559,7 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, (MAX_NEXTS_TO_AVOID_SEEK + 1)); num_seeks += baserel_num_seeks; - num_nexts += baserel_num_nexts; + num_nexts_prevs += baserel_num_nexts; run_cost += (baserel_per_seek_cost * baserel_num_seeks); run_cost += (per_next_cost * baserel_num_nexts); } @@ -8564,7 +8580,7 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, } } - path->yb_plan_info.estimated_num_nexts = num_nexts; + path->yb_plan_info.estimated_num_nexts_prevs = num_nexts_prevs; path->yb_plan_info.estimated_num_seeks = num_seeks; /* Local filter costs */ @@ -8889,9 +8905,6 @@ yb_cost_bitmap_table_scan(Path *path, PlannerInfo *root, RelOptInfo *baserel, num_seeks = tuples_scanned; num_nexts = (max_nexts_to_avoid_seek + 1) * tuples_scanned; - path->yb_plan_info.estimated_num_nexts = num_nexts; - path->yb_plan_info.estimated_num_seeks = num_seeks; - run_cost += (num_seeks * per_seek_cost) + (num_nexts * per_next_cost); yb_get_roundtrip_transfer_costs(baserel_tablespace_id, @@ -8919,6 +8932,6 @@ yb_cost_bitmap_table_scan(Path *path, PlannerInfo *root, RelOptInfo *baserel, path->startup_cost = startup_cost * YB_BITMAP_DISCOURAGE_MODIFIER; path->total_cost = (startup_cost + run_cost) * YB_BITMAP_DISCOURAGE_MODIFIER; - path->yb_plan_info.estimated_num_nexts = num_nexts; + path->yb_plan_info.estimated_num_nexts_prevs = num_nexts; path->yb_plan_info.estimated_num_seeks = num_seeks; } diff --git a/src/postgres/src/include/nodes/pathnodes.h b/src/postgres/src/include/nodes/pathnodes.h index f9ebda694dea..a9e028972f0d 100644 --- a/src/postgres/src/include/nodes/pathnodes.h +++ b/src/postgres/src/include/nodes/pathnodes.h @@ -1268,7 +1268,7 @@ typedef struct YbPathInfo typedef struct YbPlanInfo { - double 
estimated_num_nexts; + double estimated_num_nexts_prevs; double estimated_num_seeks; int estimated_docdb_result_width; } YbPlanInfo; diff --git a/src/postgres/src/include/optimizer/cost.h b/src/postgres/src/include/optimizer/cost.h index bac9d62a196a..8226678f3e50 100644 --- a/src/postgres/src/include/optimizer/cost.h +++ b/src/postgres/src/include/optimizer/cost.h @@ -44,15 +44,7 @@ /* LSM Lookup costs */ #define YB_DEFAULT_DOCDB_NEXT_CPU_CYCLES 5 #define YB_DEFAULT_SEEK_COST_FACTOR 0.4 -#define YB_DEFAULT_BACKWARD_SEEK_COST_FACTOR 1 -/* - * YB: The value for the fast backward scan seek cost factor has been selected - * based on the smallest improvement (2.8 times) for the backward scan related - * Order By workloads of Featurebench. It might be good to use a different - * factor for colocated case, where the smallest improvement is 3 times higher - * comparing to non-colocated case; refer to D35894 for the details. - */ -#define YB_DEFAULT_FAST_BACKWARD_SEEK_COST_FACTOR (YB_DEFAULT_BACKWARD_SEEK_COST_FACTOR / 3.0) + /* YB: DocDB row decode and process cost */ #define YB_DEFAULT_DOCDB_MERGE_CPU_CYCLES 5 /* YB: DocDB storage filter cost */ From 363fe449a7b7a2b13d0b15a7c191064e36c8595a Mon Sep 17 00:00:00 2001 From: Arpit Nabaria Date: Tue, 13 May 2025 10:07:32 +0000 Subject: [PATCH 101/146] [PLAT-16735]Notify user on password reset Summary: Notify user after we hit /reset_password call. Added case to rerun the original task if it fails due to YBA restart Test Plan: Tested following cases - - Verified notification being sent after calling /reset_password API - Manually added some wait in sendNotificationTask -> reset password -> force kill YBA -> Notification is sent on YBA restart Reviewers: amalyshev Reviewed By: amalyshev Differential Revision: https://phorge.dev.yugabyte.com/D43211 --- managed/RUNTIME-FLAGS.md | 1 + .../tasks/SendUserNotification.java | 85 +++++++++++++++++++ .../yw/common/CustomerTaskManager.java | 27 ++++-- .../yw/common/config/CustomerConfKeys.java | 8 ++ .../yw/controllers/UsersController.java | 41 +++++++-- .../com/yugabyte/yw/models/CustomerTask.java | 12 ++- .../yugabyte/yw/models/helpers/TaskType.java | 5 ++ managed/src/main/resources/reference.conf | 1 + .../src/main/resources/swagger-strict.json | 26 +++++- managed/src/main/resources/swagger.json | 26 +++++- .../yw/controllers/UsersControllerTest.java | 7 ++ 11 files changed, 219 insertions(+), 20 deletions(-) create mode 100644 managed/src/main/java/com/yugabyte/yw/commissioner/tasks/SendUserNotification.java diff --git a/managed/RUNTIME-FLAGS.md b/managed/RUNTIME-FLAGS.md index f031a8723787..dd6e9362401c 100644 --- a/managed/RUNTIME-FLAGS.md +++ b/managed/RUNTIME-FLAGS.md @@ -21,6 +21,7 @@ | "Default Metric Graph Point Count" | "yb.metrics.default_points" | "CUSTOMER" | "Default Metric Graph Point Count, if step is not defined in the query" | "Integer" | | "Fetch Batch Size of Task Info" | "yb.task_info_db_query_batch_size" | "CUSTOMER" | "Knob that can be used to make lesser number of calls to DB" | "Integer" | | "Use Ansible for provisioning" | "yb.node_agent.use_ansible_provisioning" | "CUSTOMER" | "If enabled use Ansible for provisioning" | "Boolean" | +| "Notify user on password reset" | "yb.user.send_password_reset_notification" | "CUSTOMER" | "If enabled, user will be notified on password reset" | "Boolean" | | "Allow Unsupported Instances" | "yb.internal.allow_unsupported_instances" | "PROVIDER" | "Enabling removes supported instance type filtering on AWS providers." 
| "Boolean" | | "Default AWS Instance Type" | "yb.aws.default_instance_type" | "PROVIDER" | "Default AWS Instance Type" | "String" | | "Default GCP Instance Type" | "yb.gcp.default_instance_type" | "PROVIDER" | "Default GCP Instance Type" | "String" | diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/SendUserNotification.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/SendUserNotification.java new file mode 100644 index 000000000000..5537238e89ec --- /dev/null +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/SendUserNotification.java @@ -0,0 +1,85 @@ +package com.yugabyte.yw.commissioner.tasks; + +import com.yugabyte.yw.commissioner.AbstractTaskBase; +import com.yugabyte.yw.commissioner.BaseTaskDependencies; +import com.yugabyte.yw.common.EmailHelper; +import com.yugabyte.yw.common.alerts.SmtpData; +import com.yugabyte.yw.forms.AbstractTaskParams; +import com.yugabyte.yw.models.Customer; +import com.yugabyte.yw.models.Users; +import jakarta.mail.MessagingException; +import java.time.Duration; +import java.util.Map; +import java.util.UUID; +import javax.inject.Inject; +import lombok.extern.slf4j.Slf4j; + +@Slf4j +public class SendUserNotification extends AbstractTaskBase { + + private final EmailHelper emailHelper; + private static final int MAX_RETRY_ATTEMPTS = 3; + private final Duration SLEEP_TIME = Duration.ofSeconds(10); + + @Inject + public SendUserNotification(BaseTaskDependencies baseTaskDependencies, EmailHelper emailHelper) { + super(baseTaskDependencies); + this.emailHelper = emailHelper; + } + + public static class Params extends AbstractTaskParams { + public UUID customerUUID; + public UUID userUUID; + public String emailSubject; + public String emailBody; + } + + public Params params() { + return (Params) taskParams; + } + + @Override + public void run() { + Customer customer = Customer.getOrBadRequest(params().customerUUID); + Users user = Users.getOrBadRequest(params().userUUID); + String emailDestinations = user.getEmail(); + if (emailDestinations == null || emailDestinations.isEmpty()) { + log.warn("Email destination is empty or null"); + return; + } + if (!sendNotificationToUser(customer, emailDestinations)) { + log.warn( + "Failed to send email after {} attempts to {}. 
Skipping", + MAX_RETRY_ATTEMPTS, + emailDestinations); + } else { + log.info("Password reset notification sent to user: " + emailDestinations); + } + } + + private boolean sendNotificationToUser(Customer customer, String emailDestinations) { + + SmtpData smtpData = emailHelper.getSmtpData(customer.getUuid()); + if (smtpData == null) { + log.warn("SMTP data not found for customer: {}", customer.getUuid()); + return false; + } + Map contentMap = Map.of("text/plain; charset=\"us-ascii\"", params().emailBody); + for (int attempt = 1; attempt <= MAX_RETRY_ATTEMPTS; attempt++) { + try { + emailHelper.sendEmail( + customer, params().emailSubject, emailDestinations, smtpData, contentMap); + log.info("Email sent successfully on attempt {}", attempt); + return true; + } catch (MessagingException e) { + log.warn("Attempt {} to send email failed: {}", attempt, e.getMessage()); + if (attempt == MAX_RETRY_ATTEMPTS) { + log.warn("All attempts to send email failed for {}", emailDestinations); + } else { + waitFor(SLEEP_TIME); + } + } + } + return false; + } +} diff --git a/managed/src/main/java/com/yugabyte/yw/common/CustomerTaskManager.java b/managed/src/main/java/com/yugabyte/yw/common/CustomerTaskManager.java index f41fda2d8758..76a9062ca44d 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/CustomerTaskManager.java +++ b/managed/src/main/java/com/yugabyte/yw/common/CustomerTaskManager.java @@ -21,6 +21,7 @@ import com.yugabyte.yw.commissioner.tasks.ReadOnlyClusterDelete; import com.yugabyte.yw.commissioner.tasks.ReadOnlyKubernetesClusterDelete; import com.yugabyte.yw.commissioner.tasks.RebootNodeInUniverse; +import com.yugabyte.yw.commissioner.tasks.SendUserNotification; import com.yugabyte.yw.commissioner.tasks.params.IProviderTaskParams; import com.yugabyte.yw.commissioner.tasks.params.NodeTaskParams; import com.yugabyte.yw.common.YsqlQueryExecutor.ConsistencyInfoResp; @@ -242,6 +243,8 @@ public void handlePendingTask(CustomerTask customerTask, TaskInfo taskInfo) { resumeTask = true; isRestoreYbc = true; } + } else if (CustomerTask.TaskType.SendUserNotification.equals(type)) { + resumeTask = true; } if (!isRestoreYbc) { @@ -286,18 +289,19 @@ public void handlePendingTask(CustomerTask customerTask, TaskInfo taskInfo) { // Resume tasks if any TaskType taskType = taskInfo.getTaskType(); - UniverseTaskParams taskParams = null; + AbstractTaskParams taskParams = null; log.info("Resume Task: {}", resumeTask); try { - if (resumeTask && optUniv.isPresent()) { - Universe universe = optUniv.get(); - if (!taskUUID.equals(universe.getUniverseDetails().updatingTaskUUID)) { - log.debug("Invalid task state: Task {} cannot be resumed", taskUUID); - customerTask.markAsCompleted(); - return; + if (resumeTask) { + if (optUniv.isPresent()) { + Universe universe = optUniv.get(); + if (!taskUUID.equals(universe.getUniverseDetails().updatingTaskUUID)) { + log.debug("Invalid task state: Task {} cannot be resumed", taskUUID); + customerTask.markAsCompleted(); + return; + } } - switch (taskType) { case CreateBackup: BackupRequestParams backupParams = @@ -309,6 +313,11 @@ public void handlePendingTask(CustomerTask customerTask, TaskInfo taskInfo) { Json.fromJson(taskInfo.getTaskParams(), RestoreBackupParams.class); taskParams = restoreParams; break; + case SendUserNotification: + SendUserNotification.Params sendParams = + Json.fromJson(taskInfo.getTaskParams(), SendUserNotification.Params.class); + taskParams = sendParams; + break; default: log.error("Invalid task type: {} during platform restart", taskType); return; @@ 
-320,7 +329,7 @@ public void handlePendingTask(CustomerTask customerTask, TaskInfo taskInfo) { subtask -> { subtask.delete(); }); - UniverseTaskParams finalTaskParams = taskParams; + AbstractTaskParams finalTaskParams = taskParams; Util.doWithCorrelationId( corrId -> { // There is a chance that async execution is delayed and correlation ID is diff --git a/managed/src/main/java/com/yugabyte/yw/common/config/CustomerConfKeys.java b/managed/src/main/java/com/yugabyte/yw/common/config/CustomerConfKeys.java index b427cebb2d50..0190ae5609e3 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/config/CustomerConfKeys.java +++ b/managed/src/main/java/com/yugabyte/yw/common/config/CustomerConfKeys.java @@ -242,4 +242,12 @@ public class CustomerConfKeys extends RuntimeConfigKeysModule { "If enabled use Ansible for provisioning", ConfDataType.BooleanType, ImmutableList.of(ConfKeyTags.PUBLIC)); + public static final ConfKeyInfo notifyUserOnPasswordReset = + new ConfKeyInfo<>( + "yb.user.send_password_reset_notification", + ScopeType.CUSTOMER, + "Notify user on password reset", + "If enabled, user will be notified on password reset", + ConfDataType.BooleanType, + ImmutableList.of(ConfKeyTags.PUBLIC)); } diff --git a/managed/src/main/java/com/yugabyte/yw/controllers/UsersController.java b/managed/src/main/java/com/yugabyte/yw/controllers/UsersController.java index 1367a67fc76a..4c84b246be2e 100644 --- a/managed/src/main/java/com/yugabyte/yw/controllers/UsersController.java +++ b/managed/src/main/java/com/yugabyte/yw/controllers/UsersController.java @@ -5,8 +5,11 @@ import static com.yugabyte.yw.common.Util.getRandomPassword; import com.fasterxml.jackson.databind.JsonNode; +import com.yugabyte.yw.commissioner.Commissioner; +import com.yugabyte.yw.commissioner.tasks.SendUserNotification; import com.yugabyte.yw.common.PlatformServiceException; import com.yugabyte.yw.common.Util; +import com.yugabyte.yw.common.config.CustomerConfKeys; import com.yugabyte.yw.common.config.GlobalConfKeys; import com.yugabyte.yw.common.config.RuntimeConfGetter; import com.yugabyte.yw.common.config.RuntimeConfigFactory; @@ -15,7 +18,6 @@ import com.yugabyte.yw.common.rbac.PermissionInfo.ResourceType; import com.yugabyte.yw.common.rbac.RoleBindingUtil; import com.yugabyte.yw.common.rbac.RoleResourceDefinition; -import com.yugabyte.yw.common.rbac.RoleUtil; import com.yugabyte.yw.common.user.UserService; import com.yugabyte.yw.forms.PlatformResults; import com.yugabyte.yw.forms.PlatformResults.YBPSuccess; @@ -24,10 +26,12 @@ import com.yugabyte.yw.forms.UserRegisterFormData; import com.yugabyte.yw.models.Audit; import com.yugabyte.yw.models.Customer; +import com.yugabyte.yw.models.CustomerTask; import com.yugabyte.yw.models.Users; import com.yugabyte.yw.models.Users.UserType; import com.yugabyte.yw.models.common.YbaApi; import com.yugabyte.yw.models.extended.UserWithFeatures; +import com.yugabyte.yw.models.helpers.TaskType; import com.yugabyte.yw.models.rbac.ResourceGroup; import com.yugabyte.yw.models.rbac.Role; import com.yugabyte.yw.models.rbac.RoleBinding; @@ -69,8 +73,8 @@ public class UsersController extends AuthenticatedController { private final UserService userService; private final TokenAuthenticator tokenAuthenticator; private final RuntimeConfigFactory runtimeConfigFactory; - private final RoleUtil roleUtil; private final RoleBindingUtil roleBindingUtil; + private final Commissioner commissioner; @Inject public UsersController( @@ -78,14 +82,14 @@ public UsersController( UserService userService, TokenAuthenticator 
tokenAuthenticator, RuntimeConfigFactory runtimeConfigFactory, - RoleUtil roleUtil, - RoleBindingUtil roleBindingUtil) { + RoleBindingUtil roleBindingUtil, + Commissioner commissioner) { this.passwordPolicyService = passwordPolicyService; this.userService = userService; this.tokenAuthenticator = tokenAuthenticator; this.runtimeConfigFactory = runtimeConfigFactory; - this.roleUtil = roleUtil; this.roleBindingUtil = roleBindingUtil; + this.commissioner = commissioner; } /** @@ -491,6 +495,11 @@ public Result resetPassword(UUID customerUUID, Http.Request request) { passwordPolicyService.checkPasswordPolicy(customerUUID, formData.getNewPassword()); user.setPassword(formData.getNewPassword()); user.save(); + if (confGetter.getConfForScope( + Customer.get(customerUUID), CustomerConfKeys.notifyUserOnPasswordReset)) { + sendPasswordResetNotification(customerUUID, user); + } + auditService() .createAuditEntryWithReqBody( request, @@ -627,4 +636,26 @@ public Result updateProfile(UUID customerUUID, UUID userUUID, Http.Request reque user.save(); return ok(Json.toJson(user)); } + + private void sendPasswordResetNotification(UUID customerUUID, Users user) { + SendUserNotification.Params taskParams = new SendUserNotification.Params(); + taskParams.customerUUID = customerUUID; + taskParams.userUUID = user.getUuid(); + taskParams.emailSubject = "YugabyteDB Anywhere Password Reset Notification"; + taskParams.emailBody = + "This is to confirm that your account password for " + + "YugabyteDB Anywhere (YBA) was successfully changed.\n\n" + + "If you made this change, no further action is required.\n\n" + + "If you did not request this change or believe your account may be compromised," + + " please reset your password immediately or contact support.\n\n"; + UUID taskUUID = commissioner.submit(TaskType.SendUserNotification, taskParams); + log.info("Submitted task to notify user {}, task uuid = {}.", user.getEmail(), taskUUID); + CustomerTask.createWithBackgroundUser( + Customer.getOrBadRequest(customerUUID), + user.getUuid(), + taskUUID, + CustomerTask.TargetType.User, + CustomerTask.TaskType.SendUserNotification, + user.getEmail()); + } } diff --git a/managed/src/main/java/com/yugabyte/yw/models/CustomerTask.java b/managed/src/main/java/com/yugabyte/yw/models/CustomerTask.java index 841b7d517169..ef85ed9a40dc 100644 --- a/managed/src/main/java/com/yugabyte/yw/models/CustomerTask.java +++ b/managed/src/main/java/com/yugabyte/yw/models/CustomerTask.java @@ -101,7 +101,10 @@ public enum TargetType { NodeAgent(false), @EnumValue("Platform") - Yba(false); + Yba(false), + + @EnumValue("User") + User(false); private final boolean universeTarget; @@ -402,7 +405,10 @@ public enum TaskType { CloneNamespace, @EnumValue("UpdateOOMServiceState") - UpdateOOMServiceState; + UpdateOOMServiceState, + + @EnumValue("SendUserNotification") + SendUserNotification; public String toString(boolean completed) { switch (this) { @@ -589,6 +595,8 @@ public String toString(boolean completed) { return completed ? "Cloned Namespace" : "Cloning Namespace"; case UpdateOOMServiceState: return completed ? "Updated OOM service state" : "Updating OOM service state"; + case SendUserNotification: + return completed ? 
"Sent user notification" : "Sending user notification"; default: return null; } diff --git a/managed/src/main/java/com/yugabyte/yw/models/helpers/TaskType.java b/managed/src/main/java/com/yugabyte/yw/models/helpers/TaskType.java index b917ae47a1e9..f7251fc667cb 100644 --- a/managed/src/main/java/com/yugabyte/yw/models/helpers/TaskType.java +++ b/managed/src/main/java/com/yugabyte/yw/models/helpers/TaskType.java @@ -660,6 +660,11 @@ public enum TaskType { CustomerTask.TaskType.UpdateOOMServiceState, CustomerTask.TargetType.Universe), + SendUserNotification( + com.yugabyte.yw.commissioner.tasks.SendUserNotification.class, + CustomerTask.TaskType.SendUserNotification, + CustomerTask.TargetType.User), + /* Subtasks start here */ KubernetesCheckVolumeExpansion( diff --git a/managed/src/main/resources/reference.conf b/managed/src/main/resources/reference.conf index 018418662f72..4b5a92b04009 100644 --- a/managed/src/main/resources/reference.conf +++ b/managed/src/main/resources/reference.conf @@ -984,6 +984,7 @@ yb { user { disable_v1_api_token = false + send_password_reset_notification = true } security { diff --git a/managed/src/main/resources/swagger-strict.json b/managed/src/main/resources/swagger-strict.json index 4bfcd5179593..73a427fa7f5e 100644 --- a/managed/src/main/resources/swagger-strict.json +++ b/managed/src/main/resources/swagger-strict.json @@ -2559,6 +2559,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -5185,6 +5186,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -8173,6 +8175,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -8944,6 +8947,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -10333,6 +10337,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -14443,6 +14448,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -15421,6 +15427,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -16223,6 +16230,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -17706,6 +17714,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -18240,6 +18249,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -18616,6 +18626,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", 
"KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -19413,6 +19424,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -20480,6 +20492,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -21114,7 +21127,8 @@ "UniverseKey", "MasterKey", "NodeAgent", - "Yba" + "Yba", + "User" ], "type" : "string" }, @@ -21220,7 +21234,8 @@ "EnableNodeAgent", "Decommission", "CloneNamespace", - "UpdateOOMServiceState" + "UpdateOOMServiceState", + "SendUserNotification" ], "type" : "string" }, @@ -21746,6 +21761,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -22391,6 +22407,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -23291,6 +23308,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -23900,6 +23918,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -24509,6 +24528,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -25520,6 +25540,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -26637,6 +26658,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", diff --git a/managed/src/main/resources/swagger.json b/managed/src/main/resources/swagger.json index 970e4866f4ae..0e7fc86b1126 100644 --- a/managed/src/main/resources/swagger.json +++ b/managed/src/main/resources/swagger.json @@ -2574,6 +2574,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -5223,6 +5224,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -8211,6 +8213,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -8986,6 +8989,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -10379,6 +10383,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -14550,6 +14555,7 @@ "DecommissionNode", 
"CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -15541,6 +15547,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -16343,6 +16350,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -17834,6 +17842,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -18368,6 +18377,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -18744,6 +18754,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -19541,6 +19552,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -20608,6 +20620,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -21242,7 +21255,8 @@ "UniverseKey", "MasterKey", "NodeAgent", - "Yba" + "Yba", + "User" ], "type" : "string" }, @@ -21348,7 +21362,8 @@ "EnableNodeAgent", "Decommission", "CloneNamespace", - "UpdateOOMServiceState" + "UpdateOOMServiceState", + "SendUserNotification" ], "type" : "string" }, @@ -21874,6 +21889,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -22519,6 +22535,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -23419,6 +23436,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -24028,6 +24046,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -24637,6 +24656,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -25648,6 +25668,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", @@ -26812,6 +26833,7 @@ "DecommissionNode", "CloneNamespace", "UpdateOOMServiceState", + "SendUserNotification", "KubernetesCheckVolumeExpansion", "KubernetesPostExpansionCheckVolume", "NodeCertReloadTask", diff --git a/managed/src/test/java/com/yugabyte/yw/controllers/UsersControllerTest.java b/managed/src/test/java/com/yugabyte/yw/controllers/UsersControllerTest.java index 56d44761a86e..f6c53521c997 
100644 --- a/managed/src/test/java/com/yugabyte/yw/controllers/UsersControllerTest.java +++ b/managed/src/test/java/com/yugabyte/yw/controllers/UsersControllerTest.java @@ -9,6 +9,8 @@ import static com.yugabyte.yw.models.Users.Role; import static org.hamcrest.CoreMatchers.*; import static org.junit.Assert.*; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.when; import static play.mvc.Http.Status.*; import static play.test.Helpers.contentAsString; import static play.test.Helpers.fakeRequest; @@ -30,6 +32,7 @@ import com.yugabyte.yw.models.Users; import com.yugabyte.yw.models.Users.Role; import com.yugabyte.yw.models.extended.UserWithFeatures; +import com.yugabyte.yw.models.helpers.TaskType; import com.yugabyte.yw.models.rbac.ResourceGroup; import com.yugabyte.yw.models.rbac.ResourceGroup.ResourceDefinition; import com.yugabyte.yw.models.rbac.Role.RoleType; @@ -299,6 +302,8 @@ public void testPasswordChangeValid() throws IOException { @Test public void testResetPassword() { + UUID fakeTaskUUID = buildTaskInfo(null, TaskType.SendUserNotification); + when(mockCommissioner.submit(any(), any())).thenReturn(fakeTaskUUID); Users testUser1 = ModelFactory.testUser(customer1, "tc3@test.com", Role.Admin); String authTokenTest = testUser1.createAuthToken(); assertEquals(testUser1.getRole(), Role.Admin); @@ -345,6 +350,8 @@ public void testResetPasswordForNonLocalUser() { @Test public void testResetPasswordWithNewRbac() { + UUID fakeTaskUUID = buildTaskInfo(null, TaskType.SendUserNotification); + when(mockCommissioner.submit(any(), any())).thenReturn(fakeTaskUUID); RuntimeConfigEntry.upsertGlobal("yb.rbac.use_new_authz", "true"); ResourceGroup rG = new ResourceGroup(new HashSet<>(Arrays.asList(rd1))); RoleBinding.create(user1, RoleBindingType.Custom, role, rG); From 6c31ff6ee302894ecea13011d0de555cbc4419c0 Mon Sep 17 00:00:00 2001 From: Kai Franz Date: Fri, 16 May 2025 18:10:02 -0700 Subject: [PATCH 102/146] [#27062] YSQL: Fix ConcurrentTablespaceTest.testAlterIndexSetTablespace MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Summary: Currently, `ConcurrentTablespaceTest.testAlterIndexSetTablespace` has a concurrency bug due to the way `CyclicBarrier` is used. The test starts two threads, one that does DDL and one that does DML, and has them each execute 12 statements. Before executing each statement, each thread calls `CyclicBarrier.await` to ensure they are executing the statements at the same time. ### The problem The problem occurs when the DML thread runs into retriable errors (which is expected): if this happens, the statement is retried without incrementing the statement counter, but the thread calls `CyclicBarrier.await` before retrying the statement. This means that if thread A runs into an error on statement 10, it'll retry statement 10 but this retry will be synchronized with other threads' statement 11. This causes an issue at the end of the test when the threads run into uneven numbers of errors; since `CyclicBarrier.await` waits until all threads call `await`, it's possible that one thread finishes executing, so it never calls `await`, but another thread is stuck waiting at `await`. ### The fix We fix this by swapping out the `CyclicBarrier` for a `Phaser`--`CyclicBarrier` requires a fixed number of threads, while `Phaser` allows for a dynamic number of threads. Using the `Phaser`, we de-register threads that are done executing, allowing the remaining threads to continue execution afterwards.
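A minimal, self-contained sketch of this synchronization pattern (independent of the actual test harness; the thread/statement counts and `simulateStatement` are placeholders, not part of the real test) looks roughly like the following:

```java
import java.util.concurrent.Phaser;
import java.util.concurrent.ThreadLocalRandom;

public class PhaserRetrySketch {
  private static final int NUM_THREADS = 2;        // placeholder: one DDL + one DML thread
  private static final int STMTS_PER_THREAD = 12;  // placeholder statement count

  public static void main(String[] args) throws InterruptedException {
    // Register one party per worker thread up front.
    Phaser phaser = new Phaser(NUM_THREADS);
    Thread[] workers = new Thread[NUM_THREADS];
    for (int t = 0; t < NUM_THREADS; t++) {
      final int threadId = t;
      workers[t] = new Thread(() -> {
        int stmt = 0;
        while (stmt < STMTS_PER_THREAD) {
          // Every attempt, including retries, waits only for the parties that
          // are still registered, not for a fixed thread count.
          phaser.arriveAndAwaitAdvance();
          boolean retriableError = simulateStatement(threadId, stmt);
          if (!retriableError) {
            stmt++;  // advance the statement counter only on success
          }
        }
        // Done executing: shrink the party count so the remaining threads are
        // not left blocked waiting for this finished thread.
        phaser.arriveAndDeregister();
      });
      workers[t].start();
    }
    for (Thread worker : workers) {
      worker.join();
    }
  }

  // Stand-in for executing a DDL/DML statement; returns true when a
  // retriable error occurred and the statement should be re-run.
  private static boolean simulateStatement(int threadId, int stmt) {
    return ThreadLocalRandom.current().nextInt(10) == 0;
  }
}
```

Each retry re-synchronizes with whichever parties are still registered, and a finished thread deregisters itself instead of leaving the other threads stuck at the barrier.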
Jira: DB-16543 Test Plan: ``` ./yb_build.sh release --java-test org.yb.pgsql.ConcurrentTablespaceTest#testAlterIndexSetTablespace ``` Reviewers: sanketh, myang Reviewed By: myang Differential Revision: https://phorge.dev.yugabyte.com/D43992 --- .../yb/pgsql/ConcurrentTablespaceTest.java | 42 +++++++++++-------- 1 file changed, 24 insertions(+), 18 deletions(-) diff --git a/java/yb-pgsql/src/test/java/org/yb/pgsql/ConcurrentTablespaceTest.java b/java/yb-pgsql/src/test/java/org/yb/pgsql/ConcurrentTablespaceTest.java index 446848b2d501..70e9ac24f034 100644 --- a/java/yb-pgsql/src/test/java/org/yb/pgsql/ConcurrentTablespaceTest.java +++ b/java/yb-pgsql/src/test/java/org/yb/pgsql/ConcurrentTablespaceTest.java @@ -121,7 +121,7 @@ private Tablespace[] generateTestTablespaces() { private List setupConcurrentDdlDmlThreads(String ddlTemplate) { final int totalThreads = numDmlThreads + numDdlThreads; - final CyclicBarrier barrier = new CyclicBarrier(totalThreads); + final Phaser phaser = new Phaser(totalThreads); final List threads = new ArrayList<>(); // Add the DDL thread. @@ -131,14 +131,14 @@ private List setupConcurrentDdlDmlThreads(String ddlTemplate) { connections[i], ddlTemplate, errorsDetected, - barrier, + phaser, numStmtsPerThread, tablespaces)); } // Add the DML threads. for (int i = numDdlThreads; i < totalThreads; ++i) { - threads.add(new DMLRunner(connections[i], errorsDetected, barrier, numStmtsPerThread, i)); + threads.add(new DMLRunner(connections[i], errorsDetected, phaser, numStmtsPerThread, i)); } return threads; } @@ -223,7 +223,7 @@ public void testTableCreationFailure() throws Exception { YBClient client = miniCluster.getClient(); connections = setupConnections(); final int totalThreads = numDmlThreads + numDdlThreads; - final CyclicBarrier barrier = new CyclicBarrier(totalThreads); + final Phaser phaser = new Phaser(totalThreads); final List threads = new ArrayList<>(); AtomicBoolean invalidPlacementError = new AtomicBoolean(false); @@ -236,7 +236,7 @@ public void testTableCreationFailure() throws Exception { connections[0], "CREATE TABLE validplacementtable (a int) TABLESPACE %s", errorsDetected, - barrier, + phaser, 1, new Tablespace[] {valid_ts})); @@ -248,7 +248,7 @@ public void testTableCreationFailure() throws Exception { connections[1], "CREATE TABLE invalid_placementtable (a int) TABLESPACE %s", invalidPlacementError, - barrier, + phaser, 1, new Tablespace[] {invalid_ts})); @@ -268,19 +268,19 @@ public void testTableCreationFailure() throws Exception { public abstract class SQLRunner extends Thread { protected final Connection conn; protected final AtomicBoolean errorsDetected; - protected final CyclicBarrier barrier; + protected final Phaser phaser; protected final int numStmtsPerThread; protected int idx; // Only used by DMLRunner public SQLRunner( Connection conn, AtomicBoolean errorsDetected, - CyclicBarrier barrier, + Phaser phaser, int numStmtsPerThread, int idx) { this.conn = conn; this.errorsDetected = errorsDetected; - this.barrier = barrier; + this.phaser = phaser; this.numStmtsPerThread = numStmtsPerThread; this.idx = idx; // This field is not used in DDLRunner } @@ -289,16 +289,22 @@ public SQLRunner( public void run() { int item_idx = 0; while (item_idx < numStmtsPerThread && !errorsDetected.get()) { - try (Statement lstmt = conn.createStatement()) { - barrier.await(); - executeStatement(lstmt, item_idx); + try { + phaser.arriveAndAwaitAdvance(); + + try (Statement lstmt = conn.createStatement()) { + executeStatement(lstmt, item_idx); + } + item_idx++; } 
catch (PSQLException e) { handlePSQLException(e); - } catch (SQLException | InterruptedException | BrokenBarrierException e) { + } catch (SQLException e) { logAndSetError(e); } } + // Thread finished: shrink the party count. + phaser.arriveAndDeregister(); } protected abstract void executeStatement(Statement lstmt, int item_idx) throws SQLException; @@ -324,7 +330,7 @@ private void handlePSQLException(PSQLException e) { protected void logAndSetError(Exception e) { LOG.info("SQL thread: Unexpected error: ", e); errorsDetected.set(true); - barrier.reset(); + phaser.forceTermination(); } } @@ -337,10 +343,10 @@ public class DMLRunner extends SQLRunner { public DMLRunner( Connection conn, AtomicBoolean errorsDetected, - CyclicBarrier barrier, + Phaser phaser, int numStmtsPerThread, int idx) { - super(conn, errorsDetected, barrier, numStmtsPerThread, idx); + super(conn, errorsDetected, phaser, numStmtsPerThread, idx); } @Override @@ -372,10 +378,10 @@ public DDLRunner( Connection conn, String sql, AtomicBoolean errorsDetected, - CyclicBarrier barrier, + Phaser phaser, int numStmtsPerThread, Tablespace[] tablespaces) { - super(conn, errorsDetected, barrier, numStmtsPerThread, 0); // idx is not used here + super(conn, errorsDetected, phaser, numStmtsPerThread, 0); // idx is not used here this.sql = sql; this.tablespaces = tablespaces; } From 5946bfec9693672045943aba350baa4f1f2e9389 Mon Sep 17 00:00:00 2001 From: Timur Yusupov Date: Fri, 16 May 2025 19:02:14 +0300 Subject: [PATCH 103/146] [#27233] docdb: added check_data_blocks command to SST dump Summary: Added `check_data_blocks` to sst_dump tool. It will check all data blocks consistency using SST file index and per-block checksums. `--skip_uncompress` option specifies that `check_data_blocks` command should skip uncompressing data blocks (false by default). If any corrupt data blocks are found, the tool will generate `dd` commands which could be used to save specific blocks (corrupted block and one block before/after) for analysis. Example usage: ``` ./sst_dump --command=check_data_blocks --file=000046.sst 2>&1 | tee sst_dump.out cat sst_dump.out | grep '^dd ' ``` Jira: DB-16720 Test Plan: Tested manually on artificially corrupt SST file: ``` ~/code/yugabyte5/build/latest/bin/sst_dump --command=check_data_blocks --file=000046.sst 2>&1 | tee sst_dump.out WARNING: Logging before InitGoogleLogging() is written to STDERR I0517 07:21:12.046017 3481 sst_dump_tool.cc:433] Checking data block #0 handle: BlockHandle { offset: 0 size: 3927 } W0517 07:21:12.051733 3481 file_reader_writer.cc:101] Read attempt #1 failed in file 000046.sst.sblock.0 : Corruption (yb/rocksdb/table/format.cc:345): Block checksum mismatch in file: 000046.sst.sblock.0, block handle: BlockHandle { offset: 996452 size: 4024 }, expected checksum: 1118200703, actual checksum: 1023486239. W0517 07:21:12.051852 3481 file_reader_writer.cc:101] Read attempt #2 failed in file 000046.sst.sblock.0 : Corruption (yb/rocksdb/table/format.cc:345): Block checksum mismatch in file: 000046.sst.sblock.0, block handle: BlockHandle { offset: 996452 size: 4024 }, expected checksum: 1118200703, actual checksum: 1023486239. E0517 07:21:12.051862 3481 format.cc:466] ReadBlockContents: Corruption (yb/rocksdb/table/format.cc:345): Block checksum mismatch in file: 000046.sst.sblock.0, block handle: BlockHandle { offset: 996452 size: 4024 }, expected checksum: 1118200703, actual checksum: 1023486239. 
@ 0x7fd68e2eafde rocksdb::SstFileReader::CheckDataBlocks() @ 0x7fd68e2ec8ca rocksdb::SSTDumpTool::Run() @ 0x555855a53f78 main @ 0x7fd688e45ca3 __libc_start_main @ 0x555855a53eae _start W0517 07:21:12.056023 3481 sst_dump_tool.cc:441] Failed to read block with handle: BlockHandle { offset: 996452 size: 4024 }. Corruption (yb/rocksdb/table/format.cc:345): Block checksum mismatch in file: 000046.sst.sblock.0, block handle: BlockHandle { offset: 996452 size: 4024 }, expected checksum: 1118200703, actual checksum: 1023486239. from [] to [] Process 000046.sst Sst file format: block-based dd if="000046.sst.sblock.0" bs=1 skip=992433 count=4014 of="000046.sst.sblock.0.offset_992433.size_4014.part" dd if="000046.sst.sblock.0" bs=1 skip=996452 count=4024 of="000046.sst.sblock.0.offset_996452.size_4024.part" dd if="000046.sst.sblock.0" bs=1 skip=1000481 count=4019 of="000046.sst.sblock.0.offset_1000481.size_4019.part" W0517 07:21:12.634256 3481 file_reader_writer.cc:101] Read attempt #1 failed in file 000046.sst.sblock.0 : Corruption (yb/rocksdb/table/format.cc:345): Block checksum mismatch in file: 000046.sst.sblock.0, block handle: BlockHandle { offset: 119754675 size: 11766 }, expected checksum: 84894407, actual checksum: 2295535311. W0517 07:21:12.634332 3481 file_reader_writer.cc:101] Read attempt #2 failed in file 000046.sst.sblock.0 : Corruption (yb/rocksdb/table/format.cc:345): Block checksum mismatch in file: 000046.sst.sblock.0, block handle: BlockHandle { offset: 119754675 size: 11766 }, expected checksum: 84894407, actual checksum: 2295535311. E0517 07:21:12.634339 3481 format.cc:466] ReadBlockContents: Corruption (yb/rocksdb/table/format.cc:345): Block checksum mismatch in file: 000046.sst.sblock.0, block handle: BlockHandle { offset: 119754675 size: 11766 }, expected checksum: 84894407, actual checksum: 2295535311. @ 0x7fd68e2eafde rocksdb::SstFileReader::CheckDataBlocks() @ 0x7fd68e2ec8ca rocksdb::SSTDumpTool::Run() @ 0x555855a53f78 main @ 0x7fd688e45ca3 __libc_start_main @ 0x555855a53eae _start W0517 07:21:12.638135 3481 sst_dump_tool.cc:441] Failed to read block with handle: BlockHandle { offset: 119754675 size: 11766 }. Corruption (yb/rocksdb/table/format.cc:345): Block checksum mismatch in file: 000046.sst.sblock.0, block handle: BlockHandle { offset: 119754675 size: 11766 }, expected checksum: 84894407, actual checksum: 2295535311. 
dd if="000046.sst.sblock.0" bs=1 skip=119742901 count=11769 of="000046.sst.sblock.0.offset_119742901.size_11769.part" dd if="000046.sst.sblock.0" bs=1 skip=119754675 count=11766 of="000046.sst.sblock.0.offset_119754675.size_11766.part" dd if="000046.sst.sblock.0" bs=1 skip=119766446 count=1384 of="000046.sst.sblock.0.offset_119766446.size_1384.part" ``` ``` cat sst_dump.out | grep '^dd ' dd if="000046.sst.sblock.0" bs=1 skip=992433 count=4014 of="000046.sst.sblock.0.offset_992433.size_4014.part" dd if="000046.sst.sblock.0" bs=1 skip=996452 count=4024 of="000046.sst.sblock.0.offset_996452.size_4024.part" dd if="000046.sst.sblock.0" bs=1 skip=1000481 count=4019 of="000046.sst.sblock.0.offset_1000481.size_4019.part" dd if="000046.sst.sblock.0" bs=1 skip=119742901 count=11769 of="000046.sst.sblock.0.offset_119742901.size_11769.part" dd if="000046.sst.sblock.0" bs=1 skip=119754675 count=11766 of="000046.sst.sblock.0.offset_119754675.size_11766.part" dd if="000046.sst.sblock.0" bs=1 skip=119766446 count=1384 of="000046.sst.sblock.0.offset_119766446.size_1384.part" ``` Reviewers: hsunder, yyan Reviewed By: yyan Subscribers: ybase Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D44035 --- src/yb/rocksdb/tools/sst_dump_tool.cc | 112 +++++++++++++++++++---- src/yb/rocksdb/tools/sst_dump_tool_imp.h | 9 +- 2 files changed, 101 insertions(+), 20 deletions(-) diff --git a/src/yb/rocksdb/tools/sst_dump_tool.cc b/src/yb/rocksdb/tools/sst_dump_tool.cc index 3ac6ef2a7a4e..1e9751ea1461 100644 --- a/src/yb/rocksdb/tools/sst_dump_tool.cc +++ b/src/yb/rocksdb/tools/sst_dump_tool.cc @@ -53,6 +53,7 @@ #include "yb/docdb/docdb_debug.h" +#include "yb/util/format.h" #include "yb/util/status_log.h" using yb::docdb::StorageDbType; @@ -60,8 +61,6 @@ using yb::docdb::StorageDbType; namespace rocksdb { using std::dynamic_pointer_cast; -using std::unique_ptr; -using std::shared_ptr; std::string DocDBKVFormatter::Format( const yb::Slice&, const yb::Slice&, yb::docdb::StorageDbType) const { @@ -97,9 +96,7 @@ Status SstFileReader::GetTableReader(const std::string& file_path) { uint64_t magic_number; // read table magic number - Footer footer; - - unique_ptr file; + std::unique_ptr file; uint64_t file_size; Status s = options_.env->NewRandomAccessFile(file_path, &file, soptions_); if (s.ok()) { @@ -109,10 +106,10 @@ Status SstFileReader::GetTableReader(const std::string& file_path) { file_.reset(new RandomAccessFileReader(std::move(file))); if (s.ok()) { - s = ReadFooterFromFile(file_.get(), file_size, &footer); + s = ReadFooterFromFile(file_.get(), file_size, &footer_); } if (s.ok()) { - magic_number = footer.table_magic_number(); + magic_number = footer_.table_magic_number(); } if (s.ok()) { @@ -135,10 +132,10 @@ Status SstFileReader::GetTableReader(const std::string& file_path) { s = NewTableReader(ioptions_, soptions_, *internal_comparator_, file_size, &table_reader_); if (s.ok() && table_reader_->IsSplitSst()) { - unique_ptr data_file; + std::unique_ptr data_file; RETURN_NOT_OK(options_.env->NewRandomAccessFile( TableBaseToDataFileName(file_path), &data_file, soptions_)); - unique_ptr data_file_reader( + std::unique_ptr data_file_reader( new RandomAccessFileReader(std::move(data_file))); table_reader_->SetDataFileReader(std::move(data_file_reader)); } @@ -149,10 +146,10 @@ Status SstFileReader::GetTableReader(const std::string& file_path) { Status SstFileReader::NewTableReader( const ImmutableCFOptions& ioptions, const EnvOptions& soptions, const InternalKeyComparator& internal_comparator, 
uint64_t file_size, - unique_ptr* table_reader) { + std::unique_ptr* table_reader) { // We need to turn off pre-fetching of index and filter nodes for // BlockBasedTable - shared_ptr block_table_factory = + std::shared_ptr block_table_factory = dynamic_pointer_cast(options_.table_factory); if (block_table_factory) { @@ -172,7 +169,7 @@ Status SstFileReader::NewTableReader( } Status SstFileReader::DumpTable(const std::string& out_filename) { - unique_ptr out_file; + std::unique_ptr out_file; Env* env = Env::Default(); RETURN_NOT_OK(env->NewWritableFile(out_filename, &out_file, soptions_)); Status s = table_reader_->DumpTable(out_file.get()); @@ -182,20 +179,20 @@ Status SstFileReader::DumpTable(const std::string& out_filename) { uint64_t SstFileReader::CalculateCompressedTableSize( const TableBuilderOptions& tb_options, size_t block_size) { - unique_ptr out_file; - unique_ptr env(NewMemEnv(Env::Default())); + std::unique_ptr out_file; + std::unique_ptr env(NewMemEnv(Env::Default())); CHECK_OK(env->NewWritableFile(testFileName, &out_file, soptions_)); - unique_ptr dest_writer; + std::unique_ptr dest_writer; dest_writer.reset(new WritableFileWriter(std::move(out_file), soptions_)); BlockBasedTableOptions table_options; table_options.block_size = block_size; BlockBasedTableFactory block_based_tf(table_options); - unique_ptr table_builder; + std::unique_ptr table_builder; table_builder = block_based_tf.NewTableBuilder( tb_options, TablePropertiesCollectorFactory::Context::kUnknownColumnFamily, dest_writer.get()); - unique_ptr iter(table_reader_->NewIterator(ReadOptions())); + std::unique_ptr iter(table_reader_->NewIterator(ReadOptions())); for (iter->SeekToFirst(); iter->Valid(); iter->Next()) { if (!iter->status().ok()) { fputs(iter->status().ToString().c_str(), stderr); @@ -387,6 +384,77 @@ Status SstFileReader::ReadSequential(bool print_kv, return ret; } +namespace { + +void PrintSaveBlockCommand(const std::string& data_file_path, const BlockHandle& block_handle) { + std::cout << "dd if=\"" << data_file_path << "\" bs=1 skip=" << block_handle.offset() + << " count=" << block_handle.size() << " of=\"" << data_file_path << ".offset_" + << block_handle.offset() << ".size_" << block_handle.size() << ".part\"" << std::endl; +} + +} // namespace + +Status SstFileReader::CheckDataBlocks(DoUncompress do_uncompress) { + if (!table_reader_) { + return init_result_; + } + ReadOptions read_options; + read_options.verify_checksums = true; + + std::unique_ptr index_iterator(table_reader_->NewIndexIterator(read_options)); + RETURN_NOT_OK(index_iterator->status()); + + const auto data_file_path = + table_reader_->IsSplitSst() ? TableBaseToDataFileName(file_name_) : file_name_; + std::unique_ptr data_file; + RETURN_NOT_OK(options_.env->NewRandomAccessFile(data_file_path, &data_file, soptions_)); + std::unique_ptr data_file_reader( + new RandomAccessFileReader(std::move(data_file))); + + size_t index_entry_pos = 0; + BlockHandle prev_block_handle; + BlockHandle block_handle; + bool save_block = false; + for (index_iterator->SeekToFirst(); index_iterator->Valid(); + index_iterator->Next(), ++index_entry_pos) { + prev_block_handle = block_handle; + { + auto index_value_slice = index_iterator->Entry().value; + auto status = block_handle.DecodeFrom(&index_value_slice); + if (!status.ok()) { + LOG(WARNING) << "Failed to decode SST index entry #" << index_entry_pos << ": " + << index_iterator->Entry().value.ToDebugHexString() << ". 
" << status; + continue; + } + LOG_IF(WARNING, index_value_slice.size() > 0) + << "Extra bytes (" << index_value_slice.size() + << ") in index entry: " << index_iterator->Entry().value.ToDebugHexString(); + } + YB_LOG_EVERY_N_SECS(INFO, 30) << "Checking data block #" << index_entry_pos + << " handle: " << block_handle.ToDebugString(); + + BlockContents block_contents; + auto status = ReadBlockContents( + data_file_reader.get(), footer_, read_options, block_handle, &block_contents, options_.env, + /* mem_tracker = */ nullptr, do_uncompress); + if (!status.ok()) { + LOG(WARNING) << "Failed to read block with handle: " << block_handle.ToDebugString() << ". " + << status; + if (prev_block_handle.IsSet()) { + PrintSaveBlockCommand(data_file_path, prev_block_handle); + } + PrintSaveBlockCommand(data_file_path, block_handle); + // Save next block as well. + save_block = true; + } else if (save_block) { + PrintSaveBlockCommand(data_file_path, block_handle); + save_block = false; + } + } + + return Status::OK(); +} + Status SstFileReader::ReadTableProperties( std::shared_ptr* table_properties) { if (!table_reader_) { @@ -401,7 +469,7 @@ namespace { void print_help() { fprintf(stderr, - "sst_dump [--command=check|scan|none|raw] [--verify_checksum] " + "sst_dump [--command=check|scan|check_data_blocks|none|raw] [--verify_checksum] " "--file=data_dir_OR_sst_file" " [--output_format=raw|hex|decoded_regulardb|decoded_intentsdb]" " [--formatter_tablet_metadata=" @@ -411,7 +479,8 @@ void print_help() { " [--read_num=NUM]" " [--show_properties]" " [--show_compression_sizes]" - " [--show_compression_sizes [--set_block_size=]]\n"); + " [--show_compression_sizes [--set_block_size=]]" + " [--skip_uncompress]\n"); } } // namespace @@ -431,6 +500,7 @@ int SSTDumpTool::Run(int argc, char** argv) { bool show_properties = false; bool show_compression_sizes = false; bool set_block_size = false; + DoUncompress do_uncompress = DoUncompress ::kTrue; std::string from_key; std::string to_key; std::string block_size_str; @@ -470,6 +540,8 @@ int SSTDumpTool::Run(int argc, char** argv) { show_properties = true; } else if (strcmp(argv[i], "--show_compression_sizes") == 0) { show_compression_sizes = true; + } else if (strcmp(argv[i], "--skip_uncompress") == 0) { + do_uncompress = DoUncompress::kFalse; } else if (strncmp(argv[i], "--set_block_size=", 17) == 0) { set_block_size = true; block_size_str = argv[i] + 17; @@ -575,6 +647,8 @@ int SSTDumpTool::Run(int argc, char** argv) { if (read_num > 0 && total_read > read_num) { break; } + } else if (command == "check_data_blocks") { + ERROR_NOT_OK(reader.CheckDataBlocks(do_uncompress), "Failed to scan SST file blocks:"); } if (show_properties) { const rocksdb::TableProperties* table_properties; diff --git a/src/yb/rocksdb/tools/sst_dump_tool_imp.h b/src/yb/rocksdb/tools/sst_dump_tool_imp.h index 6671c9590c9b..3e6235f88b95 100644 --- a/src/yb/rocksdb/tools/sst_dump_tool_imp.h +++ b/src/yb/rocksdb/tools/sst_dump_tool_imp.h @@ -25,13 +25,17 @@ #include #include -#include "yb/rocksdb/rocksdb_fwd.h" + #include "yb/rocksdb/db/dbformat.h" #include "yb/rocksdb/immutable_options.h" +#include "yb/rocksdb/rocksdb_fwd.h" +#include "yb/rocksdb/table/format.h" #include "yb/rocksdb/util/file_reader_writer.h" namespace rocksdb { +YB_STRONGLY_TYPED_BOOL(DoUncompress); + class SstFileReader { public: SstFileReader( @@ -43,6 +47,8 @@ class SstFileReader { const std::string& from_key, bool has_to, const std::string& to_key); + Status CheckDataBlocks(DoUncompress do_uncompress); + Status 
ReadTableProperties( std::shared_ptr* table_properties); uint64_t GetReadNumber() { return read_num_; } @@ -81,6 +87,7 @@ class SstFileReader { EnvOptions soptions_; Status init_result_; + Footer footer_; std::unique_ptr table_reader_; std::unique_ptr file_; // options_ and internal_comparator_ will also be used in From f42bad605e8ad43fff46212dd428ecb213a689f4 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Sun, 18 May 2025 16:06:20 -0700 Subject: [PATCH 104/146] Create clients.md --- .../develop/best-practices-ysql/clients.md | 36 +++++++++++++++++++ 1 file changed, 36 insertions(+) create mode 100644 docs/content/stable/develop/best-practices-ysql/clients.md diff --git a/docs/content/stable/develop/best-practices-ysql/clients.md b/docs/content/stable/develop/best-practices-ysql/clients.md new file mode 100644 index 000000000000..b8ae1f2dc58f --- /dev/null +++ b/docs/content/stable/develop/best-practices-ysql/clients.md @@ -0,0 +1,36 @@ +## Load balance and failover using smart drivers + +YugabyteDB [smart drivers](../../drivers-orms/smart-drivers/) provide advanced cluster-aware load-balancing capabilities that enables your applications to send requests to multiple nodes in the cluster just by connecting to one node. You can also set a fallback hierarchy by assigning priority to specific regions and ensuring that connections are made to the region with the highest priority, and then fall back to the region with the next priority in case the high-priority region fails. + +{{}} +For more information, see [Load balancing with smart drivers](https://www.yugabyte.com/blog/multi-region-database-deployment-best-practices/#load-balancing-with-smart-driver). +{{}} + +## Make sure the application uses new nodes + +When a cluster is expanded, newly added nodes do not automatically start to receive client traffic. Regardless of the language of the driver or whether you are using a smart driver, the application must either explicitly request new connections or, if it is using a pooling solution, it can configure the pooler to recycle connections periodically (for example, by setting maxLifetime and/or idleTimeout). + +## Scale your application with connection pools + +Set up different pools with different load balancing policies as needed for your application to scale by using popular pooling solutions such as HikariCP and Tomcat along with YugabyteDB [smart drivers](../../drivers-orms/smart-drivers/). + +{{}} +For more information, see [Connection pooling](../../drivers-orms/smart-drivers/#connection-pooling). +{{}} + +### Database migrations and connection pools + +In some cases, connection pools may trigger unexpected errors while running a sequence of database migrations or other DDL operations. + +Because YugabyteDB is distributed, it can take a while for the result of a DDL to fully propagate to all caches on all nodes in a cluster. As a result, after a DDL statement completes, the next DDL statement that runs right afterwards on a different PostgreSQL connection may, in rare cases, see errors such as `duplicate key value violates unique constraint "pg_attribute_relid_attnum_index"` (see issue {{}}). It is recommended to use a single connection while running a sequence of DDL operations, as is common with application migration scripts with tools such as Flyway or Active Record. 
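A minimal sketch of this recommendation follows; the table, column, and index names are placeholders, not part of any real migration. The point is that the whole DDL sequence goes through one session instead of being spread across pooled connections:

```sql
-- Run the whole DDL sequence on a single connection/session.
-- Object names below are illustrative placeholders.
CREATE TABLE IF NOT EXISTS orders (
    id          BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL
);

ALTER TABLE orders ADD COLUMN status TEXT DEFAULT 'new';

CREATE INDEX idx_orders_customer ON orders (customer_id);
```

If a migration tool is in use, the equivalent is to let the tool keep its own dedicated connection for the whole script rather than routing each statement through the application's pool.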
+ +## Use YSQL Connection Manager + +YugabyteDB includes a built-in connection pooler, YSQL Connection Manager {{}}, which provides the same connection pooling advantages as other external pooling solutions, but without many of their limitations. As the manager is bundled with the product, it is convenient to manage, monitor, and configure the server connections. + +For more information, refer to the following: + +- [YSQL Connection Manager](../../explore/going-beyond-sql/connection-mgr-ysql/) +- [Built-in Connection Manager Turns Key PostgreSQL Weakness into a Strength](https://www.yugabyte.com/blog/connection-pooling-management/) + + From f2fdd775ced6b908d9988b545443022401cf22ad Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Sun, 18 May 2025 16:10:59 -0700 Subject: [PATCH 105/146] Create data-modeling-perf.md --- .../best-practices-ysql/data-modeling-perf.md | 256 ++++++++++++++++++ 1 file changed, 256 insertions(+) create mode 100644 docs/content/stable/develop/best-practices-ysql/data-modeling-perf.md diff --git a/docs/content/stable/develop/best-practices-ysql/data-modeling-perf.md b/docs/content/stable/develop/best-practices-ysql/data-modeling-perf.md new file mode 100644 index 000000000000..22c423940021 --- /dev/null +++ b/docs/content/stable/develop/best-practices-ysql/data-modeling-perf.md @@ -0,0 +1,256 @@ +--- +title: Best practices for Data Modeling and performance of YSQL applications +headerTitle: Best practices +linkTitle: Best practices +description: Tips and tricks to build YSQL applications +headcontent: Tips and tricks to build YSQL applications for high performance and availability +menu: + stable: + identifier: best-practices-ysql-data-modeling-perf + parent: best-practices-ysql + weight: 570 +type: docs +--- + +## Use application patterns + +Running applications in multiple data centers with data split across them is not a trivial task. When designing global applications, choose a suitable design pattern for your application from a suite of battle-tested design paradigms, including [Global database](../build-global-apps/global-database), [Multi-master](../build-global-apps/active-active-multi-master), [Standby cluster](../build-global-apps/active-active-single-master), [Duplicate indexes](../build-global-apps/duplicate-indexes), [Follower reads](../build-global-apps/follower-reads), and more. You can also combine these patterns as per your needs. + +{{}} +For more details, see [Build global applications](../build-global-apps). +{{}} + +## Colocation + +Colocated tables optimize latency and performance for data access by reducing the need for additional trips across the network for small tables. Additionally, it reduces the overhead of creating a tablet for every relation (tables, indexes, and so on) and their storage per node. + +{{}} +For more details, see [Colocation](../../explore/colocation/). +{{}} + +## Faster reads with covering indexes + +When a query uses an index to look up rows faster, the columns that are not present in the index are fetched from the original table. This results in additional round trips to the main table leading to increased latency. + +Use [covering indexes](../../explore/ysql-language-features/indexes-constraints/covering-index-ysql/) to store all the required columns needed for your queries in the index. Indexing converts a standard Index-Scan to an [Index-Only-Scan](https://dev.to/yugabyte/boosts-secondary-index-queries-with-index-only-scan-5e7j). 
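As a sketch (the table and column names here are hypothetical), a covering index lists the extra columns in an INCLUDE clause so that a query touching only indexed columns can skip the main table entirely:

```sql
-- Hypothetical table: the INCLUDE list stores extra columns in the index itself.
CREATE INDEX idx_users_email ON users (email) INCLUDE (name, city);

-- All referenced columns live in the index, so this can use an index-only scan.
SELECT name, city FROM users WHERE email = 'alice@example.com';
```

Running EXPLAIN on such a query should report an Index Only Scan rather than an Index Scan.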
+ +{{}} +For more details, see [Avoid trips to the table with covering indexes](https://www.yugabyte.com/blog/multi-region-database-deployment-best-practices/#avoid-trips-to-the-table-with-covering-indexes). +{{}} + +## Faster writes with partial indexes + +A partial index is an index that is built on a subset of a table and includes only rows that satisfy the condition specified in the WHERE clause. This speeds up any writes to the table and reduces the size of the index, thereby improving speed for read queries that use the index. + +{{}} +For more details, see [Partial indexes](../../explore/ysql-language-features/indexes-constraints/partial-index-ysql/). +{{}} + +## Distinct keys with unique indexes + +If you need values in some of the columns to be unique, you can specify your index as UNIQUE. + +When a unique index is applied to two or more columns, the combined values in these columns can't be duplicated in multiple rows. Note that because a NULL value is treated as a distinct value, you can have multiple NULL values in a column with a unique index. + +{{}} +For more details, see [Unique indexes](../../explore/ysql-language-features/indexes-constraints/unique-index-ysql/). +{{}} + +## Faster sequences with server-level caching + +Sequences in databases automatically generate incrementing numbers, perfect for generating unique values like order numbers, user IDs, check numbers, and so on. They prevent multiple application instances from concurrently generating duplicate values. However, generating sequences on a database that is spread across regions could have a latency impact on your applications. + +Enable [server-level caching](../../api/ysql/exprs/func_nextval/#caching-values-on-the-yb-tserver) to improve the speed of sequences, and also avoid discarding many sequence values when an application disconnects. + +{{}} +For a demo, see the YugabyteDB Friday Tech Talk on [Scaling sequences with server-level caching](https://www.youtube.com/watch?v=hs-CU3vjMQY&list=PL8Z3vt4qJTkLTIqB9eTLuqOdpzghX8H40&index=76). +{{}} + +## Fast single-row transactions + +Common scenarios of updating rows and fetching the results in multiple statements can lead to multiple round-trips between the application and server. In many cases, rewriting these statements as single statements using the RETURNING clause will lead to lower latencies as YugabyteDB has optimizations to make single statements faster. For example, the following statements: + +```sql +SELECT v FROM txndemo WHERE k=1 FOR UPDATE; +UPDATE txndemo SET v = v + 3 WHERE k=1; +SELECT v FROM txndemo WHERE k=1; +``` + +can be re-written as follows: + +```sql +UPDATE txndemo SET v = v + 3 WHERE k=1 RETURNING v; +``` + +{{}} +For more details, see [Fast single-row transactions](../../develop/learn/transactions/transactions-performance-ysql/#fast-single-row-transactions). +{{}} + +## Delete older data quickly with partitioning + +Use [table partitioning](../../explore/ysql-language-features/advanced-features/partitions/) to split your data into multiple partitions according to date so that you can quickly delete older data by dropping the partition. + +{{}} +For more details, see [Partition data by time](../data-modeling/common-patterns/timeseries/partitioning-by-time/). +{{}} + +## Use the right data types for partition keys + +In general, integer, arbitrary precision number, character string (not very long ones), and timestamp types are safe choices for comparisons. 
+ +Avoid the following: + +- Floating point number data types - because they are stored as binary float format that cannot represent most of the decimal values precisely, values that are supposedly the same may not be treated as a match because of possible multiple internal representations. + +- Date, time, and similar timestamp component types if they may be compared with values from a different timezone or different day of the year, or when either value comes from a country or region that observes or ever observed daylight savings time. + +## Use multi row inserts wherever possible + +If you're inserting multiple rows, it's faster to batch them together whenever possible. You can start with 128 rows per batch +and test different batch sizes to find the sweet spot. + +Don't use multiple statements: + +```postgresql +INSERT INTO users(name,surname) VALUES ('bill', 'jane'); +INSERT INTO users(name,surname) VALUES ('billy', 'bob'); +INSERT INTO users(name,surname) VALUES ('joey', 'does'); +``` + +Instead, group values into a single statement as follows: + +```postgresql +INSERT INTO users(name,surname) VALUES ('bill', 'jane'), ('billy', 'bob'), ('joe', 'does'); +``` + +## UPSERT multiple rows wherever possible + +PostgreSQL and YSQL enable you to do upserts using the INSERT ON CONFLICT clause. Similar to multi-row inserts, you can also batch multiple upserts in a single INSERT ON CONFLICT statement for better performance. + +In case the row already exists, you can access the existing values using `EXCLUDED.` in the query. + +The following example creates a table to track the quantity of products, and increments rows in batches: + +```postgresql +CREATE TABLE products + ( + name TEXT PRIMARY KEY, + quantity BIGINT DEFAULT 0 + ); +--- +INSERT INTO products(name, quantity) +VALUES + ('apples', 1), + ('oranges', 5) ON CONFLICT(name) DO UPDATE +SET + quantity = products.quantity + excluded.quantity; +--- +INSERT INTO products(name, quantity) +VALUES + ('apples', 1), + ('oranges', 5) ON CONFLICT(name) DO UPDATE +SET + quantity = products.quantity + excluded.quantity; +--- +SELECT * FROM products; + name | quantity +---------+---------- + apples | 2 + oranges | 10 +(2 rows) +``` + +{{}} +For more information, see [Data manipulation](../../explore/ysql-language-features/data-manipulation). +{{}} + +## Re-use query plans with prepared statements + +Whenever possible, use [prepared statements](../../api/ysql/the-sql-language/statements/perf_prepare/) to ensure that YugabyteDB can re-use the same query plan and eliminate the need for a server to parse the query on each operation. + +{{}} + +When using server-side pooling, avoid explicit PREPARE and EXECUTE calls and use protocol-level prepared statements instead. Explicit prepare/execute calls can make connections sticky, which prevents you from realizing the benefits of using YSQL Connection Manager{{}} and server-side pooling. + +Depending on your driver, you may have to set some parameters to leverage prepared statements. For example, Npgsql supports automatic preparation using the Max Auto Prepare and Auto Prepare Min Usages connection parameters, which you add to your connection string as follows: + +```sh +Max Auto Prepare=100;Auto Prepare Min Usages=5; +``` + +Consult your driver documentation. + +{{}} + +{{}} +For more details, see [Prepared statements in PL/pgSQL](https://dev.to/aws-heroes/postgresql-prepared-statements-in-pl-pgsql-jl3). 
+{{}} + +## Large scans and batch jobs + +Use BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE for batch or long-running jobs, which need a consistent snapshot of the database without interfering, or being interfered with by other transactions. + +{{}} +For more details, see [Large scans and batch jobs](../../develop/learn/transactions/transactions-performance-ysql/#large-scans-and-batch-jobs). +{{}} + +## JSONB datatype + +Use the [JSONB](../../api/ysql/datatypes/type_json) datatype to model JSON data; that is, data that doesn't have a set schema but has a truly dynamic schema. + +JSONB in YSQL is the same as the [JSONB datatype in PostgreSQL](https://www.postgresql.org/docs/11/datatype-json.html). + +You can use JSONB to group less interesting or less frequently accessed columns of a table. + +YSQL also supports JSONB expression indexes, which can be used to speed up data retrieval that would otherwise require scanning the JSON entries. + +{{< note title="Use JSONB columns only when necessary" >}} + +- A good schema design is to only use JSONB for truly dynamic schema. That is, don't create a "data JSONB" column where you put everything; instead, create a JSONB column for dynamic data, and use regular columns for the other data. +- JSONB columns are slower to read/write compared to normal columns. +- JSONB values take more space because they need to store keys in strings, and maintaining data consistency is harder, requiring more complex queries to get/set JSONB values. +- JSONB is a good fit when writes are done as a whole document with a per-row hierarchical structure. If there are arrays, the choice is not JSONB vs. column, but vs additional relational tables. +- For reads, JSONB is a good fit if you read the whole document and the searched expression is indexed. +- When reading one attribute frequently, it's better to move it to a column as it can be included in an index for an `Index Only Scan`. + +{{< /note >}} + +## Parallelizing across tablets + +For large or batch SELECT or DELETE that have to scan all tablets, you can parallelize your operation by creating queries that affect only a specific part of the tablet using the `yb_hash_code` function. + +{{}} +For more details, see [Distributed parallel queries](../../api/ysql/exprs/func_yb_hash_code/#distributed-parallel-queries). +{{}} + +## Row size limit + +Big columns add up when you select full or multiple rows. For consistent latency or performance, it is recommended keeping the size under 10MB or less, and a maximum of 32MB. + +## Column size limit + +For consistent latency or performance, it is recommended to size columns in the 2MB range or less even though an individual column or row limit is supported till 32MB. + +## TRUNCATE tables instead of DELETE + +[TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/) deletes the database files that store the table data and is much faster than [DELETE](../../api/ysql/the-sql-language/statements/dml_delete/), which inserts a _delete marker_ for each row in transactions that are later removed from storage during compaction runs. + +{{}} +Currently, TRUNCATE is not transactional. Also, similar to PostgreSQL, TRUNCATE is not MVCC-safe. For more details, see [TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/). +{{}} + +## Minimize the number of tablets you need + +Each table and index is split into tablets and each tablet has overhead. The more tablets you need, the bigger your universe will need to be. 
See [allowing for tablet replica overheads](#allowing-for-tablet-replica-overheads) for how the number of tablets affects how big your universe needs to be. + +Each table and index consists of several tablets based on the [--ysql_num_shards_per_tserver](../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. + +You can try one of the following methods to reduce the number of tablets: + +- Use [colocation](../../explore/colocation/) to group small tables into 1 tablet. +- Reduce number of tablets-per-table using the [--ysql_num_shards_per_tserver](../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. +- Use the [SPLIT INTO](../../api/ysql/the-sql-language/statements/ddl_create_table/#split-into) clause when creating a table. +- Start with few tablets and use [automatic tablet splitting](../../architecture/docdb-sharding/tablet-splitting/). + +Note that multiple tablets can allow work to proceed in parallel so you may not want every table to have only one tablet. + From 590bf75e1334de9e2c70d0185e8b17ecea23556d Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Sun, 18 May 2025 16:11:30 -0700 Subject: [PATCH 106/146] Update clients.md --- .../stable/develop/best-practices-ysql/clients.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/docs/content/stable/develop/best-practices-ysql/clients.md b/docs/content/stable/develop/best-practices-ysql/clients.md index b8ae1f2dc58f..a361510b1a2b 100644 --- a/docs/content/stable/develop/best-practices-ysql/clients.md +++ b/docs/content/stable/develop/best-practices-ysql/clients.md @@ -1,3 +1,17 @@ +--- +title: Best practices for YSQL clients +headerTitle: Best practices +linkTitle: Best practices +description: Tips and tricks to build YSQL applications +headcontent: Tips and tricks to build YSQL applications for high performance and availability +menu: + stable: + identifier: best-practices-ysql-clients + parent: best-practices-ysql + weight: 570 +type: docs +--- + ## Load balance and failover using smart drivers YugabyteDB [smart drivers](../../drivers-orms/smart-drivers/) provide advanced cluster-aware load-balancing capabilities that enables your applications to send requests to multiple nodes in the cluster just by connecting to one node. You can also set a fallback hierarchy by assigning priority to specific regions and ensuring that connections are made to the region with the highest priority, and then fall back to the region with the next priority in case the high-priority region fails. 
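Smart drivers typically discover cluster membership and placement through the `yb_servers()` function; as a quick sanity check of what a driver can load balance or fall back across, you can query it directly (the exact column set may vary by version):

```sql
-- Lists the tservers, with their cloud/region/zone placement, that a
-- cluster-aware driver can spread connections across.
SELECT * FROM yb_servers();
```

If a region you intend to use as a fallback target is missing or shows unexpected placement values here, the driver will not be able to prefer it.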
From 65af5b74bdbff4603a601e38f96cc15d71355732 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Sun, 18 May 2025 16:12:42 -0700 Subject: [PATCH 107/146] Create administration.md --- .../best-practices-ysql/administration.md | 37 +++++++++++++++++++ 1 file changed, 37 insertions(+) create mode 100644 docs/content/stable/develop/best-practices-ysql/administration.md diff --git a/docs/content/stable/develop/best-practices-ysql/administration.md b/docs/content/stable/develop/best-practices-ysql/administration.md new file mode 100644 index 000000000000..2ba1a3e08478 --- /dev/null +++ b/docs/content/stable/develop/best-practices-ysql/administration.md @@ -0,0 +1,37 @@ +--- +title: Best practices for YSQL DB administrators +headerTitle: Best practices +linkTitle: Best practices +description: Tips and tricks to build YSQL applications +headcontent: Tips and tricks to administer YSQL DBs +menu: + stable: + identifier: best-practices-ysql-db-admins + parent: best-practices-ysql + weight: 570 +type: docs +--- + +## Single availability zone (AZ) deployments + +In single AZ deployments, you need to set the [yb-tserver](../../reference/configuration/yb-tserver) flag `--durable_wal_write=true` to not lose data if the whole data center goes down (For example, power failure). + +## Allow for tablet replica overheads + +Although you can manually provision the amount of memory each TServer uses using flags ([--memory_limit_hard_bytes](../../reference/configuration/yb-tserver/#memory-limit-hard-bytes) or [--default_memory_limit_to_ram_ratio](../../reference/configuration/yb-tserver/#default-memory-limit-to-ram-ratio)), this can be tricky as you need to take into account how much memory the kernel needs, along with the PostgreSQL processes and any Master process that is going to be colocated with the TServer. + +Accordingly, you should use the [--use_memory_defaults_optimized_for_ysql](../../reference/configuration/yb-tserver/#use-memory-defaults-optimized-for-ysql) flag, which gives good memory division settings for using YSQL, optimized for your node's size. + +If this flag is true, then the [memory division flag defaults](../../reference/configuration/yb-tserver/#memory-division-flags) change to provide much more memory for PostgreSQL; furthermore, they optimize for the node size. + +Note that although the default setting is false, when creating a new universe using yugabyted or YugabyteDB Anywhere, the flag is set to true, unless you explicitly set it to false. + +## Settings for CI and CD integration tests + +You can set certain flags to increase performance using YugabyteDB in CI and CD automated test scenarios as follows: + +- Point the flags `--fs_data_dirs`, and `--fs_wal_dirs` to a RAMDisk directory to make DML, DDL, cluster creation, and cluster deletion faster, ensuring that data is not written to disk. +- Set the flag `--yb_num_shards_per_tserver=1`. Reducing the number of shards lowers overhead when creating or dropping YSQL tables, and writing or reading small amounts of data. +- Use colocated databases in YSQL. Colocation lowers overhead when creating or dropping YSQL tables, and writing or reading small amounts of data. +- Set the flag `--replication_factor=1` for test scenarios, as keeping the data three way replicated (default) is not necessary. Reducing that to 1 reduces space usage and increases performance. +- Use `TRUNCATE table1,table2,table3..tablen;` instead of CREATE TABLE, and DROP TABLE between test cases. 
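As a small illustration of the last point (the table names are placeholders), several test tables can be reset in one statement instead of being dropped and recreated between test cases:

```sql
-- Reset state between test cases; cheaper than DROP TABLE plus CREATE TABLE.
TRUNCATE TABLE accounts, orders, line_items;
```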
From 746ee0a2c3105917b6d6e70324ad21b93b5d38d5 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Sun, 18 May 2025 16:14:23 -0700 Subject: [PATCH 108/146] Create _index.md --- docs/content/stable/develop/best-practices-ysql/_index.md | 6 ++++++ 1 file changed, 6 insertions(+) create mode 100644 docs/content/stable/develop/best-practices-ysql/_index.md diff --git a/docs/content/stable/develop/best-practices-ysql/_index.md b/docs/content/stable/develop/best-practices-ysql/_index.md new file mode 100644 index 000000000000..da249dff078c --- /dev/null +++ b/docs/content/stable/develop/best-practices-ysql/_index.md @@ -0,0 +1,6 @@ + +[Data Modeling & Perf](../data-modeling-perf) + +[Clients](../clients) + +[DB Administrators](../administration) From 87a9d17e67d6201a1f4cdce1d82719567baca6d7 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Sun, 18 May 2025 16:17:23 -0700 Subject: [PATCH 109/146] Update administration.md --- .../best-practices-ysql/administration.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/docs/content/stable/develop/best-practices-ysql/administration.md b/docs/content/stable/develop/best-practices-ysql/administration.md index 2ba1a3e08478..073dfc17ede0 100644 --- a/docs/content/stable/develop/best-practices-ysql/administration.md +++ b/docs/content/stable/develop/best-practices-ysql/administration.md @@ -35,3 +35,21 @@ You can set certain flags to increase performance using YugabyteDB in CI and CD - Use colocated databases in YSQL. Colocation lowers overhead when creating or dropping YSQL tables, and writing or reading small amounts of data. - Set the flag `--replication_factor=1` for test scenarios, as keeping the data three way replicated (default) is not necessary. Reducing that to 1 reduces space usage and increases performance. - Use `TRUNCATE table1,table2,table3..tablen;` instead of CREATE TABLE, and DROP TABLE between test cases. + + +## Concurrent DML during a DDL operation + +In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE
Granted object locks)"<< caption << R"(
Lock Owner Object Id Num Holders
.. ADD COLUMN` DDL statement may add a new column while a `SELECT * from
` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing. (Support for similar behavior in YugabyteDB is being tracked in issue {{}}.) + +In YugabyteDB, when a DDL modifies the schema of tables that are accessed by concurrent DML statements, the DML statement may do one of the following: +1. Operate with the old schema prior to the DDL, or +2. Operate with the new schema after the DDL completes, or +3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to [retry such operations](https://www.yugabyte.com/blog/retry-mechanism-spring-boot-app/) whenever possible. + +Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table/#alter-table-operations-that-involve-a-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements on the table being modified by the `ALTER TABLE`, as the effect of such concurrent DML may not be reflected in the table copy. + +## Concurrent DDL during a DDL operation + +DDL statements that affect entities in different databases can be run concurrently. However, for DDL statements that impact the same database, it is recommended to execute them sequentially. + +DDL statements that relate to shared objects, such as roles or tablespaces, are considered as affecting all databases in the cluster, so they should also be run sequentially. From 520233c22030acfc2853c911b84339e50c02dc14 Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Sun, 18 May 2025 16:18:04 -0700 Subject: [PATCH 110/146] Delete docs/content/stable/develop/best-practices-ysql.md --- .../stable/develop/best-practices-ysql.md | 316 ------------------ 1 file changed, 316 deletions(-) delete mode 100644 docs/content/stable/develop/best-practices-ysql.md diff --git a/docs/content/stable/develop/best-practices-ysql.md b/docs/content/stable/develop/best-practices-ysql.md deleted file mode 100644 index ea389c65d8db..000000000000 --- a/docs/content/stable/develop/best-practices-ysql.md +++ /dev/null @@ -1,316 +0,0 @@ ---- -title: Best practices for YSQL applications -headerTitle: Best practices -linkTitle: Best practices -description: Tips and tricks to build YSQL applications -headcontent: Tips and tricks to build YSQL applications for high performance and availability -menu: - stable: - identifier: best-practices-ysql - parent: develop - weight: 570 -type: docs ---- - -{{}} - -## Use application patterns - -Running applications in multiple data centers with data split across them is not a trivial task. When designing global applications, choose a suitable design pattern for your application from a suite of battle-tested design paradigms, including [Global database](../build-global-apps/global-database), [Multi-master](../build-global-apps/active-active-multi-master), [Standby cluster](../build-global-apps/active-active-single-master), [Duplicate indexes](../build-global-apps/duplicate-indexes), [Follower reads](../build-global-apps/follower-reads), and more. You can also combine these patterns as per your needs. - -{{}} -For more details, see [Build global applications](../build-global-apps). 
-{{}} - -## Colocation - -Colocated tables optimize latency and performance for data access by reducing the need for additional trips across the network for small tables. Additionally, it reduces the overhead of creating a tablet for every relation (tables, indexes, and so on) and their storage per node. - -{{}} -For more details, see [Colocation](../../explore/colocation/). -{{}} - -## Faster reads with covering indexes - -When a query uses an index to look up rows faster, the columns that are not present in the index are fetched from the original table. This results in additional round trips to the main table leading to increased latency. - -Use [covering indexes](../../explore/ysql-language-features/indexes-constraints/covering-index-ysql/) to store all the required columns needed for your queries in the index. Indexing converts a standard Index-Scan to an [Index-Only-Scan](https://dev.to/yugabyte/boosts-secondary-index-queries-with-index-only-scan-5e7j). - -{{}} -For more details, see [Avoid trips to the table with covering indexes](https://www.yugabyte.com/blog/multi-region-database-deployment-best-practices/#avoid-trips-to-the-table-with-covering-indexes). -{{}} - -## Faster writes with partial indexes - -A partial index is an index that is built on a subset of a table and includes only rows that satisfy the condition specified in the WHERE clause. This speeds up any writes to the table and reduces the size of the index, thereby improving speed for read queries that use the index. - -{{}} -For more details, see [Partial indexes](../../explore/ysql-language-features/indexes-constraints/partial-index-ysql/). -{{}} - -## Distinct keys with unique indexes - -If you need values in some of the columns to be unique, you can specify your index as UNIQUE. - -When a unique index is applied to two or more columns, the combined values in these columns can't be duplicated in multiple rows. Note that because a NULL value is treated as a distinct value, you can have multiple NULL values in a column with a unique index. - -{{}} -For more details, see [Unique indexes](../../explore/ysql-language-features/indexes-constraints/unique-index-ysql/). -{{}} - -## Faster sequences with server-level caching - -Sequences in databases automatically generate incrementing numbers, perfect for generating unique values like order numbers, user IDs, check numbers, and so on. They prevent multiple application instances from concurrently generating duplicate values. However, generating sequences on a database that is spread across regions could have a latency impact on your applications. - -Enable [server-level caching](../../api/ysql/exprs/func_nextval/#caching-values-on-the-yb-tserver) to improve the speed of sequences, and also avoid discarding many sequence values when an application disconnects. - -{{}} -For a demo, see the YugabyteDB Friday Tech Talk on [Scaling sequences with server-level caching](https://www.youtube.com/watch?v=hs-CU3vjMQY&list=PL8Z3vt4qJTkLTIqB9eTLuqOdpzghX8H40&index=76). -{{}} - -## Fast single-row transactions - -Common scenarios of updating rows and fetching the results in multiple statements can lead to multiple round-trips between the application and server. In many cases, rewriting these statements as single statements using the RETURNING clause will lead to lower latencies as YugabyteDB has optimizations to make single statements faster. 
For example, the following statements: - -```sql -SELECT v FROM txndemo WHERE k=1 FOR UPDATE; -UPDATE txndemo SET v = v + 3 WHERE k=1; -SELECT v FROM txndemo WHERE k=1; -``` - -can be re-written as follows: - -```sql -UPDATE txndemo SET v = v + 3 WHERE k=1 RETURNING v; -``` - -{{}} -For more details, see [Fast single-row transactions](../../develop/learn/transactions/transactions-performance-ysql/#fast-single-row-transactions). -{{}} - -## Delete older data quickly with partitioning - -Use [table partitioning](../../explore/ysql-language-features/advanced-features/partitions/) to split your data into multiple partitions according to date so that you can quickly delete older data by dropping the partition. - -{{}} -For more details, see [Partition data by time](../data-modeling/common-patterns/timeseries/partitioning-by-time/). -{{}} - -## Use the right data types for partition keys - -In general, integer, arbitrary precision number, character string (not very long ones), and timestamp types are safe choices for comparisons. - -Avoid the following: - -- Floating point number data types - because they are stored as binary float format that cannot represent most of the decimal values precisely, values that are supposedly the same may not be treated as a match because of possible multiple internal representations. - -- Date, time, and similar timestamp component types if they may be compared with values from a different timezone or different day of the year, or when either value comes from a country or region that observes or ever observed daylight savings time. - -## Use multi row inserts wherever possible - -If you're inserting multiple rows, it's faster to batch them together whenever possible. You can start with 128 rows per batch -and test different batch sizes to find the sweet spot. - -Don't use multiple statements: - -```postgresql -INSERT INTO users(name,surname) VALUES ('bill', 'jane'); -INSERT INTO users(name,surname) VALUES ('billy', 'bob'); -INSERT INTO users(name,surname) VALUES ('joey', 'does'); -``` - -Instead, group values into a single statement as follows: - -```postgresql -INSERT INTO users(name,surname) VALUES ('bill', 'jane'), ('billy', 'bob'), ('joe', 'does'); -``` - -## UPSERT multiple rows wherever possible - -PostgreSQL and YSQL enable you to do upserts using the INSERT ON CONFLICT clause. Similar to multi-row inserts, you can also batch multiple upserts in a single INSERT ON CONFLICT statement for better performance. - -In case the row already exists, you can access the existing values using `EXCLUDED.` in the query. - -The following example creates a table to track the quantity of products, and increments rows in batches: - -```postgresql -CREATE TABLE products - ( - name TEXT PRIMARY KEY, - quantity BIGINT DEFAULT 0 - ); ---- -INSERT INTO products(name, quantity) -VALUES - ('apples', 1), - ('oranges', 5) ON CONFLICT(name) DO UPDATE -SET - quantity = products.quantity + excluded.quantity; ---- -INSERT INTO products(name, quantity) -VALUES - ('apples', 1), - ('oranges', 5) ON CONFLICT(name) DO UPDATE -SET - quantity = products.quantity + excluded.quantity; ---- -SELECT * FROM products; - name | quantity ----------+---------- - apples | 2 - oranges | 10 -(2 rows) -``` - -{{}} -For more information, see [Data manipulation](../../explore/ysql-language-features/data-manipulation). 
-{{}} - -## Load balance and failover using smart drivers - -YugabyteDB [smart drivers](../../drivers-orms/smart-drivers/) provide advanced cluster-aware load-balancing capabilities that enables your applications to send requests to multiple nodes in the cluster just by connecting to one node. You can also set a fallback hierarchy by assigning priority to specific regions and ensuring that connections are made to the region with the highest priority, and then fall back to the region with the next priority in case the high-priority region fails. - -{{}} -For more information, see [Load balancing with smart drivers](https://www.yugabyte.com/blog/multi-region-database-deployment-best-practices/#load-balancing-with-smart-driver). -{{}} - -## Make sure the application uses new nodes - -When a cluster is expanded, newly added nodes do not automatically start to receive client traffic. Regardless of the language of the driver or whether you are using a smart driver, the application must either explicitly request new connections or, if it is using a pooling solution, it can configure the pooler to recycle connections periodically (for example, by setting maxLifetime and/or idleTimeout). - -## Scale your application with connection pools - -Set up different pools with different load balancing policies as needed for your application to scale by using popular pooling solutions such as HikariCP and Tomcat along with YugabyteDB [smart drivers](../../drivers-orms/smart-drivers/). - -{{}} -For more information, see [Connection pooling](../../drivers-orms/smart-drivers/#connection-pooling). -{{}} - -### Database migrations and connection pools - -In some cases, connection pools may trigger unexpected errors while running a sequence of database migrations or other DDL operations. - -Because YugabyteDB is distributed, it can take a while for the result of a DDL to fully propagate to all caches on all nodes in a cluster. As a result, after a DDL statement completes, the next DDL statement that runs right afterwards on a different PostgreSQL connection may, in rare cases, see errors such as `duplicate key value violates unique constraint "pg_attribute_relid_attnum_index"` (see issue {{}}). It is recommended to use a single connection while running a sequence of DDL operations, as is common with application migration scripts with tools such as Flyway or Active Record. - -## Use YSQL Connection Manager - -YugabyteDB includes a built-in connection pooler, YSQL Connection Manager {{}}, which provides the same connection pooling advantages as other external pooling solutions, but without many of their limitations. As the manager is bundled with the product, it is convenient to manage, monitor, and configure the server connections. - -For more information, refer to the following: - -- [YSQL Connection Manager](../../explore/going-beyond-sql/connection-mgr-ysql/) -- [Built-in Connection Manager Turns Key PostgreSQL Weakness into a Strength](https://www.yugabyte.com/blog/connection-pooling-management/) - -## Re-use query plans with prepared statements - -Whenever possible, use [prepared statements](../../api/ysql/the-sql-language/statements/perf_prepare/) to ensure that YugabyteDB can re-use the same query plan and eliminate the need for a server to parse the query on each operation. - -{{}} - -When using server-side pooling, avoid explicit PREPARE and EXECUTE calls and use protocol-level prepared statements instead. 
Explicit prepare/execute calls can make connections sticky, which prevents you from realizing the benefits of using YSQL Connection Manager{{}} and server-side pooling. - -Depending on your driver, you may have to set some parameters to leverage prepared statements. For example, Npgsql supports automatic preparation using the Max Auto Prepare and Auto Prepare Min Usages connection parameters, which you add to your connection string as follows: - -```sh -Max Auto Prepare=100;Auto Prepare Min Usages=5; -``` - -Consult your driver documentation. - -{{}} - -{{}} -For more details, see [Prepared statements in PL/pgSQL](https://dev.to/aws-heroes/postgresql-prepared-statements-in-pl-pgsql-jl3). -{{}} - -## Large scans and batch jobs - -Use BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE for batch or long-running jobs, which need a consistent snapshot of the database without interfering, or being interfered with by other transactions. - -{{}} -For more details, see [Large scans and batch jobs](../../develop/learn/transactions/transactions-performance-ysql/#large-scans-and-batch-jobs). -{{}} - -## JSONB datatype - -Use the [JSONB](../../api/ysql/datatypes/type_json) datatype to model JSON data; that is, data that doesn't have a set schema but has a truly dynamic schema. - -JSONB in YSQL is the same as the [JSONB datatype in PostgreSQL](https://www.postgresql.org/docs/11/datatype-json.html). - -You can use JSONB to group less interesting or less frequently accessed columns of a table. - -YSQL also supports JSONB expression indexes, which can be used to speed up data retrieval that would otherwise require scanning the JSON entries. - -{{< note title="Use JSONB columns only when necessary" >}} - -- A good schema design is to only use JSONB for truly dynamic schema. That is, don't create a "data JSONB" column where you put everything; instead, create a JSONB column for dynamic data, and use regular columns for the other data. -- JSONB columns are slower to read/write compared to normal columns. -- JSONB values take more space because they need to store keys in strings, and maintaining data consistency is harder, requiring more complex queries to get/set JSONB values. -- JSONB is a good fit when writes are done as a whole document with a per-row hierarchical structure. If there are arrays, the choice is not JSONB vs. column, but vs additional relational tables. -- For reads, JSONB is a good fit if you read the whole document and the searched expression is indexed. -- When reading one attribute frequently, it's better to move it to a column as it can be included in an index for an `Index Only Scan`. - -{{< /note >}} - -## Parallelizing across tablets - -For large or batch SELECT or DELETE that have to scan all tablets, you can parallelize your operation by creating queries that affect only a specific part of the tablet using the `yb_hash_code` function. - -{{}} -For more details, see [Distributed parallel queries](../../api/ysql/exprs/func_yb_hash_code/#distributed-parallel-queries). -{{}} - -## Single availability zone (AZ) deployments - -In single AZ deployments, you need to set the [yb-tserver](../../reference/configuration/yb-tserver) flag `--durable_wal_write=true` to not lose data if the whole data center goes down (For example, power failure). - -## Row size limit - -Big columns add up when you select full or multiple rows. For consistent latency or performance, it is recommended keeping the size under 10MB or less, and a maximum of 32MB. 
- -## Column size limit - -For consistent latency or performance, it is recommended to size columns in the 2MB range or less even though an individual column or row limit is supported till 32MB. - -## TRUNCATE tables instead of DELETE - -[TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/) deletes the database files that store the table data and is much faster than [DELETE](../../api/ysql/the-sql-language/statements/dml_delete/), which inserts a _delete marker_ for each row in transactions that are later removed from storage during compaction runs. - -{{}} -Currently, TRUNCATE is not transactional. Also, similar to PostgreSQL, TRUNCATE is not MVCC-safe. For more details, see [TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/). -{{}} - -## Minimize the number of tablets you need - -Each table and index is split into tablets and each tablet has overhead. The more tablets you need, the bigger your universe will need to be. See [allowing for tablet replica overheads](#allowing-for-tablet-replica-overheads) for how the number of tablets affects how big your universe needs to be. - -Each table and index consists of several tablets based on the [--ysql_num_shards_per_tserver](../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. - -You can try one of the following methods to reduce the number of tablets: - -- Use [colocation](../../explore/colocation/) to group small tables into 1 tablet. -- Reduce number of tablets-per-table using the [--ysql_num_shards_per_tserver](../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. -- Use the [SPLIT INTO](../../api/ysql/the-sql-language/statements/ddl_create_table/#split-into) clause when creating a table. -- Start with few tablets and use [automatic tablet splitting](../../architecture/docdb-sharding/tablet-splitting/). - -Note that multiple tablets can allow work to proceed in parallel so you may not want every table to have only one tablet. - -## Allow for tablet replica overheads - -Although you can manually provision the amount of memory each TServer uses using flags ([--memory_limit_hard_bytes](../../reference/configuration/yb-tserver/#memory-limit-hard-bytes) or [--default_memory_limit_to_ram_ratio](../../reference/configuration/yb-tserver/#default-memory-limit-to-ram-ratio)), this can be tricky as you need to take into account how much memory the kernel needs, along with the PostgreSQL processes and any Master process that is going to be colocated with the TServer. - -Accordingly, you should use the [--use_memory_defaults_optimized_for_ysql](../../reference/configuration/yb-tserver/#use-memory-defaults-optimized-for-ysql) flag, which gives good memory division settings for using YSQL, optimized for your node's size. - -If this flag is true, then the [memory division flag defaults](../../reference/configuration/yb-tserver/#memory-division-flags) change to provide much more memory for PostgreSQL; furthermore, they optimize for the node size. - -Note that although the default setting is false, when creating a new universe using yugabyted or YugabyteDB Anywhere, the flag is set to true, unless you explicitly set it to false. - -## Settings for CI and CD integration tests - -You can set certain flags to increase performance using YugabyteDB in CI and CD automated test scenarios as follows: - -- Point the flags `--fs_data_dirs`, and `--fs_wal_dirs` to a RAMDisk directory to make DML, DDL, cluster creation, and cluster deletion faster, ensuring that data is not written to disk. 
-- Set the flag `--yb_num_shards_per_tserver=1`. Reducing the number of shards lowers overhead when creating or dropping YSQL tables, and writing or reading small amounts of data. -- Use colocated databases in YSQL. Colocation lowers overhead when creating or dropping YSQL tables, and writing or reading small amounts of data. -- Set the flag `--replication_factor=1` for test scenarios, as keeping the data three way replicated (default) is not necessary. Reducing that to 1 reduces space usage and increases performance. -- Use `TRUNCATE table1,table2,table3..tablen;` instead of CREATE TABLE, and DROP TABLE between test cases. From 8fb21a98b46aee474a4c6c716cc1594db136243a Mon Sep 17 00:00:00 2001 From: Sanketh I Date: Sun, 18 May 2025 16:18:39 -0700 Subject: [PATCH 111/146] Delete docs/content/stable/api/ysql/ddl-behavior/_index.md --- .../stable/api/ysql/ddl-behavior/_index.md | 33 ------------------- 1 file changed, 33 deletions(-) delete mode 100644 docs/content/stable/api/ysql/ddl-behavior/_index.md diff --git a/docs/content/stable/api/ysql/ddl-behavior/_index.md b/docs/content/stable/api/ysql/ddl-behavior/_index.md deleted file mode 100644 index bf5d47a11f07..000000000000 --- a/docs/content/stable/api/ysql/ddl-behavior/_index.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Behavior of DDL statements [YSQL] -headerTitle: Behavior of DDL statements -linkTitle: Behavior of DDL statements -description: Explains specific aspects of DDL statement behavior in YugabyteDB. [YSQL]. -menu: - stable_api: - identifier: ddl-behavior - parent: api-ysql - weight: 20 -type: docs ---- - - -This section describes specific concurrency-related aspects of DDL statement behavior in YugabyteDB. - -## Concurrent DML during a DDL operation - -In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE
.. ADD COLUMN` DDL statement may add a new column while a `SELECT * from
` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing. (Support for similar behavior in YugabyteDB is being tracked in issue {{}}.) - -In YugabyteDB, when a DDL modifies the schema of tables that are accessed by concurrent DML statements, the DML statement may do one of the following: -1. Operate with the old schema prior to the DDL, or -2. Operate with the new schema after the DDL completes, or -3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to [retry such operations](https://www.yugabyte.com/blog/retry-mechanism-spring-boot-app/) whenever possible. - -Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table/#alter-table-operations-that-involve-a-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements on the table being modified by the `ALTER TABLE`, as the effect of such concurrent DML may not be reflected in the table copy. - -## Concurrent DDL during a DDL operation - -DDL statements that affect entities in different databases can be run concurrently. However, for DDL statements that impact the same database, it is recommended to execute them sequentially. - -DDL statements that relate to shared objects, such as roles or tablespaces, are considered as affecting all databases in the cluster, so they should also be run sequentially. - From eb4f165250a043c38f8d7f679be25881e5823e4a Mon Sep 17 00:00:00 2001 From: Yury Shchetinin Date: Tue, 13 May 2025 18:34:25 +0300 Subject: [PATCH 112/146] [PLAT-17420] In a 9 node 3 rf universe 3 AZ with equal node distribution among AZ, batchSize = 3 is not working Summary: We are doing software upgrade in 3 phases: 1) Updating master software on tserver-only servers 2) Updating master software on master nodes 3) Updating tserver software on tserver nodes Phase 1 is now done in parallel (since we are not affecting any running processes). During that phase we don't change state of the nodes (to avoid confusing user). 
Phase 2 is done one at a time, so we are using new node state here - UpdateMasterSoftware Phase 3 is using provided batch size Test Plan: sbt test Run software upgade in UI using batches - verify expected behavior Reviewers: nsingh Reviewed By: nsingh Subscribers: yugaware Differential Revision: https://phorge.dev.yugabyte.com/D43794 --- .../UniverseDefinitionTaskParamsMapper.java | 1 + .../yw/commissioner/UpgradeTaskBase.java | 43 ++++++++------ .../upgrade/SoftwareUpgradeTaskBase.java | 7 +++ .../yw/models/helpers/NodeDetails.java | 5 +- .../components/schemas/NodeDetails.yaml | 1 + .../src/main/resources/swagger-strict.json | 2 + managed/src/main/resources/swagger.json | 2 + .../tasks/upgrade/RollbackUpgradeTest.java | 34 ++++++----- .../tasks/upgrade/SoftwareUpgradeTest.java | 58 +++++++++++++++++++ .../tasks/upgrade/UpgradeTaskTest.java | 58 ++++--------------- .../yw/models/helpers/NodeDetailsTest.java | 2 + 11 files changed, 136 insertions(+), 77 deletions(-) diff --git a/managed/src/main/java/api/v2/mappers/UniverseDefinitionTaskParamsMapper.java b/managed/src/main/java/api/v2/mappers/UniverseDefinitionTaskParamsMapper.java index c86aaeba18e3..ac572f959506 100644 --- a/managed/src/main/java/api/v2/mappers/UniverseDefinitionTaskParamsMapper.java +++ b/managed/src/main/java/api/v2/mappers/UniverseDefinitionTaskParamsMapper.java @@ -254,6 +254,7 @@ CloudSpecificInfo toV2CloudSpecificInfo( @ValueMapping(target = "PROVISIONED", source = "Provisioned"), @ValueMapping(target = "SOFTWAREINSTALLED", source = "SoftwareInstalled"), @ValueMapping(target = "UPGRADESOFTWARE", source = "UpgradeSoftware"), + @ValueMapping(target = "UPGRADEMASTERSOFTWARE", source = "UpgradeMasterSoftware"), @ValueMapping(target = "ROLLBACKUPGRADE", source = "RollbackUpgrade"), @ValueMapping(target = "FINALIZEUPGRADE", source = "FinalizeUpgrade"), @ValueMapping(target = "UPDATEGFLAGS", source = "UpdateGFlags"), diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/UpgradeTaskBase.java b/managed/src/main/java/com/yugabyte/yw/commissioner/UpgradeTaskBase.java index fd8d45c7a011..6089ceeecf0e 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/UpgradeTaskBase.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/UpgradeTaskBase.java @@ -239,6 +239,10 @@ protected UpgradeTaskParams taskParams() { // State set on node while it is being upgraded public abstract NodeState getNodeState(); + protected NodeState getNodeState(Set processTypes) { + return getNodeState(); + } + public void runUpgrade(Runnable upgradeLambda) { runUpgrade(upgradeLambda, null /* Txn callback */); } @@ -582,7 +586,6 @@ private void createRollingUpgradeTaskFlow( } Universe universe = getUniverse(); - NodeState nodeState = getNodeState(); if (hasTServer) { if (!isBlacklistLeaders()) { // Need load balancer on to perform leader blacklist. @@ -596,25 +599,29 @@ private void createRollingUpgradeTaskFlow( .setSubTaskGroupType(subGroupType); } } - // For inactive role (updating master links for tserver-only nodes) - // we can process all the nodes concurrently. - RollMaxBatchSize rollMaxBatchSize = - activeRole - ? 
getCurrentRollBatchSize(universe) - : RollMaxBatchSize.of(Integer.MAX_VALUE, Integer.MAX_VALUE); - List> split = - splitNodes(getUniverse(), nodes, processTypesFunction, rollMaxBatchSize); + List> split; + if (activeRole) { + RollMaxBatchSize rollMaxBatchSize = getCurrentRollBatchSize(universe); + split = splitNodes(getUniverse(), nodes, processTypesFunction, rollMaxBatchSize); + } else { + // For inactive role (updating master links for tserver-only nodes) + // we can process all the nodes concurrently. + split = Collections.singletonList(new ArrayList<>(nodes)); + } for (List nodeList : split) { // Nodes are grouped by the same set of server types, so it doesn't matter which node to take. Set processTypes = processTypesFunction.apply(nodeList.get(0)); + NodeState nodeState = getNodeState(processTypes); + if (nodeList.size() > 1) { log.debug("Stopping {} nodes simultaneously, processes {}", nodeList.size(), processTypes); } - createSetNodeStateTasks(nodeList, nodeState).setSubTaskGroupType(subGroupType); - + if (activeRole) { + createSetNodeStateTasks(nodeList, nodeState).setSubTaskGroupType(subGroupType); + } UUID primaryId = universe.getUniverseDetails().getPrimaryCluster().uuid; boolean hasPrimaryNodes = false; for (NodeDetails node : nodeList) { @@ -709,12 +716,14 @@ private void createRollingUpgradeTaskFlow( // Run post node upgrade hooks createHookTriggerTasks(nodeList, false, true); - createSetNodeStateTasks(nodeList, NodeState.Live).setSubTaskGroupType(subGroupType); - for (NodeDetails node : nodeList) { - createSleepAfterStartupTask( - taskParams().getUniverseUUID(), - processTypes, - SetNodeState.getStartKey(node.getNodeName(), nodeState)); + if (activeRole) { + createSetNodeStateTasks(nodeList, NodeState.Live).setSubTaskGroupType(subGroupType); + for (NodeDetails node : nodeList) { + createSleepAfterStartupTask( + taskParams().getUniverseUUID(), + processTypes, + SetNodeState.getStartKey(node.getNodeName(), nodeState)); + } } if (context.postAction != null) { nodeList.forEach(context.postAction); diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/upgrade/SoftwareUpgradeTaskBase.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/upgrade/SoftwareUpgradeTaskBase.java index 1c68e0f0d85a..091fd02181a5 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/upgrade/SoftwareUpgradeTaskBase.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/upgrade/SoftwareUpgradeTaskBase.java @@ -60,6 +60,13 @@ public NodeState getNodeState() { return NodeState.UpgradeSoftware; } + @Override + protected NodeState getNodeState(Set processTypes) { + return getSingle(processTypes) == ServerType.MASTER + ? NodeState.UpgradeMasterSoftware + : NodeState.UpgradeSoftware; + } + protected UpgradeContext getUpgradeContext(String targetSoftwareVersion) { return UpgradeContext.builder() .reconfigureMaster(false) diff --git a/managed/src/main/java/com/yugabyte/yw/models/helpers/NodeDetails.java b/managed/src/main/java/com/yugabyte/yw/models/helpers/NodeDetails.java index de1e5536cee0..14fe81f40a45 100644 --- a/managed/src/main/java/com/yugabyte/yw/models/helpers/NodeDetails.java +++ b/managed/src/main/java/com/yugabyte/yw/models/helpers/NodeDetails.java @@ -112,7 +112,9 @@ public enum NodeState { Provisioned(DELETE, ADD), // Set after the YB software installed and some basic configuration done on a provisioned node. SoftwareInstalled(START, DELETE, ADD), - // Set after the YB software is upgraded via Rolling Restart. 
+ // Set after the YB master software is upgraded via Rolling Restart. + UpgradeMasterSoftware(), + // Set after the YB tserver software is upgraded via Rolling Restart. UpgradeSoftware(), // set when software version is rollback. RollbackUpgrade(), @@ -435,6 +437,7 @@ public boolean isActive() { @JsonIgnore public boolean isQueryable() { return (state == NodeState.UpgradeSoftware + || state == NodeState.UpgradeMasterSoftware || state == NodeState.FinalizeUpgrade || state == NodeState.RollbackUpgrade || state == NodeState.UpdateGFlags diff --git a/managed/src/main/resources/openapi/components/schemas/NodeDetails.yaml b/managed/src/main/resources/openapi/components/schemas/NodeDetails.yaml index 0d5c9eba0c06..d4eacb695dec 100644 --- a/managed/src/main/resources/openapi/components/schemas/NodeDetails.yaml +++ b/managed/src/main/resources/openapi/components/schemas/NodeDetails.yaml @@ -94,6 +94,7 @@ properties: - Provisioned - SoftwareInstalled - UpgradeSoftware + - UpgradeMasterSoftware - RollbackUpgrade - FinalizeUpgrade - UpdateGFlags diff --git a/managed/src/main/resources/swagger-strict.json b/managed/src/main/resources/swagger-strict.json index 73a427fa7f5e..a1b4f1f7bf39 100644 --- a/managed/src/main/resources/swagger-strict.json +++ b/managed/src/main/resources/swagger-strict.json @@ -12282,6 +12282,7 @@ "Reprovisioning", "Provisioned", "SoftwareInstalled", + "UpgradeMasterSoftware", "UpgradeSoftware", "RollbackUpgrade", "FinalizeUpgrade", @@ -12538,6 +12539,7 @@ "Reprovisioning", "Provisioned", "SoftwareInstalled", + "UpgradeMasterSoftware", "UpgradeSoftware", "RollbackUpgrade", "FinalizeUpgrade", diff --git a/managed/src/main/resources/swagger.json b/managed/src/main/resources/swagger.json index 0e7fc86b1126..30dfa867e474 100644 --- a/managed/src/main/resources/swagger.json +++ b/managed/src/main/resources/swagger.json @@ -12340,6 +12340,7 @@ "Reprovisioning", "Provisioned", "SoftwareInstalled", + "UpgradeMasterSoftware", "UpgradeSoftware", "RollbackUpgrade", "FinalizeUpgrade", @@ -12596,6 +12597,7 @@ "Reprovisioning", "Provisioned", "SoftwareInstalled", + "UpgradeMasterSoftware", "UpgradeSoftware", "RollbackUpgrade", "FinalizeUpgrade", diff --git a/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/RollbackUpgradeTest.java b/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/RollbackUpgradeTest.java index c03d7c644efd..ec1bb86d2bfa 100644 --- a/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/RollbackUpgradeTest.java +++ b/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/RollbackUpgradeTest.java @@ -41,6 +41,7 @@ import com.yugabyte.yw.models.helpers.TaskType; import java.util.ArrayList; import java.util.Arrays; +import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -100,12 +101,7 @@ public class RollbackUpgradeTest extends UpgradeTaskTest { TaskType.WaitStartingFromTime); private static final List ROLLING_UPGRADE_TASK_SEQUENCE_INACTIVE_ROLE = - ImmutableList.of( - TaskType.SetNodeState, - TaskType.AnsibleClusterServerCtl, - TaskType.AnsibleConfigureServers, - TaskType.SetNodeState, - TaskType.WaitStartingFromTime); + ImmutableList.of(TaskType.AnsibleClusterServerCtl, TaskType.AnsibleConfigureServers); private static final List NON_ROLLING_UPGRADE_TASK_SEQUENCE_ACTIVE_ROLE = ImmutableList.of( @@ -177,12 +173,21 @@ private int assertSequence( : ROLLING_UPGRADE_TASK_SEQUENCE_INACTIVE_ROLE) : ROLLING_UPGRADE_TASK_SEQUENCE_TSERVER; List nodeOrder = 
getRollingUpgradeNodeOrder(serverType, activeRole); - - for (int nodeIdx : nodeOrder) { - String nodeName = String.format("host-n%d", nodeIdx); + List> nodesOrder = + activeRole + ? nodeOrder.stream() + .map(n -> Collections.singletonList(n)) + .collect(Collectors.toList()) + : Collections.singletonList(nodeOrder); + + for (List nodeIndexes : nodesOrder) { + List nodeNames = + nodeIndexes.stream() + .map(nodeIdx -> String.format("host-n%d", nodeIdx)) + .collect(Collectors.toList()); int pos = position; for (TaskType type : taskSequence) { - log.debug("exp {} {} - {}", nodeName, pos++, type); + log.debug("exp {} {} - {}", nodeNames, pos++, type); } pos = position; for (TaskType type : taskSequence) { @@ -200,12 +205,15 @@ private int assertSequence( TaskType taskType = tasks.get(0).getTaskType(); UserTaskDetails.SubTaskGroupType subTaskGroupType = tasks.get(0).getSubTaskGroupType(); // Leader blacklisting adds a ModifyBlackList task at position 0 - int numTasksToAssert = position == 0 ? 2 : 1; + int numTasksToAssert = position == 0 ? 2 : nodeNames.size(); assertEquals(numTasksToAssert, tasks.size()); assertEquals(type, taskType); if (!NON_NODE_TASKS.contains(taskType)) { Map assertValues = - new HashMap<>(ImmutableMap.of("nodeName", nodeName, "nodeCount", 1)); + nodeIndexes.size() == 1 + ? new HashMap<>(ImmutableMap.of("nodeName", nodeNames.get(0), "nodeCount", 1)) + : new HashMap<>( + ImmutableMap.of("nodeNames", nodeNames, "nodeCount", nodeNames.size())); if (taskType.equals(TaskType.AnsibleConfigureServers)) { String version = "2.21.0.0-b1"; @@ -364,7 +372,7 @@ public void testRollbackUpgradeInRollingManner() { position = assertSequence(subTasksByPosition, MASTER, position, true, false); position = assertSequence(subTasksByPosition, MASTER, position, true, true); assertCommonTasks(subTasksByPosition, position, UpgradeType.ROLLING_UPGRADE, true); - assertEquals(125, position); + assertEquals(117, position); assertEquals(100.0, taskInfo.getPercentCompleted(), 0); assertEquals(Success, taskInfo.getTaskState()); defaultUniverse = Universe.getOrBadRequest(defaultUniverse.getUniverseUUID()); diff --git a/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/SoftwareUpgradeTest.java b/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/SoftwareUpgradeTest.java index 07325585174d..f2e344c936a2 100644 --- a/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/SoftwareUpgradeTest.java +++ b/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/SoftwareUpgradeTest.java @@ -20,6 +20,7 @@ import com.google.common.net.HostAndPort; import com.yugabyte.yw.commissioner.MockUpgrade; import com.yugabyte.yw.commissioner.UpgradeTaskBase; +import com.yugabyte.yw.commissioner.tasks.UniverseTaskBase; import com.yugabyte.yw.commissioner.tasks.subtasks.RunYsqlUpgrade; import com.yugabyte.yw.common.ApiUtils; import com.yugabyte.yw.common.ModelFactory; @@ -29,6 +30,8 @@ import com.yugabyte.yw.common.TestUtils; import com.yugabyte.yw.common.Util; import com.yugabyte.yw.common.config.UniverseConfKeys; +import com.yugabyte.yw.common.utils.Pair; +import com.yugabyte.yw.forms.RollMaxBatchSize; import com.yugabyte.yw.forms.SoftwareUpgradeParams; import com.yugabyte.yw.forms.UniverseDefinitionTaskParams; import com.yugabyte.yw.forms.UpgradeTaskParams; @@ -46,6 +49,7 @@ import java.util.Arrays; import java.util.HashSet; import java.util.List; +import java.util.Map; import java.util.Optional; import java.util.Set; import java.util.UUID; @@ -216,6 +220,60 @@ public void 
testSoftwareUpgrade() { .verifyTasks(taskInfo.getSubTasks()); } + @Test + public void testSoftwareUpgradeBatches() { + updateDefaultUniverse(true, OLD_VERSION, 4, 4, 4); + + RuntimeConfigEntry.upsertGlobal("yb.task.upgrade.batch_roll_enabled", "true"); + SoftwareUpgradeParams taskParams = new SoftwareUpgradeParams(); + taskParams.ybSoftwareVersion = NEW_VERSION; + taskParams.clusters.add(defaultUniverse.getUniverseDetails().getPrimaryCluster()); + taskParams.rollMaxBatchSize = new RollMaxBatchSize(); + taskParams.rollMaxBatchSize.setPrimaryBatchSize(2); + mockDBServerVersion( + defaultUniverse.getUniverseDetails().getPrimaryCluster().userIntent.ybSoftwareVersion, + taskParams.ybSoftwareVersion, + defaultUniverse.getMasters().size() + defaultUniverse.getTServers().size()); + TaskInfo taskInfo = submitTask(taskParams, defaultUniverse.getVersion()); + verify(mockNodeUniverseManager, times(24)).runCommand(any(), any(), anyList(), any()); + + assertEquals(100.0, taskInfo.getPercentCompleted(), 0); + assertEquals(Success, taskInfo.getTaskState()); + + Map> subTasksByPosition = + taskInfo.getSubTasks().stream().collect(Collectors.groupingBy(TaskInfo::getPosition)); + + List> counts = new ArrayList<>(); + for (int i = 0; i < subTasksByPosition.size(); i++) { + List subtasks = subTasksByPosition.get(i); + log.debug("tt " + subtasks.get(0).getTaskType()); + if (subtasks.get(0).getTaskType() == TaskType.AnsibleConfigureServers) { + Map params = extractParams(subtasks.get(0), Set.of("type", "processType")); + log.debug("params " + params); + if ("Software".equals(params.get("type"))) { + counts.add( + new Pair<>( + subtasks.size(), + UniverseTaskBase.ServerType.valueOf(params.get("processType").toString()))); + } + } + } + assertEquals( + Arrays.asList( + new Pair<>(12, UniverseTaskBase.ServerType.TSERVER), // Download + new Pair<>(9, UniverseTaskBase.ServerType.MASTER), // Inactive + new Pair<>(1, UniverseTaskBase.ServerType.MASTER), + new Pair<>(1, UniverseTaskBase.ServerType.MASTER), + new Pair<>(1, UniverseTaskBase.ServerType.MASTER), + new Pair<>(2, UniverseTaskBase.ServerType.TSERVER), + new Pair<>(2, UniverseTaskBase.ServerType.TSERVER), + new Pair<>(2, UniverseTaskBase.ServerType.TSERVER), + new Pair<>(2, UniverseTaskBase.ServerType.TSERVER), + new Pair<>(2, UniverseTaskBase.ServerType.TSERVER), + new Pair<>(2, UniverseTaskBase.ServerType.TSERVER)), + counts); + } + @Test public void testSoftwareUpgradeAndInstallYbc() { updateDefaultUniverseTo5Nodes(true, OLD_VERSION); diff --git a/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/UpgradeTaskTest.java b/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/UpgradeTaskTest.java index 16bbe4adae7d..fd8f191a3a32 100644 --- a/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/UpgradeTaskTest.java +++ b/managed/src/test/java/com/yugabyte/yw/commissioner/tasks/upgrade/UpgradeTaskTest.java @@ -52,7 +52,6 @@ import java.security.NoSuchAlgorithmException; import java.util.ArrayList; import java.util.Arrays; -import java.util.Collections; import java.util.Date; import java.util.HashMap; import java.util.List; @@ -391,47 +390,6 @@ protected void assertNodeSubTask(List subTasks, Map as }); } - private void printTaskSequence( - int startPosition, - Map> subTasksByPosition, - List expectedTaskTypes, - Map> expectedParams, - int failedPosition) { - log.debug("Expected:"); - for (int i = 0; i < expectedTaskTypes.size(); i++) { - log.debug( - "#" - + i - + " " - + expectedTaskTypes.get(i) - + " " - + 
expectedParams.getOrDefault(i, Collections.emptyMap())); - } - log.debug("Actual:"); - int maxPosition = subTasksByPosition.keySet().stream().max(Integer::compare).get(); - for (int i = 0; i < maxPosition - startPosition; i++) { - int position = startPosition + i; - String suff = ""; - if (position == failedPosition) { - suff = "Failed!! ->"; - } - String task; - String taskParams; - List taskInfos = subTasksByPosition.get(position); - Set keySet = expectedParams.getOrDefault(i, Collections.emptyMap()).keySet(); - if (taskInfos != null) { - TaskInfo taskInfo = taskInfos.get(0); - task = taskInfo.getTaskType().toString(); - taskParams = extractParams(taskInfo, keySet).toString(); - } else { - task = "-"; - taskParams = ""; - } - log.debug(suff + "#" + i + " " + task + " " + taskParams); - } - log.debug("------"); - } - protected Map extractParams(TaskInfo task, Set keys) { Map result = new HashMap<>(); for (String key : keys) { @@ -458,9 +416,15 @@ protected void updateDefaultUniverseTo5Nodes(boolean enableYSQL) { } protected void updateDefaultUniverseTo5Nodes(boolean enableYSQL, String ybSoftwareVersion) { + updateDefaultUniverse(enableYSQL, ybSoftwareVersion, 2, 1, 2); + } + + protected void updateDefaultUniverse( + boolean enableYSQL, String ybSoftwareVersion, Integer... countInAz) { + int numNodes = Arrays.stream(countInAz).reduce(Integer::sum).get(); UniverseDefinitionTaskParams.UserIntent userIntent = new UniverseDefinitionTaskParams.UserIntent(); - userIntent.numNodes = 5; + userIntent.numNodes = numNodes; userIntent.replicationFactor = 3; userIntent.ybSoftwareVersion = ybSoftwareVersion; userIntent.accessKeyCode = "demo-access"; @@ -469,9 +433,11 @@ protected void updateDefaultUniverseTo5Nodes(boolean enableYSQL, String ybSoftwa userIntent.provider = defaultProvider.getUuid().toString(); PlacementInfo pi = new PlacementInfo(); - PlacementInfoUtil.addPlacementZone(az1.getUuid(), pi, 1, 2, false); - PlacementInfoUtil.addPlacementZone(az2.getUuid(), pi, 1, 1, true); - PlacementInfoUtil.addPlacementZone(az3.getUuid(), pi, 1, 2, false); + List azUUIDs = Arrays.asList(az1.getUuid(), az2.getUuid(), az3.getUuid()); + int idx = 0; + for (Integer num : countInAz) { + PlacementInfoUtil.addPlacementZone(azUUIDs.get(idx++), pi, 1, num, idx % 2 == 0); + } defaultUniverse = Universe.saveDetails( diff --git a/managed/src/test/java/com/yugabyte/yw/models/helpers/NodeDetailsTest.java b/managed/src/test/java/com/yugabyte/yw/models/helpers/NodeDetailsTest.java index 348e7be08a6e..272231b8a558 100644 --- a/managed/src/test/java/com/yugabyte/yw/models/helpers/NodeDetailsTest.java +++ b/managed/src/test/java/com/yugabyte/yw/models/helpers/NodeDetailsTest.java @@ -29,6 +29,7 @@ public void testIsActive() { activeStates.add(NodeDetails.NodeState.ToJoinCluster); activeStates.add(NodeDetails.NodeState.Provisioned); activeStates.add(NodeDetails.NodeState.SoftwareInstalled); + activeStates.add(NodeDetails.NodeState.UpgradeMasterSoftware); activeStates.add(NodeDetails.NodeState.UpgradeSoftware); activeStates.add(NodeDetails.NodeState.FinalizeUpgrade); activeStates.add(NodeDetails.NodeState.RollbackUpgrade); @@ -53,6 +54,7 @@ public void testIsActive() { @Test public void testIsQueryable() { Set queryableStates = new HashSet<>(); + queryableStates.add(NodeDetails.NodeState.UpgradeMasterSoftware); queryableStates.add(NodeDetails.NodeState.UpgradeSoftware); queryableStates.add(NodeDetails.NodeState.RollbackUpgrade); queryableStates.add(NodeDetails.NodeState.FinalizeUpgrade); From 
9df7e80a32c29d1226ec9d4f0e712e69e1cd2ef4 Mon Sep 17 00:00:00 2001 From: Sumukh-Phalgaonkar Date: Fri, 16 May 2025 15:16:07 +0530 Subject: [PATCH 113/146] [#27069] CDC: Avoid deletion of children tablet state table entries while cleaning stream metadata Summary: There is a possible race condition when master leadership changes which could potentially lead to deletion of newly added state table entries for children tablets. The race is as follows: - Assume master_1 is currently the master leader. The master bg task, performs various periodic Xrepl related tasks on the leader. One such task is `CleanUpCDCSDKStreamsMetadata()` which removes dropped tables from stream metadata and also deletes their state table entries. Assume that this task starts on master_1 and it looses the leadership. - Now lets say, master_2 becomes the new leader. As soon as this happens a tablet split is triggered, and children tablet entries are added to the state table. During this process entries corresponding to the children tablets are added to the in-memory `tablet_map_` on master_2. - On master_1 `CleanUpCDCSDKStreamsMetadata()` tries to find the tablets belonging to each of the tables in stream metadata. If for any table, no tablets are found it concludes that this table has been dropped. It then iterates over the state table entries and tries to find the `table_id` from the `tablet_id` to identify if the entry belongs to a dropped table. For this it uses `tablet_map_`. However since the children tablet entries are added to the `tablet_map_` on master_2, it is not able to find the `TableInfo` for the children tablet entries and ends up deleting them. This can render a slot useless and will show up as stream expiry error. In order to fix this issue, we delete the state table entries only when we are sure that it belongs to a dropped table. Jira: DB-16553 Test Plan: Existing unit tests Reviewers: vkushwaha, skumar Reviewed By: skumar Subscribers: ycdcxcluster Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43883 --- src/yb/master/xrepl_catalog_manager.cc | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/src/yb/master/xrepl_catalog_manager.cc b/src/yb/master/xrepl_catalog_manager.cc index bf719d54cb77..70b38d53ad69 100644 --- a/src/yb/master/xrepl_catalog_manager.cc +++ b/src/yb/master/xrepl_catalog_manager.cc @@ -2730,7 +2730,9 @@ Status CatalogManager::CleanUpCDCSDKStreamsMetadata(const LeaderEpoch& epoch) { // itself is not found, we can safely delete the cdc_state entry. auto tablet_info_result = GetTabletInfo(entry.tablet_id); if (!tablet_info_result.ok()) { - keys_to_delete.emplace_back(entry.tablet_id, entry.stream_id); + LOG_WITH_FUNC(WARNING) << "Did not find tablet info for tablet_id: " << entry.tablet_id + << " , will not delete its cdc_state entry for stream id:" + << entry.stream_id << "in this iteration"; continue; } From 4accd3b1f423e253560af69a571735f2f04e601c Mon Sep 17 00:00:00 2001 From: Sumukh-Phalgaonkar Date: Mon, 19 May 2025 12:44:29 +0530 Subject: [PATCH 114/146] [#26891] DocDB: Make default value of TEST_dcheck_for_missing_schema_packing true Summary: This revision reverts the default value of the test flag `dcheck_for_missing_schema_packing` back to true. The default value was mistakenly made false in D43537. The value of this flag shall remain false in all CDC related tests. 
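For illustration only (not part of this diff): a CDC-focused test opts out of the DCHECK at setup time, along the lines of the sketch below. The setServerFlag/getTserverHostAndPort helpers and the flag name are the ones used in the CDCBaseClass change further down in this revision; the standalone wrapper shown here is hypothetical.

    // Minimal sketch: a CDC test keeps the schema-packing DCHECK disabled,
    // mirroring the CDCBaseClass.setUp() change in this revision.
    protected void setUp() throws Exception {
      setServerFlag(getTserverHostAndPort(),
          "TEST_dcheck_for_missing_schema_packing", "false");
    }

Non-CDC tests leave the flag at its restored default (true), so a missing schema packing fails fast in debug builds.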
Jira: DB-16301 Test Plan: Existing unit tests Reviewers: sergei, skumar Reviewed By: sergei Subscribers: svc_phabricator, ybase, ycdcxcluster Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43947 --- .../yb-cdc/src/test/java/org/yb/cdc/common/CDCBaseClass.java | 5 +++-- .../src/test/java/org/yb/pgsql/TestPgReplicationSlot.java | 1 + src/yb/dockv/schema_packing.cc | 2 +- src/yb/integration-tests/cdcsdk_test_base.h | 2 ++ src/yb/integration-tests/xcluster/xcluster_ysql-test.cc | 2 ++ 5 files changed, 9 insertions(+), 3 deletions(-) diff --git a/java/yb-cdc/src/test/java/org/yb/cdc/common/CDCBaseClass.java b/java/yb-cdc/src/test/java/org/yb/cdc/common/CDCBaseClass.java index 39d3a7aebad3..9f8fa8c5d1f0 100644 --- a/java/yb-cdc/src/test/java/org/yb/cdc/common/CDCBaseClass.java +++ b/java/yb-cdc/src/test/java/org/yb/cdc/common/CDCBaseClass.java @@ -52,7 +52,8 @@ public class CDCBaseClass extends BaseMiniClusterTest { protected String CDC_INTENT_SIZE_GFLAG = "cdc_max_stream_intent_records"; protected String CDC_ENABLE_CONSISTENT_RECORDS = "cdc_enable_consistent_records"; protected String CDC_POPULATE_SAFEPOINT_RECORD = "cdc_populate_safepoint_record"; - + protected String TEST_DCHECK_FOR_MISSING_SCHEMA_PACKING = + "TEST_dcheck_for_missing_schema_packing"; // Postgres settings. protected static final String DEFAULT_PG_DATABASE = "yugabyte"; protected static final String DEFAULT_PG_USER = "yugabyte"; @@ -97,8 +98,8 @@ protected Integer getYsqlRequestLimit() { return null; } - /** empty helper function */ protected void setUp() throws Exception { + setServerFlag(getTserverHostAndPort(), TEST_DCHECK_FOR_MISSING_SCHEMA_PACKING, "false"); } /** diff --git a/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgReplicationSlot.java b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgReplicationSlot.java index 3ce3dd7a39ce..ce0ce2a6986f 100644 --- a/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgReplicationSlot.java +++ b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgReplicationSlot.java @@ -83,6 +83,7 @@ protected Map getTServerFlags() { flagMap.put( "cdcsdk_publication_list_refresh_interval_secs","" + kPublicationRefreshIntervalSec); flagMap.put("cdc_send_null_before_image_if_not_exists", "true"); + flagMap.put("TEST_dcheck_for_missing_schema_packing", "false"); return flagMap; } diff --git a/src/yb/dockv/schema_packing.cc b/src/yb/dockv/schema_packing.cc index 5c17d732d907..a7daab8325c6 100644 --- a/src/yb/dockv/schema_packing.cc +++ b/src/yb/dockv/schema_packing.cc @@ -33,7 +33,7 @@ #include "yb/util/fast_varint.h" #include "yb/util/flags/flag_tags.h" -DEFINE_test_flag(bool, dcheck_for_missing_schema_packing, false, +DEFINE_test_flag(bool, dcheck_for_missing_schema_packing, true, "Whether we use check failure for missing schema packing in debug builds"); namespace yb::dockv { diff --git a/src/yb/integration-tests/cdcsdk_test_base.h b/src/yb/integration-tests/cdcsdk_test_base.h index 49dec8d965fa..826a6446c16b 100644 --- a/src/yb/integration-tests/cdcsdk_test_base.h +++ b/src/yb/integration-tests/cdcsdk_test_base.h @@ -58,6 +58,7 @@ DECLARE_bool(ysql_yb_allow_replication_slot_ordering_modes); DECLARE_bool(cdc_send_null_before_image_if_not_exists); DECLARE_bool(enable_tablet_split_of_replication_slot_streamed_tables); DECLARE_bool(TEST_simulate_load_txn_for_cdc); +DECLARE_bool(TEST_dcheck_for_missing_schema_packing); namespace yb { using client::YBClient; @@ -146,6 +147,7 @@ class CDCSDKTestBase : public YBTest { 
ANNOTATE_UNPROTECTED_WRITE(FLAGS_ysql_enable_packed_row_for_colocated_table) = true; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_dcheck_for_missing_schema_packing) = false; } void TearDown() override; diff --git a/src/yb/integration-tests/xcluster/xcluster_ysql-test.cc b/src/yb/integration-tests/xcluster/xcluster_ysql-test.cc index efa64269178d..e105871eede6 100644 --- a/src/yb/integration-tests/xcluster/xcluster_ysql-test.cc +++ b/src/yb/integration-tests/xcluster/xcluster_ysql-test.cc @@ -111,6 +111,7 @@ DECLARE_uint64(snapshot_coordinator_poll_interval_ms); DECLARE_uint32(cdc_wal_retention_time_secs); DECLARE_int32(catalog_manager_bg_task_wait_ms); DECLARE_bool(TEST_enable_sync_points); +DECLARE_bool(TEST_dcheck_for_missing_schema_packing); namespace yb { @@ -2181,6 +2182,7 @@ void XClusterYsqlTest::ValidateRecordsXClusterWithCDCSDK( ANNOTATE_UNPROTECTED_WRITE(FLAGS_update_min_cdc_indices_interval_secs) = 1; } std::vector tables_vector = {kNTabletsPerTable, kNTabletsPerTable}; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_dcheck_for_missing_schema_packing) = false; ASSERT_OK(SetUpWithParams(tables_vector, tables_vector, 1)); // 2. Setup replication. From dc6d14a65aaf78c094624f9000e0db8d6e58e228 Mon Sep 17 00:00:00 2001 From: William McKenna Date: Fri, 22 Nov 2024 15:27:02 -0800 Subject: [PATCH 115/146] [#25063] YSQL: Add estimated roundtrips to EXPLAIN output Summary: Add estimated roundtrip for index and sequential scans to EXPLAIN. Examples with index-only, index, and sequential scans: yugabyte=# \d+ rand_dist Table "public.rand_dist" Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description --------------+---------------+-----------+----------+---------+----------+-------------+--------------+------------- pk | integer | | not null | | plain | | | norm_col | character(15) | | not null | | extended | | | exp_col | integer | | not null | | plain | | | uni_col | integer | | | | plain | | | uni_corr_col | character(10) | | | | extended | | | step_col | integer | | not null | | plain | | | Indexes: "rand_dist_pkey" PRIMARY KEY, lsm (pk HASH) "exp_uni" lsm (exp_col HASH, uni_col ASC) "norm_exp" lsm (norm_col HASH, exp_col ASC) "norm_uni" lsm (norm_col HASH, uni_col ASC) "step_exp" lsm (step_col HASH, exp_col ASC) Access method: heap yugabyte=# explain (dist on, analyze on, debug on) select count(*) from rand_dist where norm_col='00000000000933' and uni_col < 50; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------- Finalize Aggregate (cost=24.36..24.37 rows=1 width=8) (actual time=1.879..1.880 rows=1 loops=1) -> Index Only Scan using norm_uni on rand_dist (cost=20.02..24.36 rows=1 width=0) (actual time=1.872..1.873 rows=0 loops=1) Index Cond: ((norm_col = '00000000000933'::bpchar) AND (uni_col < 50)) Heap Fetches: 0 Storage Index Read Requests: 1 Storage Index Read Execution Time: 1.598 ms Metric rocksdb_number_db_seek: 1.000 Metric rocksdb_number_db_seek_found: 1.000 Metric rocksdb_iter_bytes_read: 56.000 Metric ql_read_latency: sum: 387.000, count: 1.000 Estimated Seeks: 2 Estimated Nexts: 3 Estimated Table Roundtrips: 0 Estimated Index Roundtrips: 1 Estimated Docdb Result Width: 30 Partial Aggregate: true Planning Time: 24.238 ms Execution Time: 2.042 ms Storage Read Requests: 1 Storage Read Execution Time: 1.598 ms Storage Rows Scanned: 0 Storage Write Requests: 0 Catalog Read Requests: 6 Catalog Read Execution Time: 20.065 ms Catalog Write Requests: 0 Storage Flush 
Requests: 0 Metric rocksdb_number_db_seek: 1 Metric rocksdb_number_db_seek_found: 1 Metric rocksdb_iter_bytes_read: 56 Metric ql_read_latency: sum: 387, count: 1 Storage Execution Time: 21.664 ms Peak Memory Usage: 25 kB (32 rows) yugabyte=# /*+ indexscan(rand_dist) */ explain (dist on, analyze on, debug on) select count(*) from rand_dist where norm_col='00000000000933' and uni_col < 50; QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------- Finalize Aggregate (cost=68.73..68.74 rows=1 width=8) (actual time=2.857..2.857 rows=1 loops=1) -> Index Scan using norm_uni on rand_dist (cost=40.04..68.73 rows=1 width=0) (actual time=2.848..2.848 rows=0 loops=1) Index Cond: ((norm_col = '00000000000933'::bpchar) AND (uni_col < 50)) Storage Index Read Requests: 1 Storage Index Read Execution Time: 2.235 ms Metric rocksdb_number_db_seek: 1.000 Metric rocksdb_number_db_seek_found: 1.000 Metric rocksdb_iter_bytes_read: 56.000 Metric ql_read_latency: sum: 499.000, count: 1.000 Estimated Seeks: 4 Estimated Nexts: 6 Estimated Table Roundtrips: 1 Estimated Index Roundtrips: 1 Estimated Docdb Result Width: 30 Partial Aggregate: true Planning Time: 0.524 ms Execution Time: 3.155 ms Storage Read Requests: 1 Storage Read Execution Time: 2.235 ms Storage Rows Scanned: 0 Storage Write Requests: 0 Catalog Read Requests: 0 Catalog Write Requests: 0 Storage Flush Requests: 0 Metric rocksdb_number_db_seek: 1 Metric rocksdb_number_db_seek_found: 1 Metric rocksdb_iter_bytes_read: 56 Metric ql_read_latency: sum: 499, count: 1 Storage Execution Time: 2.235 ms Peak Memory Usage: 25 kB (30 rows) yugabyte=# /*+ seqscan(rand_dist) */ explain (dist on, analyze on, debug on) select count(*) from rand_dist where norm_col='00000000000933' and uni_col < 50; QUERY PLAN ------------------------------------------------------------------------------------------------------------------- Finalize Aggregate (cost=7184.05..7184.06 rows=1 width=8) (actual time=1290.385..1290.386 rows=1 loops=1) -> Seq Scan on rand_dist (cost=20.00..7184.05 rows=1 width=0) (actual time=1290.378..1290.378 rows=0 loops=1) Storage Filter: ((uni_col < 50) AND (norm_col = '00000000000933'::bpchar)) Storage Table Read Requests: 1 Storage Table Read Execution Time: 1289.778 ms Storage Table Rows Scanned: 100000 Metric rocksdb_number_db_seek: 2.000 Metric rocksdb_number_db_next: 200000.000 Metric rocksdb_number_db_seek_found: 2.000 Metric rocksdb_number_db_next_found: 199998.000 Metric rocksdb_iter_bytes_read: 10698356.000 Metric docdb_keys_found: 100000.000 Metric ql_read_latency: sum: 2563198.000, count: 2.000 Estimated Seeks: 1 Estimated Nexts: 99999 Estimated Table Roundtrips: 1 Estimated Index Roundtrips: 0 Estimated Docdb Result Width: 9 Partial Aggregate: true Planning Time: 0.778 ms Execution Time: 1290.532 ms Storage Read Requests: 1 Storage Read Execution Time: 1289.778 ms Storage Rows Scanned: 100000 Storage Write Requests: 0 Catalog Read Requests: 0 Catalog Write Requests: 0 Storage Flush Requests: 0 Metric rocksdb_number_db_seek: 2 Metric rocksdb_number_db_next: 200000 Metric rocksdb_number_db_seek_found: 2 Metric rocksdb_number_db_next_found: 199998 Metric rocksdb_iter_bytes_read: 10698356 Metric docdb_keys_found: 100000 Metric ql_read_latency: sum: 2563198, count: 2 Storage Execution Time: 1289.778 ms Peak Memory Usage: 24 kB (37 rows) Jira: DB-14194 Test Plan: TestPgCostModelSeekNextEstimation Reviewers: gkukreja, kramanathan Reviewed By: gkukreja, kramanathan 
Subscribers: smishra, svc_phabricator, yql, gkukreja Differential Revision: https://phorge.dev.yugabyte.com/D40230 --- .../org/yb/pgsql/ExplainAnalyzeUtils.java | 4 + .../TestPgCostModelSeekNextEstimation.java | 274 +++++++++++------- src/postgres/src/backend/commands/explain.c | 13 + .../src/backend/optimizer/path/allpaths.c | 10 + .../src/backend/optimizer/path/costsize.c | 25 +- src/postgres/src/include/nodes/pathnodes.h | 2 + .../pg_hint_plan/pg_hint_plan.c | 5 +- 7 files changed, 223 insertions(+), 110 deletions(-) diff --git a/java/yb-pgsql/src/test/java/org/yb/pgsql/ExplainAnalyzeUtils.java b/java/yb-pgsql/src/test/java/org/yb/pgsql/ExplainAnalyzeUtils.java index 8fa6e5712b51..0138f07a6623 100644 --- a/java/yb-pgsql/src/test/java/org/yb/pgsql/ExplainAnalyzeUtils.java +++ b/java/yb-pgsql/src/test/java/org/yb/pgsql/ExplainAnalyzeUtils.java @@ -117,6 +117,10 @@ public interface PlanCheckerBuilder extends ObjectCheckerBuilder { PlanCheckerBuilder estimatedSeeks(ValueChecker checker); PlanCheckerBuilder estimatedNextsAndPrevs(ValueChecker checker); + // Roundtrips Estimation + PlanCheckerBuilder estimatedTableRoundtrips(ValueChecker checker); + PlanCheckerBuilder estimatedIndexRoundtrips(ValueChecker checker); + // Estimated Docdb Result Width PlanCheckerBuilder estimatedDocdbResultWidth(ValueChecker checker); diff --git a/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgCostModelSeekNextEstimation.java b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgCostModelSeekNextEstimation.java index db9126c4dab8..46b85e0282fc 100644 --- a/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgCostModelSeekNextEstimation.java +++ b/java/yb-pgsql/src/test/java/org/yb/pgsql/TestPgCostModelSeekNextEstimation.java @@ -62,6 +62,10 @@ public class TestPgCostModelSeekNextEstimation extends BasePgSQLTest { private static final String METRIC_NUM_DB_SEEK = "rocksdb_number_db_seek"; private static final String METRIC_NUM_DB_NEXT = "rocksdb_number_db_next"; + private static final String T5_NAME = "t5"; + private static final String T5_K1_INDEX_NAME = "t5_k1_idx"; + private static final String T5_K2_INDEX_NAME = "t5_k2_idx"; + private Connection connection2; private static TopLevelCheckerBuilder makeTopLevelBuilder() { @@ -88,14 +92,36 @@ private ValueChecker expectedNextsRange(double expected_nexts) { return Checkers.closed(expected_lower_bound, expected_upper_bound); } + // If expected_roundtrips==0, then the checker will verify the actual value + // is exactly 0. If non-zero, then the actual value should be in the range 1 to + // max Double (so the checker is only verifying the value is non-zero). 
+ private ValueChecker expectedRoundtripsRange(double expected_roundtrips) { + + ValueChecker checker; + + if (expected_roundtrips == 0) + { + checker = Checkers.closed(0.0d, 0.0d); + } + else + { + checker = Checkers.closed(1.0, Integer.MAX_VALUE); + } + + return checker; + } + private void testSeekAndNextEstimationIndexScanHelper( Statement stmt, String query, String table_name, String index_name, double expected_seeks, double expected_nexts, + double expected_index_roundtrips, + double expected_table_roundtrips, Integer expected_docdb_result_width) throws Exception { testSeekAndNextEstimationIndexScanHelper(stmt, query, table_name, index_name, - NODE_INDEX_SCAN, expected_seeks, expected_nexts, expected_docdb_result_width); + NODE_INDEX_SCAN, expected_seeks, expected_nexts, expected_index_roundtrips, + expected_table_roundtrips,expected_docdb_result_width); } private void testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults( @@ -103,9 +129,12 @@ private void testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults( String table_name, String index_name, double expected_seeks, double expected_nexts, + double expected_index_roundtrips, + double expected_table_roundtrips, Integer expected_docdb_result_width) throws Exception { testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, query, table_name, index_name, - NODE_INDEX_SCAN, expected_seeks, expected_nexts, expected_docdb_result_width); + NODE_INDEX_SCAN, expected_seeks, expected_nexts, expected_index_roundtrips, + expected_table_roundtrips, expected_docdb_result_width); } private void testSeekAndNextEstimationIndexOnlyScanHelper( @@ -113,9 +142,12 @@ private void testSeekAndNextEstimationIndexOnlyScanHelper( String table_name, String index_name, double expected_seeks, double expected_nexts, + double expected_index_roundtrips, + double expected_table_roundtrips, Integer expected_docdb_result_width) throws Exception { testSeekAndNextEstimationIndexScanHelper(stmt, query, table_name, index_name, - NODE_INDEX_ONLY_SCAN, expected_seeks, expected_nexts, expected_docdb_result_width); + NODE_INDEX_ONLY_SCAN, expected_seeks, expected_nexts, expected_index_roundtrips, + expected_table_roundtrips,expected_docdb_result_width); } private void testSeekAndNextEstimationIndexOnlyScanHelper_IgnoreActualResults( @@ -123,9 +155,12 @@ private void testSeekAndNextEstimationIndexOnlyScanHelper_IgnoreActualResults( String table_name, String index_name, double expected_seeks, double expected_nexts, + double expected_index_roundtrips, + double expected_table_roundtrips, Integer expected_docdb_result_width) throws Exception { testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, query, table_name, index_name, - NODE_INDEX_ONLY_SCAN, expected_seeks, expected_nexts, expected_docdb_result_width); + NODE_INDEX_ONLY_SCAN, expected_seeks, expected_nexts, expected_index_roundtrips, + expected_table_roundtrips,expected_docdb_result_width); } private void testSeekAndNextEstimationIndexScanHelper( @@ -134,6 +169,8 @@ private void testSeekAndNextEstimationIndexScanHelper( String node_type, double expected_seeks, double expected_nexts, + double expected_index_roundtrips, + double expected_table_roundtrips, Integer expected_docdb_result_width) throws Exception { try { testExplainDebug(stmt, query, @@ -144,6 +181,8 @@ private void testSeekAndNextEstimationIndexScanHelper( .indexName(index_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) + 
.estimatedIndexRoundtrips(expectedRoundtripsRange(expected_index_roundtrips)) + .estimatedTableRoundtrips(expectedRoundtripsRange(expected_table_roundtrips)) .estimatedDocdbResultWidth(Checkers.equal(expected_docdb_result_width)) .metric(METRIC_NUM_DB_SEEK, expectedSeeksRange(expected_seeks)) .metric(METRIC_NUM_DB_NEXT, expectedNextsRange(expected_nexts)) @@ -245,6 +284,8 @@ private void testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults( String node_type, double expected_seeks, double expected_nexts, + double expected_index_roundtrips, + double expected_table_roundtrips, Integer expected_docdb_result_width) throws Exception { try { testExplainDebug(stmt, query, @@ -255,6 +296,8 @@ private void testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults( .indexName(index_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) + .estimatedIndexRoundtrips(expectedRoundtripsRange(expected_index_roundtrips)) + .estimatedTableRoundtrips(expectedRoundtripsRange(expected_table_roundtrips)) .estimatedDocdbResultWidth(Checkers.equal(expected_docdb_result_width)) .build()) .build()); @@ -270,6 +313,7 @@ private void testSeekAndNextEstimationSeqScanHelper( Statement stmt, String query, String table_name, double expected_seeks, double expected_nexts, + double expected_table_roundtrips, long expected_docdb_result_width) throws Exception { try { testExplainDebug(stmt, query, @@ -279,6 +323,8 @@ private void testSeekAndNextEstimationSeqScanHelper( .relationName(table_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) + .estimatedIndexRoundtrips(expectedRoundtripsRange(0)) + .estimatedTableRoundtrips(expectedRoundtripsRange(expected_table_roundtrips)) .estimatedDocdbResultWidth(Checkers.equal(expected_docdb_result_width)) .metric(METRIC_NUM_DB_SEEK, expectedSeeksRange(expected_seeks)) .metric(METRIC_NUM_DB_NEXT, expectedNextsRange(expected_nexts)) @@ -296,6 +342,7 @@ private void testSeekAndNextEstimationSeqScanHelper_IgnoreActualResults( Statement stmt, String query, String table_name, double expected_seeks, double expected_nexts, + double expected_table_roundtrips, long expected_docdb_result_width) throws Exception { try { testExplainDebug(stmt, query, @@ -305,6 +352,7 @@ private void testSeekAndNextEstimationSeqScanHelper_IgnoreActualResults( .relationName(table_name) .estimatedSeeks(expectedSeeksRange(expected_seeks)) .estimatedNextsAndPrevs(expectedNextsRange(expected_nexts)) + .estimatedTableRoundtrips(expectedRoundtripsRange(expected_table_roundtrips)) .estimatedDocdbResultWidth(Checkers.equal(expected_docdb_result_width)) .build()) .build()); @@ -401,8 +449,19 @@ public void setUp() throws Exception { + "generate_series(1, 20) s4", T4_NO_PKEY_NAME)); stmt.execute(String.format("CREATE STATISTICS %s_stx ON k1, k2, k3, k4 FROM %s", T4_NO_PKEY_NAME, T4_NO_PKEY_NAME)); - stmt.execute(String.format("ANALYZE %s, %s, %s, %s, %s, %s;", - T1_NAME, T2_NAME, T3_NAME, T4_NAME, T4_NO_PKEY_NAME, T2_NO_PKEY_NAME)); + // Create a non-colocated table. 
+ stmt.execute(String.format("CREATE TABLE %s (k1 INT, k2 INT) " + + "WITH (colocated = false)", T5_NAME)); + stmt.execute(String.format("CREATE INDEX %s on %s (k1 ASC)", + T5_K1_INDEX_NAME, T5_NAME)); + stmt.execute(String.format("CREATE INDEX %s on %s (k2 ASC)", + T5_K2_INDEX_NAME, T5_NAME)); + stmt.execute(String.format("INSERT INTO %s SELECT k1, k2 FROM %s", + T5_NAME, T2_NO_PKEY_NAME)); + stmt.execute(String.format("CREATE STATISTICS %s_stx ON k1, k2 FROM %s", + T5_NAME, T5_NAME)); + stmt.execute(String.format("ANALYZE %s, %s, %s, %s, %s, %s, %s;", + T1_NAME, T2_NAME, T3_NAME, T4_NAME, T4_NO_PKEY_NAME, T2_NO_PKEY_NAME, T5_NAME)); stmt.execute("SET yb_enable_optimizer_statistics = true"); stmt.execute("SET yb_enable_base_scans_cost_model = true"); stmt.execute("SET yb_bnl_batch_size = 1024"); @@ -461,175 +520,180 @@ public void testSeekNextEstimationIndexScan() throws Exception { } testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8)", T1_NAME, T1_NAME), - T1_NAME, T1_INDEX_NAME, 2, 4, 5); + T1_NAME, T1_INDEX_NAME, 2, 4, 1, 0, 5); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12)", T1_NAME, T1_NAME), - T1_NAME, T1_INDEX_NAME, 3, 7, 5); + T1_NAME, T1_INDEX_NAME, 3, 7, 1, 0, 5); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12, 16)", T1_NAME, T1_NAME), - T1_NAME, T1_INDEX_NAME, 4, 10, 5); + T1_NAME, T1_INDEX_NAME, 4, 10, 1, 0, 5); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12, 16)", T2_NAME, T2_NAME), - T2_NAME, T2_INDEX_NAME, 4, 86, 10); + T2_NAME, T2_INDEX_NAME, 4, 86, 1, 0, 10); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k2 IN (4, 8, 12, 16)", T2_NAME, T2_NAME), - T2_NAME, T2_INDEX_NAME, 101, 280, 10); + T2_NAME, T2_INDEX_NAME, 101, 280, 1, 0, 10); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12) AND k4 IN (4, 8, 12, 16)", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 6007, 16808, 20); + T4_NAME, T4_INDEX_NAME, 6007, 16808, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k2 IN (4, 8, 12, 16) AND k4 IN (4, 8, 12)", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 6505, 17804, 20); + T4_NAME, T4_INDEX_NAME, 6505, 17804, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12, 16) AND k2 IN (4, 8, 12, 16)", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 22, 6440, 20); + T4_NAME, T4_INDEX_NAME, 22, 6440, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14", T1_NAME, T1_NAME), - T1_NAME, T1_INDEX_NAME, 1, 10, 5); + T1_NAME, T1_INDEX_NAME, 1, 10, 1, 0, 5); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14", T2_NAME, T2_NAME), - T2_NAME, T2_INDEX_NAME, 1, 200, 10); + T2_NAME, T2_INDEX_NAME, 1, 200, 1, 0, 10); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14", T3_NAME, T3_NAME), - T3_NAME, T3_INDEX_NAME, 5, 4000, 15); + T3_NAME, T3_INDEX_NAME, 5, 4000, 1, 0, 15); 
testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 79, 80000, 20); + T4_NAME, T4_INDEX_NAME, 79, 80000, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k2 >= 4 and k2 < 14", T2_NAME, T2_NAME), - T2_NAME, T2_INDEX_NAME, 41, 280, 10); + T2_NAME, T2_INDEX_NAME, 41, 280, 1, 0, 10); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k3 >= 4 and k3 < 14", T3_NAME, T3_NAME), - T3_NAME, T3_INDEX_NAME, 804, 5600, 15); + T3_NAME, T3_INDEX_NAME, 804, 5600, 1, 0, 15); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k4 >= 4 and k4 < 14", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 16079, 112000, 20); + T4_NAME, T4_INDEX_NAME, 16079, 112000, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k3 >= 4 and k3 < 14", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 879, 81600, 20); + T4_NAME, T4_INDEX_NAME, 879, 81600, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k2 >= 4 and k2 < 14", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 120, 80000, 20); + T4_NAME, T4_INDEX_NAME, 120, 80000, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14 and k3 >= 4 and k3 < 14", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 440, 40800, 20); + T4_NAME, T4_INDEX_NAME, 440, 40800, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 = 4 and k2 IN (4, 8, 12, 16)", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 5, 1606, 20); + T4_NAME, T4_INDEX_NAME, 5, 1606, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12, 16) and k2 = 4", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 5, 1606, 20); + T4_NAME, T4_INDEX_NAME, 5, 1606, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k3 IN (4, 8, 12, 16) and k4 = 4", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 2002, 4000, 20); + T4_NAME, T4_INDEX_NAME, 2002, 4000, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 5 and k2 IN (4, 8, 12, 16)", T2_NAME, T2_NAME), - T2_NAME, T2_INDEX_NAME, 5, 8, 10); + T2_NAME, T2_INDEX_NAME, 5, 8, 1, 0, 10); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 6 and k2 IN (4, 8, 12, 16)", T2_NAME, T2_NAME), - T2_NAME, T2_INDEX_NAME, 10, 18, 10); + T2_NAME, T2_INDEX_NAME, 10, 18, 1, 0, 10); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14 and k2 IN (4, 8, 12, 16)", T2_NAME, T2_NAME), - T2_NAME, T2_INDEX_NAME, 50, 98, 10); + T2_NAME, T2_INDEX_NAME, 50, 98, 1, 0, 10); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 7 and k3 IN (4, 8, 12, 16)", T3_NAME, T3_NAME), - T3_NAME, T3_INDEX_NAME, 301, 600, 15); + T3_NAME, T3_INDEX_NAME, 301, 600, 1, 0, 15); testSeekAndNextEstimationIndexScanHelper(stmt, 
String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k2 >= 4 and k2 < 7 and k4 IN (4, 8, 12)", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 4844, 9680, 20); + T4_NAME, T4_INDEX_NAME, 4844, 9680, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (1, 4, 7, 10)", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 35, 32037, 20); + T4_NAME, T4_INDEX_NAME, 35, 32037, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k2 >= 4", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 129, 115744, 20); + T4_NAME, T4_INDEX_NAME, 129, 115744, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k1 < 14 AND k2 >= 4", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 76, 68084, 20); + T4_NAME, T4_INDEX_NAME, 76, 68084, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k1 < 14 AND k2 >= 4 AND k2 < 14", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 59, 40077, 20); + T4_NAME, T4_INDEX_NAME, 59, 40077, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k2 >= 4 AND k2 < 14", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 100, 68132, 20); + T4_NAME, T4_INDEX_NAME, 100, 68132, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (1, 4, 7, 10) AND k2 IN (1, 4, 7, 10)", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 22, 6436, 20); + T4_NAME, T4_INDEX_NAME, 22, 6436, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k3 >= 4", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 453, 116392, 20); + T4_NAME, T4_INDEX_NAME, 453, 116392, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k1 < 14 AND k3 >= 4 AND k3 < 14", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 440, 40839, 20); + T4_NAME, T4_INDEX_NAME, 440, 40839, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, String.format("/*+IndexScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k3 >= 4 AND k3 < 14", T4_NAME, T4_NAME), - T4_NAME, T4_INDEX_NAME, 747, 69426, 20); + T4_NAME, T4_INDEX_NAME, 747, 69426, 1, 0, 20); testSeekAndNextEstimationIndexOnlyScanHelper(stmt, String.format("/*+IndexOnlyScan(%s %s)*/ SELECT k2 FROM %s " + "WHERE k1 >= 4 AND k3 >= 4", T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, T4_NO_PKEY_NAME), - T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 452, 116394, 5); + T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 452, 116394, 1, 0, 5); testSeekAndNextEstimationIndexOnlyScanHelper(stmt, String.format("/*+IndexOnlyScan(%s %s)*/ SELECT k2 FROM %s " + "WHERE k1 >= 4 AND k3 = 4", T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, T4_NO_PKEY_NAME), - T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 686, 8168, 5); + T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 686, 8168, 1, 0, 5); testSeekAndNextEstimationIndexOnlyScanHelper(stmt, String.format("/*+IndexOnlyScan(%s %s)*/ SELECT k2 FROM %s " + "WHERE k1 >= 4 AND k3 IN (4, 8, 12)", T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, T4_NO_PKEY_NAME), - T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 1379, 23141, 5); + T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 1379, 23141, 1, 0, 5); testSeekAndNextEstimationIndexOnlyScanHelper(stmt, 
String.format("/*+IndexOnlyScan(%s %s)*/ SELECT k2 FROM %s " + "WHERE k1 = 4 AND k3 IN (4, 8, 12)", T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, T4_NO_PKEY_NAME), - T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 81, 1363, 5); + T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 81, 1363, 1, 0, 5); testSeekAndNextEstimationIndexOnlyScanHelper(stmt, String.format("/*+IndexOnlyScan(%s %s)*/ SELECT k2 FROM %s " + "WHERE k1 IN (4, 8, 12) AND k3 IN (4, 8, 12)", T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, T4_NO_PKEY_NAME), - T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 246, 4091, 5); + T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 246, 4091, 1, 0, 5); testSeekAndNextEstimationIndexOnlyScanHelper(stmt, String.format("/*+IndexOnlyScan(%s %s)*/ SELECT k2 FROM %s " + "WHERE k1 IN (4, 8, 12) AND k3 >= 4", T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, T4_NO_PKEY_NAME), - T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 82, 20547, 5); + T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 82, 20547, 1, 0, 5); testSeekAndNextEstimationIndexOnlyScanHelper(stmt, String.format("/*+IndexOnlyScan(%s %s)*/ SELECT k2 FROM %s " + "WHERE k1 IN (4, 8, 12) AND k3 = 4", T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, T4_NO_PKEY_NAME), - T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 124, 1389, 5); + T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 124, 1389, 1, 0, 5); testSeekAndNextEstimationIndexOnlyScanHelper(stmt, String.format(" SELECT k2 FROM %s " + "WHERE k1 = 4 AND k3 = 4", T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, T4_NO_PKEY_NAME), - T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 40, 482, 5); + T4_NO_PKEY_NAME, T4_NO_PKEY_SINDEX_3_NAME, 40, 482, 1, 0, 5); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, String.format("/*+IndexScan(%s %s)*/ SELECT * FROM %s WHERE k1 = 4", T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K1_NAME, T2_NO_PKEY_NAME), - T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K1_NAME, 21, 20, 10); + T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K1_NAME, 21, 20, 0, 0, 10); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, String.format("/*+IndexScan(%s %s)*/ SELECT * FROM %s WHERE k1 >= 4", T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K1_NAME, T2_NO_PKEY_NAME), - T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K1_NAME, 341, 340, 10); + T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K1_NAME, 341, 340, 0, 0, 10); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, String.format("/*+IndexScan(%s %s)*/ SELECT * FROM %s WHERE k1 IN (4, 8, 12)", T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K1_NAME, T2_NO_PKEY_NAME), - T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K1_NAME, 63, 63, 10); + T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K1_NAME, 63, 63, 0, 0, 10); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, String.format("/*+IndexScan(%s %s)*/ SELECT * FROM %s WHERE k2 = 4", T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K2_NAME, T2_NO_PKEY_NAME), - T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K2_NAME, 21, 20, 10); + T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K2_NAME, 21, 20, 0, 0, 10); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, String.format("/*+IndexScan(%s %s)*/ SELECT * FROM %s WHERE k2 >= 4", T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K2_NAME, T2_NO_PKEY_NAME), - T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K2_NAME, 341, 340, 10); + T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K2_NAME, 341, 340, 0, 0, 10); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, String.format("/*+IndexScan(%s %s)*/ SELECT * FROM %s WHERE k2 IN (4, 8, 12)", T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K2_NAME, T2_NO_PKEY_NAME), - T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K2_NAME, 63, 63, 10); + T2_NO_PKEY_NAME, T2_NO_PKEY_SINDEX_K2_NAME, 
63, 63, 0, 0, 10); + // Try a non-colocated table with a secondary index. + testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, + String.format("/*+IndexScan(%s %s)*/ SELECT * FROM %s " + + "WHERE k1 < 10 /* t5 query 1 */", T5_NAME, T5_K1_INDEX_NAME, T5_NAME), + T5_NAME, T5_K1_INDEX_NAME, 93, 450, 1, 1, 10); } } @@ -902,7 +966,7 @@ public void testSeekNextEstimationBitmapScanExceedingWorkMem() throws Exception final String query = "/*+ %s(t) */ SELECT * FROM %s AS t WHERE %s >= 4 AND %s >= 4"; testSeekAndNextEstimationSeqScanHelper(stmt, String.format(query, "SeqScan", T4_NAME, "k1", "k2"), - T4_NAME, estimated_seeks, estimated_nexts, 20); + T4_NAME, estimated_seeks, estimated_nexts, 1, 20); testSeekAndNextEstimationBitmapScanHelper(stmt, String.format(query, "BitmapScan", T4_NAME, "k1", "k2"), T4_NAME, estimated_seeks, estimated_nexts, 20, @@ -910,7 +974,7 @@ public void testSeekNextEstimationBitmapScanExceedingWorkMem() throws Exception testSeekAndNextEstimationSeqScanHelper(stmt, String.format(query, "SeqScan", T4_NAME, "k1", "k3"), - T4_NAME, estimated_seeks, estimated_nexts, 20); + T4_NAME, estimated_seeks, estimated_nexts, 1, 20); testSeekAndNextEstimationBitmapScanHelper(stmt, String.format(query, "BitmapScan", T4_NAME, "k1", "k3"), T4_NAME, estimated_seeks, estimated_nexts, 20, @@ -918,7 +982,7 @@ public void testSeekNextEstimationBitmapScanExceedingWorkMem() throws Exception testSeekAndNextEstimationSeqScanHelper(stmt, String.format(query, "SeqScan", T4_NAME, "k1", "k4"), - T4_NAME, estimated_seeks, estimated_nexts, 20); + T4_NAME, estimated_seeks, estimated_nexts, 1, 20); testSeekAndNextEstimationBitmapScanHelper(stmt, String.format(query, "BitmapScan", T4_NAME, "k1", "k4"), T4_NAME, estimated_seeks, estimated_nexts, 20, @@ -932,109 +996,109 @@ public void testSeekNextEstimationSeqScan() throws Exception { stmt.execute(String.format("SET enable_indexscan=off")); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8)", T1_NAME, T1_NAME), - T1_NAME, 1, 19, 5); + T1_NAME, 1, 19, 1, 5); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12)", T1_NAME, T1_NAME), - T1_NAME, 1, 19, 5); + T1_NAME, 1, 19, 1, 5); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12, 16)", T1_NAME, T1_NAME), - T1_NAME, 1, 19, 5); + T1_NAME, 1, 19, 1, 5); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12, 16)", T2_NAME, T2_NAME), - T2_NAME, 1, 399, 10); + T2_NAME, 1, 399, 1, 10); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k2 IN (4, 8, 12, 16)", T2_NAME, T2_NAME), - T2_NAME, 1, 399, 10); + T2_NAME, 1, 399, 1, 10); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12) AND k4 IN (4, 8, 12, 16)", T4_NAME, T4_NAME), - T4_NAME, 5, 160003, 20); + T4_NAME, 5, 160003, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k2 IN (4, 8, 12, 16) AND k4 IN (4, 8, 12)", T4_NAME, T4_NAME), - T4_NAME, 5, 160003, 20); + T4_NAME, 5, 160003, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12, 16) AND k2 IN (4, 8, 12, 16)", T4_NAME, T4_NAME), - T4_NAME, 7, 160005, 20); + T4_NAME, 7, 
160005, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14", T1_NAME, T1_NAME), - T1_NAME, 1, 19, 5); + T1_NAME, 1, 19, 1, 5); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14", T2_NAME, T2_NAME), - T2_NAME, 1, 399, 10); + T2_NAME, 1, 399, 1, 10); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14", T3_NAME, T3_NAME), - T3_NAME, 4, 8002, 15); + T3_NAME, 4, 8002, 1, 15); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14", T4_NAME, T4_NAME), - T4_NAME, 79, 160077, 20); + T4_NAME, 79, 160077, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k2 >= 4 and k2 < 14", T2_NAME, T2_NAME), - T2_NAME, 1, 399, 10); + T2_NAME, 1, 399, 1, 10); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k3 >= 4 and k3 < 14", T3_NAME, T3_NAME), - T3_NAME, 4, 8002, 15); + T3_NAME, 4, 8002, 1, 15); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k4 >= 4 and k4 < 14", T4_NAME, T4_NAME), - T4_NAME, 79, 160077, 20); + T4_NAME, 79, 160077, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k3 >= 4 and k3 < 14", T4_NAME, T4_NAME), - T4_NAME, 79, 160077, 20); + T4_NAME, 79, 160077, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k2 >= 4 and k2 < 14", T4_NAME, T4_NAME), - T4_NAME, 79, 160077, 20); + T4_NAME, 79, 160077, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14 and k3 >= 4 and k3 < 14", T4_NAME, T4_NAME), - T4_NAME, 40, 160038, 20); + T4_NAME, 40, 160038, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 = 4 and k2 IN (4, 8, 12, 16)", T4_NAME, T4_NAME), - T4_NAME, 2, 160000, 20); + T4_NAME, 2, 160000, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (4, 8, 12, 16) and k2 = 4", T4_NAME, T4_NAME), - T4_NAME, 2, 160000, 20); + T4_NAME, 2, 160000, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k3 IN (4, 8, 12, 16) and k4 = 4", T4_NAME, T4_NAME), - T4_NAME, 2, 160000, 20); + T4_NAME, 2, 160000, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 5 and k2 IN (4, 8, 12, 16)", T2_NAME, T2_NAME), - T2_NAME, 1, 399, 10); + T2_NAME, 1, 399, 1, 10); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 6 and k2 IN (4, 8, 12, 16)", T2_NAME, T2_NAME), - T2_NAME, 1, 399, 10); + T2_NAME, 1, 399, 1, 10); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 14 and k2 IN (4, 8, 12, 16)", T2_NAME, T2_NAME), - T2_NAME, 1, 399, 10); + T2_NAME, 1, 399, 1, 10); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 and k1 < 7 and k3 IN (4, 8, 12, 16)", T3_NAME, T3_NAME), - T3_NAME, 2, 8000, 15); 
+ T3_NAME, 2, 8000, 1, 15); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k2 >= 4 and k2 < 7 and k4 IN (4, 8, 12)", T4_NAME, T4_NAME), - T4_NAME, 4, 160002, 20); + T4_NAME, 4, 160002, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (1, 4, 7, 10)", T4_NAME, T4_NAME), - T4_NAME, 32, 160029, 20); + T4_NAME, 32, 160029, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k2 >= 4", T4_NAME, T4_NAME), - T4_NAME, 114, 160112, 20); + T4_NAME, 114, 160112, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k1 < 14 AND k2 >= 4", T4_NAME, T4_NAME), - T4_NAME, 67, 160065, 20); + T4_NAME, 67, 160065, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k1 < 14 AND k2 >= 4 AND k2 < 14", T4_NAME, T4_NAME), - T4_NAME, 40, 160038, 20); + T4_NAME, 40, 160038, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k2 >= 4 AND k2 < 14", T4_NAME, T4_NAME), - T4_NAME, 67, 160065, 20); + T4_NAME, 67, 160065, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 IN (1, 4, 7, 10) AND k2 IN (1, 4, 7, 10)", T4_NAME, T4_NAME), - T4_NAME, 5, 160003, 20); + T4_NAME, 5, 160003, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k3 >= 4", T4_NAME, T4_NAME), - T4_NAME, 114, 160112, 20); + T4_NAME, 114, 160112, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k1 < 14 AND k3 >= 4 AND k3 < 14", T4_NAME, T4_NAME), - T4_NAME, 40, 160038, 20); + T4_NAME, 40, 160038, 1, 20); testSeekAndNextEstimationSeqScanHelper(stmt, String.format("/*+SeqScan(%s)*/ SELECT * " + "FROM %s WHERE k1 >= 4 AND k3 >= 4 AND k3 < 14", T4_NAME, T4_NAME), - T4_NAME, 67, 160065, 20); + T4_NAME, 67, 160065, 1, 20); } } @@ -1052,14 +1116,14 @@ public void testSeekNextEstimationStorageIndexFilters() throws Exception { */ testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, "/*+IndexScan(test test_index_k1) */ SELECT * FROM test WHERE k1 > 50000 and v1 > 80000", - "test", "test_index_k1", 50000, 50000, 10); + "test", "test_index_k1", 50000, 50000, 0, 0, 10); /* The filter on v1 will be executed on the included column in test_index_k1_v1. As a result, * fewer seeks will be needed on the base table. 
*/ testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, "/*+IndexScan(test test_index_k1_v1) */ SELECT * FROM test WHERE k1 > 50000 and v1 > 80000", - "test", "test_index_k1_v1", 10000, 50000, 10); + "test", "test_index_k1_v1", 10000, 50000, 0, 0, 10); } } @@ -1084,27 +1148,27 @@ public void testSeekNextEstimationSeekForwardOptimization() throws Exception { */ testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, "/*+IndexScan(t4)*/ SELECT * FROM t4 WHERE k2 IN (4, 5, 6, 7)", - T4_NAME, T4_INDEX_NAME, 132, 32200, 20); + T4_NAME, T4_INDEX_NAME, 132, 32200, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, "/*+IndexScan(t4)*/ SELECT * FROM t4 WHERE k2 IN (4, 6, 8, 10)", - T4_NAME, T4_INDEX_NAME, 132, 32200, 20); + T4_NAME, T4_INDEX_NAME, 132, 32200, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, "/*+IndexScan(t4)*/ SELECT * FROM t4 WHERE k4 IN (4, 5, 6, 7)", - T4_NAME, T4_INDEX_NAME, 40031, 80000, 20); + T4_NAME, T4_INDEX_NAME, 40031, 80000, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, "/*+IndexScan(t4)*/ SELECT * FROM t4 WHERE k4 IN (4, 6, 8, 10)", - T4_NAME, T4_INDEX_NAME, 40031, 80000, 20); + T4_NAME, T4_INDEX_NAME, 40031, 80000, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, "/*+IndexScan(t4)*/ SELECT * FROM t4 WHERE k4 IN (4, 7, 10, 13)", - T4_NAME, T4_INDEX_NAME, 40031, 80000, 20); + T4_NAME, T4_INDEX_NAME, 40031, 80000, 1, 0, 20); testSeekAndNextEstimationIndexScanHelper(stmt, "/*+IndexScan(t4)*/ SELECT * FROM t4 WHERE k4 IN (4, 8, 12, 16)", - T4_NAME, T4_INDEX_NAME, 40031, 80000, 20); + T4_NAME, T4_INDEX_NAME, 40031, 80000, 1, 0, 20); } } @@ -1130,16 +1194,16 @@ public void testSeekNextEstimation25862IntegerOverflow() throws Exception { testSeekAndNextEstimationSeqScanHelper_IgnoreActualResults(stmt, "/*+ SeqScan(t_25862) */ SELECT * FROM t_25862 WHERE k1 > 0", - "t_25862", 1302084.0, 4001302082.0, 2); + "t_25862", 1302084.0, 4001302082.0, 1, 2); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, "/*+ IndexScan(t_25862 t_25862_pkey) */ SELECT * FROM t_25862 WHERE k1 > 0", - "t_25862", "t_25862_pkey", 1302084.0, 1333333334.0, 2); + "t_25862", "t_25862_pkey", 1302084.0, 1333333334.0, 1, 0, 2); testSeekAndNextEstimationIndexScanHelper_IgnoreActualResults(stmt, "/*+ IndexScan(t_25862 t_25862_idx) */ SELECT * FROM t_25862 WHERE v1 > 0", - "t_25862", "t_25862_idx", 1334635417.0, 1333333334.0, 2); + "t_25862", "t_25862_idx", 1334635417.0, 1333333334.0, 0, 0, 2); testSeekAndNextEstimationIndexOnlyScanHelper_IgnoreActualResults(stmt, "/*+ IndexOnlyScan(t_25862 t_25862_idx) */ SELECT v1 FROM t_25862 WHERE v1 > 0", - "t_25862", "t_25862_idx", 1302084.0, 1333333334.0, 1); + "t_25862", "t_25862_idx", 1302084.0, 1333333334.0, 1, 0, 1); } } diff --git a/src/postgres/src/backend/commands/explain.c b/src/postgres/src/backend/commands/explain.c index 156786bbc65f..09a541636b8e 100644 --- a/src/postgres/src/backend/commands/explain.c +++ b/src/postgres/src/backend/commands/explain.c @@ -4860,6 +4860,19 @@ show_yb_planning_stats(YbPlanInfo *planinfo, ExplainState *es) planinfo->estimated_num_seeks, 0, es); ExplainPropertyFloat("Estimated Nexts And Prevs", NULL, planinfo->estimated_num_nexts_prevs, 0, es); + + /* + * YB_TODO(#27210): Do not print values of estimated_num_table_result_pages + * or estimated_num_index_result_pages if == 0. 
+ */ + if (planinfo->estimated_num_table_result_pages >= 0) + ExplainPropertyFloat("Estimated Table Roundtrips", NULL, + planinfo->estimated_num_table_result_pages, 0, es); + + if (planinfo->estimated_num_index_result_pages >= 0) + ExplainPropertyFloat("Estimated Index Roundtrips", NULL, + planinfo->estimated_num_index_result_pages, 0, es); + ExplainPropertyInteger("Estimated Docdb Result Width", NULL, planinfo->estimated_docdb_result_width, es); } diff --git a/src/postgres/src/backend/optimizer/path/allpaths.c b/src/postgres/src/backend/optimizer/path/allpaths.c index b029118afa81..4cc4797ec672 100644 --- a/src/postgres/src/backend/optimizer/path/allpaths.c +++ b/src/postgres/src/backend/optimizer/path/allpaths.c @@ -5080,6 +5080,7 @@ ybTracePath(PlannerInfo *root, Path *path, char *msg) const char *ptype; bool join = false; + bool index = false; switch (nodeTag(path)) { @@ -5120,6 +5121,7 @@ ybTracePath(PlannerInfo *root, Path *path, char *msg) break; case T_IndexPath: ptype = "IdxScan"; + index = true; break; case T_BitmapHeapPath: ptype = "BitmapHeapScan"; @@ -5234,6 +5236,14 @@ ybTracePath(PlannerInfo *root, Path *path, char *msg) appendStringInfoSpaces(&buf, 2); appendStringInfo(&buf, "%s (NODE %u , hinted = %s)\n", ptype, path->ybUniqueId, path->ybIsHinted ? "true" : "false"); + if (index) + { + IndexPath *indexPath = (IndexPath *) path; + char *indexName = get_rel_name(indexPath->indexinfo->indexoid); + appendStringInfoSpaces(&buf, 4); + appendStringInfo(&buf, "index name : %s\n", indexName); + } + appendStringInfoSpaces(&buf, 4); appendStringInfo(&buf, "parallel aware = %s , parallel safe = %s, parallel workers = %d\n", path->parallel_aware? "true" : "false", path->parallel_safe ? "true" : "false", path->parallel_workers); diff --git a/src/postgres/src/backend/optimizer/path/costsize.c b/src/postgres/src/backend/optimizer/path/costsize.c index b5d93aade708..811460d0c32b 100644 --- a/src/postgres/src/backend/optimizer/path/costsize.c +++ b/src/postgres/src/backend/optimizer/path/costsize.c @@ -6705,7 +6705,7 @@ yb_compute_result_transfer_cost(double result_tuples, Cost roundtrip_cost, Cost transfer_cost) { - int num_result_pages; + double num_result_pages; double result_page_size; /* Network costs */ @@ -7389,6 +7389,8 @@ yb_cost_seqscan(Path *path, PlannerInfo *root, RelOptInfo *baserel, path->yb_plan_info.estimated_num_nexts_prevs = num_nexts; path->yb_plan_info.estimated_num_seeks = num_seeks; + path->yb_plan_info.estimated_num_table_result_pages = num_result_pages; + path->yb_plan_info.estimated_num_index_result_pages = 0; run_cost += (num_seeks * per_seek_cost) + (num_nexts * per_next_cost); @@ -8387,6 +8389,8 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, baserel, baserel_oid); path->yb_plan_info.estimated_docdb_result_width = docdb_result_width; + double index_result_num_pages = 0; + /* * Compute the cost of transferring results over network from DocDB to * pggate. 
@@ -8433,7 +8437,6 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, } Cost baserel_ybctid_transfer_cost; - double index_result_num_pages; if (path->path.parallel_workers > 0) { @@ -8473,6 +8476,8 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, (1 - 1 / index_result_num_pages))); } + double num_baserel_result_pages = 0; + if (!is_primary_index && !index_only && !baserel_is_colocated) { /* @@ -8494,7 +8499,6 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, /* Add cost of fetching result from base table */ startup_cost += baserel_roundtrip_cost; - double num_baserel_result_pages; if (path->path.parallel_workers > 0) { num_baserel_result_pages = ceil(num_baserel_result_rows * @@ -8582,6 +8586,8 @@ yb_cost_index(IndexPath *path, PlannerInfo *root, double loop_count, path->yb_plan_info.estimated_num_nexts_prevs = num_nexts_prevs; path->yb_plan_info.estimated_num_seeks = num_seeks; + path->yb_plan_info.estimated_num_table_result_pages = num_baserel_result_pages; + path->yb_plan_info.estimated_num_index_result_pages = index_result_num_pages; /* Local filter costs */ cost_qual_eval(&qual_cost, local_clauses, root); @@ -8905,6 +8911,19 @@ yb_cost_bitmap_table_scan(Path *path, PlannerInfo *root, RelOptInfo *baserel, num_seeks = tuples_scanned; num_nexts = (max_nexts_to_avoid_seek + 1) * tuples_scanned; + path->yb_plan_info.estimated_num_nexts_prevs = num_nexts; + path->yb_plan_info.estimated_num_seeks = num_seeks; + + /* + * Initialize to -1 to indicate the values are not set. + * + * YB_TODO(#27232) : Enhance EXPLAIN to show round trips for + * bitmap scans and check to see if we still need to set these + * valuess to -1. + */ + path->yb_plan_info.estimated_num_table_result_pages = -1; + path->yb_plan_info.estimated_num_index_result_pages = -1; + run_cost += (num_seeks * per_seek_cost) + (num_nexts * per_next_cost); yb_get_roundtrip_transfer_costs(baserel_tablespace_id, diff --git a/src/postgres/src/include/nodes/pathnodes.h b/src/postgres/src/include/nodes/pathnodes.h index a9e028972f0d..21ed498b9ce5 100644 --- a/src/postgres/src/include/nodes/pathnodes.h +++ b/src/postgres/src/include/nodes/pathnodes.h @@ -1271,6 +1271,8 @@ typedef struct YbPlanInfo double estimated_num_nexts_prevs; double estimated_num_seeks; int estimated_docdb_result_width; + double estimated_num_table_result_pages; + double estimated_num_index_result_pages; } YbPlanInfo; /* diff --git a/src/postgres/third-party-extensions/pg_hint_plan/pg_hint_plan.c b/src/postgres/third-party-extensions/pg_hint_plan/pg_hint_plan.c index f7e36ee471cb..eba08685c72b 100644 --- a/src/postgres/third-party-extensions/pg_hint_plan/pg_hint_plan.c +++ b/src/postgres/third-party-extensions/pg_hint_plan/pg_hint_plan.c @@ -3486,8 +3486,9 @@ pg_hint_plan_planner(Query *parse, const char *query_string, int cursorOptions, if (IsYugaByteEnabled() && yb_enable_planner_trace) { ereport(DEBUG1, - (errmsg("\n++ BEGIN pg_hint_plan_planner\n query: %s", - query_string))); + (errmsg("\n++ BEGIN pg_hint_plan_planner\n query: %s\n" + " query id: %lu", + query_string, parse->queryId))); } /* From 78ab045850d6c6c3cc9bf0b64660947dbf1a8d25 Mon Sep 17 00:00:00 2001 From: Zachary Drudi Date: Wed, 14 May 2025 23:59:01 -0400 Subject: [PATCH 116/146] [#25642, #25682] docdb: Scheduled ysql lease checker in tservers. Summary: This diff adds a scheduled ysql lease check to the tserver. 
The idea is in the event of a network partition the tserver should kill all its hosted pg sessions before the master leader decides the tserver has lost its ysql lease in order to protect DDL/DML correctness. This diff adds a new flag, `tserver_ysql_operation_lease_ttl_ms`, that determines how long a tserver thinks its lease is valid for. This flag must have a lesser value than the corresponding master flag `master_ysql_operation_lease_ttl_ms`. TServers measure the time of their last lease refresh as immediately before sending an ACK'd `RefreshYsqlLease` rpc to the master leader. The default value of `tserver_ysql_operation_lease_ttl_ms` is 28 seconds, which is less than the default value of `master_ysql_operation_lease_ttl_ms`: 30 seconds. **Upgrade/Rollback safety:** The proto changes in this diff are just to the `RefreshYsqlLease` RPC, which is not enabled in any released version. Jira: DB-14893, DB-14940 Test Plan: ``` ./yb_build.sh fastdebug --cxx-test object_lock-test --gtest_filter 'ExternalObjectLockTestOneTS.TabletServerKillsSessionsWhenItsLeaseExpires:ExternalObjectLockTestOneTS.TabletServerKillsSessionsWhenItAcquiresNewLease' ``` Reviewers: amitanand Reviewed By: amitanand Subscribers: sergei, rthallam, ybase, yql, slingam Differential Revision: https://phorge.dev.yugabyte.com/D44001 --- src/yb/integration-tests/mini_cluster.cc | 6 + src/yb/integration-tests/object_lock-test.cc | 62 +++++++++-- src/yb/master/master-test.cc | 22 +++- src/yb/master/master.h | 1 + src/yb/master/master_ddl.proto | 2 + src/yb/master/master_ddl_client.cc | 3 +- src/yb/master/master_ddl_client.h | 2 +- src/yb/master/master_tserver.h | 4 +- src/yb/master/object_lock_info_manager.cc | 15 +++ src/yb/tserver/pg_client_service.cc | 110 ++++++++++++++++--- src/yb/tserver/pg_client_service.h | 2 +- src/yb/tserver/tablet_server.cc | 16 ++- src/yb/tserver/tablet_server.h | 7 +- src/yb/tserver/tablet_server_interface.h | 2 + src/yb/tserver/ysql_lease_poller.cc | 6 +- src/yb/yql/pgwrapper/pg_on_conflict-test.cc | 3 + src/yb/yql/pgwrapper/pg_wrapper.cc | 1 + src/yb/yql/pgwrapper/pg_wrapper_context.h | 1 + 18 files changed, 223 insertions(+), 42 deletions(-) diff --git a/src/yb/integration-tests/mini_cluster.cc b/src/yb/integration-tests/mini_cluster.cc index 48c227032e6c..565a3d452f95 100644 --- a/src/yb/integration-tests/mini_cluster.cc +++ b/src/yb/integration-tests/mini_cluster.cc @@ -110,6 +110,7 @@ DECLARE_int32(transaction_table_num_tablets); DECLARE_int64(rocksdb_compact_flush_rate_limit_bytes_per_sec); DECLARE_string(fs_data_dirs); DECLARE_string(use_private_ip); +DECLARE_bool(TEST_enable_ysql_operation_lease_expiry_check); namespace yb { @@ -217,6 +218,11 @@ Status MiniCluster::StartAsync( // We are testing public/private IPs using mini cluster. So set mode to 'cloud'. ANNOTATE_UNPROTECTED_WRITE(FLAGS_use_private_ip) = "cloud"; + // todo(zdrudi): There are currently use after free issues with how the minicluster handles the + // pg process. The background ysql lease checker can call a method on a null pointer. This is only + // an issue in the test harness so we disable the tserver's ysql op lease check for miniclusters. + ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_enable_ysql_operation_lease_expiry_check) = false; + // This dictates the RF of newly created tables. SetAtomicFlag(options_.num_tablet_servers >= 3 ? 
3 : 1, &FLAGS_replication_factor); FLAGS_memstore_size_mb = 16; diff --git a/src/yb/integration-tests/object_lock-test.cc b/src/yb/integration-tests/object_lock-test.cc index 896b21696039..d700a8d8caf2 100644 --- a/src/yb/integration-tests/object_lock-test.cc +++ b/src/yb/integration-tests/object_lock-test.cc @@ -1108,11 +1108,13 @@ TEST_F(ExternalObjectLockTest, RefreshYsqlLease) { master::MasterDDLClient ddl_client{cluster_->GetLeaderMasterProxy()}; + auto lease_refresh_time_ms = MonoTime::Now().GetDeltaSinceMin().ToMilliseconds(); // Request a lease refresh on behalf of ts with no lease epoch in the request. // Master should respond with our ts' current lease epoch, the acquired lock entries, and // new_lease. - auto info = ASSERT_RESULT( - ddl_client.RefreshYsqlLease(ts->uuid(), ts->instance_id().instance_seqno(), {})); + auto info = ASSERT_RESULT(ddl_client.RefreshYsqlLease( + ts->uuid(), ts->instance_id().instance_seqno(), + lease_refresh_time_ms, {})); ASSERT_TRUE(info.new_lease()); ASSERT_EQ(info.lease_epoch(), kLeaseEpoch); ASSERT_TRUE(info.has_ddl_lock_entries()); @@ -1121,8 +1123,8 @@ TEST_F(ExternalObjectLockTest, RefreshYsqlLease) { // Request a lease refresh on behalf of ts with the incorrect lease epoch in the request. // Expect the master to respond with our ts' current lease epoch, the acquired lock entries, and // new_lease. - info = - ASSERT_RESULT(ddl_client.RefreshYsqlLease(ts->uuid(), ts->instance_id().instance_seqno(), 0)); + info = ASSERT_RESULT(ddl_client.RefreshYsqlLease( + ts->uuid(), ts->instance_id().instance_seqno(), lease_refresh_time_ms, 0)); ASSERT_TRUE(info.new_lease()); ASSERT_EQ(info.lease_epoch(), kLeaseEpoch); ASSERT_TRUE(info.has_ddl_lock_entries()); @@ -1130,8 +1132,8 @@ TEST_F(ExternalObjectLockTest, RefreshYsqlLease) { // Request a lease refresh on behalf of ts with the correct lease epoch in the request. // Expect the master to omit most information and set new_lease to false. - info = ASSERT_RESULT( - ddl_client.RefreshYsqlLease(ts->uuid(), ts->instance_id().instance_seqno(), kLeaseEpoch)); + info = ASSERT_RESULT(ddl_client.RefreshYsqlLease( + ts->uuid(), ts->instance_id().instance_seqno(), lease_refresh_time_ms, kLeaseEpoch)); ASSERT_FALSE(info.new_lease()); ASSERT_FALSE(info.has_ddl_lock_entries()); } @@ -1220,6 +1222,10 @@ TEST_F(ExternalObjectLockTestOneTS, TabletServerKillsSessionsWhenItAcquiresNewLe constexpr std::string_view kTableName = "test_table"; auto conn = ASSERT_RESULT(cluster_->ConnectToDB("yugabyte", kTSIdx)); ASSERT_OK(conn.Execute(Format("CREATE TABLE $0 (k INT PRIMARY KEY, v INT)", kTableName))); + // Disable the tserver's lease expiry check task so acquiring a new lease is what prompts the + // tserver to kill its pg sessions, not the tserver itself deciding its lease has expired. 
+ ASSERT_OK(cluster_->SetFlag( + tablet_server(kTSIdx), "TEST_enable_ysql_operation_lease_expiry_check", "false")); ASSERT_OK(cluster_->SetFlag(tablet_server(kTSIdx), kTServerYsqlLeaseRefreshFlagName, "false")); auto cluster_client = master::MasterClusterClient(cluster_->GetLeaderMasterProxy()); @@ -1247,6 +1253,44 @@ TEST_F(ExternalObjectLockTestOneTS, TabletServerKillsSessionsWhenItAcquiresNewLe timeout, "Wait for tserver to accept new pg sessions")); } +TEST_F(ExternalObjectLockTestOneTS, TabletServerKillsSessionsWhenItsLeaseExpires) { + constexpr size_t kTSIdx = 0; + MonoDelta timeout = MonoDelta::FromSeconds(10); + auto ts_uuid = tablet_server(kTSIdx)->uuid(); + constexpr std::string_view kTableName = "test_table"; + auto conn = ASSERT_RESULT(cluster_->ConnectToDB("yugabyte", kTSIdx)); + ASSERT_OK(conn.Execute(Format("CREATE TABLE $0 (k INT PRIMARY KEY, v INT)", kTableName))); + ASSERT_OK(cluster_->SetFlag(tablet_server(kTSIdx), kTServerYsqlLeaseRefreshFlagName, "false")); + auto cluster_client = + master::MasterClusterClient(cluster_->GetLeaderMasterProxy()); + ASSERT_OK(WaitFor( + [&conn, kTableName]() -> Result { + auto result = conn.FetchRow(Format("SELECT count(*) from $0", kTableName)); + if (result.ok()) { + return false; + } + if (PGSessionKilledStatus(result.status())) { + return true; + } + return result.status(); + }, + timeout, "Wait for pg session to be killed")); + // At this point we shouldn't be able to start a new session. + ExternalClusterPGConnectionOptions conn_options; + conn_options.timeout_secs = 2; + auto conn_result = cluster_->ConnectToDB(std::move(conn_options)); + ASSERT_FALSE(conn_result.ok()); + ASSERT_OK( + cluster_->SetFlag(tablet_server(kTSIdx), kTServerYsqlLeaseRefreshFlagName, "true")); + // Once re-acquiring the lease, the tserver should accept new sessions again. 
+ ASSERT_OK(WaitFor( + [this, kTSIdx]() -> Result { + auto result = cluster_->ConnectToDB("yugabyte", kTSIdx); + return result.ok(); + }, + timeout, "Wait for tserver to accept new pg sessions")); +} + class ExternalObjectLockTestOneTSWithoutLease : public ExternalObjectLockTestOneTS { public: ExternalMiniClusterOptions MakeExternalMiniClusterOptions() override; @@ -1499,8 +1543,10 @@ bool StatusContainsMessage(const Status& status, std::string_view s) { } bool PGSessionKilledStatus(const Status& status) { - constexpr std::string_view message = "server closed the connection unexpectedly"; - return status.IsNetworkError() && StatusContainsMessage(status, message); + constexpr std::string_view closed_message = "server closed the connection unexpectedly"; + constexpr std::string_view shutdown_message = "Object Lock Manager Shutdown"; + return status.IsNetworkError() && (StatusContainsMessage(status, closed_message) || + StatusContainsMessage(status, shutdown_message)); } bool SameCodeAndMessage(const Status& lhs, const Status& rhs) { diff --git a/src/yb/master/master-test.cc b/src/yb/master/master-test.cc index 536b02aad970..85f38daa502c 100644 --- a/src/yb/master/master-test.cc +++ b/src/yb/master/master-test.cc @@ -2845,7 +2845,8 @@ TEST_F(MasterTest, RefreshYsqlLeaseWithoutRegistration) { const char* kTsUUID = "my-ts-uuid"; auto ddl_client = MasterDDLClient{std::move(*proxy_ddl_)}; - auto result = ddl_client.RefreshYsqlLease(kTsUUID, 1, {}); + auto result = ddl_client.RefreshYsqlLease( + kTsUUID, 1, MonoTime::Now().GetDeltaSinceMin().ToMilliseconds(), {}); ASSERT_NOK(result); ASSERT_TRUE(result.status().IsNotFound()); } @@ -2859,26 +2860,37 @@ TEST_F(MasterTest, RefreshYsqlLease) { ASSERT_FALSE(reg_resp1.needs_reregister()); auto ddl_client = MasterDDLClient{std::move(*proxy_ddl_)}; - auto info = ASSERT_RESULT(ddl_client.RefreshYsqlLease(kTsUUID1, /* instance_seqno */ 1, {})); + auto lease_refresh_send_time_ms = MonoTime::Now().GetDeltaSinceMin().ToMilliseconds(); + auto info = ASSERT_RESULT(ddl_client.RefreshYsqlLease( + kTsUUID1, /* instance_seqno */ 1, lease_refresh_send_time_ms, {})); ASSERT_TRUE(info.new_lease()); ASSERT_EQ(info.lease_epoch(), 1); + ASSERT_GT( + info.lease_expiry_time_ms(), + lease_refresh_send_time_ms); // todo(zdrudi): but we need to do this and check the bootstrap entries... // Refresh lease again. Since we omitted current lease epoch, master leader should still say this // is a new lease. - info = ASSERT_RESULT(ddl_client.RefreshYsqlLease(kTsUUID1, /* instance_seqno */ 1, {})); + info = ASSERT_RESULT(ddl_client.RefreshYsqlLease( + kTsUUID1, /* instance_seqno */ 1, lease_refresh_send_time_ms, {})); ASSERT_TRUE(info.new_lease()); ASSERT_EQ(info.lease_epoch(), 1); + ASSERT_GT(info.lease_expiry_time_ms(), lease_refresh_send_time_ms); // Refresh lease again. We included current lease epoch but it's incorrect. - info = ASSERT_RESULT(ddl_client.RefreshYsqlLease(kTsUUID1, /* instance_seqno */ 1, 0)); + info = ASSERT_RESULT( + ddl_client.RefreshYsqlLease(kTsUUID1, /* instance_seqno */ 1, lease_refresh_send_time_ms, 0)); ASSERT_TRUE(info.new_lease()); ASSERT_EQ(info.lease_epoch(), 1); + ASSERT_GT(info.lease_expiry_time_ms(), lease_refresh_send_time_ms); // Refresh lease again. Current lease epoch is correct so master leader should not set new lease // bit. 
- info = ASSERT_RESULT(ddl_client.RefreshYsqlLease(kTsUUID1, /* instance_seqno */ 1, 1)); + info = ASSERT_RESULT( + ddl_client.RefreshYsqlLease(kTsUUID1, /* instance_seqno */ 1, lease_refresh_send_time_ms, 1)); ASSERT_FALSE(info.new_lease()); + ASSERT_GT(info.lease_expiry_time_ms(), lease_refresh_send_time_ms); } Result MasterTest::SendHeartbeat( diff --git a/src/yb/master/master.h b/src/yb/master/master.h index 3ca054a54d03..22d2ecdc5ede 100644 --- a/src/yb/master/master.h +++ b/src/yb/master/master.h @@ -238,6 +238,7 @@ class Master : public tserver::DbServerBase { void RegisterCertificateReloader(tserver::CertificateReloader reloader) override {} void RegisterPgProcessRestarter(std::function restarter) override {} + void RegisterPgProcessKiller(std::function killer) override {} protected: Status RegisterServices(); diff --git a/src/yb/master/master_ddl.proto b/src/yb/master/master_ddl.proto index 16d8f4478f39..9731735288cd 100644 --- a/src/yb/master/master_ddl.proto +++ b/src/yb/master/master_ddl.proto @@ -819,6 +819,7 @@ message RefreshYsqlLeaseRequestPB { // The current lease epoch of the tserver making this request. // Unset if the tserver doesn't think it has a live lease. optional uint64 current_lease_epoch = 2; + optional uint64 local_request_send_time_ms = 3; } message RefreshYsqlLeaseInfoPB { @@ -827,6 +828,7 @@ message RefreshYsqlLeaseInfoPB { // TODO: If this ends up being too big, consider adding a way to break this up // into multiple messages. optional tserver.DdlLockEntriesPB ddl_lock_entries = 3; + optional uint64 lease_expiry_time_ms = 4; } message RefreshYsqlLeaseResponsePB { diff --git a/src/yb/master/master_ddl_client.cc b/src/yb/master/master_ddl_client.cc index 3faed8b9d34e..9b5c2d5fad41 100644 --- a/src/yb/master/master_ddl_client.cc +++ b/src/yb/master/master_ddl_client.cc @@ -70,11 +70,12 @@ Status MasterDDLClient::WaitForCreateNamespaceDone(const NamespaceId& id, MonoDe } Result MasterDDLClient::RefreshYsqlLease( - const std::string& permanent_uuid, int64_t instance_seqno, + const std::string& permanent_uuid, int64_t instance_seqno, uint64_t time_ms, std::optional current_lease_epoch) { RefreshYsqlLeaseRequestPB req; req.mutable_instance()->set_permanent_uuid(permanent_uuid); req.mutable_instance()->set_instance_seqno(instance_seqno); + req.set_local_request_send_time_ms(time_ms); if (current_lease_epoch) { req.set_current_lease_epoch(*current_lease_epoch); } diff --git a/src/yb/master/master_ddl_client.h b/src/yb/master/master_ddl_client.h index 354269c7e306..fb25a10e694a 100644 --- a/src/yb/master/master_ddl_client.h +++ b/src/yb/master/master_ddl_client.h @@ -34,7 +34,7 @@ class MasterDDLClient { Status WaitForCreateNamespaceDone(const NamespaceId& id, MonoDelta timeout); Result RefreshYsqlLease( - const std::string& permanent_uuid, int64_t instance_seqno, + const std::string& permanent_uuid, int64_t instance_seqno, uint64_t time_ms, std::optional current_lease_epoch); private: diff --git a/src/yb/master/master_tserver.h b/src/yb/master/master_tserver.h index 5784e364ef04..7439a4e5a652 100644 --- a/src/yb/master/master_tserver.h +++ b/src/yb/master/master_tserver.h @@ -129,7 +129,9 @@ class MasterTabletServer : public tserver::TabletServerIf, Status RestartPG() const override { return STATUS(NotSupported, "RestartPG not implemented for masters"); } - + Status KillPg() const override { + return STATUS(NotSupported, "KillPg not implemented for masters"); + } const std::string& permanent_uuid() const override; Result GetLocalPgTxnSnapshot( diff --git 
a/src/yb/master/object_lock_info_manager.cc b/src/yb/master/object_lock_info_manager.cc index 8970b8950132..2620d59d32bf 100644 --- a/src/yb/master/object_lock_info_manager.cc +++ b/src/yb/master/object_lock_info_manager.cc @@ -57,6 +57,13 @@ DEFINE_RUNTIME_uint64(master_ysql_operation_lease_ttl_ms, 30 * 1000, "through the YSQL API."); TAG_FLAG(master_ysql_operation_lease_ttl_ms, advanced); +DEFINE_RUNTIME_uint64(ysql_operation_lease_ttl_client_buffer_ms, 2 * 1000, + "The difference between the duration masters and tservers use for ysql " + "operation lease TTLs. This is non-zero to account for clock skew and give " + "tservers time to clean up their existing pg sessions before the master " + "leader ignores them for exclusive table lock requests."); +TAG_FLAG(ysql_operation_lease_ttl_client_buffer_ms, advanced); + DEFINE_NON_RUNTIME_uint64(object_lock_cleanup_interval_ms, 5000, "The interval between runs of the background cleanup task for " "table-level locks held by unresponsive TServers."); @@ -890,6 +897,14 @@ Status ObjectLockInfoManager::Impl::RefreshYsqlLease( if (!FLAGS_enable_ysql_operation_lease && !FLAGS_TEST_enable_object_locking_for_table_locks) { return STATUS(NotSupported, "The ysql lease is currently disabled."); } + if (!req.has_local_request_send_time_ms()) { + return STATUS(InvalidArgument, "Missing required local_request_send_time_ms"); + } + auto master_ttl = GetAtomicFlag(&FLAGS_master_ysql_operation_lease_ttl_ms); + auto buffer = GetAtomicFlag(&FLAGS_ysql_operation_lease_ttl_client_buffer_ms); + CHECK_GT(master_ttl, buffer); + resp.mutable_info()->set_lease_expiry_time_ms( + req.local_request_send_time_ms() + master_ttl - buffer); // Sanity check that the tserver has already registered with the same instance_seqno. RETURN_NOT_OK(master_.ts_manager()->LookupTS(req.instance())); auto object_lock_info = GetOrCreateObjectLockInfo(req.instance().permanent_uuid()); diff --git a/src/yb/tserver/pg_client_service.cc b/src/yb/tserver/pg_client_service.cc index e0424117eae9..1ca0c54b664a 100644 --- a/src/yb/tserver/pg_client_service.cc +++ b/src/yb/tserver/pg_client_service.cc @@ -139,6 +139,10 @@ TAG_FLAG(check_pg_object_id_allocators_interval_secs, advanced); DEFINE_NON_RUNTIME_int64(shmem_exchange_idle_timeout_ms, 2000 * yb::kTimeMultiplier, "Idle timeout interval in milliseconds used by shared memory exchange thread pool."); +DEFINE_test_flag(bool, enable_ysql_operation_lease_expiry_check, true, + "Whether tservers should monitor their ysql op lease and kill their hosted pg " + "sessions when it expires. 
Only available as a flag for tests."); + DECLARE_uint64(cdc_intent_retention_ms); DECLARE_uint64(transaction_heartbeat_usec); DECLARE_int32(cdc_read_rpc_timeout_ms); @@ -533,6 +537,7 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv table_cache_(client_future_), check_expired_sessions_("check_expired_sessions", &messenger->scheduler()), check_object_id_allocators_("check_object_id_allocators", &messenger->scheduler()), + check_ysql_lease_("check_ysql_lease_liveness", &messenger->scheduler()), response_cache_(parent_mem_tracker, metric_entity), instance_id_(permanent_uuid), shared_mem_pool_(parent_mem_tracker, instance_id_), @@ -566,6 +571,7 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv DCHECK(!permanent_uuid.empty()); ScheduleCheckExpiredSessions(CoarseMonoClock::now()); ScheduleCheckObjectIdAllocators(); + ScheduleCheckYsqlLeaseWithNoLease(); if (FLAGS_pg_client_use_shared_memory) { WARN_NOT_OK(SharedExchange::Cleanup(instance_id_), "Cleanup shared memory failed"); } @@ -591,6 +597,7 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv sessions.clear(); check_expired_sessions_.Shutdown(); check_object_id_allocators_.Shutdown(); + check_ysql_lease_.Shutdown(); if (exchange_thread_pool_) { exchange_thread_pool_->Shutdown(); } @@ -1916,33 +1923,98 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv table_cache_.InvalidateDbTables(db_oids_updated, db_oids_deleted); } - void ProcessLeaseUpdate(const master::RefreshYsqlLeaseInfoPB& lease_refresh_info, MonoTime time) { - std::lock_guard lock(mutex_); - last_lease_refresh_time_ = time; - if (lease_refresh_info.new_lease() || lease_epoch_ != lease_refresh_info.lease_epoch()) { - LOG(INFO) << Format( - "Received new lease epoch $0 from the master leader, old lease epoch was $1. Clearing " - "all pg sessions.", - lease_refresh_info.lease_epoch(), lease_epoch_); - lease_epoch_ = lease_refresh_info.lease_epoch(); - auto s = tablet_server_.RestartPG(); - if (!s.ok()) { - LOG(WARNING) << "Failed to restart PG postmaster: " << s; + void ProcessLeaseUpdate(const master::RefreshYsqlLeaseInfoPB& lease_refresh_info) { + { + std::lock_guard lock(mutex_); + lease_expiry_time_ = + CoarseTimePoint{std::chrono::milliseconds(lease_refresh_info.lease_expiry_time_ms())}; + if (lease_expiry_time_ < CoarseMonoClock::Now()) { + // This function is passed the timestamp from before the RefreshYsqlLeaseRpc is sent. So it + // is possible the RPC takes longer than the lease TTL the master gave us, in which case + // this tserver still does not have a live lease. + return; + } + bool had_live_lease = ysql_lease_is_live_; + ysql_lease_is_live_ = true; + if (lease_refresh_info.new_lease() || lease_epoch_ != lease_refresh_info.lease_epoch()) { + LOG(INFO) << Format( + "Received new lease epoch $0 from the master leader. Clearing all pg sessions.", + lease_refresh_info.lease_epoch()); + lease_epoch_ = lease_refresh_info.lease_epoch(); + } else if (!had_live_lease) { + LOG(INFO) << Format( + "Master leader refreshed our lease for epoch $0. We thought this lease had " + "expired but it hadn't. Restarting pg.", + lease_epoch_); + } else { + // Lease was live and is live after this update. The epoch didn't change. Nothing left to + // do. + return; } } + // No need to hold lock while restarting the pg process. 
+ WARN_NOT_OK(tablet_server_.RestartPG(), "Failed to restart PG postmaster."); } YSQLLeaseInfo GetYSQLLeaseInfo() { SharedLock lock(mutex_); YSQLLeaseInfo lease_info; - // todo(zdrudi): For now just return is live if we've ever received a lease. - lease_info.is_live = last_lease_refresh_time_.Initialized(); + lease_info.is_live = ysql_lease_is_live_; if (lease_info.is_live) { lease_info.lease_epoch = lease_epoch_; } return lease_info; } + void ScheduleCheckYsqlLeaseWithNoLease() { + ScheduleCheckYsqlLease(CoarseMonoClock::now() + 1s); + } + + void ScheduleCheckYsqlLease(CoarseTimePoint next_check_time) { + check_ysql_lease_.Schedule( + [this, next_check_time](const Status& status) { + if (!status.ok()) { + return; + } + if (CoarseMonoClock::now() < next_check_time) { + ScheduleCheckYsqlLease(next_check_time); + return; + } + CheckYsqlLeaseStatus(); + }, + next_check_time - CoarseMonoClock::now()); + } + + std::optional CheckYsqlLeaseStatusInner() { + { + std::lock_guard lock(mutex_); + if (!ysql_lease_is_live_) { + return {}; + } + if (CoarseMonoClock::now() < lease_expiry_time_) { + return lease_expiry_time_; + } + ysql_lease_is_live_ = false; + LOG(INFO) << "Lease has expired, killing pg sessions."; + } + // todo(zdrudi): make this a fatal? + WARN_NOT_OK(tablet_server_.KillPg(), "Couldn't stop PG"); + return {}; + } + + void CheckYsqlLeaseStatus() { + if (PREDICT_FALSE(!FLAGS_TEST_enable_ysql_operation_lease_expiry_check)) { + ScheduleCheckYsqlLeaseWithNoLease(); + return; + } + auto lease_expiry = CheckYsqlLeaseStatusInner(); + if (lease_expiry) { + ScheduleCheckYsqlLease(*lease_expiry); + } else { + ScheduleCheckYsqlLeaseWithNoLease(); + } + } + void CleanupSessions( std::vector&& expired_sessions, CoarseTimePoint time) { if (expired_sessions.empty()) { @@ -2451,6 +2523,7 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv rpc::ScheduledTaskTracker check_expired_sessions_ GUARDED_BY(mutex_); CoarseTimePoint check_expired_sessions_time_ GUARDED_BY(mutex_); rpc::ScheduledTaskTracker check_object_id_allocators_; + rpc::ScheduledTaskTracker check_ysql_lease_; PgResponseCache response_cache_; @@ -2484,7 +2557,8 @@ class PgClientServiceImpl::Impl : public LeaseEpochValidator, public SessionProv std::optional cdc_state_table_; PgTxnSnapshotManager txn_snapshot_manager_; - MonoTime last_lease_refresh_time_ GUARDED_BY(mutex_); + CoarseTimePoint lease_expiry_time_ GUARDED_BY(mutex_); + bool ysql_lease_is_live_ GUARDED_BY(mutex_) {false}; uint64_t lease_epoch_ GUARDED_BY(mutex_) = 0; }; @@ -2525,9 +2599,9 @@ Result PgClientServiceImpl::GetLocalPgTxnSnapshot( return impl_->GetLocalPgTxnSnapshot(snapshot_id); } -void PgClientServiceImpl::ProcessLeaseUpdate(const master::RefreshYsqlLeaseInfoPB& - lease_refresh_info, MonoTime time) { - impl_->ProcessLeaseUpdate(lease_refresh_info, time); +void PgClientServiceImpl::ProcessLeaseUpdate( + const master::RefreshYsqlLeaseInfoPB& lease_refresh_info) { + impl_->ProcessLeaseUpdate(lease_refresh_info); } YSQLLeaseInfo PgClientServiceImpl::GetYSQLLeaseInfo() const { diff --git a/src/yb/tserver/pg_client_service.h b/src/yb/tserver/pg_client_service.h index 8a3814415663..aaab6313bbb8 100644 --- a/src/yb/tserver/pg_client_service.h +++ b/src/yb/tserver/pg_client_service.h @@ -137,7 +137,7 @@ class PgClientServiceImpl : public PgClientServiceIf { const std::unordered_set& db_oids_deleted); Result GetLocalPgTxnSnapshot(const PgTxnSnapshotLocalId& snapshot_id); - void ProcessLeaseUpdate(const master::RefreshYsqlLeaseInfoPB& 
lease_refresh_info, MonoTime time); + void ProcessLeaseUpdate(const master::RefreshYsqlLeaseInfoPB& lease_refresh_info); YSQLLeaseInfo GetYSQLLeaseInfo() const; size_t TEST_SessionsCount(); diff --git a/src/yb/tserver/tablet_server.cc b/src/yb/tserver/tablet_server.cc index 17d8ed03e39e..8028076d30d3 100644 --- a/src/yb/tserver/tablet_server.cc +++ b/src/yb/tserver/tablet_server.cc @@ -820,8 +820,7 @@ bool TabletServer::HasBootstrappedLocalLockManager() const { return lock_manager && lock_manager->IsBootstrapped(); } -Status TabletServer::ProcessLeaseUpdate( - const master::RefreshYsqlLeaseInfoPB& lease_refresh_info, MonoTime time) { +Status TabletServer::ProcessLeaseUpdate(const master::RefreshYsqlLeaseInfoPB& lease_refresh_info) { VLOG(2) << __func__; auto lock_manager = ts_local_lock_manager(); if (lease_refresh_info.new_lease() && lock_manager) { @@ -836,7 +835,7 @@ Status TabletServer::ProcessLeaseUpdate( // having it the other way around, and having an old-session that is not reset. auto pg_client_service = pg_client_service_.lock(); if (pg_client_service) { - pg_client_service->impl.ProcessLeaseUpdate(lease_refresh_info, time); + pg_client_service->impl.ProcessLeaseUpdate(lease_refresh_info); } return Status::OK(); } @@ -860,6 +859,13 @@ Status TabletServer::RestartPG() const { return STATUS(IllegalState, "PG restarter callback not registered, cannot restart PG"); } +Status TabletServer::KillPg() const { + if (pg_killer_) { + return pg_killer_(); + } + return STATUS(IllegalState, "Pg killer callback not registered, cannot restart PG"); +} + bool TabletServer::IsYsqlLeaseEnabled() { return GetAtomicFlag(&FLAGS_TEST_enable_object_locking_for_table_locks) || GetAtomicFlag(&FLAGS_enable_ysql_operation_lease); @@ -2089,6 +2095,10 @@ void TabletServer::RegisterPgProcessRestarter(std::function restar pg_restarter_ = std::move(restarter); } +void TabletServer::RegisterPgProcessKiller(std::function killer) { + pg_killer_ = std::move(killer); +} + Status TabletServer::StartYSQLLeaseRefresher() { return ysql_lease_poller_->Start(); } diff --git a/src/yb/tserver/tablet_server.h b/src/yb/tserver/tablet_server.h index 2e290a8a1972..e93d249b0f23 100644 --- a/src/yb/tserver/tablet_server.h +++ b/src/yb/tserver/tablet_server.h @@ -207,10 +207,10 @@ class TabletServer : public DbServerBase, public TabletServerIf { ConcurrentPointerReference SharedObject() override { return shared_object(); } Status PopulateLiveTServers(const master::TSHeartbeatResponsePB& heartbeat_resp) EXCLUDES(lock_); - Status ProcessLeaseUpdate( - const master::RefreshYsqlLeaseInfoPB& lease_refresh_info, MonoTime time); + Status ProcessLeaseUpdate(const master::RefreshYsqlLeaseInfoPB& lease_refresh_info); Result GetYSQLLeaseInfo() const override; Status RestartPG() const override; + Status KillPg() const override; static bool IsYsqlLeaseEnabled(); tserver::TSLocalLockManagerPtr ResetAndGetTSLocalLockManager() EXCLUDES(lock_); @@ -371,6 +371,8 @@ class TabletServer : public DbServerBase, public TabletServerIf { void RegisterPgProcessRestarter(std::function restarter) override; + void RegisterPgProcessKiller(std::function killer) override; + Status StartYSQLLeaseRefresher(); TserverXClusterContextIf& GetXClusterContext() const; @@ -625,6 +627,7 @@ class TabletServer : public DbServerBase, public TabletServerIf { std::unique_ptr secure_context_; std::vector certificate_reloaders_; std::function pg_restarter_; + std::function pg_killer_; // xCluster consumer. 
mutable std::mutex xcluster_consumer_mutex_; diff --git a/src/yb/tserver/tablet_server_interface.h b/src/yb/tserver/tablet_server_interface.h index 1fbc8fe6d963..1b1b5030414f 100644 --- a/src/yb/tserver/tablet_server_interface.h +++ b/src/yb/tserver/tablet_server_interface.h @@ -151,6 +151,8 @@ class TabletServerIf : public LocalTabletServer { virtual Result GetYSQLLeaseInfo() const = 0; virtual Status RestartPG() const = 0; + + virtual Status KillPg() const = 0; }; } // namespace tserver diff --git a/src/yb/tserver/ysql_lease_poller.cc b/src/yb/tserver/ysql_lease_poller.cc index 4abb201b652b..83d71cd15700 100644 --- a/src/yb/tserver/ysql_lease_poller.cc +++ b/src/yb/tserver/ysql_lease_poller.cc @@ -127,17 +127,19 @@ Status YsqlLeasePoller::Poll() { if (current_lease_info.is_live) { req.set_current_lease_epoch(current_lease_info.lease_epoch); } + req.set_local_request_send_time_ms(std::chrono::duration_cast( + CoarseMonoClock::now().time_since_epoch()) + .count()); rpc::RpcController rpc; rpc.set_timeout(timeout); master::RefreshYsqlLeaseResponsePB resp; - MonoTime pre_request_time = MonoTime::Now(); RETURN_NOT_OK(proxy_->RefreshYsqlLease(req, &resp, &rpc)); if (RandomActWithProbability( GetAtomicFlag(&FLAGS_TEST_tserver_ysql_lease_refresh_failure_prob))) { return STATUS_FORMAT(NetworkError, "Pretending to fail ysql lease refresh RPC"); } RETURN_NOT_OK(ResponseStatus(resp)); - return server_.ProcessLeaseUpdate(resp.info(), pre_request_time); + return server_.ProcessLeaseUpdate(resp.info()); } MonoDelta YsqlLeasePoller::IntervalToNextPoll(int32_t consecutive_failures) { diff --git a/src/yb/yql/pgwrapper/pg_on_conflict-test.cc b/src/yb/yql/pgwrapper/pg_on_conflict-test.cc index 55aad8258f7d..5c95f77ae520 100644 --- a/src/yb/yql/pgwrapper/pg_on_conflict-test.cc +++ b/src/yb/yql/pgwrapper/pg_on_conflict-test.cc @@ -40,6 +40,9 @@ class PgFailOnConflictTest : public PgOnConflictTest { // TODO(wait-queues): https://github.com/yugabyte/yugabyte-db/issues/17871 opts->extra_tserver_flags.push_back("--enable_wait_queues=false"); opts->extra_tserver_flags.push_back("--yb_enable_read_committed_isolation=true"); + // Disable the tserver's lease expiry check as the window between killing and resuming a master + // can exceed the lease TTL. 
+ opts->extra_tserver_flags.push_back("--TEST_enable_ysql_operation_lease_expiry_check=false"); } }; diff --git a/src/yb/yql/pgwrapper/pg_wrapper.cc b/src/yb/yql/pgwrapper/pg_wrapper.cc index 68d99608706e..4a6536ef915b 100644 --- a/src/yb/yql/pgwrapper/pg_wrapper.cc +++ b/src/yb/yql/pgwrapper/pg_wrapper.cc @@ -1332,6 +1332,7 @@ PgSupervisor::PgSupervisor(PgProcessConf conf, PgWrapperContext* server) if (server_) { server_->RegisterCertificateReloader(std::bind(&PgSupervisor::ReloadConfig, this)); server_->RegisterPgProcessRestarter(std::bind(&PgSupervisor::Restart, this)); + server_->RegisterPgProcessKiller(std::bind(&PgSupervisor::Pause, this)); } } diff --git a/src/yb/yql/pgwrapper/pg_wrapper_context.h b/src/yb/yql/pgwrapper/pg_wrapper_context.h index 1f7ff38ff1e2..5b374c0b4b97 100644 --- a/src/yb/yql/pgwrapper/pg_wrapper_context.h +++ b/src/yb/yql/pgwrapper/pg_wrapper_context.h @@ -25,6 +25,7 @@ class PgWrapperContext { virtual ~PgWrapperContext() = default; virtual void RegisterCertificateReloader(tserver::CertificateReloader reloader) = 0; virtual void RegisterPgProcessRestarter(std::function restarter) = 0; + virtual void RegisterPgProcessKiller(std::function killer) = 0; virtual Status StartSharedMemoryNegotiation() = 0; virtual Status StopSharedMemoryNegotiation() = 0; virtual int SharedMemoryNegotiationFd() = 0; From 29b3c823ee9e3ea79ece51715ab1260cac5968dd Mon Sep 17 00:00:00 2001 From: asharma Date: Fri, 16 May 2025 09:08:58 +0000 Subject: [PATCH 117/146] [PLAT-17638] Fix NFS space precheck on k8s universes Summary: This diff has the following changes - The NFS precheck was failing on k8s universe. Slightly modified the command we run to get free space. - Improvement where we fail a NFS backup only if the precheck fails. For any other exceptions, we log the error and continue with the backup. Test Plan: Manually verified the correct free space is reported on both VM and k8s based universe. Reviewers: vkumar Reviewed By: vkumar Subscribers: vkumar Differential Revision: https://phorge.dev.yugabyte.com/D44022 --- .../subtasks/BackupPreflightValidate.java | 32 ++++++++++++++----- 1 file changed, 24 insertions(+), 8 deletions(-) diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/BackupPreflightValidate.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/BackupPreflightValidate.java index ebe773d360fc..9f46692d841b 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/BackupPreflightValidate.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/BackupPreflightValidate.java @@ -32,7 +32,8 @@ public class BackupPreflightValidate extends AbstractTaskBase { private final UniverseTableHandler tableHandler; private final NodeUniverseManager nodeUniverseManager; private final RuntimeConfGetter confGetter; - private final String FREE_SPACE_CMD = "df -P %s | tail -n1 | awk \'{print$4}\'"; + private final String FREE_SPACE_CMD = "df -P %s | tail -n1 "; + private final String PRECHECK_FAILED_MSG = "NFS space precheck failed. 
"; public static class Params extends AbstractTaskParams { public Params(UUID storageConfigUUID, UUID customerUUID, UUID universeUUID, boolean ybcBackup) { @@ -85,7 +86,18 @@ public void run() { backupHelper.validateStorageConfigForBackupOnUniverse(storageConfig, universe); if (confGetter.getConfForScope(universe, UniverseConfKeys.enableNfsBackupPrecheck)) { - doNfsSpacePrecheck(storageConfig, universe); + try { + doNfsSpacePrecheck(storageConfig, universe); + } catch (Exception e) { + // Only throw if the space precheck fails, + // log error and continue backup otherwise + if (e.getMessage().contains(PRECHECK_FAILED_MSG)) { + throw e; + } else { + log.error("Error while running NFS precheck on universe: {}", universe.getName()); + e.printStackTrace(); + } + } } } } @@ -197,12 +209,14 @@ private void tserverSpaceCheck( ShellResponse resp = nodeUniverseManager .runCommand( - tserver, universe, List.of(String.format(FREE_SPACE_CMD, location).split(" "))) + tserver, universe, List.of("bash", "-c", String.format(FREE_SPACE_CMD, location))) .processErrors("Error while finding free space on " + tserver.nodeName); log.debug("Response for free space cmd on node {} = {}", tserver.nodeName, resp.toString()); - // Resp.message is in the format "Command output:202020\n" - long spaceAvailable = - Long.parseLong(resp.getMessage().substring(resp.getMessage().indexOf(":") + 1).trim()); + // Resp.message looks like: "Command output:/dev/sda1 47227284 16333704 30877196 35%" + var respSplit = + List.of( + resp.getMessage().substring(resp.getMessage().indexOf(":") + 1).strip().split("\\s+")); + long spaceAvailable = Long.parseLong(respSplit.get(3).trim()); log.debug( "Space available on path {} on node {} = {}MB.", location, @@ -211,9 +225,11 @@ private void tserverSpaceCheck( if (spaceAvailable < spaceNeeded) { throw new RuntimeException( String.format( - "NFS space precheck failed. Need atleast %dMB but only %dMB present. Set" + PRECHECK_FAILED_MSG + + "Need atleast %dMB but only %dMB present. Set" + " 'yb.backup.enableNfsPrecheck' to false to disable this check.", - spaceNeeded / 1024, spaceAvailable / 1024)); + spaceNeeded / 1024, + spaceAvailable / 1024)); } } } From 49d5736480b0242cbd2e80b64bdb0dbac904ff46 Mon Sep 17 00:00:00 2001 From: Sergei Politov Date: Fri, 16 May 2025 10:00:19 +0300 Subject: [PATCH 118/146] [#27121] DocDB: Add SemiFairQueue and use it for thread pool Summary: This diff adds lock-free queue `SemiFairQueue` class. And adds logic to use this queue in rpc::ThreadPool. SemiFairQueue does not guarantee that pushed values will be popped in exactly the same order as they were pushed. But order is the same for most entries. Could be useful to implement thread pool, since it does not guarantee exact task execution order. But stack is not suitable for thread pool because task could remain in stack while new tasks arrive. With SemiFairQueue new tasks executed before particular old task should be pretty low. 
Jira: DB-16608 Test Plan: LockfreeTest.SemiFairQueue Reviewers: hsunder Reviewed By: hsunder Subscribers: rthallam, ybase Tags: #jenkins-ready Differential Revision: https://phorge.dev.yugabyte.com/D43707 --- src/yb/rpc/strand.h | 10 ++++++++ src/yb/rpc/thread_pool.cc | 13 ++++------ src/yb/rpc/thread_pool.h | 3 ++- src/yb/util/lockfree-test.cc | 27 ++++++++++++++------- src/yb/util/lockfree.h | 46 ++++++++++++++++++++++++++++++++++-- 5 files changed, 79 insertions(+), 20 deletions(-) diff --git a/src/yb/rpc/strand.h b/src/yb/rpc/strand.h index 5f0a20e1a051..9ed190b42305 100644 --- a/src/yb/rpc/strand.h +++ b/src/yb/rpc/strand.h @@ -24,6 +24,16 @@ namespace rpc { class StrandTask : public MPSCQueueEntry, public ThreadPoolTask { protected: ~StrandTask() = default; + private: + friend void SetNext(StrandTask* entry, StrandTask* next) { + entry->next_ = next; + } + + friend StrandTask* GetNext(const StrandTask* entry) { + return entry->next_; + } + + StrandTask* next_ = nullptr; }; template diff --git a/src/yb/rpc/thread_pool.cc b/src/yb/rpc/thread_pool.cc index 1e37c5562ad4..acf300bbea9e 100644 --- a/src/yb/rpc/thread_pool.cc +++ b/src/yb/rpc/thread_pool.cc @@ -41,7 +41,7 @@ namespace { class Worker; -using TaskQueue = RWQueue; +using TaskQueue = SemiFairQueue; using WaitingWorkers = LockFreeStack; struct ThreadPoolShare { @@ -144,8 +144,7 @@ class Worker : public boost::intrusive::list_base_hook<> { ThreadPoolTask* PopTask() { // First of all we try to get already queued task, w/o locking. // If there is no task, so we could go to waiting state. - ThreadPoolTask* task; - if (share_.task_queue.pop(task)) { + if (auto* task = share_.task_queue.Pop()) { return task; } @@ -190,8 +189,7 @@ class Worker : public boost::intrusive::list_base_hook<> { if (task_) { return std::exchange(task_, nullptr); } - ThreadPoolTask* task; - if (share_.task_queue.pop(task)) { + if (auto task = share_.task_queue.Pop()) { return task; } return std::nullopt; @@ -333,7 +331,7 @@ class ThreadPool::Impl { { std::lock_guard lock(mutex_); if (closing_) { - CHECK(share_.task_queue.empty()); + CHECK(share_.task_queue.Empty()); CHECK(workers_.empty()); return; } @@ -357,8 +355,7 @@ class ThreadPool::Impl { workers_.clear(); } } - ThreadPoolTask* task = nullptr; - while (share_.task_queue.pop(task)) { + while (auto* task = share_.task_queue.Pop()) { TaskDone(task, kShuttingDownStatus); } diff --git a/src/yb/rpc/thread_pool.h b/src/yb/rpc/thread_pool.h index 57928dcebd0f..2eb31745cbb4 100644 --- a/src/yb/rpc/thread_pool.h +++ b/src/yb/rpc/thread_pool.h @@ -21,6 +21,7 @@ #include "yb/gutil/port.h" +#include "yb/util/lockfree.h" #include "yb/util/status.h" #include "yb/util/tostring.h" #include "yb/util/type_traits.h" @@ -34,7 +35,7 @@ namespace rpc { class ThreadSubPoolBase; -class ThreadPoolTask { +class ThreadPoolTask : public MPSCQueueEntry { public: // Invoked in thread pool virtual void Run() = 0; diff --git a/src/yb/util/lockfree-test.cc b/src/yb/util/lockfree-test.cc index 8f46a29fbb6f..8ef7d870f951 100644 --- a/src/yb/util/lockfree-test.cc +++ b/src/yb/util/lockfree-test.cc @@ -388,7 +388,8 @@ TEST(LockfreeTest, QueuePerformance) { helper.Perform(0x10, true); } -TEST(LockfreeTest, Stack) { +template class Collection> +void TestIntrusive() { constexpr int kNumEntries = 100; constexpr int kNumThreads = 5; @@ -396,11 +397,11 @@ TEST(LockfreeTest, Stack) { int value; }; - LockFreeStack stack; + Collection collection; std::vector entries(kNumEntries); for (int i = 0; i != kNumEntries; ++i) { entries[i].value = i; - 
stack.Push(&entries[i]); + collection.Push(&entries[i]); } TestThreadHolder holder; @@ -408,24 +409,24 @@ TEST(LockfreeTest, Stack) { // Each thread randomly does one of // 1) pull items from shared stack and store it to local set. // 2) push random item from local set to shared stack. - holder.AddThread([&stack, &stop = holder.stop_flag()] { + holder.AddThread([&collection, &stop = holder.stop_flag()] { std::vector local; while (!stop.load(std::memory_order_acquire)) { bool push = !local.empty() && RandomUniformInt(0, 1); if (push) { size_t index = RandomUniformInt(0, local.size() - 1); - stack.Push(local[index]); + collection.Push(local[index]); local[index] = local.back(); local.pop_back(); } else { - auto entry = stack.Pop(); + auto entry = collection.Pop(); if (entry) { local.push_back(entry); } } } while (!local.empty()) { - stack.Push(local.back()); + collection.Push(local.back()); local.pop_back(); } }); @@ -435,14 +436,14 @@ TEST(LockfreeTest, Stack) { std::vector content; while (content.size() <= kNumEntries) { - auto entry = stack.Pop(); + auto entry = collection.Pop(); if (!entry) { break; } content.push_back(entry->value); } - LOG(INFO) << "Content: " << yb::ToString(content); + LOG(INFO) << "Content: " << AsString(content); ASSERT_EQ(content.size(), kNumEntries); @@ -452,6 +453,14 @@ TEST(LockfreeTest, Stack) { } } +TEST(LockfreeTest, Stack) { + TestIntrusive(); +} + +TEST(LockfreeTest, SemiFairQueue) { + TestIntrusive(); +} + TEST(LockfreeTest, WriteOnceWeakPtr) { std::shared_ptr hello = std::make_shared("Hello"); std::shared_ptr world = std::make_shared("world"); diff --git a/src/yb/util/lockfree.h b/src/yb/util/lockfree.h index 6152d51488e6..7e98f1f7d15e 100644 --- a/src/yb/util/lockfree.h +++ b/src/yb/util/lockfree.h @@ -179,8 +179,8 @@ class LockFreeStack { CHECK(IsAcceptableAtomicImpl(head_)); } - void Clear() { - head_.store(nullptr, std::memory_order_release); + bool Empty() const { + return head_.load(boost::memory_order_relaxed).pointer == nullptr; } void Push(T* value) { @@ -223,6 +223,48 @@ class LockFreeStack { boost::atomic head_{Head{nullptr, 0}}; }; +// SemiFairQueue does not guarantee that pushed values will be popped in exactly the same order +// as they were pushed. +// But order will be the same as long as the consumer keeps up with the producer. +// This is useful to implement thread pool, since it does not guarantee exact task execution order. +// A single stack is not suitable for thread pool because older tasks would get starved. +// SemiFairQueue uses two stacks to sort the tasks in the correct order with a high enough +// probability. +template +class SemiFairQueue { + public: + bool Empty() const { + return write_stack_.Empty() && read_stack_.Empty(); + } + + void Push(T* value) { + write_stack_.Push(value); + } + + T* Pop() { + auto result = read_stack_.Pop(); + if (result) { + return result; + } + result = write_stack_.Pop(); + if (!result) { + return nullptr; + } + for (;;) { + auto next = write_stack_.Pop(); + if (!next) { + return result; + } + read_stack_.Push(result); + result = next; + } + } + + private: + LockFreeStack write_stack_; + LockFreeStack read_stack_; +}; + // A weak pointer that can only be written to once, but can be read and written in a lock-free way. 
template class WriteOnceWeakPtr { From 1a9bad4472ca8097662f3e67f008eeb6cbc34666 Mon Sep 17 00:00:00 2001 From: Sahith Kurapati Date: Wed, 14 May 2025 08:33:32 +0000 Subject: [PATCH 119/146] [PLAT-17315] Make ciphertrust runtime config public after QA qualification Summary: Make ciphertrust runtime config public after QA qualification. Test Plan: Manually tested. Reviewers: dkumar Reviewed By: dkumar Differential Revision: https://phorge.dev.yugabyte.com/D43982 --- managed/RUNTIME-FLAGS.md | 1 + .../main/java/com/yugabyte/yw/common/config/GlobalConfKeys.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/managed/RUNTIME-FLAGS.md b/managed/RUNTIME-FLAGS.md index dd6e9362401c..be552bb8a1aa 100644 --- a/managed/RUNTIME-FLAGS.md +++ b/managed/RUNTIME-FLAGS.md @@ -96,6 +96,7 @@ | "Shell Output Max Directory Size" | "yb.logs.shell.output_dir_max_size" | "GLOBAL" | "Output logs for shell commands are written to tmp folder.This setting defines rotation policy based on directory size." | "Bytes" | | "Max Size of each log message" | "yb.logs.max_msg_size" | "GLOBAL" | "We limit the length of each log line as sometimes we dump entire output of script. If you want to debug something specific and the script output isgetting truncated in application log then increase this limit" | "Bytes" | | "KMS Refresh Interval" | "yb.kms.refresh_interval" | "GLOBAL" | "Default refresh interval for the KMS providers." | "Duration" | +| "Allow CipherTrust KMS" | "yb.kms.allow_ciphertrust" | "GLOBAL" | "Allow the usage of CipherTrust KMS." | "Boolean" | | "Percentage of Hashicorp vault TTL to renew the token after" | "yb.kms.hcv_token_renew_percent" | "GLOBAL" | "HashiCorp Vault tokens expire when their TTL is reached. This setting renews the token after it has used the specified percentage of its original TTL. Default: 70%." | "Integer" | | "Start Master On Stop Node" | "yb.start_master_on_stop_node" | "GLOBAL" | "Auto-start master process on a similar available node on stopping a master node" | "Boolean" | | "Start Master On Remove Node" | "yb.start_master_on_remove_node" | "GLOBAL" | "Auto-start master process on a similar available node on removal of a master node" | "Boolean" | diff --git a/managed/src/main/java/com/yugabyte/yw/common/config/GlobalConfKeys.java b/managed/src/main/java/com/yugabyte/yw/common/config/GlobalConfKeys.java index 1d4487b78d29..b1376166267b 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/config/GlobalConfKeys.java +++ b/managed/src/main/java/com/yugabyte/yw/common/config/GlobalConfKeys.java @@ -476,7 +476,7 @@ public class GlobalConfKeys extends RuntimeConfigKeysModule { "Allow CipherTrust KMS", "Allow the usage of CipherTrust KMS.", ConfDataType.BooleanType, - ImmutableList.of(ConfKeyTags.INTERNAL)); + ImmutableList.of(ConfKeyTags.PUBLIC)); public static final ConfKeyInfo hcvTokenRenewPercent = new ConfKeyInfo<>( "yb.kms.hcv_token_renew_percent", From 9e1c57471a0ebb1149738973f090c5ea81a17029 Mon Sep 17 00:00:00 2001 From: Piyush Jain Date: Sat, 3 May 2025 00:09:20 +0530 Subject: [PATCH 120/146] [#27036] YSQL: Auto Analyze DDLs shouldn't block or preempt user DDLs. Summary: ## Issue: If object locking is not used (i.e., TEST_enable_object_locking_for_table_locks=false), YSQL doesn't allow concurrent execution of DDLs on the same database. DDLs would conflict with each other when trying to increment the catalog version (the final step in almost all DDLs). Moreover, a global DDL would conflict with any other DDL.
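For illustration only (not part of this change), a minimal sketch of the kind of conflict described above, using a hypothetical table `test_tbl` with a column `v1`:

```sql
-- Session 1: a long-running DDL, e.g. an index build with backfill.
CREATE INDEX idx ON test_tbl (v1);

-- Session 2: any other DDL on the same database (for example an ANALYZE)
-- commits first and increments the catalog version.
ANALYZE test_tbl;

-- Session 1 only notices the conflict when it finally tries to increment
-- the catalog version at the end of its work:
--   ERROR: could not serialize access due to concurrent update
```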
If object locking is used, DDLs that don't logically conflict with each other would be allowed to run concurrently. This is not yet fully implemented but would be done by relying on Pg's business logic to acquire/release locks to serialize DDLs (this work is being tracked by #27037). Given this, when object locking is disabled, turning on the auto-analyze service by default can result in serialization errors for user DDLs. Consider the following cases: (1) A long-running (say, a few hours) INDEX creation with backfill is in progress, but before it can get to incrementing the catalog version, an auto-ANALYZE bumps up the catalog version. The index creation will face a conflict error, resulting in a lot of wasted work. To make things worse, this error wouldn't be thrown as soon as an auto-ANALYZE occurred; it would be seen only after the whole index is backfilled. (2) An auto-ANALYZE command is running. Concurrently issued user DDLs that don't complete before the ANALYZE would face a serialization error when they reach completion. ## Solution: To mitigate this issue in the interim, i.e., until we allow concurrent DDLs: (1) When object locking is off, most DDLs will acquire a FOR UPDATE lock on the catalog version row at the start of the DDL. The lock will be on the catalog version of the current database that the connection is based on. (2) Auto-Analyze DDLs will set yb_use_internal_auto_analyze_service_conn=true. All DDLs except auto-ANALYZEs are given kHighestPriority. This way, auto-analyzes will never preempt or block other DDLs. Note that the system catalog tablet doesn't have wait queues enabled, i.e., it uses Fail-on-Conflict concurrency control. Thus, the priorities take effect. Also, since kHighestPriority is given to regular DDLs, an older DDL won't be preempted by a younger DDL. Further, user-issued ANALYZEs are treated like regular DDLs and use kHighestPriority. This change can be toggled using the `yb_force_early_ddl_serialization` TServer gflag. Also, this change only takes effect if `yb_enable_invalidation_messages=true` and we are in per-db catalog version mode. ------------------------------------------------------------------------------------------ There were a few other approaches taken into consideration to solve the issue until object locks are supported: (1) Avoid incrementing the catalog version for auto-ANALYZEs. This requires more work to build a mechanism for notifying all backends of new table statistics. Jira: DB-16510 Test Plan: Jenkins: urgent ./yb_build.sh release --cxx-test pgwrapper_pg_auto_analyze-test --gtest_filter PgAutoAnalyzeTest.DDLsInParallelWithAutoAnalyze This new test fails without the fix. Also, confirmed that auto ANALYZEs are being aborted numerous times in the test.
Log lines of this form appear: ``` [ts-1] W0515 01:09:50.784226 145717 pg_auto_analyze_service.cc:468] In YSQL database: abc, failed ANALYZE statement: ANALYZE "pg_catalog"."pg_class" with error: Network error (yb/yql/pgwrapper/libpq_utils.cc:453): Execute of 'ANALYZE "pg_catalog"."pg_class"' failed: 7, message: ERROR: could not serialize access due to concurrent update ``` Reviewers: sanketh, patnaik.balivada, myang, #db-approvers Reviewed By: sanketh, patnaik.balivada, myang, #db-approvers Subscribers: jason, yql Differential Revision: https://phorge.dev.yugabyte.com/D43672 --- .../catalog/yb_catalog/yb_catalog_version.c | 72 ++++++++++++++++--- src/postgres/src/backend/utils/misc/guc.c | 31 ++++++++ .../src/backend/utils/misc/pg_yb_utils.c | 20 +++--- .../src/include/catalog/yb_catalog_version.h | 2 + src/postgres/src/include/pg_yb_utils.h | 2 + src/yb/docdb/conflict_resolution.cc | 11 +-- src/yb/tablet/transaction_participant.cc | 1 + src/yb/tserver/pg_client_session.cc | 18 +++-- .../pg_auto_analyze_service.cc | 5 ++ src/yb/yql/pggate/pg_session.cc | 52 +++++++++++--- src/yb/yql/pggate/pg_txn_manager.cc | 31 +++++--- src/yb/yql/pggate/pg_txn_manager.h | 2 +- src/yb/yql/pggate/util/ybc_guc.cc | 3 + src/yb/yql/pggate/util/ybc_guc.h | 2 + src/yb/yql/pggate/ybc_pggate.cc | 10 +++ src/yb/yql/pgwrapper/pg_auto_analyze-test.cc | 58 ++++++++++++++- .../yql/pgwrapper/pg_catalog_version-test.cc | 5 ++ src/yb/yql/pgwrapper/pg_hint_table-test.cc | 42 ++++++----- .../yql/pgwrapper/pg_index_backfill-test.cc | 7 ++ 19 files changed, 307 insertions(+), 67 deletions(-) diff --git a/src/postgres/src/backend/catalog/yb_catalog/yb_catalog_version.c b/src/postgres/src/backend/catalog/yb_catalog/yb_catalog_version.c index cbe77af6fc7d..628206ddc6ba 100644 --- a/src/postgres/src/backend/catalog/yb_catalog/yb_catalog_version.c +++ b/src/postgres/src/backend/catalog/yb_catalog/yb_catalog_version.c @@ -45,22 +45,23 @@ static FormData_pg_attribute Desc_pg_yb_catalog_version[Natts_pg_yb_catalog_vers Schema_pg_yb_catalog_version }; -static bool YbGetMasterCatalogVersionFromTable(Oid db_oid, uint64_t *version); +static bool YbGetMasterCatalogVersionFromTable(Oid db_oid, uint64_t *version, + bool acquire_for_update_lock); static Datum YbGetMasterCatalogVersionTableEntryYbctid(Relation catalog_version_rel, Oid db_oid); /* Retrieve Catalog Version */ -uint64_t -YbGetMasterCatalogVersion() +static uint64_t +YbGetMasterCatalogVersionImpl(bool acquire_for_update_lock) { uint64_t version = YB_CATCACHE_VERSION_UNINITIALIZED; switch (YbGetCatalogVersionType()) { case CATALOG_VERSION_CATALOG_TABLE: - if (YbGetMasterCatalogVersionFromTable(YbMasterCatalogVersionTableDBOid(), - &version)) + if (YbGetMasterCatalogVersionFromTable(YbMasterCatalogVersionTableDBOid(), &version, + acquire_for_update_lock)) return version; /* * In spite of the fact the pg_yb_catalog_version table exists it has no actual @@ -83,6 +84,49 @@ YbGetMasterCatalogVersion() return version; } +uint64_t +YbGetMasterCatalogVersion() +{ + return YbGetMasterCatalogVersionImpl(false /* acquire_for_update_lock */ ); +} + +void +YbMaybeLockMasterCatalogVersion(YbDdlMode mode) +{ + /* + * When object locks are off (i.e., the old way), we ensure that concurrent DDLs don't stamp on + * each other by incrementing the catalog version. DDLs with overlapping [read time, commit time] + * windows would conflict with each other and only one of them would be able to make progress. + * This catalog version increment happens at the end of a DDL transaction. 
+ * + * By relying on catalog version increments for disallowing concurrent DDLs, long running DDLs + * run into the risk of finding that they face conflict with other concurrently committed DDLs + * only after doing all the expensive work (think of index backfill). To fail fast, we acquire + * a FOR UPDATE lock at the start of the DDL. + * + * An online index backfill goes through multiple phases, each of which is a DDL transaction. + * We acquire this lock at the start of each phase. + * + * NOTE: + * (1) Some DDLs don't increment the catalog version (e.g., CREATE TABLE). These are harmless + * for concurrent DDLs. We avoid acquiring the lock for them, they don't have + * YB_SYS_CAT_MOD_ASPECT_VERSION_INCREMENT in their mode. + * (2) Global DDLs (i.e., those that modify shared catalog tables) will increment all catalog + * versions. We still only lock the catalog version of the current database. So, they might + * still face the problem described above if they are long running. + * (3) We enable this feature only if the invalidation messages are used and per-database catalog + * version mode is enabled. + */ + if (yb_force_early_ddl_serialization && + !*YBCGetGFlags()->TEST_enable_object_locking_for_table_locks && + (mode & YB_SYS_CAT_MOD_ASPECT_VERSION_INCREMENT) && + YbIsInvalidationMessageEnabled() && YBIsDBCatalogVersionMode()) + { + elog(DEBUG1, "Locking catalog version for db oid %d", MyDatabaseId); + YbGetMasterCatalogVersionImpl(true /* acquire_for_update_lock */ ); + } +} + /* Modify Catalog Version */ static Datum @@ -787,9 +831,8 @@ YbIsSystemCatalogChange(Relation rel) return IsCatalogRelation(rel) && !IsBootstrapProcessingMode(); } - bool -YbGetMasterCatalogVersionFromTable(Oid db_oid, uint64_t *version) +YbGetMasterCatalogVersionFromTable(Oid db_oid, uint64_t *version, bool acquire_for_update_lock) { *version = 0; /* unset; */ @@ -822,8 +865,21 @@ YbGetMasterCatalogVersionFromTable(Oid db_oid, uint64_t *version) YbDmlAppendTargetRegularAttr(&Desc_pg_yb_catalog_version[attnum - 1], ybc_stmt); + YbcPgExecParameters exec_params = {0}; + if (acquire_for_update_lock) + { + exec_params.rowmark = ROW_MARK_EXCLUSIVE; + /* + * We want to stick to Fail-on-Conflict concurrency control to ensure that higher priority + * user DDLs always take precedence over lower priority auto-ANALYZEs. In other words, user DDLs + * should abort running auto-ANALYZEs, and auto-ANALYZEs should face a conflict error if a user + * DDL is already running. + */ + exec_params.pg_wait_policy = LockWaitError; + exec_params.docdb_wait_policy = YBGetDocDBWaitPolicy(exec_params.pg_wait_policy); + } - HandleYBStatus(YBCPgExecSelect(ybc_stmt, NULL /* exec_params */ )); + HandleYBStatus(YBCPgExecSelect(ybc_stmt, acquire_for_update_lock ? 
&exec_params : NULL)); bool has_data = false; diff --git a/src/postgres/src/backend/utils/misc/guc.c b/src/postgres/src/backend/utils/misc/guc.c index eb540c58ea4c..b384dae8d90e 100644 --- a/src/postgres/src/backend/utils/misc/guc.c +++ b/src/postgres/src/backend/utils/misc/guc.c @@ -3394,6 +3394,37 @@ static struct config_bool ConfigureNamesBool[] = NULL, NULL, NULL }, + { + {"yb_use_internal_auto_analyze_service_conn", PGC_USERSET, AUTOVACUUM, + gettext_noop("[Internal Only GUC] - Help a backend identify that this is a connection from " + "the internal Auto-Analyze service"), + NULL, + GUC_NOT_IN_SAMPLE + }, + &yb_use_internal_auto_analyze_service_conn, + false, + NULL, NULL, NULL + }, + + { + {"yb_force_early_ddl_serialization", PGC_USERSET, DEVELOPER_OPTIONS, + gettext_noop("If object locking is off (i.e., " + "TEST_enable_object_locking_for_table_locks=false), concurrent DDLs might face a " + "conflict error on the catalog version increment at the end after doing all the work. " + "Setting this flag enables a fail-fast strategy by locking the catalog version at the " + "start of DDLs, causing conflict errors to occur before useful work is done. This " + "flag is only applicable without object locking. If object locking is enabled, it " + "ensures that concurrent DDLs block on each other for serialization. Also, this flag " + "is valid only if ysql_enable_db_catalog_version_mode and " + "yb_enable_invalidation_messages are enabled."), + NULL, + GUC_NOT_IN_SAMPLE + }, + &yb_force_early_ddl_serialization, + true, + NULL, NULL, NULL + }, + /* End-of-list marker */ { {NULL, 0, 0, NULL, NULL}, NULL, false, NULL, NULL, NULL diff --git a/src/postgres/src/backend/utils/misc/pg_yb_utils.c b/src/postgres/src/backend/utils/misc/pg_yb_utils.c index 53449532d166..6fd8d5015fb0 100644 --- a/src/postgres/src/backend/utils/misc/pg_yb_utils.c +++ b/src/postgres/src/backend/utils/misc/pg_yb_utils.c @@ -2167,6 +2167,8 @@ bool yb_skip_data_insert_for_xcluster_target = false; bool yb_enable_extended_sql_codes = false; +bool yb_force_early_ddl_serialization = true; + const char * YBDatumToString(Datum datum, Oid typid) { @@ -2457,6 +2459,7 @@ YBIncrementDdlNestingLevel(YbDdlMode mode) YbSendParameterStatusForConnectionManager("yb_force_catalog_update_on_next_ddl", "false"); } + YbMaybeLockMasterCatalogVersion(mode); } ++ddl_transaction_state.nesting_level; @@ -3907,6 +3910,7 @@ YBTxnDdlProcessUtility(PlannedStmt *pstmt, (ddl_mode.value == YB_DDL_MODE_AUTONOMOUS_TRANSACTION_CHANGE_VERSION_INCREMENT || !*YBCGetGFlags()->TEST_ysql_yb_ddl_transaction_block_enabled); + elog(DEBUG3, "is_ddl %d", is_ddl); PG_TRY(); { if (is_ddl) @@ -3933,7 +3937,10 @@ YBTxnDdlProcessUtility(PlannedStmt *pstmt, if (use_separate_ddl_transaction) YBIncrementDdlNestingLevel(ddl_mode.value); else + { YBSetDdlState(ddl_mode.value); + YbMaybeLockMasterCatalogVersion(ddl_mode.value); + } if (YbShouldIncrementLogicalClientVersion(pstmt) && YbIsClientYsqlConnMgr() && @@ -6631,19 +6638,14 @@ YBGetDocDBWaitPolicy(LockWaitPolicy pg_wait_policy) { LockWaitPolicy result = pg_wait_policy; - if (XactIsoLevel == XACT_REPEATABLE_READ && pg_wait_policy == LockWaitError) - { - /* The user requested NOWAIT, which isn't allowed in RR. 
*/ - elog(WARNING, - "Setting wait policy to NOWAIT which is not allowed in " - "REPEATABLE READ isolation (GH issue #12166)"); - } - - if (IsolationIsSerializable()) + if (!YBCPgIsDdlMode() && IsolationIsSerializable()) { /* * TODO(concurrency-control): We don't honour SKIP LOCKED/ NO WAIT yet in serializable * isolation level. + * + * The !YBCPgIsDdlMode() check is to avoid the warning for DDLs because they try to acquire a + * row lock on the catalog version with LockWaitError for Fail-on-Conflict semantics. */ if (pg_wait_policy == LockWaitSkip || pg_wait_policy == LockWaitError) elog(WARNING, diff --git a/src/postgres/src/include/catalog/yb_catalog_version.h b/src/postgres/src/include/catalog/yb_catalog_version.h index 7b1e71b73e6b..1740d203b1a2 100644 --- a/src/postgres/src/include/catalog/yb_catalog_version.h +++ b/src/postgres/src/include/catalog/yb_catalog_version.h @@ -12,6 +12,7 @@ #pragma once +#include "pg_yb_utils.h" #include "storage/sinval.h" #include "yb/yql/pggate/ybc_pg_typedefs.h" @@ -34,6 +35,7 @@ extern YbCatalogVersionType yb_catalog_version_type; /* Get the latest catalog version from the master leader. */ extern uint64_t YbGetMasterCatalogVersion(); +extern void YbMaybeLockMasterCatalogVersion(YbDdlMode mode); /* Send a request to increment the master catalog version. */ extern bool YbIncrementMasterCatalogVersionTableEntry(bool is_breaking_change, diff --git a/src/postgres/src/include/pg_yb_utils.h b/src/postgres/src/include/pg_yb_utils.h index 22826c9358bb..c7fffc5fc97c 100644 --- a/src/postgres/src/include/pg_yb_utils.h +++ b/src/postgres/src/include/pg_yb_utils.h @@ -770,6 +770,8 @@ extern bool yb_silence_advisory_locks_not_supported_error; extern bool yb_skip_data_insert_for_xcluster_target; +extern bool yb_force_early_ddl_serialization; + /* * See also ybc_util.h which contains additional such variable declarations for * variables that are (also) used in the pggate layer. diff --git a/src/yb/docdb/conflict_resolution.cc b/src/yb/docdb/conflict_resolution.cc index 1ad92e88159d..416807c92c11 100644 --- a/src/yb/docdb/conflict_resolution.cc +++ b/src/yb/docdb/conflict_resolution.cc @@ -89,10 +89,10 @@ using TransactionConflictInfoMap = std::unordered_map; Status MakeConflictStatus(const TransactionId& our_id, const TransactionId& other_id, - const char* reason, + const std::string& reason, const std::shared_ptr& tablet_metrics) { (*tablet_metrics)->Increment(tablet::TabletCounters::kTransactionConflicts); - return (STATUS(TryAgain, Format("$0 conflicts with $1 transaction: $2", our_id, reason, other_id), + return (STATUS(TryAgain, Format("$0 conflicts with $1: $2", our_id, reason, other_id), Slice(), TransactionError(TransactionErrorCode::kConflict))); } @@ -1085,7 +1085,10 @@ class ConflictResolverContextBase : public ConflictResolverContext { if (our_priority <= their_priority) { return MakeConflictStatus( our_transaction_id, transaction.id, - our_priority == their_priority ? "same priority" : "higher priority", + our_priority == their_priority ? 
+ Format("same priority transaction (pri: $0)", our_priority) : + Format("higher priority transaction (our pri: $0, their pri: $1)", + our_priority, their_priority), GetTabletMetrics()); } } @@ -1283,7 +1286,7 @@ class TransactionConflictResolverContext : public ConflictResolverContextBase { TransactionError(TransactionErrorCode::kSkipLocking)); } else { return MakeConflictStatus( - *transaction_id_, transaction_data.id, "committed", GetTabletMetrics()); + *transaction_id_, transaction_data.id, "committed transaction", GetTabletMetrics()); } } } diff --git a/src/yb/tablet/transaction_participant.cc b/src/yb/tablet/transaction_participant.cc index feffa431d4de..d05f9bcf309b 100644 --- a/src/yb/tablet/transaction_participant.cc +++ b/src/yb/tablet/transaction_participant.cc @@ -696,6 +696,7 @@ class TransactionParticipant::Impl } void Abort(const TransactionId& id, TransactionStatusCallback callback) { + VLOG_WITH_PREFIX(2) << "Abort transaction: " << id; // We are not trying to cleanup intents here because we don't know whether this transaction // has intents of not. auto lock_and_iterator_result = LockAndFind( diff --git a/src/yb/tserver/pg_client_session.cc b/src/yb/tserver/pg_client_session.cc index 1a5efb6df03e..243faecbba3f 100644 --- a/src/yb/tserver/pg_client_session.cc +++ b/src/yb/tserver/pg_client_session.cc @@ -12,6 +12,7 @@ // #include "yb/tserver/pg_client_session.h" +#include #include #include @@ -42,8 +43,9 @@ #include "yb/common/common_util.h" #include "yb/common/ql_type.h" #include "yb/common/pgsql_error.h" -#include "yb/common/transaction_error.h" #include "yb/common/schema.h" +#include "yb/common/transaction_error.h" +#include "yb/common/transaction_priority.h" #include "yb/common/wire_protocol.h" #include "yb/rpc/lightweight_message.h" @@ -1337,9 +1339,9 @@ class PgClientSession::Impl { PgCreateTable helper(req); RETURN_NOT_OK(helper.Prepare()); - if (xcluster_context()) { - xcluster_context()->PrepareCreateTableHelper(req, helper); - } + if (xcluster_context()) { + xcluster_context()->PrepareCreateTableHelper(req, helper); + } const auto* metadata = VERIFY_RESULT(GetDdlTransactionMetadata( req.use_transaction(), req.use_regular_transaction_block(), context->GetClientDeadline())); @@ -2671,7 +2673,8 @@ class PgClientSession::Impl { kind = PgClientSessionKind::kDdl; EnsureSession(kind, deadline); RETURN_NOT_OK(GetDdlTransactionMetadata( - true /* use_transaction */, false /* use_regular_transaction_block */, deadline)); + true /* use_transaction */, false /* use_regular_transaction_block */, deadline, + options.priority())); } else { DCHECK(kind == PgClientSessionKind::kPlain); auto& session = EnsureSession(kind, deadline); @@ -2893,8 +2896,10 @@ class PgClientSession::Impl { return Status::OK(); } + // All DDLs use kHighestPriority unless specified otherwise. Result GetDdlTransactionMetadata( - bool use_transaction, bool use_regular_transaction_block, CoarseTimePoint deadline) { + bool use_transaction, bool use_regular_transaction_block, CoarseTimePoint deadline, + uint64_t priority = kHighPriTxnUpperBound) { if (!use_transaction) { return nullptr; } @@ -2923,6 +2928,7 @@ class PgClientSession::Impl { ? 
IsolationLevel::SERIALIZABLE_ISOLATION : IsolationLevel::SNAPSHOT_ISOLATION; txn = transaction_provider_.Take(deadline); RETURN_NOT_OK(txn->Init(isolation)); + txn->SetPriority(priority); txn->SetLogPrefixTag(kTxnLogPrefixTag, id_); ddl_txn_metadata_ = VERIFY_RESULT(Copy(txn->GetMetadata(deadline).get())); EnsureSession(kSessionKind, deadline)->SetTransaction(txn); diff --git a/src/yb/tserver/stateful_services/pg_auto_analyze_service.cc b/src/yb/tserver/stateful_services/pg_auto_analyze_service.cc index 9ef65a9e3dd7..4e005ed02a3c 100644 --- a/src/yb/tserver/stateful_services/pg_auto_analyze_service.cc +++ b/src/yb/tserver/stateful_services/pg_auto_analyze_service.cc @@ -410,6 +410,7 @@ Result, std::vector>> VLOG_WITH_FUNC(1) << "Deleted or renamed " << dbname << "/" << namespace_id << ", skipping"; continue; } + if (!conn_result) { VLOG_WITH_FUNC(1) << "Conn failed: " << conn_result.status(); return conn_result.status(); @@ -422,6 +423,9 @@ Result, std::vector>> continue; } + auto s = conn.Execute("SET yb_use_internal_auto_analyze_service_conn=true"); + RETURN_NOT_OK(s); + // Construct ANALYZE statement and RUN ANALYZE. // Try to analyze all tables in batches to minimize the number of catalog version increments. // More catalog version increments lead to a higher number of PG cache refreshes on all PG @@ -477,6 +481,7 @@ Result, std::vector>> // Need to refresh name cache because the cached table name is outdated. refresh_name_cache_ = true; } else { + // TODO: Fix this, else branch doesn't imply that the table was deleted. VLOG(1) << "Table " << table_name << " was deleted"; // Need to remove deleted table entries from the YCQL service table. deleted_tables.push_back(table_id); diff --git a/src/yb/yql/pggate/pg_session.cc b/src/yb/yql/pggate/pg_session.cc index 441bf7ca080b..a3d4f166259b 100644 --- a/src/yb/yql/pggate/pg_session.cc +++ b/src/yb/yql/pggate/pg_session.cc @@ -268,6 +268,40 @@ void AdvisoryLockRequestInitCommon( } +YbcTxnPriorityRequirement GetTxnPriorityRequirement( + bool is_ddl_mode, PgIsolationLevel isolation_level, RowMarkType row_mark_type) { + YbcTxnPriorityRequirement txn_priority_requirement; + if (is_ddl_mode) { + // DDLs acquire object locks to serialize conflicting concurrent DDLs. Concurrent DDLs that + // don't conflict can make progress without blocking each other. + // + // However, if object locks are disabled, concurrent DDLs are disallowed for safety. + // This is done by relying on conflicting increments to the catalog version (most DDLs do this + // except some like CREATE TABLE). Note that global DDLs (those that affect catalog tables + // shared across databases) conflict with all other DDLs since they increment all per-db + // catalog versions. + // + // For detecting and resolving these conflicts, DDLs use Fail-on-Conflict concurrency + // control (system catalog table doesn't have wait queues enabled). All DDLs except + // Auto-ANALYZEs use kHighestPriority priority to mimic first-come-first-serve behavior. We + // want to give Auto-ANALYZEs a lower priority to ensure they don't abort already running + // user DDLs. Also, user DDLs should preempt Auto-ANALYZEs. + // + // With object level locking, priorities are meaningless since DDLs don't rely on DocDB's + // conflict resolution for concurrent DDLs. + if (!yb_use_internal_auto_analyze_service_conn) + txn_priority_requirement = kHighestPriority; + else + txn_priority_requirement = kHigherPriorityRange; + } else { + txn_priority_requirement = + isolation_level == PgIsolationLevel::READ_COMMITTED ? 
kHighestPriority : + (RowMarkNeedsHigherPriority(row_mark_type) ? + kHigherPriorityRange : kLowerPriorityRange); + } + return txn_priority_requirement; +} + } // namespace //-------------------------------------------------------------------------------------------------- @@ -376,13 +410,13 @@ class PgSession::RunHelper { return Status::OK(); } - const auto txn_priority_requirement = - pg_session_.GetIsolationLevel() == PgIsolationLevel::READ_COMMITTED - ? kHighestPriority : - (RowMarkNeedsHigherPriority(row_mark_type) ? kHigherPriorityRange : kLowerPriorityRange); read_only = read_only && !IsValidRowMarkType(row_mark_type); - return pg_session_.pg_txn_manager_->CalculateIsolation(read_only, txn_priority_requirement); + return pg_session_.pg_txn_manager_->CalculateIsolation( + read_only, + GetTxnPriorityRequirement( + pg_session_.pg_txn_manager_->IsDdlMode(), pg_session_.GetIsolationLevel(), + row_mark_type)); } Result Flush(std::optional&& cache_options) { @@ -731,12 +765,10 @@ Result PgSession::FlushOperations(BufferableOperations&& ops, boo } if (transactional) { - const auto txn_priority_requirement = - GetIsolationLevel() == PgIsolationLevel::READ_COMMITTED - ? kHighestPriority : kLowerPriorityRange; - RETURN_NOT_OK(pg_txn_manager_->CalculateIsolation( - false /* read_only */, txn_priority_requirement)); + false /* read_only */, + GetTxnPriorityRequirement( + pg_txn_manager_->IsDdlMode(), GetIsolationLevel(), RowMarkType::ROW_MARK_ABSENT))); } // When YSQL is flushing a pipeline of Perform rpcs asynchronously i.e., without waiting for diff --git a/src/yb/yql/pggate/pg_txn_manager.cc b/src/yb/yql/pggate/pg_txn_manager.cc index 66cdd2d1be5f..505e9d3a6235 100644 --- a/src/yb/yql/pggate/pg_txn_manager.cc +++ b/src/yb/yql/pggate/pg_txn_manager.cc @@ -312,6 +312,11 @@ Status PgTxnManager::SetReadOnlyStmt(bool read_only_stmt) { } uint64_t PgTxnManager::NewPriority(YbcTxnPriorityRequirement txn_priority_requirement) { + VLOG(1) << "txn_priority_requirement: " << txn_priority_requirement + << " txn_priority_highpri_lower_bound: " << txn_priority_highpri_lower_bound + << " txn_priority_highpri_upper_bound: " << txn_priority_highpri_upper_bound + << " txn_priority_regular_lower_bound: " << txn_priority_regular_lower_bound + << " txn_priority_regular_upper_bound: " << txn_priority_regular_upper_bound; if (txn_priority_requirement == kHighestPriority) { return yb::kHighPriTxnUpperBound; } @@ -330,6 +335,8 @@ Status PgTxnManager::CalculateIsolation( if (FLAGS_TEST_ysql_yb_ddl_transaction_block_enabled ? IsDdlModeWithSeparateTransaction() : IsDdlMode()) { VLOG_TXN_STATE(2); + if (!priority_.has_value()) + priority_ = NewPriority(txn_priority_requirement); return Status::OK(); } @@ -407,13 +414,13 @@ Status PgTxnManager::CalculateIsolation( << "Unexpected DDL state found in plain transaction"; } - if (!use_saved_priority_) { + if (!use_saved_priority_ && !priority_.has_value()) { priority_ = NewPriority(txn_priority_requirement); } isolation_level_ = docdb_isolation; VLOG_TXN_STATE(2) << "effective isolation level: " << IsolationLevel_Name(docdb_isolation) - << " priority_: " << priority_ + << " priority_: " << (priority_ ? 
std::to_string(*priority_) : "nullopt") << "; transaction started successfully."; } @@ -540,7 +547,7 @@ Status PgTxnManager::FinishPlainTransaction( void PgTxnManager::ResetTxnAndSession() { txn_in_progress_ = false; isolation_level_ = IsolationLevel::NON_TRANSACTIONAL; - priority_ = 0; + priority_ = std::nullopt; IncTxnSerialNo(); enable_follower_reads_ = false; @@ -672,8 +679,8 @@ Status PgTxnManager::SetupPerformOptions( if (use_saved_priority_) { options->set_use_existing_priority(true); - } else { - options->set_priority(priority_); + } else if (priority_) { + options->set_priority(*priority_); } if (need_restart_) { options->set_restart_transaction(true); @@ -719,22 +726,26 @@ Status PgTxnManager::SetupPerformOptions( } double PgTxnManager::GetTransactionPriority() const { - if (priority_ <= yb::kRegularTxnUpperBound) { - return ToTxnPriority(priority_, + if (!priority_.has_value()) { + return 0.0; + } + + if (*priority_ <= yb::kRegularTxnUpperBound) { + return ToTxnPriority(*priority_, yb::kRegularTxnLowerBound, yb::kRegularTxnUpperBound); } - return ToTxnPriority(priority_, + return ToTxnPriority(*priority_, yb::kHighPriTxnLowerBound, yb::kHighPriTxnUpperBound); } YbcTxnPriorityRequirement PgTxnManager::GetTransactionPriorityType() const { - if (priority_ <= yb::kRegularTxnUpperBound) { + if (!priority_.has_value() || (*priority_ <= yb::kRegularTxnUpperBound)) { return kLowerPriorityRange; } - if (priority_ < yb::kHighPriTxnUpperBound) { + if (*priority_ < yb::kHighPriTxnUpperBound) { return kHigherPriorityRange; } return kHighestPriority; diff --git a/src/yb/yql/pggate/pg_txn_manager.h b/src/yb/yql/pggate/pg_txn_manager.h index e8ad924f132c..6806f8ceab00 100644 --- a/src/yb/yql/pggate/pg_txn_manager.h +++ b/src/yb/yql/pggate/pg_txn_manager.h @@ -202,7 +202,7 @@ class PgTxnManager : public RefCountedThreadSafe { // On a transaction conflict error we want to recreate the transaction with the same priority as // the last transaction. This avoids the case where the current transaction gets a higher priority // and cancels the other transaction. - uint64_t priority_ = 0; + std::optional priority_; SavePriority use_saved_priority_ = SavePriority::kFalse; int64_t pg_txn_start_us_ = 0; bool snapshot_read_time_is_used_ = false; diff --git a/src/yb/yql/pggate/util/ybc_guc.cc b/src/yb/yql/pggate/util/ybc_guc.cc index 719bc1e6309d..857dbc1bc875 100644 --- a/src/yb/yql/pggate/util/ybc_guc.cc +++ b/src/yb/yql/pggate/util/ybc_guc.cc @@ -124,3 +124,6 @@ bool yb_mixed_mode_expression_pushdown = true; bool yb_debug_log_catcache_events = false; bool yb_mixed_mode_saop_pushdown = false; + +// Internal GUC to help a backend identify that the connection is from the Auto-Analyze service. +bool yb_use_internal_auto_analyze_service_conn = false; diff --git a/src/yb/yql/pggate/util/ybc_guc.h b/src/yb/yql/pggate/util/ybc_guc.h index bad97cc69238..3be59e061a37 100644 --- a/src/yb/yql/pggate/util/ybc_guc.h +++ b/src/yb/yql/pggate/util/ybc_guc.h @@ -277,6 +277,8 @@ extern bool yb_mixed_mode_expression_pushdown; extern bool yb_mixed_mode_saop_pushdown; +extern bool yb_use_internal_auto_analyze_service_conn; + // Should be in sync with YsqlSamplingAlgorithm protobuf. 
typedef enum { YB_SAMPLING_ALGORITHM_FULL_TABLE_SCAN = 0, diff --git a/src/yb/yql/pggate/ybc_pggate.cc b/src/yb/yql/pggate/ybc_pggate.cc index 596b12326551..02fb2711b4b6 100644 --- a/src/yb/yql/pggate/ybc_pggate.cc +++ b/src/yb/yql/pggate/ybc_pggate.cc @@ -163,6 +163,16 @@ DEFINE_RUNTIME_PREVIEW_bool( "Enables the support for synchronizing snapshots across transactions, using pg_export_snapshot " "and SET TRANSACTION SNAPSHOT"); +DEFINE_RUNTIME_PG_FLAG( + bool, yb_force_early_ddl_serialization, true, + "If object locking is off (i.e., TEST_enable_object_locking_for_table_locks=false), concurrent " + "DDLs might face a conflict error on the catalog version increment at the end after doing all " + "the work. Setting this flag enables a fail-fast strategy by locking the catalog version at " + "the start of DDLs, causing conflict errors to occur before useful work is done. This flag is " + "only applicable without object locking. If object locking is enabled, it ensures that " + "concurrent DDLs block on each other for serialization. Also, this flag is valid only if " + "ysql_enable_db_catalog_version_mode and yb_enable_invalidation_messages are enabled."); + DECLARE_bool(TEST_ash_debug_aux); DECLARE_bool(TEST_generate_ybrowid_sequentially); DECLARE_bool(TEST_ysql_log_perdb_allocated_new_objectid); diff --git a/src/yb/yql/pgwrapper/pg_auto_analyze-test.cc b/src/yb/yql/pgwrapper/pg_auto_analyze-test.cc index 02e364bd34ff..769f46750946 100644 --- a/src/yb/yql/pgwrapper/pg_auto_analyze-test.cc +++ b/src/yb/yql/pgwrapper/pg_auto_analyze-test.cc @@ -36,6 +36,7 @@ #include "yb/util/logging_test_util.h" #include "yb/util/string_case.h" #include "yb/util/test_macros.h" +#include "yb/util/test_thread_holder.h" #include "yb/util/tostring.h" #include "yb/yql/cql/ql/util/statement_result.h" @@ -53,6 +54,8 @@ DECLARE_uint32(ysql_auto_analyze_batch_size); DECLARE_bool(TEST_sort_auto_analyze_target_table_ids); DECLARE_int32(TEST_simulate_analyze_deleted_table_secs); DECLARE_string(vmodule); +DECLARE_int64(TEST_delay_after_table_analyze_ms); +DECLARE_bool(TEST_enable_object_locking_for_table_locks); using namespace std::chrono_literals; @@ -75,8 +78,8 @@ class PgAutoAnalyzeTest : public PgMiniTestBase { ANNOTATE_UNPROTECTED_WRITE(FLAGS_ysql_enable_table_mutation_counter) = true; // Set low values for the node level mutation reporting and the cluster level persisting - // intervals ensures that the aggregate mutations are frequently applied to the underlying YCQL - // table, hence capping the test time low. + // intervals. This ensures that the aggregate mutations are frequently applied to the underlying + // YCQL table, hence capping the test time low. ANNOTATE_UNPROTECTED_WRITE(FLAGS_ysql_node_level_mutation_reporting_interval_ms) = 10; ANNOTATE_UNPROTECTED_WRITE(FLAGS_ysql_cluster_level_mutation_persist_interval_ms) = 10; google::SetVLOGLevel("pg_auto_analyze_service", 2); @@ -985,5 +988,56 @@ TEST_F(PgAutoAnalyzeTest, MutationsCleanupForDeletedAnalyzeTargetTable) { ASSERT_OK(WaitForTableMutationsCleanUp({table_id})); } +TEST_F(PgAutoAnalyzeTest, DDLsInParallelWithAutoAnalyze) { + ANNOTATE_UNPROTECTED_WRITE(FLAGS_ysql_auto_analyze_threshold) = 1; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_ysql_auto_analyze_scale_factor) = 0.01; + ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_delay_after_table_analyze_ms) = 10; + // Explicitly disable object locking. With object locking, concurrent DDLs will be handled + // without relying on catalog version increments. 
+ ANNOTATE_UNPROTECTED_WRITE(FLAGS_TEST_enable_object_locking_for_table_locks) = false; + + auto conn = ASSERT_RESULT(Connect()); + auto db_name = "abc"; + ASSERT_OK(conn.ExecuteFormat("CREATE DATABASE $0", db_name)); + conn = ASSERT_RESULT(ConnectToDB(db_name)); + const std::string table_name = "test_tbl"; + const std::string table2_name = "test_tbl2"; + ASSERT_OK(conn.ExecuteFormat( + "CREATE TABLE $0 (h1 INT, v1 INT DEFAULT 5, PRIMARY KEY(h1))", table_name)); + ASSERT_OK(conn.ExecuteFormat( + "CREATE TABLE $0 (h1 INT, v1 INT DEFAULT 5, PRIMARY KEY(h1))", table2_name)); + + TestThreadHolder thread_holder; + thread_holder.AddThreadFunctor([this, db_name, table_name, &stop = thread_holder.stop_flag()] { + auto conn = ASSERT_RESULT(ConnectToDB(db_name)); + auto num_inserts = 0; + while (!stop.load(std::memory_order_acquire)) { + auto status = conn.ExecuteFormat("INSERT INTO $0 (h1) VALUES ($1)", table_name, num_inserts); + if (status.ToString().find("schema version mismatch") == std::string::npos) { + ASSERT_OK(status); + num_inserts++; + } + } + ASSERT_OK(WaitFor([&conn, table_name, num_inserts]() -> Result { + const std::string format_query = "SELECT reltuples FROM pg_class WHERE relname = '$0'"; + auto res = VERIFY_RESULT(conn.FetchFormat(format_query, table_name)); + auto tuples = VERIFY_RESULT(GetValue(res.get(), 0, 0)); + LOG(INFO) << "Saw " << tuples << " reltuples"; + return num_inserts == tuples; + }, 10s * kTimeMultiplier, + Format("Check expected reltuples vs actual reltuples (%0)", num_inserts))); + }); + + // Perform DDLs on another table to avoid read restart errors. + ASSERT_OK(conn.Execute("SET yb_max_query_layer_retries = 0")); + ASSERT_OK(conn.ExecuteFormat("CREATE INDEX idx ON $0 (v1)", table2_name)); + ASSERT_OK(conn.ExecuteFormat("DROP INDEX idx")); + ASSERT_OK(conn.ExecuteFormat("ALTER TABLE $0 ADD COLUMN v2 INT", table2_name)); + ASSERT_OK(conn.ExecuteFormat("ALTER TABLE $0 DROP COLUMN v2", table2_name)); + + thread_holder.Stop(); + thread_holder.JoinAll(); +} + } // namespace pgwrapper } // namespace yb diff --git a/src/yb/yql/pgwrapper/pg_catalog_version-test.cc b/src/yb/yql/pgwrapper/pg_catalog_version-test.cc index 953199a8236d..55207a991e2d 100644 --- a/src/yb/yql/pgwrapper/pg_catalog_version-test.cc +++ b/src/yb/yql/pgwrapper/pg_catalog_version-test.cc @@ -928,6 +928,11 @@ TEST_F(PgCatalogVersionTest, FixCatalogVersionTable) { ASSERT_TRUE(ASSERT_RESULT( VerifyCatalogVersionTableDbOids(&conn_yugabyte, true /* single_row */))); + // Do not force early serialization for DDLs since the pg_yb_catalog_version table is in global + // catalog version mode and early serialization requires taking a lock on the per-db catalog + // version row. + ASSERT_OK(conn_yugabyte.Execute("SET yb_force_early_ddl_serialization=false")); + // At this time, an existing connection is still in per-db catalog version mode // but the table pg_yb_catalog_version has only one row for template1 and is out // of sync with pg_database. 
Note that once a connection is in per-db catalog diff --git a/src/yb/yql/pgwrapper/pg_hint_table-test.cc b/src/yb/yql/pgwrapper/pg_hint_table-test.cc index c9f0b992fa56..739b97ce7870 100644 --- a/src/yb/yql/pgwrapper/pg_hint_table-test.cc +++ b/src/yb/yql/pgwrapper/pg_hint_table-test.cc @@ -251,6 +251,21 @@ TEST_F(PgHintTableTest, SimpleConcurrencyTest) { } } +void FailIfNotConcurrentDDLErrors(const Status& status) { + // Expect a catalog version mismatch error during concurrent operations + if (!status.ok()) { + std::string error_msg = status.ToString(); + if (error_msg.find("pgsql error 40001") != std::string::npos || + error_msg.find("Catalog Version Mismatch") != std::string::npos || + error_msg.find("Restart read required") != std::string::npos || + error_msg.find("schema version mismatch") != std::string::npos) { + LOG(INFO) << "Expected error: " << error_msg; + } else { + FAIL() << "Unexpected error: " << error_msg; + } + } +} + // Test that hints work correctly when ANALYZE is running concurrently TEST_F(PgHintTableTest, HintWithConcurrentAnalyze) { // ---------------------------------------------------------------------------------------------- @@ -263,42 +278,35 @@ TEST_F(PgHintTableTest, HintWithConcurrentAnalyze) { // Thread to run ANALYZE auto conn_analyze = ASSERT_RESULT(ConnectWithHintTable()); + // Thread to insert the hint + auto conn_hint = ASSERT_RESULT(ConnectWithHintTable()); + threads.AddThreadFunctor([&stop_threads, &conn_analyze]() { LOG(INFO) << "Starting ANALYZE thread"; while (!stop_threads.load()) { auto status = conn_analyze.Execute("ANALYZE VERBOSE"); - // Expect a catalog version mismatch error during concurrent operations - if (!status.ok()) { - std::string error_msg = status.ToString(); - if (error_msg.find("pgsql error 40001") != std::string::npos || - error_msg.find("Catalog Version Mismatch") != std::string::npos || - error_msg.find("Restart read required") != std::string::npos || - error_msg.find("schema version mismatch") != std::string::npos) { - // These errors are expected during concurrent operations - LOG(INFO) << "Expected error during ANALYZE: " << error_msg; - } else { - FAIL() << "Unexpected error during ANALYZE: " << error_msg; - } - } + FailIfNotConcurrentDDLErrors(status); } LOG(INFO) << "ANALYZE completed"; }); - // Thread to insert the hint - auto conn_hint = ASSERT_RESULT(ConnectWithHintTable()); threads.AddThreadFunctor([&stop_threads, &hint_num, &conn_hint]() { LOG(INFO) << "Starting hint insertion thread"; while (!stop_threads.load()) { - ASSERT_OK(conn_hint.ExecuteFormat( + auto status = conn_hint.ExecuteFormat( "INSERT INTO hint_plan.hints (norm_query_string, application_name, hints) " "VALUES ('$0', '', 'MergeJoin(pg_class pg_attribute)') " "ON CONFLICT (norm_query_string, application_name) " "DO UPDATE SET hints = 'MergeJoin(pg_class pg_attribute)'", - hint_num++)); + hint_num); + FailIfNotConcurrentDDLErrors(status); + if (status.ok()) { + hint_num++; + } } LOG(INFO) << "Successfully inserted " << hint_num << " hints"; }); diff --git a/src/yb/yql/pgwrapper/pg_index_backfill-test.cc b/src/yb/yql/pgwrapper/pg_index_backfill-test.cc index af62c4ab2921..e3ad9de5dd35 100644 --- a/src/yb/yql/pgwrapper/pg_index_backfill-test.cc +++ b/src/yb/yql/pgwrapper/pg_index_backfill-test.cc @@ -909,6 +909,7 @@ TEST_P(PgIndexBackfillTestSimultaneously, CreateIndexSimultaneously) { // TODO (#19975): Enable read committed isolation PGConn create_conn = ASSERT_RESULT(SetDefaultTransactionIsolation( ConnectToDB(kDatabaseName), 
IsolationLevel::SNAPSHOT_ISOLATION)); + ASSERT_OK(create_conn.Execute("SET yb_force_early_ddl_serialization=false")); statuses[i] = MoveStatus(create_conn.ExecuteFormat( "CREATE INDEX $0 ON $1 (i)", kIndexName, kTableName)); @@ -1856,6 +1857,9 @@ TEST_P(PgIndexBackfillFastClientTimeout, DropWhileBackfilling) { thread_holder_.AddThreadFunctor([this] { LOG(INFO) << "Begin create thread"; PGConn create_conn = ASSERT_RESULT(ConnectToDB(kDatabaseName)); + // We don't want the DROP INDEX to face a serialization error when acquiring the FOR UPDATE lock + // on the catalog version row. + ASSERT_OK(create_conn.Execute("SET yb_force_early_ddl_serialization=false")); Status status = create_conn.ExecuteFormat("CREATE INDEX $0 ON $1 (i)", kIndexName, kTableName); // Expect timeout because // DROP INDEX is currently not online and removes the index info from the indexed table @@ -2436,6 +2440,9 @@ TEST_P(PgIndexBackfillReadCommittedBlockIndisliveBlockDoBackfill, CatVerBumps) { thread_holder_.AddThreadFunctor([this] { LOG(INFO) << "Begin create index thread"; auto create_idx_conn = ASSERT_RESULT(ConnectToDB(kDatabaseName)); + // We don't want the catalog version increments to conflict with the FOR UPDATE lock on the + // catalog version row. + ASSERT_OK(create_idx_conn.Execute("SET yb_force_early_ddl_serialization=false")); ASSERT_OK(create_idx_conn.ExecuteFormat("CREATE INDEX $0 ON $1 (i)", kIndexName, kTableName)); LOG(INFO) << "End create index thread"; }); From a61155d4b8c8bec0b6ca8c3423da0e3c7215213d Mon Sep 17 00:00:00 2001 From: jhe Date: Fri, 16 May 2025 11:45:47 -0700 Subject: [PATCH 121/146] [#27216] xClusterDDLRepl: Replace TEST_override_replication_role with a GUC Summary: Replacing the TEST_override_replication_role function with a GUC `yb_xcluster_ddl_replication.TEST_replication_role_override`. `FetchReplicationRole` now checks if this GUC is set to something other than `UNSPECIFIED`, and if so overrides the replication role with that value instead of fetching the role from the tserver. 
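For reference, the override is now an ordinary session GUC, so the regression tests (see the expected-output updates below) simply do, e.g.:

```sql
SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'source';
SELECT yb_xcluster_ddl_replication.get_replication_role();

-- Invalid values are rejected by the GUC machinery itself:
SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'invalid';
-- ERROR: invalid value for parameter "yb_xcluster_ddl_replication.test_replication_role_override": "invalid"
```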
Jira: DB-16706 Test Plan: ``` ybd --java-test "org.yb.pgsql.TestPgRegressThirdPartyExtensionsYBXClusterDDLReplication" ``` Reviewers: hsunder, xCluster, #db-approvers Reviewed By: hsunder, #db-approvers Subscribers: slingam, svc_phabricator, yql, ybase Differential Revision: https://phorge.dev.yugabyte.com/D44016 --- .../expected/colocated_setup.out | 2 +- .../expected/create_colocated_table.out | 6 +- .../expected/create_drop_index.out | 20 +++--- .../expected/create_drop_table.out | 16 ++--- .../expected/routines.out | 33 +++++----- .../expected/setup.out | 2 +- .../sql/routines.sql | 26 ++++---- .../yb_xcluster_ddl_replication/sql/setup.sql | 2 +- .../yb_xcluster_ddl_replication--1.0.sql | 4 -- .../yb_xcluster_ddl_replication.c | 64 +++++++------------ 10 files changed, 77 insertions(+), 98 deletions(-) diff --git a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/colocated_setup.out b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/colocated_setup.out index b712fb360e37..bae34340eea1 100644 --- a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/colocated_setup.out +++ b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/colocated_setup.out @@ -15,7 +15,7 @@ $$; CREATE PROCEDURE TEST_reset() LANGUAGE SQL AS $$ - CALL yb_xcluster_ddl_replication.TEST_override_replication_role('source'); + SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'source'; DELETE FROM yb_xcluster_ddl_replication.ddl_queue; DELETE FROM yb_xcluster_ddl_replication.replicated_ddls; $$; diff --git a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_colocated_table.out b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_colocated_table.out index 264a9ea82938..4b469d0191a9 100644 --- a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_colocated_table.out +++ b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_colocated_table.out @@ -13,7 +13,7 @@ CREATE TABLE coloc_foo(i int PRIMARY KEY); SELECT yb_data FROM yb_xcluster_ddl_replication.ddl_queue ORDER BY ddl_end_time; yb_data ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - {"user": "yugabyte", "query": "CREATE TABLE coloc_foo(i int PRIMARY KEY);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "coloc_foo", "relfile_oid": 16416, "colocation_id": 20001}]} + {"user": "yugabyte", "query": "CREATE TABLE coloc_foo(i int PRIMARY KEY);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "coloc_foo", "relfile_oid": 16415, "colocation_id": 20001}]} (1 row) -- Verify that non-colocated table is captured. 
@@ -21,8 +21,8 @@ CREATE TABLE non_coloc_foo(i int PRIMARY KEY) WITH (COLOCATION = false); SELECT yb_data FROM yb_xcluster_ddl_replication.ddl_queue ORDER BY ddl_end_time; yb_data -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - {"user": "yugabyte", "query": "CREATE TABLE coloc_foo(i int PRIMARY KEY);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "coloc_foo", "relfile_oid": 16416, "colocation_id": 20001}]} - {"user": "yugabyte", "query": "CREATE TABLE non_coloc_foo(i int PRIMARY KEY) WITH (COLOCATION = false);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "non_coloc_foo", "relfile_oid": 16421}]} + {"user": "yugabyte", "query": "CREATE TABLE coloc_foo(i int PRIMARY KEY);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "coloc_foo", "relfile_oid": 16415, "colocation_id": 20001}]} + {"user": "yugabyte", "query": "CREATE TABLE non_coloc_foo(i int PRIMARY KEY) WITH (COLOCATION = false);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "non_coloc_foo", "relfile_oid": 16420}]} (2 rows) SELECT yb_data FROM yb_xcluster_ddl_replication.replicated_ddls ORDER BY ddl_end_time; diff --git a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_drop_index.out b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_drop_index.out index 10cfd5e9b460..694f2f67b0cf 100644 --- a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_drop_index.out +++ b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_drop_index.out @@ -33,11 +33,11 @@ SELECT yb_data FROM yb_xcluster_ddl_replication.ddl_queue ORDER BY ddl_end_time; yb_data ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {"user": "yugabyte", "query": "CREATE SCHEMA create_index;", "schema": "public", "version": 1, "command_tag": "CREATE SCHEMA"} - {"user": "yugabyte", "query": "CREATE TABLE foo(i int PRIMARY KEY, a int, b text, c int);", "schema": "create_index", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo", "relfile_oid": 16462}]} - {"user": "yugabyte", "query": "CREATE INDEX foo_idx_simple ON foo(a);", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_simple", "relfile_oid": 16467}]} - {"user": "yugabyte", "query": "CREATE UNIQUE INDEX foo_idx_unique ON foo(b);", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_unique", "relfile_oid": 16468}]} - {"user": "yugabyte", "query": "CREATE INDEX foo_idx_filtered ON foo(c ASC, a) WHERE a > c;", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_filtered", "relfile_oid": 16469}]} - {"user": "new_role", "query": "CREATE INDEX foo_idx_include ON foo(lower(b)) INCLUDE (a) SPLIT INTO 2 TABLETS;", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": 
[{"is_index": true, "rel_name": "foo_idx_include", "relfile_oid": 16470}]} + {"user": "yugabyte", "query": "CREATE TABLE foo(i int PRIMARY KEY, a int, b text, c int);", "schema": "create_index", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo", "relfile_oid": 16461}]} + {"user": "yugabyte", "query": "CREATE INDEX foo_idx_simple ON foo(a);", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_simple", "relfile_oid": 16466}]} + {"user": "yugabyte", "query": "CREATE UNIQUE INDEX foo_idx_unique ON foo(b);", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_unique", "relfile_oid": 16467}]} + {"user": "yugabyte", "query": "CREATE INDEX foo_idx_filtered ON foo(c ASC, a) WHERE a > c;", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_filtered", "relfile_oid": 16468}]} + {"user": "new_role", "query": "CREATE INDEX foo_idx_include ON foo(lower(b)) INCLUDE (a) SPLIT INTO 2 TABLETS;", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_include", "relfile_oid": 16469}]} (6 rows) SELECT yb_data FROM yb_xcluster_ddl_replication.replicated_ddls ORDER BY ddl_end_time; @@ -61,11 +61,11 @@ SELECT yb_data FROM yb_xcluster_ddl_replication.ddl_queue ORDER BY ddl_end_time; yb_data ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {"user": "yugabyte", "query": "CREATE SCHEMA create_index;", "schema": "public", "version": 1, "command_tag": "CREATE SCHEMA"} - {"user": "yugabyte", "query": "CREATE TABLE foo(i int PRIMARY KEY, a int, b text, c int);", "schema": "create_index", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo", "relfile_oid": 16462}]} - {"user": "yugabyte", "query": "CREATE INDEX foo_idx_simple ON foo(a);", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_simple", "relfile_oid": 16467}]} - {"user": "yugabyte", "query": "CREATE UNIQUE INDEX foo_idx_unique ON foo(b);", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_unique", "relfile_oid": 16468}]} - {"user": "yugabyte", "query": "CREATE INDEX foo_idx_filtered ON foo(c ASC, a) WHERE a > c;", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_filtered", "relfile_oid": 16469}]} - {"user": "new_role", "query": "CREATE INDEX foo_idx_include ON foo(lower(b)) INCLUDE (a) SPLIT INTO 2 TABLETS;", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_include", "relfile_oid": 16470}]} + {"user": "yugabyte", "query": "CREATE TABLE foo(i int PRIMARY KEY, a int, b text, c int);", "schema": "create_index", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo", "relfile_oid": 16461}]} + {"user": "yugabyte", "query": "CREATE INDEX foo_idx_simple ON foo(a);", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, 
"rel_name": "foo_idx_simple", "relfile_oid": 16466}]} + {"user": "yugabyte", "query": "CREATE UNIQUE INDEX foo_idx_unique ON foo(b);", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_unique", "relfile_oid": 16467}]} + {"user": "yugabyte", "query": "CREATE INDEX foo_idx_filtered ON foo(c ASC, a) WHERE a > c;", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_filtered", "relfile_oid": 16468}]} + {"user": "new_role", "query": "CREATE INDEX foo_idx_include ON foo(lower(b)) INCLUDE (a) SPLIT INTO 2 TABLETS;", "schema": "create_index", "version": 1, "command_tag": "CREATE INDEX", "new_rel_map": [{"is_index": true, "rel_name": "foo_idx_include", "relfile_oid": 16469}]} {"user": "yugabyte", "query": "DROP INDEX foo_idx_unique;", "schema": "create_index", "version": 1, "command_tag": "DROP INDEX"} {"user": "yugabyte", "query": "DROP INDEX foo_idx_filtered;", "schema": "create_index", "version": 1, "command_tag": "DROP INDEX"} {"user": "yugabyte", "query": "DROP TABLE foo;", "schema": "create_index", "version": 1, "command_tag": "DROP TABLE"} diff --git a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_drop_table.out b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_drop_table.out index 577e10587631..4c549e4da075 100644 --- a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_drop_table.out +++ b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/create_drop_table.out @@ -26,10 +26,10 @@ CREATE TABLE unique_foo(i int PRIMARY KEY, u text UNIQUE); SELECT yb_data FROM yb_xcluster_ddl_replication.ddl_queue ORDER BY ddl_end_time; yb_data ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - {"user": "yugabyte", "query": "CREATE TABLE foo(i int PRIMARY KEY);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo", "relfile_oid": 16415}]} + {"user": "yugabyte", "query": "CREATE TABLE foo(i int PRIMARY KEY);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo", "relfile_oid": 16414}]} {"user": "yugabyte", "query": "CREATE TABLE manual_foo(i int PRIMARY KEY);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "manual_replication": true} - {"user": "yugabyte", "query": "CREATE TABLE extra_foo(i int PRIMARY KEY) WITH (COLOCATION = false) SPLIT INTO 1 TABLETS;", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "extra_foo", "relfile_oid": 16425}]} - {"user": "yugabyte", "query": "CREATE TABLE unique_foo(i int PRIMARY KEY, u text UNIQUE);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "unique_foo", "relfile_oid": 16430}, {"is_index": true, "rel_name": "unique_foo_u_key", "relfile_oid": 16435}]} + {"user": "yugabyte", "query": "CREATE TABLE extra_foo(i int PRIMARY KEY) WITH (COLOCATION = false) SPLIT INTO 1 TABLETS;", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "extra_foo", "relfile_oid": 16424}]} + {"user": "yugabyte", "query": "CREATE TABLE unique_foo(i int PRIMARY KEY, u text UNIQUE);", 
"schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "unique_foo", "relfile_oid": 16429}, {"is_index": true, "rel_name": "unique_foo_u_key", "relfile_oid": 16434}]} (4 rows) SELECT yb_data FROM yb_xcluster_ddl_replication.replicated_ddls ORDER BY ddl_end_time; @@ -57,12 +57,12 @@ DROP TABLE foo_partitioned_by_col; SELECT yb_data FROM yb_xcluster_ddl_replication.ddl_queue ORDER BY ddl_end_time; yb_data ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - {"user": "yugabyte", "query": "CREATE TABLE foo(i int PRIMARY KEY);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo", "relfile_oid": 16415}]} + {"user": "yugabyte", "query": "CREATE TABLE foo(i int PRIMARY KEY);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo", "relfile_oid": 16414}]} {"user": "yugabyte", "query": "CREATE TABLE manual_foo(i int PRIMARY KEY);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "manual_replication": true} - {"user": "yugabyte", "query": "CREATE TABLE extra_foo(i int PRIMARY KEY) WITH (COLOCATION = false) SPLIT INTO 1 TABLETS;", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "extra_foo", "relfile_oid": 16425}]} - {"user": "yugabyte", "query": "CREATE TABLE unique_foo(i int PRIMARY KEY, u text UNIQUE);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "unique_foo", "relfile_oid": 16430}, {"is_index": true, "rel_name": "unique_foo_u_key", "relfile_oid": 16435}]} - {"user": "yugabyte", "query": "CREATE TABLE foo_partitioned_by_pkey(id int, PRIMARY KEY (id)) PARTITION BY RANGE (id);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo_partitioned_by_pkey", "relfile_oid": 16437}]} - {"user": "yugabyte", "query": "CREATE TABLE foo_partitioned_by_col(id int) PARTITION BY RANGE (id);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo_partitioned_by_col", "relfile_oid": 16442}]} + {"user": "yugabyte", "query": "CREATE TABLE extra_foo(i int PRIMARY KEY) WITH (COLOCATION = false) SPLIT INTO 1 TABLETS;", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "extra_foo", "relfile_oid": 16424}]} + {"user": "yugabyte", "query": "CREATE TABLE unique_foo(i int PRIMARY KEY, u text UNIQUE);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "unique_foo", "relfile_oid": 16429}, {"is_index": true, "rel_name": "unique_foo_u_key", "relfile_oid": 16434}]} + {"user": "yugabyte", "query": "CREATE TABLE foo_partitioned_by_pkey(id int, PRIMARY KEY (id)) PARTITION BY RANGE (id);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo_partitioned_by_pkey", "relfile_oid": 16436}]} + {"user": "yugabyte", "query": "CREATE TABLE foo_partitioned_by_col(id int) PARTITION BY RANGE (id);", "schema": "public", "version": 1, "command_tag": "CREATE TABLE", "new_rel_map": [{"rel_name": "foo_partitioned_by_col", "relfile_oid": 16441}]} {"user": "yugabyte", "query": "DROP TABLE foo;", "schema": "public", "version": 1, "command_tag": 
"DROP TABLE"} {"user": "yugabyte", "query": "DROP TABLE manual_foo;", "schema": "public", "version": 1, "command_tag": "DROP TABLE", "manual_replication": true} {"user": "yugabyte", "query": "DROP TABLE extra_foo;", "schema": "public", "version": 1, "command_tag": "DROP TABLE"} diff --git a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/routines.out b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/routines.out index 518e4c93278b..97bd94006aa2 100644 --- a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/routines.out +++ b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/routines.out @@ -6,31 +6,31 @@ SELECT yb_xcluster_ddl_replication.get_replication_role(); (1 row) -- Check can override with every possible role. -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('unspecified'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'unspecified'; SELECT yb_xcluster_ddl_replication.get_replication_role(); get_replication_role ---------------------- - unspecified + not_automatic_mode (1 row) -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('unavailable'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'unavailable'; SELECT yb_xcluster_ddl_replication.get_replication_role(); ERROR: unable to fetch replication role -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('not_automatic_mode'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'not_automatic_mode'; SELECT yb_xcluster_ddl_replication.get_replication_role(); get_replication_role ---------------------- not_automatic_mode (1 row) -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('automatic_source'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'automatic_source'; SELECT yb_xcluster_ddl_replication.get_replication_role(); get_replication_role ---------------------- source (1 row) -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('automatic_target'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'automatic_target'; SELECT yb_xcluster_ddl_replication.get_replication_role(); get_replication_role ---------------------- @@ -38,14 +38,14 @@ SELECT yb_xcluster_ddl_replication.get_replication_role(); (1 row) -- Shortcuts for automatic roles. -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('source'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'source'; SELECT yb_xcluster_ddl_replication.get_replication_role(); get_replication_role ---------------------- source (1 row) -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('target'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'target'; SELECT yb_xcluster_ddl_replication.get_replication_role(); get_replication_role ---------------------- @@ -53,8 +53,9 @@ SELECT yb_xcluster_ddl_replication.get_replication_role(); (1 row) -- Check for invalid roles. -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('invalid'); -ERROR: invalid replication role: 'invalid' +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'invalid'; +ERROR: invalid value for parameter "yb_xcluster_ddl_replication.test_replication_role_override": "invalid" +HINT: Available values: , NONE, SOURCE, TARGET. 
SELECT yb_xcluster_ddl_replication.get_replication_role(); get_replication_role ---------------------- @@ -62,16 +63,16 @@ SELECT yb_xcluster_ddl_replication.get_replication_role(); (1 row) -- Check we can turn off override. -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('source'); -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('no_override'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'source'; +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'none'; SELECT yb_xcluster_ddl_replication.get_replication_role(); get_replication_role ---------------------- not_automatic_mode (1 row) -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('target'); -CALL yb_xcluster_ddl_replication.TEST_override_replication_role(''); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'target'; +SET yb_xcluster_ddl_replication.TEST_replication_role_override = ''; SELECT yb_xcluster_ddl_replication.get_replication_role(); get_replication_role ---------------------- @@ -81,8 +82,8 @@ SELECT yb_xcluster_ddl_replication.get_replication_role(); -- Check override cannot be called if you are not superuser but -- get_replication_role can. SET SESSION AUTHORIZATION testuser; -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('target'); -ERROR: permission denied for procedure test_override_replication_role +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'target'; +ERROR: permission denied to set parameter "yb_xcluster_ddl_replication.test_replication_role_override" SELECT yb_xcluster_ddl_replication.get_replication_role(); get_replication_role ---------------------- diff --git a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/setup.out b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/setup.out index 78a71626100e..1f8bd044164f 100644 --- a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/setup.out +++ b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/expected/setup.out @@ -12,7 +12,7 @@ $$; CREATE PROCEDURE TEST_reset() LANGUAGE SQL AS $$ - CALL yb_xcluster_ddl_replication.TEST_override_replication_role('source'); + SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'source'; DELETE FROM yb_xcluster_ddl_replication.ddl_queue; DELETE FROM yb_xcluster_ddl_replication.replicated_ddls; $$; diff --git a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/sql/routines.sql b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/sql/routines.sql index fbd86a34700b..a9cfa9241f99 100644 --- a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/sql/routines.sql +++ b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/sql/routines.sql @@ -3,49 +3,49 @@ SELECT yb_xcluster_ddl_replication.get_replication_role(); -- Check can override with every possible role. 
-CALL yb_xcluster_ddl_replication.TEST_override_replication_role('unspecified'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'unspecified'; SELECT yb_xcluster_ddl_replication.get_replication_role(); -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('unavailable'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'unavailable'; SELECT yb_xcluster_ddl_replication.get_replication_role(); -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('not_automatic_mode'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'not_automatic_mode'; SELECT yb_xcluster_ddl_replication.get_replication_role(); -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('automatic_source'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'automatic_source'; SELECT yb_xcluster_ddl_replication.get_replication_role(); -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('automatic_target'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'automatic_target'; SELECT yb_xcluster_ddl_replication.get_replication_role(); -- Shortcuts for automatic roles. -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('source'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'source'; SELECT yb_xcluster_ddl_replication.get_replication_role(); -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('target'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'target'; SELECT yb_xcluster_ddl_replication.get_replication_role(); -- Check for invalid roles. -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('invalid'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'invalid'; SELECT yb_xcluster_ddl_replication.get_replication_role(); -- Check we can turn off override. -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('source'); -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('no_override'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'source'; +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'none'; SELECT yb_xcluster_ddl_replication.get_replication_role(); -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('target'); -CALL yb_xcluster_ddl_replication.TEST_override_replication_role(''); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'target'; +SET yb_xcluster_ddl_replication.TEST_replication_role_override = ''; SELECT yb_xcluster_ddl_replication.get_replication_role(); -- Check override cannot be called if you are not superuser but -- get_replication_role can. 
SET SESSION AUTHORIZATION testuser; -CALL yb_xcluster_ddl_replication.TEST_override_replication_role('target'); +SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'target'; SELECT yb_xcluster_ddl_replication.get_replication_role(); diff --git a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/sql/setup.sql b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/sql/setup.sql index 5f6b03aa1598..ddf96f06f9f8 100644 --- a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/sql/setup.sql +++ b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/sql/setup.sql @@ -14,7 +14,7 @@ $$; CREATE PROCEDURE TEST_reset() LANGUAGE SQL AS $$ - CALL yb_xcluster_ddl_replication.TEST_override_replication_role('source'); + SET yb_xcluster_ddl_replication.TEST_replication_role_override = 'source'; DELETE FROM yb_xcluster_ddl_replication.ddl_queue; DELETE FROM yb_xcluster_ddl_replication.replicated_ddls; $$; diff --git a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/yb_xcluster_ddl_replication--1.0.sql b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/yb_xcluster_ddl_replication--1.0.sql index 7e7e31d514ac..7ead7179e477 100644 --- a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/yb_xcluster_ddl_replication--1.0.sql +++ b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/yb_xcluster_ddl_replication--1.0.sql @@ -24,10 +24,6 @@ CREATE FUNCTION yb_xcluster_ddl_replication.get_replication_role() LANGUAGE C AS 'MODULE_PATHNAME', 'get_replication_role'; -CREATE PROCEDURE yb_xcluster_ddl_replication.TEST_override_replication_role(role text) - LANGUAGE C - AS 'MODULE_PATHNAME', 'TEST_override_replication_role'; - /* ------------------------------------------------------------------------- */ /* Create event triggers. */ diff --git a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/yb_xcluster_ddl_replication.c b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/yb_xcluster_ddl_replication.c index e4dabd02cafb..3af31bcab426 100644 --- a/src/postgres/yb-extensions/yb_xcluster_ddl_replication/yb_xcluster_ddl_replication.c +++ b/src/postgres/yb-extensions/yb_xcluster_ddl_replication/yb_xcluster_ddl_replication.c @@ -52,16 +52,23 @@ typedef enum YbXClusterReplicationRole AUTOMATIC_TARGET = 4, } YbXClusterReplicationRole; +static const struct config_enum_entry replication_role_overrides[] = { + {"", UNSPECIFIED, /* hidden */ false}, + {"NONE", UNSPECIFIED, /* hidden */ false}, + {"UNSPECIFIED", UNSPECIFIED, /* hidden */ true}, + {"NOT_AUTOMATIC_MODE", NOT_AUTOMATIC_MODE, /* hidden */ true}, + {"UNAVAILABLE", UNAVAILABLE, /* hidden */ true}, + {"SOURCE", AUTOMATIC_SOURCE, /* hidden */ false}, + {"AUTOMATIC_SOURCE", AUTOMATIC_SOURCE, /* hidden */ true}, + {"TARGET", AUTOMATIC_TARGET, /* hidden */ false}, + {"AUTOMATIC_TARGET", AUTOMATIC_TARGET, /* hidden */ true}, + {NULL, 0, false}}; + /* * Call FetchReplicationRole() at the start of every DDL to fill this variable * in before using it. */ static int replication_role = UNAVAILABLE; -static bool role_override_present = false; -/* - * If role_override_present, then this overrides the value of replication_role - * fetched from the TServer. 
- */ static int replication_role_override = UNSPECIFIED; /* @@ -121,12 +128,22 @@ _PG_init(void) PGC_SUSET, 0, NULL, NULL, NULL); + + DefineCustomEnumVariable("yb_xcluster_ddl_replication.TEST_replication_role_override", + gettext_noop("Test override for replication role."), + NULL, + &replication_role_override, + UNSPECIFIED, + replication_role_overrides, + PGC_SUSET, + 0, + NULL, NULL, NULL); } void FetchReplicationRole() { - if (role_override_present) + if (replication_role_override != UNSPECIFIED) replication_role = replication_role_override; else replication_role = YBCGetXClusterRole(MyDatabaseId); @@ -188,41 +205,6 @@ get_replication_role(PG_FUNCTION_ARGS) PG_RETURN_TEXT_P(cstring_to_text(role_name)); } -PG_FUNCTION_INFO_V1(TEST_override_replication_role); -Datum -TEST_override_replication_role(PG_FUNCTION_ARGS) -{ - text *role_text = PG_GETARG_TEXT_PP(0); - char *role_name = text_to_cstring(role_text); - - if (pg_strcasecmp(role_name, "no_override") == 0 || - pg_strcasecmp(role_name, "") == 0) - { - role_override_present = false; - PG_RETURN_VOID(); - } - - if (pg_strcasecmp(role_name, "unspecified") == 0) - replication_role_override = UNSPECIFIED; - else if (pg_strcasecmp(role_name, "unavailable") == 0) - replication_role_override = UNAVAILABLE; - else if (pg_strcasecmp(role_name, "not_automatic_mode") == 0) - replication_role_override = NOT_AUTOMATIC_MODE; - else if (pg_strcasecmp(role_name, "source") == 0 || - pg_strcasecmp(role_name, "automatic_source") == 0) - replication_role_override = AUTOMATIC_SOURCE; - else if (pg_strcasecmp(role_name, "target") == 0 || - pg_strcasecmp(role_name, "automatic_target") == 0) - replication_role_override = AUTOMATIC_TARGET; - else - ereport(ERROR, - (errcode(ERRCODE_INVALID_PARAMETER_VALUE), - errmsg("invalid replication role: '%s'", role_name))); - - role_override_present = true; - PG_RETURN_VOID(); -} - void InsertIntoTable(const char *table_name, int64 ddl_end_time, int64 query_id, Jsonb *yb_data) From 11e70ad9a60c37968bcba304836d96cbfd820fd0 Mon Sep 17 00:00:00 2001 From: Sudhanshu Prajapati Date: Tue, 20 May 2025 03:02:34 +0530 Subject: [PATCH 122/146] [docs] Release notes for 2024.2.2.4-b2 (#27262) * release notes for 2024.2.2.4-b2 * reposition the content --- .../preview/releases/yba-releases/v2024.2.md | 28 ++++++++++++++ .../preview/releases/ybdb-releases/v2024.2.md | 37 +++++++++++++++++++ 2 files changed, 65 insertions(+) diff --git a/docs/content/preview/releases/yba-releases/v2024.2.md b/docs/content/preview/releases/yba-releases/v2024.2.md index c4f742e01493..c73e2798adf3 100644 --- a/docs/content/preview/releases/yba-releases/v2024.2.md +++ b/docs/content/preview/releases/yba-releases/v2024.2.md @@ -119,6 +119,34 @@ The use of [cron to start YB services](/stable/yugabyte-platform/upgrade/prepare +## v2024.2.2.4 - May 19, 2025 {#v2024.2.2.4} + +**Build:** `2024.2.2.4-b2` + +**Third-party licenses:** [YugabyteDB](https://downloads.yugabyte.com/releases/2024.2.2.4/yugabytedb-2024.2.2.4-b2-third-party-licenses.html), [YugabyteDB Anywhere](https://downloads.yugabyte.com/releases/2024.2.2.4/yugabytedb-anywhere-2024.2.2.4-b2-third-party-licenses.html) + +### Download + + + +### Change log + +
+ View the detailed changelog + +### Bug fixes + +* Allows specifying full URNs for Azure vnet/subnet to improve resource grouping. PLAT-17115 + +
+ ## v2024.2.2.3 - May 6, 2025 {#v2024.2.2.3} **Build:** `2024.2.2.3-b1` diff --git a/docs/content/preview/releases/ybdb-releases/v2024.2.md b/docs/content/preview/releases/ybdb-releases/v2024.2.md index 68ef37b00107..d6726d5d320d 100644 --- a/docs/content/preview/releases/ybdb-releases/v2024.2.md +++ b/docs/content/preview/releases/ybdb-releases/v2024.2.md @@ -202,6 +202,43 @@ YugabyteDB now includes the [PostgreSQL Anonymizer extension](/stable/explore/ys +## v2024.2.2.4 - May 19, 2025 {#v2024.2.2.4} + +**Build:** `2024.2.2.4-b2` + +**Third-party licenses:** [YugabyteDB](https://downloads.yugabyte.com/releases/2024.2.2.4/yugabytedb-2024.2.2.4-b2-third-party-licenses.html), [YugabyteDB Anywhere](https://downloads.yugabyte.com/releases/2024.2.2.4/yugabytedb-anywhere-2024.2.2.4-b2-third-party-licenses.html) + +### Downloads + + + +**Docker:** + +```sh +docker pull yugabytedb/yugabyte:2024.2.2.4-b2 +``` + +This is a YugabyteDB Anywhere only release, with no changes to YugabyteDB. + ## v2024.2.2.3 - May 6, 2025 {#v2024.2.2.3} **Build:** `2024.2.2.3-b1` From 817efbfadd0d2add141eb4d68ef01bf1e44add80 Mon Sep 17 00:00:00 2001 From: svarshney Date: Tue, 20 May 2025 10:35:47 +0530 Subject: [PATCH 123/146] [PLAT-17613] Added support for otel collector in node-agent Summary: Configure otel collector for universe using node-agent Test Plan: iTest pipeline Manually configured otel collector on the running universe. Reviewers: nsingh Reviewed By: nsingh Subscribers: agarg, skhilar Differential Revision: https://phorge.dev.yugabyte.com/D44028 --- managed/node-agent/app/server/rpc.go | 15 + .../app/task/install_otel_collector.go | 297 ++++++++++ managed/node-agent/proto/server.proto | 2 + managed/node-agent/proto/yb.proto | 16 + .../templates/server/otel-collector.service | 40 ++ .../tasks/payload/NodeAgentRpcPayload.java | 535 ++++++++++++++++++ .../subtasks/AnsibleConfigureServers.java | 338 +---------- .../tasks/subtasks/ManageOtelCollector.java | 40 +- .../yugabyte/yw/common/NodeAgentClient.java | 14 + 9 files changed, 981 insertions(+), 316 deletions(-) create mode 100644 managed/node-agent/app/task/install_otel_collector.go create mode 100644 managed/node-agent/resources/templates/server/otel-collector.service create mode 100644 managed/src/main/java/com/yugabyte/yw/commissioner/tasks/payload/NodeAgentRpcPayload.java diff --git a/managed/node-agent/app/server/rpc.go b/managed/node-agent/app/server/rpc.go index 8954cb8da985..069b1b93dce1 100644 --- a/managed/node-agent/app/server/rpc.go +++ b/managed/node-agent/app/server/rpc.go @@ -372,6 +372,21 @@ func (server *RPCServer) SubmitTask( res.TaskId = taskID return res, nil } + installOtelCollectorInput := req.GetInstallOtelCollectorInput() + if installOtelCollectorInput != nil { + installOtelCollectorHandler := task.NewInstallOtelCollectorHandler( + installOtelCollectorInput, + username, + ) + err := task.GetTaskManager().Submit(ctx, taskID, installOtelCollectorHandler) + if err != nil { + util.FileLogger(). + Errorf(ctx, "Error in running install otel collector - %s", err.Error()) + return res, status.Error(codes.Internal, err.Error()) + } + res.TaskId = taskID + return res, nil + } return res, status.Error(codes.Unimplemented, "Unknown task") } diff --git a/managed/node-agent/app/task/install_otel_collector.go b/managed/node-agent/app/task/install_otel_collector.go new file mode 100644 index 000000000000..b88bb2074d9a --- /dev/null +++ b/managed/node-agent/app/task/install_otel_collector.go @@ -0,0 +1,297 @@ +// Copyright (c) YugaByte, Inc. 
+ +package task + +import ( + "context" + "errors" + "fmt" + "io/fs" + "node-agent/app/task/module" + pb "node-agent/generated/service" + "node-agent/util" + "path/filepath" +) + +type InstallOtelCollector struct { + shellTask *ShellTask + param *pb.InstallOtelCollectorInput + username string + logOut util.Buffer +} + +func NewInstallOtelCollectorHandler( + param *pb.InstallOtelCollectorInput, + username string, +) *InstallOtelCollector { + return &InstallOtelCollector{ + param: param, + username: username, + logOut: util.NewBuffer(module.MaxBufferCapacity), + } +} + +// CurrentTaskStatus implements the AsyncTask method. +func (h *InstallOtelCollector) CurrentTaskStatus() *TaskStatus { + return &TaskStatus{ + Info: h.logOut, + ExitStatus: &ExitStatus{}, + } +} + +func (h *InstallOtelCollector) String() string { + return "Install otel collector Task" +} + +func (h *InstallOtelCollector) Handle(ctx context.Context) (*pb.DescribeTaskResponse, error) { + util.FileLogger().Infof(ctx, "Starting otel collector installation") + + // 1) figure out home dir + home := "" + if h.param.GetYbHomeDir() != "" { + home = h.param.GetYbHomeDir() + } else { + err := errors.New("ybHomeDir is required") + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + // 2) Put & setup the otel collector. + err := h.execOtelCollectorSetupSteps(ctx, home) + if err != nil { + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + // 3) Place the otel-collector.service at desired location. + otelCollectorServiceContext := map[string]any{ + "user_name": h.username, + "yb_home_dir": home, + } + + unit := "otel-collector.service" + // Copy otel-collector.service + err = module.CopyFile( + ctx, + otelCollectorServiceContext, + filepath.Join(ServerTemplateSubpath, unit), + filepath.Join(home, SystemdUnitPath, unit), + fs.FileMode(0755), + ) + + if err != nil { + return nil, err + } + + // 4) stop the systemd-unit if it's running. + stopCmd := module.StopSystemdUnit(h.username, unit) + h.logOut.WriteLine("Running otel-collector server phase: %s", stopCmd) + if _, err := module.RunShellCmd(ctx, h.username, "stop-otel-collector", stopCmd, h.logOut); err != nil { + return nil, err + } + + // 5) Configure the otel-collector service. + err = h.configureOtelCollector(ctx, home) + if err != nil { + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + // 6) Start and enable the otel-collector service. + startCmd := module.StartSystemdUnit(h.username, unit) + h.logOut.WriteLine("Running otel-collector phase: %s", startCmd) + if _, err = module.RunShellCmd(ctx, h.username, "start-otel-collector", startCmd, h.logOut); err != nil { + return nil, err + } + + return nil, nil +} + +// GetOtelCollectorSetupSteps returns the sequence of steps needed for configuring the otel collector. 
+func (h *InstallOtelCollector) execOtelCollectorSetupSteps( + ctx context.Context, + ybHome string, +) error { + pkgName := filepath.Base(h.param.GetOtelColPackagePath()) + otelCollectorPackagePath := filepath.Join(h.param.GetRemoteTmp(), pkgName) + otelCollectorDirectory := filepath.Join(ybHome, "otel-collector") + mountPoint := "" + if len(h.param.GetMountPoints()) > 0 { + mountPoint = h.param.GetMountPoints()[0] + } + + steps := []struct { + Desc string + Cmd string + }{ + { + "make-yb-otel-collector-dir", + fmt.Sprintf( + "mkdir -p %s && chmod 0755 %s", + otelCollectorDirectory, + otelCollectorDirectory, + ), + }, + { + "untar-otel-collector", + fmt.Sprintf( + "tar --no-same-owner -xzvf %s -C %s", + otelCollectorPackagePath, + otelCollectorDirectory, + ), + }, + { + "ensure 755 permission for otelcol-contrib", + fmt.Sprintf( + "chmod -R 755 %s", + filepath.Join(otelCollectorDirectory, "otelcol-contrib"), + ), + }, + { + "create OpenTelemetry collector logs directory", + fmt.Sprintf( + "mkdir -p %s && chmod 0755 %s", + filepath.Join(mountPoint, "otel-collector/logs"), + filepath.Join(mountPoint, "otel-collector/logs"), + ), + }, + { + "symlink OpenTelemetry collector logs directory", + fmt.Sprintf( + "rm -rf %s && ln -sf %s %s && chmod 0755 %s", + filepath.Join(ybHome, "otel-collector/logs"), + filepath.Join(mountPoint, "otel-collector/logs"), + filepath.Join(ybHome, "otel-collector/logs"), + filepath.Join(ybHome, "otel-collector/logs"), + ), + }, + { + "create OpenTelemetry collector persistent queues directory", + fmt.Sprintf( + "mkdir -p %s && chmod 0755 %s", + filepath.Join(mountPoint, "otel-collector/queue"), + filepath.Join(mountPoint, "otel-collector/queue"), + ), + }, + { + "symlink OpenTelemetry collector persistent queues directory", + fmt.Sprintf( + "rm -rf %s && ln -sf %s %s && chmod 0755 %s", + filepath.Join(ybHome, "otel-collector/queue"), + filepath.Join(mountPoint, "otel-collector/queue"), + filepath.Join(ybHome, "otel-collector/queue"), + filepath.Join(ybHome, "otel-collector/queue"), + ), + }, + { + "delete-otel-collector-package", + fmt.Sprintf("rm -rf %s", otelCollectorPackagePath), + }, + } + + if err := module.RunShellSteps(ctx, h.username, steps, h.logOut); err != nil { + return err + } + return nil +} + +func (h *InstallOtelCollector) configureOtelCollector(ctx context.Context, ybHome string) error { + otelCollectorConfigFile := filepath.Join(ybHome, "otel-collector", "config.yml") + otelColLogCleanupEnv := filepath.Join(ybHome, "otel-collector", "log_cleanup_env") + awsCredsFile := filepath.Join(ybHome, ".aws", "credentials") + gcpCredsFile := filepath.Join(ybHome, "otel-collector", "gcp_creds") + + steps := []struct { + Desc string + Cmd string + }{ + { + "remove-otel-collector-config-file-if-exists", + fmt.Sprintf( + "rm -rf %s", + otelCollectorConfigFile, + ), + }, + { + "place-new-otel-collector-config-file", + fmt.Sprintf( + "mv %s %s", + h.param.GetOtelColConfigFile(), + otelCollectorConfigFile, + ), + }, + { + "create-aws-creds-dir", + fmt.Sprintf("mkdir -p %s/.aws", ybHome), + }, + { + "remove-otel-collector-aws-block-if-exists", + fmt.Sprintf(`if [ -f %s ]; then \ + awk '/# BEGIN YB MANAGED BLOCK - OTEL COLLECTOR CREDENTIALS/ {inblock=1} \ + /# END YB MANAGED BLOCK - OTEL COLLECTOR CREDENTIALS/ {inblock=0; next} \ + !inblock' %s > %s.tmp && mv %s.tmp %s; fi`, + awsCredsFile, + awsCredsFile, + awsCredsFile, + awsCredsFile, + awsCredsFile, + ), + }, + { + "remove-gcp-credentials", + fmt.Sprintf("rm -rf %s", gcpCredsFile), + }, + { + 
"clean-up-otel-log-cleanup-env", + fmt.Sprintf("rm -rf %s", otelColLogCleanupEnv), + }, + { + "write-otel-log-cleanup-env", + fmt.Sprintf( + `echo "preserve_audit_logs=true" > %s && echo "ycql_audit_log_level=%s" >> %s`, + otelColLogCleanupEnv, + h.param.GetYcqlAuditLogLevel(), + otelColLogCleanupEnv, + ), + }, + { + "set-permission-otel-log-cleanup-env", + fmt.Sprintf(`chmod 0440 %s`, otelColLogCleanupEnv), + }, + } + + if h.param.GetOtelColAwsAccessKey() != "" && h.param.GetOtelColAwsSecretKey() != "" { + steps = append(steps, struct { + Desc string + Cmd string + }{ + "append-otel-collector-creds", + fmt.Sprintf( + `echo '# BEGIN YB MANAGED BLOCK - OTEL COLLECTOR CREDENTIALS + [otel-collector] + aws_access_key_id = %s + aws_secret_access_key = %s + # END YB MANAGED BLOCK - OTEL COLLECTOR CREDENTIALS' >> %s && chmod 440 %s`, + h.param.GetOtelColAwsAccessKey(), + h.param.GetOtelColAwsSecretKey(), + awsCredsFile, + awsCredsFile, + ), + }) + } + + if h.param.GetOtelColGcpCredsFile() != "" { + steps = append(steps, struct { + Desc string + Cmd string + }{ + "place-new-gcp-creds-file", + fmt.Sprintf("mv %s %s", h.param.GetOtelColGcpCredsFile(), gcpCredsFile), + }) + } + + if err := module.RunShellSteps(ctx, h.username, steps, h.logOut); err != nil { + return err + } + return nil +} diff --git a/managed/node-agent/proto/server.proto b/managed/node-agent/proto/server.proto index 13d0fa9ca674..7ba5791df3c2 100644 --- a/managed/node-agent/proto/server.proto +++ b/managed/node-agent/proto/server.proto @@ -60,6 +60,7 @@ message SubmitTaskRequest { ServerGFlagsInput serverGFlagsInput = 8; InstallYbcInput installYbcInput = 9; ConfigureServerInput configureServerInput = 10; + InstallOtelCollectorInput installOtelCollectorInput = 11; } } @@ -83,6 +84,7 @@ message DescribeTaskResponse { ServerGFlagsOutput serverGFlagsOutput = 8; InstallYbcOutput installYbcOutput = 9; ConfigureServerOutput configureServerOutput = 10; + InstallOtelCollectorOutput installOtelCollectorOutput = 11; } } diff --git a/managed/node-agent/proto/yb.proto b/managed/node-agent/proto/yb.proto index 6990f49d1259..15447d8c4f00 100644 --- a/managed/node-agent/proto/yb.proto +++ b/managed/node-agent/proto/yb.proto @@ -150,3 +150,19 @@ message ConfigureServerInput { message ConfigureServerOutput { int32 pid = 1; } + +message InstallOtelCollectorInput { + string ybHomeDir = 1; + repeated string mountPoints = 2; + string otelColPackagePath = 3; + string ycqlAuditLogLevel = 4; + string otelColConfigFile = 5; + string otelColAwsAccessKey = 6; + string otelColAwsSecretKey = 7; + string otelColGcpCredsFile = 8; + string remoteTmp = 9; +} + +message InstallOtelCollectorOutput { + int32 pid = 1; +} diff --git a/managed/node-agent/resources/templates/server/otel-collector.service b/managed/node-agent/resources/templates/server/otel-collector.service new file mode 100644 index 000000000000..a2634466300a --- /dev/null +++ b/managed/node-agent/resources/templates/server/otel-collector.service @@ -0,0 +1,40 @@ +[Unit] +Description=OpenTelemetry Collector +After=network.target network-online.target multi-user.target +# Disable restart limits, using RestartSec to rate limit restarts +StartLimitInterval=0 + +[Path] +PathExists={{yb_home_dir}}/otel-collector/otelcol-contrib +PathExists={{yb_home_dir}}/otel-collector/config.yml + +[Service] +{% if ansible_os_family == 'RedHat' and (ansible_distribution_major_version == '7' or (ansible_distribution == 'Amazon' and ansible_distribution_major_version == '2')) %} +User={{ user_name }} +Group={{ user_name }} 
+{% endif %} +# Start +ExecStart={{yb_home_dir}}/otel-collector/otelcol-contrib \ + --config=file:{{yb_home_dir}}/otel-collector/config.yml +Restart=always +RestartSec=5 +# Stop -> SIGTERM - 10s - SIGKILL (if not stopped) +KillMode=process +TimeoutStopFailureMode=terminate +KillSignal=SIGTERM +TimeoutStopSec=10 +FinalKillSignal=SIGKILL +# Logs +StandardOutput=syslog +StandardError=syslog +# ulimit +LimitCORE=infinity +LimitNOFILE=1048576 +LimitNPROC=12000 + +Environment="AWS_PROFILE=otel-collector" +Environment="GOOGLE_APPLICATION_CREDENTIALS={{yb_home_dir}}/otel-collector/gcp_creds" +Environment="HOME={{yb_home_dir}}" + +[Install] +WantedBy=default.target diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/payload/NodeAgentRpcPayload.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/payload/NodeAgentRpcPayload.java new file mode 100644 index 000000000000..79c37c4e6127 --- /dev/null +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/payload/NodeAgentRpcPayload.java @@ -0,0 +1,535 @@ +// Copyright (c) YugaByte, Inc. + +package com.yugabyte.yw.commissioner.tasks.payload; + +import com.typesafe.config.Config; +import com.yugabyte.yw.cloud.PublicCloudConstants.Architecture; +import com.yugabyte.yw.cloud.gcp.GCPCloudImpl; +import com.yugabyte.yw.commissioner.Common; +import com.yugabyte.yw.commissioner.tasks.UniverseTaskBase; +import com.yugabyte.yw.commissioner.tasks.UniverseTaskBase.ServerType; +import com.yugabyte.yw.commissioner.tasks.params.NodeTaskParams; +import com.yugabyte.yw.commissioner.tasks.subtasks.AnsibleConfigureServers; +import com.yugabyte.yw.commissioner.tasks.subtasks.ManageOtelCollector; +import com.yugabyte.yw.common.FileHelperService; +import com.yugabyte.yw.common.NodeAgentClient; +import com.yugabyte.yw.common.NodeManager; +import com.yugabyte.yw.common.NodeUniverseManager; +import com.yugabyte.yw.common.ReleaseContainer; +import com.yugabyte.yw.common.ReleaseManager; +import com.yugabyte.yw.common.Util; +import com.yugabyte.yw.common.audit.otel.OtelCollectorConfigGenerator; +import com.yugabyte.yw.common.config.GlobalConfKeys; +import com.yugabyte.yw.common.config.ProviderConfKeys; +import com.yugabyte.yw.common.config.RuntimeConfGetter; +import com.yugabyte.yw.common.config.UniverseConfKeys; +import com.yugabyte.yw.common.gflags.GFlagsUtil; +import com.yugabyte.yw.common.utils.FileUtils; +import com.yugabyte.yw.common.utils.Pair; +import com.yugabyte.yw.forms.UniverseDefinitionTaskParams.Cluster; +import com.yugabyte.yw.forms.UniverseDefinitionTaskParams.UserIntent; +import com.yugabyte.yw.models.NodeAgent; +import com.yugabyte.yw.models.Provider; +import com.yugabyte.yw.models.Region; +import com.yugabyte.yw.models.TelemetryProvider; +import com.yugabyte.yw.models.Universe; +import com.yugabyte.yw.models.helpers.CloudInfoInterface; +import com.yugabyte.yw.models.helpers.NodeDetails; +import com.yugabyte.yw.models.helpers.TelemetryProviderService; +import com.yugabyte.yw.models.helpers.audit.AuditLogConfig; +import com.yugabyte.yw.models.helpers.audit.UniverseLogsExporterConfig; +import com.yugabyte.yw.models.helpers.audit.YCQLAuditConfig; +import com.yugabyte.yw.models.helpers.telemetry.AWSCloudWatchConfig; +import com.yugabyte.yw.models.helpers.telemetry.GCPCloudMonitoringConfig; +import com.yugabyte.yw.nodeagent.ConfigureServerInput; +import com.yugabyte.yw.nodeagent.InstallOtelCollectorInput; +import com.yugabyte.yw.nodeagent.InstallSoftwareInput; +import com.yugabyte.yw.nodeagent.InstallYbcInput; +import 
com.yugabyte.yw.nodeagent.ServerGFlagsInput; +import java.io.File; +import java.nio.file.Path; +import java.nio.file.Paths; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.UUID; +import java.util.stream.Collectors; +import javax.inject.Inject; +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.collections4.CollectionUtils; +import org.apache.commons.lang3.StringUtils; + +@Slf4j +public class NodeAgentRpcPayload { + public static final String DEFAULT_CONFIGURE_USER = "yugabyte"; + private final ReleaseManager releaseManager; + private final Config appConfig; + private final OtelCollectorConfigGenerator otelCollectorConfigGenerator; + private final TelemetryProviderService telemetryProviderService; + private final FileHelperService fileHelperService; + private final RuntimeConfGetter confGetter; + private final NodeAgentClient nodeAgentClient; + private final NodeUniverseManager nodeUniverseManager; + private final NodeManager nodeManager; + + @Inject + public NodeAgentRpcPayload( + ReleaseManager releaseManager, + Config appConfig, + RuntimeConfGetter confGetter, + OtelCollectorConfigGenerator otelCollectorConfigGenerator, + TelemetryProviderService telemetryProviderService, + FileHelperService fileHelperService, + NodeAgentClient nodeAgentClient, + NodeUniverseManager nodeUniverseManager, + NodeManager nodeManager) { + this.releaseManager = releaseManager; + this.appConfig = appConfig; + this.otelCollectorConfigGenerator = otelCollectorConfigGenerator; + this.telemetryProviderService = telemetryProviderService; + this.fileHelperService = fileHelperService; + this.confGetter = confGetter; + this.nodeAgentClient = nodeAgentClient; + this.nodeUniverseManager = nodeUniverseManager; + this.nodeManager = nodeManager; + } + + private List getMountPoints(NodeTaskParams params) { + if (StringUtils.isNotBlank(params.deviceInfo.mountPoints)) { + return Arrays.stream(params.deviceInfo.mountPoints.split("\\s*,\\s*")) + .map(String::trim) + .filter(s -> !s.isEmpty()) + .collect(Collectors.toList()); + } else if (params.deviceInfo.numVolumes != null + && params.getProvider().getCloudCode() != Common.CloudType.onprem) { + List mountPoints = new ArrayList<>(); + for (int i = 0; i < params.deviceInfo.numVolumes; i++) { + mountPoints.add("/mnt/d" + i); + } + return mountPoints; + } + return Collections.emptyList(); + } + + private String getYbPackage(ReleaseContainer release, Architecture arch, Region region) { + String ybServerPackage = null; + if (release != null) { + if (arch != null) { + ybServerPackage = release.getFilePath(arch); + } else { + ybServerPackage = release.getFilePath(region); + } + } + + return ybServerPackage; + } + + private String getThirdpartyPackagePath() { + String packagePath = appConfig.getString("yb.thirdparty.packagePath"); + if (packagePath != null && !packagePath.isEmpty()) { + File thirdpartyPackagePath = new File(packagePath); + if (thirdpartyPackagePath.exists() && thirdpartyPackagePath.isDirectory()) { + return packagePath; + } + } + + return null; + } + + private String getOtelCollectorPackagePath(Architecture arch) { + String architecture = ""; + if (arch.equals(Architecture.x86_64)) { + architecture = "amd64"; + } else { + architecture = "arm64"; + } + return String.format( + "otelcol-contrib_%s_%s_%s.tar.gz", + ManageOtelCollector.OtelCollectorVersion, + ManageOtelCollector.OtelCollectorPlatform, + architecture); + } + + private 
InstallSoftwareInput.Builder fillYbReleaseMetadata( + Universe universe, + Provider provider, + NodeDetails node, + String ybSoftwareVersion, + Region region, + Architecture arch, + InstallSoftwareInput.Builder installSoftwareInputBuilder, + NodeAgent nodeAgent, + String customTmpDirectory) { + Map envConfig = CloudInfoInterface.fetchEnvVars(provider); + ReleaseContainer release = releaseManager.getReleaseByVersion(ybSoftwareVersion); + String ybServerPackage = getYbPackage(release, arch, region); + installSoftwareInputBuilder.setYbPackage(ybServerPackage); + if (release.isS3Download(ybServerPackage)) { + installSoftwareInputBuilder.setS3RemoteDownload(true); + String accessKey = envConfig.get("AWS_ACCESS_KEY_ID"); + if (StringUtils.isEmpty(accessKey)) { + // TODO: This will be removed once iTest moves to new release API. + accessKey = release.getAwsAccessKey(arch); + } + if (StringUtils.isEmpty(accessKey)) { + accessKey = System.getenv("AWS_ACCESS_KEY_ID"); + } + if (StringUtils.isNotBlank(accessKey)) { + installSoftwareInputBuilder.setAwsAccessKey(accessKey); + } + String secretKey = envConfig.get("AWS_SECRET_ACCESS_KEY"); + if (StringUtils.isEmpty(secretKey)) { + secretKey = release.getAwsSecretKey(arch); + } + if (StringUtils.isEmpty(secretKey)) { + secretKey = System.getenv("AWS_SECRET_ACCESS_KEY"); + } + if (StringUtils.isNotBlank(secretKey)) { + installSoftwareInputBuilder.setAwsSecretKey(secretKey); + } + } else if (release.isGcsDownload(ybServerPackage)) { + installSoftwareInputBuilder.setGcsRemoteDownload(true); + // Upload the Credential json to the remote host. + nodeAgentClient.uploadFile( + nodeAgent, + envConfig.get(GCPCloudImpl.GCE_PROJECT_PROPERTY), + customTmpDirectory + + "/" + + Paths.get(envConfig.get(GCPCloudImpl.GCE_PROJECT_PROPERTY)) + .getFileName() + .toString(), + DEFAULT_CONFIGURE_USER, + 0, + null); + installSoftwareInputBuilder.setGcsCredentialsJson( + customTmpDirectory + + "/" + + Paths.get(envConfig.get(GCPCloudImpl.GCE_PROJECT_PROPERTY)) + .getFileName() + .toString()); + } else if (release.isHttpDownload(ybServerPackage)) { + installSoftwareInputBuilder.setHttpRemoteDownload(true); + if (StringUtils.isNotBlank(release.getHttpChecksum())) { + installSoftwareInputBuilder.setHttpPackageChecksum(release.getHttpChecksum().toLowerCase()); + } + } else if (release.hasLocalRelease()) { + // Upload the release to the node. 
+ nodeAgentClient.uploadFile( + nodeAgent, + ybServerPackage, + customTmpDirectory + "/" + Paths.get(ybServerPackage).getFileName().toString(), + DEFAULT_CONFIGURE_USER, + 0, + null); + installSoftwareInputBuilder.setYbPackage( + customTmpDirectory + "/" + Paths.get(ybServerPackage).getFileName().toString()); + } + if (!node.isInPlacement(universe.getUniverseDetails().getPrimaryCluster().uuid)) { + // For RR we don't setup master + installSoftwareInputBuilder.addSymLinkFolders("tserver"); + } else { + installSoftwareInputBuilder.addSymLinkFolders("tserver"); + installSoftwareInputBuilder.addSymLinkFolders("master"); + } + installSoftwareInputBuilder.setRemoteTmp(customTmpDirectory); + installSoftwareInputBuilder.setYbHomeDir(provider.getYbHome()); + return installSoftwareInputBuilder; + } + + public InstallSoftwareInput setupInstallSoftwareBits( + Universe universe, NodeDetails nodeDetails, NodeTaskParams taskParams, NodeAgent nodeAgent) { + InstallSoftwareInput.Builder installSoftwareInputBuilder = InstallSoftwareInput.newBuilder(); + String ybSoftwareVersion = ""; + if (taskParams instanceof AnsibleConfigureServers.Params) { + AnsibleConfigureServers.Params params = (AnsibleConfigureServers.Params) taskParams; + ybSoftwareVersion = params.ybSoftwareVersion; + } + Cluster cluster = universe.getCluster(nodeDetails.placementUuid); + Provider provider = Provider.getOrBadRequest(UUID.fromString(cluster.userIntent.provider)); + String customTmpDirectory = + confGetter.getConfForScope(provider, ProviderConfKeys.remoteTmpDirectory); + installSoftwareInputBuilder = + fillYbReleaseMetadata( + universe, + provider, + nodeDetails, + ybSoftwareVersion, + taskParams.getRegion(), + universe.getUniverseDetails().arch, + installSoftwareInputBuilder, + nodeAgent, + customTmpDirectory); + + return installSoftwareInputBuilder.build(); + } + + public InstallYbcInput setupInstallYbcSoftwareBits( + Universe universe, NodeDetails nodeDetails, NodeTaskParams taskParams, NodeAgent nodeAgent) { + InstallYbcInput.Builder installYbcInputBuilder = InstallYbcInput.newBuilder(); + String ybSoftwareVersion = ""; + if (taskParams instanceof AnsibleConfigureServers.Params) { + AnsibleConfigureServers.Params params = (AnsibleConfigureServers.Params) taskParams; + ybSoftwareVersion = params.ybSoftwareVersion; + } + ReleaseContainer release = releaseManager.getReleaseByVersion(ybSoftwareVersion); + String ybServerPackage = + getYbPackage(release, universe.getUniverseDetails().arch, taskParams.getRegion()); + Cluster cluster = universe.getCluster(nodeDetails.placementUuid); + Provider provider = Provider.getOrBadRequest(UUID.fromString(cluster.userIntent.provider)); + String customTmpDirectory = + confGetter.getConfForScope(provider, ProviderConfKeys.remoteTmpDirectory); + String ybcPackage = null; + Pair ybcPackageDetails = + Util.getYbcPackageDetailsFromYbServerPackage(ybServerPackage); + String stableYbc = confGetter.getGlobalConf(GlobalConfKeys.ybcStableVersion); + ReleaseManager.ReleaseMetadata releaseMetadata = + releaseManager.getYbcReleaseByVersion( + stableYbc, ybcPackageDetails.getFirst(), ybcPackageDetails.getSecond()); + if (releaseMetadata == null) { + throw new RuntimeException( + String.format("Ybc package metadata: %s cannot be empty with ybc enabled", stableYbc)); + } + if (universe.getUniverseDetails().arch != null) { + ybcPackage = releaseMetadata.getFilePath(universe.getUniverseDetails().arch); + } else { + // Fallback to region in case arch is not present + ybcPackage = 
releaseMetadata.getFilePath(taskParams.getRegion()); + } + if (StringUtils.isBlank(ybcPackage)) { + throw new RuntimeException("Ybc package cannot be empty with ybc enabled"); + } + installYbcInputBuilder.setYbcPackage(ybcPackage); + nodeAgentClient.uploadFile( + nodeAgent, + ybcPackage, + customTmpDirectory + "/" + Paths.get(ybcPackage).getFileName().toString(), + DEFAULT_CONFIGURE_USER, + 0, + null); + installYbcInputBuilder.setRemoteTmp(customTmpDirectory); + installYbcInputBuilder.setYbHomeDir(provider.getYbHome()); + installYbcInputBuilder.addAllMountPoints(getMountPoints(taskParams)); + return installYbcInputBuilder.build(); + } + + public ConfigureServerInput setUpConfigureServerBits( + Universe universe, NodeDetails nodeDetails, NodeTaskParams taskParams, NodeAgent nodeAgent) { + ConfigureServerInput.Builder configureServerInputBuilder = ConfigureServerInput.newBuilder(); + Cluster cluster = universe.getCluster(nodeDetails.placementUuid); + Provider provider = Provider.getOrBadRequest(UUID.fromString(cluster.userIntent.provider)); + String customTmpDirectory = + confGetter.getConfForScope(provider, ProviderConfKeys.remoteTmpDirectory); + + configureServerInputBuilder.setRemoteTmp(customTmpDirectory); + configureServerInputBuilder.setYbHomeDir(provider.getYbHome()); + configureServerInputBuilder.addAllMountPoints(getMountPoints(taskParams)); + if (!nodeDetails.isInPlacement(universe.getUniverseDetails().getPrimaryCluster().uuid)) { + // For RR we don't setup master + configureServerInputBuilder.addProcesses("tserver"); + } else { + // For dedicated nodes, both are set up. + configureServerInputBuilder.addProcesses("master"); + configureServerInputBuilder.addProcesses("tserver"); + } + + Integer num_cores_to_keep = + confGetter.getConfForScope(universe, UniverseConfKeys.numCoresToKeep); + configureServerInputBuilder.setNumCoresToKeep(num_cores_to_keep); + return configureServerInputBuilder.build(); + } + + public InstallOtelCollectorInput setupInstallOtelCollectorBits( + Universe universe, NodeDetails nodeDetails, NodeTaskParams taskParams, NodeAgent nodeAgent) { + InstallOtelCollectorInput.Builder installOtelCollectorInputBuilder = + InstallOtelCollectorInput.newBuilder(); + Cluster cluster = universe.getCluster(nodeDetails.placementUuid); + Provider provider = Provider.getOrBadRequest(UUID.fromString(cluster.userIntent.provider)); + String customTmpDirectory = + confGetter.getConfForScope(provider, ProviderConfKeys.remoteTmpDirectory); + AuditLogConfig config = null; + if (taskParams instanceof ManageOtelCollector.Params) { + ManageOtelCollector.Params params = (ManageOtelCollector.Params) taskParams; + config = params.auditLogConfig; + } else if (taskParams instanceof AnsibleConfigureServers.Params) { + AnsibleConfigureServers.Params params = (AnsibleConfigureServers.Params) taskParams; + config = params.auditLogConfig; + } + Map gflags = + GFlagsUtil.getGFlagsForAZ( + taskParams.azUuid, + UniverseTaskBase.ServerType.TSERVER, + cluster, + universe.getUniverseDetails().clusters); + + installOtelCollectorInputBuilder.setRemoteTmp(customTmpDirectory); + installOtelCollectorInputBuilder.setYbHomeDir(provider.getYbHome()); + String otelCollectorPackagePath = + getThirdpartyPackagePath() + + "/" + + getOtelCollectorPackagePath(universe.getUniverseDetails().arch); + nodeAgentClient.uploadFile( + nodeAgent, + otelCollectorPackagePath, + customTmpDirectory + "/" + getOtelCollectorPackagePath(universe.getUniverseDetails().arch), + DEFAULT_CONFIGURE_USER, + 0, + null); + 
installOtelCollectorInputBuilder.setOtelColPackagePath( + getOtelCollectorPackagePath(universe.getUniverseDetails().arch)); + String ycqlAuditLogLevel = "NONE"; + if (config.getYcqlAuditConfig() != null) { + YCQLAuditConfig.YCQLAuditLogLevel logLevel = + config.getYcqlAuditConfig().getLogLevel() != null + ? config.getYcqlAuditConfig().getLogLevel() + : YCQLAuditConfig.YCQLAuditLogLevel.ERROR; + ycqlAuditLogLevel = logLevel.name(); + } + installOtelCollectorInputBuilder.setYcqlAuditLogLevel(ycqlAuditLogLevel); + installOtelCollectorInputBuilder.addAllMountPoints(getMountPoints(taskParams)); + + if (config.isExportActive() + && CollectionUtils.isNotEmpty(config.getUniverseLogsExporterConfig())) { + String otelCollectorConfigFile = + otelCollectorConfigGenerator + .generateConfigFile( + taskParams, + provider, + universe.getUniverseDetails().getPrimaryCluster().userIntent, + config, + GFlagsUtil.getLogLinePrefix(gflags.get(GFlagsUtil.YSQL_PG_CONF_CSV)), + NodeManager.getOtelColMetricsPort(taskParams)) + .toAbsolutePath() + .toString(); + nodeAgentClient.uploadFile( + nodeAgent, + otelCollectorConfigFile, + customTmpDirectory + "/" + Paths.get(otelCollectorConfigFile).getFileName().toString(), + DEFAULT_CONFIGURE_USER, + 0, + null); + installOtelCollectorInputBuilder.setOtelColConfigFile( + customTmpDirectory + "/" + Paths.get(otelCollectorConfigFile).getFileName().toString()); + + for (UniverseLogsExporterConfig logsExporterConfig : config.getUniverseLogsExporterConfig()) { + TelemetryProvider telemetryProvider = + telemetryProviderService.get(logsExporterConfig.getExporterUuid()); + switch (telemetryProvider.getConfig().getType()) { + case AWS_CLOUDWATCH -> { + AWSCloudWatchConfig awsCloudWatchConfig = + (AWSCloudWatchConfig) telemetryProvider.getConfig(); + if (StringUtils.isNotEmpty(awsCloudWatchConfig.getAccessKey())) { + installOtelCollectorInputBuilder.setOtelColAwsAccessKey( + awsCloudWatchConfig.getAccessKey()); + } + if (StringUtils.isNotEmpty(awsCloudWatchConfig.getSecretKey())) { + installOtelCollectorInputBuilder.setOtelColAwsSecretKey( + awsCloudWatchConfig.getSecretKey()); + } + } + case GCP_CLOUD_MONITORING -> { + GCPCloudMonitoringConfig gcpCloudMonitoringConfig = + (GCPCloudMonitoringConfig) telemetryProvider.getConfig(); + if (gcpCloudMonitoringConfig.getCredentials() != null) { + Path path = + fileHelperService.createTempFile( + "otel_collector_gcp_creds_" + + taskParams.getUniverseUUID() + + "_" + + taskParams.nodeUuid, + ".json"); + String filePath = path.toAbsolutePath().toString(); + FileUtils.writeJsonFile(filePath, gcpCloudMonitoringConfig.getCredentials()); + nodeAgentClient.uploadFile( + nodeAgent, + filePath, + customTmpDirectory + "/" + Paths.get(filePath).getFileName().toString(), + DEFAULT_CONFIGURE_USER, + 0, + null); + installOtelCollectorInputBuilder.setOtelColGcpCredsFile( + customTmpDirectory + "/" + Paths.get(filePath).getFileName().toString()); + } + } + } + } + } + + return installOtelCollectorInputBuilder.build(); + } + + public void runServerGFlagsWithNodeAgent( + NodeAgent nodeAgent, Universe universe, NodeDetails nodeDetails, NodeTaskParams taskParams) { + String processType = taskParams.getProperty("processType"); + if (!processType.equals(ServerType.CONTROLLER.toString()) + && !processType.equals(ServerType.MASTER.toString()) + && !processType.equals(ServerType.TSERVER.toString())) { + throw new RuntimeException("Invalid processType: " + processType); + } + runServerGFlagsWithNodeAgent(nodeAgent, universe, nodeDetails, processType, taskParams); + } 
+ + public void runServerGFlagsWithNodeAgent( + NodeAgent nodeAgent, + Universe universe, + NodeDetails nodeDetails, + String processType, + NodeTaskParams nodeTaskParams) { + String serverName = processType.toLowerCase(); + String serverHome = + Paths.get(nodeUniverseManager.getYbHomeDir(nodeDetails, universe), serverName).toString(); + boolean useHostname = + universe.getUniverseDetails().getPrimaryCluster().userIntent.useHostname + || !Util.isIpAddress(nodeDetails.cloudInfo.private_ip); + AnsibleConfigureServers.Params taskParams = null; + if (nodeTaskParams instanceof AnsibleConfigureServers.Params) { + taskParams = (AnsibleConfigureServers.Params) nodeTaskParams; + } + UserIntent userIntent = nodeManager.getUserIntentFromParams(universe, taskParams); + ServerGFlagsInput.Builder builder = + ServerGFlagsInput.newBuilder() + .setServerHome(serverHome) + .setServerName(serverHome) + .setServerName(serverName); + Map gflags = + new HashMap<>( + GFlagsUtil.getAllDefaultGFlags( + taskParams, universe, userIntent, useHostname, appConfig, confGetter)); + if (processType.equals(ServerType.CONTROLLER.toString())) { + // TODO Is the check taskParam.isEnableYbc() required here? + Map ybcFlags = + GFlagsUtil.getYbcFlags(universe, taskParams, confGetter, appConfig, taskParams.ybcGflags); + // Override for existing keys as this has higher precedence. + gflags.putAll(ybcFlags); + } else if (processType.equals(ServerType.MASTER.toString()) + || processType.equals(ServerType.TSERVER.toString())) { + // Override for existing keys as this has higher precedence. + gflags.putAll(taskParams.gflags); + nodeManager.processGFlags(appConfig, universe, nodeDetails, taskParams, gflags, useHostname); + if (!appConfig.getBoolean("yb.cloud.enabled")) { + if (gflags.containsKey(GFlagsUtil.YSQL_HBA_CONF_CSV)) { + String hbaConfValue = gflags.get(GFlagsUtil.YSQL_HBA_CONF_CSV); + if (hbaConfValue.contains(GFlagsUtil.JWT_AUTH)) { + Path tmpDirectoryPath = + FileUtils.getOrCreateTmpDirectory( + confGetter.getGlobalConf(GlobalConfKeys.ybTmpDirectoryPath)); + Path localGflagFilePath = + tmpDirectoryPath.resolve(nodeDetails.getNodeUuid().toString()); + String providerUUID = userIntent.provider; + String ybHomeDir = GFlagsUtil.getYbHomeDir(providerUUID); + String remoteGFlagPath = ybHomeDir + GFlagsUtil.GFLAG_REMOTE_FILES_PATH; + nodeAgentClient.uploadFile(nodeAgent, localGflagFilePath.toString(), remoteGFlagPath); + } + } + } + if (taskParams.resetMasterState) { + builder.setResetMasterState(true); + } + } + ServerGFlagsInput input = builder.putAllGflags(gflags).build(); + log.debug("Setting gflags using node agent: {}", input.getGflagsMap()); + nodeAgentClient.runServerGFlags(nodeAgent, input, DEFAULT_CONFIGURE_USER); + } +} diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java index b82a24e3da39..c4dd0874c9b2 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java @@ -13,35 +13,21 @@ import static com.yugabyte.yw.common.metrics.MetricService.buildMetricTemplate; import com.fasterxml.jackson.annotation.JsonIgnore; -import com.yugabyte.yw.cloud.PublicCloudConstants.Architecture; -import com.yugabyte.yw.cloud.gcp.GCPCloudImpl; import com.yugabyte.yw.commissioner.BaseTaskDependencies; -import com.yugabyte.yw.commissioner.Common; 
import com.yugabyte.yw.commissioner.tasks.params.NodeTaskParams; +import com.yugabyte.yw.commissioner.tasks.payload.NodeAgentRpcPayload; +import com.yugabyte.yw.commissioner.tasks.subtasks.AnsibleSetupServer.Params; import com.yugabyte.yw.common.CallHomeManager.CollectionLevel; import com.yugabyte.yw.common.NodeManager; import com.yugabyte.yw.common.NodeManager.CertRotateAction; -import com.yugabyte.yw.common.ReleaseContainer; -import com.yugabyte.yw.common.ReleaseManager; import com.yugabyte.yw.common.ShellResponse; -import com.yugabyte.yw.common.Util; import com.yugabyte.yw.common.config.GlobalConfKeys; -import com.yugabyte.yw.common.config.ProviderConfKeys; -import com.yugabyte.yw.common.config.UniverseConfKeys; -import com.yugabyte.yw.common.gflags.GFlagsUtil; -import com.yugabyte.yw.common.utils.FileUtils; -import com.yugabyte.yw.common.utils.Pair; import com.yugabyte.yw.forms.CertsRotateParams.CertRotationType; -import com.yugabyte.yw.forms.UniverseDefinitionTaskParams.Cluster; -import com.yugabyte.yw.forms.UniverseDefinitionTaskParams.UserIntent; import com.yugabyte.yw.forms.UpgradeTaskParams.UpgradeTaskType; import com.yugabyte.yw.forms.VMImageUpgradeParams.VmUpgradeTaskType; import com.yugabyte.yw.models.NodeAgent; -import com.yugabyte.yw.models.Provider; -import com.yugabyte.yw.models.Region; import com.yugabyte.yw.models.Universe; import com.yugabyte.yw.models.Universe.UniverseUpdater; -import com.yugabyte.yw.models.helpers.CloudInfoInterface; import com.yugabyte.yw.models.helpers.NodeDetails; import com.yugabyte.yw.models.helpers.NodeDetails.MasterState; import com.yugabyte.yw.models.helpers.NodeDetails.NodeState; @@ -49,24 +35,12 @@ import com.yugabyte.yw.models.helpers.PlatformMetrics; import com.yugabyte.yw.models.helpers.UpgradeDetails.YsqlMajorVersionUpgradeState; import com.yugabyte.yw.models.helpers.audit.AuditLogConfig; -import com.yugabyte.yw.nodeagent.ConfigureServerInput; -import com.yugabyte.yw.nodeagent.InstallSoftwareInput; -import com.yugabyte.yw.nodeagent.InstallYbcInput; -import com.yugabyte.yw.nodeagent.ServerGFlagsInput; -import java.nio.file.Path; -import java.nio.file.Paths; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collections; import java.util.HashMap; import java.util.HashSet; -import java.util.List; import java.util.Map; import java.util.Optional; import java.util.Set; -import java.util.UUID; import java.util.function.Supplier; -import java.util.stream.Collectors; import javax.annotation.Nullable; import javax.inject.Inject; import lombok.extern.slf4j.Slf4j; @@ -75,221 +49,13 @@ @Slf4j public class AnsibleConfigureServers extends NodeTaskBase { private static final String DEFAULT_CONFIGURE_USER = "yugabyte"; - private final ReleaseManager releaseManager; + private final NodeAgentRpcPayload nodeAgentRpcPayload; @Inject protected AnsibleConfigureServers( - BaseTaskDependencies baseTaskDependencies, ReleaseManager releaseManager) { + BaseTaskDependencies baseTaskDependencies, NodeAgentRpcPayload nodeAgentRpcPayload) { super(baseTaskDependencies); - this.releaseManager = releaseManager; - } - - private List getMountPoints() { - if (StringUtils.isNotBlank(taskParams().deviceInfo.mountPoints)) { - return Arrays.stream(taskParams().deviceInfo.mountPoints.split("\\s*,\\s*")) - .map(String::trim) - .filter(s -> !s.isEmpty()) - .collect(Collectors.toList()); - } else if (taskParams().deviceInfo.numVolumes != null - && taskParams().getProvider().getCloudCode() != Common.CloudType.onprem) { - List mountPoints = new ArrayList<>(); - 
for (int i = 0; i < taskParams().deviceInfo.numVolumes; i++) { - mountPoints.add("/mnt/d" + i); - } - return mountPoints; - } - return Collections.emptyList(); - } - - private String getYbPackage(ReleaseContainer release, Architecture arch, Region region) { - String ybServerPackage = null; - if (release != null) { - if (arch != null) { - ybServerPackage = release.getFilePath(arch); - } else { - ybServerPackage = release.getFilePath(region); - } - } - - return ybServerPackage; - } - - private InstallSoftwareInput.Builder fillYbReleaseMetadata( - Universe universe, - Provider provider, - NodeDetails node, - String ybSoftwareVersion, - Region region, - Architecture arch, - InstallSoftwareInput.Builder installSoftwareInputBuilder, - NodeAgent nodeAgent, - String customTmpDirectory) { - Map envConfig = CloudInfoInterface.fetchEnvVars(provider); - ReleaseContainer release = releaseManager.getReleaseByVersion(ybSoftwareVersion); - String ybServerPackage = getYbPackage(release, arch, region); - installSoftwareInputBuilder.setYbPackage(ybServerPackage); - if (release.isS3Download(ybServerPackage)) { - installSoftwareInputBuilder.setS3RemoteDownload(true); - String accessKey = envConfig.get("AWS_ACCESS_KEY_ID"); - if (StringUtils.isEmpty(accessKey)) { - // TODO: This will be removed once iTest moves to new release API. - accessKey = release.getAwsAccessKey(arch); - } - if (StringUtils.isEmpty(accessKey)) { - accessKey = System.getenv("AWS_ACCESS_KEY_ID"); - } - if (StringUtils.isEmpty(accessKey)) { - installSoftwareInputBuilder.setAwsAccessKey(accessKey); - } - String secretKey = envConfig.get("AWS_SECRET_ACCESS_KEY"); - if (StringUtils.isEmpty(secretKey)) { - secretKey = release.getAwsSecretKey(arch); - } - if (StringUtils.isEmpty(secretKey)) { - secretKey = System.getenv("AWS_SECRET_ACCESS_KEY"); - } - if (StringUtils.isEmpty(secretKey)) { - installSoftwareInputBuilder.setAwsSecretKey(secretKey); - } - } else if (release.isGcsDownload(ybServerPackage)) { - installSoftwareInputBuilder.setGcsRemoteDownload(true); - // Upload the Credential json to the remote host. - nodeAgentClient.uploadFile( - nodeAgent, - envConfig.get(GCPCloudImpl.GCE_PROJECT_PROPERTY), - customTmpDirectory - + "/" - + Paths.get(envConfig.get(GCPCloudImpl.GCE_PROJECT_PROPERTY)) - .getFileName() - .toString(), - DEFAULT_CONFIGURE_USER, - 0, - null); - installSoftwareInputBuilder.setGcsCredentialsJson( - customTmpDirectory - + "/" - + Paths.get(envConfig.get(GCPCloudImpl.GCE_PROJECT_PROPERTY)) - .getFileName() - .toString()); - } else if (release.isHttpDownload(ybServerPackage)) { - installSoftwareInputBuilder.setHttpRemoteDownload(true); - if (StringUtils.isNotBlank(release.getHttpChecksum())) { - installSoftwareInputBuilder.setHttpPackageChecksum(release.getHttpChecksum().toLowerCase()); - } - } else if (release.hasLocalRelease()) { - // Upload the release to the node. 
- nodeAgentClient.uploadFile( - nodeAgent, - ybServerPackage, - customTmpDirectory + "/" + Paths.get(ybServerPackage).getFileName().toString(), - DEFAULT_CONFIGURE_USER, - 0, - null); - installSoftwareInputBuilder.setYbPackage( - customTmpDirectory + "/" + Paths.get(ybServerPackage).getFileName().toString()); - } - if (!node.isInPlacement(universe.getUniverseDetails().getPrimaryCluster().uuid)) { - // For RR we don't setup master - installSoftwareInputBuilder.addSymLinkFolders("tserver"); - } else { - installSoftwareInputBuilder.addSymLinkFolders("tserver"); - installSoftwareInputBuilder.addSymLinkFolders("master"); - } - installSoftwareInputBuilder.setRemoteTmp(customTmpDirectory); - installSoftwareInputBuilder.setYbHomeDir(provider.getYbHome()); - return installSoftwareInputBuilder; - } - - private InstallSoftwareInput setupInstallSoftwareBits( - Universe universe, NodeDetails nodeDetails, Params taskParams, NodeAgent nodeAgent) { - InstallSoftwareInput.Builder installSoftwareInputBuilder = InstallSoftwareInput.newBuilder(); - Cluster cluster = universe.getCluster(nodeDetails.placementUuid); - Provider provider = Provider.getOrBadRequest(UUID.fromString(cluster.userIntent.provider)); - String customTmpDirectory = - confGetter.getConfForScope(provider, ProviderConfKeys.remoteTmpDirectory); - installSoftwareInputBuilder = - fillYbReleaseMetadata( - universe, - provider, - nodeDetails, - taskParams.ybSoftwareVersion, - taskParams.getRegion(), - universe.getUniverseDetails().arch, - installSoftwareInputBuilder, - nodeAgent, - customTmpDirectory); - - return installSoftwareInputBuilder.build(); - } - - private InstallYbcInput setupInstallYbcSoftwareBits( - Universe universe, NodeDetails nodeDetails, Params taskParams, NodeAgent nodeAgent) { - InstallYbcInput.Builder installYbcInputBuilder = InstallYbcInput.newBuilder(); - ReleaseContainer release = releaseManager.getReleaseByVersion(taskParams.ybSoftwareVersion); - String ybServerPackage = - getYbPackage(release, universe.getUniverseDetails().arch, taskParams.getRegion()); - Cluster cluster = universe.getCluster(nodeDetails.placementUuid); - Provider provider = Provider.getOrBadRequest(UUID.fromString(cluster.userIntent.provider)); - String customTmpDirectory = - confGetter.getConfForScope(provider, ProviderConfKeys.remoteTmpDirectory); - String ybcPackage = null; - Pair ybcPackageDetails = - Util.getYbcPackageDetailsFromYbServerPackage(ybServerPackage); - String stableYbc = confGetter.getGlobalConf(GlobalConfKeys.ybcStableVersion); - ReleaseManager.ReleaseMetadata releaseMetadata = - releaseManager.getYbcReleaseByVersion( - stableYbc, ybcPackageDetails.getFirst(), ybcPackageDetails.getSecond()); - if (releaseMetadata == null) { - throw new RuntimeException( - String.format("Ybc package metadata: %s cannot be empty with ybc enabled", stableYbc)); - } - if (universe.getUniverseDetails().arch != null) { - ybcPackage = releaseMetadata.getFilePath(universe.getUniverseDetails().arch); - } else { - // Fallback to region in case arch is not present - ybcPackage = releaseMetadata.getFilePath(taskParams.getRegion()); - } - if (StringUtils.isBlank(ybcPackage)) { - throw new RuntimeException("Ybc package cannot be empty with ybc enabled"); - } - installYbcInputBuilder.setYbcPackage(ybcPackage); - nodeAgentClient.uploadFile( - nodeAgent, - ybcPackage, - customTmpDirectory + "/" + Paths.get(ybcPackage).getFileName().toString(), - DEFAULT_CONFIGURE_USER, - 0, - null); - installYbcInputBuilder.setRemoteTmp(customTmpDirectory); - 
installYbcInputBuilder.setYbHomeDir(provider.getYbHome()); - installYbcInputBuilder.addAllMountPoints(getMountPoints()); - return installYbcInputBuilder.build(); - } - - private ConfigureServerInput setUpConfigureServerBits( - Universe universe, NodeDetails nodeDetails, Params taskParams, NodeAgent nodeAgent) { - ConfigureServerInput.Builder configureServerInputBuilder = ConfigureServerInput.newBuilder(); - Cluster cluster = universe.getCluster(nodeDetails.placementUuid); - Provider provider = Provider.getOrBadRequest(UUID.fromString(cluster.userIntent.provider)); - String customTmpDirectory = - confGetter.getConfForScope(provider, ProviderConfKeys.remoteTmpDirectory); - - configureServerInputBuilder.setRemoteTmp(customTmpDirectory); - configureServerInputBuilder.setYbHomeDir(provider.getYbHome()); - configureServerInputBuilder.addAllMountPoints(getMountPoints()); - if (!nodeDetails.isInPlacement(universe.getUniverseDetails().getPrimaryCluster().uuid)) { - // For RR we don't setup master - configureServerInputBuilder.addProcesses("tserver"); - } else { - // For dedicated nodes, both are set up. - configureServerInputBuilder.addProcesses("master"); - configureServerInputBuilder.addProcesses("tserver"); - } - - Integer num_cores_to_keep = - confGetter.getConfForScope(universe, UniverseConfKeys.numCoresToKeep); - configureServerInputBuilder.setNumCoresToKeep(num_cores_to_keep); - return configureServerInputBuilder.build(); + this.nodeAgentRpcPayload = nodeAgentRpcPayload; } public static class Params extends NodeTaskParams { @@ -409,7 +175,8 @@ && taskParams().isMasterInShellMode && (taskParams().type == UpgradeTaskType.GFlags || taskParams().type == UpgradeTaskType.YbcGFlags)) { log.info("Updating gflags using node agent {}", optional.get()); - runServerGFlagsWithNodeAgent(optional.get(), universe, nodeDetails); + nodeAgentRpcPayload.runServerGFlagsWithNodeAgent( + optional.get(), universe, nodeDetails, taskParams()); return; } // Execute the ansible command. 
@@ -423,21 +190,35 @@ && taskParams().isMasterInShellMode log.info("Installing software using node agent {}", optional.get()); nodeAgentClient.runConfigureServer( optional.get(), - setUpConfigureServerBits(universe, nodeDetails, taskParams(), optional.get()), + nodeAgentRpcPayload.setUpConfigureServerBits( + universe, nodeDetails, taskParams(), optional.get()), DEFAULT_CONFIGURE_USER); nodeAgentClient.runInstallSoftware( optional.get(), - setupInstallSoftwareBits(universe, nodeDetails, taskParams(), optional.get()), + nodeAgentRpcPayload.setupInstallSoftwareBits( + universe, nodeDetails, taskParams(), optional.get()), DEFAULT_CONFIGURE_USER); if (taskParams().isEnableYbc()) { log.info("Installing YBC using node agent {}", optional.get()); nodeAgentClient.runInstallYbcSoftware( optional.get(), - setupInstallYbcSoftwareBits(universe, nodeDetails, taskParams(), optional.get()), + nodeAgentRpcPayload.setupInstallYbcSoftwareBits( + universe, nodeDetails, taskParams(), optional.get()), DEFAULT_CONFIGURE_USER); - runServerGFlagsWithNodeAgent( - optional.get(), universe, nodeDetails, ServerType.CONTROLLER.toString()); + nodeAgentRpcPayload.runServerGFlagsWithNodeAgent( + optional.get(), universe, nodeDetails, ServerType.CONTROLLER.toString(), taskParams()); + } + if (taskParams().otelCollectorEnabled && taskParams().auditLogConfig != null) { + AuditLogConfig config = taskParams().auditLogConfig; + if (!((config.getYsqlAuditConfig() == null || !config.getYsqlAuditConfig().isEnabled()) + && (config.getYcqlAuditConfig() == null || !config.getYcqlAuditConfig().isEnabled()))) { + nodeAgentClient.runInstallOtelCollector( + optional.get(), + nodeAgentRpcPayload.setupInstallOtelCollectorBits( + universe, nodeDetails, taskParams(), optional.get()), + DEFAULT_CONFIGURE_USER); + } } } @@ -487,73 +268,6 @@ public void run(Universe universe) { } } - private void runServerGFlagsWithNodeAgent( - NodeAgent nodeAgent, Universe universe, NodeDetails nodeDetails) { - String processType = taskParams().getProperty("processType"); - if (!processType.equals(ServerType.CONTROLLER.toString()) - && !processType.equals(ServerType.MASTER.toString()) - && !processType.equals(ServerType.TSERVER.toString())) { - throw new RuntimeException("Invalid processType: " + processType); - } - runServerGFlagsWithNodeAgent(nodeAgent, universe, nodeDetails, processType); - } - - private void runServerGFlagsWithNodeAgent( - NodeAgent nodeAgent, Universe universe, NodeDetails nodeDetails, String processType) { - String serverName = processType.toLowerCase(); - String serverHome = - Paths.get(nodeUniverseManager.getYbHomeDir(nodeDetails, universe), serverName).toString(); - boolean useHostname = - universe.getUniverseDetails().getPrimaryCluster().userIntent.useHostname - || !Util.isIpAddress(nodeDetails.cloudInfo.private_ip); - UserIntent userIntent = getNodeManager().getUserIntentFromParams(universe, taskParams()); - ServerGFlagsInput.Builder builder = - ServerGFlagsInput.newBuilder() - .setServerHome(serverHome) - .setServerName(serverHome) - .setServerName(serverName); - Map gflags = - new HashMap<>( - GFlagsUtil.getAllDefaultGFlags( - taskParams(), universe, userIntent, useHostname, config, confGetter)); - if (processType.equals(ServerType.CONTROLLER.toString())) { - // TODO Is the check taskParam.isEnableYbc() required here? - Map ybcFlags = - GFlagsUtil.getYbcFlags( - universe, taskParams(), confGetter, config, taskParams().ybcGflags); - // Override for existing keys as this has higher precedence. 
- gflags.putAll(ybcFlags); - } else if (processType.equals(ServerType.MASTER.toString()) - || processType.equals(ServerType.TSERVER.toString())) { - // Override for existing keys as this has higher precedence. - gflags.putAll(taskParams().gflags); - getNodeManager() - .processGFlags(config, universe, nodeDetails, taskParams(), gflags, useHostname); - if (!config.getBoolean("yb.cloud.enabled")) { - if (gflags.containsKey(GFlagsUtil.YSQL_HBA_CONF_CSV)) { - String hbaConfValue = gflags.get(GFlagsUtil.YSQL_HBA_CONF_CSV); - if (hbaConfValue.contains(GFlagsUtil.JWT_AUTH)) { - Path tmpDirectoryPath = - FileUtils.getOrCreateTmpDirectory( - confGetter.getGlobalConf(GlobalConfKeys.ybTmpDirectoryPath)); - Path localGflagFilePath = - tmpDirectoryPath.resolve(nodeDetails.getNodeUuid().toString()); - String providerUUID = userIntent.provider; - String ybHomeDir = GFlagsUtil.getYbHomeDir(providerUUID); - String remoteGFlagPath = ybHomeDir + GFlagsUtil.GFLAG_REMOTE_FILES_PATH; - nodeAgentClient.uploadFile(nodeAgent, localGflagFilePath.toString(), remoteGFlagPath); - } - } - } - if (taskParams().resetMasterState) { - builder.setResetMasterState(true); - } - } - ServerGFlagsInput input = builder.putAllGflags(gflags).build(); - log.debug("Setting gflags using node agent: {}", input.getGflagsMap()); - nodeAgentClient.runServerGFlags(nodeAgent, input, DEFAULT_CONFIGURE_USER); - } - @Override public int getRetryLimit() { return 2; diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/ManageOtelCollector.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/ManageOtelCollector.java index e288d1ffb43d..d71fdcd20d57 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/ManageOtelCollector.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/ManageOtelCollector.java @@ -4,29 +4,40 @@ import com.yugabyte.yw.commissioner.BaseTaskDependencies; import com.yugabyte.yw.commissioner.tasks.params.NodeTaskParams; +import com.yugabyte.yw.commissioner.tasks.payload.NodeAgentRpcPayload; import com.yugabyte.yw.common.NodeManager; import com.yugabyte.yw.common.NodeUniverseManager; import com.yugabyte.yw.common.ShellProcessContext; +import com.yugabyte.yw.common.config.GlobalConfKeys; +import com.yugabyte.yw.models.NodeAgent; import com.yugabyte.yw.models.Universe; import com.yugabyte.yw.models.helpers.NodeDetails; import com.yugabyte.yw.models.helpers.audit.AuditLogConfig; import java.util.Arrays; import java.util.Map; +import java.util.Optional; import javax.inject.Inject; import lombok.extern.slf4j.Slf4j; @Slf4j public class ManageOtelCollector extends NodeTaskBase { + public static String OtelCollectorVersion = "0.90.0"; + public static String OtelCollectorPlatform = "linux"; + private final NodeUniverseManager nodeUniverseManager; + private final NodeAgentRpcPayload nodeAgentRpcPayload; private ShellProcessContext shellContext = ShellProcessContext.builder().logCmdOutput(true).build(); @Inject protected ManageOtelCollector( - BaseTaskDependencies baseTaskDependencies, NodeUniverseManager nodeUniverseManager) { + BaseTaskDependencies baseTaskDependencies, + NodeUniverseManager nodeUniverseManager, + NodeAgentRpcPayload nodeAgentRpcPayload) { super(baseTaskDependencies); this.nodeUniverseManager = nodeUniverseManager; + this.nodeAgentRpcPayload = nodeAgentRpcPayload; } public static class Params extends NodeTaskParams { @@ -58,8 +69,29 @@ public void run() { taskParams().useSudo = true; } log.info("Managing OpenTelemetry collector on 
instance {}", taskParams().nodeName); - getNodeManager() - .nodeCommand(NodeManager.NodeCommandType.Manage_Otel_Collector, taskParams()) - .processErrors(); + Optional optional = + confGetter.getGlobalConf(GlobalConfKeys.nodeAgentEnableConfigureServer) + ? nodeUniverseManager.maybeGetNodeAgent( + getUniverse(), node, true /*check feature flag*/) + : Optional.empty(); + + if (optional.isPresent()) { + log.info("Configuring otel-collector using node-agent"); + if (taskParams().otelCollectorEnabled && taskParams().auditLogConfig != null) { + AuditLogConfig config = taskParams().auditLogConfig; + if (!((config.getYsqlAuditConfig() == null || !config.getYsqlAuditConfig().isEnabled()) + && (config.getYcqlAuditConfig() == null || !config.getYcqlAuditConfig().isEnabled()))) { + nodeAgentClient.runInstallOtelCollector( + optional.get(), + nodeAgentRpcPayload.setupInstallOtelCollectorBits( + universe, node, taskParams(), optional.get()), + NodeAgentRpcPayload.DEFAULT_CONFIGURE_USER); + } + } + } else { + getNodeManager() + .nodeCommand(NodeManager.NodeCommandType.Manage_Otel_Collector, taskParams()) + .processErrors(); + } } } diff --git a/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java b/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java index 1a3a49e7eddd..870b66774c26 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java +++ b/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java @@ -39,6 +39,8 @@ import com.yugabyte.yw.nodeagent.ExecuteCommandRequest; import com.yugabyte.yw.nodeagent.ExecuteCommandResponse; import com.yugabyte.yw.nodeagent.FileInfo; +import com.yugabyte.yw.nodeagent.InstallOtelCollectorInput; +import com.yugabyte.yw.nodeagent.InstallOtelCollectorOutput; import com.yugabyte.yw.nodeagent.InstallSoftwareInput; import com.yugabyte.yw.nodeagent.InstallSoftwareOutput; import com.yugabyte.yw.nodeagent.InstallYbcInput; @@ -955,6 +957,18 @@ public InstallYbcOutput runInstallYbcSoftware( return runAsyncTask(nodeAgent, builder.build(), InstallYbcOutput.class); } + public InstallOtelCollectorOutput runInstallOtelCollector( + NodeAgent nodeAgent, InstallOtelCollectorInput input, String user) { + SubmitTaskRequest.Builder builder = + SubmitTaskRequest.newBuilder() + .setInstallOtelCollectorInput(input) + .setTaskId(UUID.randomUUID().toString()); + if (StringUtils.isNotBlank(user)) { + builder.setUser(user); + } + return runAsyncTask(nodeAgent, builder.build(), InstallOtelCollectorOutput.class); + } + public ServerGFlagsOutput runServerGFlags( NodeAgent nodeAgent, ServerGFlagsInput input, String user) { SubmitTaskRequest.Builder builder = From 0c43a70681e09761e54d62500c32fc6d571cf65e Mon Sep 17 00:00:00 2001 From: Cloud User Date: Fri, 16 May 2025 11:47:17 +0000 Subject: [PATCH 124/146] [PLAT-17659] Fix volume resize when combined with other spec changes Summary: During K8s volume resize, we do helm upgrade to recreate STS. But instead of using only the modified volume size, we use the entire new task params in helm upgrade subtask. This is an issue when combining volume resize with other spec changes, as it leads to pod restarts before reaching the actual subtasks which take care of modifying the spec. This is not a regression as before master volume feature too, we used to apply KubernetesCommandExecutor params with taskParams() based universe details itself. Fixed the behavior by only using new device info changes when doing helm upgrade for volume resize case. 
Also made the volume resize go from zone to zone instead of all at once. Test Plan: Verified by combining volume resize for tserver/master with resource spec changes. Reviewers: dshubin, anabaria, anijhawan Reviewed By: dshubin, anabaria Differential Revision: https://phorge.dev.yugabyte.com/D44026 --- .../tasks/EditKubernetesUniverse.java | 61 ++++++++++--------- .../tasks/KubernetesTaskBase.java | 43 +++++++++---- 2 files changed, 64 insertions(+), 40 deletions(-) diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/EditKubernetesUniverse.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/EditKubernetesUniverse.java index 10f7ad82082f..6cb7de895bfc 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/EditKubernetesUniverse.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/EditKubernetesUniverse.java @@ -949,30 +949,31 @@ protected void createResizeDiskTask( UUID providerUUID = UUID.fromString(userIntent.provider); Provider provider = Provider.getOrBadRequest(providerUUID); - // Subtask groups( ignore Errors is false by default ) - SubTaskGroup validateExpansion = - createSubTaskGroup( - KubernetesCheckVolumeExpansion.getSubTaskGroupName(), SubTaskGroupType.PreflightChecks); - SubTaskGroup stsDelete = - createSubTaskGroup( - KubernetesCommandExecutor.CommandType.STS_DELETE.getSubTaskGroupName(), - SubTaskGroupType.ResizingDisk); - SubTaskGroup pvcExpand = - createSubTaskGroup( - KubernetesCommandExecutor.CommandType.PVC_EXPAND_SIZE.getSubTaskGroupName(), - SubTaskGroupType.ResizingDisk, - true /* ignoreErrors */); - SubTaskGroup helmUpgrade = - createSubTaskGroup( - KubernetesCommandExecutor.CommandType.HELM_UPGRADE.getSubTaskGroupName(), - SubTaskGroupType.HelmUpgrade); - SubTaskGroup postExpansionValidate = - createSubTaskGroup( - KubernetesPostExpansionCheckVolume.getSubTaskGroupName(), - SubTaskGroupType.PostUpdateValidations); - for (Entry> entry : placement.configs.entrySet()) { + // Subtask groups( ignore Errors is false by default ) + SubTaskGroup validateExpansion = + createSubTaskGroup( + KubernetesCheckVolumeExpansion.getSubTaskGroupName(), + SubTaskGroupType.PreflightChecks); + SubTaskGroup stsDelete = + createSubTaskGroup( + KubernetesCommandExecutor.CommandType.STS_DELETE.getSubTaskGroupName(), + SubTaskGroupType.ResizingDisk); + SubTaskGroup pvcExpand = + createSubTaskGroup( + KubernetesCommandExecutor.CommandType.PVC_EXPAND_SIZE.getSubTaskGroupName(), + SubTaskGroupType.ResizingDisk, + true /* ignoreErrors */); + SubTaskGroup helmUpgrade = + createSubTaskGroup( + KubernetesCommandExecutor.CommandType.HELM_UPGRADE.getSubTaskGroupName(), + SubTaskGroupType.HelmUpgrade); + SubTaskGroup postExpansionValidate = + createSubTaskGroup( + KubernetesPostExpansionCheckVolume.getSubTaskGroupName(), + SubTaskGroupType.PostUpdateValidations); + UUID azUUID = entry.getKey(); String azName = PlacementInfoUtil.isMultiAZ(provider) @@ -1126,13 +1127,15 @@ protected void createResizeDiskTask( providerUUID, newDiskSizeGi, serverType)); - } - if (validateExpansion.getSubTaskCount() > 0) { - getRunnableTask().addSubTaskGroup(validateExpansion); - getRunnableTask().addSubTaskGroup(stsDelete); - getRunnableTask().addSubTaskGroup(pvcExpand); - getRunnableTask().addSubTaskGroup(helmUpgrade); - getRunnableTask().addSubTaskGroup(postExpansionValidate); + + // Add all subtasks to runnable + if (validateExpansion.getSubTaskCount() > 0) { + getRunnableTask().addSubTaskGroup(validateExpansion); + getRunnableTask().addSubTaskGroup(stsDelete); 
+ getRunnableTask().addSubTaskGroup(pvcExpand); + getRunnableTask().addSubTaskGroup(helmUpgrade); + getRunnableTask().addSubTaskGroup(postExpansionValidate); + } } } diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/KubernetesTaskBase.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/KubernetesTaskBase.java index 5636add36456..33091d4b9957 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/KubernetesTaskBase.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/KubernetesTaskBase.java @@ -687,8 +687,8 @@ public void upgradePodsNonRolling( null, false /* usePreviousGflagsChecksum */, null /* previousGflagsChecksumMap */, - true, /* useNewMasterDiskSize */ - true /* useNewTserverDiskSize */, + false, /* useNewMasterDiskSize */ + false /* useNewTserverDiskSize */, ysqlMajorVersionUpgradeState)); if (sType.equals(ServerType.EITHER)) { @@ -764,8 +764,8 @@ public void upgradePodsNonRolling( null, false /* usePreviousGflagsChecksum */, null /* previousGflagsChecksumMap */, - true, /* useNewMasterDiskSize */ - true /* useNewTserverDiskSize */, + false, /* useNewMasterDiskSize */ + false /* useNewTserverDiskSize */, ysqlMajorVersionUpgradeState)); }); getRunnableTask().addSubTaskGroup(helmUpgrade); @@ -1114,8 +1114,8 @@ public void upgradePodsTask( ybcSoftwareVersion, false, null, - true /* useNewMasterDiskSize */, - true /* useNewTserverDiskSize */, + false /* useNewMasterDiskSize */, + false /* useNewTserverDiskSize */, ysqlMajorVersionUpgradeState), commandType.getSubTaskGroupName(), UserTaskDetails.SubTaskGroupType.Provisioning, @@ -2076,8 +2076,8 @@ public void createSingleKubernetesExecutorTaskForServerType( ybcSoftwareVersion, usePreviousGflagsChecksum, previousGflagsChecksumMap, - true /* useNewMasterDiskSize */, - true /* useNewTserverDiskSize */, + false /* useNewMasterDiskSize */, + false /* useNewTserverDiskSize */, ysqlMajorVersionUpgradeState)); getRunnableTask().addSubTaskGroup(subTaskGroup); subTaskGroup.setSubTaskGroupType(UserTaskDetails.SubTaskGroupType.Provisioning); @@ -2121,8 +2121,8 @@ public KubernetesCommandExecutor getSingleKubernetesExecutorTaskForServerTypeTas ybcSoftwareVersion, false /* usePreviousGflagsChecksum */, null /* previousGflagsChecksumMap */, - true, /* useNewMasterDiskSize */ - true /* useNewTserverDiskSize */, + false, /* useNewMasterDiskSize */ + false /* useNewTserverDiskSize */, null /* ysqlMajorVersionUpgradeState */); } @@ -2173,9 +2173,30 @@ public KubernetesCommandExecutor getSingleKubernetesExecutorTaskForServerTypeTas params.azOverrides = azOverrides; params.universeName = universeName; // sending in the entire taskParams only for selected commandTypes that need it - if (commandType == CommandType.HELM_INSTALL || commandType == CommandType.HELM_UPGRADE) { + if (commandType == CommandType.HELM_INSTALL) { params.universeDetails = taskParams(); params.universeConfig = universe.getConfig(); + } else if (commandType == CommandType.HELM_UPGRADE) { + params.universeConfig = universe.getConfig(); + if (useNewMasterDiskSize || useNewTserverDiskSize) { + // Only update the deviceInfo all other things remain same + params.universeDetails = universe.getUniverseDetails(); + if (useNewTserverDiskSize) { + if (isReadOnlyCluster) { + params.universeDetails.getReadOnlyClusters().get(0).userIntent.deviceInfo = + taskParams().getReadOnlyClusters().get(0).userIntent.deviceInfo; + } else { + params.universeDetails.getPrimaryCluster().userIntent.deviceInfo = + 
taskParams().getPrimaryCluster().userIntent.deviceInfo; + } + } + if (useNewMasterDiskSize) { + params.universeDetails.getPrimaryCluster().userIntent.masterDeviceInfo = + taskParams().getPrimaryCluster().userIntent.masterDeviceInfo; + } + } else { + params.universeDetails = taskParams(); + } } if (masterAddresses != null) { From c488a5e5872f6cb9dbdcabfd858482a06931220d Mon Sep 17 00:00:00 2001 From: Deepti-yb Date: Mon, 19 May 2025 18:18:25 +0000 Subject: [PATCH 125/146] [PLAT-17669][YBA CLI]Replace and decommission nodes throw error at fetch nodes Summary: The operations throw an error since the node no longer exists in the universe. Instead, for these operations, the list of all nodes in the universe will be printed Test Plan: Manually test the 2 commands Reviewers: skurapati Reviewed By: skurapati Differential Revision: https://phorge.dev.yugabyte.com/D44074 --- managed/yba-cli/cmd/universe/node/nodeutil.go | 39 +++++++++++++++---- 1 file changed, 32 insertions(+), 7 deletions(-) diff --git a/managed/yba-cli/cmd/universe/node/nodeutil.go b/managed/yba-cli/cmd/universe/node/nodeutil.go index c4f0516f3aad..3a76d1bbcfe5 100644 --- a/managed/yba-cli/cmd/universe/node/nodeutil.go +++ b/managed/yba-cli/cmd/universe/node/nodeutil.go @@ -122,19 +122,39 @@ func nodeOperationsUtil(cmd *cobra.Command, operation, command string) { Format: universe.NewNodesFormat(viper.GetString("output")), } - nodeInstance, response, err := authAPI.GetNodeDetails(universeUUID, nodeName).Execute() + if !isNodeRemovingOperation(operation) { + nodeInstance, response, err := authAPI.GetNodeDetails(universeUUID, nodeName).Execute() + if err != nil { + errMessage := util.ErrorFromHTTPResponse(response, err, "Node", + fmt.Sprintf("%s - Fetch Nodes", operation)) + logrus.Fatalf(formatter.Colorize(errMessage.Error()+"\n", formatter.RedColor)) + } + + nodeInstanceList := make([]ybaclient.NodeDetailsResp, 0) + nodeInstanceList = append(nodeInstanceList, nodeInstance) + + universe.NodeWrite(nodesCtx, nodeInstanceList) + return + } + nodesCtx.Command = "list" + + r, response, err := universeListRequest.Execute() if err != nil { - errMessage := util.ErrorFromHTTPResponse(response, err, "Node", - fmt.Sprintf("%s - Fetch Nodes", operation)) + + errMessage := util.ErrorFromHTTPResponse( + response, err, + "Node", + fmt.Sprintf("%s - List Universes", operation)) logrus.Fatalf(formatter.Colorize(errMessage.Error()+"\n", formatter.RedColor)) } - nodeInstanceList := make([]ybaclient.NodeDetailsResp, 0) - nodeInstanceList = append(nodeInstanceList, nodeInstance) - - universe.NodeWrite(nodesCtx, nodeInstanceList) + selectedUniverse := r[0] + details := selectedUniverse.GetUniverseDetails() + nodes := details.GetNodeDetailsSet() + universe.NodeWrite(nodesCtx, nodes) return } + logrus.Infoln(msg + "\n") taskCtx := formatter.Context{ @@ -145,3 +165,8 @@ func nodeOperationsUtil(cmd *cobra.Command, operation, command string) { ybatask.Write(taskCtx, []ybaclient.YBPTask{rTask}) } + +func isNodeRemovingOperation(operation string) bool { + operation = strings.ToLower(operation) + return strings.EqualFold(operation, "replace") || strings.EqualFold(operation, "decommission") +} From fdf63d991351bd1721ce03eb54891ba31613d0c1 Mon Sep 17 00:00:00 2001 From: kkannan Date: Mon, 19 May 2025 11:03:58 +0530 Subject: [PATCH 126/146] [PLAT-17626]Display OriginMessage instead of Message for failed tasks Summary: The message property for failed tasks was previously formatted as: "Failed to execute task {$task_params}: ${actual_error_message}". 
While this provided context, it often made the error messages unnecessarily verbose and harder to read. To improve clarity and user experience, we have updated the implementation to use only the actual error message. This has been achieved by replacing the message property with originMessage, which now displays only the raw error message returned by the task Also, instead of ordering the subTasks by groupType we are now sorting them by position and group them if two consecutive tasks have same subTaskType. Also, added a expand All button Test Plan: {F356442} {F357448} Tested manually Reviewers: lsangappa Reviewed By: lsangappa Differential Revision: https://phorge.dev.yugabyte.com/D43972 --- .../components/drawerComp/SubTaskDetails.tsx | 146 +++++++++++++----- .../ui/src/redesign/features/tasks/dtos.ts | 7 + managed/ui/src/translations/en.json | 2 + 3 files changed, 114 insertions(+), 41 deletions(-) diff --git a/managed/ui/src/redesign/features/tasks/components/drawerComp/SubTaskDetails.tsx b/managed/ui/src/redesign/features/tasks/components/drawerComp/SubTaskDetails.tsx index 8c9c034b0274..3e5a8566c4ab 100644 --- a/managed/ui/src/redesign/features/tasks/components/drawerComp/SubTaskDetails.tsx +++ b/managed/ui/src/redesign/features/tasks/components/drawerComp/SubTaskDetails.tsx @@ -9,11 +9,11 @@ import { FC, useEffect } from 'react'; import clsx from 'clsx'; -import { usePrevious, useToggle } from 'react-use'; +import { useMap, useMount, usePrevious, useToggle } from 'react-use'; import { useQuery } from 'react-query'; import { useTranslation } from 'react-i18next'; -import { groupBy, keys, startCase, values } from 'lodash'; -import { Collapse, Typography, makeStyles } from '@material-ui/core'; +import { keys, sortBy, startCase, values } from 'lodash'; +import { Collapse, Tooltip, Typography, makeStyles } from '@material-ui/core'; import { YBButton } from '../../../../components'; import { YBLoadingCircleIcon } from '../../../../../components/common/indicators'; import { getFailedTaskDetails, getSubTaskDetails } from './api'; @@ -64,6 +64,8 @@ const useStyles = makeStyles((theme) => ({ export const SubTaskDetails: FC = ({ currentTask }) => { const classes = useStyles(); const [expandDetails, toggleExpandDetails] = useToggle(false); + const [expandedSubTasks, { setAll, set, get }] = useMap(); + const failedTask = isTaskFailed(currentTask); const currentTaskPrevState = usePrevious(currentTask); const { t } = useTranslation('translation', { @@ -99,23 +101,39 @@ export const SubTaskDetails: FC = ({ currentTask }) => { if (!currentTask) return null; - // we have duplicate subtasks in the response, so we are filtering out the latest subtask (by last updated time) - const uniqueTasks: Record = {}; + const subTasksList: Array<{ + key: string; + subTasks: SubTaskInfo[]; + }> = []; - detailedTaskInfo?.[currentTask.targetUUID]?.[0].subtaskInfos.forEach((task: SubTaskInfo) => { - const key = task.subTaskGroupType + task.taskType; - if (!uniqueTasks[key]) { - uniqueTasks[key] = task; - } else { - const taskToCompare = uniqueTasks[key]; - if (taskToCompare.updateTime < task.updateTime) { - uniqueTasks[key] = task; - } + // sort the tasks by position + // if two consecutive tasks have the same subtask group type, group them together + const sortedSubTasks = sortBy( + values(detailedTaskInfo?.[currentTask.targetUUID]?.[0].subtaskInfos), + 'position' + ); + let subTasksListIndex = 0; + let sortedSubTaskIndex = 0; + // loop through the sorted subtasks + while (sortedSubTaskIndex < sortedSubTasks.length) { 
+ const subTask = sortedSubTasks[sortedSubTaskIndex]; + const subTaskGroup = subTask.subTaskGroupType; + + // if the subtask group type is different from the previous one, create a new group + if (subTasksList[subTasksListIndex - 1]?.key !== subTaskGroup) { + subTasksList.push({ + key: subTaskGroup, + subTasks: [] + }); + subTasksListIndex++; + } + // if the subtask group type is the same as the previous one, push the subtask to the previous group + if (subTasksList[subTasksListIndex - 1].key === subTaskGroup) { + subTasksList[subTasksListIndex - 1].subTasks.push(subTask); } - }); - //group them by Task Group Type - const subTasksList = groupBy(values(uniqueTasks), 'subTaskGroupType'); + sortedSubTaskIndex++; + } const getFailedTaskData = () => { if (isLoading) return ; @@ -164,14 +182,36 @@ export const SubTaskDetails: FC = ({ currentTask }) => { {isSubTaskLoading ? ( ) : ( - keys(subTasksList).map((key, index) => ( - - )) + <> + {subTasksList.length > 0 && ( +
{ + const allExpanded = keys(expandedSubTasks).every((key) => expandedSubTasks[key]); + setAll(keys(expandedSubTasks).map(() => !allExpanded)); + }} + data-testid="expand-all-subtasks" + > + {t( + keys(expandedSubTasks).every((k) => expandedSubTasks[k]) + ? 'collapseAll' + : 'expandAll' + )} +
+ )} + {subTasksList.map((subTask, index) => ( + { + set(index, get(index) === undefined ? false : !get(index)); + }} + /> + ))} + )} ); @@ -181,6 +221,8 @@ export type SubTaskCardProps = { subTasks: SubTaskInfo[]; index: number; category: string; + expanded: boolean; + toggleExpanded: (index: number) => void; }; const subTaskCardStyles = makeStyles((theme) => ({ @@ -273,15 +315,22 @@ const subTaskCardStyles = makeStyles((theme) => ({ background: theme.palette.error[100], padding: '8px 10px', wordBreak: 'break-word' + }, + timeElapsed: { + marginLeft: 'auto', } })); -export const SubTaskCard: FC = ({ subTasks, index, category }) => { +export const SubTaskCard: FC = ({ + subTasks, + index, + category, + expanded, + toggleExpanded +}) => { const classes = subTaskCardStyles(); - - const [showDetails, toggleDetails] = useToggle(false); - const { t } = useTranslation('translation', { - keyPrefix: 'taskDetails.progress' + useMount(() => { + toggleExpanded(index); }); const getTaskIcon = (state: Task['status'], position?: number) => { @@ -315,34 +364,49 @@ export const SubTaskCard: FC = ({ subTasks, index, category }) categoryTaskStatus = TaskStates.SUCCESS; } + const getNodeNames = (subTask: SubTaskInfo) => { + if (subTask.taskParams?.nodeDetailsSet) { + return <>{subTask.taskParams.nodeDetailsSet.map(node =>
{`(${node.nodeName})`}
)}; + } + if (subTask.taskParams?.nodeName) { + return ` (${subTask.taskParams.nodeName})`; + } + return ''; + }; + + return (
-
toggleDetails(!showDetails)}> +
toggleExpanded(index)}>
- {getTaskIcon(categoryTaskStatus, index)} + {getTaskIcon(categoryTaskStatus, index + 1)}
{startCase(category)}
- +
- {subTasks.map((subTask, index) => ( -
+ {subTasks.map((subTask, index) => { + return
{getTaskIcon(subTask.taskState, index + 1)}
- {startCase(subTask.taskType)} - {subTask.details?.error?.message && ( -
{subTask.details?.error?.message}
+ + + {startCase(subTask.taskType)} + + + {subTask.details?.error?.originMessage && ( +
{subTask.details?.error?.originMessage}
)} -
- ))} +
; + })}
diff --git a/managed/ui/src/redesign/features/tasks/dtos.ts b/managed/ui/src/redesign/features/tasks/dtos.ts index 1f7dfde4b8f0..108f3450acf0 100644 --- a/managed/ui/src/redesign/features/tasks/dtos.ts +++ b/managed/ui/src/redesign/features/tasks/dtos.ts @@ -94,8 +94,15 @@ export interface SubTaskInfo { error?: { code: string; message: string; + originMessage: string; } } + taskParams? : { + nodeName?: string; + nodeDetailsSet? : { + nodeName?: string; + }[] + } }; export type SubTaskDetailsResp = { diff --git a/managed/ui/src/translations/en.json b/managed/ui/src/translations/en.json index 8d78386d37dd..0b83edaf5678 100644 --- a/managed/ui/src/translations/en.json +++ b/managed/ui/src/translations/en.json @@ -2417,6 +2417,8 @@ "expand": "Expand", "viewLess": "Collapse", "showLog": "YugaWare Log", + "expandAll":"Expand All", + "collapseAll":"Collapse All", "percentComplete": "{{percent}}% Completed" }, "banner": { From fb44a285c0cb11f8fcb1efeb161fdad0f3d0ece5 Mon Sep 17 00:00:00 2001 From: Sami Ahmed Siddiqui Date: Tue, 20 May 2025 10:56:04 +0500 Subject: [PATCH 127/146] Autoscroll right nav where the right nav is a very long one (#27225) --- docs/assets/scss/_sidebar-toc.scss | 1 + docs/src/index.js | 49 +++++++++++++++++++++++++++++- 2 files changed, 49 insertions(+), 1 deletion(-) diff --git a/docs/assets/scss/_sidebar-toc.scss b/docs/assets/scss/_sidebar-toc.scss index 2e192b2337fa..922fdc80e32a 100644 --- a/docs/assets/scss/_sidebar-toc.scss +++ b/docs/assets/scss/_sidebar-toc.scss @@ -15,6 +15,7 @@ height: auto; scrollbar-width: thin; overflow-y: auto; + scroll-behavior: smooth; &::-webkit-scrollbar { height: 5px; diff --git a/docs/src/index.js b/docs/src/index.js index b89f2ef5c553..5c5361592199 100644 --- a/docs/src/index.js +++ b/docs/src/index.js @@ -171,6 +171,17 @@ function yugabyteActiveLeftNav() { }); } +/** + * Add class to `right-nav-auto-scroll` in right menu. + */ +function rightnavAutoScroll() { + if ($('.td-sidebar-toc .td-toc').innerHeight() + 260 >= window.innerHeight) { + $('.td-sidebar-toc .td-toc').addClass('right-nav-auto-scroll'); + } else { + $('.td-sidebar-toc .td-toc').removeClass('right-nav-auto-scroll'); + } +} + $(document).ready(() => { const isSafari = /Safari/.test(navigator.userAgent) && /Apple Computer/.test(navigator.vendor); if (isSafari) { @@ -275,6 +286,7 @@ $(document).ready(() => { }); $('body').addClass('dragging'); yugabytePageFinderWidth(); + rightnavAutoScroll(); }); }); @@ -574,7 +586,10 @@ $(document).ready(() => { } })(document); + let lastScrollTop = 0; $(window).on('scroll', () => { + let activeLink = ''; + // Active TOC link on scroll. if ($('.td-toc #TableOfContents').length > 0) { let rightMenuSelector = '.td-content > h2,.td-content > h3,.td-content > h4'; @@ -589,10 +604,39 @@ $(document).ready(() => { const scrollTop = $(window).scrollTop(); const headingId = $(element).attr('id'); if (offsetTop - 75 <= scrollTop) { + activeLink = $(`.td-toc #TableOfContents a[href="#${headingId}"]`); $('.td-toc #TableOfContents a').removeClass('active-scroll'); - $(`.td-toc #TableOfContents a[href="#${headingId}"]`).addClass('active-scroll'); + activeLink.addClass('active-scroll'); } }); + + /* + * Autoscroll right nav where the right nav is a very long one. + */ + const tocContainer = $('.td-sidebar-toc .td-toc.right-nav-auto-scroll'); + if (tocContainer.length > 0) { + const linkOffset = activeLink.length ? activeLink.position().top : 0; + const containerHeight = tocContainer.height(); + const linkHeight = activeLink.length ? 
activeLink.outerHeight() : 0; + + let scrollFlag = 'up'; + let currentScroll = window.pageYOffset || document.documentElement.scrollTop; + if (currentScroll > lastScrollTop) { + scrollFlag = 'down'; + } + lastScrollTop = currentScroll <= 0 ? 0 : currentScroll; + + if (scrollFlag === 'down') { + tocContainer.scrollTop(tocContainer.scrollTop() + linkOffset - (containerHeight - linkHeight) - 20); + } else if (scrollFlag === 'up') { + let currentPosition = linkOffset - 145 - linkHeight; + if (currentPosition > containerHeight) { + tocContainer.scrollTop(tocContainer.scrollTop() + linkOffset - ((containerHeight / 2) + linkHeight / 2)); + } else if (currentPosition <= 18) { + tocContainer.scrollTop(tocContainer.scrollTop() + linkOffset - (150 + linkHeight)); + } + } + } } }); @@ -705,10 +749,13 @@ $(document).ready(() => { yugabytePageFinderWidth(); }, 500); }); + + rightnavAutoScroll(); }); $(window).resize(() => { rightnavAppend(); + rightnavAutoScroll(); $('.td-main .td-sidebar').attr('style', ''); $('.td-main #dragbar').attr('style', ''); $('.td-main').attr('style', ''); From 9e9783df4530806b2c08737c9cd1bf3be8891c07 Mon Sep 17 00:00:00 2001 From: Sergey Stelmakh Date: Tue, 20 May 2025 09:13:28 +0200 Subject: [PATCH 128/146] DOC-773 license file update (#27266) --- LICENSE.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/LICENSE.md b/LICENSE.md index 8558c904f172..9fafa3520aaa 100644 --- a/LICENSE.md +++ b/LICENSE.md @@ -1,5 +1,8 @@ ## YugabyteDB Licensing +Source code in this repository is variously licensed under the [Apache License 2.0](licenses/APACHE-LICENSE-2.0.txt) and the [Polyform Free Trial License 1.0.0](licenses/POLYFORM-FREE-TRIAL-LICENSE-1.0.0.txt). A copy of each license can be found in the [licenses](licenses) directory. -Source code in this repository is variously licensed under the Apache License 2.0 and the Polyform Free Trial License 1.0.0. A copy of each license can be found in the [licenses](licenses) directory. +The build produces two sets of binaries: -The build produces two sets of binaries - one set that falls under the Apache License 2.0 and another set that falls under the Polyform Free Trial License 1.0.0. The binaries that contain `-managed` in the artifact name are licensed under the Polyform Free Trial License 1.0.0. By default, only the Apache License 2.0 binaries are generated. +(1) YugabyteDB, available [here](https://download.yugabyte.com/local#linux) and licensed under Apache License 2.0 + +(2) YugabyteDB Aeon (Self-Managed), available [here](https://docs.yugabyte.com/preview/yugabyte-platform/install-yugabyte-platform/install-software/installer/) and licensed under Polyform Free Trial License 1.0.0. From 91caee4fb8df92da3e1c855bc380a71e50fe70ce Mon Sep 17 00:00:00 2001 From: Fizaa Luthra Date: Thu, 8 May 2025 15:17:56 -0400 Subject: [PATCH 129/146] [#27281] YSQL: Add test for YSQL major upgrade with enums Summary: Add a test to verify that the enum oids are preserved after a YSQL major upgrade. The test also verifies that when an enum type is used as a hash partition key, rows remain correctly routed to the same partitions after the upgrade. 
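
For readers who want to spot-check the same invariant from a client application, the following is a minimal JDBC sketch — not part of this patch — that snapshots the (label, oid) pairs of the `color` enum so the output can be compared before and after the upgrade. It assumes the PostgreSQL JDBC driver is on the classpath and a locally reachable YSQL endpoint; adjust the connection URL for your cluster.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.LinkedHashMap;
import java.util.Map;

public class EnumOidCheck {
  // Capture (enumlabel -> oid) for the 'color' enum type; run once before and once
  // after the YSQL major upgrade and compare the two snapshots.
  static Map<String, Long> snapshotEnumOids(String url) throws Exception {
    Map<String, Long> oids = new LinkedHashMap<>();
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT e.oid, e.enumlabel FROM pg_enum e "
                 + "JOIN pg_type t ON t.oid = e.enumtypid "
                 + "WHERE t.typname = 'color' ORDER BY e.enumsortorder")) {
      while (rs.next()) {
        oids.put(rs.getString("enumlabel"), rs.getLong("oid"));
      }
    }
    return oids;
  }

  public static void main(String[] args) throws Exception {
    // Assumed YSQL endpoint; adjust host/port/credentials for your cluster.
    String url = "jdbc:postgresql://127.0.0.1:5433/yugabyte?user=yugabyte";
    System.out.println(snapshotEnumOids(url));
  }
}
```
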
Jira: DB-16768 Test Plan: ./yb_build.sh release --cxx-test integration-tests_ysql_major_upgrade-test --gtest_filter YsqlMajorUpgradeTest.EnumTypes Reviewers: telgersma Reviewed By: telgersma Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D43872 --- .../upgrade-tests/ysql_major_upgrade-test.cc | 35 +++++++++++++++++++ 1 file changed, 35 insertions(+) diff --git a/src/yb/integration-tests/upgrade-tests/ysql_major_upgrade-test.cc b/src/yb/integration-tests/upgrade-tests/ysql_major_upgrade-test.cc index b44702720797..22f98851e21e 100644 --- a/src/yb/integration-tests/upgrade-tests/ysql_major_upgrade-test.cc +++ b/src/yb/integration-tests/upgrade-tests/ysql_major_upgrade-test.cc @@ -1688,4 +1688,39 @@ TEST_F(YsqlMajorUpgradeTest, Analyze) { check_analyze(kAnyTserver, std::nullopt); } +TEST_F(YsqlMajorUpgradeTest, EnumTypes) { + ASSERT_OK(ExecuteStatements({ + "CREATE TYPE color AS ENUM ('red', 'green', 'blue', 'yellow')", + "CREATE TABLE paint_log (id serial, shade color) PARTITION BY HASH (shade)", + "CREATE TABLE paint_log_p0 PARTITION OF paint_log FOR VALUES WITH (MODULUS 2, REMAINDER 0)", + "CREATE TABLE paint_log_p1 PARTITION OF paint_log FOR VALUES WITH (MODULUS 2, REMAINDER 1)", + "INSERT INTO paint_log (shade) VALUES ('red'), ('green'), ('blue'), ('yellow')" + })); + auto conn = ASSERT_RESULT(cluster_->ConnectToDB()); + auto type_oid = ASSERT_RESULT(conn.FetchRow( + "SELECT oid FROM pg_type WHERE typname = 'color'")); + + const auto fetch_partition_data = [&](const std::string& partition) { + return conn.FetchRows( + Format("SELECT id, shade::text FROM $0 ORDER BY shade", partition)); + }; + + const auto fetch_enum_data = [&]() { + return conn.FetchRows(Format( + "SELECT oid, enumsortorder, enumlabel FROM pg_enum WHERE enumtypid = $0" + " ORDER BY enumsortorder", type_oid)); + }; + + auto paint_log_p0_res = ASSERT_RESULT(fetch_partition_data("paint_log_p0")); + auto paint_log_p1_res = ASSERT_RESULT(fetch_partition_data("paint_log_p1")); + auto enum_oids = ASSERT_RESULT(fetch_enum_data()); + + ASSERT_OK(UpgradeClusterToCurrentVersion(kNoDelayBetweenNodes)); + + conn = ASSERT_RESULT(cluster_->ConnectToDB()); + ASSERT_VECTORS_EQ(ASSERT_RESULT(fetch_partition_data("paint_log_p0")), paint_log_p0_res); + ASSERT_VECTORS_EQ(ASSERT_RESULT(fetch_partition_data("paint_log_p1")), paint_log_p1_res); + ASSERT_VECTORS_EQ(ASSERT_RESULT(fetch_enum_data()), enum_oids); +} + } // namespace yb From d3923edcec3db138b51ac14584d222d182222287 Mon Sep 17 00:00:00 2001 From: Cloud User Date: Fri, 16 May 2025 04:19:40 +0000 Subject: [PATCH 130/146] [PLAT-17392]Add runtime flag to control backups during DDL Summary: Added runtime flag to control backups during DDL. The flag yb.backup.enable_backups_during_ddl will control whether backup should run with ysql-dump read time, if supported by DB. Incremented YBC client-server version to 2.2.0.2-b3. Includes commits: - Revert tablespaces default to false - https://github.com/yugabyte/ybc/commit/48b4628d0c65c86fcf1eca29f937eeede11376c3 - Make usage of ysql-dump read conditional on backup extended args params supplied by YBA - https://github.com/yugabyte/ybc/commit/ddfd3375af499f12ab1ebb0c48f1f91c277a463a Also noticed a bug with revertToPreRoleBehavior where the params would not pass to the subtask BackupTableYbc, fixed. 
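
The flag is UNIVERSE-scoped and defaults to false (see reference.conf in the diff below). A minimal sketch of turning it on through the YBA runtime-config REST API follows; the endpoint path, auth header, and placeholder UUIDs are assumptions based on the public YBA API and should be verified against your YBA version.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EnableBackupsDuringDdl {
  public static void main(String[] args) throws Exception {
    // Placeholders: YBA base URL, customer/universe UUIDs, and API token must be
    // filled in; the runtime-config endpoint shape is an assumption to verify.
    String yba = "https://yba.example.com";
    String customerUuid = "<customer-uuid>";
    String universeUuid = "<universe-uuid>"; // universe scope, since the key is UNIVERSE-scoped
    String apiToken = "<api-token>";
    String key = "yb.backup.enable_backups_during_ddl";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(String.format("%s/api/v1/customers/%s/runtime_config/%s/key/%s",
            yba, customerUuid, universeUuid, key)))
        .header("X-AUTH-YW-API-TOKEN", apiToken)
        .header("Content-Type", "text/plain")
        .PUT(HttpRequest.BodyPublishers.ofString("true")) // flag defaults to false
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}
```
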
Test Plan: dev itests, dev UTs Reviewers: dshubin Reviewed By: dshubin Subscribers: mhaddad Differential Revision: https://phorge.dev.yugabyte.com/D44018 --- managed/RUNTIME-FLAGS.md | 1 + managed/build.sbt | 2 +- .../yugabyte/yw/commissioner/tasks/UniverseTaskBase.java | 6 ++++++ .../yw/common/backuprestore/ybc/YbcBackupUtil.java | 2 ++ .../com/yugabyte/yw/common/config/UniverseConfKeys.java | 8 ++++++++ .../java/com/yugabyte/yw/forms/BackupTableParams.java | 5 +++++ managed/src/main/resources/reference.conf | 3 ++- 7 files changed, 25 insertions(+), 2 deletions(-) diff --git a/managed/RUNTIME-FLAGS.md b/managed/RUNTIME-FLAGS.md index be552bb8a1aa..612be4136b2b 100644 --- a/managed/RUNTIME-FLAGS.md +++ b/managed/RUNTIME-FLAGS.md @@ -327,3 +327,4 @@ | "NFS precheck buffer space" | "yb.backup.nfs_precheck_buffer_kb" | "UNIVERSE" | "Amount of space (in KB) we want as buffer for NFS precheck" | "Long" | | "Wait after each pod restart in rolling operations" | "yb.kubernetes.operator.rolling_ops_wait_after_each_pod_ms" | "UNIVERSE" | "Time to wait after each pod restart before restarting the next pod in rolling operations" | "Integer" | | "Backup and restore to use pre roles behaviour" | "ybc.revert_to_pre_roles_behaviour" | "UNIVERSE" | "Have YBC use the pre roles backup and restore behaviour" | "Boolean" | +| "Enable backups during DDL" | "yb.backup.enable_backups_during_ddl" | "UNIVERSE" | "Have YBC ysql-dump use read-time as of snapshot time to support backups during DDL" | "Boolean" | diff --git a/managed/build.sbt b/managed/build.sbt index abe15e814793..e43d152efc13 100644 --- a/managed/build.sbt +++ b/managed/build.sbt @@ -929,7 +929,7 @@ runPlatform := { } libraryDependencies += "org.yb" % "yb-client" % "0.8.104-SNAPSHOT" -libraryDependencies += "org.yb" % "ybc-client" % "2.2.0.2-b2" +libraryDependencies += "org.yb" % "ybc-client" % "2.2.0.2-b3" libraryDependencies += "org.yb" % "yb-perf-advisor" % "1.0.0-b35" libraryDependencies ++= Seq( diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/UniverseTaskBase.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/UniverseTaskBase.java index d3f36ef85265..355834106ea1 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/UniverseTaskBase.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/UniverseTaskBase.java @@ -3628,6 +3628,9 @@ public BackupTableParams getBackupTableParams( } else { backupTableParams.backupList = backupTableParamsList; } + if (confGetter.getConfForScope(universe, UniverseConfKeys.enableBackupsDuringDDL)) { + backupTableParams.setEnableBackupsDuringDDL(true); + } return backupTableParams; } @@ -4181,6 +4184,9 @@ public SubTaskGroup createTableBackupTasksYbc( backupParams.backupDBStates.get(paramsEntry.backupParamsIdentifier) .currentYbcTaskId; backupYbcParams.scheduleRetention = scheduleRetention; + backupYbcParams.setEnableBackupsDuringDDL(backupParams.getEnableBackupsDuringDDL()); + backupYbcParams.setRevertToPreRolesBehaviour( + backupParams.getRevertToPreRolesBehaviour()); task.initialize(backupYbcParams); task.setUserTaskUUID(getUserTaskUUID()); subTaskGroup.addSubTask(task); diff --git a/managed/src/main/java/com/yugabyte/yw/common/backuprestore/ybc/YbcBackupUtil.java b/managed/src/main/java/com/yugabyte/yw/common/backuprestore/ybc/YbcBackupUtil.java index 6f6451da9898..222112e9daf1 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/backuprestore/ybc/YbcBackupUtil.java +++ 
b/managed/src/main/java/com/yugabyte/yw/common/backuprestore/ybc/YbcBackupUtil.java @@ -1107,6 +1107,8 @@ public BackupServiceTaskExtendedArgs getExtendedArgsForBackup(BackupTableYbc.Par log.debug("database version {} does not support --dump-role-checks", ybdbSoftwareVersion); extendedArgsBuilder.setDumpRoleChecks(false); // DB does not support dump role checks flag. } + // Set enable backups during DDL + extendedArgsBuilder.setUseReadTimeYsqlDump(tableParams.getEnableBackupsDuringDDL()); return extendedArgsBuilder.build(); } catch (Exception e) { log.error("Error while fetching extended args for backup: ", e); diff --git a/managed/src/main/java/com/yugabyte/yw/common/config/UniverseConfKeys.java b/managed/src/main/java/com/yugabyte/yw/common/config/UniverseConfKeys.java index 8bef508a46ce..0d5802b5cd68 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/config/UniverseConfKeys.java +++ b/managed/src/main/java/com/yugabyte/yw/common/config/UniverseConfKeys.java @@ -1654,4 +1654,12 @@ public class UniverseConfKeys extends RuntimeConfigKeysModule { "Threshold of memory percent for non-yba processes", ConfDataType.IntegerType, ImmutableList.of(ConfKeyTags.INTERNAL)); + public static final ConfKeyInfo enableBackupsDuringDDL = + new ConfKeyInfo<>( + "yb.backup.enable_backups_during_ddl", + ScopeType.UNIVERSE, + "Enable backups during DDL", + "Have YBC ysql-dump use read-time as of snapshot time to support backups during DDL", + ConfDataType.BooleanType, + ImmutableList.of(ConfKeyTags.PUBLIC)); } diff --git a/managed/src/main/java/com/yugabyte/yw/forms/BackupTableParams.java b/managed/src/main/java/com/yugabyte/yw/forms/BackupTableParams.java index 64404912d20d..43316110adfb 100644 --- a/managed/src/main/java/com/yugabyte/yw/forms/BackupTableParams.java +++ b/managed/src/main/java/com/yugabyte/yw/forms/BackupTableParams.java @@ -217,6 +217,11 @@ public enum ActionType { @Setter private Boolean dumpRoleChecks = false; + @ApiModelProperty(hidden = true) + @Getter + @Setter + private Boolean enableBackupsDuringDDL = false; + @ToString public static class ParallelBackupState { public String nodeIp; diff --git a/managed/src/main/resources/reference.conf b/managed/src/main/resources/reference.conf index 4b5a92b04009..d95839d04146 100644 --- a/managed/src/main/resources/reference.conf +++ b/managed/src/main/resources/reference.conf @@ -1135,6 +1135,7 @@ yb { } enable_nfs_precheck = true nfs_precheck_buffer_kb = 50000 + enable_backups_during_ddl = false } logs { @@ -1440,7 +1441,7 @@ yb { ybc { releases { - stable_version = "2.2.0.2-b2" + stable_version = "2.2.0.2-b3" path = "/opt/yugabyte/ybc/releases" } compatible_db_version = "2.15.0.0-b1" From dedd581ed960fa35716eeee9c88d64292a0668a9 Mon Sep 17 00:00:00 2001 From: Sudhanshu Prajapati Date: Tue, 20 May 2025 22:29:29 +0530 Subject: [PATCH 131/146] [DOC-774] Yugabyte Voyager Release v2025.5.2 (#27265) * release notes for voyager 2025.5.2 * docs: Clarify versioning alignment with Yugabyte Voyager release cadence in release notes * Apply suggestions from code review Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> * docs: Update release notes for v2025.5.2 to include new features * update release note * docs: Add note on automatic schema assessment for PostgreSQL in migration guides * edit and format * fix * format * edit --------- Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> Co-authored-by: Dwight Hodge --- .../migrate/live-fall-back.md | 4 +- .../migrate/live-fall-forward.md | 4 +- 
.../yugabyte-voyager/migrate/live-migrate.md | 4 +- .../yugabyte-voyager/migrate/migrate-steps.md | 2 + .../preview/yugabyte-voyager/release-notes.md | 44 +++++++++++++++---- 5 files changed, 46 insertions(+), 12 deletions(-) diff --git a/docs/content/preview/yugabyte-voyager/migrate/live-fall-back.md b/docs/content/preview/yugabyte-voyager/migrate/live-fall-back.md index b5cca4ab0108..73771eb76eb4 100644 --- a/docs/content/preview/yugabyte-voyager/migrate/live-fall-back.md +++ b/docs/content/preview/yugabyte-voyager/migrate/live-fall-back.md @@ -678,7 +678,7 @@ The `yb-voyager export schema` command extracts the schema from the source datab The `source_db_schema` argument specifies the schema of the source database. -- For Oracle, `source-db-schema` can take only one schema name and you can migrate _only one_ schema at a time. +For Oracle, `source-db-schema` can take only one schema name and you can migrate _only one_ schema at a time. {{< /note >}} @@ -696,6 +696,8 @@ yb-voyager export schema --export-dir \ ``` +Note that if the source database is PostgreSQL and you haven't already run `assess-migration`, the schema is also assessed and a migration assessment report is generated. + Refer to [export schema](../../reference/schema-migration/export-schema/) for details about the arguments. #### Analyze schema diff --git a/docs/content/preview/yugabyte-voyager/migrate/live-fall-forward.md b/docs/content/preview/yugabyte-voyager/migrate/live-fall-forward.md index 8e5ceb15606e..cccd44a8be59 100644 --- a/docs/content/preview/yugabyte-voyager/migrate/live-fall-forward.md +++ b/docs/content/preview/yugabyte-voyager/migrate/live-fall-forward.md @@ -689,7 +689,7 @@ The `yb-voyager export schema` command extracts the schema from the source datab The `source_db_schema` argument specifies the schema of the source database. -- For Oracle, `source-db-schema` can take only one schema name and you can migrate _only one_ schema at a time. +For Oracle, `source-db-schema` can take only one schema name and you can migrate _only one_ schema at a time. {{< /note >}} @@ -707,6 +707,8 @@ yb-voyager export schema --export-dir \ ``` +Note that if the source database is PostgreSQL and you haven't already run `assess-migration`, the schema is also assessed and a migration assessment report is generated. + Refer to [export schema](../../reference/schema-migration/export-schema/) for details about the arguments. #### Analyze schema diff --git a/docs/content/preview/yugabyte-voyager/migrate/live-migrate.md b/docs/content/preview/yugabyte-voyager/migrate/live-migrate.md index bedcb12983ee..1d46654a3c4d 100644 --- a/docs/content/preview/yugabyte-voyager/migrate/live-migrate.md +++ b/docs/content/preview/yugabyte-voyager/migrate/live-migrate.md @@ -539,7 +539,7 @@ The `yb-voyager export schema` command extracts the schema from the source datab The `source_db_schema` argument specifies the schema of the source database. -- For Oracle, `source-db-schema` can take only one schema name and you can migrate _only one_ schema at a time. +For Oracle, `source-db-schema` can take only one schema name and you can migrate _only one_ schema at a time. {{< /note >}} @@ -557,6 +557,8 @@ yb-voyager export schema --export-dir \ ``` +Note that if the source database is PostgreSQL and you haven't already run `assess-migration`, the schema is also assessed and a migration assessment report is generated. + Refer to [export schema](../../reference/schema-migration/export-schema/) for details about the arguments. 
#### Analyze schema diff --git a/docs/content/preview/yugabyte-voyager/migrate/migrate-steps.md b/docs/content/preview/yugabyte-voyager/migrate/migrate-steps.md index e3e49de0bdf9..079f0c4a8667 100644 --- a/docs/content/preview/yugabyte-voyager/migrate/migrate-steps.md +++ b/docs/content/preview/yugabyte-voyager/migrate/migrate-steps.md @@ -180,6 +180,8 @@ yb-voyager export schema --export-dir \ ``` +Note that if the source database is PostgreSQL and you haven't already run `assess-migration`, the schema is also assessed and a migration assessment report is generated. + Refer to [export schema](../../reference/schema-migration/export-schema/) for details about the arguments. #### Analyze schema diff --git a/docs/content/preview/yugabyte-voyager/release-notes.md b/docs/content/preview/yugabyte-voyager/release-notes.md index 840b05e55a47..4e539190e120 100644 --- a/docs/content/preview/yugabyte-voyager/release-notes.md +++ b/docs/content/preview/yugabyte-voyager/release-notes.md @@ -13,9 +13,35 @@ type: docs What follows are the release notes for the YugabyteDB Voyager v1 release series. Content will be added as new notable features and changes are available in the patch releases of the YugabyteDB v1 series. +## Versioning + +Voyager releases (starting with v2025.5.2) use the numbering format `YYYY.M.N`, where `YYYY` is the release year, `M` is the month, and `N` is the number of the release in that month. + +## v2025.5.2 - May 20, 2025 + +### New features + +- Added support for using a config file to manage parameters in offline migration using `yb-voyager`. + +### Enhancements + +- If you run `export schema` without first running `assess-migration`, Voyager will now automatically run assess the migration before exporting the schema for PostgreSQL source databases. +- Performance optimizations are now reported only in assessment reports, not in schema analysis reports. +- Assessment Report + - The assessment report now includes detailed recommendations related to index design to help you identify potential uneven distribution or hotspot issues in YugabyteDB. This includes: + - Indexes on low-cardinality columns (for example, `BOOLEAN` or `ENUM`) + - Indexes on columns with a high percentage of `NULL` values + - Indexes on columns with a high frequency of a particular value +- Import Data + - The `import-data` command now monitors replication (CDC/xCluster) only for the target database specified in the migration. This avoids false positives caused by replication streams on other databases. + +### Bug fixes + +- Fixed an issue where left-padded zeros in PostgreSQL `BIT VARYING` columns were incorrectly omitted during live migration. + ## v1.8.17 - May 6, 2025 -### New Feature +### New feature - New Command: `finalize-schema-post-data-import` This command is used to re-add NOT VALID constraints and refresh materialized views after import, and replaces the use of `import schema` with the `--post-snapshot-import true` and `--refresh-mviews` flags; both of these flags are now deprecated in import schema. @@ -32,7 +58,7 @@ What follows are the release notes for the YugabyteDB Voyager v1 release series. ## v1.8.16 - April 22, 2025 -### New Features +### New features - Regularly monitor the YugabyteDB cluster during data import to ensure good health and prevent suboptimal configurations. - If a YugabyteDB node goes down, the terminal UI notifies the user, and Voyager automatically shifts the load to the remaining nodes. 
@@ -85,7 +111,7 @@ What follows are the release notes for the YugabyteDB Voyager v1 release series. - Merged the ALTER TABLE ADD constraints DDL (Primary Key, Unique Key, and Check Constraints) with the CREATE TABLE statement, reducing the number of DDLs to analyze/review and improving overall import schema performance. - Introduced a guardrails check to ensure live migration uses a single, fixed table list throughout the migration, preventing any changes to the table list after the migration has started. -### Bug Fixes +### Bug fixes - Fixed an issue where the `iops-capture-interval` flag in the assess-migration command did not honor the user-defined value and always defaulted to its preset. - Fixed an issue in the IOPs calculation logic, ensuring it counts the number of scans (both sequential and index) instead of using `seq_tup_read` for read statistics. @@ -202,14 +228,14 @@ What follows are the release notes for the YugabyteDB Voyager v1 release series. - Miscellaneous - Enhanced guardrail checks in import-schema for YugabyteDB Aeon. -### Bug Fixes +### Bug fixes - Skip Unsupported Query Constructs detection if `pg_stat_statements` is not loaded via `shared_preloaded_libraries`. - Prevent Voyager from panicking/erroring out in case of `analyze-schema` and `import data` when `export-dir` is empty. ## v1.8.7 - December 10, 2024 -### New Features +### New features - Introduced a framework in the `assess-migration` and `analyze-schema` commands to accept the target database version (`--target-db-version` flag) as input and use it for reporting issues not supported in that target version for the source schema. @@ -231,7 +257,7 @@ What follows are the release notes for the YugabyteDB Voyager v1 release series. ## v1.8.6 - November 26, 2024 -### New Features +### New features - Unsupported PL/pgSQL objects detection. Migration assessment and schema analysis commands can now detect and report SQL features and constructs in PL/pgSQL objects in the source schema that are not supported by YugabyteDB. This includes detecting advisory locks, system columns, and XML functions. Voyager reports individual queries in these objects that contain unsupported constructs, such as queries in PL/pgSQL blocks for functions and procedures, or select statements in views and materialized views. @@ -352,7 +378,7 @@ To bypass this issue, set the environment variable `REPORT_UNSUPPORTED_QUERY_CON ## v1.8 - September 3, 2024 -### New Features +### New features - Introduced the notion of Migration complexity in assessment and analyze-schema reports, which range from LOW to MEDIUM to HIGH. For PostgreSQL source, this depends on the number and complexity of the PostgreSQL features present in the schema that are unsupported in YugabyteDB. - Introduced a bulk assessment command (`assess-migration-bulk`) for Oracle which allows you to assess multiple schemas in one or more database instances simultaneously. @@ -408,7 +434,7 @@ To bypass this issue, set the environment variable `REPORT_UNSUPPORTED_QUERY_CON ## v1.7.1 - May 28, 2024 -### Bug Fixes +### Bug fixes - Fixed a bug where [export data](../reference/data-migration/export-data/) command ([live migration](../migrate/live-migrate/)) from Oracle source fails with a "table already exists" error, when stopped and re-run (resuming CDC phase of export-data). 
- Fixed a known issue in the dockerized version of yb-voyager where commands [get data-migration-report](../reference/data-migration/import-data/#get-data-migration-report) and [end migration](../reference/end-migration/) did not work if you had previously passed ssl-cert/ssl-key/ssl-root-cert in [export data](../reference/data-migration/export-data/) or [import data](../reference/data-migration/import-data/) or [import data to source replica](../reference/data-migration/import-data/#import-data-to-source-replica) commands. @@ -489,7 +515,7 @@ To bypass this issue, set the environment variable `REPORT_UNSUPPORTED_QUERY_CON ## v1.6 - November 30, 2023 -### New Features +### New features - Live migration From c66b30f1fe69c678e3774a62d3573a80abf8a66e Mon Sep 17 00:00:00 2001 From: Basava Date: Mon, 19 May 2025 23:55:15 -0700 Subject: [PATCH 132/146] [#27238] DocDB: Make object lock calls async Summary: Transition TSLocalLockManager lock calls to use the async functionality introduced in https://phorge.dev.yugabyte.com/D42862 Jira: DB-16723 Test Plan: Jenkins ./yb_build.sh --cxx-test object_lock-test ./yb_build.sh --cxx-test ts_local_lock_manager-test ./yb_build.sh --cxx-test pg_object_locks-test Reviewers: amitanand, zdrudi, rthallam, #db-approvers Reviewed By: amitanand, #db-approvers Subscribers: svc_phabricator, ybase, yql Differential Revision: https://phorge.dev.yugabyte.com/D44043 --- src/yb/master/object_lock_info_manager.cc | 199 ++++++++++--------- src/yb/tserver/pg_client_service.h | 2 +- src/yb/tserver/pg_client_session.cc | 23 ++- src/yb/tserver/pg_client_session.h | 2 +- src/yb/tserver/tablet_service.cc | 11 +- src/yb/tserver/ts_local_lock_manager-test.cc | 10 +- src/yb/tserver/ts_local_lock_manager.cc | 19 -- src/yb/tserver/ts_local_lock_manager.h | 10 - 8 files changed, 137 insertions(+), 139 deletions(-) diff --git a/src/yb/master/object_lock_info_manager.cc b/src/yb/master/object_lock_info_manager.cc index 2620d59d32bf..2c1f58531d2a 100644 --- a/src/yb/master/object_lock_info_manager.cc +++ b/src/yb/master/object_lock_info_manager.cc @@ -82,6 +82,7 @@ namespace yb { namespace master { using namespace std::literals; +using namespace std::placeholders; using server::MonitoredTaskState; using strings::Substitute; using tserver::AcquireObjectLockRequestPB; @@ -151,8 +152,8 @@ class ObjectLockInfoManager::Impl { void PopulateDbCatalogVersionCache(ReleaseObjectLockRequestPB& req); Status UnlockObject( - ReleaseObjectLockRequestPB&& req, std::optional leader_epoch = std::nullopt, - std::optional callback = std::nullopt); + ReleaseObjectLockRequestPB&& req, std::optional&& callback = std::nullopt, + std::optional leader_epoch = std::nullopt); Status UnlockObjectSync( const ReleaseObjectLocksGlobalRequestPB& master_request, tserver::ReleaseObjectLockRequestPB&& tserver_request, CoarseTimePoint deadline); @@ -294,7 +295,7 @@ class UpdateAllTServers : public std::enable_shared_from_this leader_epoch, std::optional callback); + std::optional&& callback, std::optional leader_epoch); Status Launch(); const Req& request() const override { @@ -318,16 +319,19 @@ class UpdateAllTServers : public std::enable_shared_from_this TServerTaskFor( const TabletServerId& ts_uuid, StdStatusCallback&& callback); @@ -827,8 +831,7 @@ Status ObjectLockInfoManager::Impl::UnlockObjectSync( auto promise = std::make_shared>(); WARN_NOT_OK( UnlockObject( - std::move(tserver_req), std::nullopt, - [promise](const Status& s) { promise->set_value(s); }), + std::move(tserver_req), [promise](const Status& s) { 
promise->set_value(s); }), "Failed to unlock object"); auto future = promise->get_future(); return ( @@ -838,8 +841,8 @@ Status ObjectLockInfoManager::Impl::UnlockObjectSync( } Status ObjectLockInfoManager::Impl::UnlockObject( - ReleaseObjectLockRequestPB&& req, std::optional leader_epoch, - std::optional callback) { + ReleaseObjectLockRequestPB&& req, std::optional&& callback, + std::optional leader_epoch) { VLOG(1) << __PRETTY_FUNCTION__ << req.ShortDebugString(); if (req.session_host_uuid().empty()) { // session_host_uuid would be unset for release requests that are manually @@ -851,8 +854,8 @@ Status ObjectLockInfoManager::Impl::UnlockObject( RETURN_NOT_OK(s); } auto unlock_objects = std::make_shared>( - master_, catalog_manager_, *this, std::move(req), std::move(leader_epoch), - std::move(callback)); + master_, catalog_manager_, *this, std::move(req), std::move(callback), + std::move(leader_epoch)); return unlock_objects->Launch(); } @@ -978,7 +981,7 @@ std::shared_ptr ObjectLockInfoManager::Impl::ReleaseLocksHeldByE auto session_host_uuid = request.session_host_uuid(); WARN_NOT_OK( UnlockObject( - std::move(request), leader_epoch, [latch](const Status& s) { latch->CountDown(); }), + std::move(request), [latch](const Status& s) { latch->CountDown(); }, leader_epoch), yb::Format("Failed to enqueue request for unlock object $0 $1", session_host_uuid, txn_id)); } return latch; @@ -1040,7 +1043,7 @@ void ObjectLockInfoManager::Impl::RelaunchInProgressRequests( VLOG(1) << __func__ << " for " << tserver_uuid << " " << requests.size() << " requests"; for (auto& request : requests) { WARN_NOT_OK( - UnlockObject(std::move(request), leader_epoch), + UnlockObject(std::move(request), std::nullopt /* callback */, leader_epoch), "Failed to enqueue request for unlock object"); } } @@ -1184,7 +1187,7 @@ UpdateAllTServers::UpdateAllTServers( template UpdateAllTServers::UpdateAllTServers( Master& master, CatalogManager& catalog_manager, ObjectLockInfoManager::Impl& olm, Req&& req, - std::optional leader_epoch, std::optional callback) + std::optional&& callback, std::optional leader_epoch) : master_(master), catalog_manager_(catalog_manager), object_lock_info_manager_(olm), @@ -1244,6 +1247,15 @@ bool UpdateAllTServers::IsReleaseRequest() const { return true; } +template +Status UpdateAllTServers::VerifyTxnId() { + if (!txn_id_) { + return STATUS_FORMAT( + InvalidArgument, "Could not parse txn_id for the request. $0", txn_id_.status()); + } + return Status::OK(); +} + template std::string UpdateAllTServers::LogPrefix() const { return Format( @@ -1253,45 +1265,82 @@ std::string UpdateAllTServers::LogPrefix() const { } template -Status UpdateAllTServers::Launch() { - auto s = DoLaunch(); - if (!launched_) { - DoCallbackAndRespond(s); +void UpdateAllTServers::LaunchRpcs() { + // todo(zdrudi): special case for 0 tservers with a live lease. This doesn't work. 
+ ts_descriptors_ = object_lock_info_manager_.GetAllTSDescriptorsWithALiveLease(); + statuses_ = std::vector{ts_descriptors_.size(), STATUS(Uninitialized, "")}; + LaunchRpcsFrom(0); +} + +template <> +Status UpdateAllTServers::BeforeRpcs() { + TRACE_FUNC(); + RETURN_NOT_OK(VerifyTxnId()); + RETURN_NOT_OK(ValidateLockRequest(req_, requestor_latest_lease_epoch_)); + std::shared_ptr local_lock_manager; + DCHECK(!epoch_.has_value()) << "Epoch should not yet be set for AcquireObjectLockRequestPB"; + { + SCOPED_LEADER_SHARED_LOCK(l, &catalog_manager_); + RETURN_NOT_OK(CheckLeaderLockStatus(l, std::nullopt)); + epoch_ = l.epoch(); + local_lock_manager = object_lock_info_manager_.ts_local_lock_manager(); } - return s; + // Update Local State. + launched_ = true; + local_lock_manager->AcquireObjectLocksAsync( + req_, GetClientDeadline(), + [shared_this = shared_from_this()](Status s) { + if (s.ok()) { + s = shared_this->DoPersistRequest(); + } + if (!s.ok()) { + LOG(WARNING) << "Failed to acquire object locks locally at the master " << s; + shared_this->DoCallbackAndRespond(s.CloneAndReplaceCode(Status::kRemoteError)); + return; + } + shared_this->LaunchRpcs(); + }, tserver::WaitForBootstrap::kFalse); + return Status::OK(); } -template -Status UpdateAllTServers::DoLaunch() { - if (!txn_id_) { - return STATUS( - InvalidArgument, "Could not parse txn_id for the request. $0", txn_id_.status().ToString()); - } else if ( - IsReleaseRequest() && object_lock_info_manager_.IsDdlVerificationInProgress(*txn_id_)) { +template <> +Status UpdateAllTServers::BeforeRpcs() { + TRACE_FUNC(); + RETURN_NOT_OK(VerifyTxnId()); + if (object_lock_info_manager_.IsDdlVerificationInProgress(*txn_id_)) { VLOG_WITH_PREFIX(1) << " is already scheduled for ddl verification. " << "Ignoring release request, as it will be released by the ddl verifier."; return Status::OK(); } VLOG_WITH_PREFIX(2) << " processing request: " << req_.ShortDebugString(); - RETURN_NOT_OK(BeforeRpcs()); + if (!epoch_.has_value()) { + SCOPED_LEADER_SHARED_LOCK(l, &catalog_manager_); + RETURN_NOT_OK(CheckLeaderLockStatus(l, std::nullopt)); + epoch_ = l.epoch(); + } + RETURN_NOT_OK(object_lock_info_manager_.AddToInProgress(*epoch_, req_)); - // Do this check after the BeforeRpcs() call, to ensure that the request was added to - // in progress requests. - if (PREDICT_FALSE(FLAGS_TEST_skip_launch_release_request) && IsReleaseRequest()) { + // Do this check after adding the request to in progress requests. + if (PREDICT_FALSE(FLAGS_TEST_skip_launch_release_request)) { return Status::OK(); } - - // todo(zdrudi): special case for 0 tservers with a live lease. This doesn't work. 
- ts_descriptors_ = object_lock_info_manager_.GetAllTSDescriptorsWithALiveLease(); - statuses_ = std::vector{ts_descriptors_.size(), STATUS(Uninitialized, "")}; - LaunchFrom(0); launched_ = true; + LaunchRpcs(); return Status::OK(); } template -void UpdateAllTServers::LaunchFrom(size_t start_idx) { +Status UpdateAllTServers::Launch() { + auto s = BeforeRpcs(); + if (!launched_) { + DoCallbackAndRespond(s); + } + return s; +} + +template +void UpdateAllTServers::LaunchRpcsFrom(size_t start_idx) { TRACE("Launching for $0 TServers from $1", ts_descriptors_.size(), start_idx); ts_pending_ = ts_descriptors_.size() - start_idx; VLOG(1) << __func__ << " launching for " << ts_pending_ << " tservers."; @@ -1300,20 +1349,21 @@ void UpdateAllTServers::LaunchFrom(size_t start_idx) { VLOG(1) << "Launching for " << ts_uuid; auto task = TServerTaskFor( ts_uuid, - std::bind( - &UpdateAllTServers::Done, this->shared_from_this(), i, std::placeholders::_1)); - WARN_NOT_OK( - catalog_manager_.ScheduleTask(task), - yb::Format( - "Failed to schedule request to UpdateTServer to $0 for $1", ts_uuid, - request().DebugString())); + std::bind(&UpdateAllTServers::Done, this->shared_from_this(), i, _1)); + auto s = catalog_manager_.ScheduleTask(task); + if (!s.ok()) { + Done(i, + s.CloneAndPrepend(Format( + "Failed to schedule request to UpdateTServer to $0 for $1", ts_uuid, + request().DebugString()))); + } } } template void UpdateAllTServers::DoCallbackAndRespond(const Status& s) { - TRACE("$0: $1", __func__, s.ToString()); - VLOG_WITH_FUNC(2) << s.ToString(); + TRACE("$0: $1 $2", __func__, (IsReleaseRequest() ? "Release" : "Acquire"), s.ToString()); + VLOG_WITH_FUNC(2) << (IsReleaseRequest() ? "Release" : "Acquire") << " " << s.ToString(); WARN_NOT_OK( s, yb::Format( "$0Failed.$1", LogPrefix(), @@ -1340,49 +1390,22 @@ void UpdateAllTServers::CheckForDone() { DoneAll(); } -template <> -Status UpdateAllTServers::BeforeRpcs() { - TRACE_FUNC(); - RETURN_NOT_OK(ValidateLockRequest(req_, requestor_latest_lease_epoch_)); - std::shared_ptr local_lock_manager; - DCHECK(!epoch_.has_value()) << "Epoch should not yet be set for AcquireObjectLockRequestPB"; - { - SCOPED_LEADER_SHARED_LOCK(l, &catalog_manager_); - RETURN_NOT_OK(CheckLeaderLockStatus(l, std::nullopt)); - epoch_ = l.epoch(); - local_lock_manager = object_lock_info_manager_.ts_local_lock_manager(); - } - // Update Local State. - // TODO: Use RETURN_NOT_OK_PREPEND - auto s = local_lock_manager->AcquireObjectLocks( - req_, GetClientDeadline(), tserver::WaitForBootstrap::kFalse); +template +Status UpdateAllTServers::DoPersistRequestUnlocked(const ScopedLeaderSharedLock& l) { + // Persist the request. + RETURN_NOT_OK(CheckLeaderLockStatus(l, epoch_)); + auto s = object_lock_info_manager_.PersistRequest(*epoch_, req_, *txn_id_); if (!s.ok()) { - LOG(WARNING) << "Failed to acquire object locks locally at the master " << s; + LOG(WARNING) << "Failed to update object lock " << s; return s.CloneAndReplaceCode(Status::kRemoteError); } - // todo(zdrudi): Do we want to verify the requestor has a valid lease here before persisting? - // Persist the request. 
- { - SCOPED_LEADER_SHARED_LOCK(l, &catalog_manager_); - RETURN_NOT_OK(CheckLeaderLockStatus(l, epoch_)); - auto s = object_lock_info_manager_.PersistRequest(*epoch_, req_, *txn_id_); - if (!s.ok()) { - LOG(WARNING) << "Failed to update object lock " << s; - return s.CloneAndReplaceCode(Status::kRemoteError); - } - } return Status::OK(); } -template <> -Status UpdateAllTServers::BeforeRpcs() { - TRACE_FUNC(); - if (!epoch_.has_value()) { - SCOPED_LEADER_SHARED_LOCK(l, &catalog_manager_); - RETURN_NOT_OK(CheckLeaderLockStatus(l, std::nullopt)); - epoch_ = l.epoch(); - } - return object_lock_info_manager_.AddToInProgress(*epoch_, req_); +template +Status UpdateAllTServers::DoPersistRequest() { + SCOPED_LEADER_SHARED_LOCK(l, &catalog_manager_); + return DoPersistRequestUnlocked(l); } template <> @@ -1396,16 +1419,10 @@ Status UpdateAllTServers::AfterRpcs() { TRACE_FUNC(); VLOG_WITH_FUNC(2); SCOPED_LEADER_SHARED_LOCK(l, &catalog_manager_); - RETURN_NOT_OK(CheckLeaderLockStatus(l, epoch_)); - // Persist the request. - auto s = object_lock_info_manager_.PersistRequest(*epoch_, req_, *txn_id_); - if (!s.ok()) { - LOG(WARNING) << "Failed to update object lock " << s; - return s.CloneAndReplaceCode(Status::kRemoteError); - } + RETURN_NOT_OK(DoPersistRequestUnlocked(l)); // Update Local State. auto local_lock_manager = object_lock_info_manager_.ts_local_lock_manager(); - s = local_lock_manager->ReleaseObjectLocks(req_, GetClientDeadline()); + auto s = local_lock_manager->ReleaseObjectLocks(req_, GetClientDeadline()); if (!s.ok()) { LOG(WARNING) << "Failed to release object lock locally." << s; return s.CloneAndReplaceCode(Status::kRemoteError); @@ -1442,7 +1459,7 @@ bool UpdateAllTServers::RelaunchIfNecessary() { } VLOG(1) << "New TServers were added. Relaunching."; - LaunchFrom(old_size); + LaunchRpcsFrom(old_size); return true; } diff --git a/src/yb/tserver/pg_client_service.h b/src/yb/tserver/pg_client_service.h index aaab6313bbb8..db0cd857d13a 100644 --- a/src/yb/tserver/pg_client_service.h +++ b/src/yb/tserver/pg_client_service.h @@ -97,7 +97,6 @@ class TserverXClusterContextIf; (CronGetLastMinute) \ (AcquireAdvisoryLock) \ (ReleaseAdvisoryLock) \ - (AcquireObjectLock) \ (ExportTxnSnapshot) \ (ImportTxnSnapshot) \ (ClearExportedTxnSnapshots) \ @@ -110,6 +109,7 @@ class TserverXClusterContextIf; // Forwards call to corresponding PgClientSession async method (see // PG_CLIENT_SESSION_ASYNC_METHODS). 
#define YB_PG_CLIENT_ASYNC_METHODS \ + (AcquireObjectLock) \ (OpenTable) \ (GetTableKeyRanges) \ /**/ diff --git a/src/yb/tserver/pg_client_session.cc b/src/yb/tserver/pg_client_session.cc index 243faecbba3f..5902c5d62a32 100644 --- a/src/yb/tserver/pg_client_session.cc +++ b/src/yb/tserver/pg_client_session.cc @@ -2325,7 +2325,7 @@ class PgClientSession::Impl { return status; } - Status AcquireObjectLock( + Status DoAcquireObjectLock( const PgAcquireObjectLockRequestPB& req, PgAcquireObjectLockResponsePB* resp, rpc::RpcContext* context) { RSTATUS_DCHECK(IsObjectLockingEnabled(), IllegalState, "Table Locking feature not enabled."); @@ -2351,6 +2351,8 @@ class PgClientSession::Impl { << " lock_type: " << AsString(lock_type) << " req: " << req.ShortDebugString(); + auto callback = MakeRpcOperationCompletionCallback( + std::move(*context), resp, nullptr /* clock */); if (IsTableLockTypeGlobal(lock_type)) { if (setup_session_result.is_plain) { plain_session_has_exclusive_object_locks_.store(true); @@ -2359,16 +2361,25 @@ class PgClientSession::Impl { instance_uuid(), txn_meta_res->transaction_id, options.active_sub_transaction_id(), req.database_oid(), req.object_oid(), lock_type, lease_epoch_, context_.clock.get(), deadline, txn_meta_res->status_tablet); - auto status_future = MakeFuture([&](auto callback) { - client_.AcquireObjectLocksGlobalAsync(lock_req, callback, deadline); - }); - return status_future.get(); + client_.AcquireObjectLocksGlobalAsync(lock_req, std::move(callback), deadline); + return Status::OK(); } auto lock_req = AcquireRequestFor( instance_uuid(), txn_meta_res->transaction_id, options.active_sub_transaction_id(), req.database_oid(), req.object_oid(), lock_type, lease_epoch_, context_.clock.get(), deadline, txn_meta_res->status_tablet); - return ts_lock_manager()->AcquireObjectLocks(lock_req, deadline); + ts_lock_manager()->AcquireObjectLocksAsync(lock_req, deadline, std::move(callback)); + return Status::OK(); + } + + void AcquireObjectLock( + const PgAcquireObjectLockRequestPB& req, PgAcquireObjectLockResponsePB* resp, + yb::rpc::RpcContext context) { + auto s = DoAcquireObjectLock(req, resp, &context); + if (!s.ok()) { + StatusToPB(s, resp->mutable_status()); + context.RespondSuccess(); + } } void StartShutdown() { diff --git a/src/yb/tserver/pg_client_session.h b/src/yb/tserver/pg_client_session.h index 5622ae767264..aaf6bd808d58 100644 --- a/src/yb/tserver/pg_client_session.h +++ b/src/yb/tserver/pg_client_session.h @@ -68,7 +68,6 @@ namespace tserver { (WaitForBackendsCatalogVersion) \ (AcquireAdvisoryLock) \ (ReleaseAdvisoryLock) \ - (AcquireObjectLock) \ /**/ // These methods may respond with Status::OK() and continue async processing (including network @@ -77,6 +76,7 @@ namespace tserver { // If such method responds with error Status, it will be handled by the upper layer that will fill // response with error status and call context.RespondSuccess. 
#define PG_CLIENT_SESSION_ASYNC_METHODS \ + (AcquireObjectLock) \ (GetTableKeyRanges) \ /**/ diff --git a/src/yb/tserver/tablet_service.cc b/src/yb/tserver/tablet_service.cc index 1782966c3b69..e1d020240daa 100644 --- a/src/yb/tserver/tablet_service.cc +++ b/src/yb/tserver/tablet_service.cc @@ -3671,13 +3671,10 @@ void TabletServiceImpl::AcquireObjectLocks( SetupErrorAndRespond( resp->mutable_error(), STATUS(IllegalState, "TSLocalLockManager not found..."), &context); } - auto s = ts_local_lock_manager->AcquireObjectLocks(*req, context.GetClientDeadline()); - resp->set_propagated_hybrid_time(server_->Clock()->Now().ToUint64()); - if (!s.ok()) { - SetupErrorAndRespond(resp->mutable_error(), s, &context); - } else { - context.RespondSuccess(); - } + const auto deadline = context.GetClientDeadline(); + ts_local_lock_manager->AcquireObjectLocksAsync( + *req, deadline, + MakeRpcOperationCompletionCallback(std::move(context), resp, server_->Clock())); } void TabletServiceImpl::ReleaseObjectLocks( diff --git a/src/yb/tserver/ts_local_lock_manager-test.cc b/src/yb/tserver/ts_local_lock_manager-test.cc index 1eb34f497fe8..775a163c7859 100644 --- a/src/yb/tserver/ts_local_lock_manager-test.cc +++ b/src/yb/tserver/ts_local_lock_manager-test.cc @@ -84,15 +84,17 @@ class TSLocalLockManagerTest : public TabletServerTestBase { lock->set_lock_type(lock_types[i]); } req.set_propagated_hybrid_time(MonoTime::Now().ToUint64()); - auto s = lm_->AcquireObjectLocks(req, deadline); - if (!state_map || !s.ok()) { - return s; + Synchronizer synchronizer; + lm_->AcquireObjectLocksAsync(req, deadline, synchronizer.AsStdStatusCallback()); + RETURN_NOT_OK(synchronizer.Wait()); + if (!state_map) { + return Status::OK(); } auto res = VERIFY_RESULT(DetermineObjectsToLock(req.object_locks())); for (auto& lock_batch_entry : res.lock_batch) { (*state_map)[lock_batch_entry.key] += IntentTypeSetAdd(lock_batch_entry.intent_types); } - return s; + return Status::OK(); } Status LockObject( diff --git a/src/yb/tserver/ts_local_lock_manager.cc b/src/yb/tserver/ts_local_lock_manager.cc index a3695f50b74b..8657a6e8f4d9 100644 --- a/src/yb/tserver/ts_local_lock_manager.cc +++ b/src/yb/tserver/ts_local_lock_manager.cc @@ -329,25 +329,6 @@ TSLocalLockManager::TSLocalLockManager( TSLocalLockManager::~TSLocalLockManager() {} -// TODO: Remove this method and enforce callers supply a callback func. -Status TSLocalLockManager::AcquireObjectLocks( - const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, - WaitForBootstrap wait) { - if (VLOG_IS_ON(4)) { - std::stringstream output; - impl_->DumpLocksToHtml(output); - VLOG(4) << "Dumping current state Before acquire : " << output.str(); - } - auto ret = impl_->AcquireObjectLocks(req, deadline, wait); - if (VLOG_IS_ON(3)) { - std::stringstream output; - impl_->DumpLocksToHtml(output); - VLOG(3) << "Acquire " << (ret.ok() ? "succeded" : "failed") - << ". 
Dumping current state After acquire : " << output.str(); - } - return ret; -} - void TSLocalLockManager::AcquireObjectLocksAsync( const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, StdStatusCallback&& callback, WaitForBootstrap wait) { diff --git a/src/yb/tserver/ts_local_lock_manager.h b/src/yb/tserver/ts_local_lock_manager.h index ff52753f8b28..f225f342df88 100644 --- a/src/yb/tserver/ts_local_lock_manager.h +++ b/src/yb/tserver/ts_local_lock_manager.h @@ -70,17 +70,7 @@ class TSLocalLockManager { // conflicting lock types on a key given that there aren't other txns with active conflciting // locks on the key. // - // Continuous influx of readers can starve writers. For instance, if there are multiple txns - // requesting ACCESS_SHARE on a key, a writer requesting ACCESS_EXCLUSIVE may face starvation. - // Since we intend to use this for table locks, DDLs may face starvation if there is influx of - // conflicting DMLs. - // TODO: DDLs don't face starvation in PG. Address the above starvation problem. - // // TODO: Augment the 'pg_locks' path to show the acquired/waiting object/table level locks. - Status AcquireObjectLocks( - const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, - WaitForBootstrap wait = WaitForBootstrap::kTrue); - void AcquireObjectLocksAsync( const tserver::AcquireObjectLockRequestPB& req, CoarseTimePoint deadline, StdStatusCallback&& callback, WaitForBootstrap wait = WaitForBootstrap::kTrue); From 4bf4c50298cb8c037380f8f768ef045a557f558a Mon Sep 17 00:00:00 2001 From: Minghui Yang Date: Mon, 19 May 2025 22:53:24 +0000 Subject: [PATCH 133/146] [#26906] YSQL: Fix flaky test PgHeapSnapshotTest.TestYsqlHeapSnapshotSimple Summary: The test PgHeapSnapshotTest.TestYsqlHeapSnapshotSimple started being flaky since ab7b8ed21f13947373bcbb234d38822e2e117cd4. That commit turned on incremental catalog cache refresh by default. The test relies on doing full catalog cache refreshes in order to allocate enough memory. With incremental catalog cache refresh the test no longer runs full catalog cache refreshes and that's why it became flaky. Commit 61c7270a3d0c19b81a614171a4360caf9a65d60c changed the test to disable incremental catalog cache refresh by setting `--ysql-yb_enable_invalidation_messages=false` to get back the previous behavior. Although the test has been passing, after a recent commit a260932fe7aba6653b17812c7a8f8b6b6b2e4633 the test started being flaky again with the same symptom. After debugging I found that the fix by 61c7270a3d0c19b81a614171a4360caf9a65d60c did not work as expected because the postmaster process is already started before we turn off `--ysql-yb_enable_invalidation_messages=false`. The test has been passing by accident. I reworked the fix to turn off `--ysql-yb_enable_invalidation_messages` by implementing `SetUp()` which ensures that the postmaster process will have the gflag `--ysql-yb_enable_invalidation_messages=false`. Jira: DB-16328 Test Plan: (1) ./yb_build.sh release --cxx-test pgwrapper_pg_heap_snapshot-test --gtest_filter PgHeapSnapshotTest.TestYsqlHeapSnapshotSimple --clang19 -n 50 (2) ./yb_build.sh release --cxx-test pgwrapper_pg_heap_snapshot-test Verify from test output that only PgHeapSnapshotTest.TestYsqlHeapSnapshotSimple has `--ysql-yb_enable_invalidation_messages=false`. Other tests continue to have the default value of `--ysql-yb_enable_invalidation_messages=true`. 
``` I0519 23:22:53.945465 1289735 pg_heap_snapshot-test.cc:39] FLAGS_ysql_yb_enable_invalidation_messages: 0 I0519 23:23:26.073213 1290285 pg_heap_snapshot-test.cc:39] FLAGS_ysql_yb_enable_invalidation_messages: 1 I0519 23:23:31.624302 1290674 pg_heap_snapshot-test.cc:39] FLAGS_ysql_yb_enable_invalidation_messages: 1 I0519 23:23:37.160039 1291064 pg_heap_snapshot-test.cc:39] FLAGS_ysql_yb_enable_invalidation_messages: 1 ``` Reviewers: kfranz, sanketh, mihnea Reviewed By: sanketh Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D44082 --- src/yb/yql/pgwrapper/pg_heap_snapshot-test.cc | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/src/yb/yql/pgwrapper/pg_heap_snapshot-test.cc b/src/yb/yql/pgwrapper/pg_heap_snapshot-test.cc index dfddcc16c263..1dcd177f15f9 100644 --- a/src/yb/yql/pgwrapper/pg_heap_snapshot-test.cc +++ b/src/yb/yql/pgwrapper/pg_heap_snapshot-test.cc @@ -31,6 +31,16 @@ using namespace std::chrono_literals; namespace yb::pgwrapper { class PgHeapSnapshotTest : public PgMiniTestBase { + public: + void SetUp() override { + if (CURRENT_TEST_CASE_AND_TEST_NAME_STR() == "PgHeapSnapshotTest.TestYsqlHeapSnapshotSimple") { + ANNOTATE_UNPROTECTED_WRITE(FLAGS_ysql_yb_enable_invalidation_messages) = false; + } + LOG(INFO) << "FLAGS_ysql_yb_enable_invalidation_messages: " + << FLAGS_ysql_yb_enable_invalidation_messages; + PgMiniTestBase::SetUp(); + } + protected: auto PgConnect(const std::string& username) { auto settings = MakeConnSettings(); @@ -40,7 +50,6 @@ class PgHeapSnapshotTest : public PgMiniTestBase { }; TEST_F(PgHeapSnapshotTest, YB_DISABLE_TEST_IN_SANITIZERS(TestYsqlHeapSnapshotSimple)) { - ANNOTATE_UNPROTECTED_WRITE(FLAGS_ysql_yb_enable_invalidation_messages) = false; auto conn1 = ASSERT_RESULT(Connect()); auto conn2 = ASSERT_RESULT(Connect()); From e98d25abfd42fa3c5aa10c7077bcef4b7702d304 Mon Sep 17 00:00:00 2001 From: Sudhanshu Prajapati Date: Wed, 21 May 2025 00:12:25 +0530 Subject: [PATCH 134/146] [DOC-722] Add and update voyager schema workaround (#27276) * add new and update schema workarounds for voyager * keep similar wording * Apply suggestions from code review Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> --------- Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> --- .../known-issues/postgresql.md | 204 ++++++++++++++++-- 1 file changed, 189 insertions(+), 15 deletions(-) diff --git a/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md b/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md index 7845190e7077..bdc2387ad053 100644 --- a/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md +++ b/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md @@ -1546,14 +1546,12 @@ CREATE INDEX idx_orders_created ON orders(created_at DESC); Note that if the table is colocated, this hotspot concern can safely be ignored, as all the data resides on a single tablet, and the distribution is no longer relevant. -**Workaround**: To address this issue and improve query performance, application-level sharding is recommended. This approach involves adding an additional column to the table and creating a multi-column index including both the new column and the timestamp/date column. The additional column distributes data using a hash-based strategy, effectively spreading the load across multiple nodes. +**Workaround**: -Implementing this solution requires minor adjustments to queries. 
In addition to range conditions on the timestamp/date column, the new sharding column should be included in the query filters to benefit from distributed execution. +To address this issue and improve query performance, the recommendation is to change the sharding key to a value that is well distributed among all nodes while keeping the timestamp column as the clustering key. The new sharding key will be a modulo of the hash of the timestamp column value, which is then used to distribute data using a hash-based strategy, effectively spreading the load across multiple nodes. Ensure that the index on the column is configured to be range-sharded. -References: [How to Avoid Hotspots on Range-based Indexes in Distributed Databases](https://www.yugabyte.com/blog/distributed-databases-hotspots-range-based-indexes/), [[YFTT] Avoiding Hot-Spots on Timestamp Based Index](https://www.youtube.com/watch?v=tiYZn0U1wzY) - **Example** An example schema on the source database is as follows: @@ -1564,27 +1562,19 @@ CREATE TABLE orders ( ... created_at timestamp ); - CREATE INDEX idx_orders_created ON orders(created_at DESC); -SELECT * FROM orders WHERE created_at >= NOW() - INTERVAL '1 month'; -- for fetching orders of last one month ``` -Suggested change to the schema is to add the column `shard_id` with a default value as an integer between 0 and the number of shards required for the use case. In addition, you add this column to the index columns with hash sharding. In this way the data is distributed by `shard_id` and ordered based on `created_at`. - -This also requires modifying the range queries to include the `shard_id` in the filter to help the optimizer. In this example, you specify the shard IDs in the IN clause. +Suggested change to the schema is to add the sharding key as the modulo of the hash of the timestamp column value, which gives a key in a range (for example, 0-15). This can change depending on the use case. This key will be used to distribute the data among various tablets and hence help in distributing the data evenly. ```sql CREATE TABLE orders ( order_id int PRIMARY, - ..., - shard_id int DEFAULT (floor(random() * 100)::int % 16), + ... created_at timestamp ); - -CREATE INDEX idx_orders_created ON orders(shard_id HASH, created_at DESC); - -SELECT * FROM orders WHERE shard_id IN (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15) AND created_at >= NOW() - INTERVAL '1 month'; -- for fetching orders of last one month +CREATE INDEX idx_orders_created ON orders( (yb_hash_code(created_at) % 16) ASC, created_at DESC); ``` --- @@ -1615,3 +1605,187 @@ Suggested change to the schema is to remove this redundant index `idx_orders_ord ```sql CREATE INDEX idx_orders_order_id on orders(order_id); ``` + +--- + +### Index on low-cardinality column + +**Description**: + +In YugabyteDB, you can specify three kinds of columns when using [CREATE INDEX](../../../api/ysql/the-sql-language/statements/ddl_create_index): sharding, clustering, and covering. (For more details, refer to [Secondary indexes](../../../explore/ysql-language-features/indexes-constraints/secondary-indexes-ysql/).) The default sharding strategy is HASH unless [Enhanced PostgreSQL Compatibility mode](./../../develop/postgresql-compatibility/) is enabled, in which case, RANGE is the default sharding strategy. + +Design the index to evenly distribute data across all nodes and optimize performance based on query patterns. 
Avoid using low-cardinality columns, such as boolean values, ENUMs, or days of the week, as sharding keys, as they result in data being distributed across only a few tablets. + +#### Single column index + +Using a single-column index on a low-cardinality column leads to uneven data distribution, regardless of the sharding strategy. + +**Workaround**: + +It is recommended to drop the index if it is not required. + +If the index is used in queries, combine it with a high-cardinality column to create either a multi-column index with the sharding key on the high-cardinality column or a multi-column range-sharding index. This ensures better data distribution across all nodes. + +#### Multi-column index + +In a multi-column index with a low cardinality column as the sharding key, the data will be unevenly distributed. + +**Workaround**: + +Make the index range-sharded to distribute data based on the combined values of all columns, or reorder the index columns to place the high-cardinality column first. This enables sharding on the high-cardinality column and ensures even distribution across all nodes. + +**Example**: + +An example schema on the source database is as follows: + +```sql +CREATE TYPE order_statuses AS ENUM ('CONFIRMED', 'SHIPPED', 'OUT FOR DELIVERY', 'DELIVERED', 'CANCELLED'); + +CREATE TABLE orders ( + order_id int PRIMARY, + ..., + status order_statuses +); + +CREATE INDEX idx_order_status on orders (status); --single column index on column having only 5 values + +CREATE INDEX idx_order_status_order_id on orders (status, order_id); --multi column index on first column with only 5 values +``` + +Since the number of distinct values of the column `status` is 5, there will be a maximum of 5 tablets created, limiting the scalability. + +Suggested change to both types of indexes is one of the following. + +Make it a multi-column range-index: + +```sql + --These indexes will distribute the data on the combine value of both and as order_id is high cardinality column, it will make sure that data is distributed evenly + +CREATE INDEX idx_order_status on orders(status ASC, order_id); --adding order_id and making it a range-sharded index explictly + +CREATE INDEX idx_order_status_order_id on orders (status ASC, order_id); --making it a range-sharded index explictly +``` + + +Make it multi-column with a sharding key on a high-cardinality column: + +```sql +--these indexes will distribute the data on order_id first and then each shard is clustered on status + +CREATE INDEX idx_orders_status on orders(order_id, status); --making it multi column by adding order_id as first column + +CREATE INDEX idx_order_status_order_id on orders (order_id, status); --reordering the columns to place the order_id first and then keeping status. +``` + +--- + +### Index on column with a high percentage of NULL values + +**Description**: + +In YugabyteDB, you can specify three kinds of columns when using [CREATE INDEX](../../../api/ysql/the-sql-language/statements/ddl_create_index): sharding, clustering, and covering. (For more details, refer to [Secondary indexes](../../../explore/ysql-language-features/indexes-constraints/secondary-indexes-ysql/).) The default sharding strategy is HASH unless [Enhanced PostgreSQL Compatibility mode](./../../develop/postgresql-compatibility/) is enabled, in which case, RANGE is the default sharding strategy. + +Design the index to evenly distribute data across all nodes and optimize performance based on query patterns. 
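To make the intended strategy explicit rather than relying on the default, annotate the sharding column directly in the index definition. For example (table and column names are illustrative):

```sql
CREATE INDEX idx_users_first_name_hash ON users (first_name HASH);           -- hash-sharded on first_name
CREATE INDEX idx_users_first_name_range ON users (first_name ASC, user_id);  -- range-sharded on (first_name, user_id)
```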
+ +If an index is created on a column with a high percentage of NULL values, all NULL entries will be stored in a single tablet. This concentration can create a hotspot, leading to performance degradation. + +**Workaround**: If the NULL values are not being queried, it is recommended to create a Partial index by filtering the NULL values and optimizing it for the other data. + +If NULL values are being queried and the index is a single-column index, it is recommended to add another column and make it a multi-column range-sharded index to distribute the NULL values evenly across various nodes. If the index is multi-column, it is recommended to make it a range-sharded index. + +**Example** + +An example schema on the source database is as follows: + +```sql +CREATE TABLE users ( + user_id int PRIMARY, + first_name text, + middle_name text, + ... +); + +CREATE INDEX idx_users_middle_name on users (middle_name); -- this index is on middle name which is having 50% NULL values + +CREATE INDEX idx_users_middle_name_user_id on users (middle_name, user_id); -- this index is having first column as middle name which is having 50% NULL values +``` + +As these indexes have a sharding key on the `middle_name` column, where half of the values as NULL, half of the data resides on a single tablet and becomes a hotspot. + +Suggested change to the schema is one of the following. + +Partial indexing by removing the NULL values: + +```sql +CREATE INDEX idx_users_middle_name on users (middle_name) where middle_name <> NULL; --filtering the NULL values so those will not be indexed + +CREATE INDEX idx_users_middle_name_user_id on users (middle_name, user_id) where middle_name <> NULL; --filtering the NULL values so those will not be indexed +``` + + +Making it a range-sharded index explicitly so that NULLs are evenly distributed across all nodes by using another column: + +```sql +CREATE INDEX idx_users_middle_name on users (middle_name ASC, user_id); --adding user_id + +CREATE INDEX idx_users_middle_name_user_id on users (middle_name ASC, user_id); + +``` + +--- + +### Index on column with high percentage of a particular value + +**Description**: + +In YugabyteDB, you can specify three kinds of columns when using [CREATE INDEX](../../../api/ysql/the-sql-language/statements/ddl_create_index): sharding, clustering, and covering. (For more details, refer to [Secondary indexes](../../../explore/ysql-language-features/indexes-constraints/secondary-indexes-ysql/).) The default sharding strategy is HASH unless [Enhanced PostgreSQL Compatibility mode](./../../develop/postgresql-compatibility/) is enabled, in which case, RANGE is the default sharding strategy. + +Design the index to evenly distribute data across all nodes and optimize performance based on query patterns. + +If the index is designed for a column with a high percentage of a particular value in the data, all the data for that value will reside on a single tablet, which will become a hotspot, causing performance degradation. + +**Workaround**: If the frequently occurring value is not being queried, it is recommended that a Partial index be created by filtering this value, optimizing it for other data. + +If the value is being queried and the index is a single-column index, it is recommended to add another column and make it a multi-column range-sharded index to distribute the value evenly across various nodes. If the index is multi-column, it is recommended to make it a range-sharded index. 
+ +**Example** + +An example schema on the source database is as follows: + +```sql +CREATE TABLE user_activity ( + user_id int PRIMARY, + event_type text, --type of the activity 'login', 'logout', 'profile_update +, 'email_verification', so on.. various events + event_timestamp timestampz, + ... +); + +CREATE INDEX idx_user_activity_event_type on user_activity (event_type); --this index is on the event_type which is having 80% data with 'login' type + +CREATE INDEX idx_user_activity_event_type_user_id on user_activity (event_type, user_id); --this index is on the event_type which is having 80% data with 'login' type + +``` + +As these indexes have a sharding key on the `event_type` column, where the value ‘login’ is 80% of the data, 80% of the data resides on a single tablet, which becomes a hotspot. + +Suggested change to the schema is one of the following. + +Partial indexing by removing the ‘login’ value from the index to optimize it for other values. + +```sql +CREATE INDEX idx_user_activity_event_type on user_activity (event_type) where event_type <> 'login' ; --filtering the 'login' values so those will not be indexed + +CREATE INDEX idx_user_activity_event_type_user_id on user_activity (event_type, user_id) where event_type <> 'login' ; --filtering the 'login' values so those will not be indexed +``` + +OR + +Explicitly making it a range-sharded index so that the empty string value is evenly distributed across all nodes by adding another column. + +```sql +CREATE INDEX idx_user_activity_event_type on user_activity (event_type ASC, user_id); --adding column user_id + +CREATE INDEX idx_user_activity_event_type_user_id on user_activity (event_type ASC, user_id) + +``` From 8095803561c5fae4b81faff53bbdb797751ce083 Mon Sep 17 00:00:00 2001 From: Dwight Hodge <79169168+ddhodge@users.noreply.github.com> Date: Tue, 20 May 2025 15:34:15 -0400 Subject: [PATCH 135/146] [DOC-768] Tutorial AI menus (#27228) * Tutorial AI menus * edit * menus * minor edit --- docs/config/_default/menus.toml | 36 ++++++++++--- docs/content/preview/tutorials/AI/_index.md | 51 +++++++++++-------- .../tutorials/AI/ai-langchain-openai.md | 6 +-- .../tutorials/AI/ai-llamaindex-openai.md | 6 +-- .../preview/tutorials/AI/ai-localai.md | 6 +-- .../content/preview/tutorials/AI/ai-ollama.md | 6 +-- .../tutorials/{azure => AI}/azure-openai.md | 8 +-- .../{google => AI}/google-vertex-ai.md | 8 +-- .../content/preview/tutorials/azure/_index.md | 2 +- .../preview/tutorials/google/_index.md | 2 +- 10 files changed, 85 insertions(+), 46 deletions(-) rename docs/content/preview/tutorials/{azure => AI}/azure-openai.md (98%) rename docs/content/preview/tutorials/{google => AI}/google-vertex-ai.md (98%) diff --git a/docs/config/_default/menus.toml b/docs/config/_default/menus.toml index b5061baa4fe6..f52f856cdf13 100644 --- a/docs/config/_default/menus.toml +++ b/docs/config/_default/menus.toml @@ -713,17 +713,41 @@ showSection = true [[preview_tutorials]] - name = "Cloud" + name = "AI" + weight = 20 + identifier = "tutorials-ai" + url = "/preview/tutorials/ai/" + [preview_tutorials.params] + showSection = true + +[[preview_tutorials]] + name = "RAG" + weight = 10 + identifier = "tutorials-ai-rag" + parent = "tutorials-ai" + [preview_tutorials.params] + showSection = true + +[[preview_tutorials]] + name = "Vector basics" + weight = 20 + identifier = "tutorials-ai-vector" + parent = "tutorials-ai" + [preview_tutorials.params] + showSection = true + +[[preview_tutorials]] + name = "Agentic" weight = 30 - identifier = "tutorials-cloud" 
+ identifier = "tutorials-ai-agentic" + parent = "tutorials-ai" [preview_tutorials.params] showSection = true [[preview_tutorials]] - name = "AI" - weight = 35 - identifier = "tutorials-ai" - url = "/preview/tutorials/ai/" + name = "Cloud" + weight = 30 + identifier = "tutorials-cloud" [preview_tutorials.params] showSection = true diff --git a/docs/content/preview/tutorials/AI/_index.md b/docs/content/preview/tutorials/AI/_index.md index 3e36896d3cf7..1c566902f28e 100644 --- a/docs/content/preview/tutorials/AI/_index.md +++ b/docs/content/preview/tutorials/AI/_index.md @@ -6,46 +6,57 @@ description: How to Develop Applications with AI and YugabyteDB image: headcontent: Add a scalable and highly-available database to your AI projects type: indexpage +showRightNav: true cascade: unversioned: true --- -{{}} +## Retrieval-augmented generation +{{}} {{}} + title="Similarity Search using Azure AI" + body="Build a scalable generative AI application using YugabyteDB as the database backend." + href="azure-openai/" + icon="/images/tutorials/azure/icons/OpenAI-Icon.svg">}} {{}} + title="Similarity Search using Google Vertex AI" + body="Deploy generative AI applications using Google Vertex AI and YugabyteDB." + href="google-vertex-ai/" + icon="/images/tutorials/google/icons/Google-Vertex-AI-Icon.svg">}} +{{}} + +## Vector basics + +{{}} {{}} {{}} +{{}} -{{}} +## Agentic, multiple data sources, and multi-step reasoning + +{{}} {{}} + title="Use a knowledge base using Llama-Index" + body="Build a scalable RAG (Retrieval-Augmented Generation) app using LlamaIndex and OpenAI." + href="ai-llamaindex-openai/" + icon="/images/tutorials/ai/icons/llamaindex-icon.svg">}} +{{}} {{}} diff --git a/docs/content/preview/tutorials/AI/ai-langchain-openai.md b/docs/content/preview/tutorials/AI/ai-langchain-openai.md index 64c9b993b0fd..106b91000162 100644 --- a/docs/content/preview/tutorials/AI/ai-langchain-openai.md +++ b/docs/content/preview/tutorials/AI/ai-langchain-openai.md @@ -1,14 +1,14 @@ --- title: How to Develop LLM Apps with LangChain, OpenAI and YugabyteDB -headerTitle: LangChain and OpenAI -linkTitle: LangChain and OpenAI +headerTitle: Query without SQL using LangChain +linkTitle: Query without SQL - LangChain description: Learn to build context-aware LLM applications using LangChain and OpenAI. image: /images/tutorials/ai/icons/langchain-icon.svg headcontent: Query your database using natural language menu: preview_tutorials: identifier: tutorials-ai-langchain-openai - parent: tutorials-ai + parent: tutorials-ai-agentic weight: 60 type: docs --- diff --git a/docs/content/preview/tutorials/AI/ai-llamaindex-openai.md b/docs/content/preview/tutorials/AI/ai-llamaindex-openai.md index fb28da9d176d..6e92abb4fbef 100644 --- a/docs/content/preview/tutorials/AI/ai-llamaindex-openai.md +++ b/docs/content/preview/tutorials/AI/ai-llamaindex-openai.md @@ -1,14 +1,14 @@ --- title: How to Develop RAG Apps with LlamaIndex, OpenAI and YugabyteDB -headerTitle: Build RAG applications with LlamaIndex, OpenAI, and YugabyteDB -linkTitle: LlamaIndex and OpenAI +headerTitle: Talk to a database and knowledge base +linkTitle: Knowledge base - LlamaIndex description: Learn to build RAG applications using LlamaIndex and OpenAI. 
image: /images/tutorials/ai/icons/llamaindex-icon.svg headcontent: Use YugabyteDB as the database backend for RAG applications menu: preview_tutorials: identifier: tutorials-ai-llamaindex-openai - parent: tutorials-ai + parent: tutorials-ai-agentic weight: 60 type: docs --- diff --git a/docs/content/preview/tutorials/AI/ai-localai.md b/docs/content/preview/tutorials/AI/ai-localai.md index 9c7e7145c21d..477ad9b66cb2 100644 --- a/docs/content/preview/tutorials/AI/ai-localai.md +++ b/docs/content/preview/tutorials/AI/ai-localai.md @@ -1,14 +1,14 @@ --- title: How to Develop LLM Apps with LocalAI and YugabyteDB -headerTitle: Build LLM applications using LocalAI and YugabyteDB -linkTitle: LocalAI +headerTitle: Similarity search using LocalAI +linkTitle: Similarity search - LocalAI description: Learn to build LLM applications using LocalAI. image: /images/tutorials/ai/icons/localai-icon.svg headcontent: Use YugabyteDB as the database backend for LLM applications menu: preview_tutorials: identifier: tutorials-ai-localai - parent: tutorials-ai + parent: tutorials-ai-vector weight: 60 type: docs --- diff --git a/docs/content/preview/tutorials/AI/ai-ollama.md b/docs/content/preview/tutorials/AI/ai-ollama.md index 9c90cf9b0e9c..916c3953643c 100644 --- a/docs/content/preview/tutorials/AI/ai-ollama.md +++ b/docs/content/preview/tutorials/AI/ai-ollama.md @@ -1,14 +1,14 @@ --- title: How to Develop AI Apps Locally with Ollama and YugabyteDB -headerTitle: Build applications with locally-hosted embedding models using Ollama and YugabyteDB -linkTitle: Ollama +headerTitle: Similarity search using Ollama +linkTitle: Similarity search - Ollama description: Learn to build LLM applications using Ollama. image: /images/tutorials/ai/icons/ollama-icon.svg headcontent: Use YugabyteDB as the database backend for LLM applications menu: preview_tutorials: identifier: tutorials-ai-ollama - parent: tutorials-ai + parent: tutorials-ai-vector weight: 60 type: docs --- diff --git a/docs/content/preview/tutorials/azure/azure-openai.md b/docs/content/preview/tutorials/AI/azure-openai.md similarity index 98% rename from docs/content/preview/tutorials/azure/azure-openai.md rename to docs/content/preview/tutorials/AI/azure-openai.md index 7ee708cc28ea..6ede99bd673e 100644 --- a/docs/content/preview/tutorials/azure/azure-openai.md +++ b/docs/content/preview/tutorials/AI/azure-openai.md @@ -1,14 +1,16 @@ --- title: Build Scalable Generative AI Applications with Azure OpenAI and YugabyteDB -headerTitle: Build scalable generative AI applications with Azure OpenAI and YugabyteDB -linkTitle: Azure OpenAI +headerTitle: Similarity search using Azure OpenAI +linkTitle: Similarity search - Azure description: Build scalable generative AI applications with Azure OpenAI and YugabyteDB image: /images/tutorials/azure/icons/OpenAI-Icon.svg headcontent: Use YugabyteDB as the database backend for Azure OpenAI applications +aliases: + - /tutorials/azure/azure-openai/ menu: preview_tutorials: identifier: tutorials-azure-openai - parent: tutorials-azure + parent: tutorials-ai-rag weight: 40 type: docs --- diff --git a/docs/content/preview/tutorials/google/google-vertex-ai.md b/docs/content/preview/tutorials/AI/google-vertex-ai.md similarity index 98% rename from docs/content/preview/tutorials/google/google-vertex-ai.md rename to docs/content/preview/tutorials/AI/google-vertex-ai.md index fb187003cc22..a8a1ac09a904 100644 --- a/docs/content/preview/tutorials/google/google-vertex-ai.md +++ b/docs/content/preview/tutorials/AI/google-vertex-ai.md @@ 
-1,14 +1,16 @@ --- title: Build Scalable Generative AI Applications with Google Vertex AI and YugabyteDB -headerTitle: Build scalable generative AI applications with Google Vertex AI and YugabyteDB -linkTitle: Google Vertex AI +headerTitle: Similarity search using Google Vertex AI +linkTitle: Similarity search - Google Vertex description: Build scalable generative AI applications with Google Vertex AI and YugabyteDB image: /images/tutorials/google/icons/Google-Vertex-AI-Icon.svg headcontent: Use YugabyteDB as the database backend for Google Vertex AI applications +aliases: + - /tutorials/google/google-vertex-ai/ menu: preview_tutorials: identifier: tutorials-google-vertex-ai - parent: tutorials-google + parent: tutorials-ai-rag weight: 40 type: docs --- diff --git a/docs/content/preview/tutorials/azure/_index.md b/docs/content/preview/tutorials/azure/_index.md index ade8962b0d60..82703a9f5302 100644 --- a/docs/content/preview/tutorials/azure/_index.md +++ b/docs/content/preview/tutorials/azure/_index.md @@ -35,7 +35,7 @@ type: indexpage {{}} {{}} {{}} From 8b77c0a3cc4a1adf9a0d509ff5ca06de164c1016 Mon Sep 17 00:00:00 2001 From: Sudhanshu Prajapati Date: Wed, 21 May 2025 01:07:18 +0530 Subject: [PATCH 136/146] [docs] Release note for 2.25.2.0-b359 (#27175) * release notes for 2.25.2.0-b359 * edits * date --------- Co-authored-by: Dwight Hodge --- .../preview/releases/yba-releases/v2.25.md | 185 +++++++++++ .../preview/releases/ybdb-releases/v2.25.md | 290 ++++++++++++++++++ docs/data/currentVersions.json | 6 +- 3 files changed, 478 insertions(+), 3 deletions(-) diff --git a/docs/content/preview/releases/yba-releases/v2.25.md b/docs/content/preview/releases/yba-releases/v2.25.md index 127a6110cd99..a853689ab83d 100644 --- a/docs/content/preview/releases/yba-releases/v2.25.md +++ b/docs/content/preview/releases/yba-releases/v2.25.md @@ -15,6 +15,191 @@ What follows are the release notes for all releases in the YugabyteDB Anywhere ( For an RSS feed of all release series, point your feed reader to the [RSS feed for releases](../index.xml). +## v2.25.2.0 - May 20, 2025 {#v2.25.2.0} + +**Build:** `2.25.2.0-b359` + +### Download + + + +### Change log + +
+ View the detailed changelog + +### Improvements + +* Ensures unique zone names in each provider to avoid confusion and enhance clarity in the UI. PLAT-16367 +* Automatically deletes associated backup policies when a universe is removed. PLAT-17197 +* Reduces failover task execution time by skipping the `UpdateConsistencyCheck` subtask. PLAT-17037 +* Displays aggregated table replication status as the namespace status based on severity. PLAT-17273 +* Enables connection pooling during universe creation with new flags. PLAT-16688 +* Allows custom configuration of GCP connection draining timeout. PLAT-17356 +* Enables LDAP URL validation to support IPv6 addresses. PLAT-17180 +* Modify PITR endpoints to return both taskUUID and pitrUUID. PLAT-16805 +* Adds a linter to the YBA CLI project for enhanced code formatting. PLAT-16887 +* Allows setting a custom timeout for `DeleteReplicationOnSource` during failover. PLAT-17038 +* Speeds up failover by skipping `createTransferXClusterCertsRemoveTasks` on the source universe. PLAT-17039 +* Enables optional `enable-pitr` flag for scheduled backups and corrects PITR command help text. PLAT-17031,PLAT-17058 + +### Bug fixes + +* Removes "Alerts are snoozed" text from the Health widget. PLAT-15744 +* Adds a bootstrap summary to the DR config creation modal to clarify which tables will be bootstrapped. PLAT-15973 +* Enables viewing specific TServer metrics on Kubernetes by adjusting metric query processing. PLAT-16268 +* Changes default label for tserver/master metrics from `HOSTNAME` to `EXPORTED_INSTANCE`. PLAT-16268 +* Refreshes KMS tokens at 70% TTL and hourly via YBA backend. PLAT-16290 +* Supports health checks for multiple installed NTP services. PLAT-16709 +* Now supports `awsHostedZoneName` in AWS provider edit payload to prevent failures. PLAT-16723 +* Switches SSL certificate verification to use fingerprint comparison, enhancing compatibility and reducing task failures. PLAT-16726 +* Ensures master statefulsets are not deployed in read replica clusters to avoid confusion and potential errors. PLAT-11348,PLAT-16727 +* Disables clock drift check for Kubernetes clusters and when disabled by config. PLAT-16819 +* Ensures the Metrics page in YBA handles proxy settings correctly. PLAT-16868 +* Enhances database health checks and process management for better stability and performance. PLAT-14999,PLAT-16895,PLAT-15742,PLAT-16197 +* Ensures all cloudInfo fields are merged in YBA UI before edit requests, preventing mischaracterized edits. PLAT-16924 +* Enables force deletion even if `DeleteBootstrapIds` subtask fails. PLAT-16982 +* Enhances RR cluster deletion by making it retryable, abortable, and classifying it as a placement modification task. PLAT-16991 +* Enhances node agent to anticipate certificate expiration and enable prompt renewal. PLAT-17056 +* Adds retry for disk mount/unmount during OS patching and ensures volume attachment before VM start. PLAT-17094 +* Re-enables node safety checks in YBM, ensuring nodes are safe to take down. PLAT-17097 +* Re-disables the cluster consistency check for YBM dual-NIC configurations. PLAT-17097 +* Enhances node agent installation for manual provisioning in YNP to be idempotent. PLAT-17141 +* Enables conditional validation for AWS keys based on IAM role settings. PLAT-17192 +* Enables clearer metrics and alerts for backup deletions. PLAT-17251 +* Ensures TLS toggle and cert rotation manage `YBC` flags on dedicated masters. 
PLAT-17472 +* Enhances Kubernetes support for Prometheus backups and restores, including retaining PostgreSQL dumps on restore. PLAT-8626 +* Switches the default YugabyteDB managed cloud image back to AlmaLinux 8.9. PLAT-15311 +* Ensures Kubernetes operator correctly handles storage configurations without setting default S3 attributes. PLAT-16760 +* Fixes Azure resource deletion by correctly reading the error code field. PLAT-16769 +* Ensure instance types exist before node addition in on-premises providers. PLAT-16810 +* Restores using KMS now function correctly due to improved field annotations in YBA CLI. PLAT-16811,PLAT-16783 +* Enables upgrading universes without unintended server cert rotation. PLAT-16812 +* Fixes the issue where changing timezone doesn't update on zoomed metrics graphs. PLAT-16833 +* Upgrades Prometheus in YBA installer to version 3.2.1, enhancing security. PLAT-16872 +* Upgrades Prometheus in helm charts to version 3.2.1, enhancing security. PLAT-16872 +* Upgrades PostgreSQL to version 14.17 to address critical security vulnerabilities. PLAT-16873 +* Upgrades key dependencies for enhanced security against critical vulnerabilities. PLAT-16873,PLAT-16874,PLAT-16876 +* Upgrades azcopy to version 10.28.0 to enhance security and performance. PLAT-16893 +* Upgrades address security vulnerabilities in Netty, Json-smart, and Mina-core, ensuring increased safety against potential attacks. PLAT-16894 +* Prevents YBA crash loop caused by invalid OIDC configuration settings. PLAT-16905 +* Fixes issue where prometheus-based alerts for clock drift were not triggering. PLAT-16984 +* Fixes script error to correctly handle the 10th argument during PostgreSQL restore. PLAT-16990 +* Speeds up Azure blob deletion and handles backups more efficiently. PLAT-17040 +* Fixes the directory path for installing Clockbound binaries. PLAT-17135 +* Enhanced the restore function to properly filter keyspaces during a single keyspace restore. PLAT-17146 +* Ensures K8s Helm override form correctly submits pre-check requests. PLAT-17184 +* Ensures `SetupYNP` only prepares the node agent package without creating an entry. PLAT-17194 +* Fixes issues with creating universes and editing read replicas when primary cluster payload is missing. PLAT-17224 +* Ensures Ansible provisioning validation works on Ubuntu by updating the scripting method. PLAT-17349 +* Enables retrying `CreateUniverse` for on-prem nodes by modifying preflight checks. PLAT-17368 +* Ensures YBA HA promotion success even if it fails midway after a restore. PLAT-17369 +* Ensures only `Running` tables are added to xCluster replication edits. PLAT-17387 +* Ensures node updates during tasks won't overwrite live data with stale information. PLAT-17405 +* Disables background node agent installer by default, but tracks universes needing migration. PLAT-17435,PLAT-17449 +* Resolves issue where xCluster edit command incorrectly removes tables from replication. PLAT-17521 +* Retries failed CREATE TABLESPACE queries up to 3 times to ensure success. PLAT-14388 +* Enables TLS certificate verification by default in the YBA CLI, adds `insecure` and `ca-cert` flags. PLAT-16083 +* Allows S3 bucket access through both global and private endpoints using the new `globalBucketAccess` field. PLAT-16571 +* Allows deleting Kubernetes universes even when paused. PLAT-16808 +* Enables Kubernetes-based backup and restore for Prometheus in YugabyteDB. PLAT-16824 +* Ensures `dedicatedNodes` is set to true for all Kubernetes universes. 
PLAT-16827 +* Enables more flexible regex matching for S3 Host Base domains. PLAT-16842 +* Blocks creation of cron-based universes in YNP to prevent health check failures. PLAT-16879 +* Simplifies the AsyncTask interface in the node agent, reducing method count. PLAT-16886 +* Ensures crontab binary exists before disabling services on Amazon Linux. PLAT-16902 +* Adds a refresh button to the slow queries UI for easier data updates. PLAT-16917 +* Fixes configuration display and saving issues for migrated universes from 2.20 to 2024.2. PLAT-16918 +* Enables scraping of node agent metrics through YBA proxy endpoint. PLAT-16939 +* Fixes UUID comparison in manual incremental backup creation. PLAT-16953 +* Appends `node_ip` to the config file to prevent race conditions. PLAT-16960 +* Fixes errors in health checks when changing node IPs manually. PLAT-16963 +* Groups all prechecks into a single subtask group for better user experience. PLAT-16965 +* Removes duplicate case in switch statement to fix compilation errors. PLAT-16974 +* Enhances PerfAdvisor by ignoring new fields and supporting custom temp directories. PLAT-14028,PLAT-17020 +* Fixes incorrect data-test-id for Full Move button and adds translation to Run Prechecks. PLAT-17034 +* Reduces UI flickering in task tabs during database upgrades. PLAT-17057 +* Fixes deadlock issue in backups by using sequential streams instead of parallel streams. PLAT-17063 +* Moves YSQL server health checks to after cluster configuration updates during universe creation. PLAT-17085 +* Ensures YSQL health checks run successfully after cluster configuration updates during universe creation. PLAT-17085 +* Ensures Prometheus data directory script runs properly using `sh` and moves directories correctly. PLAT-17091 +* Enables `xCluster` creation only with specified table UUIDs despite new flags. PLAT-17105 +* Fixes xCluster creation in YBA CLI by updating client to handle bootstrap tables UUID. PLAT-17105 +* Sends HTTP 529 response when `tasks_list` API encounters exceptions. PLAT-17111 +* Allows specifying full URNs for Azure vnet/subnet to improve resource grouping. PLAT-17115 +* Enables correct THP parameter settings in Ansible and YNP provisioning. PLAT-171678,PLAT-17171,PLAT-17167 +* Ensures core dump file generation pattern matches the one from Ansible playbooks. PLAT-17201 +* Enables server control via RPC to node agent, gated by a global runtime feature flag. PLAT-17216 +* Enhances cluster consistency checks to handle multiple IP addresses per node. PLAT-17222 +* Speeds up upgrade processes by moving pre-checks to asynchronous tasks. PLAT-17238 +* Ensures alert for orphan masters is raised correctly in specific cases. PLAT-17257 +* Adds metrics to track and alert on node agent installation failures. PLAT-17274 +* Fixes issue where adding a node incorrectly re-creates existing nodes in async clusters. PLAT-17311 +* Writes PG upgrade check logs to a temporary file for better error parsing. PLAT-17418 +* Enables attach-detach script to work with YBA on HTTPS platforms. PLAT-9692 +* Allows the TlsToggle task to retry with consistent intent settings. PLAT-11187 +* Ensures masters and TServers are verified to belong to the correct universe after startup. PLAT-11696 +* Ensures xCluster deletion can proceed by using either source or target universe UUID when available. PLAT-13785 +* Ensures timezone dropdown defaults to the set preference after clearing or refreshing. PLAT-16606,PLAT-16705 +* Fixes inconsistent `useTimeSync` setting for K8s and OnPrem universes. 
PLAT-16749 +* Allows empty fields in Cert Manager Issuer during K8s setup. PLAT-16759,PLAT-16758 +* Ensures `semanage fcontext` runs regardless of SELinux mode to prevent node-agent issues. PLAT-16762 +* Restores `semanage fcontext` execution regardless of SELinux mode. PLAT-16762 +* Enables RunApiTriggeredHooks to correctly mark updateSucceeded as true. PLAT-16839 +* Extracts `node_exporter` based on architecture and enhances Python support. PLAT-16871 +* Ensures tag changes are saved and visible in audit logs. PLAT-16875 +* Fixes node state accuracy during resize task retries. PLAT-16916 +* Blocks cron-based universe creation when Ansible provisioning is disabled. PLAT-16925 +* Ensures subtask details update correctly when main tasks complete. PLAT-16961 +* Fixes the display of TServer label for disk volume stats in K8s environments. PLAT-16964 +* Ensures the task banner updates with new tasks on launch by maintaining universe state. PLAT-16970 +* Ensures correct scheduling of incremental backups by updating full backup times first. PLAT-16972 +* Removes YEDIS option from CREATE and EDIT modes in the UI, ensuring a cleaner interface. PLAT-17015,PLAT-16983 +* Updates the xCluster version threshold to `2024.1.3.0-b104` on the YBA UI to ensure accuracy in displaying semi-automatic mode availability. PLAT-17045 +* Enhances backup and restore by reconfiguring YBC on all queryable nodes, not just `Running` or `ToBeRemoved`. PLAT-17252 +* Adds YugabyteDB package support to the YNP module. PLAT-17260 +* Allows configuring the SSHD daemon via YNP for custom SSH ports. PLAT-17283 +* Corrects counting of failed tables for DR error banners. PLAT-17348 +* Allows configuring the timeout for PostgreSQL upgrade checks, defaulting to 600 seconds. PLAT-17473 +* Adds Kubernetes overrides to API examples for creating universes. PLAT-8019 +* Adds commands to edit and delete read replica clusters in YBA CLI. PLAT-12842 +* Enables using `yba universe describe` outputs as templates for `yba universe create`. PLAT-16360 +* Adds CLI commands to list, describe, download, and delete support bundles. PLAT-16362 +* Enables the creation of support bundles via the YBA CLI. PLAT-16363 +* Corrects API notations for Point-in-Time Recovery operations. PLAT-16364 +* Enhances YBA CLI with comprehensive alert management commands. PLAT-16365 +* Adds CLI commands for managing alert channels and destinations. PLAT-16366 +* Prompts users for confirmation if an existing config file will be overwritten. PLAT-16617 +* Ignores consistency checks on retries when finding a TServer fails. PLAT-16667 +* Adds commands to manage alert maintenance windows. PLAT-16696 +* Prevents failures in OperatorUtils by not running ConfigBuilder during initialization. PLAT-16882 +* Adds endpoint to list backup directories for selected storage config. PLAT-16900 +* Adds DELETE node command to YBA CLI for managing universe nodes. PLAT-16903 +* Adds prechecks-only functionality for Kubernetes upgrades and edits. PLAT-17019 +* Adds pull secrets and node selector rules to customer creation jobs. PLAT-17026 +* Adds CLI support for creating and managing user groups. PLAT-17032 +* Enhances the `describe` command output spacing for better readability. PLAT-17096 +* Adds instance type commands to all cloud service providers in CLI. PLAT-17099 +* Switches AWSUtil from parallel streams to regular streams to avoid thread exhaustion. PLAT-17102 +* Adds commands to refresh KMS configurations from YBA CLI. 
PLAT-17131 +* Adds a YBA CLI command for configuring YCQL in existing universes. PLAT-17137 +* Enhances data persistence by copying PG restore dump files to `/opt/yugabyte/yugaware/data` in Kubernetes environments. PLAT-17138 +* Fixes cert-manager certificate names and SAN entries for MCS. PLAT-17142,GH-163 +* Reverts erroneous method changes to fix Azure Private DNS in universe creation/deletion. PLAT-17152 +* Adds support for new statuses in `GetReplicationStatus` RPC, enhancing xCluster replication monitoring. PLAT-17230 +* Ensures correct permissions on /run/user with a new precheck. PLAT-17246 +* Resolves the date conversion bug in the get JWT endpoint. PLAT-17261 +* Ensures PYTHON_EXECUTABLE is set for ntpd service checks in clock-skew configuration. PLAT-17524 + +
+ ## v2.25.1.0 - March 21, 2025 {#v2.25.1.0} **Build:** `2.25.1.0-b381` diff --git a/docs/content/preview/releases/ybdb-releases/v2.25.md b/docs/content/preview/releases/ybdb-releases/v2.25.md index b13c75ed3b67..fb157cfc77f5 100644 --- a/docs/content/preview/releases/ybdb-releases/v2.25.md +++ b/docs/content/preview/releases/ybdb-releases/v2.25.md @@ -15,6 +15,296 @@ What follows are the release notes for the YugabyteDB v2.25 release series. Cont For an RSS feed of all release series, point your feed reader to the [RSS feed for releases](../index.xml). +## v2.25.2.0 - May 20, 2025 {#v2.25.2.0} + +**Build:** `2.25.2.0-b359` + +### Downloads + + + +**Docker:** + +```sh +docker pull yugabytedb/yugabyte:2.25.2.0-b359 +``` + +### Change log + +
+ View the detailed changelog + +### New features + +* yugabyted UI now displays xCluster replication details for the source and destination universe. {{}} + +### Improvements + +#### YSQL + +* Enhances nested loop joins by rechecking pushability of conditions and renames relevant function to reduce confusion. {{}} +* Restores CREATE permission on the public schema to `yb_db_admin`. {{}} +* Exempts walsender from YSQL backend check to prevent index creation delays. {{}} +* Enables viewing TCMalloc heap snapshots for PG backend processes via new YSQL functions. {{}} +* Enhances `yb_servers` function to include `universe_uuid` for better cluster identification. {{}} +* Fixes comment linting issues to handle non-word characters. {{}} +* Enhances ASH data retrieval in query diagnostics using the SPI framework. {{}} +* Allows customization of `ybhnsw` index creation options in YSQL. {{}} +* Integrates new data types and functions from pgvector 0.8.0 into YSQL. {{}} +* Enables on-demand logging and enhanced catalog cache statistics tracking. {{}} +* Enables conditional checks for role existence in `ysql_dump` outputs with the `dump_role_checks` flag. {{}} +* Removes the check that the first operation in a plain session must set the read time. {{}} +* Enhances code consistency in `ybgate_api.h` by matching PostgreSQL style. {{}} +* Consolidates multiple suppression flags into one for cleaner `pg_regress` outputs. {{}} +* Refactors `PgDocReadOp` to enhance modularity by isolating sampling logic into `PgDocSampleOp`. {{}} +* Enables `ALTER TYPE ... SET SCHEMA` support for orafce extension upgrades. {{}} +* Enhances `pg_stat_get_progress_info` by adding new fields. {{}} +* Eliminates unnecessary workaround in `ALTER TABLE` operations related to constraint handling. {{}} +* Reinstates checks for `ash_metadata` in PgClient RPC requests with added code explanations. {{}} +* Re-adds `bitmap_scans_distinct` test to ensure consistent behavior. {{}} +* Adds support for datetime and UUID type pushdown in mixed mode. {{}},{{}} +* Organizes YSQL code by splitting function definitions into a new file. {{}} +* Enables expression pushdown for MOD, LIKE, ASCII, SUBSTRING in mixed mode upgrades. {{}} +* Disables AutoAnalyze during the entire PG15 upgrade to ensure stability. {{}} +* Enforces naming conventions for distinguishing YugabyteDB-specific files. {{}} +* Aligns `CurrentMemoryContext` handling more closely with PostgreSQL updates. {{}} +* Enhances compatibility with PostgreSQL numeric tests by refining data ordering and simplifying table structures. {{}} +* Maintains workaround in `pg_dump` to support upgrades with `pg_stat_statements`. {{}} +* Ensures consistent transaction path settings for single-shard operations. {{}} +* Merges PostgreSQL 15.12 improvements into YugabyteDB, enhancing database compatibility and stability. {{}} +* Allows users to adjust `ybhnsw.ef_search` for HNSW index searches in YSQL. {{}} +* Automatically maps `hnsw` to `ybhnsw` in `CREATE INDEX` statements for seamless index creation. {{}} +* Recommends changing isolation level to read committed to avoid errors during concurrent inserts. {{}} +* Excludes PostgreSQL owned code from `bad_variable_declaration_spacing` lint rule. {{}} +* Adds `server_type` option to differentiate foreign servers in `postgres_fdw`. {{}} +* Renames `switch_fallthrough` to `yb_switch_fallthrough` for consistency. {{}} +* Enables the PostgreSQL anonymizer extension via the `enable_pg_anonymizer` flag. 
{{}} +* Enhances error reporting by including index names for missing rows. {{}} +* Displays rows removed by YugabyteDB index recheck in execution plans. {{}} +* Aligns `get_relation_constraint_attnos` function to use correct flag. {{}} +* Disallow setting `ysql_select_parallelism` to zero to prevent errors. {{}} +* Removes `pg_strtouint64` and adopts `strtoull` or `strtou64` for consistency. {{}} +* Aligns YSQL more closely with upstream PostgreSQL, reducing discrepancies and streamlining changes. {{}} +* Logs now detail the cause and context of read restart errors for better troubleshooting. {{}} +* Limits output buffer to 8kB to ensure compatibility with certain clients. {{}} +* Enhances TServer by adding support for garbage collection of invalidation messages, reducing memory usage. {{}} +* Increases the timeout for detecting `pg_yb_catalog_version` mode from 10 seconds to 20 seconds. {{}} +* Enhances `pg_stats` with length and bounds histograms for better query planning. {{}} +* Fixes build failures and enhances memory usage reporting with TCMalloc stats. {{}} + +#### YCQL + +* Allows setting NULL in YCQL JSONB column values using UPDATE statements. {{}} + +#### DocDB + +* Enables placing intermediate CA certificates directly in the server cert file for node-to-node encryption. {{}} +* Tracks ByteBuffer memory usage with `MemTracker`. {{}} +* Allows dynamic adjustment of `rocksdb_compact_flush_rate_limit_bytes_per_sec` across all tablets. {{}} +* Selects geographically closest TServer for faster clone operations. {{}} +* Switches most builds to clang 19, enhancing code safety and addressing new warnings. {{}} +* Introduces block-based data organization in `YbHnsw` for efficient memory management during data loading and unloading. {{}} +* Enables manual compaction of vector index chunks in Vector LSM. {{}} +* Ensures vector index backfill reads from the indexed table at the correct time. {{}} +* Upgrades protobuf to version 21.12 for better C++23 compatibility. {{}} +* Updates codebase to C++23, enhancing compatibility and performance. {{}} +* Enables sequence replication in xCluster by default, removing the need for a flag. {{}} +* Adds logging for vector index search stats when `vector_index_dump_stats` flag is true. {{}} +* Ensures consistent bootstrapping of vector indexes after a TServer restart. {{}} +* Enhances handling of expired snapshots by retrying deletion tasks automatically. {{}} +* Ensures vector index contains all entries from the indexed table. {{}} +* Adds detailed cluster balancer warnings to the master UI page. {{}} +* Adds tombstones to obsolete vector IDs, reducing queries to the main table. {{}} +* Displays cluster balancer tasks on the master UI page. {{}} +* Adds annotations to prevent compiler reordering in shared memory interactions. {{}} +* Uses non-concurrent mode by default for creating vector indexes to streamline processes. {{}} +* Adds safeguard to pause replication after repeated DDL failures. {{}} +* Enhances xCluster DDL replication by adjusting `yb_read_time` usage and silencing related warnings. {{}} +* Renames `docdb::VectorIndex` to `docdb::DocVectorIndex` to eliminate name confusion. {{}} +* Allows specific compaction and flush for vector indexes via `yb-admin` commands. {{}} +* Adds `yb-ts-cli compact_vector_index` command for tablet-specific vector index compaction. {{}} +* Adds `automatic_mode` flag to `create_checkpoint` for simpler xCluster setup. {{}} +* Enables dropping vector indexes in DocDB. 
{{}} +* Displays replication mode in the master UI `Outbound Replication Groups` section. {{}} +* Enhances vector index compaction with a new deletion API and clearer naming conventions. {{}} +* Automatic mode now always requires bootstrapping to ensure OID consistency. {{}} +* Reuse threads to enhance connection efficiency in shared memory communication. {{}} +* Enhances vector index query stats logging and adds new metrics tracking. {{}} +* Enhances monitoring by using thread pool names for thread categorization. {{}} +* Simplifies navigation and modification of master async RPC tasks code. {{}} +* Introduces idle timeouts in `rpc::ThreadPool` to automatically adjust thread counts based on activity, enhancing resource efficiency. {{}} +* Switches to `MPSCQueue` for enhanced single-consumer performance and simpler maintenance. {{}} +* Adds support for the DocumentDB extension v0.102-0 to enhance database functionality. {{}} +* Simplifies xCluster DDL replication tests by removing bidirectional roles. {{}} +* Allows setting `ybhnsw.ef_search` to customize search expansion factor. {{}} +* Adds paginated querying for vector index operations. {{}} +* Cancels vector index compaction during VectorLSM shutdown. {{}} +* Enables cloning of vector indexes in databases. {{}} +* Enables consistent backup and restore for vector indexes. {{}} +* Speeds up ExternalMiniCluster tests by directly triggering master elections. {{}} +* Deprecates the `load_balancer_count_move_as_add` flag to simplify cluster balancing. {{}} +* Removes `master_replication` from `master_fwd.h` to optimize file parsing times. {{}} + +#### CDC + +* Enhances CDC streaming by advancing restart time in idle periods, supported by the new flag `cdcsdk_update_restart_time_interval_secs`. {{}} +* Reduces logging frequency for certain CDC errors to avoid clutter. {{}} +* Sets `wal_status` in `pg_replication_slots` based on CDC consumption timing. {{}} +* Corrects flag value conversion to ensure accurate update intervals for CDC restart times. {{}} +* Blocks table drops if they are part of a publication to prevent replication issues. {{}} +* Reduces the default `yb_walsender_poll_sleep_duration_empty_ms` flag value to 10 ms to speed up replication in sparse workloads. {{}} +* Increases log visibility for netty errors by changing levels from `DEBUG` to `WARN`. {{}} + +#### yugabyted + +* Removes `psutil` dependency in `yugabyted` for better compatibility. {{}} + +### Bug fixes + +#### YSQL + +* Reduces XID usage by generating one per `REFRESH MATERIALIZED VIEW CONCURRENTLY` operation. {{}} +* Renames on unique constraints now update associated DocDB table names. {{}} +* Reduces read restart errors during concurrent disjoint key writes. {{}} +* Avoids unnecessary catalog version bumps during in-place materialized view refreshes. {{}} +* Disables index-only scans on copartitioned indexes. {{}} +* Introduces custom SQL error codes for better error handling across processes. {{}} +* Fixes crashes when using `yb_get_range_split_clause` with partitioned tables. {{}} +* Fixes incorrect error message related to "INSERT ON CONFLICT" under concurrent transactions. {{}} +* Corrects batched read behavior for mixed immediate and deferred FK constraints. {{}} +* Reduces latency after DDL changes by using catalog version for cache invalidation. {{}} +* Refines cost model tuning using server-side execution times for more accurate query optimization. {{}} +* Removes redundant `yb_cdc_snapshot_read_time` field, simplifying snapshot management. 
{{}} +* Enables geolocation costing in the new cost model using the `yb_enable_geolocation_costing` flag. {{}} +* Fixes flaky behavior in Connection Manager when handling prepared statements. {{}} +* Disables fast-path transactions for bulk loads on colocated tables by default. {{}} +* Refactors the FK cache handling in YSQL for cleaner code structure. {{}} +* Optimizes cost modeling for primary index scans to assume sequential disk block fetching. {{}} +* Ensures accurate detection of duplicate entries during fast-path transactions. {{}} +* Enables setting follower reads YSQL parameters at connection time. {{}} +* Resolves multiple issues in tuple-to-string utility functions. {{}} +* Ensures stable operation of refresh materialized view during major upgrades. {{}} +* Uses auto-generated OID symbols for `pg_proc` entries. {{}} +* Displays the `initdb` log file path on stdout for easier debugging. {{}} +* Ensures consistent data during fast-path `COPY` operations on tables with unique indexes. {{}} +* Organizes tests into separate files for better clarity and maintenance. {{}} +* Enhances query planning for inherited and partitioned tables with more efficient path usage. {{}} +* Ensures PostgreSQL compilation only executes necessary tasks by correctly handling `MAKELEVEL`. {{}} +* Prevents database crashes by blocking index creation on dimensionless vector columns. {{}} +* Fixes upgrade issues for partitioned tables by reverting `relam` settings. {{}} +* Prevents crash by excluding NULL values from vector indices. {{}} +* Enhances index scans and partition pruning for BOOLEAN conditions. {{}} +* Ensures correct behavior of YbBitmapIndexScan upon rescan by updating pushdown expressions. {{}} +* Eliminates erroneous colocation data in `indexdef` for copartitioned indexes. {{}} +* Adds unit test to handle `SELECT` errors in incremental cache refresh. {{}} +* Fixes regression bug in handling incremental cache refresh across concurrent DDL sessions. {{}} +* Restores and repositions a critical statement to its intended location in the planner. {{}} +* Enables selective whole row retrieval for DELETE on partitioned tables with varying configurations. {{}} +* Corrects estimations for inner table scans in Batched Nested Loop Joins. {{}} +* Fixes integer overflow in BNL cost estimation, preventing negative values. {{}} +* Prevents incorrect sharing of query limits in subplan executions. {{}},{{}} +* Adds a YSQL configuration parameter to customize negative catalog caching. {{}},{{}} +* Ensures the `vmodule` flag is respected by the postgres process. {{}} +* Adds `liblz4.1.dylib` to macOS `yugabyte-client` package for successful deployment. {{}} +* Enables `ANALYZE` to collect accurate stats for parent-level of partitioned tables. {{}} +* Prevents crashes by handling non-variable expressions in single-row updates or deletes. {{}} +* Adds a safeguard to prevent crashes during NULL vector index scans. {{}} +* Enhances stability by initializing vector index scan costs to prevent undefined behavior. {{}} +* Prevents relcache reference leaks in `yb_get_ybctid_width`. {{}} +* Fixes port conflict issue when setting `pgsql_proxy_bind_address` in dual NIC setups. {{}} +* Addresses "Duplicate table" error by ensuring unique OID allocation during restores. {{}} +* Ensures YSQL dumps set `colocated = false` for non-colocated databases during backups. {{}} +* Reduces default RPC message size limit for better data handling. {{}} +* Enhances `yb_index_check` to verify materialized view indexes' consistency. 
{{}} +* Ensures `ysql_dump` maintains enum sort order during backup and restore. {{}} +* Ensures accurate data return from index scans by correctly fetching needed values for rechecks. {{}} +* Ensures `path->rows` reflects accurate row estimates in scans, avoiding incorrect overwrites. {{}} +* Prevents "Duplicate table" errors by not computing `relfilenode_htab` during initdb. {{}} +* Switches from `now` to `clock_timestamp` for recording invalidation message time. {{}} +* Ignores the `SPLIT` option when creating a partitioned table. {{}} +* Renames YSQL metric prefixes for clarity and maintains compatibility with old names. {{}} +* Updates description for `ysql_yb_enable_ash` flag. {{}} +* Allows restoration of old backups with enum types without errors by reverting to warnings and auto-generated OIDs. {{}} +* Logs odd `pg_enum` OID during restore if `sortorder` is missing, enhancing troubleshooting. {{}} +* Restores the call to `ScheduleCheckObjectIdAllocators` inadvertently removed. {{}} +* Fixes a use-after-free bug in ysql_dump by copying tablegroup_name. {{}} +* Allows `yb_binary_restore` to be set by `ybdbadmin` for vector extension creation. {{}} + +#### DocDB + +* Resolves issue where tables could get indefinitely stuck in HIDING state. {{}} +* Prevents creation of tablespaces with duplicate placement blocks. {{}} +* Prevents crashes by ensuring non-null frontiers during transaction apply after a TServer restart. {{}} +* Fixes load balancing for rewritten tables in colocated databases. {{}} +* Prevents deadlocks in PG sessions when using shared memory, enhancing stability. {{}} +* Fixes crashes in ProcessSupervisor when unable to restart a process. {{}} +* Ensures yb-admin commands respect user-specified timeouts for table flushes and compactions. {{}} +* Enhances transactional xCluster accuracy by using majority replicated OpId for ApplySafeTime calculations. {{}} +* Ensures accurate `WaitForReplicationDrain` behavior by not mislabeling tablets as drained. {{}} +* Enables cloning databases to any time in the snapshot schedule retention period. {{}} +* Fixes handling of `db_max_flushing_bytes` to properly limit memory usage under high write loads. {{}} +* Prevents unbounded growth of the `recently_applied_map` by not adding read-only transactions, conserving memory. {{}} +* Enables cloning databases with sequences to earlier states without errors. {{}} +* Prevents false conflict detection in snapshot isolation operations. {{}} +* Fixes lock order for vector index creation to prevent deadlocks. {{}} +* Allows xCluster to handle `UNKNOWN` state TableInfos gracefully. {{}} +* Disables `-Wmisleading-indentation` warnings in GCC to prevent increased compile times. {{}} +* Adjusts shared memory address range to 0x350000000000-0x3f0000000000 to avoid conflicts. {{}} +* Ensures continuous leader lease revocation by supporting multiple old leader leases. {{}} +* Prevents potential deadlocks by ensuring table locks follow ID order during namespace copies. {{}} +* Separates thread pools for vector index backfill and inserts to avoid deadlocks. {{}} +* Escapes non-printable characters in UI and file outputs. {{}} +* Fixes logging of partition keys for new child tablets post-split. {{}} +* Fixes flush failure reporting during shutdown to prevent data loss. {{}} +* Ensures accurate data tracking during master leader transitions by handling chunked tablet reports efficiently. {{}} +* Ensures system tables are correctly removed during deletions or upgrades. 
{{}} +* Reduces thread contention by using a lock-free queue for thread management. {{}} +* Enhances local RPC call handling to execute in the same thread when possible. {{}} +* Stops tracking transactions when `use_bootstrap_intent_ht_filter` is set to false, preventing memory leaks. {{}} +* Ensures `Slice::ToDebugString` respects the `max_len` setting for hex outputs. {{}} +* Prevents `yb-admin` crashes by correctly handling argument count for `create_database_snapshot`. {{}} +* Improves error handling for shared memory operations in DocDB. {{}} +* Removes 60-second timeout upper bound on admin RPCs and adds new `yb_client_admin_rpc_timeout_sec` flag. {{}} +* Prevents deadlocks during background compaction and transaction loading. {{}} +* Fixes the issue of recording "query id 0" in Active Session History samples. {{}} +* Blocks nonconcurrent index creation on xCluster replicated tables. {{}} +* Prevents master process crashes by fixing an iteration modification issue in TriggerDdlVerificationIfNeeded. {{}} +* Reverts non-printable character handling to fix tests and API scraping issues. {{}} +* Prevents crashes during vector index flush on shutdown. {{}} + +#### CDC + +* Ensures only relevant `COMMIT` records are streamed, fixing gRPC connector crashes. {{}} +* Prevents CDC crashes by logging a warning for dropped indexes on colocated tables. {{}} +* Prevents data loss by not streaming records during transaction load. {{}} +* Ensures reliable CDC stream functionality during index creation, preventing schema packing errors. {{}} + +#### yugabyted + +* Checks for chrony service before enabling clockbound during node start. {{}} +* Preserves the universe key locally after enabling EAR for recovery scenarios. {{}} + +
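As an illustration of the `ybhnsw` entries above, the following is a hedged sketch of what these options look like in use. The `items` table, the three-dimensional vectors, and the value 64 are hypothetical; the index syntax mirrors the `ybhnsw`/`vector_l2_ops` usage exercised by the regression test added later in this patch series, and `ybhnsw.ef_search` is assumed to be adjustable as a session parameter based on the notes above.

```sql
-- Hypothetical example of the ybhnsw options described in the notes above.
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE items (id int PRIMARY KEY, embedding vector(3));
CREATE INDEX ON items USING ybhnsw (embedding vector_l2_ops);

-- Assumed session-level setting; 64 is an arbitrary illustrative value.
SET ybhnsw.ef_search = 64;

-- Nearest-neighbor lookup using the L2 distance operator.
SELECT id FROM items ORDER BY embedding <-> '[0, 0, 0]' LIMIT 1;
```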
+ ## v2.25.1.0 - March 21, 2025 {#v2.25.1.0} **Build:** `2.25.1.0-b381` diff --git a/docs/data/currentVersions.json b/docs/data/currentVersions.json index 38c64577e95b..884a398bdedf 100644 --- a/docs/data/currentVersions.json +++ b/docs/data/currentVersions.json @@ -19,9 +19,9 @@ "series": "v2.25", "alias": "preview", "display": "v2.25 (Preview)", - "version": "2.25.1.0", - "versionShort": "2.25.1", - "appVersion": "2.25.1.0-b381", + "version": "2.25.2.0", + "versionShort": "2.25.2", + "appVersion": "2.25.2.0-b359", "isStable": false, "initialRelease": "2025-01-17" }, From 61d1d0676c55c8731f56426e1ebe0b0672664539 Mon Sep 17 00:00:00 2001 From: Bvsk Patnaik Date: Tue, 20 May 2025 07:24:26 +0000 Subject: [PATCH 137/146] [#27280] YSQL: Fix ALTER DATABASE OWNER assertion failure Summary: ### Issue After 9e1c57471a0e / D43672, the following command fails an assertion check ``` CREATE DATABASE restored_db; ALTER DATABASE restored_db OWNER TO yugabyte; ``` The assertion check is in CheckAlterDatabaseDdl ``` case T_AlterOwnerStmt: { const AlterOwnerStmt *const stmt = castNode(AlterOwnerStmt, parsetree); /* * ALTER DATABASE OWNER needs to have global impact, however we * may have a no-op ALTER DATABASE OWNER when the new owner is the * same as the old owner and there is no write made to pg_database * to turn on is_global_ddl. Also in global catalog version mode * is_global_ddl does not apply so it is not turned on either. */ if (stmt->objectType == OBJECT_DATABASE) Assert(ddl_transaction_state.is_global_ddl || !YBCPgHasWriteOperationsInDdlTxnMode() || !YBIsDBCatalogVersionMode()); break; } ``` The assertion failure happens because YBCPgHasWriteOperationsInDdlTxnMode() is true after the commit 9e1c57471a0e. The commit locks catalog version using SELECT FOR UPDATE. Although SELECT FOR UPDATE is not a write operation, the logic to determine whether there is a write operation also includes lock operations. See DoRunAsync in pg_session.cc ``` // We can have a DDL event trigger that writes to a user table instead of ysql // catalog table. The DDL itself may be a no-op (e.g., GRANT a privilege to a // user that already has that privilege). We do not want to account this case // as writing to ysql catalog so we can avoid incrementing the catalog version. has_catalog_write_ops_in_ddl_mode_ = has_catalog_write_ops_in_ddl_mode_ || (is_ddl && !IsReadOnly(*op) && is_ysql_catalog_table); ``` IsReadOnly also includes lock operations ``` bool IsReadOnly(const PgsqlOp& op) { return op.is_read() && !IsValidRowMarkType(GetRowMarkType(op)); } ``` However, a SELECT FOR UPDATE by itself is not a write operation for the purposes of YBCPgHasWriteOperationsInDdlTxnMode. ### Fix Replace !IsReadOnly(*op) with op->is_write(). ### Impact YBCPgHasWriteOperationsInDdlTxnMode() is also called to early return YbTrackPgTxnInvalMessagesForAnalyze() ``` /* * If there is no write, then there are no inval messages so this commit is * equivalent to a no-op. */ if (!YBCPgHasWriteOperationsInDdlTxnMode()) return false; ``` This change also allows this early return optimization in the presence of SELECT FOR UPDATE on the catalog version. SELECT FOR UPDATE by itself does not generate any invalidation messages. 
Jira: DB-16767 Test Plan: Jenkins Reviewers: pjain, myang, smishra Reviewed By: pjain, myang Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D44093 --- src/yb/yql/pggate/pg_session.cc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/yb/yql/pggate/pg_session.cc b/src/yb/yql/pggate/pg_session.cc index a3d4f166259b..271e342cb0fa 100644 --- a/src/yb/yql/pggate/pg_session.cc +++ b/src/yb/yql/pggate/pg_session.cc @@ -1099,7 +1099,7 @@ Result PgSession::DoRunAsync( // as writing to ysql catalog so we can avoid incrementing the catalog version. has_catalog_write_ops_in_ddl_mode_ = has_catalog_write_ops_in_ddl_mode_ || - (is_ddl && !IsReadOnly(*op) && is_ysql_catalog_table); + (is_ddl && op->is_write() && is_ysql_catalog_table); return runner.Apply(table, op); }; RETURN_NOT_OK(processor(first_table_op)); From f666db8cc5140000e84f4ee329fa929dba0a8a98 Mon Sep 17 00:00:00 2001 From: Jason Kim Date: Mon, 19 May 2025 17:59:34 -0700 Subject: [PATCH 138/146] [#27294] YSQL: fix perf regression on some secondary index scans Summary: Commit b5b495dbd9b4ac6ec197705f89c795d9510c799e incorrectly translates some logic, causing system and copartition secondary index scans to lose the single-RPC optimization to embed a table and index scan together in the same RPC. Fix the logic and add tests to cover the cases. Jira: DB-16789 Test Plan: On Almalinux 8: ./yb_build.sh fastdebug --gcc11 daemons initdb \ --cxx-test pgwrapper_pg_libpq-test \ --gtest_filter PgLibPqTest.Embedded\* Close: #27294 Reviewers: sanketh Reviewed By: sanketh Subscribers: yql Differential Revision: https://phorge.dev.yugabyte.com/D44085 --- .../src/backend/access/yb_access/yb_scan.c | 39 ++++--- src/yb/yql/pgwrapper/pg_libpq-test.cc | 103 ++++++++++++++++++ 2 files changed, 124 insertions(+), 18 deletions(-) diff --git a/src/postgres/src/backend/access/yb_access/yb_scan.c b/src/postgres/src/backend/access/yb_access/yb_scan.c index 5a55ba980390..2b5e99184744 100644 --- a/src/postgres/src/backend/access/yb_access/yb_scan.c +++ b/src/postgres/src/backend/access/yb_access/yb_scan.c @@ -784,31 +784,34 @@ YbIsScanningEmbeddedIdx(Relation table, Relation index) yb_table_properties_table = YbGetTableProperties(table); /* - * - All system tables and indexes are specially colocated to the sys - * catalog tablet. - * - Some indexes may use copartitioning, which shards the table and index - * together. + * There are a few cases where embedding happens. + * 1. System table: all system tables and indexes are specially colocated + * to the sys catalog tablet. + * 2. Copartitioning: some indexes may use copartitioning, which shards the + * table and index together. */ is_embedded = (IsSystemRelation(table) || - yb_table_properties_table->is_colocated || (index && index->rd_indam->yb_amiscopartitioned)); /* - * - If ysql_enable_colocated_tables_with_tablespaces, check that the table - * and index are in the same colocation tablet using tablegroup_oid. - * - TODO(#25940): index->rd_index->indisprimary seems irrelevant and - * should not be a pass condition. - * - Else, simply check for colocation of the table because the index - * should follow the table. - * - TODO(#25940): index being NULL or pk index should not be a pass - * condition. - * - TODO(#25940): the gflag could be turned on/off in the lifetime of - * a cluster, so it shouldn't even be involved in this logic. 
Everything - * should be validated, likely using the tablegroup_oid check, assuming - * that holds even when indexes are created when the flag is false. + * 3. Colocation: the table and index may be colocated to the same tablet: + * - If ysql_enable_colocated_tables_with_tablespaces, check that the + * table and index are in the same colocation tablet using + * tablegroup_oid. + * - TODO(#25940): index->rd_index->indisprimary seems irrelevant and + * should not be a pass condition. + * - Else, simply check for colocation of the table because the index + * should follow the table. + * - TODO(#25940): index being NULL or pk index should not be a pass + * condition. + * - TODO(#25940): the gflag could be turned on/off in the lifetime of + * a cluster, so it shouldn't even be involved in this logic. + * Everything should be validated, likely using the tablegroup_oid + * check, assuming that holds even when indexes are created when the + * flag is false. */ if (*YBCGetGFlags()->ysql_enable_colocated_tables_with_tablespaces) - is_embedded &= (yb_table_properties_table->is_colocated && + is_embedded |= (yb_table_properties_table->is_colocated && ((index && index->rd_index->indisprimary) || (index && (YbGetTableProperties(index)->tablegroup_oid == diff --git a/src/yb/yql/pgwrapper/pg_libpq-test.cc b/src/yb/yql/pgwrapper/pg_libpq-test.cc index 4c290c6a5269..484ef0b0ae0e 100644 --- a/src/yb/yql/pgwrapper/pg_libpq-test.cc +++ b/src/yb/yql/pgwrapper/pg_libpq-test.cc @@ -165,6 +165,8 @@ class PgLibPqTest : public LibPqTestBase { void TestSecondaryIndexInsertSelect(); + Status TestEmbeddedIndexScanOptimization(bool is_colocated_with_tablespaces); + void KillPostmasterProcessOnTservers(); Result GetPostmasterPidViaShell(pid_t backend_pid); @@ -947,6 +949,107 @@ TEST_F_EX(PgLibPqTest, SecondaryIndexInsertSelectWithSharedMem, PgLibPqWithShare TestSecondaryIndexInsertSelect(); } +Status PgLibPqTest::TestEmbeddedIndexScanOptimization(bool is_colocated_with_tablespaces) { + auto conn = VERIFY_RESULT(Connect()); + constexpr auto kPgCatalogOid = 11; + + // Secondary index scan on system table. + auto query = Format( + "EXPLAIN (ANALYZE, DIST, FORMAT JSON)" + " SELECT * FROM pg_class WHERE relname = 'pg_class' AND relnamespace = $0", + kPgCatalogOid); + // First run is to warm up the cache. + RETURN_NOT_OK(conn.FetchRow(query)); + // Second run is the real test. + auto explain_str = VERIFY_RESULT(conn.FetchRow(query)); + rapidjson::Document explain_json; + explain_json.Parse(explain_str.c_str()); + auto scan_type = std::string(explain_json[0]["Plan"]["Node Type"].GetString()); + SCHECK_EQ(scan_type, "Index Scan", + IllegalState, + "Unexpected scan type"); + SCHECK_EQ(explain_json[0]["Catalog Read Requests"].GetDouble(), 1, + IllegalState, + "Unexpected number of catalog read requests"); + + // Secondary index scan on copartitioned table. 
+ RETURN_NOT_OK(conn.Execute("CREATE EXTENSION vector")); + RETURN_NOT_OK(conn.Execute( + "CREATE TABLE vector_test (id int PRIMARY KEY, embedding vector(3)) SPLIT INTO 2 TABLETS")); + RETURN_NOT_OK(conn.Execute( + "CREATE INDEX ON vector_test USING ybhnsw (embedding vector_l2_ops)")); + RETURN_NOT_OK(conn.Execute("INSERT INTO vector_test VALUES (1, '[1, 2, 3]')")); + explain_str = VERIFY_RESULT(conn.FetchRow( + "EXPLAIN (ANALYZE, DIST, FORMAT JSON)" + " SELECT * FROM vector_test ORDER BY embedding <-> '[0, 0, 0]' LIMIT 1")); + explain_json.Parse(explain_str.c_str()); + scan_type = std::string(explain_json[0]["Plan"]["Node Type"].GetString()); + SCHECK_EQ(scan_type, "Limit", + IllegalState, + "Unexpected scan type"); + scan_type = std::string(explain_json[0]["Plan"]["Plans"][0]["Node Type"].GetString()); + SCHECK_EQ(scan_type, "Index Scan", + IllegalState, + "Unexpected scan type"); + SCHECK_EQ(explain_json[0]["Storage Read Requests"].GetDouble(), 1, + IllegalState, + "Unexpected number of storage read requests"); + + // Secondary index scan on colocated table and index. + RETURN_NOT_OK(conn.Execute("CREATE DATABASE colodb WITH colocation = true")); + conn = VERIFY_RESULT(ConnectToDB("colodb")); + RETURN_NOT_OK(conn.Execute( + "CREATE TABLE colo_test (id int PRIMARY KEY, value TEXT)")); + RETURN_NOT_OK(conn.Execute("CREATE INDEX ON colo_test (value)")); + RETURN_NOT_OK(conn.Execute("INSERT INTO colo_test VALUES (1, 'hi')")); + query = "EXPLAIN (ANALYZE, DIST, FORMAT JSON) SELECT * FROM colo_test WHERE value = 'hi'"; + explain_str = VERIFY_RESULT(conn.FetchRow(query)); + explain_json.Parse(explain_str.c_str()); + scan_type = std::string(explain_json[0]["Plan"]["Node Type"].GetString()); + SCHECK_EQ(scan_type, "Index Scan", + IllegalState, + "Unexpected scan type"); + SCHECK_EQ(explain_json[0]["Storage Read Requests"].GetDouble(), 1, + IllegalState, + "Unexpected number of storage read requests"); + + // Secondary index scan on colocated table and index on different tablespaces. 
+ if (is_colocated_with_tablespaces) { + RETURN_NOT_OK(conn.Execute("DROP INDEX colo_test_value_idx")); + RETURN_NOT_OK(conn.Execute("CREATE TABLESPACE spc LOCATION '/dne'")); + RETURN_NOT_OK(conn.Execute("CREATE INDEX ON colo_test (value) TABLESPACE spc")); + explain_str = VERIFY_RESULT(conn.FetchRow(query)); + explain_json.Parse(explain_str.c_str()); + scan_type = std::string(explain_json[0]["Plan"]["Node Type"].GetString()); + SCHECK_EQ(scan_type, "Index Scan", + IllegalState, + "Unexpected scan type"); + SCHECK_GT(explain_json[0]["Storage Read Requests"].GetDouble(), 1, + IllegalState, + "Unexpected number of storage read requests"); + } + + return Status::OK(); +} + +TEST_F(PgLibPqTest, EmbeddedIndexScanOptimizationColocatedWithTablespacesFalse) { + ASSERT_OK(TestEmbeddedIndexScanOptimization(false)); +} + +class PgLibPqColocatedTablesWithTablespacesTest : public PgLibPqTest { + void UpdateMiniClusterOptions(ExternalMiniClusterOptions* options) override { + const auto flag = "--ysql_enable_colocated_tables_with_tablespaces=true"s; + options->extra_master_flags.push_back(flag); + options->extra_tserver_flags.push_back(flag); + } +}; + +TEST_F_EX(PgLibPqTest, + EmbeddedIndexScanOptimizationColocatedWithTablespacesTrue, + PgLibPqColocatedTablesWithTablespacesTest) { + ASSERT_OK(TestEmbeddedIndexScanOptimization(true)); +} + void AssertRows(PGConn *conn, int expected_num_rows) { auto res = ASSERT_RESULT(conn->Fetch("SELECT * FROM test")); ASSERT_EQ(PQntuples(res.get()), expected_num_rows); From eae8de9a416be27872cc2e805eb5cac01ee13cd2 Mon Sep 17 00:00:00 2001 From: Dwight Hodge Date: Tue, 20 May 2025 21:59:59 -0400 Subject: [PATCH 139/146] format --- .../develop/best-practices-develop/_index.md | 49 +++++++++++++++++ .../administration.md | 16 +++--- .../best-practices-ycql.md | 10 ++-- .../clients.md | 14 ++--- .../data-modeling-perf.md | 54 ++++++++++--------- .../develop/best-practices-ysql/_index.md | 6 --- 6 files changed, 99 insertions(+), 50 deletions(-) create mode 100644 docs/content/stable/develop/best-practices-develop/_index.md rename docs/content/stable/develop/{best-practices-ysql => best-practices-develop}/administration.md (85%) rename docs/content/stable/develop/{ => best-practices-develop}/best-practices-ycql.md (92%) rename docs/content/stable/develop/{best-practices-ysql => best-practices-develop}/clients.md (82%) rename docs/content/stable/develop/{best-practices-ysql => best-practices-develop}/data-modeling-perf.md (75%) delete mode 100644 docs/content/stable/develop/best-practices-ysql/_index.md diff --git a/docs/content/stable/develop/best-practices-develop/_index.md b/docs/content/stable/develop/best-practices-develop/_index.md new file mode 100644 index 000000000000..92efa422f34b --- /dev/null +++ b/docs/content/stable/develop/best-practices-develop/_index.md @@ -0,0 +1,49 @@ +--- +title: Best practices for applications +headerTitle: Best practices +linkTitle: Best practices +description: Tips and tricks to build applications +headcontent: Tips and tricks to build applications for high performance and availability +menu: + stable: + identifier: best-practices-develop + parent: develop + weight: 570 +type: indexpage +--- + +## YSQL + +{{}} + + {{}} + + {{}} + + {{}} + +{{}} + +## YCQL + +{{}} + + {{}} + +{{}} diff --git a/docs/content/stable/develop/best-practices-ysql/administration.md b/docs/content/stable/develop/best-practices-develop/administration.md similarity index 85% rename from docs/content/stable/develop/best-practices-ysql/administration.md rename to 
docs/content/stable/develop/best-practices-develop/administration.md index 073dfc17ede0..f2dcd7f3d0bd 100644 --- a/docs/content/stable/develop/best-practices-ysql/administration.md +++ b/docs/content/stable/develop/best-practices-develop/administration.md @@ -1,17 +1,19 @@ --- -title: Best practices for YSQL DB administrators -headerTitle: Best practices -linkTitle: Best practices +title: Best practices for YSQL database administrators +headerTitle: Best practices for YSQL database administrators +linkTitle: YSQL database administrators description: Tips and tricks to build YSQL applications -headcontent: Tips and tricks to administer YSQL DBs +headcontent: Tips and tricks for administering YSQL databases menu: stable: - identifier: best-practices-ysql-db-admins - parent: best-practices-ysql - weight: 570 + identifier: best-practices-ysql-administration + parent: best-practices-develop + weight: 30 type: docs --- +Database administrators can fine-tune YugabyteDB deployments for better reliability, performance, and operational efficiency by following targeted best practices. This guide outlines key recommendations for configuring single-AZ environments, optimizing memory use, accelerating CI/CD tests, and safely managing concurrent DML and DDL operations. These tips are designed to help DBAs maintain stable, scalable YSQL clusters in real-world and test scenarios alike. + ## Single availability zone (AZ) deployments In single AZ deployments, you need to set the [yb-tserver](../../reference/configuration/yb-tserver) flag `--durable_wal_write=true` to not lose data if the whole data center goes down (For example, power failure). diff --git a/docs/content/stable/develop/best-practices-ycql.md b/docs/content/stable/develop/best-practices-develop/best-practices-ycql.md similarity index 92% rename from docs/content/stable/develop/best-practices-ycql.md rename to docs/content/stable/develop/best-practices-develop/best-practices-ycql.md index f70cb4a3d805..0591895015f2 100644 --- a/docs/content/stable/develop/best-practices-ycql.md +++ b/docs/content/stable/develop/best-practices-develop/best-practices-ycql.md @@ -1,18 +1,18 @@ --- title: Best practices for YCQL applications -headerTitle: Best practices -linkTitle: Best practices +headerTitle: Best practices for YCQL applications +linkTitle: YCQL applications description: Tips and tricks to build YCQL applications headcontent: Tips and tricks to build YCQL applications for high performance and availability menu: stable: identifier: best-practices-ycql - parent: develop - weight: 571 + parent: best-practices-develop + weight: 40 type: docs --- -{{}} +To build high-performance and scalable applications using YCQL, developers should follow key schema design and operational best practices tailored for YugabyteDB's distributed architecture. This guide covers strategies for using indexes efficiently, optimizing read/write paths with batching and prepared statements, managing JSON and collection data types, and ensuring memory settings align with your query layer. These practices help ensure reliable performance, especially under real-world workloads. 
## Global secondary indexes diff --git a/docs/content/stable/develop/best-practices-ysql/clients.md b/docs/content/stable/develop/best-practices-develop/clients.md similarity index 82% rename from docs/content/stable/develop/best-practices-ysql/clients.md rename to docs/content/stable/develop/best-practices-develop/clients.md index a361510b1a2b..8a327bee3f88 100644 --- a/docs/content/stable/develop/best-practices-ysql/clients.md +++ b/docs/content/stable/develop/best-practices-develop/clients.md @@ -1,17 +1,19 @@ --- title: Best practices for YSQL clients -headerTitle: Best practices -linkTitle: Best practices -description: Tips and tricks to build YSQL applications -headcontent: Tips and tricks to build YSQL applications for high performance and availability +headerTitle: Best practices for YSQL clients +linkTitle: YSQL clients +description: Tips and tricks for configuring YSQL clients +headcontent: Tips and tricks for configuring YSQL clients menu: stable: identifier: best-practices-ysql-clients - parent: best-practices-ysql - weight: 570 + parent: best-practices-develop + weight: 20 type: docs --- +Client-side configuration plays a critical role in the performance, scalability, and resilience of YSQL applications. This guide highlights essential best practices for managing connections, balancing load across nodes, and handling failovers efficiently using YugabyteDB's smart drivers and connection pooling. Whether you're deploying in a single region or across multiple data centers, these tips will help ensure your applications make the most of YugabyteDB's distributed architecture. + +## Load balance and failover using smart drivers YugabyteDB [smart drivers](../../drivers-orms/smart-drivers/) provide advanced cluster-aware load-balancing capabilities that enable your applications to send requests to multiple nodes in the cluster just by connecting to one node. You can also set a fallback hierarchy by assigning priority to specific regions and ensuring that connections are made to the region with the highest priority, and then fall back to the region with the next priority in case the high-priority region fails.
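A quick way to see the topology information that these cluster-aware drivers work from is to query the `yb_servers()` function from any open connection. This is a minimal illustrative sketch only; the connection properties that actually turn on load balancing and region fallback are named differently by each smart driver and are described in the smart drivers documentation linked above.

```sql
-- List the nodes that a cluster-aware smart driver can spread connections
-- across, along with their cloud/region/zone placement.
-- Run from any single connection to the cluster.
SELECT * FROM yb_servers();
```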
diff --git a/docs/content/stable/develop/best-practices-ysql/data-modeling-perf.md b/docs/content/stable/develop/best-practices-develop/data-modeling-perf.md similarity index 75% rename from docs/content/stable/develop/best-practices-ysql/data-modeling-perf.md rename to docs/content/stable/develop/best-practices-develop/data-modeling-perf.md index 22c423940021..7822b2972a7c 100644 --- a/docs/content/stable/develop/best-practices-ysql/data-modeling-perf.md +++ b/docs/content/stable/develop/best-practices-develop/data-modeling-perf.md @@ -1,23 +1,25 @@ --- title: Best practices for Data Modeling and performance of YSQL applications -headerTitle: Best practices -linkTitle: Best practices -description: Tips and tricks to build YSQL applications -headcontent: Tips and tricks to build YSQL applications for high performance and availability +headerTitle: Best practices for Data Modeling and performance of YSQL applications +linkTitle: YSQL data modeling +description: Tips and tricks for building YSQL applications +headcontent: Tips and tricks for building YSQL applications menu: stable: - identifier: best-practices-ysql-data-modeling-perf - parent: best-practices-ysql - weight: 570 + identifier: data-modeling-perf + parent: best-practices-develop + weight: 10 type: docs --- +Designing efficient, high-performance YSQL applications requires thoughtful data modeling and a deep understanding of how YugabyteDB handles distributed workloads. This guide offers a collection of best practices, from leveraging colocation and indexing techniques to optimizing transactions and parallelizing queries, that can help you build scalable, globally distributed applications with low latency and high availability. Whether you're developing new applications or tuning existing ones, these tips will help you make the most of YSQL's capabilities + ## Use application patterns Running applications in multiple data centers with data split across them is not a trivial task. When designing global applications, choose a suitable design pattern for your application from a suite of battle-tested design paradigms, including [Global database](../build-global-apps/global-database), [Multi-master](../build-global-apps/active-active-multi-master), [Standby cluster](../build-global-apps/active-active-single-master), [Duplicate indexes](../build-global-apps/duplicate-indexes), [Follower reads](../build-global-apps/follower-reads), and more. You can also combine these patterns as per your needs. {{}} -For more details, see [Build global applications](../build-global-apps). +For more details, see [Build global applications](../../build-global-apps). {{}} ## Colocation @@ -25,14 +27,14 @@ For more details, see [Build global applications](../build-global-apps). Colocated tables optimize latency and performance for data access by reducing the need for additional trips across the network for small tables. Additionally, it reduces the overhead of creating a tablet for every relation (tables, indexes, and so on) and their storage per node. {{}} -For more details, see [Colocation](../../explore/colocation/). +For more details, see [Colocation](../../../explore/colocation/). {{}} ## Faster reads with covering indexes When a query uses an index to look up rows faster, the columns that are not present in the index are fetched from the original table. This results in additional round trips to the main table leading to increased latency. 
-Use [covering indexes](../../explore/ysql-language-features/indexes-constraints/covering-index-ysql/) to store all the required columns needed for your queries in the index. Indexing converts a standard Index-Scan to an [Index-Only-Scan](https://dev.to/yugabyte/boosts-secondary-index-queries-with-index-only-scan-5e7j). +Use [covering indexes](../../../explore/ysql-language-features/indexes-constraints/covering-index-ysql/) to store all the required columns needed for your queries in the index. Indexing converts a standard Index-Scan to an [Index-Only-Scan](https://dev.to/yugabyte/boosts-secondary-index-queries-with-index-only-scan-5e7j). {{}} For more details, see [Avoid trips to the table with covering indexes](https://www.yugabyte.com/blog/multi-region-database-deployment-best-practices/#avoid-trips-to-the-table-with-covering-indexes). @@ -43,7 +45,7 @@ For more details, see [Avoid trips to the table with covering indexes](https://w A partial index is an index that is built on a subset of a table and includes only rows that satisfy the condition specified in the WHERE clause. This speeds up any writes to the table and reduces the size of the index, thereby improving speed for read queries that use the index. {{}} -For more details, see [Partial indexes](../../explore/ysql-language-features/indexes-constraints/partial-index-ysql/). +For more details, see [Partial indexes](../../../explore/ysql-language-features/indexes-constraints/partial-index-ysql/). {{}} ## Distinct keys with unique indexes @@ -53,14 +55,14 @@ If you need values in some of the columns to be unique, you can specify your ind When a unique index is applied to two or more columns, the combined values in these columns can't be duplicated in multiple rows. Note that because a NULL value is treated as a distinct value, you can have multiple NULL values in a column with a unique index. {{}} -For more details, see [Unique indexes](../../explore/ysql-language-features/indexes-constraints/unique-index-ysql/). +For more details, see [Unique indexes](../../../explore/ysql-language-features/indexes-constraints/unique-index-ysql/). {{}} ## Faster sequences with server-level caching Sequences in databases automatically generate incrementing numbers, perfect for generating unique values like order numbers, user IDs, check numbers, and so on. They prevent multiple application instances from concurrently generating duplicate values. However, generating sequences on a database that is spread across regions could have a latency impact on your applications. -Enable [server-level caching](../../api/ysql/exprs/func_nextval/#caching-values-on-the-yb-tserver) to improve the speed of sequences, and also avoid discarding many sequence values when an application disconnects. +Enable [server-level caching](../../../api/ysql/exprs/func_nextval/#caching-values-on-the-yb-tserver) to improve the speed of sequences, and also avoid discarding many sequence values when an application disconnects. {{}} For a demo, see the YugabyteDB Friday Tech Talk on [Scaling sequences with server-level caching](https://www.youtube.com/watch?v=hs-CU3vjMQY&list=PL8Z3vt4qJTkLTIqB9eTLuqOdpzghX8H40&index=76). @@ -83,15 +85,15 @@ UPDATE txndemo SET v = v + 3 WHERE k=1 RETURNING v; ``` {{}} -For more details, see [Fast single-row transactions](../../develop/learn/transactions/transactions-performance-ysql/#fast-single-row-transactions). 
+For more details, see [Fast single-row transactions](../../../develop/learn/transactions/transactions-performance-ysql/#fast-single-row-transactions). {{}} ## Delete older data quickly with partitioning -Use [table partitioning](../../explore/ysql-language-features/advanced-features/partitions/) to split your data into multiple partitions according to date so that you can quickly delete older data by dropping the partition. +Use [table partitioning](../../../explore/ysql-language-features/advanced-features/partitions/) to split your data into multiple partitions according to date so that you can quickly delete older data by dropping the partition. {{}} -For more details, see [Partition data by time](../data-modeling/common-patterns/timeseries/partitioning-by-time/). +For more details, see [Partition data by time](../../data-modeling/common-patterns/timeseries/partitioning-by-time/). {{}} ## Use the right data types for partition keys @@ -161,12 +163,12 @@ SELECT * FROM products; ``` {{}} -For more information, see [Data manipulation](../../explore/ysql-language-features/data-manipulation). +For more information, see [Data manipulation](../../../explore/ysql-language-features/data-manipulation). {{}} ## Re-use query plans with prepared statements -Whenever possible, use [prepared statements](../../api/ysql/the-sql-language/statements/perf_prepare/) to ensure that YugabyteDB can re-use the same query plan and eliminate the need for a server to parse the query on each operation. +Whenever possible, use [prepared statements](../../../api/ysql/the-sql-language/statements/perf_prepare/) to ensure that YugabyteDB can re-use the same query plan and eliminate the need for a server to parse the query on each operation. {{}} @@ -191,12 +193,12 @@ For more details, see [Prepared statements in PL/pgSQL](https://dev.to/aws-heroe Use BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE for batch or long-running jobs, which need a consistent snapshot of the database without interfering, or being interfered with by other transactions. {{}} -For more details, see [Large scans and batch jobs](../../develop/learn/transactions/transactions-performance-ysql/#large-scans-and-batch-jobs). +For more details, see [Large scans and batch jobs](../../../develop/learn/transactions/transactions-performance-ysql/#large-scans-and-batch-jobs). {{}} ## JSONB datatype -Use the [JSONB](../../api/ysql/datatypes/type_json) datatype to model JSON data; that is, data that doesn't have a set schema but has a truly dynamic schema. +Use the [JSONB](../../../api/ysql/datatypes/type_json) datatype to model JSON data; that is, data that doesn't have a set schema but has a truly dynamic schema. JSONB in YSQL is the same as the [JSONB datatype in PostgreSQL](https://www.postgresql.org/docs/11/datatype-json.html). @@ -220,7 +222,7 @@ YSQL also supports JSONB expression indexes, which can be used to speed up data For large or batch SELECT or DELETE that have to scan all tablets, you can parallelize your operation by creating queries that affect only a specific part of the tablet using the `yb_hash_code` function. {{}} -For more details, see [Distributed parallel queries](../../api/ysql/exprs/func_yb_hash_code/#distributed-parallel-queries). +For more details, see [Distributed parallel queries](../../../api/ysql/exprs/func_yb_hash_code/#distributed-parallel-queries). 
{{}} ## Row size limit @@ -233,7 +235,7 @@ For consistent latency or performance, it is recommended to size columns in the ## TRUNCATE tables instead of DELETE -[TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/) deletes the database files that store the table data and is much faster than [DELETE](../../api/ysql/the-sql-language/statements/dml_delete/), which inserts a _delete marker_ for each row in transactions that are later removed from storage during compaction runs. +[TRUNCATE](../../../api/ysql/the-sql-language/statements/ddl_truncate/) deletes the database files that store the table data and is much faster than [DELETE](../../../api/ysql/the-sql-language/statements/dml_delete/), which inserts a _delete marker_ for each row in transactions that are later removed from storage during compaction runs. {{}} Currently, TRUNCATE is not transactional. Also, similar to PostgreSQL, TRUNCATE is not MVCC-safe. For more details, see [TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/). @@ -243,14 +245,14 @@ Currently, TRUNCATE is not transactional. Also, similar to PostgreSQL, TRUNCATE Each table and index is split into tablets and each tablet has overhead. The more tablets you need, the bigger your universe will need to be. See [allowing for tablet replica overheads](#allowing-for-tablet-replica-overheads) for how the number of tablets affects how big your universe needs to be. -Each table and index consists of several tablets based on the [--ysql_num_shards_per_tserver](../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. +Each table and index consists of several tablets based on the [--ysql_num_shards_per_tserver](../../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. You can try one of the following methods to reduce the number of tablets: -- Use [colocation](../../explore/colocation/) to group small tables into 1 tablet. -- Reduce number of tablets-per-table using the [--ysql_num_shards_per_tserver](../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. +- Use [colocation](../../../explore/colocation/) to group small tables into 1 tablet. +- Reduce number of tablets-per-table using the [--ysql_num_shards_per_tserver](../../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. - Use the [SPLIT INTO](../../api/ysql/the-sql-language/statements/ddl_create_table/#split-into) clause when creating a table. -- Start with few tablets and use [automatic tablet splitting](../../architecture/docdb-sharding/tablet-splitting/). +- Start with few tablets and use [automatic tablet splitting](../../../architecture/docdb-sharding/tablet-splitting/). Note that multiple tablets can allow work to proceed in parallel so you may not want every table to have only one tablet. 
diff --git a/docs/content/stable/develop/best-practices-ysql/_index.md b/docs/content/stable/develop/best-practices-ysql/_index.md deleted file mode 100644 index da249dff078c..000000000000 --- a/docs/content/stable/develop/best-practices-ysql/_index.md +++ /dev/null @@ -1,6 +0,0 @@ - -[Data Modeling & Perf](../data-modeling-perf) - -[Clients](../clients) - -[DB Administrators](../administration) From c617ef8a2802ff922e76c44006ee7038779260f1 Mon Sep 17 00:00:00 2001 From: svarshney Date: Wed, 21 May 2025 08:52:19 +0530 Subject: [PATCH 140/146] [PLAT-17614] Implement cGroup via node-agent Summary: Implement cGroup via node-agent Test Plan: manual testing Reviewers: nsingh Reviewed By: nsingh Differential Revision: https://phorge.dev.yugabyte.com/D44096 --- .../tasks/prepare-configure-server.yml | 2 - managed/node-agent/app/server/rpc.go | 15 ++ .../node-agent/app/task/helpers/yb_helper.go | 22 ++- .../node-agent/app/task/server_gflags_task.go | 21 ++- managed/node-agent/app/task/setup_cgroups.go | 133 ++++++++++++++++++ managed/node-agent/proto/server.proto | 2 + managed/node-agent/proto/yb.proto | 9 ++ .../templates/server/yb-ysql-cgroup.service | 9 ++ .../tasks/payload/NodeAgentRpcPayload.java | 16 +++ .../subtasks/AnsibleConfigureServers.java | 7 + .../yugabyte/yw/common/NodeAgentClient.java | 14 ++ .../com/yugabyte/yw/common/NodeManager.java | 6 +- 12 files changed, 234 insertions(+), 22 deletions(-) create mode 100644 managed/node-agent/app/task/setup_cgroups.go create mode 100644 managed/node-agent/resources/templates/server/yb-ysql-cgroup.service diff --git a/managed/devops/roles/configure-cluster-server/tasks/prepare-configure-server.yml b/managed/devops/roles/configure-cluster-server/tasks/prepare-configure-server.yml index a3ab79b1fbcd..ce52330e1cb8 100644 --- a/managed/devops/roles/configure-cluster-server/tasks/prepare-configure-server.yml +++ b/managed/devops/roles/configure-cluster-server/tasks/prepare-configure-server.yml @@ -199,7 +199,6 @@ shell: cmd: "loginctl enable-linger {{ user_name }}" - # Todo: In node-agent. - name: Configure | Setup OpenTelemetry Collector include_role: name: manage_otel_collector @@ -312,7 +311,6 @@ when: (systemd_option and not ((ansible_os_family == 'RedHat' and ansible_distribution_major_version == '7') or (ansible_distribution == 'Amazon' and ansible_distribution_major_version == '2'))) -# Todo: In node-agent. - name: Configure | setup-postgres-cgroups include_role: name: setup-cgroup diff --git a/managed/node-agent/app/server/rpc.go b/managed/node-agent/app/server/rpc.go index 069b1b93dce1..66f88320cf85 100644 --- a/managed/node-agent/app/server/rpc.go +++ b/managed/node-agent/app/server/rpc.go @@ -387,6 +387,21 @@ func (server *RPCServer) SubmitTask( res.TaskId = taskID return res, nil } + setupCGroupInput := req.GetSetupCGroupInput() + if setupCGroupInput != nil { + SetupCgroupHandler := task.NewSetupCgroupHandler( + setupCGroupInput, + username, + ) + err := task.GetTaskManager().Submit(ctx, taskID, SetupCgroupHandler) + if err != nil { + util.FileLogger(). 
+ Errorf(ctx, "Error in running setup cGroup - %s", err.Error()) + return res, status.Error(codes.Internal, err.Error()) + } + res.TaskId = taskID + return res, nil + } return res, status.Error(codes.Unimplemented, "Unknown task") } diff --git a/managed/node-agent/app/task/helpers/yb_helper.go b/managed/node-agent/app/task/helpers/yb_helper.go index 4da8c69d27db..386339443c40 100644 --- a/managed/node-agent/app/task/helpers/yb_helper.go +++ b/managed/node-agent/app/task/helpers/yb_helper.go @@ -20,10 +20,11 @@ type Release struct { // OSInfo represents parsed OS release info type OSInfo struct { - ID string // e.g., "ubuntu" - Family string // e.g., "debian" - Pretty string // e.g., "Ubuntu 22.04.4 LTS" - Arch string // e.g., "x86_64" + ID string // e.g., "ubuntu" + Family string // e.g., "debian" + Pretty string // e.g., "Ubuntu 22.04.4 LTS" + Arch string // e.g., "x86_64" + Version string // e.g., "22" } var releaseFormat = regexp.MustCompile(`yugabyte[-_]([\d]+\.[\d]+\.[\d]+\.[\d]+-[a-z0-9]+)`) @@ -95,11 +96,15 @@ func GetOSInfo() (*OSInfo, error) { val := strings.Trim(keyVal[1], `"`) switch key { case "ID": - info.ID = val + info.ID = strings.ToLower(val) case "ID_LIKE": - info.Family = val + info.Family = strings.ToLower(val) case "PRETTY_NAME": info.Pretty = val + case "VERSION_ID": + if parts := strings.SplitN(val, ".", 2); len(parts) > 0 { + info.Version = parts[0] + } } } } @@ -109,3 +114,8 @@ func GetOSInfo() (*OSInfo, error) { info.Arch = runtime.GOARCH return info, nil } + +func IsRhel9(osInfo *OSInfo) bool { + return (strings.Contains(osInfo.Family, "rhel") || strings.Contains(osInfo.ID, "rhel")) && + osInfo.Version == "9" +} diff --git a/managed/node-agent/app/task/server_gflags_task.go b/managed/node-agent/app/task/server_gflags_task.go index e60299854d33..a0e180a27c6e 100644 --- a/managed/node-agent/app/task/server_gflags_task.go +++ b/managed/node-agent/app/task/server_gflags_task.go @@ -4,6 +4,7 @@ package task import ( "context" + "fmt" "io/fs" "node-agent/app/task/module" pb "node-agent/generated/service" @@ -11,7 +12,6 @@ import ( "path/filepath" "strconv" "strings" - "sync/atomic" ) const ( @@ -25,10 +25,9 @@ var ( ) type ServerGflagsHandler struct { - taskStatus *atomic.Value - param *pb.ServerGFlagsInput - username string - logOut util.Buffer + param *pb.ServerGFlagsInput + username string + logOut util.Buffer } // NewServerGflagsHandler returns a new instance of ServerControlHandler. 
@@ -63,7 +62,7 @@ func (handler *ServerGflagsHandler) postmasterCgroupPath(ctx context.Context) (s User: handler.username, Desc: "DetermineCgroupVersion", Cmd: "stat", - Args: []string{"-fc", "%%T", "/sys/fs/cgroup/"}, + Args: []string{"-fc", "%T", "/sys/fs/cgroup/"}, StdOut: util.NewBuffer(module.MaxBufferCapacity), } err = cmdInfo.RunCmd(ctx) @@ -75,11 +74,9 @@ func (handler *ServerGflagsHandler) postmasterCgroupPath(ctx context.Context) (s stdout := strings.TrimSpace(cmdInfo.StdOut.String()) if stdout == "cgroup2fs" { postmasterCgroupPath = filepath.Join( - "/sys/fs/cgroup/user.slice/user-", - userID, - ".slice/user@", - userID, - ".service/ysql") + fmt.Sprintf("user.slice/user-%s.slice", userID), + fmt.Sprintf("user@%s.service", userID), + "ysql") } return postmasterCgroupPath, nil } @@ -125,7 +122,7 @@ func (handler *ServerGflagsHandler) Handle( if err != nil { return nil, err } - processedGflags := map[string]any{} + processedGflags = map[string]string{} for k, v := range gflags { if k == "postmaster_cgroup" { processedGflags["postmaster_cgroup"] = path diff --git a/managed/node-agent/app/task/setup_cgroups.go b/managed/node-agent/app/task/setup_cgroups.go new file mode 100644 index 000000000000..fa6b57e14ebb --- /dev/null +++ b/managed/node-agent/app/task/setup_cgroups.go @@ -0,0 +1,133 @@ +// Copyright (c) YugaByte, Inc. + +package task + +import ( + "context" + "errors" + "fmt" + "io/fs" + "node-agent/app/task/helpers" + "node-agent/app/task/module" + pb "node-agent/generated/service" + "node-agent/util" + "path/filepath" + "strconv" + "strings" +) + +type SetupCgroupHandler struct { + param *pb.SetupCGroupInput + username string + logOut util.Buffer +} + +func NewSetupCgroupHandler(param *pb.SetupCGroupInput, username string) *SetupCgroupHandler { + return &SetupCgroupHandler{ + param: param, + username: username, + logOut: util.NewBuffer(module.MaxBufferCapacity), + } +} + +// CurrentTaskStatus implements the AsyncTask method. +func (h *SetupCgroupHandler) CurrentTaskStatus() *TaskStatus { + return &TaskStatus{ + Info: h.logOut, + ExitStatus: &ExitStatus{}, + } +} + +func (h *SetupCgroupHandler) String() string { + return "Setup cGroup Task" +} + +func (h *SetupCgroupHandler) Handle(ctx context.Context) (*pb.DescribeTaskResponse, error) { + util.FileLogger().Info(ctx, "Starting setup cGroup handler.") + + // 1) Retrieve OS information. 
+ osInfo, err := helpers.GetOSInfo() + if err != nil { + err := errors.New("error retrieving OS information") + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + // 2) figure out home dir + home := "" + if h.param.GetYbHomeDir() != "" { + home = h.param.GetYbHomeDir() + } else { + err := errors.New("ybHomeDir is required") + util.FileLogger().Error(ctx, err.Error()) + return nil, err + } + + // Setup cGroups for rhel:9 deployments + if helpers.IsRhel9(osInfo) { + h.logOut.WriteLine("Determining cgroup version") + cmdInfo := &module.CommandInfo{ + User: h.username, + Desc: "DetermineCgroupVersion", + Cmd: "stat", + Args: []string{"-fc", "%T", "/sys/fs/cgroup/"}, + StdOut: util.NewBuffer(module.MaxBufferCapacity), + } + util.FileLogger().Infof(ctx, "Running command %v", cmdInfo) + err = cmdInfo.RunCmd(ctx) + if err != nil { + return nil, err + } + + userInfo, _ := util.UserInfo(h.username) + stdout := strings.TrimSpace(cmdInfo.StdOut.String()) + userID := strconv.Itoa(int(userInfo.UserID)) + cGroupPath := "memory/ysql" + memMax := "memory.limit_in_bytes" + memSwapMap := "memory.memsw.limit_in_bytes" + + if stdout == "cgroup2fs" { + cGroupPath = filepath.Join( + fmt.Sprintf("user.slice/user-%s.slice", userID), + fmt.Sprintf("user@%s.service", userID), + "ysql") + memMax = "memory.max" + memSwapMap = "memory.swap.max" + } + + cGroupServiceContext := map[string]any{ + "cgroup_path": cGroupPath, + "mem_max": memMax, + "mem_swap_max": memSwapMap, + "pg_max_mem_mb": h.param.GetPgMaxMemMb(), + } + + h.logOut.WriteLine("Configuring cgroup systemd unit") + // Copy yb-ysql-cgroup.service. + _ = module.CopyFile( + ctx, + cGroupServiceContext, + filepath.Join(ServerTemplateSubpath, "yb-ysql-cgroup.service"), + filepath.Join(home, SystemdUnitPath, "yb-ysql-cgroup.service"), + fs.FileMode(0755), + ) + + cmd, err := module.ControlServerCmd( + h.username, + "yb-ysql-cgroup.service", + "start", + ) + if err != nil { + util.FileLogger().Errorf(ctx, "Failed to get server control command - %s", err.Error()) + return nil, err + } + util.FileLogger().Infof(ctx, "Running command %v", cmd) + _, err = module.RunShellCmd(ctx, h.username, "serverControl", cmd, h.logOut) + if err != nil { + util.FileLogger().Errorf(ctx, "Server control failed in %v - %s", cmd, err.Error()) + return nil, err + } + } + + return nil, nil +} diff --git a/managed/node-agent/proto/server.proto b/managed/node-agent/proto/server.proto index 7ba5791df3c2..8c317a2c7ff0 100644 --- a/managed/node-agent/proto/server.proto +++ b/managed/node-agent/proto/server.proto @@ -61,6 +61,7 @@ message SubmitTaskRequest { InstallYbcInput installYbcInput = 9; ConfigureServerInput configureServerInput = 10; InstallOtelCollectorInput installOtelCollectorInput = 11; + SetupCGroupInput setupCGroupInput = 12; } } @@ -85,6 +86,7 @@ message DescribeTaskResponse { InstallYbcOutput installYbcOutput = 9; ConfigureServerOutput configureServerOutput = 10; InstallOtelCollectorOutput installOtelCollectorOutput = 11; + SetupCGroupOutput setupCGroupOutput = 12; } } diff --git a/managed/node-agent/proto/yb.proto b/managed/node-agent/proto/yb.proto index 15447d8c4f00..e6ea223f2705 100644 --- a/managed/node-agent/proto/yb.proto +++ b/managed/node-agent/proto/yb.proto @@ -166,3 +166,12 @@ message InstallOtelCollectorInput { message InstallOtelCollectorOutput { int32 pid = 1; } + +message SetupCGroupInput { + string ybHomeDir = 1; + uint32 pgMaxMemMb = 2; +} + +message SetupCGroupOutput { + int32 pid = 1; +} diff --git 
a/managed/node-agent/resources/templates/server/yb-ysql-cgroup.service b/managed/node-agent/resources/templates/server/yb-ysql-cgroup.service new file mode 100644 index 000000000000..37ace085f311 --- /dev/null +++ b/managed/node-agent/resources/templates/server/yb-ysql-cgroup.service @@ -0,0 +1,9 @@ +[Unit] +Description=Yugabyte ysql cgroup manager +Before=yb-tserver.service + +[Service] +Type=oneshot +ExecStart=mkdir -p /sys/fs/cgroup/{{cgroup_path}} +ExecStart=bash -c 'echo {{pg_max_mem_mb}}M > /sys/fs/cgroup/{{cgroup_path}}/{{mem_max}}' +ExecStart=bash -c 'echo {{pg_max_mem_mb}}M > /sys/fs/cgroup/{{cgroup_path}}/{{mem_swap_max}}' diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/payload/NodeAgentRpcPayload.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/payload/NodeAgentRpcPayload.java index 79c37c4e6127..f2718e3ac55f 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/payload/NodeAgentRpcPayload.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/payload/NodeAgentRpcPayload.java @@ -46,6 +46,7 @@ import com.yugabyte.yw.nodeagent.InstallSoftwareInput; import com.yugabyte.yw.nodeagent.InstallYbcInput; import com.yugabyte.yw.nodeagent.ServerGFlagsInput; +import com.yugabyte.yw.nodeagent.SetupCGroupInput; import java.io.File; import java.nio.file.Path; import java.nio.file.Paths; @@ -532,4 +533,19 @@ public void runServerGFlagsWithNodeAgent( log.debug("Setting gflags using node agent: {}", input.getGflagsMap()); nodeAgentClient.runServerGFlags(nodeAgent, input, DEFAULT_CONFIGURE_USER); } + + public SetupCGroupInput setupSetupCGroupBits( + Universe universe, NodeDetails nodeDetails, NodeTaskParams taskParams, NodeAgent nodeAgent) { + SetupCGroupInput.Builder setupSetupCGroupBuilder = SetupCGroupInput.newBuilder(); + Cluster cluster = universe.getCluster(nodeDetails.placementUuid); + Provider provider = Provider.getOrBadRequest(UUID.fromString(cluster.userIntent.provider)); + + setupSetupCGroupBuilder.setYbHomeDir(provider.getYbHome()); + if (taskParams instanceof AnsibleConfigureServers.Params) { + AnsibleConfigureServers.Params params = (AnsibleConfigureServers.Params) taskParams; + setupSetupCGroupBuilder.setPgMaxMemMb(params.cgroupSize); + } + + return setupSetupCGroupBuilder.build(); + } } diff --git a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java index c4dd0874c9b2..c4e4ae36a532 100644 --- a/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java +++ b/managed/src/main/java/com/yugabyte/yw/commissioner/tasks/subtasks/AnsibleConfigureServers.java @@ -220,6 +220,13 @@ universe, nodeDetails, taskParams(), optional.get()), DEFAULT_CONFIGURE_USER); } } + if (taskParams().cgroupSize > 0) { + nodeAgentClient.runSetupCGroupInput( + optional.get(), + nodeAgentRpcPayload.setupSetupCGroupBits( + universe, nodeDetails, taskParams(), optional.get()), + DEFAULT_CONFIGURE_USER); + } } if (taskParams().type == UpgradeTaskType.Everything && !taskParams().updateMasterAddrsOnly) { diff --git a/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java b/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java index 870b66774c26..e9c973d1f403 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java +++ b/managed/src/main/java/com/yugabyte/yw/common/NodeAgentClient.java @@ -56,6 +56,8 @@ import 
com.yugabyte.yw.nodeagent.ServerControlOutput; import com.yugabyte.yw.nodeagent.ServerGFlagsInput; import com.yugabyte.yw.nodeagent.ServerGFlagsOutput; +import com.yugabyte.yw.nodeagent.SetupCGroupInput; +import com.yugabyte.yw.nodeagent.SetupCGroupOutput; import com.yugabyte.yw.nodeagent.SubmitTaskRequest; import com.yugabyte.yw.nodeagent.SubmitTaskResponse; import com.yugabyte.yw.nodeagent.UpdateRequest; @@ -969,6 +971,18 @@ public InstallOtelCollectorOutput runInstallOtelCollector( return runAsyncTask(nodeAgent, builder.build(), InstallOtelCollectorOutput.class); } + public SetupCGroupOutput runSetupCGroupInput( + NodeAgent nodeAgent, SetupCGroupInput input, String user) { + SubmitTaskRequest.Builder builder = + SubmitTaskRequest.newBuilder() + .setSetupCGroupInput(input) + .setTaskId(UUID.randomUUID().toString()); + if (StringUtils.isNotBlank(user)) { + builder.setUser(user); + } + return runAsyncTask(nodeAgent, builder.build(), SetupCGroupOutput.class); + } + public ServerGFlagsOutput runServerGFlags( NodeAgent nodeAgent, ServerGFlagsInput input, String user) { SubmitTaskRequest.Builder builder = diff --git a/managed/src/main/java/com/yugabyte/yw/common/NodeManager.java b/managed/src/main/java/com/yugabyte/yw/common/NodeManager.java index 420af80835a2..0eb598b79b4b 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/NodeManager.java +++ b/managed/src/main/java/com/yugabyte/yw/common/NodeManager.java @@ -2211,8 +2211,10 @@ && imdsv2required(arch, userIntent, provider)) { commandArgs.add(localPackagePath); } - commandArgs.add("--pg_max_mem_mb"); - commandArgs.add(Integer.toString(taskParam.cgroupSize)); + if (!taskParam.skipDownloadSoftware) { + commandArgs.add("--pg_max_mem_mb"); + commandArgs.add(Integer.toString(taskParam.cgroupSize)); + } break; } case List: From e73b62e471c01719d3f3f6c47e9b353238bdfc3c Mon Sep 17 00:00:00 2001 From: Daniel Shubin Date: Tue, 20 May 2025 18:01:27 +0000 Subject: [PATCH 141/146] [PLAT-17627] Handle null check for dumpRoleChecks Summary: dumpRoleChecks can be null Test Plan: itests Reviewers: vkumar Reviewed By: vkumar Differential Revision: https://phorge.dev.yugabyte.com/D44107 --- .../yugabyte/yw/common/backuprestore/ybc/YbcBackupUtil.java | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/managed/src/main/java/com/yugabyte/yw/common/backuprestore/ybc/YbcBackupUtil.java b/managed/src/main/java/com/yugabyte/yw/common/backuprestore/ybc/YbcBackupUtil.java index 222112e9daf1..592a7e16c750 100644 --- a/managed/src/main/java/com/yugabyte/yw/common/backuprestore/ybc/YbcBackupUtil.java +++ b/managed/src/main/java/com/yugabyte/yw/common/backuprestore/ybc/YbcBackupUtil.java @@ -1224,7 +1224,9 @@ public BackupServiceTaskExtendedArgs getExtendedArgsForRestore( // Only skip ignore errors if requested by the user AND the backup supports 'dump_role_checks'. extendedArgsBuilder.setIgnoreRestoreErrors(true); - if (successMarker.dumpRoleChecks && !backupStorageInfo.getIgnoreErrors()) { + if (successMarker.dumpRoleChecks != null + && successMarker.dumpRoleChecks + && !backupStorageInfo.getIgnoreErrors()) { extendedArgsBuilder.setIgnoreRestoreErrors(false); } From 86b614a8790bc0eb3f71a6c103f2607c532d802c Mon Sep 17 00:00:00 2001 From: Sahith Kurapati Date: Mon, 19 May 2025 19:20:28 +0000 Subject: [PATCH 142/146] [PLAT-17480] Skip collection of WARN logs in support bundle Summary: Skip collection of WARN logs in support bundle. Reason: WARN logs are already present in the INFO logs and this is just a waste of space in the support bundle. 
Test Plan: Manually tested. Run itests. Run UTs Reviewers: vkumar Reviewed By: vkumar Differential Revision: https://phorge.dev.yugabyte.com/D44075 --- managed/src/main/resources/reference.conf | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/managed/src/main/resources/reference.conf b/managed/src/main/resources/reference.conf index d95839d04146..770ebc0313b8 100644 --- a/managed/src/main/resources/reference.conf +++ b/managed/src/main/resources/reference.conf @@ -1176,7 +1176,7 @@ yb { application_logs_sdf_pattern = "'application-log-'yyyy-MM-dd" k8s_mount_point_prefix = "/mnt/disk" default_mount_point_prefix = "/mnt/d" - universe_logs_regex_pattern = "((?:.*)(?:yb-)(?:master|tserver)(?:.*))(\\d{8})-(?:\\d*)\\.(?:.*)" + universe_logs_regex_pattern = "((?:.*)(?:yb-)(?:master|tserver)(?!.*WARNING)(?:.*))(\\d{8})-(?:\\d*)\\.(?:.*)" postgres_logs_regex_pattern = "((?:.*)(?:postgresql)-)(.{10})(?:.*)" connection_pooling_logs_regex_pattern = "((?:.*)(?:ysql-conn-mgr)-)(.{10})(?:.*)" ybc_logs_regex_pattern = "((?:.*)(?:log)(?:.*))(\\d{8})-(?:\\d*)\\.(?:.*)" From e65458881f040106226c7a0399aef733d0c9acc6 Mon Sep 17 00:00:00 2001 From: Dwight Hodge Date: Wed, 21 May 2025 01:44:08 -0400 Subject: [PATCH 143/146] edits and links --- .../statements/ddl_alter_table.md | 57 ++++++++++--------- docs/content/stable/develop/_index.md | 4 +- .../develop/best-practices-develop/_index.md | 6 +- .../best-practices-develop/administration.md | 18 +++--- .../best-practices-ycql.md | 26 ++++----- .../develop/best-practices-develop/clients.md | 12 ++-- .../data-modeling-perf.md | 9 ++- .../data-migration/migrate-from-postgres.md | 2 +- 8 files changed, 68 insertions(+), 66 deletions(-) diff --git a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md index 53b0511a5775..1150c9dd2c95 100644 --- a/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/stable/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -27,7 +27,7 @@ Use the `ALTER TABLE` statement to change the definition of a table.

{{< note title="Table inheritance is not yet supported" >}} -YSQL in the present "latest" YugabyteDB does not yet support the "table inheritance" feature that is described in the [PostgreSQL documentation](https://www.postgresql.org/docs/11/ddl-inherit.html). The attempt to create a table that inherits another table causes the _0A000 (feature_not_supported)_ error with the message _"INHERITS not supported yet"_. This means that the syntax that the `table_expr` rule allows doesn't not yet bring any useful meaning. +YSQL in the present "latest" YugabyteDB does not yet support the "table inheritance" feature that is described in the [PostgreSQL documentation](https://www.postgresql.org/docs/11/ddl-inherit.html). The attempt to create a table that inherits another table causes the _0A000 (feature_not_supported)_ error with the message _"INHERITS not supported yet"_. This means that the syntax that the `table_expr` rule allows doesn't yet bring any useful meaning. It says that you can write, for example, this: @@ -50,20 +50,20 @@ These variants are useful only when at least one other table inherits `t`. But a Specify one of the following actions. -#### ADD [ COLUMN ] [ IF NOT EXISTS ] *column_name* *data_type* [*constraint*](#constraints) +#### ADD [ COLUMN ] [ IF NOT EXISTS ] *column_name* *data_type* *constraint* -Add the specified column with the specified data type and constraint. +Add the specified column with the specified data type and [constraint](#constraints). ##### Table rewrites -ADD COLUMN … DEFAULT statements require a [table rewrite](#alter-table-operations-that-involve-a-table-rewrite) when the default value is a *volatile* expression. [Volatile expressions](https://www.postgresql.org/docs/current/xfunc-volatility.html#XFUNC-VOLATILITY) can return different results for different rows, so a table rewrite is required to fill in values for existing rows. For non-volatile expressions, no table rewrite is required. - -Examples of volatile expressions +ADD COLUMN … DEFAULT statements require a [table rewrite](#alter-table-operations-that-involve-a-table-rewrite) when the default value is a _volatile_ expression. [Volatile expressions](https://www.postgresql.org/docs/current/xfunc-volatility.html#XFUNC-VOLATILITY) can return different results for different rows, so a table rewrite is required to fill in values for existing rows. For non-volatile expressions, no table rewrite is required. -- ALTER TABLE … ADD COLUMN v1 INT DEFAULT random()  -- ALTER TABLE .. ADD COLUMN v2 UUID DEFAULT gen_random_uuid(); - -Examples of non-volatile expressions (no table rewrite)  +Examples of volatile expressions: + +- ALTER TABLE … ADD COLUMN v1 INT DEFAULT random() +- ALTER TABLE .. ADD COLUMN v2 UUID DEFAULT gen_random_uuid() + +Examples of non-volatile expressions (no table rewrite): - ALTER TABLE … ADD COLUMN nv1 INT DEFAULT 5 - ALTER TABLE … ADD COLUMN nv2 timestamp DEFAULT now() -- uses the same timestamp now() for all existing rows @@ -78,11 +78,12 @@ Renaming a table is a non blocking metadata change operation. {{< /note >}} - #### SET TABLESPACE *tablespace_name* Asynchronously change the tablespace of an existing table. + The tablespace change will immediately reflect in the config of the table, however the tablet move by the load balancer happens in the background. 
+ While the load balancer is performing the move it is perfectly safe from a correctness perspective to do reads and writes, however some query optimization that happens based on the data location may be off while data is being moved. ##### Example @@ -97,8 +98,8 @@ DETAIL: Data movement is a long running asynchronous process and can be monitor ALTER TABLE ``` - Tables can be moved to the default tablespace using: + ```sql ALTER TABLE table_name SET TABLESPACE pg_default; ``` @@ -232,20 +233,20 @@ alter table parents drop column b cascade; It quietly succeeds. Now `\d children` shows that the foreign key constraint `children_fk` has been transitively dropped. -#### ADD [*alter_table_constraint*](#constraints) +#### ADD *alter_table_constraint* -Add the specified constraint to the table. +Add the specified [constraint](#constraints) to the table. ##### Table rewrites Adding a `PRIMARY KEY` constraint results in a full table rewrite of the main table and all associated indexes, which can be a potentially expensive operation. For more details about table rewrites, see [Alter table operations that involve a table rewrite](#alter-table-operations-that-involve-a-table-rewrite). -The table rewrite is needed because of how YugabyteDB stores rows and indexes. In YugabyteDB, data is distributed based on the primary key; when a table does not have an explicit primary key assigned, YugabyteDB automatically creates an internal row ID to use as the table's primary key. As a result, these rows need to be rewritten to use the newly added primary key column. For more information, refer to [Primary keys](../../../../../develop/data-modeling/primary-keys-ysql). - +The table rewrite is needed because of how YugabyteDB stores rows and indexes. In YugabyteDB, data is distributed based on the primary key; when a table does not have an explicit primary key assigned, YugabyteDB automatically creates an internal row ID to use as the table's primary key. As a result, these rows need to be rewritten to use the newly added primary key column. For more information, refer to [Primary keys](../../../../../develop/data-modeling/primary-keys-ysql). #### ALTER [ COLUMN ] *column_name* [ SET DATA ] TYPE *data_type* [ COLLATE *collation* ] [ USING *expression* ] Change the type of an existing column. The following semantics apply: + - If the optional `COLLATE` clause is not specified, the default collation for the new column type will be used. - If the optional `USING` clause is not provided, the default conversion for the new column value will be the same as an assignment cast from the old type to the new type. - A `USING` clause must be included when there is no implicit assignment cast available from the old type to the new type. @@ -255,7 +256,7 @@ Change the type of an existing column. The following semantics apply: ##### Table rewrites -Altering a column's type requires a [full table rewrite](#alter-table-operations-that-involve-a-table-rewrite) of the table, and any indexes that contain this column when the underlying storage format changes or if the data changes. +Altering a column's type requires a [full table rewrite](#alter-table-operations-that-involve-a-table-rewrite), and any indexes that contain this column when the underlying storage format changes or if the data changes. 
The following type changes commonly require a table rewrite: @@ -299,7 +300,6 @@ The following ALTER TYPE statement does not cause a table rewrite: - ALTER TABLE test ALTER COLUMN a TYPE VARCHAR(51); -- from VARCHAR(50) - #### DROP CONSTRAINT *constraint_name* [ RESTRICT | CASCADE ] Drop the named constraint from the table. @@ -311,7 +311,6 @@ Drop the named constraint from the table. Dropping the `PRIMARY KEY` constraint results in a full table rewrite and full rewrite of all indexes associated with the table, which is a potentially expensive operation. For more details and common limitations of table rewrites, refer to [Alter table operations that involve a table rewrite](#alter-table-operations-that-involve-a-table-rewrite). - #### RENAME [ COLUMN ] *column_name* TO *column_name* Rename a column to the specified name. @@ -333,15 +332,21 @@ ALTER TABLE test RENAME CONSTRAINT vague_name TO unique_a_constraint; #### ENABLE / DISABLE ROW LEVEL SECURITY This enables or disables row level security for the table. + If enabled and no policies exist for the table, then a default-deny policy is applied. + If disabled, then existing policies for the table will not be applied and will be ignored. + See [CREATE POLICY](../dcl_create_policy) for details on how to create row level security policies. #### FORCE / NO FORCE ROW LEVEL SECURITY This controls the application of row security policies for the table when the user is the table owner. + If enabled, row level security policies will be applied when the user is the table owner. + If disabled (the default) then row level security will not be applied when the user is the table owner. + See [CREATE POLICY](../dcl_create_policy) for details on how to create row level security policies. ### Constraints @@ -383,20 +388,20 @@ Constraints marked as `INITIALLY DEFERRED` will be checked at the end of the tra Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table (and associated index tables, in some cases) to be made and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". This behavior is [similar to PostgreSQL](https://www.crunchydata.com/blog/when-does-alter-table-require-a-rewrite), though the exact scenarios when a rewrite is triggered may differ between PostgreSQL and YugabyteDB. -It is not safe to execute concurrent DML on the table during a table rewrite because the results of any concurrent DML are not guaranteed to be reflected in the copy of the table being made. This restriction is similar to PostgresSQL, which explicitly prevents concurrent DML during a table rewrite by acquiring an ACCESS EXCLUSIVE table lock. +It is not safe to execute concurrent DML on the table during a table rewrite because the results of any concurrent DML are not guaranteed to be reflected in the copy of the table being made. This restriction is similar to PostgreSQL, which explicitly prevents concurrent DML during a table rewrite by acquiring an ACCESS EXCLUSIVE table lock. If you need to perform one of these expensive rewrites, it is recommended to combine them into a single ALTER TABLE statement to avoid multiple expensive rewrites. 
For example: -``` +```sql ALTER TABLE t ADD COLUMN c6 UUID DEFAULT gen_random_uuid(), ALTER COLUMN c8 TYPE TEXT ``` The following ALTER TABLE operations involve making a full copy of the underlying table (and possibly associated index tables): -1. [Adding](#add-alter-table-constraint-constraints) or [dropping](#drop-constraint-constraint-name-restrict-cascade) the primary key of a table. -2. [Adding a column with a (volatile) default value](#add-column-if-not-exists-column-name-data-type-constraint-constraints). -4. [Changing the type of a column](#alter-column-column-name-set-data-type-data-type-collate-collation-using-expression). - +1. [Adding](#add-alter) or [dropping](#drop-constraint-constraint-restrict-cascade) the primary key of a table. +1. [Adding a column with a (volatile) default value](#add-column-if-not-exists-column-data-constraint). +1. [Changing the type of a column](#alter-column-column-set-data-type-data-collate-collation-using-expression). + ## See also -- [`CREATE TABLE`](../ddl_create_table) +- [CREATE TABLE](../ddl_create_table) diff --git a/docs/content/stable/develop/_index.md b/docs/content/stable/develop/_index.md index 5084599f7ed6..d9fa6c765f23 100644 --- a/docs/content/stable/develop/_index.md +++ b/docs/content/stable/develop/_index.md @@ -43,8 +43,8 @@ To learn how to build applications on top of YugabyteDB, see [Learn app developm Use these best practices to build distributed applications on top of YugabyteDB; this includes a list of techniques that you can adopt to make your application perform its best. -{{}} -For more details, see [Best practices](./best-practices-ysql). +{{}} +For more details, see [Best practices](./best-practices-develop). {{}} ## Drivers and ORMs diff --git a/docs/content/stable/develop/best-practices-develop/_index.md b/docs/content/stable/develop/best-practices-develop/_index.md index 92efa422f34b..08af7a5f0147 100644 --- a/docs/content/stable/develop/best-practices-develop/_index.md +++ b/docs/content/stable/develop/best-practices-develop/_index.md @@ -18,19 +18,19 @@ type: indexpage {{}} {{}} {{}} diff --git a/docs/content/stable/develop/best-practices-develop/administration.md b/docs/content/stable/develop/best-practices-develop/administration.md index f2dcd7f3d0bd..249986767c98 100644 --- a/docs/content/stable/develop/best-practices-develop/administration.md +++ b/docs/content/stable/develop/best-practices-develop/administration.md @@ -16,15 +16,15 @@ Database administrators can fine-tune YugabyteDB deployments for better reliabil ## Single availability zone (AZ) deployments -In single AZ deployments, you need to set the [yb-tserver](../../reference/configuration/yb-tserver) flag `--durable_wal_write=true` to not lose data if the whole data center goes down (For example, power failure). +In single AZ deployments, you need to set the [yb-tserver](../../../reference/configuration/yb-tserver) flag `--durable_wal_write=true` to not lose data if the whole data center goes down (for example, power failure). ## Allow for tablet replica overheads -Although you can manually provision the amount of memory each TServer uses using flags ([--memory_limit_hard_bytes](../../reference/configuration/yb-tserver/#memory-limit-hard-bytes) or [--default_memory_limit_to_ram_ratio](../../reference/configuration/yb-tserver/#default-memory-limit-to-ram-ratio)), this can be tricky as you need to take into account how much memory the kernel needs, along with the PostgreSQL processes and any Master process that is going to be colocated with the TServer. 
+Although you can manually provision the amount of memory each TServer uses using flags ([--memory_limit_hard_bytes](../../../reference/configuration/yb-tserver/#memory-limit-hard-bytes) or [--default_memory_limit_to_ram_ratio](../../../reference/configuration/yb-tserver/#default-memory-limit-to-ram-ratio)), this can be tricky as you need to take into account how much memory the kernel needs, along with the PostgreSQL processes and any Master process that is going to be colocated with the TServer. -Accordingly, you should use the [--use_memory_defaults_optimized_for_ysql](../../reference/configuration/yb-tserver/#use-memory-defaults-optimized-for-ysql) flag, which gives good memory division settings for using YSQL, optimized for your node's size. +Accordingly, you should use the [--use_memory_defaults_optimized_for_ysql](../../../reference/configuration/yb-tserver/#use-memory-defaults-optimized-for-ysql) flag, which gives good memory division settings for using YSQL, optimized for your node's size. -If this flag is true, then the [memory division flag defaults](../../reference/configuration/yb-tserver/#memory-division-flags) change to provide much more memory for PostgreSQL; furthermore, they optimize for the node size. +If this flag is true, then the [memory division flag defaults](../../../reference/configuration/yb-tserver/#memory-division-flags) change to provide much more memory for PostgreSQL; furthermore, they optimize for the node size. Note that although the default setting is false, when creating a new universe using yugabyted or YugabyteDB Anywhere, the flag is set to true, unless you explicitly set it to false. @@ -38,17 +38,17 @@ You can set certain flags to increase performance using YugabyteDB in CI and CD - Set the flag `--replication_factor=1` for test scenarios, as keeping the data three way replicated (default) is not necessary. Reducing that to 1 reduces space usage and increases performance. - Use `TRUNCATE table1,table2,table3..tablen;` instead of CREATE TABLE, and DROP TABLE between test cases. - ## Concurrent DML during a DDL operation In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE
<table> .. ADD COLUMN` DDL statement may add a new column while a `SELECT * from <table>
` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing. (Support for similar behavior in YugabyteDB is being tracked in issue {{}}.) In YugabyteDB, when a DDL modifies the schema of tables that are accessed by concurrent DML statements, the DML statement may do one of the following: -1. Operate with the old schema prior to the DDL, or -2. Operate with the new schema after the DDL completes, or -3. Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to [retry such operations](https://www.yugabyte.com/blog/retry-mechanism-spring-boot-app/) whenever possible. -Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../the-sql-language/statements/ddl_alter_table/#alter-table-operations-that-involve-a-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements on the table being modified by the `ALTER TABLE`, as the effect of such concurrent DML may not be reflected in the table copy. +- Operate with the old schema prior to the DDL. +- Operate with the new schema after the DDL completes. +- Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to [retry such operations](https://www.yugabyte.com/blog/retry-mechanism-spring-boot-app/) whenever possible. + +Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../../../api/ysql/the-sql-language/statements/ddl_alter_table/#alter-table-operations-that-involve-a-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements on the table being modified by the `ALTER TABLE`, as the effect of such concurrent DML may not be reflected in the table copy. ## Concurrent DDL during a DDL operation diff --git a/docs/content/stable/develop/best-practices-develop/best-practices-ycql.md b/docs/content/stable/develop/best-practices-develop/best-practices-ycql.md index 0591895015f2..a258d6a6a50f 100644 --- a/docs/content/stable/develop/best-practices-develop/best-practices-ycql.md +++ b/docs/content/stable/develop/best-practices-develop/best-practices-ycql.md @@ -16,17 +16,17 @@ To build high-performance and scalable applications using YCQL, developers shoul ## Global secondary indexes -Indexes use multi-shard transactional capability of YugabyteDB and are global and strongly consistent (ACID). To add secondary indexes, you need to create tables with [transactions enabled](../../api/ycql/ddl_create_table/#table-properties-1). They can also be used as materialized views by using the [`INCLUDE` clause](../../api/ycql/ddl_create_index#included-columns). +Indexes use multi-shard transactional capability of YugabyteDB and are global and strongly consistent (ACID). To add secondary indexes, you need to create tables with [transactions enabled](../../../api/ycql/ddl_create_table/#table-properties-1). They can also be used as materialized views by using the [`INCLUDE` clause](../../../api/ycql/ddl_create_index#included-columns). 
## Unique indexes -YCQL supports [unique indexes](../../api/ycql/ddl_create_index#unique-index). A unique index disallows duplicate values from being inserted into the indexed columns. +YCQL supports [unique indexes](../../../api/ycql/ddl_create_index#unique-index). A unique index disallows duplicate values from being inserted into the indexed columns. ## Covering indexes When querying by a secondary index, the original table is consulted to get the columns that aren't specified in the index. This can result in multiple random reads across the main table. -Sometimes, a better way is to include the other columns that you're querying that are not part of the index using the [`INCLUDE` clause](../../api/ycql/ddl_create_index/#included-columns). When additional columns are included in the index, they can be used to respond to queries directly from the index without querying the table. +Sometimes, a better way is to include the other columns that you're querying that are not part of the index using the [`INCLUDE` clause](../../../api/ycql/ddl_create_index/#included-columns). When additional columns are included in the index, they can be used to respond to queries directly from the index without querying the table. This turns a (possible) random read from the main table to just a filter on the index. @@ -36,7 +36,7 @@ For operations like `UPDATE ... IF EXISTS` and `INSERT ... IF NOT EXISTS` that r ## JSONB -YugabyteDB supports the [`jsonb`](../../api/ycql/type_jsonb/) data type to model JSON data, which does not have a set schema and might change often. You can use JSONB to group less accessed columns of a table. YCQL also supports JSONB expression indexes that can be used to speed up data retrieval that would otherwise require scanning the JSON entries. +YugabyteDB supports the [`jsonb`](../../../api/ycql/type_jsonb/) data type to model JSON data, which does not have a set schema and might change often. You can use JSONB to group less accessed columns of a table. YCQL also supports JSONB expression indexes that can be used to speed up data retrieval that would otherwise require scanning the JSON entries. {{< note title="Use JSONB columns only when necessary" >}} @@ -46,13 +46,13 @@ YugabyteDB supports the [`jsonb`](../../api/ycql/type_jsonb/) data type to model ## Increment and decrement numeric types -In YugabyteDB, YCQL extends Apache Cassandra to add increment and decrement operators for integer data types. [Integers](../../api/ycql/type_int) can be set, inserted, incremented, and decremented while `COUNTER` can only be incremented or decremented. YugabyteDB implements CAS(compare-and-set) operations in one round trip, compared to four for Apache Cassandra. +In YugabyteDB, YCQL extends Apache Cassandra to add increment and decrement operators for integer data types. [Integers](../../../api/ycql/type_int) can be set, inserted, incremented, and decremented while `COUNTER` can only be incremented or decremented. YugabyteDB implements CAS(compare-and-set) operations in one round trip, compared to four for Apache Cassandra. ## Expire older records automatically with TTL -YCQL supports automatic expiration of data using the [TTL feature](../../api/ycql/ddl_create_table/#use-table-property-to-define-the-default-expiration-time-for-rows). You can set a retention policy for data at table/row/column level and the older data is automatically purged from the database. 
+YCQL supports automatic expiration of data using the [TTL feature](../../../api/ycql/ddl_create_table/#use-table-property-to-define-the-default-expiration-time-for-rows). You can set a retention policy for data at table/row/column level and the older data is automatically purged from the database. -If configuring TTL for a time series dataset or any dataset with a table-level TTL, it is recommended for CPU and space efficiency to expire older files directly by using TTL-specific configuration options. More details can be found in [Efficient data expiration for TTL](../learn/ttl-data-expiration-ycql/#efficient-data-expiration-for-ttl). +If configuring TTL for a time series dataset or any dataset with a table-level TTL, it is recommended for CPU and space efficiency to expire older files directly by using TTL-specific configuration options. More details can be found in [Efficient data expiration for TTL](../../learn/ttl-data-expiration-ycql/#efficient-data-expiration-for-ttl). {{}} TTL does not apply to transactional tables and so, its unsupported in that context. @@ -60,7 +60,7 @@ TTL does not apply to transactional tables and so, its unsupported in that conte ## Use YugabyteDB drivers -Use YugabyteDB-specific [client drivers](../../drivers-orms/) because they are cluster- and partition-aware, and support `jsonb` columns. +Use YugabyteDB-specific [client drivers](../../../drivers-orms/) because they are cluster- and partition-aware, and support `jsonb` columns. ## Leverage connection pooling in the YCQL client @@ -88,22 +88,22 @@ Collections are designed for storing small sets of values that are not expected ## Collections with many elements -Each element inside a collection ends up as a [separate key value](../../architecture/docdb/data-model#examples) in DocDB adding per-element overhead. +Each element inside a collection ends up as a [separate key value](../../../architecture/docdb/data-model#examples) in DocDB adding per-element overhead. If your collections are immutable, or you update the whole collection in full, consider using the `JSONB` data type. An alternative would also be to use ProtoBuf or FlatBuffers and store the serialized data in a `BLOB` column. ## Use partition_hash for large table scans -`partition_hash` function can be used for querying a subset of the data to get approximate row counts or to break down full-table operations into smaller sub-tasks that can be run in parallel. See [example usage](../../api/ycql/expr_fcall#partition-hash-function) along with a working Python script. +`partition_hash` function can be used for querying a subset of the data to get approximate row counts or to break down full-table operations into smaller sub-tasks that can be run in parallel. See [example usage](../../../api/ycql/expr_fcall#partition-hash-function) along with a working Python script. ## TRUNCATE tables instead of DELETE -[TRUNCATE](../../api/ycql/dml_truncate/) deletes the database files that store the table and is much faster than [DELETE](../../api/ycql/dml_delete/) which inserts a _delete marker_ for each row in transactions and they are removed from storage when a compaction runs. +[TRUNCATE](../../../api/ycql/dml_truncate/) deletes the database files that store the table and is much faster than [DELETE](../../../api/ycql/dml_delete/) which inserts a _delete marker_ for each row in transactions and they are removed from storage when a compaction runs. 
## Memory and tablet limits -If you are not using YSQL, ensure the [use_memory_defaults_optimized_for_ysql](../../reference/configuration/yb-master/#use-memory-defaults-optimized-for-ysql) flag is set to false. This flag optimizes YugabyteDB's memory setup for YSQL, reserving a considerable amount of memory for PostgreSQL; if you are not using YSQL then that memory is wasted when it could be helping improve performance by allowing more data to be cached. +If you are not using YSQL, ensure the [use_memory_defaults_optimized_for_ysql](../../../reference/configuration/yb-master/#use-memory-defaults-optimized-for-ysql) flag is set to false. This flag optimizes YugabyteDB's memory setup for YSQL, reserving a considerable amount of memory for PostgreSQL; if you are not using YSQL then that memory is wasted when it could be helping improve performance by allowing more data to be cached. Note that although the default setting is false, when creating a new universe using yugabyted or YugabyteDB Anywhere, the flag is set to true, unless you explicitly set it to false. -See [Memory division flags](../../reference/configuration/yb-tserver/#memory-division-flags) for more information. +See [Memory division flags](../../../reference/configuration/yb-tserver/#memory-division-flags) for more information. diff --git a/docs/content/stable/develop/best-practices-develop/clients.md b/docs/content/stable/develop/best-practices-develop/clients.md index 8a327bee3f88..46b006e03968 100644 --- a/docs/content/stable/develop/best-practices-develop/clients.md +++ b/docs/content/stable/develop/best-practices-develop/clients.md @@ -16,7 +16,7 @@ Client-side configuration plays a critical role in the performance, scalability, ## Load balance and failover using smart drivers -YugabyteDB [smart drivers](../../drivers-orms/smart-drivers/) provide advanced cluster-aware load-balancing capabilities that enables your applications to send requests to multiple nodes in the cluster just by connecting to one node. You can also set a fallback hierarchy by assigning priority to specific regions and ensuring that connections are made to the region with the highest priority, and then fall back to the region with the next priority in case the high-priority region fails. +YugabyteDB [smart drivers](../../../drivers-orms/smart-drivers/) provide advanced cluster-aware load-balancing capabilities that enable your applications to send requests to multiple nodes in the cluster by connecting to one node. You can also set a fallback hierarchy by assigning priority to specific regions and ensuring that connections are made to the region with the highest priority, and then fall back to the region with the next priority in case the high-priority region fails. {{}} For more information, see [Load balancing with smart drivers](https://www.yugabyte.com/blog/multi-region-database-deployment-best-practices/#load-balancing-with-smart-driver). @@ -28,10 +28,10 @@ When a cluster is expanded, newly added nodes do not automatically start to rece ## Scale your application with connection pools -Set up different pools with different load balancing policies as needed for your application to scale by using popular pooling solutions such as HikariCP and Tomcat along with YugabyteDB [smart drivers](../../drivers-orms/smart-drivers/). 
+Set up different pools with different load balancing policies as needed for your application to scale by using popular pooling solutions such as HikariCP and Tomcat along with YugabyteDB [smart drivers](../../../drivers-orms/smart-drivers/). -{{}} -For more information, see [Connection pooling](../../drivers-orms/smart-drivers/#connection-pooling). +{{}} +For more information, see [Connection pooling](../../../drivers-orms/smart-drivers/#connection-pooling). {{}} ### Database migrations and connection pools @@ -46,7 +46,5 @@ YugabyteDB includes a built-in connection pooler, YSQL Connection Manager {{}} For more details, see [Build global applications](../../build-global-apps). @@ -238,7 +238,7 @@ For consistent latency or performance, it is recommended to size columns in the [TRUNCATE](../../../api/ysql/the-sql-language/statements/ddl_truncate/) deletes the database files that store the table data and is much faster than [DELETE](../../../api/ysql/the-sql-language/statements/dml_delete/), which inserts a _delete marker_ for each row in transactions that are later removed from storage during compaction runs. {{}} -Currently, TRUNCATE is not transactional. Also, similar to PostgreSQL, TRUNCATE is not MVCC-safe. For more details, see [TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/). +Currently, TRUNCATE is not transactional. Also, similar to PostgreSQL, TRUNCATE is not MVCC-safe. For more details, see [TRUNCATE](../../../api/ysql/the-sql-language/statements/ddl_truncate/). {{}} ## Minimize the number of tablets you need @@ -251,8 +251,7 @@ You can try one of the following methods to reduce the number of tablets: - Use [colocation](../../../explore/colocation/) to group small tables into 1 tablet. - Reduce number of tablets-per-table using the [--ysql_num_shards_per_tserver](../../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. -- Use the [SPLIT INTO](../../api/ysql/the-sql-language/statements/ddl_create_table/#split-into) clause when creating a table. +- Use the [SPLIT INTO](../../../api/ysql/the-sql-language/statements/ddl_create_table/#split-into) clause when creating a table. - Start with few tablets and use [automatic tablet splitting](../../../architecture/docdb-sharding/tablet-splitting/). Note that multiple tablets can allow work to proceed in parallel so you may not want every table to have only one tablet. - diff --git a/docs/content/stable/manage/data-migration/migrate-from-postgres.md b/docs/content/stable/manage/data-migration/migrate-from-postgres.md index d8807ee24683..5c91284ce9d2 100644 --- a/docs/content/stable/manage/data-migration/migrate-from-postgres.md +++ b/docs/content/stable/manage/data-migration/migrate-from-postgres.md @@ -248,7 +248,7 @@ For more details, see [Live migration with fall-back](/preview/yugabyte-voyager/ When porting an existing PostgreSQL application to YugabyteDB you can follow a set of best practices to get the best out of your new deployment. {{}} -For a full list of tips and tricks for high performance and availability, see [Best practices](../../../develop/best-practices-ysql/). +For a full list of tips and tricks for high performance and availability, see [Best practices](../../../develop/best-practices-develop/). 
{{}} ### Retry transactions on conflicts From 33f43235837b2e08640fdc7e7e924979efee7cbd Mon Sep 17 00:00:00 2001 From: Dwight Hodge Date: Wed, 21 May 2025 01:52:58 -0400 Subject: [PATCH 144/146] minor fix --- .../best-practices-develop/best-practices-ycql.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/content/stable/develop/best-practices-develop/best-practices-ycql.md b/docs/content/stable/develop/best-practices-develop/best-practices-ycql.md index a258d6a6a50f..a0f32b3c3923 100644 --- a/docs/content/stable/develop/best-practices-develop/best-practices-ycql.md +++ b/docs/content/stable/develop/best-practices-develop/best-practices-ycql.md @@ -16,7 +16,7 @@ To build high-performance and scalable applications using YCQL, developers shoul ## Global secondary indexes -Indexes use multi-shard transactional capability of YugabyteDB and are global and strongly consistent (ACID). To add secondary indexes, you need to create tables with [transactions enabled](../../../api/ycql/ddl_create_table/#table-properties-1). They can also be used as materialized views by using the [`INCLUDE` clause](../../../api/ycql/ddl_create_index#included-columns). +Indexes use multi-shard transactional capability of YugabyteDB and are global and strongly consistent (ACID). To add secondary indexes, you need to create tables with [transactions enabled](../../../api/ycql/ddl_create_table/#table). They can also be used as materialized views by using the [INCLUDE clause](../../../api/ycql/ddl_create_index#included-columns). ## Unique indexes @@ -26,7 +26,7 @@ YCQL supports [unique indexes](../../../api/ycql/ddl_create_index#unique-index). When querying by a secondary index, the original table is consulted to get the columns that aren't specified in the index. This can result in multiple random reads across the main table. -Sometimes, a better way is to include the other columns that you're querying that are not part of the index using the [`INCLUDE` clause](../../../api/ycql/ddl_create_index/#included-columns). When additional columns are included in the index, they can be used to respond to queries directly from the index without querying the table. +Sometimes, a better way is to include the other columns that you're querying that are not part of the index using the [INCLUDE clause](../../../api/ycql/ddl_create_index/#included-columns). When additional columns are included in the index, they can be used to respond to queries directly from the index without querying the table. This turns a (possible) random read from the main table to just a filter on the index. @@ -36,11 +36,11 @@ For operations like `UPDATE ... IF EXISTS` and `INSERT ... IF NOT EXISTS` that r ## JSONB -YugabyteDB supports the [`jsonb`](../../../api/ycql/type_jsonb/) data type to model JSON data, which does not have a set schema and might change often. You can use JSONB to group less accessed columns of a table. YCQL also supports JSONB expression indexes that can be used to speed up data retrieval that would otherwise require scanning the JSON entries. +YugabyteDB supports the [JSONB](../../../api/ycql/type_jsonb/) data type to model JSON data, which does not have a set schema and might change often. You can use JSONB to group less accessed columns of a table. YCQL also supports JSONB expression indexes that can be used to speed up data retrieval that would otherwise require scanning the JSON entries. 
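A minimal YCQL sketch of this pattern, assuming a hypothetical `store` keyspace and a `books` table that keeps its dynamic attributes in a single `details` column:

```sql
CREATE KEYSPACE IF NOT EXISTS store;

-- Stable attributes stay in regular columns; only the truly dynamic data goes into JSONB.
CREATE TABLE store.books (
    id      INT PRIMARY KEY,
    title   TEXT,
    details JSONB
) WITH transactions = { 'enabled' : true };

-- Expression index on a JSONB attribute, so author lookups avoid scanning every JSON document.
CREATE INDEX books_by_author ON store.books (details->>'author');
```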
{{< note title="Use JSONB columns only when necessary" >}} -`jsonb` columns are slower to read and write compared to normal columns. They also take more space because they need to store keys in strings and make keeping data consistency more difficult. A good schema design is to keep most columns as regular columns or collections, and use `jsonb` only for truly dynamic values. Don't create a `data jsonb` column where you store everything; instead, use a `dynamic_data jsonb` column with the others being primitive columns. +JSONB columns are slower to read and write compared to normal columns. They also take more space because they need to store keys in strings and make keeping data consistency more difficult. A good schema design is to keep most columns as regular columns or collections, and use JSONB only for truly dynamic values. Don't create a `data jsonb` column where you store everything; instead, use a `dynamic_data jsonb` column with the others being primitive columns. {{< /note >}} @@ -90,7 +90,7 @@ Collections are designed for storing small sets of values that are not expected Each element inside a collection ends up as a [separate key value](../../../architecture/docdb/data-model#examples) in DocDB adding per-element overhead. -If your collections are immutable, or you update the whole collection in full, consider using the `JSONB` data type. An alternative would also be to use ProtoBuf or FlatBuffers and store the serialized data in a `BLOB` column. +If your collections are immutable, or you update the whole collection in full, consider using the JSONB data type. An alternative would also be to use ProtoBuf or FlatBuffers and store the serialized data in a BLOB column. ## Use partition_hash for large table scans From 36ab55078c4df1d13e76b5f542866a9673db9934 Mon Sep 17 00:00:00 2001 From: Dwight Hodge Date: Wed, 21 May 2025 14:27:59 -0400 Subject: [PATCH 145/146] copy to preview --- .../statements/ddl_alter_table.md | 139 +++++++++++------- docs/content/preview/develop/_index.md | 4 +- .../develop/best-practices-develop/_index.md | 51 +++++++ .../best-practices-develop/administration.md | 57 +++++++ .../best-practices-ycql.md | 44 +++--- .../develop/best-practices-develop/clients.md | 50 +++++++ .../data-modeling-perf.md} | 131 +++++------------ .../data-migration/migrate-from-postgres.md | 4 +- .../preview/reference/get-started-guide.md | 2 +- .../preview/releases/ybdb-releases/v2024.1.md | 64 ++++---- .../preview/releases/ybdb-releases/v2024.2.md | 2 +- .../data-migration/migrate-from-postgres.md | 2 +- 12 files changed, 340 insertions(+), 210 deletions(-) create mode 100644 docs/content/preview/develop/best-practices-develop/_index.md create mode 100644 docs/content/preview/develop/best-practices-develop/administration.md rename docs/content/preview/develop/{ => best-practices-develop}/best-practices-ycql.md (54%) create mode 100644 docs/content/preview/develop/best-practices-develop/clients.md rename docs/content/preview/develop/{best-practices-ysql.md => best-practices-develop/data-modeling-perf.md} (51%) diff --git a/docs/content/preview/api/ysql/the-sql-language/statements/ddl_alter_table.md b/docs/content/preview/api/ysql/the-sql-language/statements/ddl_alter_table.md index e3fe49cc1d25..599ddc16ad90 100644 --- a/docs/content/preview/api/ysql/the-sql-language/statements/ddl_alter_table.md +++ b/docs/content/preview/api/ysql/the-sql-language/statements/ddl_alter_table.md @@ -31,7 +31,7 @@ Use the `ALTER TABLE` statement to change the definition of a table.

{{< note title="Table inheritance is not yet supported" >}} -YSQL in the present "latest" YugabyteDB does not yet support the "table inheritance" feature that is described in the [PostgreSQL documentation](https://www.postgresql.org/docs/15/ddl-inherit.html). The attempt to create a table that inherits another table causes the _0A000 (feature_not_supported)_ error with the message _"INHERITS not supported yet"_. This means that the syntax that the `table_expr` rule allows doesn't not yet bring any useful meaning. +YSQL in the present "latest" YugabyteDB does not yet support the "table inheritance" feature that is described in the [PostgreSQL documentation](https://www.postgresql.org/docs/15/ddl-inherit.html). The attempt to create a table that inherits another table causes the _0A000 (feature_not_supported)_ error with the message _"INHERITS not supported yet"_. This means that the syntax that the `table_expr` rule allows doesn't yet bring any useful meaning. It says that you can write, for example, this: @@ -54,9 +54,23 @@ These variants are useful only when at least one other table inherits `t`. But a Specify one of the following actions. -#### ADD [ COLUMN ] [ IF NOT EXISTS ] *column_name* *data_type* [*constraint*](#constraints) +#### ADD [ COLUMN ] [ IF NOT EXISTS ] *column_name* *data_type* *constraint* -Add the specified column with the specified data type and constraint. +Add the specified column with the specified data type and [constraint](#constraints). + +##### Table rewrites + +ADD COLUMN … DEFAULT statements require a [table rewrite](#alter-table-operations-that-involve-a-table-rewrite) when the default value is a _volatile_ expression. [Volatile expressions](https://www.postgresql.org/docs/current/xfunc-volatility.html#XFUNC-VOLATILITY) can return different results for different rows, so a table rewrite is required to fill in values for existing rows. For non-volatile expressions, no table rewrite is required. + +Examples of volatile expressions: + +- ALTER TABLE … ADD COLUMN v1 INT DEFAULT random() +- ALTER TABLE .. ADD COLUMN v2 UUID DEFAULT gen_random_uuid() + +Examples of non-volatile expressions (no table rewrite): + +- ALTER TABLE … ADD COLUMN nv1 INT DEFAULT 5 +- ALTER TABLE … ADD COLUMN nv2 timestamp DEFAULT now() -- uses the same timestamp now() for all existing rows #### RENAME TO *table_name* @@ -71,7 +85,9 @@ Renaming a table is a non blocking metadata change operation. #### SET TABLESPACE *tablespace_name* Asynchronously change the tablespace of an existing table. + The tablespace change will immediately reflect in the config of the table, however the tablet move by the load balancer happens in the background. + While the load balancer is performing the move it is perfectly safe from a correctness perspective to do reads and writes, however some query optimization that happens based on the data location may be off while data is being moved. ##### Example @@ -221,24 +237,20 @@ alter table parents drop column b cascade; It quietly succeeds. Now `\d children` shows that the foreign key constraint `children_fk` has been transitively dropped. -#### ADD [*alter_table_constraint*](#constraints) +#### ADD *alter_table_constraint* -Add the specified constraint to the table. +Add the specified [constraint](#constraints) to the table. +##### Table rewrites -{{< warning >}} -Adding a `PRIMARY KEY` constraint results in a full table rewrite and full rewrite of all indexes associated with the table. 
-This happens because of the clustered storage by primary key that YugabyteDB uses to store rows and indexes. -Tables without a `PRIMARY KEY` have a hidden one underneath and rows are stored clustered on it. The secondary indexes of the table -link to this hidden `PRIMARY KEY`. -While the tables and indexes are being rewritten, you may lose any modifications made to the table. -For reference, the same semantics as [Alter type with table rewrite](#alter-type-with-table-rewrite) apply. -{{< /warning >}} +Adding a `PRIMARY KEY` constraint results in a full table rewrite of the main table and all associated indexes, which can be a potentially expensive operation. For more details about table rewrites, see [Alter table operations that involve a table rewrite](#alter-table-operations-that-involve-a-table-rewrite). + +The table rewrite is needed because of how YugabyteDB stores rows and indexes. In YugabyteDB, data is distributed based on the primary key; when a table does not have an explicit primary key assigned, YugabyteDB automatically creates an internal row ID to use as the table's primary key. As a result, these rows need to be rewritten to use the newly added primary key column. For more information, refer to [Primary keys](../../../../../develop/data-modeling/primary-keys-ysql). #### ALTER [ COLUMN ] *column_name* [ SET DATA ] TYPE *data_type* [ COLLATE *collation* ] [ USING *expression* ] Change the type of an existing column. The following semantics apply: -- If data on disk is required to change, a full table rewrite is needed. + - If the optional `COLLATE` clause is not specified, the default collation for the new column type will be used. - If the optional `USING` clause is not provided, the default conversion for the new column value will be the same as an assignment cast from the old type to the new type. - A `USING` clause must be included when there is no implicit assignment cast available from the old type to the new type. @@ -246,48 +258,51 @@ Change the type of an existing column. The following semantics apply: - Alter type is not supported for tables with rules (limitation inherited from PostgreSQL). - Alter type is not supported for tables with CDC streams, or xCluster replication when it requires data on disk to change. See [#16625](https://github.com/yugabyte/yugabyte-db/issues/16625). -##### Alter type without table-rewrite +##### Table rewrites -If the change doesn't require data on disk to change, concurrent DMLs to the table can be safely performed as shown in the following example: +Altering a column's type requires a [full table rewrite](#alter-table-operations-that-involve-a-table-rewrite), and any indexes that contain this column when the underlying storage format changes or if the data changes. +The following type changes commonly require a table rewrite: -```sql -CREATE TABLE test (id BIGSERIAL PRIMARY KEY, a VARCHAR(50)); -ALTER TABLE test ALTER COLUMN a TYPE VARCHAR(51); -``` +| From | To | Reason for table rewrite | +| ------------ | -------------- | --------------------------------------------------------------------- | +| INTEGER | TEXT | Different storage formats. | +| TEXT | INTEGER | Needs parsing and validation. | +| JSON | JSONB | Different internal representation. | +| UUID | TEXT | Different binary format. | +| BYTEA | TEXT | Different encoding. | +| TIMESTAMP | DATE | Loses time info; storage changes. | +| BOOLEAN | INTEGER | Different sizes and encoding. | +| REAL | NUMERIC | Different precision and format. 
| +| NUMERIC(p,s) | NUMERIC(p2,s2) | Requires data changes if scale is changed or if precision is smaller. | -##### Alter type with table rewrite +The following type changes do not require a rewrite when there is no associated index table on the column. When there is an associated index table on the column, a rewrite is performed on the index table alone but not on the main table. -If the change requires data on disk to change, a full table rewrite will be done and the following semantics apply: -- The action creates an entirely new table under the hood, and concurrent DMLs may not be reflected in the new table which can lead to correctness issues. -- The operation preserves split properties for hash-partitioned tables and hash-partitioned secondary indexes. For range-partitioned tables (and secondary indexes), split properties are only preserved if the altered column is not part of the table's (or secondary index's) range key. +| From | To | Notes | +| ------------ | ------------------ | ------------------------------------------------------ | +| VARCHAR(n) | VARCHAR(m) (m > n) | Length increase is compatible. | +| VARCHAR(n) | TEXT | Always compatible. | +| SERIAL | INTEGER | Underlying type is INTEGER; usually OK. | +| NUMERIC(p,s) | NUMERIC(p2,s2) | If new precision is larger and scale remains the same. | +| CHAR(n) | CHAR(m) (m > n) | PG stores it as padded TEXT, so often fine. | +| Domain types | Their base type | Compatible, unless additional constraints exist. | -Following is an example of alter type with table rewrite: +Altering a column with a (non-trivial) USING clause always requires a rewrite. -```sql -CREATE TABLE test (id BIGSERIAL PRIMARY KEY, a VARCHAR(50)); -INSERT INTO test(a) VALUES ('1234555'); -ALTER TABLE test ALTER COLUMN a TYPE VARCHAR(40); --- try to change type to BIGINT -ALTER TABLE test ALTER COLUMN a TYPE BIGINT; -ERROR: column "a" cannot be cast automatically to type bigint -HINT: You might need to specify "USING a::bigint". --- use USING clause to cast the values -ALTER TABLE test ALTER COLUMN a SET DATA TYPE BIGINT USING a::BIGINT; -``` +The table rewrite operation preserves split properties for hash-partitioned tables and hash-partitioned secondary indexes. For range-partitioned tables (and secondary indexes), split properties are only preserved if the altered column is not part of the table's (or secondary index's) range key. -Another option is to use a custom function as follows: +For example, the following ALTER TYPE statements would cause a table rewrite: -```sql -CREATE OR REPLACE FUNCTION myfunc(text) RETURNS BIGINT - AS 'select $1::BIGINT;' - LANGUAGE SQL - IMMUTABLE - RETURNS NULL ON NULL INPUT; +- ALTER TABLE foo + ALTER COLUMN foo_timestamp TYPE timestamp with time zone + USING + timestamp with time zone 'epoch' + foo_timestamp * interval '1 second'; +- ALTER TABLE t ALTER COLUMN t_num1 TYPE NUMERIC(9,5) -- from NUMERIC(6,1); +- ALTER TABLE test ALTER COLUMN a SET DATA TYPE BIGINT USING a::BIGINT; -- from INT -ALTER TABLE test ALTER COLUMN a SET DATA TYPE BIGINT USING myfunc(a); -``` +The following ALTER TYPE statement does not cause a table rewrite: +- ALTER TABLE test ALTER COLUMN a TYPE VARCHAR(51); -- from VARCHAR(50) #### DROP CONSTRAINT *constraint_name* [ RESTRICT | CASCADE ] @@ -296,13 +311,9 @@ Drop the named constraint from the table. - `RESTRICT` — Remove only the specified constraint. - `CASCADE` — Remove the specified constraint and any dependent objects. 
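For illustration, with hypothetical table and constraint names, the two modes behave as follows; `RESTRICT` is the default when neither is specified:

```sql
-- Fails if another object, such as a foreign key, still depends on the constraint.
ALTER TABLE customers DROP CONSTRAINT customers_email_key RESTRICT;

-- Drops the constraint together with any objects that depend on it.
ALTER TABLE customers DROP CONSTRAINT customers_email_key CASCADE;
```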
-{{< warning >}} -Dropping the `PRIMARY KEY` constraint results in a full table rewrite and full rewrite of all indexes associated with the table. -This happens because of the clustered storage by primary key that YugabyteDB uses to store rows and indexes. -While the tables and indexes are being rewritten, you may lose any modifications made to the table. -For reference, the same semantics as [Alter type with table rewrite](#alter-type-with-table-rewrite) apply. -{{< /warning >}} +##### Table rewrites +Dropping the `PRIMARY KEY` constraint results in a full table rewrite and full rewrite of all indexes associated with the table, which is a potentially expensive operation. For more details and common limitations of table rewrites, refer to [Alter table operations that involve a table rewrite](#alter-table-operations-that-involve-a-table-rewrite). #### RENAME [ COLUMN ] *column_name* TO *column_name* @@ -325,15 +336,21 @@ ALTER TABLE test RENAME CONSTRAINT vague_name TO unique_a_constraint; #### ENABLE / DISABLE ROW LEVEL SECURITY This enables or disables row level security for the table. + If enabled and no policies exist for the table, then a default-deny policy is applied. + If disabled, then existing policies for the table will not be applied and will be ignored. + See [CREATE POLICY](../dcl_create_policy) for details on how to create row level security policies. #### FORCE / NO FORCE ROW LEVEL SECURITY This controls the application of row security policies for the table when the user is the table owner. + If enabled, row level security policies will be applied when the user is the table owner. + If disabled (the default) then row level security will not be applied when the user is the table owner. + See [CREATE POLICY](../dcl_create_policy) for details on how to create row level security policies. ### Constraints @@ -371,6 +388,24 @@ Constraints marked as `INITIALLY IMMEDIATE` will be checked after every row with Constraints marked as `INITIALLY DEFERRED` will be checked at the end of the transaction. +## Alter table operations that involve a table rewrite + +Most ALTER TABLE statements only involve a schema modification and complete quickly. However, certain specific ALTER TABLE statements require a new copy of the underlying table (and associated index tables, in some cases) to be made and can potentially take a long time, depending on the sizes of the tables and indexes involved. This is typically referred to as a "table rewrite". This behavior is [similar to PostgreSQL](https://www.crunchydata.com/blog/when-does-alter-table-require-a-rewrite), though the exact scenarios when a rewrite is triggered may differ between PostgreSQL and YugabyteDB. + +It is not safe to execute concurrent DML on the table during a table rewrite because the results of any concurrent DML are not guaranteed to be reflected in the copy of the table being made. This restriction is similar to PostgreSQL, which explicitly prevents concurrent DML during a table rewrite by acquiring an ACCESS EXCLUSIVE table lock. + +If you need to perform one of these expensive rewrites, it is recommended to combine them into a single ALTER TABLE statement to avoid multiple expensive rewrites. For example: + +```sql +ALTER TABLE t ADD COLUMN c6 UUID DEFAULT gen_random_uuid(), ALTER COLUMN c8 TYPE TEXT +``` + +The following ALTER TABLE operations involve making a full copy of the underlying table (and possibly associated index tables): + +1. 
[Adding](#add-alter) or [dropping](#drop-constraint-constraint-restrict-cascade) the primary key of a table. +1. [Adding a column with a (volatile) default value](#add-column-if-not-exists-column-data-constraint). +1. [Changing the type of a column](#alter-column-column-set-data-type-data-collate-collation-using-expression). + ## See also -- [`CREATE TABLE`](../ddl_create_table) +- [CREATE TABLE](../ddl_create_table) diff --git a/docs/content/preview/develop/_index.md b/docs/content/preview/develop/_index.md index 5084599f7ed6..d9fa6c765f23 100644 --- a/docs/content/preview/develop/_index.md +++ b/docs/content/preview/develop/_index.md @@ -43,8 +43,8 @@ To learn how to build applications on top of YugabyteDB, see [Learn app developm Use these best practices to build distributed applications on top of YugabyteDB; this includes a list of techniques that you can adopt to make your application perform its best. -{{}} -For more details, see [Best practices](./best-practices-ysql). +{{}} +For more details, see [Best practices](./best-practices-develop). {{}} ## Drivers and ORMs diff --git a/docs/content/preview/develop/best-practices-develop/_index.md b/docs/content/preview/develop/best-practices-develop/_index.md new file mode 100644 index 000000000000..83a00e37b543 --- /dev/null +++ b/docs/content/preview/develop/best-practices-develop/_index.md @@ -0,0 +1,51 @@ +--- +title: Best practices for applications +headerTitle: Best practices +linkTitle: Best practices +description: Tips and tricks to build applications +headcontent: Tips and tricks to build applications for high performance and availability +aliases: + - /preview/develop/best-practices-ysql/ +menu: + preview: + identifier: best-practices-develop + parent: develop + weight: 570 +type: indexpage +--- + +## YSQL + +{{}} + + {{}} + + {{}} + + {{}} + +{{}} + +## YCQL + +{{}} + + {{}} + +{{}} diff --git a/docs/content/preview/develop/best-practices-develop/administration.md b/docs/content/preview/develop/best-practices-develop/administration.md new file mode 100644 index 000000000000..c3be52f1af49 --- /dev/null +++ b/docs/content/preview/develop/best-practices-develop/administration.md @@ -0,0 +1,57 @@ +--- +title: Best practices for YSQL database administrators +headerTitle: Best practices for YSQL database administrators +linkTitle: YSQL database administrators +description: Tips and tricks to build YSQL applications +headcontent: Tips and tricks for administering YSQL databases +menu: + preview: + identifier: best-practices-ysql-administration + parent: best-practices-develop + weight: 30 +type: docs +--- + +Database administrators can fine-tune YugabyteDB deployments for better reliability, performance, and operational efficiency by following targeted best practices. This guide outlines key recommendations for configuring single-AZ environments, optimizing memory use, accelerating CI/CD tests, and safely managing concurrent DML and DDL operations. These tips are designed to help DBAs maintain stable, scalable YSQL clusters in real-world and test scenarios alike. + +## Single availability zone (AZ) deployments + +In single AZ deployments, you need to set the [yb-tserver](../../../reference/configuration/yb-tserver) flag `--durable_wal_write=true` to not lose data if the whole data center goes down (for example, power failure). 
+ +## Allow for tablet replica overheads + +Although you can manually provision the amount of memory each TServer uses using flags ([--memory_limit_hard_bytes](../../../reference/configuration/yb-tserver/#memory-limit-hard-bytes) or [--default_memory_limit_to_ram_ratio](../../../reference/configuration/yb-tserver/#default-memory-limit-to-ram-ratio)), this can be tricky as you need to take into account how much memory the kernel needs, along with the PostgreSQL processes and any Master process that is going to be colocated with the TServer. + +Accordingly, you should use the [--use_memory_defaults_optimized_for_ysql](../../../reference/configuration/yb-tserver/#use-memory-defaults-optimized-for-ysql) flag, which gives good memory division settings for using YSQL, optimized for your node's size. + +If this flag is true, then the [memory division flag defaults](../../../reference/configuration/yb-tserver/#memory-division-flags) change to provide much more memory for PostgreSQL; furthermore, they optimize for the node size. + +Note that although the default setting is false, when creating a new universe using yugabyted or YugabyteDB Anywhere, the flag is set to true, unless you explicitly set it to false. + +## Settings for CI and CD integration tests + +You can set certain flags to increase performance using YugabyteDB in CI and CD automated test scenarios as follows: + +- Point the flags `--fs_data_dirs`, and `--fs_wal_dirs` to a RAMDisk directory to make DML, DDL, cluster creation, and cluster deletion faster, ensuring that data is not written to disk. +- Set the flag `--yb_num_shards_per_tserver=1`. Reducing the number of shards lowers overhead when creating or dropping YSQL tables, and writing or reading small amounts of data. +- Use colocated databases in YSQL. Colocation lowers overhead when creating or dropping YSQL tables, and writing or reading small amounts of data. +- Set the flag `--replication_factor=1` for test scenarios, as keeping the data three way replicated (default) is not necessary. Reducing that to 1 reduces space usage and increases performance. +- Use `TRUNCATE table1,table2,table3..tablen;` instead of CREATE TABLE, and DROP TABLE between test cases. + +## Concurrent DML during a DDL operation + +In YugabyteDB, DML is allowed to execute while a DDL statement modifies the schema that is accessed by the DML statement. For example, an `ALTER TABLE
<table> .. ADD COLUMN` DDL statement may add a new column while a `SELECT * from <table>
` executes concurrently on the same relation. In PostgreSQL, this is typically not allowed because such DDL statements take a table-level exclusive lock that prevents concurrent DML from executing. (Support for similar behavior in YugabyteDB is being tracked in issue {{}}.) + +In YugabyteDB, when a DDL modifies the schema of tables that are accessed by concurrent DML statements, the DML statement may do one of the following: + +- Operate with the old schema prior to the DDL. +- Operate with the new schema after the DDL completes. +- Encounter temporary errors such as `schema mismatch errors` or `catalog version mismatch`. It is recommended for the client to [retry such operations](https://www.yugabyte.com/blog/retry-mechanism-spring-boot-app/) whenever possible. + +Most DDL statements complete quickly, so this is typically not a significant issue in practice. However, [certain kinds of ALTER TABLE DDL statements](../../../api/ysql/the-sql-language/statements/ddl_alter_table/#alter-table-operations-that-involve-a-table-rewrite) involve making a full copy of the table(s) whose schema is being modified. For these operations, it is not recommended to run any concurrent DML statements on the table being modified by the `ALTER TABLE`, as the effect of such concurrent DML may not be reflected in the table copy. + +## Concurrent DDL during a DDL operation + +DDL statements that affect entities in different databases can be run concurrently. However, for DDL statements that impact the same database, it is recommended to execute them sequentially. + +DDL statements that relate to shared objects, such as roles or tablespaces, are considered as affecting all databases in the cluster, so they should also be run sequentially. diff --git a/docs/content/preview/develop/best-practices-ycql.md b/docs/content/preview/develop/best-practices-develop/best-practices-ycql.md similarity index 54% rename from docs/content/preview/develop/best-practices-ycql.md rename to docs/content/preview/develop/best-practices-develop/best-practices-ycql.md index 1c8fa50559cd..00b032af74c3 100644 --- a/docs/content/preview/develop/best-practices-ycql.md +++ b/docs/content/preview/develop/best-practices-develop/best-practices-ycql.md @@ -1,34 +1,34 @@ --- title: Best practices for YCQL applications -headerTitle: Best practices -linkTitle: Best practices +headerTitle: Best practices for YCQL applications +linkTitle: YCQL applications description: Tips and tricks to build YCQL applications headcontent: Tips and tricks to build YCQL applications for high performance and availability +aliases: + - /preview/develop/best-practices-ycql/ menu: preview: identifier: best-practices-ycql - parent: develop - weight: 571 -aliases: - - /preview/develop/best-practices/ + parent: best-practices-develop + weight: 40 type: docs --- -{{}} +To build high-performance and scalable applications using YCQL, developers should follow key schema design and operational best practices tailored for YugabyteDB's distributed architecture. This guide covers strategies for using indexes efficiently, optimizing read/write paths with batching and prepared statements, managing JSON and collection data types, and ensuring memory settings align with your query layer. These practices help ensure reliable performance, especially under real-world workloads. ## Global secondary indexes -Indexes use multi-shard transactional capability of YugabyteDB and are global and strongly consistent (ACID). 
To add secondary indexes, you need to create tables with [transactions enabled](../../api/ycql/ddl_create_table/#table-properties-1). They can also be used as materialized views by using the [`INCLUDE` clause](../../api/ycql/ddl_create_index#included-columns). +Indexes use multi-shard transactional capability of YugabyteDB and are global and strongly consistent (ACID). To add secondary indexes, you need to create tables with [transactions enabled](../../../api/ycql/ddl_create_table/#table). They can also be used as materialized views by using the [INCLUDE clause](../../../api/ycql/ddl_create_index#included-columns). ## Unique indexes -YCQL supports [unique indexes](../../api/ycql/ddl_create_index#unique-index). A unique index disallows duplicate values from being inserted into the indexed columns. +YCQL supports [unique indexes](../../../api/ycql/ddl_create_index#unique-index). A unique index disallows duplicate values from being inserted into the indexed columns. ## Covering indexes When querying by a secondary index, the original table is consulted to get the columns that aren't specified in the index. This can result in multiple random reads across the main table. -Sometimes, a better way is to include the other columns that you're querying that are not part of the index using the [`INCLUDE` clause](../../api/ycql/ddl_create_index/#included-columns). When additional columns are included in the index, they can be used to respond to queries directly from the index without querying the table. +Sometimes, a better way is to include the other columns that you're querying that are not part of the index using the [INCLUDE clause](../../../api/ycql/ddl_create_index/#included-columns). When additional columns are included in the index, they can be used to respond to queries directly from the index without querying the table. This turns a (possible) random read from the main table to just a filter on the index. @@ -38,23 +38,23 @@ For operations like `UPDATE ... IF EXISTS` and `INSERT ... IF NOT EXISTS` that r ## JSONB -YugabyteDB supports the [`jsonb`](../../api/ycql/type_jsonb/) data type to model JSON data, which does not have a set schema and might change often. You can use JSONB to group less accessed columns of a table. YCQL also supports JSONB expression indexes that can be used to speed up data retrieval that would otherwise require scanning the JSON entries. +YugabyteDB supports the [JSONB](../../../api/ycql/type_jsonb/) data type to model JSON data, which does not have a set schema and might change often. You can use JSONB to group less accessed columns of a table. YCQL also supports JSONB expression indexes that can be used to speed up data retrieval that would otherwise require scanning the JSON entries. {{< note title="Use JSONB columns only when necessary" >}} -`jsonb` columns are slower to read and write compared to normal columns. They also take more space because they need to store keys in strings and make keeping data consistency more difficult. A good schema design is to keep most columns as regular columns or collections, and use `jsonb` only for truly dynamic values. Don't create a `data jsonb` column where you store everything; instead, use a `dynamic_data jsonb` column with the others being primitive columns. +JSONB columns are slower to read and write compared to normal columns. They also take more space because they need to store keys in strings and make keeping data consistency more difficult. 
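Continuing the sketch of a hypothetical `store.books` table that keeps dynamic attributes in a `details` JSONB column, individual attributes can be read with the `->` and `->>` operators:

```sql
-- '->' returns a JSONB sub-object, while '->>' returns the attribute as text.
SELECT id, details->>'author', details->'price'->>'amount'
FROM store.books
WHERE id = 1;
```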
A good schema design is to keep most columns as regular columns or collections, and use JSONB only for truly dynamic values. Don't create a `data jsonb` column where you store everything; instead, use a `dynamic_data jsonb` column with the others being primitive columns. {{< /note >}} ## Increment and decrement numeric types -In YugabyteDB, YCQL extends Apache Cassandra to add increment and decrement operators for integer data types. [Integers](../../api/ycql/type_int) can be set, inserted, incremented, and decremented while `COUNTER` can only be incremented or decremented. YugabyteDB implements CAS(compare-and-set) operations in one round trip, compared to four for Apache Cassandra. +In YugabyteDB, YCQL extends Apache Cassandra to add increment and decrement operators for integer data types. [Integers](../../../api/ycql/type_int) can be set, inserted, incremented, and decremented while `COUNTER` can only be incremented or decremented. YugabyteDB implements CAS(compare-and-set) operations in one round trip, compared to four for Apache Cassandra. ## Expire older records automatically with TTL -YCQL supports automatic expiration of data using the [TTL feature](../../api/ycql/ddl_create_table/#use-table-property-to-define-the-default-expiration-time-for-rows). You can set a retention policy for data at table/row/column level and the older data is automatically purged from the database. +YCQL supports automatic expiration of data using the [TTL feature](../../../api/ycql/ddl_create_table/#use-table-property-to-define-the-default-expiration-time-for-rows). You can set a retention policy for data at table/row/column level and the older data is automatically purged from the database. -If configuring TTL for a time series dataset or any dataset with a table-level TTL, it is recommended for CPU and space efficiency to expire older files directly by using TTL-specific configuration options. More details can be found in [Efficient data expiration for TTL](../learn/ttl-data-expiration-ycql/#efficient-data-expiration-for-ttl). +If configuring TTL for a time series dataset or any dataset with a table-level TTL, it is recommended for CPU and space efficiency to expire older files directly by using TTL-specific configuration options. More details can be found in [Efficient data expiration for TTL](../../learn/ttl-data-expiration-ycql/#efficient-data-expiration-for-ttl). {{}} TTL does not apply to transactional tables and so, its unsupported in that context. @@ -62,7 +62,7 @@ TTL does not apply to transactional tables and so, its unsupported in that conte ## Use YugabyteDB drivers -Use YugabyteDB-specific [client drivers](../../drivers-orms/) because they are cluster- and partition-aware, and support `jsonb` columns. +Use YugabyteDB-specific [client drivers](../../../drivers-orms/) because they are cluster- and partition-aware, and support `jsonb` columns. ## Leverage connection pooling in the YCQL client @@ -90,22 +90,22 @@ Collections are designed for storing small sets of values that are not expected ## Collections with many elements -Each element inside a collection ends up as a [separate key value](../../architecture/docdb/data-model#examples) in DocDB adding per-element overhead. +Each element inside a collection ends up as a [separate key value](../../../architecture/docdb/data-model#examples) in DocDB adding per-element overhead. -If your collections are immutable, or you update the whole collection in full, consider using the `JSONB` data type. 
An alternative would also be to use ProtoBuf or FlatBuffers and store the serialized data in a `BLOB` column. +If your collections are immutable, or you update the whole collection in full, consider using the JSONB data type. An alternative would also be to use ProtoBuf or FlatBuffers and store the serialized data in a BLOB column. ## Use partition_hash for large table scans -`partition_hash` function can be used for querying a subset of the data to get approximate row counts or to break down full-table operations into smaller sub-tasks that can be run in parallel. See [example usage](../../api/ycql/expr_fcall#partition-hash-function) along with a working Python script. +`partition_hash` function can be used for querying a subset of the data to get approximate row counts or to break down full-table operations into smaller sub-tasks that can be run in parallel. See [example usage](../../../api/ycql/expr_fcall#partition-hash-function) along with a working Python script. ## TRUNCATE tables instead of DELETE -[TRUNCATE](../../api/ycql/dml_truncate/) deletes the database files that store the table and is much faster than [DELETE](../../api/ycql/dml_delete/) which inserts a _delete marker_ for each row in transactions and they are removed from storage when a compaction runs. +[TRUNCATE](../../../api/ycql/dml_truncate/) deletes the database files that store the table and is much faster than [DELETE](../../../api/ycql/dml_delete/) which inserts a _delete marker_ for each row in transactions and they are removed from storage when a compaction runs. ## Memory and tablet limits -If you are not using YSQL, ensure the [use_memory_defaults_optimized_for_ysql](../../reference/configuration/yb-master/#use-memory-defaults-optimized-for-ysql) flag is set to false. This flag optimizes YugabyteDB's memory setup for YSQL, reserving a considerable amount of memory for PostgreSQL; if you are not using YSQL then that memory is wasted when it could be helping improve performance by allowing more data to be cached. +If you are not using YSQL, ensure the [use_memory_defaults_optimized_for_ysql](../../../reference/configuration/yb-master/#use-memory-defaults-optimized-for-ysql) flag is set to false. This flag optimizes YugabyteDB's memory setup for YSQL, reserving a considerable amount of memory for PostgreSQL; if you are not using YSQL then that memory is wasted when it could be helping improve performance by allowing more data to be cached. Note that although the default setting is false, when creating a new universe using yugabyted or YugabyteDB Anywhere, the flag is set to true, unless you explicitly set it to false. -See [Memory division flags](../../reference/configuration/yb-tserver/#memory-division-flags) for more information. +See [Memory division flags](../../../reference/configuration/yb-tserver/#memory-division-flags) for more information. 
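As a sketch of the `partition_hash` approach described above, assuming a hypothetical `store.sensor_data` table whose partition key is the single column `device_id`, each worker can count one sub-range of the hash space and the results can be summed client-side:

```sql
-- partition_hash values span 0 to 65535; this scans the first of 64 equal sub-ranges.
SELECT COUNT(*) FROM store.sensor_data
WHERE partition_hash(device_id) >= 0
  AND partition_hash(device_id) <= 1023;
```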
diff --git a/docs/content/preview/develop/best-practices-develop/clients.md b/docs/content/preview/develop/best-practices-develop/clients.md new file mode 100644 index 000000000000..050b44adb420 --- /dev/null +++ b/docs/content/preview/develop/best-practices-develop/clients.md @@ -0,0 +1,50 @@ +--- +title: Best practices for YSQL clients +headerTitle: Best practices for YSQL clients +linkTitle: YSQL clients +description: Tips and tricks for administering YSQL clients +headcontent: Tips and tricks for administering YSQL clients +menu: + preview: + identifier: best-practices-ysql-clients + parent: best-practices-develop + weight: 20 +type: docs +--- + +Client-side configuration plays a critical role in the performance, scalability, and resilience of YSQL applications. This guide highlights essential best practices for managing connections, balancing load across nodes, and handling failovers efficiently using YugabyteDB's smart drivers and connection pooling. Whether you're deploying in a single region or across multiple data centers, these tips will help ensure your applications make the most of YugabyteDB's distributed architecture + +## Load balance and failover using smart drivers + +YugabyteDB [smart drivers](../../../drivers-orms/smart-drivers/) provide advanced cluster-aware load-balancing capabilities that enable your applications to send requests to multiple nodes in the cluster by connecting to one node. You can also set a fallback hierarchy by assigning priority to specific regions and ensuring that connections are made to the region with the highest priority, and then fall back to the region with the next priority in case the high-priority region fails. + +{{}} +For more information, see [Load balancing with smart drivers](https://www.yugabyte.com/blog/multi-region-database-deployment-best-practices/#load-balancing-with-smart-driver). +{{}} + +## Make sure the application uses new nodes + +When a cluster is expanded, newly added nodes do not automatically start to receive client traffic. Regardless of the language of the driver or whether you are using a smart driver, the application must either explicitly request new connections or, if it is using a pooling solution, it can configure the pooler to recycle connections periodically (for example, by setting maxLifetime and/or idleTimeout). + +## Scale your application with connection pools + +Set up different pools with different load balancing policies as needed for your application to scale by using popular pooling solutions such as HikariCP and Tomcat along with YugabyteDB [smart drivers](../../../drivers-orms/smart-drivers/). + +{{}} +For more information, see [Connection pooling](../../../drivers-orms/smart-drivers/#connection-pooling). +{{}} + +### Database migrations and connection pools + +In some cases, connection pools may trigger unexpected errors while running a sequence of database migrations or other DDL operations. + +Because YugabyteDB is distributed, it can take a while for the result of a DDL to fully propagate to all caches on all nodes in a cluster. As a result, after a DDL statement completes, the next DDL statement that runs right afterwards on a different PostgreSQL connection may, in rare cases, see errors such as `duplicate key value violates unique constraint "pg_attribute_relid_attnum_index"` (see issue {{}}). It is recommended to use a single connection while running a sequence of DDL operations, as is common with application migration scripts with tools such as Flyway or Active Record. 
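As a minimal sketch with hypothetical object names, a migration such as the following should be applied step by step over a single connection, rather than letting a pool spread the DDL statements across different connections:

```sql
-- Run every step of the migration on the same session.
CREATE TABLE IF NOT EXISTS accounts (
    id      BIGINT PRIMARY KEY,
    balance NUMERIC NOT NULL DEFAULT 0
);
ALTER TABLE accounts ADD COLUMN IF NOT EXISTS created_at TIMESTAMPTZ DEFAULT now();
CREATE INDEX IF NOT EXISTS accounts_created_at_idx ON accounts (created_at);
```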
+ +## Use YSQL Connection Manager + +YugabyteDB includes a built-in connection pooler, YSQL Connection Manager {{}}, which provides the same connection pooling advantages as other external pooling solutions, but without many of their limitations. As the manager is bundled with the product, it is convenient to manage, monitor, and configure the server connections. + +For more information, refer to the following: + +- [YSQL Connection Manager](../../../explore/going-beyond-sql/connection-mgr-ysql/) +- [Built-in Connection Manager Turns Key PostgreSQL Weakness into a Strength](https://www.yugabyte.com/blog/connection-pooling-management/) diff --git a/docs/content/preview/develop/best-practices-ysql.md b/docs/content/preview/develop/best-practices-develop/data-modeling-perf.md similarity index 51% rename from docs/content/preview/develop/best-practices-ysql.md rename to docs/content/preview/develop/best-practices-develop/data-modeling-perf.md index 2850606d4cb3..d308b4eda185 100644 --- a/docs/content/preview/develop/best-practices-ysql.md +++ b/docs/content/preview/develop/best-practices-develop/data-modeling-perf.md @@ -1,25 +1,25 @@ --- -title: Best practices for YSQL applications -headerTitle: Best practices -linkTitle: Best practices -description: Tips and tricks to build YSQL applications -headcontent: Tips and tricks to build YSQL applications for high performance and availability +title: Best practices for Data Modeling and performance of YSQL applications +headerTitle: Best practices for Data Modeling and performance of YSQL applications +linkTitle: YSQL data modeling +description: Tips and tricks for building YSQL applications +headcontent: Tips and tricks for building YSQL applications menu: preview: - identifier: best-practices-ysql - parent: develop - weight: 570 + identifier: data-modeling-perf + parent: best-practices-develop + weight: 10 type: docs --- -{{}} +Designing efficient, high-performance YSQL applications requires thoughtful data modeling and an understanding of how YugabyteDB handles distributed workloads. This guide offers a collection of best practices, from leveraging colocation and indexing techniques to optimizing transactions and parallelizing queries, that can help you build scalable, globally distributed applications with low latency and high availability. Whether you're developing new applications or tuning existing ones, these tips will help you make the most of YSQL's capabilities ## Use application patterns -Running applications in multiple data centers with data split across them is not a trivial task. When designing global applications, choose a suitable design pattern for your application from a suite of battle-tested design paradigms, including [Global database](../build-global-apps/global-database), [Multi-master](../build-global-apps/active-active-multi-master), [Standby cluster](../build-global-apps/active-active-single-master), [Duplicate indexes](../build-global-apps/duplicate-indexes), [Follower reads](../build-global-apps/follower-reads), and more. You can also combine these patterns as per your needs. +Running applications in multiple data centers with data split across them is not a trivial task. 
When designing global applications, choose a suitable design pattern for your application from a suite of battle-tested design paradigms, including [Global database](../../build-global-apps/global-database), [Multi-master](../../build-global-apps/active-active-multi-master), [Standby cluster](../../build-global-apps/active-active-single-master), [Duplicate indexes](../../build-global-apps/duplicate-indexes), [Follower reads](../../build-global-apps/follower-reads), and more. You can also combine these patterns as per your needs. {{}} -For more details, see [Build global applications](../build-global-apps). +For more details, see [Build global applications](../../build-global-apps). {{}} ## Colocation @@ -27,14 +27,14 @@ For more details, see [Build global applications](../build-global-apps). Colocated tables optimize latency and performance for data access by reducing the need for additional trips across the network for small tables. Additionally, it reduces the overhead of creating a tablet for every relation (tables, indexes, and so on) and their storage per node. {{}} -For more details, see [Colocation](../../explore/colocation/). +For more details, see [Colocation](../../../explore/colocation/). {{}} ## Faster reads with covering indexes When a query uses an index to look up rows faster, the columns that are not present in the index are fetched from the original table. This results in additional round trips to the main table leading to increased latency. -Use [covering indexes](../../explore/ysql-language-features/indexes-constraints/covering-index-ysql/) to store all the required columns needed for your queries in the index. Indexing converts a standard Index-Scan to an [Index-Only-Scan](https://dev.to/yugabyte/boosts-secondary-index-queries-with-index-only-scan-5e7j). +Use [covering indexes](../../../explore/ysql-language-features/indexes-constraints/covering-index-ysql/) to store all the required columns needed for your queries in the index. Indexing converts a standard Index-Scan to an [Index-Only-Scan](https://dev.to/yugabyte/boosts-secondary-index-queries-with-index-only-scan-5e7j). {{}} For more details, see [Avoid trips to the table with covering indexes](https://www.yugabyte.com/blog/multi-region-database-deployment-best-practices/#avoid-trips-to-the-table-with-covering-indexes). @@ -45,28 +45,24 @@ For more details, see [Avoid trips to the table with covering indexes](https://w A partial index is an index that is built on a subset of a table and includes only rows that satisfy the condition specified in the WHERE clause. This speeds up any writes to the table and reduces the size of the index, thereby improving speed for read queries that use the index. {{}} -For more details, see [Partial indexes](../../explore/ysql-language-features/indexes-constraints/partial-index-ysql/). +For more details, see [Partial indexes](../../../explore/ysql-language-features/indexes-constraints/partial-index-ysql/). {{}} ## Distinct keys with unique indexes If you need values in some of the columns to be unique, you can specify your index as UNIQUE. -When a unique index is applied to two or more columns, the combined values in these columns can't be duplicated in multiple rows. - -{{}} -By default a NULL value is treated as a distinct value, allowing you to have multiple NULL values in a column with a unique index. This can be turned OFF by adding the [NULLS NOT DISTINCT](../../api/ysql/the-sql-language/statements/ddl_create_index#nulls-not-distinct) option when creating the unique index. 
-{{}} +When a unique index is applied to two or more columns, the combined values in these columns can't be duplicated in multiple rows. Note that because a NULL value is treated as a distinct value, you can have multiple NULL values in a column with a unique index. {{}} -For more details, see [Unique indexes](../../explore/ysql-language-features/indexes-constraints/unique-index-ysql/). +For more details, see [Unique indexes](../../../explore/ysql-language-features/indexes-constraints/unique-index-ysql/). {{}} ## Faster sequences with server-level caching Sequences in databases automatically generate incrementing numbers, perfect for generating unique values like order numbers, user IDs, check numbers, and so on. They prevent multiple application instances from concurrently generating duplicate values. However, generating sequences on a database that is spread across regions could have a latency impact on your applications. -Enable [server-level caching](../../api/ysql/exprs/sequence_functions/func_nextval/#caching-values-on-the-yb-tserver) to improve the speed of sequences, and also avoid discarding many sequence values when an application disconnects. +Enable [server-level caching](../../../api/ysql/exprs/func_nextval/#caching-values-on-the-yb-tserver) to improve the speed of sequences, and also avoid discarding many sequence values when an application disconnects. {{}} For a demo, see the YugabyteDB Friday Tech Talk on [Scaling sequences with server-level caching](https://www.youtube.com/watch?v=hs-CU3vjMQY&list=PL8Z3vt4qJTkLTIqB9eTLuqOdpzghX8H40&index=76). @@ -89,15 +85,15 @@ UPDATE txndemo SET v = v + 3 WHERE k=1 RETURNING v; ``` {{}} -For more details, see [Fast single-row transactions](../../develop/learn/transactions/transactions-performance-ysql/#fast-single-row-transactions). +For more details, see [Fast single-row transactions](../../../develop/learn/transactions/transactions-performance-ysql/#fast-single-row-transactions). {{}} ## Delete older data quickly with partitioning -Use [table partitioning](../../explore/ysql-language-features/advanced-features/partitions/) to split your data into multiple partitions according to date so that you can quickly delete older data by dropping the partition. +Use [table partitioning](../../../explore/ysql-language-features/advanced-features/partitions/) to split your data into multiple partitions according to date so that you can quickly delete older data by dropping the partition. {{}} -For more details, see [Partition data by time](../data-modeling/common-patterns/timeseries/partitioning-by-time/). +For more details, see [Partition data by time](../../data-modeling/common-patterns/timeseries/partitioning-by-time/). {{}} ## Use the right data types for partition keys @@ -167,51 +163,16 @@ SELECT * FROM products; ``` {{}} -For more information, see [Data manipulation](../../explore/ysql-language-features/data-manipulation). -{{}} - -## Load balance and failover using smart drivers - -YugabyteDB [smart drivers](../../drivers-orms/smart-drivers/) provide advanced cluster-aware load-balancing capabilities that enables your applications to send requests to multiple nodes in the cluster just by connecting to one node. You can also set a fallback hierarchy by assigning priority to specific regions and ensuring that connections are made to the region with the highest priority, and then fall back to the region with the next priority in case the high-priority region fails. 
- -{{}} -For more information, see [Load balancing with smart drivers](https://www.yugabyte.com/blog/multi-region-database-deployment-best-practices/#load-balancing-with-smart-driver). -{{}} - -## Make sure the application uses new nodes - -When a cluster is expanded, newly added nodes do not automatically start to receive client traffic. Regardless of the language of the driver or whether you are using a smart driver, the application must either explicitly request new connections or, if it is using a pooling solution, it can configure the pooler to recycle connections periodically (for example, by setting maxLifetime and/or idleTimeout). - -## Scale your application with connection pools - -Set up different pools with different load balancing policies as needed for your application to scale by using popular pooling solutions such as HikariCP and Tomcat along with YugabyteDB [smart drivers](../../drivers-orms/smart-drivers/). - -{{}} -For more information, see [Connection pooling](../../drivers-orms/smart-drivers/#connection-pooling). +For more information, see [Data manipulation](../../../explore/ysql-language-features/data-manipulation). {{}} -### Database migrations and connection pools - -In some cases, connection pools may trigger unexpected errors while running a sequence of database migrations or other DDL operations. - -Because YugabyteDB is distributed, it can take a while for the result of a DDL to fully propagate to all caches on all nodes in a cluster. As a result, after a DDL statement completes, the next DDL statement that runs right afterwards on a different PostgreSQL connection may, in rare cases, see errors such as `duplicate key value violates unique constraint "pg_attribute_relid_attnum_index"` (see issue {{}}). It is recommended to use a single connection while running a sequence of DDL operations, as is common with application migration scripts with tools such as Flyway or Active Record. - -## Use YSQL Connection Manager - -YugabyteDB includes a built-in connection pooler, YSQL Connection Manager {{}}, which provides the same connection pooling advantages as other external pooling solutions, but without many of their limitations. As the manager is bundled with the product, it is convenient to manage, monitor, and configure the server connections. - -For more information, refer to the following: - -- [YSQL Connection Manager](../../explore/going-beyond-sql/connection-mgr-ysql/) -- [Built-in Connection Manager Turns Key PostgreSQL Weakness into a Strength](https://www.yugabyte.com/blog/connection-pooling-management/) - ## Re-use query plans with prepared statements -Whenever possible, use [prepared statements](../../api/ysql/the-sql-language/statements/perf_prepare/) to ensure that YugabyteDB can re-use the same query plan and eliminate the need for a server to parse the query on each operation. +Whenever possible, use [prepared statements](../../../api/ysql/the-sql-language/statements/perf_prepare/) to ensure that YugabyteDB can re-use the same query plan and eliminate the need for a server to parse the query on each operation. {{}} -When using server-side pooling, avoid explicit PREPARE and EXECUTE calls and use protocol-level prepared statements instead. Explicit prepare/execute calls can make connections sticky, which prevents you from realizing the benefits of using YSQL Connection Manager{{}} and server-side pooling. +When using server-side pooling, avoid explicit PREPARE and EXECUTE calls and use protocol-level prepared statements instead. 
Explicit prepare/execute calls can make connections sticky, which prevents you from realizing the benefits of using YSQL Connection Manager{{}} and server-side pooling. Depending on your driver, you may have to set some parameters to leverage prepared statements. For example, Npgsql supports automatic preparation using the Max Auto Prepare and Auto Prepare Min Usages connection parameters, which you add to your connection string as follows: @@ -232,14 +193,14 @@ For more details, see [Prepared statements in PL/pgSQL](https://dev.to/aws-heroe Use BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE for batch or long-running jobs, which need a consistent snapshot of the database without interfering, or being interfered with by other transactions. {{}} -For more details, see [Large scans and batch jobs](../../develop/learn/transactions/transactions-performance-ysql/#large-scans-and-batch-jobs). +For more details, see [Large scans and batch jobs](../../../develop/learn/transactions/transactions-performance-ysql/#large-scans-and-batch-jobs). {{}} ## JSONB datatype -Use the [JSONB](../../api/ysql/datatypes/type_json) datatype to model JSON data; that is, data that doesn't have a set schema but has a truly dynamic schema. +Use the [JSONB](../../../api/ysql/datatypes/type_json) datatype to model JSON data; that is, data that doesn't have a set schema but has a truly dynamic schema. -JSONB in YSQL is the same as the [JSONB datatype in PostgreSQL](https://www.postgresql.org/docs/15/datatype-json.html). +JSONB in YSQL is the same as the [JSONB datatype in PostgreSQL](https://www.postgresql.org/docs/11/datatype-json.html). You can use JSONB to group less interesting or less frequently accessed columns of a table. @@ -261,13 +222,9 @@ YSQL also supports JSONB expression indexes, which can be used to speed up data For large or batch SELECT or DELETE that have to scan all tablets, you can parallelize your operation by creating queries that affect only a specific part of the tablet using the `yb_hash_code` function. {{}} -For more details, see [Distributed parallel queries](../../api/ysql/exprs/func_yb_hash_code/#distributed-parallel-queries). +For more details, see [Distributed parallel queries](../../../api/ysql/exprs/func_yb_hash_code/#distributed-parallel-queries). {{}} -## Single availability zone (AZ) deployments - -In single AZ deployments, you need to set the [yb-tserver](../../reference/configuration/yb-tserver) flag `--durable_wal_write=true` to not lose data if the whole data center goes down (For example, power failure). - ## Row size limit Big columns add up when you select full or multiple rows. For consistent latency or performance, it is recommended keeping the size under 10MB or less, and a maximum of 32MB. @@ -278,43 +235,23 @@ For consistent latency or performance, it is recommended to size columns in the ## TRUNCATE tables instead of DELETE -[TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/) deletes the database files that store the table data and is much faster than [DELETE](../../api/ysql/the-sql-language/statements/dml_delete/), which inserts a _delete marker_ for each row in transactions that are later removed from storage during compaction runs. 
+[TRUNCATE](../../../api/ysql/the-sql-language/statements/ddl_truncate/) deletes the database files that store the table data and is much faster than [DELETE](../../../api/ysql/the-sql-language/statements/dml_delete/), which inserts a _delete marker_ for each row in transactions that are later removed from storage during compaction runs. {{}} -Currently, TRUNCATE is not transactional. Also, similar to PostgreSQL, TRUNCATE is not MVCC-safe. For more details, see [TRUNCATE](../../api/ysql/the-sql-language/statements/ddl_truncate/). +Currently, TRUNCATE is not transactional. Also, similar to PostgreSQL, TRUNCATE is not MVCC-safe. For more details, see [TRUNCATE](../../../api/ysql/the-sql-language/statements/ddl_truncate/). {{}} ## Minimize the number of tablets you need Each table and index is split into tablets and each tablet has overhead. The more tablets you need, the bigger your universe will need to be. See [allowing for tablet replica overheads](#allowing-for-tablet-replica-overheads) for how the number of tablets affects how big your universe needs to be. -Each table and index consists of several tablets based on the [--ysql_num_shards_per_tserver](../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. +Each table and index consists of several tablets based on the [--ysql_num_shards_per_tserver](../../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. You can try one of the following methods to reduce the number of tablets: -- Use [colocation](../../explore/colocation/) to group small tables into 1 tablet. -- Reduce number of tablets-per-table using the [--ysql_num_shards_per_tserver](../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. -- Use the [SPLIT INTO](../../api/ysql/the-sql-language/statements/ddl_create_table/#split-into) clause when creating a table. -- Start with few tablets and use [automatic tablet splitting](../../architecture/docdb-sharding/tablet-splitting/). +- Use [colocation](../../../explore/colocation/) to group small tables into 1 tablet. +- Reduce number of tablets-per-table using the [--ysql_num_shards_per_tserver](../../../reference/configuration/yb-tserver/#yb-num-shards-per-tserver) flag. +- Use the [SPLIT INTO](../../../api/ysql/the-sql-language/statements/ddl_create_table/#split-into) clause when creating a table. +- Start with few tablets and use [automatic tablet splitting](../../../architecture/docdb-sharding/tablet-splitting/). Note that multiple tablets can allow work to proceed in parallel so you may not want every table to have only one tablet. - -## Allow for tablet replica overheads - -Although you can manually provision the amount of memory each TServer uses using flags ([--memory_limit_hard_bytes](../../reference/configuration/yb-tserver/#memory-limit-hard-bytes) or [--default_memory_limit_to_ram_ratio](../../reference/configuration/yb-tserver/#default-memory-limit-to-ram-ratio)), this can be tricky as you need to take into account how much memory the kernel needs, along with the PostgreSQL processes and any Master process that is going to be colocated with the TServer. - -Accordingly, you should use the [--use_memory_defaults_optimized_for_ysql](../../reference/configuration/yb-tserver/#use-memory-defaults-optimized-for-ysql) flag, which gives good memory division settings for using YSQL, optimized for your node's size. 
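To make the tablet-reduction options listed above concrete, here is a minimal sketch; the table name, database name, and tablet count are illustrative rather than recommendations, and the exact colocation syntax may vary by release (see the colocation page linked above).

```sql
-- Cap the number of tablets for a hash-sharded table at creation time.
CREATE TABLE orders (
    order_id BIGINT,
    status   TEXT,
    PRIMARY KEY (order_id HASH)
) SPLIT INTO 4 TABLETS;

-- Group many small relations into a single tablet with a colocated database.
CREATE DATABASE reference_data WITH COLOCATION = true;
```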
- -If this flag is true, then the [memory division flag defaults](../../reference/configuration/yb-tserver/#memory-division-flags) change to provide much more memory for PostgreSQL; furthermore, they optimize for the node size. - -Note that although the default setting is false, when creating a new universe using yugabyted or YugabyteDB Anywhere, the flag is set to true, unless you explicitly set it to false. - -## Settings for CI and CD integration tests - -You can set certain flags to increase performance using YugabyteDB in CI and CD automated test scenarios as follows: - -- Point the flags `--fs_data_dirs`, and `--fs_wal_dirs` to a RAMDisk directory to make DML, DDL, cluster creation, and cluster deletion faster, ensuring that data is not written to disk. -- Set the flag `--yb_num_shards_per_tserver=1`. Reducing the number of shards lowers overhead when creating or dropping YSQL tables, and writing or reading small amounts of data. -- Use colocated databases in YSQL. Colocation lowers overhead when creating or dropping YSQL tables, and writing or reading small amounts of data. -- Set the flag `--replication_factor=1` for test scenarios, as keeping the data three way replicated (default) is not necessary. Reducing that to 1 reduces space usage and increases performance. -- Use `TRUNCATE table1,table2,table3..tablen;` instead of CREATE TABLE, and DROP TABLE between test cases. diff --git a/docs/content/preview/manage/data-migration/migrate-from-postgres.md b/docs/content/preview/manage/data-migration/migrate-from-postgres.md index eefdef4873fe..c42c3443b9d5 100644 --- a/docs/content/preview/manage/data-migration/migrate-from-postgres.md +++ b/docs/content/preview/manage/data-migration/migrate-from-postgres.md @@ -247,8 +247,8 @@ For more details, see [Live migration with fall-back](/preview/yugabyte-voyager/ When porting an existing PostgreSQL application to YugabyteDB you can follow a set of best practices to get the best out of your new deployment. -{{}} -For a full list of tips and tricks for high performance and availability, see [Best practices](../../../develop/best-practices-ysql/). +{{}} +For a full list of tips and tricks for high performance and availability, see [Best practices](../../../develop/best-practices-develop/). {{}} ### Retry transactions on conflicts diff --git a/docs/content/preview/reference/get-started-guide.md b/docs/content/preview/reference/get-started-guide.md index ce0c75b7625e..fa3dd531bd8b 100644 --- a/docs/content/preview/reference/get-started-guide.md +++ b/docs/content/preview/reference/get-started-guide.md @@ -124,7 +124,7 @@ Find resources for getting started, migrating existing databases, using your dat [Distributed PostgreSQL Essentials for Developers: Hands-on Course](https://www.youtube.com/watch?v=rqJBFQ-4Hgk) : Build a scalable and fault-tolerant movie recommendation service. -[Best practices](../../develop/best-practices-ysql/) +[Best practices](../../develop/best-practices-develop/) : Tips and tricks to build applications for high performance and availability. [Drivers and ORMs](../../drivers-orms/) diff --git a/docs/content/preview/releases/ybdb-releases/v2024.1.md b/docs/content/preview/releases/ybdb-releases/v2024.1.md index 8ca09a28bea9..84fda1dbbb1d 100644 --- a/docs/content/preview/releases/ybdb-releases/v2024.1.md +++ b/docs/content/preview/releases/ybdb-releases/v2024.1.md @@ -18,16 +18,16 @@ What follows are the release notes for the YugabyteDB 2024.1 release series. 
Con For an RSS feed of all release series, point your feed reader to the [RSS feed for releases](../index.xml). {{}} -YugabyteDB 2024.1.0.0 and newer releases do not support v7 Linux versions (CentOS7, Red Hat Enterprise Linux 7, Oracle Enterprise Linux 7.x), Amazon Linux 2, and Ubuntu 18. If you're currently using one of these Linux versions, upgrade to a supported OS version before installing YugabyteDB v2024.1.0. Refer to [Operating system support](/stable/reference/configuration/operating-systems/) for the complete list of supported operating systems. +YugabyteDB 2024.1.0.0 and newer releases do not support v7 Linux versions (CentOS7, Red Hat Enterprise Linux 7, Oracle Enterprise Linux 7.x), Amazon Linux 2, and Ubuntu 18. If you're currently using one of these Linux versions, upgrade to a supported OS version before installing YugabyteDB v2024.1.0. Refer to [Operating system support](/v2024.1/reference/configuration/operating-systems/) for the complete list of supported operating systems. {{}} {{< tip title="New memory division settings available" >}} -YugabyteDB uses [memory division flags](/stable/reference/configuration/yb-master/#memory-division-flags) to specify how memory should be divided between different processes (for example, [YB-TServer](/stable/architecture/yb-tserver/) versus [YB-Master](/stable/architecture/yb-master/)) on a YugabyteDB node as well as within processes. Using these flags, you can better allocate memory for PostgreSQL, making it more suitable for a wider range of use cases. +YugabyteDB uses [memory division flags](/v2024.1/reference/configuration/yb-master/#memory-division-flags) to specify how memory should be divided between different processes (for example, [YB-TServer](/v2024.1/architecture/yb-tserver/) versus [YB-Master](/v2024.1/architecture/yb-master/)) on a YugabyteDB node as well as within processes. Using these flags, you can better allocate memory for PostgreSQL, making it more suitable for a wider range of use cases. -For _new_ v2024.1.x universes, if you are expecting to use any nontrivial amount of [YSQL](/stable/api/ysql/), it is strongly recommended to set [‑‑use_memory_defaults_optimized_for_ysql](/stable/reference/configuration/yb-tserver/#use-memory-defaults-optimized-for-ysql). This changes the memory division defaults to better values for YSQL usage, and optimizes memory for the node size. +For _new_ v2024.1.x universes, if you are expecting to use any nontrivial amount of [YSQL](/v2024.1/api/ysql/), it is strongly recommended to set [‑‑use_memory_defaults_optimized_for_ysql](/v2024.1/reference/configuration/yb-tserver/#use-memory-defaults-optimized-for-ysql). This changes the memory division defaults to better values for YSQL usage, and optimizes memory for the node size. -If you are _upgrading_ a universe, you may want to instead review your memory division settings and adjust them if desired; see [best practices](/stable/develop/best-practices-ysql/#minimize-the-number-of-tablets-you-need). +If you are _upgrading_ a universe, you may want to instead review your memory division settings and adjust them if desired; see [best practices](/v2024.1/develop/best-practices-ysql/#minimize-the-number-of-tablets-you-need). In future releases, the memory division settings will be used to determine how many tablet replicas a YB-TServer can safely support; this information will power new alerts warning you about overloading nodes with too many tablet replicas and allow blocking operations that would create too many tablet replicas. 
@@ -561,17 +561,17 @@ docker pull yugabytedb/yugabyte:2024.1.2.0-b77 ### New features -* [Semi-automatic transactional xCluster setup](/stable/deploy/multi-dc/async-replication/async-replication-transactional/). Provides operationally simpler setup and management of YSQL transactional xCluster replication, as well as simpler steps for performing DDL changes. {{}} +* [Semi-automatic transactional xCluster setup](/v2024.1/deploy/multi-dc/async-replication/async-replication-transactional/). Provides operationally simpler setup and management of YSQL transactional xCluster replication, as well as simpler steps for performing DDL changes. {{}} -* [Kubernetes readiness probe](/stable/deploy/kubernetes/single-zone/oss/helm-chart/#readiness-probes). Added readiness probes for TServer and Master pods in YugabyteDB, supporting custom or default configurations, thereby enhancing stability by ensuring YSQL/YCQL and YB-Master pods are ready before traffic is routed. {{}} +* [Kubernetes readiness probe](/v2024.1/deploy/kubernetes/single-zone/oss/helm-chart/#readiness-probes). Added readiness probes for TServer and Master pods in YugabyteDB, supporting custom or default configurations, thereby enhancing stability by ensuring YSQL/YCQL and YB-Master pods are ready before traffic is routed. {{}} * yugabyted * [Voyager assessment visualisation in yugabyted UI](/preview/yugabyte-voyager/migrate/assess-migration/). Yugabyted UI provides a dashboard to allow the users to effectively plan the migrations based on the complexity and also be able to monitor the progress of each migration - * [Backup/restore support with TLS enabled](/stable/reference/configuration/yugabyted/#backup). In secure mode, yugabyted cluster supports taking full backup/restores. {{}} + * [Backup/restore support with TLS enabled](/v2024.1/reference/configuration/yugabyted/#backup). In secure mode, yugabyted cluster supports taking full backup/restores. {{}} - * [xCluster support](/stable/reference/configuration/yugabyted/#set-up-xcluster-replication-between-clusters). yugabyted enables native support for setting up xCluster between two yugabyted deployed clusters. {{}} + * [xCluster support](/v2024.1/reference/configuration/yugabyted/#set-up-xcluster-replication-between-clusters). yugabyted enables native support for setting up xCluster between two yugabyted deployed clusters. {{}} ### Change log @@ -676,14 +676,14 @@ docker pull yugabytedb/yugabyte:2024.1.1.0-b137 **PostgreSQL Logical Replication Protocol Support** {{}} -We're excited to announce in the 2024.1.1.0 release support for the PostgreSQL Logical Replication Protocol for Change Data Capture (CDC), in addition to the existing native [gRPC Replication protocol](/stable/develop/change-data-capture/using-yugabytedb-grpc-replication/). -This feature allows you to manage CDC streams using [Publications](https://www.postgresql.org/docs/11/sql-createpublication.html) and [Replication Slots](https://www.postgresql.org/docs/11/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS), similar to native PostgreSQL. Additionally, a [new connector](/stable/develop/change-data-capture/using-logical-replication/get-started/#get-started-with-yugabytedb-connector) is introduced that utilizes the logical replication protocol to consume the CDC streams via [Replication slots](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS). 
+We're excited to announce in the 2024.1.1.0 release support for the PostgreSQL Logical Replication Protocol for Change Data Capture (CDC), in addition to the existing native [gRPC Replication protocol](/v2024.1/develop/change-data-capture/using-yugabytedb-grpc-replication/). +This feature allows you to manage CDC streams using [Publications](https://www.postgresql.org/docs/11/sql-createpublication.html) and [Replication Slots](https://www.postgresql.org/docs/11/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS), similar to native PostgreSQL. Additionally, a [new connector](/v2024.1/develop/change-data-capture/using-logical-replication/get-started/#get-started-with-yugabytedb-connector) is introduced that utilizes the logical replication protocol to consume the CDC streams via [Replication slots](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS). -For more information, refer to [logical replication protocol](/stable/develop/change-data-capture/using-logical-replication/). +For more information, refer to [logical replication protocol](/v2024.1/develop/change-data-capture/using-logical-replication/). ### New features -* [Automated SQL/CQL Shell binary](/stable/api/ysqlsh/#installation). Along with full binary, added separate downloadable SQL/CQL Shell binary. +* [Automated SQL/CQL Shell binary](/v2024.1/api/ysqlsh/#installation). Along with full binary, added separate downloadable SQL/CQL Shell binary. * [Voyager assessment visualisation in yugabyted UI](/preview/yugabyte-voyager/migrate/assess-migration/). Yugabyted UI provides a dashboard to allow the users to effectively plan the migrations based on the complexity and also be able to monitor the progress of each migration {{}} @@ -842,52 +842,52 @@ We're pleased to announce the early access of the new [Enhanced Postgres Compati **Rollback after upgrade** -Rolling back to the pre-upgrade version if you're not satisfied with the upgraded version is now {{}}. Refer to the [Rollback phase](/stable/manage/upgrade-deployment/#b-rollback-phase) for more information. +Rolling back to the pre-upgrade version if you're not satisfied with the upgraded version is now {{}}. Refer to the [Rollback phase](/v2024.1/manage/upgrade-deployment/#b-rollback-phase) for more information. ### New features -* [yugabyted](/stable/reference/configuration/yugabyted/) - * Set preferred regions. The preferred region handles all read and write requests from clients. Use the [yugabyted configure data_placement](/stable/reference/configuration/yugabyted/#data-placement) command to specify preferred regions for clusters. +* [yugabyted](/v2024.1/reference/configuration/yugabyted/) + * Set preferred regions. The preferred region handles all read and write requests from clients. Use the [yugabyted configure data_placement](/v2024.1/reference/configuration/yugabyted/#data-placement) command to specify preferred regions for clusters. * Connection management integration. With connection management enabled, the **Nodes** page of yugabyted UI displays the split of physical and logical connections. - * [Docker-based deployments](/stable/reference/configuration/yugabyted/#create-a-multi-region-cluster-in-docker). Improves the yugabyted Docker user experience for RF-3 deployments and docker container/host restarts. - * Simplified [PITR configuration](/stable/reference/configuration/yugabyted/#restore). 
- * Perform all admin operations using a [pass through mechanism](/stable/reference/configuration/yugabyted/#admin-operation) to execute yb-admin commands. + * [Docker-based deployments](/v2024.1/reference/configuration/yugabyted/#create-a-multi-region-cluster-in-docker). Improves the yugabyted Docker user experience for RF-3 deployments and docker container/host restarts. + * Simplified [PITR configuration](/v2024.1/reference/configuration/yugabyted/#restore). + * Perform all admin operations using a [pass through mechanism](/v2024.1/reference/configuration/yugabyted/#admin-operation) to execute yb-admin commands. * DocDB Availability * Speed up local bootstrap. Faster rolling upgrades and restarts by minimizing table bootstrap time. * Hardening Raft. Reduced time window for re-tryable requests by honoring write RPC timeouts. -* [Batched nested loop joins](/stable/architecture/query-layer/join-strategies/#batched-nested-loop-join-bnl). A join execution strategy that is an improvement on Nested Loop joins that sends one request to the inner table per batch of outer table tuples instead of once per individual outer table tuple. +* [Batched nested loop joins](/v2024.1/architecture/query-layer/join-strategies/#batched-nested-loop-join-bnl). A join execution strategy that is an improvement on Nested Loop joins that sends one request to the inner table per batch of outer table tuples instead of once per individual outer table tuple. -* [Tablet splitting on range-sharded tables](/stable/architecture/docdb-sharding/tablet-splitting/#range-sharded-tables). Optimized the tablet split thresholds to speed up data ingestion. +* [Tablet splitting on range-sharded tables](/v2024.1/architecture/docdb-sharding/tablet-splitting/#range-sharded-tables). Optimized the tablet split thresholds to speed up data ingestion. -* [Catalog Caching](/stable/reference/configuration/yb-tserver/#catalog-flags). Reduce master requests during PostgreSQL system catalog refresh by populating YB-TServer catalog cache. {{}} +* [Catalog Caching](/v2024.1/reference/configuration/yb-tserver/#catalog-flags). Reduce master requests during PostgreSQL system catalog refresh by populating YB-TServer catalog cache. {{}} -* [Catalog Caching](/stable/reference/configuration/yb-tserver/#ysql-yb-toast-catcache-threshold). Use TOAST compression to reduce PG catalog cache memory. Compressed catalog tuples when storing in the PG catalog cache to reduce the memory consumption. {{}} +* [Catalog Caching](/v2024.1/reference/configuration/yb-tserver/#ysql-yb-toast-catcache-threshold). Use TOAST compression to reduce PG catalog cache memory. Compressed catalog tuples when storing in the PG catalog cache to reduce the memory consumption. {{}} -* [Index backfill](/stable/explore/ysql-language-features/indexes-constraints/index-backfill/) stability improvements. Ensure timely notification to all nodes and PostgreSQL backends before initiating index backfill to prevent missing entries during index creation. +* [Index backfill](/v2024.1/explore/ysql-language-features/indexes-constraints/index-backfill/) stability improvements. Ensure timely notification to all nodes and PostgreSQL backends before initiating index backfill to prevent missing entries during index creation. * Support for CDC with atomic DDL. In case of DDL being rolled back, CDC will not send records with rolled back schema. -* [Wait-On Conflict Concurrency Control](/stable/architecture/transactions/concurrency-control/#wait-on-conflict). Cross-tablet fairness in resuming "waiters". 
Resume waiters in a consistent order across tablets, when a set of transactions simultaneously wait on more than one intent/lock on various tablets. +* [Wait-On Conflict Concurrency Control](/v2024.1/architecture/transactions/concurrency-control/#wait-on-conflict). Cross-tablet fairness in resuming "waiters". Resume waiters in a consistent order across tablets, when a set of transactions simultaneously wait on more than one intent/lock on various tablets. * YSQL - * [Cost-based optimizer](/stable/reference/configuration/yb-tserver/#yb-enable-base-scans-cost-model). Added support for cost based optimizer for YSQL. {{}} - * [DDL concurrency](/stable/reference/configuration/yb-tserver/#ddl-concurrency-flags). Support for isolating DDLs per database. Specifically, a DDL in one database does not cause catalog cache refreshes or aborts transactions due to breaking change in another database. - * [DDL atomicity](/stable/reference/configuration/yb-tserver/#ddl-atomicity-flags). Ensures that YSQL DDLs are fully atomic between YSQL and DocDB layers, that is in case of any errors, they are fully rolled back, and in case of success they are applied fully. Currently, such inconsistencies are rare but can happen. - * [ALTER TABLE support](/stable/api/ysql/the-sql-language/statements/ddl_alter_table/#add-column-if-not-exists-column-name-data-type-constraint-constraints). Adds support for the following variants of ALTER TABLE ADD COLUMN: + * [Cost-based optimizer](/v2024.1/reference/configuration/yb-tserver/#yb-enable-base-scans-cost-model). Added support for cost based optimizer for YSQL. {{}} + * [DDL concurrency](/v2024.1/reference/configuration/yb-tserver/#ddl-concurrency-flags). Support for isolating DDLs per database. Specifically, a DDL in one database does not cause catalog cache refreshes or aborts transactions due to breaking change in another database. + * [DDL atomicity](/v2024.1/reference/configuration/yb-tserver/#ddl-atomicity-flags). Ensures that YSQL DDLs are fully atomic between YSQL and DocDB layers, that is in case of any errors, they are fully rolled back, and in case of success they are applied fully. Currently, such inconsistencies are rare but can happen. + * [ALTER TABLE support](/v2024.1/api/ysql/the-sql-language/statements/ddl_alter_table/#add-column-if-not-exists-column-name-data-type-constraint-constraints). Adds support for the following variants of ALTER TABLE ADD COLUMN: * with a SERIAL data type * with a volatile DEFAULT * with a PRIMARY KEY - * [Lower latency for large scans with size-based fetching](/stable/reference/configuration/yb-tserver/#ysql-yb-fetch-size-limit). A static size based fetch limit value to control how many rows can be returned in one request from DocDB. {{}} + * [Lower latency for large scans with size-based fetching](/v2024.1/reference/configuration/yb-tserver/#ysql-yb-fetch-size-limit). A static size based fetch limit value to control how many rows can be returned in one request from DocDB. {{}} -* [Tablet limits](/stable/architecture/docdb-sharding/tablet-splitting/#tablet-limits). Depending on the available nodes and resources such as memory and CPU, YugabyteDB can automatically limit the total number of tables that can be created to ensure that the system can be stable and performant. +* [Tablet limits](/v2024.1/architecture/docdb-sharding/tablet-splitting/#tablet-limits). 
Depending on the available nodes and resources such as memory and CPU, YugabyteDB can automatically limit the total number of tables that can be created to ensure that the system can be stable and performant. -* Truncate support with [PITR](/stable/manage/backup-restore/point-in-time-recovery/). The TRUNCATE command is now allowed for databases with PITR enabled. +* Truncate support with [PITR](/v2024.1/manage/backup-restore/point-in-time-recovery/). The TRUNCATE command is now allowed for databases with PITR enabled. * DocDB memory tracking enhancements. Memory tracking in DocDB to account for 90% of memory used. -* [Enhanced Explain Analyze output](/stable/explore/query-1-performance/explain-analyze/). Explain Analyze when used with DIST option will also show the rows read from the storage layer, which can help diagnosing the query performance. +* [Enhanced Explain Analyze output](/v2024.1/explore/query-1-performance/explain-analyze/). Explain Analyze when used with DIST option will also show the rows read from the storage layer, which can help diagnosing the query performance. * Upgrade OpenSSL to 3.0.8 from 1.1.1. OpenSSL 1.1.1 is out of support. This feature upgrades YugabyteDB to FIPS compliant OpenSSL version 3.0.8. diff --git a/docs/content/preview/releases/ybdb-releases/v2024.2.md b/docs/content/preview/releases/ybdb-releases/v2024.2.md index d6726d5d320d..7b9fea256230 100644 --- a/docs/content/preview/releases/ybdb-releases/v2024.2.md +++ b/docs/content/preview/releases/ybdb-releases/v2024.2.md @@ -718,7 +718,7 @@ YugabyteDB uses [memory division flags](/stable/reference/configuration/yb-maste Also turned on by default is `--enforce_tablet_replica_limits`, which enforces tablet replica limits based on the memory allocated to per-tablet overhead across the universe. When turned on, any CREATE TABLE request that would create too many tablets returns an error, and tablet splitting is also blocked if the result would be too many tablets. -In addition, YugabyteDB will alert you if your system currently has too many tablets. If you are *upgrading* a universe, you may want to review your memory division settings and adjust them if desired; see [Best practices](/stable/develop/best-practices-ysql/). +In addition, YugabyteDB will alert you if your system currently has too many tablets. If you are *upgrading* a universe, you may want to review your memory division settings and adjust them if desired; see [Best practices](/stable/develop/best-practices-develop/data-modeling-perf/#minimize-the-number-of-tablets-you-need). ### New features diff --git a/docs/content/stable/manage/data-migration/migrate-from-postgres.md b/docs/content/stable/manage/data-migration/migrate-from-postgres.md index 5c91284ce9d2..2e400acf4693 100644 --- a/docs/content/stable/manage/data-migration/migrate-from-postgres.md +++ b/docs/content/stable/manage/data-migration/migrate-from-postgres.md @@ -247,7 +247,7 @@ For more details, see [Live migration with fall-back](/preview/yugabyte-voyager/ When porting an existing PostgreSQL application to YugabyteDB you can follow a set of best practices to get the best out of your new deployment. -{{}} +{{}} For a full list of tips and tricks for high performance and availability, see [Best practices](../../../develop/best-practices-develop/). 
{{}} From 86db7d240c98eca40daf055fe92e434980e13b51 Mon Sep 17 00:00:00 2001 From: Dwight Hodge Date: Wed, 21 May 2025 14:53:05 -0400 Subject: [PATCH 146/146] links --- .../pg-extensions/extension-pgvector.md | 2 +- docs/content/preview/tutorials/AI/ai-langchain-openai.md | 2 +- docs/content/preview/tutorials/AI/azure-openai.md | 2 +- docs/content/preview/tutorials/AI/google-vertex-ai.md | 2 +- .../preview/yugabyte-voyager/known-issues/postgresql.md | 6 +++--- .../pg-extensions/extension-pgvector.md | 2 +- 6 files changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/content/preview/explore/ysql-language-features/pg-extensions/extension-pgvector.md b/docs/content/preview/explore/ysql-language-features/pg-extensions/extension-pgvector.md index 91cbb9a3a8c2..80bcac87249d 100644 --- a/docs/content/preview/explore/ysql-language-features/pg-extensions/extension-pgvector.md +++ b/docs/content/preview/explore/ysql-language-features/pg-extensions/extension-pgvector.md @@ -215,6 +215,6 @@ A higher `ef_construction` value provides faster recall at the cost of index bui ## Learn more - Tutorial: [Build and Learn](/preview/tutorials/build-and-learn/) -- Tutorial: [Build scalable generative AI applications with Azure OpenAI and YugabyteDB](/preview/tutorials/azure/azure-openai/) +- Tutorials: [Build scalable generative AI applications with YugabyteDB](/preview/tutorials/ai/) - [PostgreSQL pgvector: Getting Started and Scaling](https://www.yugabyte.com/blog/postgresql-pgvector-getting-started/) - [Multimodal Search with PostgreSQL pgvector](https://www.yugabyte.com/blog/postgresql-pgvector-multimodal-search/) diff --git a/docs/content/preview/tutorials/AI/ai-langchain-openai.md b/docs/content/preview/tutorials/AI/ai-langchain-openai.md index 106b91000162..9dc8034d49e2 100644 --- a/docs/content/preview/tutorials/AI/ai-langchain-openai.md +++ b/docs/content/preview/tutorials/AI/ai-langchain-openai.md @@ -287,4 +287,4 @@ LangChain provides a powerful toolkit to application developers seeking LLM inte For more information about LangChain, see the [LangChain documentation](https://python.langchain.com/docs/get_started/introduction). -If you would like to learn more on integrating OpenAI with YugabyteDB, check out the [Azure OpenAI](../../azure/azure-openai/) tutorial. +If you would like to learn more on integrating OpenAI with YugabyteDB, check out the [Azure OpenAI](../azure-openai/) tutorial. diff --git a/docs/content/preview/tutorials/AI/azure-openai.md b/docs/content/preview/tutorials/AI/azure-openai.md index 6ede99bd673e..4949e5a2b44e 100644 --- a/docs/content/preview/tutorials/AI/azure-openai.md +++ b/docs/content/preview/tutorials/AI/azure-openai.md @@ -306,4 +306,4 @@ With the help of the PostgreSQL pgvector extension, YugabyteDB enhances the scal To learn more about additional updates to YugabyteDB with release 2.19, check out [Dream Big, Go Bigger: Turbocharging PostgreSQL](https://www.yugabyte.com/blog/postgresql-turbocharging/). -To learn how to run this application using Google Cloud, see [Build scalable generative AI applications with Google Vertex AI and YugabyteDB](../../google/google-vertex-ai/). +To learn how to run this application using Google Cloud, see [Build scalable generative AI applications with Google Vertex AI and YugabyteDB](../google-vertex-ai/). 
diff --git a/docs/content/preview/tutorials/AI/google-vertex-ai.md b/docs/content/preview/tutorials/AI/google-vertex-ai.md index a8a1ac09a904..de07a7340bac 100644 --- a/docs/content/preview/tutorials/AI/google-vertex-ai.md +++ b/docs/content/preview/tutorials/AI/google-vertex-ai.md @@ -203,4 +203,4 @@ The Google Vertex AI service simplifies the process of designing, building, and With the help of the PostgreSQL pgvector extension, YugabyteDB enhances the scalability of these applications by distributing data and embeddings across a cluster of nodes, facilitating similarity searches on a large scale. -To learn how to run this application using Azure, see [Build scalable generative AI applications with Azure OpenAI and YugabyteDB](../../azure/azure-openai/). +To learn how to run this application using Azure, see [Build scalable generative AI applications with Azure OpenAI and YugabyteDB](../azure-openai/). diff --git a/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md b/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md index bdc2387ad053..9e27e3feb7f3 100644 --- a/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md +++ b/docs/content/preview/yugabyte-voyager/known-issues/postgresql.md @@ -1612,7 +1612,7 @@ CREATE INDEX idx_orders_order_id on orders(order_id); **Description**: -In YugabyteDB, you can specify three kinds of columns when using [CREATE INDEX](../../../api/ysql/the-sql-language/statements/ddl_create_index): sharding, clustering, and covering. (For more details, refer to [Secondary indexes](../../../explore/ysql-language-features/indexes-constraints/secondary-indexes-ysql/).) The default sharding strategy is HASH unless [Enhanced PostgreSQL Compatibility mode](./../../develop/postgresql-compatibility/) is enabled, in which case, RANGE is the default sharding strategy. +In YugabyteDB, you can specify three kinds of columns when using [CREATE INDEX](../../../api/ysql/the-sql-language/statements/ddl_create_index): sharding, clustering, and covering. (For more details, refer to [Secondary indexes](../../../explore/ysql-language-features/indexes-constraints/secondary-indexes-ysql/).) The default sharding strategy is HASH unless [Enhanced PostgreSQL Compatibility mode](../../../develop/postgresql-compatibility/) is enabled, in which case, RANGE is the default sharding strategy. Design the index to evenly distribute data across all nodes and optimize performance based on query patterns. Avoid using low-cardinality columns, such as boolean values, ENUMs, or days of the week, as sharding keys, as they result in data being distributed across only a few tablets. @@ -1683,7 +1683,7 @@ CREATE INDEX idx_order_status_order_id on orders (order_id, status); --reorderin **Description**: -In YugabyteDB, you can specify three kinds of columns when using [CREATE INDEX](../../../api/ysql/the-sql-language/statements/ddl_create_index): sharding, clustering, and covering. (For more details, refer to [Secondary indexes](../../../explore/ysql-language-features/indexes-constraints/secondary-indexes-ysql/).) The default sharding strategy is HASH unless [Enhanced PostgreSQL Compatibility mode](./../../develop/postgresql-compatibility/) is enabled, in which case, RANGE is the default sharding strategy. +In YugabyteDB, you can specify three kinds of columns when using [CREATE INDEX](../../../api/ysql/the-sql-language/statements/ddl_create_index): sharding, clustering, and covering. 
(For more details, refer to [Secondary indexes](../../../explore/ysql-language-features/indexes-constraints/secondary-indexes-ysql/).) The default sharding strategy is HASH unless [Enhanced PostgreSQL Compatibility mode](../../../develop/postgresql-compatibility/) is enabled, in which case, RANGE is the default sharding strategy. Design the index to evenly distribute data across all nodes and optimize performance based on query patterns. @@ -1738,7 +1738,7 @@ CREATE INDEX idx_users_middle_name_user_id on users (middle_name ASC, user_id); **Description**: -In YugabyteDB, you can specify three kinds of columns when using [CREATE INDEX](../../../api/ysql/the-sql-language/statements/ddl_create_index): sharding, clustering, and covering. (For more details, refer to [Secondary indexes](../../../explore/ysql-language-features/indexes-constraints/secondary-indexes-ysql/).) The default sharding strategy is HASH unless [Enhanced PostgreSQL Compatibility mode](./../../develop/postgresql-compatibility/) is enabled, in which case, RANGE is the default sharding strategy. +In YugabyteDB, you can specify three kinds of columns when using [CREATE INDEX](../../../api/ysql/the-sql-language/statements/ddl_create_index): sharding, clustering, and covering. (For more details, refer to [Secondary indexes](../../../explore/ysql-language-features/indexes-constraints/secondary-indexes-ysql/).) The default sharding strategy is HASH unless [Enhanced PostgreSQL Compatibility mode](../../../develop/postgresql-compatibility/) is enabled, in which case, RANGE is the default sharding strategy. Design the index to evenly distribute data across all nodes and optimize performance based on query patterns. diff --git a/docs/content/stable/explore/ysql-language-features/pg-extensions/extension-pgvector.md b/docs/content/stable/explore/ysql-language-features/pg-extensions/extension-pgvector.md index cd040721424f..2646c9519dff 100644 --- a/docs/content/stable/explore/ysql-language-features/pg-extensions/extension-pgvector.md +++ b/docs/content/stable/explore/ysql-language-features/pg-extensions/extension-pgvector.md @@ -155,6 +155,6 @@ SELECT category_id, AVG(embedding) FROM items GROUP BY category_id; ## Read more - Tutorial: [Build and Learn](/preview/tutorials/build-and-learn/) -- Tutorial: [Build scalable generative AI applications with Azure OpenAI and YugabyteDB](/preview/tutorials/azure/azure-openai/) +- Tutorials: [Build scalable generative AI applications with YugabyteDB](/preview/tutorials/ai/) - [PostgreSQL pgvector: Getting Started and Scaling](https://www.yugabyte.com/blog/postgresql-pgvector-getting-started/) - [Multimodal Search with PostgreSQL pgvector](https://www.yugabyte.com/blog/postgresql-pgvector-multimodal-search/)
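As a quick orientation for the pgvector pages linked above, a minimal end-to-end sketch might look like the following; the table name, vector dimension, and sample values are illustrative, and the distance operators and index types available depend on the pgvector version bundled with your release.

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Hypothetical table storing 3-dimensional embeddings.
CREATE TABLE items (
    id        BIGSERIAL PRIMARY KEY,
    embedding VECTOR(3)
);

INSERT INTO items (embedding)
VALUES ('[1, 0, 0]'), ('[0, 1, 0]'), ('[0.9, 0.1, 0]');

-- Nearest neighbors by Euclidean (L2) distance.
SELECT id, embedding
FROM items
ORDER BY embedding <-> '[1, 0, 0]'
LIMIT 2;
```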