diff --git a/deploy_ai_search_indexes/README.md b/deploy_ai_search_indexes/README.md
index 8676061..e66fac1 100644
--- a/deploy_ai_search_indexes/README.md
+++ b/deploy_ai_search_indexes/README.md
@@ -14,7 +14,7 @@ The associated scripts in this portion of the repository contains pre-built scri
 **Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**
 
 3. Adjust `image_processing.py` with any changes to the index / indexer. The `get_skills()` method implements the skills pipeline. Make any adjustments here in the skills needed to enrich the data source.
-4. Run `deploy.py` with the following args:
+4. Run `uv run deploy.py` with the following args:
    - `index_type image_processing`. This selects the `ImageProcessingAISearch` sub class.
    - `enable_page_wise_chunking True`. This determines whether page wise chunking is applied in ADI, or whether the inbuilt skill is used for TextSplit. This suits documents that are inheritely page-wise e.g. pptx files.
    - `rebuild`. Whether to delete and rebuild the index.
@@ -34,7 +34,7 @@ The associated scripts in this portion of the repository contains pre-built scri
 **Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**
 
 3. Adjust `text_2_sql_schema_store.py` with any changes to the index / indexer. The `get_skills()` method implements the skills pipeline. Make any adjustments here in the skills needed to enrich the data source.
-4. Run `deploy.py` with the following args:
+4. Run `uv run deploy.py` with the following args:
    - `index_type text_2_sql_schema_store`. This selects the `Text2SQLSchemaStoreAISearch` sub class.
    - `rebuild`. Whether to delete and rebuild the index.
 
@@ -53,7 +53,7 @@ The associated scripts in this portion of the repository contains pre-built scri
 **Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**
 
 3. Adjust `text_2_sql_column_value_store.py` with any changes to the index / indexer.
-4. Run `deploy.py` with the following args:
+4. Run `uv run deploy.py` with the following args:
    - `index_type text_2_sql_column_value_store`. This selects the `Text2SQLColumnValueStoreAISearch` sub class.
    - `rebuild`. Whether to delete and rebuild the index.
 
@@ -71,7 +71,7 @@ The associated scripts in this portion of the repository contains pre-built scri
 **Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**
 
 3. Adjust `text_2_sql_query_cache.py` with any changes to the index. **There is an optional provided indexer or skillset for this cache. You may instead want the application code will write directly to it. See the details in the Text2SQL README for different cache strategies.**
-4. Run `deploy.py` with the following args:
+4. Run `uv run deploy.py` with the following args:
    - `index_type text_2_sql_query_cache`. This selects the `Text2SQLQueryCacheAISearch` sub class.
    - `rebuild`. Whether to delete and rebuild the index.
 
diff --git a/text_2_sql/data_dictionary/README.md b/text_2_sql/data_dictionary/README.md
index 5f6a287..572c485 100644
--- a/text_2_sql/data_dictionary/README.md
+++ b/text_2_sql/data_dictionary/README.md
@@ -232,7 +232,7 @@ To generate a data dictionary, perform the following steps:
 
 2. Package and install the `text_2_sql_core` library. See [build](https://docs.astral.sh/uv/concepts/projects/build/) if you want to build as a wheel and install on an agent. Or you can run from within a `uv` environment and skip packaging.
    - Install the optional dependencies if you need a database connector other than TSQL. `uv sync --extra `
- 
+
 3. Run `uv run data_dictionary `
    - You can pass the following command line arguements:
    - `-- output_directory` or `-o`: Optional directory that the script will write the output files to.
@@ -242,7 +242,7 @@ To generate a data dictionary, perform the following steps:
    - `entities`: A list of entities to extract. Defaults to None.
    - `excluded_entities`: A list of entities to exclude.
    - `excluded_schemas`: A list of schemas to exclude.
- 
+
 4. Upload these generated data dictionaries files to the relevant containers in your storage account. Wait for them to be automatically indexed with the included skillsets.
 
 > [!IMPORTANT]
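The net effect of the diff is that every documented invocation now goes through `uv run` instead of calling the scripts directly. A minimal sketch of one resulting command, assuming the argument names listed in the README map to `--`-prefixed CLI flags (the exact flag syntax is not shown in the diff and is an assumption here):

```shell
# Run from deploy_ai_search_indexes/src/deploy_ai_search_indexes.
# Flag names mirror the args listed in the README; exact CLI syntax may differ.
uv run deploy.py --index_type image_processing --enable_page_wise_chunking True --rebuild
```

Running through `uv run` means the script executes inside the project's `uv`-managed environment, so its declared dependencies are resolved without a manually activated virtualenv.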