diff --git a/docs/README.md b/docs/README.md
index a4a73e3fd..3de1b6bda 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -11,7 +11,7 @@ The following folders exist:
- `zh_CN` - Chinese translated content.
- `all_langs` - Content intended for all languages (usually in English) should go here.
- `images` - All images should be located in the images subfolder structure.
-- `_data/en`, `_data/ja`, ... - Site metadata (e.g. landing page content), organized by language.
+- `_data/en`, `_data/ja`, ... - Site metadata (e.g. landing page content), organized by language.
## Content Authoring
diff --git a/docs/en/event-daemon/event-daemon-api.md b/docs/en/event-daemon/event-daemon-api.md
index f644b4735..73adf619f 100644
--- a/docs/en/event-daemon/event-daemon-api.md
+++ b/docs/en/event-daemon/event-daemon-api.md
@@ -8,16 +8,17 @@ lang: en
# API
+
## registerCallbacks
A global level function in all plugins that is used to tell the framework about event processing entry points in the plugin.
**registerCallbacks(reg)**
-* reg: The [`Registrar`](#Registrar) you will interact with to tell the framework which functions to call.
-
+- reg: The [`Registrar`](#Registrar) you will interact with to tell the framework which functions to call.
+
## Registrar
The Registrar is the object used to tell the framework how to interact with a plugin. It is passed to the [`registerCallbacks`](#registerCallbacks) function.
@@ -36,9 +37,7 @@ See [`getLogger`](#getLogger).
Get the python Logger object used to log messages from within the plugin.
-
-
-__setEmails(*emails)__
+**setEmails(\*emails)**
Set the emails that should receive error and critical notices when something bad happens in this plugin or any of its callbacks.
@@ -71,12 +70,12 @@ reg.setEmails('user1@domain.com', 'user2@domain.com')
Register a callback into the engine for this plugin.
-* `sgScriptName`: The name of the script taken from the {% include product %} scripts page.
-* `sgScriptKey`: The application key for the script taken from a {% include product %} script page.
-* `callback`: A function or an object with a `__call__` method. See [`exampleCallback`](#exampleCallback).
-* `matchEvents`: A filter of events you want to have passed to your callback.
-* `args`: Any object you want the framework to pass back into your callback.
-* `stopOnError`: Boolean, should an exception in this callback halt processing of events by all callbacks in this plugin. Default is `True`.
+- `sgScriptName`: The name of the script taken from the {% include product %} scripts page.
+- `sgScriptKey`: The application key for the script taken from a {% include product %} script page.
+- `callback`: A function or an object with a `__call__` method. See [`exampleCallback`](#exampleCallback).
+- `matchEvents`: A filter of events you want to have passed to your callback.
+- `args`: Any object you want the framework to pass back into your callback.
+- `stopOnError`: Boolean, should an exception in this callback halt processing of events by all callbacks in this plugin. Default is `True`.
The `sgScriptName` is used to identify the plugin to {% include product %}. Any name can be shared across any number of callbacks or be unique for a single callback.
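Taken together, a minimal registration might look like the following sketch. The script name, key, and matched field are placeholders, not values from your site:

```python
# Sketch of a plugin registration. SCRIPT_NAME and SCRIPT_KEY are placeholders;
# take the real values from your scripts page.
SCRIPT_NAME = "myPlugin"
SCRIPT_KEY = "0123456789abcdef0123456789abcdef01234567"

def registerCallbacks(reg):
    # Only Shot status changes reach the callback; everything else is filtered out.
    matchEvents = {"Shotgun_Shot_Change": ["sg_status_list"]}
    reg.registerCallback(
        SCRIPT_NAME,
        SCRIPT_KEY,
        shotStatusChanged,
        matchEvents,
        None,                 # args: nothing extra passed back to the callback
        stopOnError=False,    # an exception here won't halt the plugin's other callbacks
    )

def shotStatusChanged(sg, logger, event, args):
    logger.info("Shot %s status changed to %s",
                event["entity"]["id"], event["meta"]["new_value"])
```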
@@ -125,7 +124,7 @@ matchEvents = {
}
```
-When matching against non field specific event types such as "_New" or "_Retirement", you don't provide a list, instead you pass `None` as the value.
+When matching against non field specific event types such as "\_New" or "\_Retirement", you don't provide a list, instead you pass `None` as the value.
```python
matchEvents = {
@@ -142,6 +141,7 @@ Another use of the `args` argument could be to pass in a common mutable, a `dict
The `stopOnError` argument tells the system if an exception in this callback can cause event processing to stop for all callbacks in the plugin. By default this is `True` but it can be switched to `False`. You will still get mail notifications of any errors, but processing of events will not stop. Being a per-callback setting, you can have some critical callbacks for which this is `True` and others for which this is `False`.
+
## Callback
Any plugin entry point registered by [`Registrar.registerCallback`](#registerCallback) is generally a global level function that looks like this:
@@ -149,9 +149,9 @@ Any plugin entry point registered by [`Registrar.registerCallback`](#registerCal
**exampleCallback(sg, logger, event, args)**
-* `sg`: A {% include product %} connection instance.
-* `logger`: A Python logging.Logger object preconfigured for you.
-* `event`: A {% include product %} event to process.
-* `args`: The args argument specified at callback registration time.
+- `sg`: A {% include product %} connection instance.
+- `logger`: A Python logging.Logger object preconfigured for you.
+- `event`: A {% include product %} event to process.
+- `args`: The args argument specified at callback registration time.
{% include info title="Note" content="Implementing a callback as a `__call__` method on an object instance is possible but left as an exercise for the user." %}
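For the curious, one hedged sketch of that exercise: a callable instance lets a callback keep state between events (the class and attribute names are illustrative):

```python
class StatusLogger:
    """A callback implemented as an object; the instance persists between events."""

    def __init__(self):
        self.handled = 0  # state kept across events

    def __call__(self, sg, logger, event, args):
        self.handled += 1
        logger.info("Event %s (%d handled so far)", event.get("id"), self.handled)

# At registration time you would pass an *instance*, not the class:
# reg.registerCallback(name, key, StatusLogger(), matchEvents, None)
```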
diff --git a/docs/en/event-daemon/event-daemon-configuration.md b/docs/en/event-daemon/event-daemon-configuration.md
index 55b436262..25afb1d10 100644
--- a/docs/en/event-daemon/event-daemon-configuration.md
+++ b/docs/en/event-daemon/event-daemon-configuration.md
@@ -14,11 +14,12 @@ Most of the configuration for {% include product %}Events is controlled by the `
{% include info title="Note" content="**For Windows:** Windows users will need to change all the paths in the configuration file for Windows equivalents. We suggest keeping all paths, including logging, under one single location for the sake of simplicity. This documentation tends to refer to `C:\shotgun\shotgunEvents` when mentioning Windows paths." %}
+
## Edit shotgunEventDaemon.conf
-Once you have installed {% include product %}Events, the next step is to open the `shotgunEventDaemon.conf` file in a text editor and modify the settings to match your studio's needs. The defaults will be fine for most studios, however, there are some settings that have no defaults that will need to be provided by you before you can run the daemon.
+Once you have installed {% include product %}Events, the next step is to open the `shotgunEventDaemon.conf` file in a text editor and modify the settings to match your studio's needs. The defaults will be fine for most studios; however, there are some settings with no defaults that you must provide before you can run the daemon.
-The items you *must* provide are:
+The items you _must_ provide are:
- your {% include product %} server URL
- the Script name and Application key for connecting to {% include product %}
@@ -29,6 +30,7 @@ Optionally, you can also specify an SMTP server and email-specific settings in o
There is also a section for an optional timing log which can help with troubleshooting if you ever encounter performance issues with your daemon. Enabling timing logging will populate its own separate log file with the timing information.
+
### {% include product %} Settings
Underneath the `[{% include product %}]` section, replace the default tokens with the correct values for `server`, `name`, and `key`. These should be the same values you'd provide to a standard API script connecting to {% include product %}.
@@ -42,9 +44,10 @@ key: e37d855e4824216573472846e0cb3e49c7f6f7b1
```
+
### Plugin Settings
-You will need to tell the {% include product %}EventDaemon where to look for plugins to run. Under the `[plugins]` section replace the default token with the correct value for `paths`.
+You will need to tell the {% include product %}EventDaemon where to look for plugins to run. Under the `[plugins]` section replace the default token with the correct value for `paths`.
You can specify multiple locations (which may be useful if you have multiple departments or repositories using the daemon). The value here must be a full path to a readable existing directory.
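As a sketch, multiple plugin locations could look like the following (the second path is illustrative, and the comma-separated form is an assumption here; check how your daemon version parses the value before relying on it):

```
paths: /usr/local/shotgun/shotgunEvents/plugins, /studio/pipeline/sgPlugins
```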
@@ -57,6 +60,7 @@ paths: /usr/local/shotgun/{% include product %}Events/plugins
When you're first getting started, a good plugin to test with is the `logArgs.py` plugin located in the `/usr/local/shotgun/{% include product %}Events/src/examplePlugins` directory. Copy that into the plugins folder you specified and we'll use that for testing things.
+
### Location of shotgunEventDaemon.conf
By default, the daemon will look for the shotgunEventDaemon.conf file in the same directory that {% include product %}EventDaemon.py is in, and in the `/etc` directory. If you need to put the conf file in another directory, it's recommended you create a symlink to it from the current directory.
@@ -66,6 +70,7 @@ By default, the daemon will look for the shotgunEventDaemon.conf file in the sam
{% include info title="Note" content="**For Windows** The `/etc` doesn't exist on Windows so the configuration file should be put in the same directory as the Python files." %}
+
## Testing the Daemon
Daemons can be difficult to test since they run in the background. There isn't always an obvious way to see what they're doing. Lucky for us, the {% include product %}EventDaemon has an option to run it as a foreground process. Now that we have done the minimum required setup, let's go ahead and test the daemon and see how things go.
@@ -81,11 +86,11 @@ INFO:engine:Last event id (248429) from the {% include product %} database.
You should see the lines above when you start the script (some of the details may differ, obviously). If you get any errors, the script will terminate; since we opted to run it in the foreground, we'll see that happen. Some common errors are listed below if you get stuck.
-The `logArgs.py` plugin simply takes the event that occurred in {% include product %} and passes it to the logger. Not very exciting but it's a simple way to ensure that the script is running and the plugin is working. If you're at a busy studio, you may have already noticed a rapid stream of messages flowing by. If not, login to your {% include product %} server in your web browser and change some values or create something. You should see log statements printed out to your terminal window corresponding to the type of event you generated with your changes.
+The `logArgs.py` plugin simply takes the event that occurred in {% include product %} and passes it to the logger. Not very exciting, but it's a simple way to ensure that the script is running and the plugin is working. If you're at a busy studio, you may have already noticed a rapid stream of messages flowing by. If not, log in to your {% include product %} server in your web browser and change some values or create something. You should see log statements printed out to your terminal window corresponding to the type of event you generated with your changes.
{% include info title="Note" content="There are variables in the logArgs.py file that need to be filled in with appropriate values. '$DEMO_SCRIPT_NAMES$' and '$DEMO_API_KEY$' must be edited to contain the same values that were used in the shotgunEventDaemon.conf file in order for the logging to function correctly." %}
-If you don't see anything logged to the log file, check your log-related settings in {% include product %}EventDaemon.conf to ensure that the ``logging`` value is set to log INFO level messages
+If you don't see anything logged to the log file, check your log-related settings in {% include product %}EventDaemon.conf to ensure that the `logging` value is set to log INFO level messages:
```
logging: 20
@@ -100,6 +105,7 @@ reg.logger.setLevel(logging.INFO)
Assuming all looks good, to stop the {% include product %}EventDaemon process, simply press `Ctrl-C` in the terminal and you should see the script terminate.
+
## Running the daemon
Assuming all went well with your testing, we can now run the daemon as intended, in the background.
@@ -116,7 +122,7 @@ kp 4029 0.0 0.0 2435492 192 s001 R+ 9:37AM 0:00.00 gre
root 4020 0.0 0.1 2443824 4876 ?? S 9:36AM 0:00.02 /usr/bin/python ./{% include product %}EventDaemon.py start
```
-We can see by the second line returned that the daemon is running. The first line is matching the command we just ran. So we know it's running, but to ensure that it's *working* and the plugins are doing what they're supposed to, we can look at the log files and see if there's any output there.
+We can see by the second line returned that the daemon is running. The first line matches the command we just ran. So we know it's running, but to ensure that it's _working_ and the plugins are doing what they're supposed to, we can look at the log files and see if there's any output there.
```
$ sudo tail -f /var/log/shotgunEventDaemon/shotgunEventDaemon
@@ -134,14 +140,16 @@ Go back to your web browser and make some changes to an entity. Then head back t
2011-09-09 09:45:31,228 - plugin.logArgs.logArgs - INFO - {'attribute_name': 'sg_status_list', 'event_type': 'Shotgun_Shot_Change', 'entity': {'type': 'Shot', 'name': 'bunny_010_0010', 'id': 860}, 'project': {'type': 'Project', 'name': 'Big Buck Bunny', 'id': 65}, 'meta': {'entity_id': 860, 'attribute_name': 'sg_status_list', 'entity_type': 'Shot', 'old_value': 'omt', 'new_value': 'ip', 'type': 'attribute_change'}, 'user': {'type': 'HumanUser', 'name': 'Kevin Porterfield', 'id': 35}, 'session_uuid': '450e4da2-dafa-11e0-9ba7-0023dffffeab', 'type': 'EventLogEntry', 'id': 276560}
```
-The exact details of your output will differ, but what you should see is that the plugin has done what it is supposed to do, that is, log the event to the logfile. Again, if you don't see anything logged to the log file, check your log-related settings in {% include product %}EventDaemon.conf to ensure that the ``logging``value is set to log INFO level messages and your logArgs plugin is also configured to show INFO level messages.
+The exact details of your output will differ, but what you should see is that the plugin has done what it is supposed to do, that is, log the event to the logfile. Again, if you don't see anything logged to the log file, check your log-related settings in {% include product %}EventDaemon.conf to ensure that the `logging` value is set to log INFO level messages and your logArgs plugin is also configured to show INFO level messages.
+
### A Note About Logging
It should be noted that log rotation is a feature of the {% include product %} daemon. Logs are rotated at midnight every night and ten daily files are kept per plugin.
+
## Common Errors
The following are a few of the common errors that you can run into and how to resolve them. If you get really stuck, please visit our [support site](https://knowledge.autodesk.com/contact-support) for help.
@@ -160,12 +168,14 @@ You may need to run the daemon with `sudo` or as a user that has permissions to
The {% include product %} API is not installed. Make sure it is either located in the current directory or it is in a directory in your `PYTHONPATH`.
-If you have to run as sudo and you think you have the `PYTHONPATH` setup correctly, remember that sudo resets the environment variables. You can edit the sudoers file to preserve the `PYTHONPATH` or run sudo -e(?)
+If you have to run as sudo and you think you have the `PYTHONPATH` set up correctly, remember that sudo resets the environment variables. You can edit the sudoers file to preserve the `PYTHONPATH`, or run `sudo -E`, which preserves the invoking user's environment.
+
## List of Configuration File Settings
+
### Daemon Settings
The following are general daemon operational settings.
@@ -182,9 +192,9 @@ pidFile: /var/log/shotgunEventDaemon.pid
**eventIdFile**
-The eventIdFile points to the location where the daemon will store the id of the last processed {% include product %} event. This will allow the daemon to pick up where it left off when it was last shutdown, thus it won't miss any events. If you want to ignore any events since the last daemon shutdown, remove this file before starting up the daemon and the daemon will process only new events created after startup.
+The eventIdFile points to the location where the daemon will store the id of the last processed {% include product %} event. This allows the daemon to pick up where it left off when it was last shut down, so it won't miss any events. If you want to ignore any events since the last daemon shutdown, remove this file before starting up the daemon and the daemon will process only new events created after startup.
-This file keeps track of the last event id for *each* plugin and stores this information in pickle format.
+This file keeps track of the last event id for _each_ plugin and stores this information in pickle format.
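If you ever need to inspect that file, here is a hedged sketch; the exact structure of the pickled data is assumed from the description above, so verify before depending on it:

```python
import pickle

def read_last_event_ids(path):
    """Return whatever the daemon pickled into its eventIdFile --
    described above as a per-plugin map of last-processed event ids."""
    with open(path, "rb") as fh:
        return pickle.load(fh)

# e.g. read_last_event_ids("/var/log/shotgunEventDaemon.id")
```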
```
eventIdFile: /var/log/shotgunEventDaemon.id
@@ -205,7 +215,7 @@ logMode: 1
**logPath**
-The path where to put log files (both for the main engine and plugin log files). The name of the main log file is controlled by the ``logFile``setting below.
+The path where log files are written (both the main engine log and plugin log files). The name of the main log file is controlled by the `logFile` setting below.
```
logPath: /var/log/shotgunEventDaemon
@@ -215,7 +225,7 @@ logPath: /var/log/shotgunEventDaemon
**logFile**
-The name of the main daemon log file. Logging is configured to store up to 10 log files that rotate every night at midnight.
+The name of the main daemon log file. Logging is configured to store up to 10 log files that rotate every night at midnight.
```
logFile: shotgunEventDaemon
@@ -224,6 +234,7 @@ logFile: shotgunEventDaemon
**logging**
The threshold level for log messages sent to the log files. This value is the default for the main dispatching engine and can be overridden on a per plugin basis. This value is simply passed to the Python logging module. The most common values are:
+
- **10:** Debug
- **20:** Info
- **30:** Warnings
@@ -259,7 +270,7 @@ conn_retry_sleep = 60
**max_conn_retries**
-Number of times to retry the connection before logging an error level message(which potentially sends an email if email notification is configured below).
+Number of times to retry the connection before logging an error level message (which potentially sends an email if email notification is configured below).
```
max_conn_retries = 5
@@ -274,6 +285,7 @@ fetch_interval = 5
```
+
### {% include product %} Settings
The following are settings related to your {% include product %} instance.
@@ -295,7 +307,7 @@ The {% include product %} Script name the {% include product %}EventDaemon shoul
```
name: %(SG_ED_SCRIPT_NAME)s
```
-
+
{% include info title="Note" content="There is no default value here. Set the `SG_ED_SCRIPT_NAME` environment variable to the Script name for your ShotGrid server (ie. `shotgunEventDaemon`)" %}
**key**
@@ -305,7 +317,7 @@ The {% include product %} Application Key for the Script name specified above.
```
key: %(SG_ED_API_KEY)s
```
-
+
{% include info title="Note" content="There is no default value here. Set the `SG_ED_API_KEY` environment variable to the Application Key for your Script name above (ie:`0123456789abcdef0123456789abcdef01234567`)" %}
**use_session_uuid**
@@ -322,6 +334,7 @@ use_session_uuid: True
{% include info title="Note" content="The ShotGrid UI will *only* show updates live for the browser session that spawned the original event. Other browser windows with the same page open will not see updates live." %}
+
### Plugin Settings
**paths**
@@ -335,6 +348,7 @@ paths: /usr/local/shotgun/plugins
{% include info title="Note" content="There is no default value here. You must set the value to the location(s) of your plugin files (ie:`/usr/local/shotgun/shotgunEvents/plugins` or `C:\shotgun\shotgunEvents\plugins` on Windows)" %}
+
### Email Settings
These are used for error reporting because we figured you wouldn't constantly be tailing the log and would rather have an active notification system.
@@ -350,7 +364,7 @@ The server that should be used for SMTP connections. The username and password v
```
server: smtp.yourdomain.com
```
-
+
{% include info title="Note" content="There is no default value here. You must replace the smtp.yourdomain.com token with the address of your SMTP server (ie. `smtp.mystudio.com`)." %}
**username**
diff --git a/docs/en/event-daemon/event-daemon-example-plugins.md b/docs/en/event-daemon/event-daemon-example-plugins.md
index 2a7c0b895..a47e9c14a 100644
--- a/docs/en/event-daemon/event-daemon-example-plugins.md
+++ b/docs/en/event-daemon/event-daemon-example-plugins.md
@@ -7,13 +7,16 @@ lang: en
# Example Plugins
-There is a [folder of example plugins](https://github.com/shotgunsoftware/shotgunEvents/tree/master/src/examplePlugins) in the source code.
+There is a [folder of example plugins](https://github.com/shotgunsoftware/shotgunEvents/tree/master/src/examplePlugins) in the source code.
This page includes a few more simple examples for anyone looking to get started. You can copy/paste this code and it should run. (Note: you'll have to update the `script_name` and `script_key` values to something specific for your installation.)
First, here's a template upon which all SG event code can be written:
+
## 1. Code Template
+
### Copy / Paste this to get started on new plugins
+
```python
"""
Necessary Documentation of the code
@@ -51,11 +54,15 @@ def registerCallbacks(reg):
# }
def entry_function_call(sg, logger, event, args):
# Now do stuff
- pass
+ pass
```
+
## 2. Note Subject Renaming
+
### Working with `New` Entity Events
-This is a great one to start with because it's simple, but it also deals with a rather tricky aspect of catching `Shotgun_Entity_New` events...
+
+This is a great one to start with because it's simple, but it also deals with a rather tricky aspect of catching `Shotgun_Entity_New` events...
+
```python
import time
from pprint import pprint
@@ -69,7 +76,7 @@ def registerCallbacks(reg):
def Function_Name(sg, logger, event, args):
- # Waiting here should allow the entity to be fully created
+ # Waiting here should allow the entity to be fully created
# and all the necessary attributes to be added to the NOTE entity
time.sleep(1)
current_date = time.strftime("%Y-%m %b-%d")
@@ -98,14 +105,18 @@ def Function_Name(sg, logger, event, args):
logger.info('Dates are not prepended for notes in project id 116 - Software Development')
return
```
+
Note the `sleep` call as the very first line of the function body. The reason for this deals with the way that `new` events are handled.
+
1. When a NEW entity is created in SG, it is still rather unformed - meaning that it doesn't possess all the attributes needed to fully define that entity as you're used to it. In fact, in this example, I can't even guarantee that the `subject` attribute will be on the Note entity when SG emits the `Shotgun_Note_New` event.
2. In order to add all of the necessary attributes, SG then publishes a series of `Shotgun_Note_Change` events wherein SG will add every single attribute to the entity and update the values of those attributes - if required.
3. This means that a multiplicity of events are created, which means that if you need two different attributes to be present and you didn't write a `sleep` aspect to your code, then you'd have to sift through ALL of the `Shotgun_Note_Change` events and the internal metadata looking for only those that have new attributes added and values set... This is a cumbersome process and will process many `Shotgun_Note_Change` events looking for - effectively - just one per note at time of creation.
4. The solution as I've found it is to rely on `Shotgun_Entity_New` and let the script sleep for a short period. At the end of the sleep, SG will have updated all the attributes required for the entity, and you can then re-query that same entity for any of the fields you need.
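The sleep-then-requery pattern from step 4 can be sketched like this (the entity and field names are illustrative, not prescribed by the daemon):

```python
import time

def noteCreated(sg, logger, event, args):
    # Give SG a moment to finish populating the brand-new Note's attributes.
    time.sleep(1)
    note = sg.find_one("Note", [["id", "is", event["entity"]["id"]]], ["subject"])
    if note is None:
        # The note may have been deleted between the event and our query.
        return
    logger.info("New note subject: %s", note["subject"])
```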
## 3. Field Deletion Warning
+
### Generating Notes, Working with Fields as Entities, and Entity Retirement Events
+
```python
"""
@@ -202,4 +213,5 @@ def trashedFieldWarning(sg, logger, event, args):
CreateNote(sg, logger, event)
```
+
This is a very simple script. There is no special logic in checking for deleted fields. If a field is deleted, then a note is created and sent to a group of people that need to know. In my department, we have the group id set to the 'programmers' group, and the project id of the note set to the 'development' project.
diff --git a/docs/en/event-daemon/event-daemon-installation.md b/docs/en/event-daemon/event-daemon-installation.md
index a3c08ac7b..1c80b0e0c 100644
--- a/docs/en/event-daemon/event-daemon-installation.md
+++ b/docs/en/event-daemon/event-daemon-installation.md
@@ -5,23 +5,24 @@ pagename: event-daemon-installation
lang: en
---
-
# Installation
The following guide will help you setup {% include product %}Events for your studio.
+
## System Requirements
The daemon can run on any machine that has Python installed and has network access to your {% include product %} server. It does **not** need to run on the {% include product %} server itself. In fact, if you are using the hosted version of {% include product %}, this isn't an option. However, you may run it on your {% include product %} server if you like. Otherwise, any server will do.
-* Python v2.6, v2.7 or 3.7
-* [{% include product %} Python API](https://github.com/shotgunsoftware/python-api)
- * Use v3.0.37 or higher for Python v2.6 or v2.7 and use v3.1.0 or more for Python 3.7.
- * In either case, we strongly suggest using [the most up to date Python API version](https://github.com/shotgunsoftware/python-api/releases) and keeping this dependency updated over time.
-* Network access to your {% include product %} server
+- Python v2.6, v2.7 or 3.7
+- [{% include product %} Python API](https://github.com/shotgunsoftware/python-api)
+ - Use v3.0.37 or higher for Python v2.6 or v2.7, and v3.1.0 or higher for Python 3.7.
+ - In either case, we strongly suggest using [the most up to date Python API version](https://github.com/shotgunsoftware/python-api/releases) and keeping this dependency updated over time.
+- Network access to your {% include product %} server
+
## Installing the {% include product %} API
Assuming Python is already installed on your machine, you now need to install the {% include product %} Python API so that the {% include product %} Event Daemon can use it to connect to your {% include product %} server. You can do this in a couple of ways:
@@ -45,6 +46,7 @@ ImportError: No module named shotgun_api3
```
+
## Installing {% include product %}Events
The location you choose to install {% include product %}Events is really up to you. Again, as long as Python and the {% include product %} API are installed on the machine, and it has network access to your {% include product %} server, it can run from anywhere. However, it makes sense to install it somewhere that is logical to your studio; something like `/usr/local/shotgun/shotgunEvents` seems logical, so we'll use that as the example from here on out.
@@ -54,6 +56,7 @@ The source and archives are available on GitHub at [https://github.com/shotgunso
{% include info title="Note" content="**For Windows:** You could use `C:\shotgun\shotgunEvents` if you have a Windows server but for this documentation we will be using the Linux path." %}
+
### Cloning the source
The easiest way to grab the source if you have `git` installed on the machine is to simply clone the project. This way you can also easily pull in any updates that are committed to ensure you stay up to date with bug fixes and new features.
@@ -66,6 +69,7 @@ $ git clone git://github.com/shotgunsoftware/shotgunEvents.git
{% include info title="Warning" content="Always make sure you backup your configuration, plugins, and any modifications you make to shotgunEvents before pulling in updates from GitHub so you don't lose anything. Or, fork the project yourself so you can maintain your own repository of changes :)" %}
+
### Downloading the archive
If you don't have `git` on your machine or you simply would rather download an archive of the source, you can get things rolling following these steps.
@@ -108,6 +112,7 @@ drwxr-xr-x 6 kp wheel 204 Sep 1 17:46 src
```
+
### Installing Requirements
A `requirements.txt` file is provided at the root of the repository. You should use this to install the required packages
@@ -116,14 +121,14 @@ A `requirements.txt` file is provided at the root of the repository. You should
$ pip install -r /path/to/requirements.txt
```
-
+
### Windows specifics
You will need one of the following on your Windows system:
-* Python with [PyWin32](http://sourceforge.net/projects/pywin32/) installed
-* [Active Python](http://www.activestate.com/activepython)
+- Python with [PyWin32](http://sourceforge.net/projects/pywin32/) installed
+- [Active Python](http://www.activestate.com/activepython)
Active Python ships with the PyWin32 module, which is required for integrating the {% include product %} Event Daemon with Windows' Service architecture.
diff --git a/docs/en/event-daemon/event-daemon-plugins.md b/docs/en/event-daemon/event-daemon-plugins.md
index 3c5a20027..4cdc7f43f 100644
--- a/docs/en/event-daemon/event-daemon-plugins.md
+++ b/docs/en/event-daemon/event-daemon-plugins.md
@@ -7,17 +7,18 @@ lang: en
# Plugins Overview
-A plugin file is any *.py* file in a plugin path as specified in the config file.
+A plugin file is any _.py_ file in a plugin path as specified in the config file.
There are some example plugins provided in the `src/examplePlugins` folder in your download of the code. These provide simple examples of how to build your own plugins to look for specific events generated, and act on them, changing other values on your {% include product %} instance.
-You do not need to restart the daemon each time you make updates to a plugin, the daemon will detect that the plugin has been updated and reload it automatically.
+You do not need to restart the daemon each time you make updates to a plugin; the daemon will detect that the plugin has been updated and reload it automatically.
If a plugin generates an error, it will not cause the daemon to crash. The plugin will be disabled until it is updated again (hopefully fixed). Any other plugins will continue to run and events will continue to be processed. The daemon will keep track of the last event id that the broken plugin processed successfully. When it is updated (and fixed, hopefully), the daemon will reload it and attempt to process events starting from where that plugin left off. Assuming all is well again, the daemon will catch the plugin up to the current event and then continue to process events with all plugins as normal.
-
+
A {% include product %} event processing plugin has two main parts: a callback registration function and any number of callbacks.
+
## registerCallbacks function
To be loaded by the framework, your plugin should at least implement the following function:
@@ -38,6 +39,7 @@ For each of your functions that should process {% include product %} events, cal
You can register as many functions as you wish and not all functions in the file need to be registered as event processing callbacks.
+
## Callbacks
A callback that will be registered with the system must take four arguments:
@@ -50,6 +52,7 @@ A callback that will be registered with the system must take four arguments:
{% include info title="Warning" content="You can do whatever you want in a plugin but if any exception raises back to the framework, the plugin within which the offending callback lives (and all contained callbacks) will be deactivated until the file on disk is changed (read: fixed)." %}
+
## Logging
Using the print statement in an event plugin is not recommended. It is preferred that you use the standard logging module from the Python standard library. A logger object will be provided to your various functions.
@@ -71,6 +74,7 @@ def exampleCallback(sg, logger, event, args):
If the event framework is running as a daemon this will be logged to a file otherwise it will be logged to stdout.
+
## Building Robust plugins
The daemon runs queries against {% include product %} but has built-in functionality to retry find() commands should they fail, giving the daemon itself a certain degree of robustness.
diff --git a/docs/en/event-daemon/event-daemon-technical-details.md b/docs/en/event-daemon/event-daemon-technical-details.md
index 1e6eabe98..dcd9da62c 100644
--- a/docs/en/event-daemon/event-daemon-technical-details.md
+++ b/docs/en/event-daemon/event-daemon-technical-details.md
@@ -8,6 +8,7 @@ lang: en
# Technical Overview
+
## Event Types
The event types your triggers can register to be notified of generally respect the following form: `Shotgun_[entity_type]_[New|Change|Retirement|Revival]`. Here are a few examples of this pattern:
@@ -22,7 +23,7 @@ The event types your triggers can register to be notified of are generally respe
Some notable departures from this pattern are used for events that aren't related to entity record activity but rather to key points in application behavior.
CRS_PlaylistShare_Create
- CRS_PlaylistShare_Revoke
+ CRS_PlaylistShare_Revoke
SG_RV_Session_Validate_Success
Shotgun_Attachment_View
Shotgun_Big_Query
@@ -34,45 +35,53 @@ Some notable departures from this pattern are used for events that aren't relate
Toolkit_Desktop_ProjectLaunch
Toolkit_Desktop_AppLaunch
Toolkit_Folders_Create
- Toolkit_Folders_Delete
+ Toolkit_Folders_Delete
This list is not exhaustive but is a good place to start. If you wish to know more about the activity and event types on your {% include product %} site, consult a page of EventLogEntries, which you can filter and search like any other grid page.
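The common pattern above can be split apart with a small helper. A sketch: application-level events such as `Toolkit_Folders_Create` deliberately fall through.

```python
import re

# Matches the common Shotgun_[entity_type]_[action] pattern described above.
_EVENT_TYPE_RE = re.compile(
    r"^Shotgun_(?P<entity_type>\w+)_(?P<action>New|Change|Retirement|Revival)$"
)


def parse_event_type(event_type):
    """Return (entity_type, action) for conforming event types, else None."""
    match = _EVENT_TYPE_RE.match(event_type)
    if match:
        return match.group("entity_type"), match.group("action")
    return None
```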
### Event Log Entries for Thumbnails
-When a new thumbnail is uploaded for an entity, an Event Log entry is created with ``` `Type` == `Shotgun__Change` ``` (e.g. `Shotgun_Shot_Change`).
-1. The ```‘is_transient’``` field value is set to true:
+
+When a new thumbnail is uploaded for an entity, an Event Log entry is created with `` `Type` == `Shotgun_[entity_type]_Change` `` (e.g. `Shotgun_Shot_Change`).
+
+1. The `‘is_transient’` field value is set to true:
+
```
{ "type": "attribute_change","attribute_name": "image",
"entity_type": "Shot", "entity_id": 1286, "field_data_type": "image",
- "old_value": null, "new_value": 11656,
- "is_transient": true
+ "old_value": null, "new_value": 11656,
+ "is_transient": true
}
```
-2. When the thumbnail becomes available, a new Event Log entry is created with the ```‘is_transient’``` field value now set to false:
+
+2. When the thumbnail becomes available, a new Event Log entry is created with the `‘is_transient’` field value now set to false:
+
```
{ "type": "attribute_change", "attribute_name": "image",
"entity_type": "Shot", "entity_id": 1286, "field_data_type": "image",
"old_value": null, "new_value": 11656,
- "is_transient": false
+ "is_transient": false
}
```
+
3. If we update the thumbnail again, we get these new Event Log entries:
+
```
{ "type": "attribute_change", "attribute_name": "image",
- "entity_type": "Shot", "entity_id": 1286, "field_data_type": "image",
- "old_value": 11656, "new_value": 11657,
- "is_transient": true
+ "entity_type": "Shot", "entity_id": 1286, "field_data_type": "image",
+ "old_value": 11656, "new_value": 11657,
+ "is_transient": true
}
-{ "type": "attribute_change", "attribute_name": "image",
- "entity_type": "Shot", "entity_id": 1286, "field_data_type": "image",
- "old_value": null, "new_value": 11657,
- "is_transient": false
+{ "type": "attribute_change", "attribute_name": "image",
+ "entity_type": "Shot", "entity_id": 1286, "field_data_type": "image",
+ "old_value": null, "new_value": 11657,
+ "is_transient": false
}
```
-4. Notice the ```‘old_value’``` field is set to null when the attachment’s thumbnail is the placeholder thumbnail.
+4. Notice the `‘old_value’` field is set to null when the attachment’s thumbnail is the placeholder thumbnail.
+
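A callback that only cares about finished thumbnails can use the `is_transient` flag to skip the first entry. A sketch, assuming the attribute-change payload shown above arrives in the event's `meta` field:

```python
def thumbnailUpdated(sg, logger, event, args):
    """Handle image changes only once the thumbnail is actually available.

    Returns True when the event was acted on. Illustrative sketch; verify
    against your own events that the payload lives under event["meta"].
    """
    meta = event.get("meta", {})
    if meta.get("attribute_name") != "image":
        return False  # not a thumbnail change
    if meta.get("is_transient"):
        # First entry fires while the thumbnail is still being generated.
        logger.debug("Thumbnail still generating; skipping event %s", event.get("id"))
        return False
    logger.info(
        "New thumbnail %s on %s %s",
        meta.get("new_value"), meta.get("entity_type"), meta.get("entity_id"),
    )
    return True
```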
## Plugin Processing Order
Each event is always processed in the same predictable order so that if any plugins or callbacks are co-dependent, you can safely organize their processing.
@@ -88,6 +97,7 @@ Finally, each callback registered by a plugin is called in registration order. F
We suggest keeping any functionality that needs to share state in the same plugin, as one or more callbacks.
+
## Sharing state
Many options exist for multiple callbacks that need to share state.
@@ -97,8 +107,8 @@ Many options exist for multiple callbacks that need to share state.
- A mutable object passed in the `args` argument when calling [`Registrar.registerCallback`](API#wiki-registerCallback). A state object of your design, or something as simple as a `dict`. Preferred.
- Implement callbacks such as `__call__` on object instances and provide some shared state object at callback object initialization. The most powerful yet most convoluted method. May be redundant compared to the `args` argument method above.
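The preferred `args` approach can look like this. A sketch: the script name and key are placeholders, and the `registerCallback` keyword arguments are assumed from the API page.

```python
def registerCallbacks(reg):
    # One mutable dict shared by both callbacks registered below.
    shared = {"shots_seen": set()}
    reg.registerCallback("script_name", "script_key", noteShot, args=shared)
    reg.registerCallback("script_name", "script_key", reportShots, args=shared)


def noteShot(sg, logger, event, args):
    """First callback: record which Shot each event touched."""
    entity = event.get("entity") or {}
    if entity.get("type") == "Shot":
        args["shots_seen"].add(entity.get("id"))


def reportShots(sg, logger, event, args):
    """Second callback: sees the state accumulated by the first."""
    logger.info("Shots seen so far: %d", len(args["shots_seen"]))
```

Because both registrations hand in the same dict, whatever `noteShot` stores is visible to `reportShots` when it runs later in the processing order.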
-
+
## Event Backlogs
The framework is designed to have every plugin process every single event they are interested in exactly once, without exception. To make sure this happens, the framework stores a backlog of unprocessed events for each plugin and remembers the last event each plugin was provided. Here is a description of situations in which a backlog may occur.
diff --git a/docs/en/event-daemon/event-daemon.md b/docs/en/event-daemon/event-daemon.md
index 66989236c..82a28027d 100644
--- a/docs/en/event-daemon/event-daemon.md
+++ b/docs/en/event-daemon/event-daemon.md
@@ -5,16 +5,15 @@ pagename: event-daemon
lang: en
---
+# {% include product %} Event Framework
-# {% include product %} Event Framework
-This software was originaly developed by [Patrick Boucher](http://www.patrickboucher.com) with support from [Rodeo Fx](http://rodeofx.com) and Oblique. It is now part of [{% include product %} Software](http://www.shotgridsoftware.com)'s [open source initiative](https://github.com/shotgunsoftware).
+This software was originally developed by [Patrick Boucher](http://www.patrickboucher.com) with support from [Rodeo Fx](http://rodeofx.com) and Oblique. It is now part of [{% include product %} Software](http://www.shotgridsoftware.com)'s [open source initiative](https://github.com/shotgunsoftware).
This software is provided under the MIT License, which can be found in the LICENSE file or at the [Open Source Initiative](http://www.opensource.org/licenses/mit-license.php) website.
-
## Overview
-When you want to access the {% include product %} event stream, the preferred way to do so it to monitor the events table, get any new events, process them and repeat.
+When you want to access the {% include product %} event stream, the preferred way to do so is to monitor the events table, get any new events, process them, and repeat.
A lot of machinery is required for this process to work successfully, much of which has no direct bearing on the business rules that need to be applied.
@@ -27,12 +26,12 @@ The daemon handles:
- Registering plugins from one or more specified paths.
- Deactivating any crashing plugins.
- Reloading plugins when they change on disk.
-- Monitoring the {% include product %} event stream.
+- Monitoring the {% include product %} event stream.
- Remembering the last processed event id and any backlog.
- Starting from the last processed event id on daemon startup.
- Catching any connection errors.
- Logging information to stdout, file or email as required.
-- Creating a connection to {% include product %} that will be used by the callback.
+- Creating a connection to {% include product %} that will be used by the callback.
- Handing off events to registered callbacks.
A plugin handles:
@@ -40,11 +39,9 @@ A plugin handles:
- Registering any number of callbacks into the framework.
- Processing a single event when one is provided by the framework.
-
## Advantages of the framework
- Only deal with a single monitoring mechanism for all scripts, not one per
script.
- Minimize network and database load (only one monitor that supplies events to
  many event processing plugins).
-
\ No newline at end of file
diff --git a/docs/en/guides/pipeline-integrations.md b/docs/en/guides/pipeline-integrations.md
index 2ddfb2a1b..47b5cf1b3 100644
--- a/docs/en/guides/pipeline-integrations.md
+++ b/docs/en/guides/pipeline-integrations.md
@@ -7,6 +7,6 @@ lang: en
# Pipeline Integrations
-{% include product %}'s pipeline integrations bring {% include product %} data to your artists. Customizable UIs within popular content creation software give artists out-of-the-box tools to view information about their tasks, read and add notes, and share files with teammates. Pipeline integrations are build on the {% include product %} Toolkit platform, and developers can use the Toolkit API to extend functionality or create custom Toolkit apps.
+{% include product %}'s pipeline integrations bring {% include product %} data to your artists. Customizable UIs within popular content creation software give artists out-of-the-box tools to view information about their tasks, read and add notes, and share files with teammates. Pipeline integrations are built on the {% include product %} Toolkit platform, and developers can use the Toolkit API to extend functionality or create custom Toolkit apps.
-This section contains learning materials to help you get started as you administer a {% include product %} pipeline. You'll find guides to configuring your pipeline and managing your production file system, a tutorial for building a basic vfx pipeline, and resources for writing your own pipeline tools.
+This section contains learning materials to help you get started as you administer a {% include product %} pipeline. You'll find guides to configuring your pipeline and managing your production file system, a tutorial for building a basic VFX pipeline, and resources for writing your own pipeline tools.
diff --git a/docs/en/guides/pipeline-integrations/administration.md b/docs/en/guides/pipeline-integrations/administration.md
index 518f1d59f..7214842a4 100644
--- a/docs/en/guides/pipeline-integrations/administration.md
+++ b/docs/en/guides/pipeline-integrations/administration.md
@@ -7,6 +7,6 @@ lang: en
# Administration
-{% include product %}'s pipeline integrations offer a vast set of customization options. Getting your studio's desired pipeline up and running can be a combination of configuration, running command line tools, and ensuring that the {% include product %} tools work in your studio environment.
+{% include product %}'s pipeline integrations offer a vast set of customization options. Getting your studio's desired pipeline up and running can be a combination of configuration, running command line tools, and ensuring that the {% include product %} tools work in your studio environment.
-This section contains information about administering your studio's {% include product %} Toolkit pipeline.
+This section contains information about administering your studio's {% include product %} Toolkit pipeline.
diff --git a/docs/en/guides/pipeline-integrations/administration/apps-and-engines-config-reference.md b/docs/en/guides/pipeline-integrations/administration/apps-and-engines-config-reference.md
index 3bca477ac..d6d9f3264 100644
--- a/docs/en/guides/pipeline-integrations/administration/apps-and-engines-config-reference.md
+++ b/docs/en/guides/pipeline-integrations/administration/apps-and-engines-config-reference.md
@@ -7,7 +7,7 @@ lang: en
# Apps and Engines Configuration Reference
-This document contains an overview of all the different options that you can include when creating configurations for Apps, Engines and Frameworks in the {% include product %} Pipeline Toolkit. It can be useful when doing advanced configuration of Apps, and it is important when you are doing development and need to add parameters to your App Configuration Manifest.
+This document contains an overview of all the different options that you can include when creating configurations for Apps, Engines and Frameworks in the {% include product %} Pipeline Toolkit. It can be useful when doing advanced configuration of Apps, and it is important when you are doing development and need to add parameters to your App Configuration Manifest.
_This document describes functionality only available if you have taken control over a Toolkit configuration. For more info, see [{% include product %} Integrations Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493)._
@@ -21,10 +21,10 @@ This document contains specifications for the various file formats that Sgtk use
Three major components exist in Toolkit:
-- _An engine_ provides a translation layer or an adapter between a host application (such as Maya or Nuke) and Sgtk Apps. Apps typically use python and PySide, and it is the responsibility of the engine to present the host application in a standardized fashion and for example add pyside on top of the host application if this doesn't exist already.
-- _An app_ provides a piece of business logic, it is essentially a tool that does something. Apps can be hand crafted to work in a specific host application, or they can be designed to run in more than one host application.
-- _A framework_ is a library which can be used by engines, apps or other frameworks. A framework makes it possible to more easily manage code or behaviour which is shared between multiple apps.
-
+- _An engine_ provides a translation layer, or adapter, between a host application (such as Maya or Nuke) and Sgtk Apps. Apps typically use Python and PySide, and it is the responsibility of the engine to present the host application in a standardized fashion, for example adding PySide on top of the host application if it doesn't already exist.
+- _An app_ provides a piece of business logic; it is essentially a tool that does something. Apps can be hand-crafted to work in a specific host application, or they can be designed to run in more than one host application.
+- _A framework_ is a library which can be used by engines, apps or other frameworks. A framework makes it possible to more easily manage code or behaviour which is shared between multiple apps.
+
An _environment file_ contains the configuration settings for a collection of engines, apps and frameworks. Such a collection is called an Environment. Sgtk launches different environments for different files or different people. You can, for example, have an environment for Shot production and an environment for Rigging. Each environment is a single YAML file.
Environment files are located at `//software/shotgun//config/env`
@@ -36,29 +36,29 @@ The yaml file has the following basic format:
tk-maya:
location
engine settings
-
+
apps:
tk-maya-publish:
location
app settings
-
+
tk-maya-revolver:
location
app settings
-
+
tk-nuke:
location
engine settings
-
+
apps:
tk-nuke-setframerange:
location
app settings
-
+
tk-nuke-nukepub:
location
app settings
-
+
frameworks:
tk-framework-tools:
location
@@ -75,13 +75,13 @@ Each app, engine or framework defined in the environment file has got a `locatio
Toolkit currently supports app installation and management using the following location _descriptors_:
-- An **app_store** descriptor represents an item in the Toolkit App Store
-- A **{% include product %}** descriptor represents an item stored in {% include product %}
-- A **git** descriptor represents a tag in a git repository
-- A **git_branch** descriptor represents a commit in a git branch
-- A **path** descriptor represents a location on disk
-- A **dev** descriptor represents a developer sandbox
-- A **manual** descriptor that is used for custom deployment and rollout
+- An **app_store** descriptor represents an item in the Toolkit App Store
+- A **{% include product %}** descriptor represents an item stored in {% include product %}
+- A **git** descriptor represents a tag in a git repository
+- A **git_branch** descriptor represents a commit in a git branch
+- A **path** descriptor represents a location on disk
+- A **dev** descriptor represents a developer sandbox
+- A **manual** descriptor that is used for custom deployment and rollout
For documentation on how to use the different descriptors, please see the [Toolkit reference documentation](http://developer.shotgridsoftware.com/tk-core/descriptor.html#descriptor-types).
@@ -90,16 +90,28 @@ For documentation on how to use the different descriptors, please see the [Toolk
Sometimes it can be useful to temporarily disable an app or an engine. The recommended way of doing this is to add a `disabled: true` parameter to the location dictionary that specifies where the app or engine should be loaded from. This syntax is supported by all the different location types. For example, it may look like this:
```yaml
-location: {"type": "app_store", "name": "tk-nukepublish", "version": "v0.5.0", "disabled": true}
+location:
+ {
+ "type": "app_store",
+ "name": "tk-nukepublish",
+ "version": "v0.5.0",
+ "disabled": true,
+ }
```
Alternatively, if you want an app to only run on certain platforms, you can specify this using the special `deny_platforms` setting:
```yaml
-location: {"type": "app_store", "name": "tk-nukepublish", "version": "v0.5.0", "deny_platforms": [windows, linux]}
+location:
+ {
+ "type": "app_store",
+ "name": "tk-nukepublish",
+ "version": "v0.5.0",
+ "deny_platforms": [windows, linux],
+ }
```
-Possible values for the _deny_platforms_ parameter are `windows`, `linux`, and `mac`.
+Possible values for the _deny_platforms_ parameter are `windows`, `linux`, and `mac`.
## Settings and parameters
diff --git a/docs/en/guides/pipeline-integrations/administration/beyond-your-first-project.md b/docs/en/guides/pipeline-integrations/administration/beyond-your-first-project.md
index e6c005b01..eb7d1dcd8 100644
--- a/docs/en/guides/pipeline-integrations/administration/beyond-your-first-project.md
+++ b/docs/en/guides/pipeline-integrations/administration/beyond-your-first-project.md
@@ -5,177 +5,172 @@ pagename: beyond-your-first-project
lang: en
---
-
-# Beyond your first project
-
-Here, we explain where to go once you have got your first project up and running using the {% include product %} Desktop. It covers useful common questions and topics and lists useful documentation resources.
-
-
-# Welcome to Toolkit
-
-Welcome to Toolkit! If you are reading this, it probably means that you have managed to successfully install your first {% include product %} Pipeline Toolkit Project using the {% include product %} Desktop.
-
-
-
-At this stage, we are hoping you are up and running and have something looking like the screenshot above, a project page with several application launchers. At this stage, try opening Maya, Nuke or any of the other Applications. You should find a {% include product %} menu with further functionality for managing files and assets.
-
-So where do you go from here? Toolkit offers a lot of flexibility in terms of its configuration and how it works. This document tries to cover some of the next steps that we recommend that you carry out once you are up and running with your first project using the {% include product %} Desktop.
-
-# Basic Configuration
-
+# Beyond your first project
+
+Here, we explain where to go once you have got your first project up and running using the {% include product %} Desktop. It covers common questions and topics and lists useful documentation resources.
+
+# Welcome to Toolkit
+
+Welcome to Toolkit! If you are reading this, it probably means that you have managed to successfully install your first {% include product %} Pipeline Toolkit Project using the {% include product %} Desktop.
+
+
+
+At this stage, we are hoping you are up and running and have something looking like the screenshot above: a project page with several application launchers. Try opening Maya, Nuke or any of the other applications. You should find a {% include product %} menu with further functionality for managing files and assets.
+
+So where do you go from here? Toolkit offers a lot of flexibility in terms of its configuration and how it works. This document tries to cover some of the next steps that we recommend that you carry out once you are up and running with your first project using the {% include product %} Desktop.
+
+# Basic Configuration
+
This section contains a collection of tweaks and useful things to configure. If you have just set up your very first Toolkit project, there are most likely a number of little tweaks and adjustments you need to do to get everything up and running properly. This section tries to explain these various steps. Please note that some of these things involve editing configuration files and going "under the hood" at the moment. If you have any questions about anything, please visit our [support site](https://knowledge.autodesk.com/contact-support) for help.
-
-## Setting up Application Paths
-
-Once you have set up your first project and click one of the launch buttons to launch Maya, Motionbuilder or Nuke, it is possible that you see an error message looking something like this:
-
-
-
-In the toolkit project configuration, we store paths to the various executables that you can launch. If you are seeing the above message, it probably means that those paths are not matching your studio setup. You may also find that the wrong version of the application is being launched; for example, our default configuration may have a path to maya 2015 but your studio is running maya 2014. In this case, you also need to change the paths.
-
-In our default configurations, these paths are all stored in a single file called `paths.yml`. In order to change a path, locate your project configuration on disk and then navigate into the config folder until you find the `paths.yml` file:
-
-
-
-Open this file and make the necessary changes to the paths. Once you have saved the file, you need to leave the project inside of {% include product %} desktop and then click back into it. (but no need to restart the entire application).
-
-**Further Reading**
-
-For more information about applications, check out the following topics:
-
-- [The Toolkit Application Launcher](https://support.shotgunsoftware.com/hc/en-us/articles/219032968)
-- [Passing Commandline Arguments](https://support.shotgunsoftware.com/hc/en-us/articles/219032968#Use%20Command%20Line%20Arguments%20at%20Launch)
-
-
-## {% include product %} Integration
-
-Toolkit integrates with {% include product %} and extends the traditional interface by adding special toolkit action menu items to various parts of the UI:
-
-
-
-This offers a way to launch Toolkit applications or custom tools that operate on data directly from {% include product %}. You can learn more about integrating with your {% include product %} site in [the Browser Integration section of the Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493-Integrations-Admin-Guide#Browser%20Integration).
-
-## Adding Publishes to the {% include product %} UI
-
-Once you have toolkit installed, it usually makes sense to make some minor adjustments to the {% include product %} UI layouts. The {% include product %} Pipeline Toolkit creates _Publish Entities_ when you publish a file, so it is handy to add a _Publishes Tab_ to key Assets such as Shot and Asset. To do this, make sure that you are logged in as an admin user. Start by navigating to an Asset or Shot and enter into _Design Mode_:
-
-
-
-Now click the little menu triangle on one of the tabs, and select the _Add New Tab_ action. This will bring up a Dialog UI. Call the tab _Publishes_ and make sure that it is associated with _Published File_ Entities:
-
-
-
-Now click _Save_ to save your changes. You are all set!
-
-Note: {% include product %} will choose a couple of default fields to pull in when you create a new tab. You may want to add a couple of extra fields for publishes. This is done by clicking the little plus button in the top-right hand corner of the spreadsheet you can see under your new publishes tab. We recommend that you add the following fields:
-
-- **Description** - Holds a description of the changes in this publish
-- **Created By** - The user who created the publish
-- **Date Created** - When the publish was made
-
-If you make changes to your layouts, don't forget to save the page afterwards!
-
-## Multiple Operating Systems
-
-In some cases, you may be seeing a message popping up, informing that you **Python cannot be found** with a link to this section of the documentation.
-
-Toolkit executes its scripts and functionality using a language called [Python](https://www.python.org/). The {% include product %} Desktop comes with a complete Python installation built in, so normally you never need to worry about this. When you set up a new Toolkit project using the {% include product %} Desktop, the project will be set up by default to use the Python that comes bundled with the {% include product %} Desktop. However, sometimes you may explicitly have to tell Toolkit which Python you want it to use. This can happen in if you for example:
-
-- Use an older version of the {% include product %} Desktop which doesn't set up all Python defaults automatically.
-- If you have installed the {% include product %} Desktop in a non-standard location on disk.
-- If you a running a manual or more complex Toolkit project setup.
-
-The path to Python is stored in configuration files which you can manually edit:
-
-
-
-In order to find the right file, first navigate to your project configuration. In there, find the tree files starting with `interpreter_`. These contain the paths to the python interpreter for Linux, Windows and Mac ("Darwin"). These files contain the location of Python for each of the three operating systems. You now need to go in and manually add the python locations for any operating system you wish you use.
-
-If the files are blank, this indicates that you are using an older version of the {% include product %} Desktop. If this is the case, simply try to update the blank files with the default Python paths. They are as follows:
-
-- Macosx (Darwin): `/Applications/Shotgun.app/Contents/Frameworks/Python/bin/python`
-- Windows: `C:\Program Files\Shotgun\Python\python.exe`
-- Linux: `/opt/Shotgun/Python/bin/python`
-
-If you rather have installed the {% include product %} Desktop in a non-standard location or intend to use a custom python location, please ensure that the paths in the files point to a valid Python installation. It needs to be v2.6 or above (but not Python 3!). If you want to execute UI based applications and tools, please make sure that the Python you specify has either PyQt or PySide installed and is linked up to a QT v4.6 or higher.
-
+
+## Setting up Application Paths
+
+Once you have set up your first project and clicked one of the launch buttons to launch Maya, Motionbuilder or Nuke, you may see an error message looking something like this:
+
+
+
+In the Toolkit project configuration, we store paths to the various executables that you can launch. If you are seeing the above message, it probably means that those paths do not match your studio setup. You may also find that the wrong version of the application is being launched; for example, our default configuration may have a path to Maya 2015 while your studio is running Maya 2014. In this case, you also need to change the paths.
+
+In our default configurations, these paths are all stored in a single file called `paths.yml`. In order to change a path, locate your project configuration on disk and then navigate into the config folder until you find the `paths.yml` file:
+
+
+
+Open this file and make the necessary changes to the paths. Once you have saved the file, leave the project inside {% include product %} Desktop and then click back into it (there is no need to restart the entire application).
+
+**Further Reading**
+
+For more information about applications, check out the following topics:
+
+- [The Toolkit Application Launcher](https://support.shotgunsoftware.com/hc/en-us/articles/219032968)
+- [Passing Commandline Arguments](https://support.shotgunsoftware.com/hc/en-us/articles/219032968#Use%20Command%20Line%20Arguments%20at%20Launch)
+
+## {% include product %} Integration
+
+Toolkit integrates with {% include product %} and extends the traditional interface by adding special toolkit action menu items to various parts of the UI:
+
+
+
+This offers a way to launch Toolkit applications or custom tools that operate on data directly from {% include product %}. You can learn more about integrating with your {% include product %} site in [the Browser Integration section of the Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493-Integrations-Admin-Guide#Browser%20Integration).
+
+## Adding Publishes to the {% include product %} UI
+
+Once you have Toolkit installed, it usually makes sense to make some minor adjustments to the {% include product %} UI layouts. The {% include product %} Pipeline Toolkit creates _Publish Entities_ when you publish a file, so it is handy to add a _Publishes_ tab to key entities such as Shot and Asset. To do this, make sure that you are logged in as an admin user. Start by navigating to an Asset or Shot and enter _Design Mode_:
+
+
+
+Now click the little menu triangle on one of the tabs, and select the _Add New Tab_ action. This will bring up a Dialog UI. Call the tab _Publishes_ and make sure that it is associated with _Published File_ Entities:
+
+
+
+Now click _Save_ to save your changes. You are all set!
+
+Note: {% include product %} will choose a couple of default fields to pull in when you create a new tab. You may want to add a couple of extra fields for publishes. This is done by clicking the little plus button in the top right-hand corner of the spreadsheet under your new Publishes tab. We recommend that you add the following fields:
+
+- **Description** - Holds a description of the changes in this publish
+- **Created By** - The user who created the publish
+- **Date Created** - When the publish was made
+
+If you make changes to your layouts, don't forget to save the page afterwards!
+
+## Multiple Operating Systems
+
+In some cases, you may see a message popping up informing you that **Python cannot be found**, with a link to this section of the documentation.
+
+Toolkit executes its scripts and functionality using a language called [Python](https://www.python.org/). The {% include product %} Desktop comes with a complete Python installation built in, so normally you never need to worry about this. When you set up a new Toolkit project using the {% include product %} Desktop, the project will be set up by default to use the Python that comes bundled with the {% include product %} Desktop. However, sometimes you may explicitly have to tell Toolkit which Python you want it to use. This can happen if you, for example:
+
+- Use an older version of the {% include product %} Desktop which doesn't set up all Python defaults automatically.
+- Have installed the {% include product %} Desktop in a non-standard location on disk.
+- Are running a manual or more complex Toolkit project setup.
+
+The path to Python is stored in configuration files which you can manually edit:
+
+
+
+In order to find the right file, first navigate to your project configuration. In there, find the three files starting with `interpreter_`. These files contain the path to the Python interpreter for each of the three operating systems: Linux, Windows and Mac ("Darwin"). You now need to go in and manually add the Python locations for any operating system you wish to use.
+
+If the files are blank, this indicates that you are using an older version of the {% include product %} Desktop. If this is the case, simply try to update the blank files with the default Python paths. They are as follows:
+
+- Mac OS X (Darwin): `/Applications/Shotgun.app/Contents/Frameworks/Python/bin/python`
+- Windows: `C:\Program Files\Shotgun\Python\python.exe`
+- Linux: `/opt/Shotgun/Python/bin/python`
+
+If you have instead installed the {% include product %} Desktop in a non-standard location, or intend to use a custom Python location, please ensure that the paths in the files point to a valid Python installation. It needs to be v2.6 or above (but not Python 3!). If you want to execute UI-based applications and tools, please make sure that the Python you specify has either PyQt or PySide installed and is linked against Qt v4.6 or higher.
+
Please also note that in order to run Toolkit on multiple operating systems, you need to specify the paths to all your desired platforms when you are running the project setup wizard. If you haven't done this, and want to add an additional operating system to a storage path or configuration location, please visit our [support site](https://knowledge.autodesk.com/contact-support) for help.
-
-# Next Steps
-
-Hopefully at this point you now have the default {% include product %} setup working for a {% include product %} project (or test project). Applications are launching, Context menu actions and publishes are showing up in {% include product %} and things are working on all your desired operating system platforms.
-
-This next section is all about what to do next -- the process of starting to take that default configuration and adjust it to work more like the rest of your studio pipeline. Toolkit is flexible and highly configurable, and we have lots of documentation. But before you get started, to see it all in action, we recommend spending a couple of minutes checking out our various walkthrough videos. These show the {% include product %} Pipeline Toolkit in action, how it works inside applications such as Maya and Nuke. It also goes through basic concepts such as publishing, version control, loading etc.
-
-[{% include product %} Toolkit Video Collection](https://support.shotgunsoftware.com/hc/en-us/articles/219040678)
-
-## The anatomy of a Toolkit Project
-
-When you create a new Toolkit project, you end up with a couple of key locations.
-
-
-
-- The {% include product %} Desktop and its configuration is installed on your local machine. (If you want, it is possible to relocate both the application and the configuration to a shared storage).
-- The data area where the Toolkit project will store textures, files, renders etc. This is normally on a shared storage, because you want to share this data with other users, however there are exceptions to this rule; user work areas can be stored on local (user only) storage, and integrations such as our perforce integration uses an external system to help distribute content.
-- The toolkit configuration is a fully self contained bundle, including code, apps, core API etc. This is normally stored on a shared storage so that the configuration is easily accessible by all users.
-
-Your Project configuration on disk contains a couple of different items.
-
-
-
-In the following sections we'll walk through the various parts of the project configuration folder.
-
-### Command line access
-
-As well as using the {% include product %} Desktop, you can also access Toolkit via a terminal or shell. Each project that you create on disk comes with a special `tank` command which gives you command line based access to a lot of functionality, including starting up an API session and launching applications.
-
-If you navigate to your project configuration, you can see a `tank` and a `tank.bat` command in the root of the configuration. Running these commands without any options will give you a list of all the commands that are supported in your current configuration, including the following useful commands:
-
-- `tank shell` - Start an interactive python shell with tk api access
-- `tank core` - Check if there are any core API updates available for this project
-- `tank updates` - Check if any of the apps or engines in this configuration has got any updates available
-
-For more details on what you can do with the `tank` command, please see the in-depth technical documentation:
-
-[How to Administer Toolkit](https://support.shotgunsoftware.com/hc/en-us/articles/219033178)
-
-### Key Configuration Files
-
-The `config` folder contains a couple of key configuration files.
-
-
-
-Toolkit comes with a folder creation system which tries to automatically create folders on disk to make sure that when you start up an application, all the necessary structure on disk exists and has been prepared on beforehand! The configuration for this can be found in the `schema` folder indicated above.
-
-Hand in hand with this goes the Toolkit _template system_ which makes it easy to define the various paths to files that you can configure; your publishes, work files, renders etc. This is stored in the `templates.yml` file above.
-
+
+# Next Steps
+
+Hopefully at this point you now have the default {% include product %} setup working for a {% include product %} project (or test project). Applications are launching, Context menu actions and publishes are showing up in {% include product %} and things are working on all your desired operating system platforms.
+
+This next section is all about what to do next -- the process of starting to take that default configuration and adjust it to work more like the rest of your studio pipeline. Toolkit is flexible and highly configurable, and we have lots of documentation. But before you get started, to see it all in action, we recommend spending a couple of minutes checking out our various walkthrough videos. These show the {% include product %} Pipeline Toolkit in action, how it works inside applications such as Maya and Nuke. It also goes through basic concepts such as publishing, version control, loading etc.
+
+[{% include product %} Toolkit Video Collection](https://support.shotgunsoftware.com/hc/en-us/articles/219040678)
+
+## The anatomy of a Toolkit Project
+
+When you create a new Toolkit project, you end up with a couple of key locations.
+
+
+
+- The {% include product %} Desktop and its configuration are installed on your local machine. (If you want, it is possible to relocate both the application and the configuration to a shared storage.)
+- The data area where the Toolkit project will store textures, files, renders, etc. This is normally on a shared storage because you want to share this data with other users; however, there are exceptions to this rule: user work areas can be stored on local (user only) storage, and integrations such as our Perforce integration use an external system to help distribute content.
+- The Toolkit configuration is a fully self-contained bundle, including code, apps, core API, etc. This is normally stored on a shared storage so that the configuration is easily accessible by all users.
+
+Your Project configuration on disk contains a couple of different items.
+
+
+
+In the following sections we'll walk through the various parts of the project configuration folder.
+
+### Command line access
+
+As well as using the {% include product %} Desktop, you can also access Toolkit via a terminal or shell. Each project that you create on disk comes with a special `tank` command which gives you command line based access to a lot of functionality, including starting up an API session and launching applications.
+
+If you navigate to your project configuration, you can see a `tank` and a `tank.bat` command in the root of the configuration. Running these commands without any options will give you a list of all the commands that are supported in your current configuration, including the following useful commands:
+
+- `tank shell` - Start an interactive python shell with tk api access
+- `tank core` - Check if there are any core API updates available for this project
+- `tank updates` - Check if any of the apps or engines in this configuration have updates available
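
As a quick illustration, a terminal session using the `tank` command might look like the sketch below. The project path is hypothetical; substitute the location of your own project configuration.

```shell
# Hypothetical project configuration location - substitute your own.
PROJECT_CONFIG="${PROJECT_CONFIG:-/mnt/projects/my_project/config}"

if [ -x "$PROJECT_CONFIG/tank" ]; then
    # Run with no options to list all supported commands,
    # or invoke one directly, e.g.:
    "$PROJECT_CONFIG/tank" updates
else
    echo "No tank command found at $PROJECT_CONFIG"
fi
```

On Windows, use `tank.bat` from the same location instead.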
+
+For more details on what you can do with the `tank` command, please see the in-depth technical documentation:
+
+[How to Administer Toolkit](https://support.shotgunsoftware.com/hc/en-us/articles/219033178)
+
+### Key Configuration Files
+
+The `config` folder contains a couple of key configuration files.
+
+
+
+Toolkit comes with a folder creation system which automatically creates folders on disk to make sure that when you start up an application, all the necessary structure on disk exists and has been prepared beforehand. The configuration for this can be found in the `schema` folder indicated above.
+
+Hand in hand with this goes the Toolkit _template system_ which makes it easy to define the various paths to files that you can configure; your publishes, work files, renders etc. This is stored in the `templates.yml` file above.
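
To give a feel for the template system, here is a small illustrative fragment in the general shape of a `templates.yml` file. The key and template names below are examples for illustration, not entries from your configuration.

```yaml
# Illustrative sketch only - key and template names are hypothetical examples.
keys:
  version:
    type: int
    format_spec: "03"

paths:
  maya_shot_work:
    definition: 'sequences/{Sequence}/{Shot}/{Step}/work/maya/{name}.v{version}.ma'
```

Apps then refer to templates such as `maya_shot_work` by name rather than hard-coding paths, which is what makes it possible to relocate files without reconfiguring each app.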
+
Together, these two parts of the project configuration make it possible to adjust the various apps that Toolkit uses to write out data to locations on disk which make sense and are understood by your existing pipeline.
-Read more about this in our advanced documentation:
-
-- [Folder Configuration](https://support.shotgunsoftware.com/hc/en-us/articles/219033178#Creating%20folders%20on%20disk%20with%20Sgtk)
-- [Filesystem Templates](https://support.shotgunsoftware.com/hc/en-us/articles/219033178#Configuring%20Templates)
-
-A toolkit configuration is essentially made up of a collection of configured **apps and engines**. This configuration is located in the `env` folder. If the file system configuration files discussed above define _where_ resources should be located on disk, the environment configuration with its apps and engines define _what_ the pipeline is supposed to do.
-
-
-### Core API platform
-
-Each project configuration uses a collection of Apps and Engines. The configuration for these apps and engines are stored in the `env` folder inside the configuration. Toolkit will then automatically download and manage the various versions of the code needed to run these apps and engines. The code is placed inside the `install`folder.
-
-The configuration, apps and engines are all running on top of the Toolkit Core platform. For new projects, this is also stored inside the `install` folder. Essentially, a project configuration is fully self contained - all the necessary pieces required to run toolkit are in a single place. This also means that each project is independent and updating one project will not break another.
-
-Tech Notes: Using a shared Toolkit Core (Click to expand)
-
-### Further reading
-
-We also have a more technical document that goes through the high level concepts in the {% include product %} Pipeline Toolkit and explains 'bigger picture' things. Once you have a good grasp of what Toolkit does out of the box, we recommend that you move on to this document to get a deeper undestanding of how Toolkit could be adjusted to suit your particular studio needs.
-
-[An introduction to the high level concepts in the {% include product %} Toolkit](https://support.shotgunsoftware.com/hc/en-us/articles/219040648)
-
-## The Toolkit Community
-
-A part of Toolkit is its community of pipeline engineers and TDs! We are on a mission to create a vibrant, code sharing community where we all can help evolve Toolkit together to become a powerful and flexible pipeline environment.
-
-If you have any questions, or want to read through existing posts and conversations, please visit our [public forums section](https://support.shotgunsoftware.com/hc/en-us/community/topics/200682428-Pipeline-Toolkit-Common-Questions-and-Answers).
-
+Read more about this in our advanced documentation:
+
+- [Folder Configuration](https://support.shotgunsoftware.com/hc/en-us/articles/219033178#Creating%20folders%20on%20disk%20with%20Sgtk)
+- [Filesystem Templates](https://support.shotgunsoftware.com/hc/en-us/articles/219033178#Configuring%20Templates)
+
+A Toolkit configuration is essentially made up of a collection of configured **apps and engines**. This configuration is located in the `env` folder. If the file system configuration files discussed above define _where_ resources should be located on disk, the environment configuration with its apps and engines defines _what_ the pipeline is supposed to do.
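
An environment file in the `env` folder generally follows the shape sketched below: engines at the top level, each with the apps it should run and where to fetch them from. The engine, app, and version shown are examples of the structure, not pinned recommendations.

```yaml
# Illustrative fragment only - names and version are examples.
engines:
  tk-maya:
    apps:
      tk-multi-workfiles2:
        location:
          type: app_store
          name: tk-multi-workfiles2
          version: v0.12.2
```

The `location` block tells Toolkit which version of an app or engine to download and manage for you.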
+
+### Core API platform
+
+Each project configuration uses a collection of apps and engines. The configuration for these apps and engines is stored in the `env` folder inside the configuration. Toolkit will then automatically download and manage the various versions of the code needed to run these apps and engines. The code is placed inside the `install` folder.
+
+The configuration, apps and engines all run on top of the Toolkit Core platform. For new projects, this is also stored inside the `install` folder. Essentially, a project configuration is fully self-contained - all the necessary pieces required to run Toolkit are in a single place. This also means that each project is independent and updating one project will not break another.
+
+Tech Notes: Using a shared Toolkit Core
+
+### Further reading
+
+We also have a more technical document that goes through the high level concepts in the {% include product %} Pipeline Toolkit and explains 'bigger picture' things. Once you have a good grasp of what Toolkit does out of the box, we recommend that you move on to this document to get a deeper understanding of how Toolkit could be adjusted to suit your particular studio needs.
+
+[An introduction to the high level concepts in the {% include product %} Toolkit](https://support.shotgunsoftware.com/hc/en-us/articles/219040648)
+
+## The Toolkit Community
+
+A key part of Toolkit is its community of pipeline engineers and TDs! We are on a mission to create a vibrant, code-sharing community where we can all help evolve Toolkit together into a powerful and flexible pipeline environment.
+
+If you have any questions, or want to read through existing posts and conversations, please visit our [public forums section](https://support.shotgunsoftware.com/hc/en-us/community/topics/200682428-Pipeline-Toolkit-Common-Questions-and-Answers).
diff --git a/docs/en/guides/pipeline-integrations/administration/community-shared-integrations.md b/docs/en/guides/pipeline-integrations/administration/community-shared-integrations.md
index d29fb1d6c..fdb069ad8 100644
--- a/docs/en/guides/pipeline-integrations/administration/community-shared-integrations.md
+++ b/docs/en/guides/pipeline-integrations/administration/community-shared-integrations.md
@@ -11,33 +11,33 @@ Here are projects that people in the Toolkit community have been gracious enough
### Engines
-----------
-
-| Integration | Engine | Information |
-|:-----------:|:------:| ----------- |
-|
| **tk-katana** | Project URL: [https://github.com/robblau/tk-katana](https://github.com/robblau/tk-katana)
Project Contributor: [Lightchaser Animation](https://github.com/LightChaserAnimationStudio)
Project Maintainer:
Project Description: A {% include product %} Engine for Foundry's Katana |
-|
| **tk-unreal** | Project URL: [https://docs.unrealengine.com/en-US/Engine/Content/UsingUnrealEnginewithAutodeskShotgun/index.html](https://docs.unrealengine.com/en-US/Engine/Content/UsingUnrealEnginewithAutodeskShotgun/index.html)
Project Contributor: [Epic Games](http://epicgames.com/)
Project Maintainer:
Project Description: A {% include product %} Engine for [Unreal Engine](https://www.unrealengine.com/en-US/) |
-|
| **tk-substancepainter** | Project URL: [https://github.com/diegogarciahuerta/tk-substancepainter](https://github.com/diegogarciahuerta/tk-substancepainter)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for Adobe's Substance Painter |
-|
| **tk-substancedesigner** | Project URL: [https://github.com/diegogarciahuerta/tk-substancedesigner](https://github.com/diegogarciahuerta/tk-substancedesigner)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for Adobe's Substance Designer
More info: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/substance-designer-shotgun-toolkit-engine-released/9944)|
-|
| **tk-modo** | Project URL: [https://github.com/tremolo/tk-modo](https://github.com/tremolo/tk-modo)
Project Contributor: Lutz Pälike and [Walking The Dog](http://www.walkingthedog.be/)
Project Maintainer:
Project Description: A {% include product %} Engine for Foundry's Modo |
-|
| **tk-clarisse** | Project URL: [https://github.com/diegogarciahuerta/tk-clarisse](https://github.com/diegogarciahuerta/tk-clarisse)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for [Clarisse iFX](https://www.isotropix.com/products), a fully interactive CG toolset for set-dressing, look development, lighting and rendering. |
-|
| **tk-natron** | Project URL: [https://github.com/diegogarciahuerta/tk-natron](https://github.com/diegogarciahuerta/tk-natron)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for [Natron](https://natrongithub.github.io/), a free and open-source node-based software application. |
-|
| **tk-harmony** | Project URL: [https://github.com/diegogarciahuerta/tk-harmony](https://github.com/diegogarciahuerta/tk-harmony)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for [Toon Boom Harmony](https://www.toonboom.com/products/harmony), industry leading production animation software.|
-|
| **tk-cinema** | Project URL: [https://github.com/mikedatsik/tk-cinema](https://github.com/mikedatsik/tk-cinema)
Project Contributor: Mykhailo Datsyk
Project Maintainer: Mykhailo Datsyk
Project Description: A {% include product %} Engine for [Maxon Cinema 4D](https://www.maxon.net/en-us/products/cinema-4d/overview/), a designer-friendly toolset for modeling, animation, and rendering.
More Information: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/shotgun-toolkit-engine-for-maxon-cinema-4d/6437)|
-|
| **tk-krita** | Project URL: [https://github.com/diegogarciahuerta/tk-krita](https://github.com/diegogarciahuerta/tk-krita)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for [Krita](https://krita.org/en/), a free and open-source raster graphics editor designed primarily for digital painting and 2D animation.
More Information: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/krita-shotgun-toolkit-engine-released/8724) |
-|
| **tk-blender** | Project URL: [https://github.com/diegogarciahuerta/tk-blender](https://github.com/diegogarciahuerta/tk-blender)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for [Blender](https://www.blender.org/), a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, motion graphics, interactive 3D applications, virtual reality and computer games.
More Information: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/blender-shotgun-toolkit-engine-released/10773)|
+---
+
+| Integration | Engine | Information |
+| :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+|
| **tk-katana** | Project URL: [https://github.com/robblau/tk-katana](https://github.com/robblau/tk-katana)
Project Contributor: [Lightchaser Animation](https://github.com/LightChaserAnimationStudio)
Project Maintainer:
Project Description: A {% include product %} Engine for Foundry's Katana |
+|
| **tk-unreal** | Project URL: [https://docs.unrealengine.com/en-US/Engine/Content/UsingUnrealEnginewithAutodeskShotgun/index.html](https://docs.unrealengine.com/en-US/Engine/Content/UsingUnrealEnginewithAutodeskShotgun/index.html)
Project Contributor: [Epic Games](http://epicgames.com/)
Project Maintainer:
Project Description: A {% include product %} Engine for [Unreal Engine](https://www.unrealengine.com/en-US/) |
+|
| **tk-substancepainter** | Project URL: [https://github.com/diegogarciahuerta/tk-substancepainter](https://github.com/diegogarciahuerta/tk-substancepainter)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for Adobe's Substance Painter |
+|
| **tk-substancedesigner** | Project URL: [https://github.com/diegogarciahuerta/tk-substancedesigner](https://github.com/diegogarciahuerta/tk-substancedesigner)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for Adobe's Substance Designer
More info: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/substance-designer-shotgun-toolkit-engine-released/9944) |
+|
| **tk-modo** | Project URL: [https://github.com/tremolo/tk-modo](https://github.com/tremolo/tk-modo)
Project Contributor: Lutz Pälike and [Walking The Dog](http://www.walkingthedog.be/)
Project Maintainer:
Project Description: A {% include product %} Engine for Foundry's Modo |
+|
| **tk-clarisse** | Project URL: [https://github.com/diegogarciahuerta/tk-clarisse](https://github.com/diegogarciahuerta/tk-clarisse)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for [Clarisse iFX](https://www.isotropix.com/products), a fully interactive CG toolset for set-dressing, look development, lighting and rendering. |
+|
| **tk-natron** | Project URL: [https://github.com/diegogarciahuerta/tk-natron](https://github.com/diegogarciahuerta/tk-natron)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for [Natron](https://natrongithub.github.io/), a free and open-source node-based software application. |
+|
| **tk-harmony** | Project URL: [https://github.com/diegogarciahuerta/tk-harmony](https://github.com/diegogarciahuerta/tk-harmony)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for [Toon Boom Harmony](https://www.toonboom.com/products/harmony), industry leading production animation software. |
+|
| **tk-cinema** | Project URL: [https://github.com/mikedatsik/tk-cinema](https://github.com/mikedatsik/tk-cinema)
Project Contributor: Mykhailo Datsyk
Project Maintainer: Mykhailo Datsyk
Project Description: A {% include product %} Engine for [Maxon Cinema 4D](https://www.maxon.net/en-us/products/cinema-4d/overview/), a designer-friendly toolset for modeling, animation, and rendering.
More Information: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/shotgun-toolkit-engine-for-maxon-cinema-4d/6437) |
+|
| **tk-krita** | Project URL: [https://github.com/diegogarciahuerta/tk-krita](https://github.com/diegogarciahuerta/tk-krita)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for [Krita](https://krita.org/en/), a free and open-source raster graphics editor designed primarily for digital painting and 2D animation.
More Information: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/krita-shotgun-toolkit-engine-released/8724) |
+|
| **tk-blender** | Project URL: [https://github.com/diegogarciahuerta/tk-blender](https://github.com/diegogarciahuerta/tk-blender)
Project Contributor: [Factor64](https://www.factor64.com/)
Project Maintainer: [Diego Garcia Huerta](https://www.linkedin.com/in/diegogh/)
Project Description: A {% include product %} Engine for [Blender](https://www.blender.org/), a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, motion graphics, interactive 3D applications, virtual reality and computer games.
More Information: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/blender-shotgun-toolkit-engine-released/10773) |
### Apps
-----------
-
-| Integration | Engine | Information |
-|:-----------:|:------:| ----------- |
-|
| **tk-maya-playblast** | Project URL: [https://github.com/basestudio/tk-maya-playblast](https://github.com/basestudio/tk-maya-playblast)
Project Contributor: [BASE Studio](https://github.com/basestudio)
Project Maintainer:
Project Description: App to publish playblasts from Maya. See [https://goo.gl/5oJTv0](https://goo.gl/5oJTv0)|
-|
| **tk-multi-renderfarm** | Project URL: [https://github.com/baitstudio/tk-multi-renderfarm](https://github.com/baitstudio/tk-multi-renderfarm)
Project Contributor: [Bait Studio](http://www.baitstudio.com/)
Project Maintainer:
Project Description: App to submit work to the farm. See [https://goo.gl/ew6mkD](https://goo.gl/ew6mkD) |
-|
| **tk-shotgun-publishrenders** | Project URL: [https://github.com/janimation/tk-shotgun-publishrenders](https://github.com/janimation/tk-shotgun-publishrenders)
Project Contributor:
Project Maintainer: [Dave Sisk](mailto:dave@janimation.com)
Project Description: This app searches the directory structure of a project to find existing published files or file sequences, then registers them in {% include product %} as published files if the published file objects don't already exist. |
-|
| **nuke-getShotgunData** | Project URL: [https://github.com/RicardoMusch/nuke-getShotgunData](https://github.com/RicardoMusch/nuke-getShotgunData)
Project Contributor: [Ricardo Musch](https://www.ricardo-musch.com/)
Project Maintainer: Ricardo Musch
Project Description: Getting {% include product %} data into nuke text nodes can be a bit of a pain. This node can be used to pipe that info into slates, burn-ins, or anywhere else. |
-|
| **sb-shotgun-schema-introspection** | Project URL: [https://github.com/scottb08/sb-shotgun-schema-introspection](https://github.com/scottb08/sb-shotgun-schema-introspection)
Project Contributor: [Scott Ballard](https://www.linkedin.com/in/scottballard/)
Project Maintainer: Scott Ballard
Project Description: This is a simple Toolkit app that allows {% include product %} and Toolkit developers to quickly navigate and inspect the {% include product %} entities, fields and the underlying schema. |
-|
| **foto-multi-namingconvention** | Project URL: [https://github.com/scottb08/foto-multi-namingconvention](https://github.com/scottb08/foto-multi-namingconvention)
Project Contributor: [Griffith Observatory](http://www.griffithobservatory.org/)
Project Maintainer: [Scott Ballard](https://www.linkedin.com/in/scottballard/)
Project Description: This is a simple Toolkit app that allows {% include product %} and Toolkit developers to quickly navigate and inspect the {% include product %} entities, fields and the underlying schema. |
-|
| **tk-cpenv** | Project URL: [https://github.com/cpenv/tk-cpenv](https://github.com/cpenv/tk-cpenv)
Project Contributor: [Dan Bradham](https://github.com/danbradham)
Project Maintainer: [Dan Bradham](https://github.com/danbradham)
Project Description: This app adds support for [cpenv](https://github.com/cpenv/cpenv), a tool that uses modules to manage software plugins, project dependencies and environment variables.
More Information: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/rez-support/7350/7) |
-|
| **rtm-tk-hiero-shotgunDropper** | Project URL: [https://github.com/RicardoMusch/rtm-tk-hiero-shotgunDropper](https://github.com/RicardoMusch/rtm-tk-hiero-shotgunDropper)
Project Contributor: [Ricardo Musch](https://www.ricardo-musch.com/)
Project Maintainer: Ricardo Musch
Project Description: This app allows you to drop Versions and playlists from {% include product %} into Hiero.
More Information: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/release-shotgundropper-for-hiero/4183) |
+---
+
+| Integration | Engine | Information |
+| :----------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+|
| **tk-maya-playblast** | Project URL: [https://github.com/basestudio/tk-maya-playblast](https://github.com/basestudio/tk-maya-playblast)
Project Contributor: [BASE Studio](https://github.com/basestudio)
Project Maintainer:
Project Description: App to publish playblasts from Maya. See [https://goo.gl/5oJTv0](https://goo.gl/5oJTv0) |
+|
| **tk-multi-renderfarm** | Project URL: [https://github.com/baitstudio/tk-multi-renderfarm](https://github.com/baitstudio/tk-multi-renderfarm)
Project Contributor: [Bait Studio](http://www.baitstudio.com/)
Project Maintainer:
Project Description: App to submit work to the farm. See [https://goo.gl/ew6mkD](https://goo.gl/ew6mkD) |
+|
| **tk-shotgun-publishrenders** | Project URL: [https://github.com/janimation/tk-shotgun-publishrenders](https://github.com/janimation/tk-shotgun-publishrenders)
Project Contributor:
Project Maintainer: [Dave Sisk](mailto:dave@janimation.com)
Project Description: This app searches the directory structure of a project to find existing published files or file sequences, then registers them in {% include product %} as published files if the published file objects don't already exist. |
+|
| **nuke-getShotgunData** | Project URL: [https://github.com/RicardoMusch/nuke-getShotgunData](https://github.com/RicardoMusch/nuke-getShotgunData)
Project Contributor: [Ricardo Musch](https://www.ricardo-musch.com/)
Project Maintainer: Ricardo Musch
Project Description: Getting {% include product %} data into nuke text nodes can be a bit of a pain. This node can be used to pipe that info into slates, burn-ins, or anywhere else. |
+|
| **sb-shotgun-schema-introspection** | Project URL: [https://github.com/scottb08/sb-shotgun-schema-introspection](https://github.com/scottb08/sb-shotgun-schema-introspection)
Project Contributor: [Scott Ballard](https://www.linkedin.com/in/scottballard/)
Project Maintainer: Scott Ballard
Project Description: This is a simple Toolkit app that allows {% include product %} and Toolkit developers to quickly navigate and inspect the {% include product %} entities, fields and the underlying schema. |
+|
| **foto-multi-namingconvention** | Project URL: [https://github.com/scottb08/foto-multi-namingconvention](https://github.com/scottb08/foto-multi-namingconvention)
Project Contributor: [Griffith Observatory](http://www.griffithobservatory.org/)
Project Maintainer: [Scott Ballard](https://www.linkedin.com/in/scottballard/)
Project Description: This is a simple Toolkit app that allows {% include product %} and Toolkit developers to quickly navigate and inspect the {% include product %} entities, fields and the underlying schema. |
+|
| **tk-cpenv** | Project URL: [https://github.com/cpenv/tk-cpenv](https://github.com/cpenv/tk-cpenv)
Project Contributor: [Dan Bradham](https://github.com/danbradham)
Project Maintainer: [Dan Bradham](https://github.com/danbradham)
Project Description: This app adds support for [cpenv](https://github.com/cpenv/cpenv), a tool that uses modules to manage software plugins, project dependencies and environment variables.
More Information: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/rez-support/7350/7) |
+|
| **rtm-tk-hiero-shotgunDropper** | Project URL: [https://github.com/RicardoMusch/rtm-tk-hiero-shotgunDropper](https://github.com/RicardoMusch/rtm-tk-hiero-shotgunDropper)
Project Contributor: [Ricardo Musch](https://www.ricardo-musch.com/)
Project Maintainer: Ricardo Musch
Project Description: This app allows you to drop Versions and playlists from {% include product %} into Hiero.
More Information: [{% include product %} Community Forums](https://community.shotgridsoftware.com/t/release-shotgundropper-for-hiero/4183) |
diff --git a/docs/en/guides/pipeline-integrations/administration/config-staging-and-rollout.md b/docs/en/guides/pipeline-integrations/administration/config-staging-and-rollout.md
index 63f6cfe50..f8b5c11c9 100644
--- a/docs/en/guides/pipeline-integrations/administration/config-staging-and-rollout.md
+++ b/docs/en/guides/pipeline-integrations/administration/config-staging-and-rollout.md
@@ -7,9 +7,9 @@ lang: en
# Configuration Staging and Rollout
-This document explains best practices for how to safely roll out changes to your production pipeline. It explains how you can create a staging sandbox, which is a copy of your production configuration, update this sandbox and do testing and then finally push your changes to the production config.
+This document explains best practices for how to safely roll out changes to your production pipeline. It explains how you can create a staging sandbox, which is a copy of your production configuration, update this sandbox and do testing and then finally push your changes to the production config.
-_Please note that this document describes functionality only available if you have taken control over a Toolkit configuration. For the default setup, please see [{% include product %} Integrations Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493)._
+_Please note that this document describes functionality only available if you have taken control over a Toolkit configuration. For the default setup, please see [{% include product %} Integrations Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493)._
# Introduction
@@ -17,11 +17,11 @@ This document outlines how to manage your Toolkit configuration. Toolkit contain
In this document, we'll describe how to:
-- Safely upgrade the Toolkit Core API.
-- Upgrading your Apps and Engines.
-- Various ways to manage your config across multiple projects.
-- Go through Toolkit's _Clone_ and _Push_ functionality that allows you to safely test upgrades and changes without disrupting production.
-- How to work with `git` source control and Toolkit.
+- Safely upgrade the Toolkit Core API.
+- Upgrade your apps and engines.
+- Manage your configuration across multiple projects.
+- Use Toolkit's _Clone_ and _Push_ functionality to safely test upgrades and changes without disrupting production.
+- Work with `git` source control and Toolkit.
# Basics of configuration management
@@ -29,13 +29,13 @@ Each Toolkit Project has one or more configurations associated with it. The conf
If you are working with a large number of projects, this may become cumbersome and we offer several ways to make this process easy, safe and streamlined.
-In {% include product %}, each project has a number of **Pipeline Configurations**. When a project is first set up with Toolkit, a Pipeline Configuration called `primary` is created. The pipeline configuration entity in {% include product %} points at a location on disk where the Toolkit configuration can be found.
+In {% include product %}, each project has a number of **Pipeline Configurations**. When a project is first set up with Toolkit, a Pipeline Configuration called `primary` is created. The pipeline configuration entity in {% include product %} points at a location on disk where the Toolkit configuration can be found.
During the course of a project, you often need to make changes to the configuration. This can be tweaks to the configuration, or perhaps you need to add additional apps or engines. We also release new app versions frequently and we recommend that you use the latest versions if possible.
-While it is possible to upgrade your _primary_ project configuration straight away, this can be risky; since this configuration is used by everybody on the project, introducing a problem will affect everyone. A better approach is to create an isolated version of the configuration that a select group of people have access to. In this safe environment, upgrades, configuration changes and development can happen without impacting the rest of the production. Once the changes have been tested, they can be safely and confidently pushed to the primary configuration.
+While it is possible to upgrade your _primary_ project configuration straight away, this can be risky; since this configuration is used by everybody on the project, introducing a problem will affect everyone. A better approach is to create an isolated version of the configuration that a select group of people have access to. In this safe environment, upgrades, configuration changes and development can happen without impacting the rest of the production. Once the changes have been tested, they can be safely and confidently pushed to the primary configuration.
-This process is called _cloning_ and means that you make a personal copy of the primary configuration that only you (and other people you invite) have access to the clone. In here you can make changes safely and once you are happy you can push these changes back to the primary configuration.
+This process is called _cloning_: you make a personal copy of the primary configuration that only you (and other people you invite) have access to. Here you can make changes safely, and once you are happy you can push them back to the primary configuration.
## Cloning your Configuration
@@ -43,19 +43,19 @@ Once you have set up Toolkit and configured a project, the setup will look somet

-There is a _studio_ install which holds the Core API for all projects. This _studio_ location also contains a `tank` command and a Toolkit Python API you can use to access any of your Toolkit-enabled {% include product %} projects.
+There is a _studio_ install which holds the Core API for all projects. This _studio_ location also contains a `tank` command and a Toolkit Python API you can use to access any of your Toolkit-enabled {% include product %} projects.
-In addition to this, there is a configuration folder for each project. This folder contains all the settings for that project. It also contains a `tank` command (and a Python API) which specifically operates on this configuration. When you are using this `tank` command or API code, you can strictly only operate on this configuration.
+In addition to this, there is a configuration folder for each project. This folder contains all the settings for that project. It also contains a `tank` command (and a Python API) which operates specifically on this configuration. When you use this `tank` command or API code, you operate strictly on this configuration.
-When a new project is set up, a _Primary_ configuration is created. This is the configuration that Toolkit will use by default for the project. In addition to the primary configuration, you can create additional configurations for a project. These can exist in parallel and are useful if you for example want to privately test some modifications, upgrade some apps or do development without impacting the entire team. Additional configurations are created by a process called _cloning_, a process where a configuration is copied to a new location.
+When a new project is set up, a _Primary_ configuration is created. This is the configuration that Toolkit will use by default for the project. In addition to the primary configuration, you can create additional configurations for a project. These can exist in parallel and are useful if, for example, you want to privately test some modifications, upgrade some apps, or do development without impacting the entire team. Additional configurations are created by a process called _cloning_, in which a configuration is copied to a new location.
Once you have cloned your configuration, your setup may look something like this:

-In addition to the studio level `tank` command and your primary project configuration `tank` command, you now have a new pipeline configuration which has its own `tank` command. If you run this `tank` command, you will operate exclusively on the configuration located in the staging sandbox. So if you want to test out some new things in Maya, you can simply navigate to your cloned sandbox, run `./tank Shot xyz launch_maya` and the {% include product %} menu which appears in Maya will reflect the configuration inside of your staging sandbox rather than your Primary configuration.
+In addition to the studio level `tank` command and your primary project configuration `tank` command, you now have a new pipeline configuration which has its own `tank` command. If you run this `tank` command, you will operate exclusively on the configuration located in the staging sandbox. So if you want to test out some new things in Maya, you can simply navigate to your cloned sandbox, run `./tank Shot xyz launch_maya` and the {% include product %} menu which appears in Maya will reflect the configuration inside of your staging sandbox rather than your Primary configuration.
-Note that the studio level `tank` command always uses the Primary config, so the only way to access a cloned configuration is by navigating to its location and using the `tank` command that is located in that folder. In {% include product %}, you can assign a specific set of users to a pipeline configuration entry, and any users that are associated with a configuration will now see menu entries appear in addition to those coming from the Primary config:
+Note that the studio level `tank` command always uses the Primary config, so the only way to access a cloned configuration is by navigating to its location and using the `tank` command that is located in that folder. In {% include product %}, you can assign a specific set of users to a pipeline configuration entry, and any users that are associated with a configuration will now see menu entries appear in addition to those coming from the Primary config:

@@ -73,15 +73,16 @@ When you press ok, Toolkit will copy the configuration across and set up the clo
### Pushing changes from your staging sandbox to Primary
-Once you have applied the relevant updates and run any testing that you deem is necessary, you can push back your changes into the production configuration by executing the `tank push_configuration` command. This will transfer all the changes you have made in your staging sandbox to your Primary configuration.
+Once you have applied the relevant updates and run any testing that you deem is necessary, you can push back your changes into the production configuration by executing the `tank push_configuration` command. This will transfer all the changes you have made in your staging sandbox to your Primary configuration.
Please note that your current configuration is moved to a backup location when you run the `push_configuration` command. If you accidentally push, or if there is a problem with the push, you can roll back simply by copying the content of the backup folder back into the `config` folder.
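As a minimal sketch (the paths and the timestamped backup folder name are hypothetical; substitute the ones printed by `push_configuration`), rolling back a bad push looks like this:

```shell
# Hypothetical paths -- use the backup location reported by push_configuration.
cd /my/staging/sandbox
# Set aside the bad config and restore the backup:
mv config config.broken
mv config.bak.20140108_093218 config
```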
-By default, this command will copy a collection of files into the `config` folder in the target pipeline configuration. If you are using unix and would like a more atomic update, you can add a `--symlink` flag to the `push_configuration` command. This will turn the `config` folder in the target pipeline configuration into a symbolic link which makes it easier to upgrade without running the risk of having configuration mismatches in currently running sessions.
+By default, this command will copy a collection of files into the `config` folder in the target pipeline configuration. If you are on Unix and would like a more atomic update, you can add a `--symlink` flag to the `push_configuration` command. This will turn the `config` folder in the target pipeline configuration into a symbolic link, which makes it easier to upgrade without risking configuration mismatches in currently running sessions.
### Refreshing an old cloned configuration
-If you have a old dev or staging sandbox set up, but it is out of date and you need to sync its contents with the latest production configuration, you do this by running the `push_configuration` command for the primary configuration:
+If you have an old dev or staging sandbox set up, but it is out of date and you need to sync its contents with the latest production configuration, you can do this by running the `push_configuration` command from the primary configuration:
+
```shell
tank push_configuration
@@ -112,7 +113,8 @@ Push Complete!
Your old configuration has been backed up into the following folder:
/my/staging/sandbox/config.bak.20140108_093218
```
-Note how we are pushing from the primary project config to the staging sandbox. We do this by running the _primary_ configuration's `tank` command. If you have multiple sandboxes set up, it is also possible to push data between those.
+
+Note how we are pushing from the primary project config to the staging sandbox. We do this by running the _primary_ configuration's `tank` command. If you have multiple sandboxes set up, it is also possible to push data between those.
### Deleting a cloned configuration
@@ -120,7 +122,7 @@ If you want to delete a cloned configuration, simply delete the entry in {% incl
## Getting latest apps and engines
-Inside your staging sandbox (or in any other config), you can run the `tank updates` command in order to check if there are any app updates available. This command has got rudimentary filters that you can use if you only want to check certain areas of your configuration:
+Inside your staging sandbox (or in any other config), you can run the `tank updates` command to check if there are any app updates available. The command supports rudimentary filters that you can use if you only want to check certain areas of your configuration:
```shell
----------------------------------------------------------------------
@@ -161,13 +163,15 @@ Make sure the loader app is up to date everywhere:
Make sure the loader app is up to date in maya:
> tank updates ALL tk-maya tk-multi-loader
```
+
## Upgrading the Toolkit Core API
This section explains how you can use a clone staging sandbox configuration to safely upgrade the Toolkit Core API. If you haven't got a staging sandbox prepared yet, just follow the instructions in the previous section!
-If your staging sandbox was cloned from a pipeline configuration using a [shared studio Core API](https://support.shotgunsoftware.com/hc/en-us/articles/219040448), you'll want to update your sandbox to use it's own unique Core API code. This is called "localizing" the core and can be done by navigating to your staging sandbox and running `tank localize`. This command will copy the Core API from the studio install, into your sandbox, making it possible to run and test a different version of the Core API later on.
+If your staging sandbox was cloned from a pipeline configuration using a [shared studio Core API](https://support.shotgunsoftware.com/hc/en-us/articles/219040448), you'll want to update your sandbox to use its own unique Core API code. This is called "localizing" the core and can be done by navigating to your staging sandbox and running `tank localize`. This command will copy the Core API from the studio install into your sandbox, making it possible to run and test a different version of the Core API later on.
_Toolkit's default behavior is to localize the core. If you haven't explicitly created a shared studio core previously, it's safe to assume your core is already localized._
+
```shell
cd /my/staging/sandbox
./tank localize
@@ -195,20 +199,23 @@ Localizing Engines: /mnt/software/shotgun/studio/install/engines -> /my/staging/
Localizing Frameworks: /mnt/software/shotgun/studio/install/frameworks -> /my/staging/sandbox/install/frameworks
The Core API was successfully localized.
-Localize complete! This pipeline configuration now has an independent API.
-If you upgrade the API for this configuration (using the 'tank core' command),
+Localize complete! This pipeline configuration now has an independent API.
+If you upgrade the API for this configuration (using the 'tank core' command),
no other configurations or projects will be affected.
```
+
Now we are no longer sharing the Core API with the studio location but are running our own, independent version. We can now go ahead and perform a standard Core API upgrade, again using our local tank command:
+
```shell
cd /my/staging/sandbox
./tank core
```
+
Toolkit will check if there is a new version available and offer to download and install it.
Once you have updated the Core API, make sure to test the installation. Launch some apps, either using the `tank` command in the sandbox or using the special menu entries in {% include product %}. Do a basic run-through of your pipeline and perform the tests you deem necessary.
-Finally, once you are happy, it is time to go ahead and update the studio version of the Core API. Note that in the typical Toolkit setup, the Core API is shared between all projects, so by running the `tank core` command from your studio location `tank` command, you are updating the Core API for all projects.
+Finally, once you are happy, it is time to update the studio version of the Core API. Note that in the typical Toolkit setup the Core API is shared between all projects, so by running `tank core` via the `tank` command in your studio location, you are updating the Core API for all projects.
# Managing the Project Lifecycle
@@ -216,9 +223,9 @@ Each Toolkit project contains an independent configuration which holds all the s
Depending on the needs of your studio, different levels of complexity may be relevant. Toolkit offers three different approaches and we'll explain each one of them in detail:
-- The most straightforward approach is to copy the config from the previous project when you set up a new project. This is good if you are a small studio and don't have a large number of projects.
-- If you have a higher project turnover and if you run more than one project in parallel, the next level of integration that we recommend involves `git` version control. Toolkit has native support for git and once you are up and running with a git-based workflow you have a single configuration for your studio and are tracking all the changes you are making to that configuration over time. Each project can safely pull in configuration changes as and when they need to.
-- If you are running a large-scale facility, it may be worth considering a setup where a single configuration is directly connected to all the currently-active projects in the studio. A single change to this configuration will have an immediate impact on all the projects.
+- The most straightforward approach is to copy the config from the previous project when you set up a new project. This is good if you are a small studio and don't have a large number of projects.
+- If you have a higher project turnover and if you run more than one project in parallel, the next level of integration that we recommend involves `git` version control. Toolkit has native support for git and once you are up and running with a git-based workflow you have a single configuration for your studio and are tracking all the changes you are making to that configuration over time. Each project can safely pull in configuration changes as and when they need to.
+- If you are running a large-scale facility, it may be worth considering a setup where a single configuration is directly connected to all the currently-active projects in the studio. A single change to this configuration will have an immediate impact on all the projects.
In the following sections we'll describe the different approaches in detail.
@@ -230,9 +237,10 @@ When your second project comes around, you don't want to start with the default

-This is a very simple way to gradually evolve the configuration over time. Changes and improvements will flow from project to project in an ad hoc fashion. The first time you run the `setup_project` command, just hit enter when the setup process prompts for the configuration to use. This will download and install the default configuration.
+This is a very simple way to gradually evolve the configuration over time. Changes and improvements will flow from project to project in an ad hoc fashion. The first time you run the `setup_project` command, just hit enter when the setup process prompts for the configuration to use. This will download and install the default configuration.
For your second project, you will be presented with a list of paths to configurations for previous projects. Choose one of these paths and enter that when the setup process prompts for a config. This will copy that configuration to the new project:
+
```
Welcome to the {% include product %} Pipeline Toolkit!
For documentation, see https://support.shotgunsoftware.com
@@ -272,9 +280,10 @@ clone this repository and base the config on its content.
[tk-config-default]: /mnt/software/shotgun/first_project/config
```
+
## A studio configuration in git source control
-Limitations with the first approach include the fact that the projects are not connected to each other. If you have 10 projects and you all need to update them because a critical bug fix has been released, you would have to manually go through each project and run the `tank updates` command.
+One limitation of the first approach is that the projects are not connected to each other. If you have 10 projects and need to update them all because a critical bug fix has been released, you have to go through each project manually and run the `tank updates` command.
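That manual round can be sketched as a simple loop (the `/project_configs/*/` layout is a hypothetical example; each project's own `tank` command operates only on its configuration):

```shell
# Visit each project's pipeline configuration and check for app updates.
# Substitute your own project configuration paths for /project_configs/*/ .
for cfg in /project_configs/*/ ; do
    (cd "$cfg" && ./tank updates)
done
```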
One way to resolve this is to create a master configuration and store it in git source control. Whenever you create a new project, simply type in the path to this git repository in the setup project dialog and Toolkit will clone it for you. Now all the projects are connected to the same "studio master" config. If you have made some good changes to a project configuration, you can commit them and push them to the studio master. Other projects can then easily pull these down. You also retain a history of all your changes via git.
@@ -282,25 +291,30 @@ One way to resolve this is to create a master configuration and store it in git

-The basic idea is that you set up a git repository which holds the git configuration. Whenever you run `tank setup_project`, you specify the git url to this repository (for example `username@someserver.com:/studio_config.git`) and the setup process will clone the repository so that the new project becomes a repository connected to the main studio repository. Once they are connected you can push and pull changes, and work in branches for finer granularity.
+The basic idea is that you set up a git repository which holds your Toolkit configuration. Whenever you run `tank setup_project`, you specify the git URL of this repository (for example `username@someserver.com:/studio_config.git`) and the setup process will clone it, so that the new project becomes a repository connected to the main studio repository. Once they are connected, you can push and pull changes, and work in branches for finer granularity.
### Setting up your studio config repository
Before you do anything else, you need to create a studio config repository. This section shows how to take an existing Toolkit configuration and create a git repository from it.
-First, you need to go to your git server and create a repository. This process may be different depending on your setup. If you are using something like GitHub, you would start a web browser and navigate to github.com. If you have access to the server you may do something like `git init --bare`. In our example, we assume that the git repository you create is called `username@someserver.com:/studio_config.git`.
+First, you need to go to your git server and create a repository. This process differs depending on your setup: if you are using something like GitHub, you would create the repository via its web interface; if you have shell access to the server, you can run something like `git init --bare`. In our example, we assume that the git repository you create is called `username@someserver.com:/studio_config.git`.
+
+Now move the `config` folder of the project you want to use to seed your repo with into a `config.bak` location:
-Now move the `config` folder of the project you want to use to seed your repo with into a `config.bak` location:
```shell
cd /project_configs/studio_config
mv config config.bak
```
-Clone your initialized git repository into the `config` location of your project that you want to base the studio config on. Once you have run the clone command, you will have an empty `config folder` which is also a git repository:
+
+Clone your initialized git repository into the `config` location of the project that you want to base the studio config on. Once you have run the clone command, you will have an empty `config` folder which is also a git repository:
+
```shell
cd /project_configs/studio_config
git clone username@someserver.com:/studio_config.git config
```
-Copy all the files from your `config.bak` location back into the `config` folder. Once done, you can delete the empty `config.bak` folder. Your config files are now inside the git repository and we need to add them, commit them and push them to the server. But before doing that, we need to do some house keeping to handle some Toolkit system files correctly. In the `config` folder, create a `.gitignore` file and add the following lines to it:
+
+Copy all the files from your `config.bak` location back into the `config` folder. Once done, you can delete the empty `config.bak` folder. Your config files are now inside the git repository, and we need to add, commit, and push them to the server. But before doing that, we need to do some housekeeping to handle some Toolkit system files correctly. In the `config` folder, create a `.gitignore` file and add the following lines to it:
+
```shell
install_location.yml
pipeline_configuration.yml
@@ -313,6 +327,7 @@ git add --all
git commit -am "initial commit of our studio config!"
git push
```
+
### Creating a new project from git
When you create a new project, simply specify a valid git url when the setup process prompts you to enter the path to the configuration to use. Following our example above, we would enter `username@someserver.com:/studio_config.git`. As part of the project setup process, Toolkit will clone this repository into the `config` folder of your new project configuration. This means that you can later on go into this config folder and run git commands. Note that any cloned pipeline configurations will also clone the git repository and will work seamlessly.
@@ -323,7 +338,7 @@ Whenever you have made changes to your primary config, you can simply go to your
### Updating a project to have the latest version
-Alternatively, if you have updated your studio level config with some changes and you want to pull those down to your project, just go to your `config` folder and run a `git pull`. **Important**: Note that once you have done this, make sure you run a `tank cache_apps` to ensure that all the app versions that your changed config requires are present in the system!
+Alternatively, if you have updated your studio-level config with some changes and you want to pull those down to your project, just go to your `config` folder and run a `git pull`. **Important**: once you have done this, make sure you run `tank cache_apps` to ensure that all the app versions your changed config requires are present in the system!
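As a sketch of that update sequence (the project path is a hypothetical example; substitute your own):

```shell
# Pull the latest studio config changes into this project's config folder:
cd /project_configs/my_project/config
git pull
# Then re-cache so any newly referenced app versions are downloaded:
cd ..
./tank cache_apps
```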
### Advanced git usage: Branches
@@ -337,11 +352,12 @@ Since Toolkit keeps a list of all the different configurations for a {% include
The git based approach above handles independent project configurations which are connected via git: Updates are not automatically reflected across projects but will have to be pulled and pushed.
-For a fully centralized configuration, where the configuration truly resides in one place and where a single change immediately reflects a group of projects, you will need to make use of the `@include` functionality in the Toolkit configuration. This makes it possible to create references so that each project configuration points at a central location where the actual configuration is being kept.
+For a fully centralized configuration, where the configuration truly resides in one place and where a single change immediately reflects a group of projects, you will need to make use of the `@include` functionality in the Toolkit configuration. This makes it possible to create references so that each project configuration points at a central location where the actual configuration is being kept.

-The `@include` syntax allows you to chain together multiple files. For example, if you have a file `/tmp/stuff.yml`, which contains the following content:
+The `@include` syntax allows you to chain together multiple files. For example, if you have a file `/tmp/stuff.yml`, which contains the following content:
+
```
# paths to maya
maya_windows: 'C:\Program Files\Autodesk\Maya2012\bin\maya.exe'
@@ -369,7 +385,9 @@ file_manager:
template_work: null
template_work_area: null
```
+
As you can see above, you can create include definitions at several different levels - in the case above, we have an app definition and three string values. These can then be referenced from an environment file:
+
```
includes: ['/tmp/stuff.yml']
@@ -378,13 +396,13 @@ engines:
tk-maya:
# First all our app definitions
- apps:
+ apps:
# normally, we would have the entire set of configuration parameters at this point.
# because we are using an include, we can reference an entire sub-section of configuration
- # using the @ keyword:
+ # using the @ keyword:
tk-multi-workfiles: '@file_manager'
- # alternatively, for simple values, we can use them as parameter values for apps:
+ # alternatively, for simple values, we can use them as parameter values for apps:
tk-maya-launcher:
mac_path: '@maya_mac'
linux_path: '@maya_linux'
@@ -398,9 +416,11 @@ engines:
template_project: null
use_sgtk_as_menu_name: false
```
+
Furthermore, you can read in several include files, one after the other. If the same include definition exists in two different files, the most recently read file will take precedence. We could extend our example environment above:
+
```
-includes:
+includes:
# first include a global config file which contains all the studio level app definitions
- '/studio/configurations/studio_apps.yml'
@@ -412,20 +432,22 @@ includes:
engines:
- tk-maya:
- apps:
+ tk-maya:
+ apps:
tk-multi-workfiles: '@file_manager'
location: {name: tk-maya, type: app_store, version: v0.4.1}
use_sgtk_as_menu_name: false
```
+
With the approach just shown, it is possible to have a set of studio defaults which can be overridden by project type defaults which in turn can be overridden by specific project settings. You can either do it on the app level, as shown in the example above, or on the engine level, as shown in the next section.
### Best practices when setting up a global config
There are several ways to set up a global configuration. Our recommended best-practices approach breaks the configuration down on a per-engine basis. Each environment file then contains nothing but references to engines (and apps) defined in separate files. This makes it easy to tweak and reconfigure things, one engine at a time.
-Each of these include files are in a standard form, named after the engine. For example, if you have a Maya engine, an include file would contain just the engine and its apps. Its top level entry would simply be named `maya`:
+Each of these include files follows a standard form, named after the engine. For example, if you have a Maya engine, an include file would contain just the engine and its apps. Its top level entry would simply be named `maya`:
+
```yaml
maya:
apps:
@@ -468,31 +490,34 @@ frameworks:
tk-framework-shotgunutils_v1.x.x:
location: {name: tk-framework-shotgunutils, type: app_store, version: v1.0.8}
```
-In your studio, you most likely don't have a single maya setup, but may have a number of different ones for different departments and types of things. We recommend that you set up a maya include file for each of these ones, organized in a file hierarchy. Each of these files has a top level `maya` entry just like the file above.
+
+In your studio, you most likely don't have a single maya setup, but a number of different ones for different departments and use cases. We recommend that you set up a maya include file for each of these, organized in a file hierarchy. Each of these files has a top level `maya` entry just like the file above.

Now each project will contain a number of environments. Each of these environment files will be a list of engine includes, linking that specific environment to a collection of engine and app setups. You can set up one (or several) default project configurations for your studio, all containing includes like this, thereby ensuring that the actual app and engine payload is completely contained within the include files and therefore global. If you make a change to your global include files, all projects will be affected. With this setup, your environment files will then be of the following form:
+
```yaml
-includes:
- - '/studio/configurations/maya/asset.yml'
- - '/studio/configurations/nuke/asset.yml'
- - '/studio/configurations/houdini/generic.yml'
+includes:
+ - "/studio/configurations/maya/asset.yml"
+ - "/studio/configurations/nuke/asset.yml"
+ - "/studio/configurations/houdini/generic.yml"
engines:
- tk-maya: '@maya'
- tk-nuke: '@nuke'
- tk-houdini: '@houdini'
+ tk-maya: "@maya"
+ tk-nuke: "@nuke"
+ tk-houdini: "@houdini"
# we don't need any frameworks here because there are no apps or engines defined
frameworks: null
```
-If you wanted to break out of the above setup and start defining some project specific entries, you would simply replace `@maya` with a series of app and engine definitions in the environment file itself.
+
+If you wanted to break out of the above setup and start defining some project specific entries, you would simply replace `@maya` with a series of app and engine definitions in the environment file itself.
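
For instance, a project that needs its own Maya setup but keeps the other engines global might use an environment file like the following. This is a hypothetical sketch: the app list and version numbers are illustrative only, reusing the paths and locations shown earlier in this document.

```yaml
includes:
  - "/studio/configurations/nuke/asset.yml"
  - "/studio/configurations/houdini/generic.yml"

engines:
  # tk-maya is now defined inline instead of via the global '@maya' include,
  # so this project can diverge from the studio-wide Maya setup
  tk-maya:
    apps:
      tk-multi-workfiles:
        location: {name: tk-multi-workfiles, type: app_store, version: v0.4.1}
    location: {name: tk-maya, type: app_store, version: v0.4.1}
  tk-nuke: "@nuke"
  tk-houdini: "@houdini"

frameworks: null
```
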
### Managing a global configuration
-Managing a global configuration is more involved than managing a normal one. Because you have effectively combined a number of projects into a single file structure, running the `tank updates` command from any project and choosing to update an app version will affect all other projects, so proceed with some caution here.
+Managing a global configuration is more involved than managing a normal one. Because you have effectively combined a number of projects into a single file structure, running the `tank updates` command from any project and choosing to update an app version will affect all other projects, so proceed with some caution here.
Furthermore, the standard clone workflow won't work out of the box, since what you are cloning is the project configuration, which now only contains includes.
-For safe testing and maintenance, we recommend storing the global configuration in source control (e.g. `git`) and do updates in a separate test area with a special test project. Once the quality control passes, commit the changes and pull them down into the actual global configuration.
+For safe testing and maintenance, we recommend storing the global configuration in source control (e.g. `git`) and performing updates in a separate test area with a special test project. Once quality control passes, commit the changes and pull them down into the actual global configuration.
diff --git a/docs/en/guides/pipeline-integrations/administration/file-system-config-reference.md b/docs/en/guides/pipeline-integrations/administration/file-system-config-reference.md
index e6de96d8d..d8c5c753d 100644
--- a/docs/en/guides/pipeline-integrations/administration/file-system-config-reference.md
+++ b/docs/en/guides/pipeline-integrations/administration/file-system-config-reference.md
@@ -8,43 +8,40 @@ lang: en
# File System Configuration Reference
This document is a complete reference of the file system centric configurations in the {% include product %} Pipeline Toolkit. It outlines how the template system works and which options are available. It also shows all the different parameters you can include in the folder creation configuration.
-_Please note that this document describes functionality only available if you have taken control over a Toolkit configuration. For details, see [{% include product %} Integrations Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493)._
+_Please note that this document describes functionality only available if you have taken control over a Toolkit configuration. For details, see [{% include product %} Integrations Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493)._
# Introduction
This document explains how to configure the part of Toolkit's configuration related to your file system, including examples. Toolkit handles a lot of files and directories, and you can leverage Toolkit's configuration as a way of expressing how paths are put together and what they mean. The file system is typically accessed in two different and completely separate ways:
-1. **Folder Creation:** After an object has been created in {% include product %}, folders on disk need to be created before work can begin. This can be as simple as having a folder on disk representing the Shot, or can be more complex-for example setting up a user specific work sandbox so that each user that works on the shot will work in a separate area on disk.
-
- - Toolkit automates folder creation when you launch an application (for example you launch Maya for shot BECH_0010), Toolkit ensures that folders exist prior to launching Maya. If folders do not exist, they are created on the fly. Folders can also be created using API methods, using the [tank command in the shell](https://support.shotgunsoftware.com/hc/en-us/articles/219033178-Administering-Toolkit#Useful%20tank%20commands) and via the [Create Folders menu in ShotGrid](https://support.shotgunsoftware.com/hc/en-us/articles/219040688-Beyond-your-first-project#Shotgun%20Integration). A special set of configuration files drives this folder creation process and this is outlined in [Part 1](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference#Part%201%20-%20Folder%20Creation%20Syntax) of the document below.
-2. **Opening and Saving Work:** While working, files need to be opened from and saved into standardized locations on disk. These file locations typically exist within the folder structure created prior to work beginning.
-
- - Once a folder structure has been established, we can use that structure to identify key locations on disk. These locations are called [Templates](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference#Part%202%20-%20Configuring%20File%20System%20Templates). For example, you can define a template called `maya_shot_publish` to refer to published Maya files for Shots. [Toolkit apps](https://support.shotgunsoftware.com/hc/en-us/articles/219039798) will then use this template-a publish app may use it to control where it should be writing its files, while a [Workfiles App](https://support.shotgunsoftware.com/hc/en-us/articles/219033088-Your-Work-Files) may use the template to understand where to open files from. Inside Toolkit's environment configuration, you can control which templates each app uses. All the key file locations used by Toolkit are therefore defined in a single template file and are easy to overview.
+1. **Folder Creation:** After an object has been created in {% include product %}, folders on disk need to be created before work can begin. This can be as simple as having a folder on disk representing the Shot, or more complex, for example setting up a user-specific work sandbox so that each user working on the shot gets a separate area on disk.
+
+ - Toolkit automates folder creation when you launch an application: for example, when you launch Maya for shot BECH_0010, Toolkit ensures that the folders exist prior to launching Maya. If folders do not exist, they are created on the fly. Folders can also be created using API methods, using the [tank command in the shell](https://support.shotgunsoftware.com/hc/en-us/articles/219033178-Administering-Toolkit#Useful%20tank%20commands) and via the [Create Folders menu in ShotGrid](https://support.shotgunsoftware.com/hc/en-us/articles/219040688-Beyond-your-first-project#Shotgun%20Integration). A special set of configuration files drives this folder creation process, and this is outlined in [Part 1](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference#Part%201%20-%20Folder%20Creation%20Syntax) of the document below.
+
+2. **Opening and Saving Work:** While working, files need to be opened from and saved into standardized locations on disk. These file locations typically exist within the folder structure created prior to work beginning.
+
+ - Once a folder structure has been established, we can use that structure to identify key locations on disk. These locations are called [Templates](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference#Part%202%20-%20Configuring%20File%20System%20Templates). For example, you can define a template called `maya_shot_publish` to refer to published Maya files for Shots. [Toolkit apps](https://support.shotgunsoftware.com/hc/en-us/articles/219039798) will then use this template: a publish app may use it to control where it should be writing its files, while a [Workfiles App](https://support.shotgunsoftware.com/hc/en-us/articles/219033088-Your-Work-Files) may use the template to understand where to open files from. Inside Toolkit's environment configuration, you can control which templates each app uses. All the key file locations used by Toolkit are therefore defined in a single template file and are easy to overview.
# Part 1 - Folder Creation Syntax
-The folder configuration maps entities in {% include product %} to locations on disk. Rather than using a single configuration file, the configuration is in the form of a "mini file system" which acts as a template for each unit that is configured-this is called the **schema configuration**. Folders and files will be copied across from this "mini file system" to their target location when Toolkit's folder creation executes. It is possible to create dynamic behavior. For example, a folder can represent a Shot in {% include product %}, and you can control the naming of that folder. More specifically, you can pull the name of that folder from several {% include product %} fields and then perform character conversions before the folder is created.
+The folder configuration maps entities in {% include product %} to locations on disk. Rather than using a single configuration file, the configuration is in the form of a "mini file system" which acts as a template for each unit that is configured; this is called the **schema configuration**. Folders and files will be copied across from this "mini file system" to their target location when Toolkit's folder creation executes. It is possible to create dynamic behavior. For example, a folder can represent a Shot in {% include product %}, and you can control the naming of that folder. More specifically, you can pull the name of that folder from several {% include product %} fields and then perform character conversions before the folder is created.

-The above image shows a schema configuration. When you run the Toolkit folder creation, a connection is established between an entity in {% include product %} and a folder on disk. Toolkit uses this folder schema configuration to generate a series of folders on disk and each of these folders are registered as a [`Filesystem Location`](https://developer.shotgridsoftware.com/cbbf99a4/) entity in {% include product %}. One way to think about this is that {% include product %} data (e.g., Shot and Asset names) and the configuration is "baked" out into actual folders on disk and in {% include product %}. Configurations always start with a folder named "project". This will always represent the connected project in {% include product %} and will be replaced with the Toolkit name for the project. Below this level are static folders. The folder creator will automatically create the **sequences**folder, for example.
+The above image shows a schema configuration. When you run the Toolkit folder creation, a connection is established between an entity in {% include product %} and a folder on disk. Toolkit uses this folder schema configuration to generate a series of folders on disk, and each of these folders is registered as a [`Filesystem Location`](https://developer.shotgridsoftware.com/cbbf99a4/) entity in {% include product %}. One way to think about this is that {% include product %} data (e.g., Shot and Asset names) and the configuration are "baked" out into actual folders on disk and in {% include product %}. Configurations always start with a folder named "project". This will always represent the connected project in {% include product %} and will be replaced with the Toolkit name for the project. Below this level are static folders. The folder creator will automatically create the **sequences** folder, for example.
-Digging further inside the sequences folder, there is a **sequence** folder and a **sequence.yml** file. Whenever Toolkit detects a YAML file with the same name as a folder, it will read the contents of the YAML file and add the desired dynamic behavior. In this case, the **sequence.yml** file contains the structure underneath the project folder, which consists of three types of items:
+Digging further inside the sequences folder, there is a **sequence** folder and a **sequence.yml** file. Whenever Toolkit detects a YAML file with the same name as a folder, it will read the contents of the YAML file and add the desired dynamic behavior. In this case, the **sequence.yml** file contains the structure underneath the project folder, which consists of three types of items:
-1. **Normal folders and files:** these are simply copied across to the target location.
-2. **A folder with a YAML file** (having the same name as the folder): this represents dynamic content. For example, there may be a **shot** and **shot.yml** and when folders are created, this **shot** folder is the template used to generate a number of folders-one folder per shot.
-3. **A file named name.symlink.yml** which will generate a symbolic link as folders are being processed. [Symbolic links are covered later in this document](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Reference#Symbolic%20Links).
+1. **Normal folders and files:** these are simply copied across to the target location.
+2. **A folder with a YAML file** (having the same name as the folder): this represents dynamic content. For example, there may be a **shot** folder and a **shot.yml** file; when folders are created, this **shot** folder is the template used to generate a number of folders, one folder per shot.
+3. **A file named name.symlink.yml** which will generate a symbolic link as folders are being processed. [Symbolic links are covered later in this document](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Reference#Symbolic%20Links).
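
To make the description above concrete, a minimal schema configuration might be laid out as follows. This is an illustrative sketch only, showing just the pieces mentioned above:

```
project/                  # always present; represents the connected project
└── sequences/            # a static folder, copied across as-is
    ├── sequence/         # dynamic: one folder is created per Sequence
    └── sequence.yml      # adds dynamic behavior to the 'sequence' folder
```
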
The dynamic configuration setup expressed in the YAML files currently supports the following modes:
-- **[{% include product %} Query folders:](#shotgun-query-folders)** Dynamic folder names based on a {% include product %} Database Query. For example, this mode can be used to create a folder for every Shot in a project.
-
-- **[{% include product %} List Field folders:](#shotgun-list-field-folders)** Dynamic folder names based on a {% include product %} List Field. For example, this mode can be used to create a folder for every value in the {% include product %} List field "Asset Type", found on the Asset Entity in {% include product %}.
-
-- **[Deferred folders:](#workspaces-and-deferred-folder-creation)** Only executed when a second folder creation pass is requested via the create folders method of the Toolkit API, usually when an application (such as Maya) is launched. Typically, this method is executed by Toolkit's various application launchers just prior to starting up an application.
-
-- **[Current User Folders:](#current-user-folder)** A special folder, which represents the current user.
-
+- **[{% include product %} Query folders:](#shotgun-query-folders)** Dynamic folder names based on a {% include product %} Database Query. For example, this mode can be used to create a folder for every Shot in a project.
+- **[{% include product %} List Field folders:](#shotgun-list-field-folders)** Dynamic folder names based on a {% include product %} List Field. For example, this mode can be used to create a folder for every value in the {% include product %} List field "Asset Type", found on the Asset Entity in {% include product %}.
+- **[Deferred folders:](#workspaces-and-deferred-folder-creation)** Only executed when a second folder creation pass is requested via the create folders method of the Toolkit API, usually when an application (such as Maya) is launched. Typically, this method is executed by Toolkit's various application launchers just prior to starting up an application.
+- **[Current User Folders:](#current-user-folder)** A special folder, which represents the current user.
Let's dive deeper into these modes.
@@ -55,15 +52,15 @@ For a dynamic folder which corresponds to a {% include product %} query, use the
# the type of dynamic content
type: shotgun_entity
-
+
# the {% include product %} entity type to connect to
entity_type: Asset
-
+
# the {% include product %} field to use for the folder name
name: code
-
+
# {% include product %} filters to apply when getting the list of items
@@ -74,17 +71,17 @@ For a dynamic folder which corresponds to a {% include product %} query, use the
# (this is std {% include product %} API syntax)
# any values starting with $ are resolved into path objects
- filters: [ { "path": "project", "relation": "is", "values": [ "$project" ] } ]
-
-- Set the value of dynamic content **type** field to be **shotgun_entity**.
-- The **entity_type** field should be set to the {% include product %} entity from which we want to pull data from (e.g., "Asset", "Shot", "Sequence", "CustomEntity02", etc).
-- The **name** field is the name that should be given to each folder based on the data in {% include product %}.
-
- - You can use a single field, like in the example above (e.g., `name: code`).
- - You can use multiple fields in brackets (e.g., `name:` `"{asset_type}_{code}"`).
- - If you want to include fields from other linked entities, you can use the standard {% include product %} dot syntax (e.g., `name: "{sg_sequence.Sequence.code}_{code}"`).
-- The **filters** field is a {% include product %} Query. It follows the [{% include product %} API syntax](http://developer.shotgridsoftware.com/python-api/reference.html) relatively closely. It is a list of dictionaries, and each dictionary needs to have the keys _path_, _relation_, and _values_. Valid values for $syntax are any ancestor folder that has a corresponding {% include product %} entity (e.g., `"$project"` for the Project and `"$sequence"` if you have a sequence.yml higher up the directory hierarchy). For {% include product %} entity links, you can use the $syntax (e.g., `{ "path": "project", "relation": "is", "values": [ "$project" ] }`) to refer to a parent folder in the configuration-this is explained more in depth in the [examples below](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference#Examples).
-
+ filters: [ { "path": "project", "relation": "is", "values": [ "$project" ] } ]
+
+- Set the value of the dynamic content **type** field to **shotgun_entity**.
+- The **entity_type** field should be set to the {% include product %} entity from which we want to pull data (e.g., "Asset", "Shot", "Sequence", "CustomEntity02", etc.).
+- The **name** field defines the name that should be given to each folder, based on the data in {% include product %}.
+
+ - You can use a single field, like in the example above (e.g., `name: code`).
+ - You can use multiple fields in curly braces (e.g., `name: "{asset_type}_{code}"`).
+ - If you want to include fields from other linked entities, you can use the standard {% include product %} dot syntax (e.g., `name: "{sg_sequence.Sequence.code}_{code}"`).
+
+- The **filters** field is a {% include product %} Query. It follows the [{% include product %} API syntax](http://developer.shotgridsoftware.com/python-api/reference.html) relatively closely. It is a list of dictionaries, and each dictionary needs to have the keys _path_, _relation_, and _values_. Valid values for $syntax are any ancestor folder that has a corresponding {% include product %} entity (e.g., `"$project"` for the Project and `"$sequence"` if you have a sequence.yml higher up the directory hierarchy). For {% include product %} entity links, you can use the $syntax (e.g., `{ "path": "project", "relation": "is", "values": [ "$project" ] }`) to refer to a parent folder in the configuration-this is explained more in depth in the [examples below](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference#Examples).
## Multiple folders
@@ -93,15 +90,15 @@ Include a slash in your name definition in order to create an expression which c
# the type of dynamic content
type: shotgun_entity
-
+
# the {% include product %} entity type to connect to
entity_type: Asset
-
+
# the {% include product %} field to use for the folder name
name: "{sg_asset_type}/{code}"
-
+
# {% include product %} filters to apply when getting the list of items
@@ -112,13 +109,13 @@ Include a slash in your name definition in order to create an expression which c
# (this is std {% include product %} API syntax)
# any values starting with $ are resolved into path objects
- filters: [ { "path": "project", "relation": "is", "values": [ "$project" ] } ]
+ filters: [ { "path": "project", "relation": "is", "values": [ "$project" ] } ]
-When creating a file system template (see for a Part 2 of this document for details) for this kind of path, the _last_folder will represent the {% include product %} entity. The example above could for example be expressed with the following template
+When creating a file system template (see Part 2 of this document for details) for this kind of path, the _last_ folder will represent the {% include product %} entity. The example above could, for example, be expressed with the following template:
asset_step_folder: assets/{asset_type}/{Asset}/{Step}
-`{asset_type}` and `{Asset}` are both defined as string template keys and the `{Asset}` token will be used in context calculations when determining the context for a given path.
+`{asset_type}` and `{Asset}` are both defined as string template keys and the `{Asset}` token will be used in context calculations when determining the context for a given path.
## Create With Parent Folder
@@ -132,23 +129,23 @@ A shotgun_entity type folder supports an optional flag to control whether the fo
# recurse down from parent folder
- create_with_parent: true
+ create_with_parent: true
-As mentioned, this setting is optional and set to false by default. If you set it to true, Toolkit create folders for any child entity it finds. To continue with our example, if you want Shots to be created whenever their parent Sequence is created, set `create_with_parent` to `true` for the Shot.
+As mentioned, this setting is optional and set to false by default. If you set it to true, Toolkit creates folders for any child entity it finds. To continue with our example, if you want Shots to be created whenever their parent Sequence is created, set `create_with_parent` to `true` for the Shot.
{% include info title="Note" content="The default setting is `false`, meaning that if you create folders for a Sequence, shot folders will not be created automatically. To enable the behavior, you need to add the flag explicitly; a shotgun_entity folder will not contain a flag specifying false, since false is the default behavior." %}
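
Putting the pieces together, a hypothetical `shot.yml` that opts into this behavior might look like the following. This is a sketch only, combining the fields described earlier in this document; adapt the field names to your own schema.

```yaml
# shot.yml - illustrative sketch combining the fields described above
type: shotgun_entity
entity_type: Shot
name: code

# only pick up shots linked to the parent sequence folder
filters: [ { "path": "sg_sequence", "relation": "is", "values": [ "$sequence" ] } ]

# create shot folders automatically whenever the parent sequence is created
create_with_parent: true
```
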
## Optional fields
-Typically, when you define the folder name (e.g., `{code}_{sg_extra_field}`), Toolkit requires all fields to have values in {% include product %}. For example, if the `sg_extra_field` is blank, an error message will be generated. If you have a field that is sometimes populated and sometimes not, you can mark it as optional. This means that Toolkit will include the field if it has a value, and exclude it if the value is blank-without error.
+Typically, when you define the folder name (e.g., `{code}_{sg_extra_field}`), Toolkit requires all fields to have values in {% include product %}. For example, if the `sg_extra_field` is blank, an error message will be generated. If you have a field that is sometimes populated and sometimes not, you can mark it as optional. This means that Toolkit will include the field if it has a value, and exclude it if the value is blank, without error.
-You define optional fields using square brackets, like: `{code}[_{sg_extra_field}]`. This will generate the following folder names:
+You define optional fields using square brackets, like: `{code}[_{sg_extra_field}]`. This will generate the following folder names:
-- If the `code` is BECH_0010 and the `sg_extra_field` is extra, the folder name will be `BECH_0010_extra`.
+- If the `code` is BECH_0010 and the `sg_extra_field` is extra, the folder name will be `BECH_0010_extra`.

-- If the `code` is BECH_0010 and the `sg_extra_field` is blank, the folder name will be `BECH_0010`.
+- If the `code` is BECH_0010 and the `sg_extra_field` is blank, the folder name will be `BECH_0010`.
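
In a folder configuration, the optional token simply goes into the `name` definition. A short sketch, reusing the fields from the examples above:

```yaml
# sketch: optional token in a shot folder name
type: shotgun_entity
entity_type: Shot

# sg_extra_field is only included when it has a value in {% include product %}
name: "{code}[_{sg_extra_field}]"

filters: [ { "path": "project", "relation": "is", "values": [ "$project" ] } ]
```
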

@@ -156,20 +153,20 @@ You define optional fields using square brackets, like: `{code}[_{sg_extra_fiel
## Regular expression token matching
-Toolkit supports the extraction of parts of a {% include product %} field name using regular expressions. This makes it possible to create simple expressions where a value in {% include product %} can drive the folder creation. For example, if all assets in {% include product %} are named with a three letter prefix followed by an underscore (e.g `AAT_Boulder7`), this can split into two filesystem folder levels, e.g. `AAT/Boulder7`:
+Toolkit supports the extraction of parts of a {% include product %} field name using regular expressions. This makes it possible to create simple expressions where a value in {% include product %} can drive the folder creation. For example, if all assets in {% include product %} are named with a three letter prefix followed by an underscore (e.g. `AAT_Boulder7`), this can be split into two file system folder levels, e.g. `AAT/Boulder7`:
# the type of dynamic content
type: shotgun_entity
-
+
# the {% include product %} entity type to connect to
entity_type: Asset
-
+
# Extract parts of the name using regular expressions
    name: "{code:^([^_]+)}/{code:^[^_]+_(.*)}"
-
+
# {% include product %} filters to apply when getting the list of items
@@ -182,34 +179,34 @@ Toolkit supports the extraction of parts of a {% include product %} field name u
# any values starting with $ are resolved into path objects
filters: [ { "path": "project", "relation": "is", "values": [ "$project" ] } ]
-The syntax is similar to the `subset` tokens in the Template system; Simply add a colon after the {% include product %} field name, then followed by a regular expression. Any groups (e.g. sections surrounded by `()`s) defined in the regular expression will be used to extract values. If there are multiple groups in the regex, these will be concatenated together. For example, the following expression would extract the intials for the user who created an object: `{created_by.HumanUser.code:^([A-Z])[a-z]* ([A-Z])[a-z]*}`
+The syntax is similar to the `subset` tokens in the Template system: simply add a colon after the {% include product %} field name, followed by a regular expression. Any groups (i.e. sections surrounded by `()`s) defined in the regular expression will be used to extract values. If there are multiple groups in the regex, they will be concatenated together. For example, the following expression would extract the initials of the user who created an object: `{created_by.HumanUser.code:^([A-Z])[a-z]* ([A-Z])[a-z]*}`
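
The group-extraction and concatenation behavior can be illustrated with a few lines of plain Python. This is a sketch of the concept only, not Toolkit's actual implementation; the helper name `concat_groups` is hypothetical.

```python
import re

def concat_groups(value, pattern):
    """Apply a regular expression to a field value and concatenate all
    captured groups, mirroring the behavior described above (sketch only)."""
    match = re.match(pattern, value)
    if match is None:
        raise ValueError("%r does not match %r" % (value, pattern))
    return "".join(match.groups())

# The asset-prefix example: split 'AAT_Boulder7' into its two parts
print(concat_groups("AAT_Boulder7", r"^([^_]+)"))      # AAT
print(concat_groups("AAT_Boulder7", r"^[^_]+_(.*)"))   # Boulder7

# The initials example from the text: two groups concatenated together
print(concat_groups("Jane Smith", r"^([A-Z])[a-z]* ([A-Z])[a-z]*"))  # JS
```
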
## Examples
Below are a collection of examples showing how to use the filters syntax.
-To **find all shots which belong to the current project and are in progress**, use the syntax below. Note that the {% include product %} Shot entity has a link field called project which connects a shot to a project. We want to make sure that we only create folders for the shots that are associated with the current project. Since there is a project level higher up in the configuration file system, we can refer to this via the $syntax and Toolkit will automatically create to this {% include product %} entity link reference. Remember, valid values for $syntax are any ancestor folder that has a corresponding {% include product %} entity (e.g., `"$project"` for the Project and `"$sequence"` if you have a sequence.yml higher up the directory hierarchy).
+To **find all shots which belong to the current project and are in progress**, use the syntax below. Note that the {% include product %} Shot entity has a link field called project which connects a shot to a project. We want to make sure that we only create folders for the shots that are associated with the current project. Since there is a project level higher up in the configuration file system, we can refer to this via the $syntax and Toolkit will automatically resolve it to the corresponding {% include product %} entity link reference. Remember, valid values for $syntax are any ancestor folder that has a corresponding {% include product %} entity (e.g., `"$project"` for the Project and `"$sequence"` if you have a sequence.yml higher up the directory hierarchy).
```yaml
entity_type: Shot
filters:
  - { "path": "project", "relation": "is", "values": [ "$project" ] }
  - { "path": "status", "relation": "is", "values": [ "ip" ] }
```
If you have a Sequence folder higher up the tree and want to **create folders for all Shots which belong to that Sequence**, you can create the following filters:
```yaml
entity_type: Shot
filters:
  - { "path": "project", "relation": "is", "values": [ "$project" ] }
  - { "path": "sg_sequence", "relation": "is", "values": [ "$sequence" ] }
```
To **find all assets**, use this syntax:
```yaml
entity_type: Asset
filters: [ { "path": "project", "relation": "is", "values": [ "$project" ] } ]
```
## {% include product %} List Field Folders
[{% include product %} list field](https://support.shotgunsoftware.com/hc/en-us/articles/219031008) folders are useful if you want to create one folder for every asset type in {% include product %}, for instance. Asset types are list fields in {% include product %}, and this folder config type makes it possible to define a layer in the file system that reflects those asset type listings.

When you want a dynamic folder which corresponds to all the items in a {% include product %} list field, use the following syntax:

```yaml
# the type of dynamic content
type: "shotgun_list_field"

# the {% include product %} entity type to connect to
entity_type: "Asset"

# only create for values which are used in this project.
# this is optional and will be set to false if not specified.
skip_unused: false

# by default, list fields are only created if they are needed by a child
# entity node; set this to true to ensure that these nodes are always created
create_with_parent: false

# the {% include product %} field to use for the folder name
field_name: "{sg_asset_type}_type"
```
- Set the value of the dynamic content **type** field to `shotgun_list_field`.
- The `entity_type` field should be set to the {% include product %} entity from which we want to pull data (for instance, "Asset", "Sequence", "Shot", etc.).
- The `field_name` field should be set to the {% include product %} field from which the data is pulled and must be a [list type field](https://support.shotgunsoftware.com/hc/en-us/articles/219031008). You can use expressions if you want to add static text alongside the dynamic content; for example, `field_name: "{sg_asset_type}_type"` combines a template key with static text.
- The optional `skip_unused` parameter will prevent the creation of directories for list type field values which are not used (as covered under the [Optional Fields](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Reference#Optional%20fields) section above). {% include info title="Note" content="Setting this to True may negatively affect folder creation performance. Also, the culling algorithm is currently crude and does not work in scenarios where complex filters have been applied to the associated entity." %}
- The optional `create_with_parent` parameter forces the creation of the list_field node, even if there isn't a child entity level node that is currently being processed (see the [Create With Parent Folder](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Reference#Create%20With%20Parent%20Folder) section above).
## Pipeline Step Folder
The Pipeline Step folder represents a [Pipeline Step](https://support.shotgunsoftware.com/hc/en-us/articles/219031288) in {% include product %}. Pipeline Steps are also referred to as Steps.

```yaml
# the type of dynamic content
type: "shotgun_step"

# the {% include product %} field to use for the folder name. This field needs to come from a step entity.
name: "short_name"
```
You can use name expressions here, just like you can with the [{% include product %} entity described above](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Reference#Shotgun%20List%20Field%20Folders). The node will look at its parent, grandparent, etc., until a {% include product %} entity folder configuration is found. This entity folder will be associated with the Step and the type of the entity will be used to determine which Steps to create.
{% include info title="Note" content="If you want to create a top level folder with Pipeline Steps, just use the ShotGrid entity node and set the associated type to step." %}
### Different file system layouts for different pipeline steps
Imagine you want to have one folder structure for Lighting and Comp and one for everything else. If you want to have different file system layouts for different Pipeline Steps, you can achieve this by adding a `filter` clause to your config. This filter allows you to scope which Pipeline Steps will be covered by a particular Step's configuration. In our example, you can create two configuration files: `step_lightcomp.yml` and `step.yml`. In the first one, you would add the following filter:
```yaml
filters: [ { "path": "short_name", "relation": "in", "values": [ "Light", "Comp" ] } ]
```
The above syntax will only be used when Step folders of the type `Light` or `Comp` are being created. For the other file, we want to create a rule for everything else:

```yaml
filters: [ { "path": "short_name", "relation": "not_in", "values": [ "Light", "Comp" ] } ]
```
Now you can define separate sub structures in each of these folders.
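The effect of the two filters can be pictured as a simple partition of the Pipeline Steps (an illustration of the `in`/`not_in` relations only; the step names below are hypothetical, and Toolkit evaluates the real filters against {% include product %}):

```python
# Hypothetical list of Pipeline Step short names in a project.
steps = ["Model", "Rig", "Anim", "Light", "Comp"]

light_comp_values = ["Light", "Comp"]

# step_lightcomp.yml: "relation": "in" matches only Light and Comp.
lightcomp_steps = [s for s in steps if s in light_comp_values]

# step.yml: "relation": "not_in" matches everything else.
other_steps = [s for s in steps if s not in light_comp_values]

print(lightcomp_steps)  # ['Light', 'Comp']
print(other_steps)      # ['Model', 'Rig', 'Anim']
```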
### Advanced: Specifying a parent
As part of the folder creation, Toolkit needs to associate a Pipeline Step with an entity (e.g., "Shot", "Asset", etc). Toolkit does this by default by looking up the folder tree and picking the first {% include product %} entity folder it finds. For example, if you have the hierarchy `Sequence > Shot > Step`, the Step folder will automatically be associated with the Shot, which is typically what you want.
However, if you have a hierarchy with entities below your primary entity, for example `Sequence > Shot > Department > Step`, Toolkit will, by default, associate the Step with the Department level, which is not desired. In this case, we need to explicitly tell Toolkit where to look. We can do this by adding the following to the Step configuration:
```yaml
associated_entity_type: Shot
```
## {% include product %} Task Folder
The Task folder represents a [Task](https://support.shotgunsoftware.com/hc/en-us/articles/219031248) in {% include product %}. By default, the Task folder will not be created with its parent. For example, if the folder creation is triggered for a Shot which has a Task node associated, the Task folders will not be created automatically. Instead, Task folders will only be created when the folder creation is executed for the Task (e.g., launching a Task from {% include product %}).

```yaml
# the type of dynamic content
type: "shotgun_task"

# the {% include product %} field to use for the folder name. This field needs to come from a task entity.
name: "content"
```
You can, however, turn on creation so that Tasks are created with their parent entity by using the following syntax:
```yaml
# recurse down from parent folder
create_with_parent: true
```
Similar to a Step, you can also optionally supply a `filter` parameter if you want to filter which Tasks your folder configuration should operate on.
Once again, you can use name expressions, just like you can with the [{% include product %} entity described above](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Reference#Shotgun%20List%20Field%20Folders), where static text can be used alongside dynamic content so that you can create a name that has both dynamic and static context.
`name: "task_{content}"`
The node will look at its parent, grandparent, etc., until a {% include product %} entity folder configuration is found. This entity folder will be associated with the Task.
### Advanced: Specifying a parent
As part of the folder creation, Toolkit needs to associate a Task with an entity (e.g., a Shot, an Asset, etc.). Toolkit does this by default by looking up the folder tree and picking the first {% include product %} entity folder it finds. For example, if you have the hierarchy `Sequence > Shot > Task`, the Task folder will automatically be associated with the Shot, which is typically what you want.
However, if you have a hierarchy with entities below your primary entity (e.g., below Shot), like `Sequence > Shot > Department > Task`, Toolkit would by default associate the Task with the Department level, which is not desired. In this case, we need to explicitly tell Toolkit where to look, similar to how we updated this with Steps in the [previous section](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Reference#Create%20With%20Parent%20Folder). We can do this by adding the following to the Task configuration:
```yaml
associated_entity_type: Shot
```
## Workspaces and Deferred Folder Creation
Deferred folder creation means that creation will only be executed when a second folder creation pass is requested via the optional `engine` parameter in the create folders method of the Toolkit API. Typically, this method is executed by Toolkit's various application launchers just prior to starting up an application. Most folder types support a deferred flag, which is `false` by default. To turn deferred folder creation on, you can add this flag:
```yaml
# only create this folder when tk.create_filesystem_structure is
# called with tk-maya, tk-nuke or any-custom-string.
defer_creation: ["tk-maya", "tk-nuke", "any-custom-string"]
```

```yaml
# create this folder when any application launches, but not when normal folder
# creation runs
defer_creation: true
```
This flag makes it possible to split the folder creation in two: one part that runs in a first "global" pass and a second part that runs at a later point. Typically, the second pass is associated with the engine launching (although it does not happen automatically since the default is `false`) and allows a user to create folders just before engine startup. This allows for two primary workflows:
1. **Workspaces:** Application specific folder setups. Folders can be created just before an application launches. A common workflow for this is to have a Pipeline Step that might require Houdini, Maya, and another Engine, depending on what the shot requires and how an Artist chooses to tackle it. The Artist can create maya/, houdini/, and other directories for that Pipeline Step initially, but if the Artist on a given shot only ever works in Maya, empty folders for Houdini and any other Engine are unnecessary. So, if you defer the folder creation to happen at the time of the launch of individual engines, then if an Artist never uses Houdini, the houdini/ folder will not be created for that shot.
2. **User folders:** A user folder is created just before application launch. The user folder config construct (described above) is deferred by default. Instead of basing a user folder on the assigned user in {% include product %}, this lets you create a folder for the current user whenever they launch an Engine. For instance, if you start working on a shot and you launch Maya, a username folder will be created for you (based on your username in {% include product %}), and you will not interfere with anyone else's work.
_Tip: If you prefer a normal, static folder to be created when an application (like Maya) launches, just create a config YAML file named the same as the folder and add the following:_
```yaml
# type of content
type: "static"

# only create this folder for maya
defer_creation: "tk-maya"
```

```yaml
# type of content
type: "static"

# only create this folder when tk.create_filesystem_structure is
# called with any-custom-string.
defer_creation: "any-custom-string"
```
## Current User Folder
The current user folder is a special construct that lets you set up work areas for different users. A common scenario is if you have multiple artists from a department working on the same shot. User folders can be used so that artists can store their workfiles in their own directories and be able to filter just for their files in the [Workfiles App](https://support.shotgunsoftware.com/hc/en-us/articles/219033088-Your-Work-Files). In this case, the configuration file needs to include the following options:
```yaml
# the type of dynamic content
type: "user_workspace"

name: "login"
```
- Set the value of the **type** field to `user_workspace`.
- The **name** field is the name that should be given to a user folder. It must consist of a combination of fields fetched from People in {% include product %} (`HumanUser` in {% include product %}).
- You can use a single field, like in the example above (e.g., `name: login`).
- You can use multiple fields in brackets (e.g., `name: "{firstname}_{lastname}"`).
- If you want to include fields from other linked entities, you can use the standard {% include product %} dot syntax (e.g., `name: "{sg_group.Group.code}_{login}"`).
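The substitution behind these `name` expressions can be roughly pictured with Python string formatting (Toolkit performs the actual resolution internally; the field values below are made up):

```python
# Hypothetical HumanUser field values fetched from the People page.
user = {"login": "jsmith", "firstname": "John", "lastname": "Smith"}

# name: "{firstname}_{lastname}" resolves each token to a user field.
folder_name = "{firstname}_{lastname}".format(**user)
print(folder_name)  # John_Smith
```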
The current user folder is created as a deferred folder by default, meaning that it will only be executed when a second folder creation pass is requested via the optional `engine` parameter in the create folders method of the Toolkit API.
## Static folders
Static folders (and files) are the simplest type. You can drop them into the configuration structure, and they will automatically get copied across when the folder creation process executes. [Here are some examples of static folders](https://github.com/shotgunsoftware/tk-config-default/tree/master/core/schema/project) in the default configuration (note that static folders do not have a corresponding YAML file).
Often, you will not need to go beyond this for static folders; however, Toolkit does support some more advanced functionality for static folders. It is possible to define dynamic conditions to determine if a static folder should get created. For example, you may want to have special static folders that only get created for Pipeline Steps of the Editorial type. In this case, you need to add a YAML configuration file next to the static folder and give it the same name, with the extension "yml". Then, use the following syntax:
```yaml
# the $step token refers to the step folder level above; it determines if this
# static folder should be created or not.
constrain_by_entity: "$step"

# we can now define constraints for this step. Constraints are simple
# filter expressions; if they are not matched, this node
# (and its children) will be ignored by the folder creation process.
constraints:
  - { "path": "short_name", "relation": "is", "values": [ "edit" ] }
```
By default, static folders will automatically get created together with their parent folder. There may be cases where this is not desirable, and in those cases you can add a special flag to indicate that the static folder should not be created together with its parent:
```yaml
# do not recurse down automatically
create_with_parent: false
```
## Symbolic Links
You can create symbolic links (symlinks) as part of the dynamic folder creation. If you want to create a symbolic link with the name `artwork`, create a file in your schema configuration named `artwork.symlink.yml`. This will be identified by the system as a symbolic link request and will not be copied across, but will instead be processed.
The `artwork.symlink.yml` file must, at the very least, contain a `target` key:
```yaml
# Example of a .symlink.yml file

# A target parameter is required.
target: "../Stuff/$Project/$Shot"

# Additional parameters will be passed to the hook as metadata
# that you may need for advanced customization
additional_param1: abc
additional_param2: def
```
If the target parameter contains `$EntityType` tokens such as `$Asset`, `$Shot`, or `$Project`, Toolkit will attempt to resolve these with the name of the folder representing that entity (Asset, Shot, Project, etc.). Toolkit will look up the filesystem tree for these values, and if they are not defined higher up in the tree, an error will be reported.
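The token resolution can be sketched as a substitution against the entity folder names found up the tree (a simplified illustration with made-up names and paths; Toolkit's real resolution walks the filesystem structure):

```python
import re

# Hypothetical folder names resolved by walking up the filesystem tree.
entity_folders = {"Project": "chasing_the_light", "Shot": "AA001"}

target = "../Stuff/$Project/$Shot"

def resolve(match):
    entity_type = match.group(1)
    if entity_type not in entity_folders:
        # Tokens that cannot be found higher up in the tree are an error.
        raise ValueError("Token $%s is not defined higher up in the tree" % entity_type)
    return entity_folders[entity_type]

# Replace each $EntityType token with the corresponding folder name.
resolved = re.sub(r"\$(\w+)", resolve, target)
print(resolved)  # ../Stuff/chasing_the_light/AA001
```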
List fields, such as asset type on assets, are expressed with a syntax that includes the entity type, e.g. `$Asset.sg_asset_type`. For example:
```yaml
# Example of a .symlink.yml file

# A target parameter is required.
target: "../renders/$Project/$Asset.sg_asset_type/$Asset"
```
Symlink creation happens (like all input/output, or I/O) inside the folder processing hook. A special `symlink` action is passed from the system into the hook, and you will get the name of the symlink, the fully resolved target, and all the YAML metadata contained within the definition file along with this request. For our `artwork` example above, we create the folder under the Shot like this:
{'action': 'symlink',
 'path': '/mnt/projects/chasing_the_light/Sequences/AA/AA001/artwork',
'target': '../Stuff/chasing_the_light/AA001',
Files that are placed in the schema scaffold will be copied across into the target folder structure.
{% include info title="Note" content="There are more details on this kind of handling in the [Customizing I/O and Permissions](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference#Simple%20customization%20of%20how%20folders%20are%20created) section under Simple Customization. We have a [process_folder_creation core hook](https://github.com/shotgunsoftware/tk-core/blob/master/hooks/process_folder_creation.py#L62-L71) that handles a lot of folder setup. You can add chmod calls into this hook (and/or set permissions as you mkdir), thereby setting permissions for the folders you are creating." %}
Sometimes it can be useful to exclude certain files and folders from being copied across as part of the folder creation. For example, if you store your folder creation configs in Git or SVN, you will have `.git` and `.svn` folders that you will not want to copy to each Shot or Asset folder. If there are files which you do not want to have copied, a file named `ignore_files` can be placed in the `config/core/schema` folder inside the project configuration. This file should contain glob-style patterns to define files not to copy. Each pattern should be on a separate line:
```
# This is a good example of a standard ignore_files file

.svn # no svn temp files to be copied across at folder creation time
.git # no git temp files to be copied across at folder creation time
.DS_Store # no mac temp files to be copied across at folder creation time
```
You can also use wildcards. For example, if you need to exclude all files with the `tmp` extension, just add a `*.tmp` line to the file.
```
# This is a good example of a standard ignore_files file

.svn # no svn temp files to be copied across at folder creation time
.git # no git temp files to be copied across at folder creation time
*.tmp # no files with tmp extension to be copied across at folder creation time
```
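Glob-style matching of this kind can be illustrated with Python's `fnmatch` module (a sketch of the pattern semantics, not the exact code Toolkit runs; the file names are made up):

```python
import fnmatch

# Patterns as they might appear in an ignore_files file (comments stripped).
ignore_patterns = [".svn", ".git", "*.tmp"]

files = ["shot_overrides.yml", ".git", "scratch.tmp", "notes.txt"]

# A file is skipped if it matches any of the ignore patterns.
copied = [
    f for f in files
    if not any(fnmatch.fnmatch(f, p) for p in ignore_patterns)
]
print(copied)  # ['shot_overrides.yml', 'notes.txt']
```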
## Customizing I/O and Permissions

Shot and Asset folders often need to be created with special permissions and particular ownership settings.
It is also common that folders on different levels in the file system tree need to have different permissions; a work area folder is typically writeable for everybody, whereas a shot folder may have much stricter permissions.
Toolkit allows for customization of the folder creation via a single hook. This is a core hook and it is named `process_folder_creation.py`. As the folder creation API call is traversing the folder configuration and deciding which folders should be created, it builds up a list of items that could be created. These items can be both files and folders. As the final step of the folder creation, this list is passed to a hook to handle the actual folder processing. You can examine the default [process_folder_creation core hook here](https://github.com/shotgunsoftware/tk-core/blob/master/hooks/process_folder_creation.py#L62-L71).
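To make the hand-off concrete, here is a rough sketch of what processing such a list of items could look like (a simplified stand-in for the real `process_folder_creation` hook, which also handles symlinks, file creation, and preview mode; the paths are made up):

```python
import os
import shutil

def process_items(items, dry_run=True):
    """Walk the list of folder-creation items and return the paths touched."""
    created = []
    for item in items:
        action = item["action"]
        if action in ("folder", "entity_folder"):
            if not dry_run:
                os.makedirs(item["path"], exist_ok=True)
            created.append(item["path"])
        elif action == "copy":
            if not dry_run:
                shutil.copy(item["source_path"], item["target_path"])
            created.append(item["target_path"])
    return created

# A trimmed-down example of the kind of data the hook receives.
items = [
    {"action": "entity_folder", "path": "/mnt/projects/chasing_the_light"},
    {"action": "folder", "path": "/mnt/projects/chasing_the_light/sequences"},
]
print(process_items(items))
```

With `dry_run=True` the sketch only reports the paths it would create, which mirrors how the real hook can be inspected before letting it perform I/O.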
### Data passed to the hook
@@ -520,16 +514,16 @@ The folder creation hook is executed just once for each folder creation request.
The data in the list is always a depth first recursion, starting with the top level folders and files and then traversing deeper and deeper. Here is an example of what the data passed to the hook may look like:
[
-
+
{'action': 'entity_folder',
'entity': {'id': 88, 'name': 'Chasing the Light', 'type': 'Project'},
'metadata': {'root_name': 'primary', 'type': 'project'},
'path': '/mnt/projects/chasing_the_light'},
-
+
{'action': 'folder',
'metadata': {'type': 'static'},
'path': '/mnt/projects/chasing_the_light/sequences'},
-
+
{'action': 'entity_folder',
'entity': {'id': 32, 'name': 'aa2', 'type': 'Sequence'},
'metadata': {'entity_type': 'Sequence',
@@ -539,7 +533,7 @@ The data in the list is always a depth first recursion, starting with the top le
'name': 'code',
'type': 'shotgun_entity'},
'path': '/mnt/projects/chasing_the_light/sequences/aa2'},
-
+
{'action': 'entity_folder',
'entity': {'id': 1184, 'name': 'moo87', 'type': 'Shot'},
'metadata': {'entity_type': 'Shot',
@@ -549,7 +543,7 @@ The data in the list is always a depth first recursion, starting with the top le
'name': 'code',
'type': 'shotgun_entity'},
'path': '/mnt/projects/chasing_the_light/sequences/aa2/moo87'},
-
+
{'action': 'copy',
'metadata': {'entity_type': 'Shot',
'filters': [{'path': 'sg_sequence',
@@ -559,7 +553,7 @@ The data in the list is always a depth first recursion, starting with the top le
'type': 'shotgun_entity'},
'source_path': '/mnt/software/tank/chasing_the_light/config/core/schema/project/sequences/sequence/shot/sgtk_overrides.yml',
'target_path': '/mnt/projects/chasing_the_light/sequences/aa2/moo87/sgtk_overrides.yml'},
-
+
{'action': 'create_file',
'metadata': {'entity_type': 'Shot',
'filters': [{'path': 'sg_sequence',
@@ -569,26 +563,26 @@ The data in the list is always a depth first recursion, starting with the top le
'type': 'shotgun_entity'},
'content': 'foo bar',
'target_path': '/mnt/projects/chasing_the_light/sequences/aa2/moo87/automatic_content.txt'},
-
+
{'action': 'symlink',
'path': '/mnt/projects/chasing_the_light/Sequences/AA/AA001/artwork',
'target': '../Stuff/chasing_the_light/AA001',
'metadata': {'target': '../Stuff/$Project/$Shot', 'additional_param1': 'abc', 'additional_param2': 'def'}
},
-
+
]
-The data is a list of dictionaries. Each dictionary has a key called `action`. This key denotes the type of I/O item that is requested. If you are implementing the folder creation hook, you need to add support for the following different actions:
+The data is a list of dictionaries. Each dictionary has a key called `action`. This key denotes the type of I/O item that is requested. If you are implementing the folder creation hook, you need to add support for the following different actions:
-- `entity_folder`: A folder on disk which is associated with a {% include product %} entity.
-- `folder`: A folder on disk.
-- `copy`: A file that needs to be copied from a source location to a target location.
-- `create_file`:- A file that needs to be created on disk.
-- `symlink`: A symbolic link should be created.
+- `entity_folder`: A folder on disk which is associated with a {% include product %} entity.
+- `folder`: A folder on disk.
+- `copy`: A file that needs to be copied from a source location to a target location.
+- `create_file`: A file that needs to be created on disk.
+- `symlink`: A symbolic link should be created.
-Each of the actions have a different set of dictionary keys. For example, the `entity_folder` action has an `entity key` which contains the details of the entity that it is connected to. The `create_file` has a `source_path` and a `target_path` key which inform the hook which file to copy and where.
+Each of the actions has a different set of dictionary keys. For example, the `entity_folder` action has an `entity` key which contains the details of the entity that it is connected to. The `copy` action has a `source_path` and a `target_path` key which tell the hook which file to copy and where.
-All `actions` also have a key called `metadata`. This key represents the YAML configuration data that comes from the associated configuration file in the schema setup. You can see in the example above how the `metadata` key for a {% include product %} folder contains all the filter and naming information that is set up within the schema configuration. For example, here is the metadata for the Shot folder in the example above:
+All `actions` also have a key called `metadata`. This key represents the YAML configuration data that comes from the associated configuration file in the schema setup. You can see in the example above how the `metadata` key for a {% include product %} folder contains all the filter and naming information that is set up within the schema configuration. For example, here is the metadata for the Shot folder in the example above:
{'action': 'entity_folder',
'entity': {'id': 1184, 'name': 'moo87', 'type': 'Shot'},
@@ -598,9 +592,9 @@ All `actions` also have a key called `metadata`. This key represents the YAML
'values': []}],
'name': 'code',
'type': 'shotgun_entity'},
- 'path': '/mnt/projects/chasing_the_light/sequences/aa2/moo87'}
+ 'path': '/mnt/projects/chasing_the_light/sequences/aa2/moo87'}
-...which corresponds to the `shot.yml` schema configuration file:
+...which corresponds to the `shot.yml` schema configuration file:
# Copyright (c) 2013 {% include product %} Software Inc.
@@ -618,19 +612,19 @@ All `actions` also have a key called `metadata`. This key represents the YAML
# agreement to the {% include product %} Pipeline Toolkit Source Code License. All rights
# not expressly granted therein are reserved by {% include product %} Software Inc.
-
+
# the type of dynamic content
type: "shotgun_entity"
-
+
# the {% include product %} field to use for the folder name
name: "code"
-
+
# the {% include product %} entity type to connect to
entity_type: "Shot"
-
+
# {% include product %} filters to apply when getting the list of items
@@ -641,35 +635,35 @@ All `actions` also have a key called `metadata`. This key represents the YAML
# (this is std {% include product %} API syntax)
# any values starting with $ are resolved into path objects
- filters: [ { "path": "sg_sequence", "relation": "is", "values": [ "$sequence" ] } ]
+ filters: [ { "path": "sg_sequence", "relation": "is", "values": [ "$sequence" ] } ]
-_Note that the dynamic token `$sequence` has been resolved into an actual object at runtime._
+_Note that the dynamic token `$sequence` has been resolved into an actual object at runtime._
### Passing your own folder creation directives to the hook
-In addition to the various configuration directives required by Toolkit, you can also define your own configuration items as part of the schema configuration. These are passed into the hook via the `metadata` key described above, and can be used to drive folder creation.
+In addition to the various configuration directives required by Toolkit, you can also define your own configuration items as part of the schema configuration. These are passed into the hook via the `metadata` key described above, and can be used to drive folder creation.
For example, if you had the following structure in your schema setup:
# the type of dynamic content
type: "shotgun_entity"
-
+
# the {% include product %} field to use for the folder name
name: "code"
-
+
# the {% include product %} entity type to connect to
entity_type: "Shot"
-
+
# {% include product %} filters to apply when getting the list of items
filters: [ { "path": "sg_sequence", "relation": "is", "values": [ "$sequence" ] } ]
-
+
# user settings
- studio_permissions_level: "admin"
+ studio_permissions_level: "admin"
...the data passed via the folder creation hook would be:
@@ -682,15 +676,15 @@ For example, if you had the following structure in your schema setup:
'name': 'code',
'type': 'shotgun_entity',
'studio_permissions_level': 'admin'},
- 'path': '/mnt/projects/chasing_the_light/sequences/aa2/moo87'}
+ 'path': '/mnt/projects/chasing_the_light/sequences/aa2/moo87'}
-Now the special parameter `studio_permissions_level` is passed into the hook and you can use that, for example, to control file permissions. You can also pass arbitrarily complex data structures using this method. A typical usecase for this would be to control permissions at a very detailed level.
+Now the special parameter `studio_permissions_level` is passed into the hook and you can use it, for example, to control file permissions. You can also pass arbitrarily complex data structures using this method. A typical use case for this would be to control permissions at a very detailed level.
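As an illustration, a hook could read such a custom setting from the `metadata` dictionary and map it onto file-system permissions. Everything below is hypothetical: the `studio_permissions_level` values and the permission mapping are made up for the example, and this is a sketch rather than the Toolkit implementation.

```python
import os

# Hypothetical example: neither the "studio_permissions_level" values
# nor this mapping are part of Toolkit. A real version of this logic
# would live inside the process_folder_creation.py hook.
PERMISSIONS = {
    "admin": 0o700,
    "artist": 0o755,
    "open": 0o777,
}

def create_folder(item):
    """Create one 'folder'/'entity_folder' item from the hook data."""
    level = item.get("metadata", {}).get("studio_permissions_level", "open")
    mode = PERMISSIONS.get(level, 0o777)
    path = item["path"]
    if not os.path.exists(path):
        os.makedirs(path, mode)
    return path
```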
### Adding custom configuration to static folders
Typically, when you create a folder inside the folder schema configuration, and it does not have a corresponding YAML file, Toolkit will assume that it is a static folder and will simply create it.
-If you would like to associate custom configuration metadata with a static folder, you have to create a YAML configuration file with the `static` type. For example, let's say you have a static `assets` folder just under the project root and would like to group together assets and add custom configuration metadata. To achieve this, create the following `assets.yml` file:
+If you would like to associate custom configuration metadata with a static folder, you have to create a YAML configuration file with the `static` type. For example, let's say you have a static `assets` folder just under the project root and would like to group together assets and add custom configuration metadata. To achieve this, create the following `assets.yml` file:
type: static
studio_permissions_level: "admin"
@@ -701,7 +695,6 @@ The configuration data passed to the hook will then contain the following:
'metadata': {'studio_permissions_level': 'admin', 'type': 'static'},
'path': '/mnt/projects/chasing_the_light/assets'},
-
Again, arbitrarily complex data can be passed from the YAML configuration file into the hook in this way.
## Simple customization of how folders are created
@@ -709,68 +702,68 @@ Again, arbitrarily complex data can be passed from the YAML configuration file i
A simple folder creation hook could look something like this:
class ProcessFolderCreation(Hook):
-
+
def execute(self, items, preview_mode, **kwargs):
"""
The default implementation creates folders recursively using open permissions.
-
+
This hook should return a list of created items.
-
+
Items is a list of dictionaries. Each dictionary can be of the following type:
-
+
Standard Folder
---------------
This represents a standard folder in the file system which is not associated
with anything in {% include product %}. It contains the following keys:
-
+
* "action": "folder"
* "metadata": The configuration yaml data for this item
* "path": path on disk to the item
-
+
Entity Folder
-------------
This represents a folder in the file system which is associated with a
{% include product %} entity. It contains the following keys:
-
+
* "action": "entity_folder"
* "metadata": The configuration yaml data for this item
* "path": path on disk to the item
* "entity": {% include product %} entity link dict with keys type, id and name.
-
+
File Copy
---------
This represents a file copy operation which should be carried out.
It contains the following keys:
-
+
* "action": "copy"
* "metadata": The configuration yaml data associated with the directory level
on which this object exists.
* "source_path": location of the file that should be copied
* "target_path": target location to where the file should be copied.
-
+
File Creation
-------------
This is similar to the file copy, but instead of a source path, a chunk
of data is specified. It contains the following keys:
-
+
* "action": "create_file"
* "metadata": The configuration yaml data associated with the directory level
on which this object exists.
* "content": file content
* "target_path": target location where the file should be created.
-
+
"""
-
+
# set the umask so that we get true permissions
old_umask = os.umask(0)
folders = []
try:
-
+
# loop through our list of items
for i in items:
-
+
action = i.get("action")
-
+
if action == "entity_folder" or action == "folder":
# folder creation
path = i.get("path")
@@ -779,7 +772,7 @@ A simple folder creation hook could look something like this:
# create the folder using open permissions
os.makedirs(path, 0o777)
folders.append(path)
-
+
elif action == "copy":
# a file copy
source_path = i.get("source_path")
@@ -791,7 +784,7 @@ A simple folder creation hook could look something like this:
# set permissions to open
os.chmod(target_path, 0o666)
folders.append(target_path)
-
+
elif action == "create_file":
# create a new file based on content
path = i.get("path")
@@ -808,27 +801,27 @@ A simple folder creation hook could look something like this:
# and set permissions to open
os.chmod(path, 0o666)
folders.append(path)
-
+
else:
raise Exception("Unknown folder hook action '%s'" % action)
-
+
finally:
# reset umask
os.umask(old_umask)
-
- return folders
+
+ return folders
# Part 2 - Configuring File System Templates
-The Toolkit templates file is one of the hubs of the Toolkit configuration. There is always one of these files per project and it resides inside the **config/core** folder inside your pipeline configuration.
+The Toolkit templates file is one of the hubs of the Toolkit configuration. There is always one of these files per project and it resides inside the **config/core** folder inside your pipeline configuration.
-This file contains definitions for _templates_ and their _keys_.
+This file contains definitions for _templates_ and their _keys_.
-A **key** is a dynamic field we defined. It can be a name, a version number, a screen resolution, a shot name etc. Keys are configured with types, so we can define that a key should be a string or an int for example. They are also formatted, so we can define that a string should only contain alpha numeric characters, or that all integers should be padded with eight zeroes.
+A **key** is a dynamic field that you define. It can be a name, a version number, a screen resolution, a shot name, etc. Keys are typed, so we can define that a key should be, for example, a string or an int. They are also formatted, so we can define that a string should only contain alphanumeric characters, or that an integer should be zero padded to eight digits.
-A **template** is a dynamic path. An example of a template is `shots/{shot}/publish/{name}.{version}.ma`. This template could for represent maya publishes for a shot - the bracketed fields are keys.
+A **template** is a dynamic path. An example of a template is `shots/{shot}/publish/{name}.{version}.ma`. This template could, for example, represent Maya publishes for a shot; the bracketed fields are keys.
The templates file is divided into three sections: keys, paths and strings.
@@ -839,42 +832,28 @@ Keys define what values are acceptable for fields. In the template config file k
key_name:
type: key_type
option: option_value
- option: option_value
+ option: option_value
-Key type is either `str`, `int`, or `sequence`. Str keys are keys whose values are strings, int keys are keys whose values are integers, and sequence keys are keys whose values are sequences of integers.
+Key type is either `str`, `int`, or `sequence`. `str` keys hold string values, `int` keys hold integer values, and `sequence` keys hold sequences of integers, such as the frame numbers of an image sequence.
In addition to specifying the type, you can also specify additional options. The following options exist:
-- `default: default_value` - Value used if no value was supplied. This can happen if you are using the Toolkit API and trying to resolve a set of field values into a path for example.
-
-- `choices: [choice1, choice2, etc]` - An enumeration of possible values for this key.
-
-- `exclusions: [bad1, bad2, etc]` - An enumeration of forbidden values for this key. If key is of type sequence, frame spec values cannot be invalidated with this setting.
-
-- `length: 12` - This key needs to be of an exact length.
-
-- `alias: new_name` - Provides a name which will be used by templates using this key rather than the key_name. For example if you have two concepts of a version number, one is four zero padded because that is how the client wants it, and one is three zero padded because that how it is handled internally - in this case you really want both keys named "version" but this is not really possible since key names need to be unique. In this case you can create an alias. See one of the examples below for more information.
-
-- `filter_by: alphanumeric` - Only works for keys of type string. If this option is specified, only strings containing alphanumeric values (typically a-z, A-Z and 0-9 for ascii strings but may include other characters if your input data is unicode) will be considered valid values.
-
-- `filter_by: alpha` - Only works for keys of type string. If this option is specified, only strings containing alpha values (typically a-z, A-Z for ascii strings but may include other characters if your input data is unicode) will be considered valid values.
-
-- `filter_by: '^[0-9]{4}_[a-z]{3}$'` - Only works for keys of type string. You can define a regular expression as a validation mask. The above example would for example require the key to have four digits, then an underscore and finally three lower case letters.
-
-- `format_spec: "04"` - For keys of type int and sequence, this setting means that the int or sequence number will be zero or space padded. Specifying "04" like in the example will result in a four digit long zero padded number (e.g. 0003). Specifying "03" would result in three digit long zero padded number (e.g. 042), etc. Specifying "3" would result in three digit long space padded number (e.g. " 3"). For keys of type timestamp, the format_spec follows the [strftime and strptime convention](https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior).
-
-- `strict_matching: true` - Only works for keys of type type int. This settings means that the field will only match numbers that have been properly formatted. For example, given "003" and strict_matching set to true, we would match "002", "12345" and "042", but not "00003" or "2". If you need the matching to be less strict, set strict_matching to false. The default behavior is to strictly match.
-
-- `shotgun_entity_type` - When used in conjunction with the `shotgun_field_name` option, will cause contexts to query {% include product %} directly for values. This allows using values from fields not seen in the folder structure to be used in file names.
-
-- `shotgun_field_name` - Only used in conjunction with `shotgun_entity_type`.
-
-- `abstract` - Denotes that the field is abstract. Abstract fields are used when a pattern is needed to describe a path - for example image sequences (%04d) or stereo (%V). Abstract fields require a default value.
-
-- `subset` and `subset_format` - Extracts a subset of the given input string and makes that the key value, allowing you to create for example an initials key from a full username or a key that holds the three first letters of every shot name.
-
-
-For technical details about template keys, see the [API reference](http://developer.shotgridsoftware.com/tk-core/core.html#template-system).
+- `default: default_value` - Value used if no value was supplied. This can happen if you are using the Toolkit API and trying to resolve a set of field values into a path for example.
+- `choices: [choice1, choice2, etc]` - An enumeration of possible values for this key.
+- `exclusions: [bad1, bad2, etc]` - An enumeration of forbidden values for this key. If key is of type sequence, frame spec values cannot be invalidated with this setting.
+- `length: 12` - This key needs to be of an exact length.
+- `alias: new_name` - Provides a name which will be used by templates using this key rather than the key_name. For example, if you have two concepts of a version number, one four zero padded because that is how the client wants it, and one three zero padded because that is how it is handled internally - in this case you really want both keys named "version", but this is not possible since key names need to be unique. In this case you can create an alias. See one of the examples below for more information.
+- `filter_by: alphanumeric` - Only works for keys of type string. If this option is specified, only strings containing alphanumeric values (typically a-z, A-Z and 0-9 for ascii strings but may include other characters if your input data is unicode) will be considered valid values.
+- `filter_by: alpha` - Only works for keys of type string. If this option is specified, only strings containing alpha values (typically a-z, A-Z for ascii strings but may include other characters if your input data is unicode) will be considered valid values.
+- `filter_by: '^[0-9]{4}_[a-z]{3}$'` - Only works for keys of type string. You can define a regular expression as a validation mask. The above example would for example require the key to have four digits, then an underscore and finally three lower case letters.
+- `format_spec: "04"` - For keys of type int and sequence, this setting means that the int or sequence number will be zero or space padded. Specifying "04" like in the example will result in a four digit long zero padded number (e.g. 0003). Specifying "03" would result in three digit long zero padded number (e.g. 042), etc. Specifying "3" would result in three digit long space padded number (e.g. " 3"). For keys of type timestamp, the format_spec follows the [strftime and strptime convention](https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior).
+- `strict_matching: true` - Only works for keys of type int. This setting means that the field will only match numbers that have been properly formatted. For example, given a format_spec of "03" and strict_matching set to true, we would match "002", "12345" and "042", but not "00003" or "2". If you need the matching to be less strict, set strict_matching to false. The default behavior is to strictly match.
+- `shotgun_entity_type` - When used in conjunction with the `shotgun_field_name` option, will cause contexts to query {% include product %} directly for values. This allows using values from fields not seen in the folder structure to be used in file names.
+- `shotgun_field_name` - Only used in conjunction with `shotgun_entity_type`.
+- `abstract` - Denotes that the field is abstract. Abstract fields are used when a pattern is needed to describe a path - for example image sequences (%04d) or stereo (%V). Abstract fields require a default value.
+- `subset` and `subset_format` - Extracts a subset of the given input string and makes that the key value, allowing you to create for example an initials key from a full username or a key that holds the three first letters of every shot name.
+
+For technical details about template keys, see the [API reference](http://developer.shotgridsoftware.com/tk-core/core.html#template-system).
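The padding rules described for `format_spec` can be pictured with plain Python format strings, which follow the same convention. This is an illustration only, not the Toolkit API:

```python
# Illustration only: how "format_spec" style padding behaves for int
# keys, using plain Python format strings (not the Toolkit API).

def pad(value, spec):
    """Apply an int format spec such as "04", "03" or "3"."""
    return format(value, spec + "d")

print(pad(3, "04"))   # -> 0003 (four digits, zero padded)
print(pad(42, "03"))  # -> 042  (three digits, zero padded)
print(pad(3, "3"))    # -> '  3' (three digits, space padded)
```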
### Example - An alphanumeric name
@@ -884,8 +863,8 @@ A name that defaults to "comp" and that is alphanumeric:
type: str
default: "comp"
filter_by: alphanumeric
-
- nuke_shot_work: sequences/{Sequence}/{Shot}/{Step}/work/nuke/{name}.v{version}.nk
+
+ nuke_shot_work: sequences/{Sequence}/{Shot}/{Step}/work/nuke/{name}.v{version}.nk
### Example - Version number
@@ -893,25 +872,25 @@ A version number that would match numbers such as 002, 102, 034, 12341
version:
type: int
- format_spec: "03"
+ format_spec: "03"
A version number that would match numbers such as 002, 102, 034, 12341, but also 0002, 2 and 0102
version:
type: int
format_spec: "03"
- strict_matching: false
+ strict_matching: false
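One way to picture strict matching for an int key with `format_spec: "03"` is as a pattern that accepts values padded to exactly three digits, or longer values without superfluous leading zeros. The regular expression below is a sketch of that idea, not the Toolkit code:

```python
import re

# Illustration only: a regex sketching the strict matching rule for an
# int key with format_spec "03". A value matches if it is exactly three
# digits, or longer with no superfluous leading zeros.
STRICT_03 = re.compile(r"^(?:\d{3}|[1-9]\d{3,})$")

for value in ["002", "042", "12345", "00003", "2"]:
    print(value, "matches" if STRICT_03.match(value) else "rejected")
```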
### Example - A stereo eye
-A typical stereo eye setup. The eye field is either L or R, but when used in software, it is often referred to in a generic, abstract fashion as %V. Since %V does not really refer to a file name but rather a collection of files, we set the _abstract_ flag. Abstract fields need to have a default value that is pulled in whenever the abstract representation is being requested.
+A typical stereo eye setup. The eye field is either L or R, but when used in software, it is often referred to in a generic, abstract fashion as %V. Since %V does not really refer to a file name but rather a collection of files, we set the _abstract_ flag. Abstract fields need to have a default value that is pulled in whenever the abstract representation is being requested.
eye:
type: str
choices: ["L", "R", "%V"]
default: "%V"
abstract: true
-
+
nuke_shot_render_stereo: sequences/{Sequence}/{Shot}/{Step}/work/images/{Shot}_{name}_{eye}_v{version}.{SEQ}.exr
### Example - Image sequences
@@ -921,12 +900,12 @@ Image sequences are abstract by definition and they have a default value set to
SEQ:
type: sequence
format_spec: "04"
-
- nuke_shot_render_stereo: sequences/{Sequence}/{Shot}/{Step}/work/images/{Shot}_{name}_{channel}_{eye}_v{version}.{SEQ}.exr
+
+ nuke_shot_render_stereo: sequences/{Sequence}/{Shot}/{Step}/work/images/{Shot}_{name}_{channel}_{eye}_v{version}.{SEQ}.exr
### Example - Two fields both named version via an alias
-Two definitions of version number that can both be used by code that expects a key which is named "version". This is useful if you have two Toolkit apps that both need a _version_ field but you want these version field to be formatted differently.
+Two definitions of version number that can both be used by code that expects a key named "version". This is useful if you have two Toolkit apps that both need a _version_ field but you want these version fields to be formatted differently.
nuke_version:
type: int
@@ -936,7 +915,7 @@ Two definitions of version number that can both be used by code that expects a k
type: int
format_spec: "04"
alias: version
-
+
# nuke versions are using numbers on the form 003, 004, 005
@@ -946,10 +925,10 @@ Two definitions of version number that can both be used by code that expects a k
# because it has an alias defined
nuke_shot_work: sequences/{Sequence}/{Shot}/{Step}/work/nuke/{name}.v{nuke_version}.nk
-
+
# maya versions are using numbers on the form 0004, 0005, 0006
- maya_shot_work: sequences/{Sequence}/{Shot}/{Step}/work/maya/{name}.v{maya_version}.ma
+ maya_shot_work: sequences/{Sequence}/{Shot}/{Step}/work/maya/{name}.v{maya_version}.ma
### Example - Timestamp
@@ -958,21 +937,21 @@ A timestamp that defaults to the current local time and is formatted as YYYY-MM-
now:
type: timestamp
format_spec: "%Y-%m-%d-%H-%M-%S"
- default: now
+ default: now
A timestamp that defaults to the current utc time and is formatted as YYYY.MM.DD.
year_month_day:
type: timestamp
format_spec: "%Y.%m.%d"
- default: utc_now
+ default: utc_now
A timestamp that defaults to 9:00:00 and is formatted as HH-MM-SS.
nine_am_time:
type: timestamp
format_spec: "%H-%M-%S"
- default: "09-00-00"
+ default: "09-00-00"
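Since timestamp `format_spec` values follow the strftime convention, you can preview what a given spec produces with any Python datetime (illustration only, not the Toolkit API):

```python
from datetime import datetime

# Illustration only: the three timestamp format_spec values above,
# applied to a fixed datetime via Python's strftime.
stamp = datetime(2020, 1, 2, 9, 0, 0)

print(stamp.strftime("%Y-%m-%d-%H-%M-%S"))  # -> 2020-01-02-09-00-00
print(stamp.strftime("%Y.%m.%d"))           # -> 2020.01.02
print(stamp.strftime("%H-%M-%S"))           # -> 09-00-00
```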
### Example - {% include product %} mappings
@@ -982,10 +961,10 @@ This is useful when you would like to to add {% include product %} fields to a f
type: str
shotgun_entity_type: HumanUser
shotgun_field_name: login
-
+
nuke_shot_work: sequences/{Sequence}/{Shot}/{Step}/work/nuke/{current_user_name}_{name}.v{version}.nk
-When a Toolkit app populates all the context fields (via the `context.as_template_fields()` method, it will populate the higher level fields `Shot`, `Sequence` and `Step` automatically. It will also scan through all fields which have `shotgun_entity_type` defined (like our `current_user_name` field above). If the {% include product %} Entity is defined in the context, it will be able to automatically resolve the value. The current user is always tracked in the context, and in the above example, it would also be possible to pull data from fields on Shot, Sequence and Step since these are defined as part of the higher level path and therefore part of the context. However, trying to refer to an Asset entity in a field wouldn't work in the above example since Toolkit would have no way of knowing which asset in {% include product %} to pull the data from.
+When a Toolkit app populates all the context fields (via the `context.as_template_fields()` method), it will populate the higher level fields `Shot`, `Sequence` and `Step` automatically. It will also scan through all fields which have `shotgun_entity_type` defined (like our `current_user_name` field above). If the {% include product %} Entity is defined in the context, it will be able to automatically resolve the value. The current user is always tracked in the context, and in the above example, it would also be possible to pull data from fields on Shot, Sequence and Step since these are defined as part of the higher level path and therefore part of the context. However, trying to refer to an Asset entity in a field wouldn't work in the above example, since Toolkit would have no way of knowing which asset in {% include product %} to pull the data from.
### Example - String field with two valid values
@@ -993,21 +972,21 @@ Often times a studio will have a project that needs to save out ASCII and Binary
maya_file_extension:
type: str
- choices: ["ma", "mb"]
+ choices: ["ma", "mb"]
{% include info title="Note" content="The default apps use either `.ma` or `.mb` based on what's configured in the `templates.yml`. So, for example, if you want to change the work files app to save `.mb` instead of `.ma` in a project, you can change these three templates (for Shots):" %}
maya_shot_work: '@shot_root/work/maya/{name}.v{version}.ma'
maya_shot_snapshot: '@shot_root/work/maya/snapshots/{name}.v{version}.{timestamp}.ma'
maya_shot_publish: '@shot_root/publish/maya/{name}.v{version}.ma'
-
+
If you instead end them with .mb, then the apps will save out as Maya binary:
-
+
maya_shot_work: '@shot_root/work/maya/{name}.v{version}.mb'
maya_shot_snapshot: '@shot_root/work/maya/snapshots/{name}.v{version}.{timestamp}.mb'
- maya_shot_publish: '@shot_root/publish/maya/{name}.v{version}.mb'
+ maya_shot_publish: '@shot_root/publish/maya/{name}.v{version}.mb'
-Check out [The Paths Section](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Reference#The%20Paths%20Section) below for more details.
+Check out [The Paths Section](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Reference#The%20Paths%20Section) below for more details.
### Example - Disallowing a value
@@ -1017,18 +996,18 @@ A string field for which the value "assets" is not allowed. This is useful if yo
|--- sequence1
|--- sequence2
|--- sequence3
- \--- assets
+ \--- assets
In order for Toolkit to correctly understand that the assets folder is not just another sequence, we can define that "assets" is not a valid value for the sequence template.
sequence:
type: str
- exclusions: ["assets"]
+ exclusions: ["assets"]
The exclusions field above allows us to define two templates that both resolve correctly:
sequence_work_area: {sequence}/{shot}/work
- asset_work_area: assets/{asset}/work
+ asset_work_area: assets/{asset}/work
### Example - Subsets of strings
@@ -1041,19 +1020,19 @@ The following example extends a previous example and shows how to prefix filenam
subset: '([A-Z])[a-z]* ([A-Z])[a-z]*'
subset_format: '{0}{1}'
- nuke_shot_work: sequences/{Sequence}/{Shot}/{Step}/work/nuke/{user_initials}_{name}.v{version}.nk
+ nuke_shot_work: sequences/{Sequence}/{Shot}/{Step}/work/nuke/{user_initials}_{name}.v{version}.nk
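To see what the subset extraction does, here is a rough plain-Python equivalent of applying the `subset` pattern and `subset_format` above. This mimics the described behavior; it is not the Toolkit implementation:

```python
import re

# Illustration only: approximate how a "subset" regex plus a
# "subset_format" string could reduce a full user name to initials.
def extract_subset(value, pattern, subset_format):
    match = re.match(pattern, value)
    if not match:
        raise ValueError("%r does not match %r" % (value, pattern))
    return subset_format.format(*match.groups())

initials = extract_subset(
    "John Smith", r"([A-Z])[a-z]* ([A-Z])[a-z]*", "{0}{1}"
)
print(initials)  # -> JS
```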
## The Paths Section
The Paths section specifies where work will be saved. All paths consist of at least a name and a definition, where the definition is a combination of key names in brackets interspersed with non-key values representing a path. For example, a definition for a shot work file might look like:
- shot_work: sequences/{Sequence}/{Shot}/{Step}/work/{Shot}.v{version}.ma
+ shot_work: sequences/{Sequence}/{Shot}/{Step}/work/{Shot}.v{version}.ma
With Sequence, Shot, Step and version being keys defined in the same template file.
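Conceptually, resolving such a definition is string substitution. Below is a minimal sketch that ignores the per-key padding and validation that real template keys perform; `resolve` is a hypothetical name:

```python
def resolve(definition, fields):
    # Hypothetical sketch: a template definition is a path whose {key}
    # tokens are replaced with values from a fields dictionary.
    return definition.format(**fields)

path = resolve(
    "sequences/{Sequence}/{Shot}/{Step}/work/{Shot}.v{version}.ma",
    {"Sequence": "seq010", "Shot": "shot_020", "Step": "comp", "version": 3},
)
# path == "sequences/seq010/shot_020/comp/work/shot_020.v3.ma"
```

Note that a real integer key would typically also apply zero-padding (e.g. `v003`), which this sketch omits.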
{% include info title="Note" content="If a string key's name matches the entity type of a dynamic schema folder that has an associated ShotGrid entity, that folder name will be substituted in for the token. For example, say you are using a {Sequence} template key of type 'string' like the above snippet, and in your schema you have a dynamic folder named 'sequence' whose corresponding `sequence.yml` file defines it as type `shotgun_entity`, connected to the 'Sequence' entity type in ShotGrid. Toolkit will recognize that your template key corresponds to this dynamic folder's entity type (in that they are both Sequence), take the resulting folder name (i.e., the name of the specific sequence in question), and substitute it for the template key." %}
-This form is required if any optional attributes need to be defined. Currently, the only optional attribute is `root_name`, which can be used to specify a project root for a path in a project that has multiple roots. [Multiple roots](https://developer.shotgridsoftware.com/9ea9dd4e/) are used when you'd like to add a new storage root to store some of your project files.
+This form is required if any optional attributes need to be defined. Currently, the only optional attribute is `root_name`, which can be used to specify a project root for a path in a project that has multiple roots. [Multiple roots](https://developer.shotgridsoftware.com/9ea9dd4e/) are used when you'd like to add a new storage root to store some of your project files.
`root_name: name_of_project_root`
@@ -1061,7 +1040,7 @@ For example, it may look like this:
shot_work:
definition: sequences/{Sequence}/{Shot}/{Step}/work/{Shot}.v{version}.ma
- root_name: primary
+ root_name: primary
You need to use the above format if you want to use another storage root than the primary one. In this example, using this simple format implies that you are using the primary root for all entries.
@@ -1079,16 +1058,16 @@ With name and version as key names defined in the same file.
## Using Optional Keys in Templates
-Optional keys in templates are useful for a number of reasons. One common case is when `{SEQ}` is optional for rendered images. In this example, there can be a set of exrs that that are comprised of frame numbers, like `/path/to/render/shot.101.exr` (and 102, 103, etc), while you are also able to use the same template for quicktime movies, like `/path/to/render/shot.qt`. Another more common case is when you are rendering stereo images. If you are in a studio where the convention is: `left eye: file.LFT.exr, right eye: file.RGT.exr, stereo image: file.exr?`, you can make `{eye}` optional.
+Optional keys in templates are useful for a number of reasons. One common case is when `{SEQ}` is optional for rendered images. In this example, there can be a set of EXRs with frame numbers, like `/path/to/render/shot.101.exr` (and 102, 103, etc.), while the same template can also be used for QuickTime movies, like `/path/to/render/shot.qt`. Another common case is rendering stereo images. If you are in a studio where the convention is `left eye: file.LFT.exr, right eye: file.RGT.exr, stereo image: file.exr`, you can make `{eye}` optional.
Optional sections can be defined using square brackets:
- shot_work: sequences/{Shot}/work/{Shot}.[v{version}.]ma
+ shot_work: sequences/{Shot}/work/{Shot}.[v{version}.]ma
The optional section must contain at least one key. If the path is resolved with no value for the key(s) in an optional section, the path will resolve as if that section did not exist in the definition. The example above can be thought of as two templates baked into a single definition:
shot_work: sequences/{Shot}/work/{Shot}.v{version}.ma
- shot_work: sequences/{Shot}/work/{Shot}.ma
+ shot_work: sequences/{Shot}/work/{Shot}.ma
As you pass in a dictionary of fields, Toolkit will choose the right version of the template depending on the values:
@@ -1096,13 +1075,13 @@ As you pass in a dictionary of fields, Toolkit will choose the right version of
>>> template.apply_fields({"Shot":"ABC_123", "version": 12})
/project/sequences/ABC_123/work/ABC_123.v12.ma
>>> template.apply_fields({"Shot":"ABC_123"})
- /project/sequences/ABC_123/work/ABC_123.ma
+ /project/sequences/ABC_123/work/ABC_123.ma
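The behavior above can be sketched as follows. This is a hypothetical simplification of how optional sections could be handled; `resolve_optional` is illustrative only, not Toolkit's implementation:

```python
import re

def resolve_optional(definition, fields):
    # Hypothetical sketch: a [bracketed] section resolves normally when
    # all of its keys are supplied, and drops out entirely otherwise.
    def section(match):
        body = match.group(1)
        keys = re.findall(r"{(\w+)}", body)
        if all(k in fields for k in keys):
            return body.format(**fields)
        return ""  # omit the whole optional section
    resolved = re.sub(r"\[([^\]]*)\]", section, definition)
    return resolved.format(**fields)

definition = "sequences/{Shot}/work/{Shot}.[v{version}.]ma"
with_version = resolve_optional(definition, {"Shot": "ABC_123", "version": 12})
without_version = resolve_optional(definition, {"Shot": "ABC_123"})
```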
# Advanced questions and troubleshooting
## How can I add a new entity type to my file structure?
-Let's say you have been working on feature animations and shorts on your {% include product %} site, and now you have been awarded episodic work. Let's walk through how you can incorporate an episodic workflow to Toolkit. The first thing to do is to set up your hierarchy in {% include product %} for episodic work following the instructions [here](https://support.shotgunsoftware.com/hc/en-us/articles/115000019414).
+Let's say you have been working on feature animations and shorts on your {% include product %} site, and now you have been awarded episodic work. Let's walk through how you can incorporate an episodic workflow into Toolkit. The first thing to do is to set up your hierarchy in {% include product %} for episodic work following the instructions [here](https://support.shotgunsoftware.com/hc/en-us/articles/115000019414).

@@ -1110,25 +1089,24 @@ Let's say you have been working on feature animations and shorts on your {% incl
**Additional Reference:**
-- [How does the Episode entity work?](https://support.shotgunsoftware.com/hc/en-us/articles/115000019414)
-- [Customizing an entity's hierarchy](https://support.shotgunsoftware.com/hc/en-us/articles/219030828)
-
+- [How does the Episode entity work?](https://support.shotgunsoftware.com/hc/en-us/articles/115000019414)
+- [Customizing an entity's hierarchy](https://support.shotgunsoftware.com/hc/en-us/articles/219030828)
### {% include product %} fields required for the Episode > Sequence > Shot hierarchy
[You can choose to use any Custom Entity](https://support.shotgunsoftware.com/hc/en-us/articles/114094182834) for `Episode` (Site Preferences > Entities), or you can use the official Episode entity that was made available in {% include product %} [7.0.7.0](https://support.shotgunsoftware.com/hc/en-us/articles/220062367-7-0-Release-Notes#7_0_7_0). If you signed up for {% include product %} pre-7.0.7.0 (before 2017), the "TV Show" template uses `CustomEntity02` for Episodes. If you decide to use another entity that is not `CustomEntity02` or the official Episode entity, no worries! {% include product %} and Toolkit are flexible. Let's walk through both cases.
-For the purpose of this exercise, we will use Episode (`CustomEntity02`) and the official Episode entity as examples of how to incorporate Episodes with the project hierarchy update (you can use either/or). First, the way to properly set up our Project's **Episode > Sequence > Shot** hierarchy is to ensure the following fields are in {% include product %}:
+For the purpose of this exercise, we will use Episode (`CustomEntity02`) and the official Episode entity as examples of how to incorporate Episodes with the project hierarchy update (you can use either/or). First, the way to properly set up our Project's **Episode > Sequence > Shot** hierarchy is to ensure the following fields are in {% include product %}:
#### Episode
-a) **Using the official Episode entity:** the "Episode" entity may be the entity used when creating a TV Show project from the Project Template.
+a) **Using the official Episode entity:** the "Episode" entity may be the entity used when creating a TV Show project from the Project Template.

**OR**
-b) **Using a custom entity:** `CustomEntity02` may be the custom entity used when creating a TV Show project from the Project Template. _As noted previously, you can enable another custom entity and use it instead of `CustomEntity02`—just make sure to replace all `CustomEntity02`'s with the specific one that you have enabled._
+b) **Using a custom entity:** `CustomEntity02` may be the custom entity used when creating a TV Show project from the Project Template. _As noted previously, you can enable another custom entity and use it instead of `CustomEntity02`—just make sure to replace all `CustomEntity02`'s with the specific one that you have enabled._

@@ -1136,11 +1114,11 @@ b) **Using a custom entity:** `CustomEntity02` may be the custom entity used
A single entity link called Episode (`sg_episode`) that links to the Episode entity is required.
-**Using the official `Episode` Entity**
+**Using the official `Episode` Entity**
-**Using `CustomEntity02`**
+**Using `CustomEntity02`**
@@ -1148,17 +1126,17 @@ A single entity link called Episode (`sg_episode`) that links to the Episode ent
A single entity field called Sequence (`sg_sequence`) that links to the Sequence entity. This should already exist as part of the TV Show Project Template in {% include product %}.
-**Using the official `Episode` Entity**
+**Using the official `Episode` Entity**
-**Using `CustomEntity02`**
+**Using `CustomEntity02`**
### Toolkit schema definition
-Let's assume a hierarchy as follows (where the folders in `{}`s are dynamically named based on their name in {% include product %}):
+Let's assume a hierarchy as follows (where the folders in `{}`s are dynamically named based on their name in {% include product %}):
- {project_name}
- shots
@@ -1175,26 +1153,26 @@ Let's assume a hierarchy as follows (where the folders in `{}`s are dynamically
#### Episodes
-In your `config/core/schema/project/shots` folder, create a folder named `episode` with a corresponding `episode.yml` file in the same directory with the following content:
+In your `config/core/schema/project/shots` folder, create a folder named `episode` with a corresponding `episode.yml` file in the same directory with the following content:
-**Using the official `Episode` Entity**
+**Using the official `Episode` Entity**
# the type of dynamic content
type: "shotgun_entity"
-
+
# the {% include product %} field to use for the folder name
name: "code"
-
+
# the {% include product %} entity type to connect to
entity_type: "Episode"
-
+
# {% include product %} filters to apply when getting the list of items
- # this should be a list of dicts, each dict containing
+ # this should be a list of dicts, each dict containing
# three fields: path, relation and values
@@ -1203,92 +1181,91 @@ In your `config/core/schema/project/shots` folder, create a folder named `epi
# any values starting with $ are resolved into path objects
filters: [ { "path": "project", "relation": "is", "values": [ "$project" ] } ]
-**Using `CustomEntity02`**
+**Using `CustomEntity02`**
# the type of dynamic content
type: "shotgun_entity"
-
+
# the {% include product %} field to use for the folder name
name: "code"
-
+
# the {% include product %} entity type to connect to
entity_type: "CustomEntity02"
-
+
# {% include product %} filters to apply when getting the list of items
- # this should be a list of dicts, each dict containing
+ # this should be a list of dicts, each dict containing
# three fields: path, relation and values
# (this is std {% include product %} API syntax)
# any values starting with $ are resolved into path objects
- filters: [ { "path": "project", "relation": "is", "values": [ "$project" ] } ]
+ filters: [ { "path": "project", "relation": "is", "values": [ "$project" ] } ]
This tells Toolkit to create folders for every Episode in this project.
#### Sequence
-In your `config/core/schema/project/shots/episode` folder, create a folder named `sequence` with a corresponding `sequence.yml` file in the same directory with the following content:
+In your `config/core/schema/project/shots/episode` folder, create a folder named `sequence` with a corresponding `sequence.yml` file in the same directory with the following content:
# the type of dynamic content
type: "shotgun_entity"
-
+
# the {% include product %} field to use for the folder name
name: "code"
-
+
# the {% include product %} entity type to connect to
entity_type: "Sequence"
-
+
# {% include product %} filters to apply when getting the list of items
- # this should be a list of dicts, each dict containing
+ # this should be a list of dicts, each dict containing
# three fields: path, relation and values
# (this is std {% include product %} API syntax)
# any values starting with $ are resolved into path objects
- filters: [ { "path": "sg_episode", "relation": "is", "values": [ "$episode" ] } ]`
+ filters: [ { "path": "sg_episode", "relation": "is", "values": [ "$episode" ] } ]
This tells Toolkit to create folders for every Sequence that is linked to the Episode above it in the directory tree.
#### Shots
-In your `config/core/schema/project/shots/episode/sequence` folder, create a folder named `shot` with a corresponding `shot.yml` file in the same directory with the following content:
-
+In your `config/core/schema/project/shots/episode/sequence` folder, create a folder named `shot` with a corresponding `shot.yml` file in the same directory with the following content:
# the type of dynamic content
type: "shotgun_entity"
-
+
# the {% include product %} field to use for the folder name
name: "code"
-
+
# the {% include product %} entity type to connect to
entity_type: "Shot"
-
+
# {% include product %} filters to apply when getting the list of items
- # this should be a list of dicts, each dict containing
+ # this should be a list of dicts, each dict containing
# three fields: path, relation and values
# (this is std {% include product %} API syntax)
# any values starting with $ are resolved into path objects
- filters: [ { "path": "sg_sequence", "relation": "is", "values": [ "$sequence" ] } ]`
+ filters: [ { "path": "sg_sequence", "relation": "is", "values": [ "$sequence" ] } ]
This tells Toolkit to create folders for every Shot that is linked to the Sequence above it in the directory tree.
@@ -1298,24 +1275,24 @@ After you've done this, your schema should reflect the following:
#### Toolkit template definitions
-In order to tell Toolkit that you are using Episodes in your schema, you need to create a new key in the [keys section](https://support.shotgunsoftware.com/hc/en-us/articles/219039868#The%20Keys%20Section) at the top to define it:
+In order to tell Toolkit that you are using Episodes in your schema, you need to create a new key in the [keys section](https://support.shotgunsoftware.com/hc/en-us/articles/219039868#The%20Keys%20Section) at the top to define it:
-**Using the official `Episode` Entity**
+**Using the official `Episode` Entity**
keys:
...
Episode:
type: str
- ...
+ ...
-Then, in your template paths below, update the `shot_root` template, as well as any other template paths that are in the shot hierarchy, to match your episodic hierarchy by inserting `{Episode}` in the proper place to specify the episode in the directory structure:
+Then, in your template paths below, update the `shot_root` template, as well as any other template paths that are in the shot hierarchy, to match your episodic hierarchy by inserting `{Episode}` in the proper place to specify the episode in the directory structure:
...
paths:
shot_root: shots/{Episode}/{Sequence}/{Shot}/{Step}
- …
+ …
-**Using `CustomEntity02`**
+**Using `CustomEntity02`**
keys:
...
@@ -1323,47 +1300,45 @@ Then, in your template paths below, update the `shot_root` template, as well a
type: str
...
-Then, in your template paths below, update the `shot_root` template, as well as any other template paths that are in the shot hierarchy, to match your episodic hierarchy by inserting `{CustomEntity02}` in the proper place to specify the episode in the directory structure:
+Then, in your template paths below, update the `shot_root` template, as well as any other template paths that are in the shot hierarchy, to match your episodic hierarchy by inserting `{CustomEntity02}` in the proper place to specify the episode in the directory structure:
...
paths:
shot_root: shots/{CustomEntity02}/{Sequence}/{Shot}/{Step}
- …
+ …
-That's all you need for the basic **Episode > Sequence > Shot** workflow!
+That's all you need for the basic **Episode > Sequence > Shot** workflow!
## How can I set up a branch in my structure?
-This relates to [Different file system layouts for different Pipeline Steps](https://support.shotgunsoftware.com/hc/en-us/articles/219039868#Different%20file%20system%20layouts%20for%20different%20pipeline%20steps), more specifically, if you are looking to add a branch to your structure. For example, you can have one structure for "Pipeline Step A" and another for all other Pipeline Steps.
+This relates to [Different file system layouts for different Pipeline Steps](https://support.shotgunsoftware.com/hc/en-us/articles/219039868#Different%20file%20system%20layouts%20for%20different%20pipeline%20steps), more specifically, if you are looking to add a branch to your structure. For example, you can have one structure for "Pipeline Step A" and another for all other Pipeline Steps.
-Let's say you are adding another kind of [Asset Type](https://support.shotgunsoftware.com/hc/en-us/articles/219030738-Customizing-existing-fields) to your Pipeline, and that new Asset Type is a Vehicle. You want to change the file structure for Vehicles so that it has different folders for different Pipeline Steps; for example, "geoprep" and "lookdev", with additional folders inside each of those Pipeline Step folders. In parallel to this update, the way that you create Assets currently should remain the same. Let's walk through how to update your pipeline to accommodate this new flow.
+Let's say you are adding another kind of [Asset Type](https://support.shotgunsoftware.com/hc/en-us/articles/219030738-Customizing-existing-fields) to your Pipeline, and that new Asset Type is a Vehicle. You want to change the file structure for Vehicles so that it has different folders for different Pipeline Steps; for example, "geoprep" and "lookdev", with additional folders inside each of those Pipeline Step folders. Meanwhile, the way you currently create Assets should remain the same. Let's walk through how to update your pipeline to accommodate this new flow.
**Step 1: Modify the schema**
First, modify your schema to reflect the way your folder structure will look with the new Asset Type.
-- Start by creating a new branch in the schema for this new Asset Type: vehicle.
-- At the same level as `asset/` and `asset.yml`, add an `asset_vehicle/` folder and `asset_vehicle.yml`.
-- These YAML files also have a filter setting in them. Modify the filter in your `asset.yml` so that it applies to all assets _except for_ vehicle, and then modify `asset_vehicle.yml` to apply _only to_ assets of type vehicle. [Here is an example of what those filters look like](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Configuration-Reference#Different%20file%20system%20layouts%20for%20different%20pipeline%20steps).
-- Now that you have two folders to represent `asset` and `asset_vehicles`, add all the folders underneath `asset_vehicle` that you expect to be created for those assets (e.g., `geoprep`, `lookdev`, etc.).
-
-- If you are saving and publishing files for these assets, you'll want to create templates, in `core/templates.yml`, that describe the file paths for saved and publish files. For example, in addition to [`maya_asset_work`](https://github.com/shotgunsoftware/tk-config-default/blob/v0.17.3/core/templates.yml#L480), you may create a template called `maya_asset_work_vehicle`, and its definition will be the templated path where you want to save Maya work files for vehicle assets.
-
+- Start by creating a new branch in the schema for this new Asset Type: vehicle.
+- At the same level as `asset/` and `asset.yml`, add an `asset_vehicle/` folder and `asset_vehicle.yml`.
+- These YAML files also have a filter setting in them. Modify the filter in your `asset.yml` so that it applies to all assets _except for_ vehicle, and then modify `asset_vehicle.yml` to apply _only to_ assets of type vehicle. [Here is an example of what those filters look like](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-File-System-Configuration-Reference#Different%20file%20system%20layouts%20for%20different%20pipeline%20steps).
+- Now that you have two folders to represent `asset` and `asset_vehicles`, add all the folders underneath `asset_vehicle` that you expect to be created for those assets (e.g., `geoprep`, `lookdev`, etc.).
+- If you are saving and publishing files for these assets, you'll want to create templates, in `core/templates.yml`, that describe the file paths for saved and published files. For example, in addition to [`maya_asset_work`](https://github.com/shotgunsoftware/tk-config-default/blob/v0.17.3/core/templates.yml#L480), you may create a template called `maya_asset_work_vehicle`, and its definition will be the templated path where you want to save Maya work files for vehicle assets.
**Step 2: Create a new environment file**
At this point, you have a directory structure for the new Asset Type, and you have templates that describe where to save and publish files in the new directory structure. Now, you need to tell Toolkit when to use these new templates. To do this, create a new environment file.
-- Create a copy of `env/asset_step.yml` called `env/asset_vehicle_step.yml`. These two files will be identical, except `env/asset_vehicle_step.yml` will use your new templates. Replace any instances of `maya_asset_work` with `maya_asset_work_vehicle`. Do the same for any other vehicle templates you've created.
-- Finally, you'll need to teach Toolkit when to pick your new environment. To do this, modify the [pick_environment](https://github.com/shotgunsoftware/tk-config-default/blob/master/core/hooks/pick_environment.py) core hook to return `asset_vehicle` or `asset_vehicle_step` when the asset in context is of type `vehicle` . Now, when you are working with an Asset of this new type (vehicle), Toolkit will know to use its environment configuration, and to therefore save and publish files to its alternate file system structure.
+- Create a copy of `env/asset_step.yml` called `env/asset_vehicle_step.yml`. These two files will be identical, except `env/asset_vehicle_step.yml` will use your new templates. Replace any instances of `maya_asset_work` with `maya_asset_work_vehicle`. Do the same for any other vehicle templates you've created.
+- Finally, you'll need to teach Toolkit when to pick your new environment. To do this, modify the [pick_environment](https://github.com/shotgunsoftware/tk-config-default/blob/master/core/hooks/pick_environment.py) core hook to return `asset_vehicle` or `asset_vehicle_step` when the asset in context is of type `vehicle`. Now, when you are working with an Asset of this new type (vehicle), Toolkit will know to use its environment configuration, and therefore save and publish files to its alternate file system structure.
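The branching logic can be sketched like this. It is a standalone, hypothetical simplification: the real core hook's `execute` method receives a Toolkit context object and would need to look up the asset's type (e.g. its `sg_asset_type` field) from ShotGrid:

```python
def pick_environment(entity_type, step=None, asset_type=None):
    # Hypothetical stand-in for the pick_environment hook's branching:
    # vehicle assets get the new environment, everything else is unchanged.
    if entity_type == "Asset" and step:
        if asset_type == "vehicle":
            return "asset_vehicle_step"  # the new environment file
        return "asset_step"
    if entity_type == "Shot" and step:
        return "shot_step"
    return None
```

With logic along these lines in place, any context whose asset type is vehicle resolves to the `asset_vehicle_step` environment, while all other contexts keep their existing environments.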
## How can I create a custom Pipeline Step using a custom entity?
-In {% include product %} 7.0.6.0, [managing Pipeline Steps via the Admin menu](https://support.shotgunsoftware.com/hc/en-us/articles/222766227#managing_pipeline_steps) was introduced. With this feature, you can easily add custom fields to Pipeline Steps. **Pro Tip: In most cases, utilizing custom fields on Pipeline Steps helps keep your pipeline more organized than creating a custom entity to manage those Pipeline Steps.**
+In {% include product %} 7.0.6.0, [managing Pipeline Steps via the Admin menu](https://support.shotgunsoftware.com/hc/en-us/articles/222766227#managing_pipeline_steps) was introduced. With this feature, you can easily add custom fields to Pipeline Steps. **Pro Tip: In most cases, utilizing custom fields on Pipeline Steps helps keep your pipeline more organized than creating a custom entity to manage those Pipeline Steps.**
-However, in more advanced cases, it may be useful to have an alternative Pipeline Step. For instance, you might like to have the flexibility of different naming conventions and structures for production versus pipeline in the area of Pipeline Steps, as well as flexibility in naming and structuring them independently. While typically {% include product %}'s built-in Pipeline Steps are used for scheduling purposes, you may want to use another [Custom Entity](https://support.shotgunsoftware.com/hc/en-us/articles/114094182834) to structure the file system and group individual tasks together in the pipeline. You can accomplish this by creating a custom link field from a Task to a custom entity. This is then used by the system to group tasks together, via the step node.
+However, in more advanced cases, it may be useful to have an alternative Pipeline Step. For instance, you might want different naming conventions and structures for Pipeline Steps in production versus pipeline work, with the flexibility to name and structure them independently. While {% include product %}'s built-in Pipeline Steps are typically used for scheduling purposes, you may want to use another [Custom Entity](https://support.shotgunsoftware.com/hc/en-us/articles/114094182834) to structure the file system and group individual tasks together in the pipeline. You can accomplish this by creating a custom link field from a Task to a custom entity. This is then used by the system to group tasks together, via the step node.
In the folder configuration, add two special options to tell it to use your custom step setup rather than {% include product %}'s built-in Pipeline Step:
entity_type: "CustomNonProjectEntity05"
- task_link_field: "sg_task_type"
\ No newline at end of file
+ task_link_field: "sg_task_type"
diff --git a/docs/en/guides/pipeline-integrations/administration/integrations-admin-guide.md b/docs/en/guides/pipeline-integrations/administration/integrations-admin-guide.md
index f9abcb7c3..4d2f354a7 100644
--- a/docs/en/guides/pipeline-integrations/administration/integrations-admin-guide.md
+++ b/docs/en/guides/pipeline-integrations/administration/integrations-admin-guide.md
@@ -9,7 +9,7 @@ lang: en
## Introduction
-This document serves as a guide for administrators of {% include product %} integrations. It's one of three: user, admin, and developer. Our [User Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574) is intended for artists who will be the end users of {% include product %} integrations in their daily workflow, and our [Developer Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067513) is technical documentation for those writing Python code to extend the functionality. This document falls between those two: it's intended for those who are implementing {% include product %} integrations for a studio, managing software versions, and making storage decisions for published files.
+This document serves as a guide for administrators of {% include product %} integrations. It's one of three: user, admin, and developer. Our [User Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574) is intended for artists who will be the end users of {% include product %} integrations in their daily workflow, and our [Developer Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067513) is technical documentation for those writing Python code to extend the functionality. This document falls between those two: it's intended for those who are implementing {% include product %} integrations for a studio, managing software versions, and making storage decisions for published files.
## Standard Pipeline Configurations
@@ -17,23 +17,23 @@ At the heart of any Toolkit setup is the Pipeline Configuration, a set of YAML f
### The Basic Config
-Our out-of-the-box integrations are designed to run without the need to set up or modify any configuration files. When you use our out-of-the-box integrations, there's nothing to administer, but Toolkit uses an implied Pipeline Configuration under the hood, and we call this Pipeline Configuration the Basic Config. The Basic Config makes three Toolkit apps – The Panel, Publisher, and Loader – available in all supported software packages, and looks to your Software Entities in {% include product %} to determine which software packages to display in {% include product %} Desktop. The Basic Config does not include filesystem location support. When you use out-of-the-box integrations on a project, your copy of the Basic Config is auto-updated whenever you launch Desktop, so you'll always have the latest version of our integrations. You can [subscribe to release notes here](https://support.shotgunsoftware.com/hc/en-us/sections/115000020494-Integrations), and [see the Basic Config in Github here](https://github.com/shotgunsoftware/tk-config-basic/).
+Our out-of-the-box integrations are designed to run without the need to set up or modify any configuration files. When you use our out-of-the-box integrations, there's nothing to administer, but Toolkit uses an implied Pipeline Configuration under the hood, and we call this Pipeline Configuration the Basic Config. The Basic Config makes three Toolkit apps – The Panel, Publisher, and Loader – available in all supported software packages, and looks to your Software Entities in {% include product %} to determine which software packages to display in {% include product %} Desktop. The Basic Config does not include filesystem location support. When you use out-of-the-box integrations on a project, your copy of the Basic Config is auto-updated whenever you launch Desktop, so you'll always have the latest version of our integrations. You can [subscribe to release notes here](https://support.shotgunsoftware.com/hc/en-us/sections/115000020494-Integrations), and [see the Basic Config in Github here](https://github.com/shotgunsoftware/tk-config-basic/).
### The Default Config
-This is the default starting point for our Advanced project setup. It includes [filesystem location support](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference) and a wider array of Toolkit apps and engines.
+This is the default starting point for our Advanced project setup. It includes [filesystem location support](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference) and a wider array of Toolkit apps and engines.
-You can [see the Default Config in Github here](https://github.com/shotgunsoftware/tk-config-default2). For a detailed description of the Default Config's structure, see the `config/env/README.md` file in your Pipeline Configuration, or [view it here in Github](https://github.com/shotgunsoftware/tk-config-default2/blob/master/env/README.md).
+You can [see the Default Config in Github here](https://github.com/shotgunsoftware/tk-config-default2). For a detailed description of the Default Config's structure, see the `config/env/README.md` file in your Pipeline Configuration, or [view it here in Github](https://github.com/shotgunsoftware/tk-config-default2/blob/master/env/README.md).
-If you're familiar with the old structure of the Default Config, take a look at the [Default Config Update FAQ](https://support.shotgunsoftware.com/hc/en-us/community/posts/115003376154-Default-Configuration-Update-FAQ).
+If you're familiar with the old structure of the Default Config, take a look at the [Default Config Update FAQ](https://support.shotgunsoftware.com/hc/en-us/community/posts/115003376154-Default-Configuration-Update-FAQ).
{% include info title="Note" content="Looking for the old config structure? With the v1.1 release of Integrations, we reorganized the structure of the Default Config to help maximize efficiency and readability, and to make it match the Basic Config's structure more closely. You can still base projects on the legacy Default Config. Just choose 'Legacy Default' when prompted to select a configuration in the Desktop Set Up Project Wizard." %}
## The Publisher
-The Publisher is designed to ease the transition between the out-of-the-box workflow and the full pipeline configuration. In the out-of-the-box setup, files are published in place, which avoids the need to define templates or filesystem schema. Once a project has gone through the advanced setup and has a full Pipeline Configuration, the same publish plugins will recognize the introduction of templates to the app settings and begin copying files to their designated publish location prior to publishing. Studios can therefore introduce template-based settings on a per-environment or per-DCC basis as needed for projects with full configurations. The Default Config comes fully configured for template-based workflows and is a good reference to see how templates can be configured for the Publish app. See the [tk-multi-publish2.yml file](https://github.com/shotgunsoftware/tk-config-default2/blob/master/env/includes/settings/tk-multi-publish2.yml) in the Default Config in Github for more info.
+The Publisher is designed to ease the transition between the out-of-the-box workflow and the full pipeline configuration. In the out-of-the-box setup, files are published in place, which avoids the need to define templates or filesystem schema. Once a project has gone through the advanced setup and has a full Pipeline Configuration, the same publish plugins will recognize the introduction of templates to the app settings and begin copying files to their designated publish location prior to publishing. Studios can therefore introduce template-based settings on a per-environment or per-DCC basis as needed for projects with full configurations. The Default Config comes fully configured for template-based workflows and is a good reference to see how templates can be configured for the Publish app. See the [tk-multi-publish2.yml file](https://github.com/shotgunsoftware/tk-config-default2/blob/master/env/includes/settings/tk-multi-publish2.yml) in the Default Config in Github for more info.
-For details on writing plugins for the Publisher, see the [Publisher section of our Developer Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067513-Integrations-Developer-Guide#Publisher).
+For details on writing plugins for the Publisher, see the [Publisher section of our Developer Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067513-Integrations-Developer-Guide#Publisher).
## Configuring software launches
@@ -41,22 +41,22 @@ It’s simple to rely on {% include product %}’s auto-detection of host applic
When you create a new {% include product %} site, it will have a set of default Software entities—one for each supported host application. You can modify these and add your own to manage the software that shows up in Desktop exactly how you want it.
-To see your Software entities in {% include product %}, open the Admin menu by clicking on the profile icon in the upper right corner of the screen, and choose `Software`.
+To see your Software entities in {% include product %}, open the Admin menu by clicking on the profile icon in the upper right corner of the screen, and choose `Software`.
The Software entity has the following fields:
-- **Software Name:** The display name of the Software in Desktop.
-- **Thumbnail:** Uploaded image file for Desktop icon.
-- **Status:** Controls whether or not the Software is available.
-- **Engine:** The name of the integration for the content creation tool.
-- **Products:** For Software packages that include variants (e.g., Houdini vs. Houdini FX), you can specify a comma separated list here. Valid only in auto-detect mode, not manual mode.
-- **Versions:** Specific versions of the software to display. You can specify a comma separated list here. Valid only in auto-detect mode, not manual mode.
-- **Group:** Entities with the same value for the `Group` field will be grouped under a single icon in Desktop and a single menu in {% include product %}. For example, you could create an FX group that includes Houdini and Nuke.
-- **Group Default**: When one member of a group has `Group Default` checked, clicking the icon or menu item for the group will launch this software.
-- **Projects:** A way to restrict software to certain projects.
-- **User Restrictions:** A way to restrict software to certain users or groups.
-- **Linux/Mac/Windows Path:** Use these fields to explicitly specify an OS-specific path to software.
-- **Linux/Mac/Windows Args:** Commandline args to append to the command when launching the Software.
+- **Software Name:** The display name of the Software in Desktop.
+- **Thumbnail:** Uploaded image file for Desktop icon.
+- **Status:** Controls whether or not the Software is available.
+- **Engine:** The name of the integration for the content creation tool.
+- **Products:** For Software packages that include variants (e.g., Houdini vs. Houdini FX), you can specify a comma-separated list here. Valid only in auto-detect mode, not manual mode.
+- **Versions:** Specific versions of the software to display. You can specify a comma-separated list here. Valid only in auto-detect mode, not manual mode.
+- **Group:** Entities with the same value for the `Group` field will be grouped under a single icon in Desktop and a single menu in {% include product %}. For example, you could create an FX group that includes Houdini and Nuke.
+- **Group Default**: When one member of a group has `Group Default` checked, clicking the icon or menu item for the group will launch this software.
+- **Projects:** A way to restrict software to certain projects.
+- **User Restrictions:** A way to restrict software to certain users or groups.
+- **Linux/Mac/Windows Path:** Use these fields to explicitly specify an OS-specific path to software.
+- **Linux/Mac/Windows Args:** Command-line arguments to append to the command when launching the Software.
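As a concrete illustration of how these fields fit together, here is what a manual-mode Maya entry might look like expressed as a Python dict, as you might pass it to the `shotgun_api3` Python API. The internal field codes and install paths below are assumptions for illustration only; check your own site's schema before relying on them.

```python
# Sketch of the field values for a manual-mode Software entity, expressed as
# a plain dict. The field codes are assumptions -- confirm them against your
# site's schema (e.g. via shotgun_api3's schema_field_read("Software")).
maya_2018 = {
    "code": "Maya 2018",      # Software Name shown in Desktop
    "engine": "tk-maya",      # Engine (the Toolkit integration to use)
    "group_name": "Maya",     # Group: shares one Desktop icon with other Maya entries
    "group_default": False,   # Group Default: not the version the icon launches
    "linux_path": "/opt/autodesk/maya2018/bin/maya",
    "mac_path": "/Applications/Autodesk/maya2018/Maya.app",
    # No Windows path: in manual mode, this entry will not appear on Windows.
}

# With a connected shotgun_api3.Shotgun handle `sg`, the entity could then
# be created with: sg.create("Software", maya_2018)
print(sorted(maya_2018))
```

Because a path field is set, this entity is in manual mode, so it is shown only on operating systems where its path exists.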
We can learn a lot about how these fields work together by demonstrating some ways of using them.
@@ -70,8 +70,8 @@ If these three versions of Maya are installed in the standard location on your f
A few things to note here:
-- When {% include product %} auto-detects your software, a single Software entity generates the menu items for all versions.
-- None of the Path fields have values specified. The Software entity is in auto-detect mode, so the App is assumed to be in the standard location.
+- When {% include product %} auto-detects your software, a single Software entity generates the menu items for all versions.
+- None of the Path fields have values specified. The Software entity is in auto-detect mode, so the App is assumed to be in the standard location.
These will show up in Desktop as you see here: one icon for Maya, with a drop-down listing all the available versions. If you click on the icon itself, you’ll launch the latest version of Maya.
@@ -83,22 +83,22 @@ It’s perfectly fine to store Maya in a non-standard location in your studio. Y
Some notes here:
-- Unlike in auto-detect mode, here you have a Software entity for each version of a given software package.
-- In order to group them together, use the `Group` and `Group Default` fields. Software entities that share the same value for `Group` will be grouped in Desktop in a dropdown under a single icon, which uses the `Group` value as its name.
-- When you click on that icon itself, you’ll launch the software within the group with `Group Default`checked.
-- **When you specify a value for _any_ of Linux Path, Mac Path, or Windows Path on a Software entity, that entity will shift to Manual mode.** Unlike auto-detect mode, where the software _would_ show up in Desktop when a path field is empty, in manual mode, a software package will _only_ show up on a given operating system if a path is specified for it and the file exists at the specified path.
-- In this example, none of the three Maya versions would show up in Desktop on Windows because no `Windows Path` has been specified.
+- Unlike in auto-detect mode, here you have a Software entity for each version of a given software package.
+- In order to group them together, use the `Group` and `Group Default` fields. Software entities that share the same value for `Group` will be grouped in Desktop in a dropdown under a single icon, which uses the `Group` value as its name.
+- When you click on that icon itself, you’ll launch the software within the group with `Group Default` checked.
+- **When you specify a value for _any_ of Linux Path, Mac Path, or Windows Path on a Software entity, that entity will shift to Manual mode.** Unlike auto-detect mode, where the software _would_ show up in Desktop when a path field is empty, in manual mode, a software package will _only_ show up on a given operating system if a path is specified for it and the file exists at the specified path.
+- In this example, none of the three Maya versions would show up in Desktop on Windows because no `Windows Path` has been specified.
### Example: Restrict by users or groups
-Now, say with that last example that we’re not ready to make Maya 2018 available to all users just yet. But we do want TDs, Devs, and our QA engineer, Tessa Tester, to be able to access it. We can achieve this with the `User Restrictions` field. Here’s an example:
+Now, say with that last example that we’re not ready to make Maya 2018 available to all users just yet. But we do want TDs, Devs, and our QA engineer, Tessa Tester, to be able to access it. We can achieve this with the `User Restrictions` field. Here’s an example:

We made a couple changes from the last example:
-- The group default is now Maya 2017. We want that to be the production version, so with that box checked, clicking the icon for Maya will now go to this version.
-- We’ve added a few values to the `User Restrictions` field: It can take both users and groups, and we’ve added our Dev and TD groups, as well as the user Tessa Tester. Now, only those people will see Maya 2018 in Desktop.
+- The group default is now Maya 2017. We want that to be the production version, so with that box checked, clicking the icon for Maya will now go to this version.
+- We’ve added a few values to the `User Restrictions` field: It can take both users and groups, and we’ve added our Dev and TD groups, as well as the user Tessa Tester. Now, only those people will see Maya 2018 in Desktop.
### Example: Restrict software versions by project
@@ -108,100 +108,94 @@ Sometimes you want to do more complex version management across projects in your
A few important things to note:
-- We’ve removed the `Group` and `Group Default` values here, as only one Maya version will ever show up in Desktop for a given environment.
-- We’ve set the `Software Name` for all three versions to “Maya”. This way, on every project, users will have an icon with the same name, but it will point to different versions depending on what’s configured here.
-- We’ve set Maya 2016’s `Status` field to `Disabled`. We are no longer using this version in our studio, and this field toggles global visibility across all projects.
-- We’ve specified values for `Projects` for Maya 2017 and Maya 2018. This `Projects` field acts as a restriction. Maya 2017 will _only_ show up in the Chicken Planet project, and Maya 2018 will only show up in Chicken Planet II.
-- Note that once you’ve specified a value for `Projects` for a Software entity, that Software will only show up in the projects you've specified. So, if you have other projects in your studio in addition to the Chicken Planet series, you’ll need to specify software for them explicitly.
+- We’ve removed the `Group` and `Group Default` values here, as only one Maya version will ever show up in Desktop for a given environment.
+- We’ve set the `Software Name` for all three versions to “Maya”. This way, on every project, users will have an icon with the same name, but it will point to different versions depending on what’s configured here.
+- We’ve set Maya 2016’s `Status` field to `Disabled`. We are no longer using this version in our studio, and this field toggles global visibility across all projects.
+- We’ve specified values for `Projects` for Maya 2017 and Maya 2018. This `Projects` field acts as a restriction. Maya 2017 will _only_ show up in the Chicken Planet project, and Maya 2018 will only show up in Chicken Planet II.
+- Note that once you’ve specified a value for `Projects` for a Software entity, that Software will only show up in the projects you've specified. So, if you have other projects in your studio in addition to the Chicken Planet series, you’ll need to specify software for them explicitly.
### Example: Add your own Software
There are several reasons you might add a new software entity in addition to those that {% include product %} Desktop has auto-detected on your system:
-- You want to make an application for which there is no engine available to your users through Desktop.
-- You have in-house software, or third-party software that we don’t have an integration for, for which you’ve written your own engine.
-- Your software doesn’t live in a standard location, so you want to point {% include product %} to it manually. (This case was described in the “Grouping versions of the same Application, Manual mode” example above.)
+- You want to make an application for which there is no engine available to your users through Desktop.
+- You have in-house software, or third-party software that we don’t have an integration for, for which you’ve written your own engine.
+- Your software doesn’t live in a standard location, so you want to point {% include product %} to it manually. (This case was described in the “Grouping versions of the same Application, Manual mode” example above.)
-In these cases, you can add your own Software entities. You'll need to have a value for the `Software Name`field. If you're using an in-house engine for your software, specify the engine name in the `Engine` field. Some studios may want to include apps in Desktop that don’t have {% include product %} integrations, as a convenience for artists. Your artists can launch the app straight from Desktop. You can even use all of the settings above to manage versions and usage restrictions. In this case, leave the `Engine` field empty, but you'll need to specify a value for at least one of `Mac Path`, `Linux Path`, and `Windows Path`.
+In these cases, you can add your own Software entities. You'll need to have a value for the `Software Name` field. If you're using an in-house engine for your software, specify the engine name in the `Engine` field. Some studios may want to include apps in Desktop that don’t have {% include product %} integrations, as a convenience for artists. Your artists can launch the app straight from Desktop. You can even use all of the settings above to manage versions and usage restrictions. In this case, leave the `Engine` field empty, but you'll need to specify a value for at least one of `Mac Path`, `Linux Path`, and `Windows Path`.
## Configuring published file path resolution
-When you publish a file, the Publisher creates a PublishedFile entity in {% include product %}, which includes a [File/Link](https://support.shotgunsoftware.com/hc/en-us/articles/219031008-Field-types) field called `Path`. Later on, a different user may try to load this file into their own work session using the Loader. The Loader uses complex logic to resolve a valid local path to the PublishedFile across operating systems.
+When you publish a file, the Publisher creates a PublishedFile entity in {% include product %}, which includes a [File/Link](https://support.shotgunsoftware.com/hc/en-us/articles/219031008-Field-types) field called `Path`. Later on, a different user may try to load this file into their own work session using the Loader. The Loader uses complex logic to resolve a valid local path to the PublishedFile across operating systems.
-The way in which the Loader attempts to resolve the publish data into a path depends on whether the the publish is associated with a local file link or a `file://` URL.
+The way in which the Loader attempts to resolve the publish data into a path depends on whether the publish is associated with a local file link or a `file://` URL.
### Resolving local file links
-Local file links are generated automatically at publish time if the path you are publishing matches any local storage defined in the {% include product %} Site Preferences. If the publish is a local file link, its local operating system representation will be used. Read more about local file links [here](https://support.shotgunsoftware.com/hc/en-us/articles/219030938-Linking-to-local-files).
+Local file links are generated automatically at publish time if the path you are publishing matches any local storage defined in the {% include product %} Site Preferences. If the publish is a local file link, its local operating system representation will be used. Read more about local file links [here](https://support.shotgunsoftware.com/hc/en-us/articles/219030938-Linking-to-local-files).
-If a local storage doesn’t define a path for the operating system you are currently using, you can use an environment variable to specify your local storage root. The name of the environment variable should take the form of `SHOTGUN_PATH__`. So, if you wanted to define a path on a Mac for a storage root called "Renders", you'd create a `SHOTGUN_PATH_MAC_RENDERS` environment variable. Let's go deeper with that example:
+If a local storage doesn’t define a path for the operating system you are currently using, you can use an environment variable to specify your local storage root. The name of the environment variable should take the form of `SHOTGUN_PATH_<OS>_<STORAGE_NAME>`. So, if you wanted to define a path on a Mac for a storage root called "Renders", you'd create a `SHOTGUN_PATH_MAC_RENDERS` environment variable. Let's go deeper with that example:
-- Say your {% include product %} site has a storage root called "Renders", with the following paths specified:
-- Linux path: `/studio/renders/`
-- Windows path: `S:\renders\`
-- Mac path: ``
-
-- You are on a Mac.
-
-- You want to load a publish with the path `/studio/renders/sq100/sh001/bg/bg.001.exr` into your session.
+- Say your {% include product %} site has a storage root called "Renders", with the following paths specified:
+- Linux path: `/studio/renders/`
+- Windows path: `S:\renders\`
+- Mac path: *(not set)*
+- You are on a Mac.
+- You want to load a publish with the path `/studio/renders/sq100/sh001/bg/bg.001.exr` into your session.
-The Loader can parse the path and deduce that `/studio/renders/` is the storage root part of it, but no storage root is defined for Mac. So, it will look for a `SHOTGUN_PATH_MAC_RENDERS` environment variable, and if it finds one, it will replace `/studio/renders` in the path with its value.
+The Loader can parse the path and deduce that `/studio/renders/` is the storage root part of it, but no storage root is defined for Mac. So, it will look for a `SHOTGUN_PATH_MAC_RENDERS` environment variable, and if it finds one, it will replace `/studio/renders` in the path with its value.
-**Note:** If you define a `SHOTGUN_PATH_MAC_RENDERS` environment variable, and the local storage Renders _does_have Mac path set, the local storage value will be used and a warning will be logged.
+**Note:** If you define a `SHOTGUN_PATH_MAC_RENDERS` environment variable, and the local storage Renders _does_ have a Mac path set, the local storage value will be used and a warning will be logged.
-**Note:** If no storage can be resolved for the current operating system, a `PublishPathNotDefinedError` is raised.
+**Note:** If no storage can be resolved for the current operating system, a `PublishPathNotDefinedError` is raised.
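The fallback described above can be sketched in a few lines of Python. This is an illustration of the rule, not Toolkit's actual implementation; the function and variable names are ours.

```python
import os

def resolve_storage_path(path, storage_name, storage_roots, current_os):
    """Sketch: if the local storage has no root for the current OS, fall
    back to a SHOTGUN_PATH_<OS>_<STORAGE_NAME> environment variable."""
    root = storage_roots.get(current_os)
    if not root:
        root = os.environ.get(
            "SHOTGUN_PATH_%s_%s" % (current_os.upper(), storage_name.upper())
        )
    if not root:
        # Toolkit raises PublishPathNotDefinedError in this situation.
        raise RuntimeError("No storage root defined for %s" % current_os)
    # Swap the storage root the path was recorded with (Linux, in this
    # example) for the root resolved for the current operating system.
    recorded_root = storage_roots["linux"].rstrip("/")
    return root.rstrip("/") + path[len(recorded_root):]

# The "Renders" storage from the example above: no Mac path defined.
renders = {"linux": "/studio/renders/", "windows": "S:\\renders\\", "mac": None}
os.environ["SHOTGUN_PATH_MAC_RENDERS"] = "/Volumes/renders"
print(resolve_storage_path(
    "/studio/renders/sq100/sh001/bg/bg.001.exr", "Renders", renders, "mac"
))
# -> /Volumes/renders/sq100/sh001/bg/bg.001.exr
```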
### Resolving file URLs
-The Loader also supports the resolution of `file://` URLs. At publish time, if the path you are publishing does not match any of your site's local storages, the path is saved as a `file://` URL. Contrary to local file links, these paths are not stored in a multi-OS representation, but are just defined for the operating system where they were created.
+The Loader also supports the resolution of `file://` URLs. At publish time, if the path you are publishing does not match any of your site's local storages, the path is saved as a `file://` URL. Contrary to local file links, these paths are not stored in a multi-OS representation, but are just defined for the operating system where they were created.
-If you are trying to resolve a `file://` URL on a different operating system from the one where where the URL was created, the Loader will attempt to resolve it into a valid path using a series of approaches:
+If you are trying to resolve a `file://` URL on a different operating system from the one where the URL was created, the Loader will attempt to resolve it into a valid path using a series of approaches:
-- First, it will look for the three environment variables `SHOTGUN_PATH_WINDOWS`, `SHOTGUN_PATH_MAC`, and `SHOTGUN_PATH_LINUX`. If these are defined, the method will attempt to translate the path this way. For example, if you are trying to resolve `file:///prod/proj_x/assets/bush/file.txt` on Windows, you could set up `SHOTGUN_PATH_WINDOWS=P:\prod` and `SHOTGUN_PATH_LINUX=/prod` in order to hint the way the path should be resolved.
-- If you want to use more than one set of environment variables, in order to represent multiple storages, this is possible by extending the above variable name syntax with a suffix:
-- If you have a storage for renders, you could for example define `SHOTGUN_PATH_LINUX_RENDERS`, `SHOTGUN_PATH_MAC_RENDERS`, and `SHOTGUN_PATH_WINDOWS_RENDERS` in order to provide a translation mechanism for all `file://` URLs published that refer to data inside your render storage.
-- Then, if you also have a storage for editorial data, you could define `SHOTGUN_PATH_LINUX_EDITORIAL`, `SHOTGUN_PATH_MAC_EDITORIAL`, and `SHOTGUN_PATH_WINDOWS_EDITORIAL`, in order to provide a translation mechanism for your editorial storage roots.
+- First, it will look for the three environment variables `SHOTGUN_PATH_WINDOWS`, `SHOTGUN_PATH_MAC`, and `SHOTGUN_PATH_LINUX`. If these are defined, the method will attempt to translate the path this way. For example, if you are trying to resolve `file:///prod/proj_x/assets/bush/file.txt` on Windows, you could set up `SHOTGUN_PATH_WINDOWS=P:\prod` and `SHOTGUN_PATH_LINUX=/prod` in order to hint the way the path should be resolved.
+- If you want to use more than one set of environment variables, in order to represent multiple storages, this is possible by extending the above variable name syntax with a suffix:
+- If you have a storage for renders, you could for example define `SHOTGUN_PATH_LINUX_RENDERS`, `SHOTGUN_PATH_MAC_RENDERS`, and `SHOTGUN_PATH_WINDOWS_RENDERS` in order to provide a translation mechanism for all `file://` URLs published that refer to data inside your render storage.
+- Then, if you also have a storage for editorial data, you could define `SHOTGUN_PATH_LINUX_EDITORIAL`, `SHOTGUN_PATH_MAC_EDITORIAL`, and `SHOTGUN_PATH_WINDOWS_EDITORIAL`, in order to provide a translation mechanism for your editorial storage roots.
Once you have standardized on these environment variables, you could consider converting them into a {% include product %} local storage. Once they are defined in the {% include product %} preferences, they will be automatically picked up and no environment variables will be needed.
-- In addition to the above, all local storages defined in the {% include product %} preferences will be handled the same way.
-- If a local storage has been defined, but an operating system is missing, this can be supplied via an environment variable. For example, if there is a local storage named `Renders` that is defined on Linux and Windows, you can extend to support mac by creating an environment variable named `SHOTGUN_PATH_MAC_RENDERS`. The general syntax for this is `SHOTGUN_PATH__`.
-- If no root matches, the file path will be returned as is.
+- In addition to the above, all local storages defined in the {% include product %} preferences will be handled the same way.
+- If a local storage has been defined, but an operating system is missing, this can be supplied via an environment variable. For example, if there is a local storage named `Renders` that is defined on Linux and Windows, you can extend it to support Mac by creating an environment variable named `SHOTGUN_PATH_MAC_RENDERS`. The general syntax for this is `SHOTGUN_PATH_<OS>_<STORAGE_NAME>`.
+- If no root matches, the file path will be returned as is.
Here's an example:
-Say you've published the file `/projects/some/file.txt` on Linux, and a {% include product %} publish with the URL `file:///projects/some/file.txt` was generated. In your studio, the Linux path `/projects` equates to `Q:\projects` on Windows, and hence you expect the full path to be translated to `Q:\projects\some\file.txt`.
+Say you've published the file `/projects/some/file.txt` on Linux, and a {% include product %} publish with the URL `file:///projects/some/file.txt` was generated. In your studio, the Linux path `/projects` equates to `Q:\projects` on Windows, and hence you expect the full path to be translated to `Q:\projects\some\file.txt`.
All of the following setups would handle this:
-- A general environment-based override:
-- `SHOTGUN_PATH_LINUX=/projects`
-- `SHOTGUN_PATH_WINDOWS=Q:\projects`
-- `SHOTGUN_PATH_MAC=/projects`
-
-- A {% include product %} local storage called “Projects”, set up with:
-
-- Linux Path: `/projects`
-- Windows Path: `Q:\projects`
-- Mac Path: `/projects`
-
-- A {% include product %} local storage called “Projects”, augmented with an environment variable:
-
-- Linux Path: `/projects`
-- `Windows Path:``
-- `Mac Path:`/projects`
-- `SHOTGUN_PATH_WINDOWS_PROJECTS=Q:\projects`
-
-**Note:** If you have a local storage `Renders` defined in {% include product %} with `Linux path` set, and also a `SHOTGUN_PATH_LINUX_RENDERS` environment variable defined, the storage will take precedence, the environment variable will be ignored, and a warning will be logged. Generally speaking, local storage definitions always take precedence over environment variables.
+- A general environment-based override:
+- `SHOTGUN_PATH_LINUX=/projects`
+- `SHOTGUN_PATH_WINDOWS=Q:\projects`
+- `SHOTGUN_PATH_MAC=/projects`
+- A {% include product %} local storage called “Projects”, set up with:
+- Linux Path: `/projects`
+- Windows Path: `Q:\projects`
+- Mac Path: `/projects`
+- A {% include product %} local storage called “Projects”, augmented with an environment variable:
+- Linux Path: `/projects`
+- Windows Path: *(not set)*
+- Mac Path: `/projects`
+- `SHOTGUN_PATH_WINDOWS_PROJECTS=Q:\projects`
+
+**Note:** If you have a local storage `Renders` defined in {% include product %} with `Linux path` set, and also a `SHOTGUN_PATH_LINUX_RENDERS` environment variable defined, the storage will take precedence, the environment variable will be ignored, and a warning will be logged. Generally speaking, local storage definitions always take precedence over environment variables.
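The first setup (the plain `SHOTGUN_PATH_*` variables) can be sketched in Python as follows. This illustrates the translation rule only; it is not the Loader's real code, and the function name is ours.

```python
import os
from urllib.parse import urlparse

def translate_file_url(url, current_os):
    """Sketch: translate a file:// URL recorded on one OS into a path for
    the current OS using the SHOTGUN_PATH_<OS> environment variables."""
    path = urlparse(url).path
    prefixes = {
        name: os.environ.get("SHOTGUN_PATH_%s" % name.upper())
        for name in ("linux", "mac", "windows")
    }
    target = prefixes.get(current_os)
    if target:
        for prefix in prefixes.values():
            if prefix and path.startswith(prefix):
                tail = path[len(prefix):]
                if current_os == "windows":
                    tail = tail.replace("/", "\\")
                return target + tail
    return path  # no root matched: return the path as-is

os.environ["SHOTGUN_PATH_LINUX"] = "/projects"
os.environ["SHOTGUN_PATH_WINDOWS"] = "Q:\\projects"
print(translate_file_url("file:///projects/some/file.txt", "windows"))
# -> Q:\projects\some\file.txt
```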
### Advanced configuration
-For information on the underlying method that performs the resolution of PublishedFile paths, take a look at our [developer reference docs](http://developer.shotgridsoftware.com/tk-core/utils.html#sgtk.util.resolve_publish_path).
+For information on the underlying method that performs the resolution of PublishedFile paths, take a look at our [developer reference docs](http://developer.shotgridsoftware.com/tk-core/utils.html#sgtk.util.resolve_publish_path).
-If you are using Advanced Project Setup, you can add support beyond local file links and `file://` URLs by customizing the `resolve_publish` core hook. Possible customizations include:
+If you are using Advanced Project Setup, you can add support beyond local file links and `file://` URLs by customizing the `resolve_publish` core hook. Possible customizations include:
-- Publishes with associated uploaded files could be automatically downloaded into an appropriate cache location by the core hook and the path would be be returned.
-- Custom URL schemes (such as `perforce://`) could be resolved into local paths.
+- Publishes with associated uploaded files could be automatically downloaded into an appropriate cache location by the core hook and the path would be returned.
+- Custom URL schemes (such as `perforce://`) could be resolved into local paths.
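As an illustration of the second idea, the custom-scheme branch of such a hook might contain logic like the following. This is a standalone sketch; in a real core hook the logic would live inside the hook's resolve method (see tk-core's `resolve_publish` hook for the actual class and method names), and the `perforce://` mapping shown here is entirely hypothetical.

```python
from urllib.parse import urlparse

def resolve_custom_scheme(url, workspace_root="/p4ws"):
    """Map a hypothetical perforce:// URL onto a local workspace path."""
    parsed = urlparse(url)
    if parsed.scheme == "perforce":
        # e.g. perforce://depot/proj/file.ma -> /p4ws/depot/proj/file.ma
        return workspace_root + "/" + parsed.netloc + parsed.path
    return None  # not ours: fall back to the default local-file handling

print(resolve_custom_scheme("perforce://depot/proj/file.ma"))
# -> /p4ws/depot/proj/file.ma
```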
## Browser Integration
@@ -227,7 +221,7 @@ Hosting a websocket server within the {% include product %} Desktop app was, and
**Websockets v2 via {% include product %} Desktop**
-The second iteration of the websocket server’s RPC API changes the underlying mechanism used to get, cache, and execute Toolkit actions. This implementation addresses a number of performance issues related to the earlier browser integration implementations, improves the visual organization of the action menus, and adds support for [out-of-the-box {% include product %} Integrations](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574-User-Guide#Getting%20started%20with%20Shotgun%20Desktop), which work without explicitly configuring Toolkit. This is the current implementation of browser integration.
+The second iteration of the websocket server’s RPC API changes the underlying mechanism used to get, cache, and execute Toolkit actions. This implementation addresses a number of performance issues related to the earlier browser integration implementations, improves the visual organization of the action menus, and adds support for [out-of-the-box {% include product %} Integrations](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574-User-Guide#Getting%20started%20with%20Shotgun%20Desktop), which work without explicitly configuring Toolkit. This is the current implementation of browser integration.
### Configuration
@@ -235,23 +229,23 @@ To control what actions are presented to the user for each entity type, you modi
**Which engine configuration?**
-The Toolkit engine that manages Toolkit actions within the {% include product %} web app is `tk-shotgun`, so it’s this engine’s configuration that controls what shows up in the action menus.
+The Toolkit engine that manages Toolkit actions within the {% include product %} web app is `tk-shotgun`, so it’s this engine’s configuration that controls what shows up in the action menus.

-In the above example from [tk-config-basic](https://github.com/shotgunsoftware/tk-config-basic/), there are two apps configured that will result in a number of engine commands turned into menu actions. Toolkit apps will register commands that are to be included in the action menu, including launcher commands for each software package found on the local system that correspond to the list of [Software entities](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493#Configuring%20software%20launches) in the {% include product %} site. The result is the list of menu actions shown here:
+In the above example from [tk-config-basic](https://github.com/shotgunsoftware/tk-config-basic/), there are two apps configured that will result in a number of engine commands turned into menu actions. Toolkit apps will register commands that are to be included in the action menu, including launcher commands for each software package found on the local system that correspond to the list of [Software entities](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493#Configuring%20software%20launches) in the {% include product %} site. The result is the list of menu actions shown here:

The browser integration code found installations of Houdini, Maya, Nuke, and Photoshop on the user's system, which resulted in menu actions for launching each of those integrations. Note that in a given environment configuration file, the _engine_ for a Software entity needs to be present in order for that Software's launcher to show up for entities of that environment. So, in this example, the `tk-houdini`, `tk-maya`, `tk-nuke`, and `tk-photoshopcc` engines must all be present in the file from which this snippet was taken. If you wanted to remove, for example, Maya from the list of launchers on this entity, you could just remove the `tk-maya` engine block from the environment config file.

In addition to these launchers, the Publish app’s “Publish…” command is included in the menu.
**Which YML file?**
You can take one of two paths: making use of the primary environment configuration (`config/env/*.yml`), as controlled by the config’s [pick_environment.py core hook](https://github.com/shotgunsoftware/tk-core/blob/master/hooks/pick_environment.py), or the legacy approach employed by [tk-config-default](https://github.com/shotgunsoftware/tk-config-default/), which uses `config/env/shotgun_.yml` files.
In the case where the standard environment files are used, browser integration uses the `pick_environment` core hook to determine which environment configuration file to use for a given entity’s action menu. In the simplest case, the environment corresponds to the entity type. For example, if you right-click on a Shot, the resulting action menu will be configured by the `tk-shotgun` block in `config/env/shot.yml`. You can customize the `pick_environment` hook to use more complex logic. Should there be no `tk-shotgun` engine configured in the standard environment file, a fallback occurs if a `shotgun_.yml` file exists. This allows browser integration to work with legacy configurations that make use of the entity-specific environment files.
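To illustrate the simplest mapping described above, here is a minimal, hypothetical sketch of the kind of logic a `pick_environment` hook encodes. The real hook is a Toolkit `Hook` subclass that receives a `Context` object; the function name, mapping, and fallback below are illustrative only:

```python
# Hypothetical sketch: map the entity type the user right-clicked on to an
# environment name, i.e. the <name> in config/env/<name>.yml. The real
# pick_environment hook is a Toolkit Hook subclass operating on a Context.
def pick_environment(entity_type):
    mapping = {
        "Shot": "shot",
        "Asset": "asset",
        "Sequence": "sequence",
    }
    # Fall back to a project-level environment for unknown entity types
    # (whether such a fallback exists depends on your configuration).
    return mapping.get(entity_type, "project")

print(pick_environment("Shot"))  # shot
```

Customizing the real hook follows the same shape: inspect the context, return the name of the environment file to load.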
**_Tip: Removing Software from the Browser Launchers with tk-config-default2_**
With tk-config-default2, updates should be applied to the files in `config/env/includes/settings`.
As an example, let’s remove Mari from the list of options when launching from an Asset through the browser.
First, navigate to [`config/env/asset.yml`](https://github.com/shotgunsoftware/tk-config-default2/blob/v1.1.10/env/asset.yml#L47) and notice how the `tk-shotgun` engine block points to [`@settings.tk-shotgun.asset`](https://github.com/shotgunsoftware/tk-config-default2/blob/v1.1.10/env/asset.yml#L47). The `@` symbol signifies that the value for the configuration comes from an included file. This means you'll need to go to your [`env/includes/settings/tk-shotgun.yml`](https://github.com/shotgunsoftware/tk-config-default2/blob/v1.1.10/env/includes/settings/tk-shotgun.yml) to make the update.

While in your `env/includes/settings/tk-shotgun.yml`, notice how each block is per entity. So, for instance, Asset first:
```
# asset
settings.tk-shotgun.asset:
  apps:
    tk-shotgun-folders: "@settings.tk-shotgun-folders"
    tk-shotgun-launchfolder: "@settings.tk-shotgun-launchfolder"
  location: "@engines.tk-shotgun.location"
```
To remove Mari from the list of options on an Asset in the browser, remove the Mari line ([`tk-multi-launchmari: "@settings.tk-multi-launchapp.mari"`](https://github.com/shotgunsoftware/tk-config-default2/blob/master/env/includes/settings/tk-shotgun.yml#L29)):
### Hook Methods
A `browser_integration.py` hook is included in `tk-framework-desktopserver`, which provides the following hook methods:
- `get_cache_key`: This method determines the cache entry's key for the given configuration URI, project entity, and entity type. The default implementation combines the configuration URI and entity type.
- `get_site_state_data`: This method can be used to include additional queried data from {% include product %} into the hash that's used to test the validity of cached data. By default, the state of all Software entities that exist on the site are used, but if additional data should be included in the hash, that can be implemented in this hook method.
- `process_commands`: This method provides a place to customize or alter the commands that are to be returned to the {% include product %} web application. The data structure provided to the method is a list of dictionaries, with each dictionary representing a single menu action. Data can be altered, filtered out, or added into the list as is necessary and will be reflected in the menu requesting Toolkit actions immediately.
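As a purely illustrative example of the kind of filtering that `process_commands` enables, the sketch below drops a menu action by title. The dictionary keys shown are assumptions, not the framework's documented schema, and the real hook method lives in `tk-framework-desktopserver`'s `browser_integration.py`:

```python
# Illustrative only: each dict stands in for one menu action. The actual
# structure handed to process_commands by tk-framework-desktopserver may
# use different keys.
def filter_commands(commands, hidden_titles):
    """Drop any menu action whose title appears in hidden_titles."""
    return [cmd for cmd in commands if cmd.get("title") not in hidden_titles]

actions = [
    {"name": "launch_maya", "title": "Maya 2024"},
    {"name": "publish", "title": "Publish..."},
]
print(filter_commands(actions, hidden_titles={"Publish..."}))
```

The same pattern works for altering titles or appending extra actions before returning the list.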
### Logs
Logs for browser integration can be found in Toolkit’s [standard log location](https://developer.shotgridsoftware.com/38c5c024/). The relevant log files are `tk-desktop.log` and `tk-shotgun.log`. In addition, if you are using Google Chrome, some relevant log output is sometimes available in the developer console within the browser.
### Troubleshooting
The complex nature of communicating from a web application with the local desktop software means that problems can occasionally arise.
This likely means one of three things:
1. {% include product %} Desktop is not currently running on the local machine. It seems obvious, but it is definitely worth double checking.
2. Chrome or the Python websocket server has refused the connection, resulting in the {% include product %} web application being unable to communicate with {% include product %} Desktop. This situation is most likely related to the self-signed certificates that allow the connection to proceed when requested. Regenerating these certificates from scratch often resolves the issue, and can be triggered from {% include product %} Desktop, as shown below.
3. {% include product %} Desktop’s websocket server failed to start on launch. This situation is likely limited to cases where a bad release of the websocket server has gone out to the public, which should be exceedingly rare. In this situation, logging will be present in [tk-desktop.log](https://developer.shotgridsoftware.com/38c5c024/) explaining the error, which can be [sent to {% include product %}’s support team](https://support.shotgunsoftware.com/hc/en-us/requests/new).
**No actions are shown in the action menu**
This is indicative of a configuration problem if actions were expected for this entity type. Some possible issues:
1. The `tk-shotgun` engine is configured in the correct environment YAML file, but there are no apps present in that configuration. In this case, it’s likely that the intention was for no actions to be present for this entity type.
2. The `tk-shotgun` engine is configured in the correct environment YAML file, and apps are present, but actions still do not appear in the menu. This is likely due to apps failing to initialize. In this case, there will be information in [tk-shotgun.log and tk-desktop.log](https://developer.shotgridsoftware.com/38c5c024/) describing the problems.
3. The environment that corresponds to this entity type does not contain configuration for `tk-shotgun`. The end result here is the same as #1 on this list. In this case, you can look at the pipeline configuration’s `pick_environment` hook to determine which environment is being loaded for this entity type, and the configuration of `tk-shotgun` can be verified there.
4. There is an empty list of menu actions cached on disk. To force the cache to be regenerated, there are a few options:
   - Update the modification time of a YAML file in your project's configuration. This will trigger a recache of menu actions when they are next requested by {% include product %}. Worth noting is that this will trigger a recache for _all_ users working in the project.
   - Update the value of a field in any of the Software entities on your {% include product %} site. The behavior here is the same as the above option concerning YAML file modification time, but will invalidate cached data for all users in _all_ projects on your {% include product %} site. Software entities are non-project entities, which means they're shared across all projects. If data in any of the Software entities is altered, all projects are impacted.
   - The cache file can be deleted on the host suffering from the problem. It is typically safe to remove the cache, and since it is stored locally on each host, it will only cause data to be recached from scratch on that one system. The cache is stored in the following SQLite file within your {% include product %} cache location: `/site.basic.desktop/tk-desktop/shotgun_engine_commands_v1.sqlite`
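For the first option above, bumping the modification time of an environment file can be done with `touch` or a few lines of Python. The configuration path below is a placeholder for your own pipeline configuration:

```python
# Touch one environment YAML file so its modification time changes, which
# invalidates the cached menu actions the next time they are requested.
# The path is a placeholder -- substitute your own pipeline configuration.
import os
import time

config_yml = "/path/to/pipeline_config/env/asset.yml"
if os.path.exists(config_yml):
    now = time.time()
    os.utime(config_yml, (now, now))  # update access and modification times
```

Remember that this invalidates the cache for every user working in that project.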
**“Toolkit: Retrieving actions…” is never replaced with menu actions**
There are a few possibilities for this one:
1. The websocket server has not yet finished caching actions. If this is the first time actions are being retrieved after a significant update to the project’s config, the process can take some time to complete. Wait longer, and observe the contents of `tk-desktop.log` to see if processing is still occurring.
2. The websocket server has failed to respond and never will. This situation should be rare, but if it becomes obvious that there is no additional processing occurring as a result of the request for actions, as seen in `tk-desktop.log`, [contact ShotGrid support](https://support.shotgunsoftware.com/hc/en-us/requests/new), providing relevant log data.
3. The user is working in more than one {% include product %} site. With {% include product %} Desktop authenticated against a single site, requesting menu actions from a second {% include product %} site results in the user being queried about restarting {% include product %} Desktop and logging into the new site. If that request is ignored, the second site will never receive a list of menu actions.
## Toolkit Configuration File
If your studio is using a proxy server, if you want to pre-populate the initial login screen with some values, or if you want to tweak how the browser-based application launcher integrates with {% include product %} Desktop, there is a special configuration file called `toolkit.ini`. {% include product %} Desktop does not require this file in order to run; it’s only needed if you need to configure its behavior. Toolkit looks for the file in multiple locations, in the following order:
1. An environment variable named `SGTK_PREFERENCES_LOCATION` that points to a file path.
2. Inside the {% include product %} Toolkit preferences folder: (Note that this file does not exist by default in these locations; you must create it.)
   - Windows: `%APPDATA%\Shotgun\Preferences\toolkit.ini`
   - macOS: `~/Library/Preferences/Shotgun/toolkit.ini`
   - Linux: `~/.shotgun/preferences/toolkit.ini`
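The lookup order above can be sketched in Python. This mirrors only the order described here and is not Toolkit's actual resolution code:

```python
# Illustrative sketch of the toolkit.ini search order described above.
# Not Toolkit's own implementation -- just the documented order.
import os
import sys

def toolkit_ini_candidates(environ=os.environ, platform=sys.platform):
    candidates = []
    # 1. Explicit override via environment variable.
    if environ.get("SGTK_PREFERENCES_LOCATION"):
        candidates.append(environ["SGTK_PREFERENCES_LOCATION"])
    # 2. Per-platform preferences folder (the file must be created by you).
    if platform.startswith("win"):
        candidates.append(os.path.join(environ.get("APPDATA", ""),
                                       "Shotgun", "Preferences", "toolkit.ini"))
    elif platform == "darwin":
        candidates.append(
            os.path.expanduser("~/Library/Preferences/Shotgun/toolkit.ini"))
    else:
        candidates.append(
            os.path.expanduser("~/.shotgun/preferences/toolkit.ini"))
    return candidates
```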
The `SGTK_PREFERENCES_LOCATION` environment variable option allows you to store your configuration file somewhere else on your computer or on your network. Please note that `toolkit.ini` is the current standard file name. If you were using `config.ini`, check below in the _“Legacy Locations”_ section.
You can see a documented example of a configuration file [here](https://raw.githubusercontent.com/shotgunsoftware/tk-framework-desktopstartup/master/config.ini.example).

Note that the example file is named `config.ini`; simply rename it to `toolkit.ini` when using it.
Note also that this file supports environment variables as well as hard-coded values. For example, you could suggest a default user name to the user via the `USERNAME` environment variable that exists on Windows.
**Legacy Locations (DEPRECATED)**
Although `toolkit.ini` is the current standard file name, we previously used a `config.ini` file for the same purpose. The contents of `toolkit.ini` and `config.ini` are the same. The `config.ini` will be searched for using the following deprecated locations:
1. An environment variable named `SGTK_DESKTOP_CONFIG_LOCATION` that points to a file.
2. In the following paths:
   - Windows: `%APPDATA%\Shotgun\desktop\config\config.ini`
   - macOS: `~/Library/Caches/Shotgun/desktop/config/config.ini`
   - Linux: `~/shotgun/desktop/config/config.ini`
**Proxy Configuration**
If your studio accesses the internet through a proxy, you’ll need to tell Toolkit to use this proxy. Do so by specifying your proxy as the value of the `http_proxy` setting:

`http_proxy: <your proxy server address>`
**Running {% include product %} Desktop with a locally hosted site**
If your {% include product %} site URL does not end with `shotgunstudio.com` or `shotgrid.autodesk.com`, it means that you are running a local {% include product %} site. In this case, it is possible that your site has not yet been fully prepared for {% include product %} integrations, and the {% include product %} team may need to make some small adjustments before you can get going. In this case, [please submit a ticket](https://support.shotgunsoftware.com/hc/en-us/requests/new) and we'll help sort you out.
**Connecting to the app store with a locally hosted site**
If you are using a local {% include product %} site with access to the internet through a proxy, you might want to set an HTTP proxy for accessing the app store, but not the local {% include product %} website. To do this, simply add the following line to `toolkit.ini`:
`app_store_http_proxy: <proxy server address>`

where `<proxy server address>` is a string that follows the convention documented [in our developer docs](http://developer.shotgridsoftware.com/python-api/reference.html?highlight=reference%20methods#shotgun-methods).
If you need to override this setting on a per-project basis, you can do so in `config/core/shotgun.yml` in your project’s Pipeline Configuration.
## Offline Usage Scenarios
In general use, {% include product %} Desktop automatically checks for updates from the {% include product %} App Store.
### {% include product %} Create
The approaches to resolving offline usage scenarios outlined in this document also apply to the integration features provided by [{% include product %} Create](https://support.shotgunsoftware.com/hc/en-us/articles/360012554734). The various environment variables used to tailor the behavior of {% include product %} Toolkit, such as `SHOTGUN_BUNDLE_CACHE_FALLBACK_PATHS`, apply when using {% include product %} Create in the same ways as {% include product %} Desktop.
### Running integrations while offline
_Scenario: I want to run {% include product %} integrations, but I am not connected to the internet._
**Solution**
- If you can temporarily connect to the internet, just download {% include product %} Desktop. It comes prepackaged with a set of [integrations](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574-User-Guide#Introduction), and pre-bundled with all the apps and engines needed for the {% include product %} integrations for all supported DCCs. When you start it up, it will automatically try to look for upgrades, but if it cannot connect to the {% include product %} App Store, it will simply run the most recent version that exists locally.
**Good to know**
- Some Toolkit operations (such as registering a Publish) require access to your {% include product %} site. So, this solution only works for locally hosted sites.
- Updates are downloaded to your local machine.
- If you switch between being connected and disconnected, Desktop, as well as in-app integrations like those inside Maya and Nuke, will download upgrades at startup whenever they are connected.
### Managing updates via manual download
_Scenario: Our artist workstations are disconnected from the internet, so we cannot download updates automatically._
**Solution**
- Run {% include product %} Desktop on a workstation connected to the internet. When it starts up, the latest upgrades are automatically downloaded at launch time.
- Option 1: Shared Desktop bundle
  - Copy the [bundle cache](https://developer.shotgridsoftware.com/7c9867c0/) to a shared location where all machines can access it.
  - Set the `SHOTGUN_BUNDLE_CACHE_FALLBACK_PATHS` environment variable on offline machines to point to this location.
  - When Desktop starts up on offline machines, they will pick up the latest upgrades that are available in the bundle cache.
- Option 2: Local deployment
  - Distribute the updated bundle cache to the correct bundle cache location on each local machine.
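For Option 1, the environment variable just needs to be set in the environment from which Desktop is launched. A minimal sketch follows; both the shared cache path and the commented-out launcher path are examples, not real defaults:

```python
# Launch Desktop with a shared bundle cache as a fallback. Both paths below
# are examples -- substitute your studio's shared storage location and the
# actual Desktop executable path for your platform.
import os
import subprocess

env = dict(os.environ)
env["SHOTGUN_BUNDLE_CACHE_FALLBACK_PATHS"] = "/mnt/pipeline/shared_bundle_cache"

# subprocess.Popen(["/opt/Shotgun/Shotgun"], env=env)  # hypothetical launcher path
print(env["SHOTGUN_BUNDLE_CACHE_FALLBACK_PATHS"])
```

In practice, studios often set this variable in a wrapper script or the system environment rather than per launch.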
**Good to know**
- With Option 1, the Toolkit code will be loaded from the location defined in `SHOTGUN_BUNDLE_CACHE_FALLBACK_PATHS`. If this location is on shared storage, make sure that it is performant enough to load many small files.
- For Windows setups, this is often not the case. Here we would instead recommend Option 2.
## Locking off updates
_Scenario: My project is about to wrap and I would like to freeze it so that no further updates are applied._
**Solution**
- Determine the version you want to lock your project to. [You can find the integration releases here.](https://support.shotgunsoftware.com/hc/en-us/sections/115000020494-Integrations)
- In {% include product %}, create a Pipeline Configuration entity for the project you want to lock down, with the following fields populated (in this example, we are locking down the config to use v1.0.36 of the integrations):
  - Name: `Primary`
  - Project: The project you want to lock down
  - Plugin ids: `basic.*`
  - Descriptor: `sgtk:descriptor:app_store?name=tk-config-basic&version=v1.0.36`
- Anyone starting {% include product %} Desktop on the project will now always use v1.0.36. Any new users starting to work on the project will also get v1.0.36.
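If you prefer to create the entity via the API rather than the web UI, the fields above translate to a create payload along these lines. Note that the internal field names can vary between sites (`plugin_ids` vs. `sg_plugin_ids`, `descriptor` vs. `sg_descriptor`), so verify against your site's schema, and the project id below is a placeholder:

```python
# Hypothetical shotgun_api3 payload matching the fields listed above. Field
# names may be sg_plugin_ids / sg_descriptor on some sites -- check your
# site's schema. The project id (123) is a placeholder.
lock_fields = {
    "code": "Primary",
    "project": {"type": "Project", "id": 123},
    "plugin_ids": "basic.*",
    "descriptor": "sgtk:descriptor:app_store?name=tk-config-basic&version=v1.0.36",
}

# With an authenticated shotgun_api3.Shotgun handle `sg`:
# sg.create("PipelineConfiguration", lock_fields)
print(lock_fields["descriptor"])
```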

**Good to know**
- Updates are downloaded to your local machine.
- The next time a user launches Desktop while connected to the internet, `v1.0.36` of the basic config, and all of its related code, will be downloaded to their machine.
- `basic.*` means that all plugins in the basic configuration will pick up this override. If, for example, you wanted to freeze the Nuke and Maya integrations only, you could specify `basic.maya, basic.nuke`.
- To test, you can create a duplicate of this Pipeline Configuration entity, and add your username to the `User Restrictions` field. This will restrict the entity such that it's only available to you and won't impact other users. You can then launch Maya or some other software from this duplicate configuration and confirm that it’s running the expected integrations versions.
**Known issues**
-- The Flame integration is namespaced `basic.flame`, and so is implied to be part of `basic.*`. However, the Flame integration isn't actually included in the basic config. So, if you are using Flame for a project and implement this override, the Flame integration will stop working.
-- The solution would be to create an additional Pipeline Configuration override specifically for flame:
-- Name: `Primary`
-- Project: The project you want to lock down (or None for all projects)
-- Plugin ids: `basic.flame`
-- Descriptor: `sgtk:descriptor:app_store?name=tk-config-flameplugin`
+- The Flame integration is namespaced `basic.flame`, and so is implied to be part of `basic.*`. However, the Flame integration isn't actually included in the basic config. So, if you are using Flame for a project and implement this override, the Flame integration will stop working.
+- The solution is to create an additional Pipeline Configuration override specifically for Flame:
+  - Name: `Primary`
+  - Project: The project you want to lock down (or None for all projects)
+  - Plugin ids: `basic.flame`
+  - Descriptor: `sgtk:descriptor:app_store?name=tk-config-flameplugin`
### Freezing updates for your site
@@ -512,18 +495,18 @@ _Scenario: I don’t want any updates. I want full control over what is being do
**Solution**
-- Follow the steps in the above example, but leave the `Project` field blank. With no override in the `Project` field, this Pipeline Configuration entity will apply to all projects, including the “site” project, i.e., the site configuration that is used by Desktop outside of any project.
+- Follow the steps in the above example, but leave the `Project` field blank. With no override in the `Project` field, this Pipeline Configuration entity will apply to all projects, including the “site” project, i.e., the site configuration that is used by Desktop outside of any project.

**Good to know**
-- This is the workflow to use if you want to “lock down the site config”. This would lock down everything, and you can then proceed with the advanced project setup via the Desktop menu.
-- If you lock down your entire site to use, for example, `v1.2.3`, you can still lock down an individual project to use another config.
+- This is the workflow to use if you want to “lock down the site config”. This would lock down everything, and you can then proceed with the advanced project setup via the Desktop menu.
+- If you lock down your entire site to use, for example, `v1.2.3`, you can still lock down an individual project to use another config.
**Known issues**
-- Flame would be affected by this. See the ‘Known Issues’ section of the above scenario for a solution.
+- Flame would be affected by this. See the ‘Known Issues’ section of the above scenario for a solution.
### Freezing updates for all but one project
@@ -533,30 +516,30 @@ _Scenario: I’d like to lock down all projects in our site, except for our test
**Solution**
-- Freeze updates for your site as described in the above section.
-- Configure the exception project’s Pipeline Configuration entity to have the following field values:
-- Name: `Primary`
-- Project: The project you want _not_ to lock down
-- Plugin ids: `basic.*`
-- Descriptor: `sgtk:descriptor:app_store?name=tk-config-basic`
+- Freeze updates for your site as described in the above section.
+- Configure the exception project’s Pipeline Configuration entity to have the following field values:
+  - Name: `Primary`
+  - Project: The project you want _not_ to lock down
+  - Plugin ids: `basic.*`
+  - Descriptor: `sgtk:descriptor:app_store?name=tk-config-basic`
**Good to know**
-- Note that you’ve omitted the version number from the Descriptor field for the project. This will mean that the project is tracking the latest release of the basic config.
+- Note that you’ve omitted the version number from the Descriptor field for the project. This will mean that the project is tracking the latest release of the basic config.
### Safely Upgrading a locked off site
-- Scenario: We’re locked down to v1.0.0, and we’d like to upgrade to v2.0.0, but first I want to test out the new version before deploying it to the studio.*
+_Scenario: We’re locked down to v1.0.0, and we’d like to upgrade to v2.0.0, but first I want to test out the new version before deploying it to the studio._
**Solution**
-- Duplicate the Pipeline Configuration entity in {% include product %} by right-clicking on it and selecting "Duplicate Selected".
-- Name the cloned config “update test”, and assign yourself to the User Restrictions field.
-- You will now begin to use this Pipeline Configuration.
-- Change the descriptor to point to the version you wish to test.
-- You can invite any users you want to partake in testing by adding them to the User Restrictions field.
-- Once you are happy with testing, simply update the main Pipeline Configuration to use that version.
-- Once users restart Desktop or DCCs, the update will be picked up.
+- Duplicate the Pipeline Configuration entity in {% include product %} by right-clicking on it and selecting "Duplicate Selected".
+- Name the cloned config “update test”, and assign yourself to the User Restrictions field.
+- You will now begin to use this Pipeline Configuration.
+- Change the descriptor to point to the version you wish to test.
+- You can invite any users you want to partake in testing by adding them to the User Restrictions field.
+- Once you are happy with testing, simply update the main Pipeline Configuration to use that version.
+- Once users restart Desktop or DCCs, the update will be picked up.
## Taking over a Pipeline Configuration
@@ -574,11 +557,11 @@ Once you have navigated to a project there will be an "Advanced Project Setup...

-When you start configuring a new project, the first thing to decide is _which configuration template to use_. A configuration template is essentially the complete project configuration with all settings, file system templates, apps and logic needed to run the project.
+When you start configuring a new project, the first thing to decide is _which configuration template to use_. A configuration template is essentially the complete project configuration with all settings, file system templates, apps and logic needed to run the project.
-- If this is your very first project, head over to the {% include product %} defaults to get you started.
-- If you already have configured projects and configurations for previous projects, you can easily reuse these by basing your new project on an existing project
-- For advanced workflows, you can use external configurations or configs stored in git repositories.
+- If this is your very first project, head over to the {% include product %} defaults to get you started.
+- If you already have configured projects and configurations for previous projects, you can easily reuse these by basing your new project on an existing project.
+- For advanced workflows, you can use external configurations or configs stored in git repositories.
#### Default configuration templates
@@ -594,9 +577,9 @@ This is the default Toolkit VFX configuration and usually a great starting point
The configuration contains a number of different pieces:
-- A file system setup
-- A set of templates to identify key locations on disk
-- A set of preconfigured engines and apps which are chained together into a workflow.
+- A file system setup
+- A set of templates to identify key locations on disk
+- A set of preconfigured engines and apps which are chained together into a workflow.
**File System Overview**
@@ -608,12 +591,12 @@ The standard config handles Assets and Shots in {% include product %}. It breaks
The config contains the following components:
-- Maya, Mari, Nuke, 3dsmax, Flame, Houdini, Photoshop, and Motionbuilder support
-- {% include product %} Application Launchers
-- Publishing, Snapshotting, and Version Control
-- A Nuke custom Write Node
-- {% include product %} integration
-- A number of other tools and utilities
+- Maya, Mari, Nuke, 3ds Max, Flame, Houdini, Photoshop, and MotionBuilder support
+- {% include product %} Application Launchers
+- Publishing, Snapshotting, and Version Control
+- A Nuke custom Write Node
+- {% include product %} integration
+- A number of other tools and utilities
In addition to the apps above, you can easily install additional apps and engines once the config has been installed.
@@ -631,11 +614,11 @@ For more ways and documentation on how to evolve and maintain your pipeline conf

-Use this option if you want to keep your project's configuration connected to source control. Specify a url to a remote git or github repository and the setup process will clone it for you. Note that this is not just github, but works with any git repository. Just make sure that the path to the repository ends with `.git`, and Toolkit will try to process it as a git setup. Because your project configuration is a git repository, you can commit and push any changes you make to your master repository and beyond that to other projects. Using a github based configuration makes it easy to keep multiple Toolkit projects in sync. You can read more about it here:
+Use this option if you want to keep your project's configuration connected to source control. Specify a URL to a remote git or GitHub repository and the setup process will clone it for you. Note that this works with any git repository, not just GitHub. Just make sure that the path to the repository ends with `.git`, and Toolkit will process it as a git setup. Because your project configuration is a git repository, you can commit and push any changes you make to your master repository and beyond that to other projects. Using a git-based configuration makes it easy to keep multiple Toolkit projects in sync. You can read more about it here:
[Managing your project configuration](https://support.shotgunsoftware.com/hc/en-us/articles/219033168#A%20studio%20configuration%20in%20git%20source%20control)
-Please note that if you are running on Windows, you need to have git installed on your machine and accessible in your `PATH`. On Linux and Mac OS X, it is usually installed by default.
+Please note that if you are running on Windows, you need to have git installed on your machine and accessible in your `PATH`. On Linux and Mac OS X, it is usually installed by default.
#### Browsing for a configuration template
@@ -645,21 +628,21 @@ Use this option if you have a configuration on disk, either as a folder or zippe
#### Setting up a storage
-Each Toolkit project writes all its files and data to one or more shared storage locations on disk. For example, a configuration may require one storage where it keeps textures, one where it keeps renders and one where it stores scene files. Normally, these storages are controlled from within the {% include product %} Site Preferences, under the _File Management_ tab.
+Each Toolkit project writes all its files and data to one or more shared storage locations on disk. For example, a configuration may require one storage where it keeps textures, one where it keeps renders and one where it stores scene files. Normally, these storages are controlled from within the {% include product %} Site Preferences, under the _File Management_ tab.
The Toolkit Setup wizard will ask you to map each storage root required by the configuration to a local storage in {% include product %}.

-The required root is listed on the left with its description (as defined in the configuration's `roots.yml` file). On the right, a list of existing {% include product %} local storages is listed. You must select a storage for each required root and enter a path for the current OS if one does not already exist in {% include product %}.
+The required root is listed on the left with its description (as defined in the configuration's `roots.yml` file). On the right is a list of existing {% include product %} local storages. You must select a storage for each required root and enter a path for the current OS if one does not already exist in {% include product %}.
You can also add paths for other operating systems that have not been defined. Existing paths are locked to ensure you don't accidentally affect other projects that may be relying on that storage path. The mapping page in the wizard will ensure that you've mapped each required root and that each mapping is valid.
-You can create a new local storage in the wizard as well by selecting the `+New` item at the end of the storage selection list. You will be prompted for a local storage name and path for the current OS.
+You can create a new local storage in the wizard as well by selecting the `+New` item at the end of the storage selection list. You will be prompted for a local storage name and path for the current OS.
-Once the project is being set up, Toolkit will create a folder for each new project in each of the storage locations. For example, if your primary storage location is `/mnt/projects`, a project called _The Edwardian Cry_ would end up in `/mnt/projects/the_edwardian_cry`. And if the config is using more than just the primary storage, each of the storages would end up with an `the_edwardian_cry` folder.
+During project setup, Toolkit will create a folder for the new project in each of the storage locations. For example, if your primary storage location is `/mnt/projects`, a project called _The Edwardian Cry_ would end up in `/mnt/projects/the_edwardian_cry`. And if the config is using more than just the primary storage, each of the storages would end up with a `the_edwardian_cry` folder.
-Your primary storage location is typically something like `/mnt/projects` or `\\studio\projects` and is typically a location where you are already storing project data, grouped by projects. It is almost always on a shared network storage.
+Your primary storage location is typically something like `/mnt/projects` or `\\studio\projects` and is typically a location where you are already storing project data, grouped by projects. It is almost always on a shared network storage.
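
As an illustrative sketch only (the real folder-creation logic lives in the Toolkit setup wizard), a project name could be mapped to its storage folder like this:

```python
import re

def project_folder(storage_root, project_name):
    """Simplified sketch: derive a project folder path from a project name.

    Not the actual Toolkit implementation; just illustrates the kind of
    name-to-folder mapping described above.
    """
    # Lowercase and collapse non-alphanumeric runs into underscores.
    slug = re.sub(r"[^a-z0-9]+", "_", project_name.lower()).strip("_")
    return storage_root.rstrip("/") + "/" + slug

print(project_folder("/mnt/projects", "The Edwardian Cry"))
# -> /mnt/projects/the_edwardian_cry
```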
#### Choosing a project folder name
@@ -675,9 +658,10 @@ Lastly, please decide where to put your configuration files on disk. Toolkit wil
The configuration normally resides on a shared storage or disk, so that it can be accessed by all users in the studio who need it. If you are planning on using more than one operating system for this project, make sure to enter all the necessary paths. All paths should represent the same location on disk. Often, the path can be the same on Mac OS X and Linux but will be different on Windows.
-If this is your first project, you typically want to identify a shared area on disk where you store all your future pipeline configurations. This is typically a location where you store software or software settings shared across the studio. This could be something like `/mnt/software/shotgun`. It may vary depending on your studio network and file naming conventions.
+If this is your first project, you typically want to identify a shared area on disk where you store all your future pipeline configurations. This is typically a location where you store software or software settings shared across the studio. This could be something like `/mnt/software/shotgun`. It may vary depending on your studio network and file naming conventions.
+
+When you set up your first configuration, set it up with paths for all the platforms you use in your studio. This will make it easier later on to create an environment which is accessible from all machines. As a hypothetical example, if your project name is _Golden Circle_ you may type in the following three paths:
-When you set up your first configuration, set it up with paths for all the platforms you use in your studio. This will make it easier later on to create an environment which is accessible from all machines. As a hypothetical example, if your project name is _Golden Circle_ you may type in the following three paths:
```
linux: /mnt/software/shotgun/golden_circle
macosx: /servers/production/software/shotgun/golden_circle
@@ -690,7 +674,7 @@ Once you are up and running with your first configuration, please navigate to ou
[Beyond your first project](https://support.shotgunsoftware.com/hc/en-us/articles/219040688)
-You can also learn more in our [Advanced Project Setup documentation](https://support.shotgunsoftware.com/hc/en-us/articles/219039808-Index-of-Documentation).
+You can also learn more in our [Advanced Project Setup documentation](https://support.shotgunsoftware.com/hc/en-us/articles/219039808-Index-of-Documentation).
## Advanced functionality
diff --git a/docs/en/guides/pipeline-integrations/administration/offline-and-disabled-auto-updates.md b/docs/en/guides/pipeline-integrations/administration/offline-and-disabled-auto-updates.md
index 24188f8bf..84d8b4993 100644
--- a/docs/en/guides/pipeline-integrations/administration/offline-and-disabled-auto-updates.md
+++ b/docs/en/guides/pipeline-integrations/administration/offline-and-disabled-auto-updates.md
@@ -8,17 +8,18 @@ lang: en
# Offline usage and disabling auto updates
- [Auto updates](#auto-updates)
- - [What parts auto update?](#what-parts-auto-update)
- - [What doesn't auto update?](#what-doesnt-auto-update)
+ - [What parts auto update?](#what-parts-auto-update)
+ - [What doesn't auto update?](#what-doesnt-auto-update)
- [Running the integrations offline](#running-the-integrations-offline)
- - [Initial Setup](#initial-setup)
- - [Managing updates](#managing-updates)
+ - [Initial Setup](#initial-setup)
+ - [Managing updates](#managing-updates)
- [Disabling auto updates](#disabling-auto-updates)
- - [Disabling updates at a project or site level](#disabling-updates-at-a-project-or-site-level)
- - [Disabling updates for all but one project](#disabling-updates-for-all-but-one-project)
- - [Upgrading](#upgrading)
+ - [Disabling updates at a project or site level](#disabling-updates-at-a-project-or-site-level)
+ - [Disabling updates for all but one project](#disabling-updates-for-all-but-one-project)
+ - [Upgrading](#upgrading)
## Auto updates
+
### What parts auto update?
By default {% include product %} Desktop will automatically check for updates, and install them to the local machine if it finds any.
@@ -41,8 +42,8 @@ However the integration features provided in {% include product %} Create work i
- Any projects that aren't using the default site configuration (i.e., a project where the Toolkit advanced setup wizard has been run) will not have their configuration auto updated.
-- Resources such as Python and QT that come bundled with {% include product %} Desktop, don't auto update.
-We occasionally release new {% include product %} Desktop installers when we need to update these parts.
+- Resources such as Python and Qt that come bundled with {% include product %} Desktop don't auto update.
+ We occasionally release new {% include product %} Desktop installers when we need to update these parts.
## Running the integrations offline
@@ -51,14 +52,14 @@ We occasionally release new {% include product %} Desktop installers when we nee
If your studio has restricted internet access or no internet access then you will need to ensure that you have all the required parts cached locally.
You will still need one machine that can connect to the internet in order to download {% include product %} Create or {% include product %} Desktop.
-{% include product %} Desktop comes prepackaged with all the dependencies required to run the basic integrations.
+{% include product %} Desktop comes prepackaged with all the dependencies required to run the basic integrations.
Whilst {% include product %} Create also comes bundled with the dependencies, it requires you to follow the steps mentioned in [managing updates](#managing-updates) as well.
-
+
When you start either of them up, it will automatically try to look for updates, but if it cannot connect to the {% include product %} App Store, it will simply run the most recent version that exists locally.
It is recommended that you follow the [managing updates](#managing-updates) steps below after installing {% include product %} Desktop, as the components bundled with the installer may not be the latest.
-{% include info title="Note" content="Depending on your network setup, it can sometimes get stuck looking for updates online even though it won't be able to access them.
+{% include info title="Note" content="Depending on your network setup, it can sometimes get stuck looking for updates online even though it won't be able to access them.
In this situation you can set the environment variable `SHOTGUN_DISABLE_APPSTORE_ACCESS` to `\"1\"` to stop it from trying." %}
{% include info title="Note" content="You will still need to be able to connect to your ShotGrid site. When we say offline we are talking about not being able to connect to our app store to download updates." %}
@@ -66,17 +67,17 @@ In this situation you can set the environment variable `SHOTGUN_DISABLE_APPSTORE
### Managing updates
To update the `tk-framework-desktopstartup` component, you will need to [download the latest version](https://github.com/shotgunsoftware/tk-framework-desktopstartup/releases), and set the environment variable
-`SGTK_DESKTOP_STARTUP_LOCATION` to point to its location on disk, (This only applies to {% include product %} Desktop.)
+`SGTK_DESKTOP_STARTUP_LOCATION` to point to its location on disk. (This only applies to {% include product %} Desktop.)
For the `tk-config-basic` component it's a bit more tricky, due to all its dependencies.
1. Run {% include product %} Desktop or {% include product %} Create on a workstation connected to the internet. When it starts up, the latest upgrades will be automatically downloaded.
-(Ensure `SHOTGUN_DISABLE_APPSTORE_ACCESS` is not set on this machine.)
+ (Ensure `SHOTGUN_DISABLE_APPSTORE_ACCESS` is not set on this machine.)
2. Copy the bundle cache to a shared location where all machines can access it.
3. Set the `SHOTGUN_BUNDLE_CACHE_FALLBACK_PATHS` environment variable on offline machines to point to this location.
4. When {% include product %} Desktop or {% include product %} Create starts up on offline machines, they will pick up the latest upgrades that are available in the bundle cache.
-{% include info title="Warning" content="Depending on your network setup, it can sometimes get stuck looking for updates online even though it won't be able to access them.
+{% include info title="Warning" content="Depending on your network setup, it can sometimes get stuck looking for updates online even though it won't be able to access them.
In this situation you can set the environment variable `SHOTGUN_DISABLE_APPSTORE_ACCESS` to `\"1\"` to stop it from trying." %}
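
The offline steps above can be sketched as a small launcher wrapper. The bundle cache path and the Desktop executable path are placeholder assumptions:

```python
import os

# Point Toolkit at the shared bundle cache copied in step 2 (placeholder path).
env = dict(os.environ)
env["SHOTGUN_BUNDLE_CACHE_FALLBACK_PATHS"] = "/mnt/shotgun/bundle_cache"

# Stop Desktop from stalling while it looks for App Store updates.
env["SHOTGUN_DISABLE_APPSTORE_ACCESS"] = "1"

# On a real offline machine you would then launch Desktop with this
# environment, e.g. (path is a placeholder):
# subprocess.Popen(["/opt/ShotGridDesktop/ShotGrid"], env=env)
print(env["SHOTGUN_DISABLE_APPSTORE_ACCESS"])
```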
## Disabling auto updates
@@ -94,33 +95,31 @@ Follow these steps to disable automatic updates for the integrations.
2. Project: Leave empty if you want updates disabled for all projects, or pick a specific project if you only want to lock down a single project.
3. Plugin ids: `basic.*`
4. Descriptor: `sgtk:descriptor:app_store?name=tk-config-basic&version=v1.0.36`
-
+

+
3. Start {% include product %} Desktop, and if you left the project field empty then {% include product %} Desktop will have switched over to using this version if it wasn't already doing so.
- 
+ 
+
+ If you set a project, then only that project will be affected and you won't see a change in the {% include product %} Desktop about window.
- If you set a project, then only that project will be affected and you won't see a change in the {% include product %} Desktop about window.
4. [Optional] To lock the version of `tk-framework-desktopstartup` you will need to [download the latest version](https://github.com/shotgunsoftware/tk-framework-desktopstartup/releases), and set the environment variable
-`SGTK_DESKTOP_STARTUP_LOCATION` to point to its location on disk, (This only applies to {% include product %} Desktop.)
+ `SGTK_DESKTOP_STARTUP_LOCATION` to point to its location on disk. (This only applies to {% include product %} Desktop.)
-It's The majority for the functionality is controlled by the config which can be locked down with the previous steps, however as mentioned in the "what parts auto update?" section, the
- component is also updated and that is handled separately from the config. This also only applies to {% include product %} Desktop.
+The majority of the functionality is controlled by the config, which can be locked down with the previous steps. However, as mentioned in the "What parts auto update?" section, the
+`tk-framework-desktopstartup` component is also updated, and that is handled separately from the config. This also only applies to {% include product %} Desktop.
#### Good to know
- You don't need to download the release of the configuration manually, {% include product %} Desktop will handle this when it launches or you enter the project.
- `basic.*` means that all plugins in the basic configuration will pick up this override. If, for example, you wanted to freeze the Nuke and Maya integrations only, you could specify `basic.maya`, `basic.nuke`.
- To test, you can create a duplicate of this Pipeline Configuration entity, and add your username to the `User Restrictions` field. This will restrict the entity such that it's only available to you and won't impact other users. You can then launch Maya or some other software from this duplicate configuration and confirm that it’s running the expected integrations versions.
-- Leaving the project field blank is what we call a site configuration. {% include product %} Desktop uses the site configuration, as it operates outside of projects. When you select a project in {% include product %} Desktop it then loads the project configuration as well.
+- Leaving the project field blank is what we call a site configuration. {% include product %} Desktop uses the site configuration, as it operates outside of projects. When you select a project in {% include product %} Desktop it then loads the project configuration as well.
-- The Flame integration is namespaced `basic.flame`, and so is implied to be part of `basic.*`.
-However, the Flame integration isn't actually included in the basic config. So, if you are using Flame for a project and implement this override, the Flame integration will stop working.
-The solution would be to create an additional Pipeline Configuration override specifically for flame:
- - Name: `Primary`
- - Project: The project you want to lock down (or None for all projects)
- - Plugin ids: `basic.flame`
- - Descriptor: `sgtk:descriptor:app_store?name=tk-config-flameplugin`
+- The Flame integration is namespaced `basic.flame`, and so is implied to be part of `basic.*`.
+ However, the Flame integration isn't actually included in the basic config. So, if you are using Flame for a project and implement this override, the Flame integration will stop working.
+  The solution is to create an additional Pipeline Configuration override specifically for Flame:
+  - Name: `Primary`
+  - Project: The project you want to lock down (or None for all projects)
+  - Plugin ids: `basic.flame`
+  - Descriptor: `sgtk:descriptor:app_store?name=tk-config-flameplugin`
### Disabling updates for all but one project
@@ -129,12 +128,12 @@ You can
1. Disabling updates for your site as described in the above section.
2. Configure the exception project’s Pipeline Configuration entity to have the following field values:
- - Name: `Primary`
- - Project: The project you want not to lock down
- - Plugin ids: `basic.*`
- - Descriptor: `sgtk:descriptor:app_store?name=tk-config-basic`
- 
- With the version number omitted from the Descriptor field, the project is now tracking the latest release of the basic config.
+ - Name: `Primary`
+ - Project: The project you want not to lock down
+ - Plugin ids: `basic.*`
+ - Descriptor: `sgtk:descriptor:app_store?name=tk-config-basic`
+ 
+ With the version number omitted from the Descriptor field, the project is now tracking the latest release of the basic config.
### Upgrading
@@ -144,6 +143,6 @@ When it comes to updating your configuration, you may wish to test out the newer
2. Name the cloned config “update test”, and assign yourself to the User Restrictions field.
3. You will now begin to use this Pipeline Configuration.
4. Change the descriptor to point to the version you wish to test.
-4. You can invite any users you want to partake in testing by adding them to the `User Restrictions` field.
-5. Once you are happy with testing, simply update the main Pipeline Configuration to use that version.
-6. Once users restart {% include product %} Desktop and relaunch any currently open software with the {% include product %} integration running, the update will be picked up.
+5. You can invite any users you want to partake in testing by adding them to the `User Restrictions` field.
+6. Once you are happy with testing, simply update the main Pipeline Configuration to use that version.
+7. Once users restart {% include product %} Desktop and relaunch any currently open software with the {% include product %} integration running, the update will be picked up.
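
The promotion step (pointing the main Pipeline Configuration at the tested version) amounts to rewriting the version parameter in the descriptor string. A minimal helper, assuming the documented `sgtk:descriptor:app_store?...` format:

```python
def bump_descriptor_version(descriptor, new_version):
    """Rewrite the version parameter in an app_store descriptor string."""
    base, _, query = descriptor.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&") if p)
    params["version"] = new_version
    return base + "?" + "&".join("%s=%s" % kv for kv in sorted(params.items()))

old = "sgtk:descriptor:app_store?name=tk-config-basic&version=v1.0.0"
print(bump_descriptor_version(old, "v2.0.0"))
# -> sgtk:descriptor:app_store?name=tk-config-basic&version=v2.0.0
```

You could then write the new string back to the entity with `shotgun_api3`, e.g. `sg.update("PipelineConfiguration", config_id, {"descriptor": new_descriptor})` (field name assumed, as above).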
diff --git a/docs/en/guides/pipeline-integrations/administration/toolkit-overview.md b/docs/en/guides/pipeline-integrations/administration/toolkit-overview.md
index 08aa9ed40..975804b74 100644
--- a/docs/en/guides/pipeline-integrations/administration/toolkit-overview.md
+++ b/docs/en/guides/pipeline-integrations/administration/toolkit-overview.md
@@ -11,9 +11,9 @@ lang: en
# An overview of the different concepts in the {% include product %} Pipeline Toolkit.
-Here, we cover the main concepts in detail: How apps and Engines work, how Toolkit is launched and manages the current context (work area), how folders are created on disk, etc. We recommend that anyone involved in configuration or development start here.
+Here, we cover the main concepts in detail: how apps and engines work, how Toolkit is launched and manages the current context (work area), how folders are created on disk, and more. We recommend that anyone involved in configuration or development start here.
-_Please note that this document describes functionality only available if you have taken control over a Toolkit configuration. For details, see the [{% include product %} Integrations Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493)._
+_Please note that this document describes functionality only available if you have taken control over a Toolkit configuration. For details, see the [{% include product %} Integrations Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493)._
# Introduction
@@ -23,10 +23,10 @@ This document explains some of the key features in more depth. With explanations
Below is a brief description of Toolkit (SGTK):
-- Toolkit is a _Pipeline Toolkit_ based on the {% include product %} platform - it makes it easier to write and install tools for a studio.
-- Toolkit is filesystem based - it helps you organize where things are stored on disk so that what you have on disk is nicely structured.
-- Toolkit is an assistant - it does not try to take over or abstract the data in your pipeline, but rather is there to provide artists with compelling tools to make finding information easier and avoid making mistakes.
-- Toolkit is helping you to share work by storing all of its publishes in {% include product %}. Toolkit makes it easy to share updates and work that is going on across a production.
+- Toolkit is a _Pipeline Toolkit_ based on the {% include product %} platform - it makes it easier to write and install tools for a studio.
+- Toolkit is filesystem based - it helps you organize where things are stored on disk so that what you have on disk is nicely structured.
+- Toolkit is an assistant - it does not try to take over or abstract the data in your pipeline, but rather is there to provide artists with compelling tools to make finding information easier and avoid making mistakes.
+- Toolkit helps you share work by storing all of its publishes in {% include product %}, making it easy to share updates and ongoing work across a production.
In the following sections, we will be looking in depth at the Toolkit and how it works.
@@ -38,9 +38,9 @@ In Toolkit, everything is project centric. A project typically starts its lifecy
When you set up a new project, you use a _template configuration_. This is a predefined config containing engines and apps, filesystem configuration, and other settings. If you are just starting with Toolkit, you can use our example configuration as a starting point for your exploration. If you have already been using Toolkit on another project, we suggest that you take that configuration and use that as the starting point for your new project. That way, you will be evolving a studio configuration and it will be refined with each new project. Of course, you can also maintain a studio configuration separately and use this as a template for all new projects.
-Each configuration defines a number of _storage points_. For the standard sample configuration, `tk-config-default`, we define a single storage point called _primary_. This means that all your production data will be under a single filesystem project root. You can also set up configs with more than a single file system root. We call these _multi-root configurations_. Examples of when you might need multi-root configurations include having a separate storage for renders, a separate storage for editorial, etc. Each of these storage points need to exist as a _Local File Storage_ in {% include product %}, which can be set up in the Site Preferences, under the _File Management_ tab.
+Each configuration defines a number of _storage points_. For the standard sample configuration, `tk-config-default`, we define a single storage point called _primary_. This means that all your production data will be under a single filesystem project root. You can also set up configs with more than a single filesystem root. We call these _multi-root configurations_. Examples of when you might need multi-root configurations include having a separate storage for renders, a separate storage for editorial, etc. Each of these storage points needs to exist as a _Local File Storage_ in {% include product %}, which can be set up in the Site Preferences, under the _File Management_ tab.
-Toolkit will install the actual project configuration in any location you like. Typically this will go into a _software install_ area on disk and not into the project data area itself.
+Toolkit will install the actual project configuration in any location you like. Typically this will go into a _software install_ area on disk and not into the project data area itself.
## Let your studio configuration evolve
@@ -68,7 +68,7 @@ Example:
Similar to other App stores out there, the Toolkit app store constantly gets new versions for apps and engines. These new versions may contain important bug fixes or interesting new features. Upgrading your apps and engines is completely optional. It is normally a quick process and the upgrade scripts will always prompt you before making any changes. Likewise, it is straightforward to roll back should you have accidentally installed an unsatisfactory version.
-A single command handles the upgrade process. Simply run the `tank` command located in your project configuration folder and add an `updates` parameter:
+A single command handles the upgrade process. Simply run the `tank` command located in your project configuration folder and add an `updates` parameter:
```shell
/software/shotgun/bug_buck_bunny/tank updates
@@ -85,12 +85,13 @@ tank updates [environment_name] [engine_name] [app_name]
The special keyword `ALL` can be used to denote all items in a category.
Examples:
-- Check everything: `tank updates`
-- Check the Shot environment: `tank updates Shot`
-- Check all maya apps in all environments: `tank updates ALL tk-maya`
-- Check all maya apps in the Shot environment: `tank updates Shot tk-maya`
-- Make sure the loader app is up to date everywhere: `tank updates ALL ALL tk-multi-loader`
-- Make sure the loader app is up to date in maya: `tank updates ALL tk-maya tk-multi-loader`
+
+- Check everything: `tank updates`
+- Check the Shot environment: `tank updates Shot`
+- Check all maya apps in all environments: `tank updates ALL tk-maya`
+- Check all maya apps in the Shot environment: `tank updates Shot tk-maya`
+- Make sure the loader app is up to date everywhere: `tank updates ALL ALL tk-multi-loader`
+- Make sure the loader app is up to date in maya: `tank updates ALL tk-maya tk-multi-loader`
In addition to checking the app store, this script checks all other registered locations too, so it may query your local git, a GitHub repository, a file on disk and the app store, depending on where you have deployed your apps.
@@ -126,10 +127,10 @@ The context can be created either from a {% include product %} object, such as a
The Toolkit Core contains a system for handling file paths. It is called the _Templates System_. Since Toolkit is filesystem based, apps will need to resolve file paths whenever they need to read or write data from disk. Apps are filesystem-structure agnostic, meaning that they don't know how the filesystem is organized. The template system handles all this for them.
-At the heart of the template system, there is a _Templates Configuration File_. This file contains all the important filesystem locations for a project. A _Template_ looks something like this:
+At the heart of the template system, there is a _Templates Configuration File_. This file contains all the important filesystem locations for a project. A _Template_ looks something like this:
```yaml
-maya_shot_publish: 'shots/{Shot}/{Step}/pub/{name}.v{version}.ma'
+maya_shot_publish: "shots/{Shot}/{Step}/pub/{name}.v{version}.ma"
```
It defines a path which contains certain dynamic fields. Each field can be configured with validation and typing, so that, for example, you can define that the `{version}` field in the template above is an integer padded with three zeros (e.g. `001`, `012`, `132`). Whenever an app needs to write or read something from disk, a template is added to the templates file to describe that location. Since apps are often set up to form a pipeline, the output template of one app (e.g. a publishing app) is often the input template of another app (e.g. a loading app). This is why all of the filesystem locations are kept in a single file.
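To make this concrete, here is a minimal plain-Python sketch of how such a template turns a set of fields into a path, including the three-zero padding for `{version}`. This is not the Toolkit API; the function name and the `int_padding` argument are purely illustrative.

```python
import re

def resolve_template(template, fields, int_padding=None):
    """Toy stand-in for Toolkit's template resolution: substitute each
    {field} token, zero-padding any field listed in int_padding."""
    int_padding = int_padding or {}

    def substitute(match):
        key = match.group(1)
        value = fields[key]
        if key in int_padding:
            # e.g. {"version": 3} means pad the version field to 3 digits
            value = str(int(value)).zfill(int_padding[key])
        return str(value)

    return re.sub(r"\{(\w+)\}", substitute, template)

template = "shots/{Shot}/{Step}/pub/{name}.v{version}.ma"
fields = {"Shot": "1122", "Step": "Modeling", "name": "main", "version": 12}
print(resolve_template(template, fields, int_padding={"version": 3}))
# shots/1122/Modeling/pub/main.v012.ma
```

The real system additionally validates field types and can run the resolution in reverse, extracting fields from an existing path.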
@@ -161,7 +162,7 @@ When we develop an app that does publishing, we obviously don't want to have a s

-This is where the _Toolkit Context_ comes into play. The Toolkit Context allows us to split the template fields into two distinct groups: the Context fields (`Shot`, `Step`, `Asset`, etc) are fields that we want to ensure are resolved outside of the app in such a way that the app's logic will not have to have code that specifically handles concepts such as Shots and Assets. Instead, the app should only populate the fields that are directly associated with the particular _business logic_ of the app. In our example of a publish app, the business logic consists of the `name` and the `version` fields. As the figure above illustrates, Toolkit therefore splits the field resolution into two distinct phases: some fields are populated by the context and some fields are populated by the business logic inside the app. This way, apps can be designed that are not tied to a particular filesystem layout. We believe this is an important aspect of building good pipeline tools.
+This is where the _Toolkit Context_ comes into play. The Toolkit Context allows us to split the template fields into two distinct groups: the Context fields (`Shot`, `Step`, `Asset`, etc.) are fields that we want to ensure are resolved outside of the app, so that the app's logic does not need code that specifically handles concepts such as Shots and Assets. Instead, the app should only populate the fields that are directly associated with the particular _business logic_ of the app. In our example of a publish app, the business logic consists of the `name` and the `version` fields. As the figure above illustrates, Toolkit therefore splits the field resolution into two distinct phases: some fields are populated by the context and some fields are populated by the business logic inside the app. This way, apps can be designed that are not tied to a particular filesystem layout. We believe this is an important aspect of building good pipeline tools.
The App Code that would deal with the path resolve would typically look something like this:
@@ -181,6 +182,7 @@ fields["version"] = 234
# order to save out the file
path = publish_template_obj.apply_fields(fields)
```
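The two-phase resolution above can be sketched in plain Python, with dicts standing in for the Context and a format string standing in for the template object (names and values here are illustrative, not the Toolkit API):

```python
# Phase 1: structural fields, resolved by the Context outside the app.
context_fields = {"Shot": "1122", "Step": "Modeling"}

# Phase 2: business-logic fields, supplied by the app itself.
app_fields = {"name": "main", "version": 234}

# The template only resolves once both groups have been merged.
template = "shots/{Shot}/{Step}/pub/{name}.v{version:03d}.ma"
path = template.format(**context_fields, **app_fields)
print(path)  # shots/1122/Modeling/pub/main.v234.ma
```

Because the app only ever touches `app_fields`, the same app code works unchanged on a config whose templates use, say, `{Sequence}/{Shot}` instead of `{Shot}`.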
+
For more details of how you can configure and use the Templates API, see the following:
[File System Configuration Reference](https://support.shotgunsoftware.com/hc/en-us/articles/219039868)
@@ -203,18 +205,18 @@ This makes it possible to configure separate collections of apps for different p
To give you a practical example of how environments work and can be structured, let's take a look at the environments that come with the default configuration:
-- `project.yml` - Apps and Engines to run when the context only contains a project.
-- `shot_and_asset.yml` - Apps and Engines to run when the context contains a shot or an asset.
-- `shot_step.yml` - Apps ane Engines when the context contains a Shot and a Pipeline Step.
-- `asset_step.yml` - Apps and Engines when the context contains an Asset and a Pipeline Step.
+- `project.yml` - Apps and Engines to run when the context only contains a project.
+- `shot_and_asset.yml` - Apps and Engines to run when the context contains a shot or an asset.
+- `shot_step.yml` - Apps and Engines when the context contains a Shot and a Pipeline Step.
+- `asset_step.yml` - Apps and Engines when the context contains an Asset and a Pipeline Step.
The default config has organized its filesystem based on pipeline steps. This means that under a Shot location, you can find folders for modeling, rigging, etc. Essentially, there is one folder for each pipeline step you work on. Each of these folders has its own work and publish areas on disk. This means that a publish template may look like this:
```yaml
-maya_shot_publish: 'sequences/{Sequence}/{Shot}/{Step}/pub/{name}.v{version}.ma'
+maya_shot_publish: "sequences/{Sequence}/{Shot}/{Step}/pub/{name}.v{version}.ma"
```
-In order to use this template, the context needs to contain both an entity and a Pipeline Step. For Shot `1122`, parented under Sequence `ABC` and pipeline step `Modeling`, the above template would resolve to `sequences/ABC/1122/Modeling/...`. This means that a context that contains a Shot but not a Pipeline Step is not enough to populate the above template. You cannot launch Maya for a Shot-only context and use the above template. In order for it to be functional, a Step is required.
+In order to use this template, the context needs to contain both an entity and a Pipeline Step. For Shot `1122`, parented under Sequence `ABC` and pipeline step `Modeling`, the above template would resolve to `sequences/ABC/1122/Modeling/...`. This means that a context that contains a Shot but not a Pipeline Step is not enough to populate the above template. You cannot launch Maya for a Shot-only context and use the above template. In order for it to be functional, a Step is required.
This leads us to the environment breakdown shown above. Because the filesystem structure defined in the default configuration is centered around steps, all the main apps need to run in a context which has a step defined. We define two such environments in the default config: the `asset_step.yml` file and the `shot_step.yml` file. Each of these files contains engines for a number of DCCs, such as Maya, Nuke, 3dsmax, Motionbuilder, and Photoshop, to name a few. When you launch Maya from a Task inside of {% include product %}, the pick environment hook will choose the `shot_step` environment, start Maya and load up the Maya app configuration.
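The decision logic a pick environment hook implements can be sketched as a plain function. A dict stands in for the Toolkit Context here (the real hook receives a Context object); the returned names are the default config's environment files minus their `.yml` extension.

```python
def pick_environment(context):
    """Illustrative sketch: map a context to one of the default
    config's environments (project, shot_and_asset, shot_step, asset_step)."""
    entity_type = context.get("entity_type")
    has_step = bool(context.get("step"))

    if entity_type == "Shot":
        return "shot_step" if has_step else "shot_and_asset"
    if entity_type == "Asset":
        return "asset_step" if has_step else "shot_and_asset"
    # Only a project is known.
    return "project"

print(pick_environment({"entity_type": "Shot", "step": "Modeling"}))  # shot_step
print(pick_environment({"entity_type": "Asset"}))                     # shot_and_asset
print(pick_environment({}))                                           # project
```

Because this is a hook, a studio with a different environment breakdown can replace exactly this logic without touching core code.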
@@ -254,10 +256,10 @@ This situation is handled in Toolkit using a _hook_. The hook is a customizable
Once Toolkit is installed, you can access it from several primary entry points:
-- {% include product %} Actions will appear on the right-click menus inside of {% include product %}
-- Launch icons will appear for the project in the {% include product %} Desktop app
-- You can use the `tank` command in a console.
-- The Toolkit Python API is available both inside applications and in the shell.
+- {% include product %} Actions will appear on the right-click menus inside of {% include product %}
+- Launch icons will appear for the project in the {% include product %} Desktop app
+- You can use the `tank` command in a console.
+- The Toolkit Python API is available both inside applications and in the shell.
Running the Toolkit from within {% include product %} is a common way of starting applications and carrying out tasks. {% include product %} will use {% include product %} Desktop to communicate with the Toolkit install that is local on your machine and use a local Python to execute a Toolkit command. This means that you can run local operations such as folder creation right from inside of {% include product %}.
@@ -281,10 +283,10 @@ As a starting point, however, we recommend our Publish App:
Toolkit is not just a collection of apps and engines. It is also a framework that you can use to develop your own tools and technologies! We have included a lot of features to make Toolkit a rich studio development platform. With Toolkit as a foundation, you can focus on the problems at hand rather than building the underlying platform yourself. We have tried to make it easy for developers to build, evaluate and release software without accidentally breaking the pipeline for artists.
-- The engines ensure that apps can be written in Python and Qt (PySide/PySide2) regardless of the underlying foundation. This means that some engines are very simple, while some engines are more complex depending on their provided APIs. This means that there is a straightforward, consistent way to develop tools for the studio. In our experience, Python and Qt is often found being the development environment studios use and many TDs are familiar with it.
-- The engine layer also means that apps can be written once and then be deployed in multiple environments. We have developed the standard app suite as _Multi Apps_, meaning that the same app is used in all engines. There will inevitably be specific code that needs to be tailored to work with the specific API that each DCC application exposes, but this is typically contained in one or more hooks, making it easy to reuse an app. Another consequence of being able to create multi apps like this is that when a new engine is being developed, all the standard apps can be easily configured to work with that new engine.
-- Via Pipeline Configurations and Cloning, it is easy to create a development sandbox, allowing developers to do work on a production without interfering with the day-to-day production activity. Once the tools are ready to be deployed, the main project configuration can be easily updated and the tool is rolled out to all artists.
-- Since apps run inside an engine, it is easy to reload them. Instead of having to restart Nuke or Maya every time you want to test a new code change, simply hit the reload button in Toolkit and the latest code is loaded in.
+- The engines ensure that apps can be written in Python and Qt (PySide/PySide2) regardless of the underlying foundation. Some engines are therefore very simple, while others are more complex, depending on the APIs they provide. The result is a straightforward, consistent way to develop tools for the studio. In our experience, Python and Qt are often the development environment studios use, and many TDs are familiar with them.
+- The engine layer also means that apps can be written once and then be deployed in multiple environments. We have developed the standard app suite as _Multi Apps_, meaning that the same app is used in all engines. There will inevitably be specific code that needs to be tailored to work with the specific API that each DCC application exposes, but this is typically contained in one or more hooks, making it easy to reuse an app. Another consequence of being able to create multi apps like this is that when a new engine is being developed, all the standard apps can be easily configured to work with that new engine.
+- Via Pipeline Configurations and Cloning, it is easy to create a development sandbox, allowing developers to do work on a production without interfering with the day-to-day production activity. Once the tools are ready to be deployed, the main project configuration can be easily updated and the tool is rolled out to all artists.
+- Since apps run inside an engine, it is easy to reload them. Instead of having to restart Nuke or Maya every time you want to test a new code change, simply hit the reload button in Toolkit and the latest code is loaded in.
For a more extensive introduction to App Development, see the following documents:
diff --git a/docs/en/guides/pipeline-integrations/development.md b/docs/en/guides/pipeline-integrations/development.md
index be70c031d..401bc7c35 100644
--- a/docs/en/guides/pipeline-integrations/development.md
+++ b/docs/en/guides/pipeline-integrations/development.md
@@ -9,7 +9,7 @@ lang: en
## What is Toolkit?
-Toolkit is the platform that underpins our pipeline integrations.
+Toolkit is the platform that underpins our pipeline integrations.
For example, if you are using the {% include product %} Panel app in Maya or launching the Publish app from {% include product %} Create, you are using tools built upon the Toolkit platform.
## How can I develop with Toolkit?
@@ -23,6 +23,7 @@ There are a number of different ways in which you can develop with Toolkit.
To do any of these things it's important to understand how to work with the Toolkit API.
{% include product %} as a whole has three main APIs:
+
- [{% include product %} Python API](https://developer.shotgridsoftware.com/python-api)
- [{% include product %} REST API](https://developer.shotgridsoftware.com/rest-api/)
- [{% include product %} Toolkit API](https://developer.shotgridsoftware.com/tk-core)
@@ -31,6 +32,6 @@ The Toolkit API is a Python API, designed to be used alongside the {% include pr
Although the Toolkit API does have some wrapper methods, in general whenever you need to access data from your {% include product %} site you will use the {% include product %} Python or REST APIs instead.
The Toolkit API instead focuses on the integrations and management of file paths.
-Some Toolkit apps and frameworks also [have their own APIs](../../reference/pipeline-integrations.md).
+Some Toolkit apps and frameworks also [have their own APIs](../../reference/pipeline-integrations.md).
-These articles will guide you through how you can develop with Toolkit.
\ No newline at end of file
+These articles will guide you through how you can develop with Toolkit.
diff --git a/docs/en/guides/pipeline-integrations/development/sgtk-developer-app.md b/docs/en/guides/pipeline-integrations/development/sgtk-developer-app.md
index 59f6df930..bd3e1e80e 100644
--- a/docs/en/guides/pipeline-integrations/development/sgtk-developer-app.md
+++ b/docs/en/guides/pipeline-integrations/development/sgtk-developer-app.md
@@ -16,20 +16,22 @@ This guide outlines what a Toolkit app is, covers how to create one, and explain
- [Creating your own app](#creating-your-own-app)
Steps:
+
1. [Creating a development sandbox](#part-1-creating-a-development-sandbox)
2. [Forking or downloading the starter app repository](#part-2-forking-or-downloading-the-starter-app-repository)
3. [Adding the app to your config](#part-3-adding-the-app-to-your-config)
4. [Developing the app](#part-4-developing-the-app)
- - [Anatomy of the Template Starter App](#anatomy-of-the-template-starter-app)
- - [Configuration settings](#configuration-settings)
- - [Frameworks](#frameworks)
- - [Reloading your changes](#reloading-your-changes)
+ - [Anatomy of the Template Starter App](#anatomy-of-the-template-starter-app)
+ - [Configuration settings](#configuration-settings)
+ - [Frameworks](#frameworks)
+ - [Reloading your changes](#reloading-your-changes)
5. [Testing](#part-5-testing)
6. [Preparing your first release](#part-6-preparing-your-first-release)
Additional info:
+
- [Modifying an existing app](#modifying-an-existing-app)
- - [Contributing](#contributing)
+ - [Contributing](#contributing)
## What is a Toolkit app?
@@ -47,27 +49,30 @@ Toolkit apps are initialized by Toolkit engines. [Engines](https://developer.sho
This means the app only needs to focus on providing the functionality to fulfill its purpose and doesn't need to, for example, handle window parenting, keeping track of the user's context, or providing a shortcut for launching itself.
## Creating your own app
+
All apps and engines maintained and released by {% include product %} Software are open source and you can find them in [GitHub](https://github.com/shotgunsoftware).
-This section goes through how to create a new app using our starter template.
+This section goes through how to create a new app using our starter template.
We assume that you are familiar with GitHub and git workflows, but please note that you can do Toolkit development even if you are not using git as your source control solution.
-
## Part 1: Creating a development sandbox
+
Before you do anything else, we recommend that you set up a [development sandbox](../getting-started/installing_app.md#clone-the-pipeline-configuration-you-want-to-add-an-app-to) by cloning your project's configuration.
-This will result in a separate configuration where you can develop your code and test changes without affecting anyone else on the production.
+This will result in a separate configuration where you can develop your code and test changes without affecting anyone else on the production.
## Part 2: Forking or downloading the starter app repository
+
We provide a [template starter app](https://github.com/shotgunsoftware/tk-multi-starterapp) that you can use as a starting point for your own app.
By using this app you get all the standard Toolkit boilerplate code set up for you, and a basic example GUI.

-To use it, you can either fork the git repository and then clone it to your local dev area on disk,
+To use it, you can either fork the git repository and then clone it to your local dev area on disk,
or if you don't want to use git source control at this stage, you can just download the files from GitHub as a zip file, and unzip them locally (you can always set up a git repository later).
Either way, the goal is to have a local copy of the starter app code so you can start making changes.
## Part 3: Adding the app to your config
+
We recommend reading the "[Adding an app](../getting-started/installing_app.md)" guide, as it explains in more detail how to add an app to your configuration.
When adding the app to your config, you need to consider where your app will be used, e.g. only in Nuke, in multiple different software packages, or standalone from {% include product %} Desktop.
@@ -75,7 +80,7 @@ You also need to think about the context that your app will depend on.
For example, can your app only run when you know the task the user is working on, or can it run with just the project known?
Knowing this will dictate which environment YAMLs and engines you need to add your app settings to.
-If you're not sure right now, it's a good idea to start by adding it to the `tk-shell` engine in the project environment.
+If you're not sure right now, it's a good idea to start by adding it to the `tk-shell` engine in the project environment.
That way you can [run it from your IDE](./sgtk-developer-bootstrapping.md) or via the command line with the `tank` command if you have a [centralized config](https://developer.shotgridsoftware.com/tk-core/initializing.html#centralized-configurations). This will make development quicker.
To start, use a [dev descriptor](https://developer.shotgridsoftware.com/tk-core/descriptor.html#pointing-to-a-path-on-disk) for the location of your app.
@@ -86,6 +91,7 @@ tk-multi-starterapp:
type: dev
path: /path/to/source_code/tk-multi-starterapp
```
+
This instructs Toolkit to load the app code directly from disk in the given location, which is great for development, where you want to change the code all the time.
Later when you add the app to your production config, you may want to use a different descriptor.
@@ -102,37 +108,41 @@ The [template starter app](https://github.com/shotgunsoftware/tk-multi-starterap

- **app.py** - The app entry point and menu registration can be found in the `app.py` file. This is where you typically set up your classes, get things initialized, and get menu items registered.
-- **info.yml** - Also known as the manifest file. It defines all the different settings that this app requires when it is installed, along with their default values if supplied.
-Settings are often useful if you want reusable apps and you don't want to hard code any values in the app itself.
+- **info.yml** - Also known as the manifest file. It defines all the different settings that this app requires when it is installed, along with their default values if supplied.
+ Settings are often useful if you want reusable apps and you don't want to hard code any values in the app itself.
- **python/app/dialog.py** - This contains the logic, event callbacks, etc. that produce the main app window.
- **python/app/ui** - This folder contains the automatically generated UI code and resource file. You don't edit this directly; instead, you edit the Qt UI file in the `resources` folder.
-- **resources/** - In the resources folder, the `dialog.ui` file is a Qt Designer file that you can open up and use to rapidly design and define the look and feel of the app.
-Once you have made changes, you have to execute the `build_resources.sh` script, which will convert the UI file to python code and store it as `/python/app/ui/dialog.py`.
+- **resources/** - In the resources folder, the `dialog.ui` file is a Qt Designer file that you can open up and use to rapidly design and define the look and feel of the app.
+  Once you have made changes, you have to execute the `build_resources.sh` script, which will convert the UI file to Python code and store it as `/python/app/ui/dialog.py`.
- **style.qss** - You can define QSS (Qt style sheets) for your UI in this file.
{% include info title="Note" content="An app doesn't need to have a UI however, and the minimum requirements for a valid app are an `app.py` containing an `Application` class and an `info.yml`." %}
### Configuration settings
+
Inside the manifest file, there should be a `configuration` section where you can define app settings.
Defining a setting in the manifest file allows you to configure different setting values for your app in the environment YAML files.
This is useful if your app needs to behave differently depending on the environment it is in.
-For example, you may wish to have a setting that defines a template to use when saving a file.
+For example, you may wish to have a setting that defines a template to use when saving a file.
+
```yaml
save_template:
- type: template
- default_value: "maya_asset_work"
- description: The template to use when building the path to save the file into
- allows_empty: False
+ type: template
+ default_value: "maya_asset_work"
+ description: The template to use when building the path to save the file into
+ allows_empty: False
```
-Creating a setting for this means you don't have to hard code the template name in your app code,
-and [can instead get the value](https://developer.shotgridsoftware.com/tk-core/platform.html#sgtk.platform.Application.get_setting)
+
+Creating a setting for this means you don't have to hard code the template name in your app code,
+and [can instead get the value](https://developer.shotgridsoftware.com/tk-core/platform.html#sgtk.platform.Application.get_setting)
from the settings defined either by default in the `info.yml` or overridden in the environment YAML file settings.
```python
template = app.get_setting("save_template")
```
+
This means that you could configure your app to use a different template depending on the environment the app is running in.
You can read more on configuration settings [in the reference documentation](https://developer.shotgridsoftware.com/tk-core/platform.html#the-configuration-section).
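For example, an environment file could override the manifest's default value per environment. The app instance and template names below are illustrative:

```yaml
# In shot_step.yml: use the shot work template rather than the
# default "maya_asset_work" declared in info.yml.
tk-multi-starterapp:
  save_template: maya_shot_work
```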
@@ -151,6 +161,7 @@ frameworks:
```
#### Minimum required framework versions
+
If there is a required minimum version for a framework, the minimum_version setting can be used in the `info.yml`:
```yaml
@@ -160,8 +171,8 @@ frameworks:
- {"name": "tk-framework-qtwidgets", "version": "v1.x.x", "minimum_version": "v1.5.0"}
```
-The above will ensure that `v1.5.0` of `tk-framework-qtwidgets` is available for the app to use.
-If it is not, the app will not be loaded at startup time and an error will be printed to the console.
+The above will ensure that `v1.5.0` or later of `tk-framework-qtwidgets` is available for the app to use.
+If it is not, the app will not be loaded at startup time and an error will be printed to the console.
When the app is updated using `tank updates`, any configured frameworks not meeting their required minimum versions will be automatically updated along with the app.
For more information about frameworks and how they can be useful, check out the following links:
@@ -171,17 +182,18 @@ For more information about frameworks and how they can be useful, check out the
### Reloading your changes
-If you are testing your app within software such as Maya, then as soon as you have one or more dev items in your configuration,
+If you are testing your app within software such as Maya, then as soon as you have one or more dev items in your configuration,
Toolkit will automatically add a **Reload and Restart** option to the {% include product %} menu.

-Clicking this will reload your configuration and code and then restart your engine.
-This means that you can iterate quickly: start Maya once, and then each time you make code or configuration changes that you want to try out, simply hit the **Reload and Restart** button and your changes will be pulled in.
+Clicking this will reload your configuration and code and then restart your engine.
+This means that you can iterate quickly: start Maya once, and then each time you make code or configuration changes that you want to try out, simply hit the **Reload and Restart** button and your changes will be pulled in.
{% include info title="Note" content="If you have any UIs active on screen, these will not automatically update, but you have to go in and re-launch the UIs from the menu." %}
## Part 5: Testing
-When you want to test your code, you can easily invite other users to your dev sandbox by adding them to the `User Restrictions` field on the `PipelineConfiguration` entity in {% include product %}.
+
+When you want to test your code, you can easily invite other users to your dev sandbox by adding them to the `User Restrictions` field on the `PipelineConfiguration` entity in {% include product %}.
As soon as you have added a user, they will see new entries on their menus inside of {% include product %} Create and the browser actions, as well as an option to pick the configuration inside of {% include product %} Desktop.

@@ -197,11 +209,12 @@ All apps provided by {% include product %} use the Toolkit app store to track up
```yaml
location:
- name: tk-multi-setframerange
- type: app_store
- version: v0.1.7
+ name: tk-multi-setframerange
+ type: app_store
+ version: v0.1.7
```
-This allows Toolkit (for example the `tank updates` command) to check when updates are available, update and maintain configurations in a very safe way.
+
+This allows Toolkit (for example, the `tank updates` command) to check when updates are available and to update and maintain configurations in a very safe way.
Whenever the updates command is run and a new version is available, Toolkit will download the code and place it in a local "bundle cache" on disk and ensure that users have access to it.
There are a few different options for sourcing your app releases.
@@ -222,37 +235,39 @@ The requirements for this are:
- Your git repo needs to contain just a single app
- Your git repo should have the same structure as the [starter app repository](https://github.com/shotgunsoftware/tk-multi-starterapp).
-- You use [Semantic Versioning](http://semver.org) when creating your tags.
-Toolkit will use these version numbers to try to determine which version is the most recent, and by following the convention `vX.Y.Z`.
+- You use [Semantic Versioning](http://semver.org) when creating your tags.
+  Toolkit will use these version numbers, following the convention `vX.Y.Z`, to determine which version is the most recent.
Once you have created your first tag in git (e.g. `v1.0.0`), you could then set up your config to use a git descriptor that points to your tag.
-Then you can simply run `tank updates`, and if new tags have been created, you will be prompted if you want to upgrade.
+Then you can simply run `tank updates`, and if new tags have been created, you will be prompted if you want to upgrade.
The workflow is now identical to the one which happens with official app store apps.
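For example, a git descriptor pointing at such a tag could look like the following sketch (the repository URL and app name are placeholders, not real repositories):

```yaml
location:
  type: git
  path: https://github.com/your-studio/tk-multi-yourapp.git
  version: v1.0.0
```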
{% include warning title="Caution" content="The git descriptor works well with [centralized configs](https://developer.shotgridsoftware.com/tk-core/initializing.html#centralized-configurations), where the caching of apps is usually run by an admin and is stored to a central location where all users can access it. However, if you are using a [distributed config](https://developer.shotgridsoftware.com/tk-core/initializing.html#distributed-configurations), then it may not be as suitable. Your app will be downloaded per user, which means each user will need to have git installed and setup to authenticate with your repo and access the code." %}
## Modifying an existing app
-Rather than starting from an empty starter template, it is sometimes necessary to add a minor feature to an existing app, for example, one of {% include product %} Software's standard apps.
+
+Rather than starting from an empty starter template, it is sometimes necessary to add a minor feature to an existing app, for example, one of {% include product %} Software's standard apps.
When you work with a modified version of an app, you typically want to 'track' against the source app and make sure to regularly pull in changes and bug fixes.
-When you do this type of development, you pick up the parent code, then apply some of your changes, then release this to your pipeline.
-Your release effectively consists of the base version of the app PLUS your applied local changes.
-We recommend that you add a version suffix to the existing version number.
-This will work seamlessly with Toolkit and is relatively straight forward.
+When you do this type of development, you pick up the parent code, then apply some of your changes, then release this to your pipeline.
+Your release effectively consists of the base version of the app PLUS your applied local changes.
+We recommend that you add a version suffix to the existing version number.
+This will work seamlessly with Toolkit and is relatively straightforward.
The following workflow illustrates how to proceed:
-- You fork the parent app and create your own repository. With the fork, you get all the git tags.
-Let's say the latest tag is called `v0.2.12` and the master branch basically equals the contents in this tag.
-- You apply your changes and commit to your master branch. You now have `v0.2.12` PLUS your changes.
-When you release this to production you need to create a tag. Name the tag `v0.2.12.1`, to indicate that your code is based on `v0.2.12` and it is the first release.
+- You fork the parent app and create your own repository. With the fork, you get all the git tags.
+ Let's say the latest tag is called `v0.2.12` and the master branch basically equals the contents in this tag.
+- You apply your changes and commit to your master branch. You now have `v0.2.12` PLUS your changes.
+ When you release this to production you need to create a tag. Name the tag `v0.2.12.1`, to indicate that your code is based on `v0.2.12` and it is the first release.
- Now someone finds a bug in your modifications. Fix the bug and tag up and release `v0.2.12.2`.
-- A number of important bug fixes have been released in the parent repository.
-Pull them down to your repository. The most recent tag is now `v0.2.15` due to releases that have happened in the parent repository.
-Now merge your changes with master and test. You now basically have parent app `v0.2.15` merged with your changes. Tag up `v0.2.15.1`.
+- A number of important bug fixes have been released in the parent repository.
+ Pull them down to your repository. The most recent tag is now `v0.2.15` due to releases that have happened in the parent repository.
+ Now merge your changes with master and test. You now basically have parent app `v0.2.15` merged with your changes. Tag up `v0.2.15.1`.
The tagging scheme outlined above guarantees that the Toolkit updates work correctly and make it easy to quickly see what code each tag in your fork is based on.
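The reason this scheme keeps update checks working is that the four-part tags still compare correctly as numeric tuples. A minimal sketch of the comparison, using the tag names from the workflow above (the helper function is illustrative, not part of the Toolkit API):

```python
def tag_key(tag):
    # "v0.2.12.1" -> (0, 2, 12, 1); comparing numeric tuples avoids the
    # pitfalls of plain string ordering (e.g. "v0.2.9" sorting after "v0.2.15").
    return tuple(int(part) for part in tag.lstrip("v").split("."))

tags = ["v0.2.12", "v0.2.12.1", "v0.2.12.2", "v0.2.15.1"]
latest = max(tags, key=tag_key)
print(latest)  # v0.2.15.1
```

Tuple comparison also places a base tag like `v0.2.12` before its suffixed releases `v0.2.12.1` and `v0.2.12.2`, so the most recent fork release always wins.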
### Contributing
+
We love pull requests! If you feel you have made a change that can benefit others, don't hesitate to feed it back to us as a pull request.
We may then fold it back into the main version of the app.
Alternatively, add a suggestion for a new idea on our [roadmap page](https://www.shotgunsoftware.com/roadmap/).
diff --git a/docs/en/guides/pipeline-integrations/development/sgtk-developer-bootstrapping.md b/docs/en/guides/pipeline-integrations/development/sgtk-developer-bootstrapping.md
index e897ada4f..3e42177d5 100644
--- a/docs/en/guides/pipeline-integrations/development/sgtk-developer-bootstrapping.md
+++ b/docs/en/guides/pipeline-integrations/development/sgtk-developer-bootstrapping.md
@@ -15,7 +15,6 @@ Or you may wish to be able to run your Toolkit app from your favorite IDE.
{% include info title="Note" content="If you are using a [distributed config](https://developer.shotgridsoftware.com/tk-core/initializing.html#distributed-configurations), a Toolkit engine must be initialized before running Toolkit API methods. It is possible to use the API without bootstrapping an engine if you are using a [centralized config](https://developer.shotgridsoftware.com/tk-core/initializing.html#centralized-configurations), using the [factory methods](https://developer.shotgridsoftware.com/tk-core/initializing.html#factory-methods), however, you will need to manually find the path to the correct core API for your project when importing `sgtk`." %}
-
### Requirements
- An understanding of Python programming fundamentals.
@@ -43,10 +42,10 @@ The bootstrap process will swap out the currently imported sgtk package for the
To start, you need to import an `sgtk` API package which is found in [`tk-core`](https://github.com/shotgunsoftware/tk-core/tree/v0.18.172/python).
You could import one from an existing project; however, this might be tricky to locate conveniently.
-A recommended approach would be to download a standalone copy
+A recommended approach would be to download a standalone copy
of the [latest core API](https://github.com/shotgunsoftware/tk-core/releases) which will be used purely for the purpose of bootstrapping.
-You should store it in a convenient place where it can be imported.
-Make sure that the path you add points to the `python` folder inside the `tk-core` folder as this is where the `sgtk` package is located.
+You should store it in a convenient place where it can be imported.
+Make sure that the path you add points to the `python` folder inside the `tk-core` folder as this is where the `sgtk` package is located.
### Code
@@ -65,7 +64,7 @@ If you are running this script via an IDE or shell, then you will most likely wa
To do this you need to run [`LogManager().initialize_custom_handler()`](https://developer.shotgridsoftware.com/tk-core/utils.html#sgtk.log.LogManager.initialize_custom_handler).
You don't need to provide a custom handler for this purpose, as not providing one will set up a standard stream-based logging handler.
-Optionally you can also set the [`LogManager().global_debug = True`](https://developer.shotgridsoftware.com/tk-core/utils.html#sgtk.log.LogManager.global_debug) to give you more verbose output.
+Optionally, you can also set [`LogManager().global_debug = True`](https://developer.shotgridsoftware.com/tk-core/utils.html#sgtk.log.LogManager.global_debug) to get more verbose output.
This means that any `logger.debug()` calls in our code or yours will now be output.
Logging can have an impact on performance, so you should only enable debug logging when developing, and try to limit the number of `logger.info()` calls to those that are important to have visibility over during normal operation.
@@ -86,7 +85,7 @@ So before you can perform the bootstrapping, you need to authenticate the Toolki
You can authenticate with user credentials or with script credentials.
- If the purpose is to bootstrap for a user-facing process like launching an app, or running some code that will require user input,
-then user authentication is the best way to go, (This is how all our integrations work by default).
+  then user authentication is the best way to go (this is how all our integrations work by default).
- If you're writing a script to automate something and a user is not present to authenticate then you should use script credentials.
Authentication is handled via the [`{% include product %}Authenticator`](https://developer.shotgridsoftware.com/tk-core/authentication.html?highlight=shotgunauthenticator#sgtk.authentication.ShotgunAuthenticator) class.
@@ -144,17 +143,16 @@ You can find a lot of information on the bootstrap API in our [reference docs](h
The bootstrapping process at a high level essentially performs the following steps:
1. Retrieves or locates the Toolkit configuration folder.
-2. Ensures that the configuration dependencies such as the apps and engines are present in the [bundle cache](../../../quick-answers/administering/where-is-my-cache.md#bundle-cache).
-If they are not present, and they are using cloud-based descriptors such as [`app_store`](https://developer.shotgridsoftware.com/tk-core/descriptor.html#the-shotgun-app-store), or [`{% include product %}`](https://developer.shotgridsoftware.com/tk-core/descriptor.html#pointing-at-a-file-attachment-in-shotgun) then it will download them to the bundle cache.
+2. Ensures that the configuration dependencies such as the apps and engines are present in the [bundle cache](../../../quick-answers/administering/where-is-my-cache.md#bundle-cache).
+ If they are not present, and they are using cloud-based descriptors such as [`app_store`](https://developer.shotgridsoftware.com/tk-core/descriptor.html#the-shotgun-app-store), or [`{% include product %}`](https://developer.shotgridsoftware.com/tk-core/descriptor.html#pointing-at-a-file-attachment-in-shotgun) then it will download them to the bundle cache.
3. Swaps out the current loaded sgtk core for the one appropriate to the config.
4. Initializes the engine, apps, and frameworks.
-
{% include info title="Note" content="Usually bootstrapping should take care of everything that is needed for that engine to run successfully.
However, in some situations, the engine may have specific setup requirements that fall outside of the bootstrap process, and must be handled separately." %}
-
### Bootstrap Preparation
+
To bootstrap, you must first create a [`ToolkitManager`](https://developer.shotgridsoftware.com/tk-core/initializing.html#toolkitmanager) instance.
```python
@@ -167,25 +165,28 @@ This guide won't cover all the available parameters and options, as they are cov
#### Plugin ID
You can define the plugin id by passing a string to the `ToolkitManager.plugin_id` parameter before calling the bootstrap method.
-In this guide, you will be bootstrapping the `tk-shell` engine so you should provide a suitable plugin id name following the conventions described in the reference docs.
+In this guide, you will be bootstrapping the `tk-shell` engine, so you should provide a suitable plugin id following the conventions described in the reference docs.
+
```python
mgr.plugin_id = "basic.shell"
```
#### Engine
-If your goal is to launch an app or run Toolkit code in a standalone python environment outside of software such as Maya or Nuke, then `tk-shell` is the engine you will want to bootstrap into.
+
+If your goal is to launch an app or run Toolkit code in a standalone python environment outside of software such as Maya or Nuke, then `tk-shell` is the engine you will want to bootstrap into.
If you want to run Toolkit apps within supported software, pick the appropriate engine, such as `tk-maya` or `tk-nuke`.
This parameter is passed directly to the [`ToolkitManager.bootstrap_engine()`](https://developer.shotgridsoftware.com/tk-core/initializing.html#sgtk.bootstrap.ToolkitManager.bootstrap_engine) method. See the example in the [entity section](#entity) below.
#### Entity
+
The [`ToolkitManager.bootstrap_engine()`](https://developer.shotgridsoftware.com/tk-core/initializing.html#sgtk.bootstrap.ToolkitManager.bootstrap_engine) method's `entity` parameter is used to set the [context](https://developer.shotgridsoftware.com/tk-core/core.html#context) and therefore the [environment](https://developer.shotgridsoftware.com/tk-core/core.html?highlight=environment#module-pick_environment) for the launched engine.
-The entity can be of any entity type that the configuration is set up to work with.
+The entity can be of any entity type that the configuration is set up to work with.
For example, if you provide a `Project` entity, the engine will start up in a project context, using the project environment settings.
Likewise, you could provide a `Task` entity (where the task is linked to an `Asset`), and it will start up using the `asset_step.yml` environment.
This is based on the default configuration behavior, [the environment that is chosen](https://developer.shotgridsoftware.com/487a9f2c/?title=Environment+Configuration+Reference#how-toolkit-determines-the-current-environment) is controlled via the core hook, [`pick_environment.py`](https://github.com/shotgunsoftware/tk-config-default2/blob/v1.2.11/core/hooks/pick_environment.py), and so could be changed to pick a different environment based on the context or other parameters.
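As a rough illustration of that hook's role, here is a simplified, hypothetical stand-in. The real `pick_environment.py` receives a full Toolkit `Context` object rather than plain values, so treat this purely as a sketch of the decision logic:

```python
def pick_environment(entity_type, has_step):
    # Hypothetical, simplified version of the decision made by the
    # pick_environment core hook: map a context to an environment name.
    if has_step:
        if entity_type == "Asset":
            return "asset_step"  # would load asset_step.yml
        if entity_type == "Shot":
            return "shot_step"   # would load shot_step.yml
    if entity_type == "Project":
        return "project"         # would load project.yml
    return None

print(pick_environment("Asset", True))  # asset_step
```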
-You need to provide the entity in the format of a {% include product %} entity dictionary which must contain at least the type and id:
+You need to provide the entity in the format of a {% include product %} entity dictionary which must contain at least the type and id:
```python
task = {"type": "Task", "id": 17264}
@@ -211,12 +212,11 @@ def pre_engine_start_callback(ctx):
mgr.pre_engine_start_callback = pre_engine_start_callback
```
-
#### Choice of configuration
You have the choice of explicitly defining which configuration to bootstrap, or leaving the bootstrap logic to [autodetect an appropriate configuration](https://developer.shotgridsoftware.com/tk-core/initializing.html#managing-distributed-configurations).
You can even set a fallback configuration in case one is not automatically found.
-In this guide, we assume that your project has a configuration already setup and that it will be found automatically.
+In this guide, we assume that your project already has a configuration set up and that it will be found automatically.
### Bootstrapping
@@ -252,7 +252,7 @@ sgtk.set_authenticated_user(user)
# Bootstrap
###########
-# create an instance of the ToolkitManager which we will use to set a bunch of settings before initiating the bootstrap.
+# Create an instance of the ToolkitManager, which we will use to configure a number of settings before initiating the bootstrap.
mgr = sgtk.bootstrap.ToolkitManager()
mgr.plugin_id = "basic.shell"
@@ -272,14 +272,16 @@ engine.context
engine.sgtk
engine.shotgun
```
+
Whilst the end goal of this guide is to show you how to launch an app, you could from this point make use of the above attributes and test some code snippets or run some automation that makes use of the Toolkit API.
### Launching the App
-When the engine starts, it initializes all the apps defined for the environment.
+When the engine starts, it initializes all the apps defined for the environment.
The apps in turn register commands with the engine, and the engine usually displays these as actions in a menu, if running in Software like Maya.
#### Finding the commands
+
To first see what commands have been registered, you can print out the [`Engine.commands`](https://developer.shotgridsoftware.com/tk-core/platform.html#sgtk.platform.Engine.commands) property:
```python
@@ -355,7 +357,7 @@ sgtk.set_authenticated_user(user)
# Bootstrap
###########
-# create an instance of the ToolkitManager which we will use to set a bunch of settings before initiating the bootstrap.
+# Create an instance of the ToolkitManager, which we will use to configure a number of settings before initiating the bootstrap.
mgr = sgtk.bootstrap.ToolkitManager()
mgr.plugin_id = "basic.shell"
@@ -371,4 +373,4 @@ engine = mgr.bootstrap_engine("tk-shell", entity=project)
if "Publish..." in engine.commands:
    # Launch the Publish app; it doesn't require any arguments to run, so provide an empty list.
engine.execute_command("Publish...",[])
-```
\ No newline at end of file
+```
diff --git a/docs/en/guides/pipeline-integrations/development/sgtk-developer-engine.md b/docs/en/guides/pipeline-integrations/development/sgtk-developer-engine.md
index 5277d148a..824bc95a1 100644
--- a/docs/en/guides/pipeline-integrations/development/sgtk-developer-engine.md
+++ b/docs/en/guides/pipeline-integrations/development/sgtk-developer-engine.md
@@ -8,27 +8,30 @@ lang: en
# Developing your own engine
## Introduction
+
This document outlines some of the technical details relating to Toolkit engine development.
Table of Contents:
+
- [What is a Toolkit engine?](#what-is-a-toolkit-engine)
- [Things to know before you start](#things-to-know-before-you-start)
- [Approaches to engine integration](#approaches-to-engine-integration)
- - [Host software includes Qt, PyQt/PySide and Python](#host-software-includes-qt-pyqtpyside-and-python)
- - [Host software includes Qt and Python but not PySide/PyQt](#host-software-includes-qt-and-python-but-not-pysidepyqt)
- - [Host software includes Python](#host-software-includes-python)
- - [Host software does not contain Python but you can write plugins](#host-software-does-not-contain-python-but-you-can-write-plugins)
- - [Host software provides no scriptability at all](#host-software-provides-no-scriptability-at-all)
+ - [Host software includes Qt, PyQt/PySide and Python](#host-software-includes-qt-pyqtpyside-and-python)
+ - [Host software includes Qt and Python but not PySide/PyQt](#host-software-includes-qt-and-python-but-not-pysidepyqt)
+ - [Host software includes Python](#host-software-includes-python)
+ - [Host software does not contain Python but you can write plugins](#host-software-does-not-contain-python-but-you-can-write-plugins)
+ - [Host software provides no scriptability at all](#host-software-provides-no-scriptability-at-all)
- [Qt window parenting](#qt-window-parenting)
- [Startup behavior](#startup-behavior)
- [Host software wish list](#host-software-wish-list)
## What is a Toolkit engine?
-When developing an engine, you effectively establish a bridge between the host software and the various Toolkit apps and frameworks that are loaded into the engine.
+
+When developing an engine, you effectively establish a bridge between the host software and the various Toolkit apps and frameworks that are loaded into the engine.
The engine makes it possible to abstract the differences between software so that apps can be written in more of a software-agnostic manner using Python and Qt.
-The engine is a collection of files, [similar in structure to an app](sgtk-developer-app.md#anatomy-of-the-template-starter-app). It has an `engine.py` file and this must derive from the core [`Engine` base class](https://github.com/shotgunsoftware/tk-core/blob/master/python/tank/platform/engine.py).
-Different engines then re-implement various aspects of this base class depending on their internal complexity.
+The engine is a collection of files, [similar in structure to an app](sgtk-developer-app.md#anatomy-of-the-template-starter-app). It has an `engine.py` file, and the engine class defined there must derive from the core [`Engine` base class](https://github.com/shotgunsoftware/tk-core/blob/master/python/tank/platform/engine.py).
+Different engines then re-implement various aspects of this base class depending on their internal complexity.
An engine typically handles or provides the following services:
- Menu management. At engine startup, once the apps have been loaded, the engine needs to create its {% include product %} menu and add the various apps to this menu.
@@ -38,25 +41,25 @@ An engine typically handles or provides the following services:
- The base class exposes various init and destroy methods that are executed at various points in the startup process. These can be overridden to control startup and shutdown execution.
- Startup logic that gets called by the `tk-multi-launchapp` at launch time, as well as automatic software discovery.
-Engines are launched by the Toolkit platform using the [`sgtk.platform.start_engine()`](https://developer.shotgridsoftware.com/tk-core/platform.html#sgtk.platform.start_engine) or the [`sgtk.bootstrap.ToolkitManager.bootstrap_engine()`](https://developer.shotgridsoftware.com/tk-core/initializing.html#sgtk.bootstrap.ToolkitManager.bootstrap_engine) methods.
+Engines are launched by the Toolkit platform using the [`sgtk.platform.start_engine()`](https://developer.shotgridsoftware.com/tk-core/platform.html#sgtk.platform.start_engine) or the [`sgtk.bootstrap.ToolkitManager.bootstrap_engine()`](https://developer.shotgridsoftware.com/tk-core/initializing.html#sgtk.bootstrap.ToolkitManager.bootstrap_engine) methods.
These methods read the configuration files, launch the engine, load all apps, and so on.
-The goal with the engine is that once it has launched, it will provide a consistent Python/Qt interface to the apps.
-Since all engines implement the same base class, apps can call methods on the engines, for example, to create UIs.
+The goal with the engine is that once it has launched, it will provide a consistent Python/Qt interface to the apps.
+Since all engines implement the same base class, apps can call methods on the engines, for example, to create UIs.
It is up to each engine to implement these methods so that they work nicely inside the host software.
## Things to know before you start
We provide [integrations](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines) for the most commonly used content creation software.
-There are also [engines that Toolkit Community members have built and shared back](https://support.shotgunsoftware.com/hc/en-us/articles/219039828-Community-Shared-Integrations). But sometimes you'll need pipeline integrations for software that doesn't have a Toolkit engine yet.
+There are also [engines that Toolkit Community members have built and shared back](https://support.shotgunsoftware.com/hc/en-us/articles/219039828-Community-Shared-Integrations). But sometimes you'll need pipeline integrations for software that doesn't have a Toolkit engine yet.
If you have the time and resources, we encourage you to help the Toolkit Community (and yourselves) in writing a missing engine you would like to use!
-Before embarking on writing code, [talk to us!](https://knowledge.autodesk.com/contact-support) We can't promise anything, but we will be happy to discuss your plans with you.
+Before embarking on writing code, [talk to us!](https://knowledge.autodesk.com/contact-support) We can't promise anything, but we will be happy to discuss your plans with you.
We may also be able to connect you to other users who are interested in or have done work on the same engine.
-If you can, open a channel of communication with a technical contact or developer of the software you are looking to integrate Toolkit into.
-This helps gain insight into what the possibilities and/or roadblocks are for getting something going.
+If you can, open a channel of communication with a technical contact or developer of the software you are looking to integrate Toolkit into.
+This helps gain insight into what the possibilities and/or roadblocks are for getting something going.
Once you establish a contact and talk through the basics of what you are trying to do, you can bring us into the conversation and set up a meeting with all of us to talk through some of the specifics of the engine.
-Also, you can engage directly with the Toolkit community in the [{% include product %} community forum](https://community.shotgridsoftware.com/c/pipeline).
+Also, you can engage directly with the Toolkit community in the [{% include product %} community forum](https://community.shotgridsoftware.com/c/pipeline).
We love to see new integrations, and are always eternally grateful for people's generous contributions to the Toolkit Community!
@@ -64,58 +67,60 @@ We love to see new integrations, and are always eternally grateful for people's
## Approaches to engine integration
-Depending on what the capabilities of the host app are, engine development may be more or less complex.
+Depending on the capabilities of the host app, engine development may be more or less complex.
This section outlines a couple of different complexity levels that we have noticed during engine development.
-
### Host software includes Qt, PyQt/PySide, and Python
-This is the best setup for Toolkit and implementing an engine on top of a host software that supports Qt, Python, and PySide is very straight forward.
-The [Nuke engine](https://github.com/shotgunsoftware/tk-nuke) or the [Maya engine](https://github.com/shotgunsoftware/tk-maya) is a good example of this. Integration is merely a matter of hooking up some log file management and write code to set up the {% include product %} menu.
+This is the best setup for Toolkit: implementing an engine on top of host software that supports Qt, Python, and PySide is very straightforward.
+The [Nuke engine](https://github.com/shotgunsoftware/tk-nuke) and the [Maya engine](https://github.com/shotgunsoftware/tk-maya) are good examples of this. Integration is merely a matter of hooking up some log file management and writing code to set up the {% include product %} menu.
### Host software includes Qt and Python but not PySide/PyQt
-This class of software includes for example [Motionbuilder](https://github.com/shotgunsoftware/tk-motionbuilder) and is relatively easy to integrate.
+
+This class of software includes, for example, [Motionbuilder](https://github.com/shotgunsoftware/tk-motionbuilder) and is relatively easy to integrate.
Since the host software itself was written in Qt and contains a Python interpreter, it is possible to compile a version of PySide or PyQt and distribute it with the engine.
-This PySide is then added to the Python environment and will allow access to the Qt objects using Python.
+This PySide is then added to the Python environment and will allow access to the Qt objects using Python.
Commonly, the exact compiler settings that were used when compiling the host application must be used when compiling PySide, to guarantee that it works.
-
### Host software includes Python
+
This class of software includes, for example, the third-party integration [Unreal](https://github.com/ue4plugins/tk-unreal).
-These host software have a non-Qt UI but contain a Python interpreter.
-This means that Python code can execute inside of the environment, but there is no existing Qt event loop running.
-In this case, Qt and PySide will need to be included with the engine and the Qt message pump (event) loop must be hooked up with the main event loop in the UI.
-Sometimes the host software may contain special methods for doing precisely this.
+Such host software has a non-Qt UI but contains a Python interpreter.
+This means that Python code can execute inside of the environment, but there is no existing Qt event loop running.
+In this case, Qt and PySide will need to be included with the engine and the Qt message pump (event) loop must be hooked up with the main event loop in the UI.
+Sometimes the host software may contain special methods for doing precisely this.
If not, arrangements must be made so that the Qt event loop runs regularly, for example via an on-idle call.
-
### Host software does not contain Python but you can write plugins
+
This class includes [Photoshop](https://github.com/shotgunsoftware/tk-photoshopcc) and [After Effects](https://github.com/shotgunsoftware/tk-aftereffects).
-There is no Python scripting, but C++ plugins can be created.
+There is no Python scripting, but C++ plugins can be created.
In this case, the strategy is often to create a plugin that contains an IPC layer and launches Qt and Python in a separate process at startup.
- Once the secondary process is running, commands are sent back and forth using the IPC layer.
- This type of host software usually means significant work to get a working engine solution.
-
- {% include info title="Tip" content="With the Photoshop and After Effects engines we actually created [a framework that handles the adobe plugin](https://github.com/shotgunsoftware/tk-framework-adobe).
- Both engine make use of the framework to communicate with the host software, and it makes it easier to build other engines for the rest of the adobe family." %}
+Once the secondary process is running, commands are sent back and forth using the IPC layer.
+This type of host software usually means significant work to get a working engine solution.
+{% include info title="Tip" content="With the Photoshop and After Effects engines we actually created [a framework that handles the Adobe plugin](https://github.com/shotgunsoftware/tk-framework-adobe).
+ Both engines make use of the framework to communicate with the host software, and it makes it easier to build other engines for the rest of the Adobe family." %}
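As a rough, self-contained sketch of the IPC pattern (everything here is illustrative: a plain JSON-over-TCP exchange stands in for the plugin's IPC layer, with a thread playing the part of the separate Qt/Python process):

```python
import json
import socket
import threading

def engine_side(server):
    # Stand-in for the separate Qt/Python process: receive one command
    # and send back a JSON reply.
    conn, _ = server.accept()
    with conn:
        request = json.loads(conn.recv(4096).decode())
        # A real engine would dispatch to a registered Toolkit command here.
        reply = {"status": "ok", "echo": request["command"]}
        conn.sendall(json.dumps(reply).encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))  # pick any free port
server.listen(1)
port = server.getsockname()[1]
worker = threading.Thread(target=engine_side, args=(server,))
worker.start()

# Stand-in for the C++ plugin: send a command and read the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(json.dumps({"command": "show_publish_dialog"}).encode())
response = json.loads(client.recv(4096).decode())
client.close()
worker.join()
server.close()
print(response["echo"])  # show_publish_dialog
```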
### Host software provides no scriptability at all
-If the host software cannot be accessed programmatically in any way, it is not possible to create an engine for it.
+If the host software cannot be accessed programmatically in any way, it is not possible to create an engine for it.
## Qt window parenting
-Special attention typically needs to be paid to window parenting.
-Usually, the PySide windows will not have a natural parent in the widget hierarchy and this needs to be explicitly called out.
+
+Special attention typically needs to be paid to window parenting.
+Usually, the PySide windows will not have a natural parent in the widget hierarchy and this needs to be explicitly called out.
Window parenting is important for a consistent experience; without it, Toolkit app windows may appear behind the main window, which can be quite confusing.
## Startup behavior
-The engine is also responsible for handling how the software is launched and its integration is started.
+
+The engine is also responsible for handling how the software is launched and its integration is started.
This logic will be called when the `tk-multi-launchapp` tries to launch the software with your engine.
You can read more about how this is set up in the [core documentation](https://developer.shotgridsoftware.com/tk-core/initializing.html?highlight=create_engine_launcher#launching-software).
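As a sketch of what that hand-off looks like with tk-core's launcher API — assuming an `Sgtk` instance `tk` and a `context` are already available, and with `tk-myhost` and the executable path as placeholders for your engine and host software:

```python
import sgtk

# Sketch of tk-multi-launchapp-style startup. "tk-myhost" and the executable
# path below are placeholders, not a real engine.
launcher = sgtk.platform.create_engine_launcher(tk, context, "tk-myhost")

# Ask the engine's SoftwareLauncher implementation which installs it can
# detect on this machine.
software_versions = launcher.scan_software()

# Build the path, arguments and environment needed to start the host with
# the integration bootstrapped inside it.
launch_info = launcher.prepare_launch("/opt/myhost/bin/myhost", args="")
# launch_info.path, launch_info.args and launch_info.environment are then
# used to spawn the host process.
```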
## Host software wish list
-The following host software traits can be taken advantage of by Toolkit engines.
+
+The following host software traits can be taken advantage of by Toolkit engines.
The more of them that are supported, the better the engine experience will be!
- Built-in Python interpreter, Qt, and PySide!
@@ -124,12 +129,13 @@ The more of them that are supported, the better the engine experience will be!
- API commands that wrap filesystem interaction: Open, Save, Save As, Add reference, etc.
- API commands to add UI elements
- - Add a custom Qt widget as a panel to the app (ideally via a bundled PySide)
- - Add custom Menu / Context Menu items
- - Custom nodes in node-based packages (with an easy way to integrate a custom UI for interaction)
- - Introspection to get at things like selected items/nodes
+ - Add a custom Qt widget as a panel to the app (ideally via a bundled PySide)
+ - Add custom Menu / Context Menu items
+ - Custom nodes in node-based packages (with an easy way to integrate a custom UI for interaction)
+ - Introspection to get at things like selected items/nodes
+
- Flexible event system
- - "Interesting" events can trigger custom code
+ - "Interesting" events can trigger custom code
- Support for running UI asynchronously
- - For example, pop up a dialog when a custom menu item is triggered that does not lock up the interface
- - Provide a handle to a top-level window so custom UI windows can be parented correctly
\ No newline at end of file
+ - For example, pop up a dialog when a custom menu item is triggered that does not lock up the interface
+ - Provide a handle to a top-level window so custom UI windows can be parented correctly
diff --git a/docs/en/guides/pipeline-integrations/development/sgtk-developer-framework.md b/docs/en/guides/pipeline-integrations/development/sgtk-developer-framework.md
index a83360be3..5d49657a9 100644
--- a/docs/en/guides/pipeline-integrations/development/sgtk-developer-framework.md
+++ b/docs/en/guides/pipeline-integrations/development/sgtk-developer-framework.md
@@ -8,9 +8,11 @@ lang: en
# Developing your own framework
## Introduction
+
This document outlines some of the technical details relating to Toolkit framework development.
Table of Contents:
+
- [What is a Toolkit framework?](#what-is-a-toolkit-framework)
- [Pre-made {% include product %} frameworks](#pre-made-shotgun-frameworks)
- [Creating a Framework](#creating-a-framework)
@@ -18,11 +20,11 @@ Table of Contents:
## What is a Toolkit framework?
-Toolkit [frameworks](https://developer.shotgridsoftware.com/tk-core/platform.html?highlight=hide_tk_title_bar#frameworks) are very similar to Toolkit apps.
+Toolkit [frameworks](https://developer.shotgridsoftware.com/tk-core/platform.html?highlight=hide_tk_title_bar#frameworks) are very similar to Toolkit apps.
The main difference is that a framework is not something you would run on its own.
Instead, you would import a framework into your app or engine. It allows you to keep reusable logic separate so that it can be used in multiple engines and apps.
An example of a framework would be a library of reusable UI components that might contain a playlist picker component.
-You could then import that framework in your app, and plug in the playlist picker component to your main app UI.
+You could then import that framework in your app, and plug in the playlist picker component to your main app UI.
## Pre-made {% include product %} frameworks
@@ -42,6 +44,7 @@ To access them, you would import the framework, and then use the [`import_module
The API docs contain examples on how to [import frameworks](https://developer.shotgridsoftware.com/tk-core/platform.html?highlight=hide_tk_title_bar#frameworks).
## Using Frameworks from hooks
+
It can be useful to create a framework so that you can share some common logic across hooks.
A framework can be used from an app's or another framework's hooks, even if the app or framework doesn't explicitly require it in its manifest file, via the
[`Hook.load_framework()`](https://developer.shotgridsoftware.com/tk-core/core.html#sgtk.Hook.load_framework) method. Note that frameworks can't be used in core hooks, even with this method.
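For example, from inside a hook method you might load a framework on the fly. The instance name (including its version token) is illustrative here and must match an entry in your environment configuration:

```python
# Inside a hook method (self is the Hook instance).
# "tk-framework-shotgunutils_v5.x.x" is an illustrative instance name; use
# whatever name and version token your environment configuration defines.
shotgun_utils = self.load_framework("tk-framework-shotgunutils_v5.x.x")

# Use import_module() to access the framework's bundled python modules.
settings = shotgun_utils.import_module("settings")
```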
diff --git a/docs/en/guides/pipeline-integrations/development/sgtk-developer-generating-path-and-publish.md b/docs/en/guides/pipeline-integrations/development/sgtk-developer-generating-path-and-publish.md
index 486122db2..32568412e 100644
--- a/docs/en/guides/pipeline-integrations/development/sgtk-developer-generating-path-and-publish.md
+++ b/docs/en/guides/pipeline-integrations/development/sgtk-developer-generating-path-and-publish.md
@@ -13,7 +13,7 @@ The purpose of this guide is to walk through a basic example of how you can use
### Requirements
-- An understanding of Python programming fundamentals.
+- An understanding of Python programming fundamentals.
- A project with an advanced configuration. If you haven't set up a configuration before you can follow the ["Getting started with configurations"](need link) guide.
### Steps
@@ -29,7 +29,7 @@ The purpose of this guide is to walk through a basic example of how you can use
## Part 1: Importing sgtk
-The Toolkit API is contained in a python package called `sgtk`.
+The Toolkit API is contained in a python package called `sgtk`.
Each Toolkit configuration has its own copy of the API, which comes as part of [`tk-core`](https://developer.shotgridsoftware.com/tk-core/overview.html).
To use the API on a project's configuration, you must import the `sgtk` package from the configuration you wish to work with; importing it from a different configuration will lead to errors.
@@ -44,7 +44,8 @@ When running your code in an environment where {% include product %} is already
```python
import sgtk
-```
+```
+
If you want to use the API outside of a {% include product %} integration, for example, if you're testing it in your favorite IDE, then you will need to set the path to the API first:
```python
@@ -68,27 +69,27 @@ As the API documentation states, you don't create an instance of `Sgtk` directly
1. You can get an `Sgtk` instance from the current engine, if you are running the Python code within an environment where the {% include product %} integrations are already running, (such as the Maya Python console, if Maya was launched from {% include product %}.)
The `Engine.sgtk` property holds the engine's `Sgtk` instance.
So for example, in Maya, you could run the following:
-
- ```python
- # Get the engine that is currently running.
- current_engine = sgtk.platform.current_engine()
-
- # Grab the already created Sgtk instance from the current engine.
- tk = current_engine.sgtk
- ```
-
- You can access the `Sgtk` instance through the [`Engine.sgtk`](https://developer.shotgridsoftware.com/tk-core/platform.html#sgtk.platform.Engine.sgtk) property.
-
- *Note: The `Engine.sgtk` property should not be confused with or considered the same as the `sgtk` package that you imported in part 1.*
-
-2. [`sgtk.sgtk_from_entity()`](https://developer.shotgridsoftware.com/tk-core/initializing.html#sgtk.sgtk_from_entity) -
- If you are running in an environment where an engine hasn't already been started, you can use this method to get an `Sgtk` instance based upon an entity id.
- The entity whose id you are supplying must belong to the project that the `sgtk` API was imported from.
- *This doesn't work with distributed configs, please see the [bootstrapping guide](sgtk-developer-bootstrapping.md) for more details.*
-
+
+ ```python
+ # Get the engine that is currently running.
+ current_engine = sgtk.platform.current_engine()
+
+ # Grab the already created Sgtk instance from the current engine.
+ tk = current_engine.sgtk
+ ```
+
+ You can access the `Sgtk` instance through the [`Engine.sgtk`](https://developer.shotgridsoftware.com/tk-core/platform.html#sgtk.platform.Engine.sgtk) property.
+
+ _Note: The `Engine.sgtk` property should not be confused with or considered the same as the `sgtk` package that you imported in part 1._
+
+2. [`sgtk.sgtk_from_entity()`](https://developer.shotgridsoftware.com/tk-core/initializing.html#sgtk.sgtk_from_entity) -
+ If you are running in an environment where an engine hasn't already been started, you can use this method to get an `Sgtk` instance based upon an entity id.
+ The entity whose id you are supplying must belong to the project that the `sgtk` API was imported from.
+ _This doesn't work with distributed configs, please see the [bootstrapping guide](sgtk-developer-bootstrapping.md) for more details._
+
3. [`sgtk.sgtk_from_path()`](https://developer.shotgridsoftware.com/tk-core/initializing.html#sgtk.sgtk_from_path) -
- much like the `sgtk_from_entity()` except this will accept a path to a configuration or a path to or inside the project root folder, for example, a work file or shot folder.
- *This doesn't work with distributed configs, please see the [bootstrapping guide](sgtk-developer-bootstrapping.md) for more details.*
+ much like the `sgtk_from_entity()` except this will accept a path to a configuration or a path to or inside the project root folder, for example, a work file or shot folder.
+ _This doesn't work with distributed configs, please see the [bootstrapping guide](sgtk-developer-bootstrapping.md) for more details._
Throughout this guide we will assume you are running this code in an environment where an engine has already been started, so we'll use option 1.
Also you will store the `Sgtk` class instance in a variable called `tk`.
@@ -135,7 +136,8 @@ Instead of creating a new context however, you could [grab the current context f
```python
context = current_engine.context
```
-Since you will be using the context to help resolve a file path for a Task on a Shot in later steps, you need to be certain the context contains the relevant information.
+
+Since you will be using the context to help resolve a file path for a Task on a Shot in later steps, you need to be certain the context contains the relevant information.
If your code was running as part of a Toolkit app, and your app was configured to only run in a shot_step environment, then you could safely assume you would get an appropriate current context.
However, to avoid ambiguity in this guide, you will create a context explicitly from a `Task` (which must belong to a `Shot`), using `Sgtk.context_from_entity()`.
@@ -163,11 +165,11 @@ print(repr(context))
```
-Even though you only provided the task, it should have filled in the other related details.
+Even though you only provided the task, it should have filled in the other related details.
The publish script should now look like this:
-```python
+```python
import sgtk
# Get the engine instance that is currently running.
@@ -176,7 +178,7 @@ current_engine = sgtk.platform.current_engine()
# Grab the pre-created Sgtk instance from the current engine.
tk = current_engine.sgtk
-# Get a context object from a Task. This Task must belong to a Shot for the future steps to work.
+# Get a context object from a Task. This Task must belong to a Shot for the future steps to work.
context = tk.context_from_entity("Task", 13155)
```
@@ -195,6 +197,7 @@ You will use the [Sgtk.create_filesystem_structure()](https://developer.shotgrid
```python
tk.create_filesystem_structure("Task", context.task["id"])
```
+
You can use the context object to get the task id to generate the folders.
Your code should now look like this:
@@ -208,7 +211,7 @@ current_engine = sgtk.platform.current_engine()
# Grab the pre-created Sgtk instance from the current engine.
tk = current_engine.sgtk
-# Get a context object from a Task, this Task must belong to a Shot for the future steps to work.
+# Get a context object from a Task, this Task must belong to a Shot for the future steps to work.
context = tk.context_from_entity("Task", 13155)
# Create the required folders based upon the task.
@@ -224,7 +227,7 @@ You've now completed all the preparation steps and are ready to move onto genera
Whenever you need to know where a file should be placed or found in Toolkit you can use the templates to resolve an absolute path on disk.
[Templates](https://developer.shotgridsoftware.com/tk-core/core.html#templates) are essentially tokenized strings that, when you apply the context and other data to them, resolve into filesystem paths.
-They are customizable via your [project's pipeline configuration](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference#Part%202%20-%20Configuring%20File%20System%20Templates), and their purpose is to provide a standardized method for working out where files should be stored.
+They are customizable via your [project's pipeline configuration](https://support.shotgunsoftware.com/hc/en-us/articles/219039868-Integrations-File-System-Reference#Part%202%20-%20Configuring%20File%20System%20Templates), and their purpose is to provide a standardized method for working out where files should be stored.
The first thing you need to do is get a template instance for the path you wish to generate.
Using the `Sgtk` instance you created, you can access the desired `Template` instance via the `Sgtk.templates` attribute, which is a dictionary where the keys are the template names, and the values are [`Template`](https://developer.shotgridsoftware.com/tk-core/core.html#template) instances.
@@ -233,11 +236,11 @@ Using the `Sgtk` instance you created, you can access the desired `Template` ins
template = tk.templates["maya_shot_publish"]
```
-In this example, you will use the `maya_shot_publish` template.
+In this example, you will use the `maya_shot_publish` template.
In the [Default Configuration](https://github.com/shotgunsoftware/tk-config-default2/blob/v1.2.12/core/templates.yml#L305-L306) the unresolved template path looks like this:
```yaml
-'sequences/{Sequence}/{Shot}/{Step}/work/maya/{name}.v{version}.{maya_extension}'
+"sequences/{Sequence}/{Shot}/{Step}/work/maya/{name}.v{version}.{maya_extension}"
```
The template is made up of keys that you will need to resolve into actual values.
@@ -248,7 +251,8 @@ fields = context.as_template_fields(template)
>> {'Sequence': 'seq01_chase', 'Shot': 'shot01_running_away', 'Step': 'comp'}
```
-The [`Context.as_template_fields()`](https://developer.shotgridsoftware.com/tk-core/core.html#sgtk.Context.as_template_fields) method gives you a dictionary with the correct values to resolve the template keys.
+
+The [`Context.as_template_fields()`](https://developer.shotgridsoftware.com/tk-core/core.html#sgtk.Context.as_template_fields) method gives you a dictionary with the correct values to resolve the template keys.
However, it hasn't provided values for all the keys. The `name`, `version` and `maya_extension` are still missing.
The `maya_extension` key [defines a default value](https://github.com/shotgunsoftware/tk-config-default2/blob/v1.2.8/core/templates.yml#L139) in the template keys section so you don't need to provide a value for that, although you could if you wanted a value other than the default.
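Conceptually, resolving a template is just substituting each `{key}` token with a field value. The snippet below is a simplified stand-in for what `Template.apply_fields()` does — the real class also validates values, applies per-key formatting such as zero-padded version numbers, and supports optional sections — using the template and field values shown above (`name` and the other missing values are filled with example data):

```python
import re

def apply_fields(template_str, fields):
    """Substitute each {key} token in the template with its field value."""
    def replace(match):
        return str(fields[match.group(1)])
    return re.sub(r"\{(\w+)\}", replace, template_str)

template_str = "sequences/{Sequence}/{Shot}/{Step}/work/maya/{name}.v{version}.{maya_extension}"
fields = {
    "Sequence": "seq01_chase",
    "Shot": "shot01_running_away",
    "Step": "comp",
    # Example values for the keys the context couldn't provide:
    "name": "scene",
    "version": "001",
    "maya_extension": "ma",
}
path = apply_fields(template_str, fields)
# path is "sequences/seq01_chase/shot01_running_away/comp/work/maya/scene.v001.ma"
```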
@@ -282,12 +286,12 @@ Make sure to only create the folders for the directory and not the full file pat
You can import the [`os`](https://docs.python.org/3/library/os.html) module and run [`os.path.dirname(publish_path)`](https://docs.python.org/3/library/os.path.html#os.path.dirname) to extract the folder portion of the full file path.
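As a sketch, with a temporary directory standing in for the project root:

```python
import os
import tempfile

# A temporary directory stands in for the project root in this sketch.
project_root = tempfile.mkdtemp()
publish_path = os.path.join(project_root, "comp", "work", "maya", "scene.v001.ma")

# Create only the directory portion of the resolved path, not the file itself.
publish_folder = os.path.dirname(publish_path)
os.makedirs(publish_folder, exist_ok=True)  # creates intermediate folders too
```

Toolkit also ships [`sgtk.util.filesystem.ensure_folder_exists()`](https://developer.shotgridsoftware.com/tk-core/utils.html) for the same purpose.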
### Creating or copying a file using the path
-At this point you have a path, and you could use this, for example, to tell Maya to save a file there, or perhaps copy the file from a different location.
+
+At this point you have a path, and you could use this, for example, to tell Maya to save a file there, or perhaps copy the file from a different location.
It's not important for the sake of this guide that you implement any behavior that actually creates a file on disk in that location.
-You can still publish the path even if there is no file there.
+You can still publish the path even if there is no file there.
If you want, you can use [`sgtk.util.filesystem.touch_file()`](https://developer.shotgridsoftware.com/tk-core/utils.html?#sgtk.util.filesystem.touch_file) to have Toolkit create an empty file on disk.
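In plain Python, that boils down to roughly the following (a simplified equivalent, not Toolkit's actual implementation):

```python
import os
import tempfile

def touch_file(path):
    """Create an empty file if it doesn't exist; update its timestamp if it does."""
    with open(path, "a"):
        os.utime(path, None)

# Example: touch a file in a temporary directory.
publish_path = os.path.join(tempfile.mkdtemp(), "scene.v001.ma")
touch_file(publish_path)
```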
-
### Bringing it all together so far
```python
@@ -300,7 +304,7 @@ current_engine = sgtk.platform.current_engine()
# Grab the pre-created Sgtk instance from the current engine.
tk = current_engine.sgtk
-# Get a context object from a Task. This Task must belong to a Shot for the future steps to work.
+# Get a context object from a Task. This Task must belong to a Shot for the future steps to work.
context = tk.context_from_entity("Task", 13155)
# Create the required folders based upon the task.
@@ -330,11 +334,11 @@ The next step is to dynamically work out the next version number rather than har
## Part 6: Finding existing files and getting the latest version number
-There two methods you could use here.
+There are two methods you could use here.
1. Since in this particular example you are resolving a publish file, you could use the [{% include product %} API](https://developer.shotgridsoftware.com/python-api/) to query for the next available version number on `PublishedFile` entities.
-2. You can scan the files on disk and work out what versions already exist, and extract the next version number.
-This is helpful if the files you're working with aren't tracked in {% include product %} (such as work files).
+2. You can scan the files on disk and work out what versions already exist, and extract the next version number.
+ This is helpful if the files you're working with aren't tracked in {% include product %} (such as work files).
While the first option would probably be most suitable for the example in this guide, both approaches have their uses, so we'll cover them both.
@@ -358,7 +362,7 @@ fields["version"] = r["summaries"]["version_number"] + 1
Using the Toolkit API you can gather a list of existing files, extract the template field values from them, and then figure out the next version.
-In the example below, it's gathering the latest version from the work file template.
+In the example below, it's gathering the latest version from the work file template.
Assuming the work file template and publish file template have the same fields, you could call the method below twice with the same fields to work out the highest publish and work file versions, and decide on the next version using a combination of the two.
```python
@@ -380,11 +384,11 @@ def get_next_version_number(tk, template_name, fields):
# extract the values from the path so we can read the version.
path_fields = template.get_fields(a_file)
versions.append(path_fields["version"])
-
+
# find the highest version in the list and add one.
return max(versions) + 1
-# Set the version number in the fields dictionary, that will be used to resolve the template into a path.
+# Set the version number in the fields dictionary, that will be used to resolve the template into a path.
fields["version"] = get_next_version_number(tk, "maya_shot_work", fields)
```
@@ -432,7 +436,7 @@ current_engine = sgtk.platform.current_engine()
# Grab the pre-created Sgtk instance from the current engine.
tk = current_engine.sgtk
-# Get a context object from a Task. This Task must belong to a Shot for the future steps to work.
+# Get a context object from a Task. This Task must belong to a Shot for the future steps to work.
context = tk.context_from_entity("Task", 13155)
# Create the required folders based upon the task
@@ -503,4 +507,4 @@ sgtk.util.register_publish(tk,
This guide has hopefully left you with a foundational understanding of how to get started with the Toolkit API.
There are of course many other uses for the API, so we recommend reading through the [tk-core API](https://developer.shotgridsoftware.com/tk-core/index.html) for more information.
-Also our [forums](https://community.shotgridsoftware.com/c/pipeline/6) are an excellent place to discuss API questions and get answers, and even leave feedback for us about the this guide.
\ No newline at end of file
+Also, our [forums](https://community.shotgridsoftware.com/c/pipeline/6) are an excellent place to discuss API questions and get answers, and even leave feedback for us about this guide.
diff --git a/docs/en/guides/pipeline-integrations/development/sgtk-developer-guide.md b/docs/en/guides/pipeline-integrations/development/sgtk-developer-guide.md
index 5ed2992f6..71dbac656 100644
--- a/docs/en/guides/pipeline-integrations/development/sgtk-developer-guide.md
+++ b/docs/en/guides/pipeline-integrations/development/sgtk-developer-guide.md
@@ -9,7 +9,7 @@ lang: en
## Panels
-See [https://github.com/shotgunsoftware/tk-multi-shotgunpanel/tree/master/hooks](https://github.com/shotgunsoftware/tk-multi-shotgunpanel/tree/master/hooks) for examples of panel actions.
+See [https://github.com/shotgunsoftware/tk-multi-shotgunpanel/tree/master/hooks](https://github.com/shotgunsoftware/tk-multi-shotgunpanel/tree/master/hooks) for examples of panel actions.
### Configuring what is being displayed
@@ -21,11 +21,9 @@ The hook supports a simple templating language, allowing for great flexibility.
The template language works in the following way:
-- {% include product %} values are enclosed in `{brackets}`, for example `Description: {description}`. When this template is rendered, the `{description}` part will be replaced with the description field value.
-
-- If you want an optional pre- or post-fix for a value which is only shown if the value is not empty, you can use the syntax `{[Prefix]sg_field[suffix]}`. The template `{[Start: ]start_date} {[End: ]end_date}` will render `Start: 12 July 2009 End: 14 July 2012` if both values are populated but `Start: 12 July 2009` if end date isn't set.
-
-- You can define fallbacks in the case some values are not set. For {% include product %} Versions, the `artist` fields takes precedence over the `created_by` field in order to support a workflow where a producer submits versions on behalf of an artist. In this case, the Version will be created by the producer but the `artist` field will be set to the artist. This, however, is not always the case - in some cases, artist is left blank in pipelines where artists submit their own work. When displaying versions, it is therefore useful to be able to check the `artist` field first, and in case this isn't set, fall back on the `created_by` field. This is done using the `{field1|field2}` syntax, for example: `Created By: {artist|created_by}`. You can combine this with optional fields too, e.g. `{[Created By: ]artist|created_by}`.
+- {% include product %} values are enclosed in `{brackets}`, for example `Description: {description}`. When this template is rendered, the `{description}` part will be replaced with the description field value.
+- If you want an optional pre- or post-fix for a value which is only shown if the value is not empty, you can use the syntax `{[Prefix]sg_field[suffix]}`. The template `{[Start: ]start_date} {[End: ]end_date}` will render `Start: 12 July 2009 End: 14 July 2012` if both values are populated but `Start: 12 July 2009` if end date isn't set.
+- You can define fallbacks in case some values are not set. For {% include product %} Versions, the `artist` field takes precedence over the `created_by` field in order to support a workflow where a producer submits versions on behalf of an artist. In this case, the Version will be created by the producer but the `artist` field will be set to the artist. This, however, is not always the case - in some cases, artist is left blank in pipelines where artists submit their own work. When displaying versions, it is therefore useful to be able to check the `artist` field first, and in case this isn't set, fall back on the `created_by` field. This is done using the `{field1|field2}` syntax, for example: `Created By: {artist|created_by}`. You can combine this with optional fields too, e.g. `{[Created By: ]artist|created_by}`.
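To make these rules concrete, here is a toy resolver for this template syntax. It is not the hook's actual implementation — just an illustration of the plain, optional, and fallback forms described above:

```python
import re

def render(template, values):
    """Resolve {field}, {field1|field2} and {[Prefix]field[suffix]} tokens."""
    token = re.compile(r"\{(?:\[([^\]]*)\])?([\w|]+)(?:\[([^\]]*)\])?\}")

    def replace(match):
        prefix = match.group(1) or ""
        suffix = match.group(3) or ""
        # Try each fallback field in turn until one has a value.
        for name in match.group(2).split("|"):
            value = values.get(name)
            if value:
                return "%s%s%s" % (prefix, value, suffix)
        return ""  # no value set: drop the prefix and suffix too

    return token.sub(replace, template)

result = render("Created By: {artist|created_by}",
                {"artist": "", "created_by": "producer"})
```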
This hook contains the following methods:
@@ -60,25 +58,25 @@ The `get_all_fields()` methods returns a list of fields to display for a given e
Actions are little snippets of code that operate on a piece of {% include product %} data. Examples include:
-- An action that launches RV for a given {% include product %} Version
-- An action that allows a user to assign herself to a given Task
-- An action that loads a {% include product %} publish into Maya as a Maya reference.
+- An action that launches RV for a given {% include product %} Version
+- An action that allows a user to assign herself to a given Task
+- An action that loads a {% include product %} publish into Maya as a Maya reference.
-The actual payload of an action is defined in an _action hook_. Once you have defined the action logic, you can then map that action to {% include product %} objects in the app configuration. These action mappings may for example look like this:
+The actual payload of an action is defined in an _action hook_. Once you have defined the action logic, you can then map that action to {% include product %} objects in the app configuration. These action mappings may for example look like this:
```yaml
action_mappings:
PublishedFile:
- - actions: [reference, import]
- filters: {published_file_type: Maya Scene}
- - actions: [texture_node]
- filters: {published_file_type: Rendered Image}
+ - actions: [reference, import]
+ filters: { published_file_type: Maya Scene }
+ - actions: [texture_node]
+ filters: { published_file_type: Rendered Image }
Task:
- - actions: [assign_task]
- filters: {}
+ - actions: [assign_task]
+ filters: {}
Version:
- - actions: [play_in_rv]
- filters: {}
+ - actions: [play_in_rv]
+ filters: {}
```
In the above example, we use the actions `reference`, `import`, `texture_node`, `assign_task` and `play_in_rv`. We then map the actions to various {% include product %} objects and conditions. For example, we request the `import` action to appear for all publishes of type Maya Scene.
@@ -92,7 +90,7 @@ For each application that the panel supports, there is an actions hook which imp
The panel uses Toolkit's second generation hooks interface, allowing for greater flexibility. This hook format uses an improved syntax. You can see this in the default configuration settings, looking something like this:
```yaml
-actions_hook: '{self}/tk-maya_actions.py'
+actions_hook: "{self}/tk-maya_actions.py"
```
The `{self}` keyword tells Toolkit to look in the app's `hooks` folder for the hook. If you are overriding this hook with your implementation, change the value to `{config}/panel/maya_actions.py`. This will tell Toolkit to use a hook called `hooks/panel/maya_actions.py` in your configuration folder.
@@ -116,7 +114,7 @@ class MyActions(HookBaseClass):
def generate_actions(self, sg_data, actions, ui_area):
"""
Returns a list of action instances for a particular object.
- The data returned from this hook will be used to populate the
+ The data returned from this hook will be used to populate the
actions menu.
The mapping between {% include product %} objects and actions are kept in a different place
@@ -126,12 +124,12 @@ class MyActions(HookBaseClass):
This method needs to return detailed data for those actions, in the form of a list
of dictionaries, each with name, params, caption and description keys.
- Because you are operating on a particular object, you may tailor the output
+ Because you are operating on a particular object, you may tailor the output
(caption, tooltip etc) to contain custom information suitable for this publish.
- The ui_area parameter is a string and indicates where the publish is to be shown.
+ The ui_area parameter is a string and indicates where the publish is to be shown.
- - If it will be shown in the main browsing area, "main" is passed.
+ - If it will be shown in the main browsing area, "main" is passed.
- If it will be shown in the details area, "details" is passed.
:param sg_data: {% include product %} data dictionary with all the standard publish fields.
@@ -175,11 +173,11 @@ We could then bind this new action to a set of publish types in the configuratio
```yaml
action_mappings:
PublishedFile:
- - actions: [reference, import, my_new_action]
- filters: {published_file_type: Maya Scene}
+ - actions: [reference, import, my_new_action]
+ filters: { published_file_type: Maya Scene }
Version:
- - actions: [play_in_rv]
- filters: {}
+ - actions: [play_in_rv]
+ filters: {}
```
By deriving from the hook as shown above, the custom hook code only needs to contain the added business logic, which makes it easier to maintain and update.
@@ -188,7 +186,7 @@ By deriving from the hook as shown above, the custom hook code only need to cont
The Publish app is highly customizable by way of hooks that control how items are presented to artists for publishing and how those items are then processed.
-The full developer documentation for the publisher app can now be found on the [Toolkit Developer Site](http://developer.shotgridsoftware.com/tk-multi-publish2).
+The full developer documentation for the publisher app can now be found on the [Toolkit Developer Site](http://developer.shotgridsoftware.com/tk-multi-publish2).
For more information on how to use the Publish app, see the [User Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067513-Integrations-Developer-Guide#User_guide_link). If you are looking for more information about the first generation Publisher, please visit the [classic Publisher docs](https://support.shotgunsoftware.com/hc/en-us/articles/115000067513-Integrations-Developer-Guide#classic_publisher_link).
@@ -202,4 +200,4 @@ See [https://github.com/shotgunsoftware/tk-multi-loader2/tree/master/hooks](http
Did we mention that you can write your own Apps? Each Engine exposes a consistent interface based on Python and PySide, so you can write a single App that works in Nuke, Photoshop, and 3dsmax alike. With the Core API functionality, there is no need to build a big pipeline stack for the studio - instead, focus dev resources on solving production problems. Reusing tools between projects is easy with Toolkit - if file naming conventions or other requirements change, simply reconfigure the app. Roll out tools safely via Toolkit's built-in Git and GitHub support and quickly hot-load your code when doing development. Work in your own dev sandbox and invite TDs and early adopters to test your code without having to roll it out to everyone on the project.
-
\ No newline at end of file
+
diff --git a/docs/en/guides/pipeline-integrations/development/sgtk-how-to-submit-fixes.md b/docs/en/guides/pipeline-integrations/development/sgtk-how-to-submit-fixes.md
index 06c4a1442..9e463cef3 100644
--- a/docs/en/guides/pipeline-integrations/development/sgtk-how-to-submit-fixes.md
+++ b/docs/en/guides/pipeline-integrations/development/sgtk-how-to-submit-fixes.md
@@ -29,9 +29,9 @@ Make sure you add detailed comments about what it is you're doing any why you're
Remember that other users will have a wide variety of environments and variables in play that may not match what you have at your studio. Toolkit tries to minimize the impact of these differences, but there are always things that could be different in other users' environments. Some examples:
-- Will your code work the same on OS X, Windows, and Linux?
-- Will it work in all supported versions of a Software?
-- Will it work the same whether the user launches from a terminal, SG Desktop, {% include product %}, or perhaps their own custom app?
+- Will your code work the same on OS X, Windows, and Linux?
+- Will it work in all supported versions of the software?
+- Will it work the same whether the user launches from a terminal, SG Desktop, {% include product %}, or perhaps their own custom app?
## Create a Pull Request
diff --git a/docs/en/guides/pipeline-integrations/getting-started/advanced_config.md b/docs/en/guides/pipeline-integrations/getting-started/advanced_config.md
index b732d32e0..543585b75 100644
--- a/docs/en/guides/pipeline-integrations/getting-started/advanced_config.md
+++ b/docs/en/guides/pipeline-integrations/getting-started/advanced_config.md
@@ -10,52 +10,52 @@ lang: en
After completing this guide, you will have the knowledge fundamental to adding your project information to a configuration, associating that configuration with your project, and preparing your pipeline configuration to be customized.
## About the guide
-
-This guide describes how to use the **Advanced Project Setup Wizard** in {% include product %} Desktop to create a configuration for a digital content creation pipeline. You will quickly become acquainted with the configuration tools, learn how to use the Wizard, and be presented with opportunities to learn more. Using the Wizard creates a pipeline configuration for your project and prepares it to be edited and extended to support each step in the pipeline. The configuration controls aspects of the UI, {% include product %} Apps, and various tools necessary to support a production pipeline. Using the Wizard is just one way to extend a configuration. Along with adding specific settings for each step in a pipeline, it will add integrations with software applications. In this guide, we'll be basing our project's configuration on Toolkit's Default Configuration.
-This guide assumes the user:
+This guide describes how to use the **Advanced Project Setup Wizard** in {% include product %} Desktop to create a configuration for a digital content creation pipeline. You will quickly become acquainted with the configuration tools, learn how to use the Wizard, and be presented with opportunities to learn more. Using the Wizard creates a pipeline configuration for your project and prepares it to be edited and extended to support each step in the pipeline. The configuration controls aspects of the UI, {% include product %} Apps, and various tools necessary to support a production pipeline. Using the Wizard is just one way to extend a configuration. Along with adding specific settings for each step in a pipeline, it will add integrations with software applications. In this guide, we'll be basing our project's configuration on Toolkit's Default Configuration.
+
+This guide assumes the user:
1. Has never used the Advanced Project Setup Wizard
2. Has some basic knowledge of how to use {% include product %}
3. Is new to {% include product %} Toolkit
### Using this document
-
+
To use this guide and create a customizable configuration for your project, the following is required:
-1. An active {% include product %} site. You can [register for {% include product %} here](https://www.shotgridsoftware.com/signup/?utm_source=autodesk.com&utm_medium=referral&utm_campaign=creative-project-management) and get a 30-day trial to begin exploring.
-2. {% include product %} Desktop. If Desktop is not installed, you can [begin by following this link.]( https://support.shotgunsoftware.com/hc/en-us/articles/115000068574-Integrations-user-guide#Installation%20of%20Desktop)
+1. An active {% include product %} site. You can [register for {% include product %} here](https://www.shotgridsoftware.com/signup/?utm_source=autodesk.com&utm_medium=referral&utm_campaign=creative-project-management) and get a 30-day trial to begin exploring.
+2. {% include product %} Desktop. If Desktop is not installed, you can [begin by following this link.](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574-Integrations-user-guide#Installation%20of%20Desktop)
3. Access to a filesystem where you can store project files and a pipeline configuration. On the filesystem, create a folder called `Shotgun`, with two folders, `projects` and `configs`, within it.
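The folder preparation in item 3 can be done from a terminal; a minimal sketch (the `Shotgun` root can live anywhere you have write access):

```shell
# Create the Shotgun root folder with the two subfolders used
# later in this guide: one for project data, one for pipeline
# configurations.
mkdir -p Shotgun/projects Shotgun/configs
```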
## About the Advanced Project Setup Wizard
The Advanced Project Setup Wizard in {% include product %} Desktop generates a pipeline configuration based on the Default Configuration. The Default Configuration provides a solid base to build on, complete with customizable settings, apps, and UI elements that support the pipeline process. It creates a configuration you can edit and extend to meet your project’s pipeline needs.
-The Default Configuration includes:
-* A basic filesystem schema and templates that determine where files live on disk
-* All of the supported [software integrations](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines) allowing interaction with {% include product %} and pipeline functions from directly inside the user’s software applications.
+The Default Configuration includes:
+
+- A basic filesystem schema and templates that determine where files live on disk
+- All of the supported [software integrations](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines) allowing interaction with {% include product %} and pipeline functions from directly inside the user’s software applications.
Customizations are only limited by imagination, cleverness, and programming knowledge or the ability to borrow from what others in the {% include product %} community have created.
-
-### Creating the configuration
-A configuration is required for every project. The first time a project is accessed through {% include product %} Desktop, a Basic Configuration is downloaded and configured. This Basic Configuration automatically detects the supported content creation software a user has on their system and associates the configuration with the project. Settings in the pipeline configuration govern the integrations within the supported software applications. The [Panel]( https://support.shotgunsoftware.com/hc/en-us/articles/219033098-Shotgun-Panel) app displays project information from {% include product %} and allows artists to reply to notes and view Versions without leaving their work session. The [Publisher](https://support.shotgunsoftware.com/hc/en-us/articles/219032998-Publishing-your-work) app allows artists to make their work available for others on their team, and through the [Loader](https://support.shotgunsoftware.com/hc/en-us/articles/219033078-Load-Published-Files-) app, artists can load their teammates' published files. The Basic Configuration does not include file system management setup or the development of templates for specifying how files and directories are named on disk. It also doesn’t have the plethora of Apps that are added when a Default Configuration is configured. It is a simple configuration that allows Toolkit to run without any manual editing of configuration files. The Wizard replaces the Basic Configuration with a Default Configuration. It provides more apps and software integrations to support you on your journey. While the Basic Configuration can be edited as well, it's not until you have an advanced setup that your project will be set up for customization.
+### Creating the configuration
+A configuration is required for every project. The first time a project is accessed through {% include product %} Desktop, a Basic Configuration is downloaded and configured. This Basic Configuration automatically detects the supported content creation software a user has on their system and associates the configuration with the project. Settings in the pipeline configuration govern the integrations within the supported software applications. The [Panel](https://support.shotgunsoftware.com/hc/en-us/articles/219033098-Shotgun-Panel) app displays project information from {% include product %} and allows artists to reply to notes and view Versions without leaving their work session. The [Publisher](https://support.shotgunsoftware.com/hc/en-us/articles/219032998-Publishing-your-work) app allows artists to make their work available for others on their team, and through the [Loader](https://support.shotgunsoftware.com/hc/en-us/articles/219033078-Load-Published-Files-) app, artists can load their teammates' published files. The Basic Configuration does not include file system management setup or the development of templates for specifying how files and directories are named on disk. It also doesn’t have the plethora of Apps that are added when a Default Configuration is configured. It is a simple configuration that allows Toolkit to run without any manual editing of configuration files. The Wizard replaces the Basic Configuration with a Default Configuration. It provides more apps and software integrations to support you on your journey. While the Basic Configuration can be edited as well, it's not until you have an advanced setup that your project will be set up for customization.
### Differences between the Basic and Default Configurations
-| FEATURE | BASIC CONFIGURATION | DEFAULT CONFIGURATION |
-| ------- | ------------------- | --------------------- |
-| Download | Automatically downloaded when a project is accessed | Created via Advanced Setup Wizard |
-| Accessibility | Stored in a system location | Manually editable files |
-| Updates | Automatically updated | Manually updated |
-| File System Support | No support for filesystem schema | Includes tools to support folder structure and file naming standards |
-| Software Integrations | 3ds Max, Houdini, Maya, Nuke, Photoshop, Flame | Basic + Hiero, Motionbulder, Mari |
-| Toolkit Apps | {% include product %} Panel, Publisher, Loader | Basic + Workfiles, Snap Shot, Scene breakdown, Nuke write node, Houdini Mantra node, and more |
+| FEATURE | BASIC CONFIGURATION | DEFAULT CONFIGURATION |
+| --------------------- | --------------------------------------------------- | --------------------------------------------------------------------------------------------- |
+| Download | Automatically downloaded when a project is accessed | Created via Advanced Setup Wizard |
+| Accessibility | Stored in a system location | Manually editable files |
+| Updates | Automatically updated | Manually updated |
+| File System Support | No support for filesystem schema | Includes tools to support folder structure and file naming standards |
+| Software Integrations | 3ds Max, Houdini, Maya, Nuke, Photoshop, Flame      | Basic + Hiero, MotionBuilder, Mari                                                             |
+| Toolkit Apps          | {% include product %} Panel, Publisher, Loader      | Basic + Workfiles, Snapshot, Scene Breakdown, Nuke Write Node, Houdini Mantra Node, and more   |
In this guide, you will use the Wizard in {% include product %} Desktop to generate a pipeline configuration for your project based on the Default Configuration. Generating this configuration sets you up to make the customizations necessary to support a proprietary production pipeline.
-## Begin Exercise
+## Begin Exercise
### Prepare to use a Default Configuration
@@ -77,19 +77,19 @@ In this guide, you will use the Wizard in {% include product %} Desktop to gener
### Accessing the Default Configuration
-A Basic Configuration was downloaded and configured when the project was accessed. The Publish app and supported software packages were detected and automatically added to the **Apps** pane in {% include product %} Desktop.
+A Basic Configuration was downloaded and configured when the project was accessed. The Publish app and supported software packages were detected and automatically added to the **Apps** pane in {% include product %} Desktop.
**Step 4:** Once the project is loaded, select your profile **avatar** at the bottom right of the screen. In the popup menu, select **Advanced project setup…** to initiate the Wizard.

-A dialog box will be displayed with four options and {% include product %} Default selected. At this point, you can choose to base your project's pipeline configuration on the configuration of an existing project, on a configuration in a git repository, or on a path on disk.
+A dialog box will be displayed with four options and {% include product %} Default selected. At this point, you can choose to base your project's pipeline configuration on the configuration of an existing project, on a configuration in a git repository, or on a path on disk.
For this exercise, we'll choose **{% include product %} Default**. This option will create a pipeline configuration for your project that's based on {% include product %}'s Default Configuration.

-**Step 5:** Select **Continue**.
+**Step 5:** Select **Continue**.
A dialog box will be displayed with two options and **Default** selected. At this point there’s an option to select a Legacy Default Configuration setup. This configuration setup is from an older version of {% include product %} for studios who still use that version. We will use the Default for this exercise.
@@ -103,7 +103,7 @@ A dialog box will appear displaying a drop-down menu next to the word `Storage:`

-**Step 7:** Identify where to store project data for this project. From the dropdown at the top of the dialogue box select **+ New** and type **projects** in the field.
+**Step 7:** Identify where to store project data for this project. From the dropdown at the top of the dialog box, select **+ New** and type **projects** in the field.

@@ -113,7 +113,7 @@ A dialog box will appear displaying a drop-down menu next to the word `Storage:`

-This setup allows {% include product %} to have access to only the folder you identify for storing production data. When preparing for this exercise you added a `projects/` directory within your {% include product %} root directory. The `projects/` directory is where Toolkit will store any local project related information.
+This setup allows {% include product %} to have access to only the folder you identify for storing production data. When preparing for this exercise you added a `projects/` directory within your {% include product %} root directory. The `projects/` directory is where Toolkit will store any local project related information.

@@ -125,7 +125,7 @@ The operating system path is automatically updated to identify the path where th
**Step 10:** Select **Continue**.
-### Name the Project Folder
+### Name the Project Folder
A dialog box is displayed with the name of the project populating the text field. The name is automatically populated pulling from the project information and the path is automatically updated.
@@ -133,15 +133,15 @@ A dialog box is displayed with the name of the project populating the text field
Toolkit can work in either a Distributed Setup, where the pipeline configuration is uploaded to {% include product %} and cached locally for each user, or a Centralized Setup, where users access a single configuration in a shared location on disk. For this exercise we will use a Centralized Setup. You can [learn more about Distributed Setups here](https://developer.shotgridsoftware.com/tk-core/initializing.html#distributed-configurations).
-The final step generates the appropriate folders, files, and data necessary to create the configuration specific to a project.
+The final step generates the appropriate folders, files, and data necessary to create the configuration specific to a project.

-**Step 11:** Under the appropriate operating system, select **Browse...** and navigate to the configuration folder you created when preparing for this exercise, `configs`, and enter the project name **the_other_side**. This creates the folder where the project configuration is stored. Select **Run Setup** and wait for it to complete the setup.
+**Step 11:** Under the appropriate operating system, select **Browse...** and navigate to the configuration folder you created when preparing for this exercise, `configs`, and enter the project name **the_other_side**. This creates the folder where the project configuration is stored. Select **Run Setup** and wait for it to complete the setup.

-**Step 12:** Select **Done** to display the new icons populating the project windows.
+**Step 12:** Select **Done** to display the new icons populating the project windows.

@@ -157,17 +157,17 @@ And now the real fun begins, learning all the things you can do with the Configu
## Advanced topics
-{% include product %} Toolkit provides many convenient ways to edit, clone, or take over a configuration. Extending existing configurations will save time and allow you access to all of the cool stuff that others within your network have created. You can take advantage of the vast {% include product %} [community](https://groups.google.com/a/shotgunsoftware.com/forum/?fromgroups#!forum/shotgun-dev) that may have the exact configuration you need. The {% include product %} community is a sharing community, so be kind, say thank you, and recognize the person who created the configuration that helped you get the job done. Oh, and don’t forget to give back, it’s how we help our fellow {% include product %} gurus and what makes it so special to be a part of this community!
+{% include product %} Toolkit provides many convenient ways to edit, clone, or take over a configuration. Extending existing configurations will save time and give you access to all of the cool stuff that others within your network have created. You can take advantage of the vast {% include product %} [community](https://groups.google.com/a/shotgunsoftware.com/forum/?fromgroups#!forum/shotgun-dev) that may have the exact configuration you need. The {% include product %} community is a sharing community, so be kind, say thank you, and recognize the person who created the configuration that helped you get the job done. And don't forget to give back; it's how we help our fellow {% include product %} gurus and what makes it so special to be a part of this community!
Below are some ways you can have fun with configurations.
### Using the command line to create a default configuration
-From inside any project configuration, the `tank` command lets you run administrative commands from a terminal. Each project has its own dedicated `tank` command. The `tank setup_project` command's functionality is analogous to the Advanced Setup Wizard's: it creates an editable configuration on disk for your project based either on an existing project's configuration or the Default Configuration. You can learn more about running [`tank setup_project` here](https://support.shotgunsoftware.com/hc/en-us/articles/219033178-Administering-Toolkit#setup_project), and more about the [`tank` command here](https://support.shotgunsoftware.com/hc/en-us/articles/219033178-Administering-Toolkit#Using%20the%20tank%20command).
+From inside any project configuration, the `tank` command lets you run administrative commands from a terminal. Each project has its own dedicated `tank` command. The `tank setup_project` command's functionality is analogous to the Advanced Setup Wizard's: it creates an editable configuration on disk for your project based either on an existing project's configuration or the Default Configuration. You can learn more about running [`tank setup_project` here](https://support.shotgunsoftware.com/hc/en-us/articles/219033178-Administering-Toolkit#setup_project), and more about the [`tank` command here](https://support.shotgunsoftware.com/hc/en-us/articles/219033178-Administering-Toolkit#Using%20the%20tank%20command).
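As a sketch, setting up a project from the command line might look like the following (the paths are placeholders, and the exact interactive prompts vary by Toolkit version):

```shell
# Run from inside an existing project's configuration;
# each project's config ships its own tank command.
cd /path/to/existing_project/config

# Interactively create a new, editable configuration for another
# project, based either on this project's configuration or on the
# Default Configuration (you choose when prompted).
./tank setup_project
```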
### Editing a configuration that's in production
-There will be times when you want to modify a configuration that is currently in production, but you won't want to edit it while artists are using it. With just a few commands, {% include product %} provides a way to copy an existing configuration where you can test your modifications safely before pushing them into production. This process replaces the production configuration with the new one and automatically backs up the old one.
+There will be times when you want to modify a configuration that is currently in production, but you won't want to edit it while artists are using it. With just a few commands, {% include product %} provides a way to copy an existing configuration where you can test your modifications safely before pushing them into production. This process replaces the production configuration with the new one and automatically backs up the old one.
The reasons you would want to work on a copy of a configuration are:
@@ -184,6 +184,6 @@ This guide walks through creating a **centralized configuration**: a single copy
### Working with more than one root folder
-Ideally your facility would want to be optimized for specific tasks. You can work with more than one root folder to optimize things such as video playback for dailies on one server and interactive processing on another. Toolkit allows you to work with more than one storage root in order to facilitate workflows such as these. Check out how to convert from a [single root to a multi-root configuration](../../../quick-answers/administering/convert-from-single-root-to-multi.md).
+Ideally, your facility's storage would be optimized for specific tasks, such as video playback for dailies on one server and interactive processing on another. Toolkit lets you work with more than one storage root to support workflows like these. Check out how to convert from a [single root to a multi-root configuration](../../../quick-answers/administering/convert-from-single-root-to-multi.md).
Now that you have a pipeline configuration for your project, get started on editing it! Jump into the next guide, [Editing a Pipeline Configuration](editing_app_setting.md), to learn how.
diff --git a/docs/en/guides/pipeline-integrations/getting-started/dynamic_filesystem_configuration.md b/docs/en/guides/pipeline-integrations/getting-started/dynamic_filesystem_configuration.md
index 100efc0f3..c95ca38cf 100644
--- a/docs/en/guides/pipeline-integrations/getting-started/dynamic_filesystem_configuration.md
+++ b/docs/en/guides/pipeline-integrations/getting-started/dynamic_filesystem_configuration.md
@@ -5,48 +5,48 @@ pagename: toolkit-guides-filesystem-configuration
lang: en
---
-# Dynamic filesystem configuration
+# Dynamic filesystem configuration
-In this guide, you will learn how to modify your Toolkit pipeline configuration to customize your production folder structure and file naming.
+In this guide, you will learn how to modify your Toolkit pipeline configuration to customize your production folder structure and file naming.
## About the guide
-One of the hardest things about managing a pipeline is keeping track of the myriad files that will be created. Your Toolkit pipeline automates filesystem management: by creating folders based on data in {% include product %} and a configured folder structure, and automatically writing files to the right place and with standardized naming, artists can focus on content creation. Your pipeline configuration comes with a default set of folder and file naming conventions, but productions often customize them. This guide will provide the knowledge necessary to make those customizations.
+One of the hardest things about managing a pipeline is keeping track of the myriad files that will be created. Your Toolkit pipeline automates filesystem management: by creating folders based on data in {% include product %} and a configured folder structure, and automatically writing files to the right place and with standardized naming, artists can focus on content creation. Your pipeline configuration comes with a default set of folder and file naming conventions, but productions often customize them. This guide will provide the knowledge necessary to make those customizations.
-In the Default Configuration, assets are managed in a folder structure like `asset_type/asset/pipeline_step`. In this guide, we’ll be using a custom entity called “Set” to organize them further by the production set on which each asset is used. We will first set up the custom entity in {% include product %}, then use it to manage the assets created for any given set, so that the folder structure looks like `set/asset_type/asset/pipeline_step`.
+In the Default Configuration, assets are managed in a folder structure like `asset_type/asset/pipeline_step`. In this guide, we’ll be using a custom entity called “Set” to organize them further by the production set on which each asset is used. We will first set up the custom entity in {% include product %}, then use it to manage the assets created for any given set, so that the folder structure looks like `set/asset_type/asset/pipeline_step`.
-We can demonstrate the idea behind organizing assets by set with an example: say you have a project where some scenes take place in a garage, while others take place in a dining room. With our setup, files for assets like “wrench”, “oilcan”, or “workbench” would be organized in a “garage” folder, while “plate”, “winebottle”, or “tablecloth” would be organized in a “dining_room” folder. In our example, we'll be ensuring that a juicy "filet" asset is properly placed in the dining room.
+We can demonstrate the idea behind organizing assets by set with an example: say you have a project where some scenes take place in a garage, while others take place in a dining room. With our setup, files for assets like “wrench”, “oilcan”, or “workbench” would be organized in a “garage” folder, while “plate”, “winebottle”, or “tablecloth” would be organized in a “dining_room” folder. In our example, we'll be ensuring that a juicy "filet" asset is properly placed in the dining room.
-As part of our example, we’ll also edit the filenaming templates for the project, such that Maya work files for assets will include the set in their name. The dynamically generated name of the file will distinguish files for the dining room from files used in other sets.
+As part of our example, we’ll also edit the filenaming templates for the project, such that Maya work files for assets will include the set in their name. The dynamically generated name of the file will distinguish files for the dining room from files used in other sets.
### There are three parts to this guide
-* Creating a **custom entity** in {% include product %} called “Set”, which you will use to associate with the dining room elements the artists are creating.
-* Editing the folder **schema**, enabling Toolkit to include a dynamically named folder based on the current set in the folder structure.
-* Editing the **template** used for naming asset work files, enabling Toolkit to include the name of the associated set in the file name.
+- Creating a **custom entity** in {% include product %} called “Set”, which you will use to associate with the dining room elements the artists are creating.
+- Editing the folder **schema**, enabling Toolkit to include a dynamically named folder based on the current set in the folder structure.
+- Editing the **template** used for naming asset work files, enabling Toolkit to include the name of the associated set in the file name.
+
+### Prerequisites
-### Prerequisites
-
To use this guide, the following is required:
-1. An active [{% include product %}](https://www.shotgridsoftware.com/signup/) site. You should have a project with at least one Asset created. The asset should have a Model task.
+1. An active [{% include product %}](https://www.shotgridsoftware.com/signup/) site. You should have a project with at least one Asset created. The asset should have a Model task.
2. A basic understanding of how a {% include product %} site is used to manage assets
3. [{% include product %} Desktop](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574-Integrations-user-guide#Installation%20of%20Desktop) installed on your system.
4. A cloned pipeline configuration for the identified project, or complete the [Getting started with configurations](./advanced_config.md) guide and clone the configuration created in that exercise.
-5. Basic familiarity with YAML.
+5. Basic familiarity with YAML.
6. Read and write permissions set appropriately for the filesystem where the Pipeline Configuration is stored.
7. Read and write permissions set appropriately to allow Toolkit to read and write to the production filesystem.
-8. An active subscription for Maya. Get a 30 day trial of [Maya](https://www.autodesk.com/products/maya/free-trial-dts?adobe_mc_ref=https%3A%2F%2Fwww.google.com%2F&adobe_mc_sdid=SDID%3D577C0A84DDF5D35D-50E96EA2052056FE%7CMCORGID%3D6DC7655351E5696B0A490D44%2540AdobeOrg%7CTS%3D1543444689)
+8. An active subscription for Maya. Get a 30-day trial of [Maya](https://www.autodesk.com/products/maya/free-trial-dts?adobe_mc_ref=https%3A%2F%2Fwww.google.com%2F&adobe_mc_sdid=SDID%3D577C0A84DDF5D35D-50E96EA2052056FE%7CMCORGID%3D6DC7655351E5696B0A490D44%2540AdobeOrg%7CTS%3D1543444689)
{% include info title="Note" content="This guide is based on the `tk-config-default2` pipeline configuration. If your config was modified, the location of files, folders, and blocks of YAML settings may vary from what is described here." %}
### About file schemas and templates
-The schema and templates in the Toolkit pipeline configuration allow you to take advantage of your {% include product %} data for managing production files on disk. The schema in the Default Configuration includes entities like **Shot**, **Sequence**, **Asset**, **Asset Type**, etc. Other entities like **Level**, **Episode**, **Season**, or in our case, custom entities like **Set** can be added.
+The schema and templates in the Toolkit pipeline configuration allow you to take advantage of your {% include product %} data for managing production files on disk. The schema in the Default Configuration includes entities like **Shot**, **Sequence**, **Asset**, **Asset Type**, etc. Other entities like **Level**, **Episode**, **Season**, or in our case, custom entities like **Set** can be added.
-The Toolkit platform allows you to build your folder structure dynamically by using a **schema**, a miniature version of a production folder structure that will be used as a template when building out your actual production filesystem. The schema is an explicit guide for the dynamic creation of folders, and uses YAML files to define the rules for dynamically created folders. The Default Configuration includes a pre-configured schema that supports folder creation for both asset and shot pipelines. You will be modifying the portion of the schema that supports creating the asset folder structure, `/assets///`, to add support for the new **Set** entity you’re creating.
+The Toolkit platform allows you to build your folder structure dynamically by using a **schema**, a miniature version of a production folder structure that will be used as a template when building out your actual production filesystem. The schema is an explicit guide for the dynamic creation of folders, and uses YAML files to define the rules for dynamically created folders. The Default Configuration includes a pre-configured schema that supports folder creation for both asset and shot pipelines. You will be modifying the portion of the schema that supports creating the asset folder structure, `assets/asset_type/asset/pipeline_step`, to add support for the new **Set** entity you’re creating.
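For context, each dynamic folder in the schema is described by a small YAML file. A hypothetical fragment for a folder driven by the new Set entity (the file path, entity name, and filter are illustrative, modeled on the `tk-config-default2` layout) might look like:

```yaml
# config/core/schema/project/assets/set.yml (hypothetical)
# Creates one folder per Set entity in the project,
# named after the entity's "code" field.
type: "shotgun_entity"
name: "code"
entity_type: "CustomEntity01"
filters:
  - { path: "project", relation: "is", values: ["$project"] }
```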
-**Templates** allow you to dynamically name and save files as they’re created using {% include product %} data and information from the schema structure. The Default Configuration provides a set of starter templates that you can edit to meet the needs of your pipeline.
+**Templates** allow you to dynamically name and save files as they’re created using {% include product %} data and information from the schema structure. The Default Configuration provides a set of starter templates that you can edit to meet the needs of your pipeline.
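To illustrate, templates live in `config/core/templates.yml` as a set of keys plus path definitions. A hedged sketch of a Maya asset work-file template extended with a `Set` key (the key name and path layout here are illustrative, not the exact Default Configuration contents) could look like:

```yaml
# config/core/templates.yml (fragment, hypothetical)
keys:
  Set:
    type: str
    shotgun_entity_type: CustomEntity01
    shotgun_field_name: code

paths:
  maya_asset_work:
    definition: 'assets/{Set}/{sg_asset_type}/{Asset}/{Step}/work/maya/{name}.v{version}.{maya_extension}'
```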
{% include info title="Note" content="The Basic setup for ShotGrid integrations doesn’t include filesystem management. In order to configure filesystem management for your project, your project will need an Advanced setup. The first guide, [Getting started with configurations](./advanced_config.md) goes through the Advanced setup process" %}
@@ -60,7 +60,7 @@ Customizing your schema and templates will allow you to dynamically manage the f

-Displayed is a list of entity types that are available in {% include product %}. At the top of the list in the image below are some entity types that are configured for the current {% include product %} site. Underneath these entity types are several **Custom Entities** that are not configured or enabled.
+Displayed is a list of entity types that are available in {% include product %}. At the top of the list in the image below are some entity types that are configured for the current {% include product %} site. Underneath these entity types are several **Custom Entities** that are not configured or enabled.
### Choose one of the custom entity types, configure it, and enable it.
@@ -70,7 +70,7 @@ Displayed is a list of entity types that are available in {% include product %}.

-Doing this makes that custom entity active in {% include product %} and gives it the display name *Set*. Essentially you are creating an alias for the custom entity because the system name of the entity remains `CustomEntity01`. In this example, we're using `CustomEntity01`; you might use a different custom entity.
+Doing this makes that custom entity active in {% include product %} and gives it the display name _Set_. Essentially you are creating an alias for the custom entity because the system name of the entity remains `CustomEntity01`. In this example, we're using `CustomEntity01`; you might use a different custom entity.
{% include info title="Note" content="Remember the system name of the custom entity you chose." %}
@@ -78,9 +78,9 @@ Doing this makes that custom entity active in {% include product %} and gives it
Adding a data field to the Asset entity enables us to link assets to the new entity. The assets the artists create for the dining room will be associated with the **Dining Room** set entity.
-**Step 3:** Select the **Projects** dropdown at the top of the page to open the project you want to use for this exercise.
+**Step 3:** Select the **Projects** dropdown at the top of the page to open the project you want to use for this exercise.
-**Step 4:** Select **Assets** in your project menu bar to go to an Assets page. In the Assets menu, select **Fields > Manage Asset Fields…**
+**Step 4:** Select **Assets** in your project menu bar to go to an Assets page. In the Assets menu, select **Fields > Manage Asset Fields…**

@@ -96,7 +96,7 @@ In **New Field Name**, type “Set”. In the **GENERAL** menu under **Field Typ

-For this guide, apply it to **Only the current project** and select **Create Field**.
+For this guide, apply it to **Only the current project** and select **Create Field**.
{% include product %} will configure the new field.
@@ -106,7 +106,7 @@ Your change has been applied and you can select **Done**.
### Creating the **Dining Room** Set entity
-**Step 5:** Select the new **Set** field of an asset and start typing Dining Room. A dialog box is displayed stating, **No matches found. Create “Dining Room”**
+**Step 5:** Select the new **Set** field of an asset and start typing Dining Room. A dialog box is displayed stating, **No matches found. Create “Dining Room”**

@@ -116,15 +116,15 @@ Select **Create “Dining Room”**.
Select **Create Set**.
-Adding **Dining Room** in the Set field of an asset creates an [association](https://support.shotgunsoftware.com/hc/en-us/articles/115000010973-Linking-a-custom-entity) with the Dining Room set entity.
+Adding **Dining Room** in the Set field of an asset creates an [association](https://support.shotgunsoftware.com/hc/en-us/articles/115000010973-Linking-a-custom-entity) with the Dining Room set entity.

-**Step 6:** Assign the Model task on the **filet** asset to yourself, so you can find it easily for testing purposes.
+**Step 6:** Assign the Model task on the **filet** asset to yourself, so you can find it easily for testing purposes.
### Setting up the schema
-You’ve now enabled a Set custom entity, created a Set entity called “Dining Room”, and linked an Asset entity to the Dining Room set. You’ve got all the pieces in place in your {% include product %} site to now modify your folder structure. When an artist starts working on a task, Toolkit uses the associated {% include product %} data to determine what folders to create in the filesystem. New folders are created and named automatically based on the pipeline configuration’s schema.
+You’ve now enabled a Set custom entity, created a Set entity called “Dining Room”, and linked an Asset entity to the Dining Room set. You’ve got all the pieces in place in your {% include product %} site to now modify your folder structure. When an artist starts working on a task, Toolkit uses the associated {% include product %} data to determine what folders to create in the filesystem. New folders are created and named automatically based on the pipeline configuration’s schema.
Now it’s time to define the folder structure you want Toolkit to dynamically generate as artists step through the production pipeline. This is done by editing the schema.
@@ -150,7 +150,7 @@ To achieve this, you would set up the schema like this:
`/assets/<set>/<asset_type>/<asset>/<step>`
-The Set entity is represented as `CustomEntity01`. While we gave CustomEntity01 the *display name* of Set in {% include product %}, in our configuration, we’ll always refer to it by its system name, `CustomEntity01`.
+The Set entity is represented as `CustomEntity01`. While we gave CustomEntity01 the _display name_ of Set in {% include product %}, in our configuration, we’ll always refer to it by its system name, `CustomEntity01`.
### How the schema uses YAML files
@@ -160,13 +160,12 @@ A schema can contain static and dynamic folders. If you have a static folder in
The schema has a `project` folder that contains folders relative to the different entities {% include product %} tracks. You are adding the new asset entity, CustomEntity01, to enable {% include product %} to track the items in a Set. These items are assets, so you will edit the folders and YAML files under assets.
-Again, our goal is to go from an `asset_type/asset/step` folder structure to `set/asset_type/asset/step`. So, we’ll want to add a folder to represent set in our schema, with a corresponding YAML file. Since we need to use the system name for custom entities, we’ll be creating the `CustomEntity01/` folder and `CustomEntity01.yml`.
+Again, our goal is to go from an `asset_type/asset/step` folder structure to `set/asset_type/asset/step`. So, we’ll want to add a folder to represent set in our schema, with a corresponding YAML file. Since we need to use the system name for custom entities, we’ll be creating the `CustomEntity01/` folder and `CustomEntity01.yml`.
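
Taken together, Steps 8–10 below will leave the relevant part of the schema looking roughly like this (a sketch, using the folder names from this guide):

```
schema/project/assets/
├── CustomEntity01.yml       # names set folders from the entity's "code" field
└── CustomEntity01/          # new dynamic folder level for the set
    ├── asset_type.yml
    └── asset_type/          # moved inside CustomEntity01 in Step 10
        └── ...              # existing asset/step structure, unchanged
```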
**Step 8:** Add a `CustomEntity01` folder inside the `project/assets` folder of your schema.

-
**Step 9:** Create a file called `CustomEntity01.yml` file next to the `CustomEntity01` folder, with the following contents:
```yaml
@@ -177,16 +176,16 @@ name: "code"
entity_type: "CustomEntity01"
filters:
- - { "path": "project", "relation": "is", "values": [ "$project" ] }
+ - { "path": "project", "relation": "is", "values": ["$project"] }
```
-The YAML file will give the instructions to Toolkit for what to name the `CustomEntity01` folder. In this case, we’re making a folder of type `{% include product %}_entity`, which means that it corresponds to a {% include product %} query. The `entity_type` field tells us to query the `CustomEntity01` entity in {% include product %}, and the `name` field tells us which *field* on the entity to query – in this case we’re getting the `code` field from `CustomEntity01`.
+The YAML file will give the instructions to Toolkit for what to name the `CustomEntity01` folder. In this case, we’re making a folder of type `{% include product %}_entity`, which means that it corresponds to a {% include product %} query. The `entity_type` field tells us to query the `CustomEntity01` entity in {% include product %}, and the `name` field tells us which _field_ on the entity to query – in this case we’re getting the `code` field from `CustomEntity01`.
The `filters` field limits the cases in which this dynamic folder should be created.
**Step 10:** Move `asset_type/` and `asset_type.yml` into the `CustomEntity01` folder
-As we want our folder structure to look like `Dining-Room/Prop/filet`, the `asset_type` folder should be *below* the `CustomEntity01` folder in our hierarchy. Move `asset_type/` and `asset_type.yml` into the `CustomEntity01` folder.
+As we want our folder structure to look like `Dining-Room/Prop/filet`, the `asset_type` folder should be _below_ the `CustomEntity01` folder in our hierarchy. Move `asset_type/` and `asset_type.yml` into the `CustomEntity01` folder.

@@ -196,8 +195,8 @@ The `filters` field limits which entities have folders created for them at a giv
```yaml
filters:
- - { "path": "project", "relation": "is", "values": [ "$project" ] }
- - { "path": "sg_asset_type", "relation": "is", "values": [ "$asset_type"] }
+ - { "path": "project", "relation": "is", "values": ["$project"] }
+ - { "path": "sg_asset_type", "relation": "is", "values": ["$asset_type"] }
```
When we decide to make a folder for an asset, we want to make sure that we’re in the correct project folder, and in the correct asset_type folder. Now that we’ve added a set folder, we’ll want to add a third filter. Without it, we’d end up with folders like the following, which of course would be incorrect:
@@ -207,28 +206,28 @@ assets/Dining-Room/Prop/spoon
assets/Garage/Prop/spoon
assets/Classroom/Prop/spoon
```
+
To prevent that, we’ll add a third filter, which will ensure that an asset’s folder will only be created in the correct set’s folder.
**Step 11:** Modify the `filters` field in `asset.yml` to look like this:
```yaml
filters:
- - { "path": "project", "relation": "is", "values": [ "$project" ] }
- - { "path": "sg_asset_type", "relation": "is", "values": [ "$asset_type"] }
- - { "path": "sg_set", "relation": "is", "values": [ "$CustomEntity01" ] }
+ - { "path": "project", "relation": "is", "values": ["$project"] }
+ - { "path": "sg_asset_type", "relation": "is", "values": ["$asset_type"] }
+ - { "path": "sg_set", "relation": "is", "values": ["$CustomEntity01"] }
```
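
To make the filter mechanics concrete, here is a toy Python re-implementation of the matching logic. The function and variable names here are invented for illustration; the real logic lives inside Toolkit's folder-creation engine.

```python
# Illustrative only: a toy version of how schema "filters" narrow which
# entities get folders at a given level of the hierarchy.

def resolve(value, context):
    """Replace '$token' placeholders with values from the current folder context."""
    if isinstance(value, str) and value.startswith("$"):
        return context[value[1:]]
    return value

def matches(entity, filters, context):
    """Return True if the entity passes every filter for this context."""
    for f in filters:
        wanted = [resolve(v, context) for v in f["values"]]
        if f["relation"] == "is" and entity.get(f["path"]) not in wanted:
            return False
    return True

# The three filters from Step 11, as plain dicts:
filters = [
    {"path": "project", "relation": "is", "values": ["$project"]},
    {"path": "sg_asset_type", "relation": "is", "values": ["$asset_type"]},
    {"path": "sg_set", "relation": "is", "values": ["$CustomEntity01"]},
]

assets = [
    {"code": "filet", "project": "the_other_side", "sg_asset_type": "Prop", "sg_set": "Dining Room"},
    {"code": "spoon", "project": "the_other_side", "sg_asset_type": "Prop", "sg_set": "Garage"},
]

# Context while creating folders under assets/Dining-Room/Prop/:
context = {"project": "the_other_side", "asset_type": "Prop", "CustomEntity01": "Dining Room"}
print([a["code"] for a in assets if matches(a, filters, context)])  # ['filet']
```

Without the third filter, `spoon` would also match and get a folder under `Dining-Room`, which is exactly the incorrect result shown above.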
-
## Test folder creation
-You’ve now successfully modified your schema to organize assets by a Set custom entity. Now, let’s test it out.
+You’ve now successfully modified your schema to organize assets by a Set custom entity. Now, let’s test it out.
Folders are created at a few points in Toolkit pipeline workflows:
-* **Application launchers**: Every time a user launches a DCC for a task, Toolkit will create the directories for that task if they’re not already there. Since launching a DCC tends to be the first thing someone does with Toolkit, this is the usual way directories get created. This can happen via the right-click menus in {% include product %}, or from {% include product %} Desktop or Create apps.
-* **{% include product %} menu**: The most direct way to create folders for a task is to right-click on it in {% include product %} and choose the “Create Folders” menu item.
-* **Toolkit API**: You can trigger the directory creation logic directly through the Toolkit API. This can be used to plug Toolkit into a custom launcher, or for something like an event trigger for a workflow where you want to automatically create the directories for a Shot as it is created in {% include product %}.
-* **tank command**: Analogous to the menu item in {% include product %}, the `tank folders` terminal command will also create folders for a task.
+- **Application launchers**: Every time a user launches a DCC for a task, Toolkit will create the directories for that task if they’re not already there. Since launching a DCC tends to be the first thing someone does with Toolkit, this is the usual way directories get created. This can happen via the right-click menus in {% include product %}, or from {% include product %} Desktop or Create apps.
+- **{% include product %} menu**: The most direct way to create folders for a task is to right-click on it in {% include product %} and choose the “Create Folders” menu item.
+- **Toolkit API**: You can trigger the directory creation logic directly through the Toolkit API. This can be used to plug Toolkit into a custom launcher, or for something like an event trigger for a workflow where you want to automatically create the directories for a Shot as it is created in {% include product %}.
+- **tank command**: Analogous to the menu item in {% include product %}, the `tank folders` terminal command will also create folders for a task.
We’ll test with the `tank` command.
@@ -265,58 +264,57 @@ The final structure matches what was expected, and Toolkit is so smart that it e
`/the_other_side/assets/Dining-Room/Prop/Filet/model`
-

### Toolkit templates for reading and writing files
-Now that we’ve set up our folder structure, the next step is to edit the *templates*, so production files will be named appropriately and put in the correct folder when they're created.
+Now that we’ve set up our folder structure, the next step is to edit the _templates_, so production files will be named appropriately and put in the correct folder when they're created.
### How Toolkit apps use templates
-You first created a way to associate an asset with a set in {% include product %} by enabling CustomEntity01 to represent sets, then adding a link field to the Asset entity to represent the link between an asset and a set. After establishing the relationship between assets and sets, you set up your folder schema to use that association to place all asset *folders* within a folder for their associated set. Now you’re going to create a way to dynamically name *files* and allow Toolkit Apps to manage the files automatically.
+You first created a way to associate an asset with a set in {% include product %} by enabling CustomEntity01 to represent sets, then adding a link field to the Asset entity to represent the link between an asset and a set. After establishing the relationship between assets and sets, you set up your folder schema to use that association to place all asset _folders_ within a folder for their associated set. Now you’re going to create a way to dynamically name _files_ and allow Toolkit Apps to manage the files automatically.
-As artists start working on tasks in a project, the necessary folder structure is generated. Then, when they initiate the Workfiles app’s **File Save** action, the file is named automatically. A template accessed through Toolkit’s Workfiles app is used to name that file. Render apps like Nuke Write node and Houdini Mantra node use templates to name and save rendered files, as does the Publisher app for published files.
+As artists start working on tasks in a project, the necessary folder structure is generated. Then, when they initiate the Workfiles app’s **File Save** action, the file is named automatically. A template accessed through Toolkit’s Workfiles app is used to name that file. Render apps like Nuke Write node and Houdini Mantra node use templates to name and save rendered files, as does the Publisher app for published files.
When files are accessed using the Workfiles **File Open** action, it uses a template to find the appropriate file to load. The Publisher, Loader, and Nuke Studio Export apps also use templates to find and manage files. The artist doesn’t have to worry about file names or locations; Toolkit manages it all based on the template and the task being performed.
-Templates are managed by the configuration file `<pipeline configuration root>/config/core/templates.yml`. In the last two guides, you managed and created settings that were specific to work environments. The schema and template settings are stored in the `config/core` folder and are not specific to an environment. While all templates are stored in a single file, they are referenced from this file in app settings in the different environment configuration files. For example, `template_work` is the setting for the Workfiles app that specifies which template in `templates.yml` to use for work files. Depending on the environment and engine in which Workfiles is configured, you might use this configuration setting to point to the `maya_shot_work` template or the `houdini_asset_work` template from `templates.yml`.
+Templates are managed by the configuration file `<pipeline configuration root>/config/core/templates.yml`. In the last two guides, you managed and created settings that were specific to work environments. The schema and template settings are stored in the `config/core` folder and are not specific to an environment. While all templates are stored in a single file, they are referenced from this file in app settings in the different environment configuration files. For example, `template_work` is the setting for the Workfiles app that specifies which template in `templates.yml` to use for work files. Depending on the environment and engine in which Workfiles is configured, you might use this configuration setting to point to the `maya_shot_work` template or the `houdini_asset_work` template from `templates.yml`.
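
As a rough illustration of what a template does — plain Python string formatting, not the actual `sgtk.Template` API — resolving a definition (with `@asset_root` expanded inline, and the set level already added) against a set of key values might look like:

```python
# Illustrative only: Toolkit's template engine does this with validated,
# typed keys; plain str.format is used here just to show the idea.

definition = "assets/{CustomEntity01}/{sg_asset_type}/{Asset}/{Step}/work/maya/{name}.v{version}.{maya_extension}"

fields = {
    "CustomEntity01": "Dining-Room",
    "sg_asset_type": "Prop",
    "Asset": "Filet",
    "Step": "model",
    "name": "scene",
    "version": 1,
    "maya_extension": "mb",
}

path = definition.format(**fields)
print(path)
# assets/Dining-Room/Prop/Filet/model/work/maya/scene.v1.mb
```

The real engine also works in reverse: given an existing path, it can extract the field values back out, which is how apps like Workfiles find existing files.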
**Step 13:** Open `config/core/templates.yml` in your pipeline configuration.
This file is broken down into three sections:
-* **Keys:** A set of tokens (like `{version}`, `{Asset}`, etc.) to be used to build templates. They will be replaced with real values when the template is actually used. Each key has a required name and type and other optional parameters.
-* **Paths:** Named strings that use keys to represent paths to folders and files on disk. Note that templates in the `paths` section are validated and must actually exist on disk.
-* **Strings:** Similar to the paths section, but these are templates for arbitrary text. While items in the paths section are validated and must correspond with actual paths on disk, strings can be used to store any text data that you want to refer to in your Toolkit workflows.
+- **Keys:** A set of tokens (like `{version}`, `{Asset}`, etc.) to be used to build templates. They will be replaced with real values when the template is actually used. Each key has a required name and type and other optional parameters.
+- **Paths:** Named strings that use keys to represent paths to folders and files on disk. Note that templates in the `paths` section are validated and must actually exist on disk.
+- **Strings:** Similar to the paths section, but these are templates for arbitrary text. While items in the paths section are validated and must correspond with actual paths on disk, strings can be used to store any text data that you want to refer to in your Toolkit workflows.
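
A skeletal sketch of those three sections, reusing the `maya_asset_work` template from the Default Configuration (the key definitions here are abridged and illustrative):

```yaml
keys:
    version:
        type: int
        format_spec: "03"
    maya_extension:
        type: str

paths:
    maya_asset_work:
        definition: '@asset_root/work/maya/{name}.v{version}.{maya_extension}'

strings:
    # templates for arbitrary text (not validated against disk) go here
```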
### Add a template key for the Set entity
The first thing to do is define a new key for the Set entity, using the entity’s system name.
-**Step 14:** Add the following lines to the `keys` section of `templates.yml`, being mindful of proper indentation:
+**Step 14:** Add the following lines to the `keys` section of `templates.yml`, being mindful of proper indentation:
```yaml
- CustomEntity01:
- type: str
+CustomEntity01:
+ type: str
```
### Modifying the template
-Since templates define where Toolkit reads and writes files, it’s crucial that the paths we define here stay in step with the folder structure defined in the schema. After all, production files should go into the filesystem we’re creating. So, we’re going to modify all of our asset-related templates to match the new folder structure we defined in the schema.
+Since templates define where Toolkit reads and writes files, it’s crucial that the paths we define here stay in step with the folder structure defined in the schema. After all, production files should go into the filesystem we’re creating. So, we’re going to modify all of our asset-related templates to match the new folder structure we defined in the schema.
-Then, we’ll modify the template for work files on asset steps in Maya to also include the set in the file name. In the Default Config, the template in question is `maya_asset_work`, and that’s where we’ll start.
+Then, we’ll modify the template for work files on asset steps in Maya to also include the set in the file name. In the Default Config, the template in question is `maya_asset_work`, and that’s where we’ll start.
{% include info title="Note" content="Using a template called `maya_asset_work` for asset-based Workfiles in Maya is a convention of the Default Configuration. To confirm that that’s the right template, check the value of the `template_work` setting for `tk-multi-workfiles2` in the `tk-maya` engine, in the `asset_step` environment ([here it is in Github](https://github.com/shotgunsoftware/tk-config-default2/blob/v1.2.4/env/includes/settings/tk-multi-workfiles2.yml#L217))." %}
-**Step 15:** Open `templates.yml` and search for `maya_asset_work`.
+**Step 15:** Open `templates.yml` and search for `maya_asset_work`.
```yaml
- maya_asset_work:
- definition: '@asset_root/work/maya/{name}.v{version}.{maya_extension}'
+maya_asset_work:
+ definition: "@asset_root/work/maya/{name}.v{version}.{maya_extension}"
```
-The `definition` value for `maya_asset_work` begins with `@asset_root`. The `@` symbol signifies that the value of `@asset_root` is defined elsewhere.
+The `definition` value for `maya_asset_work` begins with `@asset_root`. The `@` symbol signifies that the value of `@asset_root` is defined elsewhere.
{% include info title="Note" content="A leading `@` symbol does not denote an *include* in `templates.yml` as it does in the environment configuration files." %}
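
For reference, `asset_root` is itself a template defined in the `paths` section. After the edit described in Step 16, it would look roughly like this (exact key names per the Default Configuration):

```yaml
paths:
    asset_root: assets/{CustomEntity01}/{sg_asset_type}/{Asset}/{Step}
```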
@@ -336,24 +334,24 @@ Add `CustomEntity01` to the `asset_root` path to match the schema modifications:
### Add set to the file name
-We’ve changed the folder structure for our files to reflect our schema changes, and now files will be read and written to the proper location. Now, let’s modify the file *name* for the Maya asset work file template, so that it also includes the set.
+We’ve changed the folder structure for our files to reflect our schema changes, and now files will be read and written to the proper location. Now, let’s modify the file _name_ for the Maya asset work file template, so that it also includes the set.
-Find the `maya_asset_work` template definition again. In its current state, the file *name* is
+Find the `maya_asset_work` template definition again. In its current state, the file _name_ is
`{name}.v{version}.{maya_extension}`
-The `{name}` template key is a special key that represents user input in the Workfiles app’s File Save action. Let’s modify the template so that it doesn’t include any user input, and instead just consists of the current set and asset.
+The `{name}` template key is a special key that represents user input in the Workfiles app’s File Save action. Let’s modify the template so that it doesn’t include any user input, and instead just consists of the current set and asset.
**Step 17:** Modify the `maya_asset_work` template definition so that it looks like this:
```yaml
- maya_asset_work:
- definition: '@asset_root/work/maya/{CustomEntity01}_{Asset}.v{version}.{maya_extension}'
+maya_asset_work:
+ definition: "@asset_root/work/maya/{CustomEntity01}_{Asset}.v{version}.{maya_extension}"
```
With this change, the name of the associated set becomes part of the file name. The result will be something like `Dining-Room_Filet.v1.mb`.
-You’ve now modified `templates.yml` to reflect the new set folder in your production folder structure, and to include the name of the set in work files for asset tasks in Maya. Let’s test out the changes.
+You’ve now modified `templates.yml` to reflect the new set folder in your production folder structure, and to include the name of the set in work files for asset tasks in Maya. Let’s test out the changes.
### Test it
@@ -361,7 +359,7 @@ You’ve now modified `templates.yml` to reflect the new set folder in your prod

-In Maya, go to **{% include product %} > File Open**, and in the resulting dialog, select a task on an asset for which you’ve specified a Set in {% include product %}.
+In Maya, go to **{% include product %} > File Open**, and in the resulting dialog, select a task on an asset for which you’ve specified a Set in {% include product %}.

@@ -381,15 +379,13 @@ The **Work Area**: is displaying **.../{% include product %}/projects/the_other_
### Extending the example
-In this example, we modified a single template, but there’s plenty more you can do with your filesystem configuration. In a real world example, you’d likely change *all* asset-related files to have the same file naming conventions. You can make modifications based on other entities (Season, Episode, Level, etc.), create user folders, name your folders based on {% include product %} data manipulated with regular expressions, and much more. You can learn about all of Toolkit’s folder and schema options in the [Filesystem Configuration Reference](https://support.shotgunsoftware.com/hc/en-us/articles/219039868).
+In this example, we modified a single template, but there’s plenty more you can do with your filesystem configuration. In a real-world example, you’d likely change _all_ asset-related files to have the same file naming conventions. You can make modifications based on other entities (Season, Episode, Level, etc.), create user folders, name your folders based on {% include product %} data manipulated with regular expressions, and much more. You can learn about all of Toolkit’s folder and schema options in the [Filesystem Configuration Reference](https://support.shotgunsoftware.com/hc/en-us/articles/219039868).
### The Path Cache
At folder creation time, a mapping is created between a folder on disk and a {% include product %} entity. These mappings are stored as FilesystemLocation entities in {% include product %}, and cached in an SQLite database on user machines. To learn more about how the path cache works and how to work with it, see [this document](../../../quick-answers/administering/what-is-path-cache.md).
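
Purely as an illustration of the idea — the real database schema and the FilesystemLocation sync logic are internal to Toolkit — a minimal path-to-entity cache using Python's standard `sqlite3` module might look like:

```python
# Hypothetical sketch of a path cache: a table mapping folders on disk
# to the entities they were created for. Column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE path_cache ("
    " entity_type TEXT, entity_id INTEGER, entity_name TEXT, path TEXT)"
)
conn.execute(
    "INSERT INTO path_cache VALUES (?, ?, ?, ?)",
    ("Asset", 1234, "Filet", "/projects/the_other_side/assets/Dining-Room/Prop/Filet"),
)

# Look up which entity a folder on disk belongs to:
row = conn.execute(
    "SELECT entity_type, entity_name FROM path_cache WHERE path = ?",
    ("/projects/the_other_side/assets/Dining-Room/Prop/Filet",),
).fetchone()
print(row)  # ('Asset', 'Filet')
```

This is why folder creation is a meaningful event in Toolkit, not just an `mkdir`: each created folder gains a durable association with ShotGrid data.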
-
### Additional Resources
-* [Filesystem Configuration Reference](https://support.shotgunsoftware.com/hc/en-us/articles/219039868)
-* [Intro to Toolkit Configuration webinar video](https://www.youtube.com/watch?v=7qZfy7KXXX0&t=1961s)
-
+- [Filesystem Configuration Reference](https://support.shotgunsoftware.com/hc/en-us/articles/219039868)
+- [Intro to Toolkit Configuration webinar video](https://www.youtube.com/watch?v=7qZfy7KXXX0&t=1961s)
diff --git a/docs/en/guides/pipeline-integrations/getting-started/editing_app_setting.md b/docs/en/guides/pipeline-integrations/getting-started/editing_app_setting.md
index 98d18a8bb..f10ac21fd 100644
--- a/docs/en/guides/pipeline-integrations/getting-started/editing_app_setting.md
+++ b/docs/en/guides/pipeline-integrations/getting-started/editing_app_setting.md
@@ -6,14 +6,15 @@ lang: en
---
# Editing a pipeline configuration
-After completing this guide, you will have the knowledge fundamental to:
-* Finding a configuration setting for a specific Toolkit app
-* Editing the settings
-* Exploring what other functions the configuration settings can extend.
+After completing this guide, you will know the fundamentals of:
+
+- Finding a configuration setting for a specific Toolkit app
+- Editing the settings
+- Exploring what other functionality the configuration settings can control
## About the guide
-
+
This guide describes how to edit settings within an existing Pipeline Configuration to meet the needs of a project pipeline. The first guide, **[Getting started with configurations](./advanced_config.md)**, described how to prepare a pipeline configuration for editing. If you aren’t familiar with how to create an editable configuration for your project, complete **Getting started with configurations** before proceeding.
Through extending the Default Configuration, {% include product %} Toolkit allows for customizing tasks within pipeline workflows. An example of a customization might be as simple as enabling or disabling a button in a Toolkit app within one or more software packages, changing the way users interact with Toolkit's features. Toolkit allows proprietary configurations that enable you to work smarter and faster by: creating custom workflows, automating repetitive and mundane tasks, modifying hooks, and even adding custom tools built on the Toolkit platform. Unfortunately, it’s only accessible through {% include product %} software integrations, and not yet released for everyday tasks like washing your clothes.
@@ -21,7 +22,7 @@ Through extending the Default Configuration, {% include product %} Toolkit allow
The exercises in this guide will teach you how to find what configuration settings control actions within the {% include product %} software integrations, where the settings live, and how to edit them. Specifically, we will edit a setting in the Workfiles app that manages the behavior of the **+New Task** button, preventing artists from creating a new task when working on a project inside Maya.
## Using this document
-
+
To use this guide and perform an edit on a pipeline configuration, the following is required:
1. An active [{% include product %}](https://www.shotgridsoftware.com/signup/?utm_source=autodesk.com&utm_medium=referral&utm_campaign=creative-project-management) site.
@@ -29,14 +30,14 @@ To use this guide and perform an edit on a pipeline configuration, the following
3. A pipeline configuration for the identified project, or complete the [Getting started with configurations](./advanced_config.md) guide and use the configuration created in that exercise.
4. Read and write permissions set appropriately for the filesystem where the Pipeline Configuration is stored.
5. {% include product %} Desktop installed on your system.
-6. An active subscription for Maya. Get a 30 day trial of Maya [here](https://www.autodesk.com/products/maya/free-trial-dts?adobe_mc_ref=https%3A%2F%2Fwww.google.com%2F&adobe_mc_sdid=SDID%3D577C0A84DDF5D35D-50E96EA2052056FE%7CMCORGID%3D6DC7655351E5696B0A490D44%2540AdobeOrg%7CTS%3D1543444689)
+6. An active subscription for Maya. Get a 30 day trial of Maya [here](https://www.autodesk.com/products/maya/free-trial-dts?adobe_mc_ref=https%3A%2F%2Fwww.google.com%2F&adobe_mc_sdid=SDID%3D577C0A84DDF5D35D-50E96EA2052056FE%7CMCORGID%3D6DC7655351E5696B0A490D44%2540AdobeOrg%7CTS%3D1543444689)
{% include info title="Note" content="This guide is based on the `tk-config-default2` pipeline configuration. If your config was modified, the location of files, folders, and blocks of YAML settings may vary from what is described here." %}
## About the Workfiles app
The Workfiles app governs file management in a {% include product %} software integration and controls access to functionality for browsing, opening, and saving work files. The **+New Task** button is an action of the Workfiles app that allows a user to add a task without having to go to {% include product %} to do so. The configuration is broken down into per-environment files. This allows you to manage functionality relative to different stages in the pipeline, controlling when a user can create, name and save files, execute tasks, or perform certain functions. This is relevant for all the functions in the Workfiles app and it also applies to modifying settings for any app or engine. Find more details in the [Advanced Topics](#advanced-topics) at the end of this document.
-
+
## Getting familiar with the configuration files
Use the Pipeline Configuration List in {% include product %} to locate where the pipeline configuration is stored for the project you’re working with. If you know where it’s stored, you can skip to Step 5.
@@ -54,16 +55,16 @@ Use the Pipeline Configuration List in {% include product %} to locate where the

**Step 4:** Once the **Pipeline Configuration List** is displayed, select the **+** sign on the far right of the column headers to add another column. In the dropdown list, choose the appropriate path for your operating system.
-
+

The path will be displayed in a new field.

-**Step 5:** In a terminal or file manager, browse to the folder where the project’s pipeline configuration is stored and open the folder.
+**Step 5:** In a terminal or file manager, browse to the folder where the project’s pipeline configuration is stored and open the folder.
-There are three subfolders in a Toolkit configuration root folder: **cache**, **config** and **install**. Open the **config** folder and nested inside you will find several subfolders and a few files.
+There are three subfolders in a Toolkit configuration root folder: **cache**, **config** and **install**. Open the **config** folder and nested inside you will find several subfolders and a few files.

@@ -81,19 +82,19 @@ Toolkit uses YAML files to configure functionality. YAML was chosen as the langu
**Step 7:** Open **{% include product %} Desktop**.
-**Step 8:** Select the project whose configuration you are going to edit.
+**Step 8:** Select the project whose configuration you are going to edit.

-**Step 9:** Launch Maya from {% include product %} Desktop.
+**Step 9:** Launch Maya from {% include product %} Desktop.

Wait for the **{% include product %}** menu to fully load. If you have a slow internet connection, this would be the time to run the configuration that makes you that perfect shot of espresso with just the right amount of crema.
-Once Maya and {% include product %} are fully loaded, the **File Open** dialog box will open automatically. When you launch Maya from {% include product %} Desktop, you will enter Maya in the **project** environment; the configuration of your Toolkit workflows will be driven by the file `config/env/project.yml`. The environments that are identified in the Default Configuration are `project`, `sequence`, `shot`, `shot_step`, `asset`, `asset_step`.
+Once Maya and {% include product %} are fully loaded, the **File Open** dialog box will open automatically. When you launch Maya from {% include product %} Desktop, you will enter Maya in the **project** environment; the configuration of your Toolkit workflows will be driven by the file `config/env/project.yml`. The environments that are identified in the Default Configuration are `project`, `sequence`, `shot`, `shot_step`, `asset`, `asset_step`.
-**Step 10:** Select the **Assets** tab in the left pane of the **File Open** dialog box. Select any asset inside the folder displayed in the search results.
+**Step 10:** Select the **Assets** tab in the left pane of the **File Open** dialog box. Select any asset inside the folder displayed in the search results.

@@ -103,13 +104,13 @@ The **+New Task** button is enabled.
Toolkit pipeline configurations are used to customize environments to meet your pipeline's needs. A pipeline configuration can override default {% include product %} integration settings, varying as much or as little as necessary to meet the needs of a project’s pipeline. This structure allows configurations to be lightweight, adding only the settings that are different from the default values in the {% include product %} core code. In this exercise, we want to turn off the Workfiles app's **+New Task** button, but before we can do so, we need to figure out which configuration setting controls it.
-**Step 11:** Select the **>** at the top right of the **File Open** window next to **Project (name of project)**.
+**Step 11:** Select the **>** at the top right of the **File Open** window next to **Project (name of project)**.
This reference box shows details about the configuration settings that control the functions of the **File Open** window. Some apps in Toolkit have a reference box to show what settings are used for the app and what the default settings are. Notice the **Location:** identifier is **tk-multi-workfiles2**. This is the identifier for the bundle of code that creates the Workfiles app. When searching a pipeline configuration, this name will identify where the settings live for the app. There’s an [Apps and Engines page](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines) that lists all the configuration settings, apps, and engines available in a {% include product %} Integration.

-Look under the **Configuration** header to find the settings for this specific environment.
+Look under the **Configuration** header to find the settings for this specific environment.

@@ -121,19 +122,19 @@ Scroll down to **Setting allow_task_creation**. The default value of this settin
When searching for a setting there are several things to consider:
-* What software application you are running.
-* What file you are working on and what environment you are working in. This is found in the App’s reference box.
-* What the specific setting is called. This is found in the App’s reference box or on the [Apps and Engines page](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines) page.
-* What YAML file to extend. There are identifiers and a roadmap detailed in the YAML files to guide you to where the settings live.
-* What specific blocks within the YAML file to extend. This is identified in the roadmap.
-* What identifiers and symbols are used in the YAML files.
-* And, most importantly, where the configuration is stored for the current project.
+- What software application you are running.
+- What file you are working on and what environment you are working in. This is found in the App’s reference box.
+- What the specific setting is called. This is found in the App’s reference box or on the [Apps and Engines page](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines) page.
+- What YAML file to extend. There are identifiers and a roadmap detailed in the YAML files to guide you to where the settings live.
+- What specific blocks within the YAML file to extend. This is identified in the roadmap.
+- What identifiers and symbols are used in the YAML files.
+- And, most importantly, where the configuration is stored for the current project.
-A setting can be utilized in multiple places within a pipeline configuration. What determines where it goes are: which software integration you want to affect and where in the pipeline process you want to impact a change.
+A setting can be utilized in multiple places within a pipeline configuration. What determines where it goes is the software integration you want to affect and the point in the pipeline process where you want to make a change.
## Find where to set the value for `allow_task_creation`
-**Step 12:** Bring the main Maya window forward.
+**Step 12:** Bring the main Maya window forward.
**Step 13:** Find the **{% include product %}** menu item in the upper right of the Maya window.
@@ -145,7 +146,7 @@ A setting can be utilized in multiple places within a pipeline configuration. Wh

-The **Work Area Info** dialog box shows what’s under the hood and details about your current work area. This includes the environment that you are is working in and the path to the environment configuration file where the settings are located.
+The **Work Area Info** dialog box shows what’s under the hood and details about your current work area. This includes the environment that you are working in and the path to the environment configuration file where the settings are located.
**Step 15:** In the **Your Current Work Area** dialog box, select the **Environment** tab at the bottom.
@@ -163,18 +164,18 @@ description: Apps and Engines when launching with a project only context.
################################################################################
includes:
-- ./includes/frameworks.yml
-- ./includes/settings/tk-3dsmaxplus.yml
-- ./includes/settings/tk-desktop.yml
-- ./includes/settings/tk-flame.yml
-- ./includes/settings/tk-houdini.yml
-- ./includes/settings/tk-mari.yml
-- ./includes/settings/tk-maya.yml
-- ./includes/settings/tk-motionbuilder.yml
-- ./includes/settings/tk-nuke.yml
-- ./includes/settings/tk-photoshopcc.yml
-- ./includes/settings/tk-shell.yml
-- ./includes/settings/tk-shotgun.yml
+ - ./includes/frameworks.yml
+ - ./includes/settings/tk-3dsmaxplus.yml
+ - ./includes/settings/tk-desktop.yml
+ - ./includes/settings/tk-flame.yml
+ - ./includes/settings/tk-houdini.yml
+ - ./includes/settings/tk-mari.yml
+ - ./includes/settings/tk-maya.yml
+ - ./includes/settings/tk-motionbuilder.yml
+ - ./includes/settings/tk-nuke.yml
+ - ./includes/settings/tk-photoshopcc.yml
+ - ./includes/settings/tk-shell.yml
+ - ./includes/settings/tk-shotgun.yml
################################################################################
# configuration for all engines to load in a project context
@@ -198,26 +199,25 @@ engines:
# reference all of the common frameworks
frameworks: "@frameworks"
-
```
-Inside `project.yml`, there are three sections below the description: `includes`, `engines`, and `frameworks`. The `includes` section is a list of file pointers that *reference* other YAML files in the configuration. The architecture of the Default Configuration takes advantage of nesting files and using pointers as another way to keep the files lightweight. Following the **includes** will bring you through one file to the next until you find the configuration setting you are looking for. It’s a bit like Russian Matryoshka dolls: you open up each doll that’s nested inside the next until you find the appropriate configuration setting.
+Inside `project.yml`, there are three sections below the description: `includes`, `engines`, and `frameworks`. The `includes` section is a list of file pointers that _reference_ other YAML files in the configuration. The architecture of the Default Configuration takes advantage of nesting files and using pointers as another way to keep the files lightweight. Following the **includes** will bring you through one file to the next until you find the configuration setting you are looking for. It’s a bit like Russian Matryoshka dolls: you open up each doll that’s nested inside the next until you find the appropriate configuration setting.
Every engine identifier begins with `tk-`. You know you want to affect settings in Maya, so the identifier we’re looking for is `tk-maya`.
Look under the `includes:` section of the `project.yml` file and find this line, `./includes/settings/tk-maya.yml`. This line indicates the configurations controlling the **settings** for the Maya engine, `tk-maya`, are nested in the **settings** folder inside the **includes** folder.
-In the `engines:` section find the `tk-maya` value.
+In the `engines:` section find the `tk-maya` value.
`tk-maya: "@settings.tk-maya.project"`
-The `@` signifies that a value is coming from an included file.
+The `@` signifies that a value is coming from an included file.
-The `settings` and `project` reference indicate it’s a project’s settings. These are naming conventions within the Default Configuration that help to guide you.
+The `settings` and `project` reference indicate it’s a project’s settings. These are naming conventions within the Default Configuration that help to guide you.
-This complete line tells us to look for the `settings.tk-maya.project` block in the included file to find the configuration settings for the Maya engine, `tk-maya`.
+This complete line tells us to look for the `settings.tk-maya.project` block in the included file to find the configuration settings for the Maya engine, `tk-maya`.
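Put together, the lookup chain can be sketched like this (trimmed to the relevant lines; the real files contain many more entries):

```yaml
# config/env/project.yml
includes:
  - ./includes/settings/tk-maya.yml
engines:
  # "@settings.tk-maya.project" is resolved from the included file below
  tk-maya: "@settings.tk-maya.project"

# config/env/includes/settings/tk-maya.yml
# the block the "@" reference points to
settings.tk-maya.project:
  apps:
    # ... app settings live here ...
```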
{% include product %} Toolkit uses simple terms in the YAML files to indicate the names of the settings and what paths will lead you to them. You already know from looking in the Maya **File Open** reference box that the bundle of code which controls how the **+New Task** button performs is identified by `tk-multi-workfiles2`. Toolkit bundles are referenced in the YAML files using these identifiers. `tk-multi-workfiles2` is the identifier for the Workfiles app code bundle, and the **+New Task** button is a function of the Workfiles app.
@@ -227,7 +227,7 @@ Looking for the Workfiles App settings in tk-maya.yml

-**Step 18:** Following the include from `project.yml`, search the `tk-maya.yml` file for `settings.tk-maya.project`. You are specifically looking to disable the **+New Task** button in the project environment of a specific project. You are in the configuration for that project and obtained the location information while you were in the project environment.
+**Step 18:** Following the include from `project.yml`, search the `tk-maya.yml` file for `settings.tk-maya.project`. You are specifically looking to disable the **+New Task** button in the project environment of a specific project. You are in the configuration for that project and obtained the location information while you were in the project environment.
```yaml
# project
@@ -239,13 +239,13 @@ settings.tk-maya.project:
tk-multi-shotgunpanel: "@settings.tk-multi-shotgunpanel"
tk-multi-workfiles2: "@settings.tk-multi-workfiles2.launch_at_startup"
menu_favourites:
- - {app_instance: tk-multi-workfiles2, name: File Open...}
+ - { app_instance: tk-multi-workfiles2, name: File Open... }
location: "@engines.tk-maya.location"
```
-Under `settings.tk-maya.projects`, the `tk-multi-workfiles2` app settings are listed as
+Under `settings.tk-maya.project`, the `tk-multi-workfiles2` app settings are listed as
`tk-multi-workfiles2: "@settings.tk-multi-workfiles2.launch_at_startup"`
@@ -253,7 +253,7 @@ The `@` symbol tells us that the value for `tk-multi-workfiles2` is coming
```yaml
includes:
-...
+...
- ./tk-multi-workfiles2.yml
```
@@ -267,12 +267,12 @@ settings.tk-multi-workfiles2.launch_at_startup:
launch_at_startup: true
entities:
```
-
+
-The Maya reference box indicated the `allow_task_creation` setting has a default value of `true`. As a best practice, no default settings are reflected in a pipeline configuration. This allows for a **sparse** format, adding only the settings that differ from the default code to the configuration. If a setting isn’t explicitly provided, any calls accessing that setting will receive the default value. When Toolkit reads the configuration and builds an environment, the apps, engines, and frameworks running in that environment use that project’s pipeline configuration settings and override any default settings based on what’s in the configuration.
+The Maya reference box indicated the `allow_task_creation` setting has a default value of `true`. As a best practice, no default settings are reflected in a pipeline configuration. This allows for a **sparse** format, adding only the settings that differ from the default code to the configuration. If a setting isn’t explicitly provided, any calls accessing that setting will receive the default value. When Toolkit reads the configuration and builds an environment, the apps, engines, and frameworks running in that environment use that project’s pipeline configuration settings and override any default settings based on what’s in the configuration.
-**Step 20:** In `tk-multi-workfiles2.yml`, add `allow_task_creation` under `settings.tk-multi-workfiles2.launch_at_startup:` and set the value to `false`
+**Step 20:** In `tk-multi-workfiles2.yml`, add `allow_task_creation` under `settings.tk-multi-workfiles2.launch_at_startup:` and set the value to `false`.
```yaml
# launches at startup.
@@ -281,7 +281,7 @@ settings.tk-multi-workfiles2.launch_at_startup:
launch_at_startup: true
entities:
```
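With the edit from Step 20 applied, the block would read something like the following sketch (surrounding entries are trimmed, and your file may contain additional settings):

```yaml
# launches at startup.
settings.tk-multi-workfiles2.launch_at_startup:
  # added: override the default of true to hide the +New Task button
  allow_task_creation: false
  launch_at_startup: true
  entities:
    # ... existing entity settings remain unchanged ...
```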
-
+
**NOTE:** Toolkit Default Configuration settings are organized alphabetically as an easy way to find specific settings. Keeping this convention will make your life a lot easier as the configuration gets a little heavier.
@@ -304,7 +304,7 @@ This will reload the configuration settings.
Notice that the **+New Task** button is not visible.
-You’ve modified a configuration setting for the Workfiles app, changing the behavior of a button in a project environment. Since you only modified that setting in the project environment, if you start working in another environment the settings for the **+New Task** button will still be active. In a real production example, you'd likely make the change we made here for *all* environments.
+You’ve modified a configuration setting for the Workfiles app, changing the behavior of a button in a project environment. Since you only modified that setting in the project environment, if you start working in another environment the settings for the **+New Task** button will still be active. In a real production example, you'd likely make the change we made here for _all_ environments.
## Changing environments
@@ -318,13 +318,13 @@ By selecting **+New File**, you began to work on a new asset and the `asset_step
## Discover what environment you are working in
-**Step 26:** In the upper right of the Maya menu select **{% include product %}**.
+**Step 26:** In the upper right of the Maya menu select **{% include product %}**.

**Art, Asset** tells you what you’re working on and what environment you’re in.
-**Step 27:** Select **Art, Asset > Work Area Info…** to display what the parameters are in your current work area.
+**Step 27:** Select **Art, Asset > Work Area Info…** to display what the parameters are in your current work area.
**Step 28:** Select the **Environment** tab at the bottom.
@@ -334,7 +334,7 @@ Each environment will display the information necessary to determine where the s
NOTE: Each environment is independent, a project has a dedicated configuration, and the software integrations only read settings for their specific software from the pipeline configuration when a project is loaded.
-You've now edited your pipeline configuration, making a change to the settings for an app. And now the real fun begins: learning all the things you can do with {% include product %} Toolkit environments. Here are some advanced topics to explore.
+You've now edited your pipeline configuration, making a change to the settings for an app. And now the real fun begins: learning all the things you can do with {% include product %} Toolkit environments. Here are some advanced topics to explore.
## Advanced topics
@@ -350,11 +350,11 @@ We disabled task creation in the project environment, but in a real studio envir
### Creating custom environments
-The Default Configuration comes with a set of pre-defined pipeline steps: `project`, `sequence`, `shot`, `shot_step`, `asset`, and `asset_step`. However, a studio might want different configuration settings for every stage in the pipeline – say `asset_step_rig`, `asset_step_model`, `shot_step_anim`, `shot_step_light`, and so on. Toolkit supports custom environments. See the ["Custom environments" section of the Environment Configuration Reference](../../../reference/pipeline-integrations/env-config-ref.md#custom-environments) for details.
+The Default Configuration comes with a set of pre-defined pipeline steps: `project`, `sequence`, `shot`, `shot_step`, `asset`, and `asset_step`. However, a studio might want different configuration settings for every stage in the pipeline – say `asset_step_rig`, `asset_step_model`, `shot_step_anim`, `shot_step_light`, and so on. Toolkit supports custom environments. See the ["Custom environments" section of the Environment Configuration Reference](../../../reference/pipeline-integrations/env-config-ref.md#custom-environments) for details.
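As a purely hypothetical sketch of the idea (the file name and block name below are invented for illustration; the real mechanics, including how environments are picked, are covered in the linked reference):

```yaml
# config/env/asset_step_rig.yml (hypothetical custom environment)
description: Apps and Engines for rigging work in the asset step.

includes:
  - ./includes/settings/tk-maya.yml

engines:
  # a custom settings block you would add to the included settings file
  tk-maya: "@settings.tk-maya.asset_step_rig"
```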
### Video Resources
-* [Intro to Toolkit configurations](https://www.youtube.com/watch?v=7qZfy7KXXX0&t=1961s) from our SIGGRAPH 2018 Developer Day
-* [Demystifying the Default Configuration webinar](https://www.youtube.com/watch?v=eKHaC1dZCeE)
+- [Intro to Toolkit configurations](https://www.youtube.com/watch?v=7qZfy7KXXX0&t=1961s) from our SIGGRAPH 2018 Developer Day
+- [Demystifying the Default Configuration webinar](https://www.youtube.com/watch?v=eKHaC1dZCeE)
Now that you’ve learned how to modify an app configuration setting, try [adding an app to your Toolkit configuration](installing_app.md).
diff --git a/docs/en/guides/pipeline-integrations/getting-started/installing_app.md b/docs/en/guides/pipeline-integrations/getting-started/installing_app.md
index c9b54ba72..a6b15b857 100644
--- a/docs/en/guides/pipeline-integrations/getting-started/installing_app.md
+++ b/docs/en/guides/pipeline-integrations/getting-started/installing_app.md
@@ -6,31 +6,31 @@ lang: en
---
# Adding an app
-
+
By completing this guide, you will quickly become acquainted with the configuration management tools in Toolkit and learn how to:
-* Safely create a copy of an active pipeline configuration
-* Add an app to a configuration
-* Add the settings necessary to use that app in specific environments
-* Push your changes back to the active configuration
+- Safely create a copy of an active pipeline configuration
+- Add an app to a configuration
+- Add the settings necessary to use that app in specific environments
+- Push your changes back to the active configuration
## About the guide
-This guide will demonstrate how to add a {% include product %} Toolkit app to an existing pipeline configuration. You will quickly become acquainted with the configuration management tools.
+This guide will demonstrate how to add a {% include product %} Toolkit app to an existing pipeline configuration. You will quickly become acquainted with the configuration management tools.
-The app we will be adding is the {% include product %} Python Console app. Maya has its own Python console, but there are some features in the Toolkit app that don’t exist in the Maya console.
+The app we will be adding is the {% include product %} Python Console app. Maya has its own Python console, but there are some features in the Toolkit app that don’t exist in the Maya console.
This guide utilizes the pipeline configuration we created in the [Editing a Pipeline Configuration](./editing_app_setting.md) guide. If you haven’t completed that guide, you can use an existing pipeline configuration and add the app there.
## Using this document
-
+
To use this guide and install a Toolkit app, the following is required:
1. An active [{% include product %}](https://www.shotgridsoftware.com/signup/) site.
2. A pipeline configuration for the identified project, or complete the [Getting Started with Configurations guide](./advanced_config.md) and use the configuration created in that exercise.
3. Read and write permissions set appropriately for the filesystem where the pipeline configuration is stored.
4. {% include product %} Desktop installed on your system.
-5. An active subscription for Maya. Get a 30 day trial of Maya [here](https://www.autodesk.com/products/maya/free-trial-dts).
+5. An active subscription for Maya. Get a 30 day trial of Maya [here](https://www.autodesk.com/products/maya/free-trial-dts).
{% include info title="Note" content="This guide is based on the tk-config-default2 pipeline configuration. If your config was modified, the location of files, folders, and blocks of YAML settings may vary from what is described here." %}
@@ -79,7 +79,7 @@ Cloning a pipeline configuration automates the process of creating a copy, build
## Clone the Pipeline Configuration you want to add an app to
-### Go to the Pipeline Configuration list.
+### Go to the Pipeline Configuration list
**Step 3:** Open {% include product %} and in the upper right, select the **Admin Menu (your avatar) > Default Layouts > Pipeline Configuration > Pipeline Configuration List**.
@@ -93,7 +93,7 @@ This action displays a detailed list of all of your {% include product %} site's
### Review where the project’s configuration is located
-**Step 5:** Additionally, add the the appropriate **Path** field for your operating system.
+**Step 5:** Additionally, add the appropriate **Path** field for your operating system.

@@ -105,7 +105,7 @@ This displays the paths to the configuration files.

-**Step 7:** Name the configuration in the Configuration List and name the file in the directory: "Primary Clone Config 2" and “the_other_side_clone2,” respectively. Select **OK**.
+**Step 7:** Name the configuration in the Configuration List and name the file in the directory: “Primary Clone Config 2” and “the_other_side_clone2”, respectively. Select **OK**.

@@ -137,7 +137,7 @@ If an app that you want to use isn’t referenced in the little black book, you
## Tell Toolkit where to find the app
-**Step 10:** Search the file for `pythonconsole`. If you used the Default Configuration for the project, you will find that the descriptor for the Python Console app is listed in this file. It should match the description we found in the [list](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines) of Maya apps we looked at when we started on our journey. Check to make sure the version matches what we looked at in the list of Maya apps.
+**Step 10:** Search the file for `pythonconsole`. If you used the Default Configuration for the project, you will find that the descriptor for the Python Console app is listed in this file. It should match the description we found in the [list](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines) of Maya apps we looked at when we started on our journey. Check to make sure the version matches what we looked at in the list of Maya apps.
```yaml
apps.tk-multi-pythonconsole.location:
@@ -174,10 +174,9 @@ engines:
tk-photoshopcc: "@settings.tk-photoshopcc.project"
tk-shell: "@settings.tk-shell.project"
tk-shotgun: "@settings.tk-shotgun.project"
-
```
-The `tk-maya: “@settings.tk-maya.project”` line using the `@settings` tells you that the settings are in an included file. The `tk-maya` identifies the Maya engine and the `project` identifies the environment.
+The `@settings` prefix in the line `tk-maya: "@settings.tk-maya.project"` tells you that the settings are in an included file. `tk-maya` identifies the Maya engine and `project` identifies the environment.
### YAML files
@@ -185,12 +184,12 @@ The {% include product %} Toolkit pipeline configuration uses simple terms in [Y
For this specific block:
-* `settings` is what was chosen for the Default Configuration as a reference for the settings folder
-* `project` is what was chosen for the Default Configuration as a reference for the project environment
-* `tk-maya` is the identifier for Toolkit's engine for Maya
-* `@` is a Toolkit term used to denote that a setting value is coming from an included file
+- `settings` is what was chosen for the Default Configuration as a reference for the settings folder
+- `project` is what was chosen for the Default Configuration as a reference for the project environment
+- `tk-maya` is the identifier for Toolkit's engine for Maya
+- `@` is a Toolkit term used to denote that a setting value is coming from an included file
-The YAML files are the windows into {% include product %}’s integrations and make it easier to configure working environments that meet the needs of your pipeline.
+The YAML files are the windows into {% include product %}’s integrations and make it easier to configure working environments that meet the needs of your pipeline.
### How the configuration references Toolkit bundles
@@ -204,18 +203,18 @@ For this specific block in the Default Configuration, ‘tk-maya.project’ is t
`./includes/settings/tk-maya.yml`
-**Step 14:** In your cloned configuration, open `config/env/includes/settings/tk-maya.yml` in a text editor, and search for `settings.tk-maya.project`.
+**Step 14:** In your cloned configuration, open `config/env/includes/settings/tk-maya.yml` in a text editor, and search for `settings.tk-maya.project`.

-**Step 15:** Add the location descriptor under
+**Step 15:** Add the location descriptor under
```yaml
settings.tk-maya.project:
apps:
```
-Use the `about` app, `tk-multi-about:`, as a guide for how to add the location descriptor, then save the file.
+Use the `about` app, `tk-multi-about:`, as a guide for how to add the location descriptor, then save the file.
{% include info title="Note" content="Make sure your [YAML](https://www.tutorialspoint.com/yaml/yaml_indentation_and_separation.htm) files are formatted correctly using spaces and not tabs." %}
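Following the `tk-multi-about` pattern, the result would look something like the sketch below (entries other than the two shown are trimmed, and the exact neighboring lines in your file may differ):

```yaml
settings.tk-maya.project:
  apps:
    tk-multi-about:
      location: "@apps.tk-multi-about.location"
    # added: point the Python Console app at its descriptor in app_locations.yml
    tk-multi-pythonconsole:
      location: "@apps.tk-multi-pythonconsole.location"
```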
@@ -231,7 +230,7 @@ settings.tk-maya.project:
tk-multi-shotgunpanel: "@settings.tk-multi-shotgunpanel"
tk-multi-workfiles2: "@settings.tk-multi-workfiles2.launch_at_startup"
```
-
+
You will notice the **Screening Room, {% include product %} Panel, and Workfiles2** apps’ location identifiers are listed in a different included file and accessed differently than the **About** app. To keep things tidy, these apps were split off to the included settings folder because they have additional settings.
{% include info title="Note" content="The python console app already exists in the Default Configuration, however if you are adding an app that has never been added to your configuration before or if you have changed the version of an app, and you are using a [centralized configuration](https://developer.shotgridsoftware.com/tk-core/initializing.html#centralized-configurations), then there is an additional step you need to take. Open your terminal and browse to where your cloned configuration is stored. From your cloned configuration’s root folder, run the following command:
@@ -248,9 +247,9 @@ This will scan your configuration for apps, engines and frameworks and ensure th
## View the changes in Maya
-**Step 16:** Open {% include product %} Desktop, select the project you were working with, and confirm you are using the cloned configuration.
+**Step 16:** Open {% include product %} Desktop, select the project you were working with, and confirm you are using the cloned configuration.
-There will be a blue bar with the name of the clone that you created under the name of the project.
+There will be a blue bar with the name of the clone that you created under the name of the project.
{% include info title="Note" content="If you’re using the primary, there will be no blue bar and the configuration name won’t be visible." %}
@@ -260,11 +259,11 @@ There will be a blue bar with the name of the clone that you created under the n
If:
-* You’re using the cloned configuration that you just edited
-* The cloned configuration was extended correctly
-* You saved the extended files
-* You chose to associate the project with the cloned configuration
-* You relaunched Maya from {% include product %} Desktop
+- You’re using the cloned configuration that you just edited
+- The cloned configuration was extended correctly
+- You saved the extended files
+- You chose to associate the project with the cloned configuration
+- You relaunched Maya from {% include product %} Desktop
The Python Console app will be available in Maya.
@@ -274,7 +273,7 @@ The Python Console app will be available in Maya.
After confirming you added the Python Console app correctly, you’re ready to push the changes live.
-**Step 18:** Open your terminal and browse to where your cloned configuration is stored. From your cloned configuration's root folder, run the following command:
+**Step 18:** Open your terminal and browse to where your cloned configuration is stored. From your cloned configuration's root folder, run the following command:
On Linux or Mac:
@@ -284,7 +283,7 @@ On Windows:
`tank.bat push_configuration`
-Follow the prompts and type in the ID for your project’s Primary configuration, the configuration you want to push the changes to.
+Follow the prompts and type in the ID for your project’s Primary configuration, the configuration you want to push the changes to.
```
@@ -318,7 +317,7 @@ Your existing configuration will be backed up.
The following pipeline configurations are available to push to:
- [1] Primary (/Users/michelle/Documents/Shotgun/configs/the_other_side)
-Please type in the id of the configuration to push to (ENTER to exit):
+Please type in the id of the configuration to push to (ENTER to exit):
```
@@ -330,11 +329,11 @@ There will be a list of the available pipeline configurations that the cloned co
After you enter the ID, {% include product %} will:
-* Backup the Primary configuration
-* Copy the cloned configuration
-* Associate the copied cloned configuration with the project leaving the clone intact
-* Displaying where the Primary config was saved
-* Check to see if there are any apps that need to be downloaded and cached
+- Backup the Primary configuration
+- Copy the cloned configuration
+- Associate the copied cloned configuration with the project, leaving the clone intact
+- Display where the Primary config was saved
+- Check to see if there are any apps that need to be downloaded and cached
```
Please type in the id of the configuration to push to (ENTER to exit): 1
@@ -346,7 +345,7 @@ Checking if there are any apps that need downloading…
Push Complete!
```
-## View the changes you made in the primary configuration
+## View the changes you made in the primary configuration
**Step 19:** In {% include product %} Desktop, click on the arrow in the upper right and choose **Primary** in the **CONFIGURATION** list.
@@ -356,7 +355,7 @@ Push Complete!

-The Python Console app was added to the Project environment for the chosen project. We discussed in the second guide, “[Editing a configuration](./editing_app_setting.md), that each environment is independent, a project has a dedicated configuration, and the software integrations gather settings from the pipeline configuration when a project is loaded. For the Python Console to be available in an environment, that environment will need instructions to look in the `app_locations.yml` file for the location descriptor. Given this, at any point in the pipeline where you want the Python Console app to be available will need the settings that say, “use the Python Console app here.”
+The Python Console app was added to the Project environment for the chosen project. We discussed in the second guide, “[Editing a configuration](./editing_app_setting.md),” that each environment is independent, a project has a dedicated configuration, and the software integrations gather settings from the pipeline configuration when a project is loaded. For the Python Console to be available in an environment, that environment will need instructions to look in the `app_locations.yml` file for the location descriptor. Given this, any environment in the pipeline where you want the Python Console app to be available will need the settings that say, “use the Python Console app here.”
## Advanced topics
@@ -368,7 +367,7 @@ Standard Toolkit apps and apps created by the loving {% include product %} commu
### Investigate how to extend a configuration
-You may have noticed when we were selecting which configuration to use for the project, the Python Console App was available in the {% include product %} Desktop dropdown.
+You may have noticed when we were selecting which configuration to use for the project, the Python Console App was available in the {% include product %} Desktop dropdown.

@@ -376,7 +375,7 @@ If there’s an environment that is using an app you want to add to your pipelin
The Desktop app opens in the project environment, so find `tk-desktop` in the `project.yml` file.
-Open `config/env/project.yml`.
+Open `config/env/project.yml`.
{% include info title="Note" content='In the engine block, `tk-desktop` points to included content:
@@ -396,11 +395,11 @@ apps:
location: "@apps.tk-multi-pythonconsole.location"
```
-These blocks add the Python Console app to the Desktop engine in the project step.
+These blocks add the Python Console app to the Desktop engine in the project step.
Follow that include further to `../includes/app_locations.yml` and search for `apps.tk-multi-pythonconsole.location` to find the following:
 ```yaml
# pythonconsole
apps.tk-multi-pythonconsole.location:
type: app_store
@@ -410,8 +409,8 @@ apps.tk-multi-pythonconsole.location:
Every app, engine, and framework has a location descriptor that is used to tell Toolkit where to access the specific bundle. Many app descriptors exist in the `app_locations.yml` file, but may not be referenced where you want them, as we saw with the Python Console app. All the standard Apps and Engines are listed on the [Apps and Engines page](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines).
-You can add any app to any appropriate software integration that ShotGrid supports, or add your own proprietary application to your Toolkit arsenal. All the supported software applications are also listed on the Integrations [Apps and Engines page](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines) page.
+You can add any app to any appropriate software integration that ShotGrid supports, or add your own proprietary application to your Toolkit arsenal. All the supported software applications are also listed on the Integrations [Apps and Engines page](https://support.shotgunsoftware.com/hc/en-us/articles/219039798-Integrations-Apps-and-Engines).
-If you can’t find the exact app you are looking for, you can create one. There’s a good chance that other ShotGrid users will need the same function and sharing new apps is one way to give back to the ShotGrid community.
+If you can’t find the exact app you are looking for, you can create one. There’s a good chance that other ShotGrid users will need the same function, and sharing new apps is one way to give back to the ShotGrid community.
-In the next guide, you will learn how to customize your production folder structure to reflect how your facility is structured](./dynamic_filesystem_configuration.md).
+In the next guide, you will learn [how to customize your production folder structure to reflect how your facility is structured](./dynamic_filesystem_configuration.md).
diff --git a/docs/en/guides/pipeline-integrations/getting_started.md b/docs/en/guides/pipeline-integrations/getting_started.md
index d847cbd4b..86a3bead6 100644
--- a/docs/en/guides/pipeline-integrations/getting_started.md
+++ b/docs/en/guides/pipeline-integrations/getting_started.md
@@ -7,4 +7,4 @@ lang: en
# Getting Started
-{% include product %} Toolkit provides a set of highly customizable tools for creating studio pipelines where artists can access info from {% include product %} and share their work with each other, without ever leaving their content creation software. These guides are designed to get you started with setting up your custom Toolkit pipeline via hands-on exercises: you'll use the {% include product %} Desktop app to generate an editable configuration for your project, modify settings on existing apps, and even add Toolkit apps to different artist environments, getting you on the path to designing your custom pipeline. Each guide builds on the ones before it, but they include instructions for jumping in from anywhere.
+{% include product %} Toolkit provides a set of highly customizable tools for creating studio pipelines where artists can access info from {% include product %} and share their work with each other, without ever leaving their content creation software. These guides are designed to get you started with setting up your custom Toolkit pipeline via hands-on exercises: you'll use the {% include product %} Desktop app to generate an editable configuration for your project, modify settings on existing apps, and even add Toolkit apps to different artist environments, getting you on the path to designing your custom pipeline. Each guide builds on the ones before it, but they include instructions for jumping in from anywhere.
diff --git a/docs/en/guides/pipeline-integrations/workflows.md b/docs/en/guides/pipeline-integrations/workflows.md
index feebb0757..622e71273 100644
--- a/docs/en/guides/pipeline-integrations/workflows.md
+++ b/docs/en/guides/pipeline-integrations/workflows.md
@@ -7,6 +7,6 @@ lang: en
# Workflows
-The customizations in {% include product %}'s pipeline integrations allow you to use the tools to create a variety of workflows: from feature animation to episodic workflows, from visual effects to games.
+The customizations in {% include product %}'s pipeline integrations allow you to use the tools to create a variety of workflows: from feature animation to episodic workflows, from visual effects to games.
-This section contains resources for building specific workflows.
+This section contains resources for building specific workflows.
diff --git a/docs/en/guides/pipeline-integrations/workflows/pipeline-tutorial.md b/docs/en/guides/pipeline-integrations/workflows/pipeline-tutorial.md
index 7a49b956c..b671070ef 100644
--- a/docs/en/guides/pipeline-integrations/workflows/pipeline-tutorial.md
+++ b/docs/en/guides/pipeline-integrations/workflows/pipeline-tutorial.md
@@ -7,11 +7,11 @@ lang: en
# Animation Pipeline Tutorial
-This tutorial covers building a simplified, yet typical, pipeline for animation or visual effects production. By following this tutorial you will build a pipeline that provides all of the pieces necessary to push Assets from modeling through look development, and then into and through a production scene.
+This tutorial covers building a simplified, yet typical, pipeline for animation or visual effects production. By following this tutorial you will build a pipeline that provides all of the pieces necessary to push Assets from modeling through look development, and then into and through a production scene.
Much of the workflows covered in this pipeline work out-of-the-box with {% include product %}'s built-in integrations. For the portions of the pipeline where studios are more often building custom solutions the tutorial will walk you through the process of customizing the artists workflow using the Toolkit platform.
-Here is a high level view of the pipeline you will build in this tutorial:
+Here is a high level view of the pipeline you will build in this tutorial:
{% include figure src="./images/tutorial/image_0.png" caption="Pipeline Overview" %}
@@ -23,43 +23,43 @@ For simplicity, the digital content creation (DCC) software used will be kept to
## Prerequisites
-* **A working {% include product %} Project** - This tutorial assumes you have experience using {% include product %} for tracking and managing production data.
+- **A working {% include product %} Project** - This tutorial assumes you have experience using {% include product %} for tracking and managing production data.
-* **Understanding of {% include product %} Integrations** - {% include product %} ships with integrations that provide some simple production workflows without requiring any manual configuration. You should understand the features and scope of these workflows before diving into the manual configuration and customizations outlined in this tutorial. More information about {% include product %} Integrations can be found [here](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574).
+- **Understanding of {% include product %} Integrations** - {% include product %} ships with integrations that provide some simple production workflows without requiring any manual configuration. You should understand the features and scope of these workflows before diving into the manual configuration and customizations outlined in this tutorial. More information about {% include product %} Integrations can be found [here](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574).
-* **Maya & Nuke Experience** - This tutorial is designed to build a simple pipeline using Maya and Nuke. You should have a basic understanding of these packages in order to customize the integrations provided by {% include product %}.
+- **Maya & Nuke Experience** - This tutorial is designed to build a simple pipeline using Maya and Nuke. You should have a basic understanding of these packages in order to customize the integrations provided by {% include product %}.
-* **Working knowledge of Python** - The tutorial requires modifying the functionality of {% include product %} integrations via "hooks" that are written in Python.
+- **Working knowledge of Python** - The tutorial requires modifying the functionality of {% include product %} integrations via "hooks" that are written in Python.
-* **Familiarity with YAML** - Much of the configuration of the pipeline you will be building is handled by modifying YAML files.
+- **Familiarity with YAML** - Much of the configuration of the pipeline you will be building is handled by modifying YAML files.
## Additional Resources
-* [{% include product %} Support Site](https://support.shotgunsoftware.com)
+- [{% include product %} Support Site](https://support.shotgunsoftware.com)
-* [{% include product %} Integrations](https://www.shotgridsoftware.com/integrations/)
+- [{% include product %} Integrations](https://www.shotgridsoftware.com/integrations/)
- * [User Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574)
+ - [User Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574)
- * [Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493)
+ - [Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493)
- * [Developer Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067513)
+ - [Developer Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067513)
# Project Creation & Setup
For this Tutorial, you will need to create a new project in {% include product %} and configure it as if you were preparing for production to begin. This includes ensuring all of the necessary {% include product %} entities are in place and linked up properly. For this tutorial, the Asset, Sequence, Shot, and Task entities are required and should be available by default in a new project. You will create:
-* Two **Assets**:
+- Two **Assets**:
- * **_Teapot_** character
+ - **_Teapot_** character
- * **_Table_** prop
+ - **_Table_** prop
-* One **Sequence**
+- One **Sequence**
-* One **Shot** linked to the **Sequence** you created
+- One **Shot** linked to the **Sequence** you created
-* A **Task** per pipeline step
+- A **Task** per pipeline step
Here are some screenshots of what your configured project entities should look like in {% include product %}:
@@ -71,25 +71,25 @@ Here are some screenshots of what your configured project entities should look l
## Software Launchers
-Next, you'll need to ensure that Maya and Nuke are available to launch in {% include product %} Desktop. In Desktop, make sure that each of these packages can be launched by clicking on their icon. Be sure that the proper version of each package is launched.
+Next, you'll need to ensure that Maya and Nuke are available to launch in {% include product %} Desktop. In Desktop, make sure that each of these packages can be launched by clicking on their icon. Be sure that the proper version of each package is launched.
-If either application does not show up in Desktop or the expected version does not launch, you may need to manually configure the launch in {% include product %} via the Software entity.
+If either application does not show up in Desktop or the expected version does not launch, you may need to manually configure the launch in {% include product %} via the Software entity.
{% include figure src="./images/tutorial/image_4.png" caption="The default Software entities defined in ShotGrid" %}
-The Software entity is used to drive which DCC packages to use on your production. By default, the integrations will search for these packages in standard installation locations and make them launchable via Desktop. If you have more than one version installed or you have them installed in a non-standard location, it is possible you need to update the corresponding Software entity entry in {% include product %} to curate the launch experience for your artists.
+The Software entity is used to drive which DCC packages to use on your production. By default, the integrations will search for these packages in standard installation locations and make them launchable via Desktop. If you have more than one version installed or you have them installed in a non-standard location, you may need to update the corresponding Software entity entry in {% include product %} to curate the launch experience for your artists.
For complete details on the Software entity and how to properly configure it, please see the [Integrations Admin Guide](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493-Integrations-Admin-Guide#Configuring%20software%20launches). Once you have your DCCs launching the way you expect, you can continue to the next section.
# Configuration
-The configuration (config) defines the artist workflow for your project. This includes specifying which {% include product %} integrations to include within the DCCs your artists are launching, how your project's folder structure is defined, and the naming conventions for files and folders created as artists share data.
+The configuration (config) defines the artist workflow for your project. This includes specifying which {% include product %} integrations to include within the DCCs your artists are launching, how your project's folder structure is defined, and the naming conventions for files and folders created as artists share data.
By default, all new projects are configured to use the basic [{% include product %} Integrations](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574) which provide a basic workflow for sharing files between artists using many off-the-shelf software packages. The following sections outline how to take over your project's pipeline configuration (config) and customize it for your studio.
## Taking Over the Project Config
-Use {% include product %} Desktop (Desktop) to take over your project's configuration. RMB click within Desktop or click the user icon in the bottom right to show the popup menu. Select the **Advanced project setup…** option and follow the wizard to locally install your project configuration. The images below show the required steps. You can also follow the steps outlined in the Integrations Admin Guide for [Taking over a Pipeline Configuration](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493-Integrations-Admin-Guide#Taking%20over%20a%20Pipeline%20Configuration).
+Use {% include product %} Desktop (Desktop) to take over your project's configuration. RMB click within Desktop or click the user icon in the bottom right to show the popup menu. Select the **Advanced project setup…** option and follow the wizard to locally install your project configuration. The images below show the required steps. You can also follow the steps outlined in the Integrations Admin Guide for [Taking over a Pipeline Configuration](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493-Integrations-Admin-Guide#Taking%20over%20a%20Pipeline%20Configuration).
{% include figure src="./images/tutorial/image_5.png" caption="Select the **Advanced project setup…** in the Desktop popup menu" %}
@@ -97,7 +97,7 @@ Use {% include product %} Desktop (Desktop) to take over your project's configur
{% include figure src="./images/tutorial/wizard_02.png" caption="Choose the **Default configuration**" %}
-If this is your first time setting up a {% include product %} project, you'll also be prompted to define a storage location for your project data. Otherwise, you can select an existing storage location.
+If this is your first time setting up a {% include product %} project, you'll also be prompted to define a storage location for your project data. Otherwise, you can select an existing storage location.
{% include figure src="./images/tutorial/wizard_03.png" caption="Create a new storage." %}
@@ -105,23 +105,23 @@ If this is your first time setting up a {% include product %} project, you'll al
{% include figure src="./images/tutorial/wizard_05.png" caption="Set the path(s) where this storage will be accessible on the operating systems you intend to use." %}
-You can view and edit the storages for your {% include product %} site in your **Site Preferences**, under the **File Management** section. You can learn more about these settings [here](https://support.shotgunsoftware.com/hc/en-us/articles/219030938).
+You can view and edit the storages for your {% include product %} site in your **Site Preferences**, under the **File Management** section. You can learn more about these settings [here](https://support.shotgunsoftware.com/hc/en-us/articles/219030938).
Now that you have a storage location selected, you'll choose the name of the directory in that location for your new project.
{% include figure src="./images/tutorial/wizard_06.png" caption="Enter the name of the folder where your project's files will live." %}
-For this tutorial, we'll be using a centralized configuration. The **Distributed Setup** option provides an alternate option that can provide a different set of benefits, and may be the preferred option for studios without fast shared storage. You can learn more about the pro and cons of different configuration setups in the [Toolkit Administration](https://www.youtube.com/watch?v=7qZfy7KXXX0&list=PLEOzU2tEw33r4yfX7_WD7anyKrsDpQY2d&index=2) presentation.
+For this tutorial, we'll be using a centralized configuration. The **Distributed Setup** option is an alternative that provides a different set of benefits, and may be the preferred choice for studios without fast shared storage. You can learn more about the pros and cons of different configuration setups in the [Toolkit Administration](https://www.youtube.com/watch?v=7qZfy7KXXX0&list=PLEOzU2tEw33r4yfX7_WD7anyKrsDpQY2d&index=2) presentation.
Unlike the storages, which are site-wide, the configuration will be project specific, and so the directory you choose here will be used directly to store your configuration.
{% include figure src="./images/tutorial/wizard_07.png" caption="Make a note of the configuration path you select for the current operating system." %}
-The folder you select on the screen above is where your configuration will be installed. You will explore and modify the contents of the configuration in this folder throughout this tutorial.
+The folder you select on the screen above is where your configuration will be installed. You will explore and modify the contents of the configuration in this folder throughout this tutorial.
When you click **Run Setup** on the above screen, Desktop will begin to download and install all of the required components of your configuration. The installation process could take several minutes to complete. Once complete, you will have a local copy of the entire project configuration that you will modify in the following steps.
-The configuration location you specified during the Desktop installation tutorial is recorded in {% include product %} in the Pipeline Configurations page for your project.
+The configuration location you specified during the Desktop installation tutorial is recorded in {% include product %} in the Pipeline Configurations page for your project.
{% include figure src="./images/tutorial/image_10.png" caption="The Pipeline Configuration entity in ShotGrid" %}
@@ -133,7 +133,7 @@ Before beginning the process of building your simple pipeline, you need to under
{% include figure src="./images/tutorial/image_11.png" %}
-### Project Schema
+### Project Schema
The simple pipeline you will build in this tutorial uses the project schema provided by the Default configuration. You can browse the **`config/core/schema`** folder to get a feel for the structure that will be created as Toolkit Apps write files to disk. For additional information about configuring the project directory structure, see the [File System Configuration Reference](https://support.shotgunsoftware.com/hc/en-us/articles/219039868) documentation.
@@ -143,11 +143,11 @@ This tutorial also uses the templates defined in the Default pipeline configurat
### Hooks
-Much of this tutorial will involve modifying App hooks in order to customize the artist workflows. Before diving into that customization, you should have a basic understanding of what hooks are, how they work, and where they live. Read through the Hooks section of the [Administration](https://support.shotgunsoftware.com/hc/en-us/articles/219033178#Hooks) and [Configuration](https://support.shotgunsoftware.com/hc/en-us/articles/219033178#Hooks) documentation.
+Much of this tutorial will involve modifying App hooks in order to customize the artist workflows. Before diving into that customization, you should have a basic understanding of what hooks are, how they work, and where they live. Read through the Hooks section of the [Administration](https://support.shotgunsoftware.com/hc/en-us/articles/219033178#Hooks) and [Configuration](https://support.shotgunsoftware.com/hc/en-us/articles/219033178#Hooks) documentation.
As you progress through the tutorial, you will be asked to "take over" a hook defined by one of the Toolkit Apps. The process of taking over an app hook is straightforward. Each time you're asked to do so, simply follow these steps:
-1. **Locate the app** containing the hook you want to override in your configuration's install folder. Find the **`hooks`** subdirectory for that app and locate the hook file you want to override.
+1. **Locate the app** containing the hook you want to override in your configuration's install folder. Find the **`hooks`** subdirectory for that app and locate the hook file you want to override.
2. **Copy the hook** (renaming it if necessary) into your configuration's top-level **`hooks`** directory.
@@ -157,7 +157,7 @@ Once the file is in your configuration's **`hooks`** folder, you will be ready t
# Building the Pipeline
-At this point you should be ready to begin building a pipeline. You have a project set up in {% include product %}, you can launch Maya & Nuke via Desktop, and you've taken control of the project's configuration. You also have a basic understanding of the structure of the config and are ready to begin fleshing out the artist workflow.
+At this point you should be ready to begin building a pipeline. You have a project set up in {% include product %}, you can launch Maya & Nuke via Desktop, and you've taken control of the project's configuration. You also have a basic understanding of the structure of the config and are ready to begin fleshing out the artist workflow.
The following sections will walk through each step of the pipeline, highlighting the features that are available out-of-the-box and walking you through the process of customizing the {% include product %} integrations. By the end of these sections, you will have a simple, fully functional, end-to-end production pipeline. You will also get a feel for the steps artists will take as they work on production.
@@ -165,17 +165,17 @@ The following sections will walk through each step of the pipeline, highlighting
## Modeling Workflow
-The first step in the simple pipeline is Modeling. In this section you will create the first iteration of the Teapot Asset in your project. You will save it to disk in your project's folder structure and then publish it.
+The first step in the simple pipeline is Modeling. In this section you will create the first iteration of the Teapot Asset in your project. You will save it to disk in your project's folder structure and then publish it.
-First, launch Maya from {% include product %} Desktop.
+First, launch Maya from {% include product %} Desktop.
-Once Maya has full loaded, you will see the File Open Dialog appear. This dialog allows you to browse existing Maya files within the project. It also allows you to create new files that the {% include product %} integrations will be aware of.
+Once Maya has fully loaded, you will see the File Open Dialog appear. This dialog allows you to browse existing Maya files within the project. It also allows you to create new files that the {% include product %} integrations will be aware of.
-Select the Assets tab and drill down into the Teapot's modeling task. Since there are no artist work files for this task yet, click the **+ New File** button.
+Select the Assets tab and drill down into the Teapot's modeling task. Since there are no artist work files for this task yet, click the **+ New File** button.
{% include figure src="./images/tutorial/image_13.png" %}
-Clicking this button will create a new, empty Maya session and set your current working context to the Teapot Asset's Model task.
+Clicking this button will create a new, empty Maya session and set your current working context to the Teapot Asset's Model task.
{%include info title="Note" content="At any time during this tutorial you can launch the ShotGrid Panel via the ShotGrid menu in Maya or Nuke. This panel provides a view into your project data without leaving your DCC. It will show you your current working context and any recent activity within that context. You can also add notes for feedback directly into the panel. See the [ShotGrid Panel documentation](https://support.shotgunsoftware.com/hc/en-us/articles/115000068574-Integrations-user-guide#The%20Shotgun%20Panel) for more info." %}
@@ -195,15 +195,15 @@ The tokenized fields, **`{name}`**, **`{version}`**, and **`{maya_extension}`**
**`assets/{sg_asset_type}/{Asset}/{Step}`**
-The tokenized fields here can be automatically inferred by the Toolkit platform, given the current working context that you set when you created the new file above.
+The tokenized fields here can be automatically inferred by the Toolkit platform, given the current working context that you set when you created the new file above.
Also notice the preview of the file name and path to be written at the bottom of the dialog. Note the primary storage and project folder you defined while taking over the project configuration make up the root of the template path.
-Click the **Save** button to save the teapot model.
+Click the **Save** button to save the teapot model.
-An important thing to note at this point is that the steps you just completed will be the same steps artists take when opening and saving workfiles throughout the pipeline. The File Open and File Save dialogs are part of Workfiles App. This "multi" app runs in all of the DCCs supported by the {% include product %} integrations and provides a consistent workflow for all artists.
+An important thing to note at this point is that the steps you just completed will be the same steps artists take when opening and saving workfiles throughout the pipeline. The File Open and File Save dialogs are part of the Workfiles App. This "multi" app runs in all of the DCCs supported by the {% include product %} integrations and provides a consistent workflow for all artists.
-The next step is to make some changes to your teapot. Make sure the lid geometry is separate from the rest of the model so that it can be rigged later on.
+The next step is to make some changes to your teapot. Make sure the lid geometry is separate from the rest of the model so that it can be rigged later on.
{% include figure src="./images/tutorial/image_16.png" %}
@@ -211,21 +211,21 @@ Once you're satisfied with your work, run the **{% include product %} > File Sav
{% include figure src="./images/tutorial/image_17.png" %}
-Once you have saved the Teapot model to version 2, you are ready for the last step in this section of the tutorial.
+Once you have saved the Teapot model to version 2, you are ready for the last step in this section of the tutorial.
-Now that your Teapot model is ready, you need to publish it so that it can be surfaced and rigged. To publish, click the **{% include product %} > Publish…** menu action. You will be presented with the Publish App dialog.
+Now that your Teapot model is ready, you need to publish it so that it can be surfaced and rigged. To publish, click the **{% include product %} > Publish…** menu action. You will be presented with the Publish App dialog.
{% include figure src="./images/tutorial/image_18.png" %}
-The dialog shows a tree of items representing what will be published. The tree includes some entries that represent the items to be published and some entries represent the actions that will be performed during the publish operation.
+The dialog shows a tree of items representing what will be published. The tree includes some entries that represent the items to be published and others that represent the actions that will be performed during the publish operation.
-On the left side of the dialog you will see an item representing the current Maya session. Underneath it, you will see a **Publish to ShotGrid** child action. An additional item representing **All Session Geometry** is shown as a child item of the current session. It also has a **Publish to ShotGrid** child action.
+On the left side of the dialog you will see an item representing the current Maya session. Underneath it, you will see a **Publish to ShotGrid** child action. An additional item representing **All Session Geometry** is shown as a child item of the current session. It also has a **Publish to ShotGrid** child action.
{% include info title="Note" content="If the **All Session Geometry** item doesn't show up, ensure that the [Alembic export plugin is enabled](https://support.shotgunsoftware.com/hc/en-us/articles/219039928-Publishing-Alembic-From-Maya#Before%20You%20Begin) in Maya." %}
-Explore the Publish App by clicking on the items on the left side of the tree. You'll notice that the items to be acted upon, when selected, allow you to enter a description of what is being published. You can also take a screenshot to be associated with the item by clicking the camera icon on the right.
+Explore the Publish App by clicking on the items on the left side of the tree. You'll notice that the items to be acted upon, when selected, allow you to enter a description of what is being published. You can also take a screenshot to be associated with the item by clicking the camera icon on the right.
-When you are ready, click the **Publish** button in the bottom right corner to publish the current work file and the teapot geometry. Once complete, you can browse to the Teapot Asset in {% include product %} to verify that the publish completed successfully.
+When you are ready, click the **Publish** button in the bottom right corner to publish the current work file and the teapot geometry. Once complete, you can browse to the Teapot Asset in {% include product %} to verify that the publish completed successfully.
{% include figure src="./images/tutorial/image_19.png" %}
@@ -237,23 +237,23 @@ Like the work file created when using the File Save dialog, the output paths of
**`@asset_root/publish/maya/{name}.v{version}.{maya_extension}`**
-This template is very similar to the work file template by default, the only difference being the **`publish`** folder.
+By default, this template is very similar to the work file template; the only difference is the **`publish`** folder.
**Asset publish:**
**`@asset_root/publish/caches/{name}.v{version}.abc`**
-This template is similar to the maya session publish template, but the file is written to a **`caches`** folder.
+This template is similar to the Maya session publish template, but the file is written to a **`caches`** folder.
Unlike the File Save dialog, when publishing you don't have to supply the name, version, or file extension values. This is because, by default, the publisher pulls these values from the work file path. Under the hood, it extracts these values using the work template and then applies them to the publish templates. This is an important concept in the Toolkit platform: templates connect the output of one pipeline step to the input of another. You will look at this in more depth in subsequent sections.
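The field extraction and re-application described above can be sketched in plain Python. This is a simplified, standalone approximation of what Toolkit's template objects do for you (the real API lives in `sgtk`; the regex and template strings below are hypothetical stand-ins for the tutorial's `@asset_root` templates):

```python
import re

# Hypothetical simplified templates; real Toolkit templates are defined
# in config/core/templates.yml and resolved by the sgtk core API.
WORK_TEMPLATE = re.compile(r"work/maya/(?P<name>\w+)\.v(?P<version>\d+)\.(?P<ext>\w+)$")
PUBLISH_TEMPLATE = "publish/maya/{name}.v{version}.{ext}"

def work_to_publish(work_path):
    """Extract {name}/{version}/{ext} fields from a work file path and
    apply them to the publish template."""
    match = WORK_TEMPLATE.search(work_path)
    if not match:
        raise ValueError("path does not match the work template: %s" % work_path)
    fields = match.groupdict()  # e.g. {'name': 'teapot', 'version': '002', 'ext': 'ma'}
    return PUBLISH_TEMPLATE.format(**fields)

print(work_to_publish("assets/Teapot/model/work/maya/teapot.v002.ma"))
# publish/maya/teapot.v002.ma
```

The key idea is that because the same field names appear in both templates, the output path of one step can be derived entirely from the input path of the previous one.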
-Browse to the files on disk to ensure they've been created in the correct location.
+Browse to the files on disk to ensure they've been created in the correct location.
Congratulations! You have successfully created the first published iteration of the Teapot. See if you can use what you've learned to publish a model of a table from the Table prop's modeling task. The result should look something like this:
{% include figure src="./images/tutorial/image_20.png" %}
-Next up, the surfacing workflow.
+Next up, the surfacing workflow.
## Surfacing Workflow
@@ -263,25 +263,25 @@ Start by launching Maya from Desktop. If you still have Maya open after working
{% include figure src="./images/tutorial/image_21.png" width="450px" %}
-You are now working in the Teapot's surfacing task. An easy way to verify that you are in the right production context is to check the first entry in the {% include product %} menu.
+You are now working in the Teapot's surfacing task. An easy way to verify that you are in the right production context is to check the first entry in the {% include product %} menu.
-{% include figure src="./images/tutorial/image_22.png" %}
+{% include figure src="./images/tutorial/image_22.png" %}
Next you need to load the teapot model into your new surfacing work file. To do this, launch the Loader app via the **{% include product %} > Load…** menu item in Maya.
{% include figure src="./images/tutorial/image_23.png" %}
-The layout of the Loader app is similar to the Workfiles app, but now you are browsing for published files to load rather than work files to open.
+The layout of the Loader app is similar to the Workfiles app, but now you are browsing for published files to load rather than work files to open.
-In the Assets tab, browse to the Teapot character to show the teapot publishes you created in the previous section. You should see a Maya Scene and an Alembic Cache publish. Select the Alembic Cache publish to show details about it on the right side of the dialog. Next, click the **Create Reference** item in the Actions menu of the Alembic Cache publish. The loader will remain open by default to allow additional actions to be performed, but you can close it to continue. You should see in Maya that a reference has been created pointing to the Teapot publish from the modeling task.
+In the Assets tab, browse to the Teapot character to show the teapot publishes you created in the previous section. You should see a Maya Scene and an Alembic Cache publish. Select the Alembic Cache publish to show details about it on the right side of the dialog. Next, click the **Create Reference** item in the Actions menu of the Alembic Cache publish. The loader will remain open by default to allow additional actions to be performed, but you can close it to continue. You should see in Maya that a reference has been created pointing to the Teapot publish from the modeling task.
{% include figure src="./images/tutorial/image_24.png" %}
-Next, add a simple procedural shader to the teapot.
+Next, add a simple procedural shader to the teapot.
{% include figure src="./images/tutorial/image_25.png" %}
-Shader management can be a time consuming and complex task when building a pipeline. It is often very specific to a studio. It is for these reasons that the shipped Maya integration does not handle shader or texture management out-of-the-box.
+Shader management can be a time-consuming and complex task when building a pipeline, and it is often very specific to a studio. For these reasons, the shipped Maya integration does not handle shader or texture management out of the box.
Use the **{% include product %} > File Save…** menu action to save the current session before continuing.
@@ -289,7 +289,6 @@ Use the **{% include product %} > File Save…** menu action to save the current
For the purposes of this simple pipeline, you will customize the Publisher app to export Maya shader networks as additional publish items from the surfacing step. Later in the tutorial, you will put together a quick and dirty solution that allows the shaders to be reconnected to the Alembic geometry caches when referenced downstream.
-
{% include info title="Note" content="The customization you'll be adding is, admittedly, very simple and fragile. A more robust solution might take into account alternate representations of a surfaced character as well as the asset management side of using external images as texture maps. This example presents only a starting point for building a real-world solution." %}
{% include info title="Note" content="You can see the full details of how to write publisher plugins [here](https://developer.shotgridsoftware.com/tk-multi-publish2/)." %}
@@ -304,23 +303,23 @@ This file defines how the Publish app will be used within all of the artist envi
{% include figure src="./images/tutorial/image_26.png" %}
-The collector setting defines the hook where the publisher's collection logic lives. By default, the value is:
+The collector setting defines the hook where the publisher's collection logic lives. By default, the value is:
**`collector: "{self}/collector.py:{engine}/tk-multi-publish2/basic/collector.py"`**
-This definition includes two files. When multiple files are listed in a hook setting, it implies inheritance. The first file contains the **`{self}`** token which will evaluate to the installed Publish app's hooks folder. The second file contains the **`{engine}`** token which will evaluate to the current engine's (in this case the installed Maya engine's) hooks folder. To summarize, this value says the Maya-specific collector inherits the Publish app's collector. This is a common pattern for Publisher configuration since the app's collector hook has logic that is useful regardless of the DCC that is running. The DCC-specific logic inherits from that base logic and extends it to collect items that are specific to the current session.
+This definition includes two files. When multiple files are listed in a hook setting, it implies inheritance. The first file contains the **`{self}`** token, which evaluates to the installed Publish app's hooks folder. The second file contains the **`{engine}`** token, which evaluates to the current engine's (in this case the installed Maya engine's) hooks folder. To summarize, this value says the Maya-specific collector inherits from the Publish app's collector. This is a common pattern for Publisher configuration, since the app's collector hook has logic that is useful regardless of which DCC is running. The DCC-specific logic inherits from that base logic and extends it to collect items that are specific to the current session.
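Conceptually, the token expansion works like the sketch below. This is not Toolkit's actual resolution code (that lives in the core API, and real settings may also use `{config}` for your project configuration's hooks folder); the folder paths are hypothetical examples:

```python
def resolve_hook_paths(setting, self_hooks, engine_hooks, config_hooks):
    """Expand {self}/{engine}/{config} tokens in a colon-separated hook
    setting. Later entries inherit from (and can override) earlier ones.

    A sketch only: real settings on Windows need smarter splitting than
    a bare colon, which Toolkit handles for you.
    """
    tokens = {
        "{self}": self_hooks,      # the app's own hooks folder
        "{engine}": engine_hooks,  # the current engine's hooks folder
        "{config}": config_hooks,  # the project configuration's hooks folder
    }
    paths = []
    for entry in setting.split(":"):
        for token, folder in tokens.items():
            entry = entry.replace(token, folder)
        paths.append(entry)
    return paths

setting = "{self}/collector.py:{engine}/tk-multi-publish2/basic/collector.py"
print(resolve_hook_paths(setting, "install/app/hooks", "install/tk-maya/hooks", "config/hooks"))
# ['install/app/hooks/collector.py', 'install/tk-maya/hooks/tk-multi-publish2/basic/collector.py']
```

When you later point the setting at `{config}`, the second entry simply resolves into your project configuration instead of the engine install, which is exactly the takeover you perform below.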
{% include info title="Note" content="We're only changing the collector setting for the Asset step environment, so our modifications won't be seen by artists working in other contexts, like Shot steps. They will continue to use the shipped, default Maya collector." %}
-In the **Configuration** section you learned how to take over a hook. Begin the customization process by taking over the Maya engine's collector hook in your configuration.
+In the **Configuration** section you learned how to take over a hook. Begin the customization process by taking over the Maya engine's collector hook in your configuration.
{% include figure src="./images/tutorial/image_27.png" %}
-The image above shows how to do this. First, create a folder structure in your project configuration's **hooks** folder. This will provide some namespacing to the collector plugin since you may override the same hook for other DCCs later on. Next, copy the Maya engine's collector hook from the install folder into your new hook folder structure. You should now have a copy of the Maya collector in your configuration with the path:
+The image above shows how to do this. First, create a folder structure in your project configuration's **hooks** folder. This will provide some namespacing to the collector plugin since you may override the same hook for other DCCs later on. Next, copy the Maya engine's collector hook from the install folder into your new hook folder structure. You should now have a copy of the Maya collector in your configuration with the path:
**`config/hooks/tk-multi-publish2/maya/collector.py`**
-Next, update the publish2 settings file to point to your new hook location. Your collector setting should now have this value:
+Next, update the publish2 settings file to point to your new hook location. Your collector setting should now have this value:
**`collector: "{self}/collector.py:{config}/tk-multi-publish2/maya/collector.py"`**
@@ -332,7 +331,7 @@ Now you need to open up your copy of the collector in your preferred IDE or text
**`self._collect_meshes(item)`**
-This is a new method that you will add to collect any meshes found in the current session. The method will create mesh items that a shader publish plugin (that you'll create later) can act upon. The item being passed in is the session item that will be the parent for our mesh items.
+This is a new method that you will add to collect any meshes found in the current session. The method will create mesh items that a shader publish plugin (that you'll create later) can act upon. The item being passed in is the session item that will be the parent for our mesh items.
{% include info title="Note" content="This is a very directed approach to modifying existing publish plugins. For a deeper dive into the structure of the publisher and all of its moving parts, please [see the developer docs](http://developer.shotgridsoftware.com/tk-multi-publish2/)." %}
@@ -376,7 +375,7 @@ Now add the new method definition below to the bottom of the file:
"Mesh",
object
)
-
+
# set the icon for the item
mesh_item.set_icon_from_path(icon_path)
@@ -385,7 +384,7 @@ Now add the new method definition below to the bottom of the file:
mesh_item.properties["object"] = object
```
-The code is commented and should give you an idea of what is being done. The main point is that you've now added logic to collect mesh items for any top-level meshes in the current session. If you were to execute the publisher at this point however, you would not see any mesh items in the item tree. This is because there are no publish plugins defined to act on them. Next, you'll write a new shader publish plugin that will attach to these mesh items and handle publishing them for use downstream.
+The code is commented and should give you an idea of what is being done. The main point is that you've now added logic to collect mesh items for any top-level meshes in the current session. If you were to execute the publisher at this point, however, you would not see any mesh items in the item tree, because there are no publish plugins defined to act on them. Next, you'll write a new shader publish plugin that will attach to these mesh items and handle publishing them for use downstream.
{% include info title="Note" content="You probably saw the call to set an icon for the mesh item in the code above. For this to work, you will need to add an icon to your configuration at the specified path:" %}
@@ -424,8 +423,7 @@ The last step before being able to publish shaders is to add the template and co
return plugin_settings
```
-
-This method defines the configuration interface for the plugin. A **"Publish Template"** setting is required to tell the plugin where to write the shader networks to disk. Add the new publish plugin to the publisher configuration and include the template setting. This is the same configuration block you modified before when taking over the collector. It is defined in this file:
+This method defines the configuration interface for the plugin. A **"Publish Template"** setting is required to tell the plugin where to write the shader networks to disk. Add the new publish plugin to the publisher configuration and include the template setting. This is the same configuration block you modified before when taking over the collector. It is defined in this file:
**`env/includes/settings/tk-multi-publish2.yml`**
@@ -441,13 +439,13 @@ Find the section where asset related Maya templates are defined and add the new
{% include figure src="./images/tutorial/image_29.png" %}
-That should be everything. You have overridden the Publish app's collector hook to find meshes to publish shaders for. You have implemented a new publish plugin to attach to the collected shader items, and you have defined and configured a new publish template where the shader networks will be written to disk.
+That should be everything. You have overridden the Publish app's collector hook to find meshes to publish shaders for. You have implemented a new publish plugin to attach to the collected shader items, and you have defined and configured a new publish template where the shader networks will be written to disk.
{% include info title="Note" content="If you closed Maya while making the customizations to your configuration, do not worry. You can simply launch Maya again and use the File Open dialog to open your surfacing work file. You can skip the reloading step below." %}
##### Reloading the {% include product %} Integrations
-In order to try out your customizations, you'll need to reload the integrations in your Maya session. To do this, click the **{% include product %} > [Task Name] > Work Area Info…** menu action.
+In order to try out your customizations, you'll need to reload the integrations in your Maya session. To do this, click the **{% include product %} > [Task Name] > Work Area Info…** menu action.
{% include figure src="./images/tutorial/image_30.png" %}
@@ -463,12 +461,11 @@ Now it is time to see the results of your changes to the project configuration.
Enter a description of your work and capture a thumbnail of your surfaced Teapot to associate with the published files. Finally, click publish to export the Teapot shaders to disk and register the file as a publish in {% include product %}. When finished, notice that the session publish plugin has automatically saved your work file to the next available version. This is the default behavior within all of the DCCs supported by {% include product %} integrations.
-
You can now browse to the Teapot asset in {% include product %} to verify that everything worked as expected.
{% include figure src="./images/tutorial/image_33.png" %}
-Congratulations! You have successfully customized your pipeline and published shaders for the Teapot. See if you can use what you've learned to publish shaders from the Table prop's surfacing task. The result should look something like this:
+Congratulations! You have successfully customized your pipeline and published shaders for the Teapot. See if you can use what you've learned to publish shaders from the Table prop's surfacing task. The result should look something like this:
{% include figure src="./images/tutorial/image_34.png" %}
@@ -478,21 +475,21 @@ Next up, the rigging workflow.
At this point, you should feel pretty comfortable opening (or creating), saving, and publishing workfiles using the Workfile and Publish apps provided by {% include product %}. You've also had a chance to use the Loader app to load a publish from upstream. Use what you've learned to complete the following tasks:
-* Launch Maya from {% include product %} Desktop
+- Launch Maya from {% include product %} Desktop
-* Create a new workfile in the Teapot asset's rigging step
+- Create a new workfile in the Teapot asset's rigging step
-* Load (reference) Teapot alembic cache publish from the modeling step
+- Load (reference) Teapot alembic cache publish from the modeling step
-* Rig the teapot's lid to open and close (keep it simple)
+- Rig the teapot's lid to open and close (keep it simple)
-* Save and publish the Teapot rig
+- Save and publish the Teapot rig
You should end up with something like this in {% include product %}:
{% include figure src="./images/tutorial/image_35.png" %}
-Next, let's see how artists handle upstream changes in their workflow. Open up the modeling work file and make some changes to the teapot model. Then publish the updated work. The result should be something like this:
+Next, let's see how artists handle upstream changes in their workflow. Open up the modeling work file and make some changes to the teapot model. Then publish the updated work. The result should be something like this:
{% include figure src="./images/tutorial/image_36.png" %}
@@ -500,39 +497,39 @@ Open the work file in the Teapot's rigging step again (via **{% include product
{% include figure src="./images/tutorial/image_37.png" width="400px" %}
-For each reference, the app shows you one of two indicators -- a green check to show that the referenced publish is the latest version, or a red "x" to indicate that there is a newer publish available. In this case, we can see that there is a newer publish available.
+For each reference, the app shows you one of two indicators: a green check to show that the referenced publish is the latest version, or a red "x" to indicate that a newer publish is available. In this case, we can see that there is a newer publish available.
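The check behind those indicators can be sketched as a simple version comparison. This is a minimal, Maya-free illustration (the real app resolves versions through Toolkit templates and {% include product %} queries; the path convention here just follows the tutorial's publish template):

```python
import re

def is_up_to_date(referenced_path, published_versions):
    """Return True if the reference points at the latest known publish.

    Extracts the .v### token from the referenced path and compares it
    against the highest published version number.
    """
    match = re.search(r"\.v(\d+)\.", referenced_path)
    if not match:
        return False  # no version token: treat as out of date
    return int(match.group(1)) >= max(published_versions)

print(is_up_to_date("publish/caches/teapot.v001.abc", [1, 2]))  # False -> red "x"
print(is_up_to_date("publish/caches/teapot.v002.abc", [1, 2]))  # True  -> green check
```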
Now select the referenced Teapot alembic cache item (or click the **Select All Red** button at the bottom), then click **Update Selected**.
-The app will update the Maya reference to the latest iteration of the Teapot alembic cache. You should now see your new model in the file.
+The app will update the Maya reference to the latest iteration of the Teapot alembic cache. You should now see your new model in the file.
{% include figure src="./images/tutorial/image_40.png" width="400px" %}
-Make any adjustments to your rigging setup that you need to account for the new model and then publish your changes.
+Make any adjustments to your rigging setup that you need to account for the new model and then publish your changes.
In the following sections, you'll be working in a shot context. Next up, shot layout.
## Layout Workflow
-In this section, you will begin working in the Shot you created for your project. You will load the assets created in the previous sections and block out the shot. You will then customize the publisher again, this time to publish the shot camera.
+In this section, you will begin working in the Shot you created for your project. You will load the assets created in the previous sections and block out the shot. You will then customize the publisher again, this time to publish the shot camera.
Begin by using what you learned in the previous sections to complete the following tasks:
-* Launch Maya from {% include product %} Desktop
+- Launch Maya from {% include product %} Desktop
-* Create a new workfile in your Shot's layout step (Hint: use the Shots tab in the Loader)
+- Create a new workfile in your Shot's layout step (Hint: use the Shots tab in the Loader)
-* Load (reference) the Teapot publish from the Teapot's rigging step
+- Load (reference) the Teapot publish from the Teapot's rigging step
-* Load (reference) the Table publish from the Table's model step
+- Load (reference) the Table publish from the Table's model step
-Now block your simple scene with the Teapot on the Table. Add a camera to your scene called **camMain** and animate a few frames to create your shot's camera move.
+Now block your simple scene with the Teapot on the Table. Add a camera to your scene called **camMain** and animate a few frames to create your shot's camera move.
{% include figure src="./images/tutorial/image_41.gif" %}
-Once you are happy with your shot layout, save the file via the **{% include product %} > File Save…** menu action. If you were to go ahead and publish at this point, you would only see the entire maya session as an available item to publish.
+Once you are happy with your shot layout, save the file via the **{% include product %} > File Save…** menu action. If you were to go ahead and publish at this point, you would only see the entire Maya session as an available item to publish.
-An easy customization to add, and one that provides a lot of flexibility to a pipeline, is the ability to publish stand-alone cameras to a file format that is easy to import into other packages. This makes it possible to generate the camera once, typically in layout, and then have all other pipeline steps, such as animation, lighting, and compositing, consume it directly.
+An easy customization to add, and one that provides a lot of flexibility to a pipeline, is the ability to publish stand-alone cameras to a file format that is easy to import into other packages. This makes it possible to generate the camera once, typically in layout, and then have all other pipeline steps, such as animation, lighting, and compositing, consume it directly.
### Collecting cameras
@@ -540,7 +537,7 @@ As with shader publishing, the first step is to customize the collector hook. Yo
{% include figure src="./images/tutorial/image_42.png" %}
-Now, when working in a task within a Shot context, your custom collector logic will run. The next step is to add the custom camera collection logic.
+Now, when working in a task within a Shot context, your custom collector logic will run. The next step is to add the custom camera collection logic.
Open your custom collector hook and add the following method call at the bottom of the **`process_current_session`** method where you added the call to collect meshes in the surfacing section:
@@ -610,7 +607,7 @@ The next step is to connect the newly collected mesh items to a publish plugin t
### Camera publish configuration
-Finally, you need to update the Publish app's configuration for the Shot steps. Edit the settings file to add your new plugin.
+Finally, you need to update the Publish app's configuration for the Shot steps. Edit the settings file to add your new plugin.
**`env/includes/settings/tk-multi-publish2.yml`**
@@ -618,13 +615,13 @@ Your configuration should look like this now:
{% include figure src="./images/tutorial/image_43.png" %}
-You'll notice the two settings added to the file as defined by the **`settings`** method of the new plugin. As with the shader plugin, there is a **Publish Template** setting which defines where the camera files will be written. The Cameras setting is a list of camera strings that drive which cameras the plugin should act on. The expectation is that there is some type of camera naming convention and this setting prevents the user from being presented with publish items for cameras that don't match the convention. In the image above, only the **`camMain`** camera will be presented for publishing. The implementation of the plugin you added will also work with wildcard patterns like **`cam*`**.
+You'll notice the two settings added to the file, as defined by the **`settings`** method of the new plugin. As with the shader plugin, there is a **Publish Template** setting which defines where the camera files will be written. The **Cameras** setting is a list of camera name strings that determines which cameras the plugin should act on. The expectation is that there is some type of camera naming convention, and this setting prevents the user from being presented with publish items for cameras that don't match the convention. In the image above, only the **`camMain`** camera will be presented for publishing. The implementation of the plugin you added will also work with wildcard patterns like **`cam*`**.
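The wildcard matching described above maps naturally onto Python's standard `fnmatch` module, which supports shell-style patterns like `cam*`. A minimal sketch (the camera names are examples, not taken from your scene):

```python
import fnmatch

def match_cameras(session_cameras, patterns):
    """Return the session cameras matching any configured pattern."""
    return [
        cam for cam in session_cameras
        if any(fnmatch.fnmatch(cam, pattern) for pattern in patterns)
    ]

cameras = ["camMain", "camWide", "persp", "top"]
print(match_cameras(cameras, ["camMain"]))  # ['camMain']
print(match_cameras(cameras, ["cam*"]))     # ['camMain', 'camWide']
```

Cameras that don't match any pattern (like Maya's default `persp` and `top`) are simply never collected, so the artist never sees them in the publish tree.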
The last step before testing your changes is to add the definition for the new camera publish template. Edit the **`config/core/templates.yml`** file and add the template definition to the maya shot template section:
{% include figure src="./images/tutorial/image_44.png" %}
-At this point, you should be ready to publish your camera with the new plugin. Use the **Work Area Info** app to reload the integrations, then launch the publisher.
+At this point, you should be ready to publish your camera with the new plugin. Use the **Work Area Info** app to reload the integrations, then launch the publisher.
{% include figure src="./images/tutorial/image_45.png" %}
@@ -634,23 +631,23 @@ As you can see in the image, the new camera item is collected and the publish pl
You should see something like this in {% include product %}:
-{% include figure src="./images/tutorial/image_46.png" %}
+{% include figure src="./images/tutorial/image_46.png" %}
That's it! Next up, animation.
## Animation Workflow
-Up to this point, you've only customized the Publish app in order to write custom file types/contents to disk and share them with other pipeline steps. In this section, you will customize the Loader app's configuration to complete the round trip to make it possible to import/reference custom publishes.
+Up to this point, you've only customized the Publish app in order to write custom file types/contents to disk and share them with other pipeline steps. In this section, you will customize the Loader app's configuration to complete the round trip to make it possible to import/reference custom publishes.
-Use what you've learned in previous sections to complete the following tasks.
+Use what you've learned in previous sections to complete the following tasks.
-* Launch Maya from {% include product %} Desktop
+- Launch Maya from {% include product %} Desktop
-* Create a new workfile in your Shot's animation step
+- Create a new workfile in your Shot's animation step
-* Load (reference) the maya session publish from the Shot's layout step
+- Load (reference) the maya session publish from the Shot's layout step
-{% include info title="Note" content="You'll notice that the camera was included in the layout session publish file. In a robust pipeline, the camera might be explicitly hidden or excluded from the session publish in order to allow the separate camera publish file to be the one true camera definition. Go ahead and delete or hide the camera included by the reference." %}
+{% include info title="Note" content="You'll notice that the camera was included in the layout session publish file. In a robust pipeline, the camera might be explicitly hidden or excluded from the session publish in order to allow the separate camera publish file to be the one true camera definition. Go ahead and delete or hide the camera included by the reference." %}
### Custom camera Loader action
@@ -668,13 +665,13 @@ Your app settings should now look like this:
{% include figure src="./images/tutorial/image_47.png" width="400px" %}
-Now reload the integrations via the **Work Area Info** app to pick up the new setting, then browse to the published camera from layout.
+Now reload the integrations via the **Work Area Info** app to pick up the new setting, then browse to the published camera from layout.
{% include figure src="./images/tutorial/image_48.png" %}
Filter by the new publish type, then create a reference to the camera. Close the Loader, and you should be able to play back the camera motion you created in the previous section with the newly referenced camera.
-Next, animate your Teapot model to do something (keep it simple).
+Next, animate your Teapot model to do something (keep it simple).
{% include figure src="./images/tutorial/image_49.gif" %}
@@ -684,25 +681,25 @@ Next up, lighting.
## Lighting Workflow
-In this section, you will bring together everything you published in the previous sections and render your shot. To do this, you will customize the Loader app to load the published shaders from the Teapot asset's surfacing step.
+In this section, you will bring together everything you published in the previous sections and render your shot. To do this, you will customize the Loader app to load the published shaders from the Teapot asset's surfacing step.
-First, use what you've learned in previous sections to complete the following tasks.
+First, use what you've learned in previous sections to complete the following tasks.
-* Launch Maya from {% include product %} Desktop
+- Launch Maya from {% include product %} Desktop
-* Create a new workfile in your Shot's lighting step
+- Create a new workfile in your Shot's lighting step
-* Load (reference) the maya session publish from the Shot's animation step
+- Load (reference) the maya session publish from the Shot's animation step
-* Load (reference) the camera publish from the Shot's layout step
+- Load (reference) the camera publish from the Shot's layout step
### Custom shader Loader action
-In order to load the shaders you published in the surfacing step, you will need to take over the **`tk-maya-actions.py`** hook mentioned in the previous section. Copy that hook from the install location into your configuration.
+In order to load the shaders you published in the surfacing step, you will need to take over the **`tk-maya-actions.py`** hook mentioned in the previous section. Copy that hook from the install location into your configuration.
{% include figure src="./images/tutorial/image_50.png" %}
-This hook is responsible for generating a list of actions that can be performed for a given publish. The Loader app defines a different version of this hook for each DCC supported by the shipped integrations.
+This hook is responsible for generating a list of actions that can be performed for a given publish. The Loader app defines a different version of this hook for each DCC supported by the shipped integrations.
The shaders published in the surfacing workflow section are just Maya files, so like the exported cameras, they can be referenced by the Loader without changing the existing logic. The only change required is to add new logic to the actions hook to connect shaders to the appropriate mesh after they are referenced into the file.
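The name-matching half of that hookup logic can be sketched as a plain Python function, independent of Maya. The `_shader` naming convention below is purely hypothetical and for illustration; the actual hook method you will add next does the real work, including the `maya.cmds` assignment call.

```python
def match_shaders_to_meshes(shader_names, mesh_names):
    """Pair each shader with the meshes it should be assigned to.

    Hypothetical convention for illustration only: a shader named
    '<base>_shader' targets any mesh whose (namespace-stripped) name
    starts with '<base>'. A production hook would also handle
    namespaces, duplicates, and missing targets.
    """
    hookups = {}
    for shader in shader_names:
        if not shader.endswith("_shader"):
            continue
        base = shader[: -len("_shader")]
        hookups[shader] = [
            mesh for mesh in mesh_names
            if mesh.split(":")[-1].startswith(base)
        ]
    return hookups
```

Once the pairs are known, the assignment itself is a single Maya call per mesh, which is what the `cmds.hyperShade(assign=shader)` line in the hook below performs.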
@@ -740,7 +737,6 @@ Add the following method at the end of the actions hook (outside the class).
cmds.hyperShade(assign=shader)
```
-
Now add the following 2 lines at the end of the **`_create_reference`** method to call the shader hookup logic:
```python
@@ -748,8 +744,7 @@ Now add the following 2 lines at the end of the **`_create_reference`** method t
_hookup_shaders(reference_node)
```
-
-The code runs whenever a new reference is created, so it should assign the shader when referencing new geometry if the shader already exists in the file. Similarly, it should work when referencing the shader and the geometry already exists.
+The code runs whenever a new reference is created, so it should assign the shader when referencing new geometry if the shader already exists in the file. Similarly, it should work when referencing the shader and the geometry already exists.
{% include info title="Note" content="This hookup logic is very brute force and does not properly handle namespaces and other Maya-related subtleties that should be considered when implementing a production-ready pipeline." %}
@@ -759,15 +754,15 @@ Finally, point your shot's loader settings to your new hook by editing this file
While there, also associate the Maya Shader Network publish type with the reference action. Your Loader settings should now look like this:
-{% include figure src="./images/tutorial/image_51.png" %}
+{% include figure src="./images/tutorial/image_51.png" %}
Now reload the integrations via the **Work Area Info** app to pick up the new settings, then browse to the published shaders from surfacing.
-Create a reference to the Teapot shader network publish.
+Create a reference to the Teapot shader network publish.
-{% include figure src="./images/tutorial/image_52.png" %}
+{% include figure src="./images/tutorial/image_52.png" %}
-Now load the Table shader network. If you turn on Hardware Texturing in Maya, your shaders should have been automatically connected to the meshes reference from the animation step.
+Now load the Table shader network. If you turn on Hardware Texturing in Maya, your shaders should have been automatically connected to the meshes referenced from the animation step.
{% include figure src="./images/tutorial/image_53.png" %}
@@ -777,13 +772,13 @@ Now add some lights to your scene (keep it simple).
### Publishing Maya Renders
-Render your shot to disk.
+Render your shot to disk.
{% include figure src="./images/tutorial/image_54_5.gif" %}
{% include info title="Note" content="As you can see, there are issues with the surfacing of both the Teapot and the Table asset. For the purposes of this tutorial, assume these were intentional, artistic choices. If you want to address these issues, you can always load the surfacing work files for these assets and adjust the shaders and re-publish them. If you do, remember to update the references in the lighting work file and re-render. If you go through the steps, you may find that the breakdown app does not reconnect your updated shaders after reloading the reference. Based on your experience modifying the loader to hook up shader references, you should be able to update the breakdown app's scene operations hook to add the required logic. HINT: See the update method in [this file](https://github.com/shotgunsoftware/tk-multi-breakdown/blob/master/hooks/tk-maya_scene_operations.py#L69)." %}
-The shipped {% include product %} integrations will collect image sequences by looking at the render layers defined in the file. Once your render is complete, launch the publisher. You will see the rendered sequence as an item in the tree.
+The shipped {% include product %} integrations will collect image sequences by looking at the render layers defined in the file. Once your render is complete, launch the publisher. You will see the rendered sequence as an item in the tree.
{% include figure src="./images/tutorial/image_55.png" %}
@@ -797,18 +792,17 @@ Next up, compositing!
In this final section of the tutorial, you will be introduced to some of the default integrations provided for Nuke. In addition to the apps you have seen in previous sections, you will learn about the ShotGrid-aware Write node and an app that allows you to quickly send your renders to others for review.
-Start by following these steps to prepare your work file.
-
-* Launch Nuke from {% include product %} Desktop
+Start by following these steps to prepare your work file.
-* Just like in Maya, use the {% include product %} > File Open… menu action to create a new work file in the Shot's compositing step.
+- Launch Nuke from {% include product %} Desktop
+- Just like in Maya, use the {% include product %} > File Open… menu action to create a new work file in the Shot's compositing step.
Load the image sequence you rendered and published in the previous section via the Loader app.
{% include figure src="./images/tutorial/image_57.png" %}
-The action defined for the **`Image`** and **`Rendered Image`** publish types (the type depends on the file extension) is **Create Read Node**. Click this action to create a new **`Read`** node in your nuke session.
+The action defined for the **`Image`** and **`Rendered Image`** publish types (the type depends on the file extension) is **Create Read Node**. Click this action to create a new **`Read`** node in your Nuke session.
Make sure your Nuke Project Settings output format matches your rendered images. Create a Constant color to use as your background and merge it with your Read node. Attach a viewer to see your composite.
@@ -820,7 +814,7 @@ Next, click the {% include product %} logo in the left hand menu in Nuke. Click
{% include figure src="./images/tutorial/image_59.png" width="400px" %}
-The {% include product %} Write Node app provides a layer on top of the built-in Nuke Write node that automatically evaluates the output path based on your current {% include product %} context.
+The {% include product %} Write Node app provides a layer on top of the built-in Nuke Write node that automatically evaluates the output path based on your current {% include product %} context.
{% include figure src="./images/tutorial/image_60.png" %}
@@ -836,7 +830,7 @@ Create a Quick Review node, then click the Upload button to render the input to
{% include figure src="./images/tutorial/image_63.png" %}
-Check the media tab in {% include product %} to see both of the uploaded quicktimes.
+Check the media tab in {% include product %} to see both of the uploaded quicktimes.
{% include figure src="./images/tutorial/image_64.png" %}
@@ -846,7 +840,7 @@ For more information on reviewing media in {% include product %}, see the [offic
Congratulations, you're done! Hopefully this tutorial has given you a starting point for building your own custom pipeline using the {% include product %} integrations. You should have an understanding of how to extend the default integrations to meet the specific needs of your studio.
-Ask questions and learn how other studios are using Toolkit over at the [shotgun-dev Google Group](https://groups.google.com/a/shotgunsoftware.com/forum/#!forum/shotgun-dev). Be sure to subscribe to stay up to date with the latest posts!
+Ask questions and learn how other studios are using Toolkit over at the [shotgun-dev Google Group](https://groups.google.com/a/shotgunsoftware.com/forum/#!forum/shotgun-dev). Be sure to subscribe to stay up to date with the latest posts!
If there are features or workflows that you feel are outside of the default integrations, then you can always write your own apps. [Here is an excellent document](https://support.shotgunsoftware.com/entries/95440137) to help you get started writing your first app.
diff --git a/docs/en/guides/python-3-best-practices.md b/docs/en/guides/python-3-best-practices.md
index 6151ba4f8..102da2a01 100644
--- a/docs/en/guides/python-3-best-practices.md
+++ b/docs/en/guides/python-3-best-practices.md
@@ -7,88 +7,87 @@ lang: en
# Python 3 Porting Best Practices
-
## Why the move to Python 3?
-There are a few compelling reasons to make the leap to Python 3. Perhaps the most dramatic is the Python 2 end of life, which occurred on January 1, 2020[^1]. With the sunsetting of Python 2, all support for Python 2 ceases, meaning that even new security vulnerabilities found in Python 2 will not be addressed.
+There are a few compelling reasons to make the leap to Python 3. Perhaps the most dramatic is the Python 2 end of life, which occurred on January 1, 2020[^1]. With the sunsetting of Python 2, all support for Python 2 ceases, meaning that even new security vulnerabilities found in Python 2 will not be addressed.
-For CY2020, the [VFX reference platform](https://vfxplatform.com/) makes the switch as well, targeting Python version 3.7.x. As a practical matter for many of us, all of this will mean we don't have much choice in when to add support for Python 3 -- as DCCs (digital content creation applications) that we develop for begin to move to Python 3 interpreters, it will become a necessity to support them.
+For CY2020, the [VFX reference platform](https://vfxplatform.com/) makes the switch as well, targeting Python version 3.7.x. As a practical matter for many of us, all of this will mean we don't have much choice in when to add support for Python 3 -- as DCCs (digital content creation applications) that we develop for begin to move to Python 3 interpreters, it will become a necessity to support them.
## Things to Consider Before Starting
-When considering moving to support Python 3, it's good to look at the requirements and application of your codebase to set expectations. Obviously, any host applications your code runs in will help drive this decision. Knowing whether you need to support many different Python interpreter versions and, if so, which ones, will be important information as you decide on the porting process that makes sense for you.
+When considering moving to support Python 3, it's good to look at the requirements and application of your codebase to set expectations. Obviously, any host applications your code runs in will help drive this decision. Knowing whether you need to support many different Python interpreter versions and, if so, which ones, will be important information as you decide on the porting process that makes sense for you.
-Next, take an audit of what libraries your code depends on. If any of these libraries do not have Python 3 compatible versions, you'll need to find an alternative library, or fork the library to provide compatibility yourself. Both of these options could potentially be costly decisions and are important to consider early on. Additionally, even libraries that do offer Python 3 compatible versions may not be drop-in replacements, and some libraries choose to fork for Python 3 support rather than maintain compatibility for both Python 2 and 3 as a single source. We'll discuss this in more depth in the "Porting Options" section below.
+Next, audit the libraries your code depends on. If any of these libraries do not have Python 3 compatible versions, you'll need to find an alternative library, or fork the library to provide compatibility yourself. Both of these options could potentially be costly decisions and are important to consider early on. Additionally, even libraries that do offer Python 3 compatible versions may not be drop-in replacements, and some libraries choose to fork for Python 3 support rather than maintain compatibility for both Python 2 and 3 as a single source. We'll discuss this in more depth in the "Porting Options" section below.
-Finally, it's worth noting that while it is possible to continue to support Python versions older than 2.5 and Python 3 simultaneously[^2], this will make your life much harder. Since Python 2.5 is very old and not used in modern DCC versions, this guide will work under the assumption that Python 2.5 and earlier will not be targeted. If you do need to support older versions of Python, a branching approach as described in the "Porting Options" section below may be your best option.
+Finally, it's worth noting that while it is possible to continue to support Python versions older than 2.5 and Python 3 simultaneously[^2], this will make your life much harder. Since Python 2.5 is very old and not used in modern DCC versions, this guide will work under the assumption that Python 2.5 and earlier will not be targeted. If you do need to support older versions of Python, a branching approach as described in the "Porting Options" section below may be your best option.
## What's Different in Python 3
-Python 3 comes with some slight syntax changes, changes to builtin functions, new features, and small behavior changes. There are [many](https://docs.python.org/3.0/whatsnew/3.0.html#overview-of-syntax-changes) [great](https://portingguide.readthedocs.io/en/latest/) [guides](https://sebastianraschka.com/Articles/2014_python_2_3_key_diff.html) that enumerate these specific changes and provide examples. Rather than dive into specifics here, the goal of this guide will be to describe the porting process from a higher-level perspective, with a few small deep dives where compatibility may be more complicated than just matching syntax.
+Python 3 comes with some slight syntax changes, changes to builtin functions, new features, and small behavior changes. There are [many](https://docs.python.org/3.0/whatsnew/3.0.html#overview-of-syntax-changes) [great](https://portingguide.readthedocs.io/en/latest/) [guides](https://sebastianraschka.com/Articles/2014_python_2_3_key_diff.html) that enumerate these specific changes and provide examples. Rather than dive into specifics here, the goal of this guide will be to describe the porting process from a higher-level perspective, with a few small deep dives where compatibility may be more complicated than just matching syntax.
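A few of those behavior changes can be seen in one short snippet. The code below runs under Python 3 semantics; the comments note what Python 2 did differently.

```python
# print is a function in Python 3 (it was a statement in Python 2)
print("hello")

# "/" is true division in Python 3; for ints it was floor division in Python 2
assert 7 / 2 == 3.5
assert 7 // 2 == 3  # explicit floor division behaves the same in both

# "except ... as" is the only accepted form in Python 3
# (Python 2 also allowed the old "except ValueError, exc" spelling)
try:
    int("not a number")
except ValueError as exc:
    message = str(exc)

assert "invalid literal" in message
```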
## Porting Options
-For most of us, porting our code to only support Python 3 is not yet an option. Many DCCs still require Python 2 support, and this is unlikely to change overnight. This means that in the real world, it will be a necessity to be able to support both Python 2 and 3.
+For most of us, porting our code to only support Python 3 is not yet an option. Many DCCs still require Python 2 support, and this is unlikely to change overnight. This means that in the real world, it will be a necessity to be able to support both Python 2 and 3.
-There are two major approaches to supporting Python 2 and 3 simultaneously. We'll discuss both of them briefly:
+There are two major approaches to supporting Python 2 and 3 simultaneously. We'll discuss both of them briefly:
### Branching
-In this approach, a new Python 3 compatible branch of your code is maintained in parallel with the current (Python 2 compatible) branch. This has the advantage of letting you write cleaner, easier to read Python 3 code, and allows you to fully leverage new features without needing branching logic to maintain Python 2 support. It also means that when the time comes to drop support for Python 2, you'll be left with a cleaner, more modern starting point in your Python 3 branch. The obvious downside here is that maintaining two branches can be unwieldy and mean more work, especially if the Python 3 and Python 2 code starts to diverge as the Python 3 branch can leverage new features that can significantly change how your code looks (e.g. [`asyncio`](https://docs.python.org/3.6/library/asyncio.html).)
+In this approach, a new Python 3 compatible branch of your code is maintained in parallel with the current (Python 2 compatible) branch. This has the advantage of letting you write cleaner, easier-to-read Python 3 code, and allows you to fully leverage new features without needing branching logic to maintain Python 2 support. It also means that when the time comes to drop support for Python 2, you'll be left with a cleaner, more modern starting point in your Python 3 branch. The obvious downside is that maintaining two branches can be unwieldy and mean more work, especially if the Python 3 and Python 2 code starts to diverge, since the Python 3 branch can leverage new features that significantly change how your code looks (e.g. [`asyncio`](https://docs.python.org/3.6/library/asyncio.html)).
### Cross-Compatibility
-In this approach, a single branch is maintained that uses the subset of syntax and builtins that are compatible with both Python 2 and 3. This allows for a graceful transition from Python 2 to 3 without maintaining multiple branches of your code. There are a few popular libraries designed to help with this approach, and it's a commonly-used solution to the problem of transition from Python 2 to 3. In addition to the reduced complexity compared to maintaining multiple branches, this approach also means you don't need to change your code distribution mechanisms or worry about using the correct (Python 2 or 3) version of your code at import time.
+In this approach, a single branch is maintained that uses the subset of syntax and builtins that are compatible with both Python 2 and 3. This allows for a graceful transition from Python 2 to 3 without maintaining multiple branches of your code. There are a few popular libraries designed to help with this approach, and it's a commonly-used solution to the problem of transitioning from Python 2 to 3. In addition to the reduced complexity compared to maintaining multiple branches, this approach also means you don't need to change your code distribution mechanisms or worry about using the correct (Python 2 or 3) version of your code at import time.
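As a small illustration, a function written purely in the shared subset (stdlib only, no helper library) runs unchanged under both interpreters:

```python
from __future__ import print_function  # makes print a function in Python 2 too

import sys


def describe_interpreter():
    """Return a short version string that works in Python 2 and 3."""
    # sys.version_info behaves like a tuple in both versions
    return "Python %d.%d" % (sys.version_info[0], sys.version_info[1])


print(describe_interpreter())
```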
The two most commonly used libraries for this approach are `future` and `six`.
#### `future`
-The future module is probably the most popular choice for Python 2 + 3 compatibility. It backports many Python 3 libraries to Python 2, and aims to allow you to move your codebase to a pure Python 3 syntax. Because it backports modules and works by shadowing builtins, it is slightly more invasive than `six`. Given the variety of DCCs and unknown client code in VFX environments, future may be too invasive and in an environment like this may pose a greater risk of causing problems down the road. For this reason, we will focus on using `six` instead.
+The `future` module is probably the most popular choice for Python 2 + 3 compatibility. It backports many Python 3 libraries to Python 2, and aims to allow you to move your codebase to a pure Python 3 syntax. Because it backports modules and works by shadowing builtins, it is slightly more invasive than `six`. Given the variety of DCCs and unknown client code in VFX environments, `future` may be too invasive and may pose a greater risk of causing problems down the road. For this reason, we will focus on using `six` instead.
#### `six`
-The `six` module does not attempt to backport Python 3 modules, or allow you to write pure Python 3 syntax, but instead unifies renamed modules and changed interfaces inside the `six.moves` namespace. This allows you to update imports and use `six`'s helper functions to write code that is both Python 2 and 3 compatible.
+The `six` module does not attempt to backport Python 3 modules, or allow you to write pure Python 3 syntax, but instead unifies renamed modules and changed interfaces inside the `six.moves` namespace. This allows you to update imports and use `six`'s helper functions to write code that is both Python 2 and 3 compatible.
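For example, the `urllib` family of modules was reorganized in Python 3, and `six.moves` papers over the rename with a single import that resolves correctly in both versions. (The `ImportError` fallback below is only so the snippet also runs where `six` happens not to be installed.)

```python
try:
    # One import location that works under both Python 2 and 3 via six
    from six.moves.urllib.parse import urlparse
except ImportError:
    # Python 3 location, used here only if six is unavailable
    from urllib.parse import urlparse

print(urlparse("https://example.com/a/b?x=1").netloc)
```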
## Testing and Linting
### Black
-The porting process requires an examination of the entire python codebase, and introduces a fair amount of noise in the revision control history. This makes it a good opportunity to take care of any other housekeeping that may have similar impacts. We took this opportunity to apply [`black`](https://black.readthedocs.io/en/stable/) to our code. This is not strictly necessary or directly related to Python 3 compatibility (unless your code is mixing tabs and spaces[^3]), but given the reasons identified above, we decided this was a good opportunity to modernize our code formatting.
+The porting process requires an examination of the entire Python codebase, and introduces a fair amount of noise in the revision control history. This makes it a good opportunity to take care of any other housekeeping that may have similar impacts. We took this opportunity to apply [`black`](https://black.readthedocs.io/en/stable/) to our code. This is not strictly necessary or directly related to Python 3 compatibility (unless your code is mixing tabs and spaces[^3]), but given the reasons identified above, we decided this was a good opportunity to modernize our code formatting.
### Tests
-Test coverage was incredibly valuable during the porting process since it allowed us to quickly find problems that still needed to be addressed, and verify that large sections of code were working as expected without as much manual intervention. In many cases, we found it worthwhile to increase test coverage as part of the porting process to ensure that Python 2/3 specific cases (e.g. unicode handling) were being addressed correctly. This being said, we recognize that in many cases the realities of production mean that test coverage is sparse, and that adding tests to code that has little or no coverage may be too time consuming to be worthwhile as part of a project like adding Python 3 compatibility. For those in this situation, there may still be some value in using coverage measurement tools and some more basic testing code during the porting process, as these tools can provide fast feedback on what code has been covered and what may still need attention.
+Test coverage was incredibly valuable during the porting process since it allowed us to quickly find problems that still needed to be addressed, and verify that large sections of code were working as expected without as much manual intervention. In many cases, we found it worthwhile to increase test coverage as part of the porting process to ensure that Python 2/3 specific cases (e.g. unicode handling) were being addressed correctly. That said, we recognize that in many cases the realities of production mean that test coverage is sparse, and that adding tests to code that has little or no coverage may be too time consuming to be worthwhile as part of a project like adding Python 3 compatibility. For those in this situation, there may still be some value in using coverage measurement tools and some more basic testing code during the porting process, as these tools can provide fast feedback on what code has been covered and what may still need attention.
### Porting Procedure
#### Automated Porting using `modernize`
-[`python-modernize`](https://python-modernize.readthedocs.io/en/latest/) is a tool that can be very useful for automatically generating Python 3 compatible code. `modernize` usually produces runnable code with minimal human intervention, and because of this can be a great tool to get most of the way to Python 3 compatibility very quickly. Of course, as an automated tool it does come with the drawbacks one would expect. It frequently produces less readable and less efficient code (e.g. wrapping all iterables in a `list()` instantiation.) In some cases, modernize can even introduce regressions that might be difficult to spot. There are also some areas where you'll find `modernize` is not much help at all, like when dealing with bytes and text. Since these decisions require a bit more understanding of context, you'll likely have to spend some time manually addressing the handling of strings in your code even if you do rely on `modernize` for the bulk of the compatibility work.
+[`python-modernize`](https://python-modernize.readthedocs.io/en/latest/) is a tool that can be very useful for automatically generating Python 3 compatible code. `modernize` usually produces runnable code with minimal human intervention, and because of this can be a great tool to get most of the way to Python 3 compatibility very quickly. Of course, as an automated tool it does come with the drawbacks one would expect. It frequently produces less readable and less efficient code (e.g. wrapping all iterables in a `list()` instantiation.) In some cases, modernize can even introduce regressions that might be difficult to spot. There are also some areas where you'll find `modernize` is not much help at all, like when dealing with bytes and text. Since these decisions require a bit more understanding of context, you'll likely have to spend some time manually addressing the handling of strings in your code even if you do rely on `modernize` for the bulk of the compatibility work.
-The alternative to using an automated tool like modernize, of course, is to go through code manually to fix incompatibilities. This can be tedious, but in our experience generally produces nicer looking code.
+The alternative to using an automated tool like modernize, of course, is to go through code manually to fix incompatibilities. This can be tedious, but in our experience generally produces nicer looking code.
-For our process we went with a hybrid approach, using `modernize` with a select set of fixers, and doing some of the work manually. We also broke the process into two stages; first doing a pure syntax compatibility and code formatting pass, and then doing a more manual Python 3 port. Our process was as follows:
+For our process we went with a hybrid approach, using `modernize` with a select set of fixers, and doing some of the work manually. We also broke the process into two stages: first a pure syntax-compatibility and code-formatting pass, then a more manual Python 3 port. Our process was as follows:
In a branch:
1. Run modernize with the `except`, `numliterals`, and `print` fixers
- ```python-modernize --no-diffs --nobackups -f except -f numliterals -f print -w .```
-2. Make sure the resulting code is Python 3 syntax compliant by compiling it with Python 3. The goal here is not to have your code work in Python 3, but to ensure that the basic formatting and automatable syntax fixes are in place. If your code does not successfully compile after this step, you’ll need to find the source of the problem and either add additional fixers to the above step, or manually fix the incompatibilities. Ensure that any changes you make manually at this stage are syntax only and will not change the behavior of the code in Python 2.
- ```python3 -m compileall .```
+ `python-modernize --no-diffs --nobackups -f except -f numliterals -f print -w .`
+2. Make sure the resulting code is Python 3 syntax compliant by compiling it with Python 3. The goal here is not to have your code work in Python 3, but to ensure that the basic formatting and automatable syntax fixes are in place. If your code does not successfully compile after this step, you’ll need to find the source of the problem and either add additional fixers to the above step, or manually fix the incompatibilities. Ensure that any changes you make manually at this stage are syntax only and will not change the behavior of the code in Python 2.
+ `python3 -m compileall .`
3. Run `black` on the resulting code
-This branch should not change any behavior or functionality, and should not introduce regressions, so it is considered safe to merge at this point. This helps keep the history easier to read, and means that the Python 3 compatibility branch and master will diverge less during the porting process, making for an easier merge once the work is done.
+This branch should not change any behavior or functionality, and should not introduce regressions, so it is considered safe to merge at this point. This helps keep the history easier to read, and means that the Python 3 compatibility branch and master will diverge less during the porting process, making for an easier merge once the work is done.
In a new branch, the actual Python 3 port can now begin:
-1. Search for method names that may require some work to deal with list/view/iterator differences between Python 2 and 3. In Python 3 `.values()`, `.items()` and `.keys()` return an iterator or view instead of a list, so in cases where these methods are called the code should be able to handle both iterator and list returns, otherwise the result will need to be cast to a list. Similarly, the `filter()` method returned a list in Python 2, but now returns an iterator.
-2. Change calls from `dict.iteritems()` and `dict.itervalues()` to `dict.items()` and `dict.values()` if the returned collection won't be too big. In these cases, the resulting cleaner code at the cost of a slight performance hit in Python 2 is preferable. In cases where the collection might contain thousands of items or more, use `six.iteritems` and `six.itervalues` instead. If `dict.iterkeys()` was used, simply replace the code with something like `for key in dictionary:`, since this will iterate on keys in both Python versions. Watch out that returning an iterator in Python 3 doesn't change the semantics of the code however. If a method used to return `dict.values()`, you'll need to wrap the call inside `list(dict.values())` to ensure the method always returns a list in all versions on Python.
-3. Search for `str`, `basestring`, `unicode`, `open`, `pickle`, `encode`, `decode` since these will be areas of the code that likely require some attention to handling of bytes and strings. We used the coercion helper methods provided by six (e.g. `ensure_string`) where needed. See the sections on `bytes` and `pickle` below.
+1. Search for method names that may require some work to deal with list/view/iterator differences between Python 2 and 3. In Python 3 `.values()`, `.items()` and `.keys()` return an iterator or view instead of a list, so in cases where these methods are called the code should be able to handle both iterator and list returns, otherwise the result will need to be cast to a list. Similarly, the `filter()` builtin returned a list in Python 2, but now returns an iterator.
+2. Change calls from `dict.iteritems()` and `dict.itervalues()` to `dict.items()` and `dict.values()` if the returned collection won't be too big. In these cases, the cleaner resulting code is worth the slight performance hit in Python 2. In cases where the collection might contain thousands of items or more, use `six.iteritems` and `six.itervalues` instead. If `dict.iterkeys()` was used, simply replace the code with something like `for key in dictionary:`, since this will iterate over keys in both Python versions. Watch out, however, that returning an iterator in Python 3 doesn't change the semantics of the code. If a method used to return `dict.values()`, you'll need to wrap the call in `list(dict.values())` to ensure the method always returns a list in all versions of Python.
+3. Search for `str`, `basestring`, `unicode`, `open`, `pickle`, `encode`, `decode` since these will be areas of the code that likely require some attention to handling of bytes and strings. We used the coercion helper methods provided by six (e.g. `ensure_string`) where needed. See the sections on `bytes` and `pickle` below.
4. Unless generating a super long range, `xrange` can be changed to `range` for simplicity; otherwise `six.moves.range` can be used.
-5. After committing the manual changes from above, run a full `python-modernize` and go through the diff manually. Many of the resulting changes will be unwanted, as discussed above, however this is a good way to catch potential problems that were overlooked in the manual porting process.
- ```python-modernize --no-diffs --nobackups -f default . -w && git diff HEAD```
-6. Test the resulting code to find the remaining problems. There are some incompatibilities that don’t have fixers ([this](https://portingguide.readthedocs.io/en/latest/core-obj-misc.html) is a good resource to look at to get an idea of what those changes entail), and it’s easy to overlook text/binary problems during the port process.
+5. After committing the manual changes from above, run a full `python-modernize` and go through the diff manually. Many of the resulting changes will be unwanted, as discussed above; however, this is a good way to catch potential problems that were overlooked in the manual porting process.
+ `python-modernize --no-diffs --nobackups -f default . -w && git diff HEAD`
+6. Test the resulting code to find the remaining problems. There are some incompatibilities that don’t have fixers ([this](https://portingguide.readthedocs.io/en/latest/core-obj-misc.html) is a good resource to look at to get an idea of what those changes entail), and it’s easy to overlook text/binary problems during the port process.
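As a quick illustration of the list/view fixes in steps 1, 2 and 4, a minimal sketch (Python 3 shown; under Python 2, `six.iteritems` would take the place of `.items()` for very large dictionaries; the data and names are invented for the example):

```python
# Illustrative data; the names are made up for the example.
inventory = {"shots": 120, "assets": 45, "versions": 3000}

# .items() is iterable in both versions, so plain loops need no change.
for key, count in inventory.items():
    pass

# A function that used to return dict.values() should wrap the call in
# list() so callers always receive a list, never a view.
def get_counts(data):
    return list(data.values())

# filter() returns an iterator in Python 3 -- cast when a list is needed.
big_counts = list(filter(lambda c: c > 100, get_counts(inventory)))
```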
We chose to use this process because we believe it allowed us to maintain a standard of more readable, efficient code than would have been automatically generated by using `modernize` on its own.
@@ -96,7 +95,7 @@ We chose to use this process because we believe it allowed us to maintain a stan
### Bytes Woes
-Python 3 introduces a strict separation between binary and textual data. This is a long-called-for addition that most see as an improvement, but for Python 2 + 3 compatible code it adds some headaches. Since Python 2 does not enforce this separation, and Python 3 introduces new types to do so, code that deals with data and strings will likely need some attention. For the most part this just means making sure that strings are encoded / decoded properly, for which the `six.ensure_binary` and `six.ensure_text` helper functions are invaluable. See the examples below for common applications of these methods. In some cases, however, this can be more complicated. For an example of this, see the pickle section below.
+Python 3 introduces a strict separation between binary and textual data. This is a long-called-for addition that most see as an improvement, but for Python 2 + 3 compatible code it adds some headaches. Since Python 2 does not enforce this separation, and Python 3 introduces new types to do so, code that deals with data and strings will likely need some attention. For the most part this just means making sure that strings are encoded / decoded properly, for which the `six.ensure_binary` and `six.ensure_text` helper functions are invaluable. See the examples below for common applications of these methods. In some cases, however, this can be more complicated. For an example of this, see the pickle section below.
```python
# base64.encodestring expects str in Python 2, and bytes in Python 3.
@@ -119,7 +118,7 @@ category_type = six.ensure_str(category_type)
### The `pickle` Pickle
-Pickle in Python 3 returns a `bytes` object from `dumps()`, where previously it had returned a `str`. Additionally, the output of `pickle.dumps()` in Python 3 contains `\x00` bytes, which cannot be decoded. This is not a problem if the data is being stored in a file, but if the pickled data is being stored in, for example, an environment variable, this can become problematic. As a workaround, we found that by forcing pickle to use protocol 0, no 0 bytes were included, and the output is once again decodable. This comes at the cost of the slightly less efficient and fewer-featured older protocol.
+Pickle in Python 3 returns a `bytes` object from `dumps()`, where previously it had returned a `str`. Additionally, the output of `pickle.dumps()` in Python 3 contains `\x00` bytes, which cannot be decoded. This is not a problem if the data is being stored in a file, but if the pickled data is being stored in, for example, an environment variable, it can become problematic. As a workaround, we found that forcing pickle to use protocol 0 produced no `\x00` bytes, making the output once again decodable. This comes at the cost of the slightly less efficient and fewer-featured older protocol.
```python
# Dumping data to a pickle string:
@@ -135,26 +134,23 @@ pickled_data = six.ensure_str(cPickle.dumps(data, **DUMP_KWARGS))
LOAD_KWARGS = {"encoding": "bytes"} if six.PY3 else {}
data = cPickle.loads(six.ensure_binary(data), **LOAD_KWARGS)
```
+
### Regex `\W` flag
-In Python 3, regular expression metacharacters match unicode characters where in Python 2 they do not. To reproduce the previous behavior, Python 3 introduces a new `re.ASCII` flag, which does not exist in Python 2. To maintain consistent behavior across Python 2 and 3, we wrapped `re` functions to include this flag across the board in Python 3.
+In Python 3, regular expression character classes such as `\w` match Unicode characters by default, where in Python 2 they did not. To reproduce the previous behavior, Python 3 introduces a new `re.ASCII` flag, which does not exist in Python 2. To maintain consistent behavior across Python 2 and 3, we wrapped the `re` functions to include this flag across the board in Python 3.
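A minimal sketch of such a wrapper (the name `ascii_match` is illustrative; it is not the actual sgtk helper):

```python
import re
import sys

def ascii_match(pattern, string, flags=0):
    """re.match with Python 2 semantics: \\w, \\W, \\b match ASCII only."""
    if sys.version_info[0] >= 3:
        flags |= re.ASCII  # this flag only exists in Python 3
    return re.match(pattern, string, flags)

# In Python 3 without re.ASCII, \w+ would happily match the accented "e".
assert ascii_match(r"\w+$", "café") is None
assert ascii_match(r"\w+$", "cafe") is not None
```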
### Dictionary Order
-Prior to Python 3.7, dictionary order was not guaranteed. As of Python 3.7, insertion order is preserved in dictionaries[11]. In practice, on Python 2.7 dictionary order was random but deterministic (though this was not guaranteed), on some versions of Python (including some version of Python 3) dictionary order is non-deterministic[10]. While code prior to Python 3.7 should not rely on dictionary key order being deterministic, there were instances where this assumption was made in our unit tests. These tests broke in Python 3.7, and needed to be updated to ensure that dictionary key order was not relied upon.
+Prior to Python 3.7, dictionary order was not guaranteed. As of Python 3.7, insertion order is preserved in dictionaries[11]. In practice, dictionary order on Python 2.7 was arbitrary but deterministic (though this was never guaranteed), while on some versions of Python (including some versions of Python 3) it is non-deterministic[10]. While code prior to Python 3.7 should not rely on dictionary key order being deterministic, there were instances where this assumption was made in our unit tests. These tests broke in Python 3.7 and needed to be updated so that dictionary key order was no longer relied upon.
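For example, a unit-test assertion of this shape (the field names are invented for illustration) had to change from order-dependent to order-independent:

```python
fields = {"code": "text", "id": "number", "sg_status_list": "status_list"}

# Fragile: passes or fails depending on dictionary iteration order.
# assert list(fields) == ["code", "id", "sg_status_list"]

# Robust: compare as a set, or sort the keys before comparing.
assert set(fields) == {"code", "id", "sg_status_list"}
assert sorted(fields) == ["code", "id", "sg_status_list"]
```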
### `sys.platform`
-In Python 3.3+ `sys.platform` on Linux returns `linux`, where previously it had returned "linux" appended with the kernel major version (i.e. `linux2`). Of course when testing for Linux it is easy enough to check `sys.platform.startswith('linux')`. We chose to centralize these tests and platform "normalization", and introduced functions `sgtk.util.is_windows()`, `sgtk.util.is_linux()`, `sgtk.util.is_macos()`, as well as a `sgsix.platform` constant that contains a normalized platform string that can be used for consistent mapping to platform names across python versions.
+In Python 3.3+, `sys.platform` on Linux returns `linux`, where previously it had returned "linux" appended with the kernel major version (e.g. `linux2`). When testing for Linux it is easy enough to check `sys.platform.startswith('linux')`, but we chose to centralize these tests and platform "normalization": we introduced the functions `sgtk.util.is_windows()`, `sgtk.util.is_linux()`, and `sgtk.util.is_macos()`, as well as an `sgsix.platform` constant that contains a normalized platform string for consistent mapping to platform names across Python versions.
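Stripped of sgtk specifics, the normalization boils down to something like this sketch (not the actual sgtk implementation):

```python
import sys

def is_windows():
    return sys.platform == "win32"

def is_macos():
    return sys.platform == "darwin"

def is_linux():
    # Matches "linux" (Python 3.3+) as well as "linux2" (older Pythons).
    return sys.platform.startswith("linux")

# A normalized platform string usable as a stable key across versions.
platform_name = "linux" if is_linux() else sys.platform
```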
-## Notes
-[^1]:
- [https://www.python.org/doc/sunset-python-2/](https://www.python.org/doc/sunset-python-2/)
-
-[^2]:
- [https://docs.python.org/3/howto/pyporting.html#drop-support-for-python-2-6-and-older](https://docs.python.org/3/howto/pyporting.html#drop-support-for-python-2-6-and-older)
+## Notes
-[^3]:
- [https://portingguide.readthedocs.io/en/latest/syntax.html#tabs-and-spaces](https://portingguide.readthedocs.io/en/latest/syntax.html#tabs-and-spaces)
\ No newline at end of file
+[^1]: [https://www.python.org/doc/sunset-python-2/](https://www.python.org/doc/sunset-python-2/)
+[^2]: [https://docs.python.org/3/howto/pyporting.html#drop-support-for-python-2-6-and-older](https://docs.python.org/3/howto/pyporting.html#drop-support-for-python-2-6-and-older)
+[^3]: [https://portingguide.readthedocs.io/en/latest/syntax.html#tabs-and-spaces](https://portingguide.readthedocs.io/en/latest/syntax.html#tabs-and-spaces)
diff --git a/docs/en/guides/review.md b/docs/en/guides/review.md
index 99b1de39d..480d3d55c 100644
--- a/docs/en/guides/review.md
+++ b/docs/en/guides/review.md
@@ -7,7 +7,7 @@ lang: en
# Review
-Learn to how to get the most out of RV, the award-winning suite of digital review tools that allows you to play back, compare, and convert digital media with collaboration tools and many deep integrations.
+Learn how to get the most out of RV, the award-winning suite of digital review tools that allows you to play back, compare, and convert digital media with collaboration tools and many deep integrations.
Dig into RV's Reference Manuals for a complete understanding of node graphs, custom shaders, event handling, and networking.
diff --git a/docs/en/guides/webhooks.md b/docs/en/guides/webhooks.md
index 19711af77..98cf27780 100644
--- a/docs/en/guides/webhooks.md
+++ b/docs/en/guides/webhooks.md
@@ -31,7 +31,7 @@ Webhooks and the [{% include product %} event daemon](https://github.com/shotgun
## Creating a webhook
-To get started creating a webhook, go to a Webhooks page, then navigate to the button above the webhooks list. Access to webhooks is controlled by the "Advanced -> Show Webhooks" permission. It is enabled for default Admin and Manager roles.
+To get started creating a webhook, go to a Webhooks page and click the button above the webhooks list. Access to webhooks is controlled by the "Advanced -> Show Webhooks" permission. It is enabled for the default Admin and Manager roles.

@@ -74,12 +74,12 @@ A webhook can have one of several different statuses, indicating its health and

-| Status | Example | Description |
-|--------|:-------:|:-----------:|
-| Active |  | The webhook is operating in a stable fashion. No deliveries to this webhook's consumer URL have failed to reach their destination in the past 24 hours. |
-| Unstable |  | The webhook is operating in an unstable fashion. Some deliveries have failed to reach their destination in the past 24 hours, but not enough to cause {% include product %} to consider the webhook to be dead. |
-| Failed |  | The webhook is considered to be dead, and no further deliveries will be attempted. This is a result of too many delivery failures in a short period of time, and the system has determined that the webhook should no longer be considered viable. **A webhook is considered failed if it has 10 failed deliveries in the past 24 hours**. |
-| Disabled |  | The webhook is in a disabled state, and no further deliveries will be attempted until it is re-enabled. |
+| Status | Example | Description |
+| -------- | :--------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+| Active |  | The webhook is operating in a stable fashion. No deliveries to this webhook's consumer URL have failed to reach their destination in the past 24 hours. |
+| Unstable |  | The webhook is operating in an unstable fashion. Some deliveries have failed to reach their destination in the past 24 hours, but not enough to cause {% include product %} to consider the webhook to be dead. |
+| Failed |  | The webhook is considered to be dead, and no further deliveries will be attempted. This is a result of too many delivery failures in a short period of time, and the system has determined that the webhook should no longer be considered viable. **A webhook is considered failed if it has 10 failed deliveries in the past 24 hours**. |
+| Disabled |  | The webhook is in a disabled state, and no further deliveries will be attempted until it is re-enabled. |
## Deliveries
@@ -152,7 +152,7 @@ A webhook consumer service must respond to deliveries in order for the system to
{% include warning title="Response timeouts" content="A response must be received within six seconds of delivery to a webhook’s URL, after which the connection will be closed. Failure to respond in time will result in a failed delivery." %}
-Process time is recorded for each delivery and can be viewed in the Response details tab.
+Process time is recorded for each delivery and can be viewed in the Response details tab.
#### Throttling
@@ -160,17 +160,18 @@ Your consumer response times to deliveries will impact webhooks throughput for y
Each site is allowed 1 minute of response time per minute. So if all configured consumer endpoints for a site take the full 6 seconds to respond, webhook deliveries for that site will be throttled to 10 per minute.
Where a high rate of overall throughput is needed, then consumer endpoints should be designed according to the following model:
- 1. Receive the request
- 2. Spawn another process/thread to handle it the way you want
- 3. Answer an acknowledging 200 immediately
+
+1. Receive the request
+2. Spawn another process/thread to handle it the way you want
+3. Answer an acknowledging 200 immediately
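A minimal sketch of that model using only the Python standard library (`handle_event` is a placeholder for the real processing, and the port is arbitrary):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(payload):
    # Placeholder: do the slow processing here, off the request thread.
    pass

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # 1. Receive the request.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # 2. Spawn another thread to handle it.
        threading.Thread(target=handle_event, args=(payload,)).start()
        # 3. Answer an acknowledging 200 immediately.
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # quiet the default per-request logging

# To run: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```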
#### Status codes
-| Status | Code | Description |
-|--------|:----:|:-----------:|
-| Success | < 400 | The delivery was received and processed successfully. |
-| Error | >= 400 | The delivery was received but was not processed successfully. |
-| Redirect | 3xx | The delivery was received, but should be redirected to another URL. |
+| Status | Code | Description |
+| -------- | :----: | :-----------------------------------------------------------------: |
+| Success | < 400 | The delivery was received and processed successfully. |
+| Error | >= 400 | The delivery was received but was not processed successfully. |
+| Redirect | 3xx | The delivery was received, but should be redirected to another URL. |
### Acknowledgements
diff --git a/docs/en/guides/webhooks/batch-deliveries.md b/docs/en/guides/webhooks/batch-deliveries.md
index 18e2a722a..007bb3090 100644
--- a/docs/en/guides/webhooks/batch-deliveries.md
+++ b/docs/en/guides/webhooks/batch-deliveries.md
@@ -23,11 +23,13 @@ If enabling batched deliveries, we recommend that your receiving service is desi
{% include info title="Note" content="For a receiving service that takes on order of 1 second to respond to a single event, the response time is the main performance factor, not delivery overhead. There will not be any significant benefit in batching." %}
#### Non-batched-deliveries Webhooks
-* timeout allowance is 6 seconds per delivery. i.e. a webhook endpoint must respond to each request within 6 seconds.
+
+- timeout allowance is 6 seconds per delivery. i.e. a webhook endpoint must respond to each request within 6 seconds.
#### Batched-deliveries Webhooks
-* timeout allowance is the maximum of: 6 seconds, or, 1 second per event in the batch.
-* throttling limits still apply: 1 minute of webhook endpoint response time per minute per ShotGrid site, across all webhooks.
+
+- timeout allowance is the maximum of: 6 seconds, or, 1 second per event in the batch.
+- throttling limits still apply: 1 minute of webhook endpoint response time per minute per ShotGrid site, across all webhooks.
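A consumer can accept both formats with a small normalization step (a sketch; the payloads are trimmed versions of the delivery examples in this guide):

```python
def extract_events(payload):
    """Return the list of event deliveries from a webhook payload."""
    data = payload.get("data", {})
    if "deliveries" in data:
        return data["deliveries"]  # batched: 1 to 50 events
    return [data]                  # un-batched: wrap the single event

batched = {"data": {"deliveries": [{"id": "170.141.0"}, {"id": "170.141.1"}]}}
single = {"data": {"id": "119.110.0", "event_type": "Shotgun_Asset_Change"}}

assert len(extract_events(batched)) == 2
assert extract_events(single)[0]["id"] == "119.110.0"
```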
## Comparison of Webhook Delivery Formats
@@ -35,82 +37,82 @@ If enabling batched deliveries, we recommend that your receiving service is desi
```json
{
- "data":{
- "id":"119.110.0",
- "event_log_entry_id":479004,
- "event_type":"Shotgun_Asset_Change",
- "operation":"update",
- "user":{"type":"HumanUser","id":24},
- "entity":{"type":"Asset","id":1419},
- "project":{"type":"Project","id":127},
- "meta":{
- "type":"attribute_change",
- "attribute_name":"code",
- "entity_type":"Asset",
- "entity_id":1419,
- "field_data_type":"text",
- "old_value":"Cypress test asset for Webhooks deliveries",
- "new_value":"Revised test asset for Webhooks deliveries"
+ "data": {
+ "id": "119.110.0",
+ "event_log_entry_id": 479004,
+ "event_type": "Shotgun_Asset_Change",
+ "operation": "update",
+ "user": { "type": "HumanUser", "id": 24 },
+ "entity": { "type": "Asset", "id": 1419 },
+ "project": { "type": "Project", "id": 127 },
+ "meta": {
+ "type": "attribute_change",
+ "attribute_name": "code",
+ "entity_type": "Asset",
+ "entity_id": 1419,
+ "field_data_type": "text",
+ "old_value": "Cypress test asset for Webhooks deliveries",
+ "new_value": "Revised test asset for Webhooks deliveries"
},
- "created_at":"2021-02-22 17:40:23.202136",
- "attribute_name":"code",
- "session_uuid":null,
+ "created_at": "2021-02-22 17:40:23.202136",
+ "attribute_name": "code",
+ "session_uuid": null
},
- "timestamp":"2021-02-22T17:40:27Z"
+ "timestamp": "2021-02-22T17:40:27Z"
}
```
#### Batched-Deliveries Webhook Message Body (may contain 1 to 50 deliveries)
-When batching is enabled, a `deliveries` key is always present, even if there is only 1 event in the batch. Its value is an array of individual event delivery data, where the information provided for each delivery is identical to un-batched mode.
+When batching is enabled, a `deliveries` key is always present, even if there is only 1 event in the batch. Its value is an array of individual event delivery data, where the information provided for each delivery is identical to un-batched mode.
```json
{
- "timestamp":"2021-02-22T18:04:40.140Z",
- "data":{
- "deliveries":[
+ "timestamp": "2021-02-22T18:04:40.140Z",
+ "data": {
+ "deliveries": [
{
- "id":"170.141.0",
- "event_log_entry_id":480850,
- "event_type":"Shotgun_Asset_Change",
- "operation":"update",
- "user":{"type":"HumanUser","id":24},
- "entity":{"type":"Asset","id":1424},
- "project":{"type":"Project","id":132},
- "meta":{
- "type":"attribute_change",
- "attribute_name":"code",
- "entity_type":"Asset",
- "entity_id":1424,
- "field_data_type":"text",
- "old_value":"Cypress test asset for Webhooks deliveries",
- "new_value":"Revised test asset for Webhooks deliveries"
+ "id": "170.141.0",
+ "event_log_entry_id": 480850,
+ "event_type": "Shotgun_Asset_Change",
+ "operation": "update",
+ "user": { "type": "HumanUser", "id": 24 },
+ "entity": { "type": "Asset", "id": 1424 },
+ "project": { "type": "Project", "id": 132 },
+ "meta": {
+ "type": "attribute_change",
+ "attribute_name": "code",
+ "entity_type": "Asset",
+ "entity_id": 1424,
+ "field_data_type": "text",
+ "old_value": "Cypress test asset for Webhooks deliveries",
+ "new_value": "Revised test asset for Webhooks deliveries"
},
- "created_at":"2021-02-22 18:04:39.198641",
- "attribute_name":"code",
- "session_uuid":null,
+ "created_at": "2021-02-22 18:04:39.198641",
+ "attribute_name": "code",
+ "session_uuid": null
},
{
- "id":"170.141.1",
- "event_log_entry_id":480851,
- "event_type":"Shotgun_Asset_Change",
- "operation":"update",
- "user":{"type":"HumanUser","id":24},
- "entity":{"type":"Asset","id":1424},
- "project":{"type":"Project","id":132},
- "meta":{
- "type":"attribute_change",
- "attribute_name":"description",
- "entity_type":"Asset",
- "entity_id":1424,
- "field_data_type":"text",
- "old_value":null,
- "new_value":"Some other *description*"
+ "id": "170.141.1",
+ "event_log_entry_id": 480851,
+ "event_type": "Shotgun_Asset_Change",
+ "operation": "update",
+ "user": { "type": "HumanUser", "id": 24 },
+ "entity": { "type": "Asset", "id": 1424 },
+ "project": { "type": "Project", "id": 132 },
+ "meta": {
+ "type": "attribute_change",
+ "attribute_name": "description",
+ "entity_type": "Asset",
+ "entity_id": 1424,
+ "field_data_type": "text",
+ "old_value": null,
+ "new_value": "Some other *description*"
},
- "created_at":"2021-02-22 18:04:39.212032",
- "attribute_name":"description",
- "session_uuid":null,
- },
+ "created_at": "2021-02-22 18:04:39.212032",
+ "attribute_name": "description",
+ "session_uuid": null
+ }
]
}
}
diff --git a/docs/en/index.md b/docs/en/index.md
index bad4fec9f..f9f18da4d 100644
--- a/docs/en/index.md
+++ b/docs/en/index.md
@@ -4,4 +4,3 @@ title: Overview
pagename: index
lang: en
---
-
diff --git a/docs/en/quick-answers/administering.md b/docs/en/quick-answers/administering.md
index 624446216..9ddf0d071 100644
--- a/docs/en/quick-answers/administering.md
+++ b/docs/en/quick-answers/administering.md
@@ -5,12 +5,12 @@ pagename: quick-answers-administering
lang: en
---
-Administering
-=====
+# Administering
A collection of quick answers based around administering and configuring your Toolkit setup.
#### {% include product %} Desktop:
+
- [How do I re-setup a Toolkit project using {% include product %} Desktop?](./administering/resetup-project-with-sg-desktop.md)
- [How do I install the {% include product %} Desktop silently on Windows?](./administering/install-desktop-silent.md)
- [How do I set up a desktop/launcher icon for {% include product %} Desktop on Linux?](./administering/create-shotgun-desktop-shortcut.md)
diff --git a/docs/en/quick-answers/administering/convert-from-single-root-to-multi.md b/docs/en/quick-answers/administering/convert-from-single-root-to-multi.md
index 96469a4de..5500f130c 100644
--- a/docs/en/quick-answers/administering/convert-from-single-root-to-multi.md
+++ b/docs/en/quick-answers/administering/convert-from-single-root-to-multi.md
@@ -15,7 +15,7 @@ Let’s say you want to add another root named “secondary”. Here are the ste
- In {% include product %}, navigate to the **Admin > Site Preferences** page
- Open up the **File Management** section
- Click on **[+] Add Local File Storage**
-- Fill out the name ("secondary") and the paths to the storage root on all of the relevant platforms. *If you're not using a particular platform, you can simply leave it blank.*
+- Fill out the name ("secondary") and the paths to the storage root on all of the relevant platforms. _If you're not using a particular platform, you can simply leave it blank._
- Click on the **Save Page** button on the top or the bottom of the page

@@ -25,13 +25,13 @@ Let’s say you want to add another root named “secondary”. Here are the ste
Toolkit caches information about the local storages used in a pipeline configuration in the `config/core/roots.yml` file. Edit this file to add the new **secondary** storage root you just created in {% include product %}:
primary: {
- linux_path: /mnt/hgfs/sgtk/projects,
- mac_path: /sgtk/projects,
+ linux_path: /mnt/hgfs/sgtk/projects,
+ mac_path: /sgtk/projects,
windows_path: 'z:\sgtk\projects'
}
secondary: {
- linux_path: /mnt/hgfs/sgtk/secondaries,
- mac_path: /sgtk/secondaries,
+ linux_path: /mnt/hgfs/sgtk/secondaries,
+ mac_path: /sgtk/secondaries,
windows_path: 'z:\sgtk\secondaries'
}
@@ -39,8 +39,8 @@ Toolkit caches information about the local storages used in a pipeline configura
Example:
secondary: {
- linux_path: /mnt/hgfs/sgtk/secondaries,
- mac_path: /sgtk/secondaries,
+ linux_path: /mnt/hgfs/sgtk/secondaries,
+ mac_path: /sgtk/secondaries,
windows_path: 'z:\sgtk\secondaries'
shotgun_storage_id: 123
}
@@ -90,4 +90,4 @@ You should follow this same pattern for each template path in your `config/core/
{% include info title="Note" content="You do not need to specify a `root_name` for templates that use the default storage root. The default root is indicated by specifying `default: true` in the `roots.yml` file. If a default is not explicitly defined in `roots.yml`, the root named **primary** will be considered the default." %}
-1 *It is worth noting that updating the paths might not be ideal, since any old files that were created using the previous value will not be accessible by Toolkit once the new value is set (e.g. old work files won't be found by Toolkit after changing their template path). If this is a concern, you may then create a new template (e.g. houdini_shot_publish_v2) with the new location and upgrade your apps to use that new version. Not all apps handle a fallback concept like this, but this will allow some apps to recognize the old files. This does not affect publishes, as these are always linked to their publish in {% include product %}.*
+1 _It is worth noting that updating the paths might not be ideal, since any old files that were created using the previous value will not be accessible by Toolkit once the new value is set (e.g. old work files won't be found by Toolkit after changing their template path). If this is a concern, you may then create a new template (e.g. houdini_shot_publish_v2) with the new location and upgrade your apps to use that new version. Not all apps handle a fallback concept like this, but this will allow some apps to recognize the old files. This does not affect publishes, as these are always linked to their publish in {% include product %}._
diff --git a/docs/en/quick-answers/administering/create-shotgun-desktop-shortcut.md b/docs/en/quick-answers/administering/create-shotgun-desktop-shortcut.md
index ee27d627a..1e0625c9c 100644
--- a/docs/en/quick-answers/administering/create-shotgun-desktop-shortcut.md
+++ b/docs/en/quick-answers/administering/create-shotgun-desktop-shortcut.md
@@ -7,8 +7,8 @@ lang: en
# How do I set up a desktop/launcher icon for {% include product %} Desktop on Linux?
-The current {% include product %} Desktop installer doesn't automatically create shortcuts and launch entries, so you have to manually go in and do this afterwards. It's straightforward and may differ depending on which flavour of Linux you are using.
+The current {% include product %} Desktop installer doesn't automatically create shortcuts and launch entries, so you have to create them manually afterwards. The process is straightforward but may differ depending on which flavour of Linux you are using.
Once you have run the {% include product %} desktop installer, the {% include product %} Desktop executable will be located in the `/opt/Shotgun` folder. The name of the executable is {% include product %}.
No icon is distributed with the installer. Download it from the [{% include product %} Desktop engine github repository](https://github.com/shotgunsoftware/tk-desktop/blob/aac6fe004bd003bf26316b9859bd4ebc42eb82dc/resources/default_systray_icon.png).
-Once you have downloaded the icon and have the path to the executable (`/opt/Shotgun/Shotgun`), please manually create any desktop or menu launchers you may require. The process for doing this varies depending on the version of Linux, but you can typically create a desktop launcher by right clicking on the Desktop and looking for a suitable menu option there.
\ No newline at end of file
+Once you have downloaded the icon and have the path to the executable (`/opt/Shotgun/Shotgun`), please manually create any desktop or menu launchers you may require. The process for doing this varies depending on the version of Linux, but you can typically create a desktop launcher by right clicking on the Desktop and looking for a suitable menu option there.
diff --git a/docs/en/quick-answers/administering/disable-browser-integration.md b/docs/en/quick-answers/administering/disable-browser-integration.md
index 38a60f257..5dd092983 100644
--- a/docs/en/quick-answers/administering/disable-browser-integration.md
+++ b/docs/en/quick-answers/administering/disable-browser-integration.md
@@ -9,13 +9,13 @@ lang: en
To disable browser integration, follow these two simple steps.
-1. Create or open the text file at:
+1. Create or open the text file at:
Windows: %APPDATA%\{% include product %}\preferences\toolkit.ini
Macosx: ~/Library/Preferences/{% include product %}/toolkit.ini
Linux: ~/.{% include product %}/preferences/toolkit.ini
-2. Add the following section:
+2. Add the following section:
[BrowserIntegration]
enabled=0
@@ -24,4 +24,4 @@ See complete instructions on how to configure the browser integration in our [Ad
**Alternate method**
-If you've taken over your Toolkit pipeline configuration, an alternative would be to remove the [`tk-{% include product %}` engine from your environments](https://github.com/shotgunsoftware/tk-config-default2/blob/master/env/project.yml#L48) so that it can't load any actions.
\ No newline at end of file
+If you've taken over your Toolkit pipeline configuration, an alternative would be to remove the [`tk-{% include product %}` engine from your environments](https://github.com/shotgunsoftware/tk-config-default2/blob/master/env/project.yml#L48) so that it can't load any actions.
diff --git a/docs/en/quick-answers/administering/install-desktop-silent.md b/docs/en/quick-answers/administering/install-desktop-silent.md
index 70d370a4b..84d4a3b9e 100644
--- a/docs/en/quick-answers/administering/install-desktop-silent.md
+++ b/docs/en/quick-answers/administering/install-desktop-silent.md
@@ -15,4 +15,4 @@ If you wish to also specify the installation folder, launch it with the `/D` arg
`ShotgunInstaller_Current.exe /S /D=X:\path\to\install\folder.`
-{% include info title="Note" content="The `/D` argument must be the last argument and no `\"` should be used in the path, even if there are spaces in it." %}
\ No newline at end of file
+{% include info title="Note" content="The `/D` argument must be the last argument and no `\"` should be used in the path, even if there are spaces in it." %}
diff --git a/docs/en/quick-answers/administering/move-configuration-location.md b/docs/en/quick-answers/administering/move-configuration-location.md
index e7fdab6b9..7c752d6e6 100644
--- a/docs/en/quick-answers/administering/move-configuration-location.md
+++ b/docs/en/quick-answers/administering/move-configuration-location.md
@@ -9,9 +9,9 @@ lang: en
{% include info title="Note" content="The contents of this doc only apply to [centralized configuration setups](https://developer.shotgridsoftware.com/tk-core/initializing.html#centralized-configurations). [Distributed configurations](https://developer.shotgridsoftware.com/tk-core/initializing.html#distributed-configurations) are cached locally to the individual client machines and are managed automatically by Toolkit." %}
-The easiest way to move your pipeline configuration to a new location is by using the `tank move_configuration` command. This will take care of moving your files, updating {% include product %}, and updating your config files to point to the new location.
+The easiest way to move your pipeline configuration to a new location is by using the `tank move_configuration` command. This will take care of moving your files, updating {% include product %}, and updating your config files to point to the new location.
-This command is also useful if you are only moving the location for a single operating system, or were not previously using a certain operating system but would like to add it now. Toolkit will detect what needs to be moved or added and what doesn’t, and will show you what it is about to do to allow you to confirm before progressing.
+This command is also useful if you are only moving the location for a single operating system, or were not previously using a certain operating system but would like to add it now. Toolkit will detect what needs to be moved or added and what doesn’t, and will show you what it is about to do to allow you to confirm before progressing.
- [Using the tank move_configuration command](#using-the-tank-move_configuration-command)
- [Manually moving your pipeline configuration](#manually-moving-your-pipeline-configuration)
@@ -24,7 +24,7 @@ This command is also useful if you are only moving the location for a single ope
## Using the tank move_configuration command:
- $ cd /sgtk/software/shotgun/scarlet
+ $ cd /sgtk/software/shotgun/scarlet
$ ./tank move_configuration
Welcome to the {% include product %} Pipeline Toolkit!
@@ -64,11 +64,10 @@ This command is also useful if you are only moving the location for a single ope
you want a configuration which only works on windows, do like this:
> tank move_configuration "" "p:\configs\my_config" ""
-
### Example:
- $ cd /sgtk/software/shotgun/scarlet
+ $ cd /sgtk/software/shotgun/scarlet
$ ./tank move_configuration "/mnt/hgfs/sgtk/software/shotgun/scarlet_new" "z:\sgtk\software\shotgun\scarlet_new" "/sgtk/software/shotgun/scarlet_new"
Welcome to the {% include product %} Pipeline Toolkit!
@@ -133,8 +132,6 @@ This command is also useful if you are only moving the location for a single ope
Deleting original configuration files...
All done! Your configuration has been successfully moved.
-
-
## Manually moving your pipeline configuration
@@ -142,11 +139,11 @@ This command is also useful if you are only moving the location for a single ope
If you've already started moving things manually and are stuck, here's a rundown of what you need to change to ensure Toolkit continues to work with your pipeline configuration now in a new location.
-1. Move your pipeline configuration files to their new location
+1. Move your pipeline configuration files to their new location
$ mv /sgtk/software/shotgun/scarlet /mnt/newserver/sgtk/software/shotgun/scarlet_new
-2. Edit your `install_location.yml`, which helps Toolkit find where the pipeline configuration is located:
+2. Edit your `install_location.yml`, which helps Toolkit find where the pipeline configuration is located:
$ vi /mnt/newserver/sgtk/software/shotgun/scarlet_new/config/core/install_location.yml
@@ -163,10 +160,10 @@ If you've already started moving things manually and are stuck, here's a rundown
# End of file.
-3. Locate the corresponding PipelineConfiguration entity in {% include product %} for this project and modify the Linux Path, Mac Path, and Windows Path field values to match the changes you made above.
+3. Locate the corresponding PipelineConfiguration entity in {% include product %} for this project and modify the Linux Path, Mac Path, and Windows Path field values to match the changes you made above.

Now your pipeline configuration should work as expected from the new location.
-{% include info title="Note" content="If you're using SG Desktop, you'll need to navigate out of your project and then click on the project icon again in order to reload the pipeline configuration from its new location." %}
\ No newline at end of file
+{% include info title="Note" content="If you're using SG Desktop, you'll need to navigate out of your project and then click on the project icon again in order to reload the pipeline configuration from its new location." %}
diff --git a/docs/en/quick-answers/administering/move-project-directories.md b/docs/en/quick-answers/administering/move-project-directories.md
index 348191460..9c60622ce 100644
--- a/docs/en/quick-answers/administering/move-project-directories.md
+++ b/docs/en/quick-answers/administering/move-project-directories.md
@@ -13,7 +13,7 @@ Sometimes it's necessary to move your project files (scene files, renders, etc.)
- Copy (or move) your project files from the old location to the new location.
- In {% include product %}, navigate to the **Admin > Site Preferences** page and open the **File Management** section.
- 
+ 
- Update the Local File Storage named "primary" with the paths for each platform to the new storage for your project files. If you're not using a specific platform, leave it blank.
- Click on the **"Save Changes"** button on the top or bottom of the page.
- Update your `config/core/roots.yml` file in your project configuration to match the new path values you just saved in {% include product %}.
@@ -26,7 +26,7 @@ With the new storage root definition, the path is now expanded like this:
[asset-storage]/assets/Character/betty => /mnt/bigdrive/foo/assets/Character/betty
-and we don't need to worry about updating any other publish information in {% include product %} or Toolkit!
+and we don't need to worry about updating any other publish information in {% include product %} or Toolkit!
{% include warning title="Warning" content="The above steps assume that you are re-pathing the existing storage root. If instead you trash the existing one or create a new one then you will need to re-register all your folders and re-publish your `PublishedFiles` entities." %}
@@ -36,4 +36,4 @@ If any of your scene files have references in them that are pointing to the old
## Versions
-If you have Version entities in {% include product %} that store information in the Path to Movie or Path to Frames fields that are affected by this change, these will also have to be updated to point to the new location since these fields are string fields that contain an absolute path to the media.
\ No newline at end of file
+If you have Version entities in {% include product %} that store information in the Path to Movie or Path to Frames fields that are affected by this change, these will also have to be updated to point to the new location since these fields are string fields that contain an absolute path to the media.
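Re-pathing string fields like Path to Movie or Path to Frames amounts to swapping the old storage root prefix for the new one. As a minimal sketch (the `remap_root` helper below is hypothetical, not part of the Toolkit or {% include product %} API; the actual field updates would be done via the ShotGrid API or your own batch script):

```python
# Hypothetical helper illustrating the prefix swap described above.
# It is NOT a Toolkit API call; it only shows the path transformation
# you would apply to each affected Version field value.
def remap_root(path, old_root, new_root):
    """Return `path` re-pointed from old_root to new_root, or unchanged."""
    if path and path.startswith(old_root):
        return new_root + path[len(old_root):]
    return path

print(remap_root("/mnt/olddrive/foo/renders/shot_010/v001.mov",
                 "/mnt/olddrive/foo", "/mnt/bigdrive/foo"))
# -> /mnt/bigdrive/foo/renders/shot_010/v001.mov
```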
diff --git a/docs/en/quick-answers/administering/resetup-project-with-sg-desktop.md b/docs/en/quick-answers/administering/resetup-project-with-sg-desktop.md
index 8760a803b..1d109daac 100644
--- a/docs/en/quick-answers/administering/resetup-project-with-sg-desktop.md
+++ b/docs/en/quick-answers/administering/resetup-project-with-sg-desktop.md
@@ -7,20 +7,18 @@ lang: en
# How do I re-setup a Toolkit project using {% include product %} Desktop?
-If you’ve already set up a Toolkit configuration for a project and need to start fresh, the Advanced Setup Wizard in {% include product %} Desktop will not allow you to re-setup the project unless you’ve removed the previously setup configuration.
+If you’ve already set up a Toolkit configuration for a project and need to start fresh, the Advanced Setup Wizard in {% include product %} Desktop will not allow you to re-setup the project unless you’ve removed the previously setup configuration.
Here are the steps for manually removing those settings:
1. Delete any `PipelineConfiguration` entities linked to your Project in {% include product %}.

2. Set the `Tank Name` field on your `Project` entity in {% include product %} to a blank value.

3. Remove any corresponding pipeline configuration directories on disk.
-4. In {% include product %} Desktop select the project you wish to set up. *If you were already viewing the project, jump out to the project list view and then back into your project again.*
-6. Now you can run the project setup process again.
+4. In {% include product %} Desktop select the project you wish to set up. _If you were already viewing the project, jump out to the project list view and then back into your project again._
+5. Now you can run the project setup process again.
**Alternate method**
-If you are used to using the command line to set up your project with the `tank setup_project` command then you can add a `--force` argument to the end of the command. This allows you to set up a previously setup project without following the manual steps listed above.
-
- tank setup_project --force"
+If you are used to using the command line to set up your project with the `tank setup_project` command then you can add a `--force` argument to the end of the command. This allows you to set up a previously setup project without following the manual steps listed above.
-
\ No newline at end of file
+    tank setup_project --force
diff --git a/docs/en/quick-answers/administering/share-assets-between-projects.md b/docs/en/quick-answers/administering/share-assets-between-projects.md
index 153c2bf99..857c5b282 100644
--- a/docs/en/quick-answers/administering/share-assets-between-projects.md
+++ b/docs/en/quick-answers/administering/share-assets-between-projects.md
@@ -18,11 +18,11 @@ caption: Asset Library
hierarchy: [project, sg_asset_type, code]
entity_type: Asset
filters:
-- [project, is, {'type': 'Project', 'id': 207}]
+ - [project, is, { "type": "Project", "id": 207 }]
```
replacing `207` with your library project's ID.
When you're working in the shot step environment in Maya now, this will add a new tab that will display all the available publishes in that project. If you want to add this tab to the Loader in other engines (e.g., Nuke, 3dsmax, etc.) you'll have to modify the `tk-multi-loader2` settings for each of those engines as well. If you want to enable this in other environments, you'll have to go through the same steps in the asset step environment, and any other environments you want it to be in. A bit tedious, but it allows some fine-grain control.
-With these settings, you should get the Loader app to show a tab that lists publishes from your generic project.
\ No newline at end of file
+With these settings, you should get the Loader app to show a tab that lists publishes from your generic project.
diff --git a/docs/en/quick-answers/administering/uninstalling-an-app-or-engine.md b/docs/en/quick-answers/administering/uninstalling-an-app-or-engine.md
index b5c35d0cd..b22e35a3d 100644
--- a/docs/en/quick-answers/administering/uninstalling-an-app-or-engine.md
+++ b/docs/en/quick-answers/administering/uninstalling-an-app-or-engine.md
@@ -7,11 +7,11 @@ lang: en
# How do I uninstall an app or engine?
-You can remove an app or engine by editing your configuration's environment YAML files, so that the app or engine is no longer present.
+You can remove an app or engine by editing your configuration's environment YAML files, so that the app or engine is no longer present.
The environment files allow you to configure apps to only be available in certain contexts or engines instead of removing them entirely.
To find out more about editing environment files in general, take a look at [this guide](../../guides/pipeline-integrations/getting-started/editing_app_setting.md).
-## Example
+## Example
Here is an example of how to entirely remove the Publish app from our Default Configuration.
Apps are added to engines inside the environment settings, so we must remove the Publish app from all engines it's been added to.
@@ -28,7 +28,6 @@ The app is also being included in the Maya engine when in an Asset Step context:
As well as a line adding it to the menu favourites:
[`.../env/includes/settings/tk-maya.yml L56`](https://github.com/shotgunsoftware/tk-config-default2/blob/e09236bf4b91a6dd79ca5b3ef1258d0eb0afd871/env/includes/settings/tk-maya.yml#L56)
-
Then you have a repeat of these lines under the Shot Step settings:
[`.../env/includes/settings/tk-maya.yml L106`](https://github.com/shotgunsoftware/tk-config-default2/blob/e09236bf4b91a6dd79ca5b3ef1258d0eb0afd871/env/includes/settings/tk-maya.yml#L106)
[`.../env/includes/settings/tk-maya.yml L115`](https://github.com/shotgunsoftware/tk-config-default2/blob/e09236bf4b91a6dd79ca5b3ef1258d0eb0afd871/env/includes/settings/tk-maya.yml#L115)
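The edits above boil down to deleting the app's entries from each engine settings file. As a hedged sketch (the keys and include names below are assumed from tk-config-default2's layout, not copied from your config):

```yaml
# Sketch of env/includes/settings/tk-maya.yml (structure assumed).
# Delete the publish entries from each context's settings:
settings.tk-maya.asset_step:
  apps:
    tk-multi-loader2: "@settings.tk-multi-loader2.maya"
    # tk-multi-publish2: "@settings.tk-multi-publish2.maya"  # <- remove this line
  menu_favourites:
    # - {app_instance: tk-multi-publish2, name: Publish...}  # <- and this one
```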
@@ -41,10 +40,11 @@ You would then repeat these steps for all the other engine environment yml files
All those engines YAML files were including [the `tk-multi-publish2.yml`](https://github.com/shotgunsoftware/tk-config-default2/blob/e09236bf4b91a6dd79ca5b3ef1258d0eb0afd871/env/includes/settings/tk-multi-publish2.yml) settings file. Now that you have removed reference to it in your engine YAML files, you can remove this file entirely.
-{% include warning title="Important" content="If you remove the `tk-multi-publish2.yml` but still have engine files pointing at it then you will likely get an error along the lines of this:
+{% include warning title="Important" content="If you remove the `tk-multi-publish2.yml` but still have engine files pointing at it then you will likely get an error along the lines of this:
Error
Include resolve error in '/configs/my_project/env/./includes/settings/tk-desktop2.yml': './tk-multi-publish2.yml' resolved to '/configs/my_project/env/./includes/settings/./tk-multi-publish2.yml' which does not exist!
+
" %}
### Removing the App Location
diff --git a/docs/en/quick-answers/administering/update-configuration-core-locations.md b/docs/en/quick-answers/administering/update-configuration-core-locations.md
index 1da7a2055..0c190c491 100644
--- a/docs/en/quick-answers/administering/update-configuration-core-locations.md
+++ b/docs/en/quick-answers/administering/update-configuration-core-locations.md
@@ -9,13 +9,13 @@ lang: en
## How do I update my pipeline configuration to use a local core?
-If your pipeline configuration has been setup to use a shared Toolkit core, you can essentially undo that process, or "unshare" your core, installing a copy of the Toolkit Core API inside your pipeline configuration using the tank localize command. We refer to this as "localizing" your core.
+If your pipeline configuration has been set up to use a shared Toolkit core, you can essentially undo that process, or "unshare" your core, installing a copy of the Toolkit Core API inside your pipeline configuration using the `tank localize` command. We refer to this as "localizing" your core.
-1. Open a terminal and navigate to the pipeline configuration you wish to install the Toolkit core into.
+1. Open a terminal and navigate to the pipeline configuration you wish to install the Toolkit core into.
$ cd /sgtk/software/shotgun/scarlet
-2. Run the following tank command:
+2. Run the following tank command:
$ ./tank localize
@@ -25,16 +25,15 @@ If your pipeline configuration has been setup to use a shared Toolkit core, you
----------------------------------------------------------------------
Command: Localize
----------------------------------------------------------------------
-
+
This will copy the Core API in /sgtk/software/shotgun/studio into the Pipeline
configuration /sgtk/software/shotgun/scarlet.
Do you want to proceed [yn]
Toolkit will confirm everything before continuing. A copy of the Toolkit core, which your pipeline configuration is currently pointing at, will be copied locally into your pipeline configuration.
-
-3. Toolkit will now copy all of the apps, engines, and frameworks in use by your pipeline configuration locally into the `install` folder. It will then copy the Toolkit core and update the configuration files in your pipeline configuration to use the newly installed local Toolkit core.
+3. Toolkit will now copy all of the apps, engines, and frameworks in use by your pipeline configuration locally into the `install` folder. It will then copy the Toolkit core and update the configuration files in your pipeline configuration to use the newly installed local Toolkit core.
Copying 59 apps, engines and frameworks...
1/59: Copying tk-multi-workfiles v0.6.15...
@@ -61,15 +60,16 @@ If your pipeline configuration has been setup to use a shared Toolkit core, you
{% include info title="Note" content="Your output will vary depending on which apps, engines, and framework versions you have installed." %}
## How do I update my pipeline configuration to use an existing shared core?
+
If you have an existing shared Toolkit core, you can update any existing "localized" pipeline configurations to use the shared core using the tank command.
-1. Open a terminal and navigate to the pipeline configuration you wish to update.
+1. Open a terminal and navigate to the pipeline configuration you wish to update.
$ cd /sgtk/software/shotgun/scarlet
-2. Next you'll run the `tank attach_to_core` command and provide the valid path to the shared core on the current platform.
-
- $ ./tank attach_to_core /sgtk/software/shotgun/studio
+2. Next you'll run the `tank attach_to_core` command and provide the valid path to the shared core on the current platform.
+
+ $ ./tank attach_to_core /sgtk/software/shotgun/studio
...
...
----------------------------------------------------------------------
@@ -78,7 +78,7 @@ If you have an existing shared Toolkit core, you can update any existing "locali
After this command has completed, the configuration will not contain an
embedded copy of the core but instead it will be picked up from the following
locations:
-
+
- Linux: '/mnt/hgfs/sgtk/software/shotgun/studio'
- Windows: 'z:\sgtk\software\shotgun\studio'
- Mac: '/sgtk/software/shotgun/studio'
@@ -87,17 +87,17 @@ If you have an existing shared Toolkit core, you can update any existing "locali
have no configurations that are using the core embedded in this configuration.
Do you want to proceed [yn]
-
+
Toolkit will confirm everything before continuing. Since this shared core was already set up for multiple platforms, it shows you the location for each.
-
- *If you need to add the location for a new platform, update the config/core/install_location.yml file in the shared core configuration and add the necessary path(s).*
-3. Toolkit will now back up the local core API in your pipeline configuration, remove localized core, and add the necessary configurations to point your pipeline configuration at the shared core.
+ _If you need to add the location for a new platform, update the config/core/install_location.yml file in the shared core configuration and add the necessary path(s)._
+
+3. Toolkit will now back up the local core API in your pipeline configuration, remove the localized core, and add the necessary configuration changes to point your pipeline configuration at the shared core.
Backing up local core install...
Removing core system files from configuration...
Creating core proxy...
- The Core API was successfully processed.
+ The Core API was successfully processed.
If you decide later you would like to localize the Toolkit core inside your pipeline configuration (i.e., detaching your pipeline configuration from the shared core and using a locally installed version), you can do so using the `tank localize` command.
@@ -107,22 +107,22 @@ If you have an existing shared Toolkit core, you can update any existing "locali
Currently when you set up a project with SG Desktop, the Toolkit core API is "localized", which means it's installed inside the pipeline configuration. This means every pipeline configuration is a fully self-contained Toolkit installation. You may prefer to have a version of the Toolkit Core API that is shared between projects, which can minimize maintenance and ensure all of your projects are using the same core code. We sometimes refer to this as a **"shared studio core"**.
-Here's how to create a new Toolkit Core API configuration that can be shared between different project pipeline configurations.
+Here's how to create a new Toolkit Core API configuration that can be shared between different project pipeline configurations.
-1. Open a terminal and navigate to an existing pipeline configuration that contains the Toolkit Core version you wish to share. Once the process is complete, this pipeline configuration will no longer be localized, but will use the newly created shared core.
+1. Open a terminal and navigate to an existing pipeline configuration that contains the Toolkit Core version you wish to share. Once the process is complete, this pipeline configuration will no longer be localized, but will use the newly created shared core.
$ cd /sgtk/software/shotgun/pied_piper
-2. Run the following tank command to copy the Toolkit core to an external location on disk. You need to provide the location this path can be found on all platforms (linux_path, windows_path, mac_path). We recommend using quotes for each path. If you don't use Toolkit on a particular platform, you can simply specify an empty string `""`.
+2. Run the following tank command to copy the Toolkit core to an external location on disk. You need to provide the path where this location can be found on each platform (linux_path, windows_path, mac_path). We recommend using quotes for each path. If you don't use Toolkit on a particular platform, you can simply specify an empty string `""`.
    $ ./tank share_core "/mnt/sgtk/software/shotgun/studio" "Z:\sgtk\software\shotgun\studio" "/sgtk/software/shotgun/studio"
-
-3. You will be shown a summary of the change that is about to be made before Toolkit will proceed.
+
+3. You will be shown a summary of the change that is about to be made before Toolkit will proceed.
----------------------------------------------------------------------
Command: Share core
----------------------------------------------------------------------
- This will move the embedded core API in the configuration
+ This will move the embedded core API in the configuration
'/sgtk/software/shotgun/pied_piper'.
After this command has completed, the configuration will not contain an
embedded copy of the core but instead it will be picked up from the following
@@ -134,7 +134,7 @@ Here's how to create a new Toolkit Core API configuration that can be shared bet
have no configurations that are using the core embedded in this configuration.
Do you want to proceed [yn]
-4. Toolkit will copy the core installation to your new shared location and will update your existing pipeline configuration to point to the new shared core.
+4. Toolkit will copy the core installation to your new shared location and will update your existing pipeline configuration to point to the new shared core.
Setting up base structure...
Copying configuration files...
@@ -143,5 +143,5 @@ Here's how to create a new Toolkit Core API configuration that can be shared bet
Removing core system files from configuration...
Creating core proxy...
The Core API was successfully processed.
-
-You can now use this new shared core from other pipeline configurations. In order to update a pipeline configuration to use an existing shared core (like the one you just created), you can use the `tank attach_to_core` command.
\ No newline at end of file
+
+You can now use this new shared core from other pipeline configurations. In order to update a pipeline configuration to use an existing shared core (like the one you just created), you can use the `tank attach_to_core` command.
diff --git a/docs/en/quick-answers/administering/what-is-path-cache.md b/docs/en/quick-answers/administering/what-is-path-cache.md
index 5bf165f18..2809ea712 100644
--- a/docs/en/quick-answers/administering/what-is-path-cache.md
+++ b/docs/en/quick-answers/administering/what-is-path-cache.md
@@ -7,19 +7,19 @@ lang: en
# What is the Path Cache? What are Filesystem Locations?
-The path cache is used by Toolkit to track the associations between folders on disk and entities in {% include product %}.
-The master cache is stored in {% include product %} using the `FilesystemLocation` entity type. Each user then has their own version
-of the path cache [stored locally in the Toolkit cache directory on disk](./where-is-my-cache.md), which is synchronized in the background
+The path cache is used by Toolkit to track the associations between folders on disk and entities in {% include product %}.
+The master cache is stored in {% include product %} using the `FilesystemLocation` entity type. Each user then has their own version
+of the path cache [stored locally in the Toolkit cache directory on disk](./where-is-my-cache.md), which is synchronized in the background
whenever applications are launched or folders are created.
-Typically, we don't advise modifying the path cache manually. Our internal processes not only sync your local cache
+Typically, we don't advise modifying the path cache manually. Our internal processes not only sync your local cache
with the FilesystemLocation entities in {% include product %}, but also create event log entries that allow all users'
- machines to stay in sync with {% include product %}.
+machines to stay in sync with {% include product %}.
There are a couple tank commands that can be used to modify the path cache:
- - `tank unregister_folders` removes path cache associations.
- - `tank synchronize_folders` forces a sync of the local path cache with {% include product %}.
-
+- `tank unregister_folders` removes path cache associations.
+- `tank synchronize_folders` forces a sync of the local path cache with {% include product %}.
+
Typically you won't need to run either of these commands, but in certain circumstances, they can be useful.
- For example, `unregister_folders` should be run before renaming or recreating an entity in your project.
\ No newline at end of file
+For example, `unregister_folders` should be run before renaming or recreating an entity in your project.
diff --git a/docs/en/quick-answers/administering/where-is-my-cache.md b/docs/en/quick-answers/administering/where-is-my-cache.md
index e290d9d33..d7b3524a7 100644
--- a/docs/en/quick-answers/administering/where-is-my-cache.md
+++ b/docs/en/quick-answers/administering/where-is-my-cache.md
@@ -7,10 +7,9 @@ lang: en
# Where is my cache?
-
## Root Cache Location
-Toolkit stores some data in a local cache to prevent unnecessary calls to the {% include product %} server. This includes the [path cache](./what-is-path-cache.md), bundle cache, and thumbnails. While the default location should work for most users, it is configurable using the [cache_location core hook](https://github.com/shotgunsoftware/tk-core/blob/master/hooks/cache_location.py) should you need to change it.
+Toolkit stores some data in a local cache to prevent unnecessary calls to the {% include product %} server. This includes the [path cache](./what-is-path-cache.md), bundle cache, and thumbnails. While the default location should work for most users, it is configurable using the [cache_location core hook](https://github.com/shotgunsoftware/tk-core/blob/master/hooks/cache_location.py) should you need to change it.
The default cache root location is:
@@ -36,7 +35,7 @@ The path cache is located at:
**Distributed Configurations**
-The bundle cache is a cached collection of all the applications, engines, and frameworks used across all of the
+The bundle cache is a cached collection of all the applications, engines, and frameworks used across all of the
projects on your {% include product %} site. The bundle cache for distributed configs is stored in the following location:
Mac:
@@ -54,12 +53,12 @@ Linux:
The bundle cache for centralized configs are located inside the centralized configuration.
-`...{project configuration}/install/`
+`...{project configuration}/install/`
If your configuration uses a shared core, then this will be located inside your shared core's install folder instead.
## Thumbnails
-
+
Thumbnails used by Toolkit apps (like the [Loader](https://support.shotgunsoftware.com/entries/95442527)) are stored in the local Toolkit cache. They are stored per Project, Pipeline Configuration, and App (as needed). The structure beneath the root cache directory is as follows:
`/pc//thumbs/`
diff --git a/docs/en/quick-answers/developing.md b/docs/en/quick-answers/developing.md
index 4cd0c978f..8f296d42f 100644
--- a/docs/en/quick-answers/developing.md
+++ b/docs/en/quick-answers/developing.md
@@ -5,8 +5,7 @@ pagename: quick-answers-developing
lang: en
---
-Developing
-===
+# Developing
A collection of quick answers based around development with Toolkit.
diff --git a/docs/en/quick-answers/developing/create-publishes-via-api.md b/docs/en/quick-answers/developing/create-publishes-via-api.md
index 13e854958..98d364dea 100644
--- a/docs/en/quick-answers/developing/create-publishes-via-api.md
+++ b/docs/en/quick-answers/developing/create-publishes-via-api.md
@@ -9,11 +9,12 @@ lang: en
Our sgtk API provides a [convenience method](https://developer.shotgridsoftware.com/tk-core/utils.html#sgtk.util.register_publish) for registering `PublishedFiles` entities in ShotGrid.
-In addition we also have a Publish app, that comes with [its own API](https://developer.shotgridsoftware.com/tk-multi-publish2/).
+In addition we also have a Publish app, that comes with [its own API](https://developer.shotgridsoftware.com/tk-multi-publish2/).
The Publish API ultimately uses the core sgtk API method to register the PublishedFile, but it also provides a framework around collection, validation, and publishing, which can be customized.
In addition to the Publish API documentation, we have examples of writing your own publish plugins in our [pipeline tutorial](https://developer.shotgridsoftware.com/cb8926fc/?title=Pipeline+Tutorial).
## Using the register_publish() API method
+
While it is possible to create publish records in {% include product %} using a raw {% include product %} API call, we would strongly recommend using Toolkit's convenience method.
All Toolkit apps that create publishes use an API utility method called [`sgtk.util.register_publish()`](https://developer.shotgridsoftware.com/tk-core/utils.html#sgtk.util.register_publish).
@@ -34,7 +35,7 @@ file_to_publish = "/mnt/projects/proj/seq_abc/shot_123/comp/foreground.v034.nk"
# without any version number or extension
name = "foreground"
-# initialize an API object. If you have used the Toolkit folder creation
+# initialize an API object. If you have used the Toolkit folder creation
# to create the folders where the published file resides, you can use this path
# to construct the API object. Alternatively you can create it from any ShotGrid
# entity using the sgtk_from_entity() method.
@@ -55,17 +56,17 @@ ctx = tk.context_from_entity("Task", 123)
# the third parameter (file.nk) is typically the file name, without a version number.
# this makes grouping inside of ShotGrid easy. The last parameter is the version number.
sgtk.util.register_publish(
- tk,
- ctx,
- file_to_publish,
- name,
+ tk,
+ ctx,
+ file_to_publish,
+ name,
published_file_type="Nuke Script",
version_number=34
)
```
-There are several options you can populate in addition to the basic ones shown above.
-For a full list of parameters and what they do, see the [Core API documentation](https://developer.shotgridsoftware.com/tk-core/utils.html#sgtk.util.register_publish).
+There are several options you can populate in addition to the basic ones shown above.
+For a full list of parameters and what they do, see the [Core API documentation](https://developer.shotgridsoftware.com/tk-core/utils.html#sgtk.util.register_publish).
{% include info title="Tip" content="If your code is running from within a Toolkit app you can grab the sgtk instance via `self.sgtk` and the context with `self.context`.
If it's not in an app, but will be running within software where a Toolkit integration is present, you can access the current context and sgtk instance with the following code:
@@ -76,4 +77,5 @@ currentEngine = sgtk.platform.current_engine()
tk = currentEngine.sgtk
ctx = currentEngine.context
```
-" %}
\ No newline at end of file
+
+" %}
diff --git a/docs/en/quick-answers/developing/maya-shelf-app-launcher.md b/docs/en/quick-answers/developing/maya-shelf-app-launcher.md
index dfe542a3c..b8bb2a5f2 100644
--- a/docs/en/quick-answers/developing/maya-shelf-app-launcher.md
+++ b/docs/en/quick-answers/developing/maya-shelf-app-launcher.md
@@ -7,37 +7,37 @@ lang: en
# How do I add a shelf button to launch a Toolkit app in Maya?
-Adding a shelf button in Maya to launch Toolkit apps in Maya is pretty straightforward. Here is an example of how to add a custom shelf button that opens the [Loader app](https://support.shotgunsoftware.com/entries/95442527).
+Adding a shelf button in Maya to launch Toolkit apps is pretty straightforward. Here is an example of how to add a custom shelf button that opens the [Loader app](https://support.shotgunsoftware.com/entries/95442527).
{% include info title="Note" content="This assumes Toolkit is currently enabled in your Maya session. This example code does not bootstrap Toolkit." %}
-Open your Script Editor in Maya and paste in the following Python code:
+Open your Script Editor in Maya and paste in the following Python code:
```python
-import maya.cmds as cmds
+import maya.cmds as cmds
# Define the name of the app command we want to run.
# If you're not sure of the actual name, you can print current_engine.commands to get a full list (see below).
tk_app = "Publish..."
-try:
+try:
import sgtk
- # get the current engine (e.g. tk-maya)
- current_engine = sgtk.platform.current_engine()
- if not current_engine:
- cmds.error("ShotGrid integration is not available!")
+ # get the current engine (e.g. tk-maya)
+ current_engine = sgtk.platform.current_engine()
+ if not current_engine:
+ cmds.error("ShotGrid integration is not available!")
# find the current instance of the app.
# You can print current_engine.commands to list all available commands.
- command = current_engine.commands.get(tk_app)
- if not app:
- cmds.error("The Toolkit app '%s' is not available!" % tk_app)
+ command = current_engine.commands.get(tk_app)
+        if not command:
+ cmds.error("The Toolkit app '%s' is not available!" % tk_app)
# now we have the command we need to call the registered callback
command['callback']()
-except Exception, e:
+except Exception as e:
msg = "Unable to launch Toolkit app '%s': %s" % (tk_app, e)
cmds.confirmDialog(title="Toolkit Error", icon="critical", message=msg)
cmds.error(msg)
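The lookup-and-invoke pattern in the snippet above boils down to a plain dictionary access: `current_engine.commands` maps command display names to registration dicts whose `callback` entry is the function to run. A simplified, Toolkit-free sketch of that pattern (the registry and callbacks here are stand-ins for illustration, not the real engine object):

```python
# Simplified stand-in for current_engine.commands: a registry mapping
# command display names to registration dicts with a "callback" entry.
commands = {
    "Publish...": {"callback": lambda: "publish UI opened"},
    "Load...": {"callback": lambda: "loader UI opened"},
}

def launch(tk_app):
    """Look up a registered command by display name and invoke its callback."""
    command = commands.get(tk_app)
    if not command:
        raise RuntimeError("The Toolkit app '%s' is not available!" % tk_app)
    return command["callback"]()

result = launch("Publish...")
```

The real engine's `commands` dict is populated by each app at startup, which is why checking `get()` for `None` before invoking is the safe pattern.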
diff --git a/docs/en/quick-answers/developing/setting-software-environment-variables.md b/docs/en/quick-answers/developing/setting-software-environment-variables.md
index ebc2bf1b8..6ae6e7f13 100644
--- a/docs/en/quick-answers/developing/setting-software-environment-variables.md
+++ b/docs/en/quick-answers/developing/setting-software-environment-variables.md
@@ -14,7 +14,7 @@ This app is responsible for launching the software and ensuring the {% include p
## before_app_launch.py
-The [`before_app_launch.py`](https://github.com/shotgunsoftware/tk-multi-launchapp/blob/6a884aa144851148e8369e9f35a2471087f98d16/hooks/before_app_launch.py) hook is called just before the software is launched.
+The [`before_app_launch.py`](https://github.com/shotgunsoftware/tk-multi-launchapp/blob/6a884aa144851148e8369e9f35a2471087f98d16/hooks/before_app_launch.py) hook is called just before the software is launched.
This provides a perfect opportunity to set any custom environment variables to be passed onto the launched software.
Example:
@@ -26,23 +26,24 @@ import tank
class BeforeAppLaunch(tank.Hook):
def execute(self, app_path, app_args, version, engine_name, **kwargs):
-
+
if engine_name == "tk-maya":
os.environ["MY_CUSTOM_MAYA_ENV_VAR"] = "Some Maya specific setting"
```
-{% include warning title="Warning" content="Be careful not to completely redefine environment variables set by ShotGrid.
+{% include warning title="Warning" content="Be careful not to completely redefine environment variables set by ShotGrid.
For example, if you need to add a path to `NUKE_PATH` (for Nuke), or `PYTHONPATH` (for Maya), make sure you append your path to the existing value, rather than replace it.
You can use our convenience method for this:
```python
tank.util.append_path_to_env_var(\"NUKE_PATH\", \"/my/custom/path\")
```
+
" %}
## Custom wrapper
-Some studios have custom wrappers that handle setting the environment variables and launching the software.
+Some studios have custom wrappers that handle setting the environment variables and launching the software.
If you prefer to use custom code like this to set the environment, you can point the `Software` entity's [path fields](https://support.shotgunsoftware.com/hc/en-us/articles/115000067493-Integrations-Admin-Guide#Example:%20Add%20your%20own%20Software) to your executable wrapper, and `tk-multi-launchapp` will run that instead.
-{% include warning title="Warning" content="Take care with this approach to preserve the environment variables set by ShotGrid other wise the integration will not start." %}
\ No newline at end of file
+{% include warning title="Warning" content="Take care with this approach to preserve the environment variables set by ShotGrid; otherwise the integration will not start." %}
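To make the append-vs-replace distinction in the warnings above concrete, here is a stdlib-only sketch of what a helper like `tank.util.append_path_to_env_var` does (a hypothetical re-implementation for illustration, not tk-core's actual code):

```python
import os

def append_path_to_env_var(var_name, path):
    """Append a path to an environment variable, preserving any existing value.

    Illustrative re-implementation of tank.util.append_path_to_env_var;
    the real helper lives in tk-core.
    """
    existing = os.environ.get(var_name, "")
    paths = existing.split(os.pathsep) if existing else []
    if path not in paths:  # avoid duplicate entries on repeated launches
        paths.append(path)
    os.environ[var_name] = os.pathsep.join(paths)

# Appending preserves whatever ShotGrid (or anything else) already set.
os.environ["NUKE_PATH"] = "/shotgrid/startup/path"
append_path_to_env_var("NUKE_PATH", "/my/custom/path")
```

Assigning `os.environ["NUKE_PATH"] = "/my/custom/path"` directly would instead discard the ShotGrid startup path and break the integration.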
diff --git a/docs/en/quick-answers/developing/sgtk-script-authentication.md b/docs/en/quick-answers/developing/sgtk-script-authentication.md
index b22f5ac89..e7531f691 100644
--- a/docs/en/quick-answers/developing/sgtk-script-authentication.md
+++ b/docs/en/quick-answers/developing/sgtk-script-authentication.md
@@ -8,16 +8,19 @@ lang: en
# How do I work with authentication and login credentials in custom scripts?
## Error Message
+
If you're seeing an error like the one below coming from your script, then it means your script is not authorized to talk to your {% include product %} site.
```text
tank.errors.TankError: Missing required script user in config '/path/to/your/project/config/core/shotgun.yml'
```
+
If user authentication or script authentication is not provided up front, then Toolkit falls back to checking whether credentials have been defined in the config's `shotgun.yml` file.
Defining credentials in your `shotgun.yml` file is the legacy method of handling authentication.
Avoid defining them there, and instead use one of the approaches detailed below:
## User-facing scripts
+
If the script is user-facing, you can add this at the beginning, before creating a `Sgtk` instance:
```python
@@ -69,7 +72,8 @@ If `QApplication` is available, you'll get something akin to this:

{% include info title="Note" content="If you are importing a Toolkit API (`sgtk` package) that isn't associated with a configuration, for example one that you have downloaded to use to bootstrap into a different configuration, then you shouldn't attempt to create a `CoreDefaultsManager`. Instead, create a `ShotgunAuthenticator()` instance without passing a defaults manager.
-```python
+
+```python
authenticator = ShotgunAuthenticator()
```" %}
@@ -102,7 +106,7 @@ user = authenticator.create_script_user(
# Tells Toolkit which user to use for connecting to ShotGrid.
sgtk.set_authenticated_user(user)
-```
+```
{% include info title="Note" content="As noted at the end of the [user-facing scripts](#user-facing-scripts) section, you shouldn't create a defaults manager if the `sgtk` package you imported is standalone and isn't from a configuration. Also, you should provide the `host` kwarg to the `create_script_user()` method:
@@ -113,4 +117,5 @@ user = authenticator.create_script_user(
api_key=\"4e48f....