This webservice translates a Petri net into plain text.
| URL | Description |
|---|---|
| https://woped.dhbw-karlsruhe.de/p2t/ | Embedded UI |
| https://woped.dhbw-karlsruhe.de/p2t/swagger-ui/ | Swagger UI |
| https://github.com/tfreytag/T2P | Text2Process Webservice |
| https://github.com/tfreytag/WoPeD | WoPeD-Client |
| https://hub.docker.com/r/woped/process2text | Docker Hub |
- IDE of your choice
- Java 11
It is recommended to use IntelliJ IDEA.
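To confirm that a suitable JDK is available on your path, you can check the reported version first (a quick sanity check, assuming `java` is already installed):

```bash
# The reported version should be 11.x to match the Java 11 requirement
java -version
```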
- Git clone this project onto your machine.
- Start IntelliJ and open the project.
- Wait until all files have been loaded.
- Run the application with the Start button or with `mvn spring-boot:run`.
After cloning this repository, set up the Git hooks to ensure the project standards are enforced.
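The same quickstart from the command line, as a minimal sketch; the clone URL is a placeholder for this repository, and the Git hook setup is a project-specific step not shown here:

```bash
# Clone the project (replace the placeholder with this repository's URL)
git clone <this-repository-url> p2t
cd p2t

# Set up the project's Git hooks as described above (project-specific step)

# Build and start the Spring Boot service; it serves http://localhost:8080/p2t/
mvn spring-boot:run
```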
- Start the application.
- Navigate to http://localhost:8080/p2t/swagger-ui.
- Paste your Petri net (the content of the XML file) into the body of the `POST /p2t/generateText` endpoint.
- Start the application.
- Navigate to http://localhost:8080/p2t/.
- Paste your Petri net (the content of the XML file) into the first text area and submit the form.
- Add a new collection in Postman.
- Add a new request in your created collection.
- For your request, change `GET` to `POST`.
- Enter the URL http://localhost:8080/p2t/generateText.
- Open the body configuration and choose `raw`.
- Copy the content of a `.pnml` file (must be a sound Petri net) into the body of the request.
- Click the Send button. An equivalent curl request is sketched below.
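A minimal command-line sketch of the same request, assuming the service is running locally on port 8080; `example.pnml` is a placeholder file name, and the `Content-Type` header is an assumption (adjust it if the endpoint expects XML):

```bash
# POST the raw PNML content of a sound Petri net to the generateText endpoint
# and print the generated text returned by the service
curl -X POST "http://localhost:8080/p2t/generateText" \
     -H "Content-Type: text/plain" \
     --data-binary @example.pnml
```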
- Start the application.
- Follow the installation instructions of the WoPeD-Client (https://github.com/tfreytag/WoPeD).
- Start the WoPeD-Client.
- Open the configuration and navigate to `NLP Tools`. Adapt the `Process2Text` configuration: Server host: `localhost`, Port: `8080`, URI: `/p2t`.
- Test your configuration.
- Close the configuration and import or create a new Petri net.
- Navigate to `Analyse` -> `Translate to text` and execute. The Petri net will now be transformed by your locally started P2T webservice.
- Pull our pre-built Docker image from Docker Hub (see above).
- Run this image on your server.
- Build your own Docker image with the Dockerfile.
- Run this image on your server. A command sketch for both options is given below.
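A minimal command sketch for both options; the image name follows the Docker Hub entry above, while the published host port and the local image tag `p2t-local` are assumptions:

```bash
# Option 1: pull the pre-built image from Docker Hub and run it,
# publishing the container's port 8080 on the host
docker pull woped/process2text
docker run -d -p 8080:8080 --name p2t woped/process2text

# Option 2: build your own image from the Dockerfile in this repository and run it
docker build -t p2t-local .
docker run -d -p 8080:8080 --name p2t p2t-local
```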
This repository uses jars that are unavailable on Maven Central. Hence, these jar files are stored in this repository in the `lib` folder. The chosen procedure is described in this Stack Overflow answer.
To check the formatting of all Java files, run `mvn spotless:check`.
If formatting issues are identified, run `mvn spotless:apply` to automatically reformat the affected files.
Working with BPMN or PNML files often involves handling unexpectedly large file sizes. Due to the limited context length of many language models, we recommend choosing a model with sufficient context length to ensure reliable and complete processing.
- GPT-4 Turbo (128k context)
- GPT-4 (8k or 32k context)
- GPT-3.5 Turbo (16k context)
- Gemini 1.5 Pro (1M+ context)
- Download LM Studio:
  - Visit the official LM Studio website and download the latest version for your operating system.
  - Alternatively, you can download LM Studio from GitHub.
- Install and launch LM Studio:
  - Run the downloaded installation file and follow the instructions.
  - Launch LM Studio after installation.
- Download and load a model:
  - In LM Studio, navigate to the "Models" section.
  - Choose a model to download or import an already downloaded model.
  - WoPeD should work with most models from LM Studio. The following four models have been extensively tested and can be recommended: `llama-3.2-1b-instruct`, `mistral-7b-instruct`, `meta-llama-3.1-8b-instruct`, and `gemma-2-9b-it`. For a detailed comparison of these models, please see the Model Comparison.
  - Ensure the context length is set correctly (10,000 tokens suits most cases), then load the model.
- Start the local server with the loaded model by clicking "Start Server".
- Verify the server is running:
  - Check that the local server responds at the default URL http://localhost:1234 (see Troubleshooting below).
- Configure the web client:
  - Start this P2T service and, for example, the web client.
  - Navigate to "P2T (Process2Text)".
  - Select "lmStudio" as the provider.
  - No API key is needed for LM Studio since it runs locally.
- Use Process2Text with LM Studio:
  - Select "lmStudio" as the provider.
  - Choose the desired model from the dropdown list.
  - Execute the translation.
- Troubleshooting:
  - Ensure the LM Studio server is running before starting a translation.
  - If no models are displayed, verify that LM Studio was started correctly and a model is loaded.
  - For connection issues, check if the default URL http://localhost:1234 is accessible; a command sketch is given below.
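A minimal sketch for that connectivity check, assuming LM Studio exposes its OpenAI-compatible endpoints on the default port 1234:

```bash
# Query the local LM Studio server for its available models;
# a JSON response confirms the server at the default URL is reachable
curl http://localhost:1234/v1/models
```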
