
Commit 5300d86

tweaks

1 parent e634cfe

File tree

1 file changed (+5 -5 lines)


setup.KubeConEU25/README.md

Lines changed: 5 additions & 5 deletions
@@ -509,7 +509,7 @@ TODO
 ## Example Workloads
 
 We now will now run some sample workloads that are representative of what is run on
-a typical AI GPU Cluster.
+an AI GPU Cluster.
 
 ### Batch Inference with vLLM
 

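For context on the `Batch Inference with vLLM` section referenced in this hunk, the sketch below shows what a minimal offline batch-inference script with vLLM's Python API can look like. The model name, prompts, and sampling settings are illustrative placeholders and are not taken from this repository.

```python
from vllm import LLM, SamplingParams

# Placeholder prompts and model; a real batch-inference job would read its
# inputs from object storage or a mounted volume.
prompts = [
    "Explain what a Kubernetes operator does.",
    "Summarize the benefits of batch inference.",
]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# Loads the model onto the available GPU(s) and generates for all prompts in one batch.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```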
@@ -630,8 +630,8 @@ The two containers are synchronized as follows: `load-generator` waits for
 
 ### Pre-Training with PyTorch
 
-In this example, `alice` uses [PyTorch]() to pre-training a model using the
-[Kubeflow Training Operator](https://github.com/kubeflow/training-operator).
+In this example, `alice` uses the [Kubeflow Training Operator](https://github.com/kubeflow/training-operator)
+to run a job that uses [PyTorch](https://pytorch.org) to train a machine learning model.
 
 <details>
 

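The rewritten sentence above leads with the Kubeflow Training Operator: the operator manages a `PyTorchJob` custom resource, and that job in turn runs the PyTorch training code. As a rough sketch of submitting such a resource with the official Kubernetes Python client, assuming a hypothetical namespace, image, and training script that do not come from this repository:

```python
from kubernetes import client, config


def replica_spec(replicas):
    """Build one PyTorchJob replica spec with a single GPU container."""
    return {
        "replicas": replicas,
        "restartPolicy": "OnFailure",
        "template": {
            "spec": {
                "containers": [{
                    "name": "pytorch",  # conventional container name for PyTorchJob
                    "image": "registry.example.com/alice/pretrain:latest",  # placeholder
                    "command": ["python", "pretrain.py"],  # placeholder
                    "resources": {"limits": {"nvidia.com/gpu": 1}},
                }]
            }
        },
    }


pytorch_job = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "PyTorchJob",
    "metadata": {"name": "pretrain-example", "namespace": "alice"},
    "spec": {
        "pytorchReplicaSpecs": {
            "Master": replica_spec(1),
            "Worker": replica_spec(2),
        }
    },
}

config.load_kube_config()  # local kubeconfig; in-cluster code would use load_incluster_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org",
    version="v1",
    namespace="alice",
    plural="pytorchjobs",
    body=pytorch_job,
)
```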
@@ -641,8 +641,8 @@ TODO
 
 ### Fine-Tuning with Ray
 
-In this example, `alice` uses [Ray](https://github.com/ray-project/ray) to fine tune a model using
-[KubeRay](https://github.com/ray-project/kuberay).
+In this example, `alice` uses [KubeRay](https://github.com/ray-project/kuberay) to run a job that
+uses [Ray](https://github.com/ray-project/ray) to fine tune a machine learning model.
 
 <details>
 

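The Ray hunk follows the same pattern: KubeRay runs the job on the cluster, and the job's entrypoint is ordinary Ray code. Below is a minimal, hypothetical Ray Train sketch of a distributed fine-tuning entrypoint; the training loop, worker count, and GPU flag are placeholders rather than values from this repository.

```python
import ray
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker():
    # A real fine-tuning job would build the model, shard the dataset,
    # and run the optimization loop here; this stub only reports its rank.
    from ray import train
    print("worker rank:", train.get_context().get_world_rank())


# Inside a KubeRay-managed cluster, ray.init() attaches to the running cluster.
ray.init()

trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=2, use_gpu=True),
)
result = trainer.fit()
print("finished:", result)
```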