...

  1. With your model selected, go to the Inspect tab.

  2. After each iteration, a new point is drawn on the line plot. When the curve starts to flatten, the model quality is converging and the model is unlikely to learn much more. If the curve is not yet stable, continue training: the model is still learning.
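The flattening check described above can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the tool: it assumes you have the per-iteration quality scores from the plot as a plain list, and treats the curve as converged once the last few points stop moving.

```python
def has_converged(scores, window=3, tolerance=0.01):
    """Return True when the last `window` scores vary by less than
    `tolerance`, i.e. the curve has flattened."""
    if len(scores) < window:
        return False  # too few points to judge
    recent = scores[-window:]
    return max(recent) - min(recent) < tolerance

still_learning = [0.42, 0.55, 0.63, 0.70]          # curve still rising
flattened = [0.42, 0.63, 0.78, 0.785, 0.79, 0.788]  # curve has levelled off

print(has_converged(still_learning))  # → False
print(has_converged(flattened))       # → True
```

The window and tolerance values are arbitrary; in practice you judge the same thing visually from the plot.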

...

Evaluate model performance

...

  1. Prepare your data file with the following columns:

    1. Text: Your raw text instances

    2. Label: Label annotations for each text

  2. With your model selected, go to the Evaluate tab.

  3. Upload your file.

  4. Select the text column and the label column.

  5. As a result, you’ll see a table with different performance metrics and, below it, a list of text instances (Errors) where the predicted label and the annotated label do not match.
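To make the evaluation step concrete, here is a minimal sketch of what the metrics table and the Errors list amount to. It is an illustration only: the actual computation happens inside the tool, and the example texts, labels, and predictions below are invented.

```python
def evaluate(texts, annotated, predicted):
    """Compute accuracy, per-label precision/recall, and the list of
    mismatching instances (the "Errors" list)."""
    assert len(texts) == len(annotated) == len(predicted)
    correct = sum(a == p for a, p in zip(annotated, predicted))
    accuracy = correct / len(texts)
    errors = [(t, a, p)
              for t, a, p in zip(texts, annotated, predicted) if a != p]
    metrics = {}
    for label in set(annotated) | set(predicted):
        tp = sum(a == p == label for a, p in zip(annotated, predicted))
        pred_n = sum(p == label for p in predicted)   # times predicted
        true_n = sum(a == label for a in annotated)   # times annotated
        metrics[label] = {
            "precision": tp / pred_n if pred_n else 0.0,
            "recall": tp / true_n if true_n else 0.0,
        }
    return accuracy, metrics, errors

texts = ["happy birthday!", "team offsite", "wedding invite", "friday drinks"]
annotated = ["special occasion", "social gathering",
             "special occasion", "social gathering"]
predicted = ["special occasion", "social gathering",
             "social gathering", "social gathering"]

acc, metrics, errors = evaluate(texts, annotated, predicted)
print(acc)     # → 0.75
print(errors)  # → [('wedding invite', 'special occasion', 'social gathering')]
```

Each entry in `errors` pairs a text instance with its annotated and predicted labels, which is exactly what the Errors list in the tab shows.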

Info

Use a different dataset for evaluation than the one you used for training.

...

Info

If the results are not sufficient, there may be several reasons:

  • You may have to revise your labels and hypotheses and use Promptranker to create another model to achieve better results.

  • Your evaluation set may not be good enough: the labels in it are not evenly distributed (e.g. in the example above, the evaluation set had only 30 examples for special occasions but 148 for social gatherings, and accordingly the latter got better results).
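Checking the label distribution of your evaluation set before trusting per-label metrics can be done with a short script. This is a hypothetical sketch; the counts mirror the 30-versus-148 example above.

```python
from collections import Counter

def label_distribution(labels):
    """Return {label: (count, share)} for a list of label annotations."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: (n, n / total) for label, n in counts.items()}

labels = ["special occasions"] * 30 + ["social gatherings"] * 148
for label, (n, share) in label_distribution(labels).items():
    print(f"{label}: {n} examples ({share:.0%})")
# special occasions: 30 examples (17%)
# social gatherings: 148 examples (83%)
```

A strongly skewed distribution like this one means the minority label's metrics rest on few examples, so differences between labels may reflect the evaluation set rather than the model.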

...

Next:

...

Run your model