...

  1. Prepare your data file with the following columns (see the sketch after this list):

    1. Text: your raw text instances

    2. Label: the label annotation for each text instance

  2. With your model selected, go to the Evaluate tab.

  3. Upload your file.

  4. Select the text column and the label column.

  5. As a result, you’ll see a table with different performance metrics and, below it, a list of text instances (Errors) where the predicted label and the annotated label do not match.
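For example, here is a minimal Python sketch of how such a data file could be prepared with pandas. The file name evaluation_set.csv and the example rows are illustrative assumptions; only the Text and Label column names (and the labels used later on this page) come from the steps above.

# Minimal sketch, assuming pandas is installed; file name and rows are illustrative.
import pandas as pd

rows = [
    {"Text": "Looking for a venue for a birthday dinner", "Label": "special occasions"},
    {"Text": "Any good bars for meeting friends after work?", "Label": "social gatherings"},
]

df = pd.DataFrame(rows, columns=["Text", "Label"])
df.to_csv("evaluation_set.csv", index=False)  # upload this file in the Evaluate tab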

Info

Use a different dataset for evaluation than the one you used for training.
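One common way to keep the two datasets separate is a held-out split, sketched below with scikit-learn. The file names, the 80/20 split, and the stratify option are illustrative assumptions, not part of the product.

# Minimal sketch, assuming the full annotated data lives in labeled_data.csv.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("labeled_data.csv")  # columns: Text, Label

# Hold out 20% for evaluation; stratify keeps the label distribution similar
# in the training and evaluation files.
train_df, eval_df = train_test_split(
    data, test_size=0.2, random_state=42, stratify=data["Label"]
)

train_df.to_csv("training_set.csv", index=False)
eval_df.to_csv("evaluation_set.csv", index=False)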

...

Info

If the results are not satisfactory, there may be several reasons:

  • you may need to revise your labels and hypotheses and use Promptranker to create another model to achieve better results

  • your evaluation set may not be representative: the examples in it are not evenly distributed across labels (e.g. in the example above, the evaluation set contained only 30 examples for special occasions but 148 for social gatherings, and accordingly the latter achieved better results)
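To spot this kind of skew before uploading, you can inspect the label counts in the evaluation file. This is a sketch assuming the evaluation_set.csv prepared above; the 30 vs. 148 imbalance mentioned in the example is the sort of gap it would surface.

# Minimal sketch, assuming evaluation_set.csv is the file prepared earlier.
import pandas as pd

eval_df = pd.read_csv("evaluation_set.csv")

# Uneven counts per label (e.g. 30 vs. 148) pull the aggregate metrics
# towards the majority label, so check the distribution before uploading.
print(eval_df["Label"].value_counts())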

...

Next: Run your model