FAQ
The language of the generated dataset follows the current user's interface language setting. Chinese and English are supported, with Chinese as the default. To generate an English dataset, switch the interface language to English manually.
The system supports any model served over the standard OpenAI-compatible protocol, including Ollama. Only a few common model configurations are built in; if the model you need is not listed, you can add it yourself by specifying the model provider, model name, API endpoint, and API key.
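As a rough sketch of what "OpenAI-compatible" means in practice, the snippet below assembles a standard `/chat/completions` request body for a locally hosted Ollama model. The field names in `provider` are illustrative, not the tool's actual configuration schema.

```python
import json

# Hypothetical custom-provider settings (illustrative names only).
provider = {
    "name": "local-ollama",                   # custom provider label
    "api_base": "http://localhost:11434/v1",  # OpenAI-compatible endpoint
    "api_key": "EMPTY",                       # Ollama ignores the key
    "model": "qwen2.5:7b",                    # any model pulled into Ollama
}

def build_chat_request(provider: dict, user_message: str) -> dict:
    """Assemble a standard OpenAI-style /chat/completions request body."""
    return {
        "model": provider["model"],
        "messages": [{"role": "user", "content": user_message}],
    }

body = build_chat_request(provider, "Hello")
print(json.dumps(body))
```

Any endpoint that accepts this request shape and returns the matching response shape should work as a custom provider.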
In many cases, the system requires the model to respond in a specified JSON format. If the model's comprehension or context window is insufficient, its output may be unstable (for example, invalid or truncated JSON). In that case, switch to a model with a larger parameter count and a longer context window.
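To make the failure mode concrete: models often wrap the requested JSON in Markdown fences or add stray prose around it. The helper below is an illustrative sketch of defensive parsing, not the tool's actual implementation — it tries the raw reply first, then any fenced or brace-delimited block, and returns `None` if nothing parses.

```python
import json
import re

def parse_model_json(raw: str):
    """Best-effort parse of a model reply that should contain JSON.

    Tries the raw text, then the first Markdown-fenced block, then the
    first {...} span. Returns the parsed object, or None on failure.
    """
    candidates = [raw]
    fenced = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
    if fenced:
        candidates.append(fenced.group(1))
    braced = re.search(r"\{.*\}", raw, re.DOTALL)
    if braced:
        candidates.append(braced.group(0))
    for text in candidates:
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            continue
    return None
```

A weaker model that frequently returns replies where this kind of recovery fails is a sign you should move to a stronger one.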
Task throughput is largely determined by the inference speed of the selected model. For a local model, check hardware resource utilization; for a remote model, consider switching to a faster and more stable platform.
This usually means the provider's rate-limiting policy has been triggered, which is common with SiliconFlow's unpaid tier and OpenRouter's free models. You can lower the number of concurrent requests in the task configuration; the default is 5.
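The idea behind the concurrency setting can be sketched with a semaphore that caps in-flight requests. This is an illustrative example, not the tool's code; `call_model` is a stand-in for a real API call.

```python
import asyncio

async def call_model(prompt: str) -> str:
    """Stand-in for a real model API call."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"answer to {prompt}"

async def run_tasks(prompts, concurrency: int = 2):
    """Process prompts with at most `concurrency` requests in flight.

    Lowering `concurrency` reduces the request rate and helps stay
    under a provider's rate limit.
    """
    sem = asyncio.Semaphore(concurrency)

    async def limited(p):
        async with sem:
            return await call_model(p)

    return await asyncio.gather(*(limited(p) for p in prompts))

results = asyncio.run(run_tasks([f"q{i}" for i in range(5)], concurrency=2))
```

Dropping the value from 5 to 1 or 2 trades speed for fewer rate-limit errors.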
You can intervene by adding custom prompts under Project Configuration → Prompt Configuration.
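Conceptually, a custom prompt is extra instruction text appended to the built-in generation prompt. The sketch below shows one plausible way such composition could work; the tool's actual prompt template may differ.

```python
from typing import Optional

# Illustrative base prompt, not the tool's real template.
BASE_PROMPT = "Generate question-answer pairs from the given text."

def compose_prompt(base: str, custom: Optional[str]) -> str:
    """Append the user's custom instructions, if any, to the base prompt."""
    if not custom:
        return base
    return f"{base}\n\nAdditional instructions:\n{custom.strip()}"

final = compose_prompt(BASE_PROMPT, "Answer in a formal tone.")
```

With no custom prompt configured, the base prompt is used unchanged.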