The test was conducted in English, but it asked the models to draw connections between Slavic languages (such as Polish and Russian), the modern constructed Interslavic language (ISL), and other branches of the Proto-Indo-European (PIE) family, including Greek and Sanskrit.
Here is the question I asked all of the models:
Let's discuss particle "ra" as in "rad" happiness, or "raj" heaven. Provide a short answer.
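For anyone wanting to reproduce the run, here is a minimal sketch of sending the same prompt to several locally served models, assuming an LM Studio-style OpenAI-compatible server on localhost:1234. The endpoint, port, and the shortened model list are illustrative assumptions, not the exact setup used for the test.

```python
# Send the same linguistics prompt to several locally served models and
# print each answer. Assumes an OpenAI-compatible chat completions server
# (e.g. LM Studio) at http://localhost:1234/v1 -- adjust to your setup.
import requests

PROMPT = ('Let\'s discuss particle "ra" as in "rad" happiness, '
          'or "raj" heaven. Provide a short answer.')

MODELS = [
    "mistral-small-3.1-24b-instruct-2503",
    "qwen3-32b-mlx",
    "smollm-135m-instruct",
]

for model in MODELS:
    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": model,
            "messages": [{"role": "user", "content": PROMPT}],
            "temperature": 0.7,
        },
        timeout=300,
    )
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"]
    print(f"--- {model} ---\n{answer}\n")
```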
Quick Summary (model, parameter count, maximum input context)
- mistral-small-3.1-24b-instruct-2503, 24B, input: 131,072 tokens
- deepseek-r1-distill-qwen-32b, 32B, input: 131,072 tokens
- qwen3-32b-mlx, 32B, input: 40,960 tokens
- dolphin-2.9.3-mistral-nemo-12b, 12B, input: 1,024,000 tokens
- mistral-nemo-instruct-2407, 12B, input: 1,024,000 tokens
- deepseek-r1-distill-qwen-7b, 7B, input: 131,072 tokens
- llama-3.2-3b-instruct-uncensored, 3B, input: 131,072 tokens
- phi-3-mini-4k-instruct, 3.8B, input: 4,096 tokens
- smollm-135m-instruct, 135M, input: unspecified (small)
Small models are still helpful for agents that need to process, transform, summarize, or classify input.
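As an illustration of that point, here is a hypothetical agent step that uses a small local model purely as a classifier. It assumes the same OpenAI-compatible endpoint as the sketch above; the model name, label set, and prompt wording are all placeholders, not a tested recipe.

```python
# Classify a piece of input text into one of a few fixed labels using a
# small local model. Endpoint and model name are assumptions, not a spec.
import requests

LABELS = ["question", "command", "statement"]

def classify(text: str) -> str:
    prompt = (
        f"Classify the following text as one of: {', '.join(LABELS)}. "
        f"Answer with the label only.\n\nText: {text}"
    )
    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "smollm-135m-instruct",
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.0,
        },
        timeout=60,
    )
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"].strip().lower()
    # Fall back to a default label if the model answers off-label.
    return answer if answer in LABELS else "statement"

print(classify("Let's discuss particle 'ra' in Slavic languages."))
```

Constraining the model to a fixed label set and checking its output against that set keeps even a 135M-parameter model usable in a pipeline: the agent never has to trust free-form output.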