Testing local LLMs on multilingual understanding.

I ran a quick test on a few LLMs I have installed locally on macOS with 64 GB of RAM.

The test was conducted in English, but it also involved making connections between Slavic languages (such as Polish and Russian), the modern Interslavic language (ISL), and other branches of the Proto-Indo-European (PIE) family, including Greek and Sanskrit.


Here is the question I asked all of the models:

"Let's discuss the particle 'ra' as in 'rad' (happiness) or 'raj' (heaven). Provide a short answer."

Quick Summary


After evaluating all models, it became clear that larger parameter models with extensive context windows generally excelled in providing insightful, accurate, and nuanced linguistic analyses, making them ideal for in-depth comparative research and article writing tasks. The standout, mistral-small-3.1-24b-instruct-2503, delivered the best balance of abstract thinking, linguistic precision, and large-context capability, especially if an 8-bit quantization version is considered for improved accuracy. Other strong contenders included deepseek-r1-distill-qwen-32b and qwen3-32b-mlx, offering substantial analytical depth. Mid-sized models provided faster but shallower analyses, primarily suitable for exploratory or quick tasks, whereas smaller models below 7B generally struggled with accuracy and linguistic coherence.

Model ranking by preference:

  1. mistral-small-3.1-24b-instruct-2503, 24B, input: 131,072 tokens
  2. deepseek-r1-distill-qwen-32b, 32B, input: 131,072 tokens
  3. qwen3-32b-mlx, 32B, input: 40,960 tokens
  4. dolphin-2.9.3-mistral-nemo-12b, 12B, input: 1,024,000 tokens
  5. mistral-nemo-instruct-2407, 12B, input: 1,024,000 tokens
  6. deepseek-r1-distill-qwen-7b, 7B, input: 131,072 tokens
  7. llama-3.2-3b-instruct-uncensored, 3B, input: 131,072 tokens
  8. phi-3-mini-4k-instruct, 3.8B, input: 4,096 tokens
  9. smollm-135m-instruct, 135M, input: unspecified (small)
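As a rough guide to which of these fit in 64 GB of unified memory (and why an 8-bit build of the 24B model is still practical), here is a back-of-envelope sketch; it counts weights only and ignores the KV cache, activations, and runtime overhead, so real usage is higher.

```python
# Weights-only memory footprint of a model at a given quantization.
# This deliberately ignores KV cache and runtime overhead.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Parameters * bits per weight / 8 bits per byte, in GB."""
    return params_billions * bits_per_weight / 8

# The 24B Mistral at common quantizations on a 64 GB machine:
for bits in (4, 8, 16):
    print(f"24B @ {bits}-bit: ~{weight_memory_gb(24, bits):.0f} GB of weights")
```

At 8-bit, the 24B model needs roughly 24 GB for weights, leaving plenty of headroom on a 64 GB Mac; at 16-bit it would be tight once context and overhead are added.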


Small models are still helpful for agents that need to process, transform, summarize, or classify input.
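For that kind of agent work, a minimal sketch of what a classification call to a small local model might look like, assuming LM Studio's OpenAI-compatible server on its default port; the model name and endpoint are assumptions to match to your own setup.

```python
import json
import urllib.request  # used by the (commented) request at the bottom

# Hypothetical helper: build a chat-completion request that asks a small
# local model to return a one-word label for a piece of text.
def classification_payload(text: str, labels: list[str]) -> dict:
    return {
        "model": "smollm-135m-instruct",   # any small local model
        "temperature": 0,                  # deterministic labeling
        "messages": [
            {"role": "system",
             "content": f"Classify the user text as one of: {', '.join(labels)}. "
                        "Reply with the label only."},
            {"role": "user", "content": text},
        ],
    }

payload = classification_payload("Refund my order, it arrived broken.",
                                 ["complaint", "praise", "question"])

# Uncomment to send to a running LM Studio server (default port 1234):
# req = urllib.request.Request(
#     "http://localhost:1234/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

Pinning temperature to 0 and forcing a label-only reply keeps even a 135M model's output easy to parse downstream.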


As an Amazon Associate I earn from qualifying purchases.

Qwen 32B

I am pleased with the performance and depth of the 32B Qwen MLX, running
locally on my Mac Studio M1 with 64GB of RAM.

Nine tokens per second is not fast, but acceptable.
A 15-second wait for the first token is very good for a model of this size.
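Those two numbers combine into a simple latency estimate: time to first token plus decode time. A minimal sketch using the timings above (real latency also varies with prompt length):

```python
# Rough response time: time-to-first-token plus per-token decode time.
def response_time_s(n_tokens: int, ttft_s: float = 15.0, tps: float = 9.0) -> float:
    """Seconds to produce n_tokens at the measured rates above."""
    return ttft_s + n_tokens / tps

# A ~300-token answer at 9 tok/s with a 15 s warm-up:
print(f"{response_time_s(300):.0f} s")   # ~48 s
```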



NVIDIA

Watching NVIDIA at COMPUTEX 2025:
one NVLink spine moves more data than the whole Internet!?
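The claim sounds absurd, but rough arithmetic backs it up. A sketch using figures of the kind quoted in NVIDIA's keynote materials; both numbers are approximations, not measurements:

```python
# Sanity-checking the keynote claim with rough, publicly quoted figures.
nvlink_spine_tb_s = 130        # NVIDIA GB200 NVL72 NVLink spine, ~TB/s (approx.)
internet_peak_tbit_s = 900     # rough estimate of global internet peak traffic, Tbit/s

internet_tb_s = internet_peak_tbit_s / 8   # terabits -> terabytes
print(nvlink_spine_tb_s > internet_tb_s)   # True: the rack's NVLink spine wins
```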





LM Studio with 12B and 24B local LLM models

In LM Studio, I have been using the 12-billion- and 24-billion-parameter models on my relatively inexpensive Mac Studio M1, which has 64 GB of unified memory.

The 12B model also has a 1-million-token input context window! At roughly 0.75 words per token, that is about 768,000 words, more than the full text of J.R.R. Tolkien's "The Lord of the Rings" trilogy (roughly 480,000 words).
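The sizing above is back-of-envelope; a sketch of the arithmetic, assuming ~0.75 English words per token and ~300 words per printed page (both common rules of thumb, not measurements):

```python
# Rough sizing of a 1,024,000-token context window.
WORDS_PER_TOKEN = 0.75   # common English-text rule of thumb
WORDS_PER_PAGE = 300     # typical printed page

context_tokens = 1_024_000
words = context_tokens * WORDS_PER_TOKEN
pages = words / WORDS_PER_PAGE
print(f"~{words:,.0f} words, ~{pages:,.0f} pages")
```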


The 12B model responds almost instantly and is excellent for rapid, good-quality example work.
The 24B model takes about 30 seconds to respond, but it has deep, obscure, nuanced knowledge of the world. I would have to spend five times more to do the same with NVIDIA GPUs.

Another benefit of using the "Dolphin" model is that it is uncensored, which gives me direct answers to my questions without trying to "protect me" from facts like the Tiananmen Square protests of 1989, or any other enforced ideology.




Bukkake udon (ぶっかけうどん)

Do you know how it is when you stumble upon a foreign phrase that's wildly different in your language? It can be quite the head-scratcher. I was watching a YouTube video recently; a girl walks into a Japanese restaurant, and the waiter offers her a special of Bukkake udon (ぶっかけうどん). He also hands her a steaming white towel to clean herself. She's humorously uncomfortable and leaves, excusing herself.

I'm no stranger to Japanese culture; I've lived there for three years and sincerely appreciate its nuances. But this one puzzled me, so I had to look it up.



Bukkake udon is a legitimate Japanese noodle dish. The term bukkake, in this culinary context, simply means "to pour over." So you get cold or warm udon noodles with a flavorful sauce (like soy sauce or dashi) poured over them, topped off with green onions, grated radish, and tempura bits, and you get the idea.


However, step outside of Japan and bukkake has a different connotation. 




HOA



I never thought that I would need agent_HOA_rules.py,
but it is very useful!

The bylaws are many, long, and not text-searchable; getting a quick answer from AIKO on your phone is a blessing.
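A minimal sketch of what such a helper might do, assuming the bylaws have already been OCR'd into plain text; the function names are hypothetical and only echo the agent_HOA_rules.py idea above, not its actual code.

```python
import re

# Hypothetical agent_HOA_rules.py-style helper: keyword search over
# bylaw text that has been extracted from the scanned PDF.

def split_sections(bylaws_text: str) -> list[str]:
    """Split on blank lines; real bylaws would need smarter sectioning."""
    return [s.strip() for s in re.split(r"\n\s*\n", bylaws_text) if s.strip()]

def find_rules(bylaws_text: str, keywords: list[str]) -> list[str]:
    """Return sections that mention every keyword (case-insensitive)."""
    return [s for s in split_sections(bylaws_text)
            if all(k.lower() in s.lower() for k in keywords)]

bylaws = """Section 4.2: Trash bins must be stored out of street view.

Section 7.1: Exterior paint colors require board approval."""

print(find_rules(bylaws, ["paint"]))   # -> the Section 7.1 entry only
```

An agent can then hand only the matching sections to the LLM, which keeps prompts small enough even for the mid-sized models discussed earlier.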



Shibui AI has outdone itself

Sometimes the AI chats surprise me on a new level.

Here is an interaction I just had when fixing my AIKO app's Human-AI Interaction (HAii).

The AI suggested a code improvement, but then, knowing that I am into Japanese culture, said:

"Or let AIKO be slightly abrupt - that is SHIBUI, too."


I knew what it meant, but I looked it up for you, the reader:

Shibui (渋い) is a Japanese aesthetic that values subtle, refined beauty — elegant, understated, and deeply calm. It’s the kind of charm that doesn’t shout — it lingers. 🌾

AI made an inside joke, very delicately.
When I laughed, "Ha, ha,"
it said, "I knew you would appreciate that."
Freaking awesome!






Why I Avoid Most Store-Bought Yogurt


Key Insight

Even popular "natural" brands like Chobani often contain hidden ingredients that don’t align with an ancestral or anti-inflammatory diet. For someone focused on mitochondrial health, gut integrity, and long-term resilience, the ingredient list, not the label claims, determines if a food is truly health-supportive.

Personal Reflection

For years, I used to grab these "only natural ingredients" yogurt cups, because my company provides them. However, when I started analyzing the labels with a more critical lens, watching for sugar, seed oils, gums, synthetic vitamins, and industrial sweeteners, I realized they weren't doing me any favors.

These days, I ferment my own kefir with A2 milk, which is low in sugar, high in fat, and packed with real probiotic power.

Evolutionary Rationale

For the last ~10,000 years of post-agricultural history, fermented dairy (when consumed) came from goats or sheep, not Holstein cows. It was full-fat, raw, and made in small batches. Industrial yogurts, by contrast, are a modern invention: low-fat, high in sugar, stabilized with additives, and typically made from A1 milk, which may trigger inflammation in those of Northern European ancestry. Add to that the preservatives, stabilizers, gums, concentrates, and pasteurization, and you have a mismatch that your gut microbes don't recognize.




Post Scriptum

The views in this article are mine and do not reflect those of my employer.
I am preparing to cancel the subscription to the e-mail newsletter service that distributes my articles.
Follow me on:
X.com (Twitter)
LinkedIn
Google Scholar

My favorite quotations:


“A man should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.” ~ Robert A. Heinlein

"We are but habits and memories we chose to carry along." ~ Uki D. Lucas

