DNA study of the Piast Dynasty of early Poland

On June 2, 2025, Prof. Marek Figlerowicz’s team from the Polish Academy of Sciences announced that the early Polish Piast dynasty belongs to the Y-DNA haplogroup R1b1a1b1a1a2c1a1f, also known as the R1b-S747 branch. 

While we await the peer-reviewed and published verification of these facts, let's discuss what we currently know.

The R1b-S747 mutation is estimated to be approximately 2,000 years old, placing its origin between 500 BCE and 500 CE. This particular subclade is closely associated with the Picts and the Dalriadan Gaels found in Argyll, Perthshire, and Moray in the Highlands of the northern British Isles. Please note that these groups are not considered Scots.

The finding is significant, as there were approximately 30 ruling Piast members from Mieszko I (d. 992) to Kazimierz III (d. 1370), and over 180 notable noble names. The Piast dynasty is bound up with the very essence of what Poland is.

Image: Mieszko I, the first Polish Christian ruler, by master painter Jan Matejko. 

By the ninth century, the Piasts had established themselves as Slavic rulers in the heart of Poland. Tradition also credits them with legendary progenitors from earlier centuries, such as:
- Piast Kołodziej, "the caretaker wheelwright";
- Siemowit, "the family leader";
- Lestek, "the cunning"; and
- Siemomysł, "the thoughtful of the family". 
Both the legend of Piast and the names suggest native and local Polish origin.

Despite the legendary Slavic origins, the Piast R1b-S747 is extremely rare in Slavic countries. 

In contrast, and this is what makes the matter highly controversial in Poland, most Slavic peoples belong to a different paternal haplogroup, R1a, which accounts for up to 60% of:
- the Sorbs (Western Slavs in today's eastern Germany),
- the Pomeranians (Western Slavs in today's Poland),
- the Kashubians (Western Slavs in today's Poland),
- the Poles (the Western Slavic core), and
- most Eastern Slavs, including those from Ukraine and the Slavic parts of the Moscow territory.

Please keep in mind that many north-western European R1b people moved south during the medieval Migration Period, intensified by the volcanic eruption of 536 AD, which lowered temperatures by as much as 2.5°C and plunged Europe into the worst mini ice age of the previous 2,000 years. Altogether, as many as 13 large tribal groups, notably the Goths, moved through Europe in the 6th century.

This is likely when some of the (cold and hungry) Pictish or Gaelic ancestors of the Piasts moved about 850 miles southeast. Long journeys were common even back then, and this mobility eventually grew into the Viking era. In fact, the Old Norse "víkingr" or "fara í víking" meant "to go on an expedition or raid by sea".

I know I am lumping the Picts and Gaels together with activities usually associated with the Scandinavians, but as we will learn, even the Western Slavs did plenty of "viking" back then.



As an Amazon Associate I earn from qualifying purchases.

Relativity

Physics and the laws of relativity: in a parent-child relationship, sound waves reach the subject only after 25 years.









AI, Visionaries and Architects

I decided to write down a few thoughts to clarify my obsession with creating a "multitude" of AI agents that rely on private (both personal and corporate) tiny language models (TLMs) alongside large language models (LLMs).




The future of work is poised for a significant transformation as artificial intelligence (AI) continues to advance. 

Some believe that the 62,000 tech layoffs in 2025 are already a result of this transformation. This is the first year when CTOs are not budgeting for more project managers (PMs) or junior developers.

Roles that traditionally relied on data input, data manipulation, and digital output, such as managers processing tasks and building spreadsheets and PowerPoint presentations, analysts preparing reports, or developers typing code, are increasingly being automated. 

This automation doesn’t mean the end of human roles (yet) but rather a shift in their nature. 

Instead of spending hours on repetitive tasks, professionals will focus on higher-level strategic thinking and creative problem-solving.

Visionary Leaders, not Managers


Imagine a world where the leadership cadre is no longer bogged down by the minutiae of slide creation but instead spends their time strategizing and leading their organizations. All management and optimization tasks are automated and run constantly behind the scenes.

Architects, not Developers


Similarly, developers will move from writing (i.e., typing) lines of code to designing complex systems and orchestrating multiple AI agents to work together seamlessly. 

This shift will require new skills and a different mindset, emphasizing creativity, innovation, and leadership.

Distributed, not Central


The future is not about a few central AI companies dictating the direction of the world with homogeneous solutions, but rather a multitude of proprietary AI solutions. The giants will play a crucial role in providing the models and computing resources. 

However, each individual person and company will develop their own AI agents tailored to their specific needs, leading to a diverse ecosystem of AI tools. 

This diversity will foster competition and innovation, driving the development of highly sophisticated and ultra-specialized AI solutions.

Today, I utilize local, yet powerful, models with up to 32 billion parameters and book-length input contexts. Next year, I fully expect to use a multitude of models with hundreds of billions of parameters, running locally on a Mac Studio M5 Ultra (?) or some new AI hardware.
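As a rough sanity check on what fits in local memory, here is a back-of-envelope sketch (my own arithmetic, not a vendor specification): weight memory is roughly the parameter count times the bytes per parameter, which depends on the quantization level.

```python
# Rough memory-footprint estimate for running a local LLM.
# Assumption (mine, not from the article): the weights dominate memory
# use, and each parameter costs `bytes_per_param` after quantization
# (e.g., ~0.5 bytes at 4-bit, 1.0 at 8-bit, 2.0 at fp16).

def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 10^9 bytes)."""
    return params_billions * bytes_per_param

# A 32B model at 4-bit quantization needs about 16 GB,
# which fits comfortably in 64 GB of unified memory:
print(model_memory_gb(32, 0.5))   # 16.0

# A 200B model at 4-bit would need roughly 100 GB:
print(model_memory_gb(200, 0.5))  # 100.0
```

By this estimate, hundreds of billions of parameters at 4-bit quantization would indeed require the larger unified-memory configurations of a future Mac Studio or comparable AI hardware.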

In this new landscape, the role of the visionary and architect becomes paramount. Please let me know in the comments what you think and how you prepare yourself and your organization for 2026 and beyond.



Testing local LLMs on multilingual understanding

I have run a quick test on a few LLMs I have installed locally on macOS with 64 GB of RAM.

The test was conducted in English, but it also involved making connections between Slavic languages (such as Polish and Russian), the modern Inter-Slavic language (ISL), and the rest of the language group that originated from Proto-Indo-European (PIE), including Greek and Sanskrit.


Here is the question I asked all of the models:

Let's discuss particle "ra" as in "rad" happiness, or "raj" heaven. Provide a short answer.
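For reproducibility, here is a minimal sketch of how the same prompt could be posed to each locally served model. It assumes an OpenAI-compatible local endpoint (such as the one LM Studio exposes at localhost:1234); the endpoint URL, payload fields, and temperature are my assumptions, not details stated in the test itself.

```python
# Build identical chat requests for several locally served models.
# Assumption: the local server speaks the OpenAI-compatible
# /v1/chat/completions protocol; POST each payload to
# http://localhost:1234/v1/chat/completions to run the actual test.
import json

PROMPT = ('Let\'s discuss particle "ra" as in "rad" happiness, '
          'or "raj" heaven. Provide a short answer.')

MODELS = [
    "mistral-small-3.1-24b-instruct-2503",
    "deepseek-r1-distill-qwen-32b",
    "qwen3-32b-mlx",
]

def build_request(model: str, prompt: str) -> dict:
    """One chat-completion payload with a single user message."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # keep answers focused for comparison
    }

payloads = [build_request(m, PROMPT) for m in MODELS]
print(json.dumps(payloads[0], indent=2))
```

Sending the same payload to every model keeps the comparison fair: only the model name varies between runs.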

Quick Summary


After evaluating all models, it became clear that larger parameter models with extensive context windows generally excelled in providing insightful, accurate, and nuanced linguistic analyses, making them ideal for in-depth comparative research and article writing tasks. The standout, mistral-small-3.1-24b-instruct-2503, delivered the best balance of abstract thinking, linguistic precision, and large-context capability, especially if an 8-bit quantization version is considered for improved accuracy. Other strong contenders included deepseek-r1-distill-qwen-32b and qwen3-32b-mlx, offering substantial analytical depth. Mid-sized models provided faster but shallower analyses, primarily suitable for exploratory or quick tasks, whereas smaller models below 7B generally struggled with accuracy and linguistic coherence.

Model ranking by preference:

  1. mistral-small-3.1-24b-instruct-2503, 24B, input: 131,072 tokens
  2. deepseek-r1-distill-qwen-32b, 32B, input: 131,072 tokens
  3. qwen3-32b-mlx, 32B, input: 40,960 tokens
  4. dolphin-2.9.3-mistral-nemo-12b, 12B, input: 1,024,000 tokens
  5. mistral-nemo-instruct-2407, 7B, input: 1,024,000 tokens
  6. deepseek-r1-distill-qwen-7b, 7B, input: 131,072 tokens
  7. llama-3.2-3b-instruct-uncensored, 3B, input: 131,072 tokens
  8. phi-3-mini-4k-instruct, input: 4,000 tokens
  9. smollm-135m-instruct, 135M, input: unspecified (small)


Small models are still helpful for agents that need to process, transform, summarize, or classify input.





 




Qwen 32B

I am pleased with the performance and depth of the 32B Qwen MLX, running locally on my Mac Studio M1 with 64 GB of RAM.

9 tokens per second is not fast, but acceptable.
A 15-second wait for the first token output is very good.
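Those two figures are enough for a back-of-envelope latency estimate: total response time is roughly the time to first token plus the output length divided by the generation speed. The helper below is my own sketch using the numbers quoted above.

```python
# Estimate wall-clock response time for a local model, using the
# figures quoted above: ~9 tokens/s generation, ~15 s to first token.

def response_time_s(output_tokens: int, tok_per_s: float = 9.0,
                    first_token_s: float = 15.0) -> float:
    """Total seconds to produce `output_tokens` tokens of output."""
    return first_token_s + output_tokens / tok_per_s

# A 450-token answer takes a little over a minute:
print(round(response_time_s(450)))  # 65
```

At these speeds, short answers are interactive enough, while book-length outputs are clearly an overnight batch job.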









Post Scriptum

The views in this article are mine and do not reflect those of my employer.
I am preparing to cancel the subscription to the e-mail newsletter service that sends out my articles.
Follow me on:
X.com (Twitter)
LinkedIn
Google Scholar

My favorite quotations:


“A man should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.” ~ Robert A. Heinlein

"We are but habits and memories we chose to carry along." ~ Uki D. Lucas

