LM Studio with 12B and 24B local LLM models

I have been running 12-billion (12B) and 24-billion (24B) parameter models in LM Studio on my relatively inexpensive Mac Studio M1, which has 64 GB of unified memory.

This setup also gives me a 1 million token input context window! That is roughly enough to hold the entirety of J.R.R. Tolkien's "The Lord of the Rings: The Fellowship of the Ring", a book of approximately 400 pages.
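To get a feel for what fits in such a window, here is a back-of-the-envelope sketch. The words-per-page and tokens-per-word figures are rough assumptions of mine (not measurements), and actual token counts depend on the model's tokenizer:

```python
# Rough estimate: how many tokens does a ~400-page book take,
# and does it fit in a 1-million-token context window?
WORDS_PER_PAGE = 300   # typical paperback density (assumption)
TOKENS_PER_WORD = 1.3  # common rule of thumb for English text (assumption)

def estimated_tokens(pages: int) -> int:
    """Estimate the token count of a text with the given page count."""
    return int(pages * WORDS_PER_PAGE * TOKENS_PER_WORD)

book_tokens = estimated_tokens(400)
print(book_tokens, book_tokens <= 1_000_000)
```

Under these assumptions a 400-page book comes to well under a million tokens, so the whole book fits in the window with room to spare.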

The 12B model responds almost instantly and is excellent for quick, good-quality example work.
The 24B model takes about 30 seconds to respond, but it has deep, nuanced knowledge of even obscure topics. I would have to spend roughly five times as much to get the same capability from NVIDIA GPUs.
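LM Studio can also serve a loaded model through its built-in local server, which speaks an OpenAI-compatible API (by default at http://localhost:1234/v1). Here is a minimal sketch of querying it from Python using only the standard library; the model name `"local-model"` is a placeholder, and you should substitute whatever identifier LM Studio shows for the model you have loaded:

```python
# Minimal sketch: send a chat prompt to a model served by LM Studio's
# local OpenAI-compatible server. Assumes the server is running on the
# default port; the model name below is a placeholder.
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_model(prompt: str, model: str = "local-model",
                    base_url: str = "http://localhost:1234/v1") -> str:
    """POST the prompt to the local server and return the reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `ask_local_model("...")` only works with the local server started in the LM Studio app; the nice part is that any OpenAI-client code can be pointed at the local endpoint the same way.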

Another benefit of using the "Dolphin" models is that they are uncensored, which gives me direct answers to my questions without trying to "protect me" from facts like the Tiananmen Square protests of 1989, or any other enforced ideology.





Post Scriptum

The views in this article are mine and do not reflect those of my employer.
I am preparing to cancel the e-mail newsletter service that sends out my articles.
Follow me on:
X.com (Twitter)
LinkedIn
Google Scholar

My favorite quotations:


“A man should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.” ~ Robert A. Heinlein

"We are but habits and memories we chose to carry along." ~ Uki D. Lucas

