Bellingcat
Larger models train faster (they need less compute to reach a given loss), for reasons that aren't fully understood. These large models can then be used as teachers to train smaller models more efficiently. I've used Qwen 14B (14 billion parameters, quantized to 6-bit integers), and it's not too much worse than these very large models.
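To make the teacher-student idea concrete, here's a minimal sketch of classic knowledge distillation in PyTorch (a generic recipe, not necessarily what the Qwen or Llama teams actually did): the student is trained to match the teacher's softened output distribution on top of the usual next-token cross-entropy.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soften both distributions with a temperature and match them via KL.
        soft_teacher = F.log_softmax(teacher_logits / temperature, dim=-1)
        soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        kl = F.kl_div(soft_student, soft_teacher, log_target=True,
                      reduction="batchmean") * temperature ** 2
        # Ordinary cross-entropy against the ground-truth next token.
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kl + (1 - alpha) * ce

    # In the training loop the teacher is frozen and only the student updates:
    #     with torch.no_grad():
    #         teacher_logits = teacher(input_ids).logits
    #     student_logits = student(input_ids).logits
    #     loss = distillation_loss(student_logits, teacher_logits, labels)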
Lately, I’ve been thinking of LLMs as lossy text/idea compression with content-addressable memory. And 10.5GB is pretty good compression for all the “knowledge” they seem to retain.
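For what it's worth, the 10.5GB figure falls straight out of the parameter count and the bit width. Real quantization formats add a little overhead for scales and block metadata, but the back-of-the-envelope math is just:

    params = 14e9          # 14 billion parameters
    bits_per_param = 6     # 6-bit quantization
    size_gb = params * bits_per_param / 8 / 1e9
    print(size_gb)         # 10.5 (decimal GB; actual files run slightly larger)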
I don’t think federation has to be an obstacle for non-tech people. They don’t really have to know about it, and it can be something they learn about later. I really don’t know if federation stops people from trying it out. Don’t people think, “I don’t know which instance to join, so I’m not going to choose any”?
Personally, having no algorithm for your home feed is what I don’t like about it. Everything is chronological: some people I follow post many times a day, some post once a month, and some sporadically post things I’m extremely interested in, followed by a sea of random posts. Hashtag search and following hashtags are also less useful because there’s no option for an algorithmic feed.
The UI seems fine to me. I guess I’m not picky about UIs. The one nitpick I have is that on mobile, tapping an image just full-screens the image instead of opening the thread.
Hmm. I just assumed 14B was distilled from 72B, because that’s what I thought Llama was doing, and that would just make sense. On further research, it’s not clear whether Llama used traditional teacher-student distillation or just trained the smaller models on synthetic data generated by a large model. I suppose training smaller models on a larger amount of data generated by bigger models amounts to something similar, though. It does seem like Qwen was also trained on synthetic data, because it sometimes thinks it’s Claude, lol.
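For contrast with the logit-matching sketch further up, the synthetic-data route looks roughly like this: the big model just generates text, and the small model is trained on that text with an ordinary next-token objective. The model name and prompt below are placeholders, not what any lab actually used.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    teacher_name = "big-teacher-model"   # hypothetical placeholder for a large model
    tok = AutoTokenizer.from_pretrained(teacher_name)
    teacher = AutoModelForCausalLM.from_pretrained(teacher_name,
                                                   torch_dtype=torch.bfloat16)

    prompts = ["Explain how federation works in the fediverse."]  # placeholder

    # 1) The large model generates synthetic training text. Only its sampled
    #    tokens are kept, not its logits.
    inputs = tok(prompts, return_tensors="pt")
    with torch.no_grad():
        out = teacher.generate(**inputs, max_new_tokens=256, do_sample=True)
    synthetic_texts = tok.batch_decode(out, skip_special_tokens=True)

    # 2) The smaller model is then trained on synthetic_texts with plain
    #    next-token cross-entropy, exactly as if it were human-written data.

The practical difference is that logit-matching distillation needs the teacher's output distributions at training time, while this approach only needs its generated text.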
Thanks for the tip on Medius. Just tried it out, and it does seem better than Qwen 14B.