Technically correct ™
Before you get your hopes up: Anyone can download it, but very few will be able to actually run it.
The same goes for OSM data. Anyone can download the whole Earth, but serving it and providing routing/path planning at scale takes a whole other set of skills and resources. It’s a good thing that they are willing to open source their model in the first place.
What are the resource requirements for the 405B model? I did some digging but couldn’t find any documentation in my cursory search.
Typically you need about 1 GB of graphics RAM for each billion parameters at 8-bit precision (i.e. one byte per parameter); the native 16-bit weights need twice that. This is a 405B parameter model. Ouch.
Edit: you can try quantizing it. Quantization reduces the memory required per parameter to 4 bits, 2 bits or even 1 bit. As you shrink the weights, the quality of the model can suffer. So in the extreme case you might be able to run this in under 64 GB of graphics RAM.
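The back-of-the-envelope math above can be sketched in a few lines. Note this counts only the weights; KV cache and activations add more on top:

```python
# Rough VRAM estimate for holding model weights alone
# (ignores KV cache and activation overhead, which add more).

def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate GB of graphics RAM needed just for the weights."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9  # using 1 GB = 10^9 bytes for simplicity

for bits in (16, 8, 4, 2, 1):
    print(f"405B at {bits:>2}-bit: ~{weight_memory_gb(405, bits):,.1f} GB")
```

At 8 bits this gives the ~405 GB figure (one byte per parameter), and only the 1-bit extreme dips under 64 GB.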
Wake me up when it works offline
“The Llama 3.1 models are available for download through Meta’s own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time.”
WAKE UP!
It works offline. When you use it with Ollama, you don’t have to register or agree to anything.
Once you have downloaded it, it will keep working; Meta can’t shut it down.
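For the record, the offline workflow with Ollama is roughly this (the `llama3.1` tag is an assumption; check `ollama list`/the Ollama library for the exact tags that are published):

```shell
# Pull the weights once while online; Ollama caches them locally.
ollama pull llama3.1

# After that, inference runs entirely on your own machine:
# no account, no license click-through, no network connection needed.
ollama run llama3.1 "Explain quantization in one sentence."
```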
Well, yes and no. See the other comment: roughly 64 GB of VRAM even at the lowest setting.
Oh, sure. Hosting the 405B model yourself is absolutely infeasible. But for the smaller models (70B and 8B), it can work.
I was mostly replying to the part where they claimed Meta can take it away from you at any point, which is simply not true.