I’m just some guy, you know.
Those aren’t rumors. The Lemmy repo is quite open about this. Lemmy’s devs are part of the Tankie problem here.
Honestly, Kbin and Mbin are looking very attractive, not being run by extremists. Lemmy, as a product, is dragged down by the Tankies who make it - just as Pleroma (a Mastodon alternative) is dragged down by the Neo-Nazis who make it.
Centralized platforms get top-down control. You’re trading your freedom for convenience.
Stop pining for the algorithms. They’re making you stupider by guaranteeing that you only see the content you want to see, and never the content you need to see.
Bluesky is not decentralized at all.
Don’t fall for it. Read their privacy policy.
They keep your data in the cloud and share it with third parties, including advertisers.
Pen and paper doesn’t snitch.
We can conclude: that photo isn’t AI-generated. You can’t get an AI system to generate photos of an existing location; it’s just not possible given the current state of the art.
That’s a poor conclusion. A similar image could be created using masks and AI inpainting. You could take a photo on a rainy day and add in the disaster components using GenAI.
That’s definitely not the case in this scenario, but we shouldn’t rely on things like verifying real-world locations to assume that GenAI wasn’t involved in making a photo.
I mean, now that we know the addresses of people like Nick Fuentes and Matt Walsh, we should be able to figure out everywhere else they go too.
If we want to find the addresses of other notable fascists, just keep track when/where they’re seen publicly until you figure out which device on the map is theirs, then see where they go at night.
They only apply to games from Japan. This is the Japanese patent system.
“Open Source” is mostly the right term. AI isn’t code, so there’s no source code to open up. If you provide the dataset you trained on and open up the code used to train the model, that’s pretty close.
Otherwise, we need to consider “open weights” and “free use” to be more accurate terms.
For example, ChatGPT 3+ is undeniably closed/proprietary. You can’t download the model and run it on your own hardware. The dataset used to train it is a trade secret. You have to agree to all of OpenAI’s terms to use it.
LLaMa is way more open. The dataset is largely known (though no public master copy exists). The code used to train it is open source. You can download the model for local use, and train new models based on the weights of the base model. The license allows all of this.
It’s just not a 1:1 equivalent to open source software. It’s basically the equivalent of royalty-free media, but with big collections of conceptual weights.
The bridge is necessary because Bluesky and Mastodon cannot federate directly, and they never will be able to. ActivityPub and ATProto are different protocols.