You should research the definition of AI then. Even the A* pathfinding algorithm was historically considered AI. It’s a remarkably broad field.
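Since A* came up: here's a minimal sketch of the algorithm on a 2D grid, assuming unit step costs and a Manhattan-distance heuristic (the grid layout and function names are mine, just for illustration):

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* search on a 2D grid (0 = free, 1 = wall), 4-directional moves."""
    def h(p):  # Manhattan-distance heuristic (admissible for unit costs)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()  # tiebreaker so the heap never compares nodes
    open_heap = [(h(start), next(tie), start, None)]
    came_from = {}           # node -> parent, also marks nodes as expanded
    g_score = {start: 0}     # cheapest known cost from start to each node

    while open_heap:
        _, _, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:
            continue  # already expanded via a cheaper route
        came_from[cur] = parent
        if cur == goal:
            # Reconstruct the path by walking parent links back to start.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g_score[cur] + 1
                if ng < g_score.get(nxt, float("inf")):
                    g_score[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), next(tie), nxt, cur))
    return None  # goal unreachable
```

The heuristic is what makes it "AI" in the classic sense: it's informed search, steering expansion toward the goal instead of flooding the whole grid like Dijkstra would.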
Even Zuckerberg admits that simply scaling LLMs up doesn’t work, because the energy and compute requirements grow exponentially. There must be a more efficient architecture out there, since the meat computers in our skulls are hella efficient by comparison.
Once we figure that architecture out, though, it’s very likely we’ll be able to surpass biological efficiency, just as we have in so many other domains.
Right, so AIs don’t really know what words are. All they see are tokens. The tokens could be words and letters, but they could also be image/video features, audio waveforms, or anything else.
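To illustrate the "all they see are tokens" point: here's a toy greedy longest-match tokenizer with a made-up three-entry vocabulary (hypothetical, for illustration only; real models typically use BPE or similar, and the model itself only ever sees the integer IDs):

```python
def tokenize(text, vocab):
    """Greedily split `text` into the longest pieces found in `vocab`,
    returning their integer token IDs."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no vocab entry covers {text[i]!r}")
    return tokens

# Made-up vocabulary: "unbreakable" becomes three subword IDs.
toy_vocab = {"un": 0, "break": 1, "able": 2}
```

From the model's perspective there is no word "unbreakable", only the ID sequence it maps to. Swap the vocabulary for image patches or audio frames and the same machinery applies.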
I dunno if this changes any of the UX paradigms, but I heard GIMP is about to release a huge major version.
I mean, those rules would let everybody get around copyright this way. A rule like that would apply to all the indie LoRA trainers as well.
I don’t think this will make much difference, honestly; we already have very little regulation here. It probably means there will be copyright exemptions for training models. And frankly, eroding copyright bit by bit is okay in my book.
What we should be worried about are the human rights that are about to be violated.
That’s because they are serious about it. Chip fabrication will likely determine who dominates world politics for the next 25 years.
Of course! It’s not like animals have jet engines!
Human brains are merely the proof that such energy efficiencies are possible for intelligence. It’s likely we can match or go far beyond that, probably not by emulating biology directly. (Though we certainly may use it as inspiration while we figure out the underlying principles.)