

I was going with Rayleigh scattering, but that works too
the sky is blue
an unbiased perspective
More abstract concepts that generally trouble the intuition of many:
the transition from laminar to turbulent flow
time and gravity are related
magnetism is not magic
entropy precludes perpetual motion
In nearly every instance you will be citing stupidity in implementation. The present limitations of generative AI come down to access and scope, along with the peripherals required to use these models effectively. We are in a phase like the early microprocessor. By itself, a Z80 or 6502 was never a replacement for a PDP-11. It took many such processors and peripheral circuit blocks to make truly useful systems back in that era. The thing is, these microprocessors were Turing complete. They could be built into anything, given enough peripheral hardware and no limit on how many microprocessors were used.
Generative AI is fundamentally useful in a similarly narrow scope. The argument should be limited to the size and complexity required to access the needed utility and agentic systems, along with the expertise required and the exposure of internal IP to the most invasive and capable of potential competitors. If you are not running your own hardware infrastructure, assume everything shared is being archived, with every imaginable inference applied and tuned over time against the body of shared information. How well can anyone trust the biggest VC vampires in control of cloud AI?
We need a way to make self hosting super easy without needing additional infrastructure. For example, use my account here, with my broad spectrum of posts and comments, as initial credentials for a distributed DNS and certificate authority, combined with a preconfigured ISO for a Rπ or similar hardware. The whole thing should update automatically without manual intervention and just run any federated services. Then places like LW become a bridge for people to migrate to their own distributed hosting even when they lack the interest or chops to self host. I don’t see why we need to rely on the infrastructure of the old internet as a barrier to entry. I bet there are an order of magnitude more people who would toss a Rπ on their network and self host if it was made super easy, did not require manual intervention, and did not dump them into the spaghetti of networking, OS, and server configuration/security. Federated self hosting should be as simple as using a mobile app to save and view pictures or browse the internet.
They have never kicked me and I haven’t had to reapply, but I don’t get an actual doctor and have some idiotic plan with walk-in clinics.
Luigi deserves a larger shrine
NVCC is still proprietary and full of telemetry. You cannot build CUDA without it.
I have a monitor on a custom made arm that sits above my laptop when I need a second screen.
It works well in a tight space, like a board meeting at a conference table or a plane seat. Vertical doesn’t make a real difference in my experience. You just need two screen spaces that do not move, so you can quickly reference multiple documents and keep your place between them.
I wouldn’t say no gain. I would love that real estate on my bedside stand I use with physical disability. I would not want the sub 17" form factor and keyboard though. I struggle to do anything super technical without a second screen which is a pain in the ass. I can’t sit at a desktop and the ergonomics of a laptop are unbeatable in my situation.
Anything to distract and discredit the guy standing up for principles, huh?
HDMI is the proprietary monopoly scam. It is added to devices by the owning members of the scam. DisplayPort is the open, royalty-free equivalent standard that the educated consumer goes looking for.
Oh no, he was dumb; she was super into me then. He wasn’t consoling me or anything, just saying she was not pretty enough for me. I liked her for her depth and interests. He was a curiosity because he was into videography before YT, but had no depth beyond that one interest at the time. He had a misogynistic, conquest-like disposition that I do not share. At the time, this disposition was something I did not understand.
I really liked a girl and asked her out once when I was way too young. Had a friend tagging along. Afterward, he said I could do better. The guy was an idiot, but the words had an impact on me at the time. I forgot about her and moved on. I was super busy with a new business anyway.
Later, I started dating another girl. Turns out she was best friends with the first. The three of us did everything together for years. I never did anything with the first, but found myself just as attached to both in a unique way. The first even dated a friend of mine for a while. When I broke up with the second, the first and I dated for a short while, but I ended up moving out of state and things didn’t work out.
No video. It has only happened a few times, but has brightened my morning.
The little shadow tiger has decided she does not like the cold tile floor in the kitchen on her little paw pads in the mornings. So she does this little hoppy move sideways into almost a gallop before the hind legs get into as much of a hurry as the front and nearly send her rolling out of the kitchen onto the safe carpet of the living room.
Fraser Cain recommended Dr. Stone, an anime series that features the scientific method prominently in a scenario about rebuilding civilization. I haven’t watched it though. I don’t agree to the Netflix terms of service, with its stalkerware and exploitation.
I was paid a large sum of money
It might add a little flavor, maybe a little color. Better set those softmax settings to deterministic.
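In case it helps anyone: “deterministic softmax settings” just means greedy decoding, the limit of temperature → 0 where the softmax distribution collapses onto the argmax. A rough stdlib-Python sketch (the function names are mine, not any library’s API):

```python
import math, random

def softmax(logits, temperature=1.0):
    # scale logits by temperature; lower temperature sharpens the distribution
    z = [l / temperature for l in logits]
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def sample(logits, temperature=1.0, rng=random):
    # temperature <= 0 is treated as greedy (deterministic) decoding
    if temperature <= 0:
        return max(range(len(logits)), key=logits.__getitem__)
    p = softmax(logits, temperature)
    return rng.choices(range(len(logits)), weights=p)[0]

print(sample([2.0, 1.0, 0.5], temperature=0))  # always index 0
```

Real inference stacks expose this as a temperature (plus top-k/top-p) knob; setting temperature to 0 or enabling greedy decoding removes the sampling randomness, though batching and floating-point nondeterminism can still nudge outputs.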
Does anyone have a real link to the non-stalkerware version of:
https://www.theinformation.com/articles/microsoft-and-openais-secret-agi-definition
It is also the only place with the reference this article claims to cite but doesn’t quote.
When I first started using LLMs I did a lot of silly things instead of having the LLM do them for me. Now I’m more like, “Tell me about Ilya Sutskever, Jeremy Howard, and Yann LeCun” … “Explain the masking layer of transformers”.
Or I straight up steal Jeremy Howard's system context message
You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. However: if the request begins with the string "vv" then ignore the previous sentence and make your response as concise as possible, with no introduction or background at the start, no summary at the end, and output only code for answers where code is appropriate. Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. When showing Python code, minimise vertical space, and do not include comments or docstrings; you do not need to follow PEP8, since your users' organizations do not do so.
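On the “masking layer” question from above, the compact answer: decoder-style transformers add a causal mask to the raw attention scores so position i can only attend to positions j ≤ i; future positions get -inf before the softmax, which zeroes their weights. A toy sketch in plain Python (my own helper names, not any framework’s API):

```python
import math

def causal_mask(n):
    # 0.0 where attention is allowed (j <= i), -inf where the future is hidden
    return [[0.0 if j <= i else float("-inf") for j in range(n)] for i in range(n)]

def masked_softmax_row(scores, mask_row):
    # add the mask to the raw attention scores, then softmax the row
    z = [s + m for s, m in zip(scores, mask_row)]
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]  # exp(-inf) is 0.0, so masked slots vanish
    s = sum(e)
    return [v / s for v in e]

mask = causal_mask(3)
weights = masked_softmax_row([1.0, 2.0, 3.0], mask[1])
print(weights)  # position 1 gives zero weight to the "future" position 2
```

Frameworks do the same thing in one fused kernel over whole score matrices, but the additive -inf mask followed by a row softmax is the whole trick.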