And doesn’t understand LLMs, which don’t “learn” a damn thing after training is completed. The only variation after that comes from random sampling and the input they receive.
That’s not true. There are other ways of influencing the numbers these tools use. Most of them have their own feedback systems (thumbs-up/down votes), so human ratings can be collected and fed back into later fine-tuning of the LLM.
Diffusion models have LoRAs, and the models themselves can be fine-tuned on top of the base model.
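For what a LoRA actually does, here's a minimal sketch of the idea in NumPy (all shapes and values are made up for illustration): the base weight matrix stays frozen, and a small low-rank update `B @ A` is learned and added on top.

```python
# Minimal sketch of the LoRA idea (hypothetical shapes, not a real model):
# instead of retraining the full weight matrix W, learn a low-rank update
# and apply W_eff = W + (B @ A).
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # model dim 8, LoRA rank 2 (assumed values)

W = rng.normal(size=(d, d))      # frozen base weight
A = rng.normal(size=(r, d))      # trainable low-rank factor
B = np.zeros((d, r))             # B starts at zero, so W_eff == W initially

W_eff = W + (B @ A)              # effective weight with the adapter applied
assert np.allclose(W_eff, W)     # before training, the output is unchanged

# After training nudges B, the adapter changes behavior without touching W:
B += 0.1
assert not np.allclose(W + B @ A, W)
```

The point is that only `A` and `B` (2·8·2 = 32 numbers here, versus 64 in `W`) are trained, which is why LoRAs are cheap to make and share while the base model ships unmodified.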