Scientists used "knowledge distillation" to condense Stable Diffusion XL into a much leaner, more efficient AI image generation model that can run on low-cost hardware.
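(For context: knowledge distillation trains a small "student" model to mimic a larger "teacher" model's outputs. The sketch below is a toy illustration of the core matching objective, not the researchers' actual training code; diffusion-model distillation typically trains the student to reproduce the teacher's denoising predictions with an MSE-style loss like this:)

```python
def distillation_loss(teacher_outputs, student_outputs):
    """Mean squared error between the teacher's and the student's
    predictions: the student is trained to reproduce the teacher."""
    if len(teacher_outputs) != len(student_outputs):
        raise ValueError("outputs must be the same length")
    n = len(teacher_outputs)
    return sum((t - s) ** 2 for t, s in zip(teacher_outputs, student_outputs)) / n

# A student that matches the teacher exactly incurs zero loss;
# any deviation is penalized quadratically.
print(distillation_loss([0.5, -1.0], [0.5, -1.0]))  # 0.0
print(distillation_loss([0.5, -1.0], [0.0, 0.0]))   # (0.25 + 1.0) / 2 = 0.625
```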
No, lol. Well, at least I'm not 100% familiar with the Pi's new offerings, but idk about their PCIe capabilities. Direct quote:
The tool can run on low-cost graphics processing units (GPUs) and needs roughly 8GB of RAM to process requests — versus larger models, which need high-end industrial GPUs.
Makes your question seem silly when I try to imagine hooking up my GPU, which is probably bigger than a Pi, to a Pi.
I've been running all the image generation models on a 2060 Super (8GB VRAM) up to this point, including SDXL, the model they "distilled" theirs from… Reading the article, I'm not really sure what exactly they think they're differentiating themselves from…
Is that feasible on a Raspberry Pi?
Probably. FastSD CPU already runs on a Raspberry Pi 4.
Jeff Geerling has entered the chat
There are three models and the smallest one is 700M parameters.
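Back-of-envelope, that's why a modest GPU suffices: weight memory is roughly parameter count times bytes per parameter. (The SDXL comparison figure below is my own approximation, not from the article.)

```python
def model_memory_gb(num_params, bytes_per_param=2):
    """Rough weight-only memory footprint in GB.
    bytes_per_param: 2 for fp16, 4 for fp32."""
    return num_params * bytes_per_param / 1e9

# The smallest 700M-parameter model, fp16 weights:
print(model_memory_gb(700e6))  # 1.4 GB, comfortably inside an 8GB card
# Compare SDXL's roughly 2.6B-parameter UNet (approximate figure):
print(model_memory_gb(2.6e9))  # 5.2 GB for weights alone, before activations
```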
Lol, read the article; it cites "8GB VRAM", and if I had to guess it will only support Nvidia out of the gate.