• peppers_ghost@lemmy.ml · 6 days ago

    llama.cpp and PyTorch support it right now. CUDA isn’t available on its own as far as I can tell. I’d like to try one out, but the memory bandwidth seems to be ass: only about 25% as fast as a 3090. It’s a really good start for them, though.
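
    If anyone wants to sanity-check the bandwidth claim themselves, a rough device-to-device copy micro-benchmark in PyTorch is enough. Sketch below; the `"cuda"` device string is just a placeholder (this card registers its own PyTorch backend rather than CUDA, so substitute whatever device name its plugin exposes):

    ```python
    import time
    import torch

    # Placeholder device string -- swap in the backend name this card's
    # PyTorch plugin actually registers (an assumption, not from the post).
    device = "cuda" if torch.cuda.is_available() else "cpu"

    n = 256 * 1024 * 1024 // 4          # ~256 MiB of float32
    src = torch.empty(n, dtype=torch.float32, device=device)
    dst = torch.empty_like(src)

    # Warm-up so allocation and kernel launch overhead don't skew timing.
    for _ in range(3):
        dst.copy_(src)
    if device == "cuda":
        torch.cuda.synchronize()

    iters = 20
    t0 = time.perf_counter()
    for _ in range(iters):
        dst.copy_(src)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - t0

    # Each copy moves the buffer twice: one read plus one write.
    gbps = (2 * src.numel() * 4 * iters) / elapsed / 1e9
    print(f"effective bandwidth: {gbps:.1f} GB/s")
    ```

    Run the same script on a 3090 and on the new card to get a like-for-like ratio; a 3090 should land somewhere near its ~936 GB/s spec, so ~25% of that would put this card in the low-200 GB/s range.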