I want to buy a new GPU mainly for SD (Stable Diffusion). The machine-learning space is moving quickly, so I want to avoid buying a brand-new card only for a fresh model or tool to come out and leave it behind the times. On the other hand, I also don't want to needlessly spend thousands of dollars extra chasing a 'future-proof' card.

I’m currently interested in SD and in training LoRAs, etc. From what I’ve heard, the general advice is just to go for maximum VRAM.

  • Is there any extra advice I should know about?
  • Is NVIDIA vs. AMD a critical decision for SD performance?

I’m a hobbyist, so a couple of seconds difference in generation or a few extra hours for training isn’t going to ruin my day.

Some example prices in my region, to give a sense of scale:

  • 16GB AMD: $350
  • 16GB NV: $450
  • 24GB AMD: $900
  • 24GB NV: $2000

edit: prices are for new cards; I haven’t explored the pros and cons of used GPUs

  • j4k3@lemmy.world · 18 hours ago

    My 16 GB 3080Ti is only annoying with Flux gens right now. Those take around 1.5–2 minutes each and need a lot of iterations, and my laptop heat-saturates from Flux. It could get better if the tools supported splitting the workflow between GPU and CPU like Textgen does, but at the moment it is GPU-only, which kinda sucks. Stuff like Pony runs fast enough, at around 20–30 seconds for most models.

    • comfy@lemmy.ml (OP) · edited · 17 hours ago

      I am using a lot of Pony* models at the moment, and Pony v7 looks like it will switch to AuraFlow or FLUX [1], so it’s useful to hear about your experience with it on a 3080Ti.