• 29 Posts
• 12 Comments
Joined 1 year ago · Cake day: June 10th, 2023





  • Blaed@lemmy.world (OP) to Technology@lemmy.ml · Vicuna v1.5 Has Been Released!
    3 points · edited · 1 year ago

    I used to feel the same way until I found some very interesting performance results from 3B and 7B parameter models.

    Granted, it wasn’t anything I’d deploy to production, but the smaller models are great for prototyping quick ideas before you have to rent a GPU and spend time working with the bigger models.

    Give a few models a try! You might be pleasantly surprised. There’s plenty to choose from too. You will get wildly different results depending on your use case and prompting approach.

    Let us know if you end up finding one you like! I think it is only a matter of time before we’re running 40B+ parameters at home (casually).



  • FWIW, it’s a new term I’m trying to coin in FOSS (Free and Open-Source Software) communities. It’s a spin-off of ‘FOSS’, but for AI.

    There’s nothing wrong with FOSS as an acronym; I just wanted one focused on AI tech to set the right expectations for everything shared in /c/FOSAI.

    I felt it was a term worth coining given the varied requirements and dependencies AI/LLMs tend to have compared to typical FOSS stacks. Making this distinction matters for the semantics these conversations carry.