CoderSupreme@programming.dev to Asklemmy@lemmy.ml · edited 8 months ago · 43 comments
Sir_Kevin@lemmy.dbzer0.com · 1 year ago
The quality and amount of training done from model to model can vary substantially.
vrighter@discuss.tchncs.de · 1 year ago
Proving my point. The training set can be improved (until it's irreversibly tainted with LLM-generated data). The tech is not. Even with a huge dataset, LLMs will still have today's limitations.