Maven (famous)@lemmy.zip to Programmer Humor@programming.dev · 2 months ago
Simple Optimization Trick
HugeNerd@lemmy.ca · 23 points · 2 months ago
Have you seen the insane complexity of modern CPUs? Ain't no one hand coding that like a 6502 in 1985.
musubibreakfast@lemmy.world · 10 points · 2 months ago
I wonder if there's anyone alive right now who would be capable of such a task.
Blackmist@feddit.uk · 10 points · 2 months ago
If the hardware was fixed, I don't see why not. Might not be as fast as the optimisations compilers do these days though. If you have to support thousands of types of GPU and CPU and everything else, then fuck no.
skuzz@discuss.tchncs.de · 6 points · 2 months ago
Even if one did, say using x86, it would still just be interpreted by the CPU into the CPU's native opcodes, as the legacy instruction sets are interpreted/translated.
BigDanishGuy@sh.itjust.works · 2 points · 2 months ago
> as the legacy instruction sets are interpreted/translated.

Wth? That's it, I'm sticking to the AVR then