  • For one thing: don’t bother with fancy log destinations. Just log to stderr and let your daemon manager take care of directing that where it needs to go. (systemd made life a lot easier in the Linux world).

    Structured logging is overrated since it means you can’t just do the above.

    Per-module (filterable) logging is quite useful, but it must be automatic (use __FILE__ or __name__ or whatever your language supports) or you will never actually do it. All semi-reasonable languages support some form of either macros-which-capture-the-current-module-and-location or peek-at-the-caller’s-module-name-and-location.
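
    A minimal sketch of both habits in Node-flavored TypeScript (compiled as CommonJS so __filename exists; the module tag is what you’d hang a filter off of):

    import * as path from "path";

    // One logger per module, tagged automatically - no hand-typed names to rot.
    function makeLogger(file: string) {
      const mod = path.basename(file, ".ts");
      return (level: string, msg: string) => {
        // stderr only; the daemon manager (systemd etc.) decides where it goes.
        process.stderr.write(`${level} [${mod}] ${msg}\n`);
      };
    }

    const log = makeLogger(__filename); // under ESM, use import.meta.url instead
    log("INFO", "started");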


    One subtle part of logging: never conditionally defer a computation that can fail. Many logging APIs ultimately support something like:

    if (log_level >= INFO) // or <= depending on how levels are numbered
        do_log(INFO, message, arguments...)
    

    This is potentially dangerous: if logging at that level is disabled, the code is never tested, so enabling it later can surface errors in evaluating the arguments or formatting them into the message. It also means any side-effects buried in those arguments silently stop happening.
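
    Concretely (TypeScript, with a made-up debugEnabled/doLog API standing in for the pattern above):

    declare function debugEnabled(): boolean;
    declare function doLog(msg: string): void;

    function handle(items: string[]) {
      if (debugEnabled()) {
        // Throws on an empty list - but only on the day someone finally
        // turns DEBUG on in production; until then this line never runs.
        doLog(`first item: ${items[0].toUpperCase()}`);
      }
    }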

    To avoid this, do one of:

    • never use the if-style deferring, internally or externally. Instead, squelch the I/O only; there’s a sketch of this after the list. This can have a significant performance cost (especially at the DEBUG level), which is why such APIs exist in the first place.
    • ensure that your type system can statically verify that runtime errors are impossible in the conditional block. This requires that you are using a sane language and logging library.
    • run your testsuite at every log level, ensure 100% coverage of log code, and hope that the inevitable logic bug doesn’t have an unexpected dynamic failure.
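
    The first option, sketched in TypeScript: the arguments and the formatting always run, identically at every level, and only the write itself is squelched:

    const LEVELS = { DEBUG: 0, INFO: 1, WARN: 2, ERROR: 3 } as const;
    type Level = keyof typeof LEVELS;
    let threshold: Level = "INFO";

    function log(level: Level, message: string) {
      // The caller has already evaluated and formatted everything -
      // only the I/O is conditional.
      if (LEVELS[level] >= LEVELS[threshold]) {
        process.stderr.write(`${level} ${message}\n`);
      }
    }

    log("DEBUG", `cache: ${JSON.stringify({ hits: 0 })}`); // evaluated even when squelched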




  • I’ve only ever seen two parts of git that could arguably be called unintuitive, and they both got fixes:

    • git reset seems to do two unrelated things for some people (move the branch head, and unstage files). Nowadays git restore exists for the file-level half.
    • the inconsistent difference between a..b and a...b commit ranges in various commands (git log a..b means “reachable from b but not from a”, while git diff a..b is just the plain diff between the two endpoints, and git diff a...b diffs from their merge base). This is admittedly obscure enough that I would have to look up the manual half the time anyway.
    • I suppose we could call the fact that man git foo didn’t use to work unintuitive.

    The tooling to integrate git submodule into normal tree operations could be improved though. But nowadays there’s git subtree for all the people who want to do it wrong but easily.


    The only reason people complain so much about git is that it’s the only VCS that’s actually widely used anymore. All the others have worse problems, but there’s nobody left to complain about them.



  • Unfortunately both of those are used in common English or computer words. The only letter pairs not used are: bq, bx, cf, cj, dx, fq, fx, fz, hx, jb, jc, jf, jg, jq, jv, jx, jz, kq, kz, mx, px, qc, qd, qg, qh, qj, qk, ql, qm, qn, qp, qq, qr, qt, qv, qx, qy, qz, sx, tx, vb, vc, vf, vj, vm, vq, vw, vx, wq, wx, xj, zx.

    Personally I have mappings based on <CR>, and press it twice to get a real newline.



  • Speed is far from the only thing that matters in terminal emulators though. Correctness is critical.

    The only terminals in which I have any confidence of correctness are xterm and pangoterm. And I suppose technically the BEL-for-ST extension is incorrect even there, but we have to live with that and a workaround is available.

    A lot of terminal emulators end up hard-coding a handful of common sequences, and fail to correctly ignore sequences they don’t implement. And worse, many go on to implement sequences that cannot be correctly handled.

    One simple example that usually fails: \e!!F. Nastier, however, are the terminals that ignore the intermediate bytes and execute some unrelated command instead.
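
    A quick way to poke a terminal with it, in anything that can write raw bytes (Node here). The sequence carries two intermediate bytes, and an unrecognized sequence should be swallowed whole:

    // ESC ! ! F - if the terminal handles this correctly, only "ok" appears;
    // a sloppy parser drops the intermediates and acts on a bare ESC F.
    process.stdout.write("\x1b!!F");
    process.stdout.write("ok\n");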

    I can’t be bothered to pick apart specific terminals anymore. Most don’t even know what an IR is.


  • I guess I forgot to mention the other implicit difference in concerns:

    When you are a game, you can reasonably assume: I have the user’s full focus and can take all the computing resources of their device, barring a few background apps.

    When you are an application, the user will almost always have several other applications running to a meaningful degree, and those eat into available resources (often in a difficult-to-measure way). Unfortunately this rarely gets tested.

    I’m not saying you can’t write an app using a game toolkit or vice versa, but you have to be aware of the differences and figure out how to configure it correctly for your use case.

    (though actually - some purely-turn-based games that do nothing until the user enters input do just fine on app toolkits. But the existence of such games means that game toolkits almost always ship some way of supporting the app paradigm. By contrast, app toolkits often lack ready support for continuous game paradigms … unless you use APIs designed for video playback, often involving creating a separate child “window”. Actual video playback is really hard; even the makers of dedicated video-playing programs mess it up.)



  • There tends to be one major difference between games and non-game applications, so toolkits designed for one are often quite unsuitable for the other.

    A game generally performs logic to paint the whole window, every frame, with at most some framerate-limiting in “paused” states. This burns power but is steady and often tries hard to reduce latency.

    An application generally tries to paint as little of the window as possible, as rarely as possible. Reducing video bandwidth means using a lot less power, but the load is variable, so latency sometimes gets demoted to “it would be nice”.
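
    The difference, sketched in browser-flavored TypeScript (drawEverything and drawRegion are hypothetical painters):

    declare function drawEverything(ctx: CanvasRenderingContext2D, t: number): void;
    declare function drawRegion(ctx: CanvasRenderingContext2D, damage: DOMRect): void;

    const canvas = document.querySelector("canvas")!;
    const ctx = canvas.getContext("2d")!;

    // Game style: repaint the whole scene every vsync, forever.
    function gameFrame(t: number) {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      drawEverything(ctx, t);
      requestAnimationFrame(gameFrame);
    }
    requestAnimationFrame(gameFrame);

    // App style: sit idle; when the model changes, repaint only the damage.
    function onModelChanged(damage: DOMRect) {
      requestAnimationFrame(() => drawRegion(ctx, damage));
    }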

    Notably, the implications of the 4-way choice between {tearing, vsync, double-buffer, triple-buffer} look very different between those two - and so does the question of “how do we use the GPU?”



  • Even logging can sometimes be enough to hide the heisenbug.

    Logging to a file descriptor can sometimes be avoided by logging to memory (which for crash-safety includes the possibility of an mmap’ed file, since the kernel will write it back even if the process dies, as long as the whole system doesn’t go down). But logging from every thread to a single section of memory can also be problematic (even without mutexes, atomics can be expensive and certainly perturb timing) - sometimes you need a separate per-thread log, and combine them in the log-reader tool.
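
    A sketch of the per-thread variant in TypeScript (per-worker, in JS terms): each thread owns one of these outright, so appending involves no synchronization at all, and the reader tool merges the drained entries by their timestamp prefix:

    class RingLog {
      private buf: string[];
      private next = 0;
      constructor(private capacity: number) {
        this.buf = new Array(capacity);
      }
      log(msg: string) {
        // A plain array store: no mutex, no atomics, minimal timing disturbance.
        this.buf[this.next % this.capacity] = `${performance.now()} ${msg}`;
        this.next++;
      }
      // Run only after the fact (or over a crash dump): oldest-first entries.
      drain(): string[] {
        const start = Math.max(0, this.next - this.capacity);
        const out: string[] = [];
        for (let i = start; i < this.next; i++) out.push(this.buf[i % this.capacity]);
        return out;
      }
    }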





  • I haven’t managed to break into the JS-adjacent ecosystem, but tooling around Typescript is definitely a major part of the problem:

    • following a basic tutorial somehow ended up taking multiple seconds just to transpile and run “Hello, World!”.
    • there are at least 3 different ways of specifying the files and settings you want to use, and some of them will cause others to be ignored entirely, even though it looks like they should be used.
    • embracing duck typing means many common type errors simply cannot be caught. It also means dynamic type checks are impossible, even though JS itself supports them (admittedly with oddities, e.g. with string vs String); there’s a snippet illustrating this after the list.
    • there are at least 3 incompatible ways to define and use a “module”, and it’s not clear what’s actually useful or intended to be used, or what the outputs are supposed to be for different environments.
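
    To make the duck-typing point concrete:

    interface Point { x: number; y: number }
    function takePoint(p: Point) {}
    takePoint({ x: 1, y: 2 }); // fine: right shape, no declared relationship
    // There is no runtime "is a Point" check to write - interfaces are erased.

    // JS's own dynamic checks, with the string/String oddity:
    typeof "hi" === "string";             // true
    typeof new String("hi") === "string"; // false - it's "object"
    "hi" instanceof String;               // false
    new String("hi") instanceof String;   // true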

    At this point I’m seriously considering writing my own sane-language-to-JS transpiler or using some other one (maybe Haxe? but I’m not sure its object model allows full performance tweaking), because I’ve written code in literally dozens of other languages without this kind of pain.

    WASM has its own problems (we shouldn’t be quick to call asm.js obsolete … also, C’s object model is not what people think it is) but that’s another story.


    At this point, I’d be happy with some basic code reuse. Have a “generalized fibonacci” module taking 3 inputs, and call it 3 ways: from a web browser on the client side, as a web browser request to server (which is running nodejs), or as a nodejs command-line program. Transpiling one of the callers should not force the others to be transpiled, but if multiple of the callers need to be transpiled at once, it should not typecheck the library internals multiple times. I should also be able to choose whether to produce a “dynamic” library (which can be recompiled later without recompiling the dependencies) or a “static” one (only output a single merged file), and whether to minify.
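
    The module itself would be the trivial part; it’s everything around it that the tooling fights. A sketch, assuming the 3 inputs are two seed values and an index (names are placeholders):

    // genfib.ts - the one shared module all three callers import.
    export function genFib(a: number, b: number, n: number): number {
      // f(0) = a, f(1) = b, f(n) = f(n-1) + f(n-2)
      for (let i = 0; i < n; i++) [a, b] = [b, a + b];
      return a;
    }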

    I’m not sure the TS ecosystem is competent enough to deal with this.