Husband, father, kabab lover, history buff, chess fan and software engineer. Believes creating software must resemble art: intuitive creation and joyful discovery.

🌎 linktr.ee/bahmanm

Views are my own.

  • 48 Posts
  • 99 Comments
Joined 1 year ago
Cake day: June 26th, 2023

  • Good question!

    IMO a good way to help a FOSS maintainer is to actually use the software (esp. pre-release) and report bugs instead of working around them. Besides improving the project’s quality, I’d find it very heart-warming to receive feedback from users; it means people out there not only use the software but care enough about it to take the time to report bugs and test patches.




  • I usually capture all my development-time “automation” in Make and Ansible files. I also use makefiles to provide a consistent set of commands for the CI/CD pipelines to work w/, in case different projects use different build tools. That way CI/CD only needs to know about make build, make test, make package, … instead of Gradle/Maven/… specific commands.
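
    As a rough sketch of that facade idea (the build/test/package target names come from the comment above; the use of Gradle and the particular Gradle tasks are my assumptions, not taken from an actual project):

    # Facade makefile: CI/CD only ever calls these targets, no matter which
    # build tool the project actually uses.

    .PHONY : build test package

    # Delegate to the project's real build tool (Gradle here, as an example).
    build :
    	./gradlew assemble

    test :
    	./gradlew test

    package :
    	./gradlew jar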

    Most of the time, the makefiles are quite simple and don’t need many comments. However, there are times when that’s not the case, hence the need to write a line of comment on particular targets and variables.


  • Can you provide what you mean by check the environment, and why you’d need to do that before anything else?

    One recent example is a makefile (in a subproject) w/ a dozen targets to provision machines and run Ansible playbooks. Almost all the targets need at least a few variables to be set. Additionally, I needed any fresh invocation to clean the “build” directory before starting the work.

    At first, I tried capturing those variables w/ a bunch of ifeqs, shells and defines (roughly sketched after the list below). However, I wasn’t satisfied w/ the results for a couple of reasons:

    1. Subjectively speaking, it didn’t turn out as nice and easy-to-read as I wanted it to.
    2. I had to replicate my (admittedly simple) clean target as a shell command at the top of the file.
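
    A rough reconstruction of that first attempt (not the actual file; the variable names are placeholders):

    # Validate required variables at parse time.
    ifeq ($(VAR1),)
      $(error VAR1 must be set)
    endif

    ifeq ($(VAR2),)
      $(error VAR2 must be set)
    endif

    # Optional variable w/ a default.
    VAR3 ?= foo

    # Clean the build directory on every invocation, duplicating what the
    # clean target already does.
    $(shell rm -rf build)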

    Then I tried capturing that in a target using bmakelib.error-if-blank and bmakelib.default-if-blank as below.

    ##############

    # Guard target: fail if any required variable is blank and give the
    # optional one a default value.
    .PHONY : ensure-variables

    ensure-variables : bmakelib.error-if-blank( VAR1 VAR2 )
    ensure-variables : bmakelib.default-if-blank( VAR3,foo )
    
    ##############
    
    .PHONY : ansible.run-playbook1
    
    ansible.run-playbook1 : ensure-variables cleanup-residue | $(ansible.venv)
    ansible.run-playbook1 : 
    	...
    
    ##############
    
    .PHONY : ansible.run-playbook2
    
    ansible.run-playbook2 : ensure-variables cleanup-residue | $(ansible.venv)
    ansible.run-playbook2 : 
    	...
    
    ##############
    

    But this was not DRY: I had to repeat the ensure-variables and cleanup-residue prerequisites on every single target.

    That’s why I thought there might be a better way of doing this, which led me to the manual and then to the method I describe in the post.


    running specific targets or rules unconditionally can lead to trouble later as your Makefile grows up

    That is true! My concern is that when the number of targets which don’t need that initialisation grows, I may have to rethink my approach.

    I’ll keep this thread posted on how this pans out as the makefile scales.


    Even though I’ve been writing GNU Makefiles for decades, I still am learning new stuff constantly, so if someone has better, different ways, I’m certainly up for studying them.

    Love the attitude! I’m in the same boat. I could have just kept doing what I already knew, but I thought a bit of manual reading would be well worth it.



  • First off, I was ready to close the tab at the slightest suggestion of using Velocity as a metric. That didn’t happen 🙂


    I like the idea that metrics should be contained and sustainable, though I don’t agree w/ the suggested metrics.

    In general, it seems they are all designed around the process and not the product. In particular, there’s no mention of the “value unlocked” in each sprint: that’s an important one for an Agile team as it holds Product accountable for understanding the $$$ value of the team’s effort.

    The suggested set, to my mind, is formed around the idea of a feature-factory line and its efficiency (assuming that is even measurable). It leaves out the “meaning” of what the team achieves w/ that efficiency.

    My 2 cents.


    Good read nonetheless 👍 Got me thinking about this intriguing topic after a few years.