• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • This is the real answer.

    There are still, in the year 2023, COBOL developers graduating and getting hired to work on software.

    My alma mater’s website runs on PHP.

    The investment to flip even a microservice from one language to another is REALLY high, and most companies won’t pay unless there’s a significant pain point. They might not greenfield new projects with it anymore - but it will still be around effectively forever.


    You’re not wrong. Having to figure out which element is borked in a YAML file is not great. And YAML implementations are all over the place, so even though tools do exist, they’re mediocre at best.

    But, to be fair, Python has always done the same to me. As a fellow Neuro-spicy (and with a background in Java, C#, and JavaScript), even though the tools are better at pointing you in the right direction, significant whitespace (or indentation) is significant whitespace (or indentation). 🤷‍♂️
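
    For the uninitiated, here’s a minimal (hypothetical) sketch of the kind of thing that bites you - the two versions below differ by a single space:

    ```yaml
    # Version 1: what you meant - "tls" nests under "server"
    server:
      port: 8080
      tls: true
    ---
    # Version 2: one stray space before "tls" and most parsers
    # reject the document outright, often with a vague error
    server:
      port: 8080
       tls: true
    ```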




  • Is it just an extreme difficulty spike at this point that I have to trial-and-error through, or am I doing anything wrong?

    I would say this is the biggest ‘aha’ moment for pretty much any developer - the first time you go from “I built this myself” to “a team built this and has supported it for 10+ years”. Not only can a team of three or four write a lot of code in ten years - they’ll optimize the Hell out of it. It’s ten years’ worth of edge case bugs, attempts to go faster, new features, etc. And it’s ‘bumpy’ because some of it was done by Dev A in their own style, some of it by Dev B, and so on. So you’ll find the most beautiful implementation of a problem you hadn’t even considered before sitting right next to a “Hello World”-level implementation of something else.

    The biggest thing you can do to help yourself out is make sure you’re clear on their branching strategy. When you’re the only one working on your code, it’s fine to push to main and occasionally break things - no harm, no foul. But in a mature codebase, a butterfly flapping its wings in some obscure constructor can have a blast radius of ‘okay, we have to rebase to the last stable commit’. When in doubt, ‘feature/(what you’re working on)’; but there might be more requirements than that, and it’s okay to ask. Some teams track feature requests by number on a kanban board and put that number in the branch name, etc.

    Get the code pulled down, get it running on your machine (no small task), git checkout -b from wherever you’re branching off of (hopefully main or master, but again, it’s okay to ask), and then figure out what the team’s requirements are for PRs. Do they have any testing environments besides building it locally? Do they use linting or some other process to enforce style in PR reviews?

    And then…don’t move a button. (Unless that button actually needs to be moved!) But try to mimic something that already exists. Create a second button in the new location. Steal from the codebase - implement something small in a way that’s been done before. Once the new button works - then remove the old button and see what happens.

    The longer you deal with a codebase (and the attendant issues and feedback) the more you’ll feel yourself drawn to certain parts of the code that you’re familiar with.

    Anyway, hope that advice helps! But most of all, don’t be scared. You will break things unintentionally. Your code will break things. If there’s no process in place to catch it before it happens, that’s not your fault; that’s the senior dev’s/owner’s fault. But do try to limit the damage by using good branching strategies, only PRing after linting/testing, and otherwise following the rules.


  • The most compelling argument I heard is that WASM can’t manipulate the DOM and a lot of people don’t want to deal with gluing JS code to it, but aside from that

    But other than that, Mrs. Lincoln, how was the play?

    You’ve gotten several other answers that are true and correct - the pain of implementation at this point is greater than the pain points that WASM solves. But this one is also non-trivial - most of what JavaScript should be doing on a webpage is DOM manipulation.

    At some point, WASM will either come out with a killer feature/killer app/use case that JavaScript (and all the libraries/frameworks out there) hasn’t figured out how to handle, and it will establish a niche (besides “JavaScript is sort of a dumb language, let’s get rid of it”). Depending on the use case, you might see some of the 17.4 million (estimated) JavaScript developers chuck it for…what? Rust? Kotlin? C? C#? But the switching costs are non-trivial - and frankly, if you still have to write JavaScript in order to manipulate the DOM…well, what are we solving for?
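
    To make that glue concrete, here’s a minimal sketch (the module name and its sum export are hypothetical): the number crunching happens in the WASM module, but every DOM read and write still goes through JavaScript/TypeScript:

    ```typescript
    // Hypothetical module exporting a single sum(a, b) function.
    async function run(): Promise<void> {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch("math.wasm") // hypothetical module name
      );
      const sum = instance.exports.sum as (a: number, b: number) => number;

      // WASM can't touch the DOM - inputs and outputs go through JS/TS.
      const a = Number((document.getElementById("a") as HTMLInputElement).value);
      const b = Number((document.getElementById("b") as HTMLInputElement).value);
      document.getElementById("result")!.textContent = String(sum(a, b));
    }

    run();
    ```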

    If you’re writing a web app where one of the WASM languages gives you a real competitive advantage, I’d say that’s your use case right there. But since most web applications are basically strings of API calls looped together to dump data from the backend into a browser, it’s hard to picture wider adoption. I’ve been wrong before, though.


  • For a second I thought my github was going viral. ;)

    This is a hilarious (and interesting!) read.

    As a young(er) and slightly shittier web developer - before I really understood or could implement promises effectively, or knew why my code would ‘race’ and try to attach event listeners before the DOM had actually loaded - I was known to implement a 2-second timeout.

    It wasn’t pretty, but it worked!
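
    For anyone who hasn’t committed this particular sin, a quick sketch of the hack versus the boring, correct fix (onSubmit is a hypothetical handler):

    ```typescript
    // The sin: hope the DOM shows up within 2 seconds.
    setTimeout(() => {
      document.getElementById("submit")?.addEventListener("click", onSubmit);
    }, 2000);

    // The fix: let the browser say when the DOM is actually ready.
    document.addEventListener("DOMContentLoaded", () => {
      document.getElementById("submit")?.addEventListener("click", onSubmit);
    });

    // Hypothetical handler, defined here so the sketch stands alone.
    function onSubmit(): void {
      console.log("clicked");
    }
    ```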



  • Junior-ish DevOps with some blue/green experience.

    It’s a very thorny problem, and I think your willingness to put up with the trade-offs is really what drives which pattern of architecture you choose.

    Most of our blue/green deployment types use a unitary database behind the backend infra. There are a lot of ways to implement changes to the database (mostly done through scripting in the pipeline; we don’t typically use Hibernate or other tooling that wants to control the schema more directly), and it avoids the pain of trying to manage consistency across multiple db instances. It helps that most of our databases are document types, so a lot of db changes can be implemented via flag, as sketched below. But I’ve seen some SQL implementations for table changes that lend themselves to blue/green - you just have to be very mindful not to bork the current live app with what you’re doing in the background. It requires some planning - not just “shove the script into source control and fire the pipeline.”
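
    As a concrete (and hypothetical) sketch of the via-flag approach with a document store: the new code path tolerates both the old and new document shapes, so blue and green can share one database during the cutover:

    ```typescript
    // Old documents: { name: "Ada Lovelace" }
    // New documents: { firstName: "Ada", lastName: "Lovelace" }
    interface UserDoc {
      name?: string;       // legacy shape
      firstName?: string;  // new shape
      lastName?: string;
    }

    // Green (new) code handles both shapes, so the unitary db can
    // serve blue and green at the same time during a deploy.
    function displayName(doc: UserDoc): string {
      if (doc.firstName !== undefined) {
        return `${doc.firstName} ${doc.lastName ?? ""}`.trim();
      }
      return doc.name ?? "unknown";
    }

    // Writes go out in the new shape; a backfill migrates old docs in
    // the background, and the legacy branch gets deleted afterwards.
    ```

    The same expand-then-contract idea is what makes SQL table changes blue/green-friendly: add the new column first, deploy code that handles both, backfill, and only then drop the old column.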

    If we were using SQL with a tightly integrated schema and/or we couldn’t feature flag, I think we’d have to monkey around with blue/greening the database as well. But consistency is non-trivial, especially depending on what kind of app it is. At least one time, when a colleague set up a database stream between AWS accounts, he managed to create a circular dependency, which…well, it wasn’t prod, so it wasn’t a big deal, but it easily could’ve been.

    The data transfer fees are really what kills you. We managed to triple our dev AWS bill prototyping some database streams at one point. Some of it was undoubtedly inefficient code, but the point stands. With most blue/green infra, your actual costs are a lot less than 2x what a ‘unitary’ infra would cost, because most of the infra is pay-per-use and isn’t needed except when you deploy new code anyway. But database consistency, at least when we tried it, was way MORE expensive than 2x the cost of a unitary db, because of the compute and transfer fees.




    Tech doesn’t really self-select for well-balanced, socially confident, neurologically normal folks.

    I’m sure those people are in tech and have success as well, but the stereotype of the “hacker nerd” didn’t spring out of nothing. The obsessiveness and desire to be right and know everything that make IT geniuses can also make those same folks really, really hard to be around.

    People that are ostracized for their socially aberrant behavior usually (not always!) have sympathy for other outcast groups, whatever the reason.

    And you’re right, too - writing code is one of those ultimate bullshit tests: either it works, or it doesn’t. Computers don’t care about your pedigree or your appearance or even your personality. Nice guys who write shit code might have management or a product team in their future, but they don’t usually write code for very long. But good devs are hard to find, so even the most strait-laced companies are willing to bend a bit when it comes to talented developers.

    My $.02, and worth every penny 😂


    Just because it’s ‘the hot new thing’ doesn’t mean it’s a fad or a bubble. It doesn’t not mean it’s those things either, but…the internet was once the ‘hot new thing’, and it was both a bubble (completely overhyped at the time) and a real, tidal-wave change to the way people lived, worked, and played.

    There are already several other outstanding comments, and I’m far from a prolific user of AI like some folks, but - it allows you to tap into some of the more impressive capabilities that computers have without knowing a programming language. The programming language is English, and if you can speak it or write it, AI can understand it and act on it. There are lots of edge cases, as others have mentioned, where AI can come up with answers (thanks to both the range and depth of its training data) that seem to break new ground. It’s not, of course - it’s putting together data points and synthesizing an output - but even if mechanically it’s 2 + 3 = 5, it’s really damned impressive if you don’t have the depth of training to know what 2 and 3 are.

    Having said that, yes, there are some problematic components to AI (from my perspective, the source and composition of all that training data is the biggest one), and there are obviously use cases that are, if not problematic in and of themselves, at the very least troubling. Using AI to generate child pornography would be one of the more obvious cases - it’s not exactly illegal, and no one is being harmed, but is it ethical? And there are the broader societal concerns as well - there are human beings in a capitalist system who have trained their whole lives to be artists and writers, and those skills are already tragically undervalued for the most part - do we really want to incentivize their total extermination? Are we, as human beings, okay with outsourcing artistic creation to this mechanical turk (the concept, not the Amazon service)? And whether we are or we aren’t, what does it say about us as a species that we’re considering it?

    The biggest practical reason not to get too swept up in AI is that it’s limited in weird and not totally understood ways. It ‘hallucinates’ data. Even when it doesn’t make something up, the first time you run up against the edges of its capabilities - it suggests code that doesn’t compile, or an answer that is flat-out, provably wrong, or it says something crazy or incoherent, or generates art featuring humans with the wrong number of fingers or body horror or whatever - well, then you realize that you should sort of treat AI like a brilliant but troubled and maybe drug-addicted coworker. Man, there are some things that it is just spookily good at. But it needs a lot of oversight, because you can cross over from spookily good to what-the-fuck pretty quickly and completely without warning. ‘Modern’ AI is only different from previous AI systems (I remember chatting with Eliza in the primordial moments of the internet) because it maintains the illusion of knowing much, much better.

    Baseless speculation: I think the first major legislation of AI models is going to require an understanding of the training data and of ‘not safe’ uses - much like ingredient labels were a response to unethical food production, and much like the government stepped in to regulate how, where, and why cars could be used as they grew in size, power, and complexity - to protect users from themselves, and to protect everyone else from the users. There’s also, at some point, I think, going to be some major paradigm shifting around training data. There are already rumblings, but the idea that data (including this post!) that was intended for free consumption by other human beings could be ingested into an AI product and then commercialized on a grand scale, possibly even to the detriment of the person who created it, is troubling.


  • Maybe this is because I’m still relatively junior (2ish years), but my favorite question to ask is, “What are some of the characteristics you’re looking for in someone in this role?”

    I use it as a vibe check, especially at the end of interviews. If they start reading my resume back to me, or listing the things we’ve talked about during the interview…well, that’s a good sign. If they start describing a bunch of stuff that we didn’t talk about, it’s a chance to throw a ‘Hail Mary’ pass and show them that that’s me, too - maybe we didn’t touch on something that was important to them, but I have relevant experience or background.

    If they start describing somebody else…well, that’s not great.




  • This kind of implies that you’re crunching and then ‘recovering’. That may or may not be something you have any control over - there’s a lot that goes into creating an unsustainable ‘sprint’, and probably 99.8% of it is not related to actual developers or code - but ideally you would be using these ‘lulls’ to try to pull stuff out of the next crunch so maybe it won’t hurt so bad.

    In reality, if I’m coming off of a bad crunch, I do anything I can to avoid burnout. Sometimes that’s ‘fun’ backlog items or research for future features or something else I’m excited about, sometimes it’s studying for certs, sometimes it’s cutting myself some slack (@cianuro@programming.dev watching Netflix feels familiar!). But again - whatever it takes to recharge my batteries and feel less bitter and shitty.

    The surest sign that I’m coming off a crunch, though, is that I start reinforcing work/life boundaries. “It’s 5pm, I’m logging off, and I’m not going to think about work shit willingly until tomorrow.”


  • I think you have an interesting background and potentially interesting technical skills, and I could totally see you catching on with someone and having a fantastic career. I could also see why it would be a weird or awkward fit, that you might be totally overwhelmed, and possibly even hate it. Let me qualify my answer(s) and see if that helps at all.

    I feel like, at its heart, DevOps is just being passionate about tinkering and technology. The best DevOps engineers I know love nothing more than to nerd out about…well, all kinds of stuff, from K8s to Linux distros to build tools to code. DevOps is a practice, not a skill set - and that’s reflected in the fact that there’s no ‘base’ skill set for DevOps engineers. I’ve known developers, sysadmins, even help desk type folks who found their way into the field and were successful. It just depends.

    It kind of feels like you have the heart of a tinkerer, and the fact that you have a MS in a hard science suggests that you have the brainpower to hack it - maybe literally. :)

    That said - what would worry me if I were considering hiring you is that you don’t really have any exposure to Software Development Lifecycle (SDLC) concepts. Maybe I’m too stupid to understand all the acronyms above, but in my (limited) experience, having a good handle on the SDLC is sort of the beating heart of DevOps - at least in part because having the infrastructure ready to mate up with the code at the right time and place is like 80% of my gig. Too early is (potentially) a security vulnerability; too late and the dev team misses all their sprint targets. You don’t have to write code, exactly (although I wish I wrote more), but you have to be able to ‘follow along’ with the dev team, especially when you’re troubleshooting.

    For SRE in particular - you have a lot of nice sysadmin-y background skills, but understanding design patterns and telemetry would be the thing I’d be most nervous about for you. Scalability as well - although that’s hard for almost everybody. For an SRE to improve reliability, you have to be able to really home in on what’s breaking - and once you’ve gotten the big pieces sorted, you need to understand resource usage. All of that points toward good instrumentation (and good instrumentation practices).

    I joke that reading logs is my superpower - both because my devs, bless them, don’t do it, and also because if we’ve done a good job building the application, build/deploy pipelines, and infrastructure, your alerts and instrumentation will tell you exactly where any pain points are happening, and make it a lot easier to figure out where and how to focus your efforts moving forward.

    So, after that wall of text - I’d point you towards the cloud. AWS is the largest/most widely known, but arguably kind of opinionated in terms of implementation. Still, AWS Solutions Architect is a pretty good ‘gold standard’ type certification. If you’re more familiar with GCP or Azure, do the ‘associate’ level certs there.

    Another obvious thing I didn’t see in your background - VCS. Git gud, as it were. I’m a big fan of hanging pretty much all your personal projects on GitHub. Mine is atrocious since I got hired, but before that I had a full year straight of commits. Sometimes it was impressive stuff; most of the time it was just messing around with code - but every company that gave me an offer letter mentioned it. YMMV.

    Finally - you might expand your search a little wider (SysOps instead of SRE off the bat? DevOps as well? Maybe straight software dev, at a company where your science background would be a real value-add, is something to look at) and also be prepared to ‘take a step back’ if you do jump. I’d definitely hire you to see how things go, but I’d want you to come in as a junior, and based on what you wrote above, that’s probably a bit of a pay cut for you.

    TL;DR - Do cloud certs, practice on GitHub so employers can see what you’re working on, consider SysOps/DevOps as well as SRE.

    Best of luck to you!


  • There’s probably a bit of a disparity, but it’s not nearly as much as you’re making it out to be.

    In the US it depends greatly on the industry and company - I don’t know anybody making $200k-$400k in software development (CTO? Sure. Devs writing code? Nah), but I also don’t know or work with anybody in the FAANG world. Those are the companies paying $100k+ for a junior.

    I live and work in a lower cost-of-living area, for a company that’s not ‘software first’, and our juniors come in between $50k-$75k. And that’s not the lowest I’ve seen for junior engineers starting out.

    That said - it’s also not unusual for mid-career folks to be in the $100k-$150k range, with seniors/leads moving up from there.

    So with all that in mind - some of it is market forces (are there more devs and/or fewer dev jobs in Europe than in the US? Potentially less mobility?), but one of the bigger causes (I’d guess, anyway) is the lack of FAANG-type “Masters of the Universe” companies. Part of the reason juniors and seniors command that kind of pay in the US is that the rates the FAANGs pay tend to ‘trickle down’. The average mid-career/senior dev may not be interested in (or capable of!) working at a FAANG - but if the other people in their hiring pool are, they’re still going to command that kind of salary.

    As a point of comparison - my understanding is that financial services is sort of the same story. Most Euro banker/stockbroker/finance bro types are pretty heavily underpaid compared to their US counterparts. Some of that is regulatory, but a lot of it is that there are more high-paying jobs in the US, mostly at the big multinational conglomerates you can name off the top of your head (Goldman Sachs, Bank of America, Citibank, JP Morgan), and that tends to drag the scale up throughout the whole system. A rising tide lifts all boats.

    Anyway - I don’t have any research or statistics to back any of these suggestions up - hopefully Cunningham’s Law gets us a ‘real’ answer. :)