• sugar_in_your_tea@sh.itjust.works · 1 day ago

    Not OP, but here’s my 2c as someone also part of the interview process.

    I had an interview where the candidate asked if they could use AI, and I told them to use whatever they normally use in development. I’ll skip the details, but basically the AI generated wrong code, which they missed and only corrected once we pointed it out. That happens. But then we had them refactor, the AI made the same mistake, they missed it again, we pointed it out, and they fixed it. Even that wasn’t the nail in the coffin. We then asked how confident they were in the code (we saw other errors we hadn’t mentioned), and they said 100%. They didn’t get the job.

    I don’t care what tools you use; I mostly care how you approach problems and whether you overstate your abilities. We’re in the business of producing working code on time, so we need devs who can at least notice when they need more time to check things. We were hoping they’d say they needed to write some tests to get a code review, not just ship it.

    Our coding projects are designed such that a competent dev can complete them quickly (5-10 min for the first-round “weeder” task, 20-30 min for the second-round “engineering” task), and we allow double the expected time to cover for nerves. In fact, we might hire you even if you fail spectacularly, provided you can explain your approach (i.e. it was just nerves).

    • rebelsimile@sh.itjust.works · 7 hours ago

      Thanks for this.

      I mentor lots of people, and I met with someone last week for the first time. As we were chatting, he mentioned several times things like “So I just asked the AI what to do, and then did that exact thing”… Uh, so… I don’t use AI that way.

      I started using it basically as soon as it came out, and like everyone else I began by writing out all these requirements, marveling at how it just spit back a whole program, and then obviously ran into all the pitfalls that entails.

      So, these days, my AI use is limited to what I’d call syntax conversion/lookup (like “What’s the syntax for instantiating and adding to a set in Python?”) and anything I’d immediately verify.
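
      To make that concrete, the kind of answer I’d expect back (and immediately sanity-check) is trivial stuff like:

        # Instantiating a set and adding to it; note {} makes a dict, not a set
        items = set()
        items.add("apple")               # add one element
        items.update(["pear", "plum"])   # add several at once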

      I should also say I’m aware of leetcode and things like that. I play around a lot on Codewars and see how others put together solutions, and I learn a lot from that. I really enjoy the silly grindy aspects of coding, like figuring out how to extract all the content from a JSON object that should be a string but can’t be a string for <reasons>, and building larger/complex systems like game engines (engines to make my games work, not the underlying engine). Components/React and that style of development makes a lot of intuitive sense to me as well.

      Anyway, I say all that to say I’d be sort of embarrassed to use AI during an interview, the same way I’d be embarrassed to need to google anything. But it would be primarily about syntax, and I’d be as likely to distrust whatever the AI said as to use it, unless it aligned with what I’d expect the code to look like.

      Do you mind if I ask what a “weeder” task might be vs. a more involved one? As someone who hasn’t worked on a dev team before, I only vaguely know what you mean by “We were hoping they’d say they needed to write some tests to get a code review”.

      • sugar_in_your_tea@sh.itjust.works · 3 hours ago

        A weeder task is a super simple programming task that should be second nature. Some options for different stacks:

        • JavaScript - use array functions to turn the input into a given output (2-3 lines of code)
        • React-specific - pass data from an input field to a text field in separate components (5-10 lines of code, we provide the rest)
        • Python - various list, dict, or set comprehension tasks (see the sketch below for the flavor)
        • Rust - something with iterators or traits
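
        For the Python flavor, think something like this (illustrative, not one of our actual questions):

          # Given (name, price) pairs, build a dict of name -> price for items under $10
          products = [("apple", 1.50), ("caviar", 45.00), ("bread", 3.25)]
          cheap = {name: price for name, price in products if price < 10}
          # {'apple': 1.5, 'bread': 3.25}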

        Basically, we just want to know if they can write basic code in the position we’re hiring for.

        I only vaguely know what you mean by “We were hoping they’d say they needed to write some tests to get a code review”.

        The “programming challenge” isn’t really a test of programming skill, but more of a software engineering challenge: can they turn vague requirements into a product the company could ship?

        Let’s say the task is to build a CLI store, where the user inputs items and quantities they want to buy, and the app updates inventory after a sale. For the sake of time, we’ll say data doesn’t need to persist between runs.

        I think any developer could build something like that in about 15-20 min, maybe less if they’re familiar with that kind of task. In Python, it’s basically an input() loop that queries an “inventory” dictionary and updates an “order” dictionary.
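
        A bare-bones happy-path sketch (rough illustration only; notice how many of the corner cases below it cheerfully ignores):

          # Happy path only: an input() loop that queries an inventory dict
          # and updates an order dict
          inventory = {"apple": 5, "bread": 3}  # hard-coded for the exercise
          order = {}

          while True:
              item = input("Item (blank to finish): ").strip()
              if not item:
                  break
              qty = int(input("Quantity: "))        # blows up on non-numeric input
              order[item] = order.get(item, 0) + qty
              inventory[item] -= qty                # KeyError on unknown items

          print("Order:", order)
          print("Remaining inventory:", inventory)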

        However, there are also a bunch of corner cases:

        • user inputs invalid item
        • insufficient inventory
        • invalid quantity
        • add the same item twice (not an error, maybe a warning?)
        • what if user decides to abandon the purchase and start over?
        • if we make it concurrent (i.e. a server with multiple users), how do we ensure inventory is correct?
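
        Handling the first few of those is mostly guard clauses replacing the loop body of the sketch above. One reasonable shape (illustrative, not a model answer):

          # Guards for unknown item, invalid quantity, insufficient inventory,
          # plus a warning on duplicate items
          if item not in inventory:
              print(f"Unknown item: {item!r}")
              continue
          try:
              qty = int(input("Quantity: "))
          except ValueError:
              print("Quantity must be a whole number.")
              continue
          if qty <= 0:
              print("Quantity must be positive.")
              continue
          if qty > inventory[item]:
              print(f"Only {inventory[item]} left in stock.")
              continue
          if item in order:
              print(f"Note: you already ordered {order[item]} of {item}.")
          order[item] = order.get(item, 0) + qty
          inventory[item] -= qty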

        After they build the basic solution, we ask an open-ended question: how confident are you that the code is correct? They wrote it in 20 minutes or so, so their confidence should be pretty low. I’ll then ask what they’d do to gain more confidence.

        A good software engineer wants to ensure not only that the happy path works, but that the corner cases are handled and keep working regardless of what other developers add in the future. So I’m looking for one or all of:

        • peer review of the code
        • unit tests (see the sketch below)
        • documentation
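
        On the unit test front, if the loop body were pulled into a plain function (add_to_order is a hypothetical name, just for illustration), the corner cases become directly testable:

          import pytest

          def add_to_order(inventory, order, item, qty):
              # Hypothetical refactor of the loop body into a testable unit
              if item not in inventory:
                  raise ValueError("unknown item")
              if qty <= 0 or qty > inventory[item]:
                  raise ValueError("bad quantity")
              order[item] = order.get(item, 0) + qty
              inventory[item] -= qty

          def test_happy_path():
              inv, order = {"apple": 5}, {}
              add_to_order(inv, order, "apple", 2)
              assert order == {"apple": 2} and inv == {"apple": 3}

          def test_unknown_item():
              with pytest.raises(ValueError):
                  add_to_order({"apple": 1}, {}, "pear", 1)

          def test_insufficient_inventory():
              with pytest.raises(ValueError):
                  add_to_order({"apple": 1}, {}, "apple", 2)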

        If they say it’s perfect and to ship it, that’s concerning, especially if I identified an issue while they were solving it. With the candidate who relied on AI, we flagged the same issue twice, they still said “ship it,” and we’d noticed other issues we never even mentioned.

        So we’re looking for both competence and self-awareness. Know your stuff, but also recognize your limitations. Meeting the requirements is only half of development IMO; you also need to maintain the code long term.