How AI coding tools are actually used for production work at major tech firms

Hello everyone. I wanted to bring this up because I’ve noticed many people claiming that AI-assisted coding isn’t suitable for real production software. This isn’t accurate at all.

I’m a software engineer with roughly a decade of experience, including five years in a top tech firm. I began my career as a systems engineer before transitioning into development.

Here’s how we effectively integrate AI tools into our production code workflow:

Step 1: We always kick off with a technical design document. Most of the heavy lifting occurs here. You start with a proposal and seek agreement from relevant teams before proceeding to develop the complete system architecture, including integrations with other services.

Step 2: Design review phases. Senior engineers rigorously evaluate your design document. It may seem tough, but it helps prevent issues down the line.

Step 3: Planning for development. We spend a few weeks documenting each subsystem that separate development teams will build.

Step 4: Organizing tasks and sprint planning. Developers collaborate with product managers to define detailed tasks and establish their development order.

Step 5: The actual coding phase. This is where AI tools significantly accelerate our process. We practice test-driven development, so I have the AI generate unit tests for the feature first, then use it to help write the feature code itself.
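To make the test-first rhythm concrete, here's a minimal sketch. The feature (a sliding-window rate limiter) and all names in it are invented for illustration; in practice the AI would draft tests like these from the feature spec, and the implementation would follow until they pass.

```python
import time
from typing import Optional


class SlidingWindowRateLimiter:
    """Allow at most `limit` calls per `window` seconds (hypothetical feature)."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self._timestamps: list = []

    def allow(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen out of the window.
        self._timestamps = [t for t in self._timestamps if now - t < self.window]
        if len(self._timestamps) < self.limit:
            self._timestamps.append(now)
            return True
        return False


# Tests written first — this is the part the AI drafts before any feature code exists.
def test_allows_up_to_limit():
    rl = SlidingWindowRateLimiter(limit=2, window=60)
    assert rl.allow(now=0.0) is True
    assert rl.allow(now=1.0) is True
    assert rl.allow(now=2.0) is False  # third call inside the window is rejected


def test_window_resets():
    rl = SlidingWindowRateLimiter(limit=1, window=60)
    assert rl.allow(now=0.0) is True
    assert rl.allow(now=61.0) is True  # the old timestamp has expired
```

Injecting the clock (`now`) instead of calling `time.monotonic()` inside the tests keeps them deterministic, which matters when you're reviewing AI-generated tests for correctness.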

Step 6: Code review procedures. We require approval from two additional developers before any changes can be merged. AI is also beginning to assist in the review process.
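The two-approval rule in Step 6 is usually enforced by the hosting platform rather than in code. As one hedged illustration (assuming GitHub; the team names are invented), a CODEOWNERS file routes the right reviewers automatically, while the "require two approvals before merge" rule itself lives in the repository's branch protection settings:

```
# .github/CODEOWNERS — requested reviewers per path (team names are examples)
*            @example-org/backend-core
/infra/      @example-org/platform-team
/payments/   @example-org/payments-team
```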

Step 7: Testing occurs in a staging environment before we deploy to production.

We’re seeing roughly a 30% improvement in delivery speed from initial concept to production release. That’s a considerable gain.

The main takeaway: start with robust design and architecture, build incrementally, and always write tests first.

sounds like your company’s way ahead of us. management still won’t trust ai tools for anything beyond basic autocomplete. quick question though - does the ai actually catch edge cases well, or do you end up writing those tests manually?

this is fascinating! quick question though - what specific issues does ai catch in code reviews that humans miss? and have you seen certain bugs pop up more with ai-generated code vs regular coding? i’d love to hear about your day-to-day experience with this!