It is now clear: AI is transforming how software is developed.
Anyone leading software development teams needs to keep a close eye on this space to understand the tools’ rapidly evolving capabilities as well as their limitations and risks.
Here’s a snapshot (as of January 2025) of how I think AI tools currently fit into software development for teams working on established products and codebases.
High Priority
Have a clear company policy on use of AI
Your team needs to be clear about what they’re allowed (and not allowed) to do with AI tools.
If your company doesn’t already have a clear policy around this, then draw one up or push for one.
AI tools increasingly offer significant productivity boosts (and more) but can carry the risk of sending sensitive data to third parties. Company leaders should be clear and intentional about any tradeoffs they’re making here.
Provide access to AI tools
If at all possible, have a company-approved way for your team to access a broad range of up-to-date AI services, including:
- APIs to LLMs (e.g. via OpenAI, AWS, Azure or GCP)
- At least one chat interface (ChatGPT, Google Gemini or similar) [could be a self-hosted wrapper of an LLM API if necessary]
- An AI code editor such as Cursor or GitHub Copilot
(Some of the remaining points will only be possible if you have the above.)
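Once API access is in place, calling an LLM is just an HTTP request. Here’s a minimal sketch using only the Python standard library, assuming an OpenAI-compatible chat-completions endpoint and an API key in the `OPENAI_API_KEY` environment variable (the model name is illustrative — use whatever your company has approved):

```python
# Minimal sketch of calling an LLM chat-completions API over HTTP.
# Assumptions: an OpenAI-compatible endpoint; API key in OPENAI_API_KEY;
# the model name below is illustrative only.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Construct the HTTP request (without sending it)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
    )

def ask(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In practice you’d likely use an official SDK, but a thin wrapper like this is also roughly what a self-hosted chat interface would sit on top of.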
Encourage experimentation with AI tools
Since this is such a rapidly moving space, probably the most important thing is to encourage your team to experiment regularly with how AI can help in your particular context.
- Have a clear policy
  - As mentioned above, make it as easy as possible for people to know what’s allowed and what’s not. Be explicit about the most obvious use cases and make it easy for people to get clarity on others.
- Eliminate concerns about trivial costs
  - Consider allocating an ‘experimentation budget’.
  - Consider dedicated ‘experimentation’ API keys with capped usage limits.
  - Encourage people to expense things they’ve tested themselves.
- Champion experimentation
  - Lead by example: share your own experiments with AI tools.
  - Encourage people to share their own experiments and findings.
  - Consider including ‘sharing learnings from experimentation with AI’ in some or all individuals’ development plans.
Lean into AI code editors (Cursor, GitHub Copilot or similar)
Some developers remain skeptical of these tools (saying they produce low-quality code), but others are finding them very useful (particularly with models from Claude 3.5 Sonnet onwards).
Personally, I think these tools are already very useful and, since they’re only getting better, it makes sense for developers to become familiar with them and to share tips on getting the most from them with your particular codebase(s).
Be mindful of quality, however. These tools can make it faster to generate good code but, for now at least, they can also quickly produce a lot of bad code. So human oversight is needed. And you need to figure out what level of oversight is appropriate in your environment in different situations.
The ‘autocomplete’ features of these editors are, I think, a fairly safe way to get a productivity boost if developers are responsible for reading code they generate in this way and you maintain good automated test coverage alongside this.
More ‘agentic’ modes of operation, where you prompt the AI to go off and make changes to multiple areas of the code, can work well in some situations. They need much more care, though, not least because they can easily produce large sets of (potentially low-quality) changes. Developing code this way entails a very different workflow; one which, as an industry, we’re only just starting to figure out.
Discuss the team’s use of AI
Given how quickly the landscape of AI software development tools (and practices for working with them) is evolving, you probably want to be regularly sharing learnings and discussing the tradeoffs of different approaches within your team(s). Hopefully that’s happening organically. If not, you may want to raise the topic from time to time.
You may want to have an evolving team policy or guidelines around how, as a team, you’re currently choosing to use AI. (If so, make sure it’s reviewed and updated every few months, though!)
Encourage use of ChatGPT or similar for general productivity
There’s a reason why ChatGPT was one of the fastest-growing products of all time – AI chat interfaces are incredibly useful. Some people will use them a lot; some people may not for now. That’s okay. You want to get people experimenting with these tools and learning when to reach for them and how best to incorporate them into the non-coding aspects of their work.
Medium Priority
Experiment with front-end-focused coding agents
Tools such as V0 and Replit Agent can be useful in some cases for prototyping. For example, a product manager could potentially create quick prototypes of a UI with minimal help from a developer.
Equally, tools like this could be useful in rapidly developing small internal tools (or parts of them) where quality isn’t critical.
Experiment with auxiliary AI tools
AI tools are rapidly popping up to help with other specific aspects of software development: for example, tools that review code for security issues.
There’s too much to cover here in detail but I think it’s worth experimenting with any such tools that look well-aligned with your current priorities.
Low Priority
Be aware of end-to-end coding agents
There are also more ambitious coding agents such as Devin and OpenHands. These are designed to perform larger tasks spanning front-end and back-end work, but they’re still fairly unreliable. I would keep an eye on developments here but not rush to do anything with them for now.
What Do You Think?
What have I missed here?
Is there anything you disagree with?
And are there tools or new ways of working that you and your team are finding especially valuable?