In a recent exploration of AI’s impact on the role of software consultants (see below), I highlighted the need for a clear AI policy within your company, even if you are not currently planning to adopt AI assistants in your software development lifecycle.
There are two main reasons for this:
- Unseen use of AI tools. Like it or not, your employees are already using ChatGPT, Copilot or similar solutions. No doubt about it. In fact, according to Salesforce: “More than Half of Generative AI Adopters Use Unapproved Tools at Work“. And by doing so, they are potentially exposing internal data and introducing code vulnerabilities into projects. An AI policy clarifies permissible actions and ensures the right processes are followed.
- Mitigating developer anxiety. A study by Pluralsight reveals that 43-45% of developers express anxiety about AI’s impact on their careers. An explicit AI policy, complemented by measures like upskilling, reframes AI as an asset rather than a threat, alleviating these concerns.
This AI policy could also be a great opportunity to rethink your processes based on a new way of doing things AI can provide (instead of just using AI to optimize the way you already do things). IMHO, this AI policy should also emphasize the need to keep the human in the loop. The AI proposes, the human decides. AI is a junior assistant that needs its suggestions, its code, … reviewed, not just published.
The policy should also be used to put in place controls to test for ethical biases in AI models. LLMs are getting better but you can still “provoke” them to give you a biased answer. Keep this in mind if you use them, or you risk a PR nightmare!
Finally, while you are right now mostly exploting how to use AI, you will soon find yourself building new AI components for your future projects crossing the bridge from AI user to AI developer. When that day comes, take a look at BESSER, our low-code platform for smart software.