In our rapidly evolving technological landscape, the use of Artificial Intelligence (AI) to create websites and apps is becoming more and more common. It's not difficult to see why developers are drawn to AI coding assistants like ChatGPT or GitHub Copilot: if you want to produce code quickly and easily, it's tempting to let a bot do the work.

But while AI might turn your concept into reality with little effort, and make coding more accessible to non-technical users, quality control should not be overlooked. As AI tools continue to develop and take over more and more tasks from programmers, it's worth asking: is your code secure? And is the speed worth the security risks?

AI and coding - a natural fit?

Generative AI is the term used for artificial intelligence that can create different types of content - from pictures and videos to text. Software development is one of the early, popular use cases for such AI - according to a study by GitHub, a majority of developers are already using AI to create code for various tasks and applications. Proponents of AI-generated code often cite its promise of increased efficiency, reduced human error, and potential for code optimisation - the bot is able to analyse existing code and, at least in theory, make it more streamlined and lightweight.

While AI does have immense potential, it is important to recognise its significant limitations. A recent study by GitClear into developers using AI found that the use of AI assistants could be detrimental to overall code quality. The researchers compared AI-generated code to the contributions of short-term developers, who move from one project to another and as such are unable to fully integrate their work into the broader project. This hastily generated code can cause issues for the teams expected to maintain it afterwards. The study also found an increase in code churn (the percentage of code that has to be significantly altered or removed entirely soon after integration), and a higher amount of duplicated code compared to three years ago - both of which indicate a decrease in quality and more 'bad code' being written.
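As a rough illustration of the churn metric mentioned above, here is a minimal sketch of how such a percentage could be computed. This is a hypothetical simplification for illustration only - GitClear's actual methodology is more involved, and the function name and inputs here are invented:

```python
def churn_percentage(lines_added: int, lines_reworked: int) -> float:
    """Hypothetical churn metric: the share of newly added lines that
    were significantly altered or removed soon after integration."""
    if lines_added == 0:
        return 0.0
    return 100.0 * lines_reworked / lines_added

# Example: 2,000 lines merged this month, 300 rewritten within two weeks.
print(churn_percentage(2000, 300))  # 15.0
```

A rising value of this kind of ratio over time is what the study treats as a signal of declining code quality.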

The risks of AI code

Another big question mark around AI coding assistants is the possible effect their rising popularity might have on how engineers are compensated - not to mention the legal aspects and the complex questions around copyright infringement, plagiarism, intellectual property rights, and various other ethical considerations.

One of the most glaring concerns, however, is security. A study carried out by researchers at Cornell University points out some troubling trends regarding the security vulnerabilities resulting from the use of AI assistants by programmers. It found that developers using an AI coding tool wrote, on average, significantly less secure code than those who did not rely on AI assistance. It turns out that while these tools can boost efficiency, they can also lead to a kind of overconfidence - developers were found to be more likely to believe their AI-assisted code was more secure than it actually was, when in fact it contained more vulnerabilities than human-written equivalents. Meanwhile, users who took a more cautious approach to AI programming tools were found to write their prompts more carefully. Their lower level of trust in the AI resulted in more carefully created code, with fewer security issues.

In short, the quality of the output depends on the quality of the input. The AI does not build in security, or take security issues into consideration in any way, unless it is told to do so by someone who knows what they're doing. Meticulous preparation and testing by a human is still needed to ensure that apps, websites and software are built securely.
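To make this concrete, here is a classic example of the kind of vulnerability that can slip into generated code when security isn't asked for explicitly. This is a hypothetical login check, not taken from any specific AI output - it contrasts naive string interpolation with a parameterised query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name: str, password: str) -> bool:
    # Naive string interpolation: vulnerable to SQL injection.
    query = f"SELECT 1 FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name: str, password: str) -> bool:
    # Parameterised query: the driver treats the inputs as data, not SQL.
    query = "SELECT 1 FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# The injection string "' OR '1'='1" bypasses the unsafe check:
print(login_unsafe("alice", "' OR '1'='1"))  # True - attacker is let in
print(login_safe("alice", "' OR '1'='1"))    # False
```

Both versions look plausible at a glance, which is precisely the overconfidence problem the Cornell study describes: without a reviewer who knows what to look for, the unsafe version ships.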

What does this mean for developers?

So, will AI replace programmers? The current consensus among experts is that AI assistants should be viewed as collaborators, not a replacement for human workers. The importance of actual humans in the creation of secure and functional code cannot be overstated.

If you’re planning to use AI tools to write your code, it’s imperative that you are aware of the risks and possible shortcomings of your new assistant. The importance of software developers who know how to safely leverage AI tools is likely only going to increase in the future.
