ChatGPT for developers: friend or foe?  

Michael Man
DevSecOps Advisor at Veracode

Michael Man, DevSecOps Advisor at Veracode, explores whether developers should be wary of ChatGPT – or embrace it as their ultimate assistant.

Since the launch of ChatGPT, there has been endless speculation about how generative AI will affect jobs across sectors. Like the pocket calculator or the internet before it, generative AI is a tool set to revolutionise work and become essential to our day-to-day lives. It does, however, raise concerns that some roles will become redundant as a result. 

Perhaps the role most in question is that of the software developer, as generative AI tools are already being used to write code. For now, the quality of that code plays in developers' favour: with 52% of answers reported to contain errors, the output is more likely to be wrong than right, and humans remain indispensable for producing quality code. But it does raise the questions: should developers be wary of ChatGPT, or will it become their ultimate assistant? And how will it affect software security? 

A study presented at Black Hat in 2022 showed that 40% of code written by large language models trained on vast troves of unrefined data contained vulnerabilities. The rapid advancement of generative AI tools over the past 12 months calls for another look at how they will affect software security, especially as big tech firms are already incorporating generative AI into their software. And with use growing, blocking generative AI outright within an organisation is unrealistic. 

The new normal 

We are only now starting to see the potential of large language models and generative AI, and ChatGPT is only the beginning. For developers, ChatGPT is currently no more than an assistant, albeit a very good one – but this could change. 

A big issue for ChatGPT, and for any generative AI model, is trust. One can argue that trust is simply a matter of time, but developers are unlikely to place full confidence in generative AI tools until successive updates have demonstrably improved their capabilities.

The truth is that as generative AI models improve and gain users' confidence, they will no longer be just a 'nice to have' for businesses but will quickly become a 'must have'. Businesses that fail to use them effectively risk losing ground to their competitors.  

ChatGPT as a foe  

ChatGPT's ease of use and often uncannily accurate answers are precisely why users are inclined to trust the information it provides. For developers, that misplaced trust could mean faulty, vulnerable code finding its way into their work. A recent IBM report detailed easy workarounds researchers uncovered to get large language models, including ChatGPT, to write malicious code and give poor security advice.  
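
To make the risk concrete, consider a hypothetical example of the kind of flaw researchers keep finding in model-generated code: a database lookup built by string concatenation rather than a parameterised query. The Python below is illustrative, not taken from any specific model's output:

    import sqlite3

    conn = sqlite3.connect("app.db")

    # The kind of code a model might plausibly suggest: user input
    # concatenated straight into the query string - a textbook SQL
    # injection vulnerability.
    def find_user_unsafe(username: str):
        query = f"SELECT * FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    # What a reviewer should insist on instead: a parameterised query,
    # where the database driver handles quoting and escaping.
    def find_user_safe(username: str):
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchall()

Passing a username such as "' OR '1'='1" to the first function returns every row in the table; the second treats it as an ordinary string. A developer who trusts the first suggestion without review ships the vulnerability.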

When generative AI produces false or misleading information, it is often said to be 'hallucinating'. It can hallucinate unprompted, but carefully worded prompts can also push ChatGPT into retracting and rewriting its answers.

This is partly because ChatGPT is limited to the simple and straightforward. As good as the software is at answering prompts, generative AI is unaware of the wider context in which a prompt is given, so it can only respond to the specific information users share. In addition, many generative AI models, including ChatGPT, are trained on publicly available information that may itself be inaccurate. Research conducted at Purdue University in the US found that 52% of ChatGPT's answers to programming questions were erroneous and up to 77% were unnecessarily long. The researchers argued that, as well as being overly lengthy, the answers were written in an articulate, authoritative style that misled users into believing the information in them.  

Beyond incorrect code finding its way into their work, another threat developers and businesses face is the leakage of sensitive information, since code or data pasted into a public chatbot leaves the organisation's control. Many companies, including tech giants Apple and Microsoft, have warned employees against using generative AI software or banned it outright. 

ChatGPT as a friend 

There has been a lot of scaremongering around generative AI taking jobs and making some roles redundant. While this might be true for a small number of roles, it is not the case for software development because, quite simply, a developer's job is complex and multi-layered. It is hard, if not impossible, to imagine a time when human intervention won't be required to develop software.  

Generative AI is a useful tool for supporting developers in their everyday activities, enhancing creativity, and accelerating how knowledge is acquired and created. It can already be used to solve algorithmic problems faster, write unit tests or make sense of unfamiliar code snippets – as sketched below. It follows that there will soon be tools to help developers scan AI-generated code for safety concerns or vulnerabilities. OpenAI has already developed a course teaching developers how to prompt ChatGPT to build new and powerful applications more efficiently. 
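
As one small illustration of the 'assistant' role, here is a minimal sketch of asking a model to draft unit tests for a function programmatically. It assumes the OpenAI Python SDK (v1 or later) with an OPENAI_API_KEY set in the environment; the model name and the sample function are illustrative:

    # Minimal sketch: asking a model to draft pytest tests for a function.
    # Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the
    # environment; the model name below is an assumption.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    SOURCE = '''
    def slugify(title: str) -> str:
        """Convert a title to a URL-friendly slug."""
        return "-".join(title.lower().split())
    '''

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are a senior Python developer. "
                        "Reply with pytest code only."},
            {"role": "user",
             "content": "Write pytest unit tests for this function:\n" + SOURCE},
        ],
    )

    # Treat the output as a draft: review it as you would a junior
    # colleague's pull request before committing it.
    print(response.choices[0].message.content)

The discipline lives in that last comment: AI-drafted tests save typing, but a human still decides whether they actually test the right things.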

A good example of AI as a friend to developers is the AI-powered error remediation tooling now available to coders. These tools can scan code as it is being written, looking for potential vulnerabilities or errors. Built by cybersecurity firms with their users' interests at heart, they are trained on quality data chosen specifically to ensure they provide the best possible results.  
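
Real remediation tools rely on trained models and deep program analysis, but the basic 'scan as you write' idea can be shown with a toy sketch. The example below walks a Python file's syntax tree and flags calls that are frequent vulnerability sources; the tiny list of risky calls is an assumption for illustration, not any product's rule set:

    # Toy sketch of the scanning idea: walk a file's AST and flag calls
    # that are frequent vulnerability sources. Real tools use trained
    # models and far richer analysis; this only shows the principle.
    import ast
    import sys

    RISKY_CALLS = {"eval", "exec", "os.system", "pickle.loads"}

    def call_name(node: ast.Call) -> str:
        """Best-effort dotted name for a call node."""
        func = node.func
        if isinstance(func, ast.Name):
            return func.id
        if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
            return f"{func.value.id}.{func.attr}"
        return ""

    def scan(path: str) -> None:
        with open(path) as f:
            tree = ast.parse(f.read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, ast.Call) and call_name(node) in RISKY_CALLS:
                print(f"{path}:{node.lineno}: risky call to {call_name(node)}")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            scan(path)

Run against a source file, it prints one warning per risky call, with the file and line number a developer needs to jump straight to the problem.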

Regulation is also under way to make generative AI as safe as possible. Earlier this year the European Commission put forward a proposal for the first-ever legal framework on AI, which addresses the risks of AI and aims to provide developers, deployers, and users with clear requirements and obligations regarding specific uses of AI. Once in place, this type of regulation will ensure more transparency around how a generative AI model is trained, so users and software developers can assess whether it is trustworthy or not. 

Conclusion: is ChatGPT a friend or foe?  

Whether we like it or not, ChatGPT and generative AI technology are here to stay, so we should make the most of the many benefits they bring. The regulation already being drafted by governments and international organisations will bring much-needed guardrails for the technology. In the meantime, given the threats it brings, companies should empower their developers to use generative AI responsibly, and introduce internal guardrails of their own to make sure AI-generated code does not introduce security vulnerabilities. 

It is essential for organisations to strengthen their cybersecurity practices and for developers to be sure they are using the likes of ChatGPT safely. Through governance and AI-powered security testing tools, organisations can defend against threats with speed, accuracy, and efficiency. Software developers and security teams have access to AI tools that can be used to fight AI threats and significantly bolster the security of their applications. There should be no compromise between convenience and safety. 
