Programmers who accept AI assistance produce less secure code than those who fly solo.

Computer scientists from Stanford University have found that programmers who accept help from AI tools like GitHub Copilot produce less secure code than those who fly solo. They also found that AI help tends to delude developers about the quality of their output. An earlier NYU study showed that AI-generated programming suggestions are often insecure in experiments, though a follow-up user study was inconclusive as to whether AI assistance helped or harmed developers. In the Stanford study, participants were asked to write code in response to five prompts using a React-based Electron app monitored by the study administrator.
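
To make the risk concrete, here is a minimal, hypothetical sketch (not taken from either paper): one of the most common classes of vulnerability flagged in this kind of research is SQL injection, where a suggested query is built by pasting user input straight into the SQL string. The `find_user_insecure` and `find_user_safe` functions below are invented for illustration only.

```python
import sqlite3

# Hypothetical example (not from the Stanford or NYU papers): an AI-suggested
# query built with string formatting lets user input be interpreted as SQL.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, name FROM users WHERE name = '{username}'"  # injectable
    return conn.execute(query).fetchall()

# A safer equivalent uses a parameterized query, so the input is always
# treated as data rather than as SQL.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    print(find_user_safe(conn, "alice"))
```

The insecure version may look perfectly plausible in an autocomplete suggestion, which is exactly the kind of false confidence the researchers warn about.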

The authors conclude that AI assistants should be viewed with caution because they can mislead inexperienced developers and create security vulnerabilities. At the same time, they hope their findings will lead to improvements in how AI assistants for programming are designed.
