Students, especially those outside the field of cybersecurity, are increasingly turning to Large Language Model (LLM)-based generative AI tools for coding assistance. These AI code generators provide valuable support to developers by generating code from provided input and instructions. However, the quality and accuracy of the generated code vary with factors such as task complexity, the clarity of instructions, and the model’s familiarity with the programming language. Moreover, the generated code may inadvertently use vulnerable built-in functions, potentially introducing source-code vulnerabilities and exploitable flaws. This research undertakes an in-depth analysis and comparison of the code generation, code completion, and security suggestions offered by prominent AI models, including OpenAI Codex, CodeBERT, and ChatGPT. It aims to evaluate the effectiveness and security of these tools in terms of their code generation and code completion capabilities and their ability to enhance security. This analysis serves as a valuable resource for developers, enabling them to proactively avoid introducing security vulnerabilities into their projects and thereby significantly reduce the need for extensive revisions and resource allocation, whether in the short or long term.
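As a hypothetical illustration of the kind of vulnerable built-in usage the abstract describes (not an example drawn from the study itself): generated Python code sometimes evaluates user-supplied text with the built-in `eval`, which executes arbitrary expressions, whereas `ast.literal_eval` restricts input to plain Python literals.

```python
import ast

# Insecure pattern occasionally seen in generated code: eval() executes
# arbitrary expressions, so crafted input can run attacker-controlled code.
def parse_config_insecure(text):
    return eval(text)  # e.g. "__import__('os').system(...)" would execute

# Safer alternative: ast.literal_eval accepts only Python literals
# (numbers, strings, tuples, lists, dicts, booleans, None) and rejects calls.
def parse_config_safe(text):
    return ast.literal_eval(text)

if __name__ == "__main__":
    print(parse_config_safe("{'debug': True, 'retries': 3}"))
    try:
        parse_config_safe("__import__('os').getcwd()")
    except (ValueError, SyntaxError):
        print("rejected non-literal input")
```

The function names here are invented for illustration; the point is that a one-token difference in API choice separates an exploitable sink from a safe parser, which is exactly the class of distinction such an evaluation must detect.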
While the integration of cyber-physical systems (CPS) has been extremely advantageous to society, it makes vehicle cybersecurity a timely concern and a matter of both public and individual safety. The failure of any vehicle system could seriously impair vehicle control and cause undesired consequences. Despite the growing demand for security in CPS, few hands-on labs/modules are available for training current students, future engineers, or IT professionals to understand cybersecurity in CPS. This study describes the development of a free security testbed that replicates a vehicle’s network system and the deployment of this testbed in a hands-on lab designed to introduce concepts of vehicle control systems. The hands-on lab simulates insider-threat scenarios in which students use the can-utils toolkit and SavvyCAN to send, modify, and capture network packets and to exploit system vulnerabilities through attacks such as replay and fuzzing attacks on the vehicle system. We conducted a case study with 21 university-level students, all of whom completed the hands-on lab, a pretest, a posttest, and a satisfaction survey as part of a non-graded class assignment. The experimental results show that most students were unfamiliar with cyber-physical systems and vehicle control systems and had never before had the chance to do a hands-on lab in this field. Furthermore, students reported that the hands-on lab helped them learn about the CAN bus and rated it highly for enjoyment. We discuss the design of an affordable tool for teaching about vehicle control systems and propose directions for future work.
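The replay attack exercised in the lab can be sketched without CAN hardware: captured frames (here, lines in the log format written by the can-utils `candump` tool) are simply re-injected onto the bus verbatim. The `FakeBus` class below is an assumption for illustration, standing in for a real CAN socket; in the lab itself, students would use tools like `candump` and SavvyCAN against the actual testbed.

```python
# Minimal sketch of a CAN replay attack, assuming frames were captured in
# the candump log format used by can-utils, e.g.:
#   (1436509052.249713) vcan0 244#88000000
# FakeBus is a hypothetical stand-in for a real CAN interface.

def parse_candump_line(line):
    """Parse one candump log line into (timestamp, interface, can_id, data)."""
    ts, iface, frame = line.split()
    can_id, _, data = frame.partition("#")
    return float(ts.strip("()")), iface, int(can_id, 16), bytes.fromhex(data)

class FakeBus:
    """Stand-in for a CAN socket; records every frame 'sent' to it."""
    def __init__(self):
        self.sent = []

    def send(self, can_id, data):
        self.sent.append((can_id, data))

def replay(log_lines, bus):
    """Re-inject captured frames verbatim -- the essence of a replay attack."""
    for line in log_lines:
        _, _, can_id, data = parse_candump_line(line)
        bus.send(can_id, data)

if __name__ == "__main__":
    captured = [
        "(1436509052.249713) vcan0 244#88000000",
        "(1436509052.449713) vcan0 19B#000000",
    ]
    bus = FakeBus()
    replay(captured, bus)
    print(bus.sent)
```

Because the bus does not authenticate senders, the replayed frames are indistinguishable from legitimate traffic, which is precisely the vulnerability the lab asks students to demonstrate and then reason about.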