AI: A time-saving opportunity for vulnerability analysis & remediation?
Vulnerability management with AI: the tempting benefits
We can already picture the advantages of using AI (artificial intelligence) for this kind of system maintenance work:
- Time savings: Conversational agents built on large language models such as ChatGPT or Copilot can spot trends and anomalies that manual reviews of long vulnerability lists might miss. By automating data analysis, they deliver near-real-time results based on past patterns and defects, providing a concise summary of embedded system health.
- Cost reduction: AI-driven vulnerability assessment can reduce manual effort, which means saving money by removing the need for extra resources to handle the vulnerability evaluation process and letting engineers focus on higher-value activities such as application development.
AI’s strengths in CVE lifecycle management
Vulnerability lifecycle management is divided into three main phases, in which AI can play different roles:
- Detection: AI can be useful for the first step of detecting vulnerabilities: an SBOM (Software Bill of Materials, i.e. a list of packages) is given to an LLM (Large Language Model, i.e. a neural-network-based language model with a very large number of parameters) to identify the critical CVEs (Common Vulnerabilities and Exposures) applicable to your system – a minimal sketch of this step follows the list.
- Evaluation: It can also greatly assist during the evaluation phase by providing more information about these Linux CVEs and helping to assess whether they are relevant to your device – in particular through a highly effective natural language interface that lets you “discuss” a CVE and its applicability, drawing on broader context from Internet searches.
- Remediation: Correcting vulnerabilities mainly involves human intervention to apply patches or updates, and AI is not yet involved in this process.
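To make the detection step concrete, here is a minimal sketch of handing an SBOM to an LLM and asking for applicable CVEs. The OpenAI Python client, the "gpt-4o" model name, and the CycloneDX-style `sbom.json` layout are illustrative assumptions, not a recommended setup:

```python
# Minimal sketch of the detection step: handing an SBOM to an LLM and asking
# for applicable CVEs. The model name and SBOM layout are assumptions.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("sbom.json") as f:
    sbom = json.load(f)

# Keep the prompt compact: only component names and versions from the SBOM.
packages = [
    f"{c['name']} {c.get('version', 'unknown')}"
    for c in sbom.get("components", [])
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are a security analyst. For each package below, "
                       "list known CVE identifiers that may apply, or state "
                       "'none known'.",
        },
        {"role": "user", "content": "\n".join(packages)},
    ],
)

# Treat the answer as a starting point only: every CVE it names (or misses)
# must be verified against an authoritative source such as the NVD.
print(response.choices[0].message.content)
```

Whatever the model returns, the list still has to be cross-checked against an authoritative database – hallucinated or missing CVE identifiers are exactly the failure modes discussed below.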
Read more: How to monitor Linux CVE?
What the research (barely) tells us
Vulnerability management with AI and LLM is still a cutting-edge topic, so research and experimentation on the effectiveness of LLM remain limited.
In 2025, Siemens Healthcare conducted an experiment by building an internal language model trained on past manual vulnerability assessments from their systems. This artificial intelligence model was used to evaluate CVEs, and the results were reviewed by a cybersecurity expert to ensure data quality.
Across all the LLMs tested, however, the issues observed were surprisingly similar, differing mainly in severity. The main types of errors were:
- Hallucinations: AI tends to omit or wrongly include critical details necessary for analysis, patching and communication to customers. This can happen, for instance, with affected software versions, recommended update versions, or software/component names.
- Superfluous text generation: LLMs often produce unnecessary or off-topic content, complicating CVE analysis and verification.
- Performance degradation on long texts: Chaining vulnerabilities in long sequences remains a challenge for AI. The best workaround for now is to segment the text and analyze each Linux vulnerability separately – see the chunking sketch below.
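As a concrete illustration of that segmentation workaround, here is a minimal, self-contained chunking sketch; splitting on paragraph boundaries and the 4,000-character budget are arbitrary assumptions, not tuned values:

```python
# Minimal sketch of the segmentation workaround: split a long advisory into
# bounded chunks so each LLM query stays short. The limits are arbitrary.
def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split on paragraph boundaries, keeping each chunk under max_chars."""
    chunks: list[str] = []
    current = ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current.rstrip())
            current = ""
        current += paragraph + "\n\n"
    if current:
        chunks.append(current.rstrip())
    return chunks

# Each chunk is then analyzed in its own prompt and the per-chunk findings
# merged afterwards, at the cost of losing some cross-vulnerability context.
```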
The Siemens study concluded that artificial intelligence trained on large internal datasets can be an effective assistant for product cybersecurity experts, helping them quickly identify relevant security vulnerabilities and communicate mitigations to customers. It may even become a valuable support tool for mitigation in the future.
That said, despite these strengths, AI alone isn’t enough for full vulnerability management in embedded Linux systems – neither for CRA compliance nor in general.
Why public LLMs aren’t (yet?) ready for prime time
In this section, we’ll concentrate on public LLMs – widely accessible models like GPT-5 by OpenAI – rather than internal LLMs, such as the one developed by Siemens Healthcare, which are custom-built and trained within organizations for private, domain-specific applications.
A quick reminder on AI usage precautions
It’s worth repeating: always verify your sources and watch out for hallucinations in AI-generated results – vulnerability management with AI is definitely not an exception.
The risk of missing a critical CVE
While AI can assist with vulnerability identification, it does not guarantee that you will not miss a critical CVE (no single tool can, since the process relies on already-discovered vulnerabilities, and some may remain undetected). It is therefore essential not to rely solely on AI-generated results for this type of analysis, particularly when striving to meet Cyber Resilience Act (CRA) requirements, which demand both rigor and responsiveness: any exploitable vulnerability in a product must be quickly identified, validated, reported to the authorities and, if necessary, corrected by means of a product update.
The risk of leaking sensitive product vulnerability data
AI in cybersecurity raises ethical and confidentiality concerns. CVEs are tied to your product. Feeding ChatGPT, Copilot, or similar tools with this data risks exposing highly sensitive information that could be redistributed outside your organization. This has already happened – ChatGPT has previously leaked confidential CVE data that should never have been disclosed. Not optimal.
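One straightforward precaution is to redact anything product-specific before a query leaves your organization. The sketch below illustrates the idea; the field names and the record layout are hypothetical:

```python
# Strip product-identifying fields from a CVE record before it is sent to a
# public LLM. The field names below are hypothetical examples.
SENSITIVE_KEYS = {"product_name", "customer", "internal_ticket", "firmware_path"}

def redact(record: dict) -> dict:
    """Return a copy of the record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_KEYS}

query = redact({
    "cve_id": "CVE-2024-3094",        # public identifier: safe to share
    "package": "xz-utils 5.6.0",      # public package info: safe to share
    "product_name": "AcmeCam 3000",   # internal: must never leave the org
    "internal_ticket": "SEC-1234",    # internal: must never leave the org
})
# Only `query` (public data) would be included in the external prompt.
```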
Working with up-to-date data
By default, ChatGPT only has access to the knowledge the LLM held at training time, not necessarily to up-to-date search results. This can be overcome with assistants such as Kagi Assistant, which are linked to a search engine and automatically feed fresh search results into the prompt.
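Another way around the training-cutoff problem is to fetch fresh data yourself and hand it to the model as context. This sketch queries the public NVD REST API (v2.0) directly; the `openssl` keyword search is just an example:

```python
# Pull fresh CVE data straight from the NVD REST API (v2.0) instead of
# relying on the model's training cutoff. The keyword is just an example.
import requests

resp = requests.get(
    "https://services.nvd.nist.gov/rest/json/cves/2.0",
    params={"keywordSearch": "openssl", "resultsPerPage": 20},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item["cve"]
    # Print the CVE ID and the first 80 characters of its description.
    print(cve["id"], "-", cve["descriptions"][0]["value"][:80])
```

The fetched records can then be pasted into the prompt so the model reasons over current data rather than stale training knowledge.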
Integrating AI into your DevOps and CVE toolchain: easier said than done
Integrating AI into your pipeline or with other vulnerability management tools can be complex. Challenges like interoperability, data structure differences, and the need for fine-tuned tool adaptation can be major blockers to adoption.
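As one concrete example of the data-structure mismatch: LLMs return free text, while downstream tools expect structured records. A thin validation layer can reject anything malformed before it enters the pipeline; the record shape below is an assumption, not an established schema:

```python
# Validate one LLM assessment before it is handed to downstream tooling.
# The required field set is an assumed, in-house record shape.
import json

REQUIRED_FIELDS = {"cve_id", "package", "applicable", "justification"}

def parse_llm_verdict(raw: str) -> dict:
    """Parse and validate one LLM assessment; raise on anything unexpected."""
    verdict = json.loads(raw)  # raises ValueError if the model returned prose
    if not isinstance(verdict, dict):
        raise ValueError("expected a JSON object")
    missing = REQUIRED_FIELDS - verdict.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {missing}")
    if not isinstance(verdict["applicable"], bool):
        raise ValueError("'applicable' must be a boolean")
    return verdict
```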
Internal AI agents: a data-hungry investment
AI algorithms need large volumes of high-quality data. For vulnerability monitoring and remediation, that means having a substantial database of past CVE analyses to train your own LLM. This is feasible – though energy-intensive – if you have been manually analyzing vulnerabilities for years, but far less accessible if your internal process is still being built.
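To give a rough idea of the data preparation involved, this sketch converts past manual assessments into a JSONL fine-tuning set. The OpenAI chat fine-tuning format is used as one possible target, and the `past_assessments` records are hypothetical:

```python
# Turn archived manual CVE assessments into a JSONL fine-tuning dataset.
# The records below are hypothetical; a real set needs thousands of rows.
import json

past_assessments = [
    {
        "cve": "CVE-2023-4863",
        "context": "libwebp 1.2.4 used in the image pipeline",
        "verdict": "Applicable: the device decodes untrusted WebP images. "
                   "Update libwebp to 1.3.2 or later.",
    },
    # ... thousands more rows, accumulated from years of manual review
]

with open("train.jsonl", "w") as out:
    for row in past_assessments:
        out.write(json.dumps({
            "messages": [
                {"role": "system",
                 "content": "Assess CVE applicability for embedded Linux products."},
                {"role": "user", "content": f"{row['cve']} - {row['context']}"},
                {"role": "assistant", "content": row["verdict"]},
            ]
        }) + "\n")
```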
So, AI or no AI for Linux vulnerability management?
In the context of these examples, it should be noted that ChatGPT offers no added value over non-AI-based tools when it comes to producing a list of applicable CVEs: significant vulnerabilities are omitted, and the output contains false positives and errors.
On the other hand, the second example demonstrates the usefulness of such a tool for analyzing a given vulnerability: it speeds up the search for information and the diagnosis of a product by providing additional technical information, such as Git logs and configuration details.
However, these results should not be taken for granted, and caution should be exercised: any technical parameter that leads to the validation or non-validation of a vulnerability and its mitigation must be confirmed manually to guard against possible inaccuracies and omissions.
That’s why we’re convinced that AI isn’t yet mature enough to support the full vulnerability lifecycle in embedded systems. It can be an interesting tool for investigating a specific CVE and its relevance to your device – but it’s better to rely on proven databases and tools to ensure information reliability and avoid critical data leaks.
Utilizing a private AI / LLM for vulnerability management could help mitigate concerns about data exposure, since sensitive information would remain within your organization. In time, you might even expand its capabilities to apply patches automatically. However, achieving this level of integration would demand careful customization and considerable investment to ensure it meets your specific requirements.
CVE Scan stands out as a reliable tool for this kind of work. It leverages public CVE databases (NVD, Ubuntu Tracker, OSV) to identify vulnerabilities in embedded Linux systems and analyze them based on the packages present in your system – drilling down to the Linux kernel to reduce false positives.
While CVE Scan isn’t AI-based, it still saves you significant time in vulnerability analysis and lifecycle management thanks to its advanced filtering capabilities and tracking dashboards – all the more so as it requires no training and is easy to configure.