By Roxanne Libatique
The Australian Signals Directorate (ASD) has issued a new report warning that artificial intelligence (AI) and machine learning (ML) are introducing fresh cyber security risks to organisational supply chains.
The guidance is intended for businesses deploying or developing AI and ML systems and highlights the need for tailored risk management strategies as these technologies become more prevalent.
AI and ML systems are increasingly integrated into business operations to improve efficiency and decision-making.
However, the ASD notes that the underlying complexity of these systems – built on interdependent models, datasets, software libraries, and cloud infrastructure – can create multiple points of vulnerability.
These entry points may be exploited by malicious actors, potentially resulting in compromised data, system outages, or unauthorised access.
According to the ASD, AI and ML supply chains are exposed to threats such as data poisoning, hidden backdoors, and malicious code – all of which can undermine critical business functions if not properly managed.
The report emphasises that insurance companies and other organisations must assess their entire AI and ML supply chain as part of their broader cyber security posture.
This includes mapping out all suppliers, service providers, and subcontractors involved in the delivery and maintenance of AI systems.
The ASD recommends early engagement with vendors to clarify shared security responsibilities and suggests that contractual agreements should explicitly address cyber risk management.
Recent incidents have demonstrated the impact of third-party vulnerabilities. In 2025, several significant data breaches were linked to weaknesses in external service providers.
Industry surveys cited by the ASD indicate that a majority of organisations have experienced data leaks related to AI, with a notable portion reporting direct breaches of their AI systems.
The ASD guidance also calls for organisations to review how AI and ML systems interact with sensitive data.
This includes understanding which information is processed, stored, or transmitted, and ensuring that access is restricted to authorised personnel.
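To make the access-restriction point concrete, the following is a minimal Python sketch, not drawn from the ASD report itself, of gating reads of a sensitive dataset behind an allow-list of authorised users. The user names and dataset label are hypothetical placeholders; a production system would use a proper identity and access management service rather than an in-memory dictionary.

```python
# Hypothetical illustration of restricting sensitive-data access to
# authorised personnel; user names and dataset labels are placeholders.
AUTHORISED_USERS = {"sensitive_claims_data": {"alice", "bob"}}

def read_dataset(user: str, dataset: str) -> str:
    """Return dataset contents only if the user is on the allow-list."""
    allowed = AUTHORISED_USERS.get(dataset, set())
    if user not in allowed:
        raise PermissionError(f"{user} is not authorised to access {dataset}")
    return f"contents of {dataset}"  # placeholder for the real data read

print(read_dataset("alice", "sensitive_claims_data"))
```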
The report highlights the importance of internal communication channels for reporting supply chain concerns and recommends targeted training for staff involved in AI system development and operations.
Large datasets, often centralised for AI training purposes, may contain sensitive or proprietary information.
The ASD advises applying established data security practices, such as data sanitisation, using trusted sources, and verifying the integrity of data through digital signatures or checksums.
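As an illustration of the checksum approach (the sketch below is not part of the ASD guidance, and the file name and digest value are hypothetical placeholders), a downloaded training dataset can be verified against a supplier-published SHA-256 digest before use. Digital signatures would add authentication of the publisher on top of this integrity check.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(path: Path, expected_digest: str) -> bool:
    """Compare a local file's digest against the supplier-published value."""
    return sha256_digest(path).lower() == expected_digest.lower()

if __name__ == "__main__":
    # "training_data.csv" and the expected digest are hypothetical; in
    # practice the digest should arrive via a trusted channel, separate
    # from the dataset download itself.
    dataset = Path("training_data.csv")
    expected = "0123abcd..."  # hypothetical published checksum
    if verify_dataset(dataset, expected):
        print("Checksum matches: dataset integrity verified.")
    else:
        print("Checksum mismatch: do not use this dataset.")
```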
The ASD identifies several technical risks associated with AI and ML supply chains, chief among them the data poisoning, hidden backdoors, and malicious code described above. To address these risks, it recommends a combination of preventative and detective controls.
Organisations are also encouraged to thoroughly vet third-party vendors and to incorporate security requirements into procurement processes.
The ASD’s publication coincides with broader industry findings that highlight the growing role of AI in cyber incidents.
Verizon Business’ 2025 Data Breach Investigations Report notes a doubling of AI-related malicious activity over the past two years, with state-sponsored groups using AI to automate attacks and develop new malware variants.
The report also points to internal risks, such as employees using generative AI tools on unsecured platforms, which can lead to inadvertent data exposure.
Within Australia and the wider Asia-Pacific region, system intrusions and third-party breaches are increasingly common, underscoring the need for robust supply chain oversight.
The Verizon report advises organisations to prioritise prompt patching, rigorous third-party assessments, and advanced threat detection capabilities.
