
FYI: the dark side of ChatGPT is in your software supply chain

Jul 27, 2023

Ben Hirschberg
CTO & Co-founder

Let’s face it, the tech world is a whirlwind of constant evolution. AI is no longer just a fancy add-on; it’s shaking things up and becoming part and parcel of various industries, not least software development. One such tech marvel that’s stealthily carving out a significant role in our software supply chain is OpenAI’s impressive language model – ChatGPT. 

Unraveling the ChatGPT mystery

If you’re still navigating the acronym minefield, don’t worry, you’re not alone. The “GPT” in ChatGPT stands for “Generative Pre-trained Transformer”, and it’s an AI language model created by the great minds at OpenAI. It’s trained on an ocean of internet text, making it a jack of all trades: answering queries, penning essays, summarizing texts, and – here’s the kicker – generating code.

For developers, this is like a dream come true. An AI-powered sidekick offering code snippets? Sign us up! And that’s precisely what GitHub did. Their Copilot tool, plugged into Visual Studio Code, harnesses the same family of OpenAI GPT models to anticipate and propose code snippets or even whole functions. We’re looking at quicker software development – and who wouldn’t want that?
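To make that concrete, here’s a minimal sketch of the workflow: the developer types a docstring and a function signature, and the assistant proposes the body. The completion below is an illustrative guess at what such a suggestion looks like, not output captured from Copilot itself.

```python
import re

# The developer writes only the signature and docstring; the body is
# the kind of completion an AI assistant typically proposes.
# (Illustrative example, not actual Copilot output.)
def slugify(title: str) -> str:
    """Turn 'Hello, World!' into 'hello-world'."""
    cleaned = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return cleaned.strip("-")

print(slugify("Hello, World!"))  # hello-world
```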

ChatGPT sneaks into the software supply chain

The software supply chain is everything that goes into software development and distribution. We’re talking source code, third-party libraries, APIs, and even the integrated development environment (IDE). Now, when you think of AI-driven code snippets sliding into this mix (thanks to ChatGPT or GitHub CoPilot), it becomes clear that these models are indirectly weaving their way into our software codebase. Subtly but surely, ChatGPT is becoming a cog in the machine of the software supply chain.

The ChatGPT plot thickens: dark deeds in code generation

Hold onto your hats, folks, because here’s where the plot thickens. While the union of AI and software development sounds like a match made in heaven, it’s not without its drawbacks. The villain of the piece? Malicious actors who’ve spotted an opportunity to exploit AI code generation. Whether they’re seeding public code with harmful snippets that end up in training data, or publishing malicious packages under the very names these models like to suggest, they’re playing a dangerous game. Developers asking for code could, unknowingly, be pulling dodgy code into their software supply chain.

Take, for example, the time when developers were after a method to decode URLs. ChatGPT, in all its innocence, suggested using a particular Python package. Unbeknownst to anyone, that package had been uploaded to the Python Package Index by a malicious actor. Developers who installed and ran it on ChatGPT’s say-so got more than a URL decoder: the rogue code swooped in and stole their credentials, letting the attacker manage AWS as if they were the developers themselves. Yikes.
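The sting is that, for a job like URL decoding, Python’s standard library already has you covered; a suggestion to pip-install an unfamiliar third-party package for something this basic is itself a reason to pause. A minimal, dependency-free sketch:

```python
# URL decoding with the standard library alone: no third-party
# package, so no new supply-chain exposure.
from urllib.parse import parse_qs, unquote

encoded = "https%3A%2F%2Fexample.com%2Fsearch%3Fq%3Dhello%20world"
print(unquote(encoded))  # https://example.com/search?q=hello world

query = "q=hello+world&page=2"
print(parse_qs(query))   # {'q': ['hello world'], 'page': ['2']}
```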

Let’s face it, this is a serious issue

We’re not just talking about a security issue here; it’s an unprecedented threat to trust, privacy, and the integrity of the software supply chain. And it’s not going away. As AI becomes more intertwined with software development, the scope for such attacks grows, opening up new, terrifying horizons for cyber threats.

But let’s not lose our heads. As with all tech, the key is finding a sweet spot between advancement and security. ChatGPT and its AI counterparts are incredible assets to software development, but we’ve got to keep our eyes wide open to avoid inviting vulnerabilities into our supply chain.

Awareness is our best defense. Developers need to understand this emerging threat and be vigilant with AI-generated code. And let’s not forget the people behind the AI. Organizations like OpenAI and GitHub have a responsibility to keep their training data clean and keep those pesky malicious actors at bay.
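What does that vigilance look like in practice? One lightweight habit, sketched below on the assumption that the suggested dependency lives on PyPI, is to pull a package’s public metadata before installing anything an AI recommended. A package that doesn’t exist may have been hallucinated (and could be registered by an attacker tomorrow); one with barely any releases or maintainer details deserves a closer read. The package name here is a hypothetical placeholder.

```python
# Sanity-check a package an AI assistant suggested before installing it.
# Uses only the standard library and PyPI's public JSON API
# (https://pypi.org/pypi/<name>/json). "some-url-helper" is a
# hypothetical placeholder, not a real recommendation.
import json
import urllib.error
import urllib.request

def pypi_metadata(package: str) -> dict:
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

pkg = "some-url-helper"
try:
    meta = pypi_metadata(pkg)
except urllib.error.HTTPError:
    print(f"{pkg}: not on PyPI. Possibly hallucinated; do not install.")
else:
    info, releases = meta["info"], meta["releases"]
    print(f"{pkg}: {len(releases)} release(s), author: {info.get('author') or 'unknown'}")
    print(f"homepage: {info.get('home_page') or 'none listed'}")
    # Few releases, no homepage, or a brand-new first upload are all
    # reasons to read the source before trusting the package.
```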

The AI rollercoaster: ups, downs, and what lies ahead

At the end of the day, we can’t deny the boost AI gives to the software supply chain. But with great power comes great responsibility. As we ride the wave of AI-driven software development, it’s crucial to remember that AI is a tool that can be wielded for good and ill. It’s up to us to ensure that it’s used wisely and protect ourselves from potential misuse. As we buckle up for the future of software development, we’ve got one heck of a challenge – and opportunity – on our hands.

As ever, we suggest being proactive about your Kubernetes security. Evolve your security practices. Pay extra attention to your Software Bill of Materials (SBOM), image signing, and code provenance. Continue to adopt the zero-trust mindset. Scan for misconfigurations and vulnerabilities on a regular basis. ARMO is here to help you with all of this. Try it for free today!
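On the code-provenance point, here is one small check that can be automated, sketched under the assumption that the dependency is a PyPI distribution you have already downloaded: compare the file’s sha256 against the digests PyPI publishes for that release. The package name and filename below are hypothetical placeholders.

```python
# Minimal provenance check: does a downloaded distribution match a
# digest PyPI publishes for that release? A mismatch means the file is
# not one the index knows about. Names below are hypothetical.
import hashlib
import json
import urllib.request

def published_sha256(package: str, version: str) -> set:
    url = f"https://pypi.org/pypi/{package}/{version}/json"
    with urllib.request.urlopen(url) as resp:
        release = json.load(resp)
    return {f["digests"]["sha256"] for f in release["urls"]}

def local_sha256(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if local_sha256("example_pkg-1.0.0-py3-none-any.whl") in published_sha256("example-pkg", "1.0.0"):
    print("Digest matches a file PyPI knows about.")
else:
    print("Digest mismatch: do not install this artifact.")
```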
