2025-05-04
One key aspect of AI apps that everyone overlooks
Here's an unpopular opinion.
There's one critical aspect of building all AI apps that everyone is overlooking currently, even though usually it's deemed to be one of the key factors in all applications.
It's security. Yes, I'm being boring but please, bear with me.
How many times is it an overlooked aspect of building an application? How many times do we see security as an afterthought? How many times do we see security as a checkbox that needs to be ticked?
With AI it's especially important. We need to be able to see how our AI is making decisions, have some inspectability and be able to understand how it works. We need to be able to trust it.
We need to be able to see how it works and how it makes decisions. We need to be able to see how it learns and how it adapts. We need to be able to see how it interacts with the world around it.
How about prompt injection attacks? I mean currently the architecture of the LLM is so that it's actually IMPOSSIBLE to wholly secure it from this kind of a thing. Think about it.
Throwing in another model in front is not a solution. We need to evaluate our prompts, protect the queries, have inspectability and reproducibility, yet we quite often don't.
That is an open question - how to solve it.
I believe it'll become more and more relevant, some big organisations are already catching up on that.
And it's no wonder. Living in a non-deterministic world in programming is quite a new thing for all the folks except for some ML engineers. We need to learn how to deal with it.
How? A good start is using tools like guardrails, instructor, lakera, deepeval, braintrust, langfuse, openllmetry and so on.
It's not enough however. We need more.
Without it our AI can get outta hand before we notice it.