The million-dollar AI engineering problem

Written by Fouad Matin (@fouadmatin)

The crown jewels of AI: model weights, biases, and the data that trains them. Regardless of where inference is hosted, what model you're using, or which cloud provider you're on, these .json, .onnx, and .gguf files are among a company's most valuable assets.

Companies developing custom models or fine-tuning existing ones invest millions of dollars in engineering time, compute, and training data collection.

Most models are pretrained on broadly available data like Wikipedia and Common Crawl, or on web-scale corpora like OpenAI's internal WebText dataset. But as with ChatGPT, the real value comes from the fine-tuning data and the reinforcement learning from human feedback (RLHF) used to adapt the model to a specific use case.

Usually this data sits in a shared S3 bucket, accessible to everyone in the company. In a very simple case, it might look something like this:

$ aws s3 ls s3://secret-internal-model-archive/models/
FINE_TUNED-openhermes-2.5-mistral-7b.Q4_K_M.gguf
llama-2-7b-chat-hf-ggml-model-q4_0.gguf
added_tokens.json
$ aws s3 ls s3://secret-internal-model-archive/training-data/
commoncrawl-CC-MAIN-2023-50/
scale-export-2024-02-23/
app-rlhf-latest/

While Llama's weights are openly available today, it didn't start that way.

Back when Meta first announced LLaMA, they intended to restrict full access to a limited set of researchers and let others request it: "To maintain integrity and prevent misuse, we are releasing our model under a noncommercial license focused on research use cases."

One week later, someone leaked the model on 4chan (that's a link to The Verge, not 4chan).

How to not leak your AI model

After initial development, the next step is to control access to the model and training data. Compared to most commercial software, AI models are a lot more valuable and a lot easier to leak: the whole asset fits in a handful of files.

The primary goal should be to limit access to only the machines that absolutely need it, using a combination of IAM policies and secure virtual networking.
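
On the IAM side, a minimal sketch might be a bucket policy that denies everything except a dedicated training role. The account ID and role names below are placeholders, and in practice you'd also keep a break-glass admin role exempt:

# Deny all access to the model bucket except the training-infra roles (placeholder ARNs)
$ cat > restrict-model-bucket.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptTrainingInfra",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::secret-internal-model-archive",
        "arn:aws:s3:::secret-internal-model-archive/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:PrincipalArn": [
            "arn:aws:iam::123456789012:role/training-infra-*",
            "arn:aws:iam::123456789012:role/break-glass-admin"
          ]
        }
      }
    }
  ]
}
EOF
$ aws s3api put-bucket-policy \
    --bucket secret-internal-model-archive \
    --policy file://restrict-model-bucket.json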

The second step is to monitor access to the model and training data. For S3, that means logging object-level access to the bucket with a tool like AWS CloudTrail.
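
Assuming you already have a CloudTrail trail in the account (the trail name below is a placeholder), you can enable object-level data events for just this bucket so every read and write of a model file is logged:

# Log GetObject/PutObject data events for the model bucket on an existing trail
$ aws cloudtrail put-event-selectors \
    --trail-name model-archive-trail \
    --event-selectors '[{
      "ReadWriteType": "All",
      "IncludeManagementEvents": true,
      "DataResources": [{
        "Type": "AWS::S3::Object",
        "Values": ["arn:aws:s3:::secret-internal-model-archive/"]
      }]
    }]'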

Why is the marketing team downloading the fine-tuning data? Why is the model being accessed from a region where we have no employees or customers? Why is Leon uploading confidential models to a personal Google Cloud account?

People will generally still need access to the model and training data, so the third step is to require justification for access. That can be as simple as a Slack message to the security team, or as involved as a ticketing system that requires approval from a manager.
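
A bare-bones version of the Slack route might be a bot that posts the request and reason to a review channel before any credentials are issued. The token, channel name, and message format here are hypothetical:

# Post an access request with a justification to a review channel (placeholder channel and token)
$ curl -s -X POST https://slack.com/api/chat.postMessage \
    -H "Authorization: Bearer $SLACK_BOT_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{
      "channel": "#model-access-requests",
      "text": "Requesting 2h read access to secret-internal-model-archive/training-data: debugging an eval regression on the fine-tuned 7B model"
    }'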

Depending on team size, the strictness of these controls will vary:

  1. Small teams (under 10 people) should require justification by default, especially if you're spending half your seed round training a model. Companies that deal with sensitive data, like personally identifiable information (PII), health data (PHI), or financial information, should enforce stricter controls from day one.
  2. Growing teams (between 10 and 100 people) start to tighten control over the models. Infrastructure and model teams still need instant access, while everyone else needs a reason or an approval.
  3. Large teams (100+ people) that have invested millions into training models treat a model leak as an existential risk. Access is tied to team membership and strict approval flows.

Fine-tuning data security

Most teams use a pre-trained model like Mistral and fine-tune it on their own data. It's much cheaper and faster than training from scratch, and it produces a model that's good enough for most use cases.

Fine-tuning data lets a model adapt to real user feedback over time, which delivers far better performance, even with smaller models. Usually this data comes from task completions, user interactions, or something as simple as a thumbs up/down button shown to users.
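
In practice, that feedback often ends up as a stream of JSONL events landing next to the rest of the training data. The schema and file names below are made up for illustration:

# Append a feedback event and ship it to the RLHF prefix from the earlier listing
$ echo '{"conversation_id":"c_1289","rating":"thumbs_up","model":"FINE_TUNED-openhermes-2.5-mistral-7b","ts":"2024-02-23T18:04:11Z"}' \
    >> feedback-2024-02-23.jsonl
$ aws s3 cp feedback-2024-02-23.jsonl \
    s3://secret-internal-model-archive/training-data/app-rlhf-latest/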

For these teams, the fine-tuning data is the most valuable asset. It's the secret sauce that makes their model better than the competition.

RLHF is what separates you from competitors who are also using OpenAI, Mistral, and Llama models to build their products.

If data is oil, RLHF is aluminum — a strategic resource for building the future.

How to set up temporary access for AI

The best way to secure your AI secrets is to limit access to only the machines that need it, and to monitor and require justification for access.
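
One lightweight way to do this on AWS is to hand out short-lived credentials instead of standing access, for example by assuming a scoped read-only role for an hour with the requester and reason captured in the session metadata. The role ARN and tag key are placeholders, and session tags assume the role's trust policy allows sts:TagSession:

# Request one hour of read-only access, recording who asked and why
$ aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/model-archive-readonly \
    --role-session-name leon.eval-debug \
    --duration-seconds 3600 \
    --tags Key=reason,Value=eval-regression-debug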

You can build a simple internal tool, use a ticket-oriented service desk, or use a product like Indent to enforce this workflow.

Indent provides a simple API that you can use to require justification for access to your AI secrets, and to monitor and log data access.

It also helps if engineers can request access directly from Slack and, when approval is required, notifications are routed into Slack channels. That's much faster than logging into a separate ticketing system to request access.

Talk to Us

We've thought a lot about the broader problem of implementing strict access controls (previously at Segment and CoreOS), which is what led us to build Indent. If you need help deciding on the right security architecture or controls for your team, we're happy to help: you can get a demo or talk to us.

We're also building a set of APIs that you can use to build security into your AI products. For example, you can use our Approval API to require justification for access to your AI secrets, and our Prompt API to let AI models get clarification from users or developers in production.

Try Indent for free.