Privacy and Security
With Cube’s AI API, your database credentials are never shared with the LLM, and neither is the connection to your data store. All access through the AI API is governed by the same security context as any other access in Cube Cloud.
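For illustration, the sketch below shows how a security context can drive query-level restrictions in the cube.js configuration file. The `orders` cube and the `tenant_id` claim are hypothetical; the point is that a rule defined here applies to queries issued through the AI API just as it does to any other query.

```javascript
// cube.js -- a minimal sketch, assuming a JWT that carries a tenant_id claim
module.exports = {
  queryRewrite: (query, { securityContext }) => {
    // Reject requests whose security context carries no tenant claim.
    if (!securityContext.tenant_id) {
      throw new Error('No tenant_id claim in the security context');
    }

    // Scope every query, including AI-generated ones, to the caller's tenant.
    query.filters.push({
      member: 'orders.tenant_id',
      operator: 'equals',
      values: [securityContext.tenant_id],
    });

    return query;
  },
};
```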
Data Retention Policy
Your data isn’t used by third-party LLMs to train models or improve products. We partner with OpenAI to enforce a strict data retention policy.
- No data is used by third-party LLM providers for model training or product improvement.
- OpenAI may securely retain API inputs and outputs for up to 30 days for abuse monitoring purposes. Data is permanently deleted after that time.
Dynamic Grounding with Secure Data Retrieval
- Relevant information from your Cube semantic layer is merged with the prompt to provide context.
- The metadata available for grounding the prompt is limited to what the user executing the prompt is permitted to access.
- Secure data retrieval preserves all standard Cube role-based access controls, including user permissions and row- and column-level access, when merging grounding data from your Cube semantic layer (see the sketch after this list).
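As a sketch of how those controls carry over, the example below uses member-level security in a JavaScript data model: a hypothetical `customer_email` dimension is only exposed, and therefore only available for grounding, to users whose security context carries a `finance` role. The cube, column, and role names are assumptions.

```javascript
// model/cubes/orders.js -- a minimal sketch of member-level security
const { securityContext } = COMPILE_CONTEXT;

cube(`orders`, {
  sql_table: `public.orders`,

  dimensions: {
    status: {
      sql: `status`,
      type: `string`,
    },

    customer_email: {
      sql: `customer_email`,
      type: `string`,
      // Hidden from users (and from AI API grounding) unless the caller's
      // security context carries the finance role.
      public: securityContext.role === 'finance',
    },
  },
});
```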
Prompt Defense
- Context provided by the semantic layer limits hallucinations by the LLM.
- LLMs interface with existing Cube APIs, which further constrains their output, limits hallucinations, and provides greater transparency (see the example below).
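As an illustration, the existing Cube APIs accept structured queries like the one below; because a query can only reference measures and dimensions defined in the data model and runs under the caller’s credentials, the LLM’s output stays constrained and verifiable. The deployment URL, member names, and token variable are assumptions.

```javascript
// A hypothetical query of the kind derived from a natural-language prompt.
// It can only reference measures and dimensions defined in the data model.
const query = {
  measures: ['orders.total_amount'],
  dimensions: ['orders.status'],
  timeDimensions: [
    {
      dimension: 'orders.created_at',
      granularity: 'month',
      dateRange: 'last 6 months',
    },
  ],
};

// Executed through the regular /v1/load endpoint, so the same authentication
// and security-context rules apply as for any other API client.
fetch('https://example.cubecloud.dev/cubejs-api/v1/load', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: process.env.CUBE_API_TOKEN,
  },
  body: JSON.stringify({ query }),
})
  .then((response) => response.json())
  .then((data) => console.log(data));
```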
Data Masking
- Data masking policies enforced by Cube also apply to AI API usage.
- You can configure what must and must not be masked in the Cube semantic layer, as sketched below.
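The sketch below shows one way such a policy can be expressed in a JavaScript data model; the `users` cube, `email` column, and `pii_access` claim are hypothetical. Because the masking is applied in the data model, the AI API receives the same masked values as any other client.

```javascript
// model/cubes/users.js -- a minimal sketch of column masking driven by
// the security context
const { securityContext } = COMPILE_CONTEXT;

cube(`users`, {
  sql_table: `public.users`,

  dimensions: {
    id: {
      sql: `id`,
      type: `number`,
      primary_key: true,
    },

    email: {
      // Callers without the pii_access claim get a masked literal instead of
      // the raw column; the AI API sees the same masked value.
      sql: securityContext.pii_access ? `email` : `'--- masked ---'`,
      type: `string`,
    },
  },
});
```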