As an AI language model, Sofia GPT generates responses based on the text provided to it; the AI itself does not control the storage, processing, or handling of user data. All components of Sofia GPT (the AI engine, data, etc.) are contained within OnePlan's Azure subscription. This means the AI engine is deployed inside the OnePlan Azure tenant rather than in a cross-tenant or multi-tenant configuration. It is also an independent service that inherits only the Azure security layer and does not compromise any other services in Azure.
In short, using Sofia GPT provides the same level of security as using OnePlan itself.
To keep your data safe, our development team and the Azure OpenAI Service providers generally take measures such as:
- Secure data transmission: Using encryption protocols such as TLS (which underpins HTTPS) to protect the data you send and receive when using the AI assistant.
- Secure storage: Encrypting user data and ensuring that access to the storage systems is strictly controlled and limited to authorized individuals.
- Data anonymization: De-identifying or anonymizing user data, removing personally identifiable information (PII) to minimize privacy risks.
- Regular updates: Keeping the AI's infrastructure and software components updated with the latest security patches to protect against known vulnerabilities.
- Monitoring and auditing: Regularly monitoring and auditing the AI's systems for unusual activity, security threats, or potential data breaches.
- Access controls: Implementing strict access controls and authentication procedures to ensure that only authorized users can access the AI's backend systems.
- Privacy policies: Creating and complying with privacy policies that outline what data is collected, how it's used, and how long it's retained, while adhering to applicable data privacy regulations.
- Data retention policies: Defining data retention policies that clarify how long user data is stored and ensuring compliance with legal requirements.
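As one illustration of the anonymization measure listed above, the following is a minimal sketch of regex-based PII redaction. The patterns, labels, and function here are hypothetical examples for clarity; they are not OnePlan's or the Azure OpenAI Service's actual anonymization pipeline.

```python
import re

# Illustrative patterns only: real PII detection is considerably more
# sophisticated (named-entity recognition, locale-aware formats, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymize(text: str) -> str:
    """Replace simple PII matches with bracketed placeholder labels."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Contact jane.doe@example.com or +1 (555) 123-4567."))
# → Contact [EMAIL] or [PHONE].
```

A production system would typically apply this kind of de-identification before data is logged or retained, so that stored records carry placeholders rather than the original personal details.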