Announcement: Data Class D Support

Dear Users,

Over the past months, we have made significant strides in strengthening the security posture of the ChatAI service through a comprehensive set of technical and organizational enhancements. These improvements are either integrated directly into the ChatAI frontend and the SAIA platform that hosts our backend infrastructure, or are dedicated measures for deploying use cases that require data class D.

Key advancements include:

  • Secure Model Hosting via our dedicated SecureHPC workflow, ensuring models run on trusted, isolated hardware.
  • End-to-End Encryption of all request payloads, guaranteeing that data remains protected from transmission through to inference, even across intermediate transport mechanisms.
  • Minimized Attack Surface through enhanced access controls and isolation mechanisms that protect against both internal and external threats.

Together, these measures enable us to route customer messages directly to secure inference nodes, where no administrator or third party can access or inspect the data in transit or at rest.
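For illustration only, the sketch below shows one way such client-side payload protection can work in principle, using libsodium sealed boxes via the PyNaCl library. The endpoint, key distribution, payload contents, and helper names are hypothetical assumptions for the example and are not the actual ChatAI or SAIA interface.

```python
# Illustrative sketch only: sealing a request payload on the client so that
# only the secure inference node (holder of the private key) can read it.
# Key handling and names below are hypothetical, not the ChatAI/SAIA API.
from nacl.public import PrivateKey, PublicKey, SealedBox

# In a real deployment the node's public key would be obtained through an
# attested channel; here we generate a throwaway key pair for demonstration.
node_private_key = PrivateKey.generate()
node_public_key = node_private_key.public_key

def encrypt_payload(plaintext: bytes, recipient_key: PublicKey) -> bytes:
    """Seal the payload so intermediate transport hops cannot inspect it."""
    return SealedBox(recipient_key).encrypt(plaintext)

def decrypt_payload(ciphertext: bytes, recipient_private: PrivateKey) -> bytes:
    """Performed only inside the secure inference node."""
    return SealedBox(recipient_private).decrypt(ciphertext)

# Client side: the prompt never leaves the machine in plaintext.
sealed_request = encrypt_payload(b"example sensitive prompt", node_public_key)

# Inference node side: recover the prompt for model execution.
assert decrypt_payload(sealed_request, node_private_key) == b"example sensitive prompt"
```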

As Prof. Kunkel notes: “We’ve reached a critical technical milestone. I’m confident we meet the rigorous technical and organizational requirements necessary to support use cases involving sensitive data.”

A beta deployment of this hardened workflow is now available. We invite you to join us in pilot projects to explore secure model inference for the most demanding, privacy-sensitive applications.

If you’re working with high-security data and would like to be an early adopter, we’d be delighted to collaborate. Please reach out to us—we’re here to support your journey toward trusted, secure AI.