Security

Security overview

Bellomy AI Analytics was built by an in-house team of developers. We continue to evolve and update AI Analytics, combining the latest technology with our market research-specific expertise, all backed by high security standards.

- AI Analytics was built in-house and employs multiple AIs in a proprietary ensemble model.

- OpenAI is used in some parts of our implementation.

- Bellomy’s proprietary AIs are maintained in our AWS data center, under our full control. OpenAI is used as a SaaS provider for its portion of the implementation.

- Client data is not used to train the AI. Fragments of client data are supplied to the AI for specific, in-the-moment uses, but they are not used for training and are not retained by OpenAI.

- Bellomy believes that a general-purpose Large Language Model (LLM) approach, in conjunction with our proprietary data pipelines and approach to AI prompt engineering, is superior to purpose-built Market Research AIs in most cases.

For additional information on Bellomy's security standards, certifications & training, visit www.bellomy.com/security.

 

Security standards, certifications & training

- SOC2 Type 2 annual audit (third-party external audit performed annually)

- ISO27001 certified

- All developers trained in OWASP best practices

- Annual external penetration test

- End-to-end encryption (in transit via TLS 1.2, at rest via AES-256; see the sketch after this list)

- HIPAA compliant (can meet HIPAA data-handling requirements based on client needs; scrubbing or masking of sensitive information is supported both automatically and manually; governance processes are in place)
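
The encryption bullet above corresponds to standard primitives. The following is a minimal, purely illustrative sketch of enforcing those settings, assuming Python with the standard-library ssl module and the third-party cryptography package; it is not a description of our actual implementation.

    # Illustrative only: a TLS 1.2 minimum for data in transit and AES-256-GCM
    # for data at rest. Library choices here are assumptions, not a description
    # of the production stack.
    import os
    import ssl
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # In transit: require TLS 1.2 or newer on outbound connections.
    tls_context = ssl.create_default_context()
    tls_context.minimum_version = ssl.TLSVersion.TLSv1_2

    # At rest: encrypt a payload with a 256-bit key before storage.
    key = AESGCM.generate_key(bit_length=256)  # in practice, managed by a key service
    nonce = os.urandom(12)                     # unique per encryption operation
    ciphertext = AESGCM(key).encrypt(nonce, b"survey response text", None)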

 

Security status

Bellomy’s in-house developers are focused on understanding rapid shifts in AI platforms such as ChatGPT, Gemini, Claude, and Llama. Bellomy AI Analytics is built with the necessary precautions to protect the data of our clients and ensure that AI platforms cannot learn from client data.

As of Q1 2024:

- We use OpenAI as our primary vendor

- We employ end-to-end encryption

- We scrub PII automatically in our processing pipelines

- No datasets are sent to third-party AI; we send only the limited fragments needed for specific requests. Client data is not used to train the AI.

- Our proprietary process for automatic proper noun “symbolization” further reduces the amount of detail provided to the AI (see the sketch after this list)

- No custom models are built using client data

- Information shared with AI vendors, including OpenAI, is for short-term/immediate use only; we do not store information with any AI vendor
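
The PII-scrubbing and symbolization bullets above describe pre-processing applied before any fragment leaves the pipeline. The sketch below illustrates the general technique in Python; the patterns, function names, and symbol format are hypothetical examples, not our production code.

    # Illustrative only: the general idea behind automatic PII scrubbing and
    # proper noun "symbolization". Patterns and names are hypothetical.
    import re

    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    }

    def scrub_pii(text):
        # Replace common PII patterns with neutral placeholders.
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    def symbolize(text, proper_nouns):
        # Swap known proper nouns for opaque symbols; keep a local map to restore them later.
        mapping = {}
        for i, noun in enumerate(proper_nouns):
            symbol = f"ENTITY_{i}"
            mapping[symbol] = noun
            text = text.replace(noun, symbol)
        return text, mapping

    fragment = "Jane at Acme Corp (jane@acme.com) rated the service 9/10."
    safe_text, symbol_map = symbolize(scrub_pii(fragment), ["Acme Corp", "Jane"])
    # safe_text is what may be sent to the AI; symbol_map never leaves the pipeline.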

 

OpenAI policy

- While we use OpenAI as our primary vendor, we ensure they do not receive any unencrypted information

- Nothing we share with OpenAI can be used to train their models

- Information sent to OpenAI is stored only for short-term/immediate use (i.e., no more than 24 hours)

- Our proprietary modular ensemble approach gives us the ability to swap in vendors other than OpenAI if necessary (see the sketch after this list)

- We can deploy “walled garden” large language models (LLMs) that are entirely hosted by Bellomy in our AWS data center 
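
As a purely illustrative sketch of what a vendor-swappable interface can look like (class and method names are hypothetical, not our actual code):

    # Illustrative only: an abstraction that keeps the pipeline independent of
    # any single AI vendor. Names are hypothetical.
    from abc import ABC, abstractmethod

    class LLMProvider(ABC):
        # Narrow interface the rest of the pipeline depends on.
        @abstractmethod
        def complete(self, prompt: str) -> str: ...

    class OpenAIProvider(LLMProvider):
        def complete(self, prompt: str) -> str:
            raise NotImplementedError("call the hosted SaaS API here")

    class WalledGardenProvider(LLMProvider):
        def complete(self, prompt: str) -> str:
            raise NotImplementedError("call a model hosted in our own AWS environment")

    def run_analysis(provider: LLMProvider, fragment: str) -> str:
        # The pipeline sees only the interface, so the vendor behind it can change.
        return provider.complete(fragment)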
