What is Credal.ai?
Credal.ai is a tool designed to secure sensitive data as enterprises adopt AI applications. From enforcing access policies to masking sensitive information, Credal provides companies with secure APIs, a secure chat UI, and a Slack bot, all aimed at safeguarding business secrets and protected data such as PII.
How does Credal ensure data security?
Credal safeguards data security by automatically enforcing access policies and acceptable-use rules. It auto-redacts sensitive phrases and keywords before data leaves the organization, and stays in constant sync with permissions from source systems such as Google Docs and Confluence. Additionally, granular audit logs provide a clear record of the data shared with AI providers.
What types of AI applications can be securely utilized using Credal?
Credal allows secure use of popular AI applications like ChatGPT and offers drop-in substitutes for the OpenAI and Anthropic APIs. Internally developed and externally procured AI tools can also be integrated safely through Credal.
What are the key features offered by Credal to protect sensitive data?
Credal's key features for protecting sensitive data include automated enforcement of access policies, masking of sensitive data, and granular audit logs for data shared with AI providers. It also allows redaction of sensitive keywords and phrases, negotiates and maintains MSAs with AI providers, and ensures permissions synchronization with source systems.
Can Credal be fully deployed on-premise?
Yes, Credal can be fully deployed on-premise, ensuring that data never leaves the network while leveraging existing investments in Azure OpenAI, AWS Bedrock, and even open-source models.
What are some popular AI applications that can be securely used with Credal?
Credal can be used with popular AI models such as GPT-4 32k and Claude, and it offers drop-in substitutes for the OpenAI and Anthropic APIs.
How does Credal support developers in maintaining data security?
Credal supports developers by providing secure APIs to build custom applications while respecting source permissions and masking sensitive data. It offers drop-in replacements for OpenAI and Anthropic APIs, making it possible to build applications on top of enterprise data securely.
What integrations does Credal offer with external data sources?
Credal integrates with enterprise data sources like Google Drive, Confluence, and Slack. This allows seamless use of AI while respecting source-system permissions and masking sensitive data.
How does Credal automate enforcement of access policies?
Credal automates enforcement of access policies by integrating with source systems and respecting their permissions. This lets enterprises manage policies for internally developed and externally procured AI tools in one place.
What types of data can Credal mask?
Credal can mask many kinds of sensitive data, including but not limited to business secrets, personally identifiable information (PII), and sensitive keywords and phrases.
What audit logging capabilities does Credal provide?
Credal provides comprehensive audit logs of all data shared with AI providers. This offers full transparency into who shares what data, which provider receives it, and the terms and agreements governing that data.
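As a rough illustration of what such a log might capture, here is a minimal sketch with a hypothetical schema (the field names and record shape are assumptions, not Credal's actual format): each outbound request to an AI provider yields one immutable entry recording who shared data, which provider received it, and how much was masked.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical audit-log record; field names are illustrative assumptions.
@dataclass(frozen=True)
class AuditEntry:
    user: str        # who shared the data
    provider: str    # which AI provider received it
    redactions: int  # how many sensitive spans were masked before sending
    timestamp: str   # when the request left the organization

entry = AuditEntry(
    user="alice@acme.com",
    provider="openai",
    redactions=2,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(entry)))
```

Frozen dataclasses make each record immutable once written, which is the property an audit trail needs.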
Is it possible to use Credal with internally built AI tools?
Yes, Credal enables companies to define, enforce, and audit access to internally built AI tools. By consolidating management of these tools in one place, Credal enhances security while still allowing customization.
How does Credal aid in enforcing acceptable use policies?
Credal enforces acceptable use policies by building them into its APIs, chat UI, and Slack bot. These policies help prevent misuse and ensure users follow guidelines when accessing enterprise data through AI apps.
Does Credal redact sensitive data before it leaves the organization?
Yes, Credal automatically redacts sensitive keywords, phrases, and categories of data before it leaves the organization. This feature increases data security and grants InfoSec teams full control while remaining user-friendly.
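The redaction step can be pictured with a small sketch. This is not Credal's implementation; the patterns and keywords below are assumptions chosen for illustration. The idea is a masking pass over outbound text: configured keywords and PII-like patterns are replaced with placeholders before the prompt leaves the network.

```python
import re

# Illustrative PII patterns only; a real system would use far richer detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str, keywords: list[str]) -> str:
    """Mask configured keywords and PII-like patterns before text leaves the org."""
    for word in keywords:
        text = re.sub(re.escape(word), "[REDACTED]", text, flags=re.IGNORECASE)
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane@acme.com about Project Falcon, SSN 123-45-6789."
print(redact(prompt, ["Project Falcon"]))
# -> Contact [EMAIL] about [REDACTED], SSN [SSN].
```

Running keyword substitution before pattern substitution keeps a keyword from being half-consumed by a PII match.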
Can Credal sync permissions with source systems?
Yes, Credal syncs permissions with source systems such as Google Docs, Confluence pages, and Slack channels. This ensures the right permissions are applied to data shared with AI providers while keeping the context relevant for Large Language Models (LLMs).
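A minimal sketch of what permission-aware context assembly means in practice (a hypothetical data model, not Credal's actual API): only documents the requesting user can already read in the source system are eligible to become LLM context.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    content: str
    # In this sketch, the allow-list stands in for permissions synced
    # from the source system (e.g. a Google Doc's sharing settings).
    allowed_users: set[str] = field(default_factory=set)

def build_context(user: str, docs: list[Document]) -> list[Document]:
    """Drop any document the user cannot read in the source system."""
    return [d for d in docs if user in d.allowed_users]

docs = [
    Document("Public FAQ", "...", {"alice", "bob"}),
    Document("M&A memo", "...", {"alice"}),
]
print([d.title for d in build_context("bob", docs)])  # -> ['Public FAQ']
```

Filtering happens before the prompt is built, so a user's question can never surface a document they lack access to in the source system.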
How can enterprises control usage of AI apps with Credal?
Enterprises can control usage of AI apps with Credal by defining, enforcing, and auditing access policies. With Credal, companies can manage policies on both internally built and externally procured AI tools, all from a single portal.
Can companies build custom applications on secure APIs using Credal?
Yes, companies can build custom applications on secure APIs using Credal. It allows developers to build applications while respecting source permissions, generating automatic audit logs and masking sensitive data.
How can developers benefit from using Credal?
Developers can benefit from Credal by using its secure APIs to build custom applications that comply with company data-security policies. It also provides drop-in replacements for the OpenAI and Anthropic APIs, letting developers build apps on enterprise data securely.
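"Drop-in replacement" generally means keeping the standard request format and changing only the host the request is sent to. The sketch below shows that idea; the base URL is an assumption for illustration, not Credal's documented endpoint, and the payload is the standard OpenAI-style chat-completions shape.

```python
import json
import urllib.request

# Hypothetical proxy endpoint; Credal's real base URL and auth scheme may differ.
BASE_URL = "https://app.credal.ai/api/openai"

def chat_request(model: str, user_message: str, api_key: str) -> urllib.request.Request:
    """Build a standard OpenAI-style chat-completions request aimed at a proxy."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = chat_request("gpt-4-32k", "Summarize our Q3 plan.", "placeholder-key")
print(req.full_url)  # only the host differs from calling the provider directly
```

Because the request shape is unchanged, existing code using the provider's SDK or raw HTTP calls only needs its base URL swapped.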
Does Credal negotiate MSAs with AI providers?
Yes, Credal negotiates and maintains Master Service Agreements (MSAs) with all major AI providers. Under these agreements, models cannot train on client data and all data is removed from Large Language Model (LLM) servers.
What do existing users have to say about Credal?
Credal receives positive reviews from existing users. For instance, Will, COO of Latchel, appreciates Credal for accurately capturing notes from long meetings without the risk of sharing sensitive data. Similarly, Aaron, COS of Arena AI, commends Credal for helping coordinate strategy and operations across a team without compromising data security.