Guardrails AI
- 190 followers
- United States of America
- http://guardrailsai.com
- contact@guardrailsai.com
Pinned Loading
Repositories
- unusual_prompt Public
A Guardrails AI input validator that detects if the user is trying to jailbreak an LLM using unusual prompting techniques that involve jailbreaking and tricking the LLM
guardrails-ai/unusual_prompt’s past year of commit activity - llm_critic Public
A Guardrails AI validator that validates LLM responses by grading + evaluating them against a given set of criteria / metrics
guardrails-ai/llm_critic’s past year of commit activity - saliency_check Public
Guardrails AI: Saliency check - Checks that the summary covers the list of topics present in the document
guardrails-ai/saliency_check’s past year of commit activity - response_evaluator Public
A Guardrails AI validator that validates LLM responses by re-prompting the LLM to self-evaluate
guardrails-ai/response_evaluator’s past year of commit activity - wiki_provenance Public
A Guardrails AI validator that detects hallucinations using Wikipedia as the source of truth
guardrails-ai/wiki_provenance’s past year of commit activity - politeness_check Public
guardrails-ai/politeness_check’s past year of commit activity
People
This organization has no public members. You must be a member to see who’s a part of this organization.
Most used topics
Loading…