Cekura Red Teaming recently launched!

Launch YC: Cekura Red Teaming: Stress-test your AI agents for Jailbreaks, Bias, Toxicity and more

"Break your AI agents security before your customers do."
TL;DR: Cekura Red Teaming has launched for enterprises and startups building Conversational AI in compliance-heavy sectors like BFSI, Healthcare, Legal, etc.

Watch the launch video here or book a call here.

Founded by Tarush Agarwal, Shashij Gupta & Sidhant Kabra

Hi everyone! Meet Tarush, Shashij, and Sidhant, co-founders of Cekura 👋

The Team

The founders met over eight years ago during their undergraduate studies at IIT Bombay.

Tarush comes from quantitative finance, where he worked on simulations for ultra-low latency trading strategies (think nanoseconds!).

Shashij has previously researched NLP at Google Research and is the first author of a paper on testing AI systems reliably, which has 50+ citations from his work at ETH Zurich.

Sidhant comes from a consulting background advising CXOs at Fortune 500 companies in FMCG and medical devices. He managed P&L in a leading contact center.

Problem

Enterprise security teams are blocking deployments because manual, vibe-based testing doesn’t provide enough assurance against adversarial users. Whether it’s a user bypassing a paywall, tricking a bot into giving legal advice, or social-engineering it into leaking company secrets - the attack surface of conversational AI is massive.

Solution

Scalable, Automated Red Teaming: Cekura allows you to run thousands of adversarial simulations in minutes. It acts as the "bad actor," pushing your agent's logic to its limits across every major vulnerability category:

  • 🔓 Jailbreaking: Cekura simulates sophisticated "prompt injection" attacks to see if your agent will ignore its instructions or reveal its system prompt.
  • ⚖️ Bias & Fairness: Cekura tests for hidden biases in financial, medical, or recruitment advice to ensure your agent stays compliant and fair.
  • 🤬 Toxicity: Cekura tries to provoke your agent into unprofessional or offensive behavior.
  • 🛡️ PII & Data Leakage: Cekura attempts to extract sensitive data like credit card numbers, internal keys, or user data that your agent shouldn't have the authority to share.

Red Teaming as a Service (RTaaS)

Beyond its standard library of thousands of scenarios, Cekura also deploys its Forward Deployed Engineers to build personalized adversarial test cases tailored to your specific industry and use case (HIPAA compliance for healthcare, PCI DSS for fintech, etc.)

Learn More

🌐 Visit www.cekura.ai to learn more.
🤝 Ready to Security Test your Voice & Chat AI Agents? Book a call here.
📧 Email the founders here.
👣 Follow Cekura on LinkedInX.

Posted 
January 16, 2026
 in 
Launch
 category
← Back to all posts  

Join Our Newsletter and Get the Latest
Posts to Your Inbox

No spam ever. Read our Privacy Policy
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.