Large Language Models unlock new capabilities—and expose brand-new attack surfaces.
From prompt injection and data exfiltration to model denial-of-service and insecure plugin calls, adversaries are exploiting weaknesses traditional AppSec never anticipated.
The new OWASP LLM Top-10 provides a shared vocabulary for AI risks; this session turns that list into actionable engineering practice.
You’ll learn how to threat-model LLM endpoints, design guardrails that actually block malicious behavior, sandbox tools and plug-ins with least privilege, and align your mitigations to the NIST AI Risk Management Framework for audit-ready governance.
Problems Solved
Why Now
What You’ll Learn
Agenda
Opening: The New AI Attack Surface
How LLMs change the threat model. Examples of real-world attacks: prompt injections, indirect injections, model DoS, and exfiltration via vector stores.
Pattern 1: Threat Modeling LLM Endpoints Identify assets, trust boundaries, and high-risk flows. Apply STRIDE-inspired analysis to prompts, context windows, retrieval layers, and plugin calls.
Pattern 2: Designing Input/Output Guardrails Policy filtering, schema validation, and content moderation. Runtime vs compile-time guardrails—what actually works in production. Enforcing determinism and fail-safe defaults.
Pattern 3: Sandboxing and Least Privilege Plugins Secure function calling: scoped IAM, network egress rules, per-plugin secrets, and API key vaulting. Container isolation and ephemeral agent sandboxes.
Pattern 4: Data Protection and Tenancy in RAG Redacting sensitive data before embedding. Segregating tenant vectors and access policies. Auditing data lineage and evidence paths.
Pattern 5: Red Team & Evaluation Frameworks Running adversarial simulations aligned with OWASP LLM Top-10. Common exploits and how to detect them. Integrating automated red-team tests into CI/CD pipelines.
Pattern 6: Governance & Framework Mapping Mapping mitigations to NIST AI RMF (categories RA, MA, ME). Building dashboards and executive summaries for risk reporting.
Wrap-Up & Action Plan Summarize practical controls that can be implemented within 30 days. Introduce the Guardrail Policy Starter Kit + Red-Team Runbook templates. Live checklist review for readiness maturity.
Key Framework References
Takeaways
Rohit Bhardwaj is a Director of Architecture working at Salesforce. Rohit has extensive experience architecting multi-tenant cloud-native solutions in Resilient Microservices Service-Oriented architectures using AWS Stack. In addition, Rohit has a proven ability in designing solutions and executing and delivering transformational programs that reduce costs and increase efficiencies.
As a trusted advisor, leader, and collaborator, Rohit applies problem resolution, analytical, and operational skills to all initiatives and develops strategic requirements and solution analysis through all stages of the project life cycle and product readiness to execution.
Rohit excels in designing scalable cloud microservice architectures using Spring Boot and Netflix OSS technologies using AWS and Google clouds. As a Security Ninja, Rohit looks for ways to resolve application security vulnerabilities using ethical hacking and threat modeling. Rohit is excited about architecting cloud technologies using Dockers, REDIS, NGINX, RightScale, RabbitMQ, Apigee, Azul Zing, Actuate BIRT reporting, Chef, Splunk, Rest-Assured, SoapUI, Dynatrace, and EnterpriseDB. In addition, Rohit has developed lambda architecture solutions using Apache Spark, Cassandra, and Camel for real-time analytics and integration projects.
Rohit has done MBA from Babson College in Corporate Entrepreneurship, Masters in Computer Science from Boston University and Harvard University. Rohit is a regular speaker at No Fluff Just Stuff, UberConf, RichWeb, GIDS, and other international conferences.
Rohit loves to connect on http://www.productivecloudinnovation.com.
http://linkedin.com/in/rohit-bhardwaj-cloud or using Twitter at rbhardwaj1.