Rohit Bhardwaj is a Director of Architecture working at Salesforce. Rohit has extensive experience architecting multi-tenant cloud-native solutions in Resilient Microservices Service-Oriented architectures using AWS Stack. In addition, Rohit has a proven ability in designing solutions and executing and delivering transformational programs that reduce costs and increase efficiencies.
As a trusted advisor, leader, and collaborator, Rohit applies problem resolution, analytical, and operational skills to all initiatives and develops strategic requirements and solution analysis through all stages of the project life cycle and product readiness to execution.
Rohit excels in designing scalable cloud microservice architectures using Spring Boot and Netflix OSS technologies using AWS and Google clouds. As a Security Ninja, Rohit looks for ways to resolve application security vulnerabilities using ethical hacking and threat modeling. Rohit is excited about architecting cloud technologies using Dockers, REDIS, NGINX, RightScale, RabbitMQ, Apigee, Azul Zing, Actuate BIRT reporting, Chef, Splunk, Rest-Assured, SoapUI, Dynatrace, and EnterpriseDB. In addition, Rohit has developed lambda architecture solutions using Apache Spark, Cassandra, and Camel for real-time analytics and integration projects.
Rohit has done MBA from Babson College in Corporate Entrepreneurship, Masters in Computer Science from Boston University and Harvard University. Rohit is a regular speaker at No Fluff Just Stuff, UberConf, RichWeb, GIDS, and other international conferences.
Rohit loves to connect on http://www.productivecloudinnovation.com.
http://linkedin.com/in/rohit-bhardwaj-cloud or using Twitter at rbhardwaj1.
AI has permanently transformed the role of Enterprise Architects. Traditional architectures built around data, applications, and integration are no longer enough. Modern intelligent systems rely on retrieval-augmented reasoning (RAG), relationship-driven graph reasoning (GraphRAG), and autonomous AI agents that must operate safely, predictably, and in alignment with business goals.
This full-day immersive workshop introduces the ARCHAI Blueprint, the first EA 4.0 framework that unifies:
– ARCHAI Fabric — enterprise knowledge & reasoning layer powered by RAG and GraphRAG
– ARCHAI Agents — assistive, autonomous, and cooperative agents with guardrails
– ARCHAI View — C4++ modeling for intelligent architectures
– ARCHAI Maturity Model — a 5-level roadmap toward the autonomous enterprise
Through storytelling, live architecture labs, and hands-on modeling, participants will learn how to design safe, scalable, AI-augmented enterprise architectures. You will build an end-to-end architecture for a realistic case study—ArchiMetal, a global manufacturing enterprise modernizing with AI.
By the end, you will not just understand RAG and GraphRAG—you will know how to embed them into production-grade enterprise architecture that is governable, observable, and future-proof.
⸻
KEY TAKEAWAYS
Participants will leave with the ability to:
Architect AI-Driven Knowledge Systems
•Design enterprise-scale RAG and GraphRAG pipelines
•Build knowledge fabrics that unify documents, graphs, embeddings & metadata
•Govern retrieval consistency, drift, safety, lineage & real-time updates
Model Intelligent Systems Using ARCHAI View
•Produce C0 → C3 diagrams (C4++ enhanced for AI)
•Model knowledge flows, agent interactions, guardrails & reasoning boundaries
Design and Govern Enterprise AI Agents
•Define agent roles, decisions, constraints, and safety boundaries
•Create multi-agent workflows across business domains
•Establish guardrail & observability architecture
Build AI-Augmented Business, Data, Application & Technology Architectures
•Extend TOGAF with AI reasoning-layer constructs
•Integrate RAG/GraphRAG into EA artifacts and capability maps
•Architect runtime platforms for inference, retrieval, safety & cost control
Create an EA 4.0 Roadmap Using the ARCHAI Maturity Model
•Assess enterprise readiness
•Identify transformation milestones across 5 maturity levels
•Build a 12–36 month strategic roadmap for intelligent systems adoption
Welcome & Foundations of EA 4.0
•Why enterprise architecture must evolve for AI
•Overview of the 5 ARCHAI components
•ARCHAI Blueprint
•ARCHAI View
•ARCHAI Fabric
•ARCHAI Agents
•ARCHAI Maturity Model
⸻
Session 1 — Architecture Vision
•The new Enterprise Knowledge & Reasoning Layer
•Why RAG/GraphRAG require architectural foundations
•Intelligent system context modeling (C0/C1)
•Introducing the ArchiMetal case study
⸻
Session 2 — Business Architecture for AI
•Mapping AI-driven capabilities and value streams
•Decision hotspots and agent opportunities
•Business capability redesign
•ARCHAI Maturity Model assessment
⸻
Session 3 — Data Architecture: ARCHAI Fabric
•Designing the knowledge layer (RAG + GraphRAG)
•Vector, graph, ontology, and metadata models
•Governance for retrieval, drift, lineage, and safety
•C2 modeling for the Fabric
⸻
Session 4 — Application Architecture: ARCHAI Agents
•Assistive, autonomous & cooperative agent patterns
•Agent decision boundaries and governance
•Multi-agent workflows & human-in-loop logic
•C2/C3 diagrams for agent flows
⸻
Session 5 — Technology Architecture
•AI & retrieval runtimes
•Guardrail and policy engines
•Observability for reasoning, retrieval, and agent behavior
•Technical standards for EA 4.0 systems
⸻
Session 6 — Integrated Architecture Lab
•Build the full ARCHAI Blueprint for ArchiMetal
•Create C0 → C3 diagrams (ARCHAI View)
•Design Fabric + agent ecosystem
•Map guardrails & governance
•Define the EA 4.0 transformation roadmap
⸻
Session 7 — Governance & Operating Model
•Knowledge governance (Fabric)
•Agent governance (charters, permissions, kill switches)
•Model & retrieval lifecycle governance
•Risk, compliance, auditability
•EA 4.0 operating model for intelligent systems
⸻
Session 8 — Future Trends & Roadmap
•Multi-modal RAG & graph fusion
•Enterprise agent meshes
•Intelligent twins & edge reasoning
•Autonomous governance
•3–5 year ARCHAI roadmap
⸻
Closing & Next Steps
•Recap of frameworks & deliverables
•EA transformation priorities for the next 90 days
•Certification and final Q&A
Large Language Models unlock new capabilities—and expose brand-new attack surfaces.
From prompt injection and data exfiltration to model denial-of-service and insecure plugin calls, adversaries are exploiting weaknesses traditional AppSec never anticipated.
The new OWASP LLM Top-10 provides a shared vocabulary for AI risks; this session turns that list into actionable engineering practice.
You’ll learn how to threat-model LLM endpoints, design guardrails that actually block malicious behavior, sandbox tools and plug-ins with least privilege, and align your mitigations to the NIST AI Risk Management Framework for audit-ready governance.
Problems Solved
Why Now
What You’ll Learn
Agenda
Opening: The New AI Attack Surface
How LLMs change the threat model. Examples of real-world attacks: prompt injections, indirect injections, model DoS, and exfiltration via vector stores.
Pattern 1: Threat Modeling LLM Endpoints Identify assets, trust boundaries, and high-risk flows. Apply STRIDE-inspired analysis to prompts, context windows, retrieval layers, and plugin calls.
Pattern 2: Designing Input/Output Guardrails Policy filtering, schema validation, and content moderation. Runtime vs compile-time guardrails—what actually works in production. Enforcing determinism and fail-safe defaults.
Pattern 3: Sandboxing and Least Privilege Plugins Secure function calling: scoped IAM, network egress rules, per-plugin secrets, and API key vaulting. Container isolation and ephemeral agent sandboxes.
Pattern 4: Data Protection and Tenancy in RAG Redacting sensitive data before embedding. Segregating tenant vectors and access policies. Auditing data lineage and evidence paths.
Pattern 5: Red Team & Evaluation Frameworks Running adversarial simulations aligned with OWASP LLM Top-10. Common exploits and how to detect them. Integrating automated red-team tests into CI/CD pipelines.
Pattern 6: Governance & Framework Mapping Mapping mitigations to NIST AI RMF (categories RA, MA, ME). Building dashboards and executive summaries for risk reporting.
Wrap-Up & Action Plan Summarize practical controls that can be implemented within 30 days. Introduce the Guardrail Policy Starter Kit + Red-Team Runbook templates. Live checklist review for readiness maturity.
Key Framework References
Takeaways
AI, agentic workflows, digital twins, edge intelligence, spatial computing, and blockchain trust are converging to reshape how enterprises operate.
This session introduces Enterprise Architecture 4.0—a practical, future-ready approach where architectures become intelligent, adaptive, and continuously learning.
You’ll explore the EA 4.0 Tech Radar, understand the six major waves of disruption, and learn the ARCHAI Blueprint—a structured framework for designing AI-native, agent-ready, and trust-centered systems.
Leave with a clear set of patterns and a 12-month roadmap for preparing your enterprise for the next era of intelligent operations.
⸻
KEY TAKEAWAYS
•Understand the EA 4.0 shift toward intelligent, agent-driven architecture
•Learn the top technology trends: AI, agents, edge, twins, spatial, blockchain, and machine customers
•See how the ARCHAI Blueprint structures AI-first design and governance
•Get practical patterns for agent safety, digital twins, trust, and ecosystem readiness
•Leave with a concise 12-month roadmap for implementing EA 4.0
⸻
AGENDA
– The Speed of Change
Why traditional enterprise architecture cannot support AI-native, agent-driven systems.
– The EA 4.0 Tech Radar
A 3–5 year outlook across:
•Agentic AI
•Edge intelligence
•Digital twins
•Spatial computing
•Trusted automation (blockchain)
•Machine customers
– The Six Waves of Transformation
Short deep dives into each wave with real enterprise use cases.
– The ARCHAI Blueprint
A clear architectural framework for AI-first enterprises:
•Attention & Intent Modeling
•Retrieval & Knowledge Fabric
•Capability & Context Models
•Human + Agent Co-working Patterns
•Action Guardrails & Safety
•Integration & Intelligence Architecture
This gives architects a single, unified design methodology across all emerging technologies.
– The Architect’s Playbook
Practical patterns for:
•Intelligence fabrics
•Agent-safe APIs
•Digital twin integration
•Trust & decentralized identity
•Ecosystem-ready design
– Operationalizing EA 4.0
How architecture teams evolve:
•New EA roles
•Continuous planning
•Agent governance
•EA dashboards
•The 12-month adoption roadmap
Enterprises are moving from single AI agents to networks of agents that trigger thousands of API calls, retries, and tool-chains per prompt. Without orchestration discipline and APIs built for AI-scale, systems buckle under bursty load, retry storms, cache-miss spikes, inconsistent decisions, and runaway costs.
This talk shows how to combine MCP (Model Context Protocol) with proven inter-agent orchestration patterns — Supervisor, Pub/Sub, Blackboard, Capability Router — and how to harden APIs for autonomous traffic using rate limits, dedupe, backpressure, async workflows, resilient caching, and autoscaling without bill shock.
You’ll also learn the AIRLOCK Framework for governing multi-agent behavior with access boundaries, identity checks, rate controls, least-privilege routing, observability, compliance filters, and kill-switches.
You will walk away with a practical blueprint for building multi-agent systems that are fast, safe, reliable, and cost-predictable.
KEY TAKEAWAYS
Pattern Literacy: When to use Orchestrator, Pub/Sub, Blackboard, Router
MCP Fluency: Standardize agent↔tool integration
API Scaling: Rate limits, dedupe, backpressure, async, caching
Resilience: Bulkheads, jitter, circuit breakers, autoscaling guardrails
Observability: Trace chain-ID/tool-ID across agents & tools
AIRLOCK Governance: Access boundaries, identity, rate controls, least-privilege routing, compliance, kill-switches
AGENDA
Why AI Changes Load Patterns
Bursty workloads · fan-out · retry amplification · cost spikes
MCP 101
Standardized agent→tool access · hot-swappable tools
Orchestration Patterns
Supervisor · Pub/Sub · Blackboard · Capability Router
Architecting APIs for AI Traffic
Multi-dimensional rate limits · dedupe · backpressure · SWR caching · async
Resilience & Autoscaling
Circuit breakers · bulkheads · kill-switches · budget caps
Observability & Governance
Chain-ID tracing · anomaly detection · AIRLOCK boundaries
AI inference is no longer a simple model call—it is a multi-hop DAG of planners, retrievers, vector searches, large models, tools, and agent loops. With this complexity comes new failure modes: tail-latency blowups, silent retry storms, vector store cold partitions, GPU queue saturation, exponential cost curves, and unmeasured carbon impact.
In this talk, we unveil ROCS-Loop, a practical architecture designed to close the four critical loops of enterprise AI:
•Reliability (Predictable latency, controlled queues, resilient routing)
•Observability (Full DAG tracing, prompt spans, vector metrics, GPU queue depth)
•Cost-Awareness (Token budgets, model tiering, cost attribution, spot/preemptible strategies)
•Sustainability (SCI metrics, carbon-aware routing, efficient hardware, eliminating unnecessary work)
KEY TAKEAWAYS
•Understand the four forces behind AI outages (latency, visibility, cost, carbon).
•Learn the ROCS-Loop framework for enterprise-grade AI reliability.
•Apply 19 practical patterns to reduce P99, prevent retry storms, and control GPU spend.
•Gain a clear view of vector store + agent observability and GPU queue metrics.
•Learn how ROCS-Loop maps to GCP, Azure, Databricks, FinOps & SCI.
•Leave with a 30-day action plan to stabilize your AI workloads.
⸻
AGENDA
1.The Quiet Outage: Why AI inference fails
2.X-Ray of the inference pipeline (RAG, agents, vector, GPUs)
3.Introducing the ROCS-Loop framework
4.19 patterns for Reliability, Observability, FinOps & GreenOps
5.Cross-cloud mapping (GCP, Azure, Databricks)
6.Hands-on: Diagnose an outage with ROCS
7.Your 30-day ROCS stabilization plan
8.Closing: Becoming a ROCS AI Architect
AI systems behave fundamentally differently from traditional software — they reason, retrieve, learn, and act with autonomy.
These behaviors introduce new failure modes: retry storms, inference cost surges, misaligned agent actions, semantic drift, and retrieval errors.
Most system design approaches were never built to handle these risks.
This talk presents the A.R.C.H.A.I. Blueprint
(AI-Ready Contextual Human-Aligned Initiative),
a modern, AI-first architecture methodology that helps organizations design systems that are safe, scalable, resilient, and aligned with human intent.
Through vivid scenarios drawn from Dreamazon—a fictional global retailer—we show how classical architecture breaks when AI agents interact with APIs, data, and user flows.
Attendees learn how to extend the traditional C4 model into C4+, incorporating AI reasoning layers, retrieval paths, guardrails, drift surfaces, and human oversight points.
Participants also engage in an Architecture Lab, applying ARCHAI to design an AI-powered system using real templates, patterns, and safety practices.
This session equips architects, developers, and technical leaders with the frameworks and confidence needed to build AI-first systems responsibly.
⸻
Key Skills You Will Learn
•How to design systems that incorporate AI reasoning and autonomous behavior
•How to extend C4 into C4+, modeling intelligence, retrieval, guardrails, and safety layers
•How to build AI-safe flows with idempotency, retry constraints, and ambiguity handling
•How to architect Retrieval-Augmented Generation (RAG) and agent orchestration
•How to map business capabilities and value chains for AI transformation
•How to identify AI-specific failure patterns and design to prevent them
•How to define drift detection, cost ceilings, and guardrail policies
•How to create an AI-first governance and ownership model
⸻
What You Will Take Away
•A complete understanding of the A.R.C.H.A.I. Blueprint
•A reusable C4+ architecture template for designing AI systems
•Practical patterns to prevent runaway AI behavior, duplication, and cost explosions
•A framework for aligning AI systems with business intent and human oversight
•Tools for modeling retrieval boundaries and agent interaction flows
•A clear professional roadmap for becoming an AI-first architect
⸻
Agenda
•Why traditional system design breaks when AI agents enter the system
•Realistic case studies from Dreamazon’s Black Friday failures
•Introduction to the A.R.C.H.A.I. Blueprint
•How to apply the six pillars to real-world AI use cases
•Extending the C4 model into C4+ for AI architecture
•Modeling reasoning paths, retrieval pipelines, and safety constraints
•Architecture Lab: Building an AI-ready system using ARCHAI
•Design templates, scorecards, and guardrail patterns
•Roadmap for evolving into an AI-first architect
“By 2030, 80 percent of heritage financial services firms will go out of business, become commoditized, or exist only formally but not competing effectively”, predicts Gartner.
This session explores the integration of AI, specifically ChatGPT, into cloud adoption frameworks to modernize legacy systems. Learn how to leverage AWS Cloud Adoption Framework (CAF) 3.0, Microsoft Cloud Adoption Framework for Azure, and Google Cloud Adoption Framework to build cloud-native architectures that maximize scalability, flexibility, and security. Designed for architects, technical leads, and senior IT professionals, this talk provides actionable insights and strategies for successful digital transformation.
Attendees will learn how to:
Integrate AI assistants into cloud readiness, migration, and optimization phases.
Use AI to analyze legacy code, auto-generate documentation, and map dependencies.
Employ the AWS CAF 3.0, Microsoft CAF, and Google CAF to guide large-scale migration while balancing security, compliance, and cost.
Design cloud-native architectures powered by continuous learning, resilience, and automation.
Packed with case studies, modernization blueprints, and AI-assisted workflows, this session equips architects and technical leaders to bridge the gap between heritage systems and future-ready enterprises.
Agenda (60–90 minutes)
1 Introduction: Why Legacy Modernization Now (10 min)
The Gartner 2030 prediction and what it means for enterprises.
The rise of AI-augmented modernization.
2 Understanding Cloud Adoption Frameworks (15 min)
Overview of AWS CAF 3.0, Microsoft CAF for Azure, Google CAF.
Common pillars: strategy, governance, people, platform, security, and operations.
Strengths and trade-offs across frameworks.
3 Strategic Role of AI in Legacy Modernization (15 min)
How LLMs augment discovery, documentation, and refactoring.
ChatGPT as a legacy analysis assistant: reading COBOL, PL/SQL, Java monoliths.
AI-driven dependency mapping, test case generation, and modernization playbooks.
4 Steps for Moving Legacy Systems to the Cloud (20 min)
Assessment → Migration Planning → Modernization Execution → Optimization.
Incremental vs. Full Rewrite: decision matrix and hybrid models.
Ensuring compliance, resilience, and audit readiness throughout migration.
5 Designing AI-Ready Cloud-Native Architectures (15 min)
Embedding RAG, microservices, and event-driven architecture.
Leveraging container orchestration (EKS, AKS, GKE) and serverless compute.
Implementing AI observability, MLOps, and data pipelines on cloud.
6 Case Studies & Real-World Transformations (10 min)
BFSI: Mainframe-to-Microservices using AWS CAF + GenAI refactoring.
Manufacturing: SAP modernization using Azure CAF + AI code summarization.
Retail: Omnichannel API modernization with GCP CAF + Copilot GPTs.
7 Best Practices & Roadmap (5 min)
Align modernization with business capability models.
Embed AI governance into CAF workflows.
Build continuous improvement loops through feedback and metrics.
8 Q&A / Wrap-Up (5 min)
Recap core insights.
The future of AI-enhanced cloud adoption and autonomous modernization.
AI is moving from pilots to production faster than most enterprises are prepared for. Over the next 3–5 years, architectures must evolve to support agentic workflows, governed AI, secure inference, and cost-efficient operations.
This session gives you a practical blueprint for building an AI-native enterprise architecture—powered by MCP/LangGraph orchestration, GraphRAG retrieval, ISO/IEC 42001 governance, NIST AI RMF safety controls, confidential computing, and post-quantum cryptography.
You’ll leave with a 90-day activation plan and a 3-year roadmap for designing safe, trusted, and scalable AI systems.
KEY TAKEAWAYS
•Clear blueprint for agentic AI architecture
•How to implement governance-as-code (ISO 42001 + NIST AI RMF)
•Patterns for secure & confidential AI (PQC + enclaves)
•FinOps + GreenOps practices for cost & carbon visibility
•AgentOps methods for observability and reliability
•A 90-day plan to start safely
•A 3-year roadmap to modernize EA
⸻
AGENDA
1.The Shift: From AI pilots to agentic platforms
2.Failure Modes: What breaks in real enterprises
3.Blueprint: MCP, LangGraph, GraphRAG, tool safety
4.Governance & Security: ISO 42001, NIST, PQC, confidential compute
5.FinOps & GreenOps: Cost + carbon per inference
6.AgentOps: Observability & drift detection
7.90-Day Activation Plan
8.3-Year EA Roadmap
9.Closing: Architecting the Trusted AI Enterprise