ARCHCONF2026 menu_2
  • Home close
  • Schedule
  • Sessions
  • Speakers
  • Travel
  • Contact Us
  • Members
    • Sign In
  • Register

An architecture pattern is a reusable solution to a commonly occurring problem in software architecture within a specific context. However, lurking alongside these patterns are their dangerous counterparts—anti-patterns—that, while appealing in theory, can lead to costly and far-reaching consequences in practice.

This full-day workshop dives deep into the world of architecture patterns and anti-patterns. We’ll explore the applicability, trade-offs, and governance of key patterns, while also examining how to identify and avoid common anti-patterns. Beyond the technical aspects, we’ll uncover how architecture patterns intersect with organizational elements like implementation, infrastructure, team topologies, data strategies, and generative AI.

Through qualitative analysis, real-world examples, and the use of fitness functions, attendees will gain actionable insights to design systems that are resilient, scalable, and aligned with business goals. Whether you’re an architect, developer, or technical leader, this workshop will equip you with the tools to navigate the complexities of modern software architecture.

Modern system design has entered a new era. It’s no longer enough to optimize for uptime and latency — today’s systems must also be AI-ready, token-efficient, trustworthy, and resilient. Whether building global-scale apps, powering recommendation engines, or integrating GenAI agents, architects need new skills and playbooks to design for scale, speed, and reliability.

This full-day workshop blends classic distributed systems knowledge with AI-native thinking. Through case studies, frameworks, and hands-on design sessions, you’ll learn to design systems that balance performance, cost, resilience, and truthfulness — and walk away with reusable templates you can apply to interviews and real-world architectures.

Target Audience

Enterprise & Cloud Architects → building large-scale, AI-ready systems.

Backend Engineers & Tech Leads → leveling up to system design mastery.

AI/ML & Data Engineers → extending beyond pipelines to full-stack AI systems.

FAANG & Big Tech Interview Candidates → preparing for system design interviews with an AI twist.

Engineering Managers & CTO-track Leaders → guiding teams through AI adoption.

Startup Founders & Builders → scaling AI products without burning money.

Learning Outcomes

By the end of the workshop, participants will be able to:

Apply a 7-step system design framework extended for AI workloads.

Design systems that scale for both requests and tokens.

Architect multi-provider failover and graceful degradation ladders.

Engineer RAG 2.0 pipelines with hybrid search, GraphRAG, and semantic caching.

Implement AI trust & security with guardrails, sandboxing, and red-teaming.

Build observability dashboards for hallucination %, drift, token costs.

Reimagine real-world platforms (Uber, Netflix, Twitter, Instagram) with AI integration.

Practice mock interviews & chaos drills to defend trade-offs under pressure.

Take home reusable templates (AI System Design Canvas, RAG Checklist, Chaos Runbook).

Gain the confidence to lead AI-era system design in interviews, enterprises, or startups.

Workshop Agenda (Full-Day, 8 Hours)
Session 1 – Foundations of Modern System Design (60 min)

The new era: Why classic design is no longer enough.

Architecture KPIs in the AI age: latency, tokens, hallucination %, cost.

Group activity: brainstorm new KPIs.

Session 2 – Frameworks & Mindset (75 min)

The 7-Step System Design Framework (AI-extended).

Scaling humans vs tokens.

Token capacity planning exercise.

Session 3 – Retrieval & Resilience (75 min)

RAG 2.0 patterns: chunking, hybrid retrieval, GraphRAG, semantic cache.

Multi-provider resilience + graceful degradation ladders.

Whiteboard lab: design a resilient RAG pipeline.

Session 4 – Security & Observability (60 min)

Threats: prompt injection, data exfiltration, abuse.

Guardrails, sandboxing, red-teaming.

Observability for LLMs: traces, cost dashboards, drift monitoring.

Activity: STRIDE threat-modeling for an LLM endpoint.

Session 5 – Real-World System Patterns (90 min)

Uber, Netflix, Instagram, Twitter, Search, Fraud detection, Chatbot.

AI-enhanced vs classic system designs.

Breakout lab: redesign a system with AI augmentation.

Session 6 – Interviews & Chaos Drills (75 min)

Mock interview challenges: travel assistant, vector store sharding.

Peer review of trade-offs, diagrams, storytelling.

Chaos drills: provider outage, token overruns, fallback runbooks.

Closing (15 min)

Recap: 3 secrets (Scaling tokens, RAG as index, Resilient degradation).

Templates & takeaways: AI System Design Canvas, RAG Checklist, Chaos Runbook.

Q&A + networking.

Takeaways for Participants

AI System Design Canvas (framework for interviews & real-world reviews).

RAG 2.0 Checklist (end-to-end retrieval playbook).

Chaos Runbook Template (resilience drill starter kit).

AI SLO Dashboard template for observability + FinOps.

Confidence to design and defend AI-ready architectures in both career and enterprise contexts.

The hardest part of software architecture isn’t the technology, it’s the people. Every architecture lives or dies by its ability to influence behavior, build consensus, and turn vision into change. In this session, Michael Carducci explores the real work of being an architect: communicating clearly, guiding decisions, and driving meaningful change in complex organizations. Drawing from decades of experience and the principles behind the Tailor-Made Architecture Model, Carducci shows how to identify where change is needed, package ideas for adoption, and lead with both clarity and empathy.

And while AI may soon help us design systems, it still can’t align humans around them. The enduring art of architecture lies in shaping not just the code, but the culture that makes progress possible. You’ll leave with practical tools to navigate the human side of architecture and a renewed appreciation for why that art still matters.

Event-driven architecture (EDA) is a design principle in which the flow of a system’s operations is driven by the occurrence of events instead of direct communication between services or components. There are many reasons why EDA is a standard architecture for many moderate to large companies. It offers a history of events with the ability to rewind the ability to perform real-time data processing in a scalable and fault-tolerant way. It provides real-time extract-transform-load (ETL) capabilities to have near-instantaneous processing. EDA can be used with microservice architectures as the communication channel or any other architecture.

In this workshop, we will discuss the prevalent principles regarding EDA, and you will gain hands-on experience performing and running standard techniques.

  • Key Concepts of Event-Driven Architecture
  • Event Sourcing
  • Event Streaming
  • Multi-tenant Event-Driven Systems
  • Producers, Consumers
  • Microservice Boundaries
  • Stream vs. Table
  • Event Notification
  • Event Carried State Transfer
  • Domain Events
  • Tying EDA to Domain Driven Design
  • Materialized Views
  • Outbox Pattern
  • CQRS (Command Query Responsibility Segregation)
  • Saga Pattern (Choreography and Orchestrator)
  • Avoiding Coupling
  • Monitoring Systems
  • Cloud-Based EDA

Building an AI model is the easy part—making it work reliably in production is where the real engineering begins. In this fast-paced, experience-driven session, Ken explores the architecture, patterns, and practices behind operationalizing AI at scale. Drawing from real-world lessons and enterprise implementations, Ken will demystify the complex intersection of machine learning, DevOps, and data engineering, showing how modern organizations bring AI from the lab into mission-critical systems.

Attendees will learn how to:

Design production-ready AI pipelines that are testable, observable, and maintainable

Integrate model deployment, monitoring, and feedback loops using MLOps best practices

Avoid common pitfalls in scaling, governance, and model drift management

Leverage automation to reduce friction between data science and engineering teams

Whether you’re a software architect, developer, or engineering leader, this session will give you a clear roadmap for turning AI innovation into operational excellence—with the same pragmatic, architecture-first perspective that Ken is known for.

TBD

TBD

In the ever-changing landscape of technology, Solution Architects stand as the linchpin between complex business challenges and innovative technological solutions. This presentation dives deep into the world of Solution Architects, exploring their pivotal role in crafting tailored, efficient, and scalable solutions. From deciphering intricate business requirements to orchestrating seamless integrations, Solution Architects navigate a maze of technologies to deliver outcomes that align perfectly with organizational goals. Join us as we unravel the key responsibilities, skills, and methodologies that empower Solution Architects to transform abstract ideas into tangible, impactful solutions, shaping the future of businesses in a digital age.

In this session we will walk thru the following:

  • Understanding the role of the Solution Architect (SA)
  • Key Responsibilities
  • Skills and Competencies of an SA
  • Common and Future Innovations to consider

Whether you are an existing Solution Architect looking to hone or validate your skills or you are looking to get into the role of Solution Architect, this session is for you!

In the realm of architecture, principles form the bedrock upon which innovative and enduring designs are crafted. This presentation delves into the core architectural principles that guide the creation of structures both functional and aesthetic. Exploring concepts such as balance, proportion, harmony, and sustainability, attendees will gain profound insights into the art and science of architectural design. Through real-world examples and practical applications, this session illuminates the transformative power of adhering to these principles, shaping not only buildings but entire environments. Join us as we unravel the secrets behind architectural mastery and the principles that define architectural brilliance.

Good architectural principles are fundamental guidelines or rules that inform the design and development of software systems, ensuring they are scalable, maintainable, and adaptable. Here are some key architectural principles that are generally considered valuable in software development:

  • Modularity
  • Simplicity
  • Scalability
  • Flexibility
  • Reusability
  • Maintainability
  • Performance
  • Security
  • Testability
  • Consistency
  • Interoperability
  • Evolutionary Design

Adhering to these architectural principles can lead to the development of robust, maintainable, and adaptable software systems that meet the needs of users and stakeholders effectively.

In this hands-on session, participants will learn how to bridge the gap between technical strategy and execution using systems thinking principles.

Through some exercises, software architects will practice mapping business goals, constraints, and feedback loops, then translate them into a clear and adaptable technical roadmap.

This presentation focuses on helping architects/engineers to move from abstract vision to actionable outcomes, aligning architecture with value, sequencing initiatives, and communicating trade-offs effectively to stakeholders.

By the end of this session, participants will be able to:

Understand how systems thinking reveals dependencies and leverage points within technical ecosystems
Identify how business outcomes can be mapped to technical capabilities
Practice creating an adaptive roadmap using “Now / Next / Laterˮ framing
Learn to communicate trade-offs and priorities in a way that aligns with business goals Leave with a reusable framework and template for turning architectural strategy into delivery steps

Modernizing legacy systems seemed exciting…until I found myself absorbed in rewrites, facing business blockers, and watching tech debt pile up instead of shrink. In this talk, I’ll share the biggest traps I’ve seen and experienced firsthand while working on modernization efforts in large organizations—and what helped us avoid (or recover from) them. From picking the wrong architecture patterns too early to losing stakeholder trust halfway through, I’ll walk through real examples of what not to do, along with the principles and strategies that helped us get back on track. Whether you’re breaking down a monolith or updating a business-critical system, I’ll help you steer clear of common pitfalls and make smarter, more sustainable decisions.

What This Talk Will Answer:
-What are the most common and costly mistakes teams make during architecture modernization?
-How do you choose between refactoring, rewriting, or rearchitecting a legacy system?
-How can Domain-Driven Design reduce risk and improve focus in modernization efforts?
-What strategies keep modernization aligned with business priorities and avoid loss of momentum?
-How do you avoid turning tech upgrades into long-running, low-impact projects?

In this architectural kata, you will step into the shoes of a software architect tasked with designing a modern healthcare management system for a rapidly growing provider, MedBest.

The challenge is to create a system that integrates patient records, appointment scheduling, billing, and telemedicine while ensuring robust security, compliance with regulations, scalability, and cost efficiency.

In the fast-paced world of software development, maintaining architectural integrity is a
continuous challenge. Over time, well-intended architectural decisions can erode, leading to unexpected drift and misalignment with original design principles.

This hands-on workshop will equip participants with practical techniques to enforce architecture decisions using tests. By leveraging architecturally-relevant testing, attendees will learn how to proactively guard their system's design, ensuring consistency, scalability, and security as the codebase evolves. Through interactive exercises and real-world examples, we will explore how testing can serve as a powerful tool for preserving architectural integrity throughout a project's lifecycle.

Key Takeaways
Participants will learn to:
Write architecture-driven tests that validate and enforce design constraints.
Identify architectural drift early and prevent unintended changes.
Maintain consistent, scalable, and secure architectures over time.
Collaborate effectively within teams to sustain architectural excellence.
Prerequisites
Basic Understanding of Software Architecture: Familiarity with architectural patterns and
principles
Experience with Automated Testing: Understanding of unit, integration, or system testing
concepts
Collaboration and Communication Skills: Willingness to engage in discussions and
teamwork
Experience working with Java
Optional
Familiarity with Static Analysis and Code Quality Tools: Knowledge of tools like ArchUnit,
SonarQube, or custom linters is beneficial but not required
Experience with Large-Scale Systems: Prior work on complex systems can enhance the

Key Takeaways:

Participants will learn to:
Write architecture-driven tests that validate and enforce design constraints.
Identify architectural drift early and prevent unintended changes.
Maintain consistent, scalable, and secure architectures over time.
Collaborate effectively within teams to sustain architectural excellence.

Prerequisites:

Basic Understanding of Software Architecture: Familiarity with architectural patterns and principles
Experience with Automated Testing: Understanding of unit, integration, or system testing concepts
Collaboration and Communication Skills: Willingness to engage in discussions and
teamwork
Experience working with Java

Optional
Familiarity with Static Analysis and Code Quality Tools: Knowledge of tools like ArchUnit,
SonarQube, or custom linters is beneficial but not required
Experience with Large-Scale Systems: Prior work on complex systems can enhance the

We live in a world of complexity. Complex societies, complex language and mathematics and science – all navigable. Indeed, if we are to thrive individually and collectively, navigating this complexity is essential. We software professionals – current, semi-current (former), and near-future – have ourselves given rise to layers upon layers of systems and software complexity. We didn’t (always) anticipate the extant environmental challenges, but the systems we built inevitably interacted with other complex systems. We often knowingly create this multiplier effect on complexity, out of necessity. But how do we navigate it? Certainly, our tools help us, but they are mere tools, powerful though they are. Are we really that unsure of our capabilities? We don’t have to be!

In this session, Darrell Rials explores examples of extraordinary complexity, both in the natural world and in societal systems. He lays out some fundamental principles and mental models, describing the ways that individuals and systems of species must adopt, adapt and evolve in order to thrive in complex environments. Darrell then presents a humble challenge to the participants: Can you be instruments for building the next generation of adaptable, evolvable humanity-serving automated systems?

As software professionals, we deal with complexity and uncertainty on a daily basis. In fact, we are often masters at understanding all the various forms of systems complexity, and often are proficient at coherently communicating designs and solutions.

Unfortunately, within and amongst organizations, we set ourselves up as “the expert” – as prima donnas, if you will. Oftentimes, we set up unnecessary psychological competitions amongst peers, rather than treating peers and the wider software community as just that: collaborative and self-improving communities. Surely, there are better ways of working as a community and a society that promote individual and community growth, learning, and exponential improvement.

In this workshop, Darrell Rials presents an argument for participating in open, safe-space, supportive collaborations: software and systems architecture guilds. Darrell briefly highlights examples of the guild (or collegium) concept in historical and current-day contexts, with their benefits and detriments, and explains why many of today’s attempts at sustaining organization-driven architecture teams are nothing more than vain attempts at empire-building or box-checking procedural show.

Darrell lays out some key grounding principles for the effective collaboration of architects in a broad-based software practitioners’ guild, and addresses a few immediate questions about the mechanics of undertaking and sustaining such endeavor as practicing architects. Almost as importantly, we address a key question: What’s in it for me?

As software professionals, we deal with complexity and uncertainty on a daily basis. In fact, we are often masters at understanding all the various forms of systems complexity, and often are proficient at coherently communicating designs and solutions.

Unfortunately, within and amongst organizations, we set ourselves up as “the expert” – as prima donnas, if you will. Oftentimes, we set up unnecessary psychological competitions amongst peers, rather than treating peers and the wider software community as just that: collaborative and self-improving communities. Surely, there are better ways of working as a community and a society that promote individual and community growth, learning, and exponential improvement.

In this workshop, Darrell Rials presents an argument for participating in open, safe-space, supportive collaborations: software and systems architecture guilds. Darrell briefly highlights examples of the guild (or collegium) concept in historical and current-day contexts, with their benefits and detriments, and explains why many of today’s attempts at sustaining organization-driven architecture teams are nothing more than vain attempts at empire-building or box-checking procedural show.

Darrell lays out some key grounding principles for the effective collaboration of architects in a broad-based software practitioners’ guild, and addresses a few immediate questions about the mechanics of undertaking and sustaining such endeavor as practicing architects. Almost as importantly, we address a key question: What’s in it for me?

As code generation becomes increasingly automated, our role as developers and architects is evolving. The challenge ahead isn’t how to get AI to write more code, it’s how to guide it toward coherent, maintainable, and purposeful systems.

In this session, Michael Carducci reframes software architecture for the era of intelligent agents. You’ll learn how architectural constraints, composition, and trade-offs provide the compass for orchestrating AI tools effectively. Using principles from the Tailor-Made Architecture Model, Carducci introduces practical mental models to help you think architecturally, communicate intent clearly to your agents, and prevent automation from accelerating entropy. This talk reveals how the enduring discipline of architecture becomes the key to harnessing AI—not by replacing human creativity, but by amplifying it.

When Eliyahu Goldratt wrote The Goal, he showed how local optimizations (like adding robots to a factory line) can actually decrease overall performance. Today, AI threatens to repeat that mistake in software. We’re accelerating coding without improving flow. In this talk, Michael Carducci explores what it means to architect for the goal: continuous delivery of value through systems designed for flow.

Drawing insights from Architecture for Flow, Domain-Driven Design, Team Topologies, and his own Tailor-Made Architecture Model, Carducci shows how to align business strategy, architecture, and teams around shared constraints and feedback loops. You’ll discover how to turn automation into advantage, orchestrate AI within the system of work, and build socio-technical architectures that evolve—not just accelerate.

Modernizing legacy systems is often seen as a daunting task, with many teams falling into the trap of rigid rewrites or expensive overhauls that disrupt the business. The Tailor-Made Architecture Model (TMAM) offers a new approach—one that is centered on incremental evolution through design-by-constraint. By using TMAM, architects can guide legacy systems through a flexible, structured modernization process that minimizes risk and aligns with both technical and organizational needs.

In this session, we’ll explore how TMAM facilitates smooth modernization by identifying and addressing architectural constraints without resorting to drastic rewrites. We’ll dive into real-world examples of how legacy systems were evolved incrementally and discuss how TMAM provides a framework for future-proofing your systems. Through its focus on trade-offs, communication, and holistic fit, TMAM ensures that your modernization efforts not only solve today’s problems but also prepare your system for the challenges of tomorrow.

This session is ideal for architects, developers, and technical leads who are tasked with modernizing legacy systems and are looking for a structured, flexible approach that avoids the pitfalls of rigid rewrites. Learn how to evolve your legacy system while keeping it adaptable, scalable, and resilient.

Everyone’s talking about AI models, but almost no one is talking about the data architecture that makes them intelligent. Today’s AI systems are brittle because they lack context, semantics, and shared understanding. In this session, Michael Carducci explores how linked data, RDF, ontologies, and knowledge graphs solve the very problems that leave the industry floundering: hallucination, inconsistency, and lack of interoperability.

Drawing from real-world examples, Carducci connects decades of overlooked research in semantic web technologies to the challenges of modern AI and agentic systems. You’ll see how meaning itself can be modeled, linked, and reasoned over; and why the future of AI depends not on bigger models, but on smarter data.

Everyone’s talking about AI models, but almost no one is talking about the data architecture that makes them intelligent. Today’s AI systems are brittle because they lack context, semantics, and shared understanding. In this session, Michael Carducci explores how linked data, RDF, ontologies, and knowledge graphs solve the very problems that leave the industry floundering: hallucination, inconsistency, and lack of interoperability.

Drawing from real-world examples, Carducci connects decades of overlooked research in semantic web technologies to the challenges of modern AI and agentic systems. You’ll see how meaning itself can be modeled, linked, and reasoned over; and why the future of AI depends not on bigger models, but on smarter data.

Architectural decisions are often influenced by blindspots, biases, and unchecked assumptions, which can lead to significant long-term challenges in system design. In this session, we’ll explore how these cognitive traps affect decision-making, leading to architectural blunders that could have been avoided with a more critical, holistic approach.

You’ll learn how common biases—such as confirmation bias and anchoring—can cloud judgment, and how to counteract them through problem-space thinking and reflective feedback loops. We’ll dive into real-world examples of architectural failures caused by biases or narrow thinking, and discuss strategies for expanding your perspective and applying critical thinking to system design.

Whether you’re an architect, developer, or technical lead, this session will provide you with tools to recognize and mitigate the impact of biases and blindspots, helping you make more informed, thoughtful architectural decisions that stand the test of time.

AI inference is no longer a simple model call—it is a multi-hop DAG of planners, retrievers, vector searches, large models, tools, and agent loops. With this complexity comes new failure modes: tail-latency blowups, silent retry storms, vector store cold partitions, GPU queue saturation, exponential cost curves, and unmeasured carbon impact.

In this talk, we unveil ROCS-Loop, a practical architecture designed to close the four critical loops of enterprise AI:
•Reliability (Predictable latency, controlled queues, resilient routing)
•Observability (Full DAG tracing, prompt spans, vector metrics, GPU queue depth)
•Cost-Awareness (Token budgets, model tiering, cost attribution, spot/preemptible strategies)
•Sustainability (SCI metrics, carbon-aware routing, efficient hardware, eliminating unnecessary work)

KEY TAKEAWAYS
•Understand the four forces behind AI outages (latency, visibility, cost, carbon).
•Learn the ROCS-Loop framework for enterprise-grade AI reliability.
•Apply 19 practical patterns to reduce P99, prevent retry storms, and control GPU spend.
•Gain a clear view of vector store + agent observability and GPU queue metrics.
•Learn how ROCS-Loop maps to GCP, Azure, Databricks, FinOps & SCI.
•Leave with a 30-day action plan to stabilize your AI workloads.

⸻

AGENDA
1.The Quiet Outage: Why AI inference fails
2.X-Ray of the inference pipeline (RAG, agents, vector, GPUs)
3.Introducing the ROCS-Loop framework
4.19 patterns for Reliability, Observability, FinOps & GreenOps
5.Cross-cloud mapping (GCP, Azure, Databricks)
6.Hands-on: Diagnose an outage with ROCS
7.Your 30-day ROCS stabilization plan
8.Closing: Becoming a ROCS AI Architect

A live, end-to-end walkthrough of an AWS Well-Architected Review for a GenAI app. You’ll learn how to apply the AWS Generative AI Lens across the six pillars, then add Bedrock Guardrails and Knowledge Bases (RAG) to raise reliability, safety, and accuracy. You’ll leave with a reusable checklist and a prioritized remediation plan.

Who it’s for & why

  • Cloud architects, AI/ML engineers, SRE/Platform teams, and security leads
  • Moving from PoCs → production with need for a repeatable WA review process

What you’ll learn

  • How to apply Generative AI Lens questions per pillar
  • Spot hot spots: data flow, prompts, vector DBs, GPU scaling
  • Map risks → fixes with Guardrails + RAG patterns

What you’ll take away

  • WA Review scorecard & template
  • Pillar-by-pillar remediation backlog
  • Sample Guardrail policies (safety, PII, toxic output)
  • RAG/Knowledge Base reference architecture

Graphs aren’t just academic—they power the backbone of real systems: workflows (Airflow DAGs), build pipelines (Bazel), data processing (Spark DAGs), and microservice dependencies (Jaeger).
This session demystifies classic graph algorithms—BFS, DFS, topological sort, shortest paths, and cycle detection—and shows how to connect them to real-world systems.
You’ll also see how AI tools like ChatGPT and graph libraries (Graphviz, NetworkX, D3) can accelerate your workflow: generating adjacency lists, visualizing dependencies, and producing test cases in seconds.
You’ll leave with reusable patterns for interviews, architecture reviews, and production systems.

You’ll leave with reusable patterns for interviews, architecture reviews, and production systems.

Why Now

  • Graphs underpin the modern stack: Airflow DAGs, Spark, Kubernetes, Jaeger, and CI/CD pipelines.
  • AI tools can rapidly generate, visualize, and validate graph structures—bridging algorithmic theory and practical engineering.
  • Graph literacy now distinguishes great developers from average system designers.

Problems Solved

  • Translating algorithmic concepts into production patterns
  • Lack of intuition about DAGs, dependency graphs, or routing systems
  • Difficulty visualizing large graph structures quickly
  • Limited practice applying BFS/DFS/topo sort outside interview prep

Learning Outcomes

  • Apply BFS/DFS, topological sort, and shortest path algorithms in production systems
  • Translate graph theory into schedulers, dependency analyzers, and routing services
  • Use AI (ChatGPT/Copilot) for quick generation of adjacency lists, test data, and graph visualization
  • Map graphs to system design conversations: latency, scaling, dependencies
  • Build your own reusable Graph Thinking toolkit for architecture and interviews

Agenda
Opening: From Whiteboard to Production
Why every large-scale system is a graph in disguise.
How workflows, microservices, and dependency managers rely on graph structures.
Pattern 1: Graphs in the Real World
Examples:

  • Workflows: Airflow, Dagster
  • Builds: Bazel
  • Data pipelines: Spark
  • Services: Jaeger tracing DAGs
Show how each maps to graph nodes, edges, and cycles.

Pattern 2: Core Algorithms Refresher

  • BFS/DFS: Reachability, search, and crawl use cases
  • Dijkstra / A*: Routing, latency, and cost optimization
  • Topological Sort: Scheduling builds, DAG execution order
  • Cycle Detection: Fail-fast prevention in workflows and dependency graphs

Pattern 3: AI-Assisted Graph Engineering
How to use AI tools to accelerate graph work:

  • Generate adjacency lists from plain-text prompts
  • Auto-create test cases for reachability and cycle detection
  • Use Graphviz / NetworkX / D3 to visualize graphs instantly
  • Validate algorithm correctness interactively

Pattern 4: Graph Patterns in Architecture
Mapping algorithms to system design:

  • BFS → discovery & dependency mapping
  • DFS → deep audits & lineage analysis
  • Dijkstra → route optimization & latency modeling
  • Topo sort → job orchestration & CI/CD scheduling
How architects can embed graph thinking into design reviews.

Pattern 5: AI Demo
Prompt → adjacency list → Graphviz/NetworkX render → algorithmic validation.
Demonstrate quick prototyping workflow with AI assistance.

Wrap-Up: From Algorithms to Architectural Intuition
How graph literacy improves system reliability and scalability.
Checklist and reusable templates for ongoing graph-based reasoning.

Key Framework References

  • NetworkX / Graphviz / D3.js: Visualization and validation libraries
  • Apache Airflow / Spark / Bazel / Jaeger: Real-world DAG examples
  • AI Tools (ChatGPT, Copilot): Adjacency generation, testing, and explanation
  • Big-O Foundations: BFS/DFS/Dijkstra complexity reminders for performance analysis

Takeaways

  • Graph Thinking Checklist: Nodes, edges, cycles, and DAG validation
  • AI Prompt Pack: Templates for adjacency generation, test creation, and visualization
  • Algorithm Snippet Starter Kit: BFS, DFS, Dijkstra, Topo Sort in Python/JS
  • Architecture Mapping Guide: Graph patterns → system use cases
  • Mindset: Move from memorizing algorithms → to engineering with them

Enterprise Architecture (EA) has long been misunderstood as a bottleneck to innovation, often labeled the “department of no.” But in today’s fast-paced world of Agile, DevOps, Cloud, and AI, does EA still have a role to play—or is it a relic of the past?

This session reimagines the role of EA in the modern enterprise, showcasing how it can evolve into a catalyst for agility and innovation. We’ll explore the core functions of EA, its alignment with business and IT strategies, and how modern tools, techniques, and governance can transform it into a driver of value. Attendees will leave with actionable insights on building a future-ready EA practice that thrives in an ever-changing technological landscape.

In this session we will discuss the need to document architecture, and see what mechanisms are available to us to document architecture—both present and future.

We've all learned that documenting your code is a good idea. But what about your architecture? What should we be thinking about when we document architecture? What tools and techniques can we reach for as we pursue this endeavor? Can we even make this a sustainable activity, or are we forever doomed to architectural documentation getting outdated before the ink is even dry?

In this session we will discuss a range of techniques that will not only help document your architecture, but even provide a mechanism to think about architecture upfront, and make it more predictable. You'll walk away armed with everything you need to know about documenting your current, and future architectures.

It's not just architecture—it's evolutionary architecture. But to evolve your architecture, you need to measure it. And how does that work exactly? How does one measure something as abstract as architecture?

In this session we'll discuss various strategies for measuring your architecture. We'll see how you know if your software architecture is working for you, and how to know which metrics to keep an eye on. We'll also see the benefits of measuring your architecture.

We'll cover a range of topics in this session, including

Different kinds of metrics to measure your architecture
The benefits of measurements
Improving visibility into architecture metrics

An architecture pattern is a reusable solution to a commonly occurring problem in software architecture within a specific context. Architecture patterns affect the “-ilities” of a system, such as scalability, performance, maintainability, and security as well as impact the structural design of the system.

This session explores various architecture patterns, their applicability and trade-offs. But that's not all—this session will also provide insight into the numerous intersections of these patterns with all the other tendrils of the organization, including implementation, infrastructure, engineering practices, team topologies, data topologies, systems integration, the enterprise, the business environment, and generative AI. And we will see how to govern each pattern using fitness functions to ensure alignment.

Architecture is often defined as “hard to change”. Within software architecture, an architecture pattern is a reusable solution to a commonly occurring problem in software architecture within a specific context. Architecture anti-patterns are their diabolical counterparts—wherein they sound good in theory, but in practice lead to negative consequences. And given that they affect both the architectural characteristics and the structural design of the system, are incredibly expensive and have far-reaching consequences.

This session explores various architecture patterns, how one can easily fall into anti-patterns, and how one can avoid the antipatterns. We will do qualitative analysis of various architecture patterns and anti-patterns, and introduce fitness functions govern against anti-patterns.

Here I’ll break down how GitOps simplifies the operational challenges around cloud and Kubernetes environments. We’ll look at how a Git-driven model brings consistency, automation, and better visibility across both infrastructure and application delivery.

The goal is to share a clear and practical approach to reducing operational overhead and creating a more reliable DevOps workflow.

As cloud architectures evolve, AI is quickly becoming a foundational component rather than an add-on.

This session explores the architectural principles behind building scalable hybrid clouds and shows how AI can elevate them—from predictive scaling to intelligent workload optimization. We’ll look at patterns already emerging in the industry and map out a clear approach for designing resilient, AI-augmented systems that are ready for the next wave of innovation.

API security goes beyond protecting endpoints—it requires defense across infrastructure, data, and business logic. In this talk, I’ll present a structured approach to implementing Zero Trust security for APIs in a cloud-native architecture.

We’ll cover how to establish a strong foundation across layers—using mTLS, OAuth2/JWT, policy-as-code (OPA), GitOps for deployment integrity, and cloud-native secrets management. The session addresses real-world threats like misconfigurations, privilege escalation, and API abuse, and shows how to mitigate them with layered controls in Kubernetes-based environments on Azure and AWS.

Attendees will walk away with actionable practices to secure their API ecosystem end-to-end— without slowing development teams down.

In this session, I’ll walk through how we helped a large enterprise address fragmented API
practices and inconsistent governance by building a platform-first approach to API delivery.

Teams were deploying microservices independently on Azure Kubernetes Service (AKS) with no consistent routing, security, or visibility. We implemented a GitOps-powered framework that allowed teams to integrate a lightweight, Kubernetes-native gateway with their services, while inheriting organizational standards like rate limiting, JWT auth, routing consistency, and metadata tagging—without any manual ticketing or bottlenecks.
The platform approach enabled decentralized development with centralized governance,
accelerating service onboarding while ensuring every API remains secure, discoverable, and compliant by design.

2025 shattered the old cadence of software architecture. AI agents now co‑author code and refactors, compliance expectations tightened, and cost/latency signals moved inside everyday design loops. Static diagrams, quarterly review boards, and slide-driven governance can’t keep up.

This curated set of 3 sessions will help equip senior technologists to evolve from document stewardship to adaptive integrity management—blending human judgment, executable principles, and guided agent assistance.Architecture is shifting from static designs to adaptive, agent-driven execution.

Come to the Agentic Architect session if you want to see:

  • how the role of architecture is evolving in the agentic era

  • practical tips and trick for how to embrace the new agentic toolset

  • how to lean into architecture as code

  • cut decision time from weeks and days to hours

  • stop redrawing diagrams forever

“The Agentic Architect isn't about AI writing your code – it's about transforming how you make, communicate, and enforce architecture in an AI-accelerated world.”

2025 shattered the old cadence of software architecture. AI agents now co‑author code and refactors, compliance expectations tightened, and cost/latency signals moved inside everyday design loops. Static diagrams, quarterly review boards, and slide-driven governance can’t keep up.

This live demo session takes the patterns from “The Agentic Architect” and runs them end-to-end starting with a blank slate.

  • Watch ideas turn into working architecture

  • See diagrams-as-code that update themselves based on a more holistic context

  • Learn how to use AI agents on a daily basis to transform your work

2025 shattered the old cadence of software architecture. AI agents now co‑author code and refactors, compliance expectations tightened, and cost/latency signals moved inside everyday design loops. Static diagrams, quarterly review boards, and slide-driven governance can’t keep up.

2025 delivered unprecedented architectural disruption.

This interactive session will explore key events throughout 2025/2026 that have impacted the architect's role in the context of AI ubiquity, platform acceleration, and cost pressures.

This session will focus on the essential technical skills that are needed by software architects on a daily basis from ideation to product delivery. For many architects, maintaining your technical skills can be a challenge.

Come to this session if you want to learn some tricks and tips for how to raise your technical game as an architect.

Authentication and authorization are foundational concerns in modern systems, yet they’re often treated as afterthoughts or re-implemented inconsistently across services.

In this talk, we’ll explore Keycloak, an open-source identity and access management system, and how it fits into modern application architectures. We’ll break down what Keycloak actually does (and what it doesn’t), explain the role of JWTs and OAuth2/OpenID Connect, and examine how identity, trust, and access control are handled across distributed systems.

We’ll also compare Keycloak to secret management systems like Vault, clarify common misconceptions, and walk through integrations you will need with Spring, Quarkus, and other frameworks

By the end, you’ll understand when Keycloak is the right tool, how to integrate it cleanly, and how to avoid the most common architectural mistakes.

In this session, we will define what Keycloak is, its value, and how it integrates with your existing architecture. Here is the layout of the talk:

  • “Who are you?” vs “What are you allowed to do?”
  • Authentication vs Authorization vs Identity
  • Avoiding Rolling your Own Auth(n|z)
  • What is Keycloak
  • What isn't Keycloak
  • Core Concepts
  • Review of OAuth2, OpenID, JWT, and Tokens
  • Identity Federation
  • Difference between Keycloak and Vault
  • Where do we put it in architecture
  • Integration with Spring, Quarkus, and other Frameworks
  • Integration with other Architecture and Components
  • What to do on Monday Morning

Data Mesh rethinks data architecture in organizations by treating data as a product, owned and operated by bounded context teams rather than centralized platforms. This way, data owners can describe, enrich, and prove data sources to prevent any malicious poisoning.

  • A Quick Introduction to DDD
  • What is Data Mesh?
  • Benefits of Data Mesh
  • Data Ownership
  • Data Qualifications and Medallions
  • Open Metadata
  • How to use Open Metadata
  • Role of AI in Data Mesh

This workshop will explore the principles of the Ports and Adapters pattern (also called the Hexagonal Architecture) and demonstrate how to refactor legacy code or design new systems using this approach. You’ll learn how to organize your domain logic and move UI and infrastructure code into appropriate places within the architecture. The session will also cover practical refactoring techniques using IntelliJ and how to apply Domain Driven Design (DDD) principles to ensure your system is scalable, maintainable, and well-structured.

What You’ll Learn:

  1. What is Hexagonal Architecture?
    Understand the fundamental principles of Hexagonal Architecture, which helps isolate the core business logic (the domain) from external systems like databases, message queues, or user interfaces. This architecture is designed to easily modify the external components without affecting the domain.

  2. What are Ports and Adapters?
    Learn the key concepts of Ports and Adapters, the core elements of Hexagonal Architecture. Ports define the interface through which the domain interacts with the outside world, while Adapters implement these interfaces and communicate with external systems.

  3. Moving Domain Code to Its Appropriate Location:
    Refactor your domain code to ensure it is correctly placed in the core domain layer. You will learn how to separate domain logic from external dependencies, ensuring that business rules are isolated and unaffected by user interface or infrastructure changes.

  4. Moving UI Code to Its Appropriate Location:
    Discover how to refactor UI code by decoupling it from the domain logic and placing it in the appropriate layers. You’ll learn how to use the Ports and Adapters pattern to allow the user interface to communicate with the domain without violating architectural boundaries.

  5. Using Refactoring Tools in IntelliJ:
    Learn how to use IntelliJ’s powerful refactoring tools to streamline code movement. Techniques such as Extract Method, Move Method, Extract Delegate, and Extract Interface will be applied to refactor your codebase.

  6. Applying DDD Software Principles:
    We’ll cover essential Domain-Driven Design principles, such as Value Objects, Entities, Aggregates, and Domain Events.

  7. Refactoring Techniques:
    Learn various refactoring strategies to improve code structure, Extract Method, Move Method, Extract Delegate, Extract Interface, and Sprout Method and Class

  8. Verifying Code with Arch Unit:
    Ensure consistency and package rules using Arch Unit, a tool for verifying the architecture of your codebase. You will learn how to write tests confirming your project adheres to the desired architectural guidelines, including separating layers and boundaries.

Who Should Attend:

This workshop is perfect for developers who want to improve their understanding of Ports and Adapters Architecture, apply effective refactoring techniques, and leverage DDD principles for designing scalable and maintainable systems.

Workshop Requirements

If you wish to do the interactive labs:

  1. Java 21+ Higher
  2. IntelliJ (a must)
  3. Maven

This workshop will explore the principles of the Ports and Adapters pattern (also called the Hexagonal Architecture) and demonstrate how to refactor legacy code or design new systems using this approach. You’ll learn how to organize your domain logic and move UI and infrastructure code into appropriate places within the architecture. The session will also cover practical refactoring techniques using IntelliJ and how to apply Domain Driven Design (DDD) principles to ensure your system is scalable, maintainable, and well-structured.

What You’ll Learn:

  1. What is Hexagonal Architecture?
    Understand the fundamental principles of Hexagonal Architecture, which helps isolate the core business logic (the domain) from external systems like databases, message queues, or user interfaces. This architecture is designed to easily modify the external components without affecting the domain.

  2. What are Ports and Adapters?
    Learn the key concepts of Ports and Adapters, the core elements of Hexagonal Architecture. Ports define the interface through which the domain interacts with the outside world, while Adapters implement these interfaces and communicate with external systems.

  3. Moving Domain Code to Its Appropriate Location:
    Refactor your domain code to ensure it is correctly placed in the core domain layer. You will learn how to separate domain logic from external dependencies, ensuring that business rules are isolated and unaffected by user interface or infrastructure changes.

  4. Moving UI Code to Its Appropriate Location:
    Discover how to refactor UI code by decoupling it from the domain logic and placing it in the appropriate layers. You’ll learn how to use the Ports and Adapters pattern to allow the user interface to communicate with the domain without violating architectural boundaries.

  5. Using Refactoring Tools in IntelliJ:
    Learn how to use IntelliJ’s powerful refactoring tools to streamline code movement. Techniques such as Extract Method, Move Method, Extract Delegate, and Extract Interface will be applied to refactor your codebase.

  6. Applying DDD Software Principles:
    We’ll cover essential Domain-Driven Design principles, such as Value Objects, Entities, Aggregates, and Domain Events.

  7. Refactoring Techniques:
    Learn various refactoring strategies to improve code structure, Extract Method, Move Method, Extract Delegate, Extract Interface, and Sprout Method and Class

  8. Verifying Code with Arch Unit:
    Ensure consistency and package rules using Arch Unit, a tool for verifying the architecture of your codebase. You will learn how to write tests confirming your project adheres to the desired architectural guidelines, including separating layers and boundaries.

Who Should Attend:

This workshop is perfect for developers who want to improve their understanding of Ports and Adapters Architecture, apply effective refactoring techniques, and leverage DDD principles for designing scalable and maintainable systems.

Workshop Requirements

If you wish to do the interactive labs:

  1. Java 21+ Higher
  2. IntelliJ (a must)
  3. Maven

Join us for an indepth exploration of cuttingedge messaging styles in your large domain.

Here, we will discuss the messaging styles you can use in your business.

  • Event Sourcing
  • EventDriven Architecture
  • Claim Check
  • Event Notification
  • Event Carried State Transfer
  • Domain Events

We take a look at another facet of architectural design, and that is how we develop and maintain transactions in architecture. Here we will discuss some common patterns for transactions

  • TwoPhase Commit
  • The Problem with 2PC
  • Using EventDrivenArchitecture to manage transactions
  • Transactional Outbox
  • Compensating Transaction
  • Optimistic vs Pessimistic Locking
  • TCC (TryConfirm/Cancel)
  • Saga Orchestrator
  • Saga Choreography

This session will focus on data governance and making data available within your enterprise. Who owns the data, how do we obtain the data, and what does governance look like?

  • CQRS
  • Materialized Views
  • Warehousing vs Data Mesh
  • OLAP vs OLTP
  • Pinot, Kafka, and Spark
  • Business Intelligence
  • Making Data Available for ML/AI

Embarking on the journey to become an architect requires more than technical expertise; it demands a diverse skill set that combines creativity, leadership, communication, and adaptability. You may be awesome as a developer or engineer, but the skills needed to be an architect are often different and require more than technical awareness to succeed.

This presentation delves into the crucial skills aspiring architects need to cultivate. From mastering design principles and embracing cutting-edge technologies to honing collaboration and project management abilities, attendees will gain valuable insights into the multifaceted world of architectural skills. Join us as we explore practical strategies, real-world examples, and actionable tips that pave the way for aspiring architects to thrive in a dynamic and competitive industry.

Awareness is the knowledge or perception of a situation or fact, which based on myriad of factors is an elusive attribute. Likely the most significant unasked for skill… perhaps because it's challenging to “measure” or verify. It is challenging to be aware of aware, or is evidence of it's adherence. This session will cover different levels of architectural awareness. How to surface awareness and how you might respond to different technical situations once you are aware.

Within this session we look holistically an engineering, architecture and the software development process. Discussing:

* Awareness of when process needs to change (original purpose of Agile)
* Awareness of architectural complexity
* Awareness of a shift in architectural needs 
* Awareness of application portfolio and application categorization 
* Awareness of metrics surfacing system challenges
* Awareness of system scale (and what scale means for your application)
* Awareness when architectural rules are changing
* Awareness of motivation for feature requests
* Awareness of solving the right problem

The focus of the session will be mindful (defined as focusing on one's awareness), commentating in sharing strategies for heightening awareness as an architect and engineer.

AI enablement isn’t buying Copilot and calling it done; it’s a system upgrade for the entire SDLC. Code completion helps, but the real bottlenecks live in reviews, testing, releases, documentation, governance, and knowledge flow. Achieving meaningful impact requires an operating model: guardrails, workflows, metrics, and change management; not a single tool.

This session shares SPS Commerce’s field notes: stories, failures, and working theories from enabling AI across teams. You’ll get a sampler of adaptable patterns and anti-patterns spanning productivity, systems integration, guardrails, golden repositories, capturing tribal knowledge, API design, platform engineering, and internal developer portals. Come for practical menus you can pilot next week, and stay to compare strategies with peers.

In an era where digital transformation and AI adoption are accelerating across every industry, the need for consistent, scalable, and robust APIs has never been more critical. AI-powered tools—whether generating code, creating documentation, or integrating services—rely heavily on clean, well-structured API specifications to function effectively. As teams grow and the number of APIs multiplies, maintaining design consistency becomes a foundational requirement not just for human developers, but also for enabling reliable, intelligent automation. This session explores how linting and reusable models can help teams meet that challenge at scale.

We will explore API linting using the open-source Spectral project to enable teams to identify and rectify inconsistencies during design. In tandem, we will navigate the need for reusable models—recognizing that the best specification is the one you don’t have to write or lint at all! These two approaches not only facilitate the smooth integration of services but also foster collaboration across teams by providing a shared, consistent foundation.

Most enterprise LLM failures aren’t technical — they’re trust failures. Models hallucinate, drift from source truth, or produce outputs with no provenance. For regulated industries, that’s unacceptable.
This session introduces GraphRAG — a breakthrough approach combining knowledge graphs (Neo4j) with retrieval-augmented generation to deliver traceable, explainable, and auditable AI outputs.
You’ll learn how to design, evaluate, and deploy GraphRAG architectures aligned with the EU AI Act, NIST AI Risk Management Framework, and enterprise AI governance standards.

Problems Solved

  • LLM answers without evidence or traceability
  • Stale or inconsistent retrieval data
  • Non-compliance with transparency and provenance regulations
  • Lack of explainability for model outputs
  • Low confidence from regulators, auditors, and executives

Why Now

  • Enterprise AI adoption slowed by lack of trust and explainability
  • Regulations (EU AI Act, NIST AI RMF) now require provenance and model transparency
  • Executives demand evidence-based reasoning, not black-box answers

What GraphRAG Is

  • Combines knowledge graphs (Neo4j) with retrieval-augmented generation
  • Returns answers with structured evidence paths — connecting entities → relationships → source documents → LLM response
  • Goes beyond flat vector search to capture contextual meaning, hierarchy, and causality

Where It Applies

  • Insurance: Claims approvals and denials with transparent justification
  • Healthcare: Patient summaries with provenance and compliance
  • Finance: Audit trails, credit-risk reasoning, regulatory reporting
  • Policy & Legal: Regulatory interpretation and case law summaries

Why It’s Valuable

  • Establishes trust with executives, auditors, and regulators
  • Improves faithfulness, groundedness, and transparency of model outputs
  • Reduces disputes, compliance risks, and hallucination-related rework
  • Creates structured AI reasoning pipelines aligned with governance frameworks

Agenda
Opening & Problem Context
Why trust is the bottleneck for enterprise AI.
Examples of LLMs failing in regulated use cases — what breaks when outputs lack provenance.
Pattern 1: Anatomy of GraphRAG
Understanding how GraphRAG extends RAG with Neo4j graphs.
Schema design for entities, relationships, and evidence paths.
Structured retrieval from graph → vector → generator.

Pattern 2: Architecture & Data Flow
End-to-end GraphRAG blueprint:
Ingestion → Entity extraction → Graph population → Retrieval orchestration → Response grounding.
Contrast with plain RAG and vector-only approaches.

Pattern 3: Explainability & Evaluation
Metrics for evaluating explainability:
Faithfulness, groundedness, and coverage.
How to trace model answers back to graph nodes and documents.
Integration with AI observability platforms (PromptLayer, Arize, etc.).

Pattern 4: Compliance & Governance Alignment
Connecting GraphRAG design to regulatory frameworks:

  • EU AI Act: Transparency, traceability, human oversight
  • NIST AI RMF: Trustworthiness and accountability
  • ISO 42001: AI Management Systems
Implementing provenance tags and explainability layers as compliance enablers.

Pattern 5: Real-World Scenarios
Industry case patterns:

  • “Why was this insurance claim denied?”
  • “Which regulation does this contract violate?”
  • “Which patient data contributed to this summary?”
Each example maps relationships, evidence, and trace paths through Neo4j.

Wrap-Up & Discussion
Recap of GraphRAG architecture and design patterns.
Checklist for adoption: schema templates, metrics, and governance integration.
Q/A and enterprise discussion on explainable AI roadmaps.

Key Framework References

  • Microsoft GraphRAG: Open-source structured hierarchical retrieval pattern
  • Neo4j Graph Data Science & LLM Integration Guide
  • EU AI Act & NIST AI RMF: Provenance, explainability, and risk transparency
  • ISO/IEC 42001: AI governance and management principles
  • Gartner & Forrester: Trust and transparency as core adoption barriers

Takeaways

  • GraphRAG design blueprint (schema + ingestion + retriever)
  • Evaluation metrics: faithfulness, groundedness, coverage
  • Reference architecture diagrams for Neo4j + RAG + LLM stack
  • Playbook for integrating explainability with compliance frameworks

Coding interviews and production systems share the same challenge: transforming vague problems into correct, efficient, and explainable solutions.
This talk introduces a 7-step algorithmic thinking framework that begins with a brute-force baseline and evolves toward an optimized, production-grade solution—using AI assistants like ChatGPT and GitHub Copilot to accelerate ideation, edge-case discovery, and documentation, without sacrificing rigor.
Whether you’re solving array or graph problems, optimizing data pipelines, or refactoring legacy logic, this framework builds the discipline of clarity before optimization—and shows how to use AI responsibly as a thinking partner, not a shortcut.

Why This Talk Now (in the AI Era)

  • AI is already in your workflow: 51% of professional developers use AI tools daily; 84% plan to adopt. (Stack Overflow Developer Survey)
  • AI boosts productivity, but needs structure: Controlled studies show developers complete tasks ~56% faster with GitHub Copilot—but correctness still requires disciplined reasoning. (arXiv)
  • Engineering leaders demand ROI + rigor: 71% of organizations report regular GenAI use, but need trustworthy frameworks to reduce “hallucination debt.” (McKinsey)
  • Interviews still test DS&A: Problem-solving frameworks outperform memorization. (Google Tech Dev Guide)

Problems Solved

  • Unclear or incomplete problem statements
  • Over-reliance on AI code suggestions without validation
  • Jumping to optimization before correctness
  • Failing to reason about time/space complexity
  • Difficulty communicating trade-offs in reviews or interviews

The 7-Step Algorithmic Thinking Playbook

  1. Clarify – Define inputs, outputs, and constraints precisely.

  2. Baseline – Write the simplest brute-force solution for correctness.

  3. Measure – Analyze time and space complexity; identify bottlenecks.

  4. Map Patterns – Recognize the family (array, tree, graph, DP, greedy).

  5. Refactor – Apply the optimal pattern or data structure.

  6. Validate – Test edge cases and boundary conditions automatically.

  7. Explain – Communicate trade-offs, scalability, and readability.

Learning Outcomes

  • Apply a repeatable, 7-step problem-solving framework for any coding challenge.
  • Know when brute force is acceptable—and when optimization matters.
  • Confidently compare greedy vs. DP or iterative vs. recursive strategies.
  • Use AI tools responsibly for ideation, validation, and refactoring.
  • Communicate algorithmic reasoning clearly in code reviews and interviews.

Agenda
Opening: The AI-Accelerated Engineer
How AI is reshaping developer workflows—and why algorithmic clarity matters more than ever.
Examples of AI code that’s correct syntactically but wrong logically.

Pattern 1: Clarify and Baseline
Turning vague questions into crisp specifications.
Why starting with brute force improves correctness and confidence.

Pattern 2: Measure and Map Patterns
How to quickly estimate complexity and identify known solution families.
Mapping problems to arrays, graphs, or DP templates.

Pattern 3: Refactor with AI as a Partner
Using Copilot or ChatGPT to suggest refactors, not replace reasoning.
Prompt patterns for safe collaboration (“generate + verify + explain”).
Spotting hallucinated optimizations.

Pattern 4: Validate and Explain
Building automated test scaffolds and benchmark harnesses.
AI-assisted edge-case discovery.
How to articulate trade-offs in interviews or design docs.

Pattern 5: Framework in Action
Live problem walkthrough:
From brute-force substring search → optimized sliding window solution → complexity and trade-off explanation.
Demonstrate where AI adds value and where human logic rules.

Pattern 6: Guardrails for AI-Assisted Coding
Version control hygiene, reproducibility, test coverage.
Ensuring deterministic, reviewable AI suggestions.
Avoiding “hallucination debt” in production codebases.

Wrap-Up: From Algorithms to Systems Thinking
How this framework extends from whiteboard problems to microservices, pipelines, and data workflows.
Checklist for using AI as a disciplined amplifier of human reasoning.

Key Framework References

  • Stack Overflow Developer Survey (2024) – AI adoption statistics
  • GitHub Copilot Research – Productivity vs correctness studies
  • McKinsey State of AI Report – ROI benchmarks in engineering teams
  • Google Tech Dev Guide – Problem-solving and DS&A frameworks
  • IEEE/ACM Ethical AI Practices – Human-in-the-loop coding

Takeaways

  • 7-Step Algorithmic Thinking Framework — printable reference card
  • AI Guardrails Checklist for safe Copilot/ChatGPT use in code and reviews
  • Prompt Templates for structured ideation, verification, and documentation
  • Live Case Study Walkthrough for clarity, optimization, and explanation
  • A mindset shift: from memorizing algorithms → to designing reasoning systems

Dynamic Programming (DP) intimidates even seasoned engineers. With the right lens, it’s just optimal substructure + overlapping subproblems turned into code. In this talk, we start from a brute-force recursive baseline, surface the recurrence, convert it to memoization and tabulation, and connect it to real systems (resource allocation, routing, caching). Along the way you’ll see how to use AI tools (ChatGPT, Copilot) to propose recurrences, generate edge cases, and draft tests—while you retain ownership of correctness and complexity. Expect pragmatic patterns you can reuse in interviews and production.

Why Now

  • DP = #1 fear topic in interviews.
  • Used in systems: caching, routing, scheduling.
  • 55% faster with Copilot, but needs guardrails.
  • AI adoption is surging — structure required.

Key Framework

  • Find optimal substructure.
  • Spot overlapping subproblems.
  • Start brute force → derive recurrence.
  • Memoization → tabulation.
  • Compare vs. greedy & divide-and-conquer.
  • Use AI for tests & recurrences, not correctness.

Core Content

  • Coin Change: brute force → DP; greedy fails in non-canonical coins.
  • 0/1 Knapsack: DP works, greedy fails; fractional knapsack = greedy.
  • LIS: O(n²) DP vs. O(n log n) patience method.
  • Graphs: shortest path as DP on DAGs.
  • AI Demos: recurrence suggestion, edge-case generation.

Learning Outcomes

  • Know when a problem is DP-worthy.
  • Build recurrence → memoization → tabulation.
  • Decide Greedy vs DP confidently.
  • Apply AI prompts safely (tests, refactors).
  • Map DP to real-world systems.

Autonomous LLM agents don’t just call APIs — they plan, retry, chain, and orchestrate across multiple services.
That fundamentally changes how we architect microservices, define boundaries, and operate distributed systems.
This session delivers a practical architecture playbook for Agentic AI integration — showing how to evolve from simple request/response designs to resilient, event-driven systems.
You’ll learn how to handle retry storms, contain failures with circuit breakers and bulkheads, implement sagas and outbox patterns for correctness, and version APIs safely for long-lived agents.
You’ll leave with reference patterns, guardrails, and operational KPIs to integrate agents confidently—without breaking production systems.

Problems Solved

  • Microservices collapse under agent retries or fan-out behavior
  • Lack of event logs or compensations breaks agent re-planning
  • Failures cascade due to missing bulkheads or circuit breakers
  • Non-deterministic APIs cause unpredictable agent actions
  • Ops teams can’t separate or monitor agent vs human traffic

Why Now

  • Agentic frameworks (Agentforce, LangGraph, CrewAI) are entering production.
  • Traditional microservices assume human or synchronous clients — not autonomous retriers.
  • Reliability, determinism, and observability must now be built into API contracts.
  • Agent traffic adds new stress patterns and compliance visibility requirements.

What Is Agentic AI in Microservices

  • Agents plan, retry, and chain service calls — requiring deterministic, idempotent APIs.
  • Services must be tool-callable (stable operationId, strict input/output schemas).
  • Systems must survive retry storms, fan-out, and long-lived sessions.

Agenda
Opening: The Shift to Agent-Driven Systems
How autonomous agents change microservice assumptions.
Why request/response architectures fail when faced with planning, chaining, and self-healing agents.

Pattern 1: Event-Driven Flows
Use events, queues, and replay-safe designs to decouple agents from synchronous APIs.
Patterns: pub/sub, event sourcing, and replay-idempotency.

Pattern 2: Saga and Outbox Patterns
Manage long workflows with compensations.
Ensure atomicity and reliability between DB and event bus.
Outbox → reliable publish; Saga → rollback on failure.

Pattern 3: Circuit Breakers and Bulkheads
Contain agent-triggered failure storms.
Apply timeout, retry, and fallback policies per domain.
Prevent blast-radius amplification across services.

Pattern 4: Service Boundary Design
Shape services around tasks and domains — not low-level entities.
Example: ReserveInventory, ScheduleAppointment, SubmitClaim.
Responses must return reason codes + next actions for agent clarity.
Avoid polymorphic or shape-shifting payloads.

Pattern 5: Integrating Agent Frameworks
Connect LLM frameworks (Agentforce, LangGraph) safely to services.
Use operationId as the agent tool name; enforce strict schemas.
Supervisor/planner checks between steps.
Asynchronous jobs: job IDs, progress endpoints, webhooks.

Pattern 6: Infrastructure and Operations

  • Observability: Tag agent runs (x-agent-run-id), trace retries, success/failure.
  • Versioning: Use SemVer, deprecation headers, and multi-version gateways.
  • Resilience: Autoscale on retry rate, degrade gracefully, and run failover drills.

Wrap-Up: KPIs and Guardrails for Production
Key metrics: retry rate, success ratio, agent throughput, event replay lag.
Lifecycle governance: monitoring, versioning, deprecation, and sunset plans.

Key Framework References

  • Salesforce Agentforce – agentic orchestration and guardrail templates
  • LangGraph / CrewAI – multi-agent planning and coordination patterns
  • Cloud Native Patterns: Saga, Outbox, Circuit Breaker, Bulkhead, Event-Driven Architecture
  • OpenTelemetry + Prometheus: Observability for agent vs human traffic
  • OWASP LLM Top-10: Guardrails for safe function calling and data handling

Takeaways

  • Blueprint for agent-friendly microservices architecture
  • Patterns for event-driven, saga, and outbox consistency
  • Guardrails: circuit breakers, bulkheads, least privilege APIs
  • Framework integration checklist (Agentforce, LangGraph, etc.)
  • Ops playbook for observability, versioning, and resilience
  • KPIs to measure readiness: retry rate, grounding accuracy, and agent success ratio

Enterprises are moving from single AI agents to networks of agents that trigger thousands of API calls, retries, and tool-chains per prompt. Without orchestration discipline and APIs built for AI-scale, systems buckle under bursty load, retry storms, cache-miss spikes, inconsistent decisions, and runaway costs.

This talk shows how to combine MCP (Model Context Protocol) with proven inter-agent orchestration patterns — Supervisor, Pub/Sub, Blackboard, Capability Router — and how to harden APIs for autonomous traffic using rate limits, dedupe, backpressure, async workflows, resilient caching, and autoscaling without bill shock.

You’ll also learn the AIRLOCK Framework for governing multi-agent behavior with access boundaries, identity checks, rate controls, least-privilege routing, observability, compliance filters, and kill-switches.

You will walk away with a practical blueprint for building multi-agent systems that are fast, safe, reliable, and cost-predictable.

KEY TAKEAWAYS
Pattern Literacy: When to use Orchestrator, Pub/Sub, Blackboard, Router

MCP Fluency: Standardize agent↔tool integration

API Scaling: Rate limits, dedupe, backpressure, async, caching

Resilience: Bulkheads, jitter, circuit breakers, autoscaling guardrails

Observability: Trace chain-ID/tool-ID across agents & tools

AIRLOCK Governance: Access boundaries, identity, rate controls, least-privilege routing, compliance, kill-switches

AGENDA

  • Why AI Changes Load Patterns
    Bursty workloads · fan-out · retry amplification · cost spikes

  • MCP 101
    Standardized agent→tool access · hot-swappable tools

  • Orchestration Patterns
    Supervisor · Pub/Sub · Blackboard · Capability Router

  • Architecting APIs for AI Traffic
    Multi-dimensional rate limits · dedupe · backpressure · SWR caching · async

  • Resilience & Autoscaling
    Circuit breakers · bulkheads · kill-switches · budget caps

  • Observability & Governance
    Chain-ID tracing · anomaly detection · AIRLOCK boundaries

AI models are evolving fast, but the systems around them aren’t. Every backend change still breaks your carefully tuned AI client, while on the web, every change to a server doesn’t require you to download a new browser. What if AI worked the same way?

In this talk, Michael Carducci explores the architecture of 3rd Generation Agentic AI, building on the ideas and technologies introduced in Data Architecture for AI. You’ll discover how JSON-LD, Hydra, and semantic integration enable truly evolvable, interoperable AI ecosystems at web scale. Through live demos and real-world examples, Carducci shows how these web-native standards create APIs that describe themselves, adapt to change, and empower agents to discover and interact safely without brittle coupling. The real frontier isn’t smarter models—it’s shared meaning—and that’s an architectural problem worth solving.

AI models are evolving fast, but the systems around them aren’t. Every backend change still breaks your carefully tuned AI client, while on the web, every change to a server doesn’t require you to download a new browser. What if AI worked the same way?

In this talk, Michael Carducci explores the architecture of 3rd Generation Agentic AI, building on the ideas and technologies introduced in Data Architecture for AI. You’ll discover how JSON-LD, Hydra, and semantic integration enable truly evolvable, interoperable AI ecosystems at web scale. Through live demos and real-world examples, Carducci shows how these web-native standards create APIs that describe themselves, adapt to change, and empower agents to discover and interact safely without brittle coupling. The real frontier isn’t smarter models—it’s shared meaning—and that’s an architectural problem worth solving.

Security problems empirically fall into two categories: bugs and flaws. Roughly half of the problems we encounter in the wild are bugs and about half are design flaws. A significant number of the bugs can be found through automated testing tools which frees you up to focus on the more pernicious design issues. 

 In addition to detecting the presence of common bugs as we have done with static analysis for years, however, we can also imagine automating the application of corrective refactoring. In this talk, I will discuss using OpenRewrite and the Moderne cli to fix common security issues and keep them from coming back.

 

In this talk we will focus on:

  • Introducing the OpenRewrite OSS framework and demonstrate how it can automate common code remediation tasks.
  • Using OpenRewrite and the Moderne cli to automatically identify and fix known security vulnerabilities including:
  • Common Java flaws
  • OWASP Top Ten
  • Common Spring Issues
  • Checking in credentials
  • Integrating security scans with OpenRewrite for continuous improvement.
  • Writing custom recipes for defining your own security policies
  • Free up your time to address larger concerns by addressing the pedestrian but time-consuming security bugs.

Security problems empirically fall into two categories: bugs and flaws. Roughly half of the problems we encounter in the wild are bugs and about half are design flaws. A significant number of the bugs can be found through automated testing tools which frees you up to focus on the more pernicious design issues. 

 In addition to detecting the presence of common bugs as we have done with static analysis for years, however, we can also imagine automating the application of corrective refactoring. In this talk, I will discuss using OpenRewrite and the Moderne cli to fix common security issues and keep them from coming back.

 

In this talk we will focus on:

  • Introducing the OpenRewrite OSS framework and demonstrate how it can automate common code remediation tasks.
  • Using OpenRewrite and the Moderne cli to automatically identify and fix known security vulnerabilities including:
  • Common Java flaws
  • OWASP Top Ten
  • Common Spring Issues
  • Checking in credentials
  • Integrating security scans with OpenRewrite for continuous improvement.
  • Writing custom recipes for defining your own security policies
  • Free up your time to address larger concerns by addressing the pedestrian but time-consuming security bugs.

Retrieval Augmented Generation (RAG) systems have emerged to provide guardrails to spirited non-determinism of unfettered Large Language Models. While useful, they are clearly not enough even in the more advanced configurations of query rewriting, domain/chunk-size alignment, and re-ranking activities.

At the edge of energetic AI wave is a new form of token generation involving concepts and actions that will take things even further.

We will cover:

  • A brief overview of RAG systems and the issues that remain
  • The Sonar Embedding model and how it forms the basis of Large Concept Models (LCMs)
  • The various Action-based embedding models for physically and digitally-embodied agentic systems
  • Use cases that emerge from these advanced, multi-lingual, multi-modal developments in the ever-changing world of generative AI

Retrieval Augmented Generation (RAG) systems have emerged to provide guardrails to spirited non-determinism of unfettered Large Language Models. While useful, they are clearly not enough even in the more advanced configurations of query rewriting, domain/chunk-size alignment, and re-ranking activities.

At the edge of energetic AI wave is a new form of token generation involving concepts and actions that will take things even further.

We will cover:

  • A brief overview of RAG systems and the issues that remain
  • The Sonar Embedding model and how it forms the basis of Large Concept Models (LCMs)
  • The various Action-based embedding models for physically and digitally-embodied agentic systems
  • Use cases that emerge from these advanced, multi-lingual, multi-modal developments in the ever-changing world of generative AI

The typical technologist has a fairly straightforward perspective about the use of resources in modern software systems. They understand the concept of stable identifiers and what some of the HTTP verbs are intended for based upon experiences with the Web.

There is a rich ecosystem of use cases that build upon these basic ideas, however, and in this talk I will demonstrate several of my favorite examples. Drawing upon my pattern-oriented book, I will highlight patterns that surface information, transform it, direct
traffic, and more. These patterns will be presented with intention, consequences, and the usual context we expect in pattern-oriented literature to help us communicate sophisticated design decisions.

Come develop a more sophisticated palette of resource-oriented patterns to help you solve a variety of issues in distributed information systems development.

There's a clear need for security in the software systems that we build. The problem for most organizations is that they don't want to spend any money on it. Even if they did, they often have no idea how much to spend. No particular initiative is likely to imbue your system with “security”, but a strong, deep defensive approach is likely to give you a fighting chance of getting it right.

Web Security as applied to APIs in particular are an important part of the plan. In this workshop, we'll show you how approaches to defining “enough” as well as concrete techniques to employ incrementally in your designs.

In this workshop, we will pick a hands on framework for implementation, but the ideas will generally be standards-based and transcend technology choice so you should have a strategy for mapping the ideas into your own systems.

We will cover a broad range of topics including:

  • The concepts behind Building Security in
  • Designing for Security
  • Authentication and Authorization Strategies
  • Identity Management
  • Protecting Data in transit
  • Protecting Data at rest
  • Frameworks for selecting security features
  • Attack and Threat Models for APIs

There's a clear need for security in the software systems that we build. The problem for most organizations is that they don't want to spend any money on it. Even if they did, they often have no idea how much to spend. No particular initiative is likely to imbue your system with “security”, but a strong, deep defensive approach is likely to give you a fighting chance of getting it right.

Web Security as applied to APIs in particular are an important part of the plan. In this workshop, we'll show you how approaches to defining “enough” as well as concrete techniques to employ incrementally in your designs.

In this workshop, we will pick a hands on framework for implementation, but the ideas will generally be standards-based and transcend technology choice so you should have a strategy for mapping the ideas into your own systems.

We will cover a broad range of topics including:

  • The concepts behind Building Security in
  • Designing for Security
  • Authentication and Authorization Strategies
  • Identity Management
  • Protecting Data in transit
  • Protecting Data at rest
  • Frameworks for selecting security features
  • Attack and Threat Models for APIs

Modern architecture consists of complex, interconnected systems that exhibit both positive and negative emergent behaviors. As a result, traditional reductionist approaches to analysis and design are insufficient for defining precise requirements, designing distributed systems, or identifying root causes in production. In today’s landscape, it is crucial for architects to understand and apply Systems Thinking.

This session offers an introduction to Systems Thinking, covering key definitions, techniques, models, and patterns. It also demonstrates how Systems Thinking is applied to architecture through practical examples. Prezi Presentation

No matter the techniques used to make enterprise solutions Highly Available (HA), failure is inevitable at some point. Resiliency refers to how quickly a system reacts to and recovers from such failures. This presentation discusses various architectural resiliency techniques and patterns that help increase Mean Time to Failure (MTTF), also known as Fault Tolerance, and decrease Mean Time to Recovery (MTTR).

Failure of Highly Available (HA) enterprise solutions is inevitable. However, in today's highly interconnected global economy, uptime is crucial. The impact of downtime is amplified when considering Service Level Agreement (SLA) penalties and lost revenue. Even more damaging is the harm to an organization's reputation as frustrated customers express their grievances on social media. Resiliency, often overlooked in favor of availability, is essential. Prezi Presentation

Software architecture involves inherent trade-offs. Some of these trade-offs are clear, such as performance versus security or availability versus consistency, while others are more subtle, like resiliency versus affordability. This presentation will discuss various architectural trade-offs and strategies for managing them.

The role of a technical lead or software architect is to design software that fulfills the stakeholders' vision. However, as the design progresses, conflicting requirements often arise, affecting the candidate architecture. Resolving these conflicts typically involves making architectural trade-offs (e.g. service granularity vs maintainability). Additionally, with time-to-market pressures and the need to do more with less, adopting comprehensive frameworks like TOGAF or lengthy processes like ATAM may not be feasible. Therefore, it is crucial to deeply understand these architectural trade-offs and employ lightweight resolution techniques. Prezi Presentation

Most architecture documentation lives in slide decks and wikis — formats that humans struggle to act on and LLMs can't reason over reliably. This talk introduces CoDL (Constraints Description Language) and CaDL (Capabilities Description Language) as lightweight, structured notations for expressing architecture in a form that both governance processes and AI tooling can consume. Drawing on the BTABoK's Architecture Description competency — which emphasises producing structured, stakeholder-relevant, and traceable representations of systems — the session shows how formalising architectural intent into machine-readable schemas unlocks new possibilities: automated compliance checks, LLM-driven design critiques, and governance workflows that run without manual chasing.

Attendees leave with a working mental model of what these languages look like, where they slot into everyday architecture work, and why getting the notation right is the prerequisite for everything else in the AI-assisted architecture stack.

Architecture too often floats free of the business model that funds it, producing technically coherent systems that fail to deliver the outcomes that actually matter. This talk builds a full, traceable chain from business model canvas — how the organisation creates and captures value — through product feature decisions, down to the fitness functions and quality attributes that determine whether the system can actually support those features at scale. Using BTABoK's Business Model and Product & Project concepts as the foundation, the session demonstrates how each layer constrains and informs the next: a subscription revenue model demands very different availability and onboarding characteristics than a transactional one, and those differences must propagate into explicit architectural decisions, not just intuition.

Attendees get a hands-on framework for doing this analysis on their own products, making the invisible architecture of their business model visible and actionable.

LLMs are already being used to generate code, but using them to generate and validate architecture is a fundamentally harder and more interesting problem. This talk introduces a practical approach to LLM-based design loops built on the BTABoK CLI and MCP (Model Context Protocol), where structured architecture artefacts — canvases, decisions, fitness functions — become the inputs and outputs of iterative AI-assisted design cycles. Rather than asking an LLM to freeform a system design, the loop grounds generation in BTABoK schemas, validates outputs against CoDL/CaDL constraints, and surfaces gaps for human review.

Drawing on BTABoK's Design concept — architecture as deliberate, constraint-aware shaping of solutions — the session is honest about where LLMs add genuine leverage (option generation, consistency checking, documentation) and where human judgement remains essential (trade-off resolution, stakeholder alignment, ethical constraints). Attendees leave with a concrete architecture for building their own design loop, not just a demo.

The Architecture Decision Record has become a staple of modern architecture practice, but most teams treat all decisions the same way — and pay for it later in rework, confusion, and decisions that quietly rot. This talk makes the case that architecturally significant decisions fall into at least three distinct categories — structural decisions that shape the system's fundamental form, cross-cutting decisions that enforce constraints across components, and local decisions that make sense only in narrow context — each requiring different levels of rigour, different audiences, and different lifecycle management. Grounded in the BTABoK's Decisions concept, which frames decision-making as a core architecture artefact rather than a byproduct of design, the session gives practitioners a practical classification model they can apply immediately.

You'll walk away knowing which decisions deserve a full ADR, which need something lighter, and which ones are silently doing the most damage when they go unrecorded.

REST APIs often fall into a cycle of constant refactoring and rewrites, leading to wasted time, technical debt, and endless rework. This is especially difficult when you don't control the API clients.

But what if this could be your last major API refactor? In this session, we’ll dive into strategies for designing and refactoring REST APIs with long-term sustainability in mind—ensuring that your next refactor sets you up for the future.

You’ll learn how to design APIs that can adapt to changing business requirements and scale effectively without requiring constant rewrites. We’ll explore principles like extensibility, versioning, and decoupling, all aimed at future-proofing your API while keeping backward compatibility intact. Along the way, we’ll examine real-world examples of incremental API refactoring, where breaking the cycle of endless rewrites is possible.

This session is perfect for API developers, architects, and tech leads who are ready to stop chasing their tails and want to invest in designing APIs that will stand the test of time—so they can focus on building great features instead of constantly rewriting code.

Large Language Models unlock new capabilities—and expose brand-new attack surfaces.
From prompt injection and data exfiltration to model denial-of-service and insecure plugin calls, adversaries are exploiting weaknesses traditional AppSec never anticipated.
The new OWASP LLM Top-10 provides a shared vocabulary for AI risks; this session turns that list into actionable engineering practice.
You’ll learn how to threat-model LLM endpoints, design guardrails that actually block malicious behavior, sandbox tools and plug-ins with least privilege, and align your mitigations to the NIST AI Risk Management Framework for audit-ready governance.

Problems Solved

  • Unprotected LLM endpoints vulnerable to prompt injection or jailbreaks
  • Lack of policy filters or content-moderation layers
  • Plugins/tools running with excessive privileges
  • Poor data isolation leading to tenant cross-leakage in RAG systems
  • Missing visibility into misuse, drift, and attack attempts

Why Now

  • OWASP LLM Top-10 (2024–2025) defines a standard taxonomy of AI risks—widely adopted across the industry.
  • AI platforms are moving to production faster than security controls can catch up.
  • Regulators and auditors require demonstrable alignment with frameworks like NIST AI RMF and ISO 42001.
  • Attackers have begun weaponizing generative models through indirect prompt injection and model-driven DoS.

What You’ll Learn

  • Threat-modeling methodology for LLM endpoints and agentic flows
  • Input/output guardrail design: policy filters, allow-lists, block-lists, and content classifiers
  • Sandboxing tools, plug-ins, and function calls with least privilege and egress control
  • Sensitive-data redaction and tenancy-aware retrieval for secure RAG pipelines
  • Red-team drills mapped to OWASP categories and mitigation validation
  • How to map each control to NIST AI RMF (governance, risk, assurance)

Agenda
Opening: The New AI Attack Surface
How LLMs change the threat model. Examples of real-world attacks: prompt injections, indirect injections, model DoS, and exfiltration via vector stores.

Pattern 1: Threat Modeling LLM Endpoints
Identify assets, trust boundaries, and high-risk flows.
Apply STRIDE-inspired analysis to prompts, context windows, retrieval layers, and plugin calls.

Pattern 2: Designing Input/Output Guardrails
Policy filtering, schema validation, and content moderation.
Runtime vs compile-time guardrails—what actually works in production.
Enforcing determinism and fail-safe defaults.

Pattern 3: Sandboxing and Least Privilege Plugins
Secure function calling: scoped IAM, network egress rules, per-plugin secrets, and API key vaulting.
Container isolation and ephemeral agent sandboxes.

Pattern 4: Data Protection and Tenancy in RAG
Redacting sensitive data before embedding.
Segregating tenant vectors and access policies.
Auditing data lineage and evidence paths.

Pattern 5: Red Team & Evaluation Frameworks
Running adversarial simulations aligned with OWASP LLM Top-10.
Common exploits and how to detect them.
Integrating automated red-team tests into CI/CD pipelines.

Pattern 6: Governance & Framework Mapping
Mapping mitigations to NIST AI RMF (categories RA, MA, ME).
Building dashboards and executive summaries for risk reporting.

Wrap-Up & Action Plan
Summarize practical controls that can be implemented within 30 days.
Introduce the Guardrail Policy Starter Kit + Red-Team Runbook templates.
Live checklist review for readiness maturity.

Key Framework References

  • OWASP LLM Top-10 (2024–2025) – Prompt Injection, Data Exfiltration, DoS, Insecure Plugins
  • NIST AI RMF (2023) – Governance + Risk + Assurance Categories
  • ISO/IEC 42001 – AI Management System Standard
  • MITRE ATLAS – Adversarial Tactics for AI Systems
  • OWASP SAMM + ASVS – Integrating AI security into AppSec programs

Takeaways

  • Clear understanding of OWASP LLM Top-10 risks in plain language
  • Guardrail Policy Starter Kit (template YAML + reference policies)
  • Sandboxing Playbook for tools and plugins (scoped IAM, network controls)
  • Red-Team Runbook for testing and validation
  • NIST AI RMF Mapping Guide for executive and audit reporting
  • A practical 30-day roadmap to move from reactive patching → resilient AI security

LLM agents don’t just fetch data—they decide and act. To support planning and chaining, microservices must expose not only endpoints but also semantic context: what entities mean, which states are valid, which actions come next, and why decisions were made. This talk shows how to evolve from data-only APIs to MCP-aware, semantically rich services using JSON-LD/Schema.org, Hydra-style affordances, domain events, and OpenAPI metadata. You’ll learn retrofit vs greenfield paths, see cross-industry demos, and leave with a migration checklist that makes your services truly agent-ready.

Agenda

  • Why MCPs matter: the “USB-C for AI apps.”
  • Gaps in data-only APIs: no semantics, state, or lineage.
  • Adding semantics: JSON-LD, Hydra, enriched OpenAPI, domain events.
  • How agents benefit: safer chaining, fewer errors, explainability.
  • Implementation paths: retrofit vs MCP-native.
  • Use cases & demos: retail discounts, manufacturing sensors, insurance risk, healthcare consent.

Takeaways

  • MCPs transform APIs into agent-collaborators, not just data pipes.
  • Semantic context = better planning, safer automation, fewer hallucinations.
  • Practical roadmap: enrich existing APIs or design new ones MCP-native.

Reliable systems are not accidents. They are designed with explicit operating limits. This session translates lessons from high-risk domains into practical engineering guardrails for microservices: latency budgets, timeout strategy, retry discipline, concurrency limits, and blast-radius controls.

In high-consequence systems, teams define and respect operating limits. Software teams should do the same.

This session introduces an operating-limits model for modern microservices and platform environments. We’ll map common failure patterns (retry storms, cascading timeouts, queue overload, dependency fan-out) to concrete design and operational constraints that prevent small issues from becoming full incidents.

You’ll learn practical techniques for timeout layering, bulkheads, error budgets, load shedding, progressive degradation, and observability signals that reveal approaching limits before customers feel impact.

We’ll also cover leadership practices: how to align teams around reliability contracts and how to enforce guardrails without turning architecture into bureaucracy.

Outcomes:

  • Define service-level operating envelopes
  • Reduce cascading failures in distributed systems
  • Improve incident prevention with better guardrails
  • Balance delivery speed with reliability discipline

Yes, we will talk about when your retries are lying to you. And no, adding one more queue is not always the answer.

In the age of digital transformation, Cloud Architects emerge as architects of the virtual realm, bridging innovation with infrastructure. This presentation offers a comprehensive exploration of the Cloud Architect's pivotal role.

Delving into cloud computing models, architecture design, and best practices, attendees will gain insights into harnessing the power of cloud technologies. From optimizing scalability and ensuring security to enhancing efficiency and reducing costs, this session unravels the strategic decisions and technical expertise that define a Cloud Architect's journey. Join us as we decode the nuances of cloud architecture, illustrating its transformative impact on businesses in the modern era.

The organization has grown and one line of business has become 2 and then 10. Each line of business is driving technology choices based on their own needs. Who and how do you manage alignment of technology across the entire Enterprise… Enter Enterprise Architecture! We need to stand up a new part of the organization.

This session will define the role of architects and architectures. We will walk through a framework of starting an Enterprise Architecture practice. Discussions will include:

  • Differences of EA teams from one organization to another
  • Different architectural roles
  • Challenges that face EA
  • How to start or refine an EA practice

AI agents are not just for developers. They are personal operating systems for your professional and personal life. In this session, Ken shares what it actually looks like to live and work with a personal AI agent — from morning briefs to travel ops to speaking pipeline automation — and provides a practical framework you can start deploying the same week.

Everyone talks about AI. Fewer people show what it looks like to actually live with one.

In this session, Ken shares his real-world deployment of a personal AI agent that runs across his work, speaking career, and personal life. This is not a demo of ChatGPT prompts. This is an operating model — built incrementally over time — that handles morning briefings, calendar privacy bridges, travel logistics, speaking pipeline automation, secure vault retrieval, relationship nudges, and nightly content creation while he sleeps.

The session covers a four-stage framework: Build (what your agent knows), Trust (the autonomy ramp), Delegate (what to hand off first), and Compound (where the real leverage comes from).

Attendees will see live or recorded demonstrations of real workflows, including:

  • Morning brief: email triage, calendar alerts, priority surfacing
  • Speaking pipeline: CFP tracking, abstract generation, deck outlining
  • Travel ops: auto-detecting conference trips, fare watching, booking checklists, TripIt and Expensify forwarding
  • Personal-to-work calendar bridge: privacy-safe OOO blocking without exposing personal details
  • Secure vault: on-demand retrieval of travel IDs and sensitive data with a sudo-style auth challenge
  • Nightly content forge: agent builds artifacts while you sleep

We also cover safety and trust design — how to define what your agent can do autonomously versus what requires your approval — and how to build a context-rich memory system that makes the agent genuinely useful over time.

Outcomes:

  • A mental model for deploying your own agent at any level of technical skill
  • A starter list of high-ROI automations to implement this week
  • A realistic view of trust, safety, and control for autonomous AI systems
  • Inspiration to stop using AI occasionally and start using it continuously

Note: This talk is best when delivered with live demonstrations. Ken runs this system daily and can demo real workflows in real time. No slides required for the demo sections — the agent speaks for itself.

  • FAQ
  • Code of Conduct
  • Speakers
  • About Us
  • Contact Us
  • Speak at NFJS Events
  • Site Map

NFJS Events

  • No Fluff Just Stuff Tour
  • UberConf
  • TechLeader Summit
  • Arch Conf
  • Code Remix Summit
  • API Conf
Big Sky Technology
5023 W. 120th Avenue
Suite #289
Broomfield, CO 80020
help@nofluffjuststuff.com
Phone: (720) 902-7711
NFJS_Logo_2
© 2026 No Fluff, Just Stuff TM All rights reserved.