Daniel Hinojosa

Independent Consultant

Daniel is a programmer, consultant, instructor, speaker, and recent author. With over 20 years of experience, he does work for private, educational, and government institutions. He is also currently a speaker on the No Fluff Just Stuff tour. Daniel loves JVM languages like Java, Groovy, and Scala, but also dabbles in non-JVM languages like Haskell, Ruby, Python, LISP, C, and C++. He is an avid Pomodoro Technique practitioner and makes every attempt to learn a new programming language every year. For downtime, he enjoys reading, swimming, Legos, football, and barbecuing.

Presentations

In today’s data-driven world, the ability to process and analyze data in real time is no longer a luxury—it’s a necessity. Apache Flink, a powerful stream processing framework, has emerged as a game-changer for handling high-throughput, low-latency data applications.

In this session, you’ll gain a clear understanding of what Apache Flink is, how it works, and why it’s become a cornerstone for modern data infrastructure. We’ll explore key features such as its robust stream and batch processing capabilities, event-time handling, stateful computations, and fault tolerance. You’ll also discover how Flink integrates seamlessly with popular systems like Kafka, Kubernetes, and major cloud platforms.

Whether you’re working with real-time analytics, event-driven applications, or machine learning pipelines, Apache Flink provides the scalability and flexibility needed to turn massive streams of data into actionable insights. Join us to see why Flink is critical to modern data ecosystems and learn how to start leveraging its power in your projects.
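
As a taste of what the framework looks like in practice, here is a minimal sketch of a Flink DataStream job in Java. It is illustrative only: a tiny in-memory source stands in for a real system such as Kafka, and the class and job names are made up for this example.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class WordLengthJob {
    public static void main(String[] args) throws Exception {
        // Obtains a local environment in the IDE, or a cluster environment when deployed
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A tiny bounded stream stands in for a real source such as a Kafka topic
        env.fromElements("flink", "processes", "streams")
           .map(word -> word + ":" + word.length())
           .returns(Types.STRING)   // type hint needed because Java lambdas erase generic types
           .print();

        env.execute("word-length-demo");
    }
}
```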

This session will focus on data governance and making data available within your enterprise. Who owns the data, how do we obtain the data, and what does governance look like?

  • CQRS
  • Materialized Views
  • Warehousing vs Data Mesh
  • OLAP vs OLTP
  • Pinot, Kafka, and Spark
  • Business Intelligence
  • Making Data Available for ML/AI

Join us for an in-depth exploration of cutting-edge messaging styles in your large domain.

Here, we will discuss the messaging styles you can use in your business.

  • Event Sourcing
  • Event-Driven Architecture
  • Claim Check
  • Event Notification
  • Event Carried State Transfer
  • Domain Events

This presentation will discuss the patterns required when things go wrong in architecture and how to stay resilient under pressure. These are battle-tested patterns that every architect should know.

  • Circuit Breaker
  • Throttling
  • Retries
  • Bulkhead
  • Failover
  • Fallback
  • Timeout
  • Leader Election
  • Raft Protocol
  • Competing Consumers
  • Health Checks
  • Replicator
  • Idempotency
  • Shadowing
  • Graceful Degradation
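
As a small illustration of one of the patterns listed above (Retries with backoff), here is a sketch in plain Java. It is deliberately library-agnostic; production systems would more likely reach for a resilience library, and the names here are made up for the example.

```java
import java.time.Duration;
import java.util.concurrent.Callable;

public final class Retry {
    // Retries a call a fixed number of times, doubling the pause between attempts (exponential backoff).
    public static <T> T withBackoff(Callable<T> call, int maxAttempts, Duration initialDelay) throws Exception {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be at least 1");
        }
        Duration delay = initialDelay;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay.toMillis());
                    delay = delay.multipliedBy(2);
                }
            }
        }
        throw last;
    }
}
```

A call such as Retry.withBackoff(() -> client.fetchOrders(), 3, Duration.ofMillis(200)) would make three attempts before giving up; client.fetchOrders() is a hypothetical remote call.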

In this session, we will discuss architectural concerns regarding security. How do microservices communicate with one another securely? What are some of the checklist items that you need?

  • Valet Key
  • mTLS and Sidecars
  • Public Key Infrastructure (PKI)
  • SASL
  • HashiCorp Vault Keys
  • SBOMs
  • JSON Web Tokens
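
To make the last item above concrete, here is a minimal sketch of peeking inside a JSON Web Token using nothing but the JDK. The token is a made-up example, and real code must verify the signature before trusting any claim.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPeek {
    public static void main(String[] args) {
        // A JWT is three Base64URL-encoded segments: header.payload.signature.
        // This token is invented for the example; never trust claims without verifying the signature.
        String token = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJkYW5pZWwifQ.signature";
        String[] parts = token.split("\\.");
        String header  = new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
        String payload = new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        System.out.println(header);   // {"alg":"HS256"}
        System.out.println(payload);  // {"sub":"daniel"}
    }
}
```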

We take a look at another facet of architectural design: how we develop and maintain transactions in an architecture. Here we will discuss some common patterns for transactions.

  • Two-Phase Commit
  • The Problem with 2PC
  • Using Event-Driven Architecture to manage transactions
  • Transactional Outbox
  • Compensating Transaction
  • Optimistic vs Pessimistic Locking
  • TCC (Try-Confirm/Cancel)
  • Saga Orchestrator
  • Saga Choreography
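
A minimal sketch of the Transactional Outbox listed above, using plain JDBC. The table names, columns, and surrounding service are illustrative only: the business row and the outbox event commit in the same local transaction, and a separate relay process later publishes the outbox rows to the message broker.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

public class OrderService {
    public void placeOrder(Connection conn, String orderId, String payloadJson) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement order = conn.prepareStatement(
                 "INSERT INTO orders(id, status) VALUES (?, 'PLACED')");
             PreparedStatement outbox = conn.prepareStatement(
                 "INSERT INTO outbox(aggregate_id, event_type, payload) VALUES (?, 'OrderPlaced', ?)")) {
            order.setString(1, orderId);
            order.executeUpdate();

            outbox.setString(1, orderId);
            outbox.setString(2, payloadJson);
            outbox.executeUpdate();

            conn.commit();      // both rows commit together, or neither does
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}
```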

Domain Driven Design has been one of the major cornerstones of large system design for many years. It has been in the zeitgeist as of late, especially when it comes to the terms bounded context and microservices. This full-day class introduces you to Domain Driven Design and why it is important. We will cover the design and its patterns, and discuss subdomains, context mapping, tools, and management.

Our workshop will not only introduce you to the terms; we will also cover planning and work through design challenges so we can discuss the trade-offs that come with each design. We will also cover some of the more modern patterns that grew out of DDD, like CQRS, Data Mesh, REST, and more.

Event-driven architecture (EDA) is a design principle in which the flow of a system’s operations is driven by the occurrence of events instead of direct communication between services or components. There are many reasons why EDA is a standard architecture for many moderate to large companies. It offers a history of events with the ability to rewind, along with the ability to perform real-time data processing in a scalable and fault-tolerant way. It provides real-time extract-transform-load (ETL) capabilities for near-instantaneous processing. EDA can serve as the communication channel for a microservice architecture or for any other architecture.
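
As a small, hedged illustration of the producer side of an event-driven system, here is a plain Java Kafka producer that announces an event and knows nothing about who consumes it. The broker address, topic name, key, and payload are assumptions made up for this sketch.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderPlacedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The producer only announces what happened; consumers subscribe independently.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"event\":\"OrderPlaced\"}"));
        }
    }
}
```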

In this workshop, we will discuss the prevalent principles regarding EDA, and you will gain hands-on experience performing and running standard techniques.

  • Key Concepts of Event-Driven Architecture
  • Event Sourcing
  • Event Streaming
  • Multi-tenant Event-Driven Systems
  • Producers, Consumers
  • Microservice Boundaries
  • Stream vs. Table
  • Event Notification
  • Event Carried State Transfer
  • Domain Events
  • Tying EDA to Domain Driven Design
  • Materialized Views
  • Outbox Pattern
  • CQRS (Command Query Responsibility Segregation)
  • Saga Pattern (Choreography and Orchestrator)
  • Avoiding Coupling
  • Monitoring Systems
  • Cloud-Based EDA

Domain Driven Design has been guiding large development projects since 2003, when the seminal book by Eric Evans came out. Domain Driven Design is split into two parts: Strategic and Tactical. One of the issues is that the Strategic part becomes so involved and intense that we lose focus on implementing any of it. This presentation reframes the material as pairs of topics. For example, when we create a bounded context, is that a microservice or part of the subdomain? When we create a domain event, what does that eventually become? How do other tactical patterns fit into what we decide in the strategic phase?

In this presentation, we will break it down into pairs of topics.

  • Subdomains to Architecture Boundaries
  • Bounded Contexts
  • Context Mapping
  • Value Objects, Entities, and Aggregates
  • Where do Microservices fit in?
  • How do subdomains and bounded contexts fit in?
  • Where do we put human beings in all this?
  • What is Event Storming?
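
As a small illustration of the tactical building blocks in the list above, here is what a Value Object and a Domain Event might look like as plain Java records. The names and fields are hypothetical, not taken from the session.

```java
import java.math.BigDecimal;
import java.time.Instant;
import java.util.Currency;

// A Value Object: defined entirely by its attributes, immutable, and self-validating.
record Money(BigDecimal amount, Currency currency) {
    Money {
        if (amount.signum() < 0) {
            throw new IllegalArgumentException("amount must not be negative");
        }
    }
}

// A Domain Event: an immutable, named fact about something that happened in the domain.
record OrderPlaced(String orderId, Money total, Instant occurredAt) { }
```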

Domain Driven Design has been guiding large development projects since 2003, when the seminal book by Eric Evans came out. Domain Driven Design is split into two parts: Strategic and Tactical. One of the issues is that the Strategic part becomes so involved and intense that we lose focus on implementing any of it. This workshop reframes the material as pairs of topics. For example, when we create a bounded context, is that a microservice or part of the subdomain? When we create a domain event, what does that eventually become? How do other tactical patterns fit into what we decide in the strategic phase?

In this workshop, we will break it down into pairs of topics.

  • Subdomains to Architecture Boundaries
  • Where do Microservices fit in?
  • Where does Data Mesh fit in?
  • How do subdomains and bounded contexts fit in?
  • What is shared kernel technology-wise? Is it a library or a service?
  • Where do we put human beings in all this?
  • What can we afford to do?
  • What kind of technology shopping list are we looking at?

In this workshop, we will perform the following activities:

  • Event Storming but with a Technology Focus
  • Context Mapping but with a Technology Focus
  • Decide how to implement some common patterns with a Technology Focus

HashiCorp Vault stores encrypted secrets securely. You can store anything that you want in Vault, including API keys, passwords, and certificates. Vault can also store dynamic secrets, where it negotiates with a cloud service on your behalf without direct interaction with your API keys. HashiCorp Vault is a well-thought-out “bank” of information that handles storage, encryption, leasing, and sealing.

  • Setting up a vault dev server
  • Adding key values
  • Secrets Engine
  • Dynamic Secrets
  • Establishing Policies
  • Sealing
  • Leases
  • REST API Access
  • Deploy to Kubernetes
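
To make the REST API item above concrete, here is a minimal sketch that reads a secret from a dev-mode Vault server using only the JDK's HTTP client. The secret path "myapp" and the environment variable holding the token are assumptions for this example; the URL shown is the KV version 2 read endpoint.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReadSecret {
    public static void main(String[] args) throws Exception {
        // Assumes a local dev-mode Vault server and a token exported as VAULT_TOKEN.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://127.0.0.1:8200/v1/secret/data/myapp"))
            .header("X-Vault-Token", System.getenv("VAULT_TOKEN"))
            .GET()
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());   // JSON containing the secret's key/value pairs
    }
}
```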

This workshop will explore the principles of the Ports and Adapters pattern (also called the Hexagonal Architecture) and demonstrate how to refactor legacy code or design new systems using this approach. You’ll learn how to organize your domain logic and move UI and infrastructure code into appropriate places within the architecture. The session will also cover practical refactoring techniques using IntelliJ and how to apply Domain Driven Design (DDD) principles to ensure your system is scalable, maintainable, and well-structured.

What You’ll Learn:

  1. What is Hexagonal Architecture?
    Understand the fundamental principles of Hexagonal Architecture, which helps isolate the core business logic (the domain) from external systems like databases, message queues, or user interfaces. This architecture is designed so that external components can be modified easily without affecting the domain.

  2. What are Ports and Adapters?
    Learn the key concepts of Ports and Adapters, the core elements of Hexagonal Architecture. Ports define the interfaces through which the domain interacts with the outside world, while Adapters implement those interfaces and communicate with external systems (a small sketch of a port and an adapter follows this list).

  3. Moving Domain Code to Its Appropriate Location:
    Refactor your domain code to ensure it is correctly placed in the core domain layer. You will learn how to separate domain logic from external dependencies, ensuring that business rules are isolated and unaffected by user interface or infrastructure changes.

  4. Moving UI Code to Its Appropriate Location:
    Discover how to refactor UI code by decoupling it from the domain logic and placing it in the appropriate layers. You’ll learn how to use the Ports and Adapters pattern to allow the user interface to communicate with the domain without violating architectural boundaries.

  5. Using Refactoring Tools in IntelliJ:
    Learn how to use IntelliJ’s powerful refactoring tools to streamline code movement. Techniques such as Extract Method, Move Method, Extract Delegate, and Extract Interface will be applied to refactor your codebase.

  6. Applying DDD Software Principles:
    We’ll cover essential Domain-Driven Design principles, such as Value Objects, Entities, Aggregates, and Domain Events.

  7. Refactoring Techniques:
    Learn various refactoring strategies to improve code structure, including Extract Method, Move Method, Extract Delegate, Extract Interface, and Sprout Method and Sprout Class.

  8. Verifying Code with ArchUnit:
    Ensure consistency and package rules using ArchUnit, a tool for verifying the architecture of your codebase. You will learn how to write tests confirming your project adheres to the desired architectural guidelines, including the separation of layers and boundaries.
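
Here is the small sketch referenced in item 2: a port owned by the domain, a trivial adapter, and a domain service that depends only on the port. The names are illustrative, not taken from the workshop's codebase.

```java
import java.util.HashMap;
import java.util.Map;

// Port: an interface owned by the domain, describing what the domain needs from the outside world.
interface OrderRepository {
    void save(Order order);
}

// The domain object stays free of infrastructure concerns.
record Order(String id, int quantity) { }

// Adapter: an infrastructure-side implementation of the port (here, a trivial in-memory one);
// a JDBC or REST adapter would implement the same interface.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();

    @Override
    public void save(Order order) {
        store.put(order.id(), order);
    }
}

// The application service depends only on the port, never on a concrete adapter.
class PlaceOrderService {
    private final OrderRepository repository;

    PlaceOrderService(OrderRepository repository) {
        this.repository = repository;
    }

    void placeOrder(String id, int quantity) {
        repository.save(new Order(id, quantity));
    }
}
```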

Who Should Attend:

This workshop is perfect for developers who want to improve their understanding of Ports and Adapters Architecture, apply effective refactoring techniques, and leverage DDD principles for designing scalable and maintainable systems.

The road has been difficult for anyone who has attempted any database testing over the last 20 years. One solution we tried was installing a test database. That has often been hard to maintain, particularly when the test database is shared. Another solution is to use DBUnit on a local system or, possibly, a shared system to set up the database as needed, test against it, and tear it down. That, too, was very hard to maintain. For many, it was just deploying the application and running end-to-end tests: if it works, it works, and we move on. With the advent of containerization, we now have something substantially better: Testcontainers! Testcontainers gives us the power to bring up the database we need to test against, in the exact version we need.

With Testcontainers, we can programmatically set up the required database, possibly insert some data, and run our tests against it. Behind the scenes, Testcontainers will pull and cache the database image so that subsequent runs are even faster. There is a multitude of databases to choose from, and Testcontainers is supported in multiple languages, including Java, Go, Ruby, and more.
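
A minimal sketch of the idea in Java, assuming the Testcontainers PostgreSQL module and the PostgreSQL JDBC driver are on the classpath; the image tag and the smoke check are choices made for this example.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import org.testcontainers.containers.PostgreSQLContainer;

public class PostgresSmokeTest {
    public static void main(String[] args) throws Exception {
        // Pin the exact database version production uses; the image is pulled once and cached.
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16")) {
            postgres.start();
            try (Connection conn = DriverManager.getConnection(
                    postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
                System.out.println(conn.getMetaData().getDatabaseProductVersion());
            }
        }   // the container is stopped and removed when the try block exits
    }
}
```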

Since 1994, the original Gang of Four book, "Design Patterns: Elements of Reusable Object-Oriented Software," has helped developers recognize common patterns in development. The book's examples were originally written in C++, but there have been books that translate the original design patterns into their authors' preferred languages. One feature of the Gang of Four design patterns that has particularly stuck with me is testability: with the exception of Singleton, all of the patterns are unit testable. Design patterns are also our common developer language. When a developer says, "Let's use the Decorator Pattern," we know what is meant.

What's new, though, is functional programming, so we will also discuss how these patterns change in our modern functional programming world: for example, currying in place of the Builder pattern, using an enum for a Singleton, and reconstructing the State pattern using sealed interfaces. We will cover so much more, and I think you will be really excited about this topic and about putting it into practice on your own codebase.
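
Two of those reworkings, sketched in modern Java (21 or later for the exhaustive switch); the domain names are invented for the example.

```java
// Singleton as an enum: the JVM guarantees exactly one thread-safe instance.
enum Configuration {
    INSTANCE;

    String get(String key) {
        return System.getProperty(key, "");
    }
}

// The State pattern with a sealed interface: the compiler knows every possible state,
// so the switch below is checked for exhaustiveness (no default branch needed).
sealed interface OrderState permits Draft, Submitted, Shipped { }
record Draft() implements OrderState { }
record Submitted() implements OrderState { }
record Shipped(String trackingNumber) implements OrderState { }

class OrderStates {
    static String describe(OrderState state) {
        return switch (state) {
            case Draft d -> "editable";
            case Submitted s -> "awaiting fulfillment";
            case Shipped shipped -> "on its way: " + shipped.trackingNumber();
        };
    }
}
```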

In this day-long workshop, we will walk through a catalog of all the common architectural design patterns. For each design pattern, we will run docker-compose files that demonstrate the strengths and weaknesses of that pattern. The result is a first-hand, full-on, highly engaging full-day workshop that gives you the knowledge you need to make critical architectural choices.

We will cover:

  • Anti-Corruption Layer
  • Claim Check
  • CQRS
  • Gatekeeper
  • Strangler Fig
  • Bulkhead
  • More …

There are multiple elements to Kubernetes, where each component seems like a character in a book: pods, services, deployments, secrets, jobs, config maps, and more. In this presentation, we focus on the security aspect of Kubernetes and the components involved, particularly RBAC and ServiceAccounts: what they are and what they do. We discuss etcd and Secrets. We will also discuss other options for security in Kubernetes.

  • Service Accounts
  • Secrets
  • Kubernetes API
  • Authentication
  • Authorization
  • RBAC
  • Roles and Cluster Roles

Machine Learning Data Pipelines

This workshop builds an entire event-driven data pipeline with machine learning and Kafka. Starting from Kafka, where we use producers or Kafka Connect to generate information, we then use Kafka Streams to apply a machine learning model and make business decisions.

This intensive lab will start by integrating sources into our backplane, then train our models, and finally operationalize a model using Kafka Streams. We will then create result topics that we can read in as a report and use to display visualizations of our data. The result will also be scalable and fault tolerant.
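
A minimal sketch of the Kafka Streams leg of such a pipeline in Java: read events from an input topic, apply a stand-in scoring function where the trained model would be invoked, and write to a result topic. The topic names, application id, and placeholder score function are all assumptions for this example.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class ScoringPipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "scoring-pipeline");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("transactions");

        // score(...) stands in for invoking the trained model; loading and calling a real
        // model is exactly what the workshop covers.
        events.mapValues(ScoringPipeline::score).to("scored-transactions");

        new KafkaStreams(builder.build(), props).start();
    }

    private static String score(String payload) {
        return payload.contains("refund") ? "review" : "approve";   // placeholder "model"
    }
}
```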

Canary Deployments are the last ingredient of any Continuous Delivery or Continuous Deployment rollout. A canary deployment is a deployment strategy that releases an application or service incrementally to a subset of users. All infrastructure in a target environment is updated in small phases (e.g., 2%, 25%, 75%, 100%). This control makes a canary release the least risky of the deployment strategies, including the blue-green strategy. If you need to back out of a production deployment quickly and without much disruption, then canary deployments may be an excellent practice to set up.

We will treat this talk like a recipe so that you can set up a canary in your work environment.

  • Continuous Integration, Continuous Delivery, and Continuous Deployment Overview
  • You can perform this anywhere: Cloud instances or Kubernetes
  • What is Canary? What are some of the strategies you can use?
  • What CD Frameworks perform Canary Deployments?
  • How do you gather metrics? Is the rollout going too slowly or too quickly?
  • How do you back out if this rollout is not performing as well as it should?
  • Can you perform canary deployments without a CD framework? Perhaps something native to Kubernetes?
  • Performing canary deployments using Spinnaker, Argo, and Istio
  • Conclusion

Books

Testing in Scala

by Daniel Hinojosa

  • If you build your Scala application through Test-Driven Development, you’ll quickly see the advantages of testing before you write production code. This hands-on book shows you how to create tests with ScalaTest and Specs2—two of the best testing frameworks available—and how to run your tests in the Simple Build Tool (SBT) designed specifically for Scala projects.

    By building a sample digital jukebox application, you’ll discover how to isolate your tests from large subsystems and networks with mocking code, and how to use the ScalaCheck library for automated specification-based testing. If you’re familiar with Scala, Ruby, or Python, this book is for you.

    • Get an overview of Test-Driven Development
    • Start a simple project with SBT and create tests before you write code
    • Dive into SBT’s basic commands, interactive mode, packaging, and history
    • Use ScalaTest both in the command line and with SBT, and learn how to incorporate JUnit and TestNG
    • Work with the Specs2 framework, including Specification styles, matcher DSLs, and Data Tables
    • Understand mocking by using Java frameworks EasyMock and Mockito, and the Scala-only framework ScalaMock
    • Automate testing by using ScalaCheck to generate fake data