Matt Stine

I Enable Early-Career Enterprise Software Engineers to Continuously Improve


My passion is taking a metaphysical approach to software engineering: what is the nature of the collaborative game that we continuously play, and are there better, more contextually-aware ways to play that game?

By day I lead a team tasked with taking a first-principles-centric approach to intentionally enabling programming language usage at the largest bank in the United States.

By night I write and teach my way through a masterclass in software engineering and architecture targeting early-career software engineers working in large-scale enterprise technology organizations.

What is the primary goal?

To win the game. More seriously: to get 1% better every day at providing business value through software.

Who am I?

I'm a 22-year veteran of the enterprise software industry. I've played almost every role I can imagine:

  • Software Engineer
  • Software Architect
  • Technical Lead
  • Engineering Manager
  • Consultant
  • Product Manager
  • Field CTO
  • Developer Advocate
  • Conference Speaker
  • Author
  • Technical Trainer
  • Technical Marketer
  • Site Reliability Engineer
  • Desktop Support Specialist

I've worked at Fortune 500 companies, a tenacious teal cloud startup, and a not-for-profit children's hospital. I've written a book, and I've hosted a podcast. I've learned a lot along the way, including many things I wish I'd known when I first got started. And so now I want to pass those learnings on to you, especially if you've only just begun your career.

Presentations

Cloud-Native Application Architecture

Monday, 9:00 AM EST

Cloud-native architectures combine the unique aspects of cloud platforms with the principles of DevOps and Continuous Delivery to enable the rapid development, deployment, and management of applications. As the speed of innovation becomes one of the key drivers of business success, these architectures ensure teams are able to meet the needs of the business and move quickly, while preserving important non-functional characteristics like availability and scalability.

Many of the innovators in this space, including Amazon, Twitter, LinkedIn, and Netflix, leverage small, autonomous teams that focus on business capabilities and build twelve-factor-style microservice applications. Microservices integration is achieved via lightweight, decentralized, and choreographed point-to-point interactions rather than the heavyweight, centralized, and orchestrated ESB-style integration found in traditional SOA.

With the advent of cloud-native architectures, building distributed systems will become increasingly common for the enterprise Java developer. Fortunately many of these same innovators have embraced the JVM as they’ve built increasingly complex systems, with Netflix open-sourcing much of its toolkit for constructing these systems as NetflixOSS.

Cloud Foundry and Spring provide open source framework tooling and platform services for developers to quickly build some of the common patterns found in distributed, cloud-native systems. Many of these patterns are provided by the Spring Cloud project, which wraps many of the battle-tested components found in NetflixOSS with the Spring programming model and provides easy deployment of NetflixOSS services to Cloud Foundry.

In this class the learner will have the opportunity to practice working with cloud-native architectures using Spring and Cloud Foundry.
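To give a flavor of that programming model, here's a minimal sketch, assuming the Spring Cloud Netflix starters are on the classpath and a hypothetical "recommendation-service" is registered with Eureka. The service looks up a collaborator via service discovery and guards the call with a Hystrix fallback:

```java
// Minimal Spring Cloud Netflix-style sketch. Assumes the Eureka and Hystrix
// starters are on the classpath; "recommendation-service" is a hypothetical collaborator.
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@EnableDiscoveryClient   // register with (and look up peers in) Eureka
@EnableCircuitBreaker    // enable Hystrix circuit breakers
public class StorefrontApplication {

    public static void main(String[] args) {
        SpringApplication.run(StorefrontApplication.class, args);
    }

    @Bean
    @LoadBalanced // resolve logical service names via client-side load balancing (Ribbon)
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

@RestController
class RecommendationsController {

    private final RestTemplate restTemplate;

    @Autowired
    RecommendationsController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @RequestMapping("/recommendations")
    @HystrixCommand(fallbackMethod = "fallbackRecommendations")
    public String recommendations() {
        // "recommendation-service" is resolved through service discovery, not DNS
        return restTemplate.getForObject("http://recommendation-service/top", String.class);
    }

    public String fallbackRecommendations() {
        return "[]"; // degrade gracefully when the downstream service is unavailable
    }
}
```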

Building 12 Factor JVM Applications

Tuesday, 10:30 AM EST

Modern applications are changing as we embrace the engineering practices associated with Continuous Delivery and DevOps, migrate our applications to modern cloud platforms, elastically scale applications with the dynamics of customer demand, and embrace microservices architectures. The Twelve-Factor App is a collection of application development patterns developed by Heroku engineers that aim to support these types of architectural and cultural change.

The 12 Factors are:

  1. One codebase tracked in revision control, many deploys
  2. Explicitly declare and isolate dependencies
  3. Store config in the environment
  4. Treat backing services as attached resources
  5. Strictly separate build and run stages
  6. Execute the app as one or more stateless processes
  7. Export services via port binding
  8. Scale out via the process model
  9. Maximize robustness with fast startup and graceful shutdown
  10. Keep development, staging, and production as similar as possible
  11. Treat logs as event streams
  12. Run admin/management tasks as one-off processes

We’ll examine how to implement these factors using JVM “microframeworks” like Spring Boot and Dropwizard.
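To make a few of these factors concrete, here's a minimal Spring Boot sketch; the greeting.message property and the PORT environment variable are illustrative assumptions, not anything prescribed by the methodology. It reads its config from the environment (factor 3), runs as a stateless process (factor 6), and exports HTTP via port binding (factor 7):

```java
// Minimal twelve-factor-style Spring Boot sketch. The property name
// "greeting.message" and the PORT environment variable are illustrative.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class GreetingApplication {

    // Factor 3: config comes from the environment. With Spring Boot's relaxed
    // binding, a GREETING_MESSAGE environment variable overrides the default.
    @Value("${greeting.message:Hello from a twelve-factor app}")
    private String message;

    // Factor 6: no state is held in the process, so any instance can serve any
    // request and instances can be scaled out via the process model (factor 8).
    @RequestMapping("/")
    public String greet() {
        return message;
    }

    public static void main(String[] args) {
        // Factor 7: the app is self-contained and exports HTTP via port binding.
        // Setting server.port=${PORT:8080} in application.properties lets the
        // platform inject the port through the environment.
        SpringApplication.run(GreetingApplication.class, args);
    }
}
```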

Concourse: CI that scales with your project

Tuesday, 3:15 PM EST

Concourse (http://concourse.ci/) is a CI system composed of simple tools and ideas. Concourse can express entire pipelines, integrating with arbitrary resources, or it can be used to execute one-off tasks, either locally or in another CI system. Concourse attempts to reduce the risk of adoption by encouraging practices that keep your project loosely coupled to the details of your continuous integration infrastructure.

Concourse optimizes around the following principles:

  • Simplicity
  • Usability
  • Build Isolation
  • Scalable, reproducible deployments
  • Flexibility
  • Local iteration

During this session we'll learn the simple key concepts from which Concourse pipelines are constructed. We'll understand how to deploy a local Concourse cluster using Vagrant as well as a scalable Concourse cluster to your cloud of choice using Cloud Foundry BOSH. Finally, we'll look at basic and advanced examples of pipelines for Java projects.
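As a preview, here's a minimal sketch of what such a pipeline might look like; the repository URL, image, and Gradle command are illustrative assumptions:

```yaml
# Minimal sketch of a Concourse pipeline for a Java project. The repository
# URL, image, and test command are illustrative assumptions.
resources:
- name: app-source
  type: git
  source:
    uri: https://github.com/example/app.git
    branch: master

jobs:
- name: unit-test
  plan:
  - get: app-source
    trigger: true          # run the job whenever new commits arrive
  - task: run-tests
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: openjdk, tag: "8-jdk"}
      inputs:
      - name: app-source
      run:
        path: sh
        args: ["-exc", "cd app-source && ./gradlew test"]
```

Because the whole pipeline lives in a single declarative file that you register with fly (e.g. `fly -t ci set-pipeline -p my-app -c pipeline.yml`), it can be versioned alongside the project, which is part of what keeps the project loosely coupled to the CI infrastructure.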

Advanced Data Architecture Patterns

Wednesday, 9:00 AM EST

As we move toward microservices, we learn to properly decompose not only our behavior model, but also our data model into bounded contexts. This data decomposition is not without consequences. By placing strict boundaries around ownership of domain concepts, we make it more difficult to refer to concepts that naturally want to cross these boundaries. How do we “denormalize” these entities effectively? How do we keep these representations in sync? What do transactions look like? How do we ask BIG questions that span multiple contexts? These are the questions that we’ll dive into in this session.

Topics to include:

  • CAP Theorem
  • Embracing Eventual Consistency
  • Command-Query Responsibility Segregation (CQRS)
  • Caching
  • Event Sourcing
  • Lambda Architecture
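To ground a couple of these topics, here's a minimal, framework-free sketch of event sourcing; the Account aggregate and its events are hypothetical. Instead of updating state in place, we append events to a stream and rebuild current state by replaying them, which is also the foundation for CQRS-style read models that span bounded contexts:

```java
// Minimal event-sourcing sketch with a hypothetical Account aggregate.
// Current state is derived by replaying an append-only event stream.
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

interface AccountEvent { }

class MoneyDeposited implements AccountEvent {
    final BigDecimal amount;
    MoneyDeposited(BigDecimal amount) { this.amount = amount; }
}

class MoneyWithdrawn implements AccountEvent {
    final BigDecimal amount;
    MoneyWithdrawn(BigDecimal amount) { this.amount = amount; }
}

class Account {
    private BigDecimal balance = BigDecimal.ZERO;
    private final List<AccountEvent> uncommittedEvents = new ArrayList<>();

    // Command side: validate, then record an event rather than mutating state directly.
    void deposit(BigDecimal amount) {
        record(new MoneyDeposited(amount));
    }

    void withdraw(BigDecimal amount) {
        if (balance.compareTo(amount) < 0) {
            throw new IllegalStateException("insufficient funds");
        }
        record(new MoneyWithdrawn(amount));
    }

    private void record(AccountEvent event) {
        apply(event);
        uncommittedEvents.add(event); // would be appended to a durable event store
    }

    // Replay: the same apply() used for new events rebuilds state from history.
    private void apply(AccountEvent event) {
        if (event instanceof MoneyDeposited) {
            balance = balance.add(((MoneyDeposited) event).amount);
        } else if (event instanceof MoneyWithdrawn) {
            balance = balance.subtract(((MoneyWithdrawn) event).amount);
        }
    }

    static Account replay(List<AccountEvent> history) {
        Account account = new Account();
        history.forEach(account::apply);
        return account;
    }

    BigDecimal balance() { return balance; }
}
```

A separate read model, the query side of CQRS, would subscribe to the same event stream and maintain whatever denormalized view its bounded context needs, accepting eventual consistency in exchange for autonomy.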

Microservices Testing Strategies

Wednesday, 11:00 AM EST

Microservice architectures place great emphasis on autonomous product teams that develop and deploy equally autonomous services using decentralized release management, testing, and deployment strategies. I don’t have to wait on you to deploy my service, and you don’t have to wait on me. And yet the complexity associated with managing these large, distributed systems seems like it would demand even greater discipline and centralized coordination of testing activities. Fortunately, while greater discipline is in fact required, we don’t require the centralized coordination that would seem to destroy many of the benefits of embracing microservices. In this session we’ll examine principles and practices that will help us develop an effective testing strategy for microservices.

Topics will include:

  • who moved my trust boundary?
  • expressing APIs as versioned contracts
  • what’s in a contract? the good, the bad, and the ugly
  • leveraging consumer-driven contracts
  • exploratory testing for risk discovery
  • recovery OVER prevention
  • dogfooding
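To give a tool-agnostic flavor of the consumer-driven contract idea, here's a hedged JUnit sketch; the field names and payload are hypothetical, and in practice you'd likely reach for a framework such as Pact or Spring Cloud Contract. The consumer publishes only the fields it actually depends on, and the provider verifies its real responses against that contract:

```java
// Tool-agnostic sketch of a consumer-driven contract check. The field names
// and provider payload are hypothetical; real projects would typically use a
// framework such as Pact or Spring Cloud Contract for this.
import static org.junit.Assert.assertTrue;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class CustomerContractTest {

    // The consumer declares only the fields it relies on -- this is the contract.
    private static final List<String> CONSUMER_REQUIRED_FIELDS =
            Arrays.asList("id", "email", "status");

    @Test
    public void providerPayloadSatisfiesConsumerContract() throws Exception {
        // In a real provider-side verification this JSON would come from the
        // provider's actual endpoint; it is inlined here to keep the sketch self-contained.
        String providerPayload =
                "{\"id\": 42, \"email\": \"jane@example.com\", \"status\": \"ACTIVE\", \"internalScore\": 7}";

        JsonNode payload = new ObjectMapper().readTree(providerPayload);

        for (String field : CONSUMER_REQUIRED_FIELDS) {
            assertTrue("provider payload is missing field required by consumer: " + field,
                    payload.has(field));
        }
        // Extra fields like "internalScore" are tolerated: consumers stay liberal
        // in what they accept, so the provider can evolve independently.
    }
}
```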