Why talk about resilience when thinking of scale? It turns out all the effort we put in to achieve great performance may be lost if we're not careful with failures. Failure is not only about parts of an application becoming unavailable to some users; it can result in overall poor performance for everyone else as well.
In this presentation we will discuss ways to attain scale and how to preserve those efforts by dealing with failures properly.
Once considered lightweight, threads in reality take up significant memory and thus become a limitation for true scale. The JVM is heading towards fibers, which are lightweight compared to threads and have the potential to be truly non-blocking. Combined with continuations, data structures that can preserve state between calls, they let us create highly effective asynchronous applications that can scale to a much greater extent than threads.
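As a concrete taste of the model described above, here is a minimal sketch using the virtual-thread API that Project Loom has been prototyping (assuming a Java 21+ runtime; the API surface was still evolving in earlier builds):

```java
// Sketch: launching many lightweight "fibers" (virtual threads).
// Each task blocks on sleep, yet thousands can run concurrently without
// exhausting memory, because a parked virtual thread releases its carrier
// platform thread back to the scheduler.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class FiberSketch {
    public static int runTasks(int count) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < count; i++) {
                pool.submit(() -> {
                    try { Thread.sleep(10); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        // Far more concurrent blocking tasks than platform threads could sustain.
        System.out.println(runTasks(10_000)); // 10000
    }
}
```

The blocking style stays readable while the runtime handles the continuation bookkeeping underneath.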
Come to this presentation to learn the future of threads and asynchronous programming on the JVM.
GraalVM is a polyglot environment that can execute your code, written in multiple languages, on multiple platforms. With version 1.0 released this year, this technology has the potential to make a significant impact on both development and deployment.
In this presentation we will get an understanding of what GraalVM is and how we can benefit from using it.
Big data comes in two flavors: high volume and high frequency. How we process the data depends on both the nature of the data and the type of application.
In this presentation we will take a look at the big data processing landscape. We will explore the different tools, the types of problems they solve, when to use each of them, and occasions when we may even mix and match some of the tools.
The world we live in today is changing rapidly, both in terms of hardware and in business demands.
In this presentation we will look at technologies that will have significant impact in the next three to ten years. Some of these are poised to fundamentally change the way and the type of applications we will be building.
A down-in-the-trenches look at building, running, and day-to-day development with a Continuous Delivery pipeline. This talk is based on my experiences building multiple CD pipelines and optimizing developer workflows to push changes to production all day. I'll walk you through how we transformed a two-day deployment process into a 20-minute CD pipeline and then went on to perform more than 20,000 deployments.
During this presentation we'll walk through the evolution of a team teetering on collapse. Production deployments are a long-running ceremony that hasn't really changed in years. Deployments are risky, and everyone involved with the project acts accordingly: deployments can take days, and the company website has scheduled maintenance windows.
Over several months, the team will transform into a model of agile process mastery. Deployments will take minutes instead of days. The team's structure and concerns over deploying to production will also change shape.
During this talk we'll dig into the anatomy of a continuous delivery pipeline, what it is, how it works, and the challenges you'll face making the transition. Where do you start and what are the big four considerations of continuous delivery? Do you need company buy-in or can you start small and grow out to the rest of the organization?
We'll walk through the entire process, talk about team organization, breaking up the monolith, your first steps towards CD, identifying your primary objective, the building blocks of a Microservices architecture, the psychology of continuous delivery, how to write effective code in a CD ecosystem, and we'll build a continuous delivery pipeline and Microservice during the presentation.
How do you build a Cloud Native Application? So many cloud deployments are a lift-and-shift architecture; what would it look like if you started from scratch and used only cloud native technologies? During this session we will compare and contrast two applications: one built using a traditional Java application architecture, the other using a cloud native approach. How does building an app for the cloud change your architecture, application design, and development and testing processes? We’ll look at all this and more.
During this session we’ll dive into the details of Cloud Native applications, their origins and history. Then we’ll look at what’s involved when you move from an on-prem data center to the cloud. Should you change your approach to application design now that you are in the cloud? If so, what does a cloud-based design look like?
By the end of the session, you’ll have a better understanding of the benefits of a cloud native application design, how to best leverage cloud capabilities, and how to create performant Microservices.
I hope you'll join me on this exciting survey of Serverless Computing. When you think of Serverless you probably think of Lambdas or Cloud Functions, but there's so much more to the Serverless ecosystem. During this session we will look at Serverless Computing in all its various forms and discuss why you might want to use a Serverless architecture and how it compares to other cloud services.
Serverless is an exciting component of Cloud computing, and it's growing rapidly. During this session we'll look at all things Serverless and discuss how to incorporate it into your system architecture. We'll build a Lambda function during the presentation and talk about the pros and cons of Serverless and when you should use Serverless systems.
There are a few Serverless frameworks available today to make building a function easier than ever. We'll look at a couple of these frameworks, build a local, Serverless function and deploy it to AWS (if the network cooperates). Finally, we'll talk about performance considerations, how to structure your Serverless functions, and how to perform safe l
The cloud promises highly scalable infrastructure, economies of scale, lower costs and a more secure platform. When moving to the cloud, how do you take advantage of these new capabilities? How do you optimize your organization to make the best use of the resiliency and elasticity offered by the cloud?
Closely associated with cloud computing is Continuous Delivery, the automated process of getting changes to your customers quickly, safely, and in a sustainable way. Continuous Delivery was born in the cloud and is a great way to get ideas to your customers. There’s one catch: if you want to adopt a Continuous Delivery strategy, you need to build applications differently, your team structure needs to change, and how you test and validate systems needs to adapt to these changes.
This presentation will look at how to transform your organization to take advantage of all the cloud has to offer. We’ll look at strategies for initiating your transition to the cloud, how to adopt a continuous delivery strategy, and how to manage cross-functional teams (sometimes called two-pizza teams) and projects when every team can deploy to production multiple times a day.
Managing teams in chaos will provide you with the information needed to implement the two-pizza rule for your organization, enable your teams to work independently while still focusing on a common goal, and beat your competition to market.
Docker has revolutionized how we build and deploy applications. While Docker has revolutionized production, it's also had a huge impact on developer productivity. Anyone who's used Docker for an extended period of time will tell you it's a blessing and a curse. Yes, it's portable, but networking and other characteristics of Docker can make the most chill developer long for plain old Java. During this session we'll look at Docker's good points and how to tackle the difficult areas. The end goal: enable anyone on your team to go from zero to productive in under 20 minutes.
This session will show you how to structure a Java CRUD application that leverages Docker to enable rapid developer onboarding, schema migrations, and use of common cloud services (like Pub/Sub), all from your laptop. This setup will enable you to build a streamlined, Continuous Delivery ready, Cloud Native application; the same configuration that enables local development will supercharge your CI/CD pipeline.
If you work in a polyglot environment, you know switching to a new service can be a difficult process. There are new tools to install, environments to set up, databases to use, and so on. Docker can streamline this process and enable you to switch between services quickly and easily.
By the end of this session, you'll have a pattern for creating team-friendly Microservices that works well in a Continuous Delivery pipeline and can be deployed to any container environment. Docker will enable you to build, test, and deploy your code faster and more safely than ever before.
Current approaches to software architecture do not work. As we challenge some of the sacred truths of software development (reuse, failure prevention), we examine how current approaches to software architecture must also change.
Software systems evolve, but current approaches to architecture do not factor in this inevitable evolution. Attempts to define the architectural vision for a system early in the development lifecycle do not work. Big architecture up front (BAUF) does not work. To compound the challenge, agile methods offer very little guidance on how to do software architecture effectively.
In this session, we examine several actionable principles that will help you create software architectures that possess the ability to continuously evolve.
The Java Platform Module System became available with Java 9. In this session, we provide a clear framework for migrating your applications to JPMS.
With Java 9, modularity became a first-class construct on the Java platform…finally! In this session, we explore the default module system and examine how to migrate applications. We'll start by examining the first step in the migration and then examine several strategies for migrating your application.
Finally, we will explore advanced concepts of JPMS that bring greater structural integrity and encapsulation to the Java platform.
Organizations have a lot of expertise in Java EE. With MicroProfile, developers can leverage this expertise to build cloud-native applications.
Few consider Java EE a viable option for building microservices. Yet developers have a wealth of knowledge and skill that they may want to leverage to build microservices as they adopt cloud-native architecture patterns. The MicroProfile is a baseline platform definition that optimizes Enterprise Java for a microservices architecture and delivers application portability across multiple MicroProfile runtimes. In this session, we will explore the MicroProfile and examine its viability for using Java EE to build cloud-native applications.
Modularity is the common thread running through modern architectures and platforms. Understanding the role of modularity when making architecture decisions is critical.
The architecture paradigms we’re using are changing. The platforms we deploy our software to are changing. We are confronted with several new architecture paradigms to choose from, such as microservices and miniservices. Yet should we automatically discard some of the proven architectures we’ve used in the past, including more traditional web services? Likewise, new platforms, such as cloud, complicate the decision. Yet, at the heart of this transformation is modularity.
In this session, we’ll explore how modularity is impacting the platforms we are leveraging and the architecture paradigms we’ll use and offer a clear roadmap with proven guidance on navigating the architecture decisions we must make.
No single architectural style solves all needs. Though microservices have taken the developer community by storm recently, they are not always the optimal solution. In some cases, a more monolithic architecture may be more suitable short term. Or perhaps a more traditional system of web services that allow you to leverage existing infrastructure investment is preferable. Fortunately, proven architectural practices allow you to build software that transcends specific architectural alternatives and develop a software system that gives the development team the agility to shift between different architectural styles without undergoing a time-consuming, costly, and resource intensive refactoring effort. Modularity is the cornerstone of these alternatives.
In this workshop, we will examine the benefits and drawbacks of several different modular architecture alternatives and we’ll explore the infrastructure, skills, and practices necessary to build software with each of these alternatives. There will be straightforward exercises and demonstrations that show the alternatives and how to create software that provides the architectural agility to easily shift between architectural alternatives.
Topics discussed include:
Pen and paper
Java 8+
Ant
Gradle
Graphviz (Optional)
Heroku account and Heroku CLI (Optional, only if you want to deploy to Heroku PaaS)
Harold Macmillan was Prime Minister of the United Kingdom from 1957 to 1963, the last British PM born during Queen Victoria’s rule, and one whose wit and even-keeled nature defined his administration. When asked by a reporter what might force his government off the course he had firmly laid out for it, he allegedly replied “Events, dear boy, events.”
The same might be said about what is driving software architectures today. Event-driven systems have enabled organizations to build substantial microservices ecosystems with all of the decoupling and evolvability that we were promised by the distributed computing technologies of 20 years ago. But these systems raise some interesting questions: if events now rule, what has become of entities? If we store events in logs, do we still need databases? Can we merely produce immutable events to trivially scalable logs and loose our microservices to consume them with no regard for what is actually out there in the world?
To make sense of this, we turn to the past. Spanning the 2,500 years before Macmillan deployed his wit on that poor reporter, we will look at what Heraclitus, Aristotle, Karl Popper, and W.V.O. Quine thought and wrote about these same questions. Are there things in the world that maintain their identity over time, or is the world just a sequence of experiences? Life may be a stream of events, but sometimes I still want to look things up by key. Four great thinkers will help us be better at following the paradigm that will be shaping our systems for the next generation. And as usual, a good philosophy lesson will make us better at practical tasks. We’ll apply a rich view of events and entities to a proposed microservices architecture that can last the next decade.
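One practical way to reconcile events and entities is to treat an entity as a fold over its event stream: the current state by key is just a reduction of everything that has happened. A small illustrative sketch (the account and balance names are hypothetical, not from the talk):

```java
import java.util.List;

public class EventFold {
    // A hypothetical event: positive amounts are deposits, negative are withdrawals.
    public record BalanceChanged(String accountId, long amountCents) {}

    // The "entity" is a left fold of its events: derive current state from the log.
    public static long balanceOf(String accountId, List<BalanceChanged> log) {
        return log.stream()
                  .filter(e -> e.accountId().equals(accountId))
                  .mapToLong(BalanceChanged::amountCents)
                  .sum();
    }

    public static void main(String[] args) {
        List<BalanceChanged> log = List.of(
            new BalanceChanged("a1", 500),
            new BalanceChanged("a2", 300),
            new BalanceChanged("a1", -200));
        System.out.println(balanceOf("a1", log)); // 300
    }
}
```

The stream of experiences and the thing with identity turn out to be two views of the same data.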
It has become a truism in the past decade that building systems at scale, using non-relational databases, requires giving up on the transactional guarantees afforded by the relational databases of yore. ACID transactional semantics are fine, but we all know you can’t have them all in a distributed system. Or can we?
In this talk, I will argue that by designing our systems around a distributed log like Kafka, we can in fact achieve ACID semantics at scale. We can ensure that distributed write operations can be applied atomically, consistently, in isolation between services, and of course with durability. What seems to be a counterintuitive conclusion ends up being straightforwardly achievable using existing technologies, as an elusive set of properties becomes relatively easy to achieve with the right architectural paradigm underlying the application.
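The core of the argument can be sketched in a few lines: if every service applies records from a single ordered log and tracks its last-applied offset together with the resulting state, then replaying the log after a crash is idempotent, and effects land atomically, in order, and durably. The names below are illustrative, not a prescribed API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: a service materializes state from an ordered log. The last-applied
// offset is stored alongside the state, so re-applying the same records after
// a crash produces no duplicate effects. Names are illustrative.
public class LogApply {
    public record Record(long offset, String key, int value) {}

    private final Map<String, Integer> state = new HashMap<>();
    private long lastApplied = -1;

    public void apply(List<Record> log) {
        for (Record r : log) {
            if (r.offset() <= lastApplied) continue; // idempotent replay
            state.put(r.key(), r.value());
            lastApplied = r.offset();                // committed with the state
        }
    }

    public Integer get(String key) { return state.get(key); }

    public static void main(String[] args) {
        LogApply svc = new LogApply();
        List<Record> log = List.of(new Record(0, "x", 1), new Record(1, "x", 2));
        svc.apply(log);
        svc.apply(log); // crash-and-replay: no double effects
        System.out.println(svc.get("x")); // 2
    }
}
```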
Kafka has become a key data infrastructure technology, and we all have at least a vague sense that it is a messaging system, but what else is it? How can an overgrown message bus be getting this much buzz? Well, because Kafka is merely the center of a rich streaming data platform that invites detailed exploration.
In this talk, we’ll look at the entire open-source streaming platform provided by the Apache Kafka and Confluent Open Source projects. Starting with a lonely key-value pair, we’ll build up topics, partitioning, replication, and low-level Producer and Consumer APIs. We’ll group consumers into elastically scalable, fault-tolerant application clusters, then layer on more sophisticated stream processing APIs like Kafka Streams and KSQL. We’ll help teams collaborate around data formats with schema management. We’ll integrate with legacy systems without writing custom code. By the time we’re done, the open-source project we thought was Big Data’s answer to message queues will have become an enterprise-grade streaming platform, all in 90 minutes.
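To ground the “lonely key-value pair” above: a record's key determines its partition, which is what preserves per-key ordering as topics scale out. A simplified sketch of key-based partitioning (Kafka's real default partitioner uses murmur2 hashing, so this is illustrative only):

```java
public class Partitioner {
    // The same key always lands in the same partition, so consumers see
    // every record for a given key in the order it was produced.
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("user-42", 6);
        int p2 = partitionFor("user-42", 6);
        System.out.println(p1 == p2); // true: per-key ordering is preserved
    }
}
```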
On the inside, Kafka is schemaless, but there is nothing schemaless about the worlds we live in. Our languages impose type systems, and the objects in our business domains have fixed sets of properties and semantics that must be obeyed. Pretending that we can operate without competent schema management does us no good at all.
In this talk, we’ll explore how the different parts of the open-source Kafka ecosystem help us manage schema, from KSQL’s data format opinions to the full power of the Confluent Schema Registry. We will examine the Schema Registry’s operations in some detail, including how it handles schema migrations, and look at examples of client code that makes proper use of it. You’ll leave this talk seeing that schema is not just an inconvenience that must be remedied, but a key means of collaboration around an enterprise-wide streaming platform.
Everyone wants to be successful in life. Many have found the SMART (specific, measurable, achievable, relevant & time-boxed) goal-setting framework to be a powerful tool to help clarify and validate their goals. Unfortunately, having well-defined goals is not enough to achieve them. This is where WINS (write, incentivize, network & share) comes in.
In this session, you will learn how to become more successful by putting goals into action with SMART WINS.
Laptops will not be required for the exercises, but you will need them (or an electronic device) to view the online slides as well as the online exercises.
Becoming a software architect is a longed-for career upgrade for many software developers. While the job title suggests a work day focused on technical decision-making, the reality is quite different. In this workshop, software architect Nathaniel Schutta constructs a real world job description in which communication trumps coding.
- Discover the skill sets needed to juggle multiple priorities, meetings, and time demands
- Learn why your best team leadership tool is not a hammer, but a shared cup of coffee
- Hear the best ways to give and take criticism
- Understand the necessity of writing effective email and formal architecture documents
- Get tips for delivering confident career-building presentations to any audience
- Review essential techniques for stakeholder management and relationship building
- Explore the critical needs for architecture reviews and an effective process for conducting them

Through lecture and small group exercises, Nathaniel will help you understand what it means to be a successful architect. Working through various problems, attendees will have opportunities to think through architectural decisions and patterns, discuss the importance of non-functional requirements, and see why architects cannot afford to practice resume-driven design.
Learning about design patterns is not really hard. Using design patterns is also not that hard. But using the right design pattern for the right problem is not that easy. If, instead of looking for a pattern to use, we look for the design force behind a problem, we may arrive at better solutions. Furthermore, with most mainstream languages supporting lambda expressions and functional style, the patterns appear in many more elegant ways as well.
In this workshop we will start with a quick introduction of a few patterns. Then we will work with multiple examples—take a problem, delve into the design, and as we solve it, see what patterns emerge in the design. The objective of this workshop is to gain hands-on experience to prudently identify and use patterns that help create extensible code.
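As a small preview of how lambdas change the shape of a pattern, a Strategy that classically needs an interface plus one class per variant collapses into function values. A minimal sketch (the pricing domain is hypothetical, not a workshop exercise):

```java
import java.util.Map;
import java.util.function.DoubleUnaryOperator;

public class PricingStrategies {
    // Classic Strategy would make each pricing rule its own class.
    // With lambdas, a strategy is just a function value keyed by name.
    private static final Map<String, DoubleUnaryOperator> DISCOUNTS = Map.of(
        "none",    price -> price,
        "holiday", price -> price * 0.90,
        "loyalty", price -> price - 5.0);

    public static double price(String strategy, double base) {
        return DISCOUNTS.getOrDefault(strategy, DoubleUnaryOperator.identity())
                        .applyAsDouble(base);
    }

    public static void main(String[] args) {
        System.out.println(price("holiday", 100.0)); // 90.0
    }
}
```

The design force, selecting behavior at runtime, is unchanged; only the ceremony disappears.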
-Java 8 JDK or later version
-Your favorite IDE (Preferably IntelliJ IDEA Community Edition)
-git
Should Information Management systems apply the services architecture? Many data provisioning and BI systems are monolithic, tightly coupled, difficult to scale, and stumble when it comes to delivering MVP in a timely manner. They also don't implement Privacy by Design.
Data as a Service delivers an MVP of real-time data management while implementing privacy practices and avoiding many of the anti-patterns that traditional data provisioning and BI systems display. Unlike traditional BI tooling, building out a Data as a Service system doesn't require high up-front costs and the welding together of multiple products.
Get hands-on experience learning how the Rust language, a Kafka broker, and the DaaS and PbD SDKs can be used to build out a DaaS platform that delivers faster and more scalable solutions to your customers.
In this workshop we will walk-through and implement the key components of the Data as a Service architecture pattern by building out a simple real-time event driven online report.
In this workshop you will learn:
All you'll need is a computer with an internet browser (Chrome or Firefox, ideally)!
Virtual developer slices will be provided during the workshop by IAPP.org
See the Workshop GitHub repository for more details.
We all know that a distributed architecture is best adapted to meeting changing business needs. However, those of us who have built applications in a distributed architecture are all too familiar with the reality of implementing clustered applications. Such systems typically encounter issues with synchronizing communication, data consistency, and cloud-based restrictions.
Knowing which patterns support a distributed system can ease your next implementation.
In this session we'll look at the basic anatomy of a system and how to prepare for moving it to a distributed environment.
Should Information Management systems apply the services architecture? Many data provisioning and BI systems are monolithic, tightly coupled, difficult to scale, and stumble when it comes to delivering MVP in a timely manner.
In this session we will look at the common obstacles such systems inherently bring with them, and how the Data as a Service architecture pattern addresses many of these issues.
Agenda
Continuous Integration has redefined our testing practices. Testing has become more focused, more efficient, and repositioned further upstream in the development life-cycle. Unfortunately, our testing systems haven't evolved in lockstep, specifically in the provisioning of realistic test data.
It remains common practice to extract, cleanse, and load production data into our non-production environments. This is a lengthy process with serious security concerns, and it still doesn't satisfy all our data content requirements.
What if there were a better way of providing realistic test data? What if it could be generated on demand as part of the Continuous Integration process, without the heavy databases and traditional batch jobs?
Come join us on a journey as we walk through the concepts, building blocks, and implementation of the lightweight Test Data Generation package that addresses this automated testing niche.
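To make the idea concrete, a deterministically seeded generator produces the same realistic-looking rows on every CI run, with no production extract involved. A toy sketch (the field names are illustrative; the actual package discussed here is richer):

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class TestDataGen {
    private static final String[] FIRST = {"Ada", "Grace", "Alan", "Edsger"};
    private static final String[] LAST  = {"Lovelace", "Hopper", "Turing", "Dijkstra"};

    // Deterministic generation: same seed, same data, so a failing CI build
    // is reproducible without copying production rows around.
    public static List<String> customers(long seed, int n) {
        Random rnd = new Random(seed);
        return IntStream.range(0, n)
            .mapToObj(i -> FIRST[rnd.nextInt(FIRST.length)] + " " + LAST[rnd.nextInt(LAST.length)])
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(customers(42L, 3));
    }
}
```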
We'll provide an overview of the Rust language, explain how a mathematical framework from the 1950s was rediscovered, and provide an overview of Machine Learning patterns that were applied.
Agenda
George Santayana is famous for saying “Those who cannot remember the past are condemned to repeat it.” When SOA (Service-Oriented Architecture) was all the craze, everyone got excited about services but forgot about the data. This ended in disaster. History repeats itself, and here we are with Microservices, where everyone is excited about services but once again forgets all about the data. In this session I will discuss some of the challenges associated with breaking apart monolithic databases, and then show techniques for effectively creating data domains and splitting apart a database. I consider the data part of Microservices the hardest aspect of this architecture style. In the end, it's all about the data.
Agenda
Have you ever wondered how to share data between microservices? Have you ever wondered how to share a single database schema between hundreds (or even thousands) of microservices (cloud or on-prem)? Have you ever wondered how to version relational database changes when sharing data in a microservices environment? If any of these questions intrigue you, then you should come to this session. In this session I will describe and demonstrate various caching strategies and patterns that you can use in Microservices to significantly increase performance, manage common data in a highly distributed architecture, and even manage data synchronization from cloud-based microservices. I'll describe the differences between a distributed and a replicated cache. Using live coding and demos with Hazelcast and Apache Ignite, I'll demonstrate how to share data and also how to do space-based microservices, leveraging caching to its fullest extent.
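The simplest of these strategies, a read-through cache, can be sketched without any cache product at all; Hazelcast and Ignite provide the same shape with distribution and replication layered on top. The loader below stands in for a database query and is purely illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class ReadThroughCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;             // e.g. a database read
    public final AtomicInteger loads = new AtomicInteger(); // exposed for the demo

    public ReadThroughCache(Function<K, V> loader) { this.loader = loader; }

    // On a miss, the cache itself consults the backing store;
    // callers never talk to the database directly.
    public V get(K key) {
        return store.computeIfAbsent(key, k -> {
            loads.incrementAndGet();
            return loader.apply(k);
        });
    }

    public static void main(String[] args) {
        ReadThroughCache<String, String> cache =
            new ReadThroughCache<>(id -> "row-for-" + id); // stand-in for a DB query
        cache.get("42");
        cache.get("42"); // second read is served from the cache
        System.out.println(cache.loads.get()); // 1
    }
}
```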
Agenda:
In 250 BC Rome began its expansion into Carthage, and later into the divided kingdoms of Alexander, starting the rise of a great empire that began its decline around 350 AD. Much can be learned from the rise and fall of the Roman Empire as it relates to a similar rise and fall: Microservices. Wait. Did I say “fall of microservices”? Over the past 5+ years Microservices has been at the forefront of most books, articles, and company initiatives. While some companies have been experiencing success with microservices, most have been experiencing pain, cost overruns, and failed initiatives trying to design and implement this incredibly complex architecture style. In this session I discuss and demonstrate why microservices is so vitally important to businesses, and also why companies are starting to question whether microservices is the right solution. Sir Isaac Newton once said “What goes up must come down”; Blood, Sweat & Tears sang about this in their hit “Spinning Wheel”. Microservices is no exception. Come to this provocative session to learn about the real challenges and issues associated with microservices, how we might be able to overcome some of the technical (and business) challenges, and whether microservices is really the answer to our problems.
Software architecture is hard. It is full of tradeoff analysis, decision making, technical expertise, and leadership, making it more of an art than a science. The common answer to any architecture-related question is “it depends”. To that end, I firmly believe there are no “best practices” in software architecture because every situation is different, which is why I titled this talk “Essential Practices”: those practices companies and architects are using to achieve success in architecture. In this session I explore in detail the top 6 essential software architectural practices (both technical architecture and process-related practices) that will make you an effective and successful software architect.
This session is broken up into 2 parts: those essential architecture practices that relate to the technical aspects of an architecture (hard skills), and those that relate to the process-related aspects of software architecture (soft skills). Both parts are needed to make architecture a success.
Whether starting a new greenfield application or analyzing the vitality of an existing application, one of the decisions an architect must make is which architecture style to use (or to refactor to). Microservices? Service-Based? Microkernel? Pipeline? Layered? Space-Based? Event-Driven? SOA? Having the right architecture style in place is essential to the success of any application, big or small. Come to this fast-paced session to learn how to analyze your requirements and domain to make the right choice about which architecture style is right for your situation.
Agenda
Reactive architecture patterns allow you to build self-monitoring, self-scaling, self-growing, and self-healing systems that can react to both internal and external conditions without human intervention. These kinds of systems are known as autonomic systems (our human body is one example). In this session I will show you some of the most common and most powerful reactive patterns you can use to automatically scale systems, grow systems, and self-repair systems, all using the basic language API and simple messaging. Through code samples in Java and actual run-time demonstrations, I'll show you how the patterns work and also show you sample implementations. Get ready for the future of software architecture, one you can start implementing on Monday.
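As a taste of the self-scaling pattern, the scaling decision itself can be a pure function of observed load, which a supervisor applies on a timer. A deterministic sketch (the thresholds and bounds are illustrative, not prescriptive):

```java
public class SelfScaler {
    // React to observed queue depth: grow when falling behind, shrink when idle.
    public static int desiredWorkers(int current, int queueDepth, int min, int max) {
        if (queueDepth > current * 10) return Math.min(max, current * 2); // backed up: scale out
        if (queueDepth < current)      return Math.max(min, current / 2); // over-provisioned: scale in
        return current;
    }

    public static void main(String[] args) {
        System.out.println(desiredWorkers(2, 50, 1, 16)); // 4: deep queue, scale out
        System.out.println(desiredWorkers(8, 3, 1, 16));  // 4: shallow queue, scale in
    }
}
```

Keeping the decision pure makes the pattern trivial to test; the messaging layer only has to report depth and apply the result.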
Agenda
As Tech Leaders, we are presented with problems and work to find a way to solve them, usually through technology. In my opinion this is what makes this industry so much fun. Let's face it - we all love challenges. Sometimes, however, the problems we have to solve are hard - really hard. So how do you go about solving really hard problems? That's what this session is about - Heuristics, the art of problem solving. In this session you will learn how to approach problems and also learn techniques for solving them effectively. So put on your thinking cap and get ready to solve some easy, fun, and hard problems.
Agenda:
Many developers aspire to become architects. Some of us currently serve as architects, while the rest of us may hope to become one some day. We have all worked with architects, some good, and some that could be better. What are the traits of a good architect? What are the skills and qualities we should pick up to become a very good one?
Come to this presentation to learn about things that can make that journey to be a successful architect a pleasant one.
Before spending substantial effort on refactoring or altering a design, it would be prudent to evaluate its current quality. This can help us decide whether we should proceed with the refactoring effort or a particular alteration of the design. Furthermore, after evolving a design, using some design metrics can help us evaluate whether we have actually improved it.
In this workshop we will learn about some critical qualities of design and how to measure them. We will learn about these by working through some example code, refactoring it, and evaluating the design again at each stage of refactoring.
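One concrete metric often used for this kind of evaluation is Robert Martin's instability, I = Ce / (Ca + Ce), computed from a package's incoming and outgoing dependencies. A small sketch (the example dependency counts are hypothetical):

```java
public class DesignMetrics {
    // Instability I = Ce / (Ca + Ce): efferent (outgoing) vs afferent (incoming)
    // coupling. 0 means maximally stable, 1 means maximally unstable.
    public static double instability(int afferent, int efferent) {
        int total = afferent + efferent;
        return total == 0 ? 0.0 : (double) efferent / total;
    }

    public static void main(String[] args) {
        // A utility package used by 9 others, depending on 1: very stable.
        System.out.println(instability(9, 1)); // 0.1
        // An entry point depending on 4 packages, used by none: unstable.
        System.out.println(instability(0, 4)); // 1.0
    }
}
```

Re-computing such numbers after each refactoring step gives an objective signal alongside intuition.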
-Java 8 JDK or later version
-Your favorite IDE (Preferably IntelliJ IDEA Community Edition)
-git
These days, you can’t swing a dry erase marker without hitting someone talking about microservices. Developers are studying Eric Evans’s prescient book Domain-Driven Design. Teams are refactoring monolithic apps, looking for bounded contexts and defining a ubiquitous language. And while there have been countless articles, videos, and talks to help you convert to microservices, few have spent any appreciable time asking whether a given application should be a microservice. In this talk, I will show you a set of factors you can apply to help you decide if something deserves to be a microservice or not. We’ll also look at what we need to do to maintain a healthy micro(services)biome.
There are many good reasons to use a microservices architecture. But there are no free lunches. The positives of microservices come with added complexity. Teams should happily take on that complexity…provided the application in question benefits from the upside of microservices. This talk will cut through the hype to help you make the right choice for your unique situation.
Big up front design is discouraged in agile development. However, we know that architecture plays a significant part in software systems. Evolving architecture during the development of an application seems to be a risky business.
In this presentation we will discuss the reasons to evolve the architecture, some of the core principles that can help us develop in such a manner, and the ways to minimize the risk and succeed in creating a practical and useful architecture.
This workshop highlights the ideas from the forthcoming Building Evolutionary Architectures, showing how to build architectures that evolve gracefully over time.
An evolutionary architecture supports incremental, guided change across multiple dimensions.
For many years, software architecture was described as the “parts that are hard to change later”. But then microservices showed that if architects build evolvability into the architecture, change becomes easier. This workshop, based on my upcoming book, investigates the family of software architectures that support evolutionary change, along with how to build evolvable systems. Understanding how to evolve architecture requires understanding how different parts of architecture interact; I describe how to achieve appropriate coupling between components and services. Incremental change is critical for the mechanics of evolution; I cover how to build engineering and DevOps practices to support continuous change. Uncontrolled evolution leads to undesirable side effects; I cover how fitness functions build protective, testable scaffolding around critical parts to guide the architecture as it evolves.
The software development ecosystem exists in a state of dynamic equilibrium, where any new tool, framework, or technique leads to disruption and the establishment of a new equilibrium. Predictability is impossible when the foundation architects plan against changes constantly in unexpected ways. Instead, prefer evolvability over predictability. This hands-on workshop provides a high-level overview of a different way to think about software architecture.
Outline:
No prerequisites or requirements–all exercises are done with paper, pen, and intellect.
The way we build and deliver software is changing. We must deliver software more quickly than ever before, and traditional approaches to software architecture, infrastructure and methodology do not allow us to meet demand. We’ve reached the limits of agility through process improvement alone; further increases demand that we focus on improving architecture, infrastructure, and methodology simultaneously. 12 Factor is an app development methodology for building modern apps in the modern era.
Building modern apps requires modern methods and 12 Factor is an app development methodology that helps development teams build software by emphasizing development practices that meld together modern architectural paradigms with agile practices like continuous delivery for deployment to cloud platforms. In this session, we’ll examine the 12 Factors and explore how to apply them to apps built using Java.
At the end of this workshop, you will be comfortable with designing, deploying, managing, monitoring and updating a coordinated set of applications running on Kubernetes.
Distributed application architectures are hard. Building containers and designing microservices to work and coordinate together across a network is complex. Given limitations on resources, failing networks, defective software, and fluctuating traffic, you need an orchestrator to handle this variability. Kubernetes is designed to handle these complexities, so you do not have to. It's essentially a distributed operating system across your data center. You give Kubernetes containers and it will ensure they remain available.
Kubernetes continues to gain momentum and is quickly becoming the preferred way to deploy applications.
In this workshop, we’ll grasp the essence of Kubernetes as an application container manager, learning the concepts of deployments, pods, services, ingress, volumes, secrets, and monitoring. We’ll look at how simple containers are quickly started using a declarative syntax. We'll build on this with a coordinated cluster of containers to make an application. Next, we will learn how Helm is used for managing more complex collections of containers. We'll see how your application containers can find and communicate with each other directly, or use a message broker for exchanging data. We will play chaos monkey, mess with some vital services, and observe how Kubernetes self-heals back to the expected state. Finally, we will observe performance metrics and see how nodes and containers are scaled.
Come to this workshop to learn how to deploy and manage your containerized application. Along the way, you will see how Kubernetes effectively schedules your application across its resources.
Optionally, for more daring and independent attendees, you can also replicate many of the exercises on your local laptop with Minikube or Minishift. There are other Kubernetes flavors as well. However, if during the workshop you are having trouble, please understand we cannot deviate too far to meet your local needs. If you do want to try some of the material locally, this stack is recommended:
Some of the topics we will explore:
These concepts are presented and reinforced with hands-on exercises:
You will leave with a solid understanding of how Kubernetes actually works and a set of hands-on exercises you can share with your peers. Bring a simple laptop with a standard browser for a full hands-on experience.
“In order to make delicious food… you need to develop a palate capable of discerning good and bad. Without good taste, you can't make good food.” - Jiro Ono (World’s Best Sushi Chef)
Many of us are stuck with messy code. We know it’s not great, but it works, so what can we do? Where and how do you start?
We are going to use some cutting-edge training to teach the pattern-recognition part of your brain to instantly recognize common, recurring anti-patterns (smells) in your code.
Then we will learn very specific techniques to start improving on these specific smells.
Once you are trained to see these anti-patterns you’ll recognize them everywhere. Now that you are equipped to handle them your code will start to transform into something beautiful and easy to work with.
Let’s get back to basics.
One of the microskills often used in TDD is Consume First Architecture, which simply means using the fields and methods before they exist. Sounds easy? Well yes and no. Even simple lines of code can have HUGE implications on your architecture. The real skill in consume first is to be able to see, question and respond to those implications on sight.
In this lab, we are going to geek out over a single line of code. We will take it and turn it into 40-50 variations and explore how each variation impacts the resulting design.
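A minimal sketch of the consume-first idea (the `Report` class and `summarize` method are invented names for illustration, not the lab's actual exercise): the call site is written before the method exists, and the declaration is generated afterwards to satisfy it.

```java
// Consume-first: the call in main was written before Report existed,
// then the IDE generated the declaration below to match.
class ConsumeFirst {
    public static void main(String[] args) {
        // One line, several design decisions: static vs. instance,
        // parameter types, return type, and who owns the name "summarize".
        String summary = Report.summarize("orders.csv", 3);
        System.out.println(summary);
    }
}

// Generated afterwards to satisfy the consumer above.
class Report {
    static String summarize(String fileName, int topN) {
        return "Top " + topN + " rows of " + fileName;
    }
}
```

Each small variation of that one line (instance method on a new object, fluent chain, different parameter order) pushes the design in a different direction, which is exactly what the lab explores.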
If you think pair programming (2 people on 1 computer) is crazy, hold onto your hats; it’s time for Mob Programming.
Mob Programming: All the brilliant people working on the same thing, at the same time, in the same place, and on the same computer.
We are going to take a look at a new way of working, what it looks like, and why it can work. More importantly, we’ll have a (very) short session of actual mobbing, so you can see for yourself and come to your own conclusions.
Learn how to use Heroku's 12 (15) Factor App methodologies to make your applications more portable, scalable, reliable and deployable.
Do you want to improve your application’s portability, scalability, reliability and deployability? Now you can, with Heroku’s 12 Factor App methodologies. Learn from their experience hosting and supporting thousands of apps in the cloud. During this hands-on workshop, you will learn how to incorporate factors like configuration, disposability, dev/prod parity and much more into an existing application, whether it is an on-premises or cloud-native app. But wait, there’s more! Act now, and get an additional 3 factors absolutely free! API first, telemetry, and even authentication and authorization will be included at no additional cost.
You've heard the old adage: “It's not what you know, it's who you know.” The focus of this session is divided between ways to better connect with everyone you meet and ways to grow your network, help and influence people, and ultimately build long-term relationships and your reputation.
Networking isn't about selling, nor is it about “taking.” Done properly, it benefits everyone. Among the benefits are strengthening relationships; getting new perspectives and ideas; building a reputation of being knowledgeable, reliable and supportive; and having access to opportunities and more!
Slides available online: https://prezi.com/ck1fdbhgqwiq/?token=8f8240f753ad9ae2c50ce696657020f40a877a40fa224790652eb412ac5eb8d3
In tech teams it's a constant firefight. We react. Then we react to the reaction… the cycle continues. In all this noise, in all this chaos, how do we move forward? How do we remain proactive?
A great leader must be an enabler for the team. At times this means insulating the team from the noise. At other times it means improving the environment for the team. At all times, however, it requires setting clear priorities and conditions for success.
This session is focused on the art of moving forward in even the noisiest environments.
Great leaders inspire, excite, and empower those in their teams. These leaders help create a team that is more than the sum of its parts; in short, a great leader can be a force multiplier for the team.
But what makes these force multipliers? Is it simply raw talent? Charisma? How are these leaders different from the bad leaders who become bottlenecks and roadblocks?
In this session, we explore the answer to that question and identify the skills and principles that create force multipliers. Put these skills into action and you can be one too!
While the Web itself has strong decentralized aspects to how it is used, the backend technologies are largely centralized. The naming systems, the routing systems and the traffic that all points back to the same place for a website are all centralized technologies. This creates both a liability as well as a control point.
In order to break free of some of these limitations, new technologies are emerging to provide a more decentralized approach to the Web.
This talk will walk you through some emerging technologies that provide decentralized content storage and distribution, edge computing, and more. We will touch upon the Interplanetary Filesystem, WebTorrent, Blockchain spin-offs, and more.
In this guided demo, we are going to look at 3 different techniques that are remarkably powerful in combination to cut through legacy code without having to go through the bother of reading or understanding it.
The techniques are:
Combination Testing: to get 100% test coverage quickly
Code Coverage as guidance: to help us make decisions about inputs and deletion
Provable Refactorings: to help us change code without having to worry about it.
In combination, these 3 techniques can quickly make impossible tasks trivial.
We will be doing this on the Gilded Rose Kata: https://github.com/emilybache/GildedRose-Refactoring-Kata
It is extra beneficial if you try it out yourself first so you can see how your implementation would differ.
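As a rough sketch of the combination-testing technique (the `Pricing` rule and its inputs here are invented for illustration; the kata itself exercises the Gilded Rose update rules): run every combination of representative inputs through the legacy code and pin the resulting table as a snapshot, without reading the logic first.

```java
import java.util.ArrayList;
import java.util.List;

// A stand-in for some legacy rule we don't want to read, only to pin down.
class Pricing {
    static int adjust(int quality, int sellIn) {
        return sellIn < 0 ? Math.max(0, quality - 2) : Math.max(0, quality - 1);
    }
}

class CombinationTest {
    public static void main(String[] args) {
        List<String> snapshot = new ArrayList<>();
        // Exercise every combination of representative inputs.
        for (int quality : new int[]{0, 1, 50}) {
            for (int sellIn : new int[]{-1, 0, 11}) {
                snapshot.add(quality + "," + sellIn + " -> "
                        + Pricing.adjust(quality, sellIn));
            }
        }
        // In practice this table is saved once as an "approved" snapshot;
        // the test then fails on any behavioral change, giving broad
        // coverage cheaply before refactoring begins.
        snapshot.forEach(System.out::println);
    }
}
```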
Transitioning from a monolith to a microservices-based architecture is a non-trivial endeavor. It is riddled with pitfalls that may lead to a disastrous implementation if we're not careful.
In this presentation we will discuss some core practices and principles that are critical to follow to effectively transition from a monolith to a microservices based architecture.
It's common knowledge: software must be extensible, easier to change, less expensive to maintain. But, how? That's what we often struggle with. Thankfully there are some really nice design principles and practices that can help us a great deal in this area.
In this workshop, we will start with a few practical examples, problems that will demand extensibility and ease of change. We will approach their design, and along the way learn about the principles we apply, why we apply them, and the benefits we get out of using these principles. Instead of talking theory, we will design, refactor, create prototypes, and evaluate the design we create.
-Java 8 JDK or later version
-Your favorite IDE (Preferably IntelliJ IDEA Community Edition)
-git
Creating code is easy; creating good code takes a lot of time, effort, discipline, and commitment. The code we create is truly the manifestation of our design. Creating a lightweight design can help make the code more extensible and reusable.
In this presentation we will take an example oriented approach to look at some core design principles that can help us create better design and more maintainable code.
Design patterns are common place in OO programming. With the introduction of lambda expressions in languages like Java, one has to wonder about their influence on design patterns.
In this presentation we will take up some of the common design patterns and rework them using lambda expressions. We will also explore some other patterns that are not so common, but are quite useful ways to apply lambdas.
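As a small taste of that reworking (names here are invented for illustration): the classic Strategy pattern, which traditionally needs an interface plus a class per strategy, collapses into a method that accepts a lambda.

```java
import java.util.List;
import java.util.function.Predicate;

// Strategy via lambdas: the varying policy is passed as a function
// instead of a hierarchy of strategy classes.
class Totals {
    static int totalValues(List<Integer> values, Predicate<Integer> selector) {
        int total = 0;
        for (int v : values) {
            if (selector.test(v)) total += v;  // the strategy decides inclusion
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> values = List.of(1, 2, 3, 4, 5, 6);
        // Each lambda is a strategy; no anonymous classes needed.
        System.out.println(totalValues(values, v -> true));        // all values: 21
        System.out.println(totalValues(values, v -> v % 2 == 0));  // even values: 12
    }
}
```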
Developers and architects are increasingly called upon to solve big problems, and we are able to draw on a world-class set of open source tools with which to solve them. Problems of scale are no longer consigned to the web’s largest companies, but are increasingly a part of ordinary enterprise development. At the risk of only a little hyperbole, we are all distributed systems engineers now.
In this talk, we’ll look at four distributed systems architectural patterns based on real-world systems that you can apply to solve the problems you will face in the next few years. We’ll look at the strengths and weaknesses of each architecture and develop a set of criteria for knowing when to apply each one. You will leave knowing how to work with the leading data storage, messaging, and computation tools of the day to solve the daunting problems of scale in your near future.
Java Modules are the future. However, our enterprise applications have legacy code, lots of it. How in the world do we migrate from the old to the new? What are some of the challenges? In this presentation we will start with an introduction to modules and learn how to create them. Then we will dive into the differences between unnamed modules, automatic modules, and explicit modules. After that we will discuss some key limitations of modules, things that may surprise your developers if they're not aware of them. Finally we will discuss how to migrate current applications to use modules.
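For a sense of what an explicit module looks like (module and package names below are invented for illustration): it is declared in a `module-info.java` at the root of the source tree, stating what the module requires and what it exports.

```java
// module-info.java -- a minimal, hypothetical explicit module declaration
module com.example.orders {
    requires java.sql;               // depend on a platform module
    exports com.example.orders.api;  // only this package is visible to consumers
}
```

A jar without this file placed on the module path becomes an automatic module; one left on the classpath joins the unnamed module, which is where most migrations begin.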
Our technical world is governed by facts. In this world Excel files and technical diagrams are everywhere, and too often this way of looking at the world makes us forget that the goal of our job is to produce value, not to fulfill specifications.
Feedback is the central source of agile value. The most effective way to obtain feedback from stakeholders is a demo. Good demos engage. They materialize your ideas and put energies in motion. They spark the imagination and uncover hidden assumptions. They make feedback flow.
But, if a demo is the means to value, shouldn’t preparing the demo be a significant concern? Should it not be part of the definition of done?
That is not even all. A good demo tells a story about the system. This means that you have to make the system tell that story. Not a user story full of facts. A story that makes users want to use the system. That tiny concern can change the way you build your system. Many things go well when demos come out right.
Demoing is a skill, and like any skill, it can be trained. Regardless of the subject, there always is an exciting demo lurking underneath. It just takes you to find it. And to do it.
In this session we will get to exercise that skill.
Looking at what occupies most of our energy during software development, our domain is primarily a decision-making business rather than a construction one. As a consequence, we should invest in a systematic discipline for making decisions.
Assessment denotes the process of understanding a given situation about a software system to support decision making.
During software development, engineers spend as much as 50% of the overall effort on doing precisely that: they try to understand the current status of the system to know what to do next. In other words, assessing the current system accounts for half of the development budget. These are just the direct costs. The indirect costs can be seen in the quality of the decisions made as a result.
One might think that an activity that has such a large economic impact would be a topic of high debate and improvement. Instead, it is typically treated like the proverbial elephant in the room. In this talk, we argue that we need to:
• Make assessment explicit. Ignoring it won’t make it go away. By acknowledging its existence you have a chance of learning from past experiences and of optimizing your approach.
• Tailor assessment. Currently, developers try to assess the system by reading the source code. This is highly ineffective in many situations, and it simply does not scale to the size of the modern systems. You need tools, but not any tools. Your system is special and your most important problems will be special as well. That is why generic tools that produce nice looking reports won’t make a difference. You need smart tools that are tailored to your needs.
• Educate ourselves. The ability to assess is a skill. Like any skill, it needs to be educated. Enterprises need to understand that they need to allocate the budget for those custom tools, and engineers need to understand that it is within their reach to build them. It’s not rocket science. It just requires a different focus.
Most nontrivial software systems suffer from significant levels of technical and architectural debt. This leads to exponentially increasing cost of change, which is not sustainable for a longer period of time. The single best thing you can do to counter this problem is to give some love to your architecture by carefully managing and controlling the dependencies among the different elements and components of a software system. For that purpose we will introduce a DSL (domain specific language) that can be used to describe and enforce architectural blueprints. Moreover we will make an excursion into the topic of legacy software modernization.
In this workshop part participants will use Sonargraph to assess and analyze a software system of their choice (Java, C/C++, C# or Python) and design an architectural model using the domain specific language introduced in the session. The tool and a free 60 day license will be provided during the workshop.
This workshop will use Sonargraph-Architect to create architectural models for a project of your choice. While I will bring software and license keys on a flash drive, you can install it upfront by registering on www.hello2morrow.com, downloading the tool, and requesting an evaluation license. If possible, please bring a project to analyze that can be built on your laptop. Supported languages are Java, C#, C/C++ and Python. For people who cannot bring a project, an open source project will be provided to work on.
You have some modular code with a REST API. You are on your way to Microservices. Next, you package it in a container image that others can run. Simple. Now what? Your service needs to log information, needs to scale and load balance between its clones. Your service needs environment and metadata way outside its context. What about where the service will run? Who starts it? What monitors its health? What about antifragility? Updates? Networking? Oh my.
Don't get flustered. We will explore how Kubernetes simplifies the complexity of distributed computing.
This session will help you understand the terms, architecture and the mechanics of the Kubernetes tools. You will understand how to target your applications to a seemingly complex distributed compute platform.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts (1 of 2)
Aha moments with apps in containers can be quite liberating. The mobile space is saturated with “there's an app for that”. For us, we now expect “there's a container for that”. “Write once, run anywhere” (WORA) has changed to “Package once, run anywhere” (PORA). The growing community of containers is riding up the hype curve. We will look at many ways to assemble pods using architecture patterns you already know.
Your software package delivery and installation is no longer an rpm, deb, dmg, jar, war, native executable or a run script, it is simply an image that has a common run container command.
During the presentation, we will explore some examples on Katacoda.
Kubernetes is a strong platform for running and coordinating large collections of services, containers, and applications, but even with all of its standard resources, Kubernetes can't do everything. Fortunately, Kubernetes is highly configurable and extensible. The Operator pattern has emerged as an important extension technique.
We’ll break down what this pattern is all about. There are public operators, there are operator registries, and there are frameworks so you can write your own operators. Beyond the Hello World examples, you'll soon see why Operators are important and how you can get started with them.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts
At the 2009 Agile conference, J.B. Rainsberger declared “Integration tests are a scam.” I agree. Come see some compelling reasons why consumer-driven contract testing is a much better approach, particularly for microservices.
We will explore different testing techniques on Kubernetes, including an important one called “Consumer-Driven Contracts”.
After a brief overview of the concepts a live demonstration will show you how to:
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
From operating systems on bare metal, to virtual machines on hypervisors, to container orchestration platforms: how we run our code and bundle our applications continues to evolve. Serverless computing continues that evolutionary path for our architectures.
Kubernetes provides an ideal, vendor-agnostic platform for serverless computing. Serverless promises that developers can worry less about the cluster and focus more on their logic. Building on your Kubernetes knowledge, we will discover the various contributors of serverless frameworks on Kubernetes. Specifically, we will unpack how two open source serverless frameworks, Kubeless and OpenFaaS, leverage Kubernetes to achieve their promises. We will explore how Knative is helping serverless providers evolve to the next level of sophistication.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
Kubernetes is a powerful platform for running containers and distributing computation workloads across resources. A significant question is: how do you get all your code to this platform, continuously?
In 2019 our community is bursting with new solutions to assist our delivery pipelines. While Jenkins is a dominant player, there is a growing array of new ideas and choices. From coding at your laptop to building containers to deployments, we will explore the various tools and techniques to reduce the delivery frictions.
Kubernetes is also a fitting platform for hosting your continuous tools, pipeline engines, registries, testing, code analysis, security scans, and delivery workflows.
From this session, you will understand the latest tools and techniques for pipelining on Kubernetes. Let's up the game on your Maturity Model.
Kubernetes out of the box is a strong platform for running and coordinating large collections of services, containers, and applications. As is, Kubernetes is powerful for many solutions.
Remember Underdog? He was a mild-mannered dog, but when stress and conflict were introduced to the plot, he took a magic pill and became a superhero. Istio is a superhero for Kubernetes.
Istio is an open, platform-independent service mesh that manages communications between services in a transparent way. With a distributed architecture many things can fail, slow down and become less secure. Istio provides solutions to those stresses in our plot toward this architecture style:
• Traffic management
• Observability
• Policy enforcement
• Service identity and security
We will explore these benefits with Istio through some working examples on Kubernetes. The key point is this meshing solution allows your Java code to be less coupled to the inherent weaknesses of a distributed platform.
Prerequisite: If you are unfamiliar with Kubernetes or Istio meshing be sure to attend: Understanding Kubernetes: Fundamentals or Understanding Kubernetes: Meshing Around with Istio.
Kubernetes is a complex container management system. Your application running in containers is also a complex system, as it embraces the distributed architecture of highly modular and cohesive services. As these containers run, things may not always behave as smoothly as you hope. We must embrace the notion of antifragility and design our systems to be resilient despite the realities of resource limitations, network failures, hardware failures, and failed software logic. All of this demands a robust monitoring system to open views into the behaviors and health of your applications running in a cluster.
Three important aspects to observe are log streams, tracing, and metrics.
In this session, we look at some example microservices running in containers on Kubernetes. We add Istio to the cluster for meshing. We observe how logs are gathered, see how transactions are traced and measured between services, inspect metrics, and finally add alerts when metrics indicate a problem.
Three evolutionary ecosystems work well together: Java, containers, and Kubernetes.
Past versions of Java were never designed to be “container aware”. This has led some to stray from the JVM and consider other shiny languages. But wait, before you go, let's discover what Java 9, 10, 11, 12, 13 (…) have done to get our applications into efficiently distilled containers that pack nicely into Kubernetes.
Topics covered:
Machine Learning is all the rage, but many developers have no idea what it is, what they can expect from it, or how to get started in this huge and rapidly-changing field. The ideas draw from the fields of Artificial Intelligence, Numerical Analysis, Statistics, and more. These days, you'll generally have to be a CUDA-wielding Python developer to boot. This workshop will gently introduce you to the ideas and tools, show you several working examples, and help you build a plan for diving deeper into this exciting new field.
We will cover:
Please install Anaconda for Python 3 before the workshop if possible. https://www.anaconda.com/download
Machine Learning is all the rage, but many developers have no idea what it is, what they can expect from it, or how to get started in this huge and rapidly-changing field. The ideas draw from the fields of Artificial Intelligence, Numerical Analysis, Statistics, and more. These days, you'll generally have to be a CUDA-wielding Python developer to boot. This workshop will gently introduce you to the ideas and tools, show you several working examples, and help you build a plan for diving deeper into this exciting new field.
We will cover:
Please install Anaconda for Python 3 before the workshop if possible. https://www.anaconda.com/download
Explore another learning medium to add to your toolbox: Katacoda.
This is a 90-minute mini-workshop where you learn to be an author on Katacoda. Bring your favorite laptop with just a browser and a text editor.
Have a GitHub account and bring your laptop. Let's learn together.
We are continuously learning and keeping up with the changing landscapes and ecosystems in software engineering. Some technologies are difficult to learn or may take too much time to set up just to get to the key points. One of the reasons why you might be here at NFJS is to do exactly that – to learn. Great!
There are many mediums we use to learn, and we often combine them for different perspectives: books, how-to articles, GitHub readmes, blog entries, recorded talks on YouTube, and online courses. All of these help us sort through new concepts. I'm sure you have your favorites.
Katacoda is becoming a compelling platform for learning and teaching concepts. You can also author your own topics for public communities or private teams. Katacoda offers a platform that hosts live server command lines in your browser, with a split screen for course material broken into easy-to-follow steps.
Containers enable rapid development and rapid software delivery - and with that increase in speed comes a need to shift how people think about and tackle security. Running those containers is part of this consideration - the platform and container orchestration have to figure out and handle all of the moving parts.
In this talk, Laine and Josh will give their recommendations for Kubernetes as a platform to run containers. They'll talk about security from the perspective of the pieces that make up the container - the ingredients - and how it runs, in addition to where it runs. They'll discuss application and platform boundaries while explaining a simple model to use in order to think about and discuss this complex topic.
“What you value is what you get” looks at the unexpected results of emphasizing traditional software deliverables and organizational structures. In this presentation, we will focus on undifferentiated work, how to recognize it, and how to motivate your organization to focus instead on differentiated work. No one knows your customers better than you, so why have your teams build custom infrastructure or software frameworks instead of adding business value?
It turns out that what executive leadership values, and more importantly what you reward and recognize, may encourage your teams to focus on undifferentiated work. During this presentation we'll talk about how to spot undifferentiated work, questions to ask your teams to get to the heart of what they are doing, and how to value, encourage, and reward work that will truly set you apart from your competition.
This session describes how architects can identify architectural characteristics from a variety of sources, how to distinguish architectural characteristics from domain requirements, and how to build protection mechanisms around key characteristics. It also describes a variety of tradeoff analysis techniques architects can use to best balance all the competing concerns on software projects.
Architects must translate domain requirements, external constraints, speculative popularity, and a host of other factors to determine the key characteristics of a software system: performance, scale, elasticity, and so on. Yet architects must also analyze the tradeoffs each characteristic entails, arriving at a design that manages to maximize as many beneficial properties as possible. This session describes how architects can identify architectural characteristics from a variety of sources, how to distinguish architectural characteristics from domain requirements, and how to build protection mechanisms around key characteristics. It also describes a variety of tradeoff analysis techniques architects can use to best balance all the competing concerns on software projects.
This session covers basic application and distributed architectural styles, analyzed along several dimensions (type of partitioning, families of architectural characteristics, and so on).
A key building block for burgeoning software architects is understanding and applying software architecture styles and patterns. This session covers basic application and distributed architectural styles, analyzed along several dimensions (type of partitioning, families of architectural characteristics, and so on). It also provides attendees with understanding and criteria to judge the applicability of a problem domain to an architectural style.
Patterns/antipatterns, techniques, engineering practices, and other details showing how to restructure existing architectures and migrate from one architecture style to another.
A common challenge facing many architects today involves restructuring their current architecture or migrating from one architectural style to another. For example, many companies start with monolithic applications for simplicity, but find they must migrate it to another architecture to achieve different architectural characteristics. This session shows patterns/antipatterns, techniques, engineering practices, and other details showing how to make major changes to architectures. This session introduces a new measure, the architectural quantum, as a way of measuring and analyzing coupling and portability within architectures.
This session describes mechanisms to automate architectural governance at application, integration, and enterprise levels.
A nagging problem for architects is the ability to enforce the governance policies they create. Yet, outside of architecture review boards or code reviews, how can architects be sure that developers follow their rules? This session describes mechanisms to automate architectural governance at application, integration, and enterprise levels. By focusing on fitness functions, architects define objective tests, metrics, and other criteria to ensure governance policies stick.
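To make the idea concrete, here is a minimal fitness-function sketch. On the JVM this is typically done with a tool that inspects compiled classes; the language-agnostic version below checks a dependency map, extracted from a codebase, against a hypothetical layering rule and fails the build when a lower layer reaches “up” into a higher one. The layer names and dependencies are illustrative.

```python
# Layer ranking: lower number = closer to the user-facing edge.
LAYERS = {"web": 0, "service": 1, "repository": 2}

def layering_violations(dependencies):
    """Return (src, dst) pairs where a dependency points to a higher layer."""
    return [(src, dst) for src, dst in dependencies
            if LAYERS[dst] < LAYERS[src]]

deps = [("web", "service"),          # allowed: edge depends on service layer
        ("service", "repository"),   # allowed: service depends on persistence
        ("repository", "service")]   # violation: persistence reaching upward
violations = layering_violations(deps)
```

Run as a CI step, a check like this turns a whiteboard rule into an objective, executable governance policy: the pipeline fails whenever `violations` is non-empty, rather than waiting for a review board to notice.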
In some organizations, architects are dismissed as people that draw box and arrow diagrams - the dreaded whiteboard architect. While we don't want to foster that stereotype, it is important for an architect to be able to construct basic architectural diagrams. An architect must also be able to separate the wheat from the chaff, eliminating those models that don't help tell the story while fully leveraging those that do.
In this workshop, we'll discuss the various diagrams at our disposal. We'll walk through a case study and, as we go, we'll construct a set of diagrams that will help us effectively communicate our design. We'll talk about stakeholders and who might benefit from each type of diagram. Additionally, we'll discuss how to constructively review an architectural model.
Neither a laptop nor special software is required for this workshop though your modeling tool of choice (Spark, Visio, OmniGraffle, etc.) is welcome for the exercises. Of course paper and pencil are very effective too and frankly recommended! Feel free to work in pairs or teams. That's it! Well, and a willingness to participate!
In some organizations, architects are dismissed as people that draw box and arrow diagrams - the dreaded whiteboard architect. While we don't want to foster that stereotype, it is important for an architect to be able to construct basic architectural diagrams. An architect must also be able to separate the wheat from the chaff, eliminating those models that don't help tell the story while fully leveraging those that do.
In this workshop, we'll discuss the various diagrams at our disposal. We'll walk through a case study and, as we go, we'll construct a set of diagrams that will help us effectively communicate our design. We'll talk about stakeholders and who might benefit from each type of diagram. Additionally, we'll discuss how to constructively review an architectural model.
Neither a laptop nor special software is required for this workshop though your modeling tool of choice (Spark, Visio, OmniGraffle, etc.) is welcome for the exercises. Of course paper and pencil are very effective too and frankly recommended! Feel free to work in pairs or teams. That's it! Well, and a willingness to participate!
How do you create creativity? This talk offers techniques and perspectives to discover, grow, and project your ideas.
Where do ideas for new talks, books, videos, software…ultimately, everything…come from? A common question at conference panels to the speakers is “Where do you get your ideas for talks?” This session answers that question, along with how some of us cultivate new ideas. This talk investigates three aspects of ideation:
— How do new ideas arise?<br>
I cover various ways of synthesizing new ideas: switching axiom(s), mutation, oblique strategies, and a host of other techniques to generate the germ of a new idea.
— How do ideas grow into intellectual property?<br>
I cover techniques for iterating on ideas to discover deeper meanings and connections. I also cover techniques to evolve and grow ideas.
— How do you communicate new IP?<br>
I cover various ways to convey IP: presentations, articles, books, videos, and a host of other media. I talk about writing and presenting techniques that amplify your new idea and get it out into the world.
One key to building new IP is separating ideation, organization, and realization, which often become jumbled. By separating them, we can build practices that allow each to flourish. This talk provides concrete advice in each area to help realize new ideas.
Creativity is the reason we have all the Cool Stuff we have. This session investigates how to fan the spark of an idea into a roaring flame of intellectual erudition.
Development teams often focus on getting code to production, losing sight of what comes after the design and build phase. But we must consider the full life cycle of our systems, from inception to deployment through to sunset, a discipline many companies refer to as site reliability engineering.
While your organization may or may not have an SRE team, you have someone playing that role and we can all benefit from looking at the principles and practices that we can bring to bear on our projects. In this talk, I will introduce the concepts of SRE and how you can adopt these ideas on your applications.
Architecture is as important as functionality, at least in the long run. As functionality is recognized as a business asset, it follows that architecture is a business asset, too. In this talk we show how we can approach architecture as an investment rather than a cost, and detail the practical implications both on the technical and on the business level.
Often systems that have great current value are expensive to evolve. In other words, the future value of the system is highly influenced by its structure. Indeed, when I talk with technical people, they broadly agree with the idea that architecture is as important as functionality, at least in the long run.
If we truly believe this, we should act accordingly. If two things are equally important, we should treat them the same way. Given that the functionality of a system is considered a business asset, it follows that the architecture is a business asset as well. That means we should stop perceiving the effort around architecture as a cost, and start seeing it as an investment.
Functionality receives significant testing investments through direct development effort, dedicated tools, and even education. In a way, testing is like insurance, but unlike other insurance, this one is essentially guaranteed to pay off later on. Now, do you check the architecture with the same rigor? Do you have automatic architectural checks that prevent you from deploying when they fail? Not doing so means that half of the business assets remain uninsured. Half.
How can you test architecture automatically? You need to first see the code as data. The same applies for configurations, logs and everything else around a software system. It’s all data, and data is best dealt with through dedicated tools and skills.
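As one small illustration of “code as data”: once source is parsed into a syntax tree, architectural rules become ordinary queries over a data structure. The sketch below uses Python's `ast` module to flag functions whose bodies exceed a size budget; the sample source and the threshold are hypothetical, chosen only to show the shape of such a check.

```python
import ast

SOURCE = """
def small():
    return 1

def big():
    a = 1
    b = 2
    c = 3
    d = 4
    e = 5
    return a + b + c + d + e
"""

def oversized_functions(source, max_statements=5):
    """Treat code as data: parse it, then query the tree for rule violations."""
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and len(node.body) > max_statements]

offenders = oversized_functions(SOURCE)
```

The same move works for configurations and logs: parse them into data, then apply dedicated tools and queries instead of reading them by eye.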
Software systems should not remain black boxes. In this talk we show how we can complement domain-driven design with tools that match the ubiquitous language with visual representations of the system that are produced automatically. We describe experiences of building concrete systems and, by means of live demos, we exemplify how changing the approach and the nature of the tools allows non-technical people to understand the inner workings of a system.
Software appears to be hard to grasp especially for non-technical people, and it often gets treated as a black box, which leads to inefficient decisions. This must and can change.
In this talk we show how by changing our tools we can expose the inner workings of a system with custom visual representations that can be produced automatically. These representations enhance the ubiquitous language and allow non-technical people to engage actively with the running system.
We start by describing experiences of building concrete systems and, by means of live demos, we exemplify how changing the approach and the nature of the tools allows non-technical people to understand the inner workings of a system. We then take a step back and learn how we should emphasize decision making in software development as an explicit discipline at all layers, including the technical ones. This talk is relevant for both technical and non-technical people.
Software metrics can be used effectively to judge the maintainability and architectural quality of a code base. Even more importantly they can be used as “canaries in a coal mine” to warn early about dangerous accumulations of architectural and technical debt.
This session will introduce some key metrics that every architect should know and also look into the current research regarding software architecture metrics. Since we have 90 minutes, there will be some time for hands-on software assessments. If you'd like to follow along, bring your laptop and install Sonargraph-Explorer from our website www.hello2morrow.com (it's free and covers most of the metrics we will introduce). Bring a Java, C#, or C/C++ project and run the metrics on your own code. Or just download an open source project and learn how to use metrics to assess software and detect issues.
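Two of the classic metrics in this space are Robert Martin's instability, I = Ce / (Ca + Ce), and the distance from the main sequence, D = |A + I - 1|, where A is abstractness. The sketch below computes both; the coupling numbers are hypothetical, standing in for values a tool would extract from a real codebase.

```python
def instability(ca, ce):
    """Ca: afferent (incoming) couplings; Ce: efferent (outgoing) couplings."""
    return ce / (ca + ce) if (ca + ce) else 0.0

def distance_from_main_sequence(abstractness, i):
    """0.0 is ideal; values near 1.0 signal the zones of pain or uselessness."""
    return abs(abstractness + i - 1.0)

# A concrete package that many others depend on: low I and low A together
# mean it is both hard to change and heavily depended upon - a warning sign.
i = instability(ca=8, ce=2)                              # 2 / 10 = 0.2
d = distance_from_main_sequence(abstractness=0.1, i=i)   # |0.1 + 0.2 - 1| = 0.7
```

Tracked over time, a rising D for key packages is exactly the kind of “canary in a coal mine” the session describes: the code still works, but architectural debt is accumulating.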
On the one hand, agile processes, like Scrum, promote a set of practices. On the other hand, they are based on a set of principles. While practices are important in the present, principles allow us to adapt to future situations.
In this talk we look at Inspection and Adaptation and construct an underlying theory to help organizations practice these activities. Why a theory? Because, as much as we want to, simply invoking “Inspect and Adapt” will not make it happen.
It turns out that for almost half a century the software engineering community has been working on a theory of reflection, which is defined as “the ability of a system to inspect and adapt itself”. We draw parallels between the design of software systems and the design of organizations, and learn several lessons:
Reflection must be built into the organization.
Reflection incurs a cost that must be planned for.
Inspection is easier than adaptation.
We can only reflect on what is explicit.
Reflection is a design tool that enables unanticipated evolution.
This sounds technical, but the most important observation is that reflection is an inherent human ability. It only requires training to develop it into a capability.
Marshall McLuhan told us among other things that “We shape our tools and thereafter our tools shape us.” If this is true, we should be very careful with the tools that we expose ourselves to because they will determine the way we are going to think.
Quick question: did you check your phone within 5 minutes of waking up this morning? Likely, yes. Yet this seemingly imperious need did not exist before the iPhone brought it into our world.
The tools we use have a deep influence on our behavior. This is particularly relevant in our increasingly digital world, and it has a critical impact in the way we approach software. Just think of this: software is data, and data has no particular shape. Yet we as humans require shape to be able to reason about something. But, the shape of software is provided by our tools. This means that we must scrutinize the tools we use and make our choices with as much care as we do any other significant architectural choice.
In this talk, we show examples of how the tools influence the way we think, and look carefully at how we should equip our environment to foster better software.
If you're not terrified, you're not paying attention.
Publishing information on the Web does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources. We will also discuss the assault on encryption, web security features and emerging technologies that will hopefully help strengthen our ability to protect what we hold dear.
Topics include:
If you're not terrified, you're not paying attention.
Publishing information on the Web does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources. We will also discuss the assault on encryption, web security features and emerging technologies that will hopefully help strengthen our ability to protect what we hold dear.
Topics include:
A real-world look at using Consumer Driven Contracts in practice. How to eliminate a test environment and how to build your services with CDC as a key component.
One of the biggest challenges in building out a Microservices architecture is integration testing. If you use a Continuous Delivery pipeline, none of your environments, stage or production, is ever in a steady state. How do you perform adequate testing when your environment can change during your test? How do you manage a complex web of interdependent Microservices? How do you safely evolve your API in this environment?
Consumer Driven Contracts are a key component for a successful Microservices strategy. We'll look at different CDC frameworks and how to use them. We'll discuss developer workflows and how to ensure your API changes don't break client implementations. Finally, we'll build a couple of Microservices and walk through the lifecycle of Consumer Driven Contract tests.
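To preview the core idea: in a consumer-driven contract, the consumer records what it actually needs from a provider, and the provider's build verifies it still satisfies that recording. Real CDC frameworks such as Pact capture and replay full HTTP interactions; the sketch below reduces the idea to a hypothetical contract dict checked against a provider response.

```python
# The consumer states the interaction it depends on: status plus the fields
# (and types) it actually reads. Everything here is an illustrative example.
contract = {
    "GET /orders/42": {
        "status": 200,
        "body": {"id": int, "total": float},
    }
}

def verify(contract, interaction, response):
    """Return True if the provider response satisfies the consumer's needs."""
    expected = contract[interaction]
    if response["status"] != expected["status"]:
        return False
    body = response["body"]
    return all(field in body and isinstance(body[field], ftype)
               for field, ftype in expected["body"].items())

# Extra fields (like "currency") are fine: providers may add without breaking.
provider_response = {"status": 200,
                     "body": {"id": 42, "total": 19.99, "currency": "USD"}}
compatible = verify(contract, "GET /orders/42", provider_response)
```

Because the contract only pins what consumers use, the provider is free to evolve everything else, which is what lets you drop the shared integration-test environment.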
If you listen to zealots and critics, blockchain-based systems and the cryptocurrencies they enable are either the Best Thing Ever or the Worst Thing Ever. As you may suspect, the reality is somewhere in-between. We will introduce the major ideas, technologies and players as well as evaluate them from technological, economic and social perspectives.
Come have a spin-free discussion about these polarizing technologies to find how they might be useful to you.
Want to bring in [new cool thing X] or [necessary technology change Y] to your company, because you know there's a need for it? GOOD IDEA! Except…now what? If your company is more than about 3 people, how do you explain, enable, and encourage the adoption of this change, especially if it will require some work on everyone’s part?
In How to Technology Good, Josh and Laine will explain how bringing in technology is subject to one of the biggest problems in IT - how to scale it. They'll also talk about tips and tricks for how to be as successful as you can, and the main things to keep track of and watch out for. They'll go through each phase of bringing in new tech, all the way from how to pick your success criteria through what to think about when it comes to maintenance.
If companies truly want to go FAST, occasionally that requires changing something about the culture of the company. Processes get stale or overly complex, people don’t know why things are the way they are, and everyone wonders at the wisdom of asking too many questions.
Culture change is hard, and in this talk we’ll explain the most important piece of surviving and even finding JOY in it – having a strong, supportive community.
We work in IT – and while we WORK with computers, we do not always FUNCTION like computers where inputs consistently make the same outputs. Our jobs are mostly theory and design and strategy, with some good old fashioned implementation thrown in – and as skilled knowledge workers, we function best when we respect that our mental and emotional resources matter.
In this talk, we’ll explain some of the best practices we’ve stumbled across for personal (brain and heart) resource maintenance.
Come to this session to learn how we solved a fairly complex problem: maintaining predictable response times across a set of service calls spread across multiple clouds. Many over the past few years have embraced microservices-based architectures to increase flexibility and speed of feature delivery.
However, with this comes the challenge of maintaining consistent performance, scale, and good response times. In this session we will talk about a specialized controller we have written, called the Predictable Response Time Controller (PRTC), that helps with the challenges of maintaining scale and response times.
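The sketch below is not the PRTC itself, just an illustrative building block for the general pattern: enforce a per-call budget and fall back to a degraded answer (a cached value, a default) rather than letting one slow cross-cloud dependency drag down the whole request. The names and deadlines are hypothetical. Note it only detects a blown deadline after the call returns; actually cancelling in-flight work requires threads or async machinery.

```python
import time

def call_with_deadline(fn, deadline_s, fallback):
    """Run fn; serve fallback if it fails or exceeds its time budget."""
    start = time.monotonic()
    try:
        result = fn()
    except Exception:
        return fallback          # failure: degrade instead of propagating
    if time.monotonic() - start > deadline_s:
        return fallback          # too slow: degraded answer keeps latency bounded
    return result

fast = call_with_deadline(lambda: "fresh", deadline_s=1.0, fallback="cached")
slow = call_with_deadline(lambda: (time.sleep(0.05), "fresh")[1],
                          deadline_s=0.01, fallback="cached")
```

Centralizing budgets like this in one controller, rather than scattering timeouts across services, is what makes end-to-end response times predictable.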
We often meet customers that have migrated to the public cloud only to later determine that some of their critical legacy application patterns have transitioned to a public cloud implementation, and they are now paying higher costs due to this design flaw. Regardless of cloud location, what really matters is how well you have abstracted the application platform nature of your enterprise workloads. If you don’t understand your application workloads in terms of scalability, performance, reliability, security, and overall management, then you are simply shifting the problem from one cloud to another.
IT practitioners are bringing their old habits to new problems. The key to this problem is deeply rooted in the knowledge gap that exists between development and operations organizations. In this session, we talk about the notion of the application platform and use it to close the gap between developers and infrastructure architects. At the most fundamental level, you can think of an application platform as an abstraction with three major parts: 1) the application code logic; 2) the application runtime where the code runs; and 3) infrastructure abstractions such as CaaS, K8s, and fundamental IaaS. We will also cover the notion of the Hybrid Cloud Runtime (HCR), a common control plane that helps provide common observability across such multi-cloud distributed applications. At the most fundamental level, HCR is made up of a service mesh and a set of application-runtime-aware controllers that manage SLAs and help SREs optimize their day-to-day interactions with such systems.
Machine Learning is all the rage, but many developers have no idea what it is, what they can expect from it, or how to get started in this huge and rapidly-changing field. The ideas draw from the fields of Artificial Intelligence, Numerical Analysis, Statistics, and more. These days, you'll generally have to be a CUDA-wielding Python developer to boot. This workshop will gently introduce you to the ideas and tools, show you several working examples, and help you build a plan for diving deeper into this exciting new field.
We will cover:
Please install Anaconda for Python 3 before the workshop if possible. https://www.anaconda.com/download
This session will be a deep dive into the machine learning and artificial intelligence services within AWS. This will include Amazon Comprehend, Forecast, Lex, Personalize, Polly, SageMaker, Rekognition, Textract, Translate, and Transcribe. We will cover key concepts of each of the services, common use cases, and design patterns.
Come to this session if you want to get up to speed on the ML/AI services in AWS.
This session will focus on the essential skills needed by software architects on a daily basis, from ideation to product delivery. For many architects, it's not the technology-related areas that give you problems, but the people-related areas.
Come to this session if you want to learn some tricks and tips for how to raise your game as an architect.
This session will focus on architecting for multi-cloud big data. This will include evaluating and comparing the big data capabilities in AWS and Azure, data synchronization, security, orchestration, disaster recovery, and other key aspects of multi-cloud enterprise big data systems.
Come to this session if you want to learn about architecting and delivering enterprise big data systems in a multi-cloud environment.
The maturing of industry projects and tools around cloud development and administration has led to the formation of the Cloud Native Computing Foundation. This new foundation is similar to the Apache Foundation in that it provides governance over projects from incubation to maturity. These projects define the current and future standards of the cloud, which is important for all devops teams to be aware of. This session is a guided tour, at jet speed, of each project and how it fits into the ecosystem.
This session will briefly cover each of the CNCF projects with an outline of:
The projects covered include:
One of the hardest activities for a DevOps team (or should we say, for production) is transitioning from one version of an application to another, with the cascading consequences for service dependencies. There are a number of strategies for managing this concern. In this talk, we will outline a few of them, along with the required conditions of the underlying infrastructure to achieve them.
This session will demonstrate, on a DC/OS platform, how to create a continuous delivery solution that pushes builds into production leveraging blue/green deployments. Following this we will switch on the fly from blue to green and vice versa. We will stretch this concept to its extreme and demonstrate A/B testing in a production environment.
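Blue/green in miniature: two identical environments sit behind a router, a deploy goes only to the idle colour, and cutover (or rollback) is just flipping the active pointer. The sketch below models that mechanic; the environment names and versions are illustrative, not tied to any platform.

```python
class BlueGreenRouter:
    def __init__(self):
        self.environments = {"blue": "v1.0", "green": None}
        self.active = "blue"

    def idle(self):
        return "green" if self.active == "blue" else "blue"

    def deploy(self, version):
        """Install the new version on the idle environment only."""
        self.environments[self.idle()] = version

    def cutover(self):
        """Switch live traffic in one step; the old colour stays warm for rollback."""
        self.active = self.idle()

router = BlueGreenRouter()
router.deploy("v2.0")   # green now runs v2.0; traffic is still on blue
router.cutover()        # traffic moves to green
live = router.environments[router.active]
```

Rollback is simply calling `cutover()` again, which is exactly the on-the-fly blue-to-green-and-back switching the session demonstrates at platform scale.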
The single worst architectural anti-pattern is also the one I see most often. It locks you into an architecture, makes your choices permanent, and inhibits your ability to respond when you need to scale.
We are going to look at multiple examples of this anti-pattern. Not only focusing on how to avoid it in the first place, but also how to restructure code once you have detected it in your current system.
Analyzing architecture is all about finding structural decay in applications and systems to determine whether the architecture is still satisfying the business concerns (performance, scalability, fault tolerance, availability, and so on) and also whether the architecture supporting the application functionality is still viable. This is known as “architectural vitality”. While the functionality of a system may be sound, the architecture supporting that functionality may not be. For example, performance and scalability may have been the number one concern 5 years ago, but today agility, testability, and deployability are the number one concerns to support high levels of competitive advantage and time-to-market. Does the architecture support these “-ilities”? If not, the company is likely to fail in today’s highly competitive market.
In this intense 1-day hands-on workshop you will learn what structural decay means and how to detect it. You will also learn what it means to “analyze an architecture”, and how to measure and quantify various “-ilities” such as performance, scalability, testability, maintainability, and so on. Leveraging source code metrics and open-source analysis tools, you will then see how to apply micro-level (source code) analysis techniques to identify decay in your architecture. You will also learn how to perform risk analysis against your architecture to help identify and prioritize architectural refactoring efforts, and also how to assess the level of modularity in your application in preparation for the move to microservices.
While the analysis techniques taught in this class are largely platform and technology-agnostic, most of the analysis tools we will be using will be in Java.
Requirements
A laptop is recommended for this workshop so you can follow along with the class exercises.
Prerequisites
The desire to find out whether you have a sound and viable architecture supporting your systems and applications. There are no technical or architectural prerequisites for the class.
“Emerge your architecture” goes the agile mantra. That’s great. Developers get empowered and fluffy papers make room for real code structure. But, how do you ensure the cohesiveness of the result?
In this talk, we expose how architecture is an emergent property, how it is a commons, and we introduce an approach for how it can be steered.
To steer means at least three things:
When it comes to steering agile architecture, of the above three, only the second point is about design. The first and the third points are about software assessment. While the literature covers the design aspect in detail, the assessment issues are left open.
In this talk, we focus on how by integrating software assessment in the daily development process we can make steering agile architecture a reality.
“Technical debt” is a successful metaphor that exposes software engineers to economics, and managers to a significant technical problem. It provides a language that both engineers (“technical”) and managers (“debt”) understand.
But, “technical debt” is just a metaphor that has its limitations, too. The most important limitation is that it presents a negative proposition: The best thing that can happen to you is having no technical debt.
Technical debt is both brought about and solved as a result of decisions. As such, we turn our attention to how people reach decisions about a software system. Decision making is a critical software engineering activity. Developers alone spend about half of their time reading code. That means half of the budget. Even though it is the single most significant development activity, nobody really talks about how this effort is being spent.
It’s time to change this. The talk motivates the need for software assessment as an explicit discipline, it introduces the humane assessment method and outlines the implications.
A long time ago, in a land far far away, there were monoliths. These fabled artifacts brought consistency and stability to the land - but there was a cost in speed, agility, time, and development pain.
Whether Java EE, .NET, or something else, the big ol' integrated plexi-purpose binaries of yore (and also now…) have grown into problems that hurt developers, architects, and the execution of business goals.
In this talk, Josh and Laine will talk specifics about the pain points of monoliths, and the various strategies they've seen to alleviate that pain.
All companies are IT companies. Except…not. All companies SHOULD be IT companies, if they're trying to keep up with the weight of their customers' ever-increasing demands for speed and agility. Unfortunately…most companies don't know how to get there - or even what “there” looks like, or how they'd describe it.
Josh and Laine will talk about how to use a diagram (in this case, a map!) to build and discuss a strategy to navigate the high seas of being a business today in order to deliberately find the treasure. The treasure (continuous delivery) gives IT, and companies, the ability to embrace and empower existing resources, and eventually will give enough resources to thrive even at the lightning-fast pace of being a business today.
Leading technical organizations with microservice-based architectures all use an orchestrator in their datacenter; be it Apache Mesos, Kubernetes, Tupperware, the Borg or Omega. The dominant platforms in the open source space are Kubernetes and Mesos. This session will dive deep into the core differences, including:
Presented by an engineer who has worked for over 6 years with Docker and container orchestrators. Attendees will leave with a clear understanding of the Kubernetes API, scheduler, controllers and operators, and how they differ from the Mesos 2-level scheduler. The session will include how resources are managed in the cluster along with pod life-cycle management. The session will call out topics of concern regarding availability and scalability and how to best manage those concerns.
When architecting a critical system, the Availability in the CAP theorem becomes the most important element. Architects measure availability in 9s, with 99.99% equating to less than 1 hour of unplanned downtime per year. This session will focus on what it takes to get there.
After establishing high availability expectations and measurements, this session will dive into what it takes to establish the highest scale possible. It includes a look at infrastructure needs, with a separation between capacity and scale. We take a look at service discovery with the pros and cons of service-to-service dependencies. We look at necessary infrastructure such as health checks and monitoring. The session will include a look at different layers of fault domains, including cross-region.
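The “nines” arithmetic mentioned above is easy to sketch. The helper below is illustrative only (not from the session), converting an availability percentage into an annual unplanned-downtime budget:

```python
# Hypothetical helper: convert an availability percentage into the
# allowed unplanned downtime per year, in minutes.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct):
    """Return the annual downtime budget for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# 99.99% ("four nines") allows roughly 52.6 minutes of downtime a year,
# comfortably under the one-hour figure quoted above.
print(round(downtime_minutes_per_year(99.99), 1))
```

Each extra nine cuts the budget by a factor of ten, which is why the jump from 99.9% to 99.99% demands such different infrastructure.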
There are distinct advantages to a monolithic architecture, but when does the balance tip towards smaller targets? What cultural and devops practices are essential to success? How do you decide where to make the first slice, evaluate and iterate? All these questions will be answered, and we'll discuss pro tips from monolith-slaying case studies.
We'll look at all the factors that impact your migration from a Monolith to a Microservices architecture.
Every organization has at least a phalanx or two in the “Cloud” and it is, understandably, changing the way we architect our systems. But your application portfolio is full of “heritage” systems that hail from the time before everything was as a service. Not all of those applications will make it to the valley beyond; how do you grapple with your legacy portfolio? This talk will explore the strategies, tools and techniques you can apply as you evolve towards a cloud native future.
In this talk, you will learn:
As we migrate towards distributed applications, it is more than just our architectures that are changing, so too are the structures of our teams. The Inverse Conway Maneuver tells us small, autonomous teams are needed to produce small, autonomous services. Architects are spread thin and can’t be involved with every decision. Today, we must empower our teams but we need to ensure our teams are making good choices. How do we do that? How do you put together a cohesive architecture around distributed teams?
This talk will discuss creating “paved roads”: well-worn paths that we know work and that we can support. We will also explore the importance of fitness functions in helping our teams adopt appropriate designs.
Any system of significant scale or latency sensitivity employs the use of caching. It could be as simple as memoization, or as complicated as a fully distributed system. These ideas serve us well, but how do we take it to the next level?
Join Aaron as he demonstrates customizing a caching system. He will discuss the pros and cons of embedding application and domain specificity into your caching model. Aaron will show a start to finish implementation of a custom Redis module that reduces latency, network round trips, and adds pub/sub notifications.
Learn how to take your cache to the next level and encode elements of your system directly into the handling of your most accessed data.
This session will span multiple languages, but will focus on C for the Redis module implementation. Knowledge of C is not required to attend this session, as the details will be explained alongside the code with examples in higher level languages.
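The “simple as memoization” end of the caching spectrum mentioned above can be sketched with nothing but the standard library. This example is illustrative only; the session itself builds a far more capable custom Redis module:

```python
import functools
import time

# Memoization: the simplest form of caching. The decorator stores
# results keyed by arguments, so repeat calls skip the expensive work.
@functools.lru_cache(maxsize=None)
def slow_square(n):
    time.sleep(0.01)  # stand-in for an expensive computation or network call
    return n * n

slow_square(12)          # first call pays the cost (cache miss)
print(slow_square(12))   # repeat call is served from the cache (hit)
```

Embedding domain knowledge into the cache, as the talk proposes, goes well beyond this: it moves logic like notification and aggregation to where the hot data already lives.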
Microservices bring about a series of architectural shifts. One of the most powerful is true separation of concerns. This change brings with it incredible security opportunities. Join Aaron as he demonstrates how to identify and execute on these opportunities. In this session you will explore service and data classification techniques, authentication and access control, and service interface design that respects classification boundaries. If you are interested in, are building, or are currently using microservices, this session is a must-see!
More to follow…
We've all got secrets, but nobody seems to know where to put them. This long-standing issue has plagued system design for ages and still has many broken implementations. While many consider this an application concern, the foundations rest in the design of the system. Join Aaron for an in-depth workshop that will cover the following secret management solutions:
Additionally, this workshop will demonstrate tools for discovering sensitive information checked into your project.
This is a two-session workshop and is best received by attending both sessions.
You will need the following tools installed and updated prior to the workshop:
Optional
We've all got secrets, but nobody seems to know where to put them. This long-standing issue has plagued system design for ages and still has many broken implementations. While many consider this an application concern, the foundations rest in the design of the system. Join Aaron for an in-depth workshop that will cover the following secret management solutions:
Additionally, this workshop will demonstrate tools for discovering sensitive information checked into your project.
This is a two-session workshop and is best received by attending both sessions.
You will need the following tools installed and updated prior to the workshop:
Optional
Security should always be built with an understanding of who might be attacking and how capable they are. Typical threat modeling exercises are done with a static group of threat actors applied in “best guess” scenarios. While this is helpful in the beginning, the real data eventually tells the accurate story. The truth is that your threat landscape is constantly shifting, and your threat model should dynamically adapt to it. This adaptation allows teams to continuously examine controls and ensure they are adequate to counter the current threat actors. It helps create a quantitative, risk-driven approach to security and should be a part of every security team's toolkit.
Join Aaron as he demonstrates how to look at web traffic to analyze the threat landscape and turn request logs into data that identifies threat actors by intent and categorizes them in a way that can be fed directly into quantitative risk analysis. Aaron will show how important this data is in driving risk analysis and creating an effective and appropriate security program.
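The first step described above, turning raw request logs into intent categories, can be sketched in a few lines. The category names and patterns below are purely illustrative, not the taxonomy used in the session:

```python
from collections import Counter

# Hypothetical intent buckets, keyed by substrings commonly seen in
# hostile requests. A real classifier would be far richer.
SUSPICIOUS_PATTERNS = {
    "credential_probing": ("/wp-login", "/admin", "/.env"),
    "path_traversal": ("../", "%2e%2e"),
}

def classify(path):
    """Map a request path to a coarse intent category."""
    for intent, needles in SUSPICIOUS_PATTERNS.items():
        if any(needle in path for needle in needles):
            return intent
    return "benign"

log = ["/index.html", "/wp-login.php",
       "/files?name=../../etc/passwd", "/about"]
counts = Counter(classify(path) for path in log)
print(counts["credential_probing"], counts["path_traversal"], counts["benign"])
```

Aggregated over time, counts like these become the quantitative inputs to risk analysis that the talk describes.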
Over the course of my life I have amassed a great quantity of 1-3 minute talks. Tonight we are going to randomly pick from that list and see where the adventure takes us!
Talks:
Test Driven Math
10 X
A swimming pool isn’t just a bigger bathtub
BDD vs TDD
Arlo’s Git Notation
The Curse of knowledge
Do NOT use the greater than sign in programming
DocDoD
Sparrows
Leveling up
On being the best
Quantum Computing
Theory Based thread testing
Better Lunches
Decision trees
Sustainable Pace
Standing alone
Better Interviews
Duplication and Cohesion
Generic Type Information at Runtime in Java
Make the easy change
Machine Learning is a key differentiator for modern organizations, but where does it fit into larger IT strategies? What does it do for you? How can it go wrong?
This class will contextualize these technologies and explain the major technologies without much (if any) math.
We will cover:
Too often, developers drill into the sea of data related to a software system manually, armed with only rudimentary techniques and tool support. This approach does not scale for understanding larger pieces and it should not perpetuate.
Software is not text. Software is data. Once you see it like that, you will want tools to deal with it.
Developers are data scientists. Or at least, they should be.
50% of development time is typically spent on figuring out the system in order to decide what to do next. In other words, software engineering is primarily a decision-making business. Add to that the fact that systems often contain millions of lines of code and even more data, and you get an environment in which decisions have to be made quickly about lots of ever-moving data.
Yet, too often, developers drill into the sea of data manually with only rudimentary tool support. Yes, rudimentary. Syntax highlighting and basic code navigation are nice, but they only count when looking into fine details. This approach does not scale for understanding larger pieces and it should not perpetuate.
This might sound as if it is not for everyone, but consider this: when a developer sets out to figure out something in a database with a million rows, she will write a query first; yet, when the same developer sets out to figure out something in a system with a million lines of code, she will start reading. Why are these similar problems approached so differently: once tool-based and once through manual inspection? And if reading is such a great tool, why do we even consider queries at all? The root problem does not come from the basic skills. They exist already. The main problem is the perception of what software engineering is, and of what engineering tools should be made of.
In this talk, we show live examples of how software engineering decisions can be made quickly and accurately by building custom analysis tools that enable browsing, visualizing or measuring code and data. Once this door is open you will notice how software development changes. Dramatically.
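The database analogy above can be made concrete even with the standard library: instead of reading a file top to bottom, you parse it and query it. This sketch is illustrative only, not one of the talk's tools:

```python
import ast

# Treat source code as data: parse it, then "query" the syntax tree
# instead of reading the file line by line.
source = """
def load(path): ...
def parse(text): ...
def very_long_function_name_that_does_too_much(a, b, c, d, e): ...
"""

tree = ast.parse(source)

# Query: which functions take more than three parameters?
wide = [node.name for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and len(node.args.args) > 3]
print(wide)
```

The same pattern scales from one file to a whole system: once code is queryable data, browsing, visualizing and measuring it become one-liners rather than reading marathons.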
Like everyone else, you have a large product that is hard to work with. We're going to change that in 75 minutes. Together we will save some gnarly legacy code (one thousand-line function). We will start with something hard to read, untested, and possibly buggy. We will finish with code that is stupidly easy to modify. You'll learn 6 trivial techniques that you can apply over and over to fix 95% of the messiest code you have. You can take home this exercise to help the rest of your team learn these techniques. You'll also learn how your team can teach itself a bunch more techniques to handle the other 5%.
We are going to save some legacy code. In 90 minutes. While adding features. We will mob program; you will save this legacy code. We won't introduce any bugs along the way. We will spend the time that you would normally use reading code to instead make it readable. You can apply these techniques and reduce the cost of coding within 48 hours of getting home.
We have done this exercise with dozens of teams. They code differently now. Changing existing code is actually safer and cheaper than writing new code. Their designs get a little better each day. This session will improve your code and show you what skills to learn to gain further improvements.
Learning Outcomes:
Know the 6 refactorings required for reading code by refactoring it.
Differentiate between refactoring and micro-rewrites (code editing), and choose each where appropriate.
Have fluency in the key refactorings with one tool set and know how to spread that fluency to other tools and to broaden the skills within that tool set.
Able to start successfully saving legacy code without making major investments, even with no tests.
See an obvious path for continuing to learn design and refactoring skills: know where and how to get feedback, and be able to create your own curriculum for the next 1.5-3 years of improvements.
“Software is eating the world” means all innovations in the company must be channeled through software. As architects, we create the choices, trade-offs and conditions for software-based innovation to occur successfully. Those choices also affect how software is built and tested, and vice versa. For example, proper modularization does not just improve maintainability and separation of concerns, but can also dramatically impact the time it takes to build and test software. Mapping microservices to source repositories is often influenced by build and test time constraints, often with suboptimal results.
Good architecture and developer productivity engineering should work hand-in-hand.
The paradox of a successful software team is that as the codebase and team sizes grow, it becomes harder to maintain the automation, fast feedback cycles, and reliable feedback that enable the software development team to execute at their full potential, including sticking to the architectural roadmap. Compared to other industries, the software development process is in the dark ages, with little data to observe and optimize the process itself.
Join Hans Dockter, founder and CEO of Gradle for a discussion of how to measure the impact, and apply data and acceleration technologies to speed up and improve the essential software development processes from builds to testing to CI and how this will benefit and enable better architecture.
There were two fatal crashes of the Boeing 737 Max in the fall of 2018 and spring of 2019, grounding the airplane worldwide and raising the question: why? In the end, it comes down to software, but there is much more to that story. Ken, the presenter of this session, is in the unique position of being an instrument-rated private pilot and a software engineer with experience working with remote teams; both will provide insight into the lessons we will learn as we peel back the details of these tragic events.
In this session, you will learn about aircraft types and how they affect decisions across the airline industry, from pilot scheduling and plane scheduling to innovation and profits. We will see how an airplane design from 1994 caused challenges in 2018-2019. We will learn how software became the solution to a hardware design problem. We will continue with plane ratings, what flying “in-type” means, and the role it plays. We will broach the topic of the USA FAA relinquishing quality standards to Boeing because of manpower and costs. Lastly, we will home in on what a pilot does and expects, and what the MCAS system did by design. The climax of the talk centers on software requirements and how disconnected remote teams without user experience in the problem space will write exactly what you agree on… which can be lethal.
In a world increasingly defined in software, is the database–a tool primarily built to aid human-computer interaction–always the right tool to choose? In this talk, we’ll look at a new type of database, built not only for the tables and columns we’re familiar with, but also the continuous, never-ending “streams of events” that represent data as it moves. We’ll take a look at ksqlDB’s syntax and show how it can replace bespoke Kafka Consumers with short, declarative queries.
From there, we’ll look at what kinds of software architectures a streaming database supports. Hint: they look an awful lot like what the most ambitious Kafka deployments are doing with the systems they’re refactoring into microservices. We’ll look at how Kafka and ksqlDB solve the attendant problems elegantly, and how the software architectures on which many teams are converging closely resemble the databases of old.
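The “bespoke Kafka consumer” that a short declarative query replaces usually boils down to a hand-maintained aggregate over an unending stream of events. This sketch uses an in-memory list instead of a real Kafka client, purely to illustrate the shape of that loop:

```python
from collections import defaultdict

# Illustrative event stream: (event_type, user) pairs that would
# normally arrive from a Kafka topic via a consumer.
events = [
    ("page_view", "alice"),
    ("page_view", "bob"),
    ("page_view", "alice"),
]

# The hand-rolled "consumer loop": poll, update state, repeat.
# A streaming SQL query (e.g. a grouped count over the stream)
# expresses this same materialized aggregate declaratively.
counts = defaultdict(int)
for event_type, user in events:
    counts[user] += 1

print(counts["alice"], counts["bob"])
```

Everything the loop does by hand, state management, fault tolerance, and serving the result, is what a streaming database takes over.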
Our industry never stops changing, but sometimes those changes are trivial and fluffy. Sometimes they are fundamental and enduring. This series is going to highlight some of the most important trends happening in the hardware, software, data and architecture spaces.
While still new to most people, WebAssembly provides a formidable vision of safe, fast, portable code. Through clever choices and well-considered design, the basic vision allows us to target browsers as a platform using a variety of languages other than (but compatible with) JavaScript. This technology, coupled with advancements in the Web platform, is setting up the future of Web-delivered applications to look more like (and likely to replace) desktop applications.
Modern software developers need to understand how just about every aspect of their industry is about to change.
We will cover: