Microservices continues to be the industry's buzzword of the moment, and probably will be for some time. If you are not sure what microservices are, or want to get your feet wet understanding the basics of this architecture style, then this session is just right for you. In this session I will cover the basics of the microservices architecture pattern. We'll talk about distributed architecture, what a microservice is, what a bounded context means, how to determine the right level of service granularity, the dangers of inter-service communication, and the role of the API layer. By the end of this session you will have a good idea of what the microservices architecture style is all about and whether it is a good fit for you.
Agenda:
George Santayana is famous for saying "Those who cannot remember the past are condemned to repeat it." When SOA (Service-Oriented Architecture) was all the craze, everyone got excited about services but forgot about the data, and it ended in disaster. History repeats itself, and here we are with microservices, where everyone is excited about services but once again forgets about the data. In this session I will discuss some of the challenges associated with breaking apart monolithic databases, then show techniques for effectively creating data domains and splitting apart a database. I consider the data side of microservices the hardest aspect of this architecture style. In the end, it's all about the data.
Agenda
Once you break things apart into microservices, you must then put them back together. In other words, individual services still sometimes need to talk to one another to complete a given business transaction, whether that transaction is synchronous or asynchronous. In this session I talk about the various patterns of communication within microservices - orchestration, aggregation, and adapters. I also talk about coupling between services, including stamp coupling and bandwidth issues, and how to address these common communication woes.
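The aggregation pattern mentioned above can be sketched in plain Java. This is an illustrative stub, not a real implementation: the two "services" are stand-in methods with made-up names, where real code would make HTTP or gRPC calls.

```java
import java.util.concurrent.CompletableFuture;

// Illustrative aggregator sketch: the API layer fans out to two services
// in parallel and merges the replies into one response.
public class OrderAggregator {
  static CompletableFuture<String> orderService(String id) {
    return CompletableFuture.supplyAsync(() -> "order:" + id);  // stubbed call
  }
  static CompletableFuture<String> shippingService(String id) {
    return CompletableFuture.supplyAsync(() -> "eta:2d");       // stubbed call
  }

  public static String orderDetails(String id) {
    // Both calls run concurrently; thenCombine merges the results.
    return orderService(id)
        .thenCombine(shippingService(id), (o, s) -> o + " " + s)
        .join();
  }

  public static void main(String[] args) {
    System.out.println(orderDetails("42")); // order:42 eta:2d
  }
}
```

The aggregator keeps the client from knowing about (and coupling to) each downstream service individually.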
Agenda
Have you ever wondered how to share data between microservices? How to share a single database schema among hundreds (or even thousands) of microservices (cloud or on-prem)? How to version relational database changes when sharing data in a microservices environment? If any of these questions intrigue you, then this session is for you. I will describe and demonstrate various caching strategies and patterns that you can use in microservices to significantly increase performance, manage common data in a highly distributed architecture, and even manage data synchronization from cloud-based microservices. I'll describe the differences between a distributed and a replicated cache. Then, using live coding and demos with Hazelcast and Apache Ignite, I'll demonstrate how to share data and how to build space-based microservices, leveraging caching to its fullest extent.
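To make the distributed-versus-replicated distinction concrete, here is a stdlib-only sketch of the partitioned approach a distributed cache such as Hazelcast or Apache Ignite takes: each key is owned by exactly one node, chosen by hashing. (A replicated cache, by contrast, would copy every entry to every node.) All class and key names are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy distributed cache: entries are partitioned across nodes by key hash,
// so total capacity grows with the node count.
public class PartitionedCache {
  private final List<Map<String, String>> nodes = new ArrayList<>();

  public PartitionedCache(int nodeCount) {
    for (int i = 0; i < nodeCount; i++) nodes.add(new HashMap<>());
  }

  // The same hash routes puts and gets to the same owning node.
  private Map<String, String> owner(String key) {
    return nodes.get(Math.floorMod(key.hashCode(), nodes.size()));
  }

  public void put(String key, String value) { owner(key).put(key, value); }
  public String get(String key) { return owner(key).get(key); }

  public static void main(String[] args) {
    PartitionedCache cache = new PartitionedCache(3);
    cache.put("customer:1", "Ada");
    System.out.println(cache.get("customer:1")); // Ada
  }
}
```

Real products add replication of each partition for fault tolerance, which this sketch omits.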
Agenda:
Reactive architecture patterns allow you to build self-monitoring, self-scaling, self-growing, and self-healing systems that can react to both internal and external conditions without human intervention. These kinds of systems are known as autonomic systems (the human body is one example). In this session I will show you some of the most common and most powerful reactive patterns you can use to automatically scale, grow, and self-repair systems, all using the basic language API and simple messaging. Through code samples in Java and actual run-time demonstrations, I'll show you how the patterns work and also show you sample implementations. Get ready for the future of software architecture - one you can start implementing on Monday.
Agenda
There are many different uses for Apache Kafka. It can be used as a streaming broker, event broker for transactional data, and even a database. This session is about understanding streaming architecture and how to implement it using Apache Kafka. I start this session by talking about some of the streaming architecture patterns, then dive into how Apache Kafka works using the Core API. Using live coding examples in Apache Kafka, I also talk about the differences between Kafka and regular messaging (RabbitMQ, ActiveMQ, etc.) and when you should use each. I end this session by putting everything together, showing an actual streaming architecture using Kafka within a Microservice ecosystem for gathering various metrics for business and operational monitoring and reporting.
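One way to see the Kafka-versus-traditional-messaging difference before the live coding: a queue hands each message to one consumer and then discards it, while a Kafka-style log retains records and lets each consumer read from its own offset. The toy class below is a hypothetical stdlib sketch of the log side, not the actual Kafka API.

```java
import java.util.ArrayList;
import java.util.List;

// Toy append-only log: records are never consumed away, and each reader
// tracks its own position (offset), just as Kafka consumers do.
public class TinyLog {
  private final List<String> records = new ArrayList<>();

  public void append(String record) { records.add(record); }

  // Because the log retains everything, the same records can be re-read
  // from any offset by any number of independent consumers.
  public List<String> readFrom(int offset) {
    return new ArrayList<>(records.subList(offset, records.size()));
  }

  public static void main(String[] args) {
    TinyLog log = new TinyLog();
    log.append("a"); log.append("b"); log.append("c");
    System.out.println(log.readFrom(0)); // [a, b, c]
    System.out.println(log.readFrom(2)); // [c]
  }
}
```

This retention-plus-offsets model is what makes replay and multiple independent consumer groups natural in Kafka, and awkward in a classic broker like RabbitMQ or ActiveMQ.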
Agenda:
One of the expectations of any software architect is to analyze the current technology environment and recommend solutions for improvement. This is otherwise known as continually assessing architecture vitality. Too many times software architects fail to regularly perform this task, leading to emergency refactoring efforts to save a troubled system from failure. The question is, what does it mean to assess an application architecture? In this session we will explore static analysis metrics and tools and techniques for leveraging those metrics for determining structural decay. Using a real-world large-scale application, I'll show you how to leverage code metrics to find (and fix) structural decay before it gets you into trouble.
Agenda
While there are dozens of activities within an enterprise architecture effort, there is only one primary outcome - an enterprise architecture roadmap. Roadmaps describe what efforts (i.e., projects) need to be done to meet a specific objective, the dependencies between those efforts, and the prioritization of those efforts. In this session I'll cover the four main models that make up an EA roadmap and show you techniques for how to identify projects, classify projects, prioritize projects, and finally illustrate these efforts through consolidated roadmap views. By the end of this session you'll have a clear view of why enterprise architecture is needed, the purpose behind it, and how to create an effective and clear enterprise architecture roadmap.
Agenda
There are many traditional approaches to enterprise architecture. Unfortunately, these traditional approaches are one of the reasons EA fails in today's world. In the first part of this session I'll describe and demonstrate the traditional approaches to EA, explain why they fail, and then show you several modern approaches to enterprise architecture that hold lots of promise in transforming EA into the 21st century. In the second part of this session I'll then describe four different enterprise architecture strategies for overall EA team structure, governance, process, and standards.
Agenda
Organizing and governing enterprise architecture models and processes is a daunting task. No wonder so many people are wondering whether an enterprise architecture framework will help. Understanding various enterprise architecture frameworks like Zachman, TOGAF, and FEAF is the first step. More important, however, is knowing whether you need an EA framework at all. In this session I will start with the basics of the Zachman Framework, TOGAF (The Open Group Architecture Framework), and FEA (Federal Enterprise Architecture) so that you can gain a complete understanding of how each of these frameworks works. During the journey through these frameworks I will continually point out the strengths and weaknesses of each to arrive at the best part of the session - how to build your own EA framework that works for you and your situation.
Agenda
As Tech Leaders, we are presented with problems and work to find a way to solve them, usually through technology. In my opinion this is what makes this industry so much fun. Let's face it - we all love challenges. Sometimes, however, the problems we have to solve are hard - really hard. So how do you go about solving really hard problems? That's what this session is about - Heuristics, the art of problem solving. In this session you will learn how to approach problems and also learn techniques for solving them effectively. So put on your thinking cap and get ready to solve some easy, fun, and hard problems.
Agenda:
It seems like all we talk about these days is making our architectures more modular. However, several questions emerge when moving towards a level of architectural modularity. What are the benefits? Why should you care? How far should you take architectural modularity? Should you use service-based architecture or move all the way to microservices? What is the best approach for moving to microservices? In this keynote I'll address all of these questions so that you'll fully understand the rationale for this important trend and also understand a solid approach for moving to microservices.
New architectural paradigms like microservices and evolutionary architecture, as well as the challenges associated with managing data and transactional contexts in distributed systems, have generated a renewed interest in disciplined software design and modular decomposition strategies. We know that the secret to obtaining the benefits of these architectures is getting the boundaries right, both at the team and the component/service level! Fortunately, there is a mature, battle-tested approach to system decomposition that is perfect for these architectures: Domain-Driven Design.
In this workshop, we'll cover the following topics:
You'll leave the workshop with a solid understanding of DDD and how it can help you best decompose your domain and business capabilities so that you can be more effective with modern architectures.
Attendees will require a laptop with the following capabilities:
It would also be helpful to have a free Realtime Board account, or failing that, a presentation/diagramming tool you feel comfortable and productive using.
A basic understanding of Java libraries such as Lombok, JUnit, AssertJ, and Mockito is useful.
This workshop highlights the ideas from the forthcoming book Building Evolutionary Architectures, showing how to build architectures that evolve gracefully over time.
An evolutionary architecture supports incremental, guided change across multiple dimensions.
For many years, software architecture was described as the “parts that are hard to change later”. But then microservices showed that if architects build evolvability into the architecture, change becomes easier. This workshop, based on my upcoming book, investigates the family of software architectures that support evolutionary change, along with how to build evolvable systems. Understanding how to evolve architecture requires understanding how different parts of architecture interact; I describe how to achieve appropriate coupling between components and services. Incremental change is critical for the mechanics of evolution; I cover how to build engineering and DevOps practices to support continuous change. Uncontrolled evolution leads to undesirable side effects; I cover how fitness functions build protective, testable scaffolding around critical parts to guide the architecture as it evolves.
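As a taste of what a fitness function looks like, here is a minimal, hypothetical sketch: an executable check that guards one architectural rule so CI can fail the build when the rule is violated. A real project would derive the dependency map from the codebase with a static-analysis tool rather than hard-code it as done here.

```java
import java.util.List;
import java.util.Map;

// Hypothetical architectural fitness function: web code must not depend
// on the persistence layer directly. The dependency map is hard-coded
// purely for illustration.
public class LayerFitness {
  static final Map<String, List<String>> DEPS = Map.of(
      "web",         List.of("service"),
      "service",     List.of("persistence"),
      "persistence", List.of());

  // Returns true when the rule holds; a CI step would fail otherwise.
  public static boolean webSkipsPersistence() {
    return !DEPS.getOrDefault("web", List.of()).contains("persistence");
  }

  public static void main(String[] args) {
    System.out.println(webSkipsPersistence()); // true for this map
  }
}
```

The point is that the guard is automated and repeatable: the architecture can evolve freely as long as the fitness functions keep passing.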
The software development ecosystem exists in a state of dynamic equilibrium, where any new tool, framework, or technique leads to disruption and the establishment of a new equilibrium. Predictability is impossible when the foundation that architects plan against changes constantly in unexpected ways. Instead, prefer evolvability over predictability. This hands-on workshop provides a high-level overview of a different way to think about software architecture.
Outline:
No prerequisites or requirements: all exercises are done with paper, pen, and intellect.
No single architectural style solves all needs. Though microservices have taken the developer community by storm recently, they are not always the optimal solution. In some cases, a more monolithic architecture may be more suitable in the short term. Or perhaps a more traditional system of web services that allow you to leverage existing infrastructure investment is preferable. Fortunately, proven architectural practices allow you to build software that transcends specific architectural alternatives and develop a software system that gives the development team the agility to shift between different architectural styles without undergoing a time-consuming, costly, and resource-intensive refactoring effort. Modularity is the cornerstone of these alternatives.
In this workshop, we will examine the benefits and drawbacks of several different modular architecture alternatives and we’ll explore the infrastructure, skills, and practices necessary to build software with each of these alternatives. There will be straightforward exercises and demonstrations that show the alternatives and how to create software that provides the architectural agility to easily shift between architectural alternatives.
Topics discussed include:
Pen and paper
Java 8+
Ant
Gradle
Graphviz (Optional)
Heroku account and Heroku CLI (Optional, only if you want to deploy to Heroku PaaS)
Becoming a software architect is a longed-for career upgrade for many software developers. While the job title suggests a work day focused on technical decision-making, the reality is quite different. In this workshop, software architect Nathaniel Schutta constructs a real world job description in which communication trumps coding.
- Discover the skill sets needed to juggle multiple priorities, meetings, and time demands
- Learn why your best team leadership tool is not a hammer, but a shared cup of coffee
- Hear the best ways to give and take criticism
- Understand the necessity of writing effective email and formal architecture documents
- Get tips for delivering confident career-building presentations to any audience
- Review essential techniques for stakeholder management and relationship building
- Explore the critical need for architecture reviews and an effective process for conducting them
Through lecture and small group exercises, Nathaniel will help you understand what it means to be a successful architect. Working through various problems, attendees will have opportunities to think through architectural decisions and patterns, discuss the importance of non-functional requirements, and see why architects cannot afford to practice resume-driven design.
Learning about design patterns is not really hard. Using design patterns is also not that hard. But using the right design pattern for the right problem is not that easy. If, instead of looking for a pattern to use, we look for the design forces behind a problem, we may arrive at better solutions. Furthermore, with most mainstream languages supporting lambda expressions and functional style, the patterns appear in many more elegant ways as well.
In this workshop we will start with a quick introduction to a few patterns. Then we will work through multiple examples - take a problem, delve into the design, and, as we solve it, see what patterns emerge in the design. The objective of this workshop is to gain hands-on experience in prudently identifying and using patterns that help create extensible code.
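As a preview of how a pattern "emerges" with lambdas, consider the Strategy pattern: instead of a class hierarchy of strategy objects, the varying rule is passed as a lambda. The class and method names below are illustrative.

```java
import java.util.List;
import java.util.function.Predicate;

// The Strategy pattern collapses into a functional interface: the
// selection rule is the strategy, supplied as a Predicate lambda.
public class OrderTotals {
  public static int total(List<Integer> prices, Predicate<Integer> selector) {
    return prices.stream().filter(selector).mapToInt(Integer::intValue).sum();
  }

  public static void main(String[] args) {
    List<Integer> prices = List.of(5, 12, 30, 7);
    // Each "strategy" is just a lambda, not a separate class.
    System.out.println(total(prices, p -> true));   // sum of all items
    System.out.println(total(prices, p -> p > 10)); // sum of expensive items
  }
}
```

Compare this with the pre-lambda version, which would need an interface plus one concrete class per rule; the design force (a varying selection rule) is the same, only the expression is lighter.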
-Java 8 JDK or later version
-Your favorite IDE (Preferably IntelliJ IDEA Community Edition)
-git
Machine Learning is all the rage, but many developers have no idea what it is, what they can expect from it, or how to start to get into this huge and rapidly-changing field. The ideas draw from the fields of Artificial Intelligence, Numerical Analysis, Statistics, and more. These days, you'll generally have to be a CUDA-wielding Python developer to boot. This workshop will gently introduce you to the ideas and tools, show you several working examples, and help you build a plan for diving deeper into this exciting new field.
We will cover:
Please install Anaconda for Python 3 before the workshop if possible. https://www.anaconda.com/download
Tired of trying to manage and maintain servers? Never have a large enough operations team? Don't have a budget for running lots of servers? Don't want to pay for servers sitting idle? Afraid you might become so popular that you won't be able to scale fast enough? Don't worry - it is possible to alleviate these issues by moving to a serverless architecture that utilizes microservices hosted in the cloud. This type of architecture can support all different types of clients, including web, mobile, and IoT.
During this hands-on workshop, you will build a serverless application utilizing AWS services such as Lambda, API Gateway, S3 and a datastore.
During this session you will build a simple web application utilizing AWS services and Angular.
At the end of this workshop, you will be comfortable with designing, deploying, managing, monitoring and updating a coordinated set of applications running on Kubernetes.
Distributed application architectures are hard. Building containers and designing microservices to work and coordinate together across a network is complex. Given limitations on resources, failing networks, defective software, and fluctuating traffic, you need an orchestrator to handle these variables. Kubernetes is designed to handle these complexities, so you do not have to. It's essentially a distributed operating system across your data center. You give Kubernetes containers, and it will ensure they remain available.
Kubernetes continues to gain momentum and is quickly becoming the preferred way to deploy applications.
In this workshop, we'll grasp the essence of Kubernetes as an application container manager, learning the concepts of deployments, pods, services, ingress, volumes, secrets, and monitoring. We'll look at how simple containers are quickly started using a declarative syntax. We'll build on this with a coordinated cluster of containers to make an application. Next, we will learn how Helm is used for managing more complex collections of containers. We'll see how your application containers can find and communicate with each other directly or use a message broker for exchanging data. We will play chaos monkey, mess with some vital services, and observe how Kubernetes self-heals back to the expected state. Finally, we will observe performance metrics and see how nodes and containers are scaled.
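For reference, the declarative syntax mentioned above looks like this hypothetical Deployment manifest: you state the desired state (here, three replicas of a web container) and Kubernetes works to keep the cluster in that state. All names are illustrative.

```yaml
# Illustrative Deployment: declare WHAT you want, not HOW to start it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical application name
spec:
  replicas: 3                # Kubernetes keeps exactly 3 pods running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`; if a pod dies, Kubernetes replaces it automatically to restore the declared state.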
Come to this workshop to learn how to deploy and manage your containerized applications. Along the way, you will see how Kubernetes effectively schedules your application across its resources.
Optionally, for more daring and independent attendees, you can also replicate many of the exercises on your local laptop with Minikube or Minishift. There are other Kubernetes flavors as well. However, if you run into trouble during the workshop, please understand we cannot deviate too far to meet your local needs. If you do want to try some of the material locally, this stack is recommended:
Some of the topics we will explore:
These concepts are presented and reinforced with hands-on exercises:
You will leave with a solid understanding of how Kubernetes actually works and a set of hands-on exercises you can share with your peers. Bring a simple laptop with a standard browser for a full hands-on experience.
Micronaut is a modern, JVM-based, full-stack framework for building modular, easily testable microservice applications. Micronaut embraces some of the same ideas Grails uses to prioritize developer productivity and code simplicity, then applies those ideas to a framework specifically designed to overcome the challenges associated with microservice architectures. Through lectures, real-world examples, and lab exercises, this 1-day hands-on workshop will arm you with everything you need to get started building microservice applications using Micronaut. The workshop will cover the fundamentals of Micronaut and build a real application which will be deployed to Google Cloud Platform (GCP) during the experience.
Although everyone is welcome, this workshop is best suited for JVM developers who want to build microservice applications. Participants should be comfortable with Java as a programming language.
Should information management systems adopt a services architecture? Many data provisioning and BI systems are monolithic, tightly coupled, difficult to scale, and stumble when it comes to delivering an MVP in a timely manner.
In this session we will look at the common obstacles such systems inherently bring with them, and how the Data as a Service architecture pattern addresses many of these issues.
Agenda
Data as a Service delivers the MVP of real-time data management while avoiding many of the anti-patterns that traditional data provisioning and BI systems exhibit. Moreover, building out a Data as a Service system doesn't require high up-front costs or the welding together of multiple products. Learn how the open source product Talend Open Studio can be used to build out a DaaS system that delivers faster and more scalable solutions to your customers.
In this session we will take a close look at the key components of the Data as a Service architecture pattern by walking through an example implemented using Talend.
Agenda
Continuous Integration has redefined our testing practices. Testing has become more focused, efficient, and re-positioned further upstream in the development life-cycle. Unfortunately, our testing systems haven't evolved in lock-step - specifically the provisioning of realistic test data.
It remains common practice to extract, cleanse, and load production data into our non-production environments. This is a lengthy process with serious security concerns, and it still doesn't satisfy all our data content requirements.
What if there were a better way of providing realistic test data? What if it could be generated on-demand as part of the Continuous Integration process - without the heavy databases and traditional batch jobs?
Come join us on a journey as we walk through the concepts, building blocks, and implementation of the light-weight Test Data Generation package that addresses this automated testing niche.
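To illustrate the on-demand idea (this is a hypothetical sketch, not the package the session presents): a seeded generator synthesizes realistic-looking rows at build time, so each CI run gets data that is plausible in shape yet fully reproducible, with no production extract involved.

```java
import java.util.Random;

// Hypothetical on-demand test data generator: a fixed seed makes every
// CI run produce identical rows, so tests stay deterministic.
public class CustomerGenerator {
  private static final String[] FIRST = {"Ana", "Bob", "Chen", "Dee"};
  private static final String[] CITY  = {"Austin", "Berlin", "Cairo"};
  private final Random rng;

  public CustomerGenerator(long seed) { this.rng = new Random(seed); }

  // One CSV row: name, city, and a plausible account balance.
  public String nextRow() {
    return FIRST[rng.nextInt(FIRST.length)] + ","
         + CITY[rng.nextInt(CITY.length)] + ","
         + (rng.nextInt(90_000) + 10_000);
  }

  public static void main(String[] args) {
    CustomerGenerator gen = new CustomerGenerator(42L); // fixed seed => repeatable
    for (int i = 0; i < 3; i++) System.out.println(gen.nextRow());
  }
}
```

Because no production data is copied, the security concerns of extract-and-cleanse pipelines disappear, and edge-case rows can be synthesized that production may never contain.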
We'll provide an overview of the Rust language, explain how a mathematical framework from the 1950s was rediscovered, and provide an overview of Machine Learning patterns that were applied.
Agenda
Open source is growing by leaps and bounds. Even corporations are adopting open source products and reaping the numerous benefits. But there's more than products to adopt - the Open Source Model itself is also highly valuable.
Find out how adopting the Open Source Model can improve your organization’s “time to market”, product quality, professional climate, and Agile adoption. We’ll focus on what constitutes an Open Source mindset, how to implement it, and some of the pitfalls and anti-patterns to avoid.
Agenda
In the world today, and even more so in IT, no one can afford to be stuck in their ways. New ideas arise, which make fundamental changes to the world as we used to know it. In an age where technology is becoming a major driver in business, this talk answers the following questions:
How can a person still remain relevant as an individual and as a team member?
How can a team perform beyond expectations given the blockers that always come their way?
What do organisations gain when their teams are always improving?
How can an organisation support and encourage continuous improvement?
How can an organisation continue improving along with its people?
By the end, individuals will be encouraged to find ways to improve in all aspects of their lives. Business leaders will understand what continuous improvement brings to their teams and organisation as a whole. They will also understand how a team and organisation can grow along with its people.
When we have a problem that can be solved with software, we first design an architecture that will guide what the system will look like. This architecture needs to be robust and well thought out to ensure that it handles all the requirements at hand and is flexible enough for the future.
This talk is about some considerations to take while designing a system:
The problem to be solved
The users of the system
Systems integrations
The talk also highlights some common pitfalls that teams fall into during this process:
Database management
Buzzword-oriented architecture
Outcome of the talk:
By the end of this session the listener will be able to:
Interpret the most important considerations while designing a system
Evaluate the business and customer requirements to determine their architecture
Analyse past organisation strengths and shortcomings to make better decisions
As the cloud becomes more popular, many cloud-inexperienced architects wonder whether migration to the cloud is the correct way to scale. When they decide to migrate, they have to figure out where to start and which components to use. This talk is not about a particular cloud vendor but about the questions and considerations to weigh while deciding on a cloud architecture for your business.
After deciding to migrate to the cloud, the architecture design will determine the success of the infrastructure. This architecture needs to be robust and well thought out to ensure that it handles all the requirements at hand and is flexible enough for the future.
This talk is about considerations to take while designing a system, including:
The intended clients
Investment decisions
The business strategy
The development team
Choice of tools
Good development practices
Here we also discuss common pitfalls of the architecture design process, including poor tool choices.
As cloud computing becomes more popular and many businesses are keen to adopt it, one of their major concerns is security. In spite of the hype accompanying it and the success stories from the large organisations that have adopted it, there are also numerous examples of breaches experienced in the cloud. Many businesses would like to know how to create a secure cloud infrastructure to ensure that all their applications and data are well protected.
This talk is based on my experience in different projects that I have been involved in, some pitfalls that my team has fallen into and considerations that we can take while preparing for new cloud infrastructure.
This talk is not about a particular cloud vendor's solutions but about the questions and considerations to take to ensure that your cloud infrastructure is secure.
The considerations include:
Ensuring data is securely protected from anyone who would want to access it.
Encrypting the data so that it remains unreadable if it falls into the wrong hands.
Authentication to ensure that only authorised people can access the data.
Protecting data in transit so that it cannot be read or modified by eavesdroppers.
Protecting the infrastructure from Denial of Service (DoS) attacks from both internal and external sources.
This talk also highlights some common pitfalls:
Using components for a purpose other than what they were created for.
Waiting until after the application is built before preparing the infrastructure.
Creating the infrastructure and then thinking about security at the last minute.
At the end of this session, attendees:
Are able to evaluate the level of security that they need
Can reorganise their priorities while designing their cloud infrastructure
Are equipped to create a highly secure infrastructure
Design patterns are commonplace in OO programming. With the introduction of lambda expressions in languages like Java, one has to wonder about their influence on design patterns.
In this presentation we will take up some of the common design patterns and rework them using lambda expressions. We will also explore some other patterns that are not so common, but are quite useful ways to apply lambdas.
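As one example of the kind of reworking this involves, consider the Execute Around Method pattern with lambdas: the wrapper owns setup and cleanup, and the caller supplies only the interesting work as a lambda. The Resource class and its method names below are illustrative.

```java
import java.util.function.Function;

// Execute Around Method with a lambda: withResource guarantees open/close
// bracketing, while the caller passes only the work to do in between.
public class Resource {
  private final StringBuilder log = new StringBuilder();

  private Resource() { log.append("open;"); }
  public Resource use(String op) { log.append(op).append(";"); return this; }
  private void close() { log.append("close;"); }

  public static String withResource(Function<Resource, Resource> work) {
    Resource r = new Resource();
    try {
      work.apply(r);
    } finally {
      r.close();   // cleanup runs even if the lambda throws
    }
    return r.log.toString();
  }

  public static void main(String[] args) {
    System.out.println(withResource(r -> r.use("read").use("write")));
    // open;read;write;close;
  }
}
```

Before lambdas this pattern required an anonymous inner class per call site; the lambda form keeps the guarantee (cleanup always runs) with far less ceremony.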
Big up front design is discouraged in agile development. However, we know that architecture plays a significant part in software systems. Evolving architecture during the development of an application seems to be a risky business.
In this presentation we will discuss the reasons to evolve the architecture, some of the core principles that can help us develop in such a manner, and the ways to minimize the risk and succeed in creating a practical and useful architecture.
Creating code is easy; creating good code takes a lot of time, effort, discipline, and commitment. The code we create is truly the manifestation of our design. Creating a lightweight design can help make the code more extensible and reusable.
In this presentation we will take an example oriented approach to look at some core design principles that can help us create better design and more maintainable code.
Before spending substantial effort in refactoring or altering design, it would be prudent to evaluate the current quality of design. This can help us decide if we should proceed with refactoring effort or a particular alteration of design. Furthermore, after evolving a design, using some design metrics would help us to evaluate if we have improved on the design front.
In this workshop we will learn about some critical qualities of design and how to measure those. We will learn about these by working through some example code, refactoring it, and evaluating the design again at each stage of refactoring.
-Java 8 JDK or later version
-Your favorite IDE (Preferably IntelliJ IDEA Community Edition)
-git
It's common knowledge: software must be extensible, easier to change, less expensive to maintain. But, how? That's what we often struggle with. Thankfully there are some really nice design principles and practices that can help us a great deal in this area.
In this workshop, we will start with a few practical examples, problems that will demand extensibility and ease of change. We will approach their design, and along the way learn about the principles we apply, why we apply them, and the benefits we get out of using these principles. Instead of talking theory, we will design, refactor, create prototypes, and evaluate the design we create.
-Java 8 JDK or later version
-Your favorite IDE (Preferably IntelliJ IDEA Community Edition)
-git
This two-session workshop covers AMQP messaging concepts and technologies, including hands-on exercises with RabbitMQ, Spring, and Docker.
Topics
Fundamentals: AMQP
Technologies and Architectures: RabbitMQ & Spring
Demos and Hands-on Exercises
Download Prior to Workshop
This two session workshop covers AMQP messaging concepts and technologies including hands-on exercises with RabbitMQ, Spring and Docker
Topics
Fundamentals: AMQP
Technologies and Architectures: RabbitMQ & Spring
Demos and Hands-on Exercises
Download Prior to Workshop
Software architecture involves inherent trade-offs. Some of these trade-offs are clear, such as performance versus security or availability versus consistency, while others are more subtle, like resiliency versus affordability. This presentation will discuss various architectural trade-offs and strategies for managing them.
The role of a technical lead or software architect is to design software that fulfills the stakeholders' vision. However, as the design progresses, conflicting requirements often arise, affecting the candidate architecture. Resolving these conflicts typically involves making architectural trade-offs (e.g. service granularity vs. maintainability). Additionally, with time-to-market pressures and the need to do more with less, adopting comprehensive frameworks like TOGAF or lengthy processes like ATAM may not be feasible. Therefore, it is crucial to deeply understand these architectural trade-offs and employ lightweight resolution techniques.
In this session you will learn to strategically introduce technology innovations by applying specific change patterns to groups of individuals. Using these patterns and related techniques will not only benefit your organization but will ultimately benefit your career as a technologist by making you a better influencer, writer, and speaker.
The rapid pace of technological innovation has enabled many organizations to dramatically increase productivity while at the same time decrease their overall headcount. However, the vacillating global economy combined with “change fatigue” within organizations has resulted in a risk averse culture. In such an environment how can one possibly introduce and inculcate the latest technology or process within an organization? The answer is to have a solid understanding of Diffusion Theory and to leverage Patterns of Change.
Prezi Location: http://prezi.com/b85wwmw7hccn
Come to this session to learn about building application platforms that are capable of handling new deployment paradigms such as microservices, fast data, big data, and functions. While these paradigms have offered immense developer velocity and productivity, they often lead to many challenges at runtime from performance and scalability perspectives.
I will demonstrate how the complexity of the old monolith has shifted to a new complexity of a distributed nature. What used to be an in-memory call is now a network hop away to another service, and this comes at a cost. Can the platform be made smarter to handle this? It turns out the answer is absolutely yes! The rise of the application platform has given way to other specialized runtime layers that can be encoded into the platform to handle network distribution complexities that you don't have to worry about. For example, if two microservices happen to call each other 99% of the time across the network within one application domain (or multiple domains, for that matter), why do they need to be distributed so far from each other? Why suffer such a latency burden? What if a specialized network layer could detect this and bring the two services close enough together that the latency between them is minimal? There are many other patterns I will discuss here, but this essentially gives rise to a specialized layer the industry is calling the service mesh. What if I could specify a latency-tolerance limit across a sequence of calls between microservices, and indicate that regardless of what happens I want this layer to guarantee the limit is never exceeded? That would be super cool!
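To make the latency-budget idea concrete, here is a minimal sketch of tracking a cumulative latency limit across a sequence of calls. The names are hypothetical; no real service-mesh API is being shown here:

```java
import java.util.List;

// Illustrative sketch only: how a mesh-like layer might enforce an end-to-end
// latency budget across a chain of service calls. Hypothetical names, not a
// real service-mesh API.
public class LatencyBudget {
    private final long budgetMillis;
    private long spentMillis;

    public LatencyBudget(long budgetMillis) { this.budgetMillis = budgetMillis; }

    // Record one hop's observed latency; report whether the budget still holds.
    public boolean record(long hopMillis) {
        spentMillis += hopMillis;
        return spentMillis <= budgetMillis;
    }

    public long remaining() { return Math.max(0, budgetMillis - spentMillis); }

    public static void main(String[] args) {
        LatencyBudget budget = new LatencyBudget(100); // 100 ms end-to-end limit
        for (long hop : List.of(20L, 30L, 60L)) {
            if (!budget.record(hop)) {
                // The third hop pushes the total to 110 ms, past the limit.
                System.out.println("budget exceeded");
            }
        }
    }
}
```

A real mesh would of course act on this signal (shedding load, rerouting, or co-locating the services) rather than just reporting it.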
In-memory databases have become permanent components of the enterprise application stack, and knowing how to size, scale, and tune them in VMware vSphere or bare-metal environments is a paramount skill. In recent years, we have seen clusters with 1 to 5 TB of memory driving millions of transactions per day. Not only do these systems have zero tolerance for failure, most also require predictable throughput and response times. In this session, we visit the most common deployment patterns and the choices you have to make in placing the server components versus the consumption/ingestion clients. We will also inspect various transaction volumes and discuss common administration tasks.
This session will do a sizing deep dive: how to best size the cache nodes, how to size the virtual environment, and other considerations for making these systems highly available and scalable with predictable performance. For Java-based in-memory databases, we will take a deep dive into various GC algorithms and how to best configure JVMs.
A Technology Radar is a tool that forces you to organize and think about near-term technology decisions, both for you and your company. This talk discusses using the radar for personal breadth development, architectural guidance, and governance.
The ThoughtWorks Technical Advisory Board creates a “technology radar” twice a year: a working document that helps the company make decisions about interesting technologies and where to spend its time. ThoughtWorks then started conducting radar-building exercises for clients, which provide a great medium for technologists company-wide to express their opinions about the technologies they use every day. For companies, creating a radar helps you document your technology decisions in a standard format, evaluate them in an actionable way, and create cross-silo discussions about suitable technology choices. This session describes the radar visualization and how to conduct a radar-building session for yourself. After a brief introduction, the bulk of the workshop consists of attendees building a radar for the group, following the same procedure you'll use when you run this exercise at your company. At the end, we'll have created a unique radar for this event and you will have practiced the technique for yourself.
This multi-disciplinary session takes a deep dive into the confluence of topics required to fully understand the intersection of Continuous Delivery and architecture, including evolutionary architecture and emergent design, with an emphasis on how architectural decisions affect the ease of changing and evolving your code, the role of metrics in understanding code, how Domain-Driven Design's Bounded Context reifies in architecture, how to reduce intra-component/service coupling, and other techniques.
Continuous Delivery is a process for automating the production readiness of your application every time a change occurs – to code, infrastructure, or configuration. Rather than honing skills at predicting the future via Big Design Up Front, Continuous Delivery emphasizes techniques for understanding and changing code with less cost during the process. Some architectures and engineering practices yield better designs for this environment. This multi-disciplinary session takes a deep dive into the confluence of topics required to fully understand the intersection of Continuous Delivery and architecture, including evolutionary architecture and emergent design, with an emphasis on how architectural decisions affect the ease of changing and evolving your code, the role of metrics in understanding code, how Domain-Driven Design's Bounded Context reifies in architecture, how to reduce intra-component/service coupling, and other techniques.
This session covers two critical soft skills for architects:
New architects find soft skills like creating lucid documentation and building compelling presentations challenging. This presentation covers a variety of ways to document ideas in software architecture, ranging from diagramming techniques (that aren't UML) to Architecture Decisions Records and ultimately to presentations. The second part of the talk leverages patterns and anti-patterns from the Presentation Patterns book to help architects build clear and concise representations of their ideas.
Let me guess - your company is all in on “the Cloud” but no one can really agree what that means. You’ve got one group Dockering all the things while another group just rearchitected the Wombat system as a set of functions…as a service. It is enough to make a busy developer’s head spin - how do we make sense of all the options we have? I hate to burst your bubble, but there are no silver bullets, just a set of tools that we can leverage to solve problems. And just as a master carpenter knows when to use their favorite framing hammer and when they need to reach for the finish hammer, we need to use the right tool at the right time to solve our problems.
In this talk we will survey the various options today’s application teams have at their disposal looking at the consequences of various approaches. We will clear up the buzzword bingo to give you a solid foundation in various cloud computing approaches. Most importantly, we will discuss why the right answer will almost always be: and not or.
Development teams often focus on getting code to production, losing sight of what comes after the design and build phase. But we must consider the full life cycle of our systems from inception to deployment through to sunset, a discipline many companies refer to as site reliability engineering.
While your organization may or may not have an SRE team, you have someone playing that role and we can all benefit from looking at the principles and practices that we can bring to bear on our projects. In this talk, I will introduce the concepts of SRE and how you can adopt these ideas on your applications.
In some organizations, architects are dismissed as people that draw box and arrow diagrams - the dreaded whiteboard architect. While we don't want to foster that stereotype, it is important for an architect to be able to construct basic architectural diagrams. An architect must also be able to separate the wheat from the chaff eliminating those models that don't help tell the story while fully leveraging those that do.
In this workshop, we'll discuss the various diagrams at our disposal. We'll walk through a case study and, as we go, construct a set of diagrams that will help us effectively communicate our design. We'll talk about stakeholders and who might benefit from each type of diagram. Additionally, we'll discuss how to constructively review an architectural model.
Neither a laptop nor special software is required for this workshop, though your modeling tool of choice (Spark, Visio, OmniGraffle, etc.) is welcome for the exercises. Of course, paper and pencil are very effective too and frankly recommended! Feel free to work in pairs or teams. That's it! Well, and a willingness to participate!
In some organizations, architects are dismissed as people that draw box and arrow diagrams - the dreaded whiteboard architect. While we don't want to foster that stereotype, it is important for an architect to be able to construct basic architectural diagrams. An architect must also be able to separate the wheat from the chaff eliminating those models that don't help tell the story while fully leveraging those that do.
In this workshop, we'll discuss the various diagrams at our disposal. We'll walk through a case study and, as we go, construct a set of diagrams that will help us effectively communicate our design. We'll talk about stakeholders and who might benefit from each type of diagram. Additionally, we'll discuss how to constructively review an architectural model.
Neither a laptop nor special software is required for this workshop, though your modeling tool of choice (Spark, Visio, OmniGraffle, etc.) is welcome for the exercises. Of course, paper and pencil are very effective too and frankly recommended! Feel free to work in pairs or teams. That's it! Well, and a willingness to participate!
By now I bet your company has hundreds, maybe thousands of services; heck, you might even consider some of them micro in stature! And while many organizations have plowed headlong down this particular architectural path, your spidey sense might be tingling…how do we keep this ecosystem healthy?
In this talk, I will go beyond the buzzwords into the nitty-gritty of actually succeeding with a service-based architecture. We will cover the principles and practices that will make sure your systems are stable and resilient while allowing you to get a decent night's sleep!
Monoliths are out and microservices are in. Not so fast. Many of the benefits attributed uniquely to microservices are actually a byproduct of other architectural paradigms with modularity at their core. In this session, we’ll look at several of the benefits we expect from today’s architectures and explore these benefits in the context of various modern architectural paradigms. We’ll also examine different technologies that are applying these principles to build the platforms and frameworks we will use going forward.
Along the way, we’ll explore how to refactor a monolithic application using specific modularity patterns and illustrate how an underlying set of principles span several architectural paradigms. The result is an unparalleled degree of architectural agility to move between different architectural paradigms.
Big architecture up front is not sustainable in today's technology climate, where expectations are high for delivering high-quality software more quickly than ever before. To embrace change, teams are moving to agile methods, but agile methods provide little architectural guidance. Attempts to define the architectural vision for a system early in the development lifecycle do not work. In this session, we provide practical guidance for software architecture on agile projects.
We will explore several principles that help us create more flexible and adaptable software systems. We’ll expose the true essence of what’s meant when we say “architectural agility.” And we’ll explore the real goal of software architecture and how we can accommodate architectural change to help increase architectural agility.
Microservice architecture is a modern architectural approach that increases development and delivery agility by focusing on building modular services. The framework we use has a tremendous impact on how quickly and easily we can deliver services. New frameworks are emerging that embrace new approaches for helping us deliver microservices.
In this session, we will explore some modern Java frameworks for building microservices (aka micro frameworks). Example frameworks you may see include Dropwizard, Ratpack, Spark, Ninja, RestExpress, Play, Restlet, and RestX. We'll demonstrate each framework by using a programming kata to build the same service using several different frameworks. Optionally, bring your own laptop, clone the github repo, and you can build the services along with me. To do this, you must have Java 8+ and Gradle.
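As a taste of the "micro" in micro frameworks, here is a sketch of the route-plus-handler shape these frameworks share. It deliberately uses only the JDK's built-in HTTP server rather than any of the frameworks listed above, so treat it as an illustration of the style, not of any particular framework's API:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// A dependency-free sketch of the "route + handler" shape micro frameworks
// provide, using only the JDK's built-in HTTP server.
public class HelloService {

    // Handler logic kept as a pure function so it is trivially testable.
    static String greet(String name) {
        return "{\"message\":\"Hello, " + name + "!\"}";
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = greet("world").getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();

        // Smoke-test the route, then shut down so the example terminates.
        int port = server.getAddress().getPort();
        URL url = new URL("http://localhost:" + port + "/hello");
        try (InputStream in = url.openStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
        server.stop(0);
    }
}
```

Frameworks like Spark or Ratpack compress this boilerplate even further, which is exactly what makes them attractive for small, focused services.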
The way we build and deliver software is changing. We must deliver software more quickly than ever before, and traditional approaches to software architecture, infrastructure, and methodology do not allow us to meet demand. We've reached the limits of agility through process improvement alone; further increases demand that we improve architecture, infrastructure, and methodology simultaneously. 12 Factor is an app development methodology for building apps suited to the modern era.
Building modern apps requires modern methods and 12 Factor is an app development methodology that helps development teams build software by emphasizing development practices that meld together modern architectural paradigms with agile practices like continuous delivery for deployment to cloud platforms. In this session, we’ll examine the 12 Factors and explore how to apply them to apps built using Java.
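For example, Factor III says to store config in the environment rather than in code. A minimal sketch in plain Java follows; the DATABASE_URL variable name and the fallback value are illustrative assumptions, not part of the 12 Factor text:

```java
import java.util.Map;

// A sketch of Factor III ("Store config in the environment") in plain Java.
// The DATABASE_URL key and the H2 fallback are illustrative assumptions.
public class AppConfig {

    // Resolve a setting from the environment, falling back to a default so the
    // same build artifact can run unchanged in dev, staging, and production.
    static String fromEnv(Map<String, String> env, String key, String fallback) {
        String value = env.get(key);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        String dbUrl = fromEnv(System.getenv(), "DATABASE_URL",
                "jdbc:h2:mem:devdb"); // local default; the real URL comes from the env
        System.out.println("Using database: " + dbUrl);
    }
}
```

Because the environment, not the code, carries the config, deploying to a new environment means setting variables, not rebuilding the app.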
Java 9 with the Jigsaw module system is here. In this session, we'll explore the basics of the Jigsaw module system and examine the impact it will have on how we build Java applications. We will dig into its major features, including dependency management and Jigsaw services. Once we understand Jigsaw's basics, we will explore what it's going to take to migrate existing Java applications to Java 9 and leverage Jigsaw.
Jigsaw's impact stands to be consequential. Jigsaw will restrict application code from accessing non-published JDK classes (e.g. the internal com.sun.* packages), require you to be explicit in declaring your dependencies, and more. We will explore Jigsaw basics and then dig into the impact Jigsaw will have on migrating existing Java applications to Java 9.
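For a flavor of what explicit dependencies look like, here is a minimal module-info.java declaration. The module and package names are hypothetical:

```java
// module-info.java -- a minimal Jigsaw module declaration (illustrative names).
// Dependencies must be declared explicitly, and only exported packages are
// visible to other modules; everything else stays hidden.
module com.example.orders {
    requires java.sql;                          // explicit dependency on a platform module
    exports com.example.orders.api;             // the published API of this module
    uses com.example.orders.spi.TaxCalculator;  // consumes a Jigsaw service
}
```

This single file replaces much of the classpath guesswork: the compiler and runtime can now verify the dependency graph up front.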
You have some modular code with a REST API. You are on your way to Microservices. Next, you package it in a container image that others can run. Simple. Now what? Your service needs to log information, needs to scale and load balance between its clones. Your service needs environment and metadata way outside its context. What about where the service will run? Who starts it? What monitors its health? What about antifragility? Updates? Networking? Oh my.
Don't get flustered. We will explore how Kubernetes simplifies the complexity of distributed computing.
This session will help you understand the terms, architecture and the mechanics of the Kubernetes tools. You will understand how to target your applications to a seemingly complex distributed compute platform.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts (1 of 2)
Aha moments with apps in containers can be quite liberating. The mobile space is saturated with “there's an app for that”. For us, we now expect “there's a container for that”. “Write once, run anywhere” (WORA) has changed to “Package once, run anywhere” (PORA). The growing community of containers is riding up the hype curve. We will look at many ways to assemble pods using architecture patterns you already know.
Your software package delivery and installation is no longer an rpm, deb, dmg, jar, war, native executable or a run script, it is simply an image that has a common run container command.
During the presentation, we will explore some examples on Katacoda.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts
At the 2009 Agile conference, J.B. Rainsberger declared “Integration tests are a scam.” I agree. Come see some compelling reasons why consumer-driven contract testing is a much better approach, particularly for microservices.
We will explore different testing techniques on Kubernetes, including an important one called “Consumer-Driven Contracts”.
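The core idea of a consumer-driven contract can be sketched in a few lines of plain Java. This is not the API of a real contract-testing tool such as Pact; it only illustrates the principle that the consumer publishes the fields it depends on and the provider verifies its responses against them:

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch of consumer-driven contracts (not a real tool's API):
// the consumer publishes the fields it relies on, and the provider's build
// verifies that its responses still satisfy that expectation.
public class ConsumerContract {

    // Fields a hypothetical billing consumer actually reads from /orders.
    static final Set<String> EXPECTED_FIELDS = Set.of("orderId", "total", "currency");

    // The provider runs this against a sample of its real response payload.
    static boolean satisfies(Map<String, Object> providerResponse) {
        return providerResponse.keySet().containsAll(EXPECTED_FIELDS);
    }

    public static void main(String[] args) {
        Map<String, Object> response =
                Map.of("orderId", "A-17", "total", 42.50, "currency", "USD",
                       "internalFlag", true); // extra fields are fine
        System.out.println(satisfies(response)); // prints true
    }
}
```

The payoff is that the provider learns it broke a consumer at build time, without spinning up both services in an integration environment.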
After a brief overview of the concepts a live demonstration will show you how to:
This is the droid you are looking for. Within this droid are hundreds of rules designed to review your code for defects, hotspots and security weaknesses. Consider the resulting analysis as humble feedback from a personal advisor. The rules come from your community of peers, all designed to save your butt.
We will explore techniques on how to add these checks to your IDE, your build scripts and your build pipelines.
Too much chatter in your pull requests? See how the analysis tools teach best practices, without ego or criticism, to a spectrum of developers. As a leader, see how to develop an effective code quality intern program around this technique. We will also see some techniques that use Kubernetes to obtain reports and dashboards right on your local machine and from your continuous integration pipeline.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
From operating systems on bare metal, to virtual machines on hypervisors, to container orchestration platforms: how we run our code and bundle our applications continues to evolve. Serverless computing continues this evolutionary path for our architectures.
Kubernetes provides an ideal, vendor-agnostic platform for serverless computing. Serverless promises developers that they can worry less about the cluster and focus more on their logic. Building on your Kubernetes knowledge, we will discover the various contributors of serverless frameworks on Kubernetes. Specifically, we will unpack how two open source serverless frameworks, Kubeless and OpenFaaS, leverage Kubernetes to achieve their promises. We will explore how Knative is helping serverless providers evolve to the next level of sophistication.
Prerequisite: If you are unfamiliar with Kubernetes or Istio meshing be sure to attend: Understanding Kubernetes: Fundamentals or Understanding Kubernetes: Meshing Around with Istio.
Kubernetes is a complex container management system. Your application running in containers is also a complex system, as it embraces the distributed architecture of highly modular and cohesive services. As these containers run, things may not always behave as smoothly as you hope. We must embrace the notion of antifragility and design our systems to be resilient despite the realities of resource limitations, network failures, hardware failures, and failed software logic. All of this demands a robust monitoring system that opens views into the behaviors and health of your applications running in a cluster.
Three important aspects to observe are log streams, tracing, and metrics.
In this session, we look at some example microservices running in containers on Kubernetes. We add Istio to the cluster for meshing. We observe how logs are gathered, see how transactions are traced and measured between services, inspect metrics, and finally add alerts that fire when metrics indicate a problem.
On the 2017 tour, I introduced the notion of “serverless” and Functions as a Service (FaaS) platforms. We understood the motivation for serverless computing, compared serverless to other cloud-native infrastructure approaches, navigated some architectural tradeoffs, and took a whirlwind tour of the Big 3 FaaS providers.
In this 2018 edition of the talk, we’ll still cover a few of the same themes to bring new folks up to speed, but we’ll also look at what’s changed in this ecosystem over the past year, take a look at new or enhanced features, offerings, runtimes, and programming models, and examine what use cases are becoming popular for serverless computing. We’ll also look at how tradeoffs have evolved, and definitely throw in a few demos.
In this presentation, we'll build, test, and deploy an image-processing pipeline using Amazon Web Services such as Lambda, API Gateway, Step Functions, DynamoDB, and Rekognition.
We'll take a look at some of the following topics:
All software architectures have to deal with stress. It’s simply the way the world works! Stressors come from multiple directions, including changes in the marketplace, business models, and customer demand, as well as infrastructure failures, improper or unexpected inputs, and bugs. As software architects, one of our jobs is to create solutions that meet both business and quality requirements while appropriately handling stress.
We typically approach stressors by trying to create solutions that are robust. Robust systems can continue functioning properly in the presence of internal and external challenges, but they also have one or more breaking points. When we pass a robust system's known threshold for a particular type of stress, it will fail. And when a system encounters an “unknown unknown” challenge, it will usually not be robust at all!
Recent years have seen new approaches, including resilient, antifragile, and evolutionary architectures. All of these approaches emphasize the notion of adapting to changing conditions in order to not only survive stress but sometimes to benefit from it. In this class, we’ll examine together the theory and practice behind these architectural approaches.
As an architectural style, microservices are here to stay. They have crossed the proverbial chasm, and now it’s time to get to work. Microservices provide us with the ability to create truly evolutionary architectures composed of cohesive and autonomous components using well known and characterized distributed systems patterns.
As we create and compose components across the hard boundary of the network, we become deeply interested in establishing the correct boundaries, which has resulted in renewed interest in system design and decomposition. Fortunately, the tried and true practices of Domain-Driven Design are available to us.
In this presentation, we will cover a distillation of strategic (bounded contexts, subdomains, context mapping) and tactical (aggregates, domain events) DDD techniques and demonstrate how they enable us to create effective event-driven microservices.
If you’ve been following along, you’ve realized by now that cloud native architectures are fundamentally different than most traditional architectures. Most of the cloud native architectures that we can see in the wild have been built by relatively young companies that began from a zero-legacy state. Architects in more mature organizations are faced with the daunting challenge of building modern systems that exploit the unique characteristics of cloud infrastructure while simultaneously attempting to migrate legacy systems into those same environments, all the while “keeping the lights on.”
Much of the last two years of my career has been spent helping Fortune 500 companies devise cloud native migration strategies, and I've built an increasingly large catalog of patterns that have proven useful across multiple systems and industry verticals. In this session we'll dive into those patterns and more, including:
The learner should leave this session with a tool belt suitable for attacking an upcoming cloud native architecture migration effort.
Chaos Engineering, pioneered by Netflix, is the discipline of experimenting on a distributed system in order to build confidence in the system's capability to withstand turbulent conditions in production.
In this presentation, we'll take a look at the problem of building resilient software, and discuss how applying Google's SRE principles and patterns for architectural resiliency can help us to solve it. We'll then examine how the practice of Chaos Engineering can help us to prove or disprove the resiliency of our systems.
Learn how to use Heroku's 12 (15) Factor App methodologies to make your applications more portable, scalable, reliable and deployable.
Do you want to improve your application's portability, scalability, reliability, and deployability? Now you can, with Heroku's 12 Factor App methodologies. Learn from their experience hosting and supporting thousands of apps in the cloud. During this hands-on workshop, you will learn how to incorporate factors like configuration, disposability, dev/prod parity, and much more into an existing application, whether it is an on-premise or cloud native app. But wait, there's more! Act now, and get an additional 3 factors absolutely free! API First, Telemetry, and even Authentication and Authorization will be included at no additional cost.
Learn how to use Heroku's 12 (15) Factor App methodologies to make your applications more portable, scalable, reliable and deployable.
Do you want to improve your application's portability, scalability, reliability, and deployability? Now you can, with Heroku's 12 Factor App methodologies. Learn from their experience hosting and supporting thousands of apps in the cloud. During this hands-on workshop, you will learn how to incorporate factors like configuration, disposability, dev/prod parity, and much more into an existing application, whether it is an on-premise or cloud native app. But wait, there's more! Act now, and get an additional 3 factors absolutely free! API First, Telemetry, and even Authentication and Authorization will be included at no additional cost.
Rich Hickey once said programmers know the benefits of everything and the trade-offs of nothing…an approach that can lead a project down a path of frustrated developers and unhappy customers. As architects, though, we must consider the trade-offs of every new library, language, pattern, or approach and quickly make decisions, often with incomplete information. How should we think about the inevitable technology choices we have to make on a project? How do we balance competing agendas? How do we keep our team happy and excited without chasing every new thing that someone finds on the inner webs?
As architects, it is our responsibility to effectively guide our teams on the technology journey. In this talk I will outline the importance of trade-offs, how we can analyze new technologies, and how we can effectively capture the inevitable architectural decisions we will make. I will also explore the value of fitness functions as a way of ensuring the decisions we make are actually reflected in the code base.
The ideas of reactive systems and reactive programming have been around for a while. However, changes in many areas, from how applications are deployed to how they are used, along with the rise of big data, have resulted in a renewed interest in this area.
In this presentation we will start with a discussion of the nature of reactive systems and the characteristics of reactive applications. Then we will dive into the design and architectural concerns we have to address to effectively create such systems so they can meet the demands of high-volume, high-frequency data and interactions.
Transitioning from a monolith to a microservices-based architecture is a non-trivial endeavor. It is fraught with pitfalls that can lead to a disastrous implementation if we're not careful.
In this presentation we will discuss some core practices and principles that are critical to follow to effectively transition from a monolith to a microservices-based architecture.
Everybody seems to be rocking with Kubernetes and OpenShift! Even your favorite repos at GitHub are running on top of it. Don't be the last developer to board this bullet train. Come and learn a LOT in this session about Kubernetes.
We will provide numerous practical tips & techniques that will take you from cloud newbie to cloud native.
Java developers have run their code in application servers for many years. However, the cloud paradigm has brought new ways to think about and design applications. One example of this change is the serverless architecture, where event-driven code is executed in an ephemeral container managed by a third party. It doesn't mean that there are no servers involved; from the developer's perspective, it means that they don't need to worry about them.
Come to this session and learn about FaaS (Functions as a Service), an open source, event-driven, lambda-style programming model for Kubernetes.
If you're not terrified, you're not paying attention.
Publishing information on the Web does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources. We will also discuss the assault on encryption, web security features and emerging technologies that will hopefully help strengthen our ability to protect what we hold dear.
Topics include:
If you're not terrified, you're not paying attention.
Publishing information on the Web does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources. We will also discuss the assault on encryption, web security features and emerging technologies that will hopefully help strengthen our ability to protect what we hold dear.
Topics include:
If you listen to zealots and critics, blockchain-based systems and the cryptocurrencies they enable are either the Best Thing Ever or the Worst Thing Ever. As you may suspect, the reality is somewhere in-between. We will introduce the major ideas, technologies and players as well as evaluate them from technological, economic and social perspectives.
Come have a spin-free discussion about these polarizing technologies to find how they might be useful to you.
While the Web itself has strong decentralized aspects to how it is used, the backend technologies are largely centralized. The naming systems, the routing systems and the traffic that all points back to the same place for a website are all centralized technologies. This creates both a liability and a control point.
In order to break free of some of these limitations, new technologies are emerging to provide a more decentralized approach to the Web.
This talk will walk you through some emerging technologies that provide decentralized content storage and distribution, edge computing and more. We will touch upon the InterPlanetary File System (IPFS), WebTorrent, blockchain spin-offs and more.
Machine Learning is a key differentiator for modern organizations, but where does it fit into larger IT strategies? What does it do for you? How can it go wrong?
This class will contextualize these technologies and explain the major technologies without much (if any) math.
We will cover:
This session covers the landscape of Big Data tools, technologies and best practices in 2018. You'll leave this session armed with the knowledge you need to build Big Data solutions by assembling the best technologies for you.
We cover the components of a big data pipeline, options available for each module and the pros, cons and best practices for each option.
Software systems should not remain black boxes. In this talk we show how we can complement domain-driven design with tools that match the ubiquitous language with visual representations of the system that are produced automatically. We describe experiences of building concrete systems, and, by means of live demos, we exemplify how changing the approach and the nature of the tools allows non-technical people to understand the inner workings of a system.
Software appears to be hard to grasp especially for non-technical people, and it often gets treated as a black box, which leads to inefficient decisions. This must and can change.
In this talk we show how by changing our tools we can expose the inner workings of a system with custom visual representations that can be produced automatically. These representations enhance the ubiquitous language and allow non-technical people to engage actively with the running system.
We start from describing experiences of building concrete systems, and, by means of live demos, we exemplify how changing the approach and the nature of the tools allows non-technical people to understand the inner workings of a system. We then take a step back and learn how we should emphasize decision making in software development as an explicit discipline at all layers, including the technical ones. This talk is relevant for both technical and non-technical people.
Architecture is as important as functionality, at least in the long run. As functionality is recognized as a business asset, it follows that architecture is a business asset, too. In this talk we show how we can approach architecture as an investment rather than a cost, and detail the practical implications both on the technical and on the business level.
Often systems that have great current value are expensive to evolve. In other words, the future value of the system is highly influenced by its structure. Indeed, when I talk with technical people, they broadly agree with the idea that architecture is as important as functionality, at least in the long run.
If we truly believe this, we should act accordingly. If two things are equally important, we should treat them the same way. Given that the functionality of a system is considered a business asset, it follows that the architecture is a business asset as well. That means that we should stop perceiving the effort around architecture as a cost, and start seeing it as an investment.
Functionality receives significant testing investments through direct development effort, dedicated tools and even education. In a way, testing is like insurance, but unlike other insurance policies, this one is essentially guaranteed to pay off later on. Now, do you check the architecture with the same rigor? Do you have automatic architectural checks that prevent you from deploying when they fail? Not doing so means that half of the business assets remain uninsured. Half.
How can you test architecture automatically? You need to first see the code as data. The same applies for configurations, logs and everything else around a software system. It’s all data, and data is best dealt with through dedicated tools and skills.
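As a minimal illustration of treating code as data, the sketch below parses import statements out of a source string and flags a forbidden dependency. The package names and the rule itself are invented for illustration; in a real project a dedicated tool would run such checks over the whole codebase and fail the build in CI:

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A tiny architectural check: treat source code as data and verify a
// dependency rule instead of relying on code review to catch violations.
public class DependencyRuleCheck {

    private static final Pattern IMPORT =
            Pattern.compile("^import\\s+([\\w.]+);", Pattern.MULTILINE);

    // Hypothetical rule: domain code must not import from the infrastructure layer.
    static List<String> violations(String source) {
        Matcher m = IMPORT.matcher(source);
        return m.results()
                .map(r -> r.group(1))
                .filter(imp -> imp.startsWith("com.example.infrastructure"))
                .toList();
    }

    public static void main(String[] args) {
        String domainClass = """
                import java.util.List;
                import com.example.infrastructure.JdbcOrderStore;
                public class OrderPolicy {}
                """;
        // A CI gate would fail the deployment here instead of printing.
        System.out.println("Violations: " + violations(domainClass));
    }
}
```

The point is not the regex but the stance: once code, configurations, and logs are data, architectural decisions become checkable artifacts rather than tribal knowledge.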
This session will focus on architecting enterprise big data systems in AWS. This will include evaluating the big data capabilities in AWS, dealing with governance and business critical systems (high availability, disaster recovery, service level agreements, operations, automation …) in the cloud, mapping your on premise big data workloads to AWS, and other key aspects of cloud based enterprise big data systems.
Come to this session if you want to learn about architecting and delivering enterprise big data systems in AWS.
This session will focus on architecting and using linked data in AWS. This will include a brief introduction to linked data (RDF, OWL, SPARQL, Jena, Protégé), key concepts for making linked data / graph data more approachable in your organization, and a deep dive into Neptune - a groundbreaking linked data graph database technology in AWS.
Come to this session if you want to get up to speed on linked data in AWS.
This session will focus on the essential skills that are needed by software architects on a daily basis from ideation to product delivery. For many architects, it’s not the technology that gives you problems, but people.
Come to this session if you want to learn some tricks and tips for how to raise your game as an architect. This will be an interactive session - so bring your tough questions.
An evolutionary architecture supports incremental, guided change along multiple dimensions.
For many years, software architecture was described as the “parts that are hard to change later”. But then microservices showed that if architects build evolvability into the architecture, change becomes easier. This talk, based on my upcoming book, investigates the family of software architectures that support evolutionary change, along with how to build evolvable systems. Understanding how to evolve architecture requires understanding how architectural dimensions interact; I describe how to achieve appropriate coupling between components and services. Incremental change is critical for the mechanics of evolution; I cover how to build engineering and DevOps practices to support continuous change. Uncontrolled evolution leads to undesirable side effects; I cover how fitness functions build protective, testable scaffolding around critical parts to guide the architecture as it evolves.
The software development ecosystem exists in a state of dynamic equilibrium, where any new tool, framework, or technique leads to disruption and the establishment of a new equilibrium. Predictability is impossible when the foundation architects plan against changes constantly in unexpected ways. Instead, prefer evolvability over predictability. This keynote illustrates how to achieve evolutionary architectures and how to retrofit existing systems to support better evolution.
Building Evolutionary Architectures requires identifying and creating architectural fitness functions. This hands-on workshop defines fitness functions and provides group exercises to help identify and discover them.
According to the Building Evolutionary Architectures book, an architectural fitness function provides an objective integrity assessment of some architectural characteristic(s). This hands-on workshop provides examples of fitness functions and group exercises to identify, define, and implement a variety of fitness functions: atomic, holistic, continuous, triggered, temporal, and others.
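To give a hedged flavor of what an atomic fitness function can look like, the sketch below asserts an objective budget on one architectural characteristic, component coupling. The component graph and the coupling budget are invented for illustration; run on every commit, such a check would qualify as a continuous fitness function:

```java
import java.util.List;
import java.util.Map;

// An atomic fitness function in miniature: given a component dependency
// graph, assert that no component's efferent coupling (count of outgoing
// dependencies) exceeds an agreed budget.
public class CouplingFitnessFunction {

    static boolean withinBudget(Map<String, List<String>> dependencies, int maxEfferent) {
        return dependencies.values().stream()
                .allMatch(deps -> deps.size() <= maxEfferent);
    }

    public static void main(String[] args) {
        // Hypothetical component graph, e.g. extracted from build metadata.
        Map<String, List<String>> graph = Map.of(
                "orders",   List.of("catalog", "payments"),
                "catalog",  List.of(),
                "payments", List.of("orders", "catalog", "audit", "notifications"));

        // "payments" depends on four components, breaking a budget of three.
        System.out.println("coupling ok: " + withinBudget(graph, 3));
    }
}
```

What matters is that the assessment is objective and repeatable: the same graph and the same budget always yield the same verdict, which is what lets a fitness function guide evolution instead of opinion.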
How do you create creativity? This talk offers techniques and perspectives to discover, grow, and project your ideas.
Where do ideas for new talks, books, videos, software…ultimately, everything…come from? A common question at conference panels to the speakers is “Where do you get your ideas for talks?” This session answers that question, along with how some of us cultivate new ideas. This talk investigates three aspects of ideation:
— How do new ideas arise?
I cover various ways of synthesizing new ideas: switching axioms, mutation, oblique strategies, and a host of other techniques to generate the germ of a new idea.
— How do ideas grow into intellectual property?
I cover techniques for iterating on ideas to discover deeper meanings and connections. I also cover techniques to evolve and grow ideas.
— How do you communicate new IP?
I cover various ways to convey IP: presentations, articles, books, videos, and a host of other media. I talk about writing and presenting techniques that amplify your new idea and get it out into the world.
One key to building new IP is separating ideation, organization, and realization, which often become jumbled. By separating them, we can build practices to allow each to flourish. This talk provides concrete advice in each area to help realize new ideas.
Creativity is the reason we have all the Cool Stuff we have. This session investigates how to fan the spark of an idea into a roaring flame of intellectual erudition.
We've all got secrets, but nobody seems to know where to put them. This long-standing issue has plagued system design for ages and still has many broken implementations. While many consider this an application concern, the foundations rest in the design of the system. Join Aaron for an in-depth workshop that will cover the following secret management solutions:
Additionally, this workshop will demonstrate tools for discovering sensitive information checked in to your project.
This is a two session workshop and is best received by attending both sessions.
You will need the following tools installed and updated prior to the workshop:
Optional
We've all got secrets, but nobody seems to know where to put them. This long-standing issue has plagued system design for ages and still has many broken implementations. While many consider this an application concern, the foundations rest in the design of the system. Join Aaron for an in-depth workshop that will cover the following secret management solutions:
Additionally, this workshop will demonstrate tools for discovering sensitive information checked in to your project.
This is a two session workshop and is best received by attending both sessions.
You will need the following tools installed and updated prior to the workshop:
Optional
This course will cover the foundations of threat intelligence. It will consist of a combination of lecture and lab where we will work through the concepts of detecting indicators of attack and compromise, and building automation to process and eliminate it. This is a fully immersive, hands on workshop that will include a number of techniques, tools, and code.
It will cover the following topics:
Attendees will leave with a fully functional threat intelligence proof of concept system. This PoC can be used to design further capabilities or to evaluate larger commercial systems. Be prepared for an exciting day of code, modeling, and automation.
You will need the following tools installed and updated prior to the workshop:
Run docker pull
This course will cover the foundations of threat intelligence. It will consist of a combination of lecture and lab where we will work through the concepts of detecting indicators of attack and compromise, and building automation to process and eliminate it. This is a fully immersive, hands on workshop that will include a number of techniques, tools, and code.
It will cover the following topics:
Attendees will leave with a fully functional threat intelligence proof of concept system. This PoC can be used to design further capabilities or to evaluate larger commercial systems. Be prepared for an exciting day of code, modeling, and automation.
You will need the following tools installed and updated prior to the workshop:
Run docker pull
Software architecture is hard. It is full of tradeoff analysis, decision making, technical expertise, and leadership, making it more of an art than a science. The common answer to any architecture-related question is “it depends”. To that end, I firmly believe there are no “best practices” in software architecture because every situation is different, which is why I titled this talk “Essential Practices”: those practices companies and architects are using to achieve success in architecture. In this session I explore in detail the top 6 essential software architectural practices (both technical architecture and process-related practices) that will make you an effective and successful software architect.
This session is broken up into 2 parts: those essential architecture practices that relate to the technical aspects of an architecture (hard skills), and those that relate to the process-related aspects of software architecture (soft skills). Both parts are needed to make architecture a success.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or simply get more out of your team; you must first understand that having a “good idea” is simply the beginning. An idea must be communicated; a case must be made. Communicating that case well is as important, if not more so, than the strength of the idea itself.
You will learn 6 principles to make an optimal case and dramatically increase the odds that the other person will say “Yes” to your requests and suggestions, along with several strategies to build consensus within your teams. As a professional mentalist, Michael has been a student of psychology, human behavior and the principles of influence for nearly two decades. There are universal principles of influence that you must both understand and leverage if you want to be a more effective leader of change in your organization.
“Emerge your architecture” goes the agile mantra. That’s great. Developers get empowered and fluffy papers make room for real code structure. But, how do you ensure the cohesiveness of the result?
In this talk, we expose how architecture is an emergent property, how it is a commons, and we introduce an approach for how it can be steered.
Testing, pair programming and code reviewing are the proposed means to approach this problem. However, testing is only concerned with the functional side of a system, and thus, it is not able to capture structural contracts. Pair programming and reviewing work well in the small, but they do not scale when you need to handle the millions of details entailed in modern systems.
Another way of approaching the structure of the system is through standard checkers, such as FindBugs or Checkstyle. These are fine tools, but when they are left to only check standard idioms, the specifics of your architecture remain unverified.
The architecture of the system is important and it deserves special attention because it is too easy for it to go wrong in the long run, and it is too expensive when that happens. In this tutorial we detail a method of approaching this challenge by steering the architecture on a daily basis through:
One challenging aspect is that of constructing custom analysis tools during development. This process requires a new kind of infrastructure and associated skills that enable you to craft such checkers fast and cheaply. However, this is a technical detail. The critical benefit comes from making architectural decisions explicit, and from the daily actions of cleaning the state of the system.
This talk is targeted to both engineers and managers. We cover the basics of the process, and we accompany the conceptual descriptions with real life examples.
“Emerge your architecture” goes the agile mantra. That’s great. Developers get empowered and fluffy papers make room for real code structure. But, how do you ensure the cohesiveness of the result?
In this talk, we expose how architecture is an emergent property, how it is a commons, and we introduce an approach for how it can be steered.
Testing, pair programming and code reviewing are the proposed means to approach this problem. However, testing is only concerned with the functional side of a system, and thus, it is not able to capture structural contracts. Pair programming and reviewing work well in the small, but they do not scale when you need to handle the millions of details entailed in modern systems.
Another way of approaching the structure of the system is through standard checkers, such as FindBugs or Checkstyle. These are fine tools, but when they are left to only check standard idioms, the specifics of your architecture remain unverified.
The architecture of the system is important and it deserves special attention because it is too easy for it to go wrong in the long run, and it is too expensive when that happens. In this tutorial we detail a method of approaching this challenge by steering the architecture on a daily basis through:
One challenging aspect is that of constructing custom analysis tools during development. This process requires a new kind of infrastructure and associated skills that enable you to craft such checkers fast and cheaply. However, this is a technical detail. The critical benefit comes from making architectural decisions explicit, and from the daily actions of cleaning the state of the system.
This talk is targeted to both engineers and managers. We cover the basics of the process, and we accompany the conceptual descriptions with real life examples.
The recent #remote, #nomeetings, #noestimates, and #nobacklog trends tend to disrupt the classic approach to software development. In this talk, we explore this space, based also on my own experience of working with teams to build projects that rely on all of these.
Can #remote, #nomeetings, #noestimates, and #nobacklog really work? What if we employ them at the same time? If we look at the open-source space, we see that we can indeed build highly successful and innovative projects while working completely remotely, relying almost exclusively on asynchronous communication, with no estimates and even no real backlog.
What does that mean for the enterprise? At the very least, we should accept that these are not a fad, and we should not equate them with lack of engineering. Instead, we should look at successful examples and learn from them. In this talk, we do exactly that. We go through concrete examples and observe the implications on the way we work and on the systems we can build.
Software has no shape. Just because we happen to type text when coding, it does not mean that text is the most natural way to represent software.
We are visual beings. As such, we can benefit greatly from visual representations. We should embrace that possibility, especially given that software systems are likely the most complicated creations humankind has ever produced. Unfortunately, the current software engineering culture does not promote the use of such visualizations. And no, UML does not really count when we talk about software visualizations. As a joke goes, a picture is worth a thousand words, and UML took it literally. There is a whole world of other possibilities out there, and as architects we need to be aware of them.
In this talk, we provide a condensed, example-driven overview of various software visualizations starting from the very basics of what visualization is.
Visualization 101:
How to visualize
What to visualize
Interactive software visualizations
Visualization as data transformation
On the one hand, agile processes, like Scrum, promote a set of practices. On the other hand, they are based on a set of principles. While practices are important at present time, principles allow us to adapt to future situations.
In this talk we look at Inspection and Adaptation and construct an underlying theory to help organizations practice these activities. Why a theory? Because, as much as we want to, simply invoking “Inspect and Adapt” will not make it happen.
It turns out that for almost half a century the software engineering community has been working on a theory of reflection, which is defined as “the ability of a system to inspect and adapt itself”. We draw parallels between the design of software systems and the design of organizations, and learn several lessons:
Reflection must be built into the organization.
Reflection incurs a cost that must be planned for.
Inspection is easier than adaptation.
We can only reflect on what is explicit.
Reflection is a design tool that enables unanticipated evolution.
This sounds technical, but the most important observation is that reflection is an inherent human ability. It only requires training to develop it into a capability.
Looking at what occupies most of our energy during software development, our domain is primarily a decision-making business rather than a construction one. As a consequence, we should invest in a systematic discipline for making decisions.
Assessment denotes the process of understanding a given situation about a software system to support decision making.
During software development, engineers spend as much as 50% of the overall effort on doing precisely that: they try to understand the current status of the system to know what to do next. In other words, assessing the current system accounts for half of the development budget. These are just the direct costs. The indirect costs can be seen in the quality of the decisions made as a result.
One might think that an activity that has such a large economical impact would be a topic of high debate and improvement. Instead, it is typically treated like the proverbial elephant in the room. In this talk, we argue that we need to:
• Make assessment explicit. Ignoring it won’t make it go away. By acknowledging its existence you have a chance of learning from past experiences and of optimizing your approach.
• Tailor assessment. Currently, developers try to assess the system by reading the source code. This is highly ineffective in many situations, and it simply does not scale to the size of modern systems. You need tools, but not just any tools. Your system is special, and your most important problems will be special as well. That is why generic tools that produce nice-looking reports won’t make a difference. You need smart tools that are tailored to your needs.
• Educate ourselves. The ability to assess is a skill. Like any skill, it needs to be educated. Enterprises need to understand that they need to allocate the budget for those custom tools, and engineers need to understand that it is within their reach to build them. It’s not rocket science. It just requires a different focus.
We produce software systems at an ever increasing rate, but our ability to clean up after older systems does not keep up with that pace. Because of the impact of our industry, we need to look at software development as a problem of environmental proportions. We must build our systems with recycling in mind. As builders of the future world, we have to take this responsibility seriously.
On the one hand, this is great. On the other hand, our ability to get rid of older systems does not keep up with that pace. Let’s take an example: a recent study showed that there are some 10,000 mainframe systems still in use, containing some 200 billion lines of code. These systems are probably older than most developers. This shows that software is not that soft, and that once in use, systems produce long-lasting consequences. We cannot continue to disregard how we will deal with software systems at a later time.
Engineers spend as much as half of the effort on understanding software systems and the percentage grows with the size and age of the system. In essence, software engineering is more about dealing with existing systems than it is about building them. Two decades ago, Richard Gabriel coined the idea of software habitability. Indeed, given that engineers spend a significant part of their active life inside software systems, it is desirable for that system to be suitable for humans to live there. We go further and introduce the concept of software environmentalism as a systematic discipline to pursue and achieve habitability.
We must build our systems with recycling in mind. We have to be able to understand these systems in the future and be able to reuse and evolve them as needed.
Engineers have the right to build upon assessable systems and the responsibility to produce assessable systems. For example, even if code often has a textual shape, it is not text. Hence, reading is not the most appropriate approach to deal with code. The same applies to logs, configurations and anything else related to a software system. It’s all data, and data is best dealt with through tools. Not just any tools will do, either. We need custom tools that can deal with specific details. No system should get away without dedicated tools that help us take it apart and recycle it effectively.
Who should build those tools? Engineers. This implies that they have to be empowered to do it. We need to go back to the drawing board to (1) construct moldable development environments that help us drill into the context of systems effectively, (2) reinvent our underlying languages and technologies so that we can build assessable systems all the way down, and (3) reeducate our perception of what software engineering is.
Because of the spread and impact of the software industry, we need to look at software development as a problem of environmental proportions. As builders of the future world, we have to take this responsibility seriously.
Micronaut is a new JVM-based, full-stack framework for building modular, easily testable microservice applications. Unlike reflection-based IoC frameworks, which load and cache reflection data for every single field, method, and constructor in your code, Micronaut keeps your application's startup time and memory consumption independent of the size of your codebase.
The Micronaut framework shares many core values with Grails, including a focus on code simplicity and developer productivity. Micronaut offers many additional features for a new class of applications (e.g., microservices, serverless deployments, etc.) that may not be well-suited for Grails.
Compelling aspects of the Micronaut framework include:
In this talk, Jeff demonstrates how the future of Grails, GORM, and Micronaut are linked, as well as how the OCI Groovy and Grails team is taking productivity around developing microservices to the next level!
The end has come. REST is finally dead. The world of reactive data sources has killed it, and your users will be forever grateful. Gone from your applications are 'Refresh' buttons. Gone from your server code are the polling routines pinging remote services for changes. Customers' dashboards update seamlessly and in real time. Your users have never been happier.
If this sounds like a world that you want to live in, join us for this awesome workshop exploring the various options available to the enterprise architect when designing and implementing the reactive software layers and constructs necessary to make this dream a reality today!
Users are demanding applications which keep them informed of new events as soon as they happen. They are no longer willing to accept “Just hit the refresh button” or “It will update in a few minutes by itself” when demanding satisfaction of this new basic requirement. They are collaborating in real time, co-editing, co-authoring, 'co-laborating' with colleagues across the country and around the world, chatting over the phone or VOIP while working together via your app. They want their updates to travel from their laptop to their co-workers' screens as fast as their voice reaches them through the phone. This is a tough requirement to meet, especially when trying to put a modern face on a legacy app or integrating a shiny, new, reactive app with a legacy, REST-based datasource.
And it is not just your end-users that are clamoring for reactive data sources. No, the requirements for server-to-server communication of changes to data or state have forever changed. REST is no longer king in the world of web services. REST just doesn't cut the mustard any longer. Corporate users of your data services are demanding more flexible, reactive options when consuming your endpoints.
Join us for this thought provoking and exploratory workshop and learn the what, why and how of dealing with these new architectural challenges as we explore how you can architect your new or existing stack to satisfy the ever-increasing demand for 'real-time' applications and data services fed by reactive data sources regardless of your current technology choices.
This workshop is for developers of all levels from any programming language background. The patterns discussed will be applicable to all software stacks.
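The 'no refresh button' idea rests on push-based streams rather than request/response polling: the source pushes each change to subscribers as it happens. As a minimal, framework-free sketch using the JDK's built-in reactive-streams support (`java.util.concurrent.SubmissionPublisher`); the ticker and price values are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.SubmissionPublisher;

// Push-based updates: the publisher pushes changes to subscribers as they
// happen, so no client ever has to poll or hit "Refresh".
public class PriceTicker {

    public static void main(String[] args) throws Exception {
        List<String> dashboard = new ArrayList<>();

        SubmissionPublisher<String> prices = new SubmissionPublisher<>();
        // consume() registers a subscriber; the returned future completes
        // when the stream does.
        var done = prices.consume(dashboard::add);

        prices.submit("ACME 101.5");   // pushed to subscribers immediately
        prices.submit("ACME 102.0");
        prices.close();                // complete the stream

        done.get();                    // wait for async delivery to finish
        System.out.println(dashboard); // the "dashboard" saw every update
    }
}
```

In a real system the subscriber would be a WebSocket session or server-sent-events response rather than an in-memory list, but the inversion is the same: data flows to interested parties instead of being repeatedly asked for.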
The end has come. REST is finally dead. The world of reactive data sources has killed it, and your users will be forever grateful. Gone from your applications are 'Refresh' buttons. Gone from your server code are the polling routines pinging remote services for changes. Customers' dashboards update seamlessly and in real time. Your users have never been happier.
If this sounds like a world that you want to live in, join us for this awesome workshop exploring the various options available to the enterprise architect when designing and implementing the reactive software layers and constructs necessary to make this dream a reality today!
Users are demanding applications which keep them informed of new events as soon as they happen. They are no longer willing to accept “Just hit the refresh button” or “It will update in a few minutes by itself” when demanding satisfaction of this new basic requirement. They are collaborating in real time, co-editing, co-authoring, 'co-laborating' with colleagues across the country and around the world, chatting over the phone or VOIP while working together via your app. They want their updates to travel from their laptop to their co-workers' screens as fast as their voice reaches them through the phone. This is a tough requirement to meet, especially when trying to put a modern face on a legacy app or integrating a shiny, new, reactive app with a legacy, REST-based datasource.
And it is not just your end-users that are clamoring for reactive data sources. No, the requirements for server-to-server communication of changes to data or state have forever changed. REST is no longer king in the world of web services. REST just doesn't cut the mustard any longer. Corporate users of your data services are demanding more flexible, reactive options when consuming your endpoints.
Join us for this thought provoking and exploratory workshop and learn the what, why and how of dealing with these new architectural challenges as we explore how you can architect your new or existing stack to satisfy the ever-increasing demand for 'real-time' applications and data services fed by reactive data sources regardless of your current technology choices.
This workshop is for developers of all levels from any programming language background. The patterns discussed will be applicable to all software stacks.
You built the app. You are ready to launch! But how do you proceed from there? You need to ensure that, once deployed, your app remains 'up', healthy, available and secure. For that, you are going to need some serious tools in your belt! Join us as we explore the tools and services you can use to complete your deployment stack and give you all of the monitoring and control that you need for a successful launch!
You adopted microservices architecture to fundamentally change your time-to-market, shrinking your code-to-production time from months to days, perhaps even hours. The first generation of microservices was primarily shaped by Netflix OSS and implemented through numerous Spring Cloud annotations scattered throughout your business logic. The second generation of the microservices architectural style was enabled by the rise of Kubernetes, now the de facto standard cloud native application infrastructure. The next generation of microservices will leverage sidecars and a service mesh.
In this session, we will give you a taste of Envoy and Istio, two open source projects that will change the way you write distributed, cloud native, Java applications on Kubernetes & OpenShift. We will demonstrate tracing, circuit breaking, traffic shaping, network fault injection, smart canaries, dark launches and much more.
The lean startup is changing the way companies are built and new products are launched. In this session we dive into the concepts and ideas as well as strategies to incorporate into your own organization.
.
Our technical world is governed by facts. In this world Excel files and technical diagrams are everywhere, and too often this way of looking at the world makes us forget that the goal of our job is to produce value, not to fulfill specifications.
Feedback is the central source of agile value. The most effective way to obtain feedback from stakeholders is a demo. Good demos engage. They materialize your ideas and put energies in motion. They spark the imagination and uncover hidden assumptions. They make feedback flow.
But, if a demo is the means to value, shouldn’t preparing the demo be a significant concern? Should it not be part of the definition of done?
And that is not all. A good demo tells a story about the system. This means that you have to make the system tell that story. Not a user story full of facts, but a story that makes users want to use the system. That tiny concern can change the way you build your system. Many things go well when demos come out right.
Demoing is a skill, and like any skill, it can be trained. Regardless of the subject, there always is an exciting demo lurking underneath. It just takes you to find it. And to do it.
In this session we will get to exercise that skill.
By now I bet your company has hundreds, maybe thousands of services, heck, you might even consider some of them micro in stature! And while many organizations have plowed headlong down this particular architectural path, your spidey sense might be tingling… how do we keep this ecosystem healthy?
In this talk, I will go beyond the buzzwords into the nitty gritty of actually succeeding with a service based architecture. We will cover the principles and practices that will make sure your systems are stable and resilient while allowing you to get a decent night's sleep!
Most nontrivial software systems suffer from significant levels of technical and architectural debt. This leads to exponentially increasing cost of change, which is not sustainable for a longer period of time. The single best thing you can do to counter this problem is to give some love to your architecture by carefully managing and controlling the dependencies among the different elements and components of a software system. For that purpose we will introduce a DSL (domain specific language) that can be used to describe and enforce architectural blueprints. Moreover we will make an excursion into the topic of legacy software modernization.
In the hands-on part of this workshop, participants will use Sonargraph to assess and analyze a software system of their choice (Java, C/C++, C# or Python) and design an architectural model using the domain specific language introduced in the session. The tool and a free 60-day license will be provided during the workshop.
This workshop will use Sonargraph-Architect to create architectural models for a project of your choice. While I will bring the software and license keys on a flash drive, you can install it up front by registering on www.hello2morrow.com, downloading the tool and requesting an evaluation license. If possible, please bring a project to analyze that can be built on your laptop. Supported languages are Java, C#, C/C++ and Python. If you cannot bring a project, you will be provided with an open source project to work on.
Software metrics can be used effectively to judge the maintainability and architectural quality of a code base. Even more importantly, they can be used as “canaries in a coal mine” to warn early about dangerous accumulations of architectural and technical debt.
This session will introduce some key metrics that every architect should know and also look into the current research regarding software architecture metrics. Since we have 90 minutes there will be some time for hands-on software assessments. If you'd like to follow along, bring your laptop and install Sonargraph-Explorer from our website www.hello2morrow.com. (It's free and covers most of the metrics we will introduce.) Bring a Java, C#, C/C++ or Python project and run the metrics on your own code. Or just download an open source project and learn how to use metrics to assess software and detect issues.
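To make one such metric concrete, here is a minimal, tool-free sketch of Average Component Dependency (ACD), John Lakos's measure of how many components each component depends on, directly or transitively (counting itself), averaged over the system. The component names and graph are invented for illustration:

```python
def depends_on(graph, start):
    """All components reachable from `start`, including itself."""
    seen = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

def acd(graph):
    """Average Component Dependency over every node in the graph."""
    nodes = set(graph) | {d for deps in graph.values() for d in deps}
    total = sum(len(depends_on(graph, n)) for n in nodes)
    return total / len(nodes)

# A clean three-layer dependency structure: ui -> service -> dao.
layered = {"ui": ["service"], "service": ["dao"], "dao": []}
print(acd(layered))  # (3 + 2 + 1) / 3 = 2.0
```

A cycle anywhere in the graph pulls every member of the cycle into every other member's dependency set, which is exactly why this family of metrics flags tangled architectures early.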
Kafka has become a key data infrastructure technology, and we all have at least a vague sense that it is a messaging system, but what else is it? How can an overgrown message bus be getting this much buzz? Well, because Kafka is merely the center of a rich streaming data platform that invites detailed exploration.
In this talk, we’ll look at the entire open-source streaming platform provided by the Apache Kafka and Confluent Open Source projects. Starting with a lonely key-value pair, we’ll build up topics, partitioning, replication, and low-level Producer and Consumer APIs. We’ll group consumers into elastically scalable, fault-tolerant application clusters, then layer on more sophisticated stream processing APIs like Kafka Streams and KSQL. We’ll help teams collaborate around data formats with schema management. We’ll integrate with legacy systems without writing custom code. By the time we’re done, the open-source project we thought was Big Data’s answer to message queues will have become an enterprise-grade streaming platform, all in 90 minutes.
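To make the partitioning idea above concrete, here is a toy sketch (not the real Kafka client) of keyed partitioning: records with the same key always hash to the same partition, which is what gives Kafka per-key ordering. Kafka's own default partitioner uses murmur2; md5 here is just a reproducible stand-in:

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a record key to a partition with a stable hash."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

topic_partitions = 6
for key in ["user-42", "user-42", "user-7"]:
    # Both "user-42" records land on the same partition, so a
    # consumer of that partition sees them in produced order.
    print(key, "->", partition_for(key, topic_partitions))
```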
The toolset for building scalable data systems is maturing, having adapted well to our decades-old paradigm of update-in-place databases. We ingest events, we store them in high-volume OLTP databases, and we have new OLAP systems to analyze them at scale—even if the size of our operation requires us to grow to dozens or hundreds of servers in the distributed system. But something feels a little dated about the store-and-analyze paradigm, as if we are missing a new architectural insight that might more efficiently distribute the work of storing and computing the events that happen to our software. That new paradigm is stream processing.
In this workshop, we’ll learn the basics of Kafka as a messaging system, learning the core concepts of topic, producer, consumer, and broker. We’ll look at how topics are partitioned among brokers and see the simple Java APIs for getting data in and out. But more than that, we’ll look at how we can extend this scalable messaging system into a streaming data processing system—one that offers significant advantages in scalability and deployment agility, while locating computation in your data pipeline in precisely the places it belongs: in your microservices and applications, and out of costly, high-density systems.
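As a taste of the kind of computation a streaming system performs, here is a framework-free sketch of counting events per key within fixed (tumbling) one-minute windows. Kafka Streams does this with fault tolerance and elastic scaling, but the core idea is just this; the event data is invented:

```python
from collections import defaultdict

WINDOW_MS = 60_000  # one-minute tumbling windows

def tumbling_counts(events):
    """events: iterable of (timestamp_ms, key) pairs -> per-window counts."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % WINDOW_MS)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(1_000, "page_view"), (2_000, "page_view"), (61_000, "page_view")]
print(tumbling_counts(events))
# {(0, 'page_view'): 2, (60000, 'page_view'): 1}
```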
Come to this workshop to learn how to do streaming data computation with Apache Kafka!
Workshop Repo:
https://github.com/confluentinc/kafka-workshop.git
Here's what we need:
Install Docker for Windows or Docker for Mac on your machine. If you're using Linux, you probably know better than I do how to get Docker running. :)
Clone the workshop repo and perform exercise 0. This is very important, because it will do the bulk of the downloading you need to do to get the exercises running. I may make some small tweaks to the Docker Compose file between now and the workshop, but this should result in minimal additional downloading that conference wifi can accommodate. If you do the docker-compose pull on workshop day, it might be painful.
Developers and architects are increasingly called upon to solve big problems, and we are able to draw on a world-class set of open source tools with which to solve them. Problems of scale are no longer consigned to the web’s largest companies, but are increasingly a part of ordinary enterprise development. At the risk of only a little hyperbole, we are all distributed systems engineers now.
In this talk, we’ll look at four distributed systems architectural patterns based on real-world systems that you can apply to solve the problems you will face in the next few years. We’ll look at the strengths and weaknesses of each architecture and develop a set of criteria for knowing when to apply each one. You will leave knowing how to work with the leading data storage, messaging, and computation tools of the day to solve the daunting problems of scale in your near future.
Everybody is moving to microservices, but Gartner says that by 2019, 90% of organizations will find microservices too disruptive and switch to miniservices instead. In the meantime, enterprises continue to look at their monolithic applications and macroservices and plan for microservice nirvana, not knowing how likely they are to fail. As with most new technologies, a one-size-fits-all approach is a bad idea.
Come to this session to learn the differences among microservices, miniservices, macroservices, and monoliths and when to use each of them. We’ll investigate DevOps and organizational structure in addition to the technology specifics.
Kubernetes out of the box is a strong platform for running and coordinating large collections of services, containers, and applications. As is, Kubernetes is powerful for many solutions.
Remember Underdog? He was a mild-mannered dog, but when stress and conflict were introduced to the plot, he took a magic pill and became a superhero. Istio is a superhero for Kubernetes.
Istio is an open, platform-independent service mesh that manages communications between services in a transparent way. With a distributed architecture many things can fail, slow down and become less secure. Istio provides solutions to those stresses in our plot toward this architecture style:
• Traffic management
• Observability
• Policy enforcement
• Service identity and security
We will explore these benefits with Istio through some working examples on Kubernetes. The key point is that this meshing solution allows your Java code to be less coupled to the inherent weaknesses of a distributed platform.
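Istio performs traffic shaping in the sidecar proxies, entirely outside your application code; the following is only a toy illustration of the weighted (canary) routing concept, with invented version names, to make the idea concrete before seeing the real thing in the demos:

```python
def route(request_id: int, canary_weight_percent: int) -> str:
    """Deterministically send N% of requests to the canary version."""
    if request_id % 100 < canary_weight_percent:
        return "v2-canary"
    return "v1-stable"

# With a 10% weight, 10 of every 100 requests exercise the canary.
sent = [route(i, 10) for i in range(100)]
print(sent.count("v2-canary"), "of 100 requests hit the canary")
```

In Istio this weight lives in a VirtualService resource, so shifting traffic between versions is a configuration change, not a code change.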
Explore another learning medium to add to your toolbox: Katacoda.
This is a 90-minute mini-workshop where you learn to be an author on Katacoda. Bring your favorite laptop with just a browser and a text editor.
Have a Github account and bring your laptop. Let's learn together.
We are continuously learning and keeping up with the changing landscapes and ecosystems in software engineering. Some technologies are difficult to learn or may take too much time for us to set up just to get to the key points of each technology. One of the reasons why you might be here at NFJS is to do exactly that – to learn. Great!
There are many mediums we use to learn, and we often combine them for different perspectives: books, how-to articles, GitHub readmes, blog entries, recorded talks on YouTube, and online courses. All of these help us sort through the new concepts. I'm sure you have your favorites.
Katacoda is becoming a compelling platform for learning and teaching concepts. You can also author your own topics for public communities or private teams. Katacoda offers a platform that hosts live server command lines in your browser with a split screen for course material broken into easy to follow steps.
Many developers aspire to become architects. Some of us currently serve as architects, while the rest of us may hope to become one some day. We all have worked with architects, some good, and some that could be better. What are the traits of a good architect? What are the skills and qualities we should develop to become a very good one?
Come to this presentation to learn what can make the journey to becoming a successful architect a pleasant one.
We all have seen our share of bad code, and some really good code as well. What are some of the common anti-patterns that seem to recur over and over in code that sucks? By learning about these code smells and avoiding them, we can greatly improve our code.
Come to this talk to learn about some common code smells and to share your experiences as well.
Java Modules are the future. However, our enterprise applications have legacy code, lots of it. How in the world do we migrate from the old to the new? What are some of the challenges? In this presentation we will start with an introduction to modules and learn how to create them. Then we will dive into the differences between unnamed modules, automatic modules, and explicit modules. After that we will discuss some key limitations of modules, things that may surprise your developers if they're not aware of them. Finally we will discuss how to migrate current applications to use modules.
.
Since the dawn of software development, we've struggled with a huge disconnect between the management world and the engineering world. We try to explain our problems in terms of “technical debt”, but somehow the message seems to get lost in translation, and we drive our projects into the ground, over and over again.
What if we could detect the earliest indicators of a project going off the rails, and had data to convince management to take action? What if we could bridge this communication gap once and for all?
In this session, we'll focus on a key paradigm shift for how we can measure the human factors in software development, and translate the “friction” we experience into explicit risk models for project decision-making.
How does your team decide what's the most important problem to solve?
When we ask a question like “What's the biggest problem?”, it doesn't mean the biggest problems will come to mind. Instead, we're biased to think about what's bothered us most recently, annoyances, or pet peeves. It's really easy to spend tons of time working on improvements that make little difference.
But what if we had data that pointed us to the biggest problems across the team?
In this session, we'll dig into the data from a 1-month case study tracking Idea Flow Metrics, and discuss the patterns of friction during development, and how to identify the biggest opportunities for improvement with data.
What makes software development complex isn't the code, it's the humans. The most effective way to improve our capabilities as an organization is to better understand ourselves.
In this session, we'll break down the dynamics of culture into explicit architecture models based on a synthesis of research that spans cognitive science, biology and philosophy. We'll discuss the nature of identity, communication, relationships, leadership and human motivation by thinking about humans like code!
If you want to better understand the crazy humans around you, you won't want to miss this talk!
On the inside, Kafka is schemaless, but there is nothing schemaless about the worlds we live in. Our languages impose type systems, and the objects in our business domains have fixed sets of properties and semantics that must be obeyed. Pretending that we can operate without competent schema management does us no good at all.
In this talk, we’ll explore how the different parts of the open-source Kafka ecosystem help us manage schema, from KSQL’s data format opinions to the full power of the Confluent Schema Registry. We will examine the Schema Registry’s operations in some detail, including how it handles schema migrations, and look at examples of client code that makes proper use of it. You’ll leave this talk seeing that schema is not just an inconvenience that must be remedied, but a key means of collaboration around an enterprise-wide streaming platform.
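To give a flavor of what a schema registry checks before accepting a new schema version, here is a deliberately simplified sketch of the backward-compatibility rule for record types: a new schema can read old data only if every field it adds carries a default. The field representation below is invented for illustration, not the Avro specification:

```python
def backward_compatible(old_fields: dict, new_fields: dict) -> bool:
    """fields: name -> {'type': ..., 'default': ...} ('default' is optional)."""
    for name, spec in new_fields.items():
        if name not in old_fields and "default" not in spec:
            return False  # a new reader can't fill this field from old data
        if name in old_fields and old_fields[name]["type"] != spec["type"]:
            return False  # type changes are incompatible in this sketch
    return True

v1 = {"id": {"type": "long"}}
v2 = {"id": {"type": "long"}, "email": {"type": "string", "default": ""}}
print(backward_compatible(v1, v2))  # True: the added field has a default
```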
Once upon a time, it was just me and my app – the days when all I had to know was “get data, put on screen.” Fast forward ten years later, and what the hell happened? The level of complexity that we deal with in modern software development is insane.
Are we really better off than we were 10 years ago, or have we just been putting out our fires with gasoline?
In this session, we'll turn the projector off, and focus on a deep-dive discussion, contrasting the world of 10 years ago versus today. Rather than generalizations and hand-waving about the golden promises of automation and magic frameworks, we're going to question everything and anchor our discussions in concrete experience.
Looking back across your career in software development, how has the developer experience changed?
First, we'll dig into the biggest causes of friction in software development, and how our solutions have created new problems. Then we'll focus on distilling strategies for overcoming these challenges, and how we can take our teams, and our industry in a better direction.
Using the Microservices Architectural Style to incrementally adopt an Event-driven Architecture (EDA) lowers up-front costs while decreasing time-to-market. EDA extracts value from existing occurrences without invasive refactoring or disruption of existing application development efforts. Implementing Event-driven Microservices yields intelligent, scalable, extensible, reactive endpoints.
This session will cover the fundamentals, patterns, techniques and pitfalls of Event-driven Microservices with several demos leveraging Spring-Boot, Camel, ActiveMQ and Docker.
No matter the techniques used to make enterprise solutions Highly Available (HA), failure is inevitable at some point. Resiliency refers to how quickly a system reacts to and recovers from such failures. This presentation discusses various architectural resiliency techniques and patterns that help increase Mean Time to Failure (MTTF), also known as Fault Tolerance, and decrease Mean Time to Recovery (MTTR).
Failure of Highly Available (HA) enterprise solutions is inevitable. However, in today's highly interconnected global economy, uptime is crucial. The impact of downtime is amplified when considering Service Level Agreement (SLA) penalties and lost revenue. Even more damaging is the harm to an organization's reputation as frustrated customers express their grievances on social media. Resiliency, often overlooked in favor of availability, is essential.
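One classic pattern from this space is the circuit breaker: after a run of consecutive failures, the circuit opens and calls fail fast instead of hammering a struggling dependency, improving recovery time. The following is a minimal sketch with invented names; real implementations add timeouts and a half-open probing state:

```python
class CircuitBreaker:
    """Fail fast after `threshold` consecutive failures."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1  # count the failure, then propagate it
            raise
        self.failures = 0       # any success closes the circuit again
        return result

cb = CircuitBreaker(threshold=2)
for _ in range(2):
    try:
        cb.call(lambda: 1 / 0)  # two consecutive failures...
    except ZeroDivisionError:
        pass
print(cb.open)  # True: further calls now fail fast
```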
Interest in MongoDB and other NoSQL platforms has waxed and waned over the years; however, Mongo remains an enormously useful tool.
In this session, you will learn everything you need to know to master MongoDB.
We dive deep into advanced topics: data architecture, tooling options, clustering, replication and sharding. You'll learn when Mongo is the perfect tool for the job (and when it isn't), and what's new in 2018.
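As a taste of the sharding portion, here is a toy illustration of the range-based sharding idea: a document is routed to a shard by comparing its shard-key value against chunk boundaries. MongoDB manages chunks, splitting and balancing for you; the boundaries and shard names below are invented:

```python
import bisect

# Upper-bound boundaries for each chunk, in shard-key order.
CHUNK_BOUNDS = ["g", "n", "t"]  # ranges: [min,'g') ['g','n') ['n','t') ['t',max)
SHARDS = ["shard0", "shard1", "shard2", "shard3"]

def shard_for(key: str) -> str:
    """Route a shard-key value to the shard owning its chunk range."""
    return SHARDS[bisect.bisect_right(CHUNK_BOUNDS, key)]

print(shard_for("alice"))    # shard0
print(shard_for("mallory"))  # shard1
```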
You've heard the old adage “It's not what you know, it's who you know.” The focus of this session is divided between ways to better connect with everyone you meet and ways to grow your network, help and influence people, and ultimately build long-term relationships and your reputation.
Networking isn't about selling, nor is it about “taking.” Done properly, it benefits everyone. Among the benefits are strengthening relationships; getting new perspectives and ideas; building a reputation of being knowledgeable, reliable and supportive; having access to opportunities; and more!
Slides available online: https://prezi.com/ck1fdbhgqwiq/?token=8f8240f753ad9ae2c50ce696657020f40a877a40fa224790652eb412ac5eb8d3
Whether starting a new greenfield application or analyzing the vitality of an existing application, one of the decisions an architect must make is which architecture style to use (or to refactor to). Microservices? Service-Based? Microkernel? Pipeline? Layered? Space-Based? Event-Driven? SOA? Having the right architecture style in place is essential to the success of any application, big or small. Come to this fast-paced session to learn how to analyze your requirements and domain to make the right choice about which architecture style is right for your situation.
Agenda
Very few applications stand alone anymore. Rather, they are combined together to form holistic systems that perform complex business functions. One of the big challenges when integrating applications is choosing the right integration styles and usage patterns. In this session we will explore various techniques and patterns for application integration, and look at what purpose and role open source integration hubs such as Camel and Mule play in the overall integration architecture space (and how to properly use them!). Through actual integration scenarios and coding examples using Apache Camel, you will learn which integration styles and patterns to use for your system and how open source integration hubs play a part in your overall integration strategy.
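One of the classic integration patterns covered is the Content-Based Router (in Camel, roughly `from(...).choice().when(...)`). Here is a framework-free sketch of the same idea, with invented channel names, just to show the shape of the pattern:

```python
def route_message(message: dict) -> str:
    """Pick an output channel based on the message's own content."""
    if message.get("type") == "order":
        return "jms:queue:orders"
    if message.get("type") == "invoice":
        return "jms:queue:invoices"
    return "jms:queue:dead-letter"  # unroutable messages get parked

print(route_message({"type": "order", "id": 7}))  # jms:queue:orders
```

An integration hub like Camel adds the transports, error handling and monitoring around exactly this kind of routing decision.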
Agenda: