Jonathan Johnson is an independent software architect with a concentration on helping others unpack the riches in the cloud native and Kubernetes ecosystems.
For 30 years Jonathan has been designing useful software to move businesses forward. His career began with laboratory instrument software, and over the years his focus has moved with industry advances, benefiting from Moore's Law. He was enticed by the advent of object-oriented design and applied it to financial software. As banking moved to the internet, enterprise applications took off and Java exploded onto the scene; he has inhabited that ecosystem ever since. After a few years he returned to laboratory software, leveraging Java-based state machines and enterprise services to manage the terabytes of data flowing out of DNA sequencing instruments. As a hands-on architect, he applied the advantages of microservices, containers, and Kubernetes to a laboratory management platform.
Today he enjoys sharing his experience with peers. He provides perspective on ways to modernize application architectures while adhering to the fundamentals of modularity: high cohesion and low coupling.
Java developers, and specifically Spring enthusiasts, fear not: Spring-based containers on Kubernetes continue to improve!
Write once, run anywhere (WORA) is the promise of the JVM. Package once, run anywhere (PORA) is the promise of containers. Given these two postulates, aren't WORA and PORA the same goal achieved with different technologies? Yes, and running both is redundant and wasteful on expensive cloud servers. GraalVM gives us a way to reduce this waste, with significant CPU and memory advantages, and that solution has arrived with Spring Native.
We’ll walk through a hands-on session to see how much Spring Native can save you.
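To give a rough sense of what is involved before the hands-on session, here is a minimal sketch of a Spring Boot application; with the Spring Native/GraalVM ahead-of-time tooling on the build path, the same source can be compiled into a native executable instead of running on the JVM. The class and endpoint names are illustrative, not from the session material.

```java
// A minimal Spring Boot web application. With Spring Native (or the AOT support
// built into newer Spring Boot releases) plus GraalVM, this same source can be
// compiled ahead of time into a native binary, typically trading slower builds
// for much faster startup and a smaller memory footprint.
// Class and endpoint names are illustrative only.
package demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class HelloNativeApplication {

    @GetMapping("/hello")
    public String hello() {
        return "Hello from a native image!";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloNativeApplication.class, args);
    }
}
```

The point is that moving to a native image is largely a build-time concern rather than a rewrite of the application code.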
Cloud native applications are distributed to reap the benefits of resource scaling. Distributed computing is powerful, but it also forces you to think differently about designing applications. Atomic, modular, highly cohesive, and loosely coupled applications play nicely on these distributed systems. But that power comes with costs.
We’ll look at architecture styles that adapt well to running in containers and on Kubernetes. Along the way, we’ll note the extra things your application should do to play nicely with distributed cloud native targets.
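One concrete example of "playing nicely" is exposing health signals the platform can act on. The sketch below assumes Spring Boot Actuator is on the classpath; the downstream dependency check is a placeholder, not a real integration.

```java
// A custom Actuator health indicator. Kubernetes liveness/readiness probes can
// point at the /actuator/health endpoints so the platform only routes traffic to
// instances that report themselves ready.
// The "downstream" check here is a placeholder for whatever your app depends on.
package demo;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class DownstreamHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        boolean dependencyReachable = pingDownstream();
        if (dependencyReachable) {
            return Health.up().build();
        }
        return Health.down().withDetail("downstream", "unreachable").build();
    }

    private boolean pingDownstream() {
        // Replace with a real check (database ping, queue connection, etc.).
        return true;
    }
}
```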
Distributed computing has surprising challenges. When targeting applications at Kubernetes, your capabilities should meet its demands. Raising your team's understanding of the cloud native maturity model will increase your success with Kubernetes solutions.
I get it: we don't always follow the best techniques, and no team does everything perfectly. Still, it would be a shame not to lay out a plan to eventually raise your capability goals. We'll examine some worthwhile goals and applicable techniques.
Distributed computing is hard. A significant challenge is that when you want to get something done, you usually have to call another service. Understanding how services are discovered and connected is fundamental to understanding Kubernetes' strengths.
We’ll walk through some networking concepts and hands-on examples of various techniques to understand simple to sophisticated traffic control and routing. Ingress and Istio will be demystified.
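To make the discovery piece concrete ahead of time: inside a cluster, a Service name resolves through the cluster's DNS, so a plain HTTP client can reach a peer by name. A minimal sketch follows; the service name and path are hypothetical.

```java
// Calling a peer service by its Kubernetes Service name. The cluster DNS
// (CoreDNS/kube-dns) resolves "inventory" to the Service's cluster IP, and
// kube-proxy (or an Istio sidecar, if injected) load-balances to healthy pods.
// "inventory" and "/api/items" are made-up names for illustration.
package demo;

import org.springframework.web.client.RestTemplate;

public class InventoryClient {

    private final RestTemplate rest = new RestTemplate();

    public String fetchItems() {
        // The short name works within the same namespace; the fully qualified
        // form is inventory.<namespace>.svc.cluster.local
        return rest.getForObject("http://inventory/api/items", String.class);
    }

    public static void main(String[] args) {
        System.out.println(new InventoryClient().fetchItems());
    }
}
```

Ingress and Istio layer more sophisticated routing and traffic control on top of this same DNS-based discovery.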
Hopefully, your DevOps team is keeping your platform healthy and its delivery system frictionless, with new updates continuously rolling out. How can you achieve an automated and reliable delivery pipeline? Fortunately, your pipelines can all run on Kubernetes. One of the highest maturity goals for pipelines is an automated delivery model, specifically Progressive Delivery.
We’ll look at delivery model techniques and how Kubernetes and meshes provide a framework to make deliveries successful.
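As a rough mental model of what a mesh does during a progressive rollout, here is a conceptual sketch of weighted routing between a stable and a canary backend. In practice the weights live in mesh or rollout configuration (for example an Istio VirtualService managed by Argo Rollouts or Flagger), not in application code; the names and numbers below are illustrative only.

```java
// A conceptual sketch of the weighted traffic splitting behind a canary rollout:
// a small percentage of requests goes to the new version, and the weight is
// ratcheted up while health metrics stay green. A real deployment delegates this
// to the mesh or ingress; this class only illustrates the idea.
package demo;

import java.util.concurrent.ThreadLocalRandom;

public class CanaryRouter {

    private volatile int canaryWeightPercent = 10; // start by sending 10% of traffic

    public String chooseBackend() {
        int roll = ThreadLocalRandom.current().nextInt(100);
        return roll < canaryWeightPercent ? "service-v2 (canary)" : "service-v1 (stable)";
    }

    public void promote(int newWeightPercent) {
        // A progressive delivery controller would adjust this automatically
        // based on observed error rates and latency.
        this.canaryWeightPercent = Math.min(100, Math.max(0, newWeightPercent));
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter();
        for (int i = 0; i < 10; i++) {
            System.out.println(router.chooseBackend());
        }
    }
}
```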
Don't fear entropy, embrace it.
When you move toward distributed computing, the likelihood of failure increases proportionally. It's not your fault, it's simply physics. Once you start spreading your data and applications across more devices, access to resources such as CPU, memory, and I/O has a higher rate of failure.
Embrace entropy with chaos experiments and raise your cloud native capability model. We'll investigate some of the leading chaos frameworks for Kubernetes and dive into hands-on experiments scoped within a blast radius.
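To give a flavor of what a scoped fault-injection experiment can look like at the application level, here is a hedged sketch of a Spring filter that adds latency to a small fraction of requests; the probability and delay act as a crude blast-radius control. It assumes a recent Spring Boot with the jakarta.servlet namespace, and the constants are made up. Kubernetes-native chaos frameworks such as Chaos Mesh or Litmus inject faults at the pod and network level instead.

```java
// A simple application-level fault injector: with a small probability, delay the
// request to simulate a slow dependency. The probability and delay bound the
// "blast radius" of the experiment. This is only a sketch of the idea; dedicated
// chaos frameworks operate on pods, nodes, and the network rather than in code.
package demo;

import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;

@Component
public class LatencyChaosFilter extends OncePerRequestFilter {

    private static final double INJECTION_PROBABILITY = 0.05; // 5% of requests
    private static final long DELAY_MILLIS = 500;

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        if (ThreadLocalRandom.current().nextDouble() < INJECTION_PROBABILITY) {
            try {
                Thread.sleep(DELAY_MILLIS); // simulate a slow downstream call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        chain.doFilter(request, response);
    }
}
```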