Adam Tornhill is a programmer who combines degrees in engineering and psychology. He’s the founder of CodeScene, where he designs tools for code analysis. Adam is also the author of multiple technical books, including the best-selling Your Code as a Crime Scene and Software Design X-Rays. Adam’s other interests include modern history, music, retro computing, and martial arts.
Code quality fails to gain traction at the business level, leading software companies to prioritize new features over maintaining a healthy codebase. This trade-off results in technical debt that consumes up to 40% of developers' time, causing stress, frustration, and costly delays in product delivery. Despite its importance, it's hard to build a business case for code quality: how do we quantify and communicate the benefits to non-technical stakeholders? Or even inside our own engineering team?
In this mini-keynote, Adam presents groundbreaking industry benchmarks and innovative metrics that, for the first time, enable organizations to compare their performance with top industry players. By leveraging statistical models, he demonstrates how you can predict the business gains of technical improvements in terms of increased development velocity and bug reduction. With these actionable recommendations, your organization can ship software faster and gain a competitive edge.
Prioritizing technical debt is a hard problem: modern systems might have millions of lines of code and multiple development teams, and no one has a holistic overview. In addition, there's always a trade-off between improving existing code and adding new features, so we need to use our time wisely.
What if we could mine the collective intelligence of all contributing programmers and start making decisions based on information from how the organization actually works with the code?
In this workshop, you'll learn how easily obtained version-control data lets you uncover the behavior and patterns of the development organization. This language-neutral approach lets you prioritize the parts of your system that benefit the most from improvements so that you can balance short- and long-term goals guided by data.
In this session, you’ll learn:
How to prioritize technical debt in large-scale systems
How to balance the trade-off between improving existing code and adding new features
How to visualize long-term trends in technical debt
How to take a data-driven approach to technical debt
During this workshop, you get access to CodeScene – a behavioral code analysis tool that automates the analyses – and use it for the practical exercises. We’ll do the exercises on real-world codebases in Java, C#, JavaScript, and more to discover real issues.
Participants are also encouraged to take this opportunity to analyze their own codebase for actionable takeaways.
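The core technique referenced above, mining hotspots from version-control data, can be sketched in a few lines. This is a minimal illustration, not CodeScene's actual implementation; the sample log contents and file names below are fabricated:

```python
from collections import Counter

def hotspots(git_log: str, top_n: int = 3):
    """Count how often each file appears in a `git log --format= --name-only`
    style listing and return the most frequently changed files."""
    files = [line.strip() for line in git_log.splitlines() if line.strip()]
    return Counter(files).most_common(top_n)

# Fabricated sample data for illustration; in practice you would pipe in
# the output of: git log --format= --name-only
sample_log = """
src/payment/Billing.java
src/payment/Billing.java
src/ui/Form.js
src/payment/Billing.java
src/util/Dates.java
src/ui/Form.js
"""

print(hotspots(sample_log))
# [('src/payment/Billing.java', 3), ('src/ui/Form.js', 2), ('src/util/Dates.java', 1)]
```

In a real analysis, change frequency like this is typically combined with a complexity measure so that only frequently changed *and* complicated code is flagged as a hotspot.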
As AI accelerates the pace of coding, organizations will have a hard time keeping up; acceleration isn't useful if it's driving our projects straight into a brick wall of technical debt. This presentation explores the consequences of AI-assisted coding, weighing its potential to improve productivity against the risks of deteriorating code quality.
Adam delivers a fact-based examination of the short- and long-term implications of using AI assistants in software development. Drawing from extensive research analyzing over 100,000 AI-driven refactorings in real-world codebases, we scrutinize the claims made by contemporary AI tools, demonstrating that increased coding speed does not necessarily equate to true productivity. We also look at the correctness of AI-generated code, a concern for many organizations today due to the error-prone nature of current AI tools.
Finally, the talk offers strategies for succeeding with AI-assisted coding. This includes introducing a set of automated guardrails that act as feedback loops, ensuring your codebase remains maintainable even after adopting AI-assisted coding.
Key insights include:
Novel Quality Metrics: Introduction and application of innovative metrics designed to act as guardrails, ensuring that AI contributions maintain high standards of code quality.
Balancing Speed and Quality: Strategies to leverage AI for increased efficiency while avoiding the pitfalls of technical debt.
Real-World Data: Fact-based presentation from comprehensive research on real-world codebases.
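As a hedged sketch of what an automated guardrail could look like, here is a toy quality gate built on indentation depth, a crude whitespace proxy for code complexity. The metric, the threshold value, and the code samples are invented for illustration and are not the actual metrics presented in the talk:

```python
def indentation_complexity(source: str, spaces_per_level: int = 4) -> int:
    """Crude complexity proxy: sum the indentation depth of all non-blank
    lines (assumes space-indented code). Deeply nested code scores high."""
    total = 0
    for line in source.splitlines():
        stripped = line.lstrip(" ")
        if stripped:
            total += (len(line) - len(stripped)) // spaces_per_level
    return total

def guardrail(source: str, threshold: int = 5) -> bool:
    """Quality gate: return True if the change passes, False if it should
    be flagged for review before merging AI-generated code."""
    return indentation_complexity(source) <= threshold

flat = "a = 1\nb = 2\n"
nested = "if a:\n    if b:\n        if c:\n            do()\n"
print(guardrail(flat), guardrail(nested))
# True False
```

A check like this could run in CI as a feedback loop, rejecting contributions (human or AI) that push complexity past an agreed limit.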
We'll never be able to understand a software system from a single snapshot of the code. Instead, we need to understand how the code evolved and how the people who work on it are organized. We also need strategies for finding bottlenecks and technical debt impairing our productivity, as well as uncovering hidden dependencies between code and people. Where do you find such strategies if not within the field of criminal psychology?
This workshop starts with a crash course in offender profiling before we quickly move on to adapt those principles to software development. You'll learn how easily obtained version-control data lets you uncover the behavior and patterns of the development organization. This language-neutral approach lets you prioritize the parts of your system that benefit the most from improvements so that you can balance short- and long-term goals guided by data.
Key insights include:
Prioritizing Technical Debt: Techniques to identify and address technical debt in large-scale systems based on return on investment.
Balancing Improvements and Features: Strategies for deciding between improving existing code versus adding new features.
Mitigating Key Person Dependencies: Methods to identify and reduce risks associated with critical dependencies on key individuals.
During the workshop, you get access to CodeScene – a behavioral code analysis tool that automates the analyses – and use it for the practical exercises. We’ll do the exercises on real-world codebases in Java, C#, JavaScript, and more to discover real issues. No coding experience is necessary.
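The key-person analysis mentioned above can be approximated from the same version-control data. Below is a minimal, hypothetical sketch (the author names, file paths, and commit history are invented for illustration) that flags files whose change history is dominated by a single author:

```python
from collections import Counter, defaultdict

def knowledge_map(commits):
    """Map each file to (main author, that author's share of changes).
    A share close to 1.0 signals a potential key-person dependency."""
    per_file = defaultdict(Counter)
    for author, path in commits:
        per_file[path][author] += 1
    result = {}
    for path, authors in per_file.items():
        main_author, count = authors.most_common(1)[0]
        result[path] = (main_author, count / sum(authors.values()))
    return result

# Invented (author, file) pairs; real data could be mined from the output
# of: git log --format='--%an' --name-only
commits = [
    ("alice", "core/engine.py"),
    ("alice", "core/engine.py"),
    ("alice", "core/engine.py"),
    ("bob",   "web/views.py"),
    ("carol", "web/views.py"),
]

print(knowledge_map(commits))
# {'core/engine.py': ('alice', 1.0), 'web/views.py': ('bob', 0.5)}
```

Files where one author owns nearly all changes become candidates for pairing or knowledge-sharing sessions before that dependency turns into an off-boarding risk.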
Effective software development requires that we keep code and people in balance so that one supports the other. Yet, this equilibrium often eludes us, leading to coordination challenges, tightly interconnected services, and fragile code that is painful to change. Such challenges stem from the fact that the organization which builds the system is invisible in the code itself. Without a clear view of this social dimension, we're left grappling with surface-level fixes rather than addressing the root causes. What if we could visualize this intersection of code and people?
This keynote tackles that challenge head-on by introducing the concept of behavioral code analysis. By combining technical metrics with patterns extracted from Git repositories and insights from social psychology, you'll gain the data-driven ability to identify modules requiring excessive coordination, evaluate microservice boundaries, and design modular monoliths, along with practical solutions for rectifying these issues. Not only will you see these techniques in action on real-world codebases, you will also leave with a newfound arsenal of architectural analysis techniques.