Team performance
DORA metrics
These are four good metrics for measuring the performance of an engineering team. The book Accelerate: The Science of Lean Software and DevOps goes into detail on these, but the basics are:
Deployment Frequency: High-performing teams tend to ship often, so measuring deployment frequency is a good indicator.
Lead Time For Changes: i.e. how long it takes to go from starting a ticket to having it in production. A high lead time could indicate any number of problems, from poor scoping to high workload or overly slow CI/code reviews. It can help to break this measurement down into sections too (e.g. time to PR, time in review, time in CI, etc.).
Change Failure Rate: How often does a change result in a bug/outage/etc.? A high rate indicates something wrong in the quality process of your team.
Mean Time To Recovery: How long does it take to recover from a failure? If you have a low MTTR, you can be more adventurous and experimental in your work, knowing that failure has a low cost - the opposite applies if you have a high MTTR.
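As a sketch of how the four metrics fall out of raw event data, the following computes each one from hypothetical deployment, change, and incident records. The field names and the seven-day window are assumptions for illustration, not anything the DORA definitions prescribe:

```python
from datetime import datetime

# Hypothetical event records - in practice these would come from your
# deploy pipeline, ticket tracker, and incident tooling.
deploys = [
    {"at": datetime(2024, 1, 1), "caused_failure": False},
    {"at": datetime(2024, 1, 2), "caused_failure": True},
    {"at": datetime(2024, 1, 4), "caused_failure": False},
]
changes = [  # ticket started -> change live in production
    {"started": datetime(2024, 1, 1), "deployed": datetime(2024, 1, 2)},
    {"started": datetime(2024, 1, 2), "deployed": datetime(2024, 1, 4)},
]
incidents = [  # failure detected -> service recovered
    {"failed": datetime(2024, 1, 2, 10), "recovered": datetime(2024, 1, 2, 14)},
]

period_days = 7  # assumed measurement window

# Deployment Frequency: deploys per day over the window.
deployment_frequency = len(deploys) / period_days

# Lead Time For Changes: mean seconds from started to deployed.
lead_time_s = sum(
    (c["deployed"] - c["started"]).total_seconds() for c in changes
) / len(changes)

# Change Failure Rate: fraction of deploys that caused a failure.
change_failure_rate = sum(d["caused_failure"] for d in deploys) / len(deploys)

# Mean Time To Recovery: mean seconds from failure to recovery.
mttr_s = sum(
    (i["recovered"] - i["failed"]).total_seconds() for i in incidents
) / len(incidents)
```

The arithmetic is deliberately trivial; the hard part in practice is deciding which events count, which is what the implementation notes below are about.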
Implementing
Measure first; set targets later (or not at all)
You can't set a target without first knowing where you stand today. You'll also find that the process of setting up and reviewing measurements leads to positive improvements on its own.
Carefully define ancillary terms. The DORA metrics rely on several terms that you need to define carefully: “failure”, “recovery”, “change”, what “started” and “finished” mean, etc. Even “deployed” can be more complex than it first appears: what if you ship new features behind feature flags? Do you stop the clock on lead time when the code hits production, or when 100% of users are flagged in? Somewhere in between? The answers don’t really matter much, as long as you’re consistent.
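One way to stay consistent is to write the definition down as code rather than leave it as tribal knowledge. This sketch encodes a hypothetical policy for when the lead-time clock stops under feature flags; the policy name and function are illustrative assumptions, and the point is that any of the choices works so long as it is applied uniformly:

```python
from datetime import datetime
from typing import Optional

# Our (assumed) team policy: the lead-time clock stops when code
# reaches production, when 50% of users are flagged in, or when 100% are.
LEAD_TIME_STOPS_AT = "50_percent_flagged_in"

def lead_time_end(
    deployed_at: datetime,
    flagged_50_at: Optional[datetime],
    flagged_100_at: Optional[datetime],
) -> Optional[datetime]:
    """Return the timestamp that stops the lead-time clock under the policy."""
    if LEAD_TIME_STOPS_AT == "in_production":
        return deployed_at
    if LEAD_TIME_STOPS_AT == "50_percent_flagged_in":
        return flagged_50_at
    return flagged_100_at
```

With the rule in one place, every dashboard and report computes lead time the same way, and changing the definition later is a one-line, reviewable diff.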