Leveraging data and DORA metrics to improve tech processes.

AJ Wasserman:
Hi everyone, my name is AJ Wasserman and I’m a product owner in the data insight space at Liberty Mutual Insurance. Welcome to episode two of Liberty’s Tomorrow Talks. For those of you who don’t know Liberty Mutual, we are a global Fortune 100 property and casualty insurer, and we help provide our customers with protection from the unexpected, for their vehicles, their businesses, and their homes.
AJ Wasserman:
Today almost all of our interaction with employees and customers is digital – from our websites to our applications to our call centers. As technology continues to rapidly evolve, our tech teams are constantly looking for ways to use technology to provide seamless experiences, with over 5,000 technology teammates driving the business and industry forward. Today I’m joined by my peers to talk to you about DORA metrics and how we’re using them as one of the many ways we empower our engineers to provide those seamless customer experiences. With that, let’s start with some introductions. Jenna, let’s start with you.
Jenna Dailey:
Thank you, AJ. Hi, I’m Jenna Dailey. I am based out of Indianapolis, Indiana, and I am currently working as a Senior Scrum Master supporting portfolio management in our distribution technology space. With this, I get to support around 50 squads that are responsible for things such as libertymutual.com, as well as a number of systems used by our thousands of independent agents across the country and the partners we interact with to sell insurance globally.
Gabriel Leake:
Hello, I’m Gabriel Leake. I’ve been at Liberty about 15 years now, based out of Boston, Massachusetts, as an Engineering Manager aligned to our Personal Lines tech space. I’m a former engineer and I’m highly passionate about innovation, data driven decision making, tech metrics and cloud architecture.
Justin Robison:
Hi, I’m Justin Robison. I’ve been with Liberty for probably almost 15 years – same as Gabe. I am in Portsmouth, New Hampshire. I am a solutions architect in our cybersecurity space, with a heavy focus on zero trust strategies. But beyond that, I am a software engineer at heart, and I’m also very passionate about software quality and using data to help drive our decisions.
David Miller:
Hey everyone! My name is Dave Miller, also a software engineer at heart, but I’m an architect today representing Liberty Mutual’s architecture cloud and engineering enablement org. I spend a lot of my time working closely with our global underwriting and policy platform group. Thrilled to be part of the panel today because I love anything that relates to engineering excellence in practice and DORA obviously plays a big role in that.
Scott Aucoin:
Hi everyone, Scott Aucoin, director of engineering in Global Risk Solutions, which is part of our commercial property and casualty area, as well as the surety and specialty underwriting and insurance spaces. Part of my role is to lead our agile evolution and our agile office, with some agile coaches, and the way that we transform ourselves. Additionally, I’ve got some responsibilities around portfolio management – the portfolio management team – and our user experience team, which makes up a good group of people who are focused on research and design to improve the way that our users experience the many applications we have out there in the world.
AJ Wasserman:
Thanks everyone for those introductions. I’m really excited for our conversation today, but before we dig in, Scott, could you give us some background on what DORA is for our listeners that may not know or may not be familiar with that?
Scott Aucoin:
Yeah, sure. Thanks AJ. So as you started off with, Liberty Mutual is a large organization. We’re very widespread as far as the types of technology we have, the ages of that technology, the geography – a very diverse workforce. And with that, it may seem like we could veer off in many different directions, but we have a shared mission. And that shared mission is to make sure that we’re satisfying our customers, supporting them through their hardest times, and really trying to become the most trusted and respected property and casualty insurer. That mission brings us all together in many different ways, but on the technology front, it means that we need to be excellent at delivering digital products and user experiences that are really best in class. To do that we have to be innovative. We have to work together, think outside the box, and that innovation can occur internally.
Scott Aucoin:
But also, it’s a means for us to look at what’s happening externally. So thinking about external ways of operating, we can see tools like DORA, leverage those and learn from them. So DORA is the DevOps Research and Assessment group that was founded back in about 2013 or so, and since then, they’ve been creating State of DevOps reports annually, right up through 2021 with, I think, just one missed year. And the State of DevOps report covers many different things – not just engineering, but also cultural practices, organizational practices, the way that we work together, product management. And within it, there are four key metrics. We’ll talk about those four key metrics in a second, but really they boil down to two different categories: stability and throughput.
Scott Aucoin:
And by using those metrics, we can look at our teams and our organization to understand how do we compare? Not just internally, but maybe more importantly, externally. So we use those four key metrics combined with some gauges that they give us to understand, are we low, medium, high, elite, and then get a better sense of how we can continue to improve and learn together. Gabe, you want to talk a little bit about the four key metrics?
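As a rough illustration of those gauges, a team’s lead time for change could be bucketed into low/medium/high/elite along these lines. The threshold values below are simplified assumptions for illustration, not the exact bands from the State of DevOps report.

```python
def lead_time_tier(lead_time_hours: float) -> str:
    """Map lead time for change (commit -> production) to a performance tier.

    Thresholds are illustrative assumptions, not the official DORA bands.
    """
    if lead_time_hours < 24:        # under a day
        return "elite"
    if lead_time_hours < 24 * 7:    # under a week
        return "high"
    if lead_time_hours < 24 * 30:   # under roughly a month
        return "medium"
    return "low"

print(lead_time_tier(6))        # → elite
print(lead_time_tier(72))       # → high
```

The same shape of function applies to the other three metrics; only the thresholds change.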
Gabriel Leake:
So, the four metrics really capture the effectiveness of the development and delivery process. And you can roll those up to how we’re doing across both throughput and stability. We measure the throughput of the delivery process using both the deployment frequency and lead time metrics. Deployment frequency is how often our teams are deploying shippable code to production and releasing it to our customers and end users. Lead time for change is measured by answering the following question: how long does it take to go from code commit to successfully running in production? Now for stability, we use two DORA metrics to measure that. The first one is time to restore, and the second one is change failure rate. Time to restore is really how long it takes to restore service when an incident or a defect occurs that impacts our customers.
Gabriel Leake:
And that can range from an outage to degraded service – for example, the website running much slower than usual. Change failure rate is really a measure of the quality of our release process: what percentage of our changes to production that are released to customers result in some sort of degraded service, whether it’s an outage or impaired service, and then require a hot fix, a rollback, a patch, or a bug fix to be pushed out there. We consider that a change failure and then we have to remediate it. So that’s really just a high level summary of the four DORA metrics. Jenna, did you want to chime in on perhaps the importance of those?
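As a concrete sketch of the four metrics just described, they can be computed from deployment and incident records roughly like this. The records, field layout, and numbers here are hypothetical, purely for illustration.

```python
from datetime import datetime
from statistics import median

# Hypothetical one-week window of deployment records:
# (commit time, production deploy time, did the change cause a failure?)
deployments = [
    (datetime(2022, 3, 1, 9),  datetime(2022, 3, 1, 15), False),
    (datetime(2022, 3, 2, 10), datetime(2022, 3, 3, 10), True),
    (datetime(2022, 3, 4, 8),  datetime(2022, 3, 4, 12), False),
    (datetime(2022, 3, 7, 9),  datetime(2022, 3, 7, 11), False),
]
# Incident opened and restored for the one failed change above.
incidents = [(datetime(2022, 3, 3, 10), datetime(2022, 3, 3, 12))]
days_in_window = 7

# Throughput: deployment frequency and lead time for change.
deploy_frequency = len(deployments) / days_in_window
lead_time_hours = median((deploy - commit).total_seconds() / 3600
                         for commit, deploy, _ in deployments)

# Stability: change failure rate and time to restore.
change_failure_rate = sum(failed for _, _, failed in deployments) / len(deployments)
restore_hours = median((end - start).total_seconds() / 3600
                       for start, end in incidents)

print(f"deploys/day: {deploy_frequency:.2f}")
print(f"median lead time (h): {lead_time_hours}")
print(f"change failure rate: {change_failure_rate:.0%}")
print(f"median time to restore (h): {restore_hours}")
```

In practice these records would come from pipeline and incident tooling rather than literals, but the arithmetic is the same.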
Jenna Dailey:
Yeah. You know, this is something that gets me really excited. One thing I love about these four – I’m kind of a data nerd – is that they came out of a meta-analysis that the DORA organization, now part of Google, spearheaded, because obviously they have an interest in making sure that quality software development results in profits, results in revenue for the business – ultimately, better outcomes as a for-profit company, and at Liberty we are one as well. And so what I love is that they looked at, I think it was hundreds of different data points, and did all of this statistical analysis to ask, “Which of these indicators actually correlates to better business results?” And of all the factors they looked at, these were the four where they found statistically significant evidence of a tie to business results.
Jenna Dailey:
So there are lots of reasons to do lots of really good things, and we’re going to talk about some examples that tie to these. But I love that it’s not just the technology – sometimes we get excited about tech for tech’s sake – here we can really talk to our business partners and say, “No, this is grounded in evidence-based analysis: working on these things, investing in these things, maybe slowing down a little bit to focus on these things, can help the company’s bottom line at the end of the day.” And I just think that’s really cool, that we have this analysis and this research that we can base that on.
AJ Wasserman:
Yeah. Thanks Jenna. Why don’t we keep going with that, Dave, can you talk a little bit about why DORA is important to Liberty Mutual?
David Miller:
Sure, I’d love to. So at Liberty Mutual, one thing that we’re always really striving for is engineering excellence. And when you think about engineering excellence, it can probably feel a bit overwhelming, because there’s what seems like an infinite number of things that any team could focus their time on to help improve the quality of their products or their engineering practices. They might, for example, focus on things like the AWS Well-Architected Framework to improve their product architecture, or maybe they want to focus on unit testing to improve their ability to refactor code and confidently make changes, or maybe they want to make the jump to building serverless solutions. And with the endless number of things that a team could focus on, DORA to me is a great place to start, because it will make the team better at deploying. And in fact, if you think about those DORA metrics, what they’re really measuring is: are you delivering business value frequently?
David Miller:
And are you doing it in a way that doesn’t break everyone’s stuff, right? So if you are elite at DORA, you’re probably already automatically inheriting some of those great behaviors we see in high performing teams, like building small and having a small blast radius when things go wrong, and having deployment infrastructure in place – deployment pipelines with simple rollback strategies for when things don’t go as planned. And with the expectations of our customers today, we need to be good at deploying. That’s non-negotiable, and DORA really helps with that. So that’s really why it’s important to me, but I’d also like to ask my fellow architect, Justin, about what he’s seeing with DORA and why it’s important to him and his teams.
Justin Robison:
And I would love to answer you, Dave. So to me, when I think about DORA, right, it’s an industry standard, right? It’s a standard that was, as Scott talked about, developed in large part by Google – some really big data nerds went after this and gathered the data. I mean, they surveyed thousands of companies. They pulled together all this data that said, “Regardless of the industry you’re in, regardless of the technology you’re working on, regardless of your agile methodology or lack thereof, you can apply these metrics and these practices, and you can use the metrics to gauge yourself and apply certain practices to improve them and improve your overall delivery performance.” So the beauty of that is, everyone’s included, everyone gets to do this.
Justin Robison:
There’s no arguing that you can’t do it, right? It’s pretty much a given that if you want to do it, you want to be involved, you want to get better, you can use something like DORA and the practices that go along with it to get better. And so really for me, when this landed in my lap a few years ago, when I finally got tuned into it, I saw the map and compass that I could finally give to my squads and my peers that gave them a way to pick a direction and get better, right? A repeatable, consistent measure that they could use on a cadence of their choosing – if it’s an agile team, that could be sprint-wise, could be a planning increment, could be over a quarter.
Justin Robison:
Again, I think the goal at Liberty Mutual for the use of DORA – the metrics themselves, to help us drive continuous improvement in our engineering practices – really comes down to the autonomy of the teams, right? This is not a push down or a punitive measure. No one’s looking at team A and saying, “Wow, your deployment frequency is much better than team B.” That is entirely not the intent of this data, right? This is for engineers, for teams, to get themselves better, right? Happier devs mean a lot of great things, right? I mean, DORA has a predictive relationship with a lot of constructs, one of them being sustainability and burnout – increased sustainability and reduced burnout. For organizations, there’s improved organizational performance, right? Again, we talked about the bottom line for organizations.
Justin Robison:
We need to make money if we want to keep doing great stuff and reinvesting in great tech and keep serving our customers as best as possible. So, with the behaviors and practices that we know we can adopt and use within our squads to continuously get better, we can hit on every front here: happier engineers, a better culture, a better bottom line for our company. So overall, for me, it was just a no-brainer. Just like, “Why don’t we just do this?” And so it was like the light finally shining.
AJ Wasserman:
Awesome. Thanks Justin. I mean, in talking more about how this really is for the engineers, maybe we could share some stories of how our engineers are using it and how it’s making them better. Gabe, why don’t you kick us off with a story?
Gabriel Leake:
Yeah, I’d love to share a story about DORA. We’ve had some good wins with it in my time in PL tech. In our space, we have very mature continuous delivery pipelines. The teams are able to deploy to production on demand; they have the autonomy to deliver code when they need to and get it up to their end users. It’s a very mature space and all deployed to the cloud. Despite that, one of our teams was very surprised to see that they had a worse-than-expected lead time for change. And if you recall that definition from earlier in the webcast, that’s the time it takes to go from code commit to production and get it out to your users. And what they saw, as they were tracking DORA themselves, was that it was trending up and they were losing their elite status.
Gabriel Leake:
So they looked internally, had a discussion amongst themselves as a team, and tried to elicit some improvement opportunities. It’s pretty interesting what they found, because it kind of ran the gamut of the DevOps process, from code reviews to agile and story sizing. So, what they found was that sometimes their production deploys were held up to create a release, when the functionality was actually ready to be delivered maybe a week or two sooner – days sooner, even. They were breaking down their work into stories, but it was very granular, which sounded really good to them on paper when they did it, but what they found was that it led to a higher dependency between the stories. And so they were slower to deliver them all together, and so they looked at that opportunity. And then they went over to their code process and how they were delivering code.
Gabriel Leake:
And they looked and found that they could be a little more efficient with their code review process – make it a little more lightweight without sacrificing quality, signing off sooner and reducing merge conflicts by looking at what they were delivering. And then they also looked at moving to trunk based development, which further improved their DevOps process because it allowed them to adjust their branching strategy to check in code more efficiently. So once they identified all these different areas that they could address, they worked amongst themselves to implement those process changes. They kept the stories more business focused, so they were independently deliverable. They increased their visibility into features that were production ready just by adding a column to their daily board. And then what they saw over time was that their lead time went back to elite.
Gabriel Leake:
And it was a huge win, not just for the org and Liberty in general because this team is now more efficient, but because they self-solutioned it – they took the data, trended and analyzed it for themselves, and their morale went up just because they got their own win and drove their own destiny there. So, yeah, it was awesome. Anyone else?
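The kind of self-service trend check that team ran can be sketched very simply: watch lead time sprint over sprint and flag a sustained upward creep. The function and numbers below are made up for illustration.

```python
def is_trending_up(series, window=3):
    """True if each of the last `window` points is strictly higher than the previous one."""
    tail = series[-(window + 1):]
    return len(tail) == window + 1 and all(later > earlier
                                           for earlier, later in zip(tail, tail[1:]))

# Median lead time in hours, sprint over sprint (made-up numbers).
lead_time_by_sprint = [20, 22, 21, 26, 31, 38]
print(is_trending_up(lead_time_by_sprint))  # → True: worth digging into at retro
```

A dip-tolerant version (rolling averages, say) would be less noisy, but even this crude check surfaces the "we're drifting away from elite" signal the team acted on.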
Justin Robison:
Can I just jump in there? I want to comment on something you said there, Gabe, that I thought was really interesting. You mentioned that they swapped to a trunk based development kind of version control strategy, right? We’re a Git shop here. And I just wanted to highlight that – we talk a lot about, we kind of abstractly say, we want to improve our practices, adopt better practices. And that’s an example of one, right? That’s a continuous integration strategy, right? More frequent integration, which is a well-known practice in the DevOps and continuous delivery space. And again, it’s one of those practices that has been documented by the DORA research to have a predictive relationship with all the things we talked about – sustainability, reduced burnout, organizational performance. So, yeah, I think it’s a great story. I picked up on that – when you said trunk based development it got me all excited.
David Miller:
Maybe even continuing that line of thinking there. I also have an interesting story to share, and maybe a bit interesting because it’s not what you would typically expect to get out of DORA. But similar to Gabe’s example, it shows how you can use DORA to empower teams to make real data driven decisions. And the example is: we had a team looking at their deployment frequency metrics who noticed that they spent a lot of time making deployments to their performance environment. And at the time that wasn’t an environment that was routinely used, so that started the snowball of conversations that led us to question the value of that environment and ultimately led us to decommissioning it.
David Miller:
So, I think it’s surprising how some of these conversations evolve and take shape and lead you down a path that you didn’t necessarily expect, but still provide a ton of value. And in this case, that value came in the form of less infrastructure for us to maintain, less time troubleshooting issues in an environment that isn’t even used, less money spent on servers and runtimes that aren’t even needed. All of which gives teams time and capacity back to focus on building differentiating capabilities that set us apart. That’s my story. Does anyone else have anything they’d like to share?
Justin Robison:
I’ve got one, with a little bit different of a twist. So there was a situation in our personal lines space – auto and property, kind of our main area of insurance. We had a large agile team that noticed that their lead time was slipping, sprint over sprint. Their ability to go from, as Gabe described, code commit to a product running in the hands of the customer was sliding. And they were going in a direction they weren’t happy with. And the team grouped up with the scrum master and the aligned manager of the team, and they did a self-organized reflection and troubleshooting session.
Justin Robison:
And the ultimate outcome was that they realized, one: the team was too large for an agile team – communication channels, agile 101 – there were too many communication channels. And two: the focus of the team was split up. There were three core focuses, which really had the team context switching what it was working on. Even though the product backlog was very solid, it was still a lot of churn for the team to keep jumping back and forth. The end result was that the team was broken up into three teams, right, each with a specific focus – smaller group sizes, right?
Justin Robison:
So, communication got a lot easier, and at the end of the day, their lead time got back to a much happier position. They were much happier overall with their status on their lead time. And again, overall, I think similar to Gabe’s story, the morale went up on the team. So it’s a slightly different story – it was more of a team structure change driven by a metric. But I think, again, to Dave’s point, there’s no end to what you can optimize if you’re looking at the data and if you allow your teams to solve the problem.
Jenna Dailey:
That’s so true, Justin, and what’s really interesting to me is we have an example in my area with one of our squads, the Supercats. Similarly, they were looking at the lead time metric and noticed, “Hey, this is way longer than we want it to be. We think we can do something about this.” They dug into their internal process within the team and said, “Hey, we’re going to make this the focus of our retros this planning increment.” What’s interesting is their outcome was to actually implement more test driven development practices. They noticed that a lot of the time was being taken up by handoffs with the testing group, which had to do a lot of the end-to-end testing with some of the integrations with more legacy systems that relied on batch and different things like that.
Jenna Dailey:
They ultimately moved all the way to writing tests before they write a line of code. And obviously we know TDD is a best practice – that is a great way to write really quality software. But until they saw the tie to this metric, and how that change could really move the needle significantly, they hadn’t really seen the “what’s in it for me” of implementing those changes. And then seeing that drove this other best practice. So, it’s interesting how just one metric – lead time, the one a number of us have brought up – has brought about many different changes across our squads to better their engineering practices and really move the teams forward in their technical journey, seeking that continuous improvement spirit that we’re all about.
AJ Wasserman:
That’s awesome. Thanks everyone for sharing your stories, because I think that really helps make it real. And the common theme I heard throughout everybody’s stories was really that empowerment of the teams to use that data, to tell the story, to make changes for their team, which ultimately leads to happy developers. And we like happy developers. It leads to really great things. So thanks everyone for those stories. So Liberty Mutual has been doing DORA for years. Could we share some of our recommendations for folks that are looking to implement DORA at scale? Dave, maybe you can kick us off there.
David Miller:
Sure. So from my experience, sometimes when you’re working in a big enterprise like Liberty Mutual, it can be difficult to get everyone to buy into the importance of something. Teams might view it as something they just need to do to check some proverbial box to meet some objective. And I think sometimes that top down approach can be very ineffective. But here at Liberty, I’m proud of how we are able to have some very open and candid conversations with our engineering teams, where we can review DORA metrics and there’s this mutual understanding that it’s all in the interest of continuous improvement of our craft, which is delivering those high quality software products that delight our customers.
David Miller:
So I think having that grassroots drive for something like DORA at the team level, where everyone can see the value and teams want to utilize it to improve – I think that’ll jumpstart implementing DORA faster than anything else. And it’s also, uniquely enough, part of why the culture at Liberty Mutual makes working in technology here a very enjoyable experience. So, do others have learnings that they’d like to share about implementing DORA at scale?
Justin Robison:
I do. I do. Thanks, Dave. Yeah, I wholeheartedly agree with everything you said, Dave, 100%. I think for me, one of my learnings was that initiating a program that involves bringing large amounts of disparate data together to provide something like the DORA metrics to everybody within Liberty Mutual who wants access to them is a large effort, right? And so when we took this initiative on, like you said, we started small, and folks were doing it pencil and paper, as tabletop exercises, for a long time. And then AJ and her organization stepped in to help us automate it. We had a particular product goal there, and we were able to use the metadata and all of the events that are flowing in and out of our developer productivity tools, like pipelines, version control, ticketing systems.
Justin Robison:
We were able to mash that all together and make the metrics available to everybody at what I call a coarse grained view – kind of a roll up on product, essentially. And that brought a ton of value for the organization, right? We were able to have conversations with our managers, directors, even our squads. I think what we’re starting to see now is that we’re getting demand for other views of the data, right? Different slices of the data – more granular, more team oriented. So I guess the lesson learned – it’s not really a lesson learned – is that we started small and grew into it with a product goal in mind. That product goal has met the definition and the goals we wanted, and now we’re just looking for evolution and iteration. So I couldn’t agree more with you, Dave: start small, that’s the best way to go.
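The coarse-grained, product-level roll-up described here amounts to grouping tool events by product before computing a metric. A minimal sketch, with invented event fields:

```python
from collections import defaultdict

# Hypothetical events emitted by pipelines, version control, and ticketing tools.
events = [
    {"product": "quoting", "team": "alpha", "type": "deploy"},
    {"product": "quoting", "team": "bravo", "type": "deploy"},
    {"product": "claims",  "team": "delta", "type": "deploy"},
    {"product": "quoting", "team": "alpha", "type": "commit"},
]

# Roll up on product: count deployment events, ignoring the finer team grain.
deploys_by_product = defaultdict(int)
for event in events:
    if event["type"] == "deploy":
        deploys_by_product[event["product"]] += 1

print(dict(deploys_by_product))  # → {'quoting': 2, 'claims': 1}
```

The more granular, team-oriented views Justin mentions would just group on a different key, which is why starting with a coarse roll-up leaves room to iterate.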
Scott Aucoin:
Yeah. I’ll add to that. We’ve got a large organization, as we’ve talked about a few times here, and it’s difficult to bring along the entire organization at once. So that should never really be the goal. But just in the area that Dave and I work in, in particular, there are 1,300 engineers, and we’re all individuals and we all want to be treated like individuals, and I think it’s safe to say that most people don’t want to be judged based specifically on a number. So when you start to introduce something like DORA, especially thinking about implementing it at scale, you’ve got to go about it in a way where you’re showing success stories and bringing people along on the journey, rather than saying, “Here, go do it. Now you’re going to be measured.” So we’ve been very cautious about that. And we’re certainly not at the point where 100% of our organization has totally adopted it.
Scott Aucoin:
In fact, there are parts of our organization that are still on this learning journey, trying to figure out how it works for them. And back to a point that Dave made, which I think is an excellent one: it’s how do we make sure they understand that this is beneficial to them. Because, geez, nobody wants to sit there and wait two days for a deployment to happen. So the better we get at things like DevOps and leveraging DORA metrics, the better we are at employee happiness, as AJ mentioned before, but really just being able to get the job done and help our customers and keep our innovative thought processes moving without feeling like we’re stumbling over ourselves through some slow deployment process.
AJ Wasserman:
Yeah. No, everything makes total sense. And as Justin mentioned, my team, my discovery team, was critical in helping get some of that data automation in place. But I would just recommend that folks keep in mind, this is a journey, right? So just start, even if you have to manually calculate metrics, and make incremental improvements from there. Just recognize it’s a journey, for sure. All right. So now maybe we can talk a little bit about how Liberty leadership is accepting of DORA and how they are using it.
Jenna Dailey:
Yeah. So it’s definitely matured as our journey has matured. I know for a lot of squads in my area, this is being baked into just the ceremonies. Most of our teams are scrum: they have the planning increments every quarter, and they have retrospectives on their sprint cadence, every two to three weeks, and this is just regularly something that the scrum masters have up on the board – talking about how are we trending and those sorts of things. It’s also been awesome to really be able to bring the POs along in that journey, again because this is tied to business outcomes and those sorts of things. It’s not just this technical thing that only the engineers have to worry about. And so I think that’s been a really great way that we’ve been able to partner with our product owners, to have this be a shared sense of ownership. And as Scott can tell us, I think there’s also been a lot of visibility and support from our technical leadership on this as well.
Scott Aucoin:
Yeah. Thanks, Jenna. Exactly true. And I think about past organizations I’ve been at – in fact, the one that I was at before I joined Liberty Mutual – and there was a conversation around measurement. It was actually before DORA was adopted there, but it was a conversation about how do we gauge how teams are being effective. And I said something really quickly, like having a three day cycle time is a good way for us to at least be thinking about some delivery. Well, that was taken verbatim and brought to the entire organization as a target – all with good intention – but it didn’t feel exactly right to the individual engineers and teams, because they had a lot of complexities, a lot of things that they needed to deal with. So having more information around that was going to be necessary. And that’s an example of an area where leadership was saying, “Go do this, hit this target.”
Scott Aucoin:
We’re fortunate to have an organization that hasn’t taken that tack. Instead, this is very much a learning process that’s happening more grassroots, and it’s more encouraged by peers and peers learning from each other, as a few folks have brought up today. That said, it doesn’t mean that leaders aren’t engaged in it. In fact, a little over a year ago, we were doing our year end technology readout to the global CIO, James McGlennon, and myself and one of the senior leader CIOs shared some information on DevOps and on DORA metrics specifically. And part of the story we were telling was that when we looked at 800 applications, we saw a 300% increase in deployment frequency that year because of our focus on DevOps.
Scott Aucoin:
That 300% increase in deployment frequency came with only a 0.5% increase in change failure rate, meaning we are delivering a lot more – we’re getting things out there so we can test and learn from them quicker – and we’re doing it without breaking stuff significantly more frequently. So, that was a great story. And it had the buy in from not just that global CIO, but all of the other leaders. They love to see that, but they also love to see that we’re empowered to discover those things on our own, without being told, this is exactly the target, now go hit it.
AJ Wasserman:
Yeah, that's awesome. And to get that kind of support and those results too, right? It kind of speaks for itself. So before we wrap up our conversation today, I want to open it up to the panel for any final thoughts for our listeners.
Gabriel Leake:
Yeah, I'd love to jump back in there. As 2021 ended, we were asking ourselves: our journey's far from over, but what's next for us coming into 2022 and beyond? What we've seen is that teams are now using DORA to become more data driven in their decision making, improving their efficiency across the gamut of DevOps. That drive has really spurred innovation and passion for data across the org at all levels. Just last year, one of the teams began innovating with different ways to bring the data that AJ's group has been providing us into a format that's customizable by not just that team but any team. So they're thinking broadly: even as an agile squad delivering for their business focus, they're trying to make this better for everybody as well.
Gabriel Leake:
So that's been really inspiring and awesome to see. I know Justin was alluding to that a bit earlier, and it really highlights the passion that DORA has generated in our engineers and the continuous improvement mindset that they have. Pushing into DORA has matured us and pushed us toward not just a stronger continuous improvement mindset but a focus on how data drives the decision making in our teams. And then as leaders, how do we make moves to support those teams? You heard some stories about making organizational and team changes to help teams improve those metrics. As leaders, we've been able to look at those trends and enable that for the teams where needed.
Gabriel Leake:
So we'll continue working with the teams to spur more of that passion and adoption across our orgs. We'll be recruiting data-driven champions at the grassroots level and the leadership level to help drive that forward. And we'll keep benefiting not just our engineers but our customers as well, because in the end this impacts our customers and just makes the experience much better for them.
Justin Robison:
I have some thoughts on that one too, actually, Gabe. Closing out: if you're listening to this webcast and you say, "This DORA thing sounds interesting, where do I start?", I would highly recommend the book Accelerate. It breaks down the why and the how, and it explains it in an awesome way. It also talks a lot about the actual engineering practices. We didn't talk much about the specific practices today. We touched on continuous integration, trunk-based development, and TDD a bit, but there's a lot more: actually 20 or more engineering practices, not including product management practices and security practices. And they just released the State of DevOps 2021 report.
Justin Robison:
They added security and documentation as key practices now, right? These practices predict organizational performance, engineer happiness, better cultures. So again, I highly recommend reading the book if you want to get involved or understand it. It's an easy read, a fun read. From there, read the reports, especially the latest one. Google put it out late last year. It's a really good place to start too. And just to bring it back: if you're an engineer, this is about your craft. This is about how you become the best you at what you do for your team.
Justin Robison:
If you're not an engineer, if you're management, a product owner, a scrum master, or some other role supporting an agile or non-agile team, get involved too. Your role supports this. Like I said, product owners, this is about product delivery also, so understanding whether your product choices are lean or not, or whatever it might be, also impacts the overall metrics, the delivery performance, and the organizational success. So again, this is for you, for all of us, and we should take advantage of it.
David Miller:
Great point, Justin. We all love happy engineers, especially engineers. But if I were to offer up one piece of advice, I think it would very simply be, and this was hit on a few other times, to start small. When you're taking on anything new, sometimes it can get overwhelming, and that can lead to people throwing their arms up and giving up. So if you don't have all the great automation in place like we do, and you're calculating things by hand like Justin's team started out, just know that there's real value to be gained by looking at these metrics if you take them seriously. It'll make you a better engineer. It'll help your team improve. It'll help you achieve more and better business outcomes. So start small. That would be my advice.
Scott Aucoin:
Yeah, and just to add onto that, Dave: it helps on all of those fronts and with our experiences in general, right? Not to say that DORA can solve all of our problems, we're not trying to pitch that at all here, but it helps our employee experience, like you mentioned before, and the developer experience. Something we're really passionate about within our organization is how we make sure we're really excelling in those areas, employee and developer experience.
Scott Aucoin:
But also, as we get better and better with DevOps and with gauging ourselves on metrics like these four, we're better positioned to improve our user experience, our customer experience, because we can test and learn faster. We can get things out there faster. So I think in general, the learnings our organization has gone through so far have been excellent, and they've supported us on many fronts of our journey. We still have a long way to go to do better with it, but it's exciting to see how much, across this organization, we've been able to embrace DORA and leverage it to our advantage.
AJ Wasserman:
Great. I want to thank everyone on the panel today for your time. I really did enjoy our conversation today and I’m looking forward to continuing our partnership to champion DORA across Liberty Mutual. Thanks everyone.
Scott Aucoin:
Thank you.
Justin Robison:
Thanks AJ.
Jenna Dailey:
Thank you, AJ.
David Miller:
Thank you.