Want to figure out the perfect metric for measuring the success of your DevRel team? A clear, concise metric you can report up to your leadership?
Those who have worked with me know that I am a fan of metrics, constantly iterating and testing new approaches to DevRel to try to push those metrics. Metrics are especially important for aligning developer relations teams around common goals and driving improvements, but they are also a valuable way of obtaining buy-in on your priorities and communicating them throughout the organization.

“We need a single metric”
DevRel teams are frequently asked for a single top-line metric to represent the team’s goal and all of their work, both short-term and long-term. This can be valuable for clear and concise internal communication, though over-simplification can undermine many of the other motivations behind goal setting.
Let’s talk through a few metrics I’ve used in the past, their value, and their challenges. If your developer product is a modern cloud service with authenticated developers and no open source component, you can skip ahead, though the lessons may still be helpful.
1 million developers building with Google. Google DevRel (~2007).
Google’s nascent developer relations team took form under Online Sales and Operations (OSO), a highly metrics-driven organization led by Sheryl Sandberg [more on that in a future post]. Our Director was asked for a visionary metric to drive the team by, and we settled on a large, bold number.
This metric seemed perfect. It represented the community of developers we were trying to build, and many of the team’s activities (documentation/tutorials/guides, blog posts, support, DevEx improvements, etc.) could theoretically be traced up to their impact on the metric.
Measurement methodology is the crux of any good metric, and it was very challenging in this case to identify an individual developer. A unified Google Account for all products was under development by a four-person engineering team (IIRC), but it had not been adopted by all Google services. We simply had no foolproof way to identify specific humans (developers) building with Google.
Nonetheless, we settled upon Google Accounts for the products that supported them. However, one of the most active developer communities, Google Maps, didn’t even require authentication for developers, so we needed a proxy to represent a developer: for Google Maps, we counted unique websites. We knew, of course, that a single developer could be behind hundreds or thousands of auto-generated content sites on the web, but we had no way to account for that, just as we had no way to account for a single developer having multiple Google Accounts.
Perhaps the biggest concern with this calculation was that 95% of all “developers building with Google” came from Google Maps. Teams working on other products had no meaningful way to move the number, making the metric worthless for aligning and encouraging those teams internally.
Perhaps we could have had a metric like X million developers on mature products, and Y developers on labs products? (though the Google Code Labs program I co-founded is no longer in existence)
X monthly active machines running Neo4j. Neo4j DevRel (~2015).
Neo4j is an open source graph database, distributed as a Community Edition (GPL) and a commercial Enterprise Edition (then AGPL). It is now also available as a cloud service, Neo4j Aura, but that did not exist at the time we used this metric.
Monthly active machines was based on unique MAC addresses pinging back to say they were running Neo4j. This telemetry existed by default in both the free and commercial editions of the database, but it could be disabled by the user [or blocked by a good firewall config].
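As a rough illustration, here is how that count could be derived from ping logs; the payload shape and field names are my assumptions, not the actual telemetry format:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical ping records; the real telemetry payload and field names are assumptions.
pings = [
    {"mac": "00:1A:2B:3C:4D:5E", "ts": "2015-03-02T10:15:00"},
    {"mac": "00:1A:2B:3C:4D:5E", "ts": "2015-03-20T08:00:00"},
    {"mac": "AA:BB:CC:DD:EE:FF", "ts": "2015-03-21T12:30:00"},
]

# Monthly active machines = distinct MAC addresses seen pinging in a given calendar month.
machines_per_month = defaultdict(set)
for ping in pings:
    month = datetime.fromisoformat(ping["ts"]).strftime("%Y-%m")
    machines_per_month[month].add(ping["mac"])

for month, macs in sorted(machines_per_month.items()):
    print(month, len(macs))  # "2015-03 2"
```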
Good metrics are not easily manipulated by a single actor. This metric was.
Problems:
- No meaningful measurement of actual usage / value achieved for a specific human, just that the machine was running
- MAC addresses are not always unique (VMs, etc)
- Some of the highest value users disabled this telemetry
- Runaway CI jobs caused spikes in the metric
- A large financial services firm chose to install Neo4j on all client instances for a specific feature which was not enabled by default. Neo4j was running, but not in use.
Solutions explored:
- Only count machines running > X queries / hour
- Cap the number of new MAC addresses added per month/day/etc
I’m sure there were ways to use a mixture of data science and pattern matching to ensure we counted only “real” machines and smooth out some of the charts, but we were a small team without any dedicated data folk.
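For what it’s worth, a minimal sketch of those two explored fixes might look like this, assuming per-machine monthly rollups; the field names and thresholds are illustrative placeholders, not what we actually ran:

```python
# Thresholds are placeholders; we never settled on real values.
MIN_QUERIES_PER_HOUR = 5      # "only count machines running > X queries / hour"
MAX_NEW_MACS_PER_DAY = 1000   # "cap the number of new MAC addresses added per day"

def is_real_machine(machine):
    """Filter 1: ignore installs that are running but not actually in use."""
    return machine["avg_queries_per_hour"] > MIN_QUERIES_PER_HOUR

def capped_new_machines(new_macs_by_day):
    """Filter 2: dampen runaway-CI spikes by capping new MACs counted per day."""
    return sum(min(len(macs), MAX_NEW_MACS_PER_DAY) for macs in new_macs_by_day.values())

machines = [
    {"mac": "00:1A:2B:3C:4D:5E", "avg_queries_per_hour": 42.0},  # real workload
    {"mac": "AA:BB:CC:DD:EE:FF", "avg_queries_per_hour": 0.0},   # installed but idle
]
print(sum(is_real_machine(m) for m in machines))  # 1
```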
Y data people educated on Delta Lake. Databricks DevRel (~2020).
The goal of this metric was to motivate the right type of scalable DevRel programs, while communicating up a single top-line number instead of a list of program metrics.
I think this metric worked well internally for prioritizing other goals, but it failed as a metric for managing up to the CMO and CEO.
Why? Nuances. How the hell can we say someone is educated?
We used a composite metric that treated each data person as fractionally educated: ~0.2 of the way if they read a blog post, 0.5 if they watched an on-demand video, 0.5 if they downloaded a book, 1.0 if they watched a live video or attended a live event, and so on.
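A minimal sketch of how such a composite can be computed; the touchpoint names, and the per-person cap at 1.0, are my assumptions rather than the exact formula we used:

```python
# Hypothetical touchpoint names; the weights match the ones described above.
EDUCATION_WEIGHTS = {
    "blog_post_read": 0.2,
    "on_demand_video_watched": 0.5,
    "book_downloaded": 0.5,
    "live_event_attended": 1.0,   # also covers live video
}

def education_score(touchpoints):
    """Weighted sum of one person's touchpoints, capped at 'fully educated' (1.0)."""
    return min(sum(EDUCATION_WEIGHTS.get(t, 0.0) for t in touchpoints), 1.0)

def educated_data_people(touchpoints_by_person):
    """The top-line number: total fractional education across everyone reached."""
    return sum(education_score(t) for t in touchpoints_by_person.values())

# The kind of comparison the CEO poked at: four blog posts vs. one live talk.
print(round(education_score(["blog_post_read"] * 4), 2))   # 0.8
print(education_score(["live_event_attended"]))            # 1.0
```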
This mechanism worked in many cases. However, the CEO was easily able to come up with examples where it failed to motivate the right activities: “If someone read 4 blog posts, were they just as educated as someone attending a talk (by the creator of Delta Lake)?” The variables of the specific speakers, blog post authors, and content quality were simply not accounted for, yet they had an impact on the effectiveness of the education.
Sure, if we owned the training and certification programs, we could have used assessments to grade our target audience [and such programs were under discussion for the OSS project]. However, our team could be very successful at building wide awareness and adoption even if not a single person got “certified.”
So, we could be successful and not get credit, and we could get credit while not being successful.
The other big issue with this metric is that there were people on the team who were responsible for other product areas (MLflow, Spark OSS, Databricks product, etc). While they helped move this metric at times when “co-marketing” the products, they didn’t move it significantly enough to feel ownership over the number.
What metrics are valuable?
The only perfect metrics are those that point up and to the right at an increasing rate.
Okay, in all seriousness, metrics for DevRel on open source projects are super challenging, but modern-day cloud APIs/services are much easier to measure when developers are authenticated.
If your developer accounts are unified across many services (single sign-on to things like product, docs, forums, support, etc), then you’re in a very good place to create DevRel metrics around adoption. You’ll have a great way of knowing whether developers are building with your product, how quickly they can be successful (aha moments), and whether they continue to build. You’ll also be able to measure their impact on the rest of the community.
Depending on your priorities in any given quarter, you can then choose metrics like the following (a rough measurement sketch follows the list):
- Number of developers with active applications
- Time for new developers to achieve X
- Number of new developers creating their first applications
- Number of developers actively building in the last X days
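As an example, here is a rough sketch of the last metric, assuming your authenticated developer IDs can be joined to product or API activity events (the event shape and field names are hypothetical):

```python
from datetime import datetime, timedelta

ACTIVE_WINDOW_DAYS = 30  # the "last X days" window

def active_developers(events, as_of):
    """Count distinct developers with at least one build/deploy/API event in the window."""
    cutoff = as_of - timedelta(days=ACTIVE_WINDOW_DAYS)
    return len({e["developer_id"] for e in events if e["timestamp"] >= cutoff})

events = [
    {"developer_id": "dev_123", "timestamp": datetime(2024, 5, 28)},
    {"developer_id": "dev_456", "timestamp": datetime(2024, 4, 1)},  # outside the window
]
print(active_developers(events, as_of=datetime(2024, 6, 1)))  # 1
```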
Do you need to report up a single metric?
While I have fought tooth and nail over this with past leaders, I do actually think it is valuable to set a single goal that everyone on the team is running towards and to communicate it up. However, I would only do this when that metric is simple, represents the work of the team well, and is not easily manipulated. Otherwise, I would choose up to three metrics and regularly report on those.