Estimating the impact of the Community Toolkit, or how the company got an incredibly good deal when they hired me ;)

| ibm, work

Numbers are good to have when you’re thinking about the differences you’ve made. Let’s see if I can estimate the impact of the community toolkit that I built for Lotus Connections: newsletter, metrics, exports, OPML generator, and so on.

Some metrics from the logs: the web interface has been accessed 94,095 times since November 13, 2011, the date of the oldest entry in the web server logs. That was 94 days ago, which means it’s been accessed about a thousand times a day on average. Let’s ignore plain page views and focus on the times when users submitted an actual request to the system. That leaves us with 10,814 entries in the log.
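In case you’re curious what counting those looks like, here’s a rough sketch in Python. The log format and tool paths are made up for illustration; the real toolkit’s URLs were different.

```python
# Rough sketch of counting tool requests in Apache-style access logs.
# The path prefixes below are hypothetical, not the toolkit's real URLs.
import re
from collections import Counter

# Matches combined-log-format lines: [date] "METHOD /path ..."
LINE = re.compile(r'\[([^\]]+)\] "(?:GET|POST) (/\S*)')

counts = Counter()
with open("access.log") as log:
    for line in log:
        match = LINE.search(line)
        if not match:
            continue
        path = match.group(2)
        # Count only actual tool requests, not plain page views.
        if path.startswith(("/feedmagic", "/stats", "/newsletter")):
            counts[path.split("/")[1]] += 1

for tool, n in counts.most_common():
    print(tool, n)
```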

5,305 entries were for Feedmagic, the RSS embed wrapper I created for another widget. Let’s say that saves people 2 minutes each, because otherwise they would have to click through to the feed itself and then return to their page after reading. That’s around 10,600 minutes saved. Considering that part of the tool took me less than an hour to write, I think that’s great ROI.

1,973 entries retrieved statistics for a community. Let’s say that 80% of them were successful, to account for typos and attempts to gather data for private communities. Manually gathering this information would involve going to each of the components of a community, counting discussion threads and comments, counting blog posts and comments, listing files. For some of the hidden data like wiki views, it would also involve accessing the API. There’s a date filter, so manually recreating this would actually involve checking each item to see when it was posted. Let’s say that takes 1-2 hours of work, maybe 1.5 hours on average. That’s around 142,000 minutes saved.

1,730 entries were for the profile summary. This searched Lotus Connections Profiles for combinations of tags and displayed the counts for each of them. For example, if you used A, B, and C as rows and X and Y as columns, the tool searched for A&X, A&Y, B&X, B&Y, C&X, and C&Y. Each total was linked to the search. People searched for a lot of different combinations. You could duplicate this by manually creating lots of links to searches, which would probably take 10-20 minutes depending on how many combinations you were looking for. You wouldn’t have the totals, though, and it would probably take another 10 minutes each time to tally things up. If you were just interested in finding an expert using the search, let’s say it saves you 2 minutes of figuring out how to search the system yourself. I’m going to go with a time savings estimate of 5 minutes per request, which balances how people were using it for creating pages, making reports, and simplifying search. That’s around 8,650 minutes.
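Here’s a sketch of that cross-product idea, with a stand-in search function. The real tool called the Profiles search API; I’m skipping those details and faking the counts.

```python
# Sketch of the row-by-column tag search the profile summary performed.
# The `search` callable is a stand-in for the real Profiles search API.
from itertools import product

def profile_summary(rows, columns, search):
    """Return {(row_tag, col_tag): count} for every tag combination."""
    return {(r, c): search(f"{r} AND {c}") for r, c in product(rows, columns)}

# Example: rows A, B, C crossed with columns X, Y gives six searches.
fake_counts = {"A AND X": 12, "A AND Y": 3, "B AND X": 7,
               "B AND Y": 0, "C AND X": 5, "C AND Y": 9}
table = profile_summary("ABC", "XY", lambda q: fake_counts.get(q, 0))
for (row, col), count in sorted(table.items()):
    print(f"{row} & {col}: {count}")
```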

877 entries were for community newsletters. Again, let’s assume 80% were successful. It takes even more work to create a newsletter, because you have to create links and count up new replies. Let’s say that’s 2-3 hours of work, or 2.5 for our estimate. That’s around 105,000 minutes saved.

421 entries were for community data export, which is also handy for determining individual member contributions, wiki page views, and file downloads. Let’s assume that 80% succeeded. This takes a lot of effort as well, because you have to tally contributions by member and copy all the details. I’d say that would take 4+ hours for a community, and you would probably only go to that effort on the rare occasions when you wanted to recognize people for their individual activities or justify your investment in building the community wiki. That’s probably at least 80,800 minutes there.

102 requests were for a generic feed export. You could do this manually by going through all the pages in the feed and copying the information, filtering by date. Let’s say that would’ve taken 20 minutes. Assuming 80% success, that’s around 1,600 minutes.

98 requests were for the code to create the feed embedder. You could duplicate this by creating your own page that included the feed embedder information, but that would probably have taken people 10 minutes to figure out. That’s around 1,000 minutes.

56 entries were for the forum exporter. This made it easier for people to analyze community discussions by copying the information into a spreadsheet with the subject, the body, and the author. You could duplicate this by opening each page of each discussion and copying the results into a spreadsheet or document, so I’d say this saved people an hour on average. Assuming 80% of the requests succeeded, that saved around 2,700 minutes.

55 entries were for community OPML, to make it easier for people to subscribe to different community feeds. You could do this manually by substituting the community ID into a template, although the OPML tool was neat because it provided an importable file as well as HTML links. I’d say this saved people 20 minutes. Assuming 80% success, that’s around 880 minutes.
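For reference, OPML is just a small XML format, so the tool mostly had to substitute feed URLs into outline elements. A sketch with made-up URL patterns (the real Connections feed URLs were different):

```python
# Minimal sketch of generating an OPML file for a community's feeds.
# The domain and URL patterns here are hypothetical.
COMMUNITY_ID = "abc123"
feeds = {
    "Forum": f"https://example.com/forums/atom?communityUuid={COMMUNITY_ID}",
    "Blog":  f"https://example.com/blogs/atom?communityUuid={COMMUNITY_ID}",
    "Wiki":  f"https://example.com/wikis/atom?communityUuid={COMMUNITY_ID}",
}
outlines = "\n".join(
    f'    <outline type="rss" text="{name}" xmlUrl="{url}"/>'
    for name, url in feeds.items()
)
opml = f"""<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>Community feeds</title></head>
  <body>
{outlines}
  </body>
</opml>"""
print(opml)  # importable into any feed reader that supports OPML
```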

9 entries were for a blog exporter. You could do this manually by copying and pasting all the entries into a document, so I’d say this saved people 10 minutes because most blogs don’t have a ton of entries. Assuming 80% success, that’s around 70 minutes.

There were a number of other requests, too, but we’ll ignore them for this analysis.

Over the 94-day span, then, this tool might have helped save around 354,000 minutes. That’s about 3,800 minutes a day: the equivalent of 2.6 round-the-clock days, or almost eight 8-hour workdays, saved every day. Considering that my main focus is client projects and this was a voluntary effort squeezed into the gaps between billable projects, that’s pretty darn cool. This estimate doesn’t take into account the command-line use of the tool for restricted or external communities, either.
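If you want to check my arithmetic, here it is in one place, using the per-request estimates from above:

```python
# Recomputing the estimate: (requests, success rate, minutes saved each).
estimates = {
    "Feedmagic":        (5305, 1.0,   2),
    "community stats":  (1973, 0.8,  90),
    "profile summary":  (1730, 1.0,   5),
    "newsletters":      ( 877, 0.8, 150),
    "data export":      ( 421, 0.8, 240),
    "feed export":      ( 102, 0.8,  20),
    "embedder code":    (  98, 1.0,  10),
    "forum exporter":   (  56, 0.8,  60),
    "community OPML":   (  55, 0.8,  20),
    "blog exporter":    (   9, 0.8,  10),
}
total = sum(n * rate * mins for n, rate, mins in estimates.values())
print(f"{total:,.0f} minutes over 94 days")            # ~353,640
print(f"{total / 94:,.0f} minutes per day")            # ~3,762
print(f"{total / 94 / 480:.1f} workdays saved daily")  # ~7.8
```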

The tool’s been around for longer than just the three months we’ve been looking at. My first blog post about it was in June 2010, when Marty Moore mocked up a web interface and people started asking me to put the command-line tool on the web. That means I’d probably been gradually building and sharing it over the preceding few months, and it had gotten popular enough for people to ask for a less techie interface than the command line.

Let’s say the tool built up in value linearly over the 607 days since that blog post, eventually reaching the point where it saved people around 3,800 minutes per day. That means the value is described by a line with the equation y = mx + b. To simplify, we’ll assume that the tool started with zero value (although it was already being used by others on June 18, 2010) and that it reached 3,800 minutes per day on February 15, 2012. This would have been a great time to break out calculus and integrals to get the number of minutes under this function (and we probably would have, if we were assuming curved growth or something like that). But it’s just as easy to think of this as a triangle with a base of 607 days and a height of 3,800 minutes/day. The area of that triangle approximates the total number of minutes saved over the tool’s web-based lifetime, assuming linear growth for simplicity (as more people found out about the tool, and as I added new features). The area of a triangle is 1/2 * base * height, so that gives me around 1,153,000 estimated minutes saved over the past 607 days.
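In code, that triangle is just:

```python
DAYS = 607   # June 18, 2010 through February 15, 2012
PEAK = 3800  # minutes/day saved by the end of the span
print(f"{0.5 * DAYS * PEAK:,.0f} minutes")  # 1,153,300
```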

I joined IBM on October 15, 2007, and I will leave on February 17, 2012. That’s 4 years, 4 months, and 2 days, or approximately 226 weeks rounded down. Let’s say only 90% of those weeks are actually work weeks (holidays, vacations, etc.). Eight hours a day, five days a week, for 203 weeks or so: that means that when I leave, I’ll have worked for IBM around 487,000 minutes. The vast majority of those minutes have been spent on client projects. (Ah, the life of a consultant with utilization metrics…) So yeah, net benefit to IBM, which is great.
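Same deal, double-checked with Python’s datetime:

```python
from datetime import date

tenure = date(2012, 2, 17) - date(2007, 10, 15)  # 1,586 days
weeks = tenure.days // 7                         # 226 weeks, rounded down
work_weeks = int(weeks * 0.9)                    # 203 after holidays/vacations
minutes = work_weeks * 5 * 8 * 60                # five 8-hour days per week
print(f"{minutes:,} minutes worked")             # 487,200
```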

That one set of tools, which I built in my spare time to save myself and other people from repetitive work and to open up new possibilities for communities, may have saved people more time than I’ve worked at IBM altogether. That’s the amazing thing about intrapreneurship. By breaking the relationship between time and value, you can scale beyond the number of hours you can physically work. The tool probably took me less than a month of development time, spread out over e-mail requests and lunch breaks and calls. Granted, people would probably not have run those reports or crunched those numbers if the tool hadn’t been available. But hey, the tool is there, and I’m glad it made these things possible.

I never received any direct monetary compensation for creating the tool. (Oh, wait, there was that Best of IBM award!) The steady stream of thank-you notes came in very handy during performance reviews (and subsequent bonuses =) ). The best benefit from intrapreneurship was meeting a lot of wonderful people throughout IBM and seeing what they did with their communities thanks to the tool.

Special thanks go to Luis Benitez, who’s taking over as the primary contact and who put together the Lotus Notes plugin; Marty Moore and Stephan Wissel, who contributed spiffy designs; Robi Brunner, whose hosting and domain gave it a lot of credibility; John Handy-Bosma and John Rooney, who helped me figure things out with the CIO; Darrel Rader, for suggesting plenty of nifty tools and using them to make his communities smarter; and lots of people throughout IBM for suggestions, improvements, and even the occasional bugfix.

If you haven’t started yet – be an intrapreneur! If you’ve gone down that wonderful road: Have you made something valuable? Can you estimate your ROI?

If my math is wrong or if it looks funny, please help me make it better!
