Rusty Divine

Live, Love, Learn, Teach

Do you have an iteration or sprint calendar?


Does your team sometimes lose track of the important mini-milestones in your iteration? Our team has daily stand-ups and a task board, but we would sometimes forget when we were supposed to get a release ticket created or whether we were promoting this week or next.

We created a simple iteration calendar and pinned it up on our stand-up wall. Each day we moved a red pin to the current day. It was a simple hack that helped us remember to do all the little things, which was extra important on our team because each member took a turn setting up the necessary meetings and doing the actual code promotion.

If you would like to learn more about how we managed our iterations, check out module 5 of my Pluralsight course, Introduction to Working the Project.

Agile Story Pointing Benchmarks

We use story points to estimate how much effort a given story will take. We have found this reduces the optimism inherent in estimating with hours because it encourages the team to compare a story to other stories instead of imagining how long it will take. When we do estimate with hours, we typically think about the happy path, and really, that’s OK.

Consider a two-week iteration with 10 stories. Maybe six of those stories take the happy path and are completed pretty much as expected. Two stories turn out to be a lot easier and are completed faster than expected, but the other two take significantly longer than expected. The problem is that we don’t know which two those are at the beginning, so there’s no way to estimate accurately using hours. Instead, if your team is fairly consistent, you can use points to know that in a given iteration you can complete, say, 30 points of stories. A few of those stories turn out harder than you expected, but that’s always the case. Everything evens out in the end, or at least evens out more consistently.
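To see why the totals even out, here is a minimal, illustrative simulation. The story counts and multipliers are hypothetical numbers chosen to mirror the scenario above, not real team data:

```python
# Illustrative only: ten stories estimated at 8 happy-path hours each.
estimated_hours = [8] * 10

# Hypothetical outcome multipliers: six stories go as planned,
# two turn out much easier, and two take much longer.
actual_factor = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5, 1.75, 1.75]

actual_hours = [est * f for est, f in zip(estimated_hours, actual_factor)]

# The worst single-story miss is large...
worst_single_miss = max(abs(act - est)
                        for est, act in zip(estimated_hours, actual_hours))

# ...but the iteration total stays close to the plan.
total_miss = abs(sum(actual_hours) - sum(estimated_hours))

print(worst_single_miss)  # 6.0 hours: one story missed by 75% of its estimate
print(total_miss)         # 4.0 hours: the whole iteration missed by only 5%
```

The individual misses are big in both directions, but they largely cancel out; tracking points completed per iteration leans on that cancellation instead of pretending each story’s hours can be predicted.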

But to get this ball rolling, your team needs a good common understanding of how to compare two stories and judge how much bigger one is than the other. We use a series of sizes: 0.5, 1, 2, 3, 5, 8, 13, 20. We rarely let a story stay at 13 or 20 without breaking it down further, because at that size we’re getting into something we’re not sure someone can complete in one iteration. You can start your team off by choosing a very familiar story to be your 2-point benchmark and another to be your 5-point benchmark. Then every story you consider can be compared to those two benchmarks, and your team can collectively decide where in the range the story fits. Typically, even non-developers can point development stories, and developers can point non-development stories, as long as they can fit the work into their mental picture of how much effort a story like that would take.
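As a rough sketch of that comparison process: the scale values come from our list, but `nearest_point` and `needs_breakdown` are hypothetical helpers for illustration, not a tool our team actually uses.

```python
# Our point scale; 13s and 20s rarely survive without being split.
SCALE = [0.5, 1, 2, 3, 5, 8, 13, 20]

def nearest_point(relative_effort):
    """Snap a raw 'how much bigger than the benchmarks?' guess to the scale."""
    return min(SCALE, key=lambda p: abs(p - relative_effort))

def needs_breakdown(points):
    """We rarely let a 13 or 20 into an iteration without splitting it."""
    return points >= 13

# A story judged a bit more than twice our 2-point benchmark:
print(nearest_point(4.5))   # 5
# A story judged well beyond the 5-point benchmark:
print(needs_breakdown(nearest_point(15)))  # True
```

The point of the snapping step is that the team never debates whether a story is a 4 or a 4.5; it only debates which benchmark the story is closer to.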

Our team has experimented with taking it one step further and writing descriptions for each point value. This is still experimental, and the general feeling is that we would need to shorten the list by about half to make it more concise:

0.5pt story

  • Small bug fixes with no foreseeable risk and small JavaScript tweaks
  • Small view updates; no tests affected (spelling; remove field; add label; move field)
  • Business meeting that requires less than two hours of prep work

1pt story

  • Bug fix with low-to-medium difficulty/risk, but well understood; may affect a few tests
  • Simple, well-defined task (new validation; make field read-only when; fix/improve CSS)
  • Research task time-boxed at about a day (what-if scenario; will this work)
  • Business meeting that requires two-to-six hours of prep/follow-up work

2pt story

  • Bug of medium difficulty/uncertainty, or several tests affected (unexplained JS errors; AJAX)
  • Understood task spanning more than one layer/screen/table (validation rule server & client side; repetitive but tricky; significant data load script changes)
  • Try a small prototype (performance improvement; what-if scenario)
  • Business meeting that requires both significant prep and follow-up work (6-to-12 hours)

3pt story

  • Bug fix of medium-high difficulty, significant uncertainty, or many tests affected (concurrency)
  • Very simple screen or feature (system functions; nightly jobs action)
  • Task spanning multiple layers/screens with one or two business rules to modify
  • Significant updating of existing documentation (UCP; user manual; tech document; workflow)
  • Multi-day research tasks (investigate one-to-two COBOL programs; new framework/tool)
  • Business meeting requiring 2-3 meetings or more than 2 days of research/analysis

5pt story

  • Bug fix of high difficulty, high uncertainty, and probable course changes (fix & refactor)
  • Design & implement a new feature, service, or framework (toast; branch-wide; JS test framework)
  • Research & implement a performance improvement (batch update via sproc; profiling)
  • Branch-wide documentation (style guide; user manual)
  • Series of 3-5 business meetings with analysis and design (pink sheet; COBOL workflow)

8pt story

  • Implement a new feature that affects most or all of the site (granular security; table logging)
  • Refactor a significant amount of code (funding section page; database/code renaming)
  • Implement a screen from CICS (estimate adj.)
  • Comprehensive documentation (site-wide user guide; technical specs)
  • Significant research that will take most of the iteration for one person

13pt story

  • Design & implement an automated process/service (prod>test data move)
  • Significant research task that could take all of an iteration for one or two people

20pt story (consider whether this story can be broken down into smaller stories)

  • Large research task that could take an entire iteration for two or three people
  • Very complicated refactoring
  • Create and configure a new project solution (n-tier, NuGet packages, tests, basic GUI)

Sometimes we’ve failed with Agile

Tomorrow there’s a lunch gathering for our local Agile community. The topic is failure, and it will be led by Keil Wilson, an agile project manager who works with me at NDOR.

At this Luncheon, we'll focus on the value of failing fast and forward.  Bring a simple--or the most heinous--failure you've encountered in your Agile Transformation.  We'll then focus on how it was discovered and the good that came out of it.  We'll also rely on the collective wisdom of our Community to offer other suggestions.

We often talk about failure being an option on our software development team. For example, we did not burn down last iteration when two of our user stories weren’t completed in time. We brought this up at our retrospective to talk about it. One of our customers was there, and she didn’t care that we didn’t burn down; in fact, just the opposite – she thought we were too focused on it when it wasn’t a problem. We talked about value and commitment, but in the end agreed that if we fail to burn down occasionally, it is actually a sign that we are pushing ourselves appropriately. If we always burned down, it would seem a little suspicious, like maybe we were slacking off some; and if we never burned down, it would become a loss-of-trust issue where the expectation would begin to be set that the burn-down is irrelevant. In our case, I’d estimate we burn down 80% of the time or more, and our team is comfortable with that amount of failure.

For the luncheon tomorrow, though, I’ve thought of a few examples of projects and teams I’ve been part of that have failed with Agile in the past:

  1. One customer brought us a requirements document they had created and put out to bid. We won that project and did a great job delivering on it, using Agile to guide us. Our failures were the poorly defined conditions of acceptance in the document they wrote, missing assumptions and constraints, and a fixed budget that left no room for pivoting when they needed to; in the end, the customer was left with a solution they weren’t satisfied with. Later, they paid another company to rewrite it, and that company is still working on it today, but at least they are charging time & materials.
  2. Another customer was a startup, and we pre-billed them each month with a definition of what we were going to complete that month because we worried their funds might dry up. We were able to get done what they needed, but in the later stages of the project their project sponsor (the money person) realized how much the project had cost, and their product owner (his brother) got convenient amnesia and pushed the blame onto us. Our failure there was not making sure the project sponsor was more involved as the project was being completed; it ruined a relationship with that customer that better project management could have preserved.
  3. Some of my early experiences with Scrum and Agile took a light approach. I now realize we could have done much better by occasionally reflecting on how we could improve instead of making the same mistakes over and over.
  4. On the project I currently work on, we still have some things we could be doing better. For instance, our customer does not get very involved in prioritizing the backlog, and when they do, it seems like they aren’t thinking as strategically as would be good for the project in the long run. The result is that we end up focusing on enhancements to existing features – moving fields around, swapping columns, changing shading, etc. – instead of taking on the new initiatives that need to be completed during this phase of the project. The silver lining is that our project manager specifically set a budget for enhancements in this phase’s project charter, and as of this iteration they only have a few points left to spend before they need to make an official request for more. Even though we put this safeguard in place, it didn’t prevent them from spending their entire enhancement budget first, including reversing some of the work they asked for, before working on anything new.

I’m looking forward to tomorrow’s meeting to see what the rest of the group has to say about their experiences. One thing’s for sure: the restaurant we are meeting at has never failed to serve up some great burgers!

A System for Technical Training–How and Why Your Team Should be Learning

There is something you should be doing to help yourself grow - practicing your craft. You may feel like you don't have the time or the energy to do it outside of your regular responsibilities. Maybe you've tried off and on, or maybe the thought of it conjures negative feelings. Sometimes it feels more rewarding to keep working through tasks and checking them off the list than to switch to something that, by its nature, makes you struggle and thrash; it can feel aimless, like you are putting in more work with no way to measure the payoff. Sometimes you aren't even allowed to study on the job.


You need to remove as many obstacles as you can to make Learning something that you practice consistently. So, what is stopping you? What's keeping your team from spending the time it needs to get better?


Common obstacles include the following:

  • Why should we set aside time for learning?
  • When can I study and practice?
  • What do I study? Where do I start?
  • Should it be something directly related to what my company does?
  • How much time should I spend?
  • What's more important, learning or finishing a task?
  • How do I know I'm getting better?
  • Do I have to?


Let me tell you what my team is doing to remove most of these obstacles and how it is invigorating us.


We have recognized that there is a significant risk of burn-out if a team has a never-ending backlog of work to do. We need regular breaks that mark and acknowledge the work we've done while allowing our minds to relax and regroup for the next push. Our team plans three weeks of work at a time and marks the end of the cycle with a demonstration to the end users, a retrospective of what went well and what could have been better that helps us continually improve, a day off from our daily 15 minute stand-up meeting, and an afternoon with nothing to do other than learn something new.


We also dedicate two hours every week, Friday mornings from ten to noon, to group training where the tech lead (that's me, for now) prepares something for the team to work on together. It could be improving a skill that helps the team or training on a new technology we are considering adopting. We've used these meetings to learn and practice TDD, explore Azure, test out new versions of Visual Studio and TFS, start new projects from scratch, and pick a new technology to implement a small prototype in, just to name a few.


Our team has MSDN Universal, which comes with a $150/month Azure credit we can use to run virtual machines and websites. We also all have Pluralsight Annual Plus subscriptions that we are encouraged to use during our allocated training time, and in the last 30 minutes of the day, which can otherwise be unproductive if you've wrapped up your work.


We enter our time in TFS against our stories and track that we are each entering at least six hours a day. We have also set aside time in TFS for training, which, including the two hours each Friday, comes out to at least four hours per week (a little more would not be frowned upon).


At one of our retrospectives recently we talked about some problems we were having making ourselves take the allocated time for training instead of just working on tasks. We made some adjustments to our schedule, agreed as a group that training is higher priority than adding unplanned tasks to our iteration, and decided to assess our skills so that each of us can focus on the areas we need most.


We each took the Pluralsight tech pro challenge over at Smarterer and had a lot of fun grousing about the questions and learning how we stacked up to the average developer. Based on each of our lowest scores, we picked courses from Pluralsight to watch that would most help improve our skills. We will then take their assessments to demonstrate our proficiency and also come up with a simple project to present to the rest of the group that covers the new skills we learned. Our progress will be tracked by our supervisor, but also the team as a whole will benefit because each of us will be better at our jobs and each of us will teach what we have learned.




Sometimes the best solution to a problem is creating a system or process where the solution just falls out naturally. We have to learn because we've set up a process that leaves us no choice: we have the time set aside, we have identified what we need to focus on, and we have made ourselves accountable to our supervisor and to each other. We no longer struggle with not knowing what direction to go, or whether we should study or find a task to do, and it feels empowering and easy, like coasting downhill.

Some Highlights from HDC14

When I go to a conference I am always re-energized afterward from having learned about tools I’d love to try and techniques others are using, and from finding common ground with other professionals I wouldn’t otherwise get to interact with.

This week I went to, and spoke at, the 2014 Heartland Developer’s Conference in Omaha, NE. I was impressed with this year’s selection of talks – it seemed more were really well designed than I’d seen in years past.

Jack Skeels’ Master Agile and Write More Code

The highlight for me was a keynote delivered by Jack Skeels of Agency Agile titled Master Agile and Write More Code (slides). Jack talked about how agile processes may not actually improve productivity, especially for shorter projects and projects constrained by budget and schedule. He made some points about how agile can work in a constrained environment, which all really hit home for me given my experience in custom software development through various consulting firms.

  • Small scope is better
  • Direct communication is critical
  • Learning constantly via process improvements is important

Those points tied in nicely with my talk, From Idea to Core Feature Set, which covered how to get started on the right foot on a project that may be constrained by budget and schedule.

He also emphasized the importance of flow – keeping interruptions to a minimum – so that a team can be most productive. I was excited to hear how important he thought this was because I’m working on a side project that deals directly with this problem space! Now that HDC is over, I intend to turn back to that and bring something out by early next year.

One of the books he highly recommended was Practices for Scaling Lean and Agile Development; he said he owns two copies, both dog-eared, bookmarked, and well-worn. He also mentioned Thinking, Fast and Slow and his Amazon Bookshelf.

John Wirtz’s Stakeholders and Persona’s are for Wusses – Everyone go Meet Your Real Users

John’s talk on the importance of integrating with your users was enlightening. I had met John for coffee one time to talk about how his company Hudl manages their teams and their agile processes, so I was really looking forward to hearing what he had to say and I wasn’t disappointed.

John told us a bit about what Hudl does and how they modeled their teams after Spotify’s tribe-guild-squad model, which scales agile teams very well.

One technique Hudl uses to get raw feedback is what they call the Noob Gut-Punch: they sit with a user who is fairly new to their software and ask them to navigate around and explain what each button and menu means to them. John played a brief clip of one of these interviews, and it truly was hard to hear how confused the user was! I can only imagine being the developer responsible for creating a pretty decent interface, only to watch someone struggle with it that much. The discomfort of that moment will drive the developer to think like someone new to the system and give them a real user to advocate for when debating how features should be implemented, so that someone new to the software can understand it.

He also said that they have their developers question customers in a 5-whys style, drilling down into what the customer is saying until they find the root of the problem.

One other point he made was that they try to hire developers with natural user empathy. If someone really cares about finding out how users are using software, they will feel welcome in Hudl’s culture.

He mentioned tools such as WuFoo and UserVoice.

Miscellaneous Highlights

  • Pete Brown talked about the Intel Galileo, an Arduino-type board that runs Windows! He highly recommended everyone watch the IT Crowd and get Little Bits for their kids or themselves.
  • Bill Fink showed us the Kinect v2 and said it will be priced at $199 and be dramatically better than v1!
  • Andy Ochsner recommended the ELK stack for log analysis and showed how awesome Kibana is at charting log data – he suggested you could do some cool things with public data.
  • Dave Royce sat next to me at one session and told me his team uses Liquid Planner, Confluence, and Box to manage their custom software projects.
  • Paul Oliver gave a spectacularly funny talk on Git (slides) where he showed us how to beat-box, too. He recommended this interactive tool to help learn Git, SourceTree as a beginner-friendly client, and GitExtensions for advanced use, and said not to use the VS IDE for Git on a team because merge conflicts are a pain. He gave out a few copies of the Git Pocket Guide and had some great things to say about it.


Also, I got a chance to chat with Jim Collison.

From Idea to Core Feature Set–Heartland Developer’s Conference 2014

Thanks to everyone who worked on HDC this year to make it a great success! I really enjoyed meeting everyone and talking with those who came to my talk, From Idea to Core Feature Set.

Here are the slides and examples for download: (3.4MB)

Let me know if you have any questions!

Heartland Developer Conference 2014

I’ve been attending HDC for about 4 years, and I’m happy to say I am presenting this year! Register now and come see my talk, From Idea to Core Feature Set, where I’ll show you how to take an idea for a product or an initiative at work and get to a shared vision of what is really most important and what really needs to get done first.

We Scrubbed our QA/Prod Move

It was quarter after eleven on Friday morning, and I noticed my teammate’s brow was furrowed. It was her turn this iteration to coordinate the move to QA and production, which involves enough steps that we have created several process documents and an iteration calendar to help keep the move on track.

The seven of us were in the basement for our weekly team technical training meeting – two hours set aside to explore new technologies and discuss how to improve our code base. It’s a time we look forward to each week as a fun way to start a Friday and a time to talk openly about our ideas and interests.

Two of the team were discussing or debating something while the rest of us worked through a tutorial on SignalR on our Azure VMs.

I asked the developer in charge of the QA move how it was going and she told me there was a problem.

“Could you open the QA website and check the obligation transaction page? We are getting a work phase not found error,” she said.

I checked and found the same error. It was strange and took a minute to process what could be going on to cause this error message. Neither our developers nor our very thorough QA tester had encountered this in our development branch during the iteration.

After a few minutes of conferring about it, we realized one of the stories we sent to QA was failing because the table it expected to update did not have any data loaded yet, and would not until we turned on this multiple-iteration release in production, converted the data, and took over the functionality we were replacing in the mainframe system.

Our application is complicated by having some parts that are used in production and some parts that are technically in production but turned off and hidden until we can complete the entire set of features and retire them in the current mainframe system. This particular feature bridged both worlds, which is why it caused a problem when we got to QA: in demo, we have data loaded from production, but QA matches production exactly and does not have that data loaded.

What to do? We could scrub moving to QA and prod this iteration, roll back the problem feature, and do a normal QA/prod move next iteration. We could populate the empty tables with production data so that our cross-over feature would work, but that data would become stale as soon as someone used the mainframe system, so that was out. We decided our best option was to scrub the move, because only one minor feature would have been visible in production.

The QA mover and I went upstairs to talk to the product owner. We explained the situation and she agreed, the minor feature wasn’t important enough to do an emergency fix for and we should just scrub the move.

To me, this issue and how smoothly our process handled it was awesome. Sometimes it’s easy to get hung up on something like this that appears trivial, but no more than 20 minutes passed between when we realized the issue and when we decided to scrub the move. We didn’t need to debate or discuss; it was clear from our process what we should do, and that included both developers and the product owner.

Days like this make me so happy to be working with an agile team!

Evaluation Results from my Presentation

I recently presented “An Agile Requirements Gathering Process to Get to MVP” at Nebraska Code Camp, and my session evaluations are back!

Getting feedback is something I look forward to because I put a lot of work into what I do, and I am always looking to improve.

An improvement I see, based on talking with attendees and reviewing the evaluations, is that I could have done a better job explaining some of the agile terminology and philosophies I discussed, so that attendees less familiar with them would have had an easier time following along.

Based on 8 evaluations, out of an estimated 30 attendees, here are my average scores on a scale of 1 = Terrible and 5 = Great:

  • Preparation: 4.63
  • Clarity: 4.75
  • Content: 4.5
  • Demos: 4.5
  • Use of Time: 4.75


And the written comments:

  • Liked effective real world use. Practical tips, usable info.
  • Overall, great, well prepared presentation. Consider slowing down a little. This did allow for a lot of well facilitated discussion time though!
  • Rusty provided a wonderfully clear process that can be multiplied across numerous organizations and projects. The presentation was easy to follow and understand, especially because of the example he provided. I really enjoyed this presentation!
  • Very useful want to do a user group on this topic
  • More concrete demos would be nice. Also maybe some agile introduction.

Thanks to everyone who came, to those who helped me prepare, and to the attendees who took the time to leave me some feedback.

Agile Requirements Gathering Process to Get to Minimum Viable Product (MVP)

Here are the supporting materials for my talk at Nebraska Code Camp (#ncc4) this year:

  1. Slides (PPT)
  2. Assessment Documents
  3. Slides (PDF)

Thanks to everyone who came, and a special thanks to Adam Costenbader & Keil Wilson who helped develop the process and who helped shape this talk through several iterations of creating the outline and slides.