Rusty Divine

Live, Love, Learn, Teach

Manager Tools One on Ones

If you are responsible for a team of people at work, then I would recommend listening to the podcast series on One on One meetings at the Manager Tools website. You will learn how to build a good business relationship with your team. Or, maybe you are having trouble connecting with your boss; listening to the series may help you suggest some changes to your boss that would make your relationship stronger.


Big Ideas

There are many benefits to doing 1:1s for both parties, including building a strong relationship that can withstand difficult times, helping your direct grow professionally by really knowing their strengths, weaknesses, and aspirations, and dedicating a time when your direct can tell you anything they need to that might otherwise go unsaid.


Scheduled 30-min/week

The authors emphasized that this meeting should be scheduled on your calendar and that it should be weekly. Scheduling it communicates that it is important and will happen, even if it is occasionally rescheduled. That dependability earns you trust and, as an additional benefit, may actually reduce interruptions during the week for issues that could wait until the next 1:1 for your direct to ask you about. The 1:1 should be weekly because you are building and maintaining a relationship, and that just takes time; plus, people tend to keep only the previous and next week in their minds, so waiting longer means you will lose some details.


I have had the experience of a busy boss who wanted me to track him down for our 1:1, and it became a dreadful weekly task instead of something that re-energized my day. Having it on the calendar as something that can be depended on helps me prepare for it with questions to ask or topics I'd like to talk about.

I've also experienced 1:1s where the things I said were never followed up on, which has taught me to follow up on anything from a 1:1 the same day it occurs if possible. Showing as a manager that you take these meetings seriously and really want to help goes a long way toward building the trust you need to help each of your directs.


Each direct's 1:1 should feel different

The 1:1 is about the direct; it's their chance to say whatever they want to. Some people are most comfortable sticking to business and providing a status update, others may want to talk about one specific thing they are working on, and sometimes a conversation about what someone did last weekend or is planning for vacation is what fills the time.


The authors recommend that every 1:1 follow the same agenda, though. Their typical format is 10 minutes for the direct, 10 minutes for the manager, and 10 minutes to talk about coaching or the future. This format can be tailored to whatever makes sense for you. In my experience with an agile software team, I already know from the daily stand-up meeting what each team member is doing and what blockages they may have. I sometimes have some organizational news to share or some advice or coaching, but I don't need 20 minutes to cover both of those.


My 1:1s generally follow a format of 15-20 minutes for them and 10 minutes for me, where I might ask a question that helps me understand them better (e.g., what do you like most about your work?), give them some advice or feedback, or share some news from a meeting they weren't present in.


Final thoughts

After listening to the podcast series I made a change to my 1:1s. I took them out of the small conference rooms I had scheduled them in and brought them into the large breakroom with a wall of windows to give them a more open, relaxed feel. The privacy of a noisy breakroom has so far proved sufficient to be able to talk about whatever has been on our minds.


I would like there to be a podcast tailored to directs that teaches them what to expect from a 1:1 and how to prepare. I've updated my agenda to give my directs a little more information about the format and what to expect, but I think listening to a 30-min podcast about why 1:1s are important and what they can get out of them would really help them understand. I think the first episode of the Manager Tools podcast would probably suffice for this, too, but it would be nice to have it tailored to the direct instead of to the manager.


I always take notes in a notebook and then type up a few quick bullet points in OneNote to help keep track of what we talked about. It would be nice if there were a 1:1 app that helped track what you discussed and the things you need to follow up on, and helped you cover feedback and future goals intentionally.

Do you have an iteration or sprint calendar?


Does your team sometimes lose track of the important mini-milestones in your iteration? Our team has daily stand-ups and a task board, but we sometimes forgot when we were supposed to create a release ticket or whether we were promoting this week or next.

We created a simple iteration calendar and pinned it up at our stand-up wall. Each day we moved a red pin to the day we were at. It was a simple hack to help us remember to do all the little things, which was extra important on our team because each member took a turn with setting up the necessary meetings and doing the actual code promotion.

If you would like to learn more about how we managed our iterations check out module 5 of my Pluralsight course on Introduction to Working the Project.

10 Years of Blogging

My first blog post was ten years ago! Ten years of blogging seems like a good time to reflect on what it has meant to me.

I think we could do a quick test if you would help me out. If you see this post within a week (by June 29th, 2015), leave a comment. It can just be ‘I was here’ or something similar. I'm guessing I'll get 0 comments, but maybe as many as 3.

The first thing that says to me is that you can blog for ten years and not have any followers. I didn't expect that when I started, but it hasn't stopped me either. I blog to entertain and to teach, but mostly it's selfish: it lets me develop my ideas more fully or find a reference to how I did some complicated code just by searching my own blog. Even without any followers, I expect to be around ten years from now, too. Blogging is like jogging – it's exercise for my brain – and like jogging, people aren't very interested in watching me exercise :-). Unlike jogging, though, some of my blogging content is actually useful to people who've encountered a similar problem that I've solved in a blog post.

My most popular post in these 10 years is How to encrypt a password reset email link in asp.Net mvc with around 7,000 views, and the runner-up is No One Knows They Want Visible Solids in Their Spaghetti Sauce with about 5,000 views. OK, that second one really surprised me. It looks like almost all of the traffic came on Tuesday, May 27, 2014, and the main referrer is the second bullet point on a Swedish blog post; I kind of wonder whether the author intended to link to my post. The top-viewed post gets a lot of search traffic, and I think it's the accepted answer on a Stack Overflow question, or at least someone references the article for how to do it.

Have I made any money? Yes! I made about $2.50 when my good friend actually clicked on the Buy Me a Beer button on the sidebar. I have not received a single check from Google or any other advertiser. I also have not been given any free books or software to review, although if someone does read this, I would totally do that!

Every time I interview for a job I do mention that I blog, and that generally gets a positive reaction even if the company never goes to check it out. It’s just like saying, “I care enough about my craft to write about it.” I would recommend blogging to any software developer just for that bonus.

What could I be doing better? I like to look to role models like Scott Hanselman and Scott Adams who do really well at teaching others by clearly explaining ideas and being multi-dimensional by blogging about lots of life experiences. Hanselman does a great job trying out cutting edge technologies and giving people quick guides with screen shots to how he got that technology to work. Adams has his systems philosophy where you throw goals out the window and just follow a process every day until one day you reach and surpass any goals you would have set. I could be doing more like Hanselman to show what I know, and I could be doing more like Adams to set up a consistent blogging schedule so that I have some content that followers would stay engaged with.

Agile Story Pointing Benchmarks

We use story points to estimate how much effort a given story will take. We have found it reduces the optimism inherent in estimating with hours because it encourages the team to compare stories to other stories instead of imagining how long the work will take. When we do estimate with hours, we typically think about the happy path, and really, that's OK.

Consider a two-week iteration with 10 stories. Maybe 6 of those stories take the happy path and are completed pretty much as expected. Two stories turn out to be a lot easier and are completed faster than expected, but the other two take significantly longer than expected. The problem is that we don't know in the beginning which two those are, so there is no way to estimate accurately with hours. Instead, if your team is fairly consistent, you can use points to know that in a given iteration you can complete, say, 30 points of stories. A few of those stories turn out harder than you expected, but that's always the case. Everything evens out in the end, or at least evens out more consistently in the end.

But to get this ball rolling, your team really needs a good common understanding of how to compare two stories and judge how much bigger one is than another. We use a series of sizes: .5, 1, 2, 3, 5, 8, 13, 20. We rarely let a story stay at 13 or 20 without breaking it down further, because at that size we're not sure someone can complete it in one iteration. You can start your team off by choosing a very familiar story to be your 2-point benchmark and another to be your 5-point benchmark. Then every story you consider can be compared to those two benchmarks, and your team can collectively decide where in the range it fits. Typically even non-developers can point development stories, and developers can point non-development stories, as long as they can fit the story into their mental picture of how much effort that kind of work takes.

Our team has experimented with taking it one step further and creating descriptions for each of these sizes. This is still experimental, and the general feeling is that we would need to shorten the list by about half to make it more concise:

.5pt story

  • Small bug fixes with no foreseeable risk and small JavaScript tweaks
  • Small view updates; no tests affected (spelling; remove field; add label; move field)
  • Business meeting that requires less than two hours of prep work

1pt story

  • Bug fix with low-to-medium difficulty/risk, but well understood; may affect a few tests
  • Simple, well-defined task (new validation; make field read-only when; fix/improve CSS)
  • Research task time-boxed at about a day (what-if scenario; will this work)
  • Business meeting that requires two-to-six hours of prep/follow-up work

2pt story

  • Bug of medium difficulty/uncertainty, or several tests affected (unexplained JS errors; AJAX)
  • Understood task spanning more than one layer/screen/table (validation rule server & client side; repetitive but tricky; significant data load script changes)
  • Try a small prototype (performance improvement; what-if scenario)
  • Business meeting that requires both significant prep and follow-up work (6-to-12 hours)

3pt story

  • Bug fix of medium-high difficulty, significant uncertainty, or many tests affected (concurrency)
  • Very simple screen or feature (system functions; nightly jobs action)
  • Task spanning multiple layers/screens with one or two business rules to modify
  • Significant updating of existing documentation (UCP; user manual; tech document; workflow)
  • Multi-day research tasks (investigate one-to-two COBOL programs; new framework/tool)
  • Business meeting requiring 2-3 meetings or more than two days of research/analysis

5pt story

  • Bug fix of high difficulty, high uncertainty, and probable course changes (fix & refactor)
  • Design & implement a new feature, service, or framework (toast; branch-wide; JS test framework)
  • Research & implement a performance improvement (batch update via sproc; profiling)
  • Branch-wide documentation (style guide; user manual)
  • Series of 3-5 business meetings with analysis and design (pink sheet; COBOL workflow)

8pt story

  • Implement a new feature that affects most or all of the site (granular security; table logging)
  • Refactor a significant amount of code (funding section page; database/code renaming)
  • Implement a screen from CICS (estimate adj.)
  • Comprehensive documentation (site-wide user guide; technical specs)
  • Significant research that will take most of the iteration for one person

13pt story

  • Design & implement an automated process/service (prod>test data move)
  • Significant research task that could take all of an iteration for one or two people

20pt story (consider whether this story can be broken down into smaller stories)

  • Large research task that could take an entire iteration for two or three people
  • Very complicated refactoring
  • Create and configure a new project solution (n-tier, NuGet packages, tests, basic GUI)

Code Reviews in Visual Studio 2013

This is the process that I created for our team to do code reviews.

Working on a User Story

After finding the next story in priority order, assign it and its tasks to yourself in TFS, put your name on the printed card, and move the card over to the Working On section of our task board.

The story will now show up in the My Work tool in VS under Available Work Items (if you haven't yet customized your Available Work Items query, jump to that post first).


Drag the user story from Available Work Items to In Progress Work. Now it will be associated with your changeset so that it will be easy to find during a code review. When you check in your changes, this story will automatically be marked as resolved for you.

If your user story has nested bugs or other user stories, add those, too (you do not need to associate tasks). Then, when you check in, the child bugs and stories will also be marked resolved and the bugs will be re-assigned to whoever reported them.

Each night, suspend your changes before you leave for the day. This will create a shelveset for you in TFS so that if your computer dies your changes will not be lost. Be aware that after you resume a suspended shelveset, that shelveset is deleted from TFS, so if you want to keep a shelveset around, actually shelve it.

You can create more than one suspended work set, and each remembers all of your open windows, breakpoints, and other VS context settings, which makes it really easy to jump between tasks like code reviews, bug fixes, and different work items.

Requesting a Code Review

Before you request a code review, make sure to get latest, merge any changes, and run the tests to make sure they pass.

From the My Work tool, under In Progress Work, click the Request Review link. This will create a shelveset of your work in progress and the associated user story. We code review all features prior to checking them in.

If this is your first time requesting a review, you will need to add each team member individually via the drop-down list, but the next time you can just select the “Add Recent Reviewers” link. Enter a description for the review if you like and click the Submit Request button. You can see the shelveset this creates by going to File>Source Control>Find>Find Shelvesets.

The team will receive an email about the request, and it will appear in their My Work tool under Code Reviews>Incoming Requests.

At this point you can undo your changes and do someone else's code review, or grab the next story that needs to be worked on. You can review the status of your code review request by visiting the “My Code Reviews & Requests” list in the Code Reviews section of the My Work tool.

Remember to move the printed card on the task board over to the Code Review section.

Accepting and Completing a Code Review

It’s important for everyone on the team to participate in code reviews because it is good practice to critique others code and learn from it at the same time, it helps improve understanding of the system, and it helps the team so that some members don’t get stuck doing more than their share of reviews.

You can find reviews either in the email list, or better, in the My Work tool in Team Explorer under the Incoming Requests.


1. Suspend your current work so that VS is cleared.

2. Double-click on a code review in the Incoming Requests list.


1. If no one else is marked as Accepted you can accept the review.

a. Generally we only have one team member do the review. You do not need to decline code review requests; they will fall off your list once the initiator closes the review.

2. By clicking on the file name you can see the difference comparison. Within the comparison, you can highlight sections of code, rt-click, and add comments to the developer. You can also check the box on the right to keep track of which files you have completed.

a. If it is a long code review, you can even send comments back before you finish your review so that the developer can start working on them (not shown above).

3. You can add an overall comment on the code review. We sometimes use the format:

* Unit Tests/Coverage Complete?

* User Story COA Met?

* UCP Mockups/Text/Validations Updated?

* Refactoring/Rework Needed?

4. Make sure you always get the shelveset and test out the code in your local browser. Do a little regression testing yourself and make the QA team’s job easier!

If there are any C# changes, make sure to run all of the unit tests. Use Test>Analyze Code Coverage>All Tests to check for code coverage in new code (we try to write tests for any code we create or update).


1. Double-click on a method that is not fully covered.

2. See in red highlight the code that is skipped. Leave a comment on it if there are just a few places missed, or an overall comment on the code review if there are many places missed.

After you are finished reviewing the code select a response from the Send & Finish drop down in the code review:

  • Looks Good: go ahead and check this in
  • With Comments: minor changes or maybe updating a document like the UCP
  • Needs Work: missed a requirement, missed some tests, or something caused an error

This will send the requestor an email to let them know it’s done, and the entire team will see the comments you added in their email. It will take the code review off of your list of incoming requests, but it will not be removed from the rest of the team’s list until the requestor closes the review.

Finally, open up the user story and record your name and hours under the code review task.

Checking In

Only check in code that has been reviewed, unless it is a very minor tweak or you did pair programming while writing the code.

Suspend all pending work first, then open the completed code review from the “My Code Reviews & Requests” list in the Code Reviews area of the My Work tool.


1. Click the link to activate the change set so that you get all your changes into your workspace. Review any comments provided and make any changes.

a. If you have questions about the review, it’s probably best just to go talk to the reviewer at this point.

b. We don’t generally send a second review request after changes unless the requestor feels there were a lot of changes and they would like them to be reviewed. In this case, create a new code review just for the developer who originally reviewed your code.

2. Close the review. This will mark the user story as resolved and remove the code review request from the rest of the team's incoming requests.

Now that your review is closed, make sure to check in your changes on the Pending Changes tool. Also, move the printed card from the In Review area over to the Completed/Merged area on the task board.

How to Customize the Available Work Items Query in Visual Studio Team Explorer

In Team Explorer's Work tool, there is a section for Available Work Items that shows all the user stories, tasks, and bugs that are active and not being worked on. The default query may be cluttered with stories or tasks that aren't development related. You can change the query to show just your user stories.



1. Make sure the project is set to your intended project.

2. Set the query to Current Iteration, then click the Open Query link

3. Click on the Edit Query


1. Delete the state = Resolved line

2. Add a new line for Area Path Not Under Project Admin

3. Save this query and give it a name.


1. Use the drop down in available work items to select your query.

2. See the items assigned just to you below.

Outlook Email Rules Configuration

The code review process generates a lot of emails, so you may want to set up an Outlook rule to help manage them.


Create a rule that sends anything from your TFS email address with Code review in the subject to a dedicated code review folder.

Inside the folder, you can rt-click on the grid header and add Subject into the columns, then sort by Subject and check Show In Groups.

C# Unit Testing Guidelines

Unit Testing Guidelines

The guidelines in this post are based on recommendations from the Art of Unit Testing by Roy Osherove (2013) and the Testing on the Toilet blog by Google. You can fork these guidelines on GitHub, too.


It's important to have a common language when talking about testing strategies so that we can understand each other's preferences without confusion over what each of us means by, say, a unit test.

Unit of Work

Everything that can happen from invoking a public method until it returns after it has finished; this can include traversing multiple classes and methods and covering multiple scenarios.

Unit Test

Automated code that invokes a unit of work for one specific scenario and checks an assumption about the result of that unit. It is made readable by a good name that describes the scenario it tests.

Naming convention


  • Unit of Work - name of method or the description of the unit of work, such as "Login"
  • Scenario - the conditions being tested, such as "InvalidUser" or a description of the parameters being passed into the unit of work
  • Expected behavior - your expected result, such as "UserNotFoundMessage"

Use readability as your guide to the name; the test name should read like a sentence with no "ands" or "ors" in it.
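
As a quick illustration (this is my own sketch, assuming an MSTest project and a made-up LoginService class that is not part of the guidelines), the three parts come together like this:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical unit of work used only to illustrate the naming convention.
    public class LoginService
    {
        public string Login(string userName, string password)
        {
            // Pretend lookup: any unknown user produces the "user not found" message.
            return userName == "admin" && password == "secret" ? "Welcome" : "UserNotFoundMessage";
        }
    }

    [TestClass]
    public class LoginServiceTests
    {
        // UnitOfWork_Scenario_ExpectedBehavior
        [TestMethod]
        public void Login_InvalidUser_UserNotFoundMessage()
        {
            var service = new LoginService();               // arrange
            var result = service.Login("ghost", "oops");    // act: one specific scenario
            Assert.AreEqual("UserNotFoundMessage", result); // assert: one expected behavior
        }
    }

Reading the name aloud ("Login, invalid user, user not found message") is a quick check that the test covers one scenario and one expected behavior.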





Integration Test

While unit tests fake dependencies to test scenarios in the unit of work, an integration test uses real dependencies, covers many scenarios, or crosses layers in the test. Examples of integration tests include changing data in a database, accessing the file system, working with system time, checking that all controllers have a specific attribute, or using the actual service layer from a controller instead of faking it.

Integration tests are important, but they should be put into their own project so that they run only on check-in or manually, because they generally take longer to run and may require special setup.
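
For contrast, here is a minimal sketch of what I would call an integration test (the connection string, database name, and table are invented for illustration); it touches a real SQL Server database, so it belongs in that separate, slower-running project:

    using System.Data.SqlClient;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class CustomerDataIntegrationTests
    {
        // Real dependency: an actual database (this connection string is illustrative only).
        private const string ConnectionString =
            @"Data Source=(localdb)\v11.0;Initial Catalog=NorthwindDev;Integrated Security=True";

        [TestMethod]
        public void Customers_DevelopmentDatabase_HasSeedData()
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Customers", connection))
            {
                connection.Open();                        // crosses a process boundary - not a unit test
                var count = (int)command.ExecuteScalar();

                Assert.IsTrue(count > 0);                 // depends on seeded data - another integration tell
            }
        }
    }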


Stub

A substitute for a dependency in the system that is used only so that the dependency void is filled and the test compiles and runs. A stub is never asserted against - it can never make a test fail. A stub can return a test-specified value from its operations or throw an exception.


Mock

A substitute for a dependency in the system that knows whether or not it was called and is asserted against - it can make a test fail. It is used to make sure the unit of work actually called the expected dependency.


Fake

A generic term that can also be used as a verb to describe what a stub or a mock does; they fake the behavior of the dependency so that the real dependency does not need to be used.


Refactoring

Restructuring the design of your code. It should be done frequently - after writing several tests or completing a unit of work - and should usually be possible without breaking any of your tests, even when you refactor some logic into private methods.

Over Specifying

Unit tests that, through overuse of mocks, describe what should happen and how it should happen instead of testing that a scenario returns the expected outcome.

End Result Types of Unit Tests

Unit tests come in the following three varieties:

  • Value-based - check the value returned from a unit of work.
  • State-based - check for noticeable behavior changes after changing state
  • Interaction - check how a unit of work makes calls to another object


Value-based: the easiest test to write; just make sure the returned result is what you expect and ignore the rest.

State-based: if a method changes the state of a class's public property, you can test that pretty easily. If you need to test the state of a method-scope variable, it gets a little trickier. You can arrange a .DoInstead() on a service call to control what happens instead of that service call. Using this technique, it is possible to get a copy of a parameter into the scope of your unit test, like:

   myService.Arrange(x => x.Save(Arg.AnyInt)).DoInstead((int arg1) => { unitTestScopeVar = arg1; });

Interaction-based: interaction-based testing uses mocks that know how many times a method has been called (.OccursOnce(), .MustBeCalled()) or the order it was called in. This should be the last thing you turn to, and avoid it if possible, because it makes your code very hard to refactor: changing how anything works breaks tests simply because something is no longer done in the same number of steps, rather than because the end result is wrong.
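
As a hedged sketch of that last-resort case (the IOrderRepository and OrderService types here are invented, not taken from the original guidelines), an interaction test with JustMock only makes sense when the call itself is the end result, such as a pass-through service method:

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Telerik.JustMock;
    using Telerik.JustMock.Helpers;

    public interface IOrderRepository
    {
        void Save(int orderId);
    }

    // Hypothetical pass-through service: the interaction IS the end result here,
    // which is the one situation where an interaction test is the right choice.
    public class OrderService
    {
        private readonly IOrderRepository _repository;
        public OrderService(IOrderRepository repository) { _repository = repository; }
        public void Save(int orderId) { _repository.Save(orderId); }
    }

    [TestClass]
    public class OrderServiceTests
    {
        [TestMethod]
        public void Save_ValidOrderId_CallsRepositorySave()
        {
            // Arrange: the fake becomes a mock the moment we say it MustBeCalled
            var repository = Mock.Create<IOrderRepository>();
            repository.Arrange(x => x.Save(Arg.AnyInt)).MustBeCalled();
            var service = new OrderService(repository);

            // Act
            service.Save(42);

            // Assert: fails only if the arranged call never happened
            Mock.Assert(repository);
        }
    }

Note there is exactly one mock in the test, which keeps it aligned with the one-scenario-per-test guidance below.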


The JustMock framework calls all fakes mocks, even if they are stubs.

  • Mock.Create<IMyService>() is a stub of that service.
  • myService.Arrange(x => x.Save).Returns("Success") (or the similar Mock.Arrange) is a stub of x.Save
  • myService.Arrange(x => x.Save).DoInstead((int arg1) => { toTest = arg1;}) is a way to use a stub to check the state of an argument passed into the stubbed service
  • myService.Save().MustBeCalled() is a mock because it now knows it was called and can break the test if it is not called

Each test should only test one scenario, so avoid more than one mock per test. Any time a mock is being used you are doing an interaction test, and interaction testing should be your last option, reserved for when the interaction between objects is the end result (such as testing a service method that just passes the call to the repository).

Tests will always be more maintainable when you don't assert that an object was called. If more than 5% of your tests have mock objects, then you might be over specifying things. If tests specify too many expectations, then they become very fragile and break even when the overall functionality isn't broken; the extra specifications can make the test fail for the wrong reasons.

When to mock

  • Testing an event
  • Testing a scenario where there is not a state change or value returned, such as a pass-through method on the service layer, or a catch block that just logs or emails the error using an external dependency
  • Specify only one of the three end result types
  • Use non-strict fakes when you can so that tests will break less often for unexpected method calls
  • No private method exists without a reason; somewhere a public API calls into the private method and your test should cover the scenarios possible, which should also lead to 100% coverage of the private method without ever explicitly testing that method. If you test only the private method and it works, it does not mean the public API uses it correctly.

Google's "Testing on the Toilet" Blog

Google's testing team started posting articles on the doors of bathroom stalls as a way to get people thinking about good testing habits, hence the name of the blog. Here are some example posts:

Unit Testing Checklist

  • The test does not cross project layers or use real dependencies (file, database, system time)
  • The test does not invoke private methods
  • The test name is easy to read (MethodName_Scenario_ExpectedBehavior)
  • The test does not check interactions where value or state change checks could be used for full coverage
  • The test only checks one of: value result, state change, or interaction
  • The test does not use strict fakes where it would pass with loose fakes
  • The test does not fake more than it has to
  • The test tests behavior rather than following a one-test-to-one-method design
  • The test does not use a mock where it is not testing interaction
  • The test does not have more than one mock (used for interaction)
  • The test does not have flow control (switch/if/while)
  • The test does not test a third party library


  • The tests cover 100% of the code changed and cover common and edge-case behaviors
  • The tests don't repeat code that could be refactored into setup or factory methods

What do I have checked out in SharePoint (2013)?

1. Open SharePoint in your browser

2. Click on the ellipsis and select “Create View”

3. Choose “Standard View”

4. Enter a view name “Checked Out”

5. Check the box for “Checked Out To” in the Columns section

6. Sort by “Name (for use in forms)”

7. In the Filter, change it to only show when “Checked Out To”, “is equal to”, your name

8. Down in the Folders section, choose “Show all items without folders” – this will let you see what’s checked out across the entire library instead of just the current folder

For a view that shows what’s checked out by anyone, change step 7 above to be “Checked Out To”, “is not equal to”, and leave the text box blank.

How to create a Visual Studio 2013 Database Project in 10 Minutes!

Watch my Pluralsight Author Audition video to learn how to create a Visual Studio 2013 Database Project from an existing SQL Server database and populate it with data from that database.

After you’ve created your database project, you can publish it to your development environment for a consistent development database that reduces the risk of data integrity bugs sneaking in.

Improve Team Collaboration with Visual Studio 2013 Database Projects


Approximate transcript:

Have you ever been part of a team that has had problems coordinating changes to the database during development or while publishing your work to production? Wouldn't it be nice to have some reliable, consistent test data that you can use while developing and debugging a new feature?

Hi, I'm Rusty Divine, and in this course on improving team collaboration with Visual Studio 2013 database projects I am going to show you how to create a consistent data set for each member of your team and overcome database conflicts by creating a database project that can be published on your local machine at the click of a button. After watching this course you will be ready to create a new Visual Studio database project with test data so that you can develop your new features in isolation from any model or data changes by other team members.


On many software development teams, more than one developer makes changes to the database. A team might use a shared development database on a server and have worked out a system to handle breaking database changes and the buggy data that creep in. Some teams script each change they make to the database model and seed data, then update or rebuild the database to any version they need by running each script in order.


An option that has worked well for teams I have been part of is to use a database project to manage the model and test-data changes. Database projects work well when development of the solution will take months to years and a team of three or more will each be making changes to the database. The database project is a good option because it:

  • Gives developers a local database sandbox
  • Provides consistent data that can be wiped and reloaded when data integrity bugs creep in
  • Makes merging into your version control system easier than having to merge scripts that depend on the order they are run
  • Allows reuse on automatic builds to publish a consistent database that automated tests can be written against


Some potential drawbacks to database projects are:

  • Setting it up the first time takes time
  • On a new project where you are working to define the database and it is changing rapidly, it can feel like this takes you out of the develop-test flow
  • When moving to QA and Production you need to do a model comparison to generate change scripts and create any data motion scripts you need to change data in each environment

Let's take a look at what it will take to get a database project added to your solution.


We'll be using this Northwind database to represent a backup of your development database where you've taken the time to get just the data you need into it - the less data the better because it will take less time to publish that way.

I've created a database project using the default settings by rt-clicking on this database and selecting to create a new project. You can see it has brought in all of the tables, views, and other schema objects, but it did not script the data.

Here I've added a scripts folder and a post-deployment folder to organize the data load scripts. Now, there are several ways to get the data scripts generated, but the way I find works best for a set of data that isn't reaching into the hundreds of thousands of rows is to use SQL Management Studio's script generation; for larger data sets I would recommend SSMS tools (pop-up a link).

In management studio, I will right-click on the database and choose to generate scripts. I want to select just the tables and then choose to generate a separate file for each. In the advanced settings, I choose to not generate the use database commands and change the export from schema to data. Now I'll export it to the post-deployment folder we created in our project.

I've shown all the files here in the project and want to include our new scripts. Now I need to set the build action to None for these so that they aren't included in the publish directly, because I want to control the order they run in.

Next, I will add a post-deployment script that will execute each of these data loads in the correct order that does not conflict with their foreign key relationships.

The format for running these is to use this :r directive followed by the script location. The script uses SQLCMD syntax, so I will turn on SQLCMD mode.

Now, I'll paste in the rest of the tables. When I try to build, I see an error with the order details table. Looking at it, I see that I need to add some quotation marks around it, and now it builds fine.

Let's publish this database to see our data. Rt-click on the project and choose publish. In the advanced mode, I want to choose recreate the database every time. There are ways you can merge this data into an existing database, but I want to make sure I wipe out any data related bugs that may have crept in and start clean each time. I'll save the profile for later use, and then click on publish.

There was a problem publishing the database, so I'll click this link to view the results. Here, I can see the script it generated and I could copy/paste this into SQL management studio to troubleshoot, but looking at the messages, I see there was a problem with the self-referential key on the employee table. I know that's going to be a problem to load, so I'll just drop the key before I load the table, then re-add it after.

Now, let's publish from the profile we saved. You can see it published successfully, and if I go into management studio and refresh the databases, there is our new database and you can see it did bring over our data.



In this course we covered the benefits of having a consistent development data set for your team so that they develop and test their work in isolation.

We covered how a database project can

  • Help your medium to large team coordinate model and test-data changes
  • Provide an isolated sandbox to develop new features
  • Wipe out any data-related bugs that creep in during development

There are a lot of extra features that you can explore with database projects, and I'd encourage you to watch these other Pluralsight courses to learn more.

Some times we’ve failed with Agile

Tomorrow there's a lunch gathering for our local Agile community, and the topic is failure, led by Keil Wilson, an agile project manager who works with me at NDOR.

At this Luncheon, we'll focus on the value of failing fast and forward.  Bring a simple--or the most heinous--failure you've encountered in your Agile Transformation.  We'll then focus on how it was discovered and the good that came out of it.  We'll also rely on the collective wisdom of our Community to offer other suggestions.

We often talk about failure being an option on our software development team. For example, we did not burn down last iteration when two of our user stories didn’t get completed in time. At our retrospective we brought this up to talk about it. One of our customers was there and she didn’t care that we didn’t burn down, in fact, just the opposite – she thought we were too focused on it when it wasn’t a problem. We talked about value and commitment but in the end agreed that if we fail to burn down occasionally it is actually a sign that we are pushing ourselves appropriately. If we always burned down it would seem a little suspicious that maybe we were slacking off some, and if we never burned down it would be a loss-of-trust issue where the expectation would begin to be set that the burn down is irrelevant. In our case, we burn down 80% of the time or more I’d estimate, and our team is comfortable with that amount of failure.

For the luncheon tomorrow, though, I've thought of a few examples of when I've been part of projects and teams that have failed with Agile in the past:

  1. One customer brought us a requirements document they had created and put out to bid. We won that project and actually did a great job delivering on it, using Agile to guide us. Our failures were the poorly defined conditions of acceptance in the document they wrote, missing assumptions and constraints, and a fixed budget that didn't allow room for pivoting when they needed to; together these left the customer with a solution they weren't satisfied with in the end. Later, they paid another company to rewrite it, and that company is still working on it today, but at least they are charging time & materials.
  2. Another customer was a startup, and we pre-billed them each month with a definition of what we were going to complete that month – we worried their funds might dry up. We were able to get done what they needed, but in the later stages of the project their project sponsor (the money person) realized how much the project had cost, and their product owner (his brother) got convenient amnesia and pushed the blame onto us. Our failure there was not making sure the project sponsor was more involved as the project was being completed, and it ruined a relationship with that customer that could have been preserved with better project management.
  3. Some of my early experiences with Scrum and Agile took a light approach. I now realize we could have done much better by occasionally reflecting on how we could improve instead of making the same mistakes over and over.
  4. On the project I currently work on, we still have some things we could be doing better. For instance, our customer does not get very involved with prioritizing the backlog, and when they do, it seems like they aren't thinking as strategically as would be good for the project in the long run. The result is that we end up focusing on enhancements to existing features – moving fields around, swapping columns, changing shading, etc. – instead of taking on the new initiatives that need to be completed during this phase of the project. The silver lining is that our project manager specifically set a budget in this phase's project charter for enhancements, and as of this iteration they only have a few points left to spend before they need to make an official request for more. Even though we put this safeguard in place, it didn't prevent them from spending their entire enhancement budget first, including reversing some of the work they had asked for, before working on anything new.

I'm looking forward to tomorrow's meeting to see what the rest of the group has to say about their experiences. One thing's for sure: the restaurant we are meeting at has never failed to serve up some great burgers!