Rusty Divine

Live, Love, Learn, Teach

Agile Story Pointing Benchmarks

We use story points to estimate how much effort a given story will take. We have found it reduces the optimism inherent in estimating with hours because it encourages the team to compare stories to other stories instead of imagining how long the work will take. When we do estimate with hours, we typically think about the happy path, and really, that’s OK.

Consider a two-week iteration with 10 stories. Maybe six of those stories take the happy path and are completed pretty much as expected. Two stories turn out to be a lot easier and are completed faster than expected, but the other two take significantly longer than expected. The problem is that we don’t know which two those are at the beginning, so there’s no way to estimate accurately with hours. Instead, if your team is fairly consistent, you can use points to know that in a given iteration you can complete, say, 30 points of stories. A few of those stories will turn out harder than you expected, but that’s always the case. Everything evens out in the end, or at least evens out more consistently.

But to get this ball rolling, your team really needs a good common understanding of how to compare two stories and decide how much bigger one is than another. We use the series of sizes .5, 1, 2, 3, 5, 8, 13, 20, and we rarely let a story stay at 13 or 20 without breaking it down further, because at that size we’re getting into something we’re not sure someone can complete in one iteration. You can start your team off by choosing a very familiar story to be your 2-point benchmark, and another to be your 5-point benchmark. Then every story you consider can be compared to those two benchmarks, and your team can collectively decide where in the range the story fits. Typically, even non-developers can point development stories, and developers can point non-development stories, as long as they can fit the work into their mental picture of how much effort a story like that takes.

Our team has experimented with taking it one step further and creating descriptions for each of these sizes. This is still experimental, and the general feeling is that we would need to shorten this list by about half to make it more concise:

.5pt story

· Small bug fixes with no foreseeable risk and small JavaScript tweaks

· Small view updates; no tests affected (spelling; remove field; add label; move field)

· Business meeting that requires less than two hours of prep work

1pt story

· Bug fix with low-to-medium difficulty/risk, but well-understood, may affect a few tests

· Simple well-defined task (new validation; make field read-only when; fix/improve CSS)

· Research task time-boxed at about a day (what if scenario; will this work)

· Business meeting that requires two-to-six hours of prep/follow-up work

2pt story

· Bug of medium difficulty/uncertainty, or several tests affected (unexplained JS errors; AJAX)

· Understood task spanning >1 layers/screens/tables (validation rule server & client side; repetitive but tricky; significant data load script changes)

· Try small prototype (performance improvement; what if scenario)

· Business meeting that requires both significant prep and follow-up work (6-to-12 hours)

3pt story

· Bug fix of medium-high difficulty, significant uncertainty, or many tests affected (concurrency)

· Very simple screen or feature (system functions; nightly jobs action)

· Task spanning multiple layers/screens with one-or-two business rules to modify

· Significant updating of existing documentation (UCP; user manual; tech document; workflow)

· Multi-day research tasks (investigate one-to-two COBOL programs; new framework/tool)

· Business topic requiring 2-3 meetings or >2 days of research/analysis

5pt story

· Bug fix of high-difficulty, high uncertainty, and probable course changes (fix & refactor)

· Design & implement new feature, service, or framework (toast; branch-wide; JS test framework)

· Research & implement performance improvement (batch update via sproc; profiling)

· Documentation branch-wide (style guide; user manual)

· Series of 3-5 business meetings with analysis and design (pink sheet; COBOL workflow)

8pt story

· Implement a new feature that affects most/all of the site (granular security; table logging)

· Refactor a significant amount of code (funding section page, database/code renaming)

· Implement a screen from CICS (estimate adj.)

· Comprehensive documentation (site-wide user guide, technical specs)

· Significant research that will take most of the iteration for one person

13pt story

· Design & Implement an automated process/service (prod>test data move)

· Significant research task that could take all of an iteration for one or two people

20pt story (Consider if this story can be broken down into smaller stories)

· Large research task that could take an entire iteration for two-or-three people

· Very complicated refactoring

· Create and configure a new project solution (n-tier, NuGet packages, tests, basic GUI)

Code Reviews in Visual Studio 2013

This is the process that I created for our team to do code reviews.

Working on a User Story

After finding the next story in priority order, assign it and its tasks to yourself in TFS, put your name on the printed card, and move the card over to the Working On section of our task board.

The story will now show up on the My Work tool in VS under Available Work Items (If you haven’t yet customized your Available Work Items query, jump to that post first).

[Screenshots: the My Work tool in Team Explorer, showing Available Work Items and In Progress Work]

Drag the user story from the Available Work Items to the In Progress Work. Now it will be associated with your change set so that it will be easy to find during a code review. When you check in your changes, this story will automatically be marked as resolved for you.

If your user story has nested bugs or other user stories, add those, too (you do not need to associate tasks). Then, when you check in, the child bugs and stories will also be marked resolved, and the bugs will be re-assigned to whoever reported them.

Each night, suspend your changes before you leave for the day. This creates a shelveset for you in TFS so that if your computer dies your changes will not be lost. Be aware that after you resume suspended work, its shelveset is deleted from TFS, so if you want to keep a shelveset around, actually shelve it.

You can create more than one suspended work set, and each remembers all of your open windows, breakpoints, and other VS context settings, which makes it really easy to jump between tasks like code reviews, bug fixes, and different work items.

Requesting a Code Review

Before you request a code review, make sure to get latest, merge any changes, and run the tests to make sure they pass.

From the My Work tool under In Progress Work, click the Request Review link. This will create a shelveset of your work in progress and its associated user story. We code review all features prior to checking them in.

If this is your first time requesting a review, you will need to add each team member individually via the drop down list; the next time you can just select the “Add Recent Reviewers” link. Enter a description for the review if you like and click the Submit Request button. You can see the shelveset this creates by going to File>Source Control>Find>Find Shelvesets.

The team will receive an email about the request, and it will appear in their My Work tool under Code Reviews>Incoming Requests.

At this point you can undo your changes and do someone else’s code review, or grab the next story that needs to be worked on. You can review the status of your code review request by visiting the “My Code Reviews & Requests” list in the Code Reviews section of the My Work tool.

Remember to move the printed card on the task board over to the Code Review section.

Accepting and Completing a Code Review

It’s important for everyone on the team to participate in code reviews: it is good practice to critique others’ code and learn from it at the same time, it helps improve understanding of the system, and it keeps some members from getting stuck doing more than their share of reviews.

You can find reviews either in the email list, or better, in the My Work tool in Team Explorer under the Incoming Requests.

[Screenshot: the Incoming Requests list in the My Work tool]

1. Suspend your current work so that VS is cleared.

2. Double-click on a code review in the Incoming Requests list.

[Screenshot: the code review window]

1. If no one else is marked as Accepted, you can accept the review.

a. Generally we only have one team member do the review. You do not need to decline code review requests; they will fall off your list once the initiator closes the review.

2. By clicking on a file name you can see the difference comparison. Within the comparison, you can highlight sections of code, right-click, and add comments for the developer. You can also check the box on the right to keep track of which files you have reviewed.

a. If it is a long code review, you can even send comments back before you finish your review so that the developer can start working on them (not shown above).

3. You can add an overall comment on the code review. We sometimes use the format:

* Unit Tests/Coverage Complete?

* User Story COA Met?

* UCP Mockups/Text/Validations Updated?

* Refactoring/Rework Needed?

4. Make sure you always get the shelveset and test out the code in your local browser. Do a little regression testing yourself and make the QA team’s job easier!

If there are any C# changes, make sure to run all of the unit tests. Use Test>Analyze Code Coverage>All Tests to check for code coverage in new code (we try to write tests for any code we create or update).

[Screenshot: code coverage results]

1. Double-click on a method that is not fully covered.

2. The skipped code is highlighted in red. Leave a comment on it if there are just a few places missed, or an overall comment on the code review if there are many places missed.

After you are finished reviewing the code select a response from the Send & Finish drop down in the code review:

· Looks Good: go ahead and check this in

· With Comments: minor changes or maybe updating a document like UCP

· Needs Work: missed a requirement, missed some tests, something caused an error.

This will send the requestor an email to let them know it’s done, and the entire team will see the comments you added in their email. It will take the code review off of your list of incoming requests, but it will not be removed from the rest of the team’s list until the requestor closes the review.

Finally, open up the user story and record your name and hours under the code review task.

Checking In

Only check in code that has been reviewed, unless it is a very minor tweak or you did pair programming while writing the code.

Suspend all pending work first, then open the completed code review from the “My Code Reviews & Requests” list in the Code Reviews area of the My Work tool.

[Screenshot: a completed code review]

1. Click the link to activate the changeset so that you get all your changes into your workspace. Review any comments provided and make any changes.

a. If you have questions about the review, it’s probably best just to go talk to the reviewer at this point.

b. We don’t generally send a second review request after changes unless the requestor feels there were a lot of changes and they would like them to be reviewed. In this case, create a new code review just for the developer who originally reviewed your code.

2. Close the review. This will mark the user story as resolved and remove the code review request from the team’s list of incoming requests.

Now that your review is closed, make sure to check in your changes on the Pending Changes tool. Also, move the printed card from the In Review area over to the Completed/Merged area on the task board.

How to Customize the Available Work Items Query in Visual Studio Team Explorer

In Team Explorer’s Work tool, there is a section for Available Work Items that shows all the user stories, tasks, and bugs that are active and not being worked on. The default query may be overwhelmed by stories or tasks that aren’t development related. You can change the query to show just your user stories.

 

[Screenshot: the Work tool query settings]

1. Make sure the project is set to your intended project.

2. Set the query to Current Iteration, then click the Open Query link

3. Click on the Edit Query

[Screenshot: the query editor]

1. Delete the state = Resolved line

2. Add a new line for Area Path Not Under Project Admin

3. Save this query and give it a name.

[Screenshot: the Available Work Items query drop down]

1. Use the drop down in available work items to select your query.

2. See the items assigned just to you below.

Outlook Email Rules Configuration

The code review process generates a lot of emails, so you may want to set up an Outlook rule to help manage them.

[Screenshot: the Outlook rule configuration]

Create a rule as above that sends anything from your TFS email address with Code review in the subject to a special code review folder.

Inside the folder, you can right-click on the grid header and add Subject to the columns, then sort by Subject and check Show In Groups.

C# Unit Testing Guidelines


The guidelines in this post are based on recommendations from The Art of Unit Testing by Roy Osherove (2013) and the Testing on the Toilet blog by Google. You can fork these on GitHub, too.

Definitions

It's important to have a common language when talking about testing strategies so that we can understand each other's preferences without confusion over what each person means by a unit test.

Unit of Work

Everything that happens from invoking a public method until it returns; a unit of work can traverse multiple classes and methods and cover multiple scenarios.

Unit Test

Automated code that invokes a unit of work for one specific scenario and checks an assumption about the result of that unit. It's readable by having a good name that describes the scenario it tests.

Naming convention

UnitOfWork_Scenario_ExpectedBehavior

  • Unit of Work - name of method or the description of the unit of work, such as "Login"
  • Scenario - the conditions being tested, such as "InvalidUser" or a description of the parameters being passed into the unit of work
  • Expected behavior - your expected result, such as "UserNotFoundMessage"

Use readability as your guide to the name; the test name should read like a sentence with no ands and ors in it.

Examples

Login_InvalidUser_UserNotFoundMessage

UpdateDisplayOrder_MoveFirst_MovesToFirstPosition

ValidateWorkPhaseSectionEstimateDetails_WithErrors_ReturnsSuccessIsFalse

Integration Test

While unit tests fake dependencies to test scenarios in the unit of work, an integration test uses real dependencies, covers many scenarios, or crosses layers in the test. Examples of integration tests include changing data in a database, accessing the file system, working with system time, checking that all controllers have a specific attribute, or using the actual service layer from a controller instead of faking it.

Integration tests are important, but they should be put into their own project so that they run only on check-in or manually, because they generally take longer to run and may require special setup.
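
As a sketch of one of those examples, here is a reflection-based test that checks every MVC controller for a specific attribute; HomeController and the [Authorize] choice are placeholders for your own project's conventions:

using System.Linq;
using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ControllerConventionTests
{
    [TestMethod]
    public void AllControllers_Always_HaveAuthorizeAttribute()
    {
        // Scan the web assembly for concrete controller types
        var controllerTypes = typeof(HomeController).Assembly.GetTypes()
            .Where(t => typeof(Controller).IsAssignableFrom(t) && !t.IsAbstract);

        foreach (var type in controllerTypes)
        {
            Assert.IsTrue(
                type.GetCustomAttributes(typeof(AuthorizeAttribute), true).Any(),
                type.Name + " is missing [Authorize]");
        }
    }
}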

Stub

A substitute for a dependency in the system that is used only so that the dependency void is filled and the test runs without a compile error. A stub is never asserted against - it can never make a test fail. A stub can return a test-specified value from its operations or throw an exception.

Mock

A substitute for a dependency in the system that knows whether or not it was called and is asserted against - it can make a test fail. It is used to make sure the unit of work actually called the expected dependency.

Fake

A generic term that can be used as a verb to describe what a stub or a mock does; they fake the behavior of the dependency so that the real dependency does not need to be used.
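
To illustrate the difference between a stub and a mock, here is a quick sketch using the JustMock helper syntax that appears later in this post; the IMailer and WelcomeService types are invented for illustration:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Telerik.JustMock;
using Telerik.JustMock.Helpers;

public interface IMailer
{
    bool Send(string message);
}

public class WelcomeService
{
    private readonly IMailer _mailer;
    public WelcomeService(IMailer mailer) { _mailer = mailer; }
    public void Greet() { _mailer.Send("Welcome!"); }
}

[TestClass]
public class WelcomeServiceTests
{
    [TestMethod]
    public void Greet_NewUser_SendsWelcomeMessage()
    {
        // As a stub, this fake only fills the IMailer dependency and supplies a
        // canned return value; chaining MustBeCalled() is what turns it into a
        // mock that can fail the test.
        var mailer = Mock.Create<IMailer>();
        mailer.Arrange(x => x.Send("Welcome!")).Returns(true).MustBeCalled();

        new WelcomeService(mailer).Greet();

        Mock.Assert(mailer); // fails the test if Send was never called
    }
}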

Refactoring

Restructuring the design of your code should be done frequently - after writing several tests or completing a unit of work - and should usually be possible without breaking any of your tests, even when you refactor some logic into private methods.

Over Specifying

Unit tests that describe what should happen and how, through overuse of mocks, instead of testing that a scenario returns the expected outcome.

End Result Types of Unit Tests

Unit tests come in the following three varieties:

  • Value-based - check the value returned from a unit of work
  • State-based - check for noticeable behavior changes after changing state
  • Interaction-based - check how a unit of work makes calls to another object

 

Value-based: The easiest test to complete; just make sure the return result is what you expect and ignore the rest.

State-based: If a method changes the state of a class's public property, you can test that pretty easily. If you need to test the state of a method-scope variable, it gets a little trickier. You can arrange a .DoInstead() on a service call to control what happens in place of that call. Using this technique, it's possible to get a copy of a parameter into the scope of your unit test, like:

   myService.Arrange(x => x.Save(Arg.AnyInt)).DoInstead((int arg1) => { unitTestScopeVar = arg1; });

Interaction-based: Interaction-based testing uses mocks that know how many times a method was called (.OccursOnce(), .MustBeCalled()) or the order in which it was called. This is the last thing you should turn to for testing, and avoid it if possible: it makes your code very hard to refactor, because tests break when the implementation takes a different number of steps rather than because the end result is wrong.
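
To make the first two varieties concrete, here is a minimal MSTest sketch; the Invoice class and its members are invented for illustration:

using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Invoice
{
    private readonly List<decimal> _lineItems = new List<decimal>();
    public bool IsDirty { get; private set; }

    public void AddLineItem(decimal amount)
    {
        _lineItems.Add(amount);
        IsDirty = true;
    }

    public decimal CalculateTotal()
    {
        return _lineItems.Sum();
    }
}

[TestClass]
public class InvoiceTests
{
    // Value-based: assert on the value the unit of work returns
    [TestMethod]
    public void CalculateTotal_TwoLineItems_ReturnsSum()
    {
        var invoice = new Invoice();
        invoice.AddLineItem(10m);
        invoice.AddLineItem(15m);

        Assert.AreEqual(25m, invoice.CalculateTotal());
    }

    // State-based: assert on a noticeable state change after acting
    [TestMethod]
    public void AddLineItem_NewInvoice_MarksInvoiceDirty()
    {
        var invoice = new Invoice();

        invoice.AddLineItem(10m);

        Assert.IsTrue(invoice.IsDirty);
    }
}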

Mocking

The JustMock framework calls all fakes mocks, even if they are stubs.

  • Mock.Create<TService>() is a stub of that service.
  • myService.Arrange(x => x.Save()).Returns("Success") (or the similar Mock.Arrange) is a stub of x.Save
  • myService.Arrange(x => x.Save(Arg.AnyInt)).DoInstead((int arg1) => { toTest = arg1; }) is a way to use a stub to check the state of an argument passed into the stubbed service
  • myService.Arrange(x => x.Save()).MustBeCalled() is a mock because it now knows whether it was called and can fail the test if it is not called

Each test should test only one scenario, so avoid more than one mock per test. Any time a mock is being used you are doing an interaction test, and interaction testing should be your last option, reserved for when the interaction between objects is the end result (such as a service method that just passes the call through to the repository).

Tests will always be more maintainable when you don't assert that an object was called. If more than 5% of your tests have mock objects, you might be over-specifying things. If tests specify too many expectations, they become very fragile and break even when the overall functionality isn't broken; the extra specifications can make a test fail for the wrong reasons.

When to mock

  • Testing an event
  • Testing a scenario where there is no state change or value returned, such as a pass-through method on the service layer, or a catch block that just logs or emails the error using an external dependency (see the sketch below)
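
Here is what that pass-through case might look like; IRepository and CustomerService are hypothetical types, using the same JustMock helper syntax as above:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Telerik.JustMock;
using Telerik.JustMock.Helpers;

public interface IRepository
{
    void Delete(int id);
}

public class CustomerService
{
    private readonly IRepository _repository;
    public CustomerService(IRepository repository) { _repository = repository; }
    public void Delete(int id) { _repository.Delete(id); } // pure pass-through
}

[TestClass]
public class CustomerServiceTests
{
    // Interaction test: no value or state to check, so the call itself is the result
    [TestMethod]
    public void Delete_AnyCustomer_CallsRepositoryDelete()
    {
        var repository = Mock.Create<IRepository>();
        repository.Arrange(x => x.Delete(42)).MustBeCalled();

        new CustomerService(repository).Delete(42);

        Mock.Assert(repository);
    }
}
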
Tips
  • Specify only one of the three end result types
  • Use non-strict fakes when you can so that tests will break less often for unexpected method calls
  • No private method exists without a reason; somewhere a public API calls into the private method, and your tests should cover the possible scenarios through that API, which should also lead to 100% coverage of the private method without ever testing it explicitly (see the sketch below). If you test only the private method and it works, it does not mean the public API uses it correctly.
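
A short sketch of that last tip; PriceCalculator and its tax rule are invented for illustration:

using Microsoft.VisualStudio.TestTools.UnitTesting;

public class PriceCalculator
{
    public decimal Total(decimal subtotal, string state)
    {
        return subtotal + SalesTax(subtotal, state);
    }

    // Never tested directly; the Total_* scenarios below cover both branches
    private decimal SalesTax(decimal subtotal, string state)
    {
        return state == "NE" ? subtotal * 0.055m : 0m;
    }
}

[TestClass]
public class PriceCalculatorTests
{
    [TestMethod]
    public void Total_NebraskaOrder_AddsSalesTax()
    {
        Assert.AreEqual(105.5m, new PriceCalculator().Total(100m, "NE"));
    }

    [TestMethod]
    public void Total_OutOfStateOrder_AddsNoTax()
    {
        Assert.AreEqual(100m, new PriceCalculator().Total(100m, "TX"));
    }
}
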
Google's "Testing on the Toilet" Blog

Google's testing team started posting articles on the doors of bathroom stalls as a way to get people thinking about good testing habits, hence the name of the blog.


Unit Testing Checklist

  • The test does not cross project layers or use real dependencies (file, database, system time)
  • The test does not invoke private methods
  • The test name is easy to read (MethodName_Scenario_ExpectedBehavior)
  • The test does not check interactions where value or state change checks could be used for full coverage
  • The test only checks one of: value result, state change, or interaction
  • The test does not use strict fakes where it would pass with loose fakes
  • The test does not fake more than it has to
  • The test tests behavior rather than following a one-test-per-method design
  • The test does not use a mock where it is not testing interaction
  • The test does not have more than one mock (used for interaction)
  • The test does not have flow control (switch/if/while)
  • The test does not test a third party library

Overall

  • The tests cover 100% of the code changed and cover common and edge-case behaviors
  • The tests don't repeat code that could be refactored into setup or factory methods

How to create a Visual Studio 2013 Database Project in 10 Minutes!

Watch my Pluralsight Author Audition video to learn how to create a Visual Studio 2013 Database Project from an existing SQL Server database and populate it with data from that database.

After you’ve created your database project, you can publish it to your development environment for a consistent development database that reduces the risk of data integrity bugs sneaking in.

Improve Team Collaboration with Visual Studio 2013 Database Projects

 

Approximate transcript:

Have you ever been part of a team who has had problems coordinating changes to the database during development or while publishing your work to production? Wouldn't it be nice to have some reliable and consistent test data that you can use while developing and debugging a new feature?

Hi, I'm Rusty Divine, and in this course on improving team collaboration with Visual Studio 2013 database projects I am going to show you how to create a consistent data set for each member of your team and overcome database conflicts by creating a database project that can be published to your local machine at the click of a button. After watching this course you will be ready to create a new Visual Studio database project with test data so that you can develop your new features in isolation from any model or data changes by other team members.

************

On many software development teams, more than one developer makes changes to the database. A team might use a shared development database on a server and work out a system to handle breaking database changes and the buggy data that creep in. Some teams script each change they make to the database model and seed data, then update or rebuild the database to any version they need by running each script in order.

************

An option that has worked well for teams I have been part of is to use a database project to manage the model and test-data changes. Database projects work well when development of the solution will take months to years and a team of three or more will each be making changes to the database. The database project is a good option because it:

  • Gives developers a local database sandbox
  • Provides consistent data that can be wiped and reloaded when data integrity bugs creep in
  • Makes merging into your version control system easier than merging scripts that depend on the order they are run
  • Allows reuse on automatic builds to publish a consistent database that automated tests can be written against

***********

Some potential drawbacks to database projects are:

  • Setting it up the first time takes time
  • On a new project where you are working to define the database and it is changing rapidly, it can feel like this takes you out of the develop-test flow
  • When moving to QA and Production you need to do a model comparison to generate change scripts and create any data motion scripts you need to change data in each environment

Let's take a look at what it will take to get a database project added to your solution.

***********

We'll be using this Northwind database to represent a backup of your development database where you've taken the time to get just the data you need into it - the less data the better because it will take less time to publish that way.

I've created a database project using the default settings by right-clicking on this database and selecting the option to create a new project. You can see it has brought in all of the tables, views, and other schema, but it did not script the data.

Here I've added a scripts folder and a post-deployment folder to organize the data load scripts. Now, there are several ways to get the data scripts generated, but the way I find works best for a set of data that isn't reaching into the hundreds of thousands of rows is to use SQL Management Studio's script generation; for larger data sets I would recommend SSMS tools (pop-up a link).

In management studio, I will right-click on the database and choose to generate scripts. I want to select just the tables and then choose to generate a separate file for each. In the advanced settings, I choose to not generate the use database commands and change the export from schema to data. Now I'll export it to the post-deployment folder we created in our project.

I've shown all the files in the project here and want to include our new scripts. Now I need to set the build action to None for these so that they aren't included in the publish, because I want to control the order in which they run.

Next, I will add a post-deployment script that will execute each of these data loads in the correct order that does not conflict with their foreign key relationships.

The format for running these is the SQLCMD :r directive followed by the script location. The script uses SQLCMD syntax, so I will turn on SQLCMD mode.

Now, I'll paste in the rest of the tables. When I try to build, I see an error with the order details table. Looking at it, I see that I need to add some quotation marks around it, and now it builds fine.

Let's publish this database to see our data. Right-click on the project and choose publish. In the advanced mode, I want to choose to recreate the database every time. There are ways you can merge this data into an existing database, but I want to make sure I wipe out any data-related bugs that may have crept in and start clean each time. I'll save the profile for later use, and then click on publish.

There was a problem publishing the database, so I'll click this link to view the results. Here, I can see the script it generated and I could copy/paste this into SQL management studio to troubleshoot, but looking at the messages, I see there was a problem with the self-referential key on the employee table. I know that's going to be a problem to load, so I'll just drop the key before I load the table, then re-add it after.

Now, let's publish from the profile we saved. You can see it published successfully, and if I go into management studio and refresh the databases, there is our new database and you can see it did bring over our data.

*************

Summary

In this course we covered the benefits of having a consistent development data set for your team so that they develop and test their work in isolation.

We covered how a database project can

  • Help your medium to large team coordinate model and test-data changes
  • Provide an isolated sandbox to develop new features
  • Wipe out any data-related bugs that creep in during development

There are a lot of extra features that you can explore with database projects, and I'd encourage you to watch these other Pluralsight courses to learn more.

Some Highlights from HDC14

When I go to a conference I am always re-energized afterward from having learned about tools that I’d love to try, techniques others are using, and finding common ground with other professionals I wouldn’t otherwise get to interact with.

This week I went to, and spoke at, the 2014 Heartland Developers Conference in Omaha, NE. I was impressed with this year’s selection of talks – more of them seemed really well designed than in years past.

Jack Skeel’s Master Agile and Write More Code

The highlight for me was a keynote delivered by Jack Skeels of Agency Agile titled Master Agile and Write More Code (slides). Jack talked about how agile processes may not actually improve productivity, especially for shorter projects and projects that have constraints of budget and schedule. He made some points about how agile can work in a constrained environment, which all really hit home for me and my experience in custom software development through various consulting firms.

  • Small scope is better
  • Direct communication is critical
  • Learning constantly via process improvements is important

Those points tied in nicely with my talk, From Idea to Core Feature Set, which covered how to get started on the right foot on a project that may be constrained by budget and schedule.

He also emphasized the importance of flow – keeping interruptions to a minimum – so that a team can be most productive. I was excited to hear about how important he thought this was because I’m working on a side project that deals directly in this problem space! Now that HDC is over, I intend to turn back to that and bring something out by early next year.

One of the books he highly recommended was Practices for Scaling Lean and Agile Development; he said he owned two copies, and both were dog-eared, bookmarked, and well-worn. He also mentioned Thinking, Fast and Slow and his Amazon bookshelf.

John Wirtz’s Stakeholders and Persona’s are for Wusses – Everyone go Meet Your Real Users

John’s talk on the importance of integrating with your users was enlightening. I had met John for coffee one time to talk about how his company Hudl manages their teams and their agile processes, so I was really looking forward to hearing what he had to say and I wasn’t disappointed.

John told us a bit about what Hudl does and how they modeled their teams after Spotify’s tribe-guild-squad model that scales agile teams very well.

One technique Hudl uses to get raw feedback is what they call the Noob Gut-Punch – they sit with a user who is pretty new to their software and ask them to navigate around and explain what each button and menu means to them. John played a brief clip of one of these interviews, and it truly was hard to listen to how confused the user was! I can only imagine being the developer responsible for creating a pretty decent interface, only to watch someone struggle with it that much. The discomfort of that moment drives the developer to think like someone new to the system and gives them a real user to point to when debating how features should be implemented so that a newcomer can understand them.

He also said that they have their developers question customers in a 5-whys style, drilling down into what the customer is saying until they find the root of the problem.

One other point he made was that they try to hire developers with natural user empathy. If someone really cares about finding out how users are using software, they will feel welcome in Hudl’s culture.

He mentioned tools such as: WuFoo, and UserVoice.

Miscellaneous Highlights

  • Pete Brown talked about the Intel Galileo, an Arduino-type board that runs Windows! He highly recommended everyone watch The IT Crowd and get littleBits for their kids or themselves.
  • Bill Fink showed us the Kinect v2 and said it will be priced at $199 and be dramatically better than v1!
  • Andy Ochsner recommended the ELK stack for log analysis, and showed how awesome Kibana is at charting log data – suggested you could do some cool things with public data.
  • Dave Royce sat next to me at one session and told me his team uses Liquid Planner, Confluence, and Box to manage its custom software projects.
  • Paul Oliver gave a spectacularly funny talk on Git (slides) where he showed us how to beat-box, too. He recommended an interactive tool to help learn Git, SourceTree as a beginner-friendly Git client, then GitExtensions for advanced users, and said not to use the VS IDE’s Git integration on a team because merge conflicts are a pain. He gave out a few copies of the Git Pocket Guide and had some great things to say about it.

Interview

Also, I got a chance to chat with Jim Collison.

How to Enforce Explicit TypeScript Types in Visual Studio 2013

TypeScript has a compiler flag, --noImplicitAny, to make sure you have explicitly typed all of your variables, even if you type them as “any”.

//With --noImplicitAny flag set, this is OK
class Student {
    fullname: string;
    constructor(public firstname : any, public middleinitial : string, 
        public lastname : string) {
        this.fullname = firstname + " " + middleinitial + " " + lastname;
    }
}

//With --noImplicitAny flag set, this will throw a compile error:

class Student {
    fullname: string;
    constructor(public firstname, public middleinitial, public lastname) {
        this.fullname = firstname + " " + middleinitial + " " + lastname;
    }
}

In order to set this flag in Visual Studio 2013, edit your project’s properties and then, on the TypeScript Build page, uncheck “Allow implicit ‘any’ types”.

[Screenshot: the TypeScript Build page of the project properties]

With the check box unchecked, your compiler will catch any instance where you did not declare the variable type in your TypeScript.

SQL Server Profiler Templates and How To Isolate My Database Calls from Others on my Team

Sometimes a time-saving feature is right under your nose for years. One example for me was the ability to save my SQL Server Profiler configuration and set it as the default template. I hope you find it useful!

Selecting a Base Template

After opening Profiler and connecting to a database, a “Trace Properties” window will present itself for entering the trace details. To make a custom template, select any of the templates from the “Use the template” drop down and give it a Trace name like “{My Database} Standard Trace”.

When I run Profiler, I am almost always checking for excessive database calls (especially n+1 selects) or troubleshooting T-SQL or stored procedure calls. Sometimes I’m running the trace on a development server that has multiple databases or users, and I want to isolate it to just my calls. For me, “TSQL_Duration” is a good starting point for a standard template.

[Screenshot: the Trace Properties window]

Configuring Events and Filters on the Template

Next, click on the Events Selection tab to configure events to listen for:

[Screenshot: the Events Selection tab]

Be sure to check the “Show all columns” checkbox. This is important because if the column isn’t visible in this events grid, then you can’t filter by it in the Column Filters tool.

Next, click on the Column Filters button and filter out any events from other databases on this database server (use the percent sign % as a wildcard here):

[Screenshot: the Column Filters dialog]

Occasionally I will also change the login name in my app’s connection string so that I can isolate just my connections by adding a filter on the “LoginName” column.

Click the Run button to try it out. To make changes to the columns and filters, be sure to stop the trace first, then click on the properties icon in the tool bar.

Saving and Exporting a Template

Once satisfied with the trace template, select File > Save As > Trace Template and enter the template name like “{My Database} Standard Trace”. The next time you start a trace, it will be in the drop down list of templates. You can also select File > Templates > Export Template to save it to a file and share it with your team.

Setting a Template to the Default Template

Setting the default template will save a few clicks each time you use Profiler. Select File > Templates > Edit Template. Choose the template from the drop down list, and then check “Use as default template..”

[Screenshot: the Edit Template dialog]

Final Thoughts and Other Options

A good code review should also check out the traffic hitting the database for the code under review. It is so easy to stop coding once something “works” and not consider how it will scale. Checking in with profiler is especially important if your project is using an ORM with lazy loading turned on.
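
As a reminder of what that traffic can look like, here is the classic n+1 shape with Entity Framework-style lazy loading; the Customer/Order model and ShopContext are hypothetical:

using System;
using System.Data.Entity;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public virtual Customer Customer { get; set; } // virtual enables lazy loading
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class OrderReport
{
    public static void Print(ShopContext db)
    {
        // n+1: one query for the orders, plus one more query per order,
        // because each order.Customer access lazy-loads that customer's row
        foreach (var order in db.Orders.ToList())
        {
            Console.WriteLine(order.Customer.Name);
        }

        // One query instead: eager-load Customer up front
        foreach (var order in db.Orders.Include(o => o.Customer).ToList())
        {
            Console.WriteLine(order.Customer.Name);
        }
    }
}

In Profiler, the first loop shows up as a burst of near-identical SELECTs against the Customer table - exactly the pattern worth flagging in a review.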

I would not recommend running SQL profiler on a production database because it takes a significant amount of resources. If you do need to do something like this, consider whether there is another tool that would be safer.

I would also encourage you to consider adding tools to your web project that surface its database traffic while you develop.

Remember to set the Target Framework

This is the second time in a few weeks that I have had a wtf? moment caused by not setting the Target Framework for a new project to the correct version.

Today it was the EmailAddress data annotation missing from the System.ComponentModel.DataAnnotations reference. I know the EmailAddress annotation should be part of that library, and my reference showed the correct library version (4.0.30319), but EmailAddress just sat there with that red squiggly line mocking me.

Until I remembered to update my Target Framework from .NET Framework 4.0 to .NET Framework 4.5.
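
For reference, this is the kind of code that triggers it; the model class is invented, and [EmailAddress] only exists in System.ComponentModel.DataAnnotations as of .NET 4.5:

using System.ComponentModel.DataAnnotations;

public class ContactModel
{
    // Resolves when targeting .NET 4.5+; on a 4.0 target this attribute does not exist
    [EmailAddress]
    public string Email { get; set; }
}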

[Screenshot: the Target Framework setting in project properties]

OWASP - Lincoln

Interested in learning more about making your software secure?

Drop me a line if you’d like to be invited to an OWASP meeting in Lincoln early next year.

OWASP is a global community that drives the visibility and evolution of the safety and security of the world’s software.

Rob Temple leads the Omaha chapter of OWASP, and we would like to explore the interest level in Lincoln by coordinating a lunch-time meeting early in 2014.

Some example meeting topics from the Omaha chapter:

• Thu Jun 6, 2013 - Web Application Security - So many tools, so little time
• Thu Sep 12, 2013 - The OWASP Way: Understanding the OWASP Vision and the Top Ten
• Thu Dec 5, 2013 - Advanced Mobile Penetration Testing