
Still Running TFS 2010? It’s Aging Out of Support Next Month. Polaris Solutions Can Help You Upgrade Quickly

by Angela 4. June 2015 12:04

You heard me correctly: mainstream support for TFS 2010 ends on July 14th, less than six weeks from today! So if you’re thinking “it still WORKS, why should I upgrade?”, consider these points…

  • Any issues arising with your server will NOT be patched or serviced by Microsoft support, and it will be harder and harder to find experienced people to work on it (well, who WANT to work on it)
  • Your infrastructure team may be champing at the bit to stop supporting the old operating systems and SQL Server versions that TFS is running on
  • You’re missing out on some amazing new capabilities that it would take me hours to cover and that I promise will revolutionize the way you develop and deliver software
  • You attract great new talent by offering robust and modern development environments, trust me on this
  • I can tell you from a LOT of personal experience that the longer you wait to upgrade, the harder and more time-consuming it is!

The good news is that you may qualify for up to $5,000 worth of free services to help you plan and prepare for your upgrade through the Microsoft Developer Tools Deployment Planning Services (DTDPS) program! Wondering what that is? Below is a quick FAQ that I created to explain the program:

Now what exactly IS DTDPS? Well, first of all, it’s a Microsoft offering, so expect MANY acronyms to follow. DTDPS stands for Developer Tools Deployment Planning Services. Specifically, these services are meant to be used with the Microsoft Visual Studio ALM platform - Team Foundation Server, Visual Studio, and Microsoft Test Manager (TFS, VS, and MTM for good measure).

So what does this really do for me? While most people are already very familiar with Visual Studio from a .NET development perspective, many people who own the other tools within the TFS platform are not taking full advantage of them. DTDPS is the solution to this problem, connecting customers with the right partners to make sure they are getting the full value of their ALM investment. Software that sits on the shelf is a huge waste of money, and from Microsoft’s perspective it’s something you’re not likely to buy again, so it is of course in their interest to offer such a program.

What kinds of services are included in DTDPS? Currently there are four DTDPS offerings available: a TFS deployment planning assessment, a Visual Studio quality tools assessment, a Visual Studio Agile deployment assessment, and a Visual Studio DevOps deployment assessment. You’ll notice a theme here: planning. These engagements are not meant to be used to implement the tools. Instead, they are short, fixed-length (3- or 5-day) engagements for gathering data and analyzing your current environment and needs so that we can help you build a plan for implementing and adopting Visual Studio and TFS ALM tooling. It’s a great kickstart and will drastically accelerate your ALM initiatives.

But what if I don’t need one of those services, but need other assistance with TFS? Well, it depends. I know, I know, typical consulting answer. These programs can be expanded upon to assist customers with other ALM-related concerns, so drop me a line at the email address below, and I’ll be happy to discuss it with you in more detail.

Who delivers the engagement? DTDPS is a program delivered through certified and experienced ALM partners like Polaris Solutions to help customers with SA (Software Assurance) benefits take full advantage of the tools they own. We have delivered dozens of these engagements over the past few years, and every customer we have worked with has been extremely happy with the valuable roadmaps we delivered. You will benefit from a wealth of relevant experience and proven ALM practices that only comes from having deployed and leveraged the tools in a large number of different environments and business verticals.

OK, I’m intrigued, but how expensive is it? It is FREE. Seriously, and absolutely. This benefit is available to customers who purchase Microsoft products with SA; think of it as a rewards program. In fact, you may have DTDPS credits without knowing it! Many of the customers I work with did not know they had DTDPS credits available until I turned them on to the program.

I want in! How do I sign up? Start at the DTDPS site, where you can peruse the various services available and see which ones are right for you and your organization. Then check out the DTDPS QuickStart guide, which walks you through the steps of accessing your benefits. Then just pick a partner to work with, like us, and you’re on your way to a better way of doing ALM!

 

If you are interested in learning more about DTDPS, or if you would like to find out more about getting a free quick assessment of the effort required to upgrade and the benefits your team would enjoy, please contact me at Angela@PolarisSolutions.com. And if you know anyone still using an older version of TFS (anyone running TFS 2013 or earlier qualifies), help them out and point them to this blog!


Say Hello to Chicago’s Newest ALM MVP

by Angela 3. October 2013 20:35

I’m totally stoked to be the latest Chicagoan to be named an ALM MVP. There are currently only 114 ALM MVPs worldwide (that I can see on the site, anyway), and I am proud to be counted amongst these awesome folks. Sadly, the site is not quite updated, so you won’t see yours truly listed just yet.

Wait, “what the heck is an ALM MVP,” you say? I know, that is a lot of acronyms. In case you’re not hip to Microsoft lingo, that’s an Application Lifecycle Management Most Valuable Professional. This essentially means that in the areas of ALM (TFS, Visual Studio, Microsoft Test Manager, SDLC, etc.), I’ve made significant enough contributions to the community at large to get some serious props. It’s been a fun ride, and I certainly don’t plan to slow down :)

This is not to say I know EVERYTHING there is to know on the topic of ALM; oh, how I wish there were enough hours in the day. But on any given day you’re likely to find me installing/upgrading/customizing TFS, scouring MSDN forums, leading a class through the ropes of agile development, or perhaps giving a talk at a local user group on adopting a new ALM strategy in the real world. I’m definitely passionate about what I do.

Anyway, that’s it for now! Just a little update on the latest excitement in my professional life. Hope to catch you at a conference or user group near you soon! And don’t forget to stop by the Chicago ALM User Group sometime. We will be posting details on our October meeting soon!

 

And because I’m always striving to do things my mom can brag about, here is a picture of me being all giddy about my award :)

[Photo: me being all giddy about my award]

Tags:

ALM | Application Lifecycle Management | VS 2013 | VS 2012 | VS 2010 | Visual Studio 2013 | Visual Studio 2012 | Visual Studio | TFS Upgrade | TFS 2013 | TFS 2012 | TFS Administration | TFS 2010 | TFS 2008 | TFS | SDLC | Process Methodology | MSDN


DTDPS, What It Is and Why You’ll LOVE it

by Angela 19. July 2013 19:18

It sounds like an STD, I know, but I promise it’s not. And after you’ve given your customers a DTDPS, they will thank you for it :) So hopefully I’ve intrigued you enough to read a bit more about this mysterious program. I’ve created a short FAQ to walk you through it:

Now what exactly IS DTDPS? Well, first of all, it’s a Microsoft offering, so expect MANY acronyms to follow. DTDPS stands for Developer Tools Deployment Planning Services. Specifically, these services are meant to be used with the Microsoft Visual Studio ALM platform - Team Foundation Server, Visual Studio, and Microsoft Test Manager (TFS, VS, and MTM for good measure).

So what does this really do for me? While most people are already very familiar with Visual Studio from a .NET development perspective, many people who own the other tools within the TFS platform are not taking full advantage of them. DTDPS is the solution to this problem, connecting customers with the right partners to make sure they are getting the full value of their ALM investment. Software that sits on the shelf is a huge waste of money, and from Microsoft’s perspective it’s something you’re not likely to buy again, so it is of course in their interest to offer such a program.

What kinds of services are included in DTDPS? Currently there are three DTDPS offerings available: TFS deployment planning, Visual SourceSafe migration planning, and Microsoft Test Professional deployment planning. You’ll notice a theme here: planning. These engagements are not meant to be used to implement the tools. Instead, they are short, fixed-length (3- or 5-day) engagements for gathering data and analyzing a customer’s current environment in order to help them build a plan for implementation and adoption of TFS and/or MTM.

But what if I don’t need one of those services, but need other assistance with TFS? Well, it depends. I know, I know, typical consulting answer. These programs can be expanded upon to assist customers with other ALM-related concerns, so drop me a line and I’ll be happy to discuss it with you in more detail. Also, the programs being offered may be changing soon, so check the site occasionally to see if a program has been added that fits your needs.

Who delivers the engagement? DTDPS is a program delivered through certified and experienced ALM partners like Polaris Solutions to help customers with SA (Software Assurance) benefits take full advantage of the tools they own. This means customers benefit from a wealth of relevant experience and established best practices that only comes from having deployed and leveraged the tools in a large number of environments.

OK, I’m intrigued, but how expensive is it? It is FREE. Seriously, and absolutely. This benefit is available to customers who purchase Microsoft products with SA; think of it as a rewards program. In fact, you may have DTDPS credits without knowing it! Many of the customers I work with did not know they had DTDPS credits available until I turned them on to the program.

I want in! How do I sign up? Start at the DTDPS site, where you can peruse the various services available and see which ones are right for you and your organization. Then check out the DTDPS QuickStart guide, which walks you through the steps of accessing your benefits. Then just pick a partner to work with, like us, and you’re on your way to a better way of doing ALM!


So You Were Forced to Use the Dreaded TFS Collection /Recover Command, Now What?

by Angela 11. October 2012 08:23

Since we have used Recover on a production database and lived to tell the tale, I thought I would share our experiences. If you read this post, you will know that one of my clients got themselves into a world of hurt where we needed to restore a nightly backup that was not detached. I know, I know, detached backups are the way to go. Well, now THEY know that too ;) Nonetheless, sometimes you may find yourself needing to recover a TFS Team Project Collection (TPC) database, and if you’ve read the MSDN documentation you’ll know this is not an ideal situation. The Recover command is very lossy, BUT you get your data back. And in our case it was worth the risk.

So here is the backstory… Someone deleted a Test Plan with a month’s worth of data in it, and if you know MTM, you know there is no “undelete”. Restoring a backup was our only hope. BUT our nightly backups are SQL backups of the entire SQL Server instance, so undetached (we are addressing this NOW). Plucking one TPC out of there and attaching it is just not an option, and we did not have the hardware to restore the entire thing and detach it properly. So here is what we did:

  1. Restored the backed-up TPC from the nightly backup into our dev TFS environment
  2. Ran the TFSConfig Recover command to get the TPC into the proper state
  3. Ran the TFSConfig Attach command to get it attached in dev
  4. Detached the hosed TPC from production
  5. Restored that detached version of the TPC to production
  6. Attached the backup to production (we actually hit an interesting bug in TFS 2010 at this point, so the attach was quite harrowing and involved an emergency hotfix to our TFS sprocs, which I may blog about later)
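
For the record, the recover and attach commands in steps 2 and 3 look roughly like this. This is a sketch from memory with placeholder server and database names, so double-check the exact syntax against the MSDN documentation for your TFS version:

    rem sketch from memory; DevSqlServer and Tfs_MyCollection are placeholders
    TFSConfig Recover /ConfigurationDB:DevSqlServer;Tfs_Configuration /CollectionDB:DevSqlServer;Tfs_MyCollection
    TFSConfig Collection /attach /collectiondb:"DevSqlServer;Tfs_MyCollection"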

Now, I would love to say everything was perfect, but the Recover command did blow away some things that we had to get back into place before people could use the TPC again. What we lost:

  1. All the security settings, ever!
    • Collection level groups and permissions
    • Team Project (TP) level groups and permissions in every TP in the TPC
    • Permissions around Areas and Iterations in every TP in the TPC
    • Permissions around Source Control in every TP in the TPC
  2. SharePoint settings (in every TP in the TPC). Settings on the SharePoint server itself will be fine, of course, but you will probably see a “TF262600: This SharePoint site was created using a site definition…” error when you try to open the portal site that was once attached to those TPs. You will need to fix this in two places.
    • Go to the TFS Admin Console, select the TPC you just restored, and make sure the SharePoint Site settings for the TPC are correct. They will probably be set to “not configured” now.
    • Open Team Explorer (as an admin user), and for each TP go to “Team Project Settings | Portal Settings” and verify everything there is correct. Ours were just plain gone, so we had to enable the team project portal and reconfigure the URL.
  3. SSRS settings – these will probably be fine if you restored the database as-is, but we also renamed it as part of the restore, and so had to update the Default Folder Location through the Admin Console for the TPC in order for this to work again.

So, word to the wise: make sure you understand what the settings above are for all of the TPs in your TPC BEFORE you perform a Recover command, because chances are you will have to manually set them all back up.

Tags:

ALM | Application Lifecycle Management | MSDN | MTM | Microsoft Test Manager | Microsoft Test Professional | TFS | TFS 2010 | Team Foundation Server | VS 2010 | Visual Studio | TFS Administration


So you accidentally deleted your MTM Test Plan, Now What?

by Angela 10. October 2012 04:14

So this week we had a little bit of fun, by which I mean a day that started with panic and scrambling when someone accidentally deleted a Test Plan (yes, a whole test plan) in MTM in production. A well-established test plan with dozens of test suites and over a hundred test cases with a month’s worth of result data, no less... Some important things of note:

  • test plans are not work items; they are just a “shell” and so are a bit easier to delete than they should be (in my opinion)
  • there is no super-secret, command-line-only undelete like there is for some artifacts in TFS, so recreating from scratch or TPC recovery are your only options to get it back
  • when you delete a test plan, you lose every test suite you had created. Thankfully, not the test cases themselves; those are safe in this situation. Worst case, a plan can be recreated, although it is tedious and can be time-consuming.
  • when you delete a test plan, test results associated with that test plan will be deleted. Let that sink in – ALL OF THE TEST RESULTS FOR THAT TEST PLAN, EVER, WILL ALSO BE DELETED. ::this is why there were flailing arms and sweaty brows when it happened::

So at this point you may be thinking it’s time to update your resume and change your phone number, but hold up. You may have some options to recover that data, so buy some donuts for your TFS admin (I like cinnamon sugar, BTW). I should mention there may be a lot of other options, but these are the three I was weighing, and due to some things beyond my control we had to go with #2.

1) Best Case Scenario: restore your DETACHED (this is required) team project collection database from a backup, ’cause you’re totally taking nightly backups and using the TFS Power Tools, right? You lose a little data depending on how old that backup is, but it may be more important to get back your test runs than to have to redo a few hours of work.

2) Second Best Case Scenario: if you cannot lose other data, and are willing to sacrifice some test run data, then restore the TFS instance from a traditional SQL backup to a separate TFS instance (so, NOT your production instance), open up your test plan in that secondary environment, and recreate your test plan in production. Not ideal, but if you didn’t have a ton of test runs, this may be faster, and you don’t sacrifice anything in SCM or WIT that was changed since the backup was taken.

3) Worst Case Scenario: if your backups were not detached when you did your last backup, cry a little, then use the Recover command to re-attach them. The gist is to use the TFSConfig Recover command on the collection to make it “attachable” again, then attach it to your TFS instance. I have written a separate post on this because it can be complicated…

Once you are back up and running, make sure the right to manage test plans is locked down! It might not be obvious that you can even do this, or where to find it, since it is an “Areas and Iterations”-level permission. But do it, do it now! This permission controls the ability to create and delete Test Plans, so be aware of that. For the most part, the only people creating Test Plans should be those with the authority and knowledge to delete them, considering what they contain. If everyone needs the ability to create/delete these willy-nilly, then you are doing it wrong, in my opinion anyway.

I am still in the midst of getting this back up and running, so I will update once we’re finished. There is an MSDN forum post out there regarding one bug I seem to have uncovered, if anyone wants to look at it and maybe fix my world by answering it :) I am sure I’ll be able to add some more tips and tricks by then.


Microsoft Test Manager (MTM) Tip O’ the Day–Filtering test lists

by Angela 3. July 2012 07:41

Now, I am no @ZainNab, the guru of “Tips and Tricks”, but I occasionally run across features that have been staring me in the face for YEARS and yet somehow went completely unappreciated, sometimes unnoticed. And then one day it hits me, and OMG my life is easier, and I want to tell everyone. Sure, it’s a bit embarrassing to admit sometimes, given that I worked at Microsoft for 5.5 years focusing on the Visual Studio tools, but who hasn’t done that? Not you? Really? I am skeptical… There are, after all, a bajillion commands to try and remember. For real, if you don’t believe me, look at the entire book that Sara Ford and Zain wrote about it. It’s worth every penny, and Amazon has a great deal on it; pick up a copy! :)

So, back to my point. I was sitting in MTM, looking at a fairly daunting list of PBI-based test suites, thinking “now which PBIs were the ones where I had test cases to run again?” I started thinking about writing a query, but that only helps if YOU are assigned to the test case; it doesn’t really help with test RUN assignment. Then it all came flooding back. Wait, there’s this FILTER button to sort that out. And conveniently it’s right there in front of my face ::face palm:: I felt a little better when no one else admitted to noticing it was there either. Maybe they were just being nice to me. Either way, in case you didn’t notice it, check it out. Before:

[Screenshot: the full, unfiltered list of test suites]

After, I have FAR fewer test suites to look at:

[Screenshot: the filtered list of test suites]

That’s my Microsoft Test Manager tip o’ the day! I won’t be posting them every day like Zain has been doing on his blog around Visual Studio 2010 for the past couple of years; of course, I also don’t mainline 5-hour Energy like he does :) I will do them whenever I can. Hope this was helpful! Feel free to post any tips of your own, or shoot me a note if you have other questions or comments.


June ALM User Group Meeting–Acceptance Testing Using SpecFlow!

by Angela 17. May 2012 06:12

Get ready, we have a packed summer full of great topics at the Chicago Visual Studio ALM user group! Be sure to join us in beautiful downtown Chicago at the Aon Center in June for this next session on how to improve your user acceptance testing practices using SpecFlow! Be sure to pre-register on our user group site so we can get you entered into the security tool, and please do keep us posted if you have to cancel! We don’t like throwing away food, and it helps me to order the right amount.

Topic Description:

Imagine a project you’ve worked on in the past. Whether or not you or your organization makes use of Agile processes, you probably spent a good deal of time going back and forth with business stakeholders on the fine details of how the software you’re building should behave. It’s possible you had to dedicate effort simply to producing a demo that the business would appreciate and understand. It’s even more likely that at some point, you and the business had disagreement(s) on whether something was “working”, “finished”, or “done”. Those types of discussions can leach away your team’s time and effort, and impact morale, as well as create tension between development teams and the business.

Now imagine if you could instead pour that blood, sweat, and tears into developing your application’s functionality. Imagine a scenario where new features are authored test-first, by non-tech staff in a plainly understandable, shareable, and versionable text format. Imagine a situation where the same set of specifications can be shared to drive a browser-based test suite at the same time that the specifications drive an integration test suite. These are the types of scenarios that tools like SpecFlow are particularly well-suited to address.
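
To make that concrete, here is a hypothetical example of the kind of plain-text (Gherkin) specification that SpecFlow consumes; the feature and steps are invented purely for illustration:

    # hypothetical feature, for illustration only
    Feature: Shopping cart checkout
      Scenario: Applying a discount code
        Given a cart containing 2 items totaling $50
        When the customer applies the discount code "SAVE10"
        Then the order total should be $45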

Unit tests are great for verifying atomic pieces of software functionality, but they are very poor at capturing and communicating specifications at any resolution other than fine-grained. They’re also completely useless to a non-technical user attempting to understand a system’s functionality.

This is where acceptance testing enters the picture. Although commonly classified as BDD (behavior-driven development), tools and frameworks like SpecFlow serve to bridge the gap between proving the correctness of a piece of code from the inside, micro perspective and the correctness of an application as a whole from the outside perspective.

In this talk, we’ll go over what acceptance testing is, when it should be used, and how to add acceptance testing into an existing application using SpecFlow. We’ll also talk a bit about DSLs (domain-specific languages), the pyramid of returns vs. effort when it comes to different types of testing, techniques for authoring and designing tests and bindings, and finally, because this *is* a group about ALM, how to integrate SpecFlow into a CI environment and why you or your organization should do so.

If attendees wish to follow the demo on their laptops, they can save time by pre-installing the VS tooling for SpecFlow – http://specflow.org. The download there adds some tooling support within the VS IDE, but is not needed to run SpecFlow.

 

Speaker Bio:

Josh Elster is the founder and principal of his independent production and consulting company, Liquid Electron. With clients ranging from small media design shops to multi-billion dollar corporations, Josh’s experience spans a number of different sectors, projects, and roles. In February of 2012, Josh joined the community advisors board for Microsoft’s Patterns and Practices team for the CQRS journey project (http://cqrsjourney.github.com), as well as being a contributor. Like the common cold, but without the whole being ill aspect, it is Josh’s hope that he can infect others with his passion for software development. When not serving as Patient Zero, Josh can be found reading, playing video games or guitar, or coding. His website can be found at http://www.liquidelectron.com. His Twitter handle is @liquid_electron. His most recent demonstration project, the PostcardApp, can be found at http://www.postcardsfromskyrim.net.


An interesting Quest (pun intended)…into Agile testing!

by Angela 9. May 2012 08:57

So there is a fantastic little conference gaining steam in the Midwest called Quest, which is all about Quality Engineered Software. If you’ve never heard of it, you should seriously check it out next year, regardless of your role. As I have always said, quality is NOT the sole responsibility of the testers, and this conference has something for everyone. I was fortunate enough to be introduced to the local QAI chair who runs the conference the first year it ran (2008), which, lucky for me, also happened to be in my back yard. I was with Microsoft at the time, and we had opted in as the biggest conference sponsor, ’cause let’s be real - who on earth in QA ever thought “Yeah, Microsoft has some awesome testing tools”? ::crickets:: Right.

At the time, VSTS (remember THAT brand? :P) was still new-ish, and the testing tools were focused almost entirely on automated testing. Yeah, I know, TECHNICALLY there was that one manual test type, but let’s not even go there. I know a few, like literally 3, customers used the .MHT files to manage manual tests in TFS, but it wasn’t enough. The automated tools were pretty awesome, but what we found was that MOST customers were NOT doing a lot of automation yet. Most everyone was still primarily doing manual testing, with Word and Excel, maybe SharePoint. We had a great time at Quest talking to testers and learning about what they REALLY need to be happy and productive, we got the word out on VSTS and TFS, and we started planning for the next year. I was able to be part of Quest as a Microsoftie in early 2009 as well, when the 2010 tools (and a REAL manual test tool) were just starting to take shape, and then the conference spent a couple of years in other cities. Fast-forward to 2012, when Quest returned once again to Chicago.

I was no longer a Microsoftie, but if you’ve ever met me, you know that working a booth and talking to as many people as possible about something I am passionate about is something I rock at, and enjoy! So I attended Quest 2012 again this year, this time as a guest of Microsoft. I worked the Microsoft booth doing demos and answering questions about both the 2010 tools and the next generation of tools, and WOW, did we get some great responses to them. Particularly to the exploratory testing tools. I am pretty sure the reverse engineering of test cases from ad-hoc exploratory tests, and the 1-click rich bug generation that sent ALL THE DATA EVER to developers, gave a few spectators the chills. I certainly got a lot of jaws dropping and comments like “THIS is a Microsoft tool?!” and “I wish I had this right now!”. It was pretty great.

I was also fortunate enough to attend a few pre-conference workshops, keynotes, and a session or two. I have to say, WOW, the conference is really expanding, and I was very impressed with the quality of the speakers and the breadth of content. As a born-again agilista, I was so pleasantly surprised to see an entire TRACK on Agile with some great topics. I was able to attend “Transition to Agile Testing” and “Test Assessments: Practical Steps to Assessing the Maturity of your Organization” and learned quite a bit in both sessions. One disappointment: there is even more FUD out there in the QA world than what I see in the developer world when it comes to Agile, what it actually means, and how it SHOULD be practiced. I’m not about being a hard-core “to the letter” Scrummer or anything, but I am also not about doing it wrong, calling it Agile, and blaming the failure on some fundamental problem with Agile. There are lots of Agile practices that can be adopted to improve how you build, test, and deliver software without going “all in”, and that was something I kept trying to convey whenever I spoke up.

I heard “Agile is all about documenting as little as possible”, “Agile lacks discipline”, “Agile is about building software faster”, and all of the usual suspects you would expect to hear. No, it’s about documenting only as much as is necessary; there is a difference! Agile actually requires MORE discipline. People on Agile teams don’t work faster; they just deliver value to the business SOONER than in traditional waterfall models, which, sure, can be argued is “faster” in terms of time to market. The only things that would make me work faster are a better laptop and typing lessons. I still look at the keyboard, I know ::sigh:: I am seriously considering doing a session next year on Mythbusting Agile and Scrum, to help people understand both the letter and the spirit of Agile practices. Overall, it was great to see that the QA community is also embracing Agile and attempting to collaborate better with the development side of the house. We just need the development side to do the same ;) I also met at least a dozen certified Scrum Masters in my workshops, which was great to see!

One of my favorite parts of the conference was, of course, getting to catch up and talk tech with Brian Harry. He was the first keynote presenter of the conference and spoke on how Microsoft “does Agile”, the failures and successes along the way, and even spent some time talking about his personal experiences as a manager learning to work in an Agile environment. I.LOVED.THIS. Yeah, I’m a bit of a Brian Harry fan-girl, but it really was a fantastic talk, and I had many people approach me in the booth later to comment on how much they enjoyed it. My favorite part was Brian admitting that at first, even HE was uncomfortable with the changes. It FELT like he was losing control of the team, but he eventually saw that he had BETTER visibility into and MORE control over the process, and consequently the software teams. It was brilliant. So many managers FEAR Agile and Scrum for just those reasons. It’s uncomfortable letting teams self-organize, trusting them to deliver value more often without constant and overwhelming oversight by project managers, and living without a 2-year detailed project plan that, in all actuality, is outdated and invalid as little as a week into the project. Wait, WHY is that scary? Sorry, couldn’t let that get by.

And so off I go again, into the software world, inspired to keep trying to get through to the Agile doubters and naysayers, and to help teams adopt Agile practices and tooling to deliver better software, sooner.

Tags:

Agile | ALM | Application Lifecycle Management | TFS 2010 | SDLC | Team Foundation Server | Testing | Test Case Management | User Acceptance Testing | VS 11 Beta | VS 2010 | Visual Studio | development


Upgrading Team Projects from Agile 4.2 to Agile 5.0 on TFS 2010–Part 3, Swapping in the QoS Requirement

by Angela 28. March 2012 07:33

So if you’re reading this, you are probably finishing up my 3-part story about updating a process template from Agile 4.2 to Agile 5.0 on a TFS 2010 server. This is the last installment, where I embarrass myself further by sharing one more stumbling block that I encountered along the way. So now we have all of our awesome tools installed, we downloaded Hakan’s script, we got our work item definitions imported and updated, and we finally added our trusty old Quality of Service Requirement to the new Requirement Category in the process template. Everything was working beautifully until I went and tried to link a QoS Requirement to a Test Case. Cue sad trombone again…

[Screenshot: error dialog when trying to link a QoS Requirement to a Test Case]

This was certainly not handled in any script, and I couldn’t find any documentation of it on MSDN, so hey, maybe this is something actually NEW in terms of guidance :) As soon as I saw this, I knew what was happening. I was pretty sure that somewhere there was some XML specifying which work item types were allowable in that dropdown, and my guess was that QoS Requirement was not one of them. I would have thought it was covered in the updated TestCase.xml used by Hakan’s script, or that maybe it was using the “Requirement Category” from Categories.xml (which would have covered QoS Requirement). I double-checked, and it was not. Here is the XML included with the script; note that only “User Story” is allowed here:

[Screenshot: the type filter in the script’s TestCase.xml, allowing only “User Story”]

I went ahead and made a little tweak so that QoS Requirements were allowable for the “Tested User Stories” functionality and re-imported the TestCase work item definition using the Power Tools. Essentially, all I had to do was add my work item type to the type filter node in the XML:

[Screenshot: the modified XML with Quality of Service Requirement added to the type filter]
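
For reference, the relevant chunk of the TestCase.xml links control looks roughly like this; I’m sketching the element and attribute names from memory of the Agile 5.0 template, so verify against your own exported definition. The second Filter line is the one I added:

    <WorkItemTypeFilters FilterType="include">
      <Filter WorkItemType="User Story" />
      <Filter WorkItemType="Quality of Service Requirement" />  <!-- added: allow QoS Requirements -->
    </WorkItemTypeFilters>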

And now when I click “New” or “Link To” from a TestCase work item, I have access to my Quality of Service Requirements, HUZZAH!

[Screenshot: QoS Requirements now available in the work item link dialog]

Now, I am sure this is intentional; I assume in most cases you really only want “User Story” type work items to be linked in this particular tab, but for our purposes this is what we are looking to do. I was a little curious as to why Hakan’s update script did not include the User Story work item type definition… But hey, at the end of the day I demystified some more of the “magic” going on behind the scenes in TFS. I am currently digging in a bit more to figure out if it makes sense to add User Stories to these upgraded Team Projects as well, since there are some very different fields and metadata being collected on them. As I mentioned before, these are mostly inactive projects I am “experimenting on”, so I’d love to hear any feedback or opinions on what you have done with your own projects.

OK, one last pro tip before I go. How often do you get an error dialog from TFS or VS, and you want to Google or Bing it, but now you have to type in all of the text by hand and hope you don’t miss a letter or number? For me, daily! *Sometimes* you can copy and paste the text; sometimes there is even a tool or link to let you copy it, but often you are on your own. I totally ran into this by accident the other day. OneNote has a great screen capture tool that will work in any app, even on the desktop. Make sure you have opened the OneNote app at least once (and seriously, if you haven’t, you’re crazy, ’cause it’s the only tool I use for taking and sharing notes). Hit Windows+S, drag the crosshair around what you want to capture, and let go. Copy the image to your clipboard and paste anywhere. Cool, huh? It gets better. I noticed that if you right-click an image, you get the option to “Copy Text from Picture”. I saw that and thought, “no way that works!”, and lo and behold, it does.

[Screenshot: OneNote’s “Copy Text from Picture” option on a right-clicked image]

You’re welcome :)

That’s it for now, hope you learned something in reading about my adventures in process template upgrading.

 

Part 1 – Process and Tools

Part 2 – Field Mismatches

Tags:

ALM | Agile | Application Lifecycle Management | MSDN | Power Tools | SDLC | TFS | TFS 2008 | Team Foundation Server | TFS 2010 | Test Case Management | VS 2010 | Visual Studio | Work Item Tracking


Upgrading Team Projects from Agile 4.2 to Agile 5.0 on TFS 2010– Part 2, Field Mismatches

by Angela 28. March 2012 07:05

So hopefully you’ve already scanned through this other post, where I cover the overall process I used for doing my updates. It also has some great tips and tricks for making the whole job easier using a few free tools, as well as a few links to helpful blogs and MSDN resources to save your sanity! That being said, here are some of the issues I encountered during my upgrade, and how I was able to work around or fix them. Again, if you are using Hakan’s script and just running it as-is, you might not see some of these errors. I just figure you learn more by screwing up, and since I was working with some test projects, I had the luxury of being able to try out several different strategies without affecting anything critical, and so I did a lot of things by hand first.

The first stumbling block I encountered during the upgrade was a weird issue with inconsistent “friendly names” for some of the fields. Essentially, I had some naming collisions when I tried to import some of the new artifacts like SharedStep.xml and TestCase.xml. You might at some point encounter an error message similar to “TF26177: The field XxxXxx cannot be renamed from ‘XxxXxx’ to ‘Xxx Xxx’.” In other words, “Area ID” vs. “AreaID”, “Iteration ID” vs. “IterationID”, and a few others.

[Screenshot: the TF26177 field rename error]

The ones I was importing had field names that didn’t match EXACTLY. Now, I started thinking, “But I am just re-uploading the work item type definitions that TFS was ALREADY using. They should be exactly the same, right?” I opened up the work item type definitions (thank you, TFS Power Tools) and found that indeed, some of the field names did NOT match the ones on the server. You’ll note in the screenshot below that in just a handful of cases, a blank character was missing from the field name, so the import process sees this as a rename attempt. You are looking at a new Agile 5.0 Team Project work item definition on the right, and the standard Agile 5.0 Team Project work item definition used to create that new project on the left.

[Screenshot: side-by-side comparison of the field names in the two work item definitions]

In essence, what I ended up having to do to rectify this was to go in and modify the work item type definitions for the appropriate work items to ensure that the field names being imported matched the field names on the server, before attempting to import them again. For me, it was an issue in both the SharedStep and TestCase work item type definitions, but it certainly didn’t take long to fix. Once that was done, I had success!

[Screenshot: successful import of the updated definitions]
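
If you prefer the command line to the Power Tools, the same export/tweak/import round trip can be done with witadmin, which ships with Visual Studio 2010. The collection and project names below are placeholders:

    witadmin exportwitd /collection:http://myserver:8080/tfs/MyCollection /p:MyProject /n:"Test Case" /f:TestCase.xml
    rem ...edit the field names in TestCase.xml to match the server, then...
    witadmin importwitd /collection:http://myserver:8080/tfs/MyCollection /p:MyProject /f:TestCase.xml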

UPDATE: It turns out some of the fields being used were, of course, already defined on the server from the previous TFS 2008 implementation, and when TFS 2010 was released, a few of the names had been altered slightly. After struggling with this for an hour or two and somehow not running across the documentation stating that this was a known issue, I eventually figured out the fix on my own. Today, I was kindly pointed to a couple of places where this was documented, including a post by Gregg Boer that was pretty much written specifically to address this very situation ::facepalm::

The last thing we need to do is update the Categories.xml. Silly me tried just importing the Categories.xml from the Agile 5.0 template, which will of course NOT work, because 4.2 requirements were named a bit differently than 5.0 ones. You’ll see something starting with “TF237059: The import of the category definition failed”, because the new Categories.xml will refer to “User Story” and what you have is a “Quality of Service Requirement”. I opened up the XML provided with Hakan’s script because I wanted to verify what was happening and what I was doing wrong, and I was not shocked to see that it was essentially updating the “Requirement Category” to support the new world order of work item types. RTFM, Angela, RTFM. Here is what you will see in Hakan’s updated Categories.xml file:

[Screenshot: the Requirement Category entry in Hakan’s updated Categories.xml]
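
The gist of that entry, sketched from memory (the refname and casing may differ slightly in the actual file), is a Requirement Category whose default work item type is the old 4.2 requirement instead of User Story:

    <CATEGORY name="Requirement Category" refname="Microsoft.RequirementCategory">
      <DEFAULTWORKITEMTYPE name="Quality of Service Requirement" />  <!-- the stock Agile 5.0 template uses "User Story" here -->
    </CATEGORY>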

So now my categories were imported correctly and I was feeling good, but I had to do some testing, as I was SURE I would encounter some additional problems once I dug into Visual Studio and Microsoft Test Manager and started creating work items in the new and improved Agile 5.0-ish Team Project. It was definitely a trip seeing the co-mingling of the two versions of the Agile template in Team Explorer:

[Screenshot: work item types from both versions of the Agile template side by side in Team Explorer]

For the most part, this all “just worked”. I created work items, linked them together, created hierarchies, opened them in Project and Excel, made changes, and published. Life was good. And then I tried to link a Test Case to a “requirement” in my new world. Wah, wah, wah, waaaaaaaah. Check out my third post for details on how I managed to fix this. Again, it could very well have been something I did wrong, but it was a lesson learned…

 

Part 3 – Swapping in QoS Requirement for User Story

Tags:

ALM | Application Lifecycle Management | Agile | MSDN | Power Tools | SDLC | TFS | TFS 2008 | TFS 2010 | Team Foundation Server | VS 2010 | Visual Studio | Work Item Tracking
