Tuesday, September 23, 2014

Spike Story

Spikes are an invention of Extreme Programming (XP): a spike is a special type of story used to drive out risk and uncertainty in a user story or some other facet of the project.

A spike is a story aimed at answering a question or gathering information rather than producing a shippable product. Sometimes the development team cannot estimate a story without doing some actual work to resolve a technical question or a design problem, so we create a spike story whose purpose is to provide that answer or solution. Like any other story, the spike is given an estimate, included in the sprint backlog, and its outcome is demonstrated at the end of the iteration.

A spike story could include activities such as research, design, exploration and prototyping. The purpose could be:
1) To gain the knowledge necessary to reduce the risk of a technical approach.
2) To get a better understanding of the requirement.
3) To increase the reliability of a story estimate for technically or functionally complex features.

There can be two types of spike stories:
1) Technical - to determine the feasibility and impact of design strategies.
2) Functional - to analyze the aggregate functional behavior and to determine how to break it down, how it might be organized and where risk and complexity exist, in turn influencing implementation decisions.

Since spikes do not directly deliver value to the user, they should be used only rarely.

We should be able to estimate (time box) a spike, and the result (answer, solution, prototype) should be something that can be demonstrated by the development team and accepted by the product owners.
Spikes should be reserved for the larger, more critical unknowns only.

Do not plan the spike story and its implementation story in the same sprint. If the unknown is simple enough that both fit in one sprint, it probably did not need a spike, since every story will inherently have some unknowns that are discovered while implementing it.

Saturday, July 5, 2014

Technical Debt

Technical debt is a metaphor referring to the eventual consequences of poor system design, also known as design debt or code debt. Every project is going to have some form of technical debt no matter what, and there are several ways you get there. You can create technical debt deliberately or inadvertently.
I know but I need to ship it soon
Deliberately created technical debt is when you know that you are creating it. One very common argument is the need to ship the product quickly in order to get to the market first: if we don't ship our product in time, there is a risk of losing the market to the competition. So we pay more attention to getting the product out early and compromise on quality, assuming that once it has shipped we will get back to getting the quality right. There are circumstances where this is a valid argument, and a good development team will have a plan to reduce the debt once the product is released.
I know but I don't care
There is a case where applications are built without paying attention to design or quality. The team knows that it is not right and that debt is being built, but cares little. The common arguments for this approach are "We don't have time for all this", "Our product owner will not allow this", "We have to get this done even if quality is compromised", "This is how we have been doing it", "If it ain't broken, why fix it", and so on. This is commonly seen in projects that have just started using Scrum. The goal is to go fast and faster, or simply to get things done, and the wrong metric is used to measure progress. The focus is on burn-down charts and micromanagement of tasks, and most likely the result of the work done at the end of the sprint is not release ready without a hardening or testing sprint. This is a very naive approach, where the team does not understand that doing agile or Scrum does not by itself bring agility. The true value of agile comes when you are able to ship a product with quality at the end of a sprint; if you can't do that, you are doing it wrong. If you can't ship your product with quality at the end of a sprint, typically one week or a maximum of four weeks, then you are not AGILE. Sadly, this is the most commonly seen way of creating debt.
I don't know what I am doing
There are inadvertent ways in which you can still create debt in your code. One is when you have a relatively inexperienced team trying their best and unknowingly creating a mess. Teams new to Test Driven Development tend to focus on the basic mechanics of TDD without fully using its design insights to refactor code towards patterns. The same issue arises when the wrong patterns are applied to a problem without anyone realizing it. These teams probably need training, a technical mentor, and practices such as pair programming and code review sessions to come up with better solutions. Not recognizing the maturity level of the team and not providing appropriate training and mentorship can lead progress down the wrong path. By the time the team realizes how much technical debt it has acquired, it will probably be too late.
I didn't know how I was supposed to do it at the time
One more pattern of creating inadvertent debt is the case where the business domain itself was so complex that the evolving understanding of the business was not reflected in the code. This is the typical case where the team feels "If I knew what I know now at the beginning of the project, I probably would have built it a different way". Most projects, even with a lot of talent and good practices, can still get into technical debt this way due to the complexity of the domain and gaps in understanding the requirements. This can also happen when the requirements change over time for the right reasons.
To sum up, no matter what you do you are going to have technical debt build up in the process, and you need a strategy to deal with it. The best way to manage technical debt is to keep your design flexible to change and to have a suite of tests covering your features and code, so that you can constantly refactor and update your code and design to reflect your current understanding of the system and tools.
Good pragmatic practices such as Acceptance Test Driven Development and Domain Driven Design, along with Test Driven Development, constant refactoring, and using all the feedback loops appropriately to build a culture of learning and reacting to change, are a good way to go.
The only people who know and see technical debt are the folks who write the code, and it is very difficult to convince anyone outside the development team. The only way to get rid of technical debt is to constantly refactor the code while developing in a smart way. It is the responsibility of the developers to keep the code base clean and maintainable by applying all the pragmatic practices suitable to the project situation. Caving in to arguments that do not keep the code base clean is irresponsible and unprofessional, because essentially you are making your product owner pay for that irresponsibility when adding features costs more in the future.


Saturday, May 10, 2014

Story Pointing

Story pointing is probably one of the most ambiguous and confusing concepts in agile practice, and many teams seem confused about how to deal with it. The idea is to measure the complexity of a story against an ideal baseline. Depending on who you ask, you get a different perspective on what it is.

Story pointing is a relative estimation technique. One of the hardest tasks for a developer is to estimate how long it will take to complete a task; software estimation has always been difficult, and as software engineers we can safely say we are bad at it. Yet from a management perspective that is all they care about: how long it will take and when it will be completed.
The idea behind story pointing is to pick one simple story from your backlog that can be done in a couple of hours to a day or less and point it as one. Then pick other stories, compare them with it, and estimate relatively, i.e. whether a story is twice as hard (2 points), three times as hard (3 points) or about the same (1 point). The more unknowns, the higher the points; generally complexity and unknowns increase the pointing. This way you have a relative estimate for every story. After a couple of sprints, the number of story points done in each sprint will show the velocity of the team. Another approach is to use yesterday's weather to determine tomorrow's weather, i.e. use the last sprint's velocity as the number of points that could probably be done in the next sprint.
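To make the arithmetic concrete, here is a minimal sketch in C# (the names AverageVelocity, YesterdaysWeather and SprintsRemaining are hypothetical, chosen only for this illustration) of how point history turns into a velocity and a rough forecast:

using System;
using System.Linq;

// Minimal sketch of velocity arithmetic; names and numbers are illustrative only.
class VelocityForecast
{
    // Average points completed per finished sprint.
    static double AverageVelocity(int[] completedPointsPerSprint) =>
        completedPointsPerSprint.Average();

    // "Yesterday's weather": plan the next sprint around what the last sprint completed.
    static int YesterdaysWeather(int[] completedPointsPerSprint) =>
        completedPointsPerSprint.Last();

    // Rough forecast of how many sprints the remaining backlog needs.
    static int SprintsRemaining(int remainingBacklogPoints, double velocity) =>
        (int)Math.Ceiling(remainingBacklogPoints / velocity);

    static void Main()
    {
        var history = new[] { 18, 22, 20 };                 // points done in the last three sprints
        double velocity = AverageVelocity(history);          // 20 points per sprint
        int nextSprintPlan = YesterdaysWeather(history);     // plan about 20 points next sprint
        int sprintsLeft = SprintsRemaining(120, velocity);   // 120 points left -> about 6 sprints
        Console.WriteLine($"velocity={velocity}, next sprint={nextSprintPlan}, sprints left={sprintsLeft}");
    }
}

Treat the output as a rough projection only; it is the conversation around the numbers, not the numbers themselves, that carries the value.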

Beyond the relative estimate itself, one advantage I see in story pointing is that when the team produces a wide range of points for a story, it indicates that the understanding of the story is not clear enough; that should lead to more discussion and a re-pointing to arrive at a better estimate for the story.

I have observed some teams estimate story points in terms of days of work, for example a 1-point story can be done in a day, a 2-point story in two days, and so on. As long as the team is consistent in its idea of story pointing and it provides a baseline for estimation for the product owner, that may be fine. The difficulties arise when you have multiple teams working on the same project and the product owner has a tendency to compare velocity from team to team, which is a naive approach. The velocity of one team should never be compared against another team's, even if they both work on the same project. It is better to divide the project into focused modules so that work on each module does not affect the others.

I have heard some argue: what is the need for story pointing at all? Why not just pull in stories using lean Kanban techniques and use last week's weather to determine how much work can be done? But that still relies on last week's weather to predict next week's, and it will be hard to get management buy-in for no estimation at all. After all, you need to know when a project will end.

My overall view on story pointing, with experience, is to use it just for the advantages it gives: 1) a very generic estimate that can be used to judge roughly when a project will end, 2) a complexity measure to determine whether a story is rightly sized and has enough information to be pointed, and 3) a rough velocity estimate. I would not give story pointing much importance beyond a rough estimation technique. With experience teams generally do get good at estimating story points, and I have seen some teams come up with a reasonable, usable velocity estimate. Mature teams that can produce a consistent velocity from sprint to sprint, in terms of output and value to the business, can use it to their advantage in the right environment. With proper clarity and understanding of story points, teams and the business can use them as a tool for a generic estimate, provided there is the environment and the understanding to use the best process and engineering practices to move towards the business goal, releasing the most marketable, valuable items early so that each sprint has a potentially valuable outcome irrespective of the velocity estimates.


Monday, April 21, 2014

Velocity


According to Scrum, velocity is how much product backlog effort a team can handle in one sprint, and it is used for predicting when the project will be done. When a project is started we have a general scope, a projected cost, and a probable date. As we go sprint by sprint we learn more about the scope based on the reality of the situation; we go faster or slower, and may find better opportunities or new challenges. So predicting when a project will be over based on velocity may not work out well, as is often seen in agile or Scrum based projects.

Let's take a different approach: ignore velocity and capacity and work based on a goal. The team and the product owner select a set of high-priority items for the next sprint and establish a goal for the team to meet. As the sprint progresses, items that turn out to be more than the team can do are removed, or more PBIs that fit the goal are pulled into the sprint if there is time for them. At the end of the sprint the product owner, team and key stakeholders inspect and adapt based on what was completed.

Use short development cycles of one week, or two weeks at a maximum, which give quick feedback on progress towards our goals, commitments and projections. A build-up chart might help gauge how far we have progressed towards the goal. Any time a team spends worrying about velocity or capacity is waste and adds no value; it slows down progress, impedes agility and robs time and effort that could otherwise go into creating value.

Use the last sprint's velocity as guidance for what can be done in the next sprint, but focus on the goal, measure progress based on how far or near you are to your goal, and make the changes that help you get there.

Cucumber and SpecFlow are collaboration tools

Cucumber and SpecFlow are collaboration tools and not testing tools.

From "The Cucumber Book" “Cucumber might just seem like a testing tool, but at its heart it’s really a collaboration tool.”

With Cucumber/SpecFlow scenarios we collaborate more, because the story is told in a way that is accessible to all participants (developers, testers, domain experts and product owners). It also helps us define a ubiquitous language for a requirement from an end user's perspective.

The Given-When-Then scenarios become executable documentation, accessible to non-technical stakeholders as a means of seeing what the system can and cannot do.
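For example, a Given-When-Then scenario in a feature file might read as follows (a made-up loyalty-discount example, not from any particular project), and is equally readable by developers, testers, domain experts and product owners:

Feature: Order discount
  As a returning customer
  I want a discount on large orders
  So that I am rewarded for my loyalty

  Scenario: Returning customer gets a discount on a large order
    Given a returning customer with 3 previous orders
    When the customer places an order worth $200
    Then a 10% loyalty discount is applied to the order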

The system tests and regression tests tell the "how we do it" part of the story. In short, the Cucumber/SpecFlow executable documentation explains what the system can and cannot do, and the tests that the testers run detail how it is implemented.

That is, Cucumber explains the business requirement being fulfilled, and the testers' tests are a script for how it is implemented in the system. The two are complementary but really separate things.

Step definitions/fixtures are the developers' interpretation of the story. They are neither specifications nor tests, but rather the glue between the specification and the tests, and the reason they change is that the story or the underlying code changed, due to one of the reasons below:

1. The product owner changed the Given-When-Then scenario describing what the system can do.
2. The developers changed the way they interpreted the specification, due to refactoring or reorganising of code.

Or, in short, the domain evolved and so the domain language evolved: new concepts were added, and existing ones were better understood or changed.

The step definitions are the developers telling the product owners "we thought this is what you meant". With a clean code base, consistent levels of abstraction and good naming, developers and product owners can look at them together and get a better understanding of what is being built.
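As a sketch of what that glue can look like in SpecFlow (using the standard [Binding], [Given], [When] and [Then] attributes; the Customer and Order classes here are hypothetical stand-ins for real production code), the step definitions for the discount scenario above might read:

using TechTalk.SpecFlow;
using Xunit; // assumption: xUnit for assertions; NUnit or MSTest would work equally well

[Binding]
public class OrderDiscountSteps
{
    private Customer _customer;
    private Order _order;

    [Given(@"a returning customer with (\d+) previous orders")]
    public void GivenAReturningCustomer(int previousOrders)
    {
        // Given: do whatever is needed behind the scenes to make this true.
        _customer = new Customer { PreviousOrders = previousOrders };
    }

    [When(@"the customer places an order worth \$(\d+)")]
    public void WhenTheCustomerPlacesAnOrder(decimal amount)
    {
        // When: trigger the behaviour from the highest level possible.
        _order = _customer.PlaceOrder(amount);
    }

    [Then(@"a (\d+)% loyalty discount is applied to the order")]
    public void ThenALoyaltyDiscountIsApplied(decimal discountPercent)
    {
        Assert.Equal(discountPercent, _order.DiscountPercent);
    }
}

// Hypothetical domain types, included only so the sketch is self-contained.
public class Customer
{
    public int PreviousOrders { get; set; }
    public Order PlaceOrder(decimal amount) =>
        new Order { DiscountPercent = PreviousOrders > 0 && amount >= 200 ? 10 : 0 };
}

public class Order
{
    public decimal DiscountPercent { get; set; }
}

Reading the bindings top to bottom is the developers saying "we thought this is what you meant"; the product owner can confirm or correct that interpretation without reading the production code.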

Instead of relying on just scripted documentation from the testers and just test-driven tests from the developers, the real value comes from relating requirements to implementation in a way that is accessible to business stakeholders, encourages conversation, and is executable at the same time.

* Given-When-Then (What)
* Step definitions ("...Thought you meant...")
* Tests (How)

Caution on testers using cucumber/specflow for writing tests

It is tempting for testers to use SpecFlow for writing scripted tests, for the following reasons:
1. It is easy to add new tests using existing step definitions.
2. It is easy to change the parameters to test edge cases, which means the opportunity to add many data scenarios.

The problem with that approach is that we may end up with (see the sketch below):
- too much detail
- missing the big picture
- a lot of repetition and far too many scenarios
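For instance, a tester-driven feature can quickly drift into a data-driven script like this hypothetical Scenario Outline, where the Examples table keeps growing, the business intent gets buried, and the edge cases would be covered better by lower-level tests:

Scenario Outline: Discount calculation
  Given a returning customer with <previousOrders> previous orders
  When the customer places an order worth $<amount>
  Then a <discount>% loyalty discount is applied to the order

  Examples:
    | previousOrders | amount | discount |
    | 0              | 200    | 0        |
    | 1              | 199    | 0        |
    | 1              | 200    | 10       |
    | 3              | 201    | 10       |
    | 3              | 0      | 0        |
    # ...and many more rows that really belong in unit tests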

Scenarios in Cucumber should be focused on the business user. The language should be kept fairly informal, and it is encouraged to keep the business users' own words as much as possible. Business users will not be talking in terms of clicking a button or a popup; the code underneath will do whatever is necessary to make the business scenarios work.

If something is a Given, do whatever is necessary behind the scenes to make it true, i.e. setting up the data or mocking/hacking a model, etc.

The When is the event that causes the behaviour you're interested in. You should trigger it from the highest level you can. This is the core of your scenario, so you shouldn't be hacking data in here.

Do not put UI concerns into feature files. Surfacing UI concerns on a feature can make it brittle to legitimate design changes, and also rather boring to read.
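As a hypothetical before-and-after, here is the same behaviour written first with UI steps and then in business language; the second version survives a redesign of the screens, the first does not:

# imperative, UI-coupled version
Scenario: Apply discount
  Given I open the login page and type "bob" into the username field
  And I click the "Log in" button
  And I click "New Order" and type "200" into the total field
  When I click the "Submit" button
  Then I should see the text "10% discount" in the summary panel

# declarative, business-focused version
Scenario: Returning customer gets a loyalty discount
  Given a returning customer
  When the customer places an order worth $200
  Then a 10% loyalty discount is applied to the order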

Be deliberate in your use of domain language

When writing your scenarios, keep in mind that you are writing them for two audiences: the person the feature is for and the person implementing it. Check the wording to see if you can spot anything that belongs to neither the problem domain nor the solution domain. If you find you are using language from outside those domains, you might be over-specifying the implementation or specifying unnecessarily broad requirements that mix concerns.

If you really care about how the behaviour is implemented, you should probably be specifying that elsewhere in a more fine-grained story – in other words chunking down to provide more detail – that won’t be interesting to the audience of this one. If not, you might want to push the detail down into the implementation of the steps.

The most important thing about BDD/ATDD is to talk to people. Please try to do that and not let the tools prevent you from doing it.

Grooming

Scrum doesn't give clear-cut guidance on grooming, which is actually one of the most important practices in an agile project. A well-groomed product backlog is one of the basic tenets of a successful agile project: from a well-groomed backlog you can pull stories into a sprint and comfortably complete the implementation with fewer uncertainties.

Unfortunately, in their early stages most projects have a tendency to push stories from the product backlog into the sprint without proper grooming, mostly out of ignorance: "We are agile, so we don't want to spend too much time on grooming. Once the story is picked into the sprint and being worked on, we will discuss the details." The problem is that if we start learning what needs to be done while working on a story, there is a likelihood of scope creep. Working on the details will expose more questions and might even end with the story not being completed, or completed without the full business value or quality, and the overall trust in the team will be affected. You don't want the sprint to be spent figuring out the business need while struggling to keep up with commitments. This sounds more like a waterfall model inside a sprint: first find the requirements, then develop the code, then put it up for testing. If the business is not reachable within the sprint it adds more problems, and if there are external project dependencies it gets even more difficult.

Apply grooming the right way: 1) discuss the story first to find out if it is the right story in terms of priority, testability and feasibility; 2) if it is the right size and priority, discuss the acceptance criteria to reach a common understanding; and 3) bring out the scenarios that can be used to define the scope and help define the definition of done based on the acceptance criteria. The team then only needs to pull the story into the sprint and work on the implementation details.

How can this be achieved? It is very important that the team understands the business need of the story. Spend about 10% of the capacity of the current sprint grooming stories for the next sprint. Grooming will include identifying the user, the need the story addresses for the business, and the scenarios that will define the story as done based on the acceptance criteria identified for it. This will reduce the time spent on planning and help estimate the tasks for a story.