Shane Duffy

Archive for the ‘testing’ Category

At some stage most of us have needed to work on our websites and move things around. Most people do this live on the site, which does not make for a great experience for the customers and clients who end up visiting broken and half-finished pages. Even worse are the search engine crawlers that come along and index sample text on your pages.

One of the neat tricks of a .htaccess file is being able to show a maintenance page to every visitor to your website while still being able to access the site yourself for testing and development.

RewriteEngine On
RewriteBase /
# Let these two IP addresses through (replace them with your own)
RewriteCond %{REMOTE_ADDR} !^86\.43\.107\.123$
RewriteCond %{REMOTE_ADDR} !^86\.43\.107\.201$
# Don't redirect requests for the maintenance page itself, or you get a redirect loop
RewriteCond %{REQUEST_URI} !/maintenance\.html$
RewriteRule .* /maintenance.html [R=302,L]

When placed in a .htaccess file, the above will redirect all clients that are not coming from 86.43.107.123 or 86.43.107.201 to the maintenance.html page.

Simple yet very handy to have!

The Microsoft Enterprise Library has a really good Logging Application Block that allows you to make your logging infrastructure configurable.

This framework makes logging as easy as typing:

Logger.Write("System rolled over!");

The framework is configured from your application's config file.
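For anything beyond that one-liner you can build a LogEntry and give it a category, priority and severity, and the configuration decides where messages in each category end up. Here is a minimal sketch (the class and the "General" category name are just examples, assuming the Logging Application Block assemblies are referenced):

using System;
using System.Diagnostics;
using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class ErrorReporter
{
    public static void Report(Exception ex)
    {
        // Build a structured entry rather than a plain string.
        LogEntry entry = new LogEntry();
        entry.Message = ex.ToString();
        entry.Categories.Add("General");       // matched against the categories defined in the config file
        entry.Priority = 1;
        entry.Severity = TraceEventType.Error; // used for filtering and formatting

        // The configured trace listeners decide where this actually goes.
        Logger.Write(entry);
    }
}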

The Microsoft Enterprise Library out of the box provides the following destinations for logging messages:

  • Email
  • Custom Locations
  • Message Queues
  • WMI Events
  • Database
  • Flat Files
  • Event Log

I took our web site's main global error handler and had it dump all exceptions to the log, and it worked great. But the question arose – where should we log? Here are some of the ideas that spring to mind on this issue.

  • The least error prone seems to be the flat file. The custom location seems the worst option, simply because we would expect our own proprietary code to have been tested in fewer real-world scenarios than Microsoft’s Enterprise Library components.
  • The idea of logging to the database is attractive from a reporting perspective, but everyone on the team felt it was risky given that the most typical scenario for an error in the application itself was a database-related error. I suppose if we had the logging database separate from our application database this would reduce the risk, but we don’t have the resources for such a move, so if our application were to kill the database it would take down any logging sources there as well.
  • Even with the flat file, it is possible that logging will fail. A simple scenario would be that the file is read-only or the ASPNET process doesn’t have permission to write to it. So we need at least two logging mechanisms to provide redundancy. The Enterprise Library allows for this – you can configure more than one listener for every log message being written.
  • Our second challenge with flat files is that they provide no notification mechanism the way email does. If there is an exception, the file has to be constantly monitored or periodically checked. Therefore, combining an email message and a flat file backing store seemed like a decent combination. If the email fails, we may not know for a day or two but at least the error will be logged. If the file fails, at least we’ll get the email.
  • The event log is not a bad option either, and it’s generally easier to monitor since most monitoring tools already handle event log monitoring. The one issue I have found is that if you use an event log source that doesn’t exist, the logging will fail without any notification to you. Your error will simply disappear and won’t be logged. You can use any existing event log source or create a new one in the registry for your custom logging needs (a small sketch of such a check follows this list). In addition, you have to make sure that the ASPNET account has permission to write to the event log.
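Here is the small sketch referred to above: it verifies that an event log source exists before the application relies on it. It uses the standard System.Diagnostics API; the source and log names are only examples, and because creating a source requires administrative rights this normally belongs in an installer or a one-off setup step rather than in the ASPNET worker process.

using System.Diagnostics;

public static class EventLogSetup
{
    // Run this from an installer or an administrator-level setup task;
    // creating a new event source requires elevated privileges.
    public static void EnsureSource(string source, string logName)
    {
        if (!EventLog.SourceExists(source))
        {
            EventLog.CreateEventSource(source, logName);
        }
    }
}

// Example: EventLogSetup.EnsureSource("MyWebApp", "Application");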

That’s where we are at so far – it’s not quite bullet-proof yet but it’s getting there. If you have any further suggestions or thoughts, leave them as a comment!

Introduction

The purpose of this article is to define a set of ideal practices for an agile software development project. I encourage you to leave comments about this article using the comments box at the bottom of this page. Please note that the practices listed are the practices that I believe are essential to a good agile development project; they do not necessarily have anything to do with being agile. I have tried to list the practices in descending order of importance.

Practice 1: Aggressive refactoring

In my opinion, refactoring is the most overlooked skill for a software developer. A well refactored application has a much higher value to the project sponsor than a poorly refactored application. The most common sign of code in need of refactoring is excessively long methods. I try to keep methods to less than 100 lines. Other common code smells are misleading or meaningless variable names, and code duplication. Static code analysis tools, such as FxCop, can provide a useful measure of code quality.
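As a small, invented illustration of the kind of refactoring meant here, a long method can be broken into short, intention-revealing methods so that the top-level method reads almost like prose (the class, names and rates below are made up for the example):

public class InvoiceCalculator
{
    public decimal Total(decimal subtotal, bool isPreferredCustomer)
    {
        // The top-level method is just a readable summary of the steps.
        decimal discounted = ApplyDiscount(subtotal, isPreferredCustomer);
        return discounted + CalculateTax(discounted);
    }

    private decimal ApplyDiscount(decimal subtotal, bool isPreferredCustomer)
    {
        // Preferred customers get a flat 5% discount in this example.
        return isPreferredCustomer ? subtotal * 0.95m : subtotal;
    }

    private decimal CalculateTax(decimal amount)
    {
        // A single, invented tax rate keeps the example self-contained.
        return amount * 0.20m;
    }
}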

Practice 2: Testing

Firstly, there should be some developer testing. All the code that is written should be testable and should have tests written for it. It is acceptable to modify your program to facilitate good testing. I believe that the traditional testing terms (unit tests, integration tests and system tests) have become outdated. Instead, I prefer the terms developer tests, functional tests and non-functional tests. Non-functional tests are things like performance testing, functional tests are tests that the customer cares about such as use case tests or business transaction tests, and developer tests are everything else that the developer needs to test to prove to herself that the code is correct.

We should automate as much testing as possible and run it as part of continuous integration. If code coverage analysis is included in the automated testing, it provides a nice indication of the health of the system at any point in time.
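As an example of the sort of developer test meant here, the snippet below exercises the invented InvoiceCalculator from the Practice 1 sketch (NUnit is used purely for illustration; any test framework would do):

using NUnit.Framework;

[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void Total_AppliesDiscountForPreferredCustomers()
    {
        var calculator = new InvoiceCalculator();

        decimal total = calculator.Total(100m, isPreferredCustomer: true);

        // 5% discount on 100 gives 95, plus 20% tax on 95 gives 114.
        Assert.AreEqual(114m, total);
    }
}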

Practice 3: Automated build and deployment

The project should have an automated build, an automated deployment and ideally automated testing. In the optimal situation, a developer can click a button and the build process will build the latest source, deploy, test and report on the result. Automating these processes not only saves time but also eliminates a huge number of bugs and time wasters.

Practice 4: Continuous integration

If a project has automated build, deployment and testing then continuous integration is really just a matter of automating the kick-off of that build, deploy, test cycle. Every check-in should result in a new build and test run on a separate build server. The results should be reported to every team member, and it should be an established team practice to fix a broken build immediately. A working build should be everyone’s top priority. People should not be made to feel bad if they break the build, as this decreases their courage.

Practice 5: Source control

A source control system should be used to store all project artifacts including: code, non-code documentation, build scripts, database schema and data scripts, and tests. Code should not be checked in until it compiles and passes its tests.

Practice 6: Communication plan

There should be a defined, direct communication channel between the developers and the customers. This can be (best to worst): on-demand face-to-face communication, daily or weekly face-to-face communication, contact phone numbers, instant messaging, an email mailing list, or an intermediary (BA or PM). These communication channels can and should be combined.

Practice 7: Task tracking

There should be a defined technique for recording and prioritizing development tasks and bugs. The system should make it possible to assign responsibility for tasks to individuals. If tasks are tracked against estimates then the estimate should be performed by the person who will do the task.

Practice 8: Self documenting code

Code comments should be subjected to the same quality requirements as the code itself. Everything possible should be done to ensure that no other technical documentation is required. When non-code technical documentation is required it should be subject to the following restrictions: referenced from the code, always up-to-date (change when the code changes), only one version per baseline, stored in source control.
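As a tiny, invented illustration of the idea: a comment that merely restates a condition can drift out of date, whereas an intention-revealing member name carries the same information and cannot quietly contradict the code.

public class Customer
{
    public int YearsActive { get; set; }
    public int OrderCount { get; set; }

    // Instead of a "check whether the customer gets a discount" comment at every call site,
    // the rule lives behind a name that explains itself wherever it is used.
    public bool IsEligibleForLoyaltyDiscount()
    {
        return YearsActive > 5 && OrderCount > 10;
    }
}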

Practice 9: Peer review

There must be some form of peer review, such as code review by fellow programmers. If the developers are subjected to performance reviews, then the peer reviews they do should be an input to that process. This helps to avoid the temptation to approve everything to avoid confrontation. Make sure that the reviews are for quality, not just for correctness.

Practice 10: Work-in-progress

A working version of the latest iteration should always be available for customer feedback. The advantage of this is that customers see very quickly when something has been developed contrary to what they had in mind. Shortening this feedback loop decreases the cost of change.

Practice 11: Feedback mechanism

There should be a defined mechanism for project team members, including the customer, to provide feedback on the project’s processes. My suggestion is to hold a short meeting at the end of each iteration.

 

Article by

Liam McLennan

As a development manager or project manager, you hear a lot of weasel words and excuses from your staff or from external consultants who are trying to hose you into believing things are better than they are. In many cases, developers use these phrases to convince even themselves that things are better than they are, resulting in chronic late delivery and poor quality.

So beware the following phrases from your teams:

  • It should work: this usually means that it doesn’t. It also means that it was probably not tested properly, as the result is currently undetermined. The word “should” should be taken out of every developer’s vocabulary – it either does or it doesn’t.
  • I just need: beware of that word “just”. It’s a belittling word meant to make things seem smaller than they are. Change the phrase “I just need to write this component” to “I need to write this component” and already the magnitude of the work involved grows. Developers tend to be chronic under-estimators, and the use of the word “just” is a sign of that mentality.
  • Almost done: this is also a weasel phrase. When a developer tells you things are “almost done”, immediately ask for the specific tasks that are left. In addition, keep in mind that projects do not progress linearly – the last 10% is always about 40-50% of the work of the total project. I’ve seen projects that are chronically late stay “almost done” for 3 months.
  • It was tested: this also usually means that it wasn’t tested properly. Ask for a test plan and the specific tests that were done. If the developer cannot produce these with sufficient evidence that PROVES it was tested, then it wasn’t tested.
  • It must be an environment/configuration/deployment problem: this may actually be true, but it usually points to a larger stability problem. If you cannot build and deploy reliably, then why would you have confidence that the code works?
  • If things go smoothly: this I hear a lot, e.g. “if we don’t hit any snags then we can be done by Friday”. Guess what – the likelihood of hitting a “snag” by next Friday is high, and given the lack of risk-based management the team probably has no mitigation or contingency strategy. Then next week you’ll hear the next phrase in our list, “Yes, it could have been done if it weren’t for that snag we had”.
  • Yes, but, as in “Yes it can be done, but”: this means it cannot be done. Tell your staff to just come clean and say, “No it cannot be done”. Another variant on this is, “Yes it could have been done, if it weren’t for marketing, requirements, technical risk, etc.” This simply shows that your developers work in an idealistic world where things never go wrong.
  • We’ll make up the time at the end: if you’re already late by the end of requirements, you’re likely going to be even later by the end if you simply keep going on the same track. In my experience, teams don’t get dramatically faster as they hit their stride. Even if there is some efficiency gain, it’s nowhere near enough to make up for lost time.
  • I’ve got it done, but I need to build a few more components for you to be able to see it: then it’s not done! Encourage show and tells, code reviews, unit tests, etc. so that code is visible as soon as possible. Use slicing models so that you can see pieces of the application in weeks, not months. In addition, code that isn’t checked in should never be counted – it means the developer cannot even build it cleanly enough to share it. It only counts if you can verify it. Ideally, it’s not “done” unless there are sufficient unit tests, a build script, and enough documentation that someone walking into the code repository could check out the project, run a build and have all the unit tests pass. Then the code is done – anything less is mythical.
  • I’ve got it done – I just need to integrate it: the word “integrate” is a big weasel word. Think of a web service that adds two numbers together. The algorithm is one line of code, but the integration work is huge. In addition, integration usually means the first time that disparate teams are bringing code together, which is always a cause for issues. Don’t under-estimate the integration, especially in today’s world of distributed computing, web services, etc.
  • It worked on my machine!: programmers use this excuse to downplay a bug. The reality is actually the opposite – it means that you have an intermittent bug, which is by far the worst kind of bug to have in your application. You want bugs to fail quickly and consistently – any variant such as “That’s weird”, “That didn’t happen yesterday” or “That must be a data problem” is an admission that you have a bug that cannot be easily duplicated.

My recommendations to reduce the amount of excuse making from your team:

  • Encourage a culture of honesty and team work: You get these excuses when developers are hiding things, and sometimes this is because you’ve created a culture that encourages hiding because you don’t want an honest answer.
  • Be ruthless with your quality and talent standards: don’t excuse poor talent, bad management or chronic late delivery. If you create a culture where talent isn’t rewarded and the bar isn’t kept high, then you’ll be excusing the team from continually striving for excellence.
  • Expect more than just code: measure performance based on estimation, quality, delivery and team work as well as pure code quality. If you have a developer who produces great code but cannot deliver on time then that’s not a great developer.
  • Increase visibility and shrink delivery cycles: if people have to show their work on a constant basis and deliver on 2-week iteration cycles, the excuses tend to go away. You either deliver or you get found out pretty quickly. Use show and tells, code reviews and continuous integration to see what people are doing on a constant basis.
  • Don’t give half credit for 50% done – it’s either done or not done: if your tasks cannot be managed this way, then you should split them up until you can work this way.
  • Establish what “Done” really means: for example, at a minimum “Done” should mean checked into source control and able to build in the current branch. If you’re doing Test Driven Development, it should also mean all tests run successfully. If you have specific performance criteria, then it’s not “Done” until the performance tests pass.
  • Use counting techniques to measure wherever possible: this is a great suggestion from McConnell’s book on estimation. The more you can count in units, the more accurate your estimation. So if you can count the number of pages, web services, objects, databases, tables, stored procedures, tasks, etc. that are left to accomplish, then you can measure them more easily than if it’s one big blob of work. If your requirements aren’t well defined enough to count objects, e.g. you don’t know how many web pages you’re building in your web site, then you’re really not in a position to estimate your ship date.
  • Don’t sucker, manipulate or bully your team: if, as a project manager, you resort to traditional management tactics such as playing games, being political, establishing a blame culture, or bullying your team, you’ll lose your credibility and simply encourage lying. A tortured prisoner will tell you anything you want to hear – the same goes for development teams.

If you have a project that operates in the open, has a culture of honesty and establishes a high performance bar, you’ll find that peer pressure, along with some overall guidance, will get risks, problems and bugs out in the open. If, when you discover these problems, people work as a team to fix them instead of blaming each other, then every problem solved becomes a victory and not a blame opportunity. You’ll get better answers and improved morale on the team as you set clearer performance expectations.

