Shane Duffy

Archive for the ‘software’ Category

The Microsoft Enterprise Library has a really good Logging Application Block that allows you to make your logging infrastructure configurable.

This framework makes logging as easy as typing:

Logger.Write("System rolled over!");

The framework is configurable from your application's config file, so you can change logging destinations without recompiling.

The Microsoft Enterprise Library out of the box provides the following destinations for logging messages:

  • Email
  • Event Log
  • Message Queues
  • WMI Events
  • Database
  • Text (flat) files
  • Custom locations

I took our web site and wired its main global error handler to dump all exceptions to the log, and it worked great. But the question arose – where should we log? Here are some of the ideas that sprang to mind on this issue.

  • The least error prone seems to be the flat file. The custom location seems the worst option, simply because our proprietary code has been exercised in fewer real-world scenarios than Microsoft’s Enterprise Library components.
  • The idea of logging to the database is attractive from a reporting perspective, but everyone on the team felt it was risky given that the most typical scenario for an error in the application itself was a database-related error. I suppose if we kept the logging database separate from our application database this would reduce the risk, but we don’t have the resources for such a move; so if our application were to kill the database, it would take down any logging sources there as well.
  • Even with the flat file, it is possible that logging will fail. A simple failure scenario would be that the file is read-only or the ASPNET process doesn’t have permission to access it. So we need at least two logging mechanisms to provide redundancy. The Enterprise Library allows for this – you can configure more than one listener for every log message being written.
  • Our second challenge with flat files is that they provide no notification mechanism the way email does. If there is an exception, the file has to be constantly monitored or periodically checked. Therefore, combining an email message with a flat-file backing store seemed like a decent combination. If the email fails, we may not know for a day or two, but at least the error will be logged. If the file fails, at least we’ll get the email.
  • The event log is not a bad option either, and it’s generally easier to monitor, as most monitoring tools already handle event log monitoring. The one issue I have found with the event log is that if you use an event log source that doesn’t exist, the logging will fail without any notification to you. Your error will simply disappear and won’t be logged. You can use any existing event log source or create a new one in the registry for your custom logging needs. In addition, you have to make sure that the ASPNET account has permission to write to the event log.
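To illustrate the redundancy idea, a config file along these lines wires two listeners to the same category, so every Logger.Write call goes to both the flat file and email. This is a trimmed sketch from memory – element names, type strings, and attributes vary between Enterprise Library versions, and the addresses are placeholders – so treat it as the shape of the config rather than something paste-ready:

```xml
<loggingConfiguration defaultCategory="General">
  <listeners>
    <!-- primary destination: flat file -->
    <add name="FlatFile"
         type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging"
         fileName="errors.log" />
    <!-- backup destination: email notification -->
    <add name="Email"
         type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.EmailTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging"
         toAddress="ops@example.com" fromAddress="app@example.com"
         smtpServer="smtp.example.com" />
  </listeners>
  <categorySources>
    <add name="General" switchValue="All">
      <listeners>
        <!-- every message written to this category hits both listeners -->
        <add name="FlatFile" />
        <add name="Email" />
      </listeners>
    </add>
  </categorySources>
</loggingConfiguration>
```

If one listener throws (the file is locked, the SMTP server is down), the other still records the message, which is exactly the redundancy described above.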

That’s where we are at so far – it’s not quite bullet-proof yet but it’s getting there. If you have any further suggestions or thoughts, leave them as a comment!


What the Heck is a Regular Expression Anyway?

I’m sure you are familiar with the use of “wildcard” characters for pattern matching. For example, if you want to find all the Microsoft Word files in a Windows directory, you search for “*.doc“, knowing that the asterisk is interpreted as a wildcard that can match any sequence of characters. Regular expressions are just an elaborate extension of this capability.

In writing programs or web pages that manipulate text, it is frequently necessary to locate strings that match complex patterns. Regular expressions were invented to describe such patterns. Thus, a regular expression is just a shorthand code for a pattern. For example, the pattern “\w+” is a concise way to say “match any non-null strings of alphanumeric characters”. The .NET framework provides a powerful class library that makes it easy to include regular expressions in your applications. With this library, you can readily search and replace text, decode complex headers, parse languages, or validate text.

Some Simple Examples

Searching for Elvis

Suppose you spend all your free time scanning documents looking for evidence that Elvis is still alive. You could search with the following regular expression:

1. elvis  →  Find elvis

This is a perfectly valid regular expression that searches for an exact sequence of characters. In .NET, you can easily set options to ignore the case of characters, so this expression will match “Elvis”, “ELVIS”, or “eLvIs”. Unfortunately, it will also match the last five letters of the word “pelvis”. We can improve the expression as follows:

2. \belvis\b  →  Find elvis as a whole word

Now things are getting a little more interesting. The “\b” is a special code that means, “match the position at the beginning or end of any word”. This expression will only match complete words spelled “elvis” with any combination of lower case or capital letters.

Suppose you want to find all lines in which the word “elvis” is followed by the word “alive.” The period or dot “.” is a special code that matches any character other than a newline. The asterisk “*” means repeat the previous term as many times as necessary to guarantee a match. Thus, “.*” means “match any number of characters other than newline”. It is now a simple matter to build an expression that means “search for the word ‘elvis’ followed on the same line by the word ‘alive’.”

3. \belvis\b.*\balive\b  →  Find text with “elvis” followed by “alive”

With just a few special characters we are beginning to build powerful regular expressions, and they are already becoming hard for us humans to read.
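These patterns behave the same way in any modern regex engine. Here is a quick sanity check – shown in Python for brevity, but the pattern strings are exactly what you would hand to .NET's Regex class with the ignore-case option:

```python
import re

# Pattern 2: "elvis" as a whole word, ignoring case.
word = re.compile(r"\belvis\b", re.IGNORECASE)
print(bool(word.search("Elvis has left the building")))  # True
print(bool(word.search("a fractured pelvis")))           # False: \b rejects the partial match

# Pattern 3: "elvis" followed on the same line by "alive".
followed = re.compile(r"\belvis\b.*\balive\b", re.IGNORECASE)
print(bool(followed.search("Elvis is reportedly alive")))  # True
print(bool(followed.search("Elvis lives")))                # False: no whole word "alive"
```

Note that the "pelvis" case fails precisely because there is no word boundary between the "p" and the "e", which is what the \b codes enforce.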

Let’s try another example.

Determining the Validity of Phone Numbers

Suppose your web page collects a customer’s seven-digit phone number and you want to verify that the phone number is in the correct format, “xxx-xxxx”, where each “x” is a digit. The following expression will search through text looking for such a string:

4. \b\d\d\d-\d\d\d\d  →  Find a seven-digit phone number

Each “\d” means “match any single digit”. The “-” has no special meaning and is interpreted literally, matching a hyphen. To avoid the annoying repetition, we can use a shorthand notation that means the same thing:

5. \b\d{3}-\d{4}  →  Find a seven-digit phone number, a better way

The “{3}” following the “\d” means “repeat the preceding element three times”.
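Searching is one thing; validating a form field is another. To check that an entire input is a phone number, rather than merely contains one, anchor the pattern to the start and end of the string with “^” and “$”. A quick illustration, again in Python for brevity (the same anchors work in .NET):

```python
import re

# The whole string must be xxx-xxxx, nothing more.
phone = re.compile(r"^\d{3}-\d{4}$")

print(bool(phone.match("555-1234")))   # True
print(bool(phone.match("55-1234")))    # False: only two leading digits
print(bool(phone.match("555-12345")))  # False: a trailing digit is left over
```

Without the anchors, an input like "junk 555-1234 junk" would pass validation because the pattern matches somewhere inside it.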


Introduction

The purpose of this article is to define a set of ideal practices for an agile software development project. I encourage you to leave comments about this article using the comments box at the bottom of this page. Please note that the practices listed are the practices that I believe are essential to a good agile development project; they do not necessarily have anything to do with being agile. I have tried to list the practices in descending order of importance.

Practice 1: Aggressive refactoring

In my opinion, refactoring is the most overlooked skill for a software developer. A well refactored application has a much higher value to the project sponsor than a poorly refactored application. The most common sign of code in need of refactoring is excessively long methods. I try to keep methods to less than 100 lines. Other common code smells are misleading or meaningless variable names, and code duplication. Static code analysis tools, such as FxCop, can provide a useful measure of code quality.

Practice 2: Testing

Firstly, there should be some developer testing. All the code that is written should be testable and should have tests written for it. It is acceptable to modify your program to facilitate good testing. I believe that the traditional testing terms (unit tests, integration tests and system tests) have become outdated. Instead, I prefer the terms developer tests, functional tests and non-functional tests. Non-functional tests are things like performance testing; functional tests are tests that the customer cares about, like use case tests or business transaction tests; and developer tests are everything else that the developer needs to test to prove to herself that the code is correct.

We should automate as much testing as possible and run it as part of continuous integration. If code coverage analysis is included in the automated testing it provides a nice indication of the health of the system at any point of time.

Practice 3: Automated build and deployment

The project should have an automated build, an automated deployment and ideally automated testing. In the optimal situation, a developer can click a button and the build process will build the latest source, deploy, test and report on the result. Automating these processes not only saves time but also eliminates a huge number of bugs and time wasters.

Practice 4: Continuous integration

If a project has automated build, deployment and testing then continuous integration is really just a matter of automating the kick-off of that build, deploy test cycle. Every checkin should result in a new build and test, on a separate build server. The results of this should be reported to every team member and it should be an established team practice to immediately fix the build. A working build should be everyone’s top priority. People should not be made to feel bad if they break the build, as this decreases their courage.

Practice 5: Source control

A source control system should be used to store all project artifacts including: code, non-code documentation, build scripts, database schema and data scripts, and tests. Code should not be checked in until it compiles and passes its tests.

Practice 6: Communication plan

There should be a defined, direct communication channel between the developers and the customers. This can be (best to worst): on-demand face-to-face communication, daily or weekly face-to-face communication, contact phone numbers, instant messaging, an email mailing list, or an intermediary (BA or PM). These communication channels can and should be combined.

Practice 7: Task tracking

There should be a defined technique for recording and prioritizing development tasks and bugs. The system should make it possible to assign responsibility for tasks to individuals. If tasks are tracked against estimates then the estimate should be performed by the person who will do the task.

Practice 8: Self documenting code

Code comments should be subjected to the same quality requirements as the code itself. Everything possible should be done to ensure that no other technical documentation is required. When non-code technical documentation is required it should be subject to the following restrictions: referenced from the code, always up-to-date (change when the code changes), only one version per baseline, stored in source control.

Practice 9: Peer review

There must be some form of peer review, such as code review of fellow programmers. If the developers are subjected to performance reviews then the peer reviews they do should be an input to that process. This helps to avoid the temptation to approve everything to avoid confrontation. Make sure that the reviews are for quality, not just for correctness.

Practice 10: Work-in-progress

A working version of the latest iteration should always be available for customer feedback. The advantage of this is that customers see very quickly when something has been developed contrary to what they had in mind. Shortening this feedback loop decreases the cost of change.

Practice 11: Feedback mechanism

There should be a defined mechanism for project team members, including the customer, to provide feedback on the project’s processes. My suggestion is to hold a short meeting at the end of each iteration.

 

Article by

Liam McLennan

With the current company and projects I’m working on, we have been using Mantis bug tracker since I set it up about two years ago. At the time of its introduction it was a big change for the team, who weren’t used to using proper developer tools to get the job done.

Here are some of the great things we have found in using Mantis:

  • Completely web based
  • Runs on LAMP(cheap servers/hosting, and we have in-house experience in managing these)
  • Configurable notifications via email on changes to tasks. We currently use this to notify the assigned user and the reporter of a task when changes are made to their task.
  • Almost simple enough to use without any training for our users.
  • Granular permissions for users and groups (this can also be a pain with lots of projects and cross-user dependencies if not set up right).

Some negatives

  • No summary emails listing the issues still open that need to be followed up.
  • Searching is not very good, and can lead to duplicate bugs being created.
  • The permission model can get very cumbersome with a lot of nested projects.

Overall a very good free product.

Our team has been playing around with trying to speed up the responsiveness of Visual Studio 2005. Here are a few things we’ve tried that have worked at least most of the time:

  • Close the Toolbox tab – Even with just the tab closed, VS2005 still seems to use resources to keep it up to date. By removing it from your workspace, the project pane and other windows appear much more responsive.
  • Turn Off Animated Windows – When VS2005 gets sluggish, expanding and hiding tabs can appear horrendously slow as the screen repaints. Turning this option off helped a little bit. Uncheck the box found under Tools >> Options >> Environment >> General >> Animate Environment
  • Turn off the VS2005 File Navigator – With ReSharper installed, you don’t need VS2005 to update the list of methods and fields at the top of the file (CTRL-F12 does this nicely). I’ve hardly even noticed the small panel that sits at the top of the file you’re editing, but apparently it takes quite a lot of effort for VS2005 to keep it up to date. Disable the Navigation Bar checkbox under Tools >> Options >> Text Editor >> All Languages >> Display.
  • Disable Startup Page – Wondered why VS2005 seemed sluggish on start up? It’s probably because it’s trying to download something from the Internet by default. Turn off the main startup page and the “live” content by unchecking the box found under Tools >> Options >> Environment >> General >> Startup > “Download content every”. I’d also change the “At Startup” option to “Show Empty Environment”.
  • Install Cool Commands – When you use Track Active Item in the Explorer pane, collapsing projects to run tests of various kinds can be hard. Cool Commands has some helpful things like Collapse All Projects so you don’t have to do it yourself when running tests.

As a development manager or project manager, you hear a lot of weasel words and excuses from your staff or from external consultants who are trying to hose you into believing things are better than they are. In many cases, developers use these phrases to convince even themselves that things are better than they are, resulting in chronic late delivery and poor quality.

So beware the following phrases from your teams:

  • It should work: this usually means that it doesn’t. It also means that it was probably not tested properly, as the result is currently undetermined. The word “should” should be taken out of every developer’s vocabulary – it either does or it doesn’t.
  • I just need: beware of that word “just”. It’s a belittling word meant to make things smaller than they are. Change the phrase “I just need to write this component” to “I need to write this component” and already the magnitude of the work involved grows. Developers tend to be chronic under-estimators, and the use of the word “just” is a sign of that mentality.
  • Almost done: this is also a weasel phrase. When a developer tells you things are “almost done”, immediately ask for the specific tasks that are left. In addition, keep in mind that projects do not progress linearly – the last 10% is always about 40-50% of the work of the total project. I’ve seen projects that are chronically late be “almost done” for 3 months.
  • It was tested: this also usually means that it wasn’t tested properly. Ask for a test plan and the specific tests that were done. If the developer cannot produce these with sufficient evidence that PROVES it was tested, then it wasn’t tested.
  • It must be an environment/configuration/deployment problem: this may be actually true, but it usually points to a larger stability problem. If you cannot build and deploy reliably then why would you have confidence that the code works?
  • If things go smoothly: this I hear a lot, e.g. “if we don’t hit any snags then we can be done by Friday”. Guess what – your likelihood of hitting a “snag” by next Friday is probably high, and given the lack of risk-based management, the team probably has no mitigation or contingency strategy. Then next week, you’ll hear the next phrase in our list: “Yes, it could have been done if it weren’t for that snag we had”.
  • Yes, but, as in “Yes it can be done, but”: this means it cannot be done. Tell your staff to just come clean and say, “No it cannot be done”. Another variant on this is, “Yes it could have been done, if it weren’t for marketing, requirements, technical risk, etc.” This simply shows that your developers work in an idealistic world where things never go wrong.
  • We’ll make up the time at the end: if you’re already late by the end of requirements, you’re likely going to be even later by the end if you simply keep going on the same track. In my experience, teams don’t get dramatically faster as they hit their stride. Even if there is some efficiency gain, it’s nowhere near enough to make up for lost time.
  • I’ve got it done, but I need to build a few more components for you to be able to see it: then it’s not done! Encourage show and tells, code reviews, unit tests, etc. so that code is visible as soon as possible. Use slicing models so that you can see pieces of the application in weeks, not months. In addition, code that isn’t checked in should never be counted – it means the author cannot even build it well enough to share it. It only counts if you can verify it. Ideally, it’s not “done” unless there are sufficient unit tests, a build script, and enough documentation that someone walking into the code repository could check out the project, run a build, and have all the unit tests pass. Then the code is done – anything less is mythical.
  • I’ve got it done – I just need to integrate it: the word “integrate” is a big weasel word. Think of a web service that adds two numbers together. The algorithm is one line of code, but the integration work is huge. In addition, integration usually means the first time that disparate teams are bringing code together, which is always cause for issues. Don’t under-estimate the integration, especially in today’s world of distributed computing, web services, etc.
  • It Worked on my Machine!: programmers use this excuse to downplay a bug. The reality is actually the opposite – it means that you have an intermittent bug which is by far the worst kind of bug to have in your application. You want bugs to fail quickly and consistently – any variant such as “That’s Weird”, “That didn’t happen yesterday”, “That must be a data problem”, etc. is admitting you have a bug that cannot be easily duplicated.

My recommendations to reduce the amount of excuse making from your team:

  • Encourage a culture of honesty and team work: You get these excuses when developers are hiding things, and sometimes this is because you’ve created a culture that encourages hiding because you don’t want an honest answer.
  • Be ruthless with your quality and talent standards: don’t excuse poor talent, bad management or chronic late delivery. If you create a culture where talent isn’t rewarded and the bar isn’t kept high, then you’ll be excusing the team from continually striving for excellence.
  • Expect more than just code: measure performance based on estimation, quality, delivery and team work as well as pure code quality. If you have a developer who produces great code but cannot deliver on time then that’s not a great developer.
  • Increase visibility and shrink delivery cycles: if you have to show your work on a constant basis and deliver on 2 week iteration cycles, your excuses tend to go away. You either deliver or you get found out pretty quickly. Use show and tells, code reviews and continuous integration to see what people are doing on a constant basis.
  • Don’t give half credit for 50% done – it’s either done or not done: if your tasks cannot be managed this way, then you should split them up until you can work this way.
  • Establish what “Done” really means: for example, at a minimum “Done” should mean checked into source code control and able to build in the current branch. If you’re doing Test Driven Development, it should also mean all tests run successfully. If you have specific performance criteria, then it’s not “Done” until the performance tests pass.
  • Use counting techniques to measure wherever possible: this is a great suggestion from McConnell’s book on estimation. The more you can count in units, the more accurate your estimation. So if you can count the number of pages, web services, objects, databases, tables, stored procedures, tasks, etc. that are left to accomplish, then you can measure them more easily than if it’s a big blob of work. If your requirements aren’t well defined enough to count objects – e.g. you don’t know how many web pages you’re building in your web site – then you’re really not in a position to estimate your ship date.
  • Don’t sucker, manipulate or bully your team: If as a project manager, you resort to traditional management tactics such as playing games, being political, establishing a blame culture, or bullying your team you’ll lose your credibility and simply encourage lying. A tortured prisoner will tell you anything you want to hear – the same goes with development teams.

If you have a project that operates in the open, has a culture of honesty and establishes a high performance bar, you’ll find that peer pressure as well as some overall guidance will get risks, problems and bugs out in the open. If when you discover these problems people work as a team to fix them instead of blaming each other then every problem solved becomes a victory and not a blame opportunity. You’ll get better answers and improved morale on the team as you set clearer performance expectations.

I’ve been using a product from Atlassian called JIRA for the past couple years to track tasks, bugs, and requests.

I highly recommend it – it’s an amazing product, and the team behind it provides great support and rapid response. They’re also highly supportive of charity and open-source organizations – if you ask nicely, they’ll give you an enterprise license to their products for free!

What makes JIRA great?

  • Completely web based
  • Highly customizable, including allowing for custom fields for each task type
  • Configurable notifications via email on changes to tasks. We currently use this to notify the assigned user and the reporter of a task when changes are made to their task.
  • Stupid simple to use – it takes about 30 seconds to create a task and about a minute to train an end user on how to use it.
  • Everything is permission based, so you can allow people to view status of issues but not allow them to change them for example.
  • Java based, runs in Tomcat on just about any kind of machine.
  • The database is flexible. We use SQL Server, but it will support MySQL, Oracle, Postgres, etc.
  • You can use it for everything throughout the development life cycle. We start using JIRA right at the requirements gathering stage to collect user stories and then translate them into functional tasks during development. Then as we transition to QA, we mark each task as resolved and QA tests and closes them. QA can then add bugs as tasks and re-assign them to the developers.
  • From a project management perspective, the tool provides a centralized view of the entire team – it’s very easy to see all the tasks assigned to a single person, group, component, project, etc. and then re-assign them with a few clicks.
  • It’s simple enough that non-technical users can benefit from it. We’re currently using it to track marketing tasks, HR requests, IT helpdesk tasks, etc. as well as software changes. It’s that flexible.

I’ve been using JIRA on multiple software projects for a few years now and would highly recommend it as a solution for managing tasks, bugs, features, etc. in a complex environment.

Even if you’re a commercial company, the price for JIRA is totally worth it – licenses are $1,200 to $4,800 US depending on the version you need.

