About software development methods, case tools and functional design

In 1998, I started my career as a functional designer in a large company, designing all kinds of new functionality for large and complex systems. We used MS Word to document the customer requirements. We were free to decide how to do this.

A simple case tool (named SDW) was used to document the functional designs. These designs contained a description of the system functionality, as well as the data structures and their relationships from a logical point of view. The projects were based on the Waterfall method.

Nowadays, designers use all kinds of sophisticated methods and techniques to improve speed, quality, maintainability, etc. I’ve worked in many organizations using methods like RUP, Agile and Scrum, to name a few!

All kinds of expensive tools were used, such as Enterprise Architect and Rational Rose, alongside use cases to document the functional designs. But never have I experienced the level of professionalism that I did during the ‘Waterfall and MS Office’ period at my first employer.

Why? I’ve been asking myself that question.

Why is Waterfall nowadays often considered a ‘bad method’? Why do functional designers use newer design techniques like use cases? Were the classic ones not sufficient? Are these newer techniques better?

Waterfall vs. iterative methods

Waterfall is a relatively simple method for system development. It is a sequential method that starts with defining all the requirements, followed by creating the functional design, and then by development and testing. Each stage handles all requirements and has to be finished/approved before the next stage can start.

The main disadvantage is that requirements that arise at a later stage are often difficult to include in the scope. Sometimes a new requirement simply doesn’t fit the architecture. Or a solution that has already been programmed has to be reverted before the new solution can be implemented. So adding or changing requirements at a later stage can be difficult.

A solution for this is to use an iterative method, like RUP. These methods can handle ‘progressive insight’ better than Waterfall because the changes are divided into chunks, which are implemented in phases. So the requirements do not need to be defined exhaustively at the very beginning of the project, as they are in the Waterfall method.

But iterative methods are usually harder to manage, because you have to plan in parallel, manage multiple disciplines at the same time, and (re)define the stages and priorities throughout the project. Iterative methods can also lead to constantly changing requirements, because the stakeholders are involved throughout the whole process. New requirements are added regularly, or existing ones change due to ‘progressive insight’, which makes the project very hard for an inexperienced project manager to manage properly.

To sum things up…

Waterfall is relatively easy to use but is too rigid to deal with new insights. This can be an issue in fast-changing markets/environments, especially in larger projects where these risks are higher. But if the requirements at the beginning are solid and usable in the long term (compared to the project term), then Waterfall can be very suitable.

Iterative methods are harder to manage, but provide flexibility for the stakeholders to (re)define the requirements.

Office tools vs. sophisticated case tools

Case tools are very effective for performing impact analyses, drawing mock-ups, describing functionalities, and creating diagrams. But when the quality of a functional design is poor, a case tool will not improve it any more than a text editor like MS Word would.

A functional design needs to be correct, complete, and clear (CCC). As an analyst, developer, or tester, one needs to trust the information it contains. But you do not need a case tool to achieve CCC, although it might help.

So a case tool could be very beneficial when it is used properly.

But sometimes, as a designer, you are expected to be pragmatic (because of deadlines, for example). In that case you won’t be able to deliver a full-blown design. And sometimes no design is made at all, because the changes are communicated orally. This results in the existing design becoming incomplete and even more inconsistent. In such a situation, the benefit of a case tool is not significantly greater than that of a good text editor. On the other hand, an existing and complete design in a case tool makes analysis and maintenance much easier.

However, case tools will not deliver the most visually creative designs, as designers are limited in their creativity. This is because of the rules for shared use that are agreed with colleagues. The case tool itself also demands that designs are created in a certain way (according to a specific structure). But this can also be considered an advantage, because the case tool guides the designer, making it easier to create the respective design(s).

So a case tool can deliver many advantages, but it should be used properly to justify the investment (money, learning time, setup, integration, etc.).

Traditional designing vs. use cases

With traditional design techniques you describe the functionalities of a system. You describe what a system does from a system perspective, as well as the logic to achieve this.

Nowadays, use cases have become a very popular replacement for the traditional design method. There is only one problem: use cases do not describe the system functionality. They describe the interaction between the system and its actors. So you often still need to design the system logic. The ‘problem’ is that many designers aren’t aware of this and consider the use cases the final stage of designing. Use cases are very helpful for workflow- and/or scenario-driven systems, and in many cases they offer enough information for the developers. But it is not practical to use use cases for everything. Try to design Excel using use cases, for instance. That just doesn’t work.

There is also another problem: readers need to be familiar with use cases, so you have to learn how to read them. Designers often receive training in this method; the readers (customers, testers, developers) usually do not. There are also many interpretations of how to create a use case properly, for instance the ‘include’ and ‘extend’ relations between use cases, which are heavily debated (also on the Internet). This can lead to ambiguous designs. One can guess the consequences.

But under the right circumstances, it is a very powerful method for designers to describe the functionalities (from the actor’s perspective), especially when combined with a case tool.

Conclusion

All these new methods, techniques, and tools can be helpful. But this doesn’t mean the old ones aren’t sufficient anymore. Depending on the situation, the right method has to be chosen.

Some things to consider…

Define the requirements for the requirements. In other words: what do you need to gather and document the requirements in the most efficient way?

Investigate the environment:

  • Is the environment fast-changing or not?
  • Consider the level of IT expertise within the company. Do the developers/testers/stakeholders understand use cases? How experienced are the project managers?
  • Consider the company culture. How much does the business control the IT department? Does the company demand that everything is documented, or does it prefer not to document too much?
  • What are the needs? Speed over quality, or vice versa?

Then make the right decision. BUT use the advanced option properly, or otherwise keep it simple.

There is no Silver Bullet like ‘Rush code to live’

There is no Silver Bullet for software development, as you can read in Fred Brooks’ famous essay ‘No Silver Bullet’, included in later editions of his epic 1975 book “The Mythical Man-Month”. But if I had to come up with a Silver Bullet, it would be: ‘Rush code to live’. This is a theoretical model for doing software development, which follows the DevOps model. Since it is a ‘Silver Bullet’, it claims to be a revolutionary way to produce software much faster and cheaper.

SaaS needs a rush

In a Software-as-a-Service (SaaS) landscape you need to keep your customers happy with continuous, incremental improvements of the software, following an organic growth model. With big, bulky updates, users have to wait a long time before anything changes, and then the user experience changes all at once, which might alienate them. Apart from that: 1) the moving target kicks in; 2) risks grow, since reverting will be increasingly difficult; 3) it is unclear what needs to be tested; 4) and users lose confidence when you keep moving the deadline for the next version.

Organize carefully

So take Scrum, with a product owner and everything that comes with it. Now split your big software team into small expert groups of 2-3 people that own a specific part of the code. An increased feeling of ownership encourages responsible behavior. Now make sure every group has one natural leader (a more senior developer), to avoid unproductive coding style/architecture discussions. Now let them develop blocks of code that are as small as possible, but can be brought to live individually. This means programmers work for 2-3 days on a single small feature. If a feature is too big, simply cut it up into parts to make it smaller. Test the feature and bring it to live by the next day, when the code is still ‘top of mind’ and bugs can be fixed quickly and easily. It is important that the testing is done together with some other developers in demo-like sessions, to avoid blind spots and to encourage competition and enthusiasm.

Testing and deployment

When bringing the code to live, make sure this is done carefully, and only by the best developers. Do it when usage is low and do not tell customers about your ‘Ninja deployment’, since this may increase the expectations and thus the bug costs. When bringing code to live, your monitoring must be top-notch and you must be able to revert fast (only to fix bugs and redeploy). This means that if something fails, it will always be the new feature, and only a limited set of customers will notice. If you can, bring the feature live for only a subset of the customers first. Both of these measures help further reduce the risk and positively influence the trade-off (testing costs vs. bug costs) on which you base the amount of testing effort. Do not hesitate to bring code to live that does not add value yet. Even if it only provides part of a feature and must therefore be hidden, you should still deploy it.

Change your thinking

Now think outside the box and consider the truth of the following ‘Rush code to live’ principle:

Every 10 lines of code written but not brought live (into production) will cost you one extra man-hour every day.

Reasons it might be true? Only after code is running flawlessly on live will people stop discussing, changing, arguing and worrying about it. Note that I only have a gut feeling to back this claim up, and it is not based on any research whatsoever. Still, I believe the cost of unreleased code is way higher than anyone can imagine, so ‘Rush your code to live’ for fun and profit!

PHP 5.3 is now officially end-of-life (EOL)

PHP 5.3’s last regular release (5.3.27) was done in July 2013. Back then, we read the following statement in the release notes:

Please Note: This will be the last regular release of the PHP 5.3 series. All users of PHP are encouraged to upgrade to PHP 5.4 or PHP 5.5. The PHP 5.3 series will receive only security fixes for the next year. – php.net

So, back then it was not a big deal, since security fixes would be released for one more year (and a year seems very long). But last week PHP 5.3.29 was released, and since that year has passed, PHP 5.3 is now officially end-of-life (EOL). This means there will be no further updates, not even security fixes, as you can read in the release notes:

This release marks the end of life of the PHP 5.3 series. Future releases of this series are not planned. All PHP 5.3 users are encouraged to upgrade to the current stable version of PHP 5.5 or previous stable version of PHP 5.4, which are supported till at least 2016 and 2015 respectively. – php.net

Ubuntu Linux users who run the still supported (and popular) 12.04 LTS release on their web server should not worry too much: Ubuntu maintainers will backport security fixes until 2017. But running PHP 5.3 might be cumbersome, especially if you want to develop using the latest PHP frameworks or libraries. These often contain the “short array syntax” and thus require PHP version 5.4 or higher. The simplest option is to upgrade your Ubuntu 12.04 LTS to 14.04 LTS, since that comes with PHP 5.5. If you decide to stay at 12.04 for a while, you will be stuck with 5.3.10 from the repo, unless you…

Upgrade PHP from 5.3 to 5.4 in Ubuntu 12.04 LTS

This is more or less the only option you have. Since it is not officially supported, you have to install a PPA. I normally do not recommend this, since you could mess up your system badly and/or severely endanger the security of your machine. But I must admit that Ondřej Surý’s PPA is a very famous and widely used one, which makes it a bit more trustworthy. So, I will include the instructions, but you have been warned:

sudo apt-get install python-software-properties  # provides the add-apt-repository command
sudo add-apt-repository ppa:ondrej/php5-oldstable  # Ondřej Surý’s PPA with PHP 5.4 packages
sudo apt-get update  # refresh the package lists
sudo apt-get dist-upgrade  # pull in the PHP 5.4 packages from the PPA
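
After these commands have finished, you can verify that the upgrade worked and that the “short array syntax” mentioned earlier now parses. A minimal check from the command line (a sketch, assuming the php5-cli package is installed):

php -v
php -r 'print_r(["short", "array", "syntax"]);'

On PHP 5.3 the second command fails with a parse error; on PHP 5.4 or higher it simply prints the array.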

Why you should not upgrade PHP to 5.5 in Ubuntu 12.04 LTS

PHP 5.5 and its dependencies are provided by the “ppa:ondrej/php5” repo. And even though PHP 5.5 is supported longer and is more powerful than PHP 5.4, you should probably stick to PHP 5.4. The reason for this is that PHP 5.5 requires Apache 2.4, whereas Ubuntu 12.04 comes bundled with Apache 2.2 by default. This means that when you upgrade from PHP 5.3 to PHP 5.5, you also have to upgrade Apache 2.2 to Apache 2.4 (as a dependency). This could break many things, but it will (almost certainly) break your virtual host configuration. So this is something I can’t recommend unless you are really sure what you are doing. Do not upgrade PHP to version 5.5 without having a tested upgrade plan. I’m serious… be very, very careful!
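
If you want to check what you are currently running before deciding anything, the standard Apache tooling can tell you (a quick sketch, assuming the default Ubuntu apache2 packages):

sudo apache2ctl -v  # prints the Apache version (2.2.x on a stock 12.04 install)
sudo apache2ctl -S  # lists the configured virtual hosts

If this reports Apache 2.2 and a non-trivial virtual host setup, the warning above applies in full.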

Please stop using pop-up windows in web applications

In the Nineties, we were writing desktop applications with pop-ups. These desktop applications consisted of multiple windows that popped up. I was programming Delphi back in those days, when windows were called forms. The naming probably came from their main purpose: data entry into the bundled Paradox database. This is comparable to the forms that we see today on the web, and they were equally abused for other purposes.

The purpose of forms in a database-driven application is to facilitate CRUD (Create, Read, Update, Delete) operations. That is why you need the List, Add, Edit and Delete forms. Maybe the Delete form is not needed and can be just a confirmation dialog. To simplify the application flow, it used to be possible to make pop-up windows “modal”. This meant that you could not ignore them and had to click them away before you could continue. This is typically something you want when the user has to confirm an action or acknowledge a critical error.

JavaScript (like Delphi back in the day) has simplified making modal pop-ups by offering us the functions “alert()” and “confirm()”. But let’s take the example of a typical company database application. Such an application may have an overview showing a listing of customers. Maybe you can search in this list. If you click on a specific customer, you may be able to see a list of their orders. In the Delphi days, we would have a window with customers, and when you clicked on the “view orders” button, it would open up a new window with this information.

Figure 1: Example of a jQuery lightbox styled pop-up in WordPress

On the web, we first tried to copy this model by opening new browser windows in web applications. Then came the era of pop-up ads and ad blockers, and people started moving away from the multiple browser window strategy. This move was stimulated even more when browsers started having tabs. Then we saw that developers started making jQuery lightbox styled pop-ups on top of other pages. These are still used a lot, but I feel they lead to horrible user experiences wherever they are used.

In the end, most developers saw the light (fortunately). They probably realized that you do not need a stack of windows, since the web browser already allows you to go back (and forward, by opening new pages) in this stack using the back button and the links you can click. Also, the browser creators acknowledged that to keep developers using the “alert()” and “confirm()” functions, they had to make sure these pop-ups rendered in a much prettier way. Until then, they resembled the JavaScript error pop-ups from the Nineties.

So today, when I stumble across an HTML anchor tag with a TARGET attribute, I cringe. It hurts even more when I see people use jQuery lightbox styled pop-ups. Not only because they are almost always a pain to close, but also because they do not work properly on different screen sizes (like phones). However, the worst thing about this form of pop-up is the wrong expectations that people have about the underlying page. What should happen when the pop-up is closed? Should it be reloaded, so it is updated? Or can it have old data? I don’t know, can you tell me?

To add something constructive to this rant, I will also propose some new rules for pop-up lovers who have a hard time forgetting the Nineties:

  1. Everything is a page, and your application can most probably be represented by a tree (with some jumps back).
  2. Use clickable breadcrumbs to show the current path in the tree structure of your application.
  3. The back button should work everywhere and warn when needed (about reposts or expired pages).
  4. Make sure all your pages have a single, structured, short, but descriptive URL.
  5. For confirmation, rely on the JavaScript “confirm()” function.
  6. Use colored flash messages at the top of the page to show success or failure.

Are you in the business of making web applications that mainly do CRUD operations on a database? Have you still not sworn off pop-ups? Do you think I am wrong? Please use the comments to discuss.

PyCon Australia 2014: conference videos online

PyCon Australia 2014 was held last week (1st – 5th August) at the Brisbane Convention & Exhibition Centre.

PyCon Australia is the national conference for the Python Programming Community, bringing together professional, student and enthusiast developers with a love for developing with Python.

For all of you who did not go, most of the conference is available on YouTube (39 videos):

  1. Graphs, Networks and Python: The Power of Interconnection by Lachlan Blackhall
  2. PyBots! or how to learn to stop worrying and love coding by Anna Gerber
  3. Deploy to Android: Adventures of a Hobbyist by Brendan Scott
  4. How (not) to upgrade a platform by Edward Schofield
  5. Caching: A trip down the rabbit hole by Tom Eastman
  6. Verification: Truth in Statistics by Tennessee Leeuwenburg
  7. Record linkage: Join for real life by Rhydwyn Mcguire
  8. Command line programs for busy developers by Aaron Iles
  9. What is OpenStack? by Michael Still
  10. Software Component Architectures and circuits? by James Mills
  11. IPython parallel for distributed computing by Nathan Faggian
  12. A Fireside Chat with Simon Willison
  13. Accessibility: Myths and Delusions by Katie Cunningham
  14. Python For Every Child In Australia by Dr James R. Curran
  15. Lightning talks
  16. How to Read the Logs by Anita Kuno (HP)
  17. Serialization formats aren’t toys by Tom Eastman
  18. Django MiniConf: Lightning talks
  19. What in the World is Asyncio? by Josh Bartlett
  20. Try A Little Randomness by Larry Hastings
  21. Building Better Web APIs by HawkOwl
  22. devpi: Your One Stop Cheese Shop by Richard Jones
  23. Learning to program is hard, and how to fix that by Jackson Gatenby
  24. Lesser known data structures by Tim McNamara
  25. The Quest for the Pocket-Sized Python by Christopher Neugebauer
  26. Sounds good! by Sebastian Beswick
  27. Running Django on Docker: a workflow and code by Danielle Madeley
  28. Software Carpentry in Australia: current activity and future directions by Damien Irving
  29. The Curse of the Django Podcast by Elena Williams
  30. Grug make fire! Grug make wheel! by Russell Keith-Magee
  31. (Benford’s) Law and Order (Fraud) by Rhys Elsmore
  32. The Curse of the Django Podcast by Elena Williams
  33. Lightning talks
  34. The Curse of the Django Podcast by Elena Williams
  35. Seeing with Python by Mark Rees
  36. Descriptors: attribute access redefined by Fraser Tweedale
  37. How do debug tool bars for web applications work? by Graham Dumpleton
  38. Continuous Integration Testing for Your Database Migrations by Joshua Hesketh
  39. Closing

Have fun watching!