Babcock's Laws of IS Development:


1. "If you automate a mess you end up with an automatic mess."

2. "Those who don't plan to fail surely will."

3. "If it is an R & D project, plan to do it twice."

4. "Let costs remain where they are incurred."

5. "Don't encode independent information in database keys (or names, for that matter.)"

6. "Never try to swim with too much chain."

7. "Thou shalt not admit vagrant processes."

8. "Thou shalt not covet useless data."

9. "Honor thy parser for then it will be well with you and then you shall have good success."

10. "It is easier to get criticism than criteria."

If you automate a mess you end up with an automatic mess.

This gem was one of the first principles I learned in an IS career that now spans three decades. As far as I know, it is original with me. It seems that the first problem most of us encounter as we seek to apply IS automation to any process is gaining an understanding of that process. We often find that the current process is both inconsistent and incoherent. Automation applies a rigor that generally forces the existing processes to be recast in order to achieve the required coherence. Unfortunately, this step is often overlooked because it is costly or unpleasant. If we don't take the time to straighten out the existing "manual" processes prior to automating them, we will end up with a mess that is now automated. This has been the downfall of many IS projects. We rush right in with a technical solution before fixing the underlying logical/organizational problems.

The developer is in the unenviable position of being the messenger bearing bad news. Nevertheless, if he is true to his craft, he must have the fortitude to "tell it like it is." They do, on occasion, shoot the messenger. You don't really want to work for those clients anyway. If the customer's processes are logically flawed, they will not be helped by mere automation. Adding automation can actually result in performance that is WORSE than it was prior to automation.

If the customer is adamant and you don't mind the risk to your integrity and reputation, then at least work on an hourly basis. You'll come out ahead financially, albeit with ulcers.

Those who don't plan to fail surely will.

I believe that I coined this one as well. However, in all fairness I must credit my early training (both explicit and incidental) in this topic to Exxon. In the late '70s they were very high on sending their engineers to courses put on by Kepner-Tregoe that addressed this concept.

It has been my observation that most project plans are put together with a sort of linear optimism which reflects Gantt-chart thinking as each task moves to the next in stair-step order. Almost never do I see plans that actually include FAILURE as a task to be planned for. This is most often reflected in IS plans which show the stages of development neatly followed by a testing phase (almost always too short) which in turn is followed by roll out to production.

I seldom see a formally published plan that actually shows a task (or the time allotment required) for reworking a project to fix the problems encountered during testing. One exception I recall came from a fellow who asked me to review his plan and then applied this law as noted. He actually added tasks to account for failure. His plan was a success, by the way. I suppose that it is politically incorrect to suggest that our efforts are less than perfect on the first try. My experience suggests otherwise. How many programs run without error on the first compile?

On the other hand, those who diligently PLAN for failure will often be pleasantly surprised when that part of their plan is not needed. They, at least, have a bona-fide chance to come in ahead of schedule. If they DO have to make repairs, at least they have planned for it.
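Planning for failure can be as mundane as adding explicit rework and retest tasks to the schedule. A minimal sketch in Python may make the arithmetic concrete; the task names and durations below are hypothetical illustrations, not from any real plan.

```python
# A sequential project plan that includes explicit rework tasks after
# testing. Durations are hypothetical working days, for illustration.
plan = [
    ("design",               10),
    ("build",                20),
    ("test",                  5),
    ("rework after testing",  8),  # the task most plans omit
    ("retest",                3),
    ("roll out",              2),
]

total = sum(days for _, days in plan)
print(f"Planned duration: {total} working days")

# If the rework and retest turn out not to be needed, the project
# comes in 11 days early: a pleasant surprise, not a slipped date.
```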

If it is an R & D project, plan to do it twice.

What is an R & D project? Basically anything that has never been done before or which significantly depends upon component technologies with which there is little or no prior experience. This kind of endeavor typically requires that at least one version be built to completion as a prototype. Only in the rarest of cases will the prototype be satisfactory. More often than not, the first really viable version will be the second try. This law is related to law number two in the sense that most project managers don't have the fortitude to plan to fail. This is simply silly in the face of an obvious R & D project. I suppose this stems from the unwillingness of project managers/leaders to ask their bosses for the resources to "waste" on learning how to actually do some new thing. The bosses are not without blame as they often don't receive such requests with good sense either.

The sad result is that the prototype is often foisted upon the unwilling victims as the production project with assurances that all "minor" deficiencies will be fixed "in the next release." Instead, we should plan to build one to learn how to build one and then throw it away prior to building the "real" version. It may seem hard to ask for resources to build a "throwaway," but isn't it more honest? In reality this is what happens anyway. If we were honest about it in our planning, then many of the recriminations and ulcers that we get would disappear. In addition, the TRUE cost of an endeavor would be available to managers "up front" before the resources are committed. This can (we hope) result in better decision making.

Let costs remain where they are incurred.

I suppose this "law" has been forming for years but it really has stood the test of time in every industry in which I've had the pleasure of working from aviation to utilities.. It is hard to capture the idea in a brief, pithy statement so some amplification is in order. An analogy may help.

Consider that as you walk from the parking lot to your cube, there is one piece of sidewalk that is raised so that you stub your toe PAINFULLY upon it. I'll wager that the next time you come in, you studiously avoid repeating the incident. But suppose that, through some mix up in your nervous system, when you stubbed your toe, you felt the pain in your knee. Now based upon the erroneous location of the pain you conclude that the pain you are suffering on your daily walk to work must result from the jogging that you have been doing as a form of regular exercise. So you quit jogging, get fat, and die from a heart attack.

Organizations are not unlike the human body. However, the management/nervous system is much less sophisticated. It has relatively few pain receptors, and these are most concentrated in the areas of budgets, schedules, and resources (time, money, materiel, and personnel). Management has been described as "achieving the desired end(s) within the available resources." In our analogy, management directs the activities of the organization in order to "minimize pain." I realize that, at face value, this statement may lose some of you at this point, but hang in there.

What typically happens is that some problem or obstacle arises which impedes progress. This is the "pain." Instead of dealing with the cost of working through the problem, a shortcut or alternate route is taken. This alternate route has its own "pains" associated with it. These pains, however, may not be felt by the immediate group that chooses the alternative, and so no local pain registers. Some other part of the organization usually does feel it. The problem is that, now that the pain has been dislocated, management perceives the pain as originating in that other part of the organization and therefore makes ill-advised plans for remedies.

A recent case study will serve to illustrate the (proper) application of this principle. Up to now, we have been using a form of disk cloning to mass-produce workstations of a given configuration. This practice dates back to the earliest days of DOS and the availability of disk-cloning hardware/software. It is considerably faster and easier than individual installations of software using the manufacturer's installation programs. It seemed to work pretty well with DOS. The advent of more sophisticated 32-bit operating systems has made this practice increasingly questionable. Although it has worked reasonably well with OS/2, it will not with NT.

Originally, we planned to "clone" NT configurations as we have with DOS and OS/2. However, we found that Microsoft would not agree to support these configurations if they were cloned. They insist upon individual builds. There are, in fact, reasonable technical arguments for this. However, it is much more COSTLY in terms of manufacturing and "up front" configuration development. With the architecture as it stands and our intended use of it, it was tempting to ease the "pain" of manufacturing and development by cloning. However, this merely shifts that cost "downstream" to support personnel and to future rework if Microsoft's architecture takes a turn that makes their present restriction more consequential. If we took the shortcut, the costs and hence the organizational "pain" would be borne in the support area instead of in manufacturing. Management might well try to apply fixes to the support structure when, in fact, it was manufacturing that really needed the attention. Proper manufacturing would reduce the support costs. Since those in the support world rarely get the chance to voice their opinions in the early phases of a project, it would be easy for those concerned with manufacturing to try to "save a buck." Besides, manufacturing (since it is outsourced) is "real" money whereas the consumption of support staff is "funny money," right? Wrong!

Compounding this is the fact that costs that are dealt with head on are often very identifiable whereas costs that are dislocated often become obscured. It is easy for management to feel the pain of an expensive manufacturing invoice. By the time costs have been dislocated downstream to the support staff, the "pain" becomes harder to locate with the same degree of precision.

Fortunately, we made the "right" decision and have elected to adhere to the Microsoft guidelines. This will mean higher up front costs and effort but by meeting these costs "where they are incurred" we will undoubtedly avoid much more difficult "pain" in the future.

Finally, having been a manager myself for a number of years, may I say that it is simply unfair to the poor creatures to distort the little data they have to work with by obscuring and dislocating costs. Give them a break. Let costs remain where they are incurred. If you HAVE to move them, make sure that the pointers to the original "pain" are well documented and understood by all. At least this gives a manager a chance at a good decision.

Don't encode independent information in database keys (or names, for that matter).

This is really just an application of database normalization. When a database is in third normal form, the relationship between table records and their keys may be summed up by saying that the information in the record is related to the key, the whole key, and nothing but the key. I've seen numerous database designs that did not follow this dictum and they almost always ran into difficulties later on. There are broader applications outside of mere database discussion.

Entity-Relationship (ER) modeling applies to more than just database design. I find that it often arises when discussing network-naming standards. More generally, it will apply anytime naming is in view. Consider the name as the key and the thing named as the record. To be "normalized" the attributes of the thing named should be associated only with the name, the whole name, and nothing but the name.

Take the case of your checkbook. Suppose you designated check numbers ending in (or prefixed with) "1" for mortgage, "2" for utilities, "3" for auto expenses, "4" for medical bills, etc. Well, you only pay the mortgage once per month, so if you pay it with check 1 and get to check 11 before the next month, you have to "skip" it or discard it. Invariably, you end up painting yourself into some kind of corner when the underlying attribute of the entity whose key you are trying to encode changes. Or you end up having distributions of attribute occurrences which conflict with the more or less even distribution of any numeric encoding scheme. The check number should ID the check, period. It should NOT ID what the check was written for or where it was applied.

As a real-world example of the problems that can arise when this is not followed, consider the case of a participant ID "scheme" in clinical trials that is more than just a simple number. In a typical multi-center trial I've seen the participant ID assigned so that the leading digits correspond to a given center. But what happens when a participant in a long trial moves into another region and starts reporting to a different center? Now you have an anomaly that you either live with or you have to go refactor all of the participant records with a new ID.
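A minimal sketch may make the cure concrete. Here the participant ID is a pure surrogate key and the center is an ordinary attribute; the ID layout and field names are hypothetical illustrations, not from any actual trial system.

```python
# Encoded scheme: the leading digits of the ID identify the center,
# e.g. "02-0417" means "participant 417, enrolled at center 02".
# When the participant moves to center 05, the ID now lies: either
# every record must be re-keyed, or the anomaly is lived with forever.
encoded_id = "02-0417"

# Normalized alternative: the key identifies the participant, period.
# The center is a plain, updatable attribute of the record.
participant = {
    "participant_id": 417,   # stable surrogate key; encodes nothing else
    "current_center": "02",  # free to change without touching the key
}
participant["current_center"] = "05"  # the move is a one-field update
```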

The bottom line is that any time you try to encode some information into a key or name for an object that is not dependent solely and wholly upon that object but has outside dependencies, you will inevitably encounter corruption of the naming scheme when any of those outside dependencies change. It is tempting to try and encode locations, topologies, and the like into names so that the very name conveys additional meaning. Invariably, it paints the "schemers" into a corner that later proves difficult to correct. Don't do it.

Never try to swim with too much chain.

At first glance, what I'm about to describe may sound like good old-fashioned scope creep. In a way, it is. However, the genesis is much different. With conventional scope creep you typically can easily identify the source of accretions to your project. Usually your customer is simply adding to his expectations. The beast I'm going to describe is much more sinister and hard to recognize.

An organization in many ways can be thought of as a body. The body has so-called "voluntary" and involuntary systems. For example, your heartbeat is an involuntary system. We don't have to consciously command the heart to beat. Eating is a voluntary system. Breathing can be both. Similarly, the organization has formal and informal management systems. The formal management systems are easy to recognize (they are called bosses). The informal management systems generally arise from organizational needs that are not met by the formal management systems. They can be harder to recognize but can actually be more powerful (to a point) than the formal systems. They are harder to recognize because they are created on an ad hoc basis by staff trying to get something done. They won't appear on any organizational chart or in any policy manual. For the most part, the players seem to intuitively understand this and manage to discover those systems that are essential to getting what needs to be done, done.

As I said, these "involuntary" systems can be extremely powerful. If you starve your body for protein, it will begin to rob protein from muscles in order to survive. Ultimately, this will result in death if not corrected. In the same way, there are strong forces at work in any organization to meet its unmet needs. The difficult part is that these are often hard to recognize. Failure to recognize and deal with these properly can literally spell death for an endeavor.

Here's what typically happens: First, you have an organization that has a number of unmet needs. This is particularly likely in organizations that are "lean." Someone in the organization begins to address one of these needs. Someone doing something to correct one problem attracts the attention of other problem holders who then try to find ways of getting their needs met by the new initiative. To visualize how this happens, imagine a group of swimmers in a rough sea all trying to tread water. Suddenly, one swimmer begins to make progress toward the shore. Immediately, the others try to attach themselves to the strong swimmer and thus overloaded and distracted, all drown.

We have a saying down in Louisiana when some miscreant is discovered drowned in the Mississippi River: "Poor boy just tried to swim with too much chain." I've seen otherwise worthy initiatives sink rapidly out of sight when they tried to take on "too much chain." It is essential that project managers keep their focus clearly upon the "shore" and not allow other "swimmers" to jeopardize their success, no matter how worthy the plea. It is often very hard to say no. It takes maturity and discipline. Better that a few make it to shore than for all to drown.

Thou shalt not admit vagrant processes.

A vagrant is defined as one with "no visible means of support." As we grapple with providing solutions to the problems that confront us, no doubt many good suggestions will be placed upon the table. Before putting any of these into effect, it is necessary to clearly identify how the new processes will be supported. Generally, this means that a process "owner" must be clearly identified. It is unusual to find processes that don't require some "care and feeding" during their useful life. The "caretakers" must be identified.

Unfortunately, it seems more often the case that some new process/solution will be created and placed into service to meet an immediate need. With the crisis past, we forget to go back and add the infrastructure necessary to continue the process. This often results in illogical and awkward assignments of duties within an organization. One group may end up "owning" the solution, not because they are the logical place for it but merely because they were "there."

It is therefore better to require that such proposed solutions present their "means of support" before they move to any stage of actual implementation. Or, to put it another way, don't waste time and effort wrestling with a solution, no matter how attractive, that has no identified means of support. Ultimately, you will end up wasting effort and creating a larger problem. There is a real danger that if sufficient effort is expended, it will become too "embarrassing" to discontinue the unwise course and the solution/process will take on an unwarranted life of its own.

This is all somewhat general, so a couple of clear examples from experience are in order. In one company plagued with a complicated and arcane procurement process, one of the proposed new processes was an on-line catalog that would allow end users to directly view and select items for purchase. It was a wonderful idea. Unfortunately, the catalog was implemented and considerable effort was expended on the technology with little or no thought given to who would ultimately maintain the thing. When it came time to put the technology into effect, there was no catalog caretaker to be found. The effort, though considerable, was stillborn. The real and present danger is that the potential for "embarrassment" may cause management to force the caretaker role on some unwilling victim instead of "cutting their losses."

In another company, we were considering bringing in Microsoft's SMS to manage our workstation environment. This is truly a comprehensive product. However, even Microsoft points out that it requires an organization to take care of the product and administer it. It may be the greatest thing technically since sliced bread, but we should not take one step toward implementing it until the means of support has been explicitly identified. To do so would invite failure, embarrassment, recriminations, and anger. It is always so.

The proper approach is to never allow any proposal or idea that does not come with an identified and explicit means of support to remain for very long on the table of ideas. It is certainly proper to bring forth suggestions but before those suggestions are taken very far and before much thought and effort are expended on their behalf, we must insist that the "hard" questions concerning who, what, when, why, and how are answered or have a reasonable (and not just wishful thinking) expectation of being answered.

Thou shalt not covet useless data.

By definition, useless data is data for which there is no specific and defined use. One of the more common tasks in developing various information processes and systems is the creation of paper or automated data collection forms. During the design phase of these forms it is exceedingly difficult to keep squarely focused on the task at hand as those involved suggest all manner of "nice to have" items that might be solicited. Fundamentally, no data should be solicited without also specifying the destination or consumer of the data and the associated downstream process. If this requirement is kept, it will go a long way towards keeping the data entry and collection tasks under control. The architect should insist that no information be included for which there is no specifically defined consumer process.
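One simple way to enforce this is to make every field on a proposed form declare its consuming process up front, and reject any field that cannot. A minimal sketch in Python, with hypothetical field and process names:

```python
# Every form field must name the downstream process that consumes it.
# A field with no consumer is "useless data" and is rejected outright.
form_fields = {
    "patient_name":   "billing report",
    "date_of_visit":  "scheduling system",
    "blood_pressure": "clinical summary",
    "favorite_color": None,  # "nice to have" with no defined consumer
}

orphans = [field for field, consumer in form_fields.items() if not consumer]
if orphans:
    raise ValueError(f"Fields with no consuming process: {orphans}")
```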

Honor thy parser for then it will be well with you and then you shall have good success.

When you boil information processing down to its essentials, you can describe it by saying that we take data in one place and move it to another and perhaps manipulate it on the way. In over a quarter of a century of programming in numerous languages I've found many similarities and only a few salient features that put some languages at the top of my "most useful" list. All of them have assignment operators, branching operators, logical operators, math operators, etc. Only a few have robust built-in parsing capability.

Moving data to and fro is not difficult. When you have to take that data apart and put it back together in new forms, you must invariably parse it. Logic and math operations are straightforward but you can spend hours of coding to get the parsing "right" if you don't have a language tool that provides robust parsing operators. Parsing fixed format data is not difficult but as the data becomes more "conversational" in structure, parsing takes on additional challenges.
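A minimal sketch may illustrate the difference. Fixed-format data yields to simple slicing, while "conversational" data calls for real parsing machinery (here, Python's regular expressions; the record layouts are hypothetical illustrations):

```python
import re

# Fixed-format data: simple column slicing does the job.
fixed = "SMITH     JOHN      19990908"
last_name  = fixed[0:10].strip()
first_name = fixed[10:20].strip()
visit_date = fixed[20:28]

# "Conversational" data: the same facts in free-form prose. A language
# (or library) with robust parsing operators earns its keep here.
conversational = "John Smith visited on September 8, 1999."
match = re.match(r"(\w+) (\w+) visited on (\w+ \d{1,2}, \d{4})\.",
                 conversational)
if match:
    first_name, last_name, visit_date = match.groups()
    print(first_name, last_name, visit_date)
```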

Whether you want to have glitzy GUIs or wow them with your web site, you will always need to parse. Choose your weapons wisely. An appropriate choice can make you look like a productive genius.

It is easier to get criticism than criteria.

It seems to be hard to get customers to tell you what they want. Sometimes this is because they don't know what they want. Sometimes they don't have the "language" with which to communicate their needs (programmers have been known to speak in a strange tongue.) Sometimes it is just hard to get them to take the time out of their busy schedules and focus on the problem long enough to render a usable specification. However, you can be sure that when you've built whatever it is that you are building for them, they will be quick to tell you what they really wanted it to do and even quicker to point out what is just plain wrong.

If you have the stomach for it (and management that is clued-in to the approach) you can often get design criteria much faster by building a "throwaway" (the politically correct term is "prototype".) In the face of ever shortening deadlines and scarce resources, it is often hard to get "buy in" to this approach but in the hands of a seasoned developer it can actually get you where you need to go faster and with less expense.

For this to work, you must establish a minimum level of credibility with your client. This is because they have a natural fear that the prototype will be the production model. Until you demonstrate that you really will make good on your promise to build the "real one" after the prototype, you may take a lot of "heat." However, if you tell them from the start that this is the approach you will take and then follow through, subsequent projects can become much easier as a spirit of true collaboration is developed.

So, if you are having a tough time nailing down what is required, consider taking your best shot and presenting it. You will then get all of the design criteria you would ever want and then some!
