Tuesday, November 1, 2011

The End of Scheduling and Software Project Problems


Software project management requires a comprehensive problem-solving approach. Currently, project managers lack a clear conception of the problem and solution elements involved in managing software systems. There is little understanding of the mechanisms and metrics needed to identify and solve problems and to adapt to changing events. As a result, when problems occur, project managers can only request more people, time, and money, or take actions that reduce quality or functionality. While hard quantitative metrics may indicate that something is wrong, qualitative metrics are needed to understand what is wrong and why. Without problem-solving mechanisms and metrics that improve understanding, projects will continue to fall farther behind schedule. Eventually, pressure from management becomes so great that the project team implements an immature system.

Turing Award winner and author of The Mythical Man-Month, Fred Brooks, stated that software systems are disappointing because they are developed like tangible goods. He argues that the difference is that tangible goods are made of atoms while software systems are made of bits. There are also the change dynamics: Brooks argues that software objects are subject to constant change during development and after implementation, while manufactured things are rarely changed after manufacture. The problem with trying to manufacture software is that most factories perform visible, repetitious, simple, mechanical tasks. Software development, on the other hand, is mental work that involves a level of problem solving far more uncertain and far less deterministic than factory work. You cannot do time-and-motion studies on the human brain. Unlike the consistency of factory inputs, requirements, the project's primary input, are of very poor quality and likely to change. Trying to manufacture a product when the inputs are often invalid, ambiguous, complex, incomplete, inconsistent, and fuzzy is an exercise in futility. Requirements changes and other downstream changes significantly disrupt a deterministic manufacturing process, driving up changes and rework that require more effort and time.

Guru

Open-systems project managers need to constantly and holistically assess what is out of balance and which teams are functioning effectively, and then decide where and how to deploy the time, attention, resources, and energy needed to restore balance. Project managers need advanced, open-systems project tracking systems to help them rapidly analyze project situations, identify potential problems, determine the causes, and implement solutions fast enough to keep the project on track. Guru, a prototype of an open-systems project management approach, will be used as an example. Based upon models of similar performance, the prototype identifies out-of-balance conditions, analyzes the possible situations, ranks the potential problems, and identifies solutions.

Traditional project management practices control projects through task management; in Guru, it is trend analysis. The purpose of trend analysis is to quickly identify potential problems and make mid-course corrections before project system objectives are put at risk. Guru will be used to identify out-of-balance conditions, analyze the possible situations, rank the potential problems, and identify a range of solutions. The first step is to develop a model of expected performance from similar projects, particularly those used as the basis for estimating and planning. The idea is to build baseline performance models from clusters of similar systems at user-designated checkpoints.
These checkpoints are based upon systems engineering activities, which include the systems engineering lifecycle functions: requirements and design, acquisition and development, test and validation, deployment, support, training, tech support, operations and maintenance, and software removal. Each activity is decomposed into a series of control points. A user-designated checkpoint is a fraction of an activity, such as 25%, 50%, 75%, or 100% of the software functional elements, particularly the features. Remember that software functionality consists of systems, subsystems, software functions, capabilities, and features. For example, what should a project look like at 50% of features tested in this release for Team A, Team B, or the entire project?
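The checkpoint idea above can be sketched in a few lines. The activity names come from the lifecycle functions listed in the text; the data layout and the function name are illustrative assumptions, not part of Guru itself.

```python
# Sketch of user-designated checkpoints per lifecycle activity.
# Activity names are from the text; the representation is an assumption.
LIFECYCLE_ACTIVITIES = [
    "requirements and design", "acquisition and development",
    "test and validation", "deployment", "support", "training",
    "tech support", "operations and maintenance", "software removal",
]
CHECKPOINT_FRACTIONS = [0.25, 0.50, 0.75, 1.00]

def checkpoint_reached(features_done, features_total, fraction):
    """True once the given fraction of features has passed the activity."""
    return features_done / features_total >= fraction

# Example: Team A has tested 26 of 50 features in this release,
# so the 50% "test and validation" checkpoint has been reached.
print(checkpoint_reached(26, 50, 0.50))  # True
```

At each reached checkpoint, the actual measures would be compared against the baseline model built from similar projects.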
 
Feature tracking is useful because features are common to all software systems. Features facilitate communication with the business customer and improve the ability to report project status in a manner that is understandable to the customer; contrast the value of feature metrics with lines of code and function points. By the end of the systems engineering release-planning activity, the number of features planned for the release is well known. As a result, the percentage-completion and performance ratios are highly accurate because the size factors are known. Percentage-completion ratios are the features defined, allocated, designed, coded, integrated, deployed, supported, trained, operated, and maintained divided by total features. Performance ratios are expressed as numerator-denominator relationships such as effort/features, defects/features, changes/features, and many more; for example, 1000 hours/250 features, 2 defects/50 features, or 10 changes/50 features. Productivity ratios are the inverse, such as 50 features/1000 hours.
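The ratios above are simple divisions, which is exactly what makes them attractive. A minimal sketch, using the text's own example numbers:

```python
# Feature-based ratios from the text: performance ratios put cost on top
# (effort/features, defects/features), productivity ratios invert them.
def performance_ratio(numerator, features):
    return numerator / features

def productivity_ratio(features, hours):
    return features / hours

print(performance_ratio(1000, 250))  # effort/features  -> 4.0 hours per feature
print(performance_ratio(2, 50))      # defects/features -> 0.04
print(productivity_ratio(50, 1000))  # features/hours   -> 0.05
```

Because the feature count is fixed at release planning, the denominator is stable and the ratios stay comparable across checkpoints.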

Dynamic Variables   

Dynamic variables are the measures that allow a project manager to identify when the project is approaching trouble, the probable conditions causing the potential problem, and a range of solutions used by other projects in the organization to resolve it. The methodology is to select clusters of similar subsystems, develop baseline measures for each dynamic variable from the clusters, and compare the current project's measures to the baselines. Baselines of the dynamic variables include the normal range and the abnormal ranges, lower than normal and higher than normal. Dynamic variables are the measures of interest to a particular project and could include staff hours, problem reports, percent of software functions completed, and staff count, as well as features certified, supported, deployed, and satisfied. At the designated control points, Guru graphs the actual values against the baseline values. Abnormal conditions occur when actual values fall above or below the normative ranges.
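The normal/abnormal classification above can be sketched with a baseline band built from the cluster of similar subsystems. The text does not say how the normal range is computed; the mean plus or minus two standard deviations used below is an assumption for illustration.

```python
import statistics

def baseline_range(cluster_values, k=2.0):
    """Normal band from similar-subsystem values at the same checkpoint.
    The mean +/- k standard deviations band (k=2) is an assumption."""
    mean = statistics.mean(cluster_values)
    sd = statistics.stdev(cluster_values)
    return mean - k * sd, mean + k * sd

def classify(actual, low, high):
    if actual < low:
        return "lower than normal"
    if actual > high:
        return "higher than normal"
    return "normal"

# Staff hours at the 50% checkpoint on five similar subsystems:
cluster = [900, 1000, 950, 1050, 1000]
low, high = baseline_range(cluster)
print(classify(1300, low, high))  # "higher than normal" -> investigate
```

An out-of-band value does not diagnose anything by itself; it only triggers the investigation described next.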

If the current measure is within the boundaries established by the baseline, the dynamic variable is good. If the measure is out of balance or trending toward being out of balance (higher or lower than normal), the project manager initiates an investigation. During the investigation, an expert function asks a series of questions based upon the possible common causes. Special rules and knowledge gleaned from extensive research into the causes and effects of project deviations help project managers reach conclusions about the reasons for the out-of-balance measures. Drawing on problem-solving processes, lessons learned, profiles, relational clusters, models, and processes, the investigators answer the questions posed by the system for that specific out-of-balance dynamic variable: for example, Staff Hours Greater than Normal, Staff Hours Lower than Normal, Features Tested Higher than Normal, Features Tested Lower than Normal, Defects Detected Higher than Normal, Defects Detected Lower than Normal, etc.
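One plausible shape for that expert function is a lookup from each out-of-balance condition to its common-cause questions. The questions below are condensed from the staff-hours examples in the following sections; the mapping itself is a hypothetical sketch, not Guru's actual rule base.

```python
# Hypothetical sketch of the expert function: each out-of-balance
# dynamic variable maps to investigation questions about common causes.
INVESTIGATIONS = {
    "staff hours greater than normal": [
        "Were effort estimates built from models of similar past projects?",
        "Is the feature count for the release growing or changing?",
        "Has team productivity dropped below the estimating baseline?",
    ],
    "staff hours lower than normal": [
        "Are schedules still being met?",
        "Does the current staff profile match the planned profile?",
        "Is quality being sacrificed to hold the schedule?",
    ],
}

def investigate(condition):
    return INVESTIGATIONS.get(condition.lower(),
                              ["No rules for this condition."])

for question in investigate("Staff Hours Greater Than Normal"):
    print(question)
```

The investigator's answers, not the lookup itself, drive the conclusion; the system only structures the questioning.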

7.7.1.1 Staff Hours Greater Than Normal Examples

Under these circumstances, analyze the accuracy of the size and effort estimates. If the effort estimates were created by any method other than models generated from similar previous projects, estimating is probably the problem.

Measure the growth and stability of the features designated to be coded in the release. If the number of features in the release is growing significantly, take actions to stabilize the release. Are features growing or changing? If significant growth has occurred and requirements creep was not factored into the release estimates, re-estimate the release and ensure that the features are required. Are there still undefined requirements or outstanding issues that require resolution? Address the relationships between the known and unknown features. If possible, put the unstable features on hold and address the known features. Schedule the implementation of the remaining features once their requirements are known.

If the size growth is insignificant, then the problem is productivity. Calculate development-team productivity by dividing the number of features completed by the number of staff hours required to complete the work units. Compare the results to the productivity of the projects used for estimating. Compare the original staffing profile to the actual staffing profile, and match the original job-category and skill requirements with those of the project team. If the staffing profile is the problem, re-estimate the project based upon the current staffing profile. Evaluate supervisory skills, the work environment, and team effectiveness, and perform a root-cause analysis. Analyze the team's domain, application, language, and tool experience; if the problem is development-team experience, adjust the project accordingly. Analyze the management plans, since poor planning can decrease productivity. Evaluate the quality of the plans, determine whether the team follows them, and rate the quality of the various plans. Finally, consider that the productivity assumptions were too ambitious.
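The productivity check described above can be sketched directly. The numbers are illustrative assumptions, not measurements from any real project:

```python
# Productivity per the text: features completed / staff hours expended.
def productivity(features_done, staff_hours):
    return features_done / staff_hours

# Baseline from the projects used for estimating (illustrative numbers):
baseline = productivity(200, 4000)   # 0.05 features per hour
current = productivity(120, 3600)    # ~0.033 features per hour

# With insignificant size growth, a shortfall against the baseline points
# at staffing, experience, or planning rather than scope.
shortfall = (baseline - current) / baseline
print(f"{shortfall:.0%} below baseline")  # 33% below baseline
```

A shortfall this large would trigger the staffing-profile and experience comparisons described above before re-estimating.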

7.7.1.2 Staff Hours Lower Than Normal Examples
 
If staff hours are below normal and schedules are not being met, then the project does not have enough people and is staffing up too slowly. Compare the current staff profile to the original, and make adjustments that reflect the differences between the staffing assumptions and the staffing actuals.

If staff hours are below normal and the team continues to meet schedules, the problem context is smaller than expected: the problem is simpler than expected, the system is easier or smaller than expected, or the teams are better than expected.

However, it could also mean that the development team lacks a complete understanding of the problem and requirements, or that they are maintaining schedules at the expense of quality. Determine whether the team and the user have a clear understanding of the problem and the requirements. Ensure that the team is not trying to maintain schedules at the expense of quality; they might be neglecting defect prevention, detection, and correction activities. Make sure the team is effectively performing inspections, code reading, walk-throughs, testing, and other feedback-control methods.

Joint Optimization  

Joint optimization directs time, effort, and resources to the most important, common, high-usage, high-priority software functional elements (systems, subsystems, software functions, capabilities, and features). System effectiveness must come first; the marginal, probably unused, lowest-priority software functional elements must compete for the project's resources within its budget and scheduling constraints. Under this evolutionary development approach, user-certified software functional elements are delivered in five stages of software complexity: the basic or structural, the mandatory, the mature, the elegant, and the sophisticated. With joint optimization, the project outcome shifts from a system that is far over budget and behind schedule, of very poor quality, or in need of extensive rework, to a project that is mature, on time and within budget, and of user-certified quality, sans the often unused and unneeded bells-and-whistles features.

Joint optimization is possible because not all requirements are equal. A Standish Group study of 100 companies found that 45% of application features are never used, 19% rarely used, 16% sometimes used, 13% often used, and 7% always used. Joint optimization prioritizes the major software functional elements so that the bulk of the project's resources go to the most highly prized ones. The more marginal, lesser-needed functional elements must compete with project efficiency in terms of schedules and costs. As a result, scarce resources no longer automatically flow to the development of unused or unneeded functionality that adds disproportionately to project costs, schedules, and complexity. For example, a system architecture designed around unused or rarely used functionality also increases the maintenance and operational costs of the system.
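Applying the Standish usage figures from the text as a funding filter makes the argument concrete. Where to draw the cut line between tiers is a project decision; the line chosen below is illustrative only.

```python
# Standish Group usage percentages quoted in the text.
usage = {"always": 7, "often": 13, "sometimes": 16, "rarely": 19, "never": 45}

# Joint optimization funds the most-used tiers first. The cut line
# below (drop "rarely" and "never") is an illustrative assumption.
funded = ["always", "often", "sometimes"]
funded_share = sum(usage[tier] for tier in funded)
deferred_share = sum(v for tier, v in usage.items() if tier not in funded)

print(funded_share, deferred_share)  # 36 64
```

Even this generous cut line defers nearly two-thirds of the feature set, which is where the schedule and budget room in the following paragraphs comes from.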

Without the marginal requirements, highly robust, quality systems could be implemented in less than half the time of monolithic projects. Concurrent software engineering requires that all the systems engineering lifecycle functions be released concurrently. The top-tier software functional elements will receive the best quality and the best system support in terms of deployment, training, tech support, operations, and maintenance, improving systems efficiency. No longer will software be considered too difficult and expensive to set up, support, document, deploy, train on, operate, and maintain because the project ran out of time and money developing excessive functionality.

Gone are the massive wastes from excessive functionality that goes unused, sometimes as much as 45% of the feature set. Gone are the projects that seem to go on forever, amassing tremendous schedule and budget overruns because no distinction was made between software functional elements that were important and those that were not. Gone are the additional resources consumed by unnecessary increases in system size and the accompanying growth in project complexity, difficulty, time, cost, and effort. Gone are the unused, rejected, cancelled, or low-value systems that did not achieve the desired results, because each release will be certified before another is formally started; the horror of implementing a system that does not work is avoided. Gone are the project cancellations with nothing to show for the expended time, cost, and effort. Gone are the costs of operational software systems that are inflexible, difficult to use, expensive to support, hard to deploy, and very costly to maintain, because the software systems and support systems are released and delivered together.

Finally, gone is the schedule and budget waste caused by complex and difficult software designs. Organic software objects are many times less complex and difficult to deliver than machine objects, because machine objects group similar requirements from different, dissimilar, and distinct sources into centralized designs that quickly turn into spaghetti code. Centralized designs that serve too many constituents are difficult to visualize, design, code, and test, as well as costly to maintain and error-prone. The lack of visibility, conformity, and changeability increases problems and extends budgets and schedules.



