

A local area hospital has read great success stories about project management being used in industry, as well as recent accounts of it being applied in hospital settings. For many years, the hospital has struggled to secure patients’ information under current HIPAA guidelines, and its IT department has had many issues with new software rollouts within the hospital and at off-site emergency care and surgical centers. The CEO of the hospital has also discovered through her research that IT projects historically have a lower success rate than other types of projects.

She has tasked you with drafting your findings on whether the hospital should consider opening a “Continuous Improvement Department” to help cut costs, improve quality and better streamline its IT services.

Draft a response detailing the pros and cons of opening such a department within a medical setting that will prepare the CEO for an upcoming meeting she has with the Board of Directors.

The paper should be 4-6 pages in APA format, with an introduction and conclusion, and include a minimum of 8 peer-reviewed citations.

CHAPTER 12

System build, implementation and maintenance: change management

LEARNING OUTCOMES

After reading this chapter, you will be able to:

■ state the purpose of the build phase, and its difference from changeover and implementation;

■ specify the different types of testing required for a system;

■ select the best alternatives for changing from an old system to a new system;

■ recognise the importance of managing software, IS and organisational change associated with the introduction of a new BIS.

MANAGEMENT ISSUES

Effective systems implementation is required for a quality system to be installed with minimal disruption to the business. From a managerial perspective, this chapter addresses the following questions:

■ How should the system be tested?

■ How should data be migrated from the old system to the new system?

■ How should the changeover between old and new systems be managed?

■ How can the change to a process-oriented system be managed?

CHAPTER AT A GLANCE

MAIN TOPICS

■ System build and implementation

■ Maintenance

■ Change management

CASE STUDIES

12.1 Business-process management (BPM)

12.2 Play pick-and-mix to innovate with SOA


INTRODUCTION

System build occurs after the system has been designed. It refers to the creation of software using programming or incorporation of building blocks such as existing software components or libraries. The main concern of managers in the system build phase is that the system be adequately tested to ensure it meets the requirements and design specifications developed as part of the analysis and design phases. They will also want to closely monitor errors generated or identified in the build phase in order to control on-time delivery of the system. System implementation follows the build stage. It involves setting up the right environment in which the test and finished system can be used. Once a test version of the software has been produced, this will be tested by the users and corrections made to the software followed by further testing and fixing until the software is suitable for use throughout the company.

Maintenance deals with reviewing the IS project and recording and acting on problems with the system.

Change management in this chapter is considered at the level of software, information systems and the organisation. Software change management deals with meeting change requests or variations to requirements that arise during the systems development project from business managers, users, designers and programmers. IS change management deals with the migration from an old to a new IS system. Organisational change management deals with managing changes to organisational processes, structures and their impact on organisational staff and culture. Business process management (BPM) provides an approach to this challenge.


System build

The creation of software by programmers involving programming, building release versions of the software and testing by programmers and end-users. Writing of documentation and training may also occur at this stage.

System implementation

Involves the transition or changeover from the old system to the new and the preparation for this, such as making sure the hardware and network infrastructure for a new system are in place, testing of the system, and also human issues of how best to educate and train staff who will be using or affected by the new system.

Maintenance

This deals with reviewing the IS project and recording and acting on problems with the system.

Change management

The management of change which can be considered at the software, information system and organisational levels.

SYSTEM BUILD AND IMPLEMENTATION

System development, which includes programming and testing, is the main activity that occurs at the system build phase.

The coverage of programming in this book will necessarily be brief, since the technical details of programming are not relevant to business people. A brief coverage of the techniques used by programmers is given since a knowledge of these techniques can be helpful in managing technical staff. Business users also often become involved in end-user development, which requires an appreciation of programming principles.

Software consists of program code written by programmers that is compiled or built into files known as ‘executables’ from different modules, each with a particular function. Executables are run by users as interactive programs. You may have noticed application or executable files in directories on your hard disk with a file type of ‘.exe’, such as winword.exe for Microsoft Word, or ‘.dll’ library files.

There are a number of system development tools available to programmers and business users to help in writing software. Software development tools include:

■ Third-generation languages (3GLs) include Basic, Pascal, C, COBOL and Fortran. These involve writing programming code. Traditionally this was achieved in a text editor with limited support from other tools, since these languages date back to the 1960s. These languages are normally used to produce text-based programs rather than interactive graphical user interface programs that run under Microsoft Windows. They are, however, still used extensively in legacy systems, in which there exist millions of lines of COBOL code that must be maintained.




■ Fourth-generation languages (4GLs) were developed in response to the difficulty of using 3GLs, particularly for business users. They are intended to avoid the need for programming. However, since they often lack the flexibility required to build a complex system, they are frequently ignored.

■ Visual development tools such as Microsoft Visual Studio, Visual Basic and Visual C++ use an ‘interactive development environment’ that makes it easy to define the user interface of a product and write code to process the events generated when a user selects an option from a menu or button. They are widely used for prototyping and some tools such as Visual Basic for Applications are used by end-users for extending spreadsheet models. These tools share some similarities with 4GLs, but are not true application generators since programming is needed to make the applications function. Since they are relatively easy to use, they are frequently used by business users.

■ CASE or computer-aided software engineering tools (see Chapter 11 for coverage of CASE tools) are primarily used by professional IS developers and are intended to assist in managing the process of capturing requirements, and converting these into design and program code.

Computer-aided software engineering (CASE) tools

Primarily used by professional IS developers to assist in managing the process of capturing requirements, and converting these into design and program code.

Assessing software quality

Software metrics are used by businesses developing information systems to establish the quality of programs in an attempt to improve customer satisfaction through reducing errors by better programming and testing practices. Software or systems quality is measured according to its suitability for the job intended. This is governed by whether it can do the job required (Does it meet the business requirements?) and the number of bugs it contains (Does it work reliably?). The quality of software is dependent on two key factors:

1. the number of errors or bugs in the software;
2. the suitability of the software to its intended purpose, i.e. does it have the features identified by users which are in the requirements specification?

It follows that good-quality software must meet the needs of the business users and contain few errors. We are trying to answer questions such as:

■ Does the product work?
■ Does it crash?
■ Does the product function according to specifications?
■ Does the user interface meet product specifications and is it easy to use?
■ Are there any unexplained or undesirable side-effects to using the product which may stop other software working?

The number of errors is quite easily measured, although errors may not be apparent until they are encountered by end-users. Suitability to purpose is much more difficult to quantify, since it is dependent on a number of factors. These factors were referred to in detail earlier (in Chapters 8 and 11) which described the criteria that are relevant to deciding on a suitable information system. These quality criteria include correct functions, speed and ease of use.

Software or systems quality

Measures software quality according to its suitability for the job intended. This is governed by whether it can do the job required (Does it meet the business requirements?) and the number of bugs it contains (Does it work reliably?).

What is a bug?

Problems, errors or defects in software are collectively known as ‘bugs’, since they are often small and annoying! Software bugs are defects in a program which are caused by human error during programming or earlier in the lifecycle. They may result in major faults or may remain unidentified. A major problem in a software system can be caused by one wrong character in a program of tens of thousands of lines. So it is often the source of the problem that is small, not its consequences.

Computing history recalls that the first bug was a moth which crawled inside a valve in one of the first computers, causing it to crash! This bug was identified by Grace Hopper, the inventor of COBOL, the first commercial programming language.

Software bug

Software bugs are defects in a program which are caused by human error during programming or earlier in the lifecycle. They may result in major faults or may remain unidentified.



Software quality also involves an additional factor which is not concerned with the functionality or number of bugs in the software. Instead, it considers how well the software operates in its environment. For example, in a multitasking environment such as Microsoft Windows, it assesses how well a piece of software coexists with other programs. Are resources shared evenly? Will a crash of the software cause other software to fail also? This type of interaction testing is known as ‘behaviour testing’.

Software metrics

Software metrics have much in common with measures involved with assessing the quality of a product in other industries. For example, in engineering or construction, designers want to know how long it will take a component to fail or the number of errors in a batch of products. Most measures are defect-based, measuring the number and type of errors. The source of the error and when it was introduced into the system are also important. Some errors are the result of faulty analysis or design and many are the result of a programming error. By identifying and analysing the source of the error, improvements can be made to the relevant part of the software lifecycle. An example of a comparison of three projects in terms of errors is shown in Table 12.1. It can be seen that in Project 3 the largest share of errors is introduced during the coding (programming) stage, so corrective action is necessary here.

While the approach of many companies to testing has been that bugs are inevitable and must be tested for and removed, more enlightened companies look at the reasons for the errors and attempt to stop them being introduced by the software developers. This implies that more time should be spent on the analysis and design phases of a project. Johnston (2003) suggests that the effort between the phases of a project should be divided as shown in Table 12.2, with a large proportion of the time being spent on analysis and design.

In software code the number of errors or ‘defect density’ is measured in terms of errors per 1000 lines of code (or KLOC for short). The long-term aim of a business is to reduce the defect rate towards the elusive goal of ‘zero defects’.

Errors per KLOC is the basic defect measure used in systems development. Care must be taken when calculating defect density or productivity of programmers using KLOC, since this will vary from one programming language to another and according to the style of the programmer and the number of comment statements used. KLOC must be used consistently between programs, and this is usually achieved by only counting executable statements, not comments, or by counting function points (function point analysis is covered in Chapter 9).
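As an illustration of how this measure might be calculated (a sketch, not a formula from the text; the defect count and line total are invented), the following counts only executable statements, as recommended above:

# Illustrative sketch: computing defect density (errors per KLOC).
# Assumes defects are logged per module and that blank and comment
# lines are excluded from the line count.

def executable_lines(source: str) -> int:
    """Count non-blank, non-comment lines in a source listing."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

def defects_per_kloc(defect_count: int, line_count: int) -> float:
    """Defect density expressed as errors per 1000 lines of code."""
    return defect_count / (line_count / 1000)

# Example: 42 logged defects in a module of 12,500 executable lines
print(round(defects_per_kloc(42, 12_500), 2))  # 3.36 errors per KLOC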


Software metrics

Measures which indicate the quality of software.

Table 12.1 Comparison of the source of errors in three different software projects

Source of error    Project 1    Project 2    Project 3
Analysis           20%          30%          15%
Design             25%          40%          20%
Coding             35%          20%          45%
Testing            20%          10%          20%

Errors per KLOC

Errors per KLOC (thousand lines of code) is the basic defect measure used in systems development.



Data migration

A significant activity of the build phase is to transfer the data from the old system to the new system. Data migration is the transfer of data from the old system to the new system. When data are added to a database, this is known as ‘populating the database’. One method of transferring data is to rekey it manually into the new system. This is impractical for most systems since the volume of data is too large. Instead, special data conversion programs are written to convert the data from the data file format of the old system into the data file format of the new system. Conversion may involve changing data formats, for example a date may be converted from two digits for the year into four digits. It may also involve combining or aggregating fields or records. The conversion programs also have to be well tested because of the danger of corrupting existing data. Data migration is an extra task which needs to be remembered as part of the project manager’s project plan. During data migration data can be ‘exported’ from an old system and then ‘imported’ into a new system.

When using databases or off-the-shelf software, there are usually tools provided to make it easier to import data from other systems.
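As a minimal sketch of such a conversion step (not from the text; the file names, field names and century cut-off are illustrative assumptions), records exported from the old system could have two-digit years expanded to four digits before being imported into the new system:

# Hypothetical data conversion program used during migration: read records
# exported from the old system, convert a two-digit year to four digits,
# and write them out in the format expected by the new system's import tool.
import csv

def expand_year(date_ddmmyy: str, cutoff: int = 50) -> str:
    """Convert 'DD/MM/YY' to 'DD/MM/YYYY' (assumed cut-off: 00-49 -> 20xx)."""
    day, month, year = date_ddmmyy.split("/")
    century = "20" if int(year) < cutoff else "19"
    return f"{day}/{month}/{century}{year}"

with open("old_system_export.csv", newline="") as src, \
     open("new_system_import.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["customer_id", "name", "order_date"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "customer_id": row["customer_id"],
            "name": row["name"],
            "order_date": expand_year(row["order_date"]),  # e.g. 01/04/99 -> 01/04/1999
        })

Because such a program can silently corrupt data if it is wrong, it needs the same level of testing as the rest of the system, as noted above.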

The technical quality of software can also be assessed by measures other than the number of errors. Its complexity, which is often a function of the number of branches it contains, is commonly used.
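One rough way to approximate such a branch-based complexity measure (a sketch in the spirit of cyclomatic complexity; the counting rules here are an assumption, not a metric defined in the text) is to count the decision points in a piece of source code:

# Illustrative sketch: approximate the complexity of a Python module by
# counting its branch points (if/for/while/except/boolean operators).
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler, ast.BoolOp)

def branch_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

example = """
def classify(order_value, is_credit_customer):
    if order_value > 1000 and not is_credit_customer:
        return "refer"
    for threshold in (500, 100):
        if order_value > threshold:
            return "approve"
    return "review"
"""
print(branch_complexity(example))  # higher values suggest harder-to-test code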

Another metric, more commonly used for engineered products, is the mean time between failures. This is less appropriate to software since outright failure is rare, but small errors or bugs in the software are quite common. It is, however, used as part of outsourcing contracts or as part of the service-level agreement for network performance.

A more useful measure for software is to look at the customer satisfaction rating of the software, since its quality is dependent on many other factors such as usability and speed as well as the number of errors.

Table 12.2 Ideal proportions of time to be spent on different phases of a systems development project, focusing on details of build phase

Project activities                                       Suggested proportion
Definition, design and planning                          20%
Coding                                                   15%
Component test and early system test                     15%
Full system test, user testing and operational trials    20%
Documentation, training and implementation support       20%
Overall project management                               10%

Data migration

Data migration is the transfer of data from the old system to the new system. When data are added to a database, this is known as populating the database.

Import and export

Data can be ‘exported’ from an old system and then ‘imported’ into a new system.

Testing information systems

Testing is a vital aspect of implementation, since it will identify errors that can be fixed before the system is live. The types of tests that occur in implementation tend to be more structured than the ad hoc testing that occurs with prototyping earlier in systems development.

Note that often testing is not seen as an essential part of the lifecycle, but as a chore that must be done. If its importance is not recognised, insufficient testing will occur. Johnston (2003) refers to the ‘testing trap’, when companies spend too long writing the software without changing the overall project deadline. This results in the amount of time for testing being ‘squeezed’ until it is no longer sufficient.




During prototyping, the purpose of testing is to identify missing features or define different ways of performing functions. Testing is more structured during the implementation phase in order to identify as many bugs as possible. It has two main purposes: the first is to check that the requirements agreed earlier in the project have been implemented; the second is to identify errors or bugs. To achieve both of these objectives, testing must be conducted in a structured way by using a test specification which details tests in different areas. This avoids users performing a general usability test of the system where they only use common functions at random. While this is valid, and is necessary since it mirrors real use of the software, it does not give good coverage of all the areas of the system. Systematic tests should be performed using a test script which covers, in detail, the functions to be tested.
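For illustration only (the chapter does not prescribe a format, and the test IDs, steps and expected results below are invented), a test script for an order entry module might be recorded as a simple list of scripted steps and expected results:

# Hypothetical extract from a test script for an order entry module.
test_script = [
    {
        "id": "ORD-01",
        "function": "Create order for existing customer",
        "steps": "Select customer 1042; add two order lines; save",
        "expected": "Order saved with status 'Open'; stock levels reduced",
    },
    {
        "id": "ORD-02",
        "function": "Reject order exceeding credit limit",
        "steps": "Select customer without credit agreement; enter order over limit",
        "expected": "Order rejected with 'credit limit exceeded' message",
    },
]

for case in test_script:
    print(f"{case['id']}: {case['function']} -> expected: {case['expected']}")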

Test specification

A detailed description of the tests that will be performed to check the software works correctly.

Test plan

Plan describing the type and sequence of testing and who will conduct it.

Mini case study

Jim Goodnight: crunching the numbers
By Michael Dempsey

Addressing a recent business intelligence conference in London, Jim Goodnight’s considered responses and soft Southern drawl left the impression of a thoughtful figure who just happens to be chief executive of a $1.34bn business.

His taciturn aspect changed when the absolute quality of his company’s software was raised. ‘SAS is still quicker and better’, he states.

Despite the waves of re-labelling that have allowed his business to surf through management information systems and data warehousing to reach today’s focus on business intelligence and performance management, Mr Goodnight defines SAS in the light of a very old-fashioned customer grouse. ‘When we ship software, it’s almost bug-free. We learnt about doing that the hard way, many years ago.’

During the 1980s, SAS released some software before it was fully tested and provoked a vocal reaction from the users. ‘They let us know what was wrong with it.’ He jokes about the number of bugs that are still found in other large commercial systems and then generously redeems his competitors with the remark ‘but then we do so much more testing’.


Source: Dempsey, M. (2005) Jim Goodnight: crunching the numbers. Financial Times. 23 March. © The Financial Times Limited 2005. All Rights Reserved.

Given the variety of tests that need to be performed, large implementations will also use a test plan, a specialised project plan describing what testing will be performed when, and by whom. Testing is always a compromise between the number of tests that can be performed and the time available.

The different types of testing that occur throughout the software lifecycle should be related to the earlier stages in the lifecycle against which we are testing. This approach to development (Figure 12.1) is sometimes referred to as the ‘V-model of systems development’, for obvious reasons. The diagram shows that different types of testing are used to test different aspects of the analysis and design of the system: to test the requirements specification a user acceptance test is performed, and to test the detailed design unit testing occurs.

We will now consider in more detail the different types of testing that need to be conducted during implementation. This review is structured according to who performs the tests.



Developer tests

There are a variety of techniques that can be used for testing systems. Jones (2008) identifies 18 types of testing, of which the most commonly used are subroutine, unit, new function, regression, integration and systems testing. Many of the techniques available are not used due to lack of time, money or commitment. Some of the more common techniques are summarised here.

■ Module or unit tests. These are performed on individual modules of the system. The module is treated as a ‘black box’ (ignoring its internal method of working) as developers check that expected outputs are generated for given inputs. When you drive a car this can be thought of as black box testing – you are aware of the inputs to the car and their effect as outputs, but you will probably not have a detailed knowledge of the mechanical aspects of the car and whether they are functioning correctly. Module testing involves considering a range of inputs or test cases, as follows:

  (a) Random test data can be automatically generated by a spreadsheet for module testing.
  (b) Structured or logical test data will cover a range of values expected in normal use of the module and also values beyond designed limits to check that appropriate error messages are given. This is also known as ‘boundary value testing’ and is important, since many bugs occur because designed boundaries are crossed. This type of data is used for regression testing, explained below (a test sketch using boundary values appears after this list).
  (c) Scenario or typical test data use realistic example data, possibly from a previous system, to simulate day-to-day use of the system.

These different types of test data can also be applied to system testing.

■ Integration or module interaction testing (black box testing). Expected interactions such as messaging and data exchange between a limited number of modules are assessed. This can be performed in a structured way, using a top-down method where a module calls other module functions as stubs (partially completed functions which should return expected values) or using a bottom-up approach where a driver module is used to call complete functions.

■ New function testing. This commonly used type of testing refers to testing the operation of a new function when it is implemented, perhaps during prototyping. If testing is limited to this, problems may be missed since the introduction of the new function may cause bugs elsewhere in the system.

Figure 12.1 The V-model of systems development relating analysis and design activities to testing activities (each stage on the left is tested by the activity paired with it on the right: initiation – implementation review; requirements specification – user acceptance test; overall design – system test; detailed design – unit test; code at the base of the V)

Module or unit testing

Individual modules are tested to ensure they function correctly for given inputs.



■ System testing. When all modules have been completed and their interactions assessed for validity, links between all modules are assessed in the system test. In system testing, interactions between all relevant modules are tested systematically. System testing will highlight different errors to module testing, for example when unexpected data dependencies exist between modules as a result of poor design.

■ Database connectivity testing. This is a simple test that the connectivity between the application and the database is correct. Can a user log in to the database? Can a record be inserted, deleted or updated, i.e. are transactions executing? Can transactions be rolled back (undone) if required?

■ Database volume testing. This is linked to capacity planning of databases. Simulation tools can be used to assess how the system will react to different levels of usage anticipated from the requirements and design specifications. Methods of indexing may need to be improved or queries optimised if the software fails this test.

■ Performance testing. This will involve timing how long different functions or transactions take to occur. These delays are important, since they govern the amount of wasted time users or customers have to wait for information to be retrieved or screens refreshed. Maximum waiting times may be specified in a contract, for example.

■ Confidence test script. This is a short script which may take a few hours to run through and which tests all the main functions of the software. It should be run before all releases to users to ensure that their time is not wasted on a prototype that has major failings which mean the test will have to be aborted and a new release made.

■ Automated tests. Automated tools simulate user inputs through the mouse or keyboard and can be used to check for the correct action when a certain combination of buttons is pressed or data entered. Scripts can be set up to allow these tests to be repeated. This is particularly useful for performing regression tests.

■ Regression testing. This testing should be performed before a release to ensure that the software performance is consistent with previous test results, i.e. that the outputs produced are consistent with previous releases of the software. This is necessary, as in fixing a problem a programmer may introduce a new error that can be identified through the regression test. Regression testing is usually performed with automated tools.
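The boundary-value idea referred to in the list above can be illustrated with a small automated unit test; the discount function and its limits here are hypothetical, and the sketch simply shows expected outputs being checked for given inputs:

# Illustrative unit test sketch using Python's unittest module.
# The discount rules (10% discount above an order value of 500, negative
# values rejected) are invented; boundary values sit either side of the limits.
import unittest

def discount_rate(order_value: float) -> float:
    """Return the discount rate for an order; reject negative values."""
    if order_value < 0:
        raise ValueError("order value cannot be negative")
    return 0.10 if order_value > 500 else 0.0

class TestDiscountRate(unittest.TestCase):
    def test_typical_values(self):
        self.assertEqual(discount_rate(100), 0.0)
        self.assertEqual(discount_rate(1000), 0.10)

    def test_boundary_values(self):
        # Values either side of the designed limit of 500
        self.assertEqual(discount_rate(500), 0.0)
        self.assertEqual(discount_rate(500.01), 0.10)

    def test_invalid_input_rejected(self):
        with self.assertRaises(ValueError):
            discount_rate(-1)

if __name__ == "__main__":
    unittest.main()

Because the expected outputs are fixed in the script, rerunning it unchanged before each release also gives a simple form of regression test.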

End-user tests

The purpose of these is twofold: first, to check that the software does what is required; and second, to identify bugs, particularly those that may only be caused by novice users.

For ease of assessing the results, the users should be asked to write down for each bug or omission found:

1. module affected;
2. description of problem (any error messages to be written in full);
3. relevant data – for example, which particular customer or order record in the database caused the problem;
4. severity of problem on a three-point scale.
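A minimal way to capture those four items consistently (purely a sketch; the field names and example values are assumptions, not from the text) is a small record structure:

# Hypothetical structure for recording end-user test findings, mirroring the
# four items listed above: module, description, relevant data and severity.
from dataclasses import dataclass

@dataclass
class BugReport:
    module: str          # module affected
    description: str     # problem description, error messages in full
    relevant_data: str   # e.g. the customer or order record that caused it
    severity: int        # three-point scale: 1 = minor, 3 = severe

report = BugReport(
    module="Order entry",
    description="System crashes with 'null reference' when saving the order",
    relevant_data="Customer record 1042, order 98771",
    severity=3,
)
print(report)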

Different types of end-user tests that can be adopted include:

■ Scenario testing. In an order processing system this would involve processing example orders of different types, such as new customers, existing customers without credit and customers with a credit agreement.

■ Functional testing. Users are told to concentrate on testing particular functions or modules such as the order entry module in detail, either following a test script or working through the module systematically.

Volume testing

Testing assesses how system performance will change at different levels of usage.

Regression testing

Testing performed before a release to ensure that the software performance is consistent with previous test results, i.e. that the outputs produced are consistent with previous releases of the software.

Functional testing

Testing of particular functions or modules either following a test script or working through the module systematically.

System testing

When all modules have been completed and their interactions assessed for validity, links between all modules are assessed in the system test. In system testing, interactions between all relevant modules are tested systematically.

