A-level Computing/WJEC (Eduqas)/Component 1/Systems analysis




Waterfall Approach

The waterfall approach takes a linear path through software development, producing 'deliverables' at the end of every stage. The approach gets its name because development flows like a natural waterfall: one stage must be completed before you can move on to the next, and you cannot move back up a stage - you must start again from the top. There is a clear emphasis on documentation, as every stage must produce its deliverables.


Advantages

  • The deliverables can be shown to clients to inform them of progress on the project.
  • A sense of discipline is maintained throughout due to the deadlines for each stage.
  • Requirements must be considered before work begins.


Disadvantages

  • Projects take a long time to deliver using this approach.
  • Loss of flexibility - you can't traverse the stages as you see fit; there is a rigid order to follow.
  • The project cannot change down the line, even when new innovations come to market.
  • Any mistakes or overlooked issues mean the project must be completely restarted.
  • Requirements are almost impossible to grasp fully before development begins.


Agile Approach

The agile approach puts more control in the hands of the developers, rather than focusing on the client. The developers are trusted to find a solution to the problem and code it, after being given the resources to do so - typically a fairly powerful PC with an IDE such as Visual Studio installed. Developers are usually grouped into teams and given a part of the solution to implement, and these teams meet with the client to discuss their requirements in detail.


Advantages

  • The agile approach accepts that you can't know all the requirements at the start of the project.
  • The project is very adaptable, since the plan can change.


Disadvantages

  • Pair programming is disliked by many developers.
  • Customers who don't understand this approach will not like the lack of documentation.
  • Working in sprints can encourage rushed, poorly structured ('spaghetti') code.


The agile manifesto values "individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation and responding to change over following a plan."


Extreme programming (XP): developers must communicate with each other, with other teams and with the clients. The simplest solution should be adopted, with feedback provided by others, for example through pair programming.

Scrum: regular prototypes are created as quickly as possible, each produced in a set period of time called a 'sprint'. Since people work in teams on multiple areas, meetings called 'scrums' are held to discuss progress and offer advice.

Software Development Life-Cycle


People Involved

Stakeholders: people who hold an interest in the new system - for example, the developers being paid to build it or the customer paying for it.

Users: the people who will use the new system, for example people using a shopping website.

Programmers/developers: the people who will build the system.

Project Managers: coordinate the project's completion by assigning developers to different parts of the project and monitoring their progress. Developers can be moved around based on their skills, so that people with the most relevant skill-set work on each area.

System analysts: find out what the customer wants from the new system. This is done using various fact-finding methods, since most customers don't know exactly what they want and/or don't give enough information about what data the existing system already contains.

Problem Definition

The customer and development team will meet and discuss the scope of the project - what needs to be done and what the limitations of the current system are. Customers may have an unrealistic view of what can be accomplished and what data the new system must contain, which is why the scope of the problem is agreed at this stage. The problem definition may be to design a new system because the old one no longer meets the business's goals, or to expand the system with a new website or mobile application.

Feasibility Study

A feasibility study is carried out to see whether a solution is achievable given a development company's limited resources - money, time and technical ability - as well as political and legal constraints and the company's reputation.


Financial Feasibility

The company taking on the project has a limited amount of money, since it needs to hire developers and premises, which bring outgoing costs: software licences such as Visual Studio, computers, developers' pay and building costs (electricity, gas, rent, etc.). The company will also want to make some profit, which may or may not be shared with the developers.

Time Limits

There are only so many hours in a day - developers typically work 9AM to 5PM, so only those hours can be used for the project. The project manager must estimate the time needed to complete the project and then decide whether to go ahead: late projects face repercussions and possibly legal action. The customer will want the project by a deadline, which should be checked against the estimated completion time to judge time feasibility.


Technical Feasibility

Developers aren't gods: they can only code in so many programming languages, have only read some of the documentation and are not familiar with every system implementation. Using a skills audit, a company can see where a developer's skills lie. Another problem is that a customer could request something that isn't currently technologically feasible, e.g. thin VR (virtual reality) glasses.


Political Feasibility

Some systems cannot go ahead due to a hostile political landscape - for example systems involving moral choices, health data or product testing. Building a politically unacceptable system will damage the company's reputation.


Legal Feasibility

A proposed system may break laws in its country of origin, in which case it cannot be considered legally feasible. Judging this can be difficult: take fair use law and the sharing of files or production of commentary videos, which can be incorrectly flagged by automated systems. YouTube, for example, has faced backlash over its Content ID system incorrectly flagging videos.


Analysis

Analysis is the process of collecting details on what the new system will need to be able to do, through various fact-finding methods such as interviews, questionnaires, document collection and observations.


Questionnaires

Questionnaires, also known as surveys, contain a set of questions relevant to the system. They are useful when staff are spread over a wide geographical area or when there are many of them.


Advantages

  • Relatively cheap to produce for a large number of people.
  • Can gather a large range of viewpoints.
  • Can gather a large amount of (potentially) very rich detail.
  • Can be distributed worldwide across many company locations.
  • Can be completed online, so results are available very quickly.


Disadvantages

  • Questionnaires have to be designed by an expert, or the information gathered could be unusable.
  • People are 'too busy' and may not complete them.
  • People may not give correct answers.


Interviews

Interviews are a one-on-one discussion between a stakeholder and the system analyst. They are suitable when the analysts need a lot of information from a small number of people.


Advantages

  • Can gather a large amount of detailed information.
  • Can make judgements on the validity of information from personal contact and body language.
  • The interviewee is a stakeholder, so will be co-operative in answering the questions well.
  • Follow-up questions can be asked, unlike with other methods of analysis.


Disadvantages

  • Time-consuming and expensive to carry out.
  • Has to be carried out by a trained interviewer, or use closed questions written by experts.
  • A large amount and wide variety of information is difficult to analyse.

Document Collection

Document collection involves gathering a large range of company files - for example invoices, taxable receipts, employee records and company spreadsheets. This is suitable for investigating current data storage requirements and data flow.


Advantages

  • The team can see how the system should currently be operating.
  • An inexpensive method of gathering lots of information fairly quickly.
  • Can identify the storage requirements of the system.


Disadvantages

  • Staff may not be following the procedures listed in the documentation and may use the system in their own way.
  • Documents may be highly confidential and not available to the system analyst.
  • Documentation may be out of date, not reflecting changes made to the system.


Observation

An observation involves watching someone go about their day-to-day work, commonly referred to as job-shadowing. This is suitable for gathering information first-hand.


Advantages

  • Can see details missed out in the problem definition.
  • Can verify information found by other methods.
  • Can actually see what is happening, with no reliance on other people reporting what they think is happening.


Disadvantages

  • Very time-consuming and expensive to carry out.
  • Staff may feel threatened being watched, so may act differently from the way they act day-to-day.
  • There is a cost to sending analysts across the country.


Requirements

The requirements are specific goals the system should meet in order to be delivered to the client. This is important legally, so the company can prove it provided what was asked for, and it also keeps the project within scope. There are three areas, all of which must be measurable for legal and scope reasons: interface, functional and performance.


Interface Requirements

An interface requirement is a measurable standard for how the system will be interacted with. Take the video streaming site Twitch, for instance: interaction takes place when the user clicks a mouse button to load a stream. The specification needs a measurable standard, such as "the website must be usable by any computer user with a mouse, and by mobile users with a touchscreen press." We are not interested in how many buttons a user's gaming mouse has, only that a left click can be used to interact with the site - if it can, the interface requirement is met.


Functional Requirements

Functional requirements detail what features a system should have. Sticking with the Twitch example, some functional requirements could be "a subscription system where users pay a monthly fee and receive perks" or simply "videos must load the applicable stream when interacted with". Either could be implemented in a number of ways, but simply having the functionality means the requirement is met.


Performance Requirements

Performance requirements cover how quickly the system should respond when used. Twitch is fairly snappy on a good broadband connection; perhaps it had a performance requirement of "load a video stream within 5 seconds." This is measurable: if the stream does load within that time, the requirement is met.
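A requirement like "load within 5 seconds" is only useful because it can be checked automatically. A minimal Python sketch of such a check (the `load_stream` function and the limit are hypothetical, standing in for real stream-loading code):

```python
import time

# Hypothetical performance requirement: a stream must load within 5 seconds.
LOAD_TIME_LIMIT = 5.0

def load_stream():
    """Stand-in for the real stream-loading code."""
    time.sleep(0.1)  # simulate a small amount of work
    return "stream ready"

start = time.monotonic()
result = load_stream()
elapsed = time.monotonic() - start

print(f"Loaded in {elapsed:.2f}s")
print("Requirement met" if elapsed <= LOAD_TIME_LIMIT else "Requirement NOT met")
```

In practice the timed section would wrap a real network request rather than a simulated delay.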

Requirements Specification

The requirements specification, often just called the 'spec', outlines the production of the new system and what it will entail. The contents differ from company to company, but each company sticks closely to its own specification template. This may include: purpose, scope, functional/interface/performance requirements, functions of the product, constraints and issues. An example template can be viewed at the University of Illinois.

System Design

The design describes what the system will look like and how data will be input into, and processed by, the system. The final design can only be created once the requirements specification is complete. The design should take into account what hardware/software is available (e.g. for a website or a game), the data structure design, input (how data will go into the system) and output (how data will be presented when read out of the system).

Data Structure Design

The data structure design shows diagrammatically how data will be represented in the system. All of the data gathered during the analysis phase will be organised into fields and placed into a data dictionary.

Example Data Dictionary

  Name        Type    Size           Description                    Example
  First Name  Text    64 characters  The first name of the pupil.   Bob
  Last Name   Text    64 characters  The last name of the pupil.    Ross
  Age         Number  3 characters   The age of the pupil.          13
  Grade       Text    2 characters   The grade the pupil achieved.  A*

After the data dictionary has been constructed, the method of storage is chosen, most commonly some form of database.
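The data dictionary above can also be expressed directly in code, so records can be validated against it before being stored. A Python sketch (the field rules come from the example table; the `validate` helper is illustrative, not part of any standard library):

```python
# The example data dictionary as a Python structure.
data_dictionary = {
    "First Name": {"type": str, "size": 64, "description": "The first name of the pupil."},
    "Last Name":  {"type": str, "size": 64, "description": "The last name of the pupil."},
    "Age":        {"type": int, "size": 3,  "description": "The age of the pupil."},
    "Grade":      {"type": str, "size": 2,  "description": "The grade the pupil achieved."},
}

def validate(record):
    """Check a record against the data dictionary's type and size rules."""
    for field, rules in data_dictionary.items():
        value = record[field]
        if not isinstance(value, rules["type"]):
            return False
        if len(str(value)) > rules["size"]:
            return False
    return True

pupil = {"First Name": "Bob", "Last Name": "Ross", "Age": 13, "Grade": "A*"}
print(validate(pupil))  # a valid record passes every rule
```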


Analysing a Problem

We can solve a problem using many techniques, but you need to know about two of them: abstraction and decomposition.


Abstraction

Abstraction is hiding the detail of a process behind a name. You can understand it in a programming sense: there are many ways to solve a problem in your code, and once you've solved it, you may put the solution in a function so you can call that solution whenever you want, without worrying about how it works inside. That is abstraction of a problem.
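This idea can be sketched in Python: the loop that sums a list of prices is solved once, then hidden behind the name `total_price` (a hypothetical example, with prices in pence to avoid floating-point noise):

```python
# Before abstraction: the steps for working out a total price are
# written out in full each time they are needed.
prices = [250, 120, 399]  # prices in pence
total = 0
for price in prices:
    total = total + price

# After abstraction: the same steps are hidden behind a name, so callers
# only need to know *what* it does, not *how* it does it.
def total_price(prices):
    total = 0
    for price in prices:
        total = total + price
    return total

print(total_price([250, 120, 399]))  # prints 769
```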



Decomposition

Decomposition is breaking down a problem into much smaller, achievable parts. A good example of decomposition is in object-oriented programming, where a system is broken down into various different objects, each responsible for part of the problem.
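As a sketch of decomposition, the hypothetical problem "produce a pupil's report line" can be broken into three smaller functions, each solvable and testable on its own (the names and grade boundaries are invented for illustration):

```python
# Sub-problem 1: work out the mean mark.
def average(marks):
    return sum(marks) / len(marks)

# Sub-problem 2: turn a mean mark into a grade (illustrative boundaries).
def grade(mean):
    if mean >= 80:
        return "A"
    elif mean >= 60:
        return "B"
    return "C"

# The original problem, solved by combining the smaller parts.
def report_line(name, marks):
    mean = average(marks)
    return f"{name}: {mean:.1f} ({grade(mean)})"

print(report_line("Bob", [82, 78, 85]))  # prints Bob: 81.7 (A)
```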

Constructing System Diagrams

To understand the data flow in a system, it is usually best to construct a diagrammatic representation of the system to refer back to. There are three diagrams you need to know: entity relationship diagrams, flowcharts and data flow diagrams.

Entity Relationship Diagrams

Entity relationship diagrams (ERDs) are used to represent databases, showing which tables relate to each other and in what way. There are three types of relationship: one-to-one, one-to-many and many-to-many. Each link represents a relationship between two entities (database tables).

Data Flow Diagrams

Data flow diagrams (DFDs) show how information is processed in a system. A series of processes is carried out by the system as a result of interaction from an 'external entity' (anyone interacting with the system). A process carries out an action in the system; when it runs, it may need data, which can be obtained from a 'data store'. A data store is anything that stores data - usually a database. Take a cinema, for example: the employees interact with the system as external entities, the processes book tickets, and the data store records who has bought which ticket, their payment details and which screen they're watching the film in.


This section will be completed soon.


Testing

Testing is always crucial before delivering a project to a client. It ensures the system works as it should and is free of errors, which is critical in a live business environment. Developers need other people to test the project because every computer differs, so the system may not work properly on other machines.

White Box

White box testing is carried out by the original developer, as well as other developers in-house. The code is entirely visible to anyone testing - hence the 'white' in the name, as the code is clear to see. In white box testing, every path that could possibly be encountered is run. A simple IF this THEN do that ELSE do this has two paths: one where the condition is met and another where it is not.
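The two-path IF/THEN/ELSE described above can be exercised directly. A minimal Python sketch (the `classify` function is hypothetical) where one test input drives each path:

```python
def classify(age):
    if age >= 18:        # path 1: condition is met
        return "adult"
    else:                # path 2: condition is not met
        return "minor"

# One test per path, so every branch of the code is run at least once.
assert classify(21) == "adult"   # exercises path 1
assert classify(12) == "minor"   # exercises path 2
print("both paths tested")
```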

Black Box

Black box testing is outsourced to other companies whose job it is to test programs, or done by others without any knowledge of the code. The 'black' in the name comes from the fact that the code is hidden from view. Every element of the system is tested to see whether each input, correct or incorrect, produces the proper output.


Alpha Testing

Alpha testing is carried out while a system is still in development and not close to completion, lacking much of its functionality or not working properly. These builds are never given to customers, since they would assume that is what will be delivered. This type of testing is carried out in-house by company employees to test the functionality of the system.


Beta Testing

Beta testing is carried out when a system is very close to completion and has all of its functionality, after alpha testing. The system is released to a limited audience outside the company and their comments are noted. There will generally be a few minor issues with the beta system.

Unit Testing

Unit testing is entirely automated testing run on a regular schedule, where test inputs are fed into parts of the system to check that the expected output is produced. Since code is developed separately and then inserted into the main system, it is quite likely that newly inserted code will break some older code, which is why unit tests are carried out. The developer writes the tests that must be passed before code changes are accepted into the main system.
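A minimal sketch of what such an automated test might look like, using Python's standard `unittest` module (the VAT function is a hypothetical unit under test):

```python
import unittest

def vat_inclusive(price, rate=0.20):
    """Hypothetical unit under test: add VAT to a price, rounded to 2 d.p."""
    return round(price * (1 + rate), 2)

class TestVatInclusive(unittest.TestCase):
    # Each test feeds a known input into the unit and checks the output;
    # a scheduled test runner would execute these after every code change.
    def test_standard_rate(self):
        self.assertEqual(vat_inclusive(100), 120.0)

    def test_zero_price(self):
        self.assertEqual(vat_inclusive(0), 0.0)

if __name__ == "__main__":
    unittest.main(argv=["unit-tests"], exit=False)
```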

End User

End user testing is done by the original customer that requested the system. They use real live data to test the system and the requirements specification is checked so both parties are happy that they've met their side of the agreement. The customer will have a complete system and the development company will be paid.


Acceptance Testing

Acceptance testing is done by the customer or end user to prove that the system works correctly.


Changeover

Direct Changeover

Direct changeover, or 'Big Bang' changeover, is when the old system is suddenly replaced by the new system. It can be used when a failure would not be catastrophic to the business, but not when a failure would cost lives or a lot of money. It is cheap to implement, as there are no extra staffing costs, but if the system fails the business has no system at all, which could be dangerous or costly.

Phased Changeover

Phased changeover is when the system is introduced part by part, with a new piece of functionality implemented at each phase. This is suitable when there are multiple departments in the business. If the new system fails, staff can revert to the old system. Problems can be fixed quickly, as there will be more experts on hand, and problems found in one phase can be fixed before the next part of the system is implemented. However, this method may cause problems during the changeover period, as staff will need to use two different systems and communicate between them.

Pilot Changeover

Pilot changeover is when the system is implemented in a single part of the business - for example a single branch or office - making it suitable for businesses with multiple branches. If the new system fails in that office, all the other offices continue to function, so failure is not catastrophic. Problems identified in the pilot can be fixed before the roll-out across all branches/offices. However, this method can cause problems, as staff will need to use two different systems and communicate between them. For example, it could be trialled in a single supermarket branch.

Parallel Changeover

Parallel changeover is when both systems run together for a set period of time. This is the safest of the four changeover methods: if the new system fails, the business can still function with the old one. It can be expensive, as it may require extra staff to enter data into both systems, or overtime for current staff to operate both at once. It can also cause confusion for both the customers and the staff of the business. This method suits critical systems, such as those used in banking establishments.


Documentation

Documentation is produced for internal use in the creation of the system, as well as for the users of the system, so they know how it works and how to make use of it.


User Documentation

User documentation helps the user navigate the user interface (UI) of the system. It shows how to achieve many common tasks, documents what each button does and explains what each element of the system means. It must be written in layman's terms so the average user can understand it, and it should cover every task the system can perform.


Maintenance Documentation

Maintenance documentation must be produced because, inevitably, the system will need changes as the business evolves and society adopts new design languages. The system will need to be understood by a system administrator, who will be tech-literate, and this documentation is aimed at them. If the system needs to be moved entirely to another data centre, the installation section of the maintenance documentation will be relevant, documenting how to set up the database, install a web server like nginx/apache, and so on. Other sections could include: data structures, algorithm flowcharts, the data dictionary, internal documentation such as design documentation, and minimum system requirements, e.g. RAM, CPU power and drive space required.


Maintenance

There are various types of maintenance that will be required over the lifetime of a system.

Corrective Maintenance

This is required when an error is discovered while the system is in use. An example of corrective maintenance is when a bug is found - for example an incorrect calculation - and the calculation is changed to produce the correct result.

Adaptive Maintenance

This is required when the system must be altered to adapt to a new law or environment. Examples of adaptive maintenance include altering a program to run on a new operating system - say, a desktop application running on Windows being adapted to run on a mobile phone - or responding to a change in the law, e.g. an increase in the VAT rate from 20% to 25%.

Perfective Maintenance

This is required when the performance of a system has to be enhanced. An example of perfective maintenance is a search algorithm being amended to return results more quickly.


This section will be completed soon.

Backup and Recovery

Backup is the process of copying files from one storage area to another, separate storage area. There are two main strategies for backing up and recovering data, the first being:

  1. Copy the data onto a portable medium e.g. USB flash drive or external hard disc drive.
  2. This should be done at a regular interval e.g. weekly or monthly intervals.
  3. The medium should be stored in another location in a fire-proof/waterproof safe.
  4. If there was a disaster the data could be copied back onto a new hard disc drive/SSD.

The second makes use of "cloud storage":

  1. Data should be uploaded to a third party.
  2. This should be done at a regular interval e.g. weekly or monthly interval.
  3. The data is physically stored at the third party's site (their data center).
  4. If there was a disaster the data could be copied back onto a new hard disc drive/SSD.
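The first strategy above can be sketched in a few lines of Python; the folder names are invented, and temporary directories stand in for the real live data and the separate backup medium:

```python
import shutil
import tempfile
from pathlib import Path

# Stand-ins for the live data folder and the separate backup medium.
live = Path(tempfile.mkdtemp(prefix="live_"))
backup = Path(tempfile.mkdtemp(prefix="backup_")) / "weekly"

# Some example live data.
(live / "accounts.csv").write_text("invoice,amount\n1,9.99\n")

# Step 1: copy the data onto the separate medium (run weekly/monthly).
shutil.copytree(live, backup)

# Step 4: after a disaster, the copy can be restored the same way.
restored = backup / "accounts.csv"
print(restored.read_text() == (live / "accounts.csv").read_text())  # prints True
```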