A-level Computing/WJEC (Eduqas)/Component 1/Systems analysis
- 1 Approaches
- 2 Software Development Life-Cycle
- 2.1 Feasibility
- 2.2 Analysis
- 2.3 Requirements
- 2.4 Design
- 2.5 Build
- 2.6 Testing
- 2.7 Implementation
- 2.8 Documentation
- 2.9 Maintenance
- 2.10 Evaluation
The waterfall approach takes a linear path through software development, producing 'deliverables' at the end of every stage. The approach gets its name because development flows like a natural waterfall: one stage must be completed before you can move on to the next, and you cannot move back up a stage - you must start again from the top. A clear emphasis on documentation is evident in this process, as every stage must produce these 'deliverables'.
- The deliverables can be shown to clients to inform them of the progress on the project.
- A sense of discipline is maintained throughout due to the deadlines for each stage.
- Requirements must be considered before work begins.
- Takes a long time to deliver a project using this approach.
- Loss of flexibility - you can't traverse the stages as you see fit, there is a rigid order to follow.
- Cannot change the project down the line even when new innovations come to market.
- Any mistakes/overlooked issues will mean you need to completely restart the project.
- Requirements are almost impossible to fully pin down before development begins.
The agile approach puts more control in the hands of the developers, rather than focusing on the client. The developers are trusted to find a solution to the problem and code it, after being given the resources to do so (typically a fairly powerful PC with a development environment such as Visual Studio installed). Developers are usually grouped into teams, each given a part of the solution to implement; these teams meet with the client and discuss their requirements in detail.
- The agile approach believes you can't know the requirements at the start of the project.
- The project is very adaptable, since the plan can change.
- Pair programming is disliked by many developers.
- Customers who don't understand this approach will not like the lack of documentation.
- People work in short sprints, which can lead to rushed, poorly structured ('spaghetti') code.
The agile manifesto values "individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation and responding to change over following a plan."
Extreme programming: developers must communicate with each other, with other teams and with the clients. The simplest solution that works should be adopted, with feedback provided by others, for example through pair programming.
Scrum: regular prototypes are produced as quickly as possible, each within a set period of time called a 'sprint'. Since people work in teams on multiple areas, meetings called 'scrums' are held to discuss progress and offer others advice.
Software Development Life-Cycle
Stakeholders: people who hold an interest in the new system. They are either the ones building it and being paid, or the customer who is paying for the system.
Users: the people who will use the new system, for example people using a shopping website.
Programmers/developers: the people who will build the system.
Project Managers: coordinate the project's completion by assigning developers to different parts of the project and monitoring their progress. Developers can be moved around based on their skills, with more highly skilled people assigned to the areas that need them.
System analysts: find out what the customer wants in the new system. This is done using various fact finding methods, since most customers don't know exactly what they want or don't give enough information about what data the current system already contains.
The customer and development team will meet and discuss the scope of the project - what needs to be done and what the current limitations of their system are. Customers may have an over-optimistic view of what can be accomplished and what data the new system must contain, which is why both sides meet and decide what the scope of the problem is. The problem definition may be designing a new system, because the old system does not meet current business goals, or expanding the system with a new website or mobile application.
A feasibility study is carried out to see whether a solution is achievable, given a development company's limited resources: money, time and technical ability, as well as its reputation.
The company taking on the project has a limited amount of money, since it needs to hire developers as well as premises, both of which incur outgoing costs: software licences (e.g. Visual Studio), computers, developers' salaries and building costs (electricity, gas, rent, etc.). The company will also want to make some profit, which may or may not be shared with the developers.
There are only so many hours in a day - developers typically work from 9 AM to 5 PM, so only those hours can be used for the project. The project manager must estimate how long the project will take and then decide whether to go ahead - late projects face repercussions and possible legal action. The customer will want the project by a deadline, which should be checked against the estimated completion time to determine time feasibility.
Developers aren't gods: they can only code in so many programming languages, have only read some of the documentation and are not familiar with every system implementation. Using a skills audit, a company can see where its developers' skills lie. Another problem is that a customer could request something that is not currently technologically feasible, e.g. thin VR (Virtual Reality) glasses.
Some systems cannot go ahead due to a hostile political landscape - for example moralistic choice systems, health systems or product testing systems. A system seen as politically unacceptable will damage the company's reputation.
A proposed system may break laws in its country of origin, in which case it cannot be considered legally feasible. Legal feasibility can be hard to judge: take fair use law and file sharing or commentary videos, which automated systems can flag incorrectly. YouTube, for example, has faced backlash over its Content ID system incorrectly flagging videos.
Analysis is the process of collecting details on what the new system will need to be able to do through various fact finding methods, such as interviews, questionnaires, document collection and observations.
Questionnaires, also known as surveys, include a set of questions relevant to the system; questions can be closed, open or scale-based. Open questions provide rich detail, while even entirely closed questions still provide helpful evidence.
- Can gather a large range of viewpoints.
- Large amount of (potentially) very rich details.
- Can be done globally across many company locations.
- Questionnaires are hard to design well (ideally done by an expert).
- People may not complete them.
- Details retrieved can be highly opinion-based rather than objective.
Interviews are a one-on-one discussion between a stakeholder and the system analyst. This should provide lots of good detail from someone with a vested interest in the new system and with experience in the business. This will, however, have a large reliance upon what is asked so the questions should be carefully constructed.
- Large amounts of detail can be gathered.
- The person will be a stakeholder so will be co-operative in answering the questions well.
- Follow-up questions can be asked unlike other methods of analysis.
- Will depend upon the content of the interview - a poorly constructed interview will produce poor responses.
- People may exaggerate the problems because of their opinions.
- May take a while if you interview many people.
Document collection involves getting together a large range of various company files, for example invoices, taxable receipts, employee records, company spreadsheets, etc. These documents tell the system analyst what details need to be included in the new system and a sense of how they trade can be established.
- Documents are entirely objective (assuming they haven't been tampered with).
- Show the way the current system works.
- Gather a large amount of details in a short amount of time, typically won't miss out crucial details as they are used for trade.
- Give no explanation of why the documents exist (the current system may not include these details).
- May be highly confidential and not available to the system analyst.
An observation involves watching someone go about their day-to-day work, commonly referred to as job-shadowing. The systems analyst watches the daily work and notes down any relevant details that may have been missed in the scope.
- See details missed out in the problem definition.
- Verify any other information found.
- You get an overview of the overall work process at the company and can develop the system as such.
The requirements are specific goals the system should meet in order to be delivered to the client. This matters in a legal sense, so the company can prove it provided what was asked, but also for keeping the project within scope. There are three areas covered, all of which must be measurable for legal/scope reasons: interface, functional and performance.
An interface requirement is a measurable goal for how the system will be interacted with. Take the video streaming site Twitch, for instance: the interaction takes place via the user clicking their mouse button to trigger a stream load. The specification needs a measurable standard, such as "the website must be accessible by every type of computer user with a mouse, as well as mobile users with a touchscreen press." We're not interested in how many buttons their gaming mouse has, but rather that a left click can be used to interact with the site - if it can, the interface requirement is met.
Functional requirements detail what features a system should have. Sticking with our Twitch example, some functional requirements could be "a subscription system where users will pay a monthly fee and receive perks" or just simply "videos must load the applicable stream when interacted with". Either of these could be implemented in a number of ways, but simply having the functionality deems them met.
Performance requirements just cover how fast the system should be when used. Twitch is fairly snappy on a good broadband connection, perhaps it had the performance requirement of "load a video stream within 5 seconds." This is definitely measurable and if the stream did indeed load that fast this performance requirement would be met.
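Because performance requirements are measurable, they can be checked automatically. A minimal sketch in Python, where `load_stream` is a hypothetical stand-in for the real system call:

```python
import time

# Hypothetical stand-in for the real stream loader; an actual test
# would call the system under test instead.
def load_stream():
    time.sleep(0.1)  # simulate network and decoding delay
    return "stream ready"

def meets_performance_requirement(max_seconds=5.0):
    """Check the 'load a video stream within 5 seconds' requirement."""
    start = time.perf_counter()
    load_stream()
    elapsed = time.perf_counter() - start
    return elapsed <= max_seconds

print(meets_performance_requirement())  # True while the stub loads in time
```

If the measured time exceeds the agreed limit, the performance requirement is not met.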
The requirements specification, often just called the 'spec', outlines the production of the new system and what it will entail. The contents differ from company to company, but each company sticks closely to its own specification template. This may include: purpose, scope, functional/interface/performance requirements, functions of the product, constraints and issues. An example template can be viewed at the University of Illinois.
The design is what the system will look like and how data will be input into and processed by the system. The final design can only be created once the requirements specification is complete. The design should take into account what hardware/software is available (e.g. a website or a game), data structure design, input (how data will go into the system) and output (how data will be presented when read out of the system).
Data Structure Design
The data structure design shows diagrammatically how data will be represented in the system. All of the data gathered during the analysis phase will be organised into fields and placed into a data dictionary.
|Field Name||Data Type||Field Length||Description||Example|
|First Name||Text||64 characters||The first name of the pupil.||Bob|
|Last Name||Text||64 characters||The last name of the pupil.||Ross|
|Age||Number||3 characters||The age of the pupil.||13|
|Grade||Text||2 characters||The grade the pupil achieved.||A*|
After the data dictionary has been constructed, the method of storage is chosen, most commonly some form of database.
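The data dictionary above can be translated almost directly into a database table. A minimal sketch using Python's built-in SQLite module (the table and constraints follow the example dictionary; SQLite is just one possible storage choice):

```python
import sqlite3

# Build the pupil table from the data dictionary: field names, types and
# lengths mirror the example dictionary above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pupil (
        first_name TEXT CHECK(length(first_name) <= 64),
        last_name  TEXT CHECK(length(last_name) <= 64),
        age        INTEGER CHECK(age < 1000),        -- 3 characters max
        grade      TEXT CHECK(length(grade) <= 2)
    )
""")
conn.execute("INSERT INTO pupil VALUES (?, ?, ?, ?)", ("Bob", "Ross", 13, "A*"))
row = conn.execute("SELECT * FROM pupil").fetchone()
print(row)  # ('Bob', 'Ross', 13, 'A*')
```

The CHECK constraints enforce the field lengths from the dictionary, so invalid data is rejected at the storage layer.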
Analysing a Problem
We can solve a problem using many techniques, but you need to know about two of these techniques: abstraction and decomposition.
Abstraction is giving a name to a process, hiding the detail behind that name. You can understand it in a programming sense: there are many ways to solve a problem in your code, and once you've solved it, you may put the solution in a function so you can call that solution whenever you want, without worrying about how it works internally. That is abstraction of a problem.
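As a small illustration (the grade boundaries here are made up for the example), the messy details of converting a score to a grade can be hidden behind a single named function:

```python
# Abstraction: callers only need the name percentage_to_grade, not the
# workings inside it. The boundaries are hypothetical, for illustration.
def percentage_to_grade(score):
    boundaries = [(90, "A*"), (80, "A"), (70, "B"), (60, "C"), (50, "D")]
    for cutoff, grade in boundaries:
        if score >= cutoff:
            return grade
    return "U"

print(percentage_to_grade(92))  # A*
print(percentage_to_grade(45))  # U
```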
Decomposition is breaking down a problem into much smaller, achievable parts. A good example of decomposition is in Object-Oriented programming, where a problem is broken down into various different objects, each responsible for its own data and behaviour.
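As a sketch, a school reporting problem might be decomposed into objects like these (the class names are illustrative, not taken from a real system):

```python
# Decomposition: the reporting problem is split into two smaller parts,
# a Pupil (who they are) and a Result (what they achieved).
class Pupil:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

class Result:
    def __init__(self, pupil, subject, grade):
        self.pupil = pupil
        self.subject = subject
        self.grade = grade

    def summary(self):
        return f"{self.pupil.first_name} {self.pupil.last_name}: {self.subject} - {self.grade}"

result = Result(Pupil("Bob", "Ross"), "Computing", "A*")
print(result.summary())  # Bob Ross: Computing - A*
```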
Constructing System Diagrams
To understand the data flow in a system, it is usually best to construct a diagrammatic representation of the system to refer back to. There are three diagrams you need to know: entity relationship diagrams, flowcharts and data flow diagrams.
Entity Relationship Diagrams
Entity relationship diagrams (ERDs) are used to represent databases, showing which tables relate to each other and in what way. There are three types of relationship: one-to-one, one-to-many and many-to-many. Each link represents a relationship between two entities (database tables).
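A one-to-many relationship can be sketched concretely with a foreign key. The example below uses Python's built-in SQLite module, with hypothetical `teacher` and `pupil` tables where one teacher teaches many pupils:

```python
import sqlite3

# One-to-many: many rows in pupil reference one row in teacher via a
# foreign key (hypothetical tables, for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE teacher (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE pupil (
        id INTEGER PRIMARY KEY,
        name TEXT,
        teacher_id INTEGER REFERENCES teacher(id)
    )
""")
conn.execute("INSERT INTO teacher VALUES (1, 'Ms Smith')")
conn.executemany("INSERT INTO pupil VALUES (?, ?, 1)",
                 [(1, "Bob"), (2, "Alice")])
count = conn.execute(
    "SELECT COUNT(*) FROM pupil WHERE teacher_id = 1").fetchone()[0]
print(count)  # 2 pupils on the 'many' side of the relationship
```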
Data Flow Diagrams
Data flow diagrams (DFDs) show how information is processed in a system. The system carries out a series of processes as a result of interaction from an 'external entity' (anyone interacting with the system). When a process (an action carried out by the system) runs, it may need data, which can be obtained from a 'data store' - anything that stores data, usually a database. Take a cinema, for example: the employees interact with the system as external entities, the processes handle booking tickets, and the data store records who has bought which ticket, their payment details and which screen they're watching the film in.
This section will be completed soon.
Testing is always crucial before delivering a project to a client. It ensures the system works as it should and that there are no errors, which is critical in a live business environment. Developers need other people to test the project because every computer differs, so the system may not work properly on other machines.
White box testing is carried out by the original developer, as well as other developers in-house. The code is entirely visible to anyone testing - hence the 'white' in the name, as the code is clear to see. In white box testing, every path that could possibly be encountered is run. A simple IF this THEN do that ELSE do this statement has two paths: one if the condition is met and another if it is not.
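A minimal sketch of white box path testing in Python, using a made-up discount rule - the IF/ELSE has two paths, so two test cases cover them both:

```python
# With the code visible, we can count the paths and write one test per path.
# The discount rule is hypothetical, for illustration.
def discount(total):
    if total >= 100:      # path 1: condition met
        return total * 0.9
    else:                 # path 2: condition not met
        return total

# One test case per path:
assert discount(150) == 135.0   # exercises the THEN branch
assert discount(50) == 50       # exercises the ELSE branch
print("both paths tested")
```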
Black box testing is outsourced to companies whose job is to test programs, or done by others without any knowledge of the code. The 'black' in the name comes from the fact that the code is hidden from view. Every element of the system is tested to see whether each path produces the proper output for both correct and incorrect input.
Alpha testing is carried out when a system is currently still in development and is not close to completion, lacking lots of functionality or not working properly. These builds are never given to customers, since they will assume that is what will be delivered. This type of testing is carried out in-house.
Beta testing is carried out when a system is very close to completion, having all of the functionality and being more robust due to the alpha testing previously carried out. Customers can test out the system in this stage, using some live data, to see if the system can handle normal data being input. There will generally be a few minor issues with the beta system.
Unit testing is entirely automated testing run on a regular schedule, where test conditions are fed into the system to see if the relevant output is produced. Since code is developed separately and then inserted into the main system, it is quite likely that newly inserted code will break some older code, which is why unit tests are carried out. The developer writes the tests that must pass before code changes are accepted into the main system.
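A minimal sketch of a unit test using Python's built-in `unittest` module, with a hypothetical VAT function as the unit under test:

```python
import unittest

# The unit under test: a small, hypothetical function that might be
# inserted into the main system.
def apply_vat(price, rate=0.2):
    """Return the price with VAT added, rounded to 2 decimal places."""
    return round(price * (1 + rate), 2)

class TestApplyVat(unittest.TestCase):
    def test_standard_rate(self):
        self.assertEqual(apply_vat(100), 120.0)

    def test_zero_rate(self):
        self.assertEqual(apply_vat(100, rate=0), 100.0)

# Run the tests programmatically; an automated build server would run
# this on a schedule or on every code change.
suite = unittest.TestLoader().loadTestsFromTestCase(TestApplyVat)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

If a later code change breaks `apply_vat`, the scheduled run fails and the change is rejected before it reaches the main system.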
End user testing is done by the original customer that requested the system. They use real live data to test the system and the requirements specification is checked so both parties are happy that they've met their side of the agreement. The customer will have a complete system and the development company will be paid.
Direct changeover, or 'Big Bang' changeover, is where the old system is disabled and the new system is immediately switched to. This is potentially very dangerous: the new system may be buggy, staff may dislike it or not be trained properly, and if the system fails it can lead to lost sales and the collapse of the company's business.
Phased changeover is where the new system is introduced gradually, in various phases. One part of the new system will be used while the rest of the old system remains. This allows each part of the new system to be tested; if problems arise, that part can be reverted to the old system while the new part is fixed and put back for testing.
Pilot changeover is where the new system is introduced at one business branch where all the other branches will continue to use the old system. This only works when each location is doing the same task. This could work well in a supermarket installing a new POS (Point-of-Sale) system meaning that if it did not work well at all, the only branch losing money is the one with the new system.
Parallel changeover is where both the old system and the new system run in tandem. Staff must enter details into both systems, but if the new system breaks, no sales are lost. This guards well against lost sales, but employees will be doing double the work, so part-time employees may need to be taken on because of the sheer amount of data entry required. This method suits critical systems, such as those used in banking.
Documentation is produced for internal use in the creation of the system, as well as for the users of the system, so they know how it works and how to utilise it.
User documentation helps the user navigate the user interface (UI) of the system. It shows how to achieve common tasks, documents what each button does and tells the user what each element of the system means. The documentation must be written in layman's terms so the average user can understand it, and it should cover every task the system can perform.
Maintenance documentation must be produced because, inevitably, the system will need changes as the business changes and society adopts new design languages. The system will need to be understood by a system administrator, who will be tech-literate; this documentation is aimed at them. The system may need to be moved entirely to another data centre, in which case the Installation section of the maintenance documentation will be relevant, documenting how to set up the database, install a web server like nginx/Apache, etc. Other sections could include: data structures, algorithm flowcharts, the data dictionary, internal documentation such as design documentation, and minimum system requirements (e.g. RAM, CPU power, drive space required).
This section will be completed soon.
This section will be completed soon.