- 2.5.11 Analysis Techniques and Deliverables
- 2.5.11.1 Introduction
- 2.5.11.2 Developing Data Flow Diagrams
- 2.5.11.2.1 Types of Data Flow Diagrams
- 2.5.11.2.1.1 Current Physical Data Flow Diagram
- 2.5.11.2.1.2 Current Logical Data Flow Diagram
- 2.5.11.2.1.3 New Logical Data Flow Diagram
- 2.5.11.2.1.4 New Physical Data Flow Diagram
- 2.5.11.2.2 Elements of a Data Flow Diagram
- 2.5.11.2.2.1 Data Stream
- 2.5.11.2.2.1.1 Naming Data Streams/Modifiers
- 2.5.11.2.2.1.2 Routers and Collectors
- 2.5.11.2.2.1.3 Split Data Stream
- 2.5.11.2.2.2 Process
- 2.5.11.2.2.3 Data Store
- 2.5.11.2.2.4 Access Key
- 2.5.11.2.2.5 Source/Sink
- 2.5.11.2.3 Using Special Conventions to Model Data Flows
- 2.5.11.2.3.1 Accessing/Updating a Data Store
- 2.5.11.2.3.2 Updating Re-circulating Data Stores
- 2.5.11.2.3.2.1 Depicting Files on Logical Data Flow Diagrams
- 2.5.11.2.3.2.2 Depicting Files on Physical Data Flow Diagrams
- 2.5.11.2.3.3 Sorts
- 2.5.11.2.3.4 Off-Page Connector
- 2.5.11.2.4 Leveling Data Flow Diagrams
- 2.5.11.2.5 Identifying a Data Flow Diagram
- 2.5.11.2.5.1 Naming a Data Flow Diagram
- 2.5.11.2.5.2 Numbering a Data Flow Diagram
- 2.5.11.2.5.3 Sequencing Data Flow Diagrams
- 2.5.11.2.6 Balancing Data Flow Diagrams
- 2.5.11.2.7 Additional Guidelines for Developing Data Flow Diagrams
- 2.5.11.2.8 Transition to Design
- 2.5.11.3 Developing Data Definitions
- 2.5.11.3.1 Types of Data Definitions
- 2.5.11.3.2 Components of Data Definitions
- 2.5.11.3.2.1 Attributes and Cross-References
- 2.5.11.3.2.2 Data Streams/Elements
- 2.5.11.3.2.3 Data Streams/Groups
- 2.5.11.3.2.4 Data Stores
- 2.5.11.3.2.5 External Entities
- 2.5.11.3.3 Avoiding Redundant Data Definitions
- 2.5.11.3.4 Defining the Contents of Data
- 2.5.11.3.5 Data Name Aliases
- 2.5.11.4 Developing Process Specifications
- 2.5.11.4.1 Process Specification Attributes
- 2.5.11.4.2 Types of Process Specifications
- 2.5.11.4.2.1 Primitive Level Process Specifications (mini specs)
- 2.5.11.4.2.2 Higher Level Process Specifications
- 2.5.11.4.3 General Rules for Writing Process Specifications
- 2.5.11.4.4 Structured English
- 2.5.11.4.4.1 Structured English Vocabulary
- 2.5.11.4.4.2 Logical Constructs of Structured English
- 2.5.11.4.5 Decision Tables
- 2.5.11.4.6 Decision Tree
- 2.5.11.4.7 Common Process Specifications
- 2.5.11.5 Functional Specification Package
- 2.5.11.5.1 Data Flow Diagrams and Process Specifications
- 2.5.11.5.2 Data Definitions
- 2.5.11.5.3 Screen Displays
- 2.5.11.5.4 Reports/Layouts
- 2.5.11.5.5 External Entities
- 2.5.11.5.6 Table of Contents/Cross References
- Exhibit 2.5.11-1 Format for a Leveled Data Flow Diagram
- Exhibit 2.5.11-2 Symbols Used to Define Data
- Exhibit 2.5.11-3 A Procedure described using a Narration, then using a Decision Table
- Exhibit 2.5.11-4 Common Process Specification Example
Part 2. Information Technology
Chapter 5. Systems Development
Section 11. Analysis Techniques and Deliverables
Structured analysis is a technique that involves the analysis, description, specification, and decomposition of business processes and data to derive a result that is graphic and concise, non-redundant, top-down partitioned, and logical instead of physical. This technique uses logical models to enhance communication by emphasizing what (logically) needs to be designed, not how (physically) to design it.
Data flow diagrams, data definitions, and process specifications are the tools used in structured analysis. The deliverable that results from applying structured analysis is the functional specification package. This deliverable comprises data flow diagrams, data definitions, and process specifications.
This manual establishes standards, guidelines, and other controls for analyzing business processes and data. This manual describes techniques for modeling processes/data flows among these processes, defining data, and specifying processes. This manual is distributed to promote the development of business models that are easy to understand, change, and maintain.
The guidelines, standards, techniques, and other controls established in this manual apply to all software developed for the Internal Revenue Service. This development includes that performed by government employees as well as contractors. For system development purposes, the controls established in this manual may be used with any Agency approved life cycle (e.g. SDLC, eSDLC, or ELC).
A data flow diagram is a graphic tool for depicting the partitioning of a system into a network of activities and their interfaces, together with their origins, destinations, and stores of data. The system being partitioned can be automated, manual, or a combination of both. A data flow diagram pictures the system as a continuous stream of ongoing data but does not address physical concerns (i.e., decisions or loops) as does the traditional flow chart. A data flow diagram emphasizes the flow of data and de-emphasizes the flow of control.
Data flow diagrams present a logical view of the system, unlike flowcharts, which introduce many physical constraints too early in system development.
At a very early stage, data flow diagrams provide a graphic model of the system being developed which can be easily understood by the customer. Areas of misunderstanding are resolved early in system development rather than in a later development stage where changes have much more impact.
Data flow diagrams break up a system into functional subcomponents. This partitioning aids in identifying and isolating the various functions of a system.
Data flow diagrams graphically depict the boundaries between the system itself and the externals that interact with the system. In addition to providing this macro view, data flow diagrams can be decomposed into levels of increasing detail to provide the analyst with a very flexible graphic representation. At their higher levels, data flow diagrams present system overviews suitable for management briefings; at their most detailed levels, data flow diagrams readily communicate with the system designer.
Data flow diagrams are used to graphically describe the transformation of data through the system. Data flow diagrams are developed by studying the data from the user's point of view and then creating different logical and physical system models.
In applying structured analysis, develop and use the following types of data flow diagrams:
Current physical data flow diagram;
Current logical data flow diagram;
New logical data flow diagram;
New physical data flow diagram.
Use this type of data flow diagram to model the current physical environment. This type of data flow diagram models the physical characteristics of an existing system, such as department names, physical locations, organizations, people's names, and mechanical or operational devices.
Use this type of data flow diagram when modeling a system for the first time. Since the user is more familiar with the physical terminology, get the user's approval of the accuracy of the model of the existing system before continuing analysis.
Make the current logical data flow diagram from the current physical data flow diagram by removing physical considerations and constraints. For instance, replace department names with the actual processing functions within that department. The logical model must depict how the data is being transformed, not who or what is transforming it.
To accommodate required changes to a system, examine and modify the current logical data flow diagram. Reexamine the rationale behind why processes are done and the way they are done. This model is still logical and becomes the candidate for the final aspect of data flow diagram development.
The data flow diagrams that result from this technique are the actual maintainable documentation required within the functional specification package.
In the final aspect of data flow diagram development, balance the implementation of the ideal system (as represented by the new logical data flow diagrams) against the realities of time and cost constraints. Consider feasibility and impact studies, cost/benefit analysis, and other variables until an appropriate compromise physical model is selected.
Make physical decisions (such as which data stores will be data bases, as opposed to sequential files) and consider "packaging."
Do not allow the data flow diagrams to become too physical, as this will defeat their purpose and unnecessarily limit the choices available to the designer. Depict functional processes as opposed to organizational entities affecting the system, such as departments or divisions.
This section discusses the elements that appear on data flow diagrams.
A data stream is one or more elements of data. A data stream is used to indicate a sharing of data. A data stream is graphically represented by an arrow that shows the direction in which data is being shared. Figure 2.5.11-1 depicts a data stream.
Label all data streams with meaningful names in accordance with applicable naming standards. When a data stream has been logically transformed and this needs to be distinguished, do not create a new data name. Use a modifier to qualify the name and place it in parentheses after the data stream name. Figure 2.5.11-2 depicts data streams with modified names.
A router is used to subdivide a data stream or decompose data. A router is graphically represented by a right half circle. Figure 2.5.11-3 depicts a router.
A collector is used to rebuild data streams or recompose data. A collector is graphically represented by a left half circle. Figure 2.5.11-4 depicts a collector.
A split data stream divides the routing of data. Unlike the case with the logical router and logical collector, no decomposing or recomposing of the data stream takes place. A split arrow is used to show the routing of a data stream to two or more destinations. A split data stream is graphically represented by a multi-prong arrow. Figure 2.5.11-5 depicts a split data stream.
A process represents a logical transformation of an incoming data stream(s) into an outgoing data stream(s). A process is a type of object that represents activity and constitutes a data flow diagram. A process name should consist of a transitive verb followed by an object. Show a process by an ellipse or circle with the process name inside. Figure 2.5.11-6 shows these conventions.
For decomposition purposes, three types of processes are acknowledged:
A context process represents the scope of activity being analyzed and modeled, and is the first level of decomposition for a related set of data flow diagrams. A context process comprises other processes but does not itself constitute another process. As a rule, a context process may not constitute another process.
A parent process is a process that comprises other processes. As a rule, a parent process must constitute another process and comprise other processes.
An elementary process is a process that constitutes another process and does not comprise other processes. As a rule, an elementary process must not comprise other processes.
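The comprise/constitute rules above can be captured in a short sketch. This is an illustrative model only, with hypothetical process names; it assumes a process is classified purely by whether it has a parent and whether it has children.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """A node in a data flow diagram decomposition hierarchy."""
    name: str
    parent: "Process | None" = None
    children: list["Process"] = field(default_factory=list)

    def add_child(self, child: "Process") -> "Process":
        child.parent = self
        self.children.append(child)
        return child

    def kind(self) -> str:
        if self.parent is None:
            # Comprises others, does not constitute another process.
            return "context"
        if self.children:
            # Both constitutes another process and comprises others.
            return "parent"
        # Constitutes another process, comprises no others.
        return "elementary"

context = Process("Process Tax Return")            # hypothetical names
validate = context.add_child(Process("Validate Return"))
check = validate.add_child(Process("Check Filing Status"))
print(context.kind(), validate.kind(), check.kind())
# → context parent elementary
```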
A Data Store (Data Base, File, Table), represented by parallel lines, is a data stream at rest (i.e., a temporary repository of data). Place the data store name between the parallel lines. Figure 2.5.11-7 illustrates a data store.
An access key is an optional figure that is represented by a dashed line with a name; and is only used to represent a key accessing a random access disk file. Figure 2.5.11-8 illustrates an access key to a data store.
Represent a Source/Sink (e.g., External Entity, External Input and Output) by a rectangle. A source or sink is a person or organization, external to the context of a system, that is a net originator or receiver of system data. Place the source/sink name inside the rectangle. Figure 2.5.11-9 illustrates a source/sink.
Some situations will require other diagramming conventions to graphically express the situation.
Use arrows to represent reads, writes, or other accesses to a data store; the arrows may be used in any appropriate combination. Figure 2.5.11-10 illustrates the accessing of or reading from a data store.
Figure 2.5.11-11 illustrates the updating of or writing to a data store.
When diagramming a file key accessing a data store, the key is optional. Only show the key on a physical data flow diagram (i.e., after it is decided that the file media will be direct access). Figure 2.5.11-12 illustrates a file key accessing a data store.
When developing logical or physical data flow diagrams, some situations will require the modeling of re-circulating data stores.
When designating files on logical data flow diagrams in which data from an old version of a file is being transformed into data for a new version of that same file, show the old and new versions of the file. Figure 2.5.11-13 illustrates a data transformation.
If a decision is made during creation of the physical data flow diagram to use a single random access file, designate it on the new physical data flow diagram (but not on the logical data flow diagram). Figure 2.5.11-14 illustrates random file access depicted on a data flow diagram.
If the file is to be sequentially processed, or it is still undecided how it will be processed, depict the file on the new physical data flow diagram as Figure 2.5.11-15 illustrates.
Introduce a sort as a process bubble only when it is logically required. If the sort process is being shown as a bubble and is sorting an input file and putting out a sorted file, then show these files on the data flow diagram. Name the data stores only, not the data streams.
Figure 2.5.11-16 provides an example (of a sequential file update or an update where the decision as to whether it will be random or sequential has not been made) that illustrates a sort.
An off-page connector is represented by a circle and arrow. Avoid continuation pages because they make the data flow diagram less readable. When a data flow diagram must be continued onto another page and the diagram remains at the same level of decomposition, then, the off-page connector may be used. Write the sending and receiving page numbers within the respective circles. To avoid confusion, make the circles smaller than the process bubbles on your data flow diagram. Figure 2.5.11-17 illustrates the conventions used to depict off-page connectors.
Leveling is the partitioning of a large system into manageable units, resulting in system documentation that is easier to comprehend. Top-down analysis and reanalysis of processes and data (partitioning and re-partitioning) produce a high level overview for management and lower, more detailed levels for the designer and users. A leveled data flow diagram set comprises:
The top-level diagram, called the context diagram, which defines the boundary of the system and consists of only one bubble that is labeled with an overall system descriptor. The system sources, sinks, inputs, and outputs are depicted; and the input and output data streams are shown to define the domain of the system.
Middle-level data flow diagrams are used when it is necessary to represent the system processes within the context diagram broken down into a more detailed level. They are the intermediate level between a context diagram and the functional primitives.
The lowest-level data flow diagram, called a functional primitive, represents a process that cannot be further decomposed. A functional primitive has no internal data streams and usually only a single input and single output.
Exhibit 2.5.11-1 illustrates the format for a leveled data flow diagram.
A diagram for which there is a lower level diagram(s) is termed a "parent" diagram. For instance, in Exhibit 2.5.11-1, the context diagram is parent to Diagram 0, which is termed a "child" diagram. Diagram 0 also assumes the role of a parent to Diagram 2, which is the child of Diagram 0. Therefore, a diagram can be both a child of a higher-level data flow diagram and a parent to a lower level data flow diagram. However, a lowest level (Functional Primitive) data flow diagram can only be a child diagram because it cannot be further decomposed.
Each level of the data flow diagram is to reside on a separate page. The reader can follow the diagram leveling using the diagram and bubble numbering system as a guide.
There is no set number of levels. However, there is always at least a context diagram level and an associated lowest level. The number of middle level diagrams is dependent upon the complexity of the system being defined.
In the interest of readability, partition levels into about seven bubbles (plus or minus two bubbles).
Data flow diagrams are identified through naming and numbering.
Title each data flow diagram with the name of its "parent" bubble. The context diagram within a data flow diagram set has no "parent" diagram; it is the highest-level diagram and identifies the system name, inputs, and outputs.
Except for the context diagram, each data flow diagram is labeled with the diagram number of its parent bubble. This diagram number is carried over into the numbering of the individual bubbles by taking the diagram number, placing a decimal point after it, and then placing a sequential number after the decimal point to give each bubble a unique identifier. The diagram number retains the bulk of the numbering and the bubbles are numbered with only the last decimal point number. Figure 2.5.11-18 illustrates the numbering, i.e., the actual process reference numbers of diagram 2.4.5 are 2.4.5.1, 2.4.5.2, and 2.4.5.3.
Exhibit 2.5.11-1 illustrates a properly numbered data flow diagram.
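The bubble-numbering convention above reduces to a one-line rule. A minimal sketch (the function name is illustrative, not a manual convention):

```python
def bubble_numbers(diagram_number: str, bubble_count: int) -> list[str]:
    """Derive process reference numbers for the bubbles of a diagram:
    the diagram number, a decimal point, then a sequential number."""
    return [f"{diagram_number}.{i}" for i in range(1, bubble_count + 1)]

print(bubble_numbers("2.4.5", 3))
# → ['2.4.5.1', '2.4.5.2', '2.4.5.3']
```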
Place the sequence or order of appearance of the data flow diagrams in the functional specification package in ascending numeric order (the data flow diagram name is unimportant and not used in this sequencing). Use the data flow diagram numbers, which appear in the page heading of each diagram, for sequencing. Follow one particular sequencing order to maintain uniformity between various functional specification packages.
Figure 2.5.11-19 illustrates proper numeric sequence.
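Note that ascending numeric order is not the same as alphabetical order once diagram numbers reach two digits (a plain string sort would place 2.10 before 2.4). A minimal sketch of a component-wise numeric sort key, using hypothetical diagram numbers:

```python
def diagram_sort_key(number: str) -> tuple[int, ...]:
    """Order diagram numbers numerically, component by component."""
    return tuple(int(part) for part in number.split("."))

diagrams = ["2.10", "0", "2.4", "1", "2.4.5"]  # hypothetical numbers
print(sorted(diagrams, key=diagram_sort_key))
# → ['0', '1', '2.4', '2.4.5', '2.10']
```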
Keep data flow diagrams in balance. Represent in the associated bubbles in the child diagram all data streams shown entering and exiting a parent diagram. There are exceptions to the balance rule: minor error paths and trivial inputs (e.g., error messages, system date) need not be in balance.
Show a data store (file) on the first data flow diagram level where all system references to it are shown. Apply this concept at all levels. If a file is used primarily by the system represented in the context diagram, there is no need to show the file at the context diagram level; however, if the file is external to the system, show it on the context diagram.
As an example, Figure 2.5.11-20 illustrates a data store or file not shown in Diagram 0. This is because the file is internal to the processing in process 3 (as though it is concealed inside the bubble). The file and all its data streams are shown when process 3 is diagrammed.
Figure 2.5.11-21 is for Diagram 0 and illustrates a file being used by processes 2 and 3.
Identify all major inputs and outputs to the system on the context diagram.
Show minor inputs and reject data flows at an appropriate lower level. These data streams need not be balanced between parent and child.
Do not show trivial error paths, such as screen messages, on the data flow diagram. Instead, note the processing for the message and the actual message in the appropriate Process Specification.
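The balance rule, together with its exemption for minor error paths and trivial inputs, lends itself to a mechanical check. A hedged sketch with hypothetical stream names:

```python
def check_balance(parent_streams, child_streams, exempt=()):
    """Report data streams that appear on a parent bubble but not on its
    child diagram (or vice versa), ignoring exempt trivial flows."""
    parent = set(parent_streams) - set(exempt)
    child = set(child_streams) - set(exempt)
    return {"missing_in_child": parent - child,
            "extra_in_child": child - parent}

result = check_balance(
    parent_streams={"tax-return", "assessment"},      # hypothetical
    child_streams={"tax-return", "assessment", "error-message"},
    exempt={"error-message"},  # minor error paths need not balance
)
print(result)
# → {'missing_in_child': set(), 'extra_in_child': set()}
```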
Label each data stream, data process, and data store with a meaningful name developed in accordance with applicable naming standards.
Use a descriptive and strong action verb to name a process bubble. Try to use a singular object to complement the verb.
On a data flow diagram, ensure that each process has at least one input data stream and one output data stream.
Format each level of the data flow diagrams in a left to right flow, a convention with which most readers are familiar. Try to aid readability by not crossing data streams.
Present the system from the viewpoint of the data and show the processes transforming the data.
Don't represent the flow of control or control information (i.e., timing considerations).
Don't show initialization and termination such as job control language, control decisions (beginning or end-of-file) and file initialization.
This section provides guidance on using the results from analysis as the basis for software design.
If a data flow diagram has not been evenly partitioned, the diagram will combine some detail and some higher levels of abstraction. In this case, perform a top-down partitioning of the data flow diagram by:
Replacing any problem bubble by its "child" network and then connecting the data flows;
Grouping into sets to minimize interfaces;
Allocating one top-level bubble per set;
Renumbering and renaming everything.
As the new physical data flow diagrams are being developed, both the analyst and the designer must consider certain physical details. Unless a data flow diagram is small and limited in function, it will need to be "packaged". Packaging is the process of subdividing the data flow diagram processes into related groups of processes; and each of these related groups of processes evolves into a separate structure chart that will be created during the software design. The following physical boundaries and constraints have a bearing on the packaging of a data flow diagram set:
Man/machine boundary-separates manual processes from those performed on ADP equipment.
Hardware boundary-separates processes that must be performed on different types of ADP equipment.
Batch/on-line/real time boundary-various functions of a system may be on-line, real time, or batch mode depending on the speed requirements for data retrieval, display, availability, etc.
Cycle or timing boundary-some processes must be run daily, while others only need to be run once a week, month, or year.
Commercial software-some processes may be accomplished using vendor-supplied software.
Security/safety needs-security and safety requirements may cause the addition of otherwise unnecessary boundaries and intermediate data stores. Other needs of this type include audit, back-up, recovery, and checkpoint/restart requirements.
Resources-some processes may not be able to be run at the same time because of limited resources (e.g., the job is too large for computer capacity).
Data flow diagrams provide a general picture of the data transformations (processes) and their interfaces (data streams) in a system. To make the data flow diagrams more precise, define both the data and the processing. Data definitions add precision to a system by capturing the details of the data streams and data stores. Since the means to catalogue these definitions will vary by site, the exact form they take will vary. Whether this method is manual, automated, or a combination of the two, standards dictate that project managers ensure consistency within their systems.
Develop a definition for each data stream and data store on the data flow diagram, and maintain the definitions in a system glossary. Define all data elements and groups of data elements contained in these data streams and data stores, and all components referenced in the process specifications. Ensure that all names are in accordance with established naming standards. If an enterprise data dictionary is used to maintain data definitions, then follow any agency guidelines on use of the dictionary.
Define data in terms of its components and their relationship in the hierarchy. Define the following four types:
Data element - the smallest piece of data that is not further decomposed.
Data group - a data structure that consists of other data groups and/or data elements.
Data stream (also called data flow) - data in motion; a pipeline along which information of known composition is passed. Note that data streams are not defined as separate entities in the system glossary; each data stream is a flow consisting of either a data group or a data element.
Data store - data at rest; i.e., a temporary repository of data (a file).
External entities (sources and sinks) are not data but should also be described in the system glossary or data dictionary.
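The four types above form a simple containment hierarchy, which the following sketch models (the class and data names are illustrative assumptions, not manual conventions):

```python
from dataclasses import dataclass, field

@dataclass
class DataElement:
    """Smallest piece of data; not further decomposed."""
    name: str

@dataclass
class DataGroup:
    """A structure composed of elements and/or other groups."""
    name: str
    contains: list["DataElement | DataGroup"] = field(default_factory=list)

# A data stream is data in motion: a flow consisting of a single
# element or group, so it needs no separate glossary type.

@dataclass
class DataStore:
    """Data at rest: a temporary repository (file)."""
    name: str
    contains: list["DataElement | DataGroup"] = field(default_factory=list)

ssn = DataElement("ssn")                      # hypothetical names
taxpayer = DataElement("taxpayer-name")
record = DataGroup("taxpayer-record", [taxpayer, ssn])
master = DataStore("taxpayer-master-file", [record])
print([c.name for c in master.contains[0].contains])
# → ['taxpayer-name', 'ssn']
```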
This section discusses the components of data definitions.
The following attributes actually describe and define the system components. They provide information critical to understanding the system under description:
Use the following cross-references to describe the relationships between system components and provide information that enhances system understanding and maintainability:
Access Key To;
Additional cross-references for common Process Specifications.
Because attributes are critical, they have a higher priority than cross-references. Therefore, first document all attributes of the components; then, as time permits, add the cross-references. Use only the attributes and cross-references that are pertinent to that specific piece of data being defined.
Store and maintain, whenever possible, the values/meanings, descriptions, constraints (if applicable), and cross-reference information in an automated manner, such as through an automated data dictionary. On the other hand, keep the breakdown of data streams and data stores (the "Contains" attribute) into elementary components on the same medium with the accompanying data flow diagrams and process specifications.
If the list of values and meanings is so extensive as to be unwieldy, then reference an existing internal management document. Use the following phrase as a comment in the "values" category - "for values refer to the most current version of fill-in official title and number of reference document", and state under what specific name in the internal management document the values and meanings can be found.
A single data element may constitute a data stream; this is indicated on a data flow diagram when the element name is assigned to an arrow.
The following attributes define the data streams/elements:
Description: a narrative description of the element.
Values/ Meanings: a list of valid values for an element and the meanings of those values.
Use the following cross-references to describe the relationships with other system components:
Access Key To: an alphabetical listing of data stores for which this data element is an access key.
Aliases/ Modifiers: an alphabetical listing of other names for the same element (aliases), or qualifiers for the element name (modifiers).
Input To: if the element is a data stream, an alphabetical listing of processes that, as input, accept the data stream.
Output Of: if the element is a data stream, an alphabetical listing of processes which are sources of the data stream.
Part Of: an alphabetical listing of data groups which contain this element.
A group may be a data stream, or it may be a component part of a larger group, data stream, or data store. The following attributes define the data streams/groups:
Description: a narrative description of the group.
Contains: a list of the elements and/or subordinate groups which constitute the data stream/group being defined; symbols are used to show the relationships between the contents.
Use the following cross-references to describe the relationships with other system components:
Access Key To: an alphabetical listing of data stores for which this data group is an access key.
Aliases/Modifiers: an alphabetical listing of other names for the same group (aliases), or qualifiers for the group name (modifiers).
Input To: if the group is a data stream, an alphabetical listing of processes that, as input, accept the data stream.
Output Of: if the group is a data stream, an alphabetical listing of processes which are sources of the data stream.
Part Of: an alphabetical listing of larger data groups which contain this group.
The following attributes define the data store:
Description: a narrative description of the data store.
Constraints: a narrative description of items, rules, regulations, etc. affecting the data store; e.g., required password security or response time criteria for database access.
Contains: a list of the elements and/or groups which constitute the data store; symbols are used to show the relationships between the contents.
Accessed By: an alphabetical listing of processes or external entities which use this data store as a source; the data store is shown as an input to these processes or external entities on the data flow diagram.
Updated By: an alphabetical listing of processes or external entities which change or reorder the contents of the data store.
Aliases/Modifiers: an alphabetical listing of other names for the same data store (aliases), or qualifiers for the data store name (modifiers).
Part Of: an alphabetical listing of larger data stores (such as a database) which contain this data store.
External entities are defined by the following:
Description: a narrative description of the external entity.
Provides: an alphabetical listing of data stream(s), which the external entity provides as input to the system.
Receives: an alphabetical listing of data stream(s), which the external entity receives as output from the system.
When a system is using both a glossary and data dictionary, some data will be defined in both. This is most likely to occur in the case of data streams on the context diagram that enter and exit the system, and with data elements. If the data is defined in both and any attributes are the same, state in the system glossary, "See Data Dictionary."
Define the contents of data in the "Contains" attribute. Data, except for data elements, is composed of and defined by lower level data components. Show the relationships between these data components in the "Contains" attribute using a symbol convention.
Unless the use of an automated data dictionary precludes it, refer to Figure 2.5.11-22 for the symbols to be used to list the contents of data.
Exhibit 2.5.11-2 provides the symbols used to define data.
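Since Exhibit 2.5.11-2 is not reproduced here, the sketch below assumes the conventional structured-analysis symbols ("=" is composed of, "+" and, "[a | b]" either/or, "{x}" iterations of x, "(y)" optional); the entry and all data names are hypothetical:

```python
import re

# Hypothetical "Contains" entry; the authoritative symbols are those
# of Exhibit 2.5.11-2.
contains = {
    "taxpayer-record":
        "taxpayer-name + taxpayer-address + [ssn | ein] "
        "+ {tax-year-entry} + (spouse-name)",
}

def components(definition: str) -> list[str]:
    """List the component names referenced in a Contains definition."""
    return re.findall(r"[a-z][a-z0-9-]*", definition)

print(components(contains["taxpayer-record"]))
# → ['taxpayer-name', 'taxpayer-address', 'ssn', 'ein', 'tax-year-entry', 'spouse-name']
```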
An alias is a data name that is synonymous with a more commonly accepted data name of a data stream, data store, data group, or data element. An alias is created when the same data is labeled with more than one data name (i.e., the commonly accepted name and the alias name(s)).
Aliases generally occur for three reasons:
Different users have different names for the same form, etc.
An analyst inadvertently introduces an alias in the data flow diagram.
Two analysts working independently with the same data stream give it different names.
Enter the alias name(s) under the aliases cross-reference of the commonly accepted data name. In addition, the commonly accepted name should be entered under the aliases cross-reference of the alias name(s). Eliminate all aliases, except those mandated by users, by the end of analysis.
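The symmetric cross-referencing described above can be sketched as a small glossary helper (the data names are hypothetical):

```python
def record_alias(glossary: dict, accepted: str, alias: str) -> None:
    """Cross-reference an alias both ways: the alias under the commonly
    accepted name, and the accepted name under the alias."""
    for name, other in ((accepted, alias), (alias, accepted)):
        entry = glossary.setdefault(name, {"aliases": []})
        if other not in entry["aliases"]:
            entry["aliases"].append(other)
            entry["aliases"].sort()  # keep the listing alphabetical

glossary = {}
record_alias(glossary, "tax-return", "form-1040")  # hypothetical names
print(glossary["tax-return"]["aliases"], glossary["form-1040"]["aliases"])
# → ['form-1040'] ['tax-return']
```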
All data processing systems, whether structured or not, require descriptions of the procedures that determine how inputs are to be transformed into outputs. As these procedures increase in complexity, English narrative descriptions become a more ambiguous and less acceptable means to specify these transformations. Structured analysis introduces the process specification, a written description and explanation of the processing which takes place within a data flow diagram process bubble.
Use only structured English, decision tables, or decision trees to write the procedures section of a process specification.
The three attributes associated with all process specifications are (in appropriate order):
Description - a narrative describing the purpose and objective of the process.
Constraints - a narrative description of the process constraints. This section usually is not needed, but may be used to specify nonprocedural requirements. An example is a timing requirement such as a report that must be generated only on the last day of the month. Pertinent characteristics of the process such as input and output volumes, peaks and seasonality of the input and output data flows are other possible constraints.
Procedures - use structured English, decision tables, or decision trees to describe in detail the criteria governing the transformation of input data streams into output data streams. Convey what has to be done, not how it is to be accomplished.
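Assembled together, the three attributes might appear as in the following hypothetical process specification; the process name, data names, and content are illustrative only:

```
PROCESS SPECIFICATION: 3.2 VALIDATE RETURN DATA

DESCRIPTION:  Checks each incoming RETURN-RECORD for completeness and
              routes invalid records to error resolution.

CONSTRAINTS:  Peak input volumes occur during the filing period.

PROCEDURES:   1. For each RETURN-RECORD:
              2.    If TAXPAYER-ID is missing or invalid
              3.       Then move the record to ERROR-REGISTER.
              4.       Else move the record to VALIDATED-RETURNS.
```

Note that the procedures state what must happen to the data, not how a program would accomplish it.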
There are two types of process specifications:
Primitive Level Process Specifications;
Higher Level Process Specifications.
This type of process specification defines primitive level (lowest level; not further decomposed) data flow diagram process bubbles. This specification must contain the description and procedures attributes and, as applicable, may contain the constraints attribute.
This type of process specification defines higher level ("parent") data flow diagram process bubbles and conveys information common to more than one of its "child" bubbles. This type of specification must contain the description attribute and, as applicable, may contain the constraints attribute.
Do not address physical requirements such as tape input, card records, telecommunications devices, etc., in the written description. Describe physical information in the constraints section, but only when essential.
Ensure that the process specifications stress what needs to be accomplished, not how it is to be accomplished.
Develop a process specification for each bubble.
Entitle each process specification with the process name and number of its associated bubble on the data flow diagram.
The main intent of the process specification is to define the transformation of input data into output data.
Avoid redundancy between the process specification and other tools in the functional specification package.
Use the data names as shown in the data flow diagrams and data definitions in the process specification.
Use structured English, decision table, or decision tree format to write the procedures attribute.
Describe the procedure sections of process specification for the lowest level (primitive) bubbles using an abbreviated form of English known as structured English. The intent is to provide communications that are more explicit and less contextual than narrative English. Structured English tries to limit reader interpretation, despite the reader's frame of reference, to one obvious conclusion.
Structured English uses a limited vocabulary consisting of:
strong action verbs;
direct and indirect objects that are defined in a glossary/data dictionary;
as few adjectives and adverbs as possible.
Do not write process specifications from a physical code-like perspective (i.e., in the worst case they contain detailed instructions as to control, working storage, physical media considerations, switch settings, etc.) as this will cause a significant decrease in "user friendliness". Structured English that is simply pseudocode for program source code defeats the purpose of analysis, limits design options and flexibility, and creates a maintenance headache for analysts.
Structured English also uses a limited statement syntax consisting of simple imperative sentences, closed-end decisions, closed-end repetitions, or any combinations of the above. All Process Specification functions must be described using only the three constructs discussed below. These constructs have only one entry point and one exit point; they create a linear presentation (one construct is done completely before another construct is begun), thus avoiding the confusion inherent in narratives that skip around. Procedures written in Structured English should be numbered in a consistent manner for readability and referencing purposes. Numbering the constructs helps facilitate citing or referencing within the Procedures Section of a process specification.
Use this construct to present a sequence of steps to be taken in order. It consists of imperative sentences, each of which is ended with a period and is indented equally in the construct. Indentation signifies the relationships and dependencies of statements within the various constructs. Figure 2.5.11-23 provides an example of a sequence construct.
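As an illustration (the official example is Figure 2.5.11-23; the data names here are hypothetical), a sequence construct might read:

```
1. Compute GROSS-PAY as HOURS-WORKED times HOURLY-RATE.
2. Compute TAX-WITHHELD as GROSS-PAY times WITHHOLDING-RATE.
3. Compute NET-PAY as GROSS-PAY minus TAX-WITHHELD.
```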
Use this type of construct to select an action from among mutually exclusive alternative actions based upon the outcome of a specific condition. There are two versions of this construct, the If-Then-Else format and the Select-Case format.
When there are only two alternatives, use the If-Then-Else format. When there is no action to be taken under one alternative, write the "Then" or "Else" portion of the decision showing no action, or omit the "Else" portion entirely (unless it is retained as an end marker for readability). Figure 2.5.11-24 illustrates the If-Then-Else format.
Figure 2.5.11-25 provides an example of an If-Then-Else Decision Construct.
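For illustration (the official example is Figure 2.5.11-25; the data names here are hypothetical), an If-Then-Else decision might read:

```
1. IF OVERPAYMENT-AMOUNT is greater than zero
2.    THEN issue REFUND-NOTICE.
3.    ELSE issue BALANCE-DUE-NOTICE.
```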
Use the Select-Case format when there are more than two alternatives, each mutually exclusive. Number each case, enclose in parentheses, and end with a comma. Once a case is selected and its action taken, ignore the subsequent case statements. Figure 2.5.11-26 provides an example of a Select-Case Decision Construct.
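For illustration (the official example is Figure 2.5.11-26; the data names here are hypothetical), a Select-Case decision might read:

```
1. SELECT the case that applies to FILING-STATUS:
2.    Case (1), FILING-STATUS is "SINGLE":
         apply the SINGLE-RATE-TABLE.
3.    Case (2), FILING-STATUS is "MARRIED-FILING-JOINTLY":
         apply the JOINT-RATE-TABLE.
4.    Case (3), FILING-STATUS is any other value:
         route the record to EXCEPTION-REVIEW.
```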
Use the Repetition Construct to show action performed until some condition occurs to cause the action(s) to cease. Figure 2.5.11-27 illustrates a Repetition Construct.
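For illustration (the official example is Figure 2.5.11-27; the data names here are hypothetical), a repetition construct might read:

```
1. REPEAT for each RETURN-RECORD in RETURN-FILE,
      UNTIL the end of RETURN-FILE is reached:
2.    Validate TAXPAYER-ID.
3.    Add 1 to RETURN-COUNT.
```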
Decision tables are a concise and efficient way of specifying processes that have numerous combinations of conditions from which to choose in order to perform a specific action. Figure 2.5.11-28 illustrates the format of a decision table. Exhibit 2.5.11-3 illustrates a procedure and a decision table that expresses the same procedure.
Figure 2.5.11-29 provides an example of a decision table.
A decision table comprises four sections:
Conditions Section: lists the conditions or statements that affect the process being specified and must be considered in performing the process being specified.
Condition Entries Section: indicates whether the condition is met or not. This section is subdivided into vertical columns that are generally numbered and are known as rules. The blocks within the condition entries section, when filled in, contain the responses to the conditions listed in the conditions section.
Actions Section: lists those actions that may be taken in order to perform the process.
Action Entries Section: subdivided into the same vertical columns, or rules, as the condition entries section. The blocks contain either an "X" (meaning perform the action) or a " " or "=" (meaning the action does not apply).
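The mechanics of the four sections can be sketched in code. The following is a minimal illustration only, using hypothetical conditions, actions, and rules (a two-condition approval procedure); it is not the procedure shown in the exhibits:

```python
# Minimal sketch of a decision table: conditions, condition entries,
# actions, and action entries. Names and rules are hypothetical.

CONDITIONS = ["amount_over_limit", "manager_available"]
ACTIONS = ["manager_approves", "director_approves"]

# One (condition entries, action entries) pair per rule (i.e., per column).
RULES = [
    ({"amount_over_limit": "N", "manager_available": "Y"},
     {"manager_approves": "X", "director_approves": "="}),
    ({"amount_over_limit": "N", "manager_available": "N"},
     {"manager_approves": "=", "director_approves": "X"}),
    ({"amount_over_limit": "Y", "manager_available": "Y"},
     {"manager_approves": "=", "director_approves": "X"}),
    ({"amount_over_limit": "Y", "manager_available": "N"},
     {"manager_approves": "=", "director_approves": "X"}),
]

def actions_for(responses):
    """Return the actions marked 'X' in the rule whose condition
    entries match the given Y/N responses."""
    for cond_entries, action_entries in RULES:
        if cond_entries == responses:
            return [a for a in ACTIONS if action_entries[a] == "X"]
    raise ValueError("no rule covers these responses")

print(actions_for({"amount_over_limit": "N", "manager_available": "Y"}))
# → ['manager_approves']
```

Note that the rules exhaust every combination of condition responses, which is what makes a decision table complete and unambiguous.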
Exhibit 2.5.11-3 illustrates the use of a narration and a decision table to describe the same procedure. For the purposes of this exhibit, a procedure to determine who will approve the selection and acquisition of contractual services is used.
A decision tree is a graphic representation of a decision table; it communicates the same information in a different form. Figure 2.5.11-29 provides an example of a decision table, and Figure 2.5.11-30 depicts a decision tree that expresses the same decisions.
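For illustration only (the official example is Figure 2.5.11-30; the conditions here are hypothetical), a two-condition approval procedure might be drawn as the following tree, with each branch corresponding to one rule of the equivalent decision table:

```
Is the estimated cost over the approval limit?
    NO:
        Is the branch chief available?
            YES: branch chief approves.
            NO:  division director approves.
    YES:
        division director approves.
```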
Some processes are shared by different systems or are used more than once within a single system. These common processes are designated by adding the suffix "(COMMON)", capitalized and placed in parentheses, to the end of the process name.
In addition to the three attributes which apply to all process specifications, there are three cross-references associated with common process specifications:
Maintained By - the organizational identification (branch, section, etc.) of those who have maintenance responsibility;
Last Revision - the month, day, and year of the latest revision;
Used By - the organizations that use the process specification.
The first appearance of the common process specification in a data flow diagram set must list the description attribute; may list the constraints and procedures attributes; and must list the Maintained By and Last Revision cross-references. Subsequent occurrences of the common process specification need only list the Description attribute.
Each common process specification is the maintenance responsibility of one project team or organizational area; and only those responsible will be allowed to make changes to a specification. Any organizational area may use the specification, but the group with the maintenance responsibility will notify the users of any changes and revisions.
Exhibit 2.5.11-4 provides an example of the first and a subsequent occurrence of a common process specification.
The primary deliverable that results from structured analysis is a functional specification package. After a system is defined using the tools of structured analysis, the resulting documentation must be packaged to derive the functional specification package.
Where applicable, naming standards shall be applied. Names, numbers, and all other identifiers shall be consistent among deliverables.
Standard identifying information should be provided on every page of the analysis documents. Include the following types of information when applicable:
Functional Specification Package number;
Responsible organization (e.g., branch/section);
Project Name/Project Number.
A functional specification package covers one context diagram; its scope should be consistent with the scope of that context diagram. The functional specification package comprises:
data flow diagrams, which graphically depict business processes and the data interfaces among these processes;
data definitions, which define and document the interfaces on the data flow diagrams;
process specifications, which specify the data transformations among the business processes.
There are two allowable ways of ordering the data flow diagrams and process specifications within a functional specification package. The data flow diagrams should be sequenced in ascending numeric order, and the process specifications placed either immediately behind their associated data flow diagram or grouped together behind the entire data flow diagram set.
To maintain uniformity among functional specifications packages developed for a system, one of the following methods should be used for sequencing:
Sequence the Process Specifications in the same sequential order they would appear in if they were interspersed with the data flow diagrams; or
Sequence the Process Specifications in ascending numeric order (e.g., 1.0, 2.0, 2.1, 2.1.1, 2.1.2, 2.2, 3.0, 3.1, 3.2, 3.3, 4.0).
Enter all data definitions into a system glossary and/or data dictionary. If a manual system glossary is being used, package it into the functional specification package.
In order to fully document a system, add other sections to a functional specification package as needed. Organize graphic CRT screen displays for projects that use CRT terminals for processing into a Screen Display Section.
Organize graphic formats for printed inputs and outputs, such as reports (e.g., error registers), into a Print Report Layouts Section.
Add a section providing centralized information about the system's sources of input data and sinks for system output data.
Add a Table of Contents and/or cross-referencing material to aid the reader in understanding and following the functional specification package. Develop an alphabetical index of the names of external entities, processes, data groups, and data elements, or a listing of external inputs, outputs, and processes, to help organize a functional specification package.