The working program of the training practice for the professional module PM.01 "Development of software modules for computer systems"

PROFESSIONAL MODULE
"Development of software modules
for computer systems"

MDK

System Programming
Application programming

Goals and objectives of the module

know:
the main stages of software development;
the basic principles of structured and object-oriented
programming technology;
the basic principles of debugging and testing
software products;
methods and tools for developing technical
documentation.

Goals and objectives of the module

be able to:
develop the code of software modules
in modern programming languages;
create a program as a separate module
according to a developed algorithm;
debug and test a program
at the module level;
prepare documentation for software tools;
use tools to automate
the preparation of documentation;

Goals and objectives of the module

have practical experience in:
developing an algorithm for a given task and
implementing it with computer-aided design tools;
developing software product code
from a ready-made specification at the module level;
using tools at the debugging stage
of a software product;
testing a software module
according to a specific scenario;

Professional competencies

PC 1.1. Develop specifications for individual components.
PC 1.2. Develop software product code from ready-made
specifications at the module level.
PC 1.3. Debug program modules using specialized
software tools.
PC 1.4. Test software modules.
PC 1.5. Optimize the program code of a module.
PC 1.6. Develop components of design and technical
documentation using graphical specification languages.

Interdisciplinary connections

Informatics and ICT;
Information technology;
Computer systems architecture;
Fundamentals of programming;
Operating systems.

Stages of study

Classroom lessons
Practical lessons
Independent work
Course project
Educational practice
Internship
Qualifying exam (portfolio defense)

Application programming

Section 1. Basic principles of application development

Topic 1.1. Basic concepts of application programming

Questions

Software classification
Software life cycle
Program development stages
Program documentation

What is programming?

Programming, in the broad sense, covers all the
technical operations required to create a program,
including requirements analysis and all stages of
development and implementation. In the narrow sense,
it is the coding and testing of a program within
some specific project.

What is software?

Software is a general term for the
"intangible" (as opposed to physical)
components of a computer system.
In most cases it refers to the
programs executed by a computer system, to
emphasize their difference from the hardware
of the same system.

What classes of software do you know?

system software: operating systems; device drivers;
various utilities;
software for developers: programming environments;
translators and interpreters; CASE tools;
program libraries;
software for end users: word processors;
spreadsheets; graphic editors;
solvers of mathematical problems;
training and testing systems;
computer games; application programs.

What is an application program?

An application program is any program that
contributes to the task assigned to the computer
within a given organization and makes a direct
contribution to the accomplishment of that task.

What can be called a software system?

A software system is a set of solutions to many
different but related tasks (e.g., an OS or a DBMS).
More narrowly specialized programs
(a text editor, a compiler, etc.) are not called systems.

The software life cycle is the entire period of existence
of a software system, starting from the development of the
initial concept of the system and ending with its obsolescence.

SOFTWARE LIFE CYCLE

STAGES OF CREATING PROGRAMS

System analysis.
An analysis of the requirements for the software system,
based on a preliminary study of all information flows in
the traditional workflow; it is carried out
in the following sequence:
a) clarification of the types and sequence of all work;
b) definition of the goals to be achieved by the
program being developed;
c) identification of analogues that achieve similar goals,
with their advantages and disadvantages.

STAGES OF CREATING PROGRAMS

External specification
Consists of defining the external specifications, i.e.
descriptions of the input and output information,
the forms of their presentation, and the ways the information is processed.
It is carried out in the following sequence:
a) setting the task for the development of the new program;
b) assessment of how well the developed software product
achieves its goals.
If necessary, steps 1-2 can then be repeated until a
satisfactory view of the program system is achieved, with a
description of the functions it performs and some clarity
about how it will operate.

STAGES OF CREATING PROGRAMS

Program design
A set of activities is carried out to produce a description of the program.
The input to this phase is the requirements set out in the specification
developed in the previous step; decisions are made about how to meet the
requirements of that specification. This phase of program development is
divided into two stages:
a) architectural design. This is the development of a description of the
program in general form. The description contains information about possible
options for structuring the software product (either as several programs or
as several parts of one program), as well as about the main algorithms and
data structures. The results of this work are the final version of the
architecture of the software system, the requirements for the structure of
individual program components, and the organization of files for
inter-program data exchange;
b) detailed design. At this stage, the architectural description of the
program is detailed to a level that makes work on its implementation
(coding and assembly) possible. For this purpose, the module specifications
are compiled and checked, descriptions of the module logic are written,
and the final program implementation plan is drawn up.

STAGES OF CREATING PROGRAMS

Coding and testing
Carried out for individual modules, followed by the
assembly of the finished modules until the complete
program is obtained.
Comprehensive testing
Development of operational
documentation
Acceptance and other kinds
of tests

STAGES OF CREATING PROGRAMS

Program correction
Based on the results of the previous tests.
Delivery to the customer
The finished software product is handed over to the customer.
Replication

STAGES OF CREATING PROGRAMS

Program support
Includes all technical operations required to use
the program in production mode. The program is modified,
corrections are made to the working documentation,
the program is improved, etc.
Because of the wide scale of such operations,
support is an iterative process that should be
carried out after the release of the software product
for general use no less than before it.

Questions

1. Basic concepts of programming. Software classes.
2. The software life cycle.
3. Stages of creating programs.

PROGRAM DOCUMENTATION

Every design stage culminates in the drafting
of the corresponding documents, so an important
element of designing software applications is
the preparation of software documentation.

PROGRAM DOCUMENTATION

A program specification is an exact description of
the result to be achieved with the help of the program.
This description must state exactly what the program
should do, without specifying how it should do it.

PROGRAM DOCUMENTATION

For programs that finish their work with some result,
I/O specifications are usually compiled, describing the
desired mapping of the set of input values into the set
of output values.
For cyclic programs (for which no end point is specified),
specifications are developed in which the focus is on the
individual functions performed by the program during its
cyclic operation.
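
As an illustration only (the function name and its fields are invented for this example), the I/O specification of a simple terminating program can be sketched as a typed function whose contract describes the desired mapping of input values into the output value:

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """I/O specification (sketch).

    Input:  principal > 0 (loan amount), annual_rate >= 0 (yearly rate as a fraction),
            months > 0 (integer number of payments).
    Output: the fixed monthly payment that fully repays the loan
            after the given number of payments at the given rate.
    """
    if annual_rate == 0:
        return principal / months
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)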

PROGRAM DOCUMENTATION

The primary specification describes:
the objects involved in the task (what the program does
and what the person working with this program does);
processes and actions (design procedures and human actions,
algorithms for solving the problems in the machine, the order
of information processing, the amount of main memory required
for the program to run);
input and output data and their organization
(for example, a dialog script with screen forms, or the
organization of files specifying record field lengths and
the maximum amount of information in the files);
instructions for using the future program.

PROGRAM DOCUMENTATION

A distinction is made between external program
documentation, which is agreed with the customer,
and intermediate internal project documentation.
When compiling program documentation, the external
specifications are developed first and then
the internal ones.

PROGRAM DOCUMENTATION

External specifications include the specifications of
the input and output data and their organization,
reactions to exceptional situations, and a definition of
what the person does (by what algorithms they work and
where they get information from) and what the machine does.

PROGRAM DOCUMENTATION

Internal specifications include a description of the
internal program data (variables, especially structured
ones) and descriptions of the algorithms of the whole
program and its parts.
Internal specifications are given together with a
description of the architecture of the software complex
and the internal structure of each individual
program component.

Homework

Make a list of types of documents for
ensuring the software life cycle.

SYSTEM-WIDE PRINCIPLES OF CREATING PROGRAMS

the inclusion principle, which provides that the
requirements for the creation, operation and development
of the software are determined by the more complex
system that includes it;
the principle of systemic unity, which means that at all
stages of the creation, operation and development of the
software its integrity is ensured by the connections
between subsystems and by the functioning of the
control subsystem;
the development principle, which provides for the
possibility of expanding and improving the software
components and the links between them;

SYSTEM-WIDE PRINCIPLES OF CREATING PROGRAMS

the principle of complexity, which means that the
software ensures the connectivity of information
processing both for individual elements and for the
entire volume of data as a whole at all stages of processing;
the principle of informational unity, i.e. all subsystems,
support tools and software components use common
terms, symbols, conventions and
methods of presentation;

SYSTEM-WIDE PRINCIPLES OF CREATING PROGRAMS

the principle of compatibility, which means that the
languages, symbols, codes and software tools are mutually
agreed, ensure the joint functioning of all subsystems
and keep the structure of the system as a whole open;
the principle of invariance, which defines the invariance
of the software subsystems and components with respect to
the information being processed, i.e. they are universal or typical.

TECHNOLOGIES AND PROGRAMMING PARADIGMS

Programming technologies are proven strategies for
creating programs, presented in the form of methods
together with information resources and descriptions of
design procedures and design operations.
There are the technology of structured programming,
the technology of designing programs with a rational
data structure, the technology of object-oriented
programming, and the technology of visual programming.

TECHNOLOGIES AND PROGRAMMING PARADIGMS

Programming paradigms (concepts,
systems of beliefs) are different
approaches to writing programs.
There are four main paradigms
that describe most of today's
programming methods: imperative,
applicative, rule-based
and object-oriented.

TECHNOLOGIES AND PROGRAMMING PARADIGMS

Imperative paradigm
This model follows from the features of the hardware of a
standard computer, which executes instructions (commands)
sequentially.
The main type of abstraction used in this paradigm is the
algorithm. Many operator-oriented programming languages
have been developed on its basis.
A program in such a language consists of a sequence of
statements, the execution of each of which changes the
value of one or more memory cells. In general, the syntax
of such a language is:
statement_1;
statement_2;
...
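
A minimal Python sketch of this style (the variables are invented for the example): a sequence of statements, each of which changes the value stored in one or more "memory cells".

# Imperative style: the state is changed step by step by a sequence of statements.
n = 5
total = 0
i = 1
while i <= n:
    total = total + i   # each statement updates one or more memory cells
    i = i + 1
print(total)            # prints 15, the sum 1 + 2 + ... + 5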

TECHNOLOGIES AND PROGRAMMING PARADIGMS

Applicative paradigm
This paradigm is based on considering the function that
the program computes.
The question is: what function must be applied to the
initial state of the machine (by choosing an initial set
of variables and combining them in a certain way) in
order to get the desired result?
Languages that emphasize this view of computation are
called applicative, or functional. The syntax of such a
language typically looks like this:
function_n (... function_2 (function_1 (data)) ...)
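
The same computation sketched in the applicative (functional) style, following the pattern function_n(... function_1(data) ...); the helper names are invented for the example.

from functools import reduce

# Applicative style: the result is obtained by composing function applications.
def upto(n):
    return range(1, n + 1)

def total(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)

print(total(upto(5)))   # total(upto(5)) == 15; no variable is ever reassigned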

TECHNOLOGIES AND PROGRAMMING PARADIGMS

Rule-based paradigm
Languages based on this paradigm check for the presence of
the necessary enabling condition and, when it is detected,
perform the corresponding action.
Executing a program in such a language is similar to
executing a program written in an imperative language.
However, the statements are not executed in the order in
which they are defined in the program; the order of
execution is determined by the enabling conditions.
The syntax of such languages is as follows:
enabling condition_1 -> action_1
enabling condition_2 -> action_2
...
enabling condition_n -> action_n
Sometimes rules are written as "action if enabling
condition", with the action to be performed written on the left.
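
A small, purely illustrative Python sketch of the rule-based idea: each rule pairs an enabling condition with an action, and the order of execution is determined by which conditions currently hold, not by the textual order of the rules.

# Rule-based style: "enabling condition -> action" pairs; whichever rule's
# condition is satisfied by the current state fires next.
state = {"temperature": 95, "alarm_on": False}

rules = [
    (lambda s: s["temperature"] > 90 and not s["alarm_on"],
     lambda s: s.update(alarm_on=True)),
    (lambda s: s["temperature"] <= 90 and s["alarm_on"],
     lambda s: s.update(alarm_on=False)),
]

def run(state, rules):
    fired = True
    while fired:                      # keep firing rules until no condition holds
        fired = False
        for condition, action in rules:
            if condition(state):
                action(state)
                fired = True
                break
    return state

print(run(state, rules))              # {'temperature': 95, 'alarm_on': True}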

TECHNOLOGIES AND PROGRAMMING PARADIGMS

Object-oriented paradigm
In this model, complex data objects are built, and a
certain limited set of methods is defined for operating
on them. Created objects can inherit the properties of
simpler objects.
Thanks to this capability, object-oriented programs have
the high efficiency of programs written in imperative
languages, while the ability to develop various classes
that use a limited set of data objects provides the
flexibility and reliability characteristic of
applicative languages.
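
A brief illustrative sketch of the object-oriented model: a data object exposes a limited set of methods, and a derived object inherits the behaviour of a simpler one (the classes are invented for the example).

class Account:
    """A data object with a limited set of operations on it."""
    def __init__(self, balance=0):
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

class SavingsAccount(Account):
    """Inherits all the behaviour of Account and adds one method."""
    def add_interest(self, rate):
        self.deposit(self._balance * rate)

acc = SavingsAccount(100)
acc.add_interest(0.05)
print(acc.balance())   # 105.0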

Translation (compilation)
This is a method of converting programs written in
high-level languages into equivalent programs in the
machine language used by the computer.
After that, the interpreter built into the microprocessor
hardware directly executes the program translated into
machine code. The advantage of this method is very fast
program execution once the translation process is complete.

TRANSLATION AND INTERPRETATION OF PROGRAMS

A translator is a language processor that accepts
programs in some source language as input and produces
as output programs that are equivalent in functionality
but written in another, so-called object language
(which can also be of an arbitrary level).
An assembler is a translator whose source language is a
symbolic representation of machine code (assembly
language) and whose object language is a variant of the
machine language of some real computer.

TRANSLATION AND INTERPRETATION OF PROGRAMS

A compiler is a translator whose source language is a
high-level language and whose object language is close to
the machine language of a real computer: either assembly
language or some variant of machine language.
A linker is a translator whose source language consists
of machine-language programs in relocatable form together
with data tables indicating the points at which the
relocatable code must be modified in order to become
executable. Its object language consists of machine
instructions ready for execution. The task of the linker
is to create a single executable program that uses
consistent addresses as indicated in the tables.

TRANSLATION AND INTERPRETATION OF PROGRAMS

A preprocessor (macroprocessor) is a translator whose
source language is an extended form of some high-level
language (such as Java or C++) and whose object language
is the standard version of that language. The object
program created by the preprocessor is ready for
translation and execution by the usual processors of the
original, standard language.

TRANSLATION AND INTERPRETATION OF PROGRAMS

Interpretation (software simulation)
This is a method in which a program (the interpreter),
executed on the hardware of the computer, creates a
virtual computer with a high-level machine language.
The interpreter decodes and executes each statement of
the high-level-language program in the appropriate
sequence and produces the resulting output data defined
by that program.
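
A toy illustration of software simulation: a tiny interpreter for a made-up three-instruction "language" that decodes and executes each statement of the source program in sequence on top of the host machine.

# Each "statement" of the source program is decoded and executed in turn.
def interpret(program, env=None):
    env = {} if env is None else env
    for op, *args in program:
        if op == "set":            # set <name> <constant>
            env[args[0]] = args[1]
        elif op == "add":          # add <target> <name> <name>
            env[args[0]] = env[args[1]] + env[args[2]]
        elif op == "print":        # print <name>
            print(env[args[0]])
        else:
            raise ValueError(f"unknown instruction: {op}")
    return env

interpret([("set", "a", 2), ("set", "b", 3), ("add", "c", "a", "b"), ("print", "c")])  # prints 5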

TRANSLATION AND INTERPRETATION OF PROGRAMS

Mixed implementation systems
First, the program is translated from its original form
into a form that is more convenient for execution. This is
usually done by creating several independent parts of the
program, called modules.
During the loading phase, these independent parts are
combined with a set of run-time routines that implement
the software-simulated (interpreted) operations. This
results in an executable form of the program whose
statements are decoded and executed by interpretation.
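
Python itself is an everyday illustration of such a mixed scheme: the source text is first translated into an intermediate form (bytecode), which is then executed by interpretation; the built-in compile()/exec() pair makes the two phases visible.

import dis

source = "x = 2 + 3\nprint(x * 10)"

code = compile(source, "<demo>", "exec")   # phase 1: translation to bytecode
dis.dis(code)                              # inspect the intermediate form
exec(code)                                 # phase 2: interpretation; prints 50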

ENVIRONMENTS AND IMPLEMENTATIONS OF PROGRAMMING LANGUAGES

A programming environment is a set of tools used in
the development of software.
This set usually consists of a file system, a text
editor, a linker and a compiler. In addition, it may
include a large number of tool suites with a uniform
user interface.

Exercise

List and describe the various
programming environments.

The procedure for developing a software module.

  • 1. study and verification of the module specification, choice of the programming language (that is, by studying the specification the developer finds out whether it is clear to him and whether it describes the module sufficiently; he then chooses the programming language in which the module will be written, although the language may already be the same for the entire PS)
  • 2. choice of the algorithm and data structures (here it is determined whether any algorithms are already known for solving the given problem and, if so, whether one of them can be used)
  • 3. programming of the module (writing the program code)
  • 4. polishing the text of the module (editing the existing comments and adding further comments to ensure the required quality)
  • 5. checking the module (the logic of the module is checked and its operation is debugged)

The following methods of program module control are applied:

  • static checking of the module text (the text is read from beginning to end in order to find errors in the module; usually, in addition to the module developer, one or even several other programmers take part in such a check; it is recommended that the errors detected during this check be corrected not immediately, but after the reading of the module text has been completed)
  • end-to-end tracing (manually walking through the execution of the module, statement by statement in the sequence that follows from the module logic, on a certain set of tests)

  • 6. compilation of the module.

Structured programming.

The most popular programming technique today is top-down structured programming.

Structured programming is the process of breaking an algorithm down, step by step, into smaller and smaller parts in order to obtain elements for which specific instructions can easily be written.

Two principles of structured programming:

  • 1. sequential detailing "from top to bottom"
  • 2. limited base set of structures for constructing algorithms of any degree of complexity

Structured programming requirements:

  • 1. The program should be drawn up in small steps, so the complex task is divided into fairly simple, easily perceived parts.
  • 2. program logic should be based on a minimum number of sufficiently basic control structures (linear, branching and cyclic structures)

The main properties and advantages of structured programming:

  • 1. Reducing the complexity of programs
  • 2. the possibility of demonstrating the correctness of programs at various stages of solving the problem
  • 3. visibility of programs
  • 4. ease of modification (change) of programs.

Modern programming tools should provide maximum protection against possible developer errors.

Here we can draw an analogy with the development of methods of driving vehicles. At first, safety was ensured through the development of traffic rules. Then there was a system of road markings and regulation of intersections. And, finally, traffic interchanges began to be built, which, in principle, prevent the intersection of traffic flows of cars and pedestrians. However, the means used should be determined by the nature of the problem being solved: for a country road, it is quite enough to observe a simple rule - "look under your feet and around."

The basic idea of structured programming: the program should be a set of blocks, combined into a hierarchical tree structure, each of which has one input and one output.

Any program can be built using only three basic types of blocks:

  • 1. functional block - a single linear statement or a sequence of such statements;
  • 2. branching block - an If construct;
  • 3. generalized loop - a While-type construct.

It is essential that each of these structures has only one input and one output in terms of control. Thus, the generalized operator also has only one input and one output.
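
A short sketch (the task is invented for the example) showing that these three block types are enough for a realistic routine: a functional block, a generalized While loop and an If branch, each with a single entry and a single exit.

def count_matching(records, predicate):
    # functional block: a linear sequence of statements
    count = 0
    index = 0
    # generalized loop: a While construct with one entry and one exit
    while index < len(records):
        # branching block: an If construct
        if predicate(records[index]):
            count = count + 1
        index = index + 1
    return count

print(count_matching([3, 7, 12, 5], lambda x: x > 4))   # 3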

Structured programming is sometimes referred to as "programming without GO TO". However, the point here is not the GO TO statement, but its indiscriminate use. Very often, when implementing structured programming in some programming languages, the transition operator (GO TO) is used to implement structural constructs without reducing the main advantages of structured programming. It is the "non-structural" jump statements that confuse the program, especially the jump to a statement located in the module text above (earlier) the jump statement being executed. However, the attempt to avoid the jump statement in some simple cases can lead to structured programs that are too cumbersome, which does not improve their clarity and contains the danger of additional errors in the text of the module. Therefore, it may be recommended to avoid using the jump statement wherever possible, but not at the cost of program clarity.

Useful cases of using the transition operator include exiting a loop or procedure on a special condition that "early" terminates the work of this loop or this procedure, i.e. terminating the work of some structural unit (generalized operator) and thereby only locally violating the structuring of the program. Great difficulties (and complication of the structure) are caused by the structural implementation of the response to exceptional (often erroneous) situations, since this requires not only an early exit from the structural unit, but also the necessary processing of this situation (for example, the issuance of appropriate diagnostic information). The exception handler can be located at any level of the program structure, and it can be accessed from different lower levels. Quite acceptable from a technological point of view is the following "non-structural" implementation of the response to exceptional situations. Exception handlers are placed at the end of one or another structural unit, and each such handler is programmed in such a way that, after completing its work, it exits from the structural unit at the end of which it is placed. Such a handler is called by the jump operator from the given structural unit (including any nested structural unit).

Generally speaking, the main thing in structured programming is the competent compilation of the correct logical scheme of the program, the implementation of which by language means is a secondary matter.

    J. Hughes, J. Michtom. A Structured Approach to Programming. - M.: Mir, 1980. - pp. 29-71.

    W. Turski. Programming Methodology. - M.: Mir, 1981. - pp. 90-164.

    E.A. Zhogolev. Technological foundations of modular programming // Programming, 1980, no. 2. - pp. 44-49.

    R.C. Holt. Structure of Computer Programs: A Survey // Proceedings of the IEEE, 1975, 63(6). - pp. 879-893.

    G. Myers. Software Reliability. - M.: Mir, 1980. - pp. 92-113.

    I. Pyle. Ada - a language for embedded systems. - M.: Finance and Statistics, 1984. - pp. 67-75.

    M. Zelkowitz, A. Shaw, J. Gannon. Principles of Software Engineering and Design. - M.: Mir, 1982. - pp. 65-71.

    A.L. Fuksman. Technological aspects of creating software systems. - M.: Statistics, 1979. - pp. 79-94.

  1. Lecture 8. Development of a software module

  2. The procedure for developing a software module. Structured programming and step-by-step detailing. The concept of pseudocode. Software module control.

  3. 8.1. The procedure for developing a software module.

  4. When developing a software module, it is advisable to adhere to the following order:

    study and verification of the module specification, choice of the programming language;

    choice of algorithm and data structure;

    module programming;

    polishing the text of the module;

    module check;

    module compilation.

    The first step in the development of a software module is to a large extent a bottom-up check of the program structure: by studying the specification of the module, the developer must make sure that it is understandable to him and sufficient for developing the module. At the end of this step the programming language is chosen: although the programming language may already be fixed for the entire PS, in some cases (if the programming system allows it) another language may be chosen that is more suitable for implementing this module (for example, assembly language).

    At the second step in the development of a software module, it is necessary to find out if any algorithms are already known for solving the problem posed or close to it. And if there is a suitable algorithm, then it is advisable to use it. The choice of suitable data structures that will be used when the module performs its functions largely determines the logic and quality indicators of the module being developed, so it should be considered a very important decision.

    At the third step, the text of the module is constructed in the chosen programming language. The abundance of details of all kinds that must be taken into account when implementing the functions given in the module specification can easily lead to the creation of a very confusing text containing many errors and inaccuracies. Finding errors in such a module and making the required changes to it can be a very labour-intensive task. Therefore it is very important to use a technologically justified and practically proven programming discipline for constructing the module text. Dijkstra was the first to draw attention to this, formulating and substantiating the basic principles of structured programming. Many of the programming disciplines widely used in practice are based on these principles. The most common is the discipline of step-by-step detailing, which is discussed in detail in Sections 8.2 and 8.3.

    The next step in the development of the module is related to bringing the text of the module to the final form in accordance with the PS quality specification. When programming a module, the developer focuses on the correct implementation of the module's functions, leaving comments unfinished and allowing some violations of the requirements for the style of the program. When polishing the text of a module, he should edit the comments in the text and possibly include additional comments in order to provide the required quality primitives. For the same purpose, the program text is edited to meet stylistic requirements.

    The module verification step is a manual check of the module's internal logic before its debugging (which uses its execution on a computer); it implements the general principle, formulated for the programming technology under discussion, that the decisions made at every stage of PS development must be controlled (see Lecture 3). Module verification methods are discussed in Section 8.4.

    And finally, the last step of module development means completing module validation (using the compiler) and moving on to the module debugging process.

  5. 8.2. Structured programming.

  6. When programming a module, it should be borne in mind that the program must be understandable not only to the computer but also to a person: the module developer, the people checking the module, the test developers preparing tests for debugging the module, and the PS maintainers making the required changes to the module will all have to analyse the logic of the module repeatedly. In modern programming languages there are enough tools to confuse this logic as much as you like, thereby making the module difficult for a person to understand and, as a result, making it unreliable or hard to maintain. Therefore care must be taken to select appropriate language tools and to follow a certain programming discipline. Dijkstra was the first to draw attention to this, and he proposed building a program as a composition of several types of control constructs (structures), which can greatly increase the comprehensibility of the program logic. Programming using only such constructs was called structured programming.

    The main constructs of structured programming are: follow, branch, and repeat (see Figure 8.1). The components of these constructions are generalized operators (processing nodes) S, S1, S2 and a condition (predicate) P. As a generalized operator, there can be either a simple operator of the programming language used (assignment, input, output, procedure calls), or a program fragment , which is a composition of the main control structures of structured programming. It is essential that each of these structures has only one input and one output in terms of control. Thus, the generalized operator also has only one input and one output.

    It is also very important that these constructions are already mathematical objects (which, in essence, explains the reason for the success of structured programming). It is proved that for each unstructured program it is possible to construct a functionally equivalent (that is, solving the same problem) structured program. For structured programs, some properties can be proved mathematically, which makes it possible to detect some errors in the program. A separate lecture will be devoted to this issue.

    Structured programming is sometimes referred to as "programming without GO TO". However, the point here is not the GO TO statement itself, but its indiscriminate use. Very often, when implementing structured programming in some programming languages (for example, in FORTRAN), the jump operator (GO TO) has to be used to implement the structured constructs without reducing the main advantages of structured programming. It is the "non-structural" jump statements that confuse the program, especially a jump to a statement located in the module text above (earlier than) the jump statement being executed. However, attempting to avoid the jump statement in some simple cases can lead to structured programs that are too cumbersome, which does not improve their clarity and carries the danger of additional errors appearing in the module text. Therefore, it may be recommended to avoid the jump statement wherever possible, but not at the cost of program clarity.

    Useful cases of using the transition operator include exiting a loop or procedure on a special condition that "early" terminates the work of this loop or this procedure, i.e. terminating the work of some structural unit (generalized operator) and thereby only locally violating the structuredness of the program. Great difficulties (and complication of the structure) are caused by the structural implementation of the response to exceptional (often erroneous) situations, since this requires not only an early exit from the structural unit, but also the necessary processing (exclusion) of this situation (for example, issuing a suitable diagnostic information). The exception handler can be located at any level of the program structure, and it can be accessed from different lower levels. Quite acceptable from a technological point of view is the following "non-structural" implementation of the response to exceptional situations. Exception handlers are placed at the end of one or another structural unit, and each such handler is programmed in such a way that, after completing its work, it exits from the structural unit at the end of which it is placed. Such a handler is called by the jump operator from the given structural unit (including any nested structural unit).

  7. 8.3. Step-by-step detailing and the concept of pseudocode.

  8. Structured programming makes recommendations about what the text of a module should be. The question arises of how a programmer should proceed in order to construct such a text. Programming of a module is sometimes begun by building its flowchart, which describes the logic of its operation in general terms. However, modern programming technology does not recommend doing this. Although flowcharts give a very visual representation of the logic of a module, when they are coded in a programming language a very specific source of errors arises: mapping essentially two-dimensional structures, such as flowcharts, onto linear text representing the module carries the danger of distorting the logic of the module, especially since it is psychologically quite difficult to maintain a high level of attention when it is examined again. An exception may be the case when a graphical editor is used to build flowcharts and they are formalized in such a way that the text in the programming language is generated from them automatically (as can be done, for example, in R-technology).

    As the main method of constructing the module text, modern programming technology recommends step-by-step detailing. The essence of this method is that the process of developing the module text is divided into a number of steps. At the first step, the general scheme of the module's operation is described in a clear linear textual form (i.e. using very large concepts), and this description is not completely formalized and is oriented towards human perception. At each following step, one of the concepts (we will call it the concept being refined) used, as a rule informally, in some description developed at one of the previous steps is refined and detailed. As a result of such a step, a description of the selected concept being refined is created either in terms of the base programming language (i.e. the language chosen for representing the module), or in the same form as at the first step, using new concepts to be refined. The process ends when all the concepts being refined are eventually expressed in the base programming language. The final step is to obtain the module text in the base programming language by replacing all occurrences of the refined concepts with their given descriptions and expressing all occurrences of the structured programming constructs by means of that programming language.

    Step-by-step detailing involves the use of a partially formalized language, called pseudocode, to represent these descriptions. This language allows all the structured programming constructs, which are formalized, to be used together with informal natural-language fragments representing generalized operators and conditions. Corresponding fragments in the base programming language can also be specified as generalized operators and conditions.

    The head description in pseudocode can be considered the external design of the module in the base programming language, which contains:

    the beginning of the module in the base language, i.e. the first sentence or heading (specification) of this module ;

    a section (set) of descriptions in the base language, and instead of descriptions of procedures and functions - only their external design;

    informal designation of the sequence of module body statements as one generalized statement (see below), as well as informal designation of the sequence of body statements of each procedure or function description as one generalized statement;

    the last sentence (end) of the module in the base language.

    The external design of the description of a procedure or function is presented in a similar way. However, following Dijkstra, it would be better to present the section of descriptions here also with an informal notation, detailing it as a separate description.

    An informal designation of a generalized operator in pseudocode is made in natural language by an arbitrary sentence that reveals its content in general terms. The only formal requirement for the design of such a designation is the following: this sentence must occupy one or more graphic (printed) lines in its entirety and end with a dot.

    For each informal generalized operator, a separate description must be created that expresses the logic of its work (detailing its content) using the composition of the main structures of structured programming and other generalized operators. The heading of such a description should be the informal designation of the generalized operator being refined. The basic constructs of structured programming can be represented as follows (see Figure 8.2). Here, the condition can either be explicitly specified in the underlying programming language as a Boolean expression, or informally represented in natural language by some fragment that outlines the meaning of this condition. In the latter case, a separate description should be created detailing this condition, indicating the designation of this condition (fragment in natural language) as the title.

  9. Fig. 8.2. The basic constructs of structured programming in pseudocode.

  10. Fig. 8.3. Particular cases of the transition operator as a generalized operator.

    As a generalized operator in pseudocode, you can use the above special cases of the transition operator (see Fig. 8.3). The sequence of exception handlers (exceptions) is specified at the end of a module or procedure (function) description. Each such handler looks like:

    EXCEPTION exception_name

    generic_operator

    ALL EXCEPTION

    The difference between an exception handler and a parameterless procedure is as follows: after a procedure is executed, control returns to the statement following its call, whereas after an exception handler is executed, control returns to the statement following the call of the module or procedure (function) at whose end this handler is placed.

    It is recommended at each step of detailing to create a sufficiently meaningful description, but easily visible (visual), so that it is placed on one page of text. As a rule, this means that such a description should be a composition of five or six structured programming constructs. It is also recommended to place nested structures with a shift to the right by several positions (see Fig. 8.4). As a result, you can get a description of the logic of work in terms of visibility that is quite competitive with flowcharts, but has a significant advantage - the linearity of the description is preserved.

  11. DELETE FROM THE FILE THE RECORDS BEFORE THE FIRST ONE

    THAT SATISFIES THE GIVEN FILTER:

    POSITION AT THE BEGINNING OF THE FILE.

    WHILE THE NEXT RECORD DOES NOT SATISFY

    THE FILTER DO

    DELETE THE NEXT RECORD FROM THE FILE.

    ALL WHILE

    IF NO RECORD HAS BEEN DELETED THEN

    PRINT "NO RECORDS DELETED".

    ELSE

    PRINT "n RECORDS DELETED".

    ALL IF

  12. Fig. 8.4. An example of one step of detailing in pseudocode.
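
    For comparison, the same detailing step might end up in the base language roughly as follows; this is only a sketch, with the file modelled as a list of records and the filter as a predicate.

def delete_before_first_match(records, satisfies_filter):
    """Delete records from the start of the file up to the first one
    that satisfies the given filter (a sketch of the step in Fig. 8.4)."""
    deleted = 0
    while records and not satisfies_filter(records[0]):
        del records[0]              # delete the next record from the file
        deleted += 1
    if deleted == 0:
        print("NO RECORDS DELETED")
    else:
        print(f"{deleted} RECORDS DELETED")
    return records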

  13. The idea of step-by-step detailing is sometimes attributed to Dijkstra. However, Dijkstra proposed a fundamentally different method of constructing the module text, which seems to us deeper and more promising. First, along with refining the operators, he proposed gradually (step by step) refining (detailing) the data structures used. Second, at each step he suggested creating a certain virtual machine for the detailing and, in its terms, detailing all the concepts being refined for which this machine makes that possible. Thus, Dijkstra proposed, in essence, detailing by horizontal layers, which is a transfer of his idea of layered systems (see Lecture 6) to the level of module development. This method of module development is currently supported by Ada packages and by object-oriented programming tools.

  14. 8.4. Software module control.

  15. The following methods of program module control are applied:

    static check of module text;

    end-to-end tracking;

    proof of the properties of the software module.

    During static checking of the text of a module, this text is read from beginning to end in order to find errors in the module. Usually, in addition to the module developer, one more or even several programmers are involved for such a check. It is recommended that errors detected during such a check be corrected not immediately, but upon completion of reading the text of the module.

    End-to-end tracking is one of the types of dynamic control of the module. It also involves several programmers who manually loop through the execution of the module (statement by statement in the sequence that follows from the logic of the module) on a certain set of tests.

    The next lecture is devoted to proving the properties of programs. It should only be noted here that this method is still very rarely used.

  16. Literature for lecture 8.

  17. 8.2. E. Dijkstra. Notes on Structured Programming // O.-J. Dahl, E. Dijkstra, C.A.R. Hoare. Structured Programming. - M.: Mir, 1975. - pp. 24-97.

    8.3. N. Wirth. Systematic Programming. - M.: Mir, 1977. - pp. 94-164.

  18. Lecture 9

  19. The concept of program justification. Formalization of program properties; Hoare triples. Rules for establishing the properties of the assignment operator, the conditional operator and the compound operator. Rules for establishing the properties of the loop operator; the concept of a loop invariant. Termination of program execution.

  20. 9.1. Program justifications. Formalization of program properties.

  21. To improve the reliability of software, it is very useful to supply programs with additional information, using which you can significantly increase the level of control of the software. Such information can be given in the form of informal or formalized statements that are tied to various program fragments. We will call such assertions program justifications. Non-formalized justifications of programs can, for example, explain the motives for making certain decisions, which can greatly facilitate the search for and correction of errors, as well as the study of programs during their maintenance. Formalized justifications make it possible to prove some properties of programs both manually and to control (set) them automatically.

    One of the concepts of formalized program justification currently in use is the so-called Hoare triple. Let S be some generalized operator over the information environment IS, and let P and Q be predicates (assertions) over this environment. Then the notation {P} S {Q} is called a Hoare triple, in which the predicate P is called the precondition and the predicate Q the postcondition with respect to the operator S. The operator (in particular, the program) S is said to have the property {P} S {Q} if, whenever the predicate P is true before the execution of S, the predicate Q is true after the execution of S.

    Simple examples of program properties:

    (9.1) {n=0} n:=n+1 {n=1},

    (9.2) (n

    (9.3) (n

    (9.4) {n>0} p:=1; m:=1;

    WHILE m /= n DO

    m:= m+1; p:= p*m

  22. ALL WHILE {p = n!}.

    To prove the property of the program S, we use the properties of simple operators of the programming language (here we restrict ourselves to the empty operator and the assignment operator) and the properties of control structures (compositions) with which the program is built from simple operators (we restrict ourselves here to the three main compositions of structured programming, see Lecture 8). These properties are usually called program verification rules.

  23. 9.2. Properties of simple operators.

    For the empty operator, the following theorem holds.

    Theorem 9.1. Let P be a predicate over the information environment. Then the property {P} {P} holds (with the empty operator between the precondition and the postcondition).

    The proof of this theorem is obvious: the empty operator does not change the state of the information environment (in accordance with its semantics), so its precondition remains true after its execution.

    For the assignment operator,

    Theorem 9.2. Let the information environment IS consist of the variable X and the rest of the information environment RIS:

  25. Then the property

    {Q(F(X, RIS), RIS)} X := F(X, RIS) {Q(X, RIS)}

    holds, where F(X, RIS) is some single-valued function and Q is a predicate.

    Proof. Let the predicate Q(F(X0, RIS0), RIS0) be true before the execution of the assignment operator, where (X0, RIS0) is some arbitrary state of the information environment IS. Then after the execution of the assignment operator the predicate Q(X, RIS) will be true, since X receives the value F(X0, RIS0) and the state of RIS is not changed by this assignment statement; hence, after the execution of this assignment statement, in this case

    Q(X, RIS) = Q(F(X0, RIS0), RIS0).

    By virtue of the arbitrariness of the choice of the state of the information environment, the theorem is proved.

    An example of a property of an assignment operator is Example 9.1.

  26. 9.3. Properties of the basic structures of structured programming.

  27. Consider now the properties of the main structures of structured programming: following, branching and repetition.

    The properties of succession are expressed by the following

    Theorem 9.3. Let P, Q and R be predicates over the information environment, and let S1 and S2 be generalized operators having, respectively, the properties

    {P} S1 {Q} and {Q} S2 {R}.

    Then the compound operator

    S1; S2

    has the property

    {P} S1; S2 {R}.

    Proof. Let the predicate P be true for some state of the information environment before the execution of the operator S1. Then, by virtue of the property of the operator S1, the predicate Q will be true after its execution; by the semantics of the compound statement, execution then continues with the operator S2. Consequently, after the execution of the operator S2, by virtue of its property, the predicate R will be true, and since the operator S2 completes the execution of the compound statement (in accordance with its semantics), the predicate R will be true after the execution of this compound statement, which is what was to be proved.

    For example, if properties (9.2) and (9.3) hold, then the following property also holds:

    (n

    The branching property is expressed by the following

    Theorem 9.4. Let P, Q and R be predicates over the information environment, and let S1 and S2 be generalized operators having, respectively, the properties

    {P, Q} S1 {R} and {¬P, Q} S2 {R}.

    Then the conditional operator

    IF P THEN S1 ELSE S2 ALL IF

    has the property

    {Q} IF P THEN S1 ELSE S2 ALL IF {R}.

    Proof. Let the predicate Q be true for some state of the information environment before the execution of the conditional operator. If the predicate P is also true, then the execution of the conditional operator in accordance with its semantics is reduced to the execution of the operator S1. By virtue of the property of the operator S1, after its execution (and in this case, after the execution of the conditional operator), the predicate R will be true. If, however, before the execution of the conditional operator, the predicate P is false (and Q is still true), then the execution of the conditional operator in accordance with its semantics is reduced to the execution of the operator S2. By virtue of the property of the operator S2, after its execution (and in this case, after the execution of the conditional operator), the predicate R will be true. Thus, the theorem is completely proved.

    Before proceeding to the property of the repetition construct, we note the following theorem, which will be useful later.

    Theorem 9.5. Let P, Q, P1 and Q1 be predicates over the information environment for which the implications

    P1 => P and Q => Q1

    hold, and let the property {P} S {Q} hold for the operator S. Then the property {P1} S {Q1} holds.

    This theorem is also called the weakening property theorem.

    Proof. Let the predicate P1 be true for some state of the information environment before the execution of the operator S. Then the predicate P will also be true (due to the implication P1=>P). Consequently, by virtue of the property of the operator S, after its execution, the predicate Q will be true, and hence the predicate Q1 (by virtue of the implication Q=>Q1). Thus the theorem is proved.

    The repetition property is expressed by the following

    Theorem 9.6. Let I, P, Q and R be predicates over the information environment for which the implications

    P => I and (I, ¬Q) => R

    hold, and let S be a generalized operator with the property {I} S {I}.

    Then the loop operator

    WHILE Q DO S ALL WHILE

    has the property

    {P} WHILE Q DO S ALL WHILE {R}.

    The predicate I is called the invariant of the loop operator.

    Proof. To prove this theorem, it suffices to prove the property

    {I} WHILE Q DO S ALL WHILE {I, ¬Q}

    (by Theorem 9.5, on the basis of the implications in the conditions of this theorem). Let the predicate I be true for some state of the information environment before the execution of the loop operator. If, in this case, the predicate Q is false, then the loop operator is equivalent to the empty operator (in accordance with its semantics) and, by virtue of Theorem 9.1, the assertion (I, ¬Q) will be true after the execution of the loop operator. If the predicate Q is true before the execution of the loop operator, then, in accordance with its semantics, the loop operator can be represented as the compound operator S; WHILE Q DO S ALL WHILE

    By virtue of the property of the operator S, the predicate I will be true after its execution, and the initial situation for proving the property of the loop operator arises again: the predicate I is true before the execution of the loop operator, but for a different (changed) state of the information environment (for which the predicate Q may be either true or false). If the execution of the loop operator terminates, then, applying the method of mathematical induction, in a finite number of steps we arrive at a situation in which the assertion (I, ¬Q) is true before its execution. In this case, as was proved above, this assertion will also be true after the execution of the loop operator. The theorem is proved.

    For example, for the loop operator from example (9.4) the following property holds:

    {n>0, p=1, m=1} WHILE m /= n DO

    m:= m+1; p:= p*m

    ALL WHILE {p = n!}.

    This follows from Theorem 9.6, since the invariant of this loop operator is the predicate p=m!, and the implications (n>0, p=1, m=1) => p=m! and (p=m!, m=n) => p=n! hold.
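
    The factorial loop of example (9.4), written out in Python with the invariant p = m! checked as an assertion at every iteration; this is merely an executable illustration of the argument, not part of the formal proof.

from math import factorial

def fact_with_invariant(n):
    assert n > 0                     # precondition {n > 0}
    p, m = 1, 1
    assert p == factorial(m)         # invariant I: p = m!
    while m != n:
        m = m + 1
        p = p * m
        assert p == factorial(m)     # I is preserved by the loop body
    assert p == factorial(n)         # postcondition {p = n!}
    return p

print(fact_with_invariant(5))        # 120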

  28. 9.4. Termination of program execution.

  29. One of the program properties that we may be interested in in order to avoid possible errors in the PS is its termination, i.e. the absence of cycling in it for certain initial data. In the structured programs we have considered, only the repetition construct can be the source of looping. Therefore, to prove the termination of a program, it suffices to be able to prove the termination of a loop operator. The following is useful for this.

    Theorem 9.7. Let F be an integer-valued function that depends on the state of the information environment and satisfies the following conditions:

    (1) if the predicate Q is true for a given state of the information environment, then the value of F is positive;

    (2) the value of F decreases whenever the state of the information environment changes as a result of the execution of the operator S.

    Then the execution of the loop operator

    WHILE Q DO S ALL WHILE

    terminates.

    Proof. Let is be the state of the information environment before the execution of the loop operator, and let F(is)=k. If the predicate Q(is) is false, then the execution of the loop operator terminates. If Q(is) is true, then by the condition of the theorem k>0. In this case, the operator S will be executed one or more times. After each execution of the operator S, according to the condition of the theorem, the value of the function F decreases, and since the predicate Q must be true before each execution of the operator S (according to the semantics of the loop operator), the value of the function F at that moment must be positive (according to the condition of the theorem). Therefore, because the function F is integer-valued, the operator S cannot be executed more than k times in this loop. The theorem is proved.

    For example, for the loop operator considered above, the conditions of Theorem 9.7 are satisfied by the function F(n, m) = n - m. Since m=1 before the execution of the loop operator, the body of this loop will be executed (n-1) times, i.e. this loop operator terminates.
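
    The bound function F(n, m) = n - m from this example can also be checked mechanically: it is positive whenever the loop condition holds and strictly decreases after each execution of the loop body, which is exactly what Theorem 9.7 requires (an illustrative sketch).

def fact_terminates(n):
    p, m = 1, 1
    bound = n - m                        # F(n, m) = n - m
    while m != n:
        assert bound > 0                 # (1) positive while the condition Q holds
        m = m + 1
        p = p * m
        new_bound = n - m
        assert new_bound < bound         # (2) strictly decreases after the body S
        bound = new_bound
    return p

print(fact_terminates(5))                # 120; the loop body ran n - 1 = 4 times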

  30. 9.5. An example of a program property proof.

  31. Based on the proven rules for program verification, it is possible to prove the properties of programs consisting of assignment statements and empty statements and using three basic compositions of structured programming. To do this, analyzing the structure of the program and using its pre- and post-conditions, it is necessary to apply a suitable verification rule at each step of the analysis. In the case of repetition composition, it will be necessary to choose an appropriate cycle invariant.

    As an example, let us prove property (9.4). This proof will consist of the following steps.

    (Step 1). n>0 => (n>0, p - any, m - any).

    (Step 2). The following holds:

    {n>0, p - any, m - any} p:=1 {n>0, p=1, m - any}.

    By Theorem 9.2.

    (Step 3). The following holds:

    {n>0, p=1, m - any} m:=1 {n>0, p=1, m=1}.

    By Theorem 9.2.

    (Step 4). The following holds:

    {n>0, p - any, m - any} p:=1; m:=1 {n>0, p=1, m=1}.

    By Theorem 9.3, due to the results of Steps 2 and 3.

    Let us prove that the predicate p=m! is a loop invariant, i.e. that {p=m!} m:=m+1; p:=p*m {p=m!}.

    (Step 5). The following holds: {p=m!} m:=m+1 {p=(m-1)!}.

    By Theorem 9.2, if we represent the precondition in the form {p=((m+1)-1)!}.

    (Step 6). The following holds: {p=(m-1)!} p:=p*m {p=m!}.

    By Theorem 9.2, if we represent the precondition in the form {p*m=m!}.

    (Step 7). The loop invariant holds:

    {p=m!} m:=m+1; p:=p*m {p=m!}.

    By Theorem 9.3, due to the results of Steps 5 and 6.

    (Step 8). The following holds:

    {n>0, p=1, m=1} WHILE m /= n DO

    m:= m+1; p:= p*m

    ALL WHILE {p = n!}.

    By Theorem 9.6, by virtue of the result of step 7 and bearing in mind that (n>0, p=1, m= 1)=>p=m!; (p=m!, m=n)=>p=n!.

    (Step 9). The following holds:

    {n>0, p - any, m - any} p:=1; m:=1;

    WHILE m /= n DO

    m:= m+1; p:= p*m

    ALL WHILE {p = n!}.

    By Theorem 9.3, due to the results of Steps 4 and 8.

    (Step 10). Property (9.4) holds by Theorem 9.5 due to the results of Steps 1 and 9.

  32. Literature for lecture 9.

  33. 9.1. S.A. Abramov. Elements of Programming. - M.: Nauka, 1982. - pp. 85-94.

    9.2. M. Zelkowitz, A. Shaw, J. Gannon. Principles of Software Engineering and Design. - M.: Mir, 1982. - pp. 98-105.

  34. Lecture 10

  35. Basic concepts. Test design strategy. Debugging commandments. Offline debugging and testing of a software module. Comprehensive debugging and testing of software.

  36. 10.1. Basic concepts.

  37. Debugging a PS is an activity aimed at detecting and correcting errors in the PS by means of executing its programs. Testing a PS is the process of executing its programs on a certain set of data for which the result of the application is known in advance or the rules of behaviour of these programs are known. This data set is called a test data set, or simply a test. Thus, debugging can be represented as the repeated iteration of three processes: testing, as a result of which the presence of an error in the PS may be established; searching for the location of the error in the programs and documentation of the PS; and editing the programs and documentation in order to eliminate the detected error. In other words:

    Debugging = Testing + Finding errors + Editing.

    In foreign literature, debugging is often understood only as a process of finding and correcting errors (without testing), the presence of which is established during testing. Sometimes testing and debugging are considered synonymous. In our country, the concept of debugging usually includes testing, so we will follow the established tradition. However, the joint consideration of these processes in this lecture makes the indicated discrepancy not so significant. However, it should be noted that testing is also used as part of the PS certification process (see lecture 14).

  38. 10.2. Principles and types of debugging.

  39. The success of debugging is largely determined by the rational organization of testing. During debugging, mainly those errors are found and eliminated, the presence of which in the PS is established during testing. As already noted, testing cannot prove the correctness of the PS, at best it can demonstrate the presence of an error in it. In other words, it cannot be guaranteed that by testing the software with a practically feasible set of tests, it is possible to establish the presence of every error present in the software. Therefore, two problems arise. First, prepare such a set of tests and apply PS to them in order to detect as many errors as possible in it. However, the longer the testing process (and debugging in general) continues, the greater the cost of the software becomes. Hence the second task: to determine the moment when the debugging of the PS (or its individual components) is completed. A sign of the possibility of the end of debugging is the completeness of the coverage by the tests passed through the PS (i.e., the tests to which the PS is applied) of many different situations that arise during the execution of PS programs, and the relatively rare manifestation of errors in the PS at the last segment of the testing process. The latter is determined in accordance with the required degree of reliability of the PS, specified in the specification of its quality.

    To optimize the test set, i.e. to prepare a set of tests that would allow, for a given number of tests (or for a given time interval allotted for testing), as many errors as possible to be detected, it is necessary, firstly, to plan this set in advance and, secondly, to use a rational strategy for planning (designing) the tests. Test design can begin immediately after the external description of the PS is completed. The possible approaches to developing a test design strategy can be conditionally placed (see Fig. 9.1) between the following two extreme approaches. The left extreme approach is that tests are designed only on the basis of studying the PS specifications (the external description, the description of the architecture and the module specifications). The structure of the modules is not taken into account in any way, i.e. they are treated as black boxes. In fact, this approach requires a complete enumeration of all sets of input data, since when only a part of these sets is used as tests, some sections of the PS programs may not be executed on any test and, consequently, the errors contained in them will not show up. Testing the PS with the full set of input data sets, however, is practically infeasible. The right extreme approach is that tests are designed on the basis of studying the program texts, with the aim of exercising every execution path of every PS program. If we take into account the presence in the programs of loops with a variable number of repetitions, the number of different execution paths of the PS programs may also be extremely large, so that testing all of them is likewise practically infeasible.

    The optimal test design strategy lies within the interval between these extreme approaches, but closer to the left one. It involves designing a significant part of the tests from the specifications, based on the following principles: for each function or capability used - at least one test; for each region and for each boundary of change of any input variable - at least one test; for each special case or for each exception specified in the specifications - at least one test. But it also requires designing some tests from the program texts, based (at a minimum) on the principle that every statement of every PS program must be executed on at least one test.
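    As a small illustration of combining the two approaches (a minimal sketch; the function int_sqrt, its specification and the chosen values are hypothetical, not taken from the lecture), the tests below are derived first from the specification - boundaries, a typical value, an invalid input - and then one more test is added from the program text so that the loop body is executed exactly once.

    import unittest

    def int_sqrt(x: int) -> int:
        # Hypothetical module under test: integer square root for x >= 0.
        if x < 0:
            raise ValueError("x must be non-negative")
        r = 0
        while (r + 1) * (r + 1) <= x:
            r += 1
        return r

    class SpecificationBasedTests(unittest.TestCase):
        # Designed from the specification alone ("black box").
        def test_lower_boundary(self):
            self.assertEqual(int_sqrt(0), 0)

        def test_typical_value(self):
            self.assertEqual(int_sqrt(10), 3)

        def test_exact_square(self):
            self.assertEqual(int_sqrt(49), 7)

        def test_invalid_input(self):
            with self.assertRaises(ValueError):
                int_sqrt(-1)

    class TextBasedTests(unittest.TestCase):
        # Added after studying the program text ("white box"):
        # a test on which the loop body is executed exactly once.
        def test_loop_executed_once(self):
            self.assertEqual(int_sqrt(1), 1)

    if __name__ == "__main__":
        unittest.main()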

    The optimal test design strategy can be refined on the basis of the following principle: for each program document (including the program texts) that is part of the PS, its own tests should be designed with the aim of detecting errors in that document. In any case, this principle should be observed in accordance with the definition of a PS and with the content of the concept of programming technology as a technology for developing reliable PS (see Lecture 1). In this regard, Myers even defines different types of testing depending on the type of program document on the basis of which the tests are built. In our country, two main types of debugging (including testing) are distinguished: autonomous (stand-alone) debugging and complex debugging. Autonomous debugging of a PS means the successive, separate testing of the various parts of the programs included in the PS, with the search for and correction of the errors recorded during this testing; it actually includes debugging each module and debugging the interfacing (pairing) of modules. Complex debugging means testing the PS as a whole, with the search for and correction of the errors recorded during this testing in all the documents (including the texts of the PS programs) relating to the PS as a whole; such documents include the definition of the requirements for the PS, the PS quality specification, the PS functional specification, the description of the PS architecture and the texts of the PS programs.

  40. 10.3. Debugging commandments.

  41. This section provides general recommendations on organizing debugging. But first a phenomenon should be noted that confirms the importance of preventing errors at the earlier stages of development: as the number of detected and corrected errors in a PS grows, so does the relative probability of the existence of errors in it that have not yet been detected. This is explained by the fact that, as the number of errors detected in the PS grows, our estimate of the total number of errors made in it is refined, and hence, to some extent, so is the estimate of the number of errors not yet detected. This phenomenon confirms the importance of early error detection and the need for careful control of the decisions made at every stage of PS development.

    Commandment 1. Consider testing as a key task of software development, entrust it to the most qualified and gifted programmers; it is undesirable to test your own program.

    Commandment 2. A good test is one that has a high probability of finding an error, not one that demonstrates the correct operation of the program.

    Commandment 3. Prepare tests for both correct and incorrect data.

    Commandment 4. Avoid non-reproducible tests, document their passage through the computer; study the results of each test in detail.

    Commandment 5. Connect each module to the program only once; never modify a program to make it easier to test.

    Commandment 6. Re-run all tests associated with checking the operation of any PS program, or with its interaction with other programs, whenever changes have been made to it (for example, as a result of fixing an error).

  42. 10.4. Offline module debugging.

  43. In autonomous debugging, each module is actually tested in some program environment, except in the case when the program being debugged consists of a single module. This environment consists of other modules, some of which are modules of the program being debugged that have already been debugged, and some of which are modules that control the debugging (debug modules, see below). Thus, in autonomous debugging, what is tested is always some program built specifically for testing the module being debugged. This program coincides with the program being debugged only partially, except when the last module of the program being debugged is being debugged. As debugging of the program progresses, an ever larger part of the environment of the next module being debugged will consist of already debugged modules of this program, and when the last module is being debugged, the environment of that module will consist entirely of all the other (already debugged) modules of the program being debugged, without any debug modules, i.e. in this case the program being debugged itself will be tested. This process of building up the program being debugged with debugged modules and modules being debugged is called program integration.

    The debug modules included in the environment of the module being debugged depend on the order in which the modules of the program are debugged, on which module is being debugged, and possibly on which test will be run.

    In bottom-up testing (see Lecture 7), this environment always contains only one debug module (except when the last module of the program being debugged is being debugged), which plays the role of the head of the program under test and is called the leading debug module, or driver. The leading debug module prepares the information environment for testing the module being debugged (i.e. it forms the state of the environment required for testing this module; in particular, it may enter some test data), calls the module being debugged and, after it finishes, issues the necessary messages. When one module is being debugged, different leading debug modules may be written for different tests.

    In top-down testing (see Lecture 7), the environment of the module being debugged contains, as debug modules, simulators (stubs) of all the modules that the module being debugged may call, as well as simulators of those modules that the already debugged modules of the program (included in this environment) may call but which have not yet been debugged. Some of these simulators may change from test to test while one module is being debugged.
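    The sketch below (Python; the module, its dependency and the test data are purely illustrative, not taken from the lecture) shows both kinds of debug modules at once: a simulator (stub) that stands in for a not-yet-debugged module, and a driver that prepares the test state, calls the module being debugged and reports the results.

    def compute_discount(order_id, price_lookup):
        # Module being debugged; price_lookup is normally provided by another,
        # not yet debugged, module and is injected so a simulator can replace it.
        price = price_lookup(order_id)
        return price * 0.9 if price > 100 else price

    def price_lookup_stub(order_id):
        # Simulator (stub): returns fixed test data instead of the real subsystem.
        return {1: 50.0, 2: 200.0}[order_id]

    def driver():
        # Leading debug module (driver): prepares the test state, calls the
        # module being debugged and issues the necessary messages.
        for order_id, expected in [(1, 50.0), (2, 180.0)]:
            result = compute_discount(order_id, price_lookup_stub)
            print(order_id, result, "OK" if result == expected else "FAIL")

    if __name__ == "__main__":
        driver()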

    In fact, the environment of the module being debugged may in many cases contain debug modules of both kinds, for the following reason: both bottom-up and top-down testing have their advantages and disadvantages.

    The advantages of bottom-up testing include

    the ease of preparing tests and

    the possibility of fully implementing the test plan for the module.

    This is because the test state of the information environment is prepared by the leading debug module immediately before the call to the module being debugged. The disadvantages of bottom-up testing are the following:

    test data is prepared, as a rule, not in the form that is designed for the user (except when the last, head, module of the program being debugged is debugged);

    a large amount of debugging programming (when debugging one module, you often have to compose many leading debugging modules for different tests);

    the need for special testing of the interfacing (pairing) of modules.

    The advantages of top-down testing include the following features:

    most tests are prepared in a form designed for the user;

    in many cases, a relatively small amount of debugging programming (module simulators, as a rule, are very simple and each is suitable for a large number, often for all, tests);

    there is no need for special testing of the interfacing (pairing) of modules.

    The disadvantage of top-down testing is that the test state of the information environment before the call to the module being debugged is prepared indirectly: it is the result of applying already debugged modules to the test data or to data issued by simulators. This, firstly, complicates the preparation of tests and requires a highly qualified test developer and, secondly, makes it difficult or even impossible to implement a complete test plan for the module being debugged. This shortcoming sometimes forces developers to use bottom-up testing even in the case of top-down development. More often, however, some modification of top-down testing, or some combination of top-down and bottom-up testing, is used.

    Based on the fact that top-down testing is, in principle, preferable, let us dwell on techniques that allow us to overcome these difficulties to some extent. First of all, it is necessary to organize the debugging of the program in such a way that the modules that perform data entry are debugged as soon as possible - then the test data can be prepared in a form designed for the user, which will greatly simplify the preparation of subsequent tests. This input is by no means always carried out in the head module, so you first have to debug the chains of modules leading to the modules that carry out the specified input (cf. with the method of purposeful constructive implementation in Lecture 7). Until the input modules are debugged, test data is supplied by some simulators: they are either included in the simulator as part of it, or entered by this simulator.

    In top-down testing, some information environment states under which it is required to test the module being debugged may not occur during the execution of the program being debugged for any input. In these cases, it would be possible not to test the module being debugged at all, since the errors found in this case will not appear when the program being debugged is executed under any input data. However, it is not recommended to do this, since when the program being debugged changes (for example, when maintaining the PS), the states of the information environment that were not used for testing the module being debugged may already arise, which requires additional testing of this module (and this, with a rational organization of debugging, could not be done if the module itself has not changed). In order to test the module being debugged in these situations, suitable simulators are sometimes used to create the desired state of the information environment. More often, a modified version of top-down testing is used, in which the modules being debugged are pre-tested separately before being integrated (in this case, a leading debugging module appears in the environment of the module being debugged, along with module simulators that can be accessed by the module being debugged). However, another modification of top-down testing seems to be more appropriate: after the top-down testing of the module being debugged for reachable test states of the information environment is completed, it should be separately tested for the remaining required states of the information environment.

    A combination of bottom-up and top-down testing, known as the sandwich method, is also often used. The essence of this method is to carry out bottom-up and top-down testing simultaneously until the two testing processes meet at some module somewhere in the middle of the structure of the program being debugged. With a reasonable approach, this method makes it possible to take advantage of both bottom-up and top-down testing and to a large extent to neutralize their shortcomings. This effect is a manifestation of a more general principle: the greatest technological effect can be achieved by combining top-down and bottom-up methods of PS development. The architectural approach to PS development (see Lecture 7) is intended precisely to support this: a layer of well-designed and carefully tested modules greatly facilitates the implementation of a family of programs in the corresponding subject area and their subsequent modernization.

    Testing the interfacing (pairing) of modules is very important in autonomous debugging. The point is that the specification of each program module, except for the head module, is used in the program in two situations: first, when developing the text (sometimes one says: the body) of this module and, second, when writing calls to this module in other modules of the program. In both cases, an error may violate the required conformance to the given module specification. Such errors need to be detected and corrected, and this is the purpose of testing the interfacing of modules. In top-down testing, interfacing is tested along the way with each test run, which is considered the strongest advantage of top-down testing. In bottom-up testing, the module being debugged is called not from the modules of the program being debugged but from the leading debug module, and there is a danger that the latter may adapt itself to some "misconceptions" of the module being debugged. Therefore, when starting (in the course of program integration) to debug a new module, it is necessary to test every call it makes to a previously debugged module in order to detect inconsistencies between this call and the body of the corresponding module (and it may well turn out that the previously debugged module is to blame). Thus, testing of a previously debugged module has to be partially repeated under new conditions, and the same difficulties arise as in top-down testing.

    Autonomous testing of a module should be carried out in four successive steps (a small sketch illustrating them is given after Step 4 below).

    Step 1. Based on the specification of the module being debugged, prepare a test for each capability and each situation, for each boundary of the valid ranges of all input data, for each region of data variation, for each region of invalid values of all input data and for each invalid condition.

    Step 2. Check the text of the module to make sure that each direction of any branch will pass on at least one test. Add missing tests.

    Step 3. Verify from the module text that for each loop there is a test for which the loop body is not executed, a test for which the loop body is executed once, and a test for which the loop body is executed the maximum number of times. Add missing tests.

    Step 4. Check the text of the module for its sensitivity to individual special values of the input data - all such values should be included in the tests. Add missing tests.
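    A minimal sketch of the four steps (Python; the module clamp_sum, its limit MAX_LEN and all test values are hypothetical, introduced only for illustration):

    MAX_LEN = 3   # assumed limit taken from the (hypothetical) module specification

    def clamp_sum(values):
        # Module under test: sum of at most MAX_LEN non-negative values.
        if len(values) > MAX_LEN:
            raise ValueError("too many values")
        total = 0
        for v in values:
            if v < 0:
                raise ValueError("negative value")
            total += v
        return total

    # Step 1: tests from the specification - boundaries and invalid conditions.
    assert clamp_sum([]) == 0
    assert clamp_sum([1, 2, 3]) == 6
    try:
        clamp_sum([1, 2, 3, 4])          # invalid: length boundary exceeded
        assert False, "exception expected"
    except ValueError:
        pass

    # Step 2: tests from the module text - both directions of the "v < 0" branch.
    try:
        clamp_sum([-1])
        assert False, "exception expected"
    except ValueError:
        pass

    # Step 3: the loop body executed zero times, once and the maximum number of times.
    assert clamp_sum([]) == 0
    assert clamp_sum([5]) == 5
    assert clamp_sum([1, 1, 1]) == 3

    # Step 4: sensitivity to special input values (here, zeros).
    assert clamp_sum([0, 0, 0]) == 0

    print("all module tests passed")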

  44. 10.5. Complex debugging of the software.

  45. As mentioned above, with complex debugging, the PS as a whole is tested, and tests are prepared for each of the PS documents. Testing of these documents is usually carried out in the reverse order of their development (the only exception is testing of application documentation, which is developed according to the external description in parallel with the development of program texts; this testing is best done after testing of the external description is completed). Testing in complex debugging is the application of the PS to specific data that, in principle, may arise from the user (in particular, all tests are prepared in a form designed for the user), but possibly in a simulated (rather than real) environment. For example, some input and output devices that are inaccessible during complex debugging can be replaced by their software simulators.

    Testing the PS architecture. The purpose of this testing is to find discrepancies between the description of the architecture and the set of PS programs. By the time testing of the PS architecture begins, autonomous debugging of every subsystem should already be complete. Errors in the implementation of the architecture can be connected primarily with the interaction of these subsystems, in particular with the implementation of the architectural functions (if any). It would therefore be desirable to check all the ways in which the PS subsystems interact. But since there may be too many of them, it would be desirable to test at least all chains of execution of subsystems that do not re-enter any subsystem. If the given architecture represents the PS as a small system of the selected subsystems, the number of such chains will be quite manageable.

    Testing the external functions. The purpose of this testing is to find discrepancies between the functional specification and the set of PS programs. Even though all these programs have already been debugged autonomously, such discrepancies may remain, for example, because of a mismatch between the internal specifications of the programs and their modules (on the basis of which autonomous testing was carried out) and the external functional specification of the PS. As a rule, external functions are tested in the same way as modules are tested in the first step, i.e. as a black box.

    Testing the PS quality. The purpose of this testing is to search for violations of the quality requirements formulated in the PS quality specification. This is the most difficult and least studied kind of testing. It is clear only that far from every PS quality primitive can be checked by testing (see the next lecture on PS quality assessment). The completeness of the PS is checked already when testing the external functions. At this stage, testing of this quality primitive can be continued if some probabilistic estimate of the degree of reliability of the PS is required; however, the methodology of such testing still needs to be developed. Accuracy, stability, security, time efficiency, memory efficiency, device efficiency, extensibility and, in part, device independence can be tested to one extent or another. Each of these kinds of testing has its own specifics and deserves separate consideration; here we limit ourselves to listing them. The ease of use of the PS (a quality criterion that includes several quality primitives, see Lecture 4) is evaluated when testing the documentation on the application of the PS.

    Testing the documentation on the application of the PS. The purpose of this testing is to search for inconsistencies between the application documentation and the set of PS programs, as well as for inconveniences in applying the PS. This stage immediately precedes involving the user in the completion of the development of the PS (testing of the requirements for the PS and certification of the PS), so it is very important for the developers first to use the PS themselves in the way the user will. All tests at this stage are prepared solely on the basis of the documentation on the application of the PS. First of all, the capabilities of the PS should be tested as was done when testing the external functions, but only on the basis of the application documentation. All unclear places in the documentation should be tested, as well as all examples used in it. Next, the most difficult cases of applying the PS are tested with the aim of detecting violations of the requirements regarding the ease of application of the PS.

    Testing the definition of requirements for the PS. The purpose of testing is to find out to what extent the software does not correspond to the presented definition of requirements for it. The peculiarity of this type of testing is that it is carried out by the purchasing organization or the user organization of the PS as one of the ways to overcome the barrier between the developer and the user (see lecture 3). Usually this testing is carried out with the help of control tasks - typical tasks for which the result of the solution is known. In cases where the developed PS should replace another version of the PS that solves at least part of the tasks of the developed PS, testing is carried out by solving common problems using both the old and the new PS, followed by a comparison of the results obtained. Sometimes, as a form of such testing, trial operation of the PS is used - a limited application of a new PS with an analysis of the use of the results in practice. In essence, this type of testing has much in common with the testing of the PS during its certification (see lecture 14), but is performed before certification, and sometimes instead of certification.

  46. Literature for lecture 10.

  47. 10.1. G. Myers. Software reliability. - M.: Mir, 1980. - pp. 171-262.

    10.2. D. Van Tassel. Program style, design, efficiency, debugging and testing. - M.: Mir, 1985. - pp. 179-295.

    10.3. J. Hughes, J. Michtom. A structured approach to programming. - M.: Mir, 1980. - pp. 254-268.

    10.4. J. Fox. Software and its development. - M.: Mir, 1985. - pp. 227-241.

    10.5. M. Zelkowitz, A. Shaw, J. Gannon. Principles of software development. - M.: Mir, 1982. - pp. 105-116.

    10.6. Yu.M. Bezborodov. Individual debugging of programs. - M.: Nauka, 1982. - pp. 9-79.

    10.7. V.V. Lipaev. Program testing. - M.: Radio and Communication, 1986. - pp. 15-47.

    10.8. E.A. Zhogolev. Introduction to programming technology (lecture notes). - M.: DIALOG-MGU, 1994.

    10.9. E. Dijkstra. Notes on structured programming // O.-J. Dahl, E.W. Dijkstra, C.A.R. Hoare. Structured programming. - M.: Mir, 1975. - pp. 7-13.

  48. Lecture 11

  49. 11.1. Functionality and reliability as mandatory criteria for the quality of software.

  50. In the previous lectures we considered all stages of PS development except its certification. At the same time, we did not specifically address the issues of ensuring the quality of the PS in accordance with its quality specification (see Lecture 4). True, by implementing the functional specification of the PS we thereby dealt with the main issues of ensuring the criterion of functionality, and, having declared reliability the main attribute of the PS (see Lecture 1), we chose error prevention as the main approach to ensuring the reliability of the PS (see Lecture 3) and discussed its implementation at the various stages of PS development. In this way the thesis that functionality and reliability are mandatory criteria of PS quality has already manifested itself.

    However, the software quality specification may contain additional characteristics of these criteria, the provision of which requires special discussion. These questions are the focus of this lecture. Ensuring other quality criteria will be discussed in the next lecture.

    The provision of the PS quality primitives that express the criteria of functionality and reliability of the PS is discussed below.

  51. 11.2. Ensuring the completeness of the software.

  52. The completeness of the PS is a general PS quality primitive for expressing both the functionality and reliability of the PS, and for functionality it is the only primitive (see Lecture 4).

    The functionality of the PS is determined by its functional specification. The completeness of a PS as a primitive of its quality is a measure of how this specification is implemented in a given PS. Providing this primitive in full means implementing each of the functions defined in the functional specification, with all the details and features indicated there. All the previously discussed technological processes show how this can be done.

    However, the PS quality specification may define several levels of implementation of the functionality of the PS: a simplified (initial or starting) version, which must be implemented first, and possibly several intermediate versions. In this case an additional technological problem arises: organizing the build-up of the functionality of the PS. It is important to note here that the development of a simplified version of the PS is not the development of its prototype. A prototype is developed in order to better understand the conditions of use of the future PS and to refine its external description; it is intended for selected users and may therefore differ greatly from the required PS not only in the functions performed but also in the features of the user interface. A simplified version of the required PS, by contrast, must be intended for practical use by any users for whom it is intended. Therefore, the main principle of ensuring the functionality of such a PS is to develop the PS from the very beginning as if the complete PS were required, until the developers reach those parts or details of the PS whose implementation may be deferred in accordance with its quality specification. Thus, both the external description and the description of the PS architecture must be developed in full. Only the implementation of those software subsystems defined in the architecture of the PS being developed whose functioning is not required in the initial version may be postponed. The implementation of the software subsystems themselves is best carried out by the method of purposeful constructive implementation, leaving in the initial version of the PS suitable simulators of those program modules that are not required in this version. A simplified implementation of some program modules, omitting the implementation of some details of the corresponding functions, is also acceptable; however, from the technological point of view it is better to regard such modules as their original simulators (albeit far advanced ones).

    Due to the errors in the developed PS, the completeness achieved in ensuring its functionality (in accordance with the specification of its quality) may actually not be as expected. We can only say that this completeness has been achieved with a certain probability, determined by the volume and quality of the testing. In order to increase this probability, it is necessary to continue testing and debugging the PS. However, estimating such a probability is a very specific task (taking into account the fact that the manifestation of the error in the PS is a function of the initial data), which is still waiting for appropriate theoretical studies.

  53. 11.3. Ensuring the accuracy of the software tool.

  54. Providing this primitive is connected with operations on values of real types (more precisely, with values represented with some error). To ensure the required accuracy when computing the value of some function means to obtain this value with an error that does not go beyond the specified bounds. The kinds of errors, methods of estimating them and methods of achieving the required accuracy (so-called approximate calculations) are the subject of computational mathematics. Here we only draw attention to the structure of the error: the error of the computed value (the total error) depends

    on the error of the calculation method used (in which we include the inaccuracy of the model used),

    on the error in the representation of the data used (the so-called inherent, or irreducible, error),

    and on the rounding error (the inaccuracy of performing the operations used in the method). A small numerical illustration of this decomposition is given below.
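    A minimal sketch (Python; the function sin, the point x = 1 and the step sizes are chosen arbitrarily for illustration): the derivative of sin is approximated by a forward difference, whose method (truncation) error decreases with the step h while the rounding error grows as h shrinks, so the total error reflects the combination of both components.

    import math

    def forward_difference(f, x, h):
        # Approximates f'(x); the truncation (method) error is of order h,
        # while the rounding error grows roughly like machine epsilon / h.
        return (f(x + h) - f(x)) / h

    x = 1.0
    exact = math.cos(x)                       # the true derivative of sin at x
    for h in (1e-1, 1e-5, 1e-12):
        approx = forward_difference(math.sin, x, h)
        print(f"h = {h:.0e}   total error = {abs(approx - exact):.2e}")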

  55. 11.4. Ensuring the autonomy of the software.

  56. This quality primitive is provided at the stage of quality specification by making a decision whether to use any suitable underlying software in the developed PS or not to use any underlying software in it. At the same time, it is necessary to take into account both its reliability and the resources required for its use. With increased requirements for the reliability of the developed PS, the reliability of the basic software available to the developers may turn out to be unsatisfactory, therefore, its use must be abandoned, and the implementation of its functions in the required volume must be included in the PS. Similar decisions have to be made under severe restrictions on the resources used (according to the PS efficiency criterion).

  57. 11.5. Ensuring the stability of the software.

  58. This quality primitive is ensured with the help of so-called defensive programming. Generally speaking, defensive programming is applied when programming a module in a broader sense, to increase the reliability of the PS. As Myers puts it, "defensive programming is based on an important premise: the worst thing a module can do is accept bad input data and then return an incorrect but plausible result." To avoid this, checks of the input and output data of the module for their correctness in accordance with its specification are included in the module text; in particular, it should be checked that the restrictions on the input and output data and the relations between them stated in the module specification are satisfied. If a check gives a negative result, the corresponding exception is raised, and fragments of the second kind - handlers of the corresponding exceptional situations - are included at the end of the module; besides issuing the necessary diagnostic information, they can take measures either to eliminate the error in the data (for example, to request that the data be re-entered) or to soften the consequences of the error (for example, a soft stop of the devices controlled by the PS, so as to avoid their breakdown in the event of an abnormal termination of the program).

    The use of defensive programming of modules leads to a decrease in the efficiency of the PS both in time and in memory. Therefore the degree to which defensive programming is applied must be reasonably regulated, depending on the requirements for the reliability and efficiency of the PS formulated in the quality specification of the PS being developed. The input data of the module being developed may come either directly from the user or from other modules. The most common case of defensive programming is its application to the first group of data, and this is precisely what constitutes the implementation of the stability of the PS; it should be done whenever the PS quality specification contains a requirement to ensure the stability of the PS. Applying defensive programming to the second group of input data is an attempt to detect an error in other modules during the execution of the module being developed, and applying it to the output data of the module is an attempt to detect an error in the module itself during its execution. In essence, this amounts to a partial implementation of the error self-detection approach to ensuring PS reliability discussed in Lecture 3; this case of defensive programming is used extremely rarely - only when the requirements for the reliability of the PS are exceptionally high.
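    A minimal sketch of defensive programming (Python; the module, its specification limits and the exception type are illustrative assumptions, not taken from the lecture):

    class SpecificationError(Exception):
        # Raised when input or output data violate the module specification.
        pass

    def average_temperature(samples):
        # Input check: the assumed specification allows 1..100 samples,
        # each in the range -60..+60 degrees.
        if not 1 <= len(samples) <= 100:
            raise SpecificationError("number of samples out of range")
        if any(not -60.0 <= s <= 60.0 for s in samples):
            raise SpecificationError("sample value out of range")

        result = sum(samples) / len(samples)

        # Output check: the result must also stay within the specified range.
        if not -60.0 <= result <= 60.0:
            raise SpecificationError("computed result violates the specification")
        return result

    # Exception handler: report the error and ask for the data to be re-entered
    # instead of returning a plausible but incorrect result.
    try:
        print(average_temperature([12.5, 980.0]))
    except SpecificationError as err:
        print("input rejected:", err, "- please re-enter the data")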

  59. 11.6. Ensuring the security of software.

  60. The following types of protection of a PS from distortion of information are distinguished:

    protection against hardware failures;

    protection from the influence of a "foreign" program;

    protection against failures of "own" program;

    protection against operator (user) errors;

    protection against unauthorized access;

    protection against protection.

    Protection against hardware failures is currently not a very pressing task (given the level of computer reliability that has been achieved), but it is still useful to know how it is solved. It is ensured by organizing so-called "double-triple calculation runs". To this end, the entire data-processing process defined by the PS is divided in time into intervals by so-called "checkpoints"; the length of such an interval should not exceed half of the mean time between computer failures. At each checkpoint, a copy of the part of the memory state changed during this interval is written to secondary storage together with a checksum (a number computed as a function of that state), provided that the processing of the data from the previous checkpoint to this one (one "calculation run") is considered to have been performed correctly (without computer failures). To establish this, two such runs are made. After the first run the checksum is computed and stored, then the memory state of the previous checkpoint is restored and the second run is made. After the second run the checksum is computed again and compared with the checksum of the first run. If the two checksums coincide, the second run is considered correct; otherwise the checksum of the second run is also stored and a third run is made (after the memory state of the previous checkpoint has again been restored). If the checksum of the third run coincides with the checksum of one of the first two runs, the third run is considered correct; otherwise an engineering check of the computer is required.
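    A minimal sketch of the "double-triple calculation run" idea (Python; the checkpoint state and the processing step are purely illustrative):

    import hashlib
    import pickle

    def checksum(state):
        # A checksum of the memory state - here simply a hash of its serialized form.
        return hashlib.sha256(pickle.dumps(state)).hexdigest()

    def process_interval(state):
        # One "calculation run" between two checkpoints.
        return {"total": state["total"] + 1}

    def run_interval_with_retries(checkpoint_state, max_runs=3):
        seen = []
        for _ in range(max_runs):
            # Each run restarts from a copy of the checkpoint state.
            result = process_interval(dict(checkpoint_state))
            digest = checksum(result)
            if digest in seen:
                return result            # two runs agree: accept the result
            seen.append(digest)
        raise RuntimeError("runs disagree: an engineering check is required")

    print(run_interval_with_retries({"total": 41}))   # expected: {'total': 42}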

    Protection from the influence of a "foreign" program refers primarily to operating systems or programs that partially perform their functions. There are two types of this protection:

    failure protection,

    protection against the malicious influence of a "foreign" program.

    In the multiprogramming mode of computer operation, several programs may reside in memory simultaneously, alternately receiving control as a result of interrupts (so-called quasi-parallel execution of programs). One of these programs (usually the operating system) handles the interrupts and manages multiprogramming. Failures may occur (errors may show up) in any of these programs and may affect the performance of their functions by the other programs. Therefore the control program (operating system) must protect itself and the other programs from such influence. To this end, the computer hardware must implement the following capabilities:

    memory protection,

    two modes of computer operation: privileged and working (user),

    two types of operations: privileged and ordinary,

    correct implementation of interrupts and of the initial start-up of the computer,

    a timer interrupt.

    Memory protection means the ability to set, by program, for each program the areas of memory that are inaccessible to it. In privileged mode any operations (both ordinary and privileged) can be performed, whereas in user mode only ordinary ones can. An attempt to perform a privileged operation, or to access protected memory, in user mode causes the corresponding interrupt. The privileged operations include, in particular, the operations that change the memory protection and the operation mode, as well as access to the external information environment. The initial start-up of the computer and any interrupt must automatically switch on privileged mode and disable memory protection. Under these conditions the control program (operating system) can fully protect itself from the influence of other programs, provided that all points to which control is transferred at initial start-up and at interrupts belong to this program, that it does not allow any other program to run in privileged mode (when control is transferred to any other program, only user mode is switched on), and that it fully protects its own memory (containing, in particular, all its control information, including the so-called interrupt vectors) from other programs. Then nothing will prevent it from performing whatever protection functions for the other programs are implemented in it (including control of access to the external information environment). To facilitate this, part of such a program is placed in read-only memory, i.e. is made inseparable from the computer itself. The presence of a timer interrupt allows the control program to protect itself from looping in other programs (without such an interrupt it could simply lose the ability to control).

    Protection against failures of "one's own" program is ensured by the reliability of this program, which is the focus of the entire programming technology discussed in this course of lectures.

    Protection against user errors (in addition to input data errors, see ensuring PS stability) is provided by issuing warning messages about attempts to change the state of the external information environment with the requirement to confirm these actions, as well as the ability to restore the state of individual components of the external information environment. The latter is based on the implementation of archiving changes in the state of the external information environment.

    Protection against unauthorized access is provided by the use of secret words (passwords). In this case, each user is provided with certain information and procedural resources (services), the use of which requires the presentation of a password to the PS, previously registered in the PS by this user. In other words, the user, as it were, "hangs a lock" on the resources allocated to him, the "key" to which only this user has. However, persistent attempts may be made to break such protection in individual cases if the protected resources are of extreme value to someone. In such a case, additional measures must be taken to protect against security breaches.

    Protection against security breaches is associated with the use of special programming techniques in the PS that make it difficult to overcome protection against unauthorized access. The use of ordinary passwords is not enough when it comes to an extremely persistent desire (for example, of a criminal nature) to gain access to valuable information. Firstly, because the information about passwords that the PS uses to protect against unauthorized access can be obtained relatively easily by a "cracker" of this protection if he has access to this PS itself. Secondly, using a computer, it is possible to carry out a sufficiently large enumeration of possible passwords in order to find a suitable one for accessing the information of interest. You can protect yourself from such a hack in the following way. The secret word (password) or just the secret integer X is known only to the owner of the protected information, and to check the access rights, another number Y=F(X) is stored in the computer, which is uniquely calculated by the PS every time an attempt is made to access this information upon presentation of the secret word. At the same time, the function F can be well known to all PS users, but it has such a property that restoring the word X from Y is practically impossible: with a sufficiently large length of the word X (for example, several hundred characters), this requires astronomical time. Such a number Y will be called the electronic (computer) signature of the owner of the secret word X (and hence the protected information).
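    A minimal sketch of the Y = F(X) scheme (Python; SHA-256 stands in for the one-way function F - an assumption for illustration; real systems would additionally use salting and a deliberately slow password hash):

    import hashlib

    def F(secret_word: str) -> str:
        # One-way function: easy to compute, practically impossible to invert.
        return hashlib.sha256(secret_word.encode("utf-8")).hexdigest()

    # Registration: the PS stores only Y = F(X), never the secret word X itself.
    stored_Y = F("correct horse battery staple")

    def check_access(presented_word: str) -> bool:
        # Verification: recompute F over the presented word and compare with Y.
        return F(presented_word) == stored_Y

    print(check_access("guess"))                          # False
    print(check_access("correct horse battery staple"))   # True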

    Another type of such protection concerns the protection of messages transmitted over computer networks from deliberate (or malicious) distortion. Such a message can be intercepted at a "transshipment" point of the computer network and replaced by another one issued in the name of the author of the intercepted message. This situation arises primarily when banking operations are performed over a computer network: by substituting such a message, which is an order of the owner of a bank account to perform some banking operation, money from his account can be transferred to the account of the "cracker" of the protection (a kind of computer bank robbery). Protection against such a breach can be organized as follows. Along with the function F, which determines the computer signature of the owner of the secret word X and which is known to the addressee of the protected message (if only because its owner is a client of that addressee), another function, Stamp, is defined in the PS, by which the sender of the message computes the number S = Stamp(X, R) from the secret word X and the text of the transmitted message R. The Stamp function is likewise considered to be well known to all PS users and has the property that it is practically impossible to recover the number X from S or to select another message R with a matching value of S. The message, together with its protection, is then transmitted as the message R accompanied by the computer signature Y and the number S;

    moreover, Y (the computer signature) allows the addressee to establish the authenticity of the client, while S, as it were, binds the protected message R to the computer signature Y. For this reason we shall call the number S an electronic (computer) seal. In addition, a Notary function is defined in the PS, by which the recipient of the protected message checks, from R, Y and S, the authenticity of the transmitted message.

  61. This allows the recipient to establish unambiguously that the message R belongs to the owner of the secret word X.
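    A minimal sketch of the seal idea (Python; HMAC-SHA256 plays the role of the Stamp function - an assumption, since the lecture does not fix a particular construction; note also that HMAC requires the verifier to share the secret word, which simplifies the lecture's scheme, where the addressee knows only Y = F(X)):

    import hashlib
    import hmac

    def stamp(secret_word: bytes, message: bytes) -> str:
        # S = Stamp(X, R): binds the message to the owner of the secret word.
        return hmac.new(secret_word, message, hashlib.sha256).hexdigest()

    def notary(secret_word: bytes, message: bytes, seal: str) -> bool:
        # The recipient recomputes the seal and compares it with the received one.
        return hmac.compare_digest(stamp(secret_word, message), seal)

    X = b"owner's secret word"
    R = b"transfer 100 units from account A to account B"
    S = stamp(X, R)

    print(notary(X, R, S))                                      # True: authentic message
    print(notary(X, b"transfer 100000 units to account C", S))  # False: message altered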

    Protection from the protection itself is necessary in case the user forgets (or loses) his password. For such a case, a special user responsible for the functioning of the protection system (the PS administrator) must be able to temporarily remove the protection against unauthorized access for the owner of the forgotten password, so that he can set a new password.

  62. Literature for lecture 11.

  63. 11.1. I.S. Berezin, N.P. Zhidkov. Calculation methods, vols. 1 and 2. - M.: Fizmatgiz, 1959.

    11.2. N.S. Bakhvalov, N.P. Zhidkov, G.M. Kobelkov. Numerical methods. - M.: Nauka, 1987.

    11.3. G. Myers. Software reliability. - M.: Mir, 1980. - pp. 127-154.

    11.4. A.N. Lebedev. Protection of banking information and modern cryptography // Issues of Information Security, 2(29), 1995.

  64. Lecture 12. Software quality assurance

  65. 12.1. General characteristics of the software quality assurance process.

  66. As already noted in Lecture 4, the quality specification defines the main guidelines (goals) that at all stages of the development of the PS in one way or another influence the choice of the appropriate option when making various decisions. However, each quality primitive has its own characteristics of such influence, thus, ensuring its presence in the PS may require its own approaches and methods for developing the PS or its individual parts. In addition, the inconsistency of the PS quality criteria and the quality primitives expressing them was also noted: good provision of one of the PS quality primitives can significantly complicate or make it impossible to provide some of the other of these primitives. Therefore, an essential part of the process of ensuring the quality of PS consists of finding acceptable trade-offs. These trade-offs should be partially defined already in the PS quality specification: the PS quality model should specify the required degree of presence in the PS of each of its quality primitives and determine the priorities for achieving these degrees.

    Quality assurance is carried out in every technological process: the decisions made in it affect, to one degree or another, the quality of the PS as a whole - in particular because a significant part of the quality primitives is connected not so much with the properties of the programs included in the PS as with the properties of its documentation. Because of the noted mutual inconsistency of the quality primitives, it is very important to adhere to the chosen priorities in providing them. In any case, it is useful to adhere to two general principles:

    first, it is necessary to ensure the required functionality and reliability of the PS, and then bring the remaining quality criteria to an acceptable level of their presence in the PS;

    there is no need, and it may even be harmful, to seek a higher level of presence in the PS of any quality primitive than that defined in the PS quality specification.

    Ensuring the functionality and reliability of the PS was considered in the previous lecture. The provision of the other PS quality criteria is discussed below.

    12.2. Ensuring ease of use of the software.

    In the previous lecture, the provision of two of the five quality primitives (stability and security) that determine the ease of use of the PS was already considered.

    P-documentation and informativeness determine the composition and quality of the user documentation (see the next lecture).

    Communicativeness is ensured by creating a suitable user interface and the corresponding handling of exceptional situations.

  67. 12.3. Ensuring the efficiency of the software.

  68. The efficiency of a PS is ensured by making appropriate decisions at different stages of its development, beginning with the development of its architecture. The choice of the structure and representation of the data affects the efficiency of the PS especially strongly (above all with respect to memory), but the choice of the algorithms used in particular program modules, as well as the features of their implementation (including the choice of programming language), can also affect the efficiency of the PS significantly. At the same time, one constantly has to resolve the contradiction between time efficiency and memory efficiency. It is therefore very important that the quality specification explicitly indicates a quantitative relationship between the indicators of these quality primitives, or at least sets quantitative bounds for one of these indicators. Different program modules, moreover, affect the efficiency of the PS as a whole to different degrees: both in their contribution to the total expenditure of the PS in time and memory, and in their influence on the different quality primitives (some modules may strongly affect time efficiency and have practically no effect on memory efficiency, while others may significantly affect total memory consumption without noticeably affecting the running time of the PS). Furthermore, this influence (above all on time efficiency) cannot always be correctly estimated in advance, before the implementation of the PS is completed. For these reasons, the following order of work is usually recommended:

    first develop a reliable PS, and only then achieve its required efficiency in accordance with the quality specification of this PS;

    to improve the efficiency of the PS, first of all use an optimizing compiler - this alone may provide the required efficiency;

    if the achieved efficiency of the PS does not satisfy its quality specification, find the modules most critical for the required efficiency of the PS (in the case of time efficiency this requires measuring the distribution of the PS running time over its modules during execution, as sketched after this list) and try to optimize them first, by modifying them manually;

    do not optimize a module if this is not required to achieve the required efficiency of the PS.
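    A minimal sketch of such a measurement (Python; the two functions are illustrative stand-ins for PS modules, and the standard profiler cProfile is used here only as one possible tool):

    import cProfile
    import pstats

    def cheap_module():
        return sum(range(1_000))

    def expensive_module():
        return sum(i * i for i in range(200_000))

    def program():
        for _ in range(50):
            cheap_module()
            expensive_module()

    profiler = cProfile.Profile()
    profiler.enable()
    program()
    profiler.disable()

    # Print the functions that consumed the most cumulative time - the first
    # candidates for manual optimization.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)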

    12.4. Ensuring maintainability.

    C-documentation, informativeness and comprehensibility determine the composition and quality of the maintenance documentation (see the next lecture). In addition, the following recommendations can be made regarding the texts of programs (modules):

    use comments in the module text that clarify and explain the decisions being made; whenever possible, include comments (at least in a short form) at the earliest stage of developing the module text;

    use meaningful (mnemonic) and easily distinguishable names (the optimal name length is 4-12 letters, with digits at the end); do not use similar names, and do not use keywords as names;

    be careful with constants: a unique constant should occur only once in the module text - in its declaration or, as a last resort, in the initialization of a variable used as that constant (see the small example after this list);

    do not be afraid to use optional parentheses: parentheses are cheaper than errors;

    place no more than one statement per line; to clarify the structure of the module, use extra spaces (indentation) at the beginning of lines;

    avoid tricks, i.e. programming techniques that create module fragments whose main effect is not obvious or is hidden (veiled), for example side effects of functions.
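    A small fragment illustrating these recommendations (Python; the constant and the function are hypothetical): a mnemonic named constant declared exactly once, no "magic numbers" in the body, one statement per line and a comment explaining the decision.

    MAX_LOGIN_ATTEMPTS = 3   # single occurrence of the constant, at its declaration

    def is_account_locked(failed_attempts: int) -> bool:
        # The account is locked once the permitted number of attempts is exhausted.
        return failed_attempts >= MAX_LOGIN_ATTEMPTS

    print(is_account_locked(2))   # False
    print(is_account_locked(3))   # True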

    Extensibility is ensured by creating a suitable installer.

    Structuredness and modularity simplify both understanding of program texts and their modification.

    12.5. Ensuring mobility.

  69. Literature for lecture 12.

  70. 12.1. Ian Sommerville. Software engineering. - Addison-Wesley Publishing Company, 1992. P.

    12.3. D. Van Tassel. Program style, design, efficiency, debugging and testing. - M.: Mir, 1985. - pp. 8-44, 117-178.

    12.4. Software User Documentation / ANSI/IEEE Standard 1063-1987.

  71. Lecture 13

  72. 13.1. Documentation created during the software development process.

  73. When developing a PS, a large amount of various documentation is created. It is necessary as a means of transferring information between the developers of the PS, as a means of managing the development of the PS, and as a means of transmitting to users the information necessary for the application and maintenance of the PS. The creation of this documentation accounts for a large share of the cost of the PS.

    This documentation can be divided into two groups:

    PS development management documents.

    Documents included in the PS.

    PS development management documents (process documentation) record the processes of developing and maintaining the PS, providing communication within the development team and between the development team and managers (managers) - persons managing the development. These documents can be of the following types:

    Plans, estimates, schedules. These documents are created by managers to anticipate and manage development and maintenance processes.

    Reports on resource usage during development. Created by managers.

    Standards. These documents prescribe to developers what principles, rules, agreements they must follow in the process of developing the PS. These standards can be either international or national, or specially created for the organization in which this PS is being developed.

    Work documents. These are the main technical documents that provide communication between developers. They contain a fixation of ideas and problems that arise during the development process, a description of the strategies and approaches used, as well as working (temporary) versions of documents that should be included in the PS.

    Notes and correspondence. These documents capture various details of the interaction between managers and developers.

    The documents included in the PS (product documentation) describe the PS programs both from the point of view of their use by users, and from the point of view of their developers and maintainers (in accordance with the purpose of the PS). It should be noted here that these documents will be used not only at the stage of operation of the PS (in its application and maintenance phases), but also at the development stage to manage the development process (along with working documents) - in any case, they should be checked (tested) for compliance with PS programs. These documents form two sets with different purposes:

    PS user documentation (P-documentation).

    Documentation for the support of the PS (C-documentation).

  74. 13.2. Software User Documentation.

  75. The user documentation of the PS (user documentation) explains to users how they must proceed in order to apply this PS. It is necessary if the PS involves any interaction with users. Such documentation includes documents that guide the user when installing the PS (when installing the PS with the appropriate setting for the environment for using the PS), when using the PS to solve its problems and when managing the PS (for example, when this PS interacts with other systems). These documents partially cover the issues of software support, but do not deal with issues related to the modification of programs.

    In this regard, two categories of PS users should be distinguished: ordinary users of the PS and PS administrators. An ordinary user of the PS (end user) uses the PS to solve his own problems (in his subject area); this may be an engineer designing a technical device or a cashier selling train tickets with the help of the PS. He may know nothing about many details of the operation of the computer or about the principles of programming. The PS administrator (system administrator) manages the use of the PS by ordinary users and carries out maintenance of the PS that is not connected with modifying its programs. For example, he may regulate access rights to the PS among ordinary users, maintain contact with the suppliers of the PS, or perform certain actions to keep the PS in working order if it is part of another system.

    The composition of the user documentation depends on the audiences of users that this PS is aimed at, and on the mode of use of documents. The audience here is understood as the contingent of users of the PS, which has a need for certain user documentation of the PS. A successful user document essentially depends on the precise definition of the audience for which it is intended. The user documentation should contain the information required for each audience. The mode of use of a document refers to the manner in which the document is used. Usually, the user of sufficiently large software systems requires either documents to study the PS (use in instructions), or to clarify some information (use as a reference).

    According to the literature, the following composition of user documentation can be considered typical for sufficiently large PS:

    General functional description of the PS. Gives a brief description of the functionality of the PS. It is intended for users who must decide how much they need this PS.

    PS Installation Guide. Designed for system administrators. It should prescribe in detail how to install the system in a particular environment; it must contain a description of the machine-readable medium on which the PS is supplied, the files representing the PS, and the requirements for the minimum hardware configuration.

    Instructions for the use of PS. Designed for ordinary users. Contains the necessary information on the application of the PS, organized in a form convenient for its study.

    Reference book on the application of PS. Designed for ordinary users. Contains the necessary information on the application of the PS, organized in a form convenient for the selective search of individual details.

    PS Management Guide. Designed for system administrators. It should describe the messages generated when the PS interacts with other systems and how to react to these messages. In addition, if the PS makes use of system hardware, this document may explain how to maintain that hardware.

    As mentioned earlier (see Lecture 4), the development of user documentation begins immediately after the external description has been created. The quality of this documentation can substantially determine the success of the PS. It must be quite simple and convenient for the user (otherwise the PS, generally speaking, would not have been worth creating). Therefore, although draft versions of the user documents are created by the main developers of the PS, professional technical writers are often involved in producing their final versions. In addition, to ensure the quality of user documentation, a number of standards have been developed (see, for example, the standards cited in the literature for this lecture) that prescribe the procedure for developing this documentation and formulate requirements for each type of user document, defining its structure and content.

    13.3. Software support documentation.

    Documentation for the maintenance of the PS (system documentation) describes the PS from the point of view of its development. It is needed when the PS presupposes studying how it is arranged (designed) and modernizing its programs. As already noted, maintenance is continued development. If the PS needs to be modernized, a special maintenance team of developers is engaged for this work. This team will have to deal with the same documentation that guided the activity of the team of original (main) developers of the PS, with the only difference that, as a rule, this documentation will be foreign to the maintenance team (it was created by another team). The maintenance team will have to study this documentation in order to understand the structure of the PS being modernized and the process of its development, and to make the necessary changes to it, repeating to a considerable extent the technological processes by which the original PS was created.

    Documentation on support of PS can be divided into two groups:

    (1) documentation that defines the structure of programs and data structures of the PS and the technology for their development;

    (2) documentation to help make changes to the PS.

    The documentation of the first group contains the final documents of each technological stage of the development of the PS. It includes the following documents:

    External description of the PS (Requirements document).

    Description of the system architecture of the PS, including the external specification of each of its programs.

    For each PS program, a description of its modular structure, including an external specification for each module included in it.

    For each module - its specification and description of its structure (design description).

    Module texts in the selected programming language (program source code listings).

    PS validation documents, describing how the validity of each PS program was established and how the validation information relates to the requirements for the PS.

    Software verification documents primarily include testing documentation (test design and test suite description), but may also include the results of other types of software validation, such as proofs of program properties.

    The documentation of the second group contains:

    The system maintenance guide, which describes the known problems of the PS, indicates which parts of the system are hardware- and software-dependent, and explains how further development of the PS has been taken into account in its structure (design).

    A common maintenance problem is to ensure that all representations of the PS keep pace (remain consistent) when the PS changes. To support this, the relationships and dependencies between documents and their parts must be recorded in the configuration management database.
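
    As an illustration only, the following minimal sketch shows how such dependencies between documents and their parts could be recorded and queried. It assumes a simple in-memory mapping rather than a real configuration management system, and all document names in it are hypothetical.

# A toy "configuration management database": for each document, the documents it depends on.
dependencies = {
    "module_spec.md": ["requirements.md"],
    "module_source.py": ["module_spec.md"],
    "test_suite.md": ["module_spec.md", "module_source.py"],
}

def affected_by(changed: str) -> set:
    """Return every document that must be reviewed when `changed` is modified."""
    result = set()
    for doc, deps in dependencies.items():
        if changed in deps:
            result.add(doc)
            result |= affected_by(doc)
    return result

if __name__ == "__main__":
    # Everything downstream of the module specification must be reviewed.
    print(affected_by("module_spec.md"))  # -> {'module_source.py', 'test_suite.md'}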

    Literature for lecture 13.

    13.1. Ian Somerville. Software Engineering. - Addison-Wesley Publishing Company, 1992. P.

    13.2. ANSI/IEEE Std 1063-1988, IEEE Standard for Software User Documentation.

    13.3. ANSI/IEEE Std 830-1984, IEEE Guide for Software Requirements Specification.

    13.4. ANSI/IEEE Std 1016-1987, IEEE Recommended Practice for Software Design Description.

    13.5. ANSI/IEEE Std 1008-1987, IEEE Standard for Software Unit Testing.

    13.6. ANSI/IEEE Std 1012-1986, IEEE Standard for Software Verification and Validation Plans.

    13.7. ANSI/IEEE Std 983-1986, IEEE Guide for Software Quality Assurance Planning.

    13.8. ANSI/IEEE Std 829-1983, IEEE Standard for Software Test Documentation.

  Lecture 14

  Purpose of software certification. Testing and evaluation of software quality. Types of tests and methods for assessing software quality.

  14.1. Purpose of software certification.

    PS certification is an authoritative confirmation of the quality of the PS. Usually, a representative (attestation) commission is created for the certification of a software system, consisting of experts, representatives of the customer, and representatives of the developer. This commission conducts tests of the PS in order to obtain the information needed to assess its quality. By testing of the PS we mean the process of carrying out a set of measures that examine the suitability of the PS for its successful operation (application and maintenance) in accordance with the requirements of the customer. This set of measures includes checking the completeness and accuracy of the software documentation, studying and discussing its other properties, as well as the necessary testing of the programs included in the PS, in particular, checking the compliance of these programs with the available documentation.

    Based on the information obtained during the testing of the PS, it must first of all be established that the PS performs the declared functions, and it must also be established to what extent the PS possesses the declared quality primitives and meets the declared quality criteria. Thus, the assessment of the quality of the PS is the main content of the certification process. The assessment of the quality of the PS is recorded in the corresponding decision of the attestation commission.

  14.2. Types of software testing.

    The following types of PS tests, carried out for the purpose of PS certification, are known:

    PS component testing;

    system tests;

    acceptance tests;

    field trials;

    industrial tests.

    PS component testing is a verification (testing) of the operability of individual subsystems of the PS. Such tests are carried out only in exceptional cases by a special decision of the attestation commission.

    System testing of the PS is a check (testing) of the operability of the PS as a whole. It may include the same types of testing as in the complex debugging of the PS (see lecture 10). It is carried out by the decision of the attestation commission, if there are doubts about the quality of debugging by the developers of the PS.

    Acceptance tests are the main type of tests for the certification of a PS. It is with these tests that the certification commission begins its work. These tests begin with the study of the submitted documentation, including the documentation on testing and debugging the PS. If the documentation does not contain sufficiently complete results of software testing, the certification commission may decide to conduct system testing of the software or to terminate the certification process with a recommendation to the developer to conduct additional (more complete) testing. In addition, during these tests, developer tests may be selectively re-run, as well as user control tasks (see lecture 10) and additional tests prepared by the commission to assess the quality of the certified PS.

    Field testing of the PS is a demonstration of the PS, together with the technical system controlled by it, to a narrow circle of customers under real conditions, during which the behavior of the PS is carefully monitored. Customers should be given the opportunity to set their own test cases, in particular ones that take the technical system into critical modes of operation or provoke emergency situations in it. These are additional tests carried out by the decision of the attestation commission only for some PSs that control certain technical systems.

    Industrial testing of the PS is the process of transferring the PS into permanent operation by users. It is a period of trial operation of the PS (see lecture 10) by users, with the collection of information about the behavior of the PS and its operational characteristics. These are the final tests of the PS, carried out by the decision of the attestation commission if insufficiently complete or reliable information for assessing the quality of the certified PS was obtained during the previous tests.

  14.3. Methods for assessing the quality of software.

    The evaluation of the quality of the PS for each of the criteria is reduced to the evaluation of each of the quality primitives associated with this criterion, in accordance with their specification in the quality specification of the PS. Methods for evaluating PS quality primitives can be divided into four groups:

    direct measurement of quality primitive indicators;

    processing programs and documentation of the PS with special software tools (processors);

    testing of PS programs;

    expert evaluation based on the study of programs and documentation of the PS.

    Direct measurement of quality primitive indicators is carried out by counting the number of occurrences of characteristic units, objects, structures, etc. in a particular program document, as well as by measuring the operating time of various devices and the amount of computer memory used when executing test cases. For example, a measure of memory efficiency may be the number of lines of a program in a programming language, and a measure of time efficiency may be the response time to a query. The use of particular indicators for quality primitives may be defined in the quality specification of the PS. The method of direct measurement of quality primitive indicators can be combined with the use of program testing.
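
    As an illustration only, the minimal sketch below (in Python) shows two such direct measurements: counting the lines of a source file as a size indicator and timing a single query as a time-efficiency indicator. The measured file and the stand-in query are hypothetical placeholders, not part of any particular PS.

import time

def count_source_lines(path):
    """Count non-empty lines of a source file as a simple size indicator."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

def measure_response_time(query, *args):
    """Measure the wall-clock response time of a single query as a time-efficiency indicator."""
    start = time.perf_counter()
    query(*args)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Measure this very file and a stand-in "query" (sorting a list) as placeholders.
    print("Lines of code:", count_source_lines(__file__))
    print("Response time, s:", measure_response_time(sorted, list(range(100_000))))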

    Certain software tools can be used to determine whether a PS possesses certain quality primitives. Such software tools process program texts or software documentation in order to control particular quality primitives or to obtain indicators of these quality primitives. For example, to assess the structuredness of PS programs, if they were written in a suitable structured dialect of the base programming language, it would be enough to pass them through a converter of structured programs that performs syntactic and some semantic control of this dialect and translates the texts of these programs into the input language of the base translator. However, only a small number of quality primitives can currently be controlled in this way, and even then only in rare cases. In some cases, instead of software tools that control the quality of the software, it is more useful to use tools that transform the presentation of programs or program documentation. Such, for example, is a program formatter that brings program texts to a readable form: processing the texts of PS programs with such a tool can automatically ensure that the PS possesses the corresponding quality primitive.
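
    A minimal sketch of such a processing tool is given below; it assumes, purely for illustration, that the structural property being controlled is the maximum nesting depth of control structures in a Python source text, and that the allowed threshold would be taken from the quality specification.

import ast

MAX_ALLOWED_DEPTH = 3   # an assumed threshold; a real one would come from the quality specification

def max_nesting_depth(source: str) -> int:
    """Return the deepest nesting of control structures in a Python source text."""
    tree = ast.parse(source)
    block_nodes = (ast.If, ast.For, ast.While, ast.With, ast.Try, ast.FunctionDef)

    def depth(node, current=0):
        best = current
        for child in ast.iter_child_nodes(node):
            step = 1 if isinstance(child, block_nodes) else 0
            best = max(best, depth(child, current + step))
        return best

    return depth(tree)

if __name__ == "__main__":
    sample = "for i in range(3):\n    if i:\n        print(i)\n"
    d = max_nesting_depth(sample)
    print("Max nesting depth:", d, "- acceptable" if d <= MAX_ALLOWED_DEPTH else "- too deep")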

    Testing is used to evaluate some primitives of PS quality. Such primitives primarily include the completeness of the PS, as well as its accuracy, stability, security, and other quality primitives. In some cases, testing is used in combination with other methods to evaluate individual quality primitives. Thus, to assess the quality of the documentation on the use of the PS (user documentation), testing is combined with an expert assessment of this documentation. If sufficiently complete testing was carried out during the complex debugging of the PS, the same tests can be used during the certification of the PS. In this case, the certification commission can use the protocols of the testing carried out during complex debugging. However, even in this case it is necessary to perform some new tests, or at least to re-run some of the old ones. If the testing performed during complex debugging is found to be insufficiently complete, more complete testing must be conducted. In this case, a decision may be made to conduct component tests or system tests of the PS, or to return the PS to the developers for revision. It is important that, in order to evaluate the PS according to the criterion of ease of use (during both debugging and certification), full testing is carried out on tests prepared on the basis of the documentation for application, and according to the maintainability criterion, on tests prepared for each of the documents supplied for the maintenance of the PS.
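
    As an illustration only, the following minimal sketch shows how tests from complex debugging might be packaged so that the certification commission can re-run them. The function add and its expected values are hypothetical stand-ins for a function of the certified PS and its specification.

import unittest

def add(x, y):
    # Hypothetical module-level function standing in for a function of the certified PS.
    return x + y

class AccuracyTests(unittest.TestCase):
    """Tests from complex debugging, re-run during certification to assess accuracy."""

    def test_typical_case(self):
        self.assertEqual(add(2, 3), 5)

    def test_boundary_case(self):
        self.assertEqual(add(0, 0), 0)

if __name__ == "__main__":
    unittest.main()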

    For the majority of PS quality primitives, only the method of expert assessment can currently be used. This method consists in the following: a group of experts is appointed; each expert, as a result of studying the submitted documentation, forms an opinion on whether the PS possesses the required quality primitive; the assessment of this quality primitive of the PS is then established by a vote of the members of the group. The assessment can be made either on a two-point scale ("possesses" - "does not possess") or so as to take into account the degree to which the PS possesses the quality primitive (for example, on a five-point scale). In doing so, the expert group should be guided by the specification of this primitive and the indication of the method for its assessment formulated in the quality specification of the certified PS.
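
    As an illustration only, the minimal sketch below aggregates hypothetical expert marks given on a five-point scale; the marks, the averaging rule, and the acceptance threshold are assumptions and do not come from any standardized procedure.

from statistics import mean

def expert_assessment(marks, threshold=3.0):
    """Aggregate expert marks (1..5) for one quality primitive.

    Returns the average mark and a verdict: the PS is considered to possess
    the primitive if the average reaches the threshold (an assumed rule).
    """
    avg = mean(marks)
    return avg, avg >= threshold

if __name__ == "__main__":
    marks = [4, 5, 3, 4]          # hypothetical marks of four experts
    avg, possesses = expert_assessment(marks)
    print(f"Average mark: {avg:.2f}, possesses primitive: {possesses}")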

    Literature for lecture 14.

    14.2. V.V. Lipaev. Program testing. - M.: Radio and communication, 1986. - pp. 231-245.

    14.3. D. Van Tassel. Style, development, efficiency, debugging and testing of programs. - M.: Mir, 1985. - pp. 281-283.

    14.4. B. Shneiderman. Psychology of programming. - M.: Radio and communication, 1984. - pp. 99-127.

  Lecture 15. Object approach to software development

  15.1. Objects and relations in programming. The essence of the object approach to software development.

    The world around us consists of objects and relations between them. An object embodies some entity and has a state that can change over time as a result of the influence of other objects that stand in some relation to it. An object can have an internal structure: it can consist of other objects that are also in certain relations with each other. On this basis, one can build a hierarchical structure of the world out of objects. However, in each specific consideration of the world, some objects are treated as indivisible ("point" objects), and, depending on the goals of the consideration, objects of different levels of the hierarchy can be taken as indivisible. A relation connects some objects: we consider that the union of these objects has some property. If a relation connects n objects, it is called an n-place (n-ary) relation. Each place in the union of objects connected by a specific relation can be occupied by different, but quite definite, objects (in this case one speaks of objects of a certain class). A one-place relation is called a property of an object (of the corresponding class). The state of an object can be judged by the values of its properties, or implicitly by the values of the properties of unions of objects connected with the given object by one relation or another.

    In the process of knowing or changing the world around us, we always take into consideration one or another simplified model of the world (model world), in which we include some of the objects and some of the relations of the world around us and, as a rule, one level of hierarchy. Each object that has an internal structure can represent its own model world, including the objects of this structure and the relationships that bind them. Thus, the world around us can be considered (in some approximation) as a hierarchical structure of model worlds.

    Currently, in the process of cognizing or changing the world around us, computer technology is widely used to process various kinds of information. In this connection, a computer (information) representation of objects and relations is used. Each object can be represented informationally by some data structure that reflects its state. The properties of this object can be given directly as separate components of this structure, or by special functions over this data structure. N-ary relations for N > 1 can be represented either in an active or in a passive form. In the active form, an N-place relation is represented by a program fragment that implements either an N-place function (determining the value of a property of the corresponding union of objects) or a procedure that changes the states of some of these objects based on the states of the representations of the objects connected by the relation. In the passive form, such a relation can be represented by a certain data structure (which may include the representations of the objects connected by this relation), interpreted on the basis of accepted conventions by general procedures that do not depend on specific relations (as, for example, in a relational database). In either case, the representation of the relation defines some data processing activity.
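
    As an illustration only, the minimal sketch below contrasts an active representation of a binary relation (a procedure that changes the states of the connected objects) with a passive one (a data structure of tuples interpreted by a general procedure). The Account objects and the "transfers money to" relation are hypothetical.

# Objects are represented by data structures that reflect their state.
from dataclasses import dataclass

@dataclass
class Account:
    owner: str
    balance: float

# Active representation of a binary relation "transfers money to":
# a procedure that changes the states of the connected objects.
def transfer(src: Account, dst: Account, amount: float) -> None:
    src.balance -= amount
    dst.balance += amount

# Passive representation of the same relation: a data structure of tuples,
# interpreted by a general-purpose procedure (as in a relational database).
transfers = [("alice", "bob", 100.0)]

def total_sent(owner: str) -> float:
    return sum(amount for src, _, amount in transfers if src == owner)

if __name__ == "__main__":
    a, b = Account("alice", 500.0), Account("bob", 200.0)
    transfer(a, b, 100.0)
    print(a.balance, b.balance)   # 400.0 300.0
    print(total_sent("alice"))    # 100.0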

    When exploring the model world, the user can obtain (or want to obtain) information from the computer in different ways. In one approach, he may be interested in obtaining information about individual properties of the objects of interest to him or about the results of some interaction between objects. To do this, he orders the development of a PS that performs the functions of interest to him, or of an information system capable of providing information about the relations of interest to him using an appropriate database. In the initial period of the development of computer technology (when computers were not yet powerful enough), such an approach to the use of computers was quite natural. It is this approach that gave rise to the functional (relational) approach to the development of PS, which was discussed in detail in the previous lectures. The essence of this approach is the systematic use of the decomposition of functions (relations) to build the structure of the PS and the texts of the programs included in it. At the same time, the objects themselves, to which the ordered and implemented functions were applied, were represented fragmentarily (only to the extent necessary to perform these functions) and in a form convenient for the implementation of these functions. Thus, a complete and adequate computer representation of the model world of interest to the user was not provided: mapping it onto the PS being used could turn out to be a rather laborious task for the user, and an attempt to even slightly expand the volume or nature of the information about the model world obtained from such a PS could lead to its serious modernization. This approach to the development of PS is supported by most of the programming languages in use, ranging from assembly languages and procedural languages (FORTRAN, Pascal) to functional languages (LISP) and logic programming languages (Prolog).

    With another approach to the study of the model world using a computer, the user may be interested in observing the changes in the states of objects as a result of their interactions. This requires a fairly complete representation of the objects of interest in the computer, and the software components that implement the relations in which an object participates are explicitly associated with it. To implement this approach, it was necessary to build software tools that simulate the processes of interaction between objects (the model world). With the help of traditional development tools, this turned out to be quite a laborious task. True, programming languages specifically oriented toward such modeling did appear, but this only partially simplified the task of developing the required PS. The most complete solution of this problem is given by the object approach to the development of PS. Its essence lies in the systematic use of the decomposition of objects in the construction of the structure of the PS and the texts of the programs included in it. In this case, the functions (relations) performed by such a PS are expressed through the relations of objects of different levels, i.e. their decomposition essentially depends on the decomposition of objects.
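
    As an illustration only, the minimal sketch below contrasts the two decompositions on the same toy model world: a function applied to a fragmentary representation of an object versus an object that carries its state together with the operations attached to it. The Rectangle example is hypothetical and is not taken from the lecture.

# Functional (relational) decomposition: the object is represented fragmentarily,
# only to the extent needed by the ordered function.
def perimeter(width: float, height: float) -> float:
    return 2 * (width + height)

# Object decomposition: the object carries its state, and the relations it
# participates in are attached to it as methods.
class Rectangle:
    def __init__(self, width: float, height: float) -> None:
        self.width = width
        self.height = height

    def perimeter(self) -> float:
        return 2 * (self.width + self.height)

    def scale(self, factor: float) -> None:
        # A change of state as a result of an interaction.
        self.width *= factor
        self.height *= factor

if __name__ == "__main__":
    print(perimeter(2.0, 3.0))   # 10.0
    r = Rectangle(2.0, 3.0)
    r.scale(2.0)
    print(r.perimeter())         # 20.0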

    When speaking about the object approach, one should also clearly understand which objects are in question: the objects of the user's model world, their information representations, or the program objects with whose help the PS is built. In addition, one should distinguish between objects proper ("passive" objects) and subjects ("active" objects).

  15.2. Objects and subjects in programming.

  15.3. Objective and subjective approaches to software development.

    Descartes noted that people usually have an object-oriented view of the world.

    It is considered that object-oriented design is based on the following principles:

    highlighting abstractions,

    access limitation,

    modularity,

    hierarchy,

    typing,

    parallelism,

    sustainability.

    But all this can be applied in a functional approach.

    It is necessary to distinguish between the advantages and disadvantages of the general object approach and its special case - the subject-oriented approach.

    Advantages of the general object approach:

    Natural mapping of the real world onto the structure of the PS (natural human perception of the capabilities of the PS; there is no need to "invent" the structure of the PS, since natural analogies can be used).

    The use of sufficiently meaningful structural units of the PS (an object as an integral whole of non-redundant connections; informationally strong modules).

    Reduction of the complexity of software development through the use of a new level of abstraction (a hierarchy of "non-program" abstractions in the development of software: the classification of real-world objects, the method of analogies with nature) and a new inheritance mechanism.

  15.4. An object approach to the development of an external description and software architecture.

    Object-oriented design is a method that uses object decomposition; the object-oriented approach has its own system of notation and offers a rich set of logical and physical models for designing highly complex systems.

    Object-oriented analysis (OOA) complements the object approach. OOA aims to create models that are closer to reality using the object-oriented approach; it is a methodology in which requirements are formed on the basis of the concepts of classes and objects that make up the vocabulary of the subject area.

    Features of object-oriented programming: objects, classes, object behavior, properties, events.
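
    As an illustration only, the following minimal sketch shows one way these concepts can appear in code: a class whose objects have state, a property, behavior, and a simple callback-based event. The Button class and its click event are hypothetical.

class Button:
    """A class: its instances (objects) have state, properties, behavior, and events."""

    def __init__(self, caption: str) -> None:
        self._caption = caption
        self._click_handlers = []      # subscribers to the "click" event

    @property
    def caption(self) -> str:          # a property exposing part of the object's state
        return self._caption

    @caption.setter
    def caption(self, value: str) -> None:
        self._caption = value

    def on_click(self, handler) -> None:
        self._click_handlers.append(handler)

    def click(self) -> None:           # behavior that raises the event
        for handler in self._click_handlers:
            handler(self)

if __name__ == "__main__":
    ok = Button("OK")
    ok.on_click(lambda btn: print(f"{btn.caption} pressed"))
    ok.click()                         # prints "OK pressed"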

  Literature for lecture 15.

    15.1. K. Futi, N. Suzuki. Programming languages and VLSI circuitry. - M.: Mir, 1988. - pp. 85-98.

    15.2. Ian Somerville. Software engineering. - Addison-Wesley Publishing Company, 1992. P. ?-?

    15.3. G. Booch. Object-oriented design with examples of application: trans. from English. - M.: Concord, 1992.

    15.4. V.Sh.Kaufman. Programming languages. Concepts and principles. Moscow: Radio and communication, 1993.

MINISTRY OF EDUCATION AND SCIENCE

DONETSK PEOPLE'S REPUBLIC

STATE PROFESSIONAL

EDUCATIONAL INSTITUTION

"DONETSK INDUSTRIAL AND ECONOMIC COLLEGE"

WORKING PROGRAM

Educational practice UP.01

professional module PM.01 Development of software modules for computer systems

specialty 09.02.03 "Programming in computer systems"

Compiled by:

Volkov Vladimir Aleksandrovich, teacher of computer disciplines of the qualification category "specialist of the highest category", State Educational Institution "Donetsk Industrial and Economic College"

The program is approved by: Vovk Pavel Andreevich, Director of "Smart IT Service"

1. PASSPORT OF THE PRACTICE PROGRAM

2. RESULTS OF PRACTICE

3. STRUCTURE AND CONTENT OF PRACTICE

4. CONDITIONS FOR ORGANIZING AND CONDUCTING PRACTICE

5. MONITORING AND EVALUATION OF PRACTICE RESULTS

1 PASSPORT OF THE PROGRAM OF EDUCATIONAL PRACTICE UP. 01

1.1 Place of training practice UP.01

The program of educational practice UP.01 of the professional module PM.01 "Development of software modules for computer systems", specialty 09.02.03 "Programming in computer systems", enlarged group 09.00.00 "Computer science and computer technology", covers the mastering of the main type of professional activity (VPD):

Development of software modules for computer systems, and of the related professional competencies (PC):

Perform the development of specifications for individual components.

Carry out the development of software product code based on ready-made specifications at the module level.

Perform debugging of program modules using specialized software tools.

Perform testing of software modules.

To optimize the program code of the module.

Develop design and technical documentation components using graphic specification languages.

The program of educational practice UP.01 of the professional module PM.01 "Development of software modules for computer systems" can be used in additional professional education and in the professional training of employees in specialty 09.02.03 "Programming in computer systems" on the basis of secondary (complete) general education. Work experience is not required.

1.2 Goals and objectives of educational practice UP.01

In order to master the specified type of professional activity and the relevant professional competencies, the student in the course of educational practice UP.01 must:

have practical experience:

    development of the algorithm of the task and its implementation by means of computer-aided design;

    development of a software product code based on a finished specification at the module level;

    use of tools at the stage of debugging a software product;

    testing a software module according to a specific scenario;

be able to:

    carry out the development of the program module code in modern programming languages;

    create a program according to the developed algorithm as a separate module;

    debug and test the program at the module level;

    draw up software documentation;

    use tools to automate the preparation of documentation;

know:

    main stages of software development;

    basic principles of structural and object-oriented programming technology;

    basic principles of debugging and testing software products;

methods and means of developing technical documentation.

1.3 Number of weeks (hours) for mastering the program of educational practice UP.01

In total: 1.5 weeks, 54 hours.

2 RESULTS OF PRACTICE

The result of educational practice UP.01 of the professional module PM.01 "Development of software modules for computer systems" is the mastering of general competencies (OK):

Name of the practice result:

OK 1. Understand the essence and social significance of your future profession, show a sustained interest in it.

OK 2. Organize their own activities, choose standard methods and ways of performing professional tasks, evaluate their effectiveness and quality.

OK 3. Make decisions in standard and non-standard situations and be responsible for them.

OK 4. Search and use the information necessary for the effective implementation of professional tasks, professional and personal development.

OK 5. Use information and communication technologies in professional activities.

OK 6. Work in a collective and in a team, communicate effectively with colleagues, management, and consumers.

OK 7. Take responsibility for the work of team members (subordinates), for the result of completing tasks.

OK 8. Independently determine the tasks of professional and personal development, engage in self-education, consciously plan advanced training (improvement of qualifications).

OK 9. Navigate in conditions of frequent change of technologies in professional activity.

professional competencies (PC):

Type of professional activity: Mastering the main type of professional activity.

Names of the practice results:

    use of resources of local and global computer networks;

    management of data files on local, removable storage devices, as well as on disks of a local computer network and on the Internet;

    printing, replication and copying of documents on a printer and other office equipment.

    current control in the form of a report on each practical work.

    module qualifying exam.

    literacy and accuracy of work in application programs: text and graphic editors, databases, presentation editor;

    the speed of searching for information in the contents of databases.

    accuracy and literacy of e-mail settings, server and client software;

    the speed of information search using technologies and services of the Internet;

    accuracy and literacy of entering and transmitting information using Internet technologies and services.

    literacy in the use of methods and means of protecting information from unauthorized access;

    correctness and accuracy of backup and data recovery;

    literacy and accuracy of working with file systems, various file formats, file management programs;

    maintenance of reports and technical documentation.

3 STRUCTURE AND CONTENT OF THE PROGRAM OF TRAINING PRACTICE UP.01

3.1 Thematic plan

Codes of generated competencies: PC 1.1 - PC 1.6

Name of the professional module: PM.01 "Development of software modules for computer systems"

Scope of time assigned to practice (in weeks, hours): 1.5 weeks, 54 hours

Dates:

3.2 Practice content

Activities

Types of work

Name of academic disciplines and interdisciplinary courses (indicating topics) ensuring the performance of the types of work

Number of hours (weeks)

"Mastering the main type of professional activity"

Topic 1. Introduction. Algorithms for solving problems. Structure of a linear algorithm. Structure of a cyclic algorithm. Algorithm of a subroutine (function).

Formed knowledge on the basics of creating special objects

Topic 2. The Scratch environment.

Formed knowledge on the basics of process automation tools. Formed knowledge on the basics of applying animation effects to objects, using hyperlinks and buttons, setting up a demonstration, and saving presentations in different formats.

MDK.01.01 "System programming"

Topic 3. Creating a training program (a lesson on a subject).

Formed knowledge on the basics of data analysis using processor functions

MDK.01.02 "Applied programming"

Topic 4. Game program development.

Formed knowledge on the basics of calculating the final characteristics

MDK.01.01 "System programming"

Topic 5. Graphical programming language LabVIEW.

Formed knowledge on the basics of creating a processor test.

MDK.01.02 "Applied programming"

Topic 6. Building an application using LabVIEW.

Formed knowledge of the basics of the user's dialogue with the system

MDK.01.02 "Applied programming"

Topic 7. Reuse of a program fragment.

Formed knowledge of the operators and functions of the system.

MDK.01.02 "Applied programming"

Topic 8. Workshop on LabVIEW. Labor protection when working with a computer at the user's workplace.

Formed knowledge on the calculation of elementary functions. Formed knowledge on labor protection.

MDK.01.02 "Applied programming".

OP.18 "Labor protection"

Topic 9. Conclusions. Compiling the practice report.

Skills of analyzing computer technologies and solving problems are formed.

MDK.01.01 "System programming"

MDK.01.02 "Applied programming"

MDK.04.01 "Office software"

4 CONDITIONS OF ORGANIZATION AND CARRYING OUT

EDUCATIONAL PRACTICE UP. 01

4.1 Requirements for the documentation necessary for practice:

The working program of educational practice UP.01 of the professional module PM.01 "Development of software modules for computer systems" is part of the training program for mid-level specialists of the State Vocational Educational Institution "Donetsk Industrial and Economic College", developed in accordance with the state educational standard of secondary vocational education in the specialty 09.02.03 "Programming in computer systems" and based on the curriculum for the specialty, the work programs of the disciplines MDK.01.01 "System Programming" and MDK.01.02 "Applied Programming", and the methodological recommendations for the educational and methodological support of the practice of students mastering educational programs of secondary vocational education.

4.2 Requirements for educational and methodological support of practice:

a list of approved tasks by type of work, guidelines for students on the performance of work, recommendations for the implementation of practice reports.

4.3 Requirements for material and technical support:

The organization of the practice requires the presence of classrooms and laboratories.

Equipment of the classroom and workplaces:

    seats according to the number of students (table, computer, chair);

    teacher's workplace (table, computer, chair);

    cabinet for storage of teaching aids and information carriers;

    tasks for an individual approach to learning and for organizing students' independent work and exercises on the computer;

    reference and methodical literature;

    a set of system, application and training programs for PC on optical and electronic media;

    journal of instructing students on labor protection;

    a set of teaching aids.

Technical training aids:

    classroom board;

    personal computer with licensed software;

    laser printer;

    educational PCs;

    set of interactive equipment (projector, screen, speakers);

    fire extinguishing means (fire extinguisher).

Equipment of the classroom and of the developers' workstations: personal computers (monitor, system unit, keyboard, mouse), a set of educational and methodological documentation, and software in accordance with the content of the discipline (programming language environments).

All computers in the class are connected to a local network, have access to the network storage of information and have access to the Internet.

Communication equipment:

    network adapters;

    network cables;

    WiFi wireless equipment.

Components and equipment for network installation.

4.4 List of educational publications, Internet resources, additional literature

Main sources:

    Olifer V.G. Network operating systems: a textbook for universities / V.G. Olifer, N.A. Olifer. - 2nd ed. - St. Petersburg: Piter, 2009, 2008. - 668 p.

    A. Tanenbaum. Operating Systems: Design and Implementation. - St. Petersburg: Piter, 2006. - 568 p.

    Pupkov K.A. Mastering the Unix operating system / K.A. Pupkov, A.S. Chernikov, N.M. Yakusheva. - Moscow: Radio and communication, 1994. - 112 p.

    L. Beck. Introduction to System Programming. - M.: Mir, 1988.

    Grekul V.I., Denishchenko G.N., Korovkina N.L. Design of information systems / Moscow: Binom, 2008. - 304 p.

    Lipaev, V. V. Software engineering. Methodological foundations [Text]: Proc. / V. V. Lipaev; State. un-t - Higher School of Economics. - M.: TEIS, 2006. - 608 p.

    Lavrishcheva E. M., Petrukhin V. A. Methods and means of software engineering. - Textbook

    Ian Somerville. Software Engineering, 6th edition: trans. from English. - M.: Williams Publishing House, 2002. - 624 p.

    Excel 2010: Professional Programming in VBA: trans. from English. - M.: LLC "I.D. Williams", 2012. - 944 p.: ill. - Parallel title in English.

    Fowler M. Refactoring: Improving the Design of Existing Code: trans. from English. - St. Petersburg: Symbol Plus, 2003. - 432 p.

Additional sources:

    Volkov V.A. Methodological instructions for the implementation of practical work in the discipline "System Programming". - Donetsk: DONPEK, 2015.

    Volkov V.A. Guidelines for the implementation of the course project. - Donetsk: DONPEC, 2015.

Internet resources:

    System programming [electronic resource] / Access mode: http://www.umk3.utmn.ru.

    Software and Internet resources: http://www.intuit.ru

    Literature by discipline - http://www.internet-technologies.ru/books/

    Electronic textbook "Introduction to Software Engineering" - http://www.intuit.ru/studies/professional_skill_improvements/1419/info

    Electronic textbook "Programming Technology" - http://bourabai.kz/alg/pro.htm

4.5 Requirements for practice leaders from an educational institution and organization

Requirements for practice leaders from an educational institution:

engineering and pedagogical staff: qualified specialists - teachers of interdisciplinary courses and general professional disciplines. Experience in organizations of the relevant professional field is mandatory.

Masters of industrial training: must hold the 5th-6th qualification category, with a mandatory internship in specialized organizations at least once every 3 years. Experience in organizations of the relevant professional field is mandatory.

5 MONITORING AND EVALUATION OF RESULTS

EDUCATIONAL PRACTICE UP. 01

The form of reporting on educational practice UP.01 is a practice report drawn up in accordance with the requirements of the methodological recommendations.

Results (mastered professional competencies)

Basic indicators of the result of preparation

Forms and methods of control

PC 1.1. Carry out the development of specifications for individual components

Development of an algorithm for the task and its implementation by means of computer-aided design

Expert observation and evaluation of the student's activities in the process of mastering the educational program in practical classes, when performing work on educational and industrial practice.

PC 1.2. Carry out the development of software product code based on ready-made specifications at the module level.

Know the basic principles of structural and object-oriented programming technology.

To carry out the development of the program module code in modern programming languages.

PC 1.3. Perform debugging of program modules using specialized software tools

Perform debugging and testing of the program at the module level.

PC 1.4. Perform testing of software modules.

Create a program according to the developed algorithm as a separate module.

PC 1.5. Perform module code optimization

Development of a software product code based on a finished specification at the module level.

PC 1.6. Develop design and technical documentation components using graphical specification languages

Know the methods and means of developing technical documentation.

Prepare software documentation.

Use tools to automate documentation.

The forms and methods of monitoring and evaluating learning outcomes should make it possible to check not only the formation of the students' professional competencies, but also the development of general competencies and the skills that support them.

Results (mastered general competencies)

Main indicators for evaluating the result

Forms and methods of control and evaluation

OK 1. Understand the essence and social significance of your future profession, show a sustained interest in it.

Demonstration of constant interest in the future profession;

- the validity of the application of mastered professional competencies;

Expert observation and assessment in practical classes when performing work on industrial practice;

OK 2. Organize their own activities, determine the methods and ways of performing professional tasks, evaluate their effectiveness and quality.

Justification of goal setting, selection and application of methods and methods for solving professional problems;

Carrying out self-analysis and correction of the results of their own work

Evaluation in practical classes in the performance of work;

Observation during practice;

Introspection

OK 3. Solve problems, assess risks and make decisions in non-standard situations.

The effectiveness of solving standard and non-standard professional tasks within a given time;

The effectiveness of the plan to optimize the quality of work performed

Interpretation of the results of monitoring the activities of the student in the process of completing tasks

OK 4. Search, analyze and evaluate the information necessary for setting and solving professional problems, professional and personal development.

Selection and analysis of the information necessary for the clear and fast execution of professional tasks and for professional and personal development

Expert assessment in the course of work;

Self-control in the course of posing and solving problems

OK 5. Use information and communication technologies to improve professional activities.

ability to use information and communication technologies to solve professional problems

assignment evaluation

OK 6. Work in a team and team, ensure its cohesion, communicate effectively with colleagues, management, consumers.

Ability to interact with a group, teachers, master of industrial training

OK 7. Set goals, motivate the activities of subordinates, organize and control their work with the assumption of responsibility for the result of the tasks.

- self-analysis and correction of the results of their own work and the work of the team

Observation of the progress of work in the group in the process of production practice

OK 8. Independently determine the tasks of professional and personal development, engage in self-education, consciously plan advanced training.

Organization of independent work on the formation of a creative and professional image;

Organization of work on self-education and improvement of qualifications

Observation and evaluation in the process of industrial practice;

Reflective analysis (algorithm of student actions);

Practice diary;

Student Portfolio Analysis

OK 9. Be prepared to change technologies in professional activities.

Analysis of innovations in the field of technologies for software development

Evaluation of solutions to situational problems;

Business and organizational-educational games;

Observation and evaluation in practical classes, in the process of production practice