Terms: synchronous and asynchronous data input/output; synchronous and asynchronous I/O.

As you know, there are two main input/output modes: exchange by polling the readiness of the I/O device, and exchange driven by interrupts.

In the readiness-polling mode, I/O is controlled entirely by the central processor. The CPU sends a command to the control device to perform some action on the I/O device; the control device executes it, translating signals understood by the CPU and the control device into signals understood by the I/O device. The speed of the I/O device, however, is much lower than the speed of the CPU, so the processor has to wait a very long time for the ready signal, continuously polling the corresponding interface line for its presence or absence. It makes no sense to issue a new command before the ready signal confirms execution of the previous one. In polling mode, the driver that controls the exchange with the external device executes a "check for readiness signal" command in a loop, and until the ready signal appears the driver does nothing else. CPU time is therefore used irrationally. It is far more profitable to issue an I/O command, forget about the I/O device for a while, and switch to executing another program, treating the later appearance of the readiness signal as an interrupt request from the I/O device. These readiness signals are precisely the interrupt request signals.
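The busy-wait loop described above can be sketched in C. This is a simulation only: status_ready() and read_data_register() are hypothetical stand-ins for reading a device's status and data registers, with the simulated device becoming ready after a few polls.

```c
/* Hypothetical device model: the "status register" reports ready
   only after a few polls, as a real slow device would. */
static int polls_left = 3;

static int status_ready(void) {          /* stand-in for the status line */
    return --polls_left <= 0;
}

static int read_data_register(void) {    /* stand-in for the data register */
    return 42;
}

/* Busy-wait exchange: the CPU does nothing useful until the device
   signals readiness, exactly the irrational use of CPU time described
   in the text. */
int polled_read(void) {
    while (!status_ready())
        ;                                /* "check for readiness" in a loop */
    return read_data_register();
}
```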

The interrupt-driven exchange mode is essentially an asynchronous control mode. So as not to lose contact with a device, a countdown can be started within which the device must execute the command and raise an interrupt request signal. The maximum time within which an I/O device or its controller must raise the interrupt request signal is often called the timeout. If this time expires after the latest command was issued and the device still has not responded, it is concluded that communication with the device has been lost and it can no longer be controlled. The user and/or the task receives an appropriate diagnostic message.

Fig. 4.1. I/O control

Drivers operating in interrupt mode are complex sets of program modules and can have several sections: a start section, one or more continuation sections, and a termination section.

The startup section initiates the I/O operation. This section is run to turn on an I/O device or simply to initiate another I/O operation.

The continuation section (there may be several of them if the exchange-control algorithm is complex and several interrupts are required to complete one logical operation) carries out the main work of data transfer. It is, in effect, the main interrupt handler. The interface in use may require several sequences of control commands, while the device usually has only one interrupt signal. Therefore, after one interrupt section has executed, the interrupt supervisor must transfer control to a different section on the next ready signal. This is done by changing the interrupt-processing address after each section completes; if there is only one interrupt section, it itself transfers control to the appropriate processing module.

The termination section typically turns off the I/O device or simply ends the operation.

An I/O operation can be performed with respect to the program module that requested it in synchronous or asynchronous mode. The meaning of these modes is the same as for the system calls discussed above: synchronous mode means the program module suspends its work until the I/O operation completes, while in asynchronous mode the module continues to execute, in multiprogramming fashion, simultaneously with the I/O operation. The difference is that an I/O operation can be initiated not only by a user process, in which case it is performed as part of a system call, but also by kernel code, for example by the virtual memory subsystem to read in a page that is missing from memory.

Fig. 7.1. Two I/O modes

The I/O subsystem must give its clients (user processes and kernel code) the ability to perform both synchronous and asynchronous I/O operations, depending on the caller's needs. I/O system calls are most often framed as synchronous procedures, because such operations take a long time and the user process or thread would in any case have to wait for the result before continuing its work. Internal I/O calls from kernel modules are usually executed as asynchronous procedures, since kernel code needs the freedom to choose what to do next after requesting an I/O operation. Asynchronous procedures lead to more flexible solutions: on top of an asynchronous call you can always build a synchronous one by adding an intermediate procedure that blocks the caller until the I/O completes. Sometimes an application process also needs an asynchronous I/O operation, for example in a microkernel architecture, where part of the operating system code runs in user mode as an application process and requires complete freedom of action even after issuing an I/O request.

In computer systems, synchronous input and synchronous output are understood as the input or output of data samples in which the time intervals (the rate) at which samples are delivered for input or output are strictly preserved. I/O synchronization is achieved through some form of hardware support (timers and various other peripheral devices with synchronization capabilities) and through data buffering to smooth out the I/O data flow. At the same time, the term synchronous I/O does not necessarily imply the presence of a synchronization signal on the I/O interface: clocking can be provided either internally or externally.

Typically, in processor and measurement systems, the direction of data flow (to input or output) is considered relative to the computer (processor) standing at the center of the architecture in question. In particular, samples of the ADC and digital (discrete) inputs are considered input data, and samples of the DAC and digital (control) outputs are considered output data.

With asynchronous input and output, the time intervals (rate) at which samples are delivered for input or output are not preserved. Data arrives for input or output at the pace of the transfer interface itself, with unpredictable buffering delays. For the programmer this means, in particular, that when using asynchronous I/O functions it makes no sense to compare the moment an asynchronous I/O function is called with the physical moment the operation is performed "on the other side of the interface" (such a comparison makes sense only with statistical processing of the data). That an asynchronous output operation has actually been performed can be verified by a confirmation, if the operation provides one, or by a subsequent data-input operation. For asynchronous I/O, the buffering mentioned above can play the opposite role and increase the uncertainty of data delivery time if there is no mechanism for synchronizing and controlling the transmission traffic.


We've waited too long for him

What could be more stupid than waiting?

B. Grebenshchikov

During this lecture you will learn

    Using the select system call

    Using the poll system call

    Some aspects of using select/poll in multi-threaded programs

    Standard asynchronous I/O facilities

select system call

If your program primarily deals with I/O operations, you can get the most important benefits of multithreading in a single-threaded program by using the select(3C) system call.

I/O devices typically operate much more slowly than the CPU, so the CPU usually has to wait for them to complete operations. For this reason, in all operating systems synchronous I/O system calls are blocking operations.

This also applies to network communications - interaction via the Internet involves long delays and, as a rule, occurs through a not very wide and/or overloaded communication channel.

If your program works with several I/O devices and/or network connections, it gains nothing from blocking on an operation involving one of them, because in that state it may miss the opportunity to perform non-blocking I/O on another device. This problem can be solved by creating threads that work with the different devices; in previous lectures we studied everything necessary to develop such programs. However, there are other means of solving this problem.

The select(3C) system call allows you to wait for the readiness of multiple devices or network connections (indeed, of most types of objects that can be identified by a file descriptor).

When one or more of the handles are ready to transmit data, select(3C) returns control to the program and passes lists of ready handles as output parameters.

On 32-bit versions of Unix SVR4, including Solaris, fd_set is a 1024-bit mask; on 64-bit versions of SVR4 it is a 65536-bit mask. The size of the mask determines not only the maximum number of file descriptors in a set but also the largest file descriptor value the set can hold. The mask size in your version of the system can be determined at compile time from the value of the preprocessor symbol FD_SETSIZE.

Unix file descriptor numbering starts at 0, so the maximum file descriptor number is FD_SETSIZE-1.

So if you use select(3C), you need to set limits on the number of handles your process can handle. This can be done with the ulimit(1) shell command before starting the process, or with the setrlimit(2) system call while your process is running. Of course, setrlimit(2) must be called before you start creating file descriptors.
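A minimal sketch of raising the descriptor limit from within the process with getrlimit/setrlimit(2); the helper name and the target value are illustrative, and the call must happen before any descriptors beyond the old limit are needed.

```c
#include <sys/resource.h>

/* Sketch: raise the soft limit on open file descriptors, capped by
   the hard limit. Returns 0 on success, -1 on failure. */
int raise_fd_limit(rlim_t want) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    if (rl.rlim_cur < want && want <= rl.rlim_max)
        rl.rlim_cur = want;      /* soft limit may not exceed the hard limit */
    return setrlimit(RLIMIT_NOFILE, &rl);
}
```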

If you need to use more than 1024 descriptors in a 32-bit program, Solaris 10 provides a transitional API. To use it, define the preprocessor symbol FD_SETSIZE with a numeric value greater than 1024 before including the header file.

When the header is then included, the necessary preprocessor directives take effect: the fd_set type is defined as the larger bit mask, and select and the other system calls of this family are redefined to work with masks of that size.

Some implementations build fd_set by other means, without bit masks. For example, Win32 provides select as part of the so-called Winsock API, where fd_set is implemented as a dynamic array containing file descriptor values. You should therefore never rely on knowledge of the internal structure of the fd_set type.
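Because fd_set is opaque, a program should manipulate it only through the standard FD_ZERO/FD_SET/FD_CLR/FD_ISSET macros, which work on every implementation. A small self-checking sketch (the function name is invented):

```c
#include <sys/select.h>

/* fd_set must be treated as opaque: use only the standard macros,
   never the internal representation. */
int demo_fdset(void) {
    fd_set s;
    FD_ZERO(&s);                 /* empty the set */
    FD_SET(0, &s);               /* add stdin */
    FD_SET(5, &s);               /* add descriptor 5 */
    FD_CLR(5, &s);               /* remove it again */
    return FD_ISSET(0, &s) && !FD_ISSET(5, &s);
}
```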

The select(3C) call takes the following parameters:

  • int nfds - a number one greater than the largest file descriptor number in all the sets passed as parameters.

  • fd_set *readfds - input parameter, the set of descriptors to check for readability. End of file or the closing of a socket is considered a special case of readiness to read. Regular files are always considered ready for reading. If you want to check that a listening TCP socket is ready for accept(3SOCKET), it should also be included in this set. As an output parameter, it is the set of descriptors ready for reading.

  • fd_set *writefds - input parameter, the set of descriptors to check for readiness to write. A deferred write error is considered a special case of readiness to write. Regular files are always ready for writing. If you want to check an asynchronous connect(3SOCKET) operation for completion, the socket should be included in this set. As an output parameter, it is the set of descriptors ready for writing.

  • fd_set *errorfds - input parameter, the set of descriptors to check for exceptional conditions. The definition of an exceptional condition depends on the type of descriptor; for TCP sockets, one occurs when out-of-band data arrives. Regular files are always considered to be in an exceptional condition. As an output parameter, it is the set of descriptors on which exceptional conditions have occurred.

  • struct timeval *timeout - the timeout, a time interval specified to microsecond precision. If this parameter is NULL, select(3C) waits indefinitely; if the structure specifies a zero interval, select(3C) operates in polling mode, that is, it returns control immediately, possibly with empty descriptor sets.
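The zero-interval polling mode can be shown with a small helper (the name readable_now is invented); it asks whether a descriptor is readable right now, without blocking:

```c
#include <sys/select.h>
#include <string.h>

/* Polling-mode check: is descriptor fd readable at this moment?
   A zeroed timeval makes select(3C) return control immediately. */
int readable_now(int fd) {
    fd_set rset;
    struct timeval tv;

    FD_ZERO(&rset);
    FD_SET(fd, &rset);
    memset(&tv, 0, sizeof tv);   /* zero interval: poll, do not wait */
    return select(fd + 1, &rset, NULL, NULL, &tv) > 0 && FD_ISSET(fd, &rset);
}
```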

Example 1: two-way copying of data between the terminal and a network connection. The example is taken from W. R. Stevens' book Unix Network Programming. Instead of the standard system calls, it uses the "wrapper" functions described in the file "unp.h".

#include "unp.h"

void str_cli(FILE *fp, int sockfd)
{
    int maxfdp1, stdineof;
    fd_set rset;
    char sendline[MAXLINE], recvline[MAXLINE];

    stdineof = 0;
    FD_ZERO(&rset);
    for ( ; ; ) {
        if (stdineof == 0)
            FD_SET(fileno(fp), &rset);
        FD_SET(sockfd, &rset);
        maxfdp1 = max(fileno(fp), sockfd) + 1;
        Select(maxfdp1, &rset, NULL, NULL, NULL);

        if (FD_ISSET(sockfd, &rset)) {          /* socket is readable */
            if (Readline(sockfd, recvline, MAXLINE) == 0) {
                if (stdineof == 1)
                    return;                     /* normal termination */
                else
                    err_quit("str_cli: server terminated prematurely");
            }
            Fputs(recvline, stdout);
        }

        if (FD_ISSET(fileno(fp), &rset)) {      /* input is readable */
            if (Fgets(sendline, MAXLINE, fp) == NULL) {
                stdineof = 1;
                Shutdown(sockfd, SHUT_WR);      /* send FIN */
                FD_CLR(fileno(fp), &rset);
                continue;
            }
            Writen(sockfd, sendline, strlen(sendline));
        }
    }
}

Note that the Example 1 program recreates the handle sets before each select(3C) call.

    This is necessary because select(3C) modifies its parameters on normal completion.

    select(3C) is considered MT-Safe, but when using it in a multi-threaded program, you need to keep the following point in mind. Indeed, select(3C) itself does not use local data and therefore calling it from multiple threads should not lead to problems. However, if multiple threads are working with overlapping sets of file descriptors, the following scenario is possible:

    Threads 1 and 2 both include descriptor s in their sets, call select(3C), and both are told that s is ready for reading.

    Thread 1 calls read on s and drains all the data from its buffer.

    Thread 2 then calls read on s and blocks.

    To avoid this scenario, handling file descriptors under such conditions should be protected by mutexes or some other mutual exclusion primitives.

    It is important to emphasize that it is not select(3C) itself that needs protection, but the whole sequence of operations on a given file descriptor, starting with including the descriptor in a set for select and ending with receiving the data from that descriptor, or more precisely, with updating the pointers into the buffer into which you read that data. If this is not done, even more exciting scenarios are possible, for example:

    Thread 1 includes descriptor s in the readfds set and calls select.

    select in thread 1 returns s as ready for reading.

    Thread 2 calls read on s, receives the data, and writes it over the data already received by thread 1.

In Chapter 10, we'll look at the architecture of an application in which multiple threads share a common pool of file descriptors - the so-called worker thread architecture.

In this case, the threads, of course, must indicate to each other which descriptors they are currently working with.

From the perspective of multithreaded program development, an important drawback of select(3C), or perhaps a drawback of the POSIX Threads API, is that POSIX synchronization primitives are not file descriptors and cannot be waited on with select(3C). Yet in real development of multithreaded I/O programs it would often be useful to wait, in a single operation, both for file descriptors to become ready and for other threads of the same process.

The synchronous I/O model

The read(2) and write(2) system calls and their analogues return control only after the data has already been read or written, which often means the calling thread blocks. Note that in reality things are not quite so simple: read(2) does have to wait until the data has been physically read from the device, but write(2) by default operates in lazy-write mode. It returns once the data has been transferred to a system buffer, generally before the data is physically transferred to the device. This usually improves the observed performance of the program considerably and allows the memory holding the data to be reused immediately after write(2) returns. But delayed writing also has significant disadvantages. The main one is that you learn the result of the physical operation not from the return code of this write(2), but only some time later, usually from the return code of the next write(2) call. For some applications, such as transaction monitors and many real-time programs, this is unacceptable, and they are forced to turn lazy writing off. This is done with the O_SYNC flag, which can be set when the file is opened and changed on an open file by calling fcntl(2).

Blocking is inconvenient, and working in polling mode is not always acceptable either. The point is that select(3C) and poll(2) consider a file descriptor ready for reading only after data has physically appeared in its buffer, but some devices begin to send data only after they are explicitly asked to do so.
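Turning lazy writing off as described above might look like the following sketch. The helper name is invented; note that on some systems (Linux, for example) fcntl(2) silently ignores changes to O_SYNC on an open descriptor, so setting the flag at open(2) time is the portable route:

```c
#include <fcntl.h>

/* Sketch: toggle synchronous writing on an open descriptor.
   Portability caveat: some kernels ignore O_SYNC in F_SETFL,
   in which case the flag must be given to open(2) instead. */
int set_sync(int fd, int on) {
    int flags = fcntl(fd, F_GETFL);
    if (flags == -1)
        return -1;
    if (on)
        flags |= O_SYNC;
    else
        flags &= ~O_SYNC;
    return fcntl(fd, F_SETFL, flags);
}
```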

Also, for some applications, especially real-time ones, it is important to know the exact moment when data begins to arrive. For such applications it may likewise be unacceptable that select(3C) and poll(2) consider regular files always ready for reading and writing. A file system really is read from disk, and although a disk works much faster than most network connections, access to it still involves delays. For hard real-time applications these delays may be unacceptable, yet without an explicit read request the file system gives out no data!

For hard real-time applications, another aspect of the I/O problem can be significant. Hard RT applications have a higher priority than the kernel, so their execution of system calls, even non-blocking ones, can lead to priority inversion.

The solution to these problems has been known for a long time and is called asynchronous input/output. In this mode, I/O system calls return control immediately after a request has been submitted to the device driver, typically even before the data has been copied to the system buffer. Forming a request amounts to placing an entry (an IRP, I/O Request Packet) into a queue. For this it is enough to briefly capture the mutex that protects the "tail" of the queue, so the priority inversion problem is easily overcome. To find out whether the request has completed, and if so how exactly, and whether the memory holding the data can be reused, a special API is provided (see Fig. 8.1).


Fig. 8.1.

The asynchronous model was the primary I/O model in operating systems such as DEC RT-11, DEC RSX-11, VAX/VMS and OpenVMS. Almost all real-time operating systems support this model in one form or another. Unix systems have used several incompatible APIs for asynchronous I/O since the late 1980s. In 1993, ANSI/IEEE adopted the POSIX 1003.1b standard, which describes a standardized API that we will explore later in this section.

In Solaris 10, asynchronous I/O functionality is included in the libaio.so library. To build programs that use these functions, you must use the -laio switch. To generate requests for asynchronous I/O, the functions aio_read(3AIO), aio_write(3AIO) and lio_listio(3AIO) are used.

The aio_read(3AIO) and aio_write(3AIO) functions take a single parameter, struct aiocb *aiocbp. The aiocb structure is defined in the file <aio.h> and contains the following fields:

  • int aio_fildes - the file descriptor.
  • off_t aio_offset - the offset in the file at which writing or reading will begin.
  • volatile void *aio_buf - the buffer into which data is to be read, or from which data is to be written.
  • size_t aio_nbytes - the buffer size. Like the traditional read(2), aio_read(3AIO) can read less data than was requested, but will never read more.
  • int aio_reqprio - the request priority.
  • struct sigevent aio_sigevent - the way of notifying that the request has completed (discussed later in this section).
  • int aio_lio_opcode - not used by aio_read(3AIO) and aio_write(3AIO); used only by the lio_listio function.

The lio_listio(3AIO) function allows you to generate multiple I/O requests with one system call. This function has four parameters:

  • int mode - can take the values LIO_WAIT (the function waits for all requests to complete) and LIO_NOWAIT (the function returns control immediately after all the requests have been queued).
  • struct aiocb *list - a list of pointers to aiocb structures with descriptions of requests.

    Requests can be either read or write, this is determined by the aio_lio_opcode field. Requests to a single descriptor are executed in the order in which they are listed in the list array.

  • int nent - number of entries in the list array.
  • struct sigevent *sig - a way to notify that all requests have completed. If mode==LIO_WAIT this parameter is ignored.

The POSIX AIO library provides two ways to notify a program that a request has completed, synchronous and asynchronous. Let's look at the synchronous method first.

The aio_return(3AIO) function returns the result of a completed request, with the same meaning as the return value of the corresponding read(2) or write(2) call. It is destructive: calling it on a completed request destroys the system object that stores the information about the request's status. It is therefore not possible to call aio_return(3AIO) more than once for the same request.

The aio_error(3AIO) function returns the error code associated with the request. If the request completes successfully, 0 is returned, if an error occurs - an error code, for incomplete requests - EINPROGRESS.

The aio_suspend(3AIO) function blocks a thread until one of its specified asynchronous I/O requests completes or for a specified period of time. This function has three parameters:

  • const struct aiocb *const list[] - an array of pointers to request descriptors.
  • int nent - the number of elements in the list array.
  • const struct timespec *timeout - a timeout specified to nanosecond precision (in fact, to the resolution of the system timer).

The function returns 0 if at least one of the operations listed in list has completed. If the function fails, it returns -1 and sets errno. If it timed out, it also returns -1, with errno == EAGAIN.

An example of using asynchronous I/O with synchronous request status checking is given in Example 8.3.

const char req[] = "GET / HTTP/1.0\r\n\r\n";

int main() {
    int s;
    char buf[4096];
    ssize_t size;
    static struct aiocb readrq;
    static const struct aiocb *readrqv[2] = { &readrq, NULL };

    /* Open socket [...] */

    memset(&readrq, 0, sizeof readrq);
    readrq.aio_fildes = s;
    readrq.aio_buf = buf;
    readrq.aio_nbytes = sizeof buf;
    if (aio_read(&readrq)) { /* ... */ }
    write(s, req, (sizeof req) - 1);
    for (;;) {
        aio_suspend(readrqv, 1, NULL);
        size = aio_return(&readrq);
        if (size > 0) {
            write(1, buf, size);
            aio_read(&readrq);
        } else if (size == 0) {
            break;
        } else if (errno != EINPROGRESS) {
            perror("reading from socket");
        }
    }
}

Example 8.3.

Asynchronous I/O with synchronous checking of request status. The code is shortened: socket opening and error handling have been omitted.

Asynchronous notification of the application about completed operations consists of generating a signal when an operation completes. For this, the appropriate settings must be made in the aio_sigevent field of the request descriptor. The sigevent structure contains the following fields:

  • int sigev_notify - the notification mode. Valid values are SIGEV_NONE (do not send a notification), SIGEV_SIGNAL (generate a signal when the request completes) and SIGEV_THREAD (run a specified function in a separate thread when the request completes). Solaris 10 also supports the SIGEV_PORT notification type, which is discussed in the appendix to this chapter.
  • int sigev_signo - the number of the signal to be generated when SIGEV_SIGNAL is used.
  • union sigval sigev_value - a parameter that will be passed to the signal handler or to the processing function. When used for asynchronous I/O, this is usually a pointer to the request.

    When SIGEV_PORT is used, this should be a pointer to a port_event_t structure containing the port number and possibly additional data.

  • void (*sigev_notify_function)(union sigval) - the function that will be called when SIGEV_THREAD is used.
  • pthread_attr_t *sigev_notify_attributes - the attributes of the thread in which sigev_notify_function will run when SIGEV_THREAD is used.

Not all libaio implementations support the SIGEV_THREAD notification. Some Unix systems use the non-standard SIGEV_CALLBACK alert instead. Later in this lecture we will discuss only signal notification.

Some applications use SIGIO or SIGPOLL as the signal number (in Unix SVR4 these are the same signal). SIGUSR1 or SIGUSR2 is also often used; this is convenient because it guarantees that such a signal will not arise for any other reason.

Real-time applications also use signal numbers ranging from SIGRTMIN to SIGRTMAX. Some implementations allocate a special signal number SIGAIO or SIGASYNCIO for this purpose, but there is no such signal in Solaris 10.

Of course, before executing asynchronous requests with signal notification, a handler should be installed for that signal. For notification you must use signals processed in SA_SIGINFO mode. Such a handler cannot be installed with the signal(2) or sigset(2) calls; you must use sigaction(2).

I/O control.

Block-oriented and byte-oriented devices

Main idea

The key principle is device independence

· Interrupt handling,

· Device Drivers,

It seems clear that a wide variety of interrupts can occur for a variety of reasons. Therefore, a number is associated with each interrupt, the so-called interrupt number.

This number uniquely corresponds to a particular event. The system can recognize interrupts and, when they occur, launches a procedure corresponding to the interrupt number.

Some interrupts (the first five in numerical order) are reserved for use by the central processor in case of special events such as an attempt to divide by zero, overflow, and so on (these are actually internal interrupts).

Hardware interrupts always occur asynchronously with respect to the running programs. In addition, several interrupts can occur simultaneously!

To ensure that the system does not get confused when deciding which interrupt to service first, there is a special priority scheme. Each interrupt is assigned its own priority. If multiple interrupts occur simultaneously, the system gives priority to the one with the highest priority, deferring the processing of the remaining interrupts for a while.

The priority system is implemented on two Intel 8259 (or similar) chips. Each chip is an interrupt controller and serves up to eight priorities. Chips can be combined (cascaded) to increase the number of priority levels in the system.

The priority levels are abbreviated IRQ0 - IRQ15.


24. I/O control. Synchronous and asynchronous I/O.

One of the main functions of the OS is to manage all of the computer's input/output devices. The OS must send commands to devices, intercept interrupts, and handle errors; it must also provide an interface between the devices and the rest of the system. For development purposes, the interface should be the same for all device types (device independence). For more about I/O control, see question 23.

Principles of protection

Since the UNIX OS was conceived from its very inception as a multi-user operating system, the problem of authorizing different users' access to files in the file system has always been relevant. By access authorization we mean the system actions that allow or deny a given user access to a given file, depending on the user's access rights and the access restrictions set for the file. The access authorization scheme used in the UNIX OS is so simple and convenient, and at the same time so powerful, that it has become the de facto standard for modern operating systems (those that do not claim to be systems with multi-level protection).

File protection

As is common in a multi-user operating system, UNIX maintains a uniform access control mechanism for files and file system directories. Any process can access a certain file if and only if the access rights recorded with the file match the capabilities of that process.

Protecting files from unauthorized access in UNIX rests on three facts. First, any process that creates a file (or directory) is associated with a user identifier that is unique in the system (UID, User Identifier), which can subsequently be interpreted as the identifier of the owner of the newly created file. Second, each process attempting to gain some kind of access to a file has a pair of identifiers associated with it, the current user and group identifiers. Third, each file is uniquely matched by its descriptor, the i-node.

The last fact is worth dwelling on in more detail. It is important to understand that file names and files as such are not the same thing. In particular, when there are multiple hard links to the same file, multiple filenames actually represent the same file and are associated with the same i-node. Any i-node used in a file system always uniquely corresponds to one and only one file. The I-node contains a lot of different information (most of it is available to users through the stat and fstat system calls), and among this information there is part that allows the file system to evaluate the right of a given process to access a given file in the required mode.

The general principles of protection are the same for all existing versions of the system. The i-node information includes the UID and GID of the file's current owner (immediately after the file is created, the identifiers of its current owner are set to the corresponding effective identifiers of the creating process, but they can later be changed by the chown and chgrp system calls). In addition, the file's i-node stores a scale of permissions that indicates what the file's owner can do with it, what users belonging to the same group as the owner can do with it, and what all other users can do with it. Small implementation details vary between different versions of the system.
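The owner/group/other check described above can be sketched as a simplified version of what the kernel does for read permission (the function is illustrative and ignores the superuser and supplementary groups):

```c
#include <sys/stat.h>
#include <sys/types.h>

/* Sketch of the access check: compare the process's identifiers with
   the file's owner and group from the i-node data (struct stat) and
   pick the matching band of permission bits. */
int may_read(const struct stat *st, uid_t uid, gid_t gid) {
    if (uid == st->st_uid)
        return (st->st_mode & S_IRUSR) != 0;   /* owner band */
    if (gid == st->st_gid)
        return (st->st_mode & S_IRGRP) != 0;   /* group band */
    return (st->st_mode & S_IROTH) != 0;       /* everyone else */
}
```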

28. Managing access to files in Windows NT. Lists of access rights.

The access control system in Windows NT is characterized by a high degree of flexibility, which is achieved due to the wide variety of access subjects and objects, as well as the granularity of access operations.

File access control

For shared resources, Windows NT uses a general object model that includes security characteristics such as the set of allowed operations, an owner identifier, and an access control list.

Objects in Windows NT are created for any resources when they are or become shared - files, directories, devices, memory sections, processes. The characteristics of objects in Windows NT are divided into two parts - a general part, the composition of which does not depend on the type of object, and an individual part, determined by the type of object.
All objects are stored in tree-like hierarchical structures, the elements of which are branch objects (directories) and leaf objects (files). For file system objects, this relationship scheme is a direct reflection of the hierarchy of directories and files. For objects of other types, the hierarchical relationship diagram has its own content, for example, for processes it reflects parent-child relationships, and for devices it reflects membership in a certain type of device and the connection of the device with other devices, for example, a SCSI controller with disks.

Checking access rights for objects of any type is performed centrally using the Security Reference Monitor running in privileged mode.

The Windows NT security system is characterized by a large number of predefined (built-in) access subjects, both individual users and groups. The system always has built-in users such as Administrator, System, and Guest, as well as groups such as Users, Administrators, Account Operators, Server Operators, Everyone, and others. The point of these built-in users and groups is that they come with certain rights, which makes it easier for an administrator to build an effective access control system. When adding a new user, the administrator may only need to decide which group or groups to assign the user to. Of course, an administrator can create new groups, as well as add rights to the built-in groups to implement his own security policy, but in many cases the built-in groups are quite sufficient.

Windows NT supports three classes of access operations, which differ in the type of subjects and objects involved in these operations.

□ Permissions are a set of operations that can be defined for subjects of all types with respect to objects of any type: files, directories, printers, memory sections, etc. In their purpose, permissions correspond to access rights to files and directories in UNIX.

□ Rights (user rights) are defined for subjects of the group type and permit certain system operations: setting the system time, archiving files, shutting down the computer, etc. These operations involve a special access object: the operating system as a whole.

It is primarily rights, not permissions, that differentiate one built-in user group from another. Some rights for a built-in group are also built-in - they cannot be removed from this group. Other rights of the built-in group can be deleted (or added from the general list of rights).

□ User abilities are defined for individual users and cover actions that shape their operating environment, for example, changing the composition of the main program menu or the ability to use the Run menu item. By reducing the set of abilities (which are all available to the user by default), the administrator can force the user to work with the operating environment the administrator considers most suitable, one that also protects the user from likely mistakes.

The rights and permissions granted to a group are automatically granted to its members, which allows the administrator to treat a large number of users as a single unit of accounting information and minimizes his work.

When a user logs into the system, a so-called access token is created for him, which includes the user ID and the IDs of all groups to which the user belongs. The token also contains a default access control list (ACL), which consists of permissions and applies to objects created by the process, and a list of the user's rights to perform system actions.

All objects, including files, threads, events, even access tokens, are provided with a security descriptor when they are created. The security descriptor contains an access control list - ACL.

A file descriptor is a non-negative integer assigned by the OS to a file opened by a process.

An ACL (Access Control List) determines who or what can access a specific object, and which operations that subject is allowed or prohibited from performing on the object.

Access control lists are the basis of discretionary access control systems. (Wiki)

The owner of an object, typically the user who created it, has discretionary control over access to the object and can change the object's ACL to allow or prevent others from accessing it. The built-in Windows NT administrator, unlike the UNIX superuser, may lack some permissions for an object. To implement this feature, administrator and administrator-group IDs can be included in the ACL just like ordinary user IDs. However, the administrator can still perform any operation on any object, since he can always become the owner of the object and then, as the owner, receive the full set of permissions. The administrator cannot, however, return ownership to the previous owner of the object, so the user can always discover that the administrator has worked with his file or printer.

When a process requests an operation on an object in Windows NT, control always passes to the security monitor, which compares the user and group identifiers from the access token with the identifiers stored in the object's ACL entries. Unlike UNIX, Windows NT ACL entries can contain both lists of allowed and lists of prohibited operations for a user.
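The check just described can be sketched as a first-match walk over a list of allow and deny entries. This is an illustration, not the real Win32 API: entry and principal names are invented, and a real NT monitor accumulates granted access bits rather than stopping at one entry.

```java
import java.util.*;

// Sketch of ACL checking in the spirit described above: entries may either
// allow or deny an operation for a principal (a user or group identifier).
public class AclCheck {
    public record Entry(String principal, boolean deny, Set<String> ops) {}

    // Walk the list in order; the first matching entry decides. Windows
    // canonically places deny entries before allow entries, so a deny for
    // any of the caller's identifiers wins over a later allow.
    public static boolean isAllowed(List<Entry> acl, Set<String> callerIds, String op) {
        for (Entry e : acl) {
            if (callerIds.contains(e.principal()) && e.ops().contains(op)) {
                return !e.deny();
            }
        }
        return false; // no matching entry: access denied by default
    }

    public static void main(String[] args) {
        List<Entry> acl = List.of(
            new Entry("Guests", true,  Set.of("write")),
            new Entry("Users",  false, Set.of("read", "write")));
        // The token carries the user ID plus all group IDs, as described above.
        System.out.println(isAllowed(acl, Set.of("alice", "Users"), "read"));          // true
        System.out.println(isAllowed(acl, Set.of("bob", "Guests", "Users"), "write")); // false
    }
}
```

The second call shows why ordering matters: bob is in Users, which allows "write", but the earlier deny entry for Guests takes precedence.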

Windows NT clearly defines the rules by which an ACL is assigned to a newly created object. If the calling code, when creating an object, explicitly specifies all access rights to the newly created object, then the security system assigns this ACL to the object.

If the calling code does not supply the object with an ACL, and the object has a name, then the principle of permission inheritance applies. The security system looks at the ACL of the object directory in which the name of the new object is stored. Some of the object directory ACL entries can be marked as inheritable. This means that they can be assigned to new objects created in this directory.

In the case where a process has not explicitly specified an ACL for the object being created, and the directory object does not have inheritable ACL entries, the default ACL from the process's access token is used.
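The three rules above form a simple precedence chain, which can be made explicit in code. The types here are illustrative stand-ins, not Windows structures: an explicit ACL wins, then inheritable entries of the parent directory, then the default ACL from the process's access token.

```java
import java.util.*;

// Sketch of the rules above for choosing the ACL of a newly created object.
public class NewObjectAcl {
    public record Ace(String principal, boolean inheritable) {}

    public static List<Ace> aclForNewObject(List<Ace> explicit,
                                            List<Ace> parentDirAcl,
                                            List<Ace> tokenDefault) {
        if (explicit != null) return explicit;                 // rule 1: caller supplied an ACL
        List<Ace> inherited = new ArrayList<>();
        if (parentDirAcl != null)
            for (Ace a : parentDirAcl)
                if (a.inheritable()) inherited.add(a);         // rule 2: inheritable entries only
        if (!inherited.isEmpty()) return inherited;
        return tokenDefault;                                   // rule 3: default ACL from the token
    }
}
```

Note that only entries marked inheritable propagate in rule 2; an empty result falls through to the token's default ACL, mirroring the text.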


29. Java programming language. Java Virtual Machine. Java technology.

Java is an object-oriented programming language developed by Sun Microsystems. Java applications are typically compiled to bytecode so that they can run on any Java virtual machine (JVM) regardless of computer architecture. Java programs are translated into bytecode executed by the Java virtual machine (JVM), a program that processes the bytecode and passes instructions to the hardware like an interpreter, with the difference that bytecode, unlike plain source text, is processed much faster.

The advantage of this method of executing programs is the complete independence of the bytecode from the operating system and hardware, which allows Java applications to run on any device for which a corresponding virtual machine exists. Another important feature of Java technology is its flexible security system, made possible by the fact that program execution is completely controlled by the virtual machine. Any operation that exceeds the program's established permissions (for example, an attempt to access data without authorization or to connect to another computer) causes an immediate interruption.

Often, the disadvantages of the virtual machine concept include the fact that the execution of bytecode by a virtual machine can reduce the performance of programs and algorithms implemented in the Java language.

The Java Virtual Machine (abbreviated Java VM or JVM) is the core part of the Java runtime system, the so-called Java Runtime Environment (JRE). The Java Virtual Machine interprets and executes Java bytecode generated beforehand from the source code of a Java program by the Java compiler (javac). The JVM can also be used to run programs written in other programming languages. For example, source code in the Ada language can be compiled into Java bytecode, which can then be executed by the JVM.

The JVM is a key component of the Java platform. Since Java virtual machines are available for many hardware and software platforms, Java can be considered both middleware and an independent platform, hence the principle "write once, run anywhere". The use of a single bytecode across multiple platforms is also described as "compile once, run anywhere".

Runtime environment

Programs intended to run on the JVM must be compiled in a standardized portable binary format, which is usually represented as .class files. A program can consist of many classes located in different files. To make it easier to host large programs, some .class files can be packaged together into a so-called .jar file (short for Java Archive).

The JVM executes .class or .jar files by emulating the instructions written for it, either by interpretation or with a just-in-time (JIT) compiler such as HotSpot from Sun Microsystems. These days, JIT compilation is used in most JVMs to achieve greater speed.

Like most virtual machines, the Java Virtual Machine has a stack-oriented architecture similar to microcontrollers and microprocessors.
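The stack-oriented design mentioned above can be illustrated with a toy interpreter. This is a didactic sketch, not real JVM bytecode semantics: the instruction names merely echo JVM mnemonics, and operands are pushed on a stack from which instructions like iadd pop their arguments.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy stack machine in the JVM's spirit: no registers, all arithmetic
// operates on an operand stack.
public class TinyStackMachine {
    public static int run(String[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String insn : code) {
            switch (insn) {
                case "iadd" -> stack.push(stack.pop() + stack.pop());
                case "imul" -> stack.push(stack.pop() * stack.pop());
                // anything else is "push N": load a constant onto the stack
                default     -> stack.push(Integer.parseInt(insn.substring("push ".length())));
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        // Roughly the shape of what javac emits for the expression (2 + 3) * 4
        System.out.println(run(new String[]{"push 2", "push 3", "iadd", "push 4", "imul"})); // 20
    }
}
```

The absence of named registers is exactly what makes such bytecode portable: the compiler never has to know how many registers the target processor has.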

An instance of the JVM, provided by the JRE (Java Runtime Environment), comes into play when a Java program is executed; after execution completes, this instance is reclaimed. The JIT compiler is the part of the Java Virtual Machine used to speed up application execution: it compiles parts of the bytecode with similar functionality together and thereby reduces the total time spent on compilation.

J2SE (Java 2 Standard Edition), the standard library, includes:

GUI, networking, database access, and more.


30. .NET Platform. Main ideas and provisions. .NET programming languages.

The .NET Framework is a software technology from Microsoft intended for creating both conventional programs and web applications.

One of the main ideas of Microsoft .NET is the interoperability of services written in different languages. For example, a service written in C++ for Microsoft .NET might call a class method from a library written in Delphi; in C# you can write a class inherited from a class written in Visual Basic .NET, and an exception thrown by a method written in C# can be caught and handled in Delphi. Each library (assembly) in .NET carries information about its version, which makes it possible to eliminate potential conflicts between different versions of assemblies.

Applications can also be developed in a text editor and built with the console compiler.

Like Java technology, the .NET development environment creates bytecode for execution by a virtual machine. The input language of this machine in .NET is called MSIL (Microsoft Intermediate Language), or CIL (Common Intermediate Language, a later version), or simply IL.

The use of bytecode achieves cross-platform functionality at the level of the compiled project (in .NET terms, an assembly), and not only at the source-text level as, for example, in C. Before an assembly starts in the CLR runtime, its bytecode is converted by the JIT compiler built into the environment (just-in-time, on-the-fly compilation) into machine code for the target processor. It is also possible to compile an assembly into native code for a selected platform using the NGen.exe utility supplied with the .NET Framework.

During translation, the source code of the program (written in C#, Visual Basic, C++, or any other programming language supported by .NET) is converted by the compiler into a so-called assembly and saved as a dynamic-link library file (Dynamically Linked Library, DLL) or an executable file (Executable, EXE).

Naturally, for each compiler (be it the C# compiler, csc.exe, or the Visual Basic compiler, vbc.exe), the runtime environment performs the necessary mapping of the types used onto CTS types, and of the program code onto the code of the .NET "abstract machine", MSIL (Microsoft Intermediate Language).

As a result, the software project takes the form of an assembly, a self-sufficient component for deployment, replication, and reuse. An assembly is identified by its author's digital signature and a unique version number.

Built-in programming languages ​​(included with the .NET Framework):

C#; J#; VB.NET; JScript .NET; C++/CLI, a new version of Managed C++.


31. Functional components of the OS. File management.

Functional OS components:

The functions of a stand-alone computer's operating system are typically grouped either according to the types of local resources that the OS manages or according to specific tasks that apply to all resources. Such groups of functions are sometimes called subsystems. The most important resource management subsystems are the process, memory, file, and external device management subsystems; the subsystems common to all resources are the user interface, data protection, and administration subsystems.

File management:

The ability of the OS to “shield” the complexities of real hardware is very clearly manifested in one of the main OS subsystems - the file system.

The file system links storage media on one side and an API (application programming interface) for accessing files on the other. When an application program accesses a file, it has no idea how the information in a particular file is located, nor what type of physical media (CD, hard disk, magnetic tape, or flash memory unit) it is recorded on. All the program knows is the file name, its size and attributes. It receives this data from the file system driver. It is the file system that determines where and how the file will be written on physical media (for example, a hard drive).

From the operating system's point of view, the entire disk is a collection of clusters of 512 bytes or more. File system drivers organize clusters into files and directories (which are actually files containing a list of the files in that directory). These same drivers keep track of which clusters are currently in use, which are free, and which are marked as faulty.
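The bookkeeping just described can be modeled in a few lines. This is an illustrative sketch, not a real driver: a BitSet stands in for the on-disk allocation bitmap, and a second BitSet records clusters marked as faulty.

```java
import java.util.BitSet;

// Sketch of a file system driver's cluster accounting: in use, free, or bad.
public class ClusterMap {
    private final BitSet used;
    private final BitSet bad;
    private final int clusters;

    public ClusterMap(int clusters) {
        this.clusters = clusters;
        this.used = new BitSet(clusters);
        this.bad = new BitSet(clusters);
    }

    public void markBad(int c) { bad.set(c); }

    // Allocate the first cluster that is neither used nor bad; -1 if none left.
    public int allocate() {
        for (int c = 0; c < clusters; c++) {
            if (!used.get(c) && !bad.get(c)) { used.set(c); return c; }
        }
        return -1;
    }

    public void free(int c) { used.clear(c); }
}
```

Real file systems refine this in many ways (FAT chains, NTFS's $Bitmap file, extent trees), but the core question each driver answers is the same: which clusters may the next write use.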

However, the file system is not necessarily directly associated with the physical storage medium. There are virtual file systems, as well as network file systems, which are just a way to access files located on a remote computer.

In the simplest case, all files on a disk are stored in a single directory. This single-level scheme was used in CP/M and in MS-DOS 1.0. A hierarchical file system with nested directories first appeared in Multics, and then in UNIX.

Directories on different disks can form several separate trees, as in DOS/Windows, or be combined into one tree common to all disks, as in UNIX-like systems.

In fact, in DOS/Windows systems, as in UNIX-like systems, there is one root directory with subdirectories named "c:", "d:", and so on, in which hard disk partitions are mounted; that is, c:\ is just a link to file:///c:/. However, unlike UNIX-like file systems, in Windows writing to this root directory is prohibited, as is viewing its contents.

In UNIX there is only one root directory, and all other files and directories are nested under it. To access files and directories on a disk, you must mount the disk with the mount command. For example, to open files on a CD you must, in simple terms, tell the operating system: "take the file system on this CD and show it in the /mnt/cdrom directory." All files and directories on the CD will then appear in /mnt/cdrom, which is called the mount point. On most UNIX-like systems, removable disks (floppy disks and CDs), flash drives, and other external storage devices are mounted in the /mnt, /mount, or /media directory. UNIX and UNIX-like operating systems also allow disks to be mounted automatically when the operating system boots.
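The way a single-rooted tree routes a path to the right device can be sketched as a longest-prefix lookup over a mount table. The device names and mount points here are invented for illustration, and the sketch assumes the root file system is always mounted, as it is on a running UNIX system.

```java
import java.util.TreeMap;

// Sketch of mount-point resolution: the mount point that is the longest
// prefix of the path decides which device the path lives on.
public class MountTable {
    private final TreeMap<String, String> mounts = new TreeMap<>();

    public void mount(String point, String device) { mounts.put(point, device); }

    public String deviceFor(String path) {
        String best = "/";
        for (String point : mounts.keySet()) {
            // Compare on directory boundaries so "/mnt/cd" never matches "/mnt/cdrom/x".
            if ((path + "/").startsWith(point.equals("/") ? "/" : point + "/")
                    && point.length() > best.length()) {
                best = point;
            }
        }
        return mounts.get(best);
    }
}
```

With "/" on one device and "/mnt/cdrom" on another, a path like /mnt/cdrom/readme.txt resolves to the CD while /home/user resolves to the root disk, which is exactly the behavior the mount command sets up.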

Note the different slashes used in Windows and in UNIX-like file systems: Windows uses the backslash "\", while UNIX and UNIX-like operating systems use the forward slash "/".

In addition, it should be noted that this mechanism allows mounting not only the file systems of physical devices, but also individual directories (the --bind parameter) or, for example, an ISO image (the loop option). Add-ons such as FUSE also allow mounting, for example, an entire directory on an FTP server and a great variety of other resources.

An even more complex structure is used in NTFS and HFS. In these file systems, each file is a set of attributes. Attributes include not only the traditional read-only and system flags, but also the file name, size, and even the content. Thus, for NTFS and HFS, what is stored in a file is just one of its attributes.

Following this logic, one file can contain several variations of content. Thus, several versions of the same document can be stored in one file, as well as additional data (file icon, program associated with the file). This organization is typical for HFS on the Macintosh.


32. Functional components of the OS. Process management.

Process management:

The most important part of the operating system, which directly affects the functioning of the computer, is the process control subsystem. A process (or in other words, a task) is an abstraction that describes a running program. For the operating system, a process is a unit of work, a request to consume system resources.

In a multitasking (multiprocess) system, a process can be in one of three main states:

RUNNING - the active state of a process, during which the process has all the necessary resources and is directly executed by the processor;

WAITING - the passive state of a process, the process is blocked, it cannot be executed for its own internal reasons, it is waiting for some event to occur, for example, the completion of an I/O operation, receiving a message from another process, or the release of some resource it needs;

READY is also a passive state of the process, but in this case the process is blocked due to circumstances external to it: the process has all the resources required for it, it is ready to execute, but the processor is busy executing another process.

During its life cycle, each process moves from one state to another in accordance with the process scheduling algorithm implemented in the given operating system.
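The three-state model above allows only certain moves, which can be written down as an explicit transition table. The event names in the comments (dispatch, preempt, block, wake-up) are the conventional ones, used here for illustration.

```java
import java.util.Map;
import java.util.Set;

// Sketch of the RUNNING / WAITING / READY model as a transition table.
public class ProcessState {
    public enum State { RUNNING, WAITING, READY }

    private static final Map<State, Set<State>> LEGAL = Map.of(
        State.READY,   Set.of(State.RUNNING),               // dispatched by the scheduler
        State.RUNNING, Set.of(State.READY, State.WAITING),  // preempted / blocked on an event
        State.WAITING, Set.of(State.READY));                // awaited event occurred

    public static boolean canMove(State from, State to) {
        return LEGAL.get(from).contains(to);
    }
}
```

Note what the table forbids: a WAITING process never goes straight to RUNNING. When its event occurs it only becomes READY, and the scheduler decides when it actually gets the processor.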

CP/M standard

The creation of operating systems for microcomputers began with CP/M. It was developed in 1974, after which it was installed on many 8-bit machines. A significant amount of software was created for this OS, including translators from BASIC, Pascal, C, Fortran, Cobol, Lisp, Ada, and many other languages, as well as text editors, which allow documents to be prepared much faster and more conveniently than with a typewriter.

MSX standard

This standard defined not only the OS but also the hardware characteristics of school PCs. According to the MSX standard, the machine had to have at least 16 KB of RAM, 32 KB of read-only memory with a built-in BASIC interpreter, a color graphic display with a resolution of 256x192 pixels and 16 colors, a three-channel sound generator spanning 8 octaves, a parallel port for connecting a printer, and a controller for an externally connected disk drive.

The operating system of such a machine had to have the following properties: required memory - no more than 16 K, compatibility with CP/M at the level of system calls, compatibility with DOS in file formats on external drives based on floppy magnetic disks, support for translators of BASIC, C, Fortran and Lisp languages.

P-system

In the early period of personal computer development, the UCSD p-System operating system was created. The basis of this system was the so-called P-machine, a program emulating a hypothetical universal computer. The P-machine simulates the operation of a processor, memory, and external devices by executing special instructions called P-code. The software components of the p-System (including compilers) are compiled into P-code, and application programs are compiled into P-code as well. Thus, the main distinguishing feature of the system was its minimal dependence on the particulars of PC hardware, and this is what ensured the portability of the p-System to various types of machines. The compactness of P-code and a conveniently implemented paging mechanism made it possible to execute relatively large programs on PCs with little RAM.

I/O control.

I/O devices are divided into two types: block-oriented devices and byte-oriented devices. Block-oriented devices store information in fixed-size blocks, each of which has its own address; the most common block-oriented device is the disk. Byte-oriented devices are not addressable and do not allow seek operations; they generate or consume a sequence of bytes. Examples are terminals, line printers, and network adapters. The electronic component of a device is called a device controller or adapter, and it is the controller that the operating system deals with. The controller performs simple functions and monitors and corrects errors. Each controller has several registers that are used to communicate with the central processor. The OS performs I/O by writing commands into the controller's registers. The IBM PC floppy disk controller, for example, accepts 15 commands, such as READ, WRITE, SEEK, and FORMAT. Once a command is accepted, the processor leaves the controller to its work and attends to other tasks. When the command completes, the controller issues an interrupt to transfer control of the processor to the operating system, which must check the results of the operation. The processor obtains the results and the device status by reading information from the controller's registers.
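The command-then-interrupt cycle just described can be modeled with a toy controller object. Everything here is invented for illustration (the register names, the single status bit, the Runnable standing in for the IRQ line); a real controller would of course run concurrently with the CPU rather than completing inside the method call.

```java
// Toy model of CPU / controller interaction: the OS writes a command into
// the controller's command register, the controller goes busy, and when the
// operation completes it raises an "interrupt" so the OS can read results.
public class DiskController {
    public static final int STATUS_READY = 0;
    public static final int STATUS_BUSY  = 1;

    private int statusRegister = STATUS_READY;
    private Runnable interruptHandler;   // stands in for the IRQ line

    public void onInterrupt(Runnable handler) { interruptHandler = handler; }

    public void writeCommand(String command) {
        statusRegister = STATUS_BUSY;
        // ... a real controller would move the head, transfer sectors, etc. ...
        statusRegister = STATUS_READY;
        if (interruptHandler != null) interruptHandler.run(); // signal completion
    }

    public int readStatus() { return statusRegister; }
}
```

Under readiness polling, the OS would instead sit in a loop on readStatus(); the interrupt callback is what lets the processor do other work in between.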

The main idea in organizing I/O software is to divide it into several layers, where the lower layers shield the upper ones from the peculiarities of the hardware, and the upper layers provide a convenient interface for users.

The key principle is device independence: a program should not depend on whether it reads data from a floppy disk or from a hard drive. Another important issue for I/O software is error handling. Generally speaking, errors should be handled as close to the hardware as possible. If the controller detects a read error, it should try to correct it; if it cannot, the device driver should. Only if the lower level cannot cope with an error does it report the error to the level above.

Another key issue is the use of blocking (synchronous) and non-blocking (asynchronous) transfers. Most physical I/O operations are performed asynchronously: the processor starts a transfer and moves on to other work until an interrupt occurs. User programs, however, are much easier to write if I/O operations look blocking: after a READ call, the program is automatically suspended until the data reaches its buffer.
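The contrast can be shown by exposing the same simulated device read in both styles. The device, its latency, and the helper names are all invented for illustration; in this sketch the Future's get() plays the role of the completion interrupt.

```java
import java.util.concurrent.*;

// One simulated "device read", offered both as a blocking call and as a
// non-blocking call that returns a Future.
public class IoModes {
    private static final ExecutorService pool =
        Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);   // don't keep the JVM alive for the "device"
            return t;
        });

    private static byte[] deviceRead() {
        try { Thread.sleep(50); } catch (InterruptedException e) { }
        return new byte[]{42};   // the "sector" that arrives from the device
    }

    // Blocking: the caller is suspended until the data is in its buffer.
    public static byte[] readBlocking() { return deviceRead(); }

    // Non-blocking: start the transfer and return at once; the caller does
    // other work and collects the result later.
    public static Future<byte[]> readAsync() { return pool.submit(IoModes::deviceRead); }

    // Small helper so callers need not handle checked exceptions.
    public static byte[] await(Future<byte[]> f) {
        try { return f.get(); } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        Future<byte[]> f = readAsync();   // transfer started
        // ... the CPU is free to run other code here ...
        System.out.println(await(f)[0] + " " + readBlocking()[0]); // 42 42
    }
}
```

The layering described next is what lets most applications see only the convenient blocking form while the layers below work asynchronously.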

The last problem is that some devices are shared (disks: multiple users accessing the disk at the same time is not a problem) while others are dedicated (printers: lines printed by different users cannot be mixed).

To solve these problems, it is advisable to divide the I/O software into four layers (Figure 2.30):

· Interrupt handling,

· Device drivers,

· Device-independent layer of the operating system,

· User-level software layer.

The concept of hardware interrupt and its processing.

Asynchronous, or external (hardware), interrupts are events that come from external sources (for example, peripheral devices) and can occur at any arbitrary moment: a signal from a timer, a network card, or a disk drive, a key press, a mouse movement. They require an immediate reaction (processing).

Almost all input/output in a computer operates using interrupts. In particular, when you press a key or click a mouse button, the hardware generates an interrupt; in response, the system reads the code of the pressed key or records the coordinates of the mouse cursor. Interrupts are also generated by the disk controller, the local network adapter, serial data ports, the sound adapter, and other devices.