Computer Control and Human Error

In some cases, as systems become more and more complex, and faster and faster response time is required, the use of computer and application software is the only feasible approach.

Unlike hardware, software does not wear out or fail at random; when it fails, the root cause is an error in its specification or design. Therefore, all software-induced system failures are systematic failures. However, a piece of software in itself is not hazardous. It is hazardous only when it interacts with equipment that can cause injury to people, loss of life, or damage to property. Safety-critical software should therefore, as far as possible, be specified, designed, and reviewed with corresponding care.

It is not easy to anticipate what types of errors will occur or how to train to prevent them. This section, adapted from Sheridan, provides brief comparisons and contrasts among different applications of supervisory control systems.

The term process usually refers to a dynamic system, such as a fossil fuel or nuclear power generating plant or a chemical or oil production facility, that is fixed in space and operates more or less continuously in time.

Typically time constants are slow—many minutes or hours may elapse after a control action is taken before most of the system response is complete. Most such processes involve large structures with fluids flowing from one place to another and involve the use of heat energy to affect the fluid or vice versa.

Typically such systems involve multiple personnel and multiple machines, and at least some of the people move from one location of the process to another. Usually there is a central control room where many measured signals are displayed and where valves, pumps, and other devices are controlled.

Supervisory control has been emerging as an element in process control for several decades, starting with electromechanical controllers or control stations. In such systems the operator can become part of the control loop by switching to manual control. Usually each control station displays both the variable being controlled and its set point.

Control rooms also present arrays of alarms, and from the pattern of these alarms the operator can begin to diagnose what has gone wrong.

The large, general-purpose computer has found its way into process control. Instead of multiple, independent, conventional proportional-integral-derivative controllers for each variable, the computer can treat the set of variables as a vector and compute the control trajectory that would be optimal in the sense of quickest, most efficient, or whatever criterion is important. Because there are many more interactions than the number of variables, the variety of displayed signals and the number of possible adjustments or programs the human operator may input to the computer-controller are potentially much greater than before.
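
As a concrete illustration of the single-loop style described above, here is a minimal sketch of independent proportional-integral-derivative loops, one per variable; the plant responses, gains, and set points are assumed for illustration and are not taken from the text. The vector, or multivariable, alternative replaces such independent loops with a single controller that acts on all of the variables at once.

    # One conventional PID loop per controlled variable (illustrative values only).
    class PID:
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measurement, dt):
            error = self.setpoint - measurement
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Independent controllers, as in a conventional control station per variable.
    temperature_loop = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=350.0)
    pressure_loop = PID(kp=1.5, ki=0.2, kd=0.0, setpoint=12.0)

    # Crude simulated plant responses; a real system would read sensors instead.
    temp, press, dt = 300.0, 10.0, 1.0
    for _ in range(10):
        temp += 0.05 * temperature_loop.update(temp, dt)
        press += 0.02 * pressure_loop.update(press, dt)
    print(round(temp, 1), round(press, 2))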

Thus there is now a great need, accelerated since the events at Three Mile Island, to develop displays that integrate complex patterns of information and allow the operator to issue commands in a natural, efficient, and reliable manner. The term system state vector is a fashionable way to describe the display of such minimal chunks of information.

Unlike the processes described above, vehicles move through space and carry their operators with them or are controlled remotely.

Various types of vehicles have come under some degree of supervisory control. We might start with spacecraft because, in a sense, their function is the simplest. They are launched to perform well-defined missions, and their interaction with their environment other than gravity is nil. In other words, there are no obstacles and no unpredictable traffic to worry about. The astronauts had to learn to use a simple keyboard with programs (different functions appropriate to different mission phases), nouns (operands, or data to be addressed or processed), and verbs (operations, or actions to be performed on the nouns).
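
A toy sketch of that program/verb/noun command structure; the codes and actions below are hypothetical and are not the actual Apollo interface.

    # Nouns name the data to be addressed; verbs name the operation to perform on it.
    NOUNS = {36: "mission clock", 62: "velocity and altitude"}
    VERBS = {16: "monitor", 21: "load new value into"}

    def execute(verb_code, noun_code):
        """Interpret one verb-noun pair, as a mission-phase program might."""
        verb = VERBS.get(verb_code)
        noun = NOUNS.get(noun_code)
        if verb is None or noun is None:
            return "operator error: unknown verb or noun"
        return f"{verb} {noun}"

    print(execute(16, 36))   # e.g. keep displaying the mission clock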

Of course, the astronauts still performed a certain number of continuous control functions. They controlled the orientation of the vehicle and maneuvered it to accomplish star sighting, thrust, rendezvous, and lunar landing. But, as is not generally appreciated by the public, control in each of these modes was heavily aided. Not only were the manual control loops themselves stabilized by electronics, but also nonmanual, automatic control functions were being simultaneously executed and coordinated with what the astronauts did.

In commercial and military aircraft there has been more and more supervisory control in the last decade or two.

Commercial pilots are called "flight managers," indicative of the fact that they must allocate their attention among a large number of separate but complex computer-based systems. Military aircraft are called "flying computers," and indeed the cost of the electronics in them now far exceeds the cost of the basic airframe.

By means of inertial measurement, a feature of the new jumbo jets as well as of military aircraft, the computer can take a vehicle to any latitude, longitude, and altitude within a fraction of a kilometer. In addition there are many other supervisory command modes intermediate between such high-level commands and the lowest level of pure continuous control of ailerons, elevators, and thrust. A pilot can set the autopilot to provide a display of a smooth command course at fixed turn or climb rates to follow manually or can have the vehicle slaved to this course.

The autopilot can be set to achieve a new altitude on a new heading, or the pilot can lock onto a selected course or navigation aid. In the Lockheed L-1011, for example, there are at least 10 separate identifiable levels of control. It is important for the pilot to have reliable means of breaking out of these automatic control modes and reverting to manual control or some intermediate mode.
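
A sketch of the idea of discrete control levels with a reliable reversion path back to manual control; the particular modes and triggers are assumptions for illustration, not those of any specific aircraft.

    from enum import IntEnum

    class ControlLevel(IntEnum):
        MANUAL = 0                 # pure continuous control of ailerons, elevators, thrust
        STABILITY_AUGMENTED = 1    # manual loops stabilized by electronics
        FLIGHT_DIRECTOR = 2        # commanded course displayed, flown by hand
        ALTITUDE_HEADING_HOLD = 3  # autopilot holds a commanded altitude and heading
        COUPLED_NAVIGATION = 4     # vehicle slaved to a programmed course
        AUTOLAND = 5               # automatic approach and landing

    def revert(current, disconnect_pressed, yoke_force_high):
        """Drop back to manual control whenever the pilot signals intent to take over."""
        if disconnect_pressed or yoke_force_high:
            return ControlLevel.MANUAL
        return current

    print(revert(ControlLevel.AUTOLAND, disconnect_pressed=False, yoke_force_high=True))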

For example, when in an automatic landing mode the pilot can either push a yellow button on the control yoke or jerk the yoke back to bring the aircraft back under direct manual control.

Air traffic control poses interesting supervisory control problems, for the headways (spacing between aircraft) in the vicinity of major commercial airports are getting tighter and tighter, and efforts both to save fuel and to avoid noise over densely populated urban areas require more radical takeoff and landing trajectories. New computer-based communication aids will supplement purely verbal communication between pilots and ground controllers, and new display technology will help the already overloaded ground controllers monitor what is happening in three-dimensional space over larger areas, providing predictions of collision and related vital information.

The CDTI (cockpit display of traffic information) is a new computer-based picture of weather, terrain hazards such as mountains and tall structures, course information such as way points, radio beacons and markers, and runways and command flight patterns, as well as the position, altitude, heading, and even predicted position of other aircraft.

It makes the pilot less dependent on ground control, especially when out-the-window visibility is poor.

More recently, ships and submarines have been converting to supervisory control. Direct manual control by experienced helmsmen, which sufficed for many years, has been replaced both by the installation of inertial navigation, which calls for computer control and provides capability never before available, and by the trends toward higher speed and the long time lags produced by larger size.

New autopilots and computer-based display aids, similar to those in aircraft, are now being used in ships.

In a sense, manipulators combine the functions of process control and vehicle control.

The manipulator base may be carried on a spacecraft, a ground vehicle, or a submarine. The hand (gripper, or end effector) is moved relative to the base in up to three degrees of translation and three degrees of rotation. It may have one degree of freedom for gripping, but some hands have differentially movable fingers or otherwise have more degrees of freedom to perform special cutting, drilling, finishing, cleaning, welding, paint spraying, sensing, or other functions.

Manipulators are being used in many different applications, including lunar roving vehicles, undersea operations, and hazardous operations in industry. The type of supervisory control and its justification differ according to the application. The three-second time delay in the earth-lunar control loop, resulting from round-trip radio transmission, leads to instabilities unless an operator waits three seconds after each of a series of incremental movements.
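
A rough sketch of that timing argument; the number of increments and the execution time per increment are assumed for illustration and do not come from the text.

    ROUND_TRIP_DELAY_S = 3.0   # earth-moon-earth radio transmission delay
    MOVE_TIME_S = 1.0          # assumed time to execute one incremental movement
    NUM_INCREMENTS = 40        # assumed number of increments in a positioning task

    # Move and wait: command an increment, then wait out the round trip to see the result.
    move_and_wait_total = NUM_INCREMENTS * (MOVE_TIME_S + ROUND_TRIP_DELAY_S)

    # Commanding a whole task segment instead pays the delay once, with local sensing
    # closing the loop on the spot (the idea developed in the next paragraph).
    segment_command_total = ROUND_TRIP_DELAY_S + NUM_INCREMENTS * MOVE_TIME_S

    print(f"move-and-wait: {move_and_wait_total:.0f} s, segment command: {segment_command_total:.0f} s")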

This makes direct manual control time-consuming and impractical. Sheridan and Ferrell proposed having a computer on the moon receive commands to complete segments of a movement task locally, using local sensors and local computer program control; they proposed calling this mode supervisory control. Delays in sending the task segments from earth to moon would be unimportant, so long as the local control loop could react quickly to obstacles or other conditions requiring self-protection. The importance of supervisory control to the undersea vehicle manipulator is also compelling.

There are things the operator cannot sense, or can sense only with great difficulty and time delay. For monotonous tasks the human operator may have other things to do, so supervisory control would facilitate periodic checks to update the computer program or help the remote device get out of trouble. A final reason for supervisory control, and often the most acceptable, is that, if communications, power, or other systems fail, there are fail-safe control modes into which the remote system reverts to get the vehicle back to the surface or otherwise render it recoverable.

Many of these same reasons for supervisory control apply to other uses of manipulators. Probably the greatest current interest in manipulators is in manufacturing (so-called industrial robots), including machining, welding, paint spraying, heat treatment, surface cleaning, bin picking, parts feeding for punch presses, handling between transfer lines, assembly, inspection, loading and unloading finished units, and warehousing.

Today repetitive tasks such as welding and paint spraying can be programmed by the supervisor, then implemented with the control loops that report position and velocity. If the parts conveyor is sufficiently reliable, welding or painting nonexistent objects seldom occurs, so that more sophisticated feedback, involving touch or vision, is usually not required.
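
A minimal sketch of this teach-then-replay style of programming, with made-up waypoints and a stand-in for the robot's position servo.

    taught_path = []   # waypoints recorded while the supervisor guides the arm through the task

    def teach(waypoint):
        """Record one waypoint during the teaching pass."""
        taught_path.append(waypoint)

    def playback(move_to):
        """Replay the taught path; move_to stands in for the position/velocity servo loops."""
        for waypoint in taught_path:
            move_to(waypoint)

    # Teaching pass: the supervisor jogs the arm along the seam to be welded.
    for point in [(0.0, 0.0, 0.1), (0.1, 0.0, 0.1), (0.2, 0.0, 0.1)]:
        teach(point)

    # Playback pass: the low-level loops drive the arm through the same points.
    playback(lambda p: print("moving to", p))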

Manufacturing assembly, however, has proven to be a far more difficult task. In contrast to assembly-line operations, in which every task is prespecified even if there is a mix of products, in many new applications of manipulators with supervisory control each new task is unpredictable to a considerable extent. Some examples are mining, earth moving, building construction, building and street cleaning and maintenance, trash collection, logging, and crop harvesting, in which large forces and power must be applied to external objects.

The human operator is necessary to program or otherwise guide the manipulator in some degrees of freedom, to accommodate each new situation; in other respects certain characteristic motions are preprogrammed and need only to be initiated at the correct time. Again, in surgical applications the surgeon controls some degrees of freedom directly while others are preprogrammed.

There are a number of limited theories and methods in the human factors literature that should be brought to bear on the use of supervisory control systems. A great deal remains to be done, however, to apply them in this context. The discussion that follows deals with five aspects of the problem.

The first considers current formal models. The second discusses display and command problems. The third takes up computer knowledge-based systems and their relation to the internal cognitive model of the operator for on-line decision making in supervisory control. The fourth deals with mental workload, stress, and research on attention and resource allocation as they relate to supervisory control.

The fifth is concerned with issues of human error, system reliability, trust, and ultimate authority.

In the area of real-time monitoring and control of continuous dynamic processes, the optimal control model (Baron and Kleinman) describes the perceptual-motor behavior of closed-loop systems having relatively short time constants. Experimentation on this topic has been limited, suggesting that this class of model may be broadened to represent monitoring and discrete decision behavior in dynamic systems in which control is infrequent (Levison and Tanner). There are also attempts to extend this work to explore its applicability to more complex systems (Baron et al.).
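
The optimal control model referred to above is built around linear-quadratic estimation and control. As an illustration of that core idea only (not of the full operator model, which also includes an estimation stage and a neuromotor lag), here is a minimal discrete-time linear-quadratic regulator for an assumed double-integrator plant; all matrices and weights are illustrative assumptions.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])        # state: [tracking error, error rate]
    B = np.array([[0.5 * dt ** 2], [dt]])        # effect of one control input

    Q = np.diag([1.0, 0.1])                      # penalty on state deviation (assumed)
    R = np.array([[0.01]])                       # penalty on control effort (assumed)

    # Solve the discrete Riccati equation and form the optimal feedback gain.
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

    # Closed-loop regulation from an initial offset, as such a model would predict.
    x = np.array([[1.0], [0.0]])
    for _ in range(50):
        u = -K @ x                # control chosen to minimize the quadratic cost
        x = A @ x + B @ u         # plant response (noise-free here for simplicity)
    print("final state:", x.ravel())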

An increasing number of supervisory control systems can be represented by a hierarchy of three kinds of interaction (Sheridan). Since there are three levels of intelligence (one human, two artificial), the allocation of cognitive and computational tasks among the three becomes central. Similarly, skill-based tasks (filtering, display generation, servo-control) may be assigned to various low-level computers.
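
A sketch of that three-level division of labor, using hypothetical class names; it is an illustration of the idea, not a standard architecture.

    class LowLevelComputer:
        """Handles one skill-based function close to the sensors and actuators."""
        def __init__(self, function):
            self.function = function   # e.g. "filtering", "display generation", "servo-control"

        def run(self, command):
            return f"{self.function}: executing '{command}'"

    class TaskComputer:
        """Decomposes a supervisor's goal into work for the low-level computers."""
        def __init__(self, subordinates):
            self.subordinates = subordinates

        def carry_out(self, goal):
            return [s.run(goal) for s in self.subordinates]

    class HumanSupervisor:
        """Issues goals and monitors results instead of closing every loop directly."""
        def supervise(self, task_computer, goal):
            for report in task_computer.carry_out(goal):
                print(report)

    HumanSupervisor().supervise(
        TaskComputer([LowLevelComputer("filtering"),
                      LowLevelComputer("display generation"),
                      LowLevelComputer("servo-control")]),
        "hold heading and altitude",
    )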

The operator must concentrate on the environmental tasks that compete for his attention, allocating that attention among five roles: planning what to do, teaching (programming) the computer, monitoring its actions, intervening when necessary, and learning from experience.

Design of integrated computer-generated displays is not a new problem for the military services and space agencies, but the technology continues to create more possibilities. Operators of supervisory control systems need to have fewer displays, not more, telling them what they want or need to know when they want or need to know it. An additional design problem is that what operators think they need and what they really need may differ.

As computer collaborators become more and more sophisticated, a useful type of display would tell the operator what the computer knows and assumes, both about the system and about the operator, and what it intends to do. An important source of guidance regarding the design of displays has been, and will continue to be, the intuitive beliefs of experienced operators.

The designer needs to know how much credence to give to these intuitions. The studies of clinical judgment (Goldberg) are a third such source. These studies found that in the course of their diagnoses expert clinicians imagine that they rely on more variables, and use them in a more complex manner, than appears to be the case from attempts to model their diagnostic processes.

Although good-quality computer-generated speech is both available and cheap, and although it can give operators warnings and other information without their prior attention being directed to it, little imaginative use of such a capability has yet been made in supervisory control.

The use of command language has arisen more recently in conjunction with teaching or programming robot systems. A more primitive form of it is found in the new autopilot command systems in aircraft. Giving commands to a control system by means of strings of symbols in syntax is a new game for most operators. Progress in this area depends on careful technology transfer from data processing that is self-paced to dynamic control in which the pace is determined by many factors.

Naturalness in the use of such language is also an important goal.

Command, in many circumstances, is not a solitary task. The operator must interact with many individuals in order to get a job done.

This may be particularly the case when the nature of the emergency means that the technical system cannot be trusted to report and respond reliably—that is, an interacting human system may assume, and perhaps interface with, some of the functions of the interacting technical system. The kinds of human interaction possible include requesting information, monitoring the response of the system, and notifying outsiders. When are these interactions initiated? How valid are the cues? What features of technical systems make such intervention more or less feasible?

Another question that arises with multiperson systems is whether one individual or group should both monitor for and cope with crises.

In medicine it is not always assumed that the same individual has expertise in both diagnosis and treatment. Perhaps in supervisory control systems the equivalent functions should be separated, and different training and temperament called for in monitoring and in intervention.

It is not a new idea that, in performing a task, people somehow represent the task in their heads and calculate whether, given certain constraints, doing this will result in that.

Such ideas derive from antiquity. In modern control engineering they take a concrete form: a differential equation model of the external controlled process is included in the automatic controller and is driven by the same input that drives the actual process. Any discrepancy between the output of this computerized model of the environmental process and the output of the actual process is fed back as a correction to the internal model, to force its variables to track the actual ones continuously. Then any and all state variables as represented in the internal model may be used directly to control the process, even when direct measurement of those same variables in the actual environment would be costly, difficult, or impossible.
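
A minimal sketch of that arrangement (in control-engineering terms, an observer), with assumed plant matrices and correction gains; the same input drives both the real process and its internal model, and the output discrepancy keeps the model's state aligned with the real one.

    import numpy as np

    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])   # assumed process dynamics (position, velocity)
    B = np.array([[0.0], [dt]])
    C = np.array([[1.0, 0.0]])              # only position is measured directly
    L = np.array([[0.5], [0.3]])            # correction gain applied to the discrepancy (assumed)

    x_true = np.array([[1.0], [0.0]])       # actual process state, unknown to the controller
    x_model = np.array([[0.0], [0.0]])      # internal model's copy of that state

    for _ in range(100):
        u = np.array([[0.1]])               # the same input drives process and model
        y_measured = C @ x_true             # what the sensors report
        y_model = C @ x_model               # what the internal model predicts
        # The discrepancy corrects the model so its variables track the actual ones.
        x_model = A @ x_model + B @ u + L @ (y_measured - y_model)
        x_true = A @ x_true + B @ u

    # Every model state, including the unmeasured velocity, is now available for control.
    print("model state:", x_model.ravel(), "actual state:", x_true.ravel())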

This physical realization of the traditional idea of the internal model probably provoked much of the current research in cognitive science. Tsach has developed a realization of this as an operator aid for application to process control (Tsach et al.).

Ideally the computer should keep the operator informed of what it is assuming and computing, and the operator should keep the computer informed of what he or she is thinking.

In the last several years cognitive psychology has contributed some theories about human inference that make the application of knowledge-based systems particularly relevant to supervisory control. The operator's internal cognitive model of the system is similar to, but more inclusive and less well developed than, the internal process model used by control theorists.

And the contribution of specialists in artificial intelligence concerning knowledge-based systems provides one way to implement the computer portion of such human-computer interaction. Recent studies in cognitive science raise a number of practical questions. For example, how can people best be trained to develop effective problem spaces?

What is the optimal mix of analog and digital representation? What means can be used to ensure that the current state of the model fits the current state of the system? For human supervision to be really effective, a detailed understanding is necessary of how the human controller grasps a complex system at any moment in time and updates that grasp over time. Verbal protocol techniques (Bainbridge) make use of key words and relations.

A likely and perhaps common source of difficulty is a mismatch between the mental models of a system held by those who design it and by those who operate it.

Operators who fail to recognize this disparity are subject to unpleasant surprises when the system behaves in unexpected ways. Operators who do recognize it may fail to exploit the full potential of the system for fear of surprises if they push it into unfamiliar territory (Young). On a descriptive level, it would be useful to understand the correspondence between the mental models of designers and operators, as well as to know which experiences signal operators that there is a mismatch and how they cope with that information. On a practical level, it would be useful to know more about the possibility of improving the match between these two models by steps such as involving operators more in the design process or showing them more of how the system was designed.

The magnitude of these problems is likely to grow to the extent that designers and operators have different training, experience, and intensity of involvement with systems.

The concept of mental workload as discussed in this section is not unique to supervisory control, but it is sufficiently important in this context to be included here as a special consideration. Although there is not yet an agreed definition or measure of mental workload, military specifications for it are nevertheless being prepared by the Air Force, based on the assumption that mental workload measures will predict—either at the design stage or during a flight or other operation—whether an operation can succeed.

In other words, it is believed that measurements of mental workload are more sensitive in anticipating when pilot or operator performance will break down than are conventional performance measures of the human-machine system. Mental workload must be inferred; it cannot be observed directly like human control response or system performance, although it might be defined operationally in terms of one test, several tests, or a battery of tests. There is a clear distinction between mental and physical workload: the latter is the rate of doing mechanical work and expending calories, and there is consensus on measuring it by means of respiratory gases and other physiological techniques.

Of particular concern are situations involving sustained mental workloads of long duration. Many aircraft missions continue to require such effort by the crew. But the introduction of computers and automation in many systems has come to mean that for long periods of time operators have nothing to do—the workload may be so low as to become a problem in itself. The operator may then suddenly be expected to observe events on a display and make critical judgments—indeed, even to detect an abnormality, diagnose what failed, and take over control from the automatic system.

Also of concern is that at the beginning of the transient the computer-based information will be opaque to the operator, and it will take some time even to figure out how to access and retrieve the needed information from the system.

There have been four approaches to measuring mental workload. One approach, used by the aircraft manufacturers, avoids coping directly with measurements of the operator per se and bases workload on a task time-line analysis. This provides a relative index of workload that characterizes task demand, other factors being equal.
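
One common form of such a time-line index, sketched here with made-up numbers, is the ratio of the time required by the assigned tasks in an interval to the time available in that interval; values near or above 1.0 flag intervals in which task demand saturates the operator.

    def timeline_workload(task_durations_s, window_s):
        """Fraction of an interval occupied by required task time (can exceed 1.0)."""
        return sum(task_durations_s) / window_s

    # Two ten-second intervals in a hypothetical approach phase.
    quiet_interval = timeline_workload([1.5, 2.0], window_s=10.0)       # 0.35
    busy_interval = timeline_workload([4.0, 3.5, 3.0], window_s=10.0)   # 1.05: demand exceeds the time available

    print(f"quiet: {quiet_interval:.2f}, busy: {busy_interval:.2f}")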

It says nothing about the mental workload of any actual person and indeed could apply to a task performed by a robot. A second approach is to ask pilots or operators for subjective judgments of their own workload; this may be done during or after the events judged. One form of this is a single-category scale similar to the Cooper-Harper scale for rating aircraft handling quality. These scales have been used by the military services as well as by aircraft manufacturers.

A criticism of them is that people are not always good judges of their own ability to perform in the future. Some pilots may judge themselves to be quite capable of further sustained effort at a higher level when in fact they are not.

The third approach is the so-called secondary task or reserve capacity technique. In it a pilot or operator is asked to allocate whatever attention is left over from the primary task to some secondary task, such as verbally generating random numbers, tracking a dot on a screen with a small joystick, etc.

Theoretically, the better the performance on the secondary task, the less of the operator's capacity the primary task is consuming. A criticism of this technique is that it is intrusive; it may itself reduce the attention allocated to the primary task and therefore be a self-contaminating measure. And in real flight operations the crew may not be so cooperative in performing secondary tasks.

The fourth and final technique is really a whole category of partially explored possibilities—the use of physiological measures. Many such measures have been proposed, including changes in the electroencephalogram (ongoing or steady-state), evoked response potentials (the best candidate is the attenuation and latency of the so-called P300, occurring roughly 300 milliseconds after the onset of a challenging stimulus), heart rate variability, galvanic skin response, pupillary diameter, and the frequency spectrum of the voice.

All of these have proved to be noisy and unreliable. Both the Air Force and the Federal Aviation Administration currently have major programs to develop workload measurement techniques for aircraft piloting and traffic control.

When workload is found to be excessive, several remedies are possible. First, one should examine the situation for causal factors that could be redesigned to make the task quicker, easier, or less anxiety-producing. Or perhaps parts of the task could be reassigned to others who are less loaded, or the procedure could be altered so as to stretch out in time the succession of events loading the particular operator.

Finally, it may be possible to give all or part of the task to a computer or automatic system.

It is important, for purposes of evaluating both mental workload and the cognitive models discussed in the previous section, to note that there has been an enormous change in models of mental processing in both psychology and computer science.

In their recent paper, Feldman and Ballard (in press) argue that:

The critical resource that is most obvious is time. Neurons, whose basic computational speed is a few milliseconds, must be made to account for complex behaviors which are carried out in a few hundred milliseconds (Posner). This means that higher complex behaviors are carried out in less than a hundred time steps. It may appear that the problem posed here is inherently unsolvable and that we have made an error in our formulation, but recent results in computational complexity theory suggest that networks of active computing elements can carry out at least simple computations in the required time range—these solutions involve using massive numbers of units and connections, and we also address the question of limitations on these resources.

There is also evidence from experimental psychology (Posner) that the human mind is, at least in part, a parallel system.

From neuropsychological considerations there is reason to suppose that parallelism is represented in regional areas of the brain responsible for different sorts of cognitive functions. For example, we know that different visual maps (Cowey) underlie object recognition and that separate portions of the cortex are involved in the comprehension and production of language. We also know more and more about the role of subcortical and cortical structures in motor control.

The study of mental workload has simply not kept up with these advances in the conceptualization of the human mind as a complex of subsystems. The majority of researchers of human workload have studied the interference of one complex task with another. There is abundant evidence in the literature that such interference does occur. However, this general interference may account for only a small part of the variance in total workload. More important may be the effects of the specific cognitive systems shared by two tasks. Indeed, Kinsbourne and Hicks have recently formulated a theory of attention in which the degree of facilitation or interference between tasks depends on the distance between their cortical representations.

The notion of distance may be merely metaphorical, since we do not know whether it represents actual physical distance on the cortex or some more functional measure of separation. Viewing humans in terms of cognitive subsystems changes the perspective on mental workload (see Navon and Gopher). It is unusual for any human task to involve only a single cognitive system or to occur at any fixed location in the brain. Most tasks differ in sensory modality, in central analysis systems, and in motor output systems. There is need for basic research to understand more about the separability and coordination of such cognitive systems.

We also need a task analysis that takes advantage of the new cognitive systems approach to ask how tasks distribute themselves among different cognitive systems and when performance of different tasks may draw on the same cognitive system.