Neurodecisions 2


What if you had to have an answer to a question based on the values of a fixed set of independent variables (not the simple problem of finding the “best” or “highest” ranking)? And you had to have that answer “right away”? And then, after those values changed a minute later, you had to have the answer again and again and again? These “instantaneous” results can be obtained by even “low-level” employees. First, however, you’ll have to use your expertise to associate a set of examples with a set of desired outcomes (decisions/control actions). This can usually be done by having a group of experts sit around a table and come to a consensus on what each example should be (for a given set of conditions, a given decision/control action). Should your examples be diagnoses, you must associate the symptoms/conditions with the disorder. If the problem is to detect a malfunction, then you must associate the conditions with the defect. Historical data, where available, is one of the best sources of examples. In more complex situations, perhaps a simulation program could be used to associate inputs with actual outcomes. These sets of examples will comprise a “lookup table”: for a given unique set of conditions (values a,b,c,…m) of a unique, FIXED set of independent variables (A,B,C,…n), you assign a given decision/diagnosis/control action/malfunction. Then you make another assignment for another unique set of conditions. This, then, is the “lookup table”:


Now then, the set of values (a10,b9,c8,d12,e2) represents classification (decision) #5. The same applies to the other vectors (a,b,c,d,e). (This “lookup table” can theoretically have hundreds of independent variables (A,B,C,…n), and an “unlimited” number of output classifications (decisions/diagnoses/control actions/malfunctions) by having many unique tables, each with its own unique values (a,b,c,…m) along with the corresponding output classifications.)
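As a sketch, such a “lookup table” can be pictured as a plain mapping from condition vectors to classifications. All the rows and values below are hypothetical, made up only to illustrate the structure (the first row matches the (a10,b9,c8,d12,e2) example above):

```python
# A minimal sketch of a "lookup table": each row maps a unique vector of
# values for the fixed set of independent variables (A, B, C, D, E) to
# one output classification. All values here are hypothetical.
lookup_table = {
    (10, 9, 8, 12, 2): "Decision #5",
    (3, 14, 7, 1, 20): "Decision #2",
    (8, 8, 25, 4, 11): "Decision #7",
}

def classify_exact(vector):
    """Return the decision for an exact match, or None if no row matches."""
    return lookup_table.get(tuple(vector))
```

Exact matching like this is only the starting point; the trained network described below handles inputs that match no row exactly.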

NOTE: This “lookup table” is actually a MODEL of your system. That is, the Table represents your system in terms of the values of the independent variables that comprise your system. The independent variables (a fixed set of them) could be such items as mph, gal/sec, temperature, frequency, $/unit, rate of return, amount of reserve $, cost of sales, rpm, salt/ml, rooms/unit, displacement, volts/meter, pressure, lb/cu.ft., etc., etc.

Neurodecisions, Inc. will use this “lookup table” to train the software so that you can then type into the computer your unidentified input (a,b,c,d,e). The software will “immediately” identify your input vector to be, say, 0% close to output #1, 0% close to output #2, …, 92% close to output #7, 6% close to output #8, 0% close to output #9, and 2% close to output #10. The probabilities for all outputs sum to 100%, and you can arbitrarily set, say, 90% as being required to sufficiently accept that output. Or, you can accept the highest output as being acceptable even though none of the outputs are greater than, say, 70%. (It should be noted that this support system allows a low-level employee to perform some of the functions of a high-level employee.)
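The acceptance rule just described can be sketched as follows. The labels and the 90% threshold are illustrative only, not taken from any actual Neurodecisions software:

```python
# A sketch of the acceptance rule: take the network's output
# probabilities, pick the highest, and accept it only if it meets a
# caller-chosen threshold. Labels and threshold are illustrative.
def accept_output(probabilities, threshold=0.90):
    """probabilities maps output labels to values summing to 1.0.
    Returns (best_label, accepted_flag)."""
    best = max(probabilities, key=probabilities.get)
    accepted = probabilities[best] >= threshold
    return best, accepted
```

With the example above, `accept_output({"#7": 0.92, "#8": 0.06, "#10": 0.02})` accepts output #7 at the 90% threshold; lowering or raising the threshold implements the two policies described in the paragraph.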

Parenthetically, it should be said that spreadsheets are essentially equation-solvers. However, most real-world problems cannot be represented adequately by a mathematical equation. The real world says such things as “If A=a2 and B=b7 and C=c26, etc., Then do such-and-such; or, Then the diagnosis is such-and-such; or, Then the malfunction is such-and-such”, etc. These things usually can’t be captured in a mathematical model, but they are precisely what a “lookup table” does very nicely.

Before describing more closely what is involved in this decision support system, a few applications will be listed:

1. A communications company providing the best communications suite for a client.
2. A chemical company providing its maintenance people a simple means of diagnosing a malfunction (in quick time).
3. A field medical team having medical expertise available (in quick time).
4. Multiple levels of credit ratings arrived at by low-level personnel (in quick time).
5. A personnel office making the best possible placement of its human resources.
6. A manufacturing company making the best possible distribution of its resources.
7. Real estate market valuations based on standardized criteria.
8. Engine malfunction detection, by low-level personnel.
9. Travel decisions based on a great many variables.
10. Factory floor maintenance, by low-level personnel (in quick time).
11. Ship engine-room monitoring and maintenance by low-level personnel (in quick time).
12. Ship damage control (Navy) where rapid response is critical.
13. Electronics and electronic circuit board fault detection.
14. Business decisions of all varieties.
15. Optimizing results of biological experiments.
16. Chemical compound identification.
17. Supervisory process control.
18. Analysis of medical tests.
19. Diagnosis of medical conditions.
20. Sales prospect selection.
21. Any situation that can be represented by a “lookup table” (and there are volumes of such situations).


The following is a dataset (“lookup table”) for illustrative purposes (it is actually a randomly generated dataset, so as not to be biased in any way):


The above dataset (“lookup table”) is a MODEL of (i.e., represents) your system. In other words, Row #1 says that for those values of the independent variables A,B,C,…T, your system requires Decision #1. (You, of course, created the dataset (“lookup table”).) NOW, let’s say that, for instance, the sensors of your system provide outputs such as:


You would key these values into your computer (or, alternatively, have them “hard-wired” directly to your computer via a switch), press ENTER, and immediately have the results displayed. In this case, the results would show, via a bar-diagram, that your input vector was output (Decision) #2, at a probability of 99%. The indeterminate input vector T2a is graphed with the Decision #2 vector. See the figure below.

Test Vector T2a and Decision #2 Vector

A test input vector was generated to determine how the software handled it. It was deliberately made to be “anything other than one of the 10 output decisions/diagnoses/control actions/malfunctions”. The test vector was then applied to the trained network (“lookup table”) to determine the Probability of the test vector being close to the 10 examples in the “lookup table”. After the run, a graph was made, as shown below. The network determined that the Test Vector T5b was 70% close to example #5 and 30% close to example #4. The other eight examples were all “at 0% closeness” to the Test Vector T5b.

[Graph: Test Vector T5b and Examples 5 and 4]

The graph below shows only Examples 1, 2, 3, 4, and 5. Each “lookup table” is usually only about 10 examples large, because more examples would tend to give erroneous results, and fewer would not be cost-effective.

[Graph: Examples 1 through 5]


The following application is not an actual one, but one that shows how the Decision Support System method can be used to accomplish real-world applications. Consider the independent variables in the figure below, where each variable is assigned a unique point on the diagram:


A. Pulse rise-time in usec.
B. Pulse-width in usec.
C. Pulse rise-time in usec.
D. Pulse delay-time in usec.
E. Pulse delay-time in usec.
F. Pulse rise-time in usec.
G. Pulse-ringing in hz.
H. Pulse delay-time in usec.
I. Pulse frequency in mhz.
J. Pulse-width in usec.
K. Pulse fall-time in usec.
L. Pulse noise-content in hz.
M. Pulse-width in usec.
N. Pulse-overshoot in uvolts.
O. Pulse rise-time in usec.
P. Pulse frequency in mhz.
Q. Capacitor voltage in mv.
R. Pulse-width in usec.
S. Pulse rise-time in usec.
T. Pulse-width in usec.

The following “Lookup Table” is created to “model” the above system.

A      B      C      …      T
46     19     21     *      17      Decision 1: DBIN clocks D5 too late
27     73     91     *      67      Decision 2: CS delayed at PROM2
76     28     46     *      29      Decision 3: STSTB duration too short
17     31     66     *      46      Decision 4: WR voltage too low
87     76     14     *      14      Decision 5: MEMR is open-circuited
45     95     22     *      63      Decision 10: HLDA has missing pulses

(The “*” stands for the elided values at Points D through S; Decisions 6 through 9 are likewise elided.)

The above “Lookup Table” says: “IF the pulse rise-time at Point A is 46, and IF the pulse-width at Point B is 19, and IF the pulse rise-time at Point C is 21, and IF ….. and IF the pulse-width at Point T is 17, THEN the output is Decision 1: The signal at DBIN clocks the flip-flop D5 too late.” Thus, a low-level employee can do high-level troubleshooting on complex electronic equipment by reading the values at Test Points A to T, keying these values into the computer, pressing ENTER, and “instantaneously” obtaining the cause of the malfunction. In more sophisticated arrangements, the Test Points A to T could be automatically applied to the computer. The critical problem, of course, is the generation of the “Lookup Table”. This could be done by recording the test values each time a piece of equipment is diagnosed by a high-level technician. This recorded data would be accumulated over time, and then made available for the generation of the “Lookup Table”. There could be many strategies for generating this table, and certainly this process is the most difficult and critical part of creating this type of Decision Support System.
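The troubleshooting step (keying in readings and getting back the nearest diagnosis) can be sketched as follows. This uses a simple nearest-neighbor match as a stand-in for the trained network, and only the four visible columns of the sample table; the elided Points D through S are omitted:

```python
import math

# Rows from the sample troubleshooting "Lookup Table": readings at test
# points A, B, C, T (the elided points are omitted), each mapped to a
# diagnosis. A nearest-neighbor match stands in for the trained network.
ROWS = [
    ((46, 19, 21, 17), "Decision 1: DBIN clocks D5 too late"),
    ((27, 73, 91, 67), "Decision 2: CS delayed at PROM2"),
    ((76, 28, 46, 29), "Decision 3: STSTB duration too short"),
]

def diagnose(readings):
    """Return the diagnosis whose stored readings are nearest
    (Euclidean distance) to the keyed-in readings."""
    return min(ROWS, key=lambda row: math.dist(row[0], readings))[1]
```

Readings of (45, 20, 22, 18), for instance, land nearest the first row, so the displayed cause would be Decision 1.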


Decision/Control Action

Probably the first thing that should be done when developing a Lookup Table is to determine the fixed set of independent variables that represent your “System”. These variables should be well chosen, and the structure of your Lookup Table should be a direct function of your system. These variables should be the most important aspects of your system, those things most relevant to the decisions that promote your goals. Perhaps you should list these decisions first, and then, based on that list, develop the supporting variables. For instance, you decide you want to do X or Y or Z. Then you decide what factors are most important to your doing X or Y or Z. These “factors” are most probably going to be the values of the independent variables that will determine whether you do X or Y or Z (decisions/control actions/diagnoses/malfunctions).

Again, you’ll have to determine the set of values, of a fixed set of independent variables, that represents a decision that supports your scheme of things. Then you’ll have to repeat this process for another, different decision (using the SAME fixed set of independent variables; the number of these variables is theoretically “unlimited”, but in practice, 20 to 30 is a normal limit). You then repeat the above process for an “unlimited” number of decisions. (Each row of your Lookup Table is an Example, i.e., an example of a representation of a decision.)

1. Adjust (or imply) a set of conditions (for a fixed set of independent variables) and then have a group of experts study the situation and reach a consensus on the best decision/control action for that particular set of conditions. Repeat this process many times, changing the set of conditions each time (always using the same, fixed set of independent variables).


1. Apply an input to the system, insert a known fault, and then record the values of a fixed set of strategically chosen independent variables. These variables should be chosen so that they represent the system under consideration accurately and fully. Repeat this process many times, changing the fault each time. (The fault might be something like an open circuit in a piece of electronic equipment.)
2. Obtain historical data of systems that have been diagnosed by experts, with the causes of the malfunction (fault) recorded. The more of these, the better; that is, the more malfunctions the Decision Support System will be able to detect. The causes of the malfunctions must be structured so that they’re in the form of a fixed set of independent variables.
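The fault-insertion procedure above can be sketched as a loop. Here `simulate` is a purely hypothetical stand-in for either the physical insert-fault-and-record step or a simulation program; nothing in the original describes an actual program by this name:

```python
# Sketch of building the "Lookup Table" by fault insertion. The
# hypothetical simulate(fault) callable injects the named fault and
# returns the recorded values of the fixed set of independent variables.
def build_lookup_table(simulate, faults):
    table = []
    for fault in faults:
        readings = simulate(fault)   # insert the fault, record the values
        table.append((tuple(readings), fault))  # one Example (row) per fault
    return table
```

Each pass through the loop produces one row (Example) of the table, with the same fixed set of variables throughout.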


This application optimistically explains the Decision Support System (DSS) concept very well, if only hopefully (since the author is NOT a medical person). It’s said “hopefully” because the application depends on a surmise that’s merely the author’s uninformed “wouldn’t it be excellent”. That is, wouldn’t it be excellent if a blood sample could provide the detection of a multitude of medical disorders from the amounts of the various constituent parts of the blood sample? (Also urine, and maybe cerebrospinal fluid.)

The concept will now be outlined. The presumption is that the amounts of a blood sample’s constituent parts (to be listed) do in fact represent a specific disorder (or even a combination of disorders). The blood’s constituent parts are listed:

1. Albumin
2. Ammonia
3. Calcium
4. Cholesterol
5. Creatinine
6. Direct Bilirubin
7. Glucose
8. High Density Lipoprotein (HDL)
9. Iron
10. Magnesium
11. Phosphorous
12. Total Bilirubin
13. Total Iron Binding Capacity
14. Total Protein
15. Triglycerides
16. Urea Nitrogen (BUN)
17. Uric Acid
18. Urinary Protein
19. Thyronine Uptake
20. Thyroxine
21. Acid Phosphatase
22. Alanine Aminotransferase
23. Alkaline Phosphatase
24. Amylase
25. Aspartate Aminotransferase (GOT)
26. CK Isoenzyme (CKMB)
27. Creatine Kinase (CK)
28. γ-Glutamyl Transferase (GGT)
29. Lactic Dehydrogenase (LDH)
30. Lipase
31. Pseudo-cholinesterase (PCHE)
32. Carbon Dioxide
33. Chloride
34. Potassium
35. Sodium
36. Digoxin
37. Phenobarbital
38. Phenytoin
39. Theophylline
40. Gentamicin
41. Tobramycin
42. Vancomycin
43. C-Reactive Protein
44. Lactic Acid
45. CSF Protein (Cerebrospinal Fluid Protein)
46. Salicylate

These constituent parts are the independent variables, with EACH of the variables having a specific value for a specific disorder. Thus, a person has a “profile” for his condition, with the profile being like one of the preceding graphs in bar-graph form. Common sense tells us that to derive these bar-graphs (profiles) for each specific disorder, the following has to take place (this is true historical data): physician experts, over a period of time, have to record the values of the constituent parts of the blood sample of a person who’s been diagnosed with a given disorder. Many, many such records must be obtained, from many people and for “all” the disorders. This will be a huge database. Once it has been assembled, the panel of medical experts must then agree on an average (and/or range) value for each of the constituent parts of the blood sample for each of the disorders. Each disorder now has its own “profile”. (Disorder X’s A-independent variable has a value of a16, say, the B-independent variable’s value is b5, say, and so on for each variable of disorder X.)
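The averaging step (condensing many patient records into one “profile” per disorder) can be sketched as follows; the data layout is an assumption made for illustration:

```python
# Sketch of condensing many patient records into one average "profile"
# per disorder. records maps a disorder name to a list of
# constituent-value vectors, one vector per diagnosed patient, with the
# constituents always in the same fixed order.
def average_profiles(records):
    profiles = {}
    for disorder, vectors in records.items():
        n = len(vectors)
        # Average each constituent (column) across all patient records.
        profiles[disorder] = [sum(col) / n for col in zip(*vectors)]
    return profiles
```

In practice the panel of experts might prefer ranges to simple averages, but the idea is the same: one representative vector per disorder, forming one row of the Lookup Table.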

Now that the historical data has been gathered and categorized (a bar-graph has been developed for each disorder), a neural network will use this dataset to train itself. It will train itself to the extent that when the blood sample data of a person with an undiagnosed disorder is presented to it, it will generalize that person’s data and so specify that the unknown disorder is classified most closely to disorder Y, say (that is, the Lookup Table is interpolated to determine the probability that the person’s unknown disorder is most closely associated with disorder Y). Remember, the blood sample’s constituent parts are the independent variables; the values of the variables form a “profile” (if you will, a pattern), and the output of the neural network is the nearness of that profile to a classification (a decision, if you will). This determination of the closeness to a classification, i.e., say, 92% close to disorder Y, is accomplished “instantaneously”, per dataset. And in addition, there are companies, such as DuPont, that produce equipment that will identify the constituent parts of a blood sample within one minute.

This application is an excellent one if only to explain the concept of the DSS. And it was seen that the data preparation is the most critical part of the process, just as it’s seen that a neural network, once trained, is a “super diagnostician” (presuming that a valid dataset is possible and meaningful). Thus, a unique combination of values of the fixed set of independent variables identifies, to a certain percent, a unique disorder.



When I was at the Naval Sea Systems Command (NAVSEA), I was working on the development of a neural network application comparable to the medical diagnostician described above. In this case, a ship’s engine-room is the system under consideration. Such a system is complex, yes, but also, remedial action is an absolute must, especially in war-time. A one-page description will be preceded by various quotations from a Request for Bids for this application. This will be done to highlight ideas (and words) that are particularly germane to this concept of neural network applications.

1. “Decision-makers on a modern warship are required to collect, interpret, and act upon a multitude of data in real-time”. (Key words: decision-makers, real-time).
2. “…..conditioned-based maintenance….”.
3. “…..on-line condition assessment on the U.S.S. America (CV-66)…..”
4. “…..140 parameters are monitored…..”
5. “…..monitored parameters…..”
6. “Real-time expert-based diagnostics, advisories, and maintenance recommendations”.
7. “Typically, the boilers are controlled automatically by a pneumatic control system which responds to the steam-pressure in the boiler-drum, the rate of the feedwater-flow to the boiler drum, and the water-level in the boiler drums to automatically control the fuel oil valves furnishing fuel oil to the burners, feedwater valves furnishing water to the drums and forced-draft burners furnishing air to the burners”. (The point of quoting the above is to give some idea of the complicated interrelationships involved in large systems that can only be realistically addressed by a neural network that can learn complex mappings.)
8. “Because the components of the pneumatic control system are interrelated and perform in a cascading mode, misalignment, degradation and malfunction in the operating characteristics of the components of the system can cause total degradation without any clear indication as to which component is causing the problem”. (Key words: interrelated, characteristics).
9. “…..with the simultaneous monitoring of all controller input and output values…..” (Key words: simultaneous, values).
10. “…..transducer outputs are converted to digital values and are read by a computer every tenth second”. (Key word: read, i.e., sampled).
11. “…..continuously monitors the various parameters…..” (read: conditions)
12. “…..reads all the signals virtually simultaneously; the plant status data represents more accurately the interrelationships…..” (Key words: simultaneously, interrelationships).
13. “Combinations of out-of-specification readings to help pinpoint impending failure of parts”. (Key words: combinations, impending).
14. “…..records condition-data…..”
15. “However, the analysis is experience-sensitive; a knowledge of the engine-operating parameter data and the relationship among the parameters is required…..cause-and-effect relationships”.
16. “If a group of parameters indicate a possible impending problem…..” (Key words: group, impending).
17. “Conditioned-based maintenance…..”

With this introduction, consider the following list of parameters of the ship’s engine, all of whose conditions together represent the condition of the entire engine. (The list is not complete, and is only to demonstrate this concept of neural network decision-making/diagnosis/control. Also, I am an electrical engineer, not a mechanical engineer).

AA–Drum Pressure
AB–Steam Pressure Transmitter Output
AC–Air Flow Transmitter Output
AD–Steam Flow Transmitter Output
AE–Feedwater Flow Transmitter Output
AF–Drum Level Transmitter Output
AG–High Signal Selector Output
AH–Steam Pressure Controller V.C.
AI–Steam Pressure Controller Output
AJ–Boiler Master A/M Station Output
AK–Fuel Air Ratio Station Output
AL–Air Flow Controller V.C.
AM–Air Flow Controller Output
AN–Steam Flow Rate Relay V.C.
AO–Steam Flow Rate Relay Output
AP–Range Modifier Output
AQ–Low Signal Selector Output
AR–Characterizing Relay output
AS–Combining Relay Output
AT–Drum Level Controller V.C.
AU–Drum Level Controller Output
AV–Feedwater A/M Station Output
AW–F.D. Blower #1 RPM
AX–F.D. Blower #2 RPM
AY–Fuel Oil System Pressure
AZ–Fuel Oil Burner Pressure
BA–Feedwater Header Pressure
BB–Low Fuel Oil Pump Pressure
BC–Low Fuel Oil Header Pressure
BD–Low Fuel Oil Flow
BE–High Lub Oil Filter Differential Pressure
BF–Low Turbocharger Lub Oil Pressure
BG–Low Main Reduction Gear Lub Oil Pressure
BH–Low Lub Oil Header Pressure
BI–High Lub Oil Temperature to Engine
BJ–High Lub Oil Temperature from Engine
BK–Low Salt Water Pump Pressure
BL–Low Jacket Water Pump Pressure
BM–High Jacket Water Temperature to Engine
BN–High Jacket Water Temperature from Engine
BO–High Cylinder Exhaust Temperature
BP–Cylinder Exhaust Temperature Differential
BQ–High Crankcase Vacuum
BR–Engine RPM
BS–Rack position
BT–Cylinder Temperatures 1 through 16
BU–Stack Temperature
BV–Salt Water Injection Temperature
BW–Salt Water Outlet Temperature
BX–Jacket Water Temperature to Engine
BY–Jacket Water Temperature from Engine
BZ–Salt Water Pump Pressure
CA–Jacket Water Pump Pressure
CB–Lub Oil Pump Pressure
CC–Lub Oil Header Pressure
CD–Lub Oil Filter Outlet Pressure
CE–Lub Oil Strainer Inlet Pressure
CF–Turbocharger Lub Oil Pressure
CG–Fuel Oil Pump Pressure
CH–Crank Case Vacuum
CI–Air Manifold Pressure
CJ–Air Intake Depression
CK–Air Intake Manifold Temperature
CL–Air Intake Manifold Air Flow
CM–Turbocharger Air Discharge Temperature
CN–Propeller Pitch
CO–Main Reduction Gear Lub Oil Pressure
CP–Engine-room No.1 Stress Temperature

Above find a partial table of a set of values of a set of parameters (conditions) that represents a certain specific malfunction/incipient malfunction. Thus, “IF AA=6 and AB=21 and AC=2 and AD=43 and AE=16 and ……..and CM=92 and CN=12 and CO=17 and CP=5, THEN the air pressure to the air clutch has been reduced”. This exemplar was produced by the following method: physically do what is necessary to reduce the air pressure at the air clutch, and then record the values for AA, AB, AC, etc. (If a computer model (simulation) of the engine were available, this physical method would not be necessary, thus obviating possible harm to the system by actually inserting a fault in the system.) Next, restore the air clutch to its normal condition and “inject” a malfunction into the oil lubrication system at the proper point. Again take readings at each point AA, AB, AC, etc. to generate exemplar #2. In this way, a table will be built that will be learned by the neural network. What will be generated is a model of the engine under “all” kinds of malfunction conditions. It’s a multivariate lookup table, and only a neural network can interpolate it.

The readings (AA=6, etc.) can be made by using a “gun” that shoots ultrasonic waves at the components at a distance and that then provides the condition (values/readings) of such things as bearings, vacuum leaks, gear boxes, welds, line blockage, steam traps, heat exchangers, seals, pumps, tanks, air brakes, valves, compressors, gaskets, motors, pipes, flow direction, pressure leaks, electric arcs, junction boxes, etc. (See ULTRAPROBE 2000, UE Systems, Inc.)

Now, after a neural network has been trained on the data in the generated lookup table, readings of all the points AA to CP, of an operating system (the ship’s engine) that is malfunctioning, are taken. This data (this input vector representing all the conditions of all the points AA to CP of the malfunctioning engine) is now applied to the trained neural network.
(Remember, the network was trained on all the data in the generated lookup table.) The neural network will “evaluate” (compare) this input vector against all the vectors in the lookup table to determine how close the input vector is to each of the vectors in the lookup table. This matching process will show that the input vector is 12% close to exemplar #1, say, and 2% close to exemplar #2, say, and ….. and 77% close to exemplar #46, say, and 5% close to exemplar #72, say, and 4% close to exemplar #97, say. (The sum of all the probabilities will equal 100%.) There will “never” be a perfect match (that is, every value of all the variables (AA,AB,AC,…..) being precisely equal to every corresponding variable in the input vector). However, an input vector can get a probability of 100% if it is very close to a vector in the lookup table (and it isn’t particularly close to any other exemplar (vector) in the lookup table). If it isn’t close to any exemplar in the lookup table, you probably didn’t have enough exemplars (your table was too skimpy).
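One simple way to turn the distances between an input vector and the exemplars into percentages that sum to 100% is a softmax over negative distances. This is only a sketch of the idea; the original network may well have computed its closeness figures differently:

```python
import math

def closeness(input_vec, exemplars):
    """Percent closeness of input_vec to each exemplar; the returned
    percentages sum to 100. exemplars is a list of (vector, label) pairs
    with every vector the same length as input_vec."""
    # Nearer exemplars get exponentially larger (unnormalized) scores.
    scores = {label: math.exp(-math.dist(vec, input_vec))
              for vec, label in exemplars}
    total = sum(scores.values())
    return {label: 100.0 * s / total for label, s in scores.items()}
```

An input vector sitting very near one exemplar and far from all others comes out close to 100% for that exemplar, matching the behavior described above; an input near no exemplar spreads its percentage thinly across several.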

This, then, will be an intelligent diagnostician. The above concept can also be modified to provide Supervisory Control and Data Acquisition (SCADA). The method is the same. Instead of calling the output a Diagnosis, it’s called a Decision.

Last modified on Monday, December 02, 1996