Latest News in AI

Material Requirements Planning (MRP)

Material Requirements Planning (MRP) is a computer-based production planning and inventory control system concerned with both production scheduling and inventory control. It is a material control system that attempts to keep inventory levels adequate to assure that required materials are available when needed. MRP is applicable in situations of multiple items with complex bills of materials. It is not useful for job shops or for continuous processes that are tightly linked.

The major objectives of an MRP system are to simultaneously:

1. Ensure the availability of materials, components, and products for planned production and for customer delivery,

2. Maintain the lowest possible level of inventory,

3. Plan manufacturing activities, delivery schedules, and purchasing activities.

MRP is especially suited to manufacturing settings where the demand for many of the components and subassemblies depends on the demand for items that face external demand. Demand for end items is independent. In contrast, demand for the components used to manufacture end items depends on the demand for those end items. The distinction between independent and dependent demand is important in classifying inventory items and in developing systems to manage items within each demand classification. MRP systems were developed to cope better with dependent-demand items.

The three major inputs of an MRP system are the master production schedule, the product structure records, and the inventory status records. Without these basic inputs the MRP system cannot function.

The demand for end items is scheduled over a number of time periods and recorded on a master production schedule (MPS). The master production schedule expresses how much of each item is wanted and when it is wanted. The MPS is developed from forecasts and firm customer orders for end items, safety stock requirements, and internal orders. MRP takes the master schedule for end items and translates it into individual time-phased component requirements.

The product structure records, also known as bill of material (BOM) records, contain information on every item or assembly required to produce end items. Information on each item, such as part number, description, quantity per assembly, next higher assembly, lead times, and quantity per end item, must be available.

The inventory status records contain the status of all items in inventory, including on-hand inventory and scheduled receipts. These records must be kept up to date, with each receipt, disbursement, or withdrawal documented to maintain record integrity.

From the master production schedule and the product structure records, MRP determines the gross component requirements; the gross component requirements are then reduced by the available inventory as indicated in the inventory status records.

MRP Computations

We will illustrate MRP computations through examples.

Example 1. Suppose you need to produce 100 units of product A eight weeks from now, where product A requires one unit of product B and two units of product C, while product C requires one unit of product D and two units of product E. How many units of each type do you need? In this example it is easy to compute the requirements of each item to produce 100 units of product A: Req(B) = 100, Req(C) = 200, Req(D) = 200, Req(E) = 400. Suppose further that the lead times for the products are as follows: product A, four weeks; product B, three weeks; product C, two weeks; products D and E, one week each. Since the production lead time for product A is four weeks, we must have products B and C available at the end of week four. Since product B has a lead time of three weeks, we need to release the production of product B by the end of the first week. Similarly, product C needs to be released for production at the end of week two, while products D and E must be released for production at the end of week one.

A material requirements plan has been developed for product A based on the product structure of A and the lead time needed to obtain each component. Planned order releases of a parent item are used to determine gross requirements for its component items. Planned order release dates are obtained simply by offsetting the lead times.
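
The Example 1 computation can be sketched in a few lines of Python. This is a minimal illustration, not part of the original lecture notes: the BOM, quantities and lead times are exactly those of the example, and the function name is ours. It explodes gross requirements through the product structure and offsets each release date by the item's lead time.

  # Gross requirements: explode the order for 100 units of A through the BOM.
  # Planned order releases: offset each due date by the item's lead time.
  bom = {"A": {"B": 1, "C": 2}, "C": {"D": 1, "E": 2}}  # parent -> {component: qty}
  lead_time = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 1}  # weeks

  def explode(item, qty, due_week, req=None, releases=None):
      if req is None:
          req, releases = {}, {}
      req[item] = req.get(item, 0) + qty
      release = due_week - lead_time[item]  # release early by the lead time
      releases[item] = release              # simplification: keeps one week per item
      for comp, per_parent in bom.get(item, {}).items():
          # components must be available when the parent's production starts
          explode(comp, qty * per_parent, release, req, releases)
      return req, releases

  req, releases = explode("A", 100, 8)
  print(req)       # {'A': 100, 'B': 100, 'C': 200, 'D': 200, 'E': 400}
  print(releases)  # {'A': 4, 'B': 1, 'C': 2, 'D': 1, 'E': 1}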

The computations and steps required in the MRP process are not complicated; they involve only simple arithmetic. However, the bill-of-materials explosion must be done with care. What may get complicated is the product structure, particularly when a given component is used in different stages of the production of a finished item.

The Level of an Item

To form a useful bill of material matrix it is convenient to order the items by levels. The level of an item is the maximum number of stages of assembly required to get the item into an end product.

Example 2. Consider a system with two end items, item 1 and item 2. Item 1 requires two units of item A and one unit of item C. Item 2 requires one unit of item B, one unit of item D and three units of item E. Item A requires one unit of item B and two units of item F. Item B requires two units of item C and one unit of item E. Item C requires one unit of item F and three units of item G. Item D requires two units of item B and one unit of item C. The levels of the items are:

  Level 0: items 1 and 2.
  Level 1: items A and D.
  Level 2: item B.
  Level 3: items C and E.
  Level 4: items F and G.
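
The level computation of Example 2 can be sketched as a longest-path search over the product structure. This sketch is ours, not the lecture's; since quantities do not affect levels, only the structure is kept.

  # Level of an item: the maximum number of assembly stages up to an end item.
  from functools import lru_cache

  bom = {
      "1": {"A", "C"},
      "2": {"B", "D", "E"},
      "A": {"B", "F"},
      "B": {"C", "E"},
      "C": {"F", "G"},
      "D": {"B", "C"},
  }
  end_items = {"1", "2"}

  parents = {}  # item -> set of items that use it directly
  for parent, comps in bom.items():
      for comp in comps:
          parents.setdefault(comp, set()).add(parent)

  @lru_cache(maxsize=None)
  def level(item):
      if item in end_items:
          return 0
      return 1 + max(level(p) for p in parents[item])

  for item in sorted(end_items | set(parents)):
      print(item, level(item))  # 1,2 -> 0; A,D -> 1; B -> 2; C,E -> 3; F,G -> 4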

An Outline of the MRP Process

Starting with end items, the MRP process goes through the following steps:

1. Establish gross requirements.
2. Determine net requirements by subtracting scheduled receipts and on-hand inventory from the gross requirements.
3. Time-phase the net requirements.
4. Determine the planned order releases.

The quantities computed at each step are laid out in an MRP table:

Table 1: MRP Table

  Week                     1   2   3   4   5   6   7
  Gross requirements
  Scheduled receipts
  Net requirements
  Time-phased net req.
  Planned order releases

The planned order releases, aggregated over all the end items, result in the gross requirements for level one items; the gross requirements for these items are then netted and time-phased to determine their own order releases. The process is continued until all the items have been exploded.
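
As a hedged illustration of steps 1 through 4, the following sketch nets and time-phases a single item over the seven-week horizon of Table 1. The demand figures, scheduled receipts, on-hand inventory and lead time are invented for the example.

  # Net and time-phase one item over a 7-week horizon (the rows of Table 1).
  gross    = [0, 0, 0, 0, 100, 0, 0]  # gross requirements per week
  receipts = [0, 20, 0, 0, 0, 0, 0]   # scheduled receipts
  on_hand, lead_time, weeks = 30, 2, 7

  net = []
  inventory = on_hand
  for w in range(weeks):
      inventory += receipts[w]
      net.append(max(0, gross[w] - inventory))  # shortfall after netting
      inventory = max(0, inventory - gross[w])

  releases = [0] * weeks  # planned order releases: net requirements offset earlier
  for w, qty in enumerate(net):
      if qty:
          releases[w - lead_time] += qty  # assumes no net requirement falls inside the lead time

  print(net)       # [0, 0, 0, 0, 50, 0, 0]
  print(releases)  # [0, 0, 50, 0, 0, 0, 0]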

Source : http://www.columbia.edu/~gmg2/4000/pdf/lect_06.pdf

Samsung Uses Six Sigma To Change Its Image

Samsung Electronics Co. (SEC) of Seoul, Korea, is perfecting its fundamental approach to product, process and personnel development by using Six Sigma as a tool for innovation, efficiency and quality. SEC was founded in 1969 and sold its first product, a television receiver, in 1971. Since that time, the company has used tools and techniques such as total quality control, total process management, product data management, enterprise resource management, supply chain management and customer relationship management. Six Sigma was added to upgrade these existing innovations and improve SEC’s competitive position in world markets. The financial benefits made possible by Six Sigma, including cost savings and increased profits from sales and new product development, are expected to approach $1.5 billion by the end of 2002.

Strategic Objective

SEC wants to be a borderless, global brand that is a household word wherever its products and services are available. SEC’s strategic objective is to create both qualitative and quantitative growth and deliver competitive value to all stakeholders—customers, partners and shareholders—while maintaining profitability. The objective is focused on the value chain of the company’s four major businesses—home, mobile and office networks and core components. The emphasis is on creating a solid framework for these businesses by optimizing the supply chain to make operations as efficient and timely as possible. To achieve the goal of efficiency and timeliness, SEC has integrated Six Sigma into its entire business process.

SEC saw the universal adoption of Six Sigma throughout the company’s 16 businesses worldwide as the way to perfect its fundamental approach to product, process and personnel development.

Deployment

As a foundation for its Six Sigma thrust, SEC began by pursuing a pervasive goal of developing its internal resources, especially people, to put innovation first in the development and design of products, in manufacturing and marketing, and in the growth of employees. With its strategic objectives established, the foundation was ready for the Six Sigma process to begin in late 1999 and early 2000 with training for SEC’s management, Champions and other employees responsible for deployment planning.

The Juran Institute Inc. provided this training. After only three years of deployment, the number of Master Black Belts (MBBs), Black Belts (BBs) and Green Belts (GBs) already approaches 15,000—or almost one of every three employees. With a corporate goal of 100 MBBs, 3,000 BBs and 30,000 GBs by 2004, SEC obviously is serious about developing Six Sigma capability in a workforce of about 49,000 worldwide in 89 offices and 47 countries. No individual or operation in SEC is exempt from the training—including staff members in management, personnel, accounting and procurement. BBs are expected to guide several projects per year, which further increases each project’s return on investment. Promotion and other awards and incentives are granted by the operation to which each employee is assigned. Starting in 2000, the use of Six Sigma began in manufacturing, using the define, measure, analyze, improve and control (DMAIC) discipline.

 

This was then expanded into design for Six Sigma, using the DMAIC phases for designing new products. Transactional Six Sigma was applied next to business and support processes, internally and externally, where customer needs and interactions have become increasingly critical. Through Sigma Park, an intranet site available worldwide to all SEC facilities, SEC provides reference materials, benchmarking opportunities, reports to senior management and support for Six Sigma projects whose team members span several continents. Cross-border organizational learning is advanced as the Six Sigma methodologies are applied consistently from location to location.

Source : http://www.juran.com/elifeline/elifefiles/2009/09/Samsung-Uses-Six-Sigma-to-Change-Its-Image.pdf

Improving Manufacturing Processes Through Lean Implementation

Reducing waste, implementing efficiency-promoting practices, and continuously improving operations are the main goals of lean manufacturing ideology. These tasks may seem daunting for a manufacturer at the start of an improvement program, but there are many concrete steps that can be taken to shift the culture at any company.

For many companies, all it takes to dramatically increase efficiency and reduce waste is a commitment to dive right in and a willingness to try new and creative ideas to find out what works best. If you are able to simplify your manufacturing tasks, increase spatial and workflow organization, take steps to reduce errors, and listen to employees on the manufacturing floor, your company will begin to see reduced waste, improved employee morale, improved efficiency, and a greater ability to manufacture products on a predictable timetable.

The following tips can help send you on your way toward all of these goals and change the way your company operates to be ready for improvement at all times.

Simplify manufacturing tasks

At the heart of waste reduction and increased efficiency is simplifying manufacturing tasks. Without a critical eye toward opportunities for simplification, manufacturing tasks throughout your operations become inefficient, which can lead to wasted time and resources, inconsistent product quality, and a number of other negative outcomes. Finding an appropriate method for simplifying manufacturing tasks is therefore an important first step in any company’s improvement.

Take, for example, Butler Automatic, the inventor of the zero-speed, nonstop automatic film splicer. Butler equipment eliminates downtime due to web changes for the packaging industry. Given that promoting efficiency is a key part of Butler’s business, the principles of lean are integral to the company’s own manufacturing practices.

When Butler Automatic began to solidify its commitment to lean manufacturing practices, it had to find a simplification method that was right for its specific type of manufacturing. Because Butler builds configured machines and products that are conceptually the same but each slightly different, tailored to its end use, the company implemented a practice known as cellular manufacturing. Cellular manufacturing is highly useful for companies that build machines that must be configured exactly right the first time.


Implementation of cellular manufacturing at Butler Automatic has simplified manufacturing tasks and led to reduced waste of time and materials.

With this method, cells are set up on the manufacturing floor for each step in the manufacturing process and for each different component of the final product. The individual cells are tailored to their function in terms of materials, tools, and design. In this way, efficiency is increased and waste reduced because all the appropriate materials and tools are already at workers’ fingertips.

Cellular manufacturing also calls for the same process to be followed each time a certain part is produced or altered. Possible errors are reduced by this increased repetition, and operator training is made simpler. Perhaps most important to the lean manufacturing process, repetition makes it easier to make iterative changes and track whether these changes have a positive effect on the overall efficiency of the process. Continuous improvement will be addressed later on in this article, but it is a key component of every aspect of lean manufacturing practices.

Although cellular manufacturing isn’t the only way to simplify manufacturing operations, it is one of the most effective and provides an excellent example for the positive outcomes that can result from implementing lean practices. Cellular manufacturing may be right for your business, or you may want to try to find a different way to simplify tasks. Either way, finding a way to simplify your manufacturing process that leads to repeatable quality and easily traceable results is an important first step in improvement.

Increase organization

In addition to simplifying processes, organizing your manufacturing floor and workflow can greatly increase efficiency. Spatial organization of tools, materials, and manufacturing space cuts down on search and transport times. Neat and orderly workspaces help workers feel more relaxed and enable them to work quickly and efficiently. Keeping workspaces free of dirt, dust, and spills is also important, and not just because of the positive effect on worker morale; cleanliness improves worker safety and final product quality. Cleanliness is fairly easy to maintain if cleaning supplies are visible and readily available. Organization, on the other hand, usually requires a more codified system.

 

Source : http://www.qualitydigest.com/inside/quality-insider-article/tips-improving-manufacturing-practices-through-lean-implementation#

For the Sake of Energy Efficiency, New “Intelligent Agents Lab” Doubles As a Building

The National Institute of Standards and Technology (NIST) is converting one of its laboratories into the equivalent of a small office building—not because of an increase in administrative overhead, but to develop and test smart software technologies designed to slash energy use in commercial buildings.

HVAC Test Lab
Architectural drawing of the new NIST ‘intelligent agents’ lab for more efficient building control systems.
Credit: Kikkeri/NIST

From schools and hospitals to stores, offices and banks, commercial buildings account for a growing share of U.S. energy use—about 19 percent of the total and a third of electric power consumption.* More than four-fifths of this energy is consumed after construction by heating, cooling, lighting, powering plug-in equipment and other operations. By one estimate, day-to-day energy expenses make up 32 percent of a building’s total cost over its lifetime.**

NIST figures that these energy-eating operations can be accomplished far more efficiently and frugally with existing equipment by more intelligently coordinating their use. At the mock office building now under construction in a standard 1,000 square foot (93 square meters) modular lab space, NIST researchers will put this assertion to the test. There, they and their collaborators will investigate whether artificial intelligence tools already used in search engines, robots, routing and scheduling programs, and other technologies can work cooperatively to optimize building performance—from minimizing energy use to maximizing comfort to ensuring safety and security.

“Adapting intelligent agent technologies from other fields offers the promise of significant improvements in building operations,” explains Amanda Pertzborn, a mechanical engineer working in NIST’s Embedded Intelligence in Buildings Program. “The idea is a kind of ‘one for all approach’—use networked intelligent agents to manage and control devices and equipment subsystems to enhance the overall performance of a building rather than to optimize the operation of each component independently of all the others.”

Intelligent agents are combinations of software and hardware—sensors, mechanical devices and computing technologies—that perceive their environment, make decisions and take actions in response. They can monitor, communicate, collaborate and even learn, predict and adapt.

The energy-saving potential of this smart technology will grow with the evolution of the “smart grid” and its two-way communication capabilities, Pertzborn says. So, for example, cooperating teams of intelligent agents can parse time-of-day pricing, weather forecasts, availability of renewable energy supplies, and occupancy patterns to adjust individual equipment and systems to achieve optimal overall performance.

NIST’s simulated office building will serve as a proving ground for assessing whether intelligent agents dispersed among a structure’s multitudes of devices and subsystems can achieve this unity of purpose and work in concert. Prototypes will be tested on the most energy-intensive of building operations: heating, ventilating and air conditioning (HVAC). So-called HVAC systems in commercial buildings account for about 7 percent of total U.S. energy consumption.***

Modern HVAC systems consist of thousands of devices, from local dampers, heaters, thermostats and fans to boilers, air handling units, chillers and cooling towers. When a building’s HVAC system is first installed and tested, this vast assortment of components can be tuned so that the system starts out performing at peak efficiency. Over time, however, efficiency tends to degrade from the optimum and the energy use patterns of occupants change, requiring the system to be retested and retuned. Intelligent agents distributed throughout an HVAC system would enable continuous tweaking, orchestrating the operation of all components so as to maintain peak performance and efficiency throughout the building’s lifetime.

Using a real building HVAC system under controlled laboratory conditions will enable meaningful comparisons of prototype intelligent agents, Pertzborn explains. Scheduled to be completed in the fall, this building-in-a-lab will consist of four zones serviced by two chillers, three air-handling units, four variable air volume units to control air flow and one ice storage tank, plus pumps, heat exchangers and other equipment.

 

Source : http://www.nist.gov/el/building_environment/ial-082514.cfm

CBR – Case Based Reasoning

What Is CBR

As the name implies, it is Reasoning Based on Cases.

From Webster’s Dictionary –

  • Reasoning – The drawing of inferences or conclusions through the use of facts or other intelligible information.
  • Based – Grounded in known theory, knowledge or information.
  • Case – Similar set of related facts or information.

Thus Case-Based Reasoning is the act of developing solutions to unsolved problems based on pre-existing solutions of a similar nature.

This is analogous to being presented with a problem that you have to solve. While thinking about the problem in order to gain a more complete understanding and start to develop a solution strategy, most people will naturally recall other, similar problems they have encountered. During this mental review of previous problems and their associated solutions, the problem solver is actually performing CBR.

Within your mental picture of the current problem, and as you start to formulate a solution, you typically review other mental pictures and determine to what extent they relate to the current problem. If a previous problem/solution pair is fairly close to the current problem, then the solution to the previous problem is applied to the current problem. As the current problem and previous solution are compared for functionality and operational characteristics, the problem solver is actually determining how well the retrieved case matches the current needs. If the match is not completely acceptable, but close, then the problem solver starts to reason about the solution and how it must be modified to accommodate the new problem.
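
A minimal, hypothetical sketch of this retrieve, compare and adapt cycle follows. The feature encoding, the similarity measure and the 0.5 threshold are our own illustrative assumptions, not part of the original article.

  # Score stored cases against the new problem; reuse an exact match,
  # adapt a close one, and give up when no usable precedent exists.
  def similarity(problem, stored):
      """Fraction of the problem's features the stored case matches."""
      hits = sum(stored.get(f) == v for f, v in problem.items())
      return hits / len(problem)

  def adapt(solution, problem):
      # placeholder for domain-specific repair of the retrieved solution
      return {**solution, "adapted_for": dict(problem)}

  def solve(problem, case_base, threshold=0.5):
      best = max(case_base, key=lambda c: similarity(problem, c["problem"]))
      score = similarity(problem, best["problem"])
      if score == 1.0:
          return best["solution"]                  # exact match: reuse as-is
      if score >= threshold:
          return adapt(best["solution"], problem)  # close match: modify it
      return None                                  # no usable precedent

  case_base = [{"problem":  {"requested": "lobster", "on_menu": False},
                "solution": {"action": "suggest a similar dish"}}]
  print(solve({"requested": "swordfish", "on_menu": False}, case_base))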

A Little History

CBR has grown, in part, out of the more general field of artificial intelligence. A.I. is distinct from general computing in its base premise of attempting to solve general-purpose problems. Most computers and application code are designed to move and manipulate numbers: ‘number crunchers’. On the other hand, the ultimate expression of artificial intelligence is to develop computer code that mimics and can implement the general mechanisms underlying human intelligence. In other words, develop a computer program that generates solutions to new problems based on first principles of logic. First principles are a logical discourse on the topic matter that leads to a solution of the problem, given in terms a knowledgeable human can understand. No a priori knowledge of the problem domain or of other solutions to similar problems is required.

During research into the human ability to solve problems, researchers realized that most people derive solutions based on previous experience with similar situations. It has been observed that people even discuss problems and solutions in terms of previous experiences. Thus it appears that complete solutions derived solely from first principles are fairly rare. Instead, most problem solvers approach new problems and their associated solutions by relating both the problem and the solution to previous experiences; they build a new solution from information gained from previous experiences, coupled with some reasoning from first principles.

Expert Systems or Knowledge Based Systems (KBS) are a subset of CBR, and are based on a more limited problem domain (domain knowledge). They evolved in this manner largely because a general problem solver was too broad a task to be accomplished.


Thought and a Real World Example

The process of thinking and reasoning requires an understanding of more than just the immediate facts. Thought requires one to have and understand an explanation of the situation being presented. This seems quite cumbersome given the number of problems a typical individual faces each and every day. One way to explain the apparent lack of a complete understanding of each and every event an individual faces is to review a simple example.

According to the general theory of human thought, an individual must fully understand the problem to be solved and have a complete explanation of the surrounding situation prior to attempting a solution. As a simple example, think about the requirement of sustenance, eating, and add to that requirement the constraint that you are not going to prepare the food yourself. According to the general theory, one is required to fully understand the problem: the requirement of nourishing food or food supplements used to sustain bodily functions. One must also understand the surrounding environment: the food chain, food preparation, the probable location of food or food supplements, how to obtain and ingest them, etc.

All this, as you might expect, does not typically occur when one wishes to dine out. Instead, you usually just go to a restaurant, order an entree, wait for it to be served, and then eat it. So, what’s wrong with the theory? First, it is not inherently incorrect, just incomplete. What needs to be added are provisions for what the A.I. community calls ‘scripts’.

Scripts are a common understanding of expected events on the part of all parties involved. Suppose for a moment you wanted someone to provide food for you. Typically you go to a restaurant and order something appealing. The server realizes that what you ordered, you want delivered to your table with the appropriate silverware and napkin, such that you can consume it. After it is delivered, you are expected to consume it and show your appreciation by paying for it and leaving a tip for the server. All these events are mutually understood by both parties. To further the example, think about the events if, instead of ordering food, you were to order, let’s say, your shirt dry-cleaned.

First, the restaurant is probably not set up to do dry-cleaning, which would present a certain problematic situation to be addressed. More importantly, though, the server, the cook and the owner of the restaurant are not there to dry-clean shirts, so your request would cause a certain amount of confusion, to say the least.

This is in part why computers have not yet reached the stage of ‘general problem solvers’. Their experience base is extremely limited in comparison to humans’. That is why one of the current hot-beds of A.I. activity is in KBS and CBR, or domain-dependent problems. This approach limits the scope of the base knowledge required of the system and further refines the problem statement prior to system activity. In other words, the system has a basic idea of the nature of the problem a user will likely pose: you won’t be expecting to have your shirt dry-cleaned in a restaurant.

How Does This Apply To CBR

In a similar fashion, CBR techniques are considered domain dependent, meaning that any CBR system created is limited in scope to some known problem domain, and its attendant knowledge base is optimized for this type of problem. Similarly, the new problems presented to the CBR system for analysis and matching will be limited to an appropriate subset of all problems. This is advantageous, since a CBR system must contain enough cases within its knowledge base to provide domain knowledge diverse enough to solve new problems.

So what really is CBR? As stated, it is matching similar problems and their solutions to new problems, and/or reasoning about solutions to new problems based on an understanding of previous solution methods and techniques. This sounds fairly straightforward, and possibly it is. However, what happens when the domain knowledge of existing problems contains no perfect match for a new problem? What is the system supposed to do, and how is it to achieve this goal?

Let’s review what a human problem solver would do in a similar situation. Back to the restaurant example: suppose a customer ordered a grilled swordfish entree with a vegetable side dish. Further suppose the server realizes that this particular restaurant does not serve swordfish in its normal business routine. What to do? Typically, prior to informing the customer of the dilemma, the server mentally runs through all the possible solutions to the situation.

First, he could inform the customer that this particular restaurant does not serve this dish, but that the restaurant just down the street in fact does. This course of action fully satisfies the customer, but the server would lose his tip and the restaurant that employs him would lose some business. Not a good choice.

Second, he could take the order and then go and discuss the situation with the cook. The cook could obviously prepare the swordfish dish, since this restaurant does in fact have a number of seafood offerings on the current menu. Possibly the server or the cook could ‘go out the back door’ to the restaurant down the street, buy a swordfish entree, and serve it as if they had prepared it. Taking the order satisfies the customer, but puts an added burden on the chef and possibly introduces a time delay while the raw swordfish is purchased. The second option, going next door, would again satisfy the customer, but would not provide any margin of profit for the restaurant and stakes the reputation of this restaurant solely on someone else. Again, not an acceptable choice.

Another alternative would be to simply tell the customer that he really didn’t want swordfish and that what he would be happiest with is a nice, thick steak; silly customer, ordering something he really didn’t want anyway. As you can see, this would likely have an adverse effect on business, not only from this customer, but from all the other people he told about the bizarre behavior of the servers they hire. Not a good choice.

A final alternative would be for the server to simply inform the customer that this restaurant does not serve swordfish, but that mahi-mahi, which is on the menu, is a very good substitute for swordfish. If the server is knowledgeable in restaurant policy, server etiquette and food tastes (in this case, that mahi-mahi is similar to swordfish), he can work with the given situation and derive an acceptable solution.

In the final case, the server reviewed the current problem requirements (the customer wants a swordfish entree), then the immediate domain knowledge (this restaurant does not serve swordfish), and then the pertinent domain knowledge (mahi-mahi tastes similar to swordfish). Finally, the server concludes a solution: to suggest to the patron that swordfish is not available but that an acceptable alternative is mahi-mahi, of which the chef does an incredible job in preparing (it is always good to ‘sell’ an idea).

Again, how does this example relate to CBR techniques? The example highlighted the fact that in most cases there will not be an exact match for the current problem to use as a solution model, and it showed what to do in such cases. First the server, using specific and general domain knowledge, reviewed all situations in which similar circumstances had occurred; possibly another customer at some point in time had ordered another dish that this restaurant did not serve. He then determined the optimal solution for all parties involved. In this case the customer chose an acceptable entree and the restaurant kept the patronage of this individual. But since no exact match of current problem and previous solution existed, what happened? In this simple example the server remembered a similar situation in which another customer requested a dish that the restaurant did not offer. In that situation the server had also suggested a close substitute for the requested item. Here the server reused that solution strategy, substituting swordfish for the requirement and mahi-mahi for the solution. In essence, the server used a partial match (the previous request and substitution) and domain-specific knowledge (that mahi-mahi is close, in terms of taste, to swordfish) to solve the current problem.

At some point in the past, this server was new to the restaurant business. In that case he most likely asked a fellow employee, hopefully one with more experience, for advice on how to deal with the situation. The more experienced employee suggested the substitution strategy and possible items to substitute for the unavailable, requested item. Taking a further step back in time, at some point when no prior experience specific to restaurants existed, the server may have used other experiences to formulate the substitution strategy, or developed this strategy by trial and error or by reasoning from first principles.

CBR is likewise a partial matching strategy for problem resolution: break the problem down hierarchically and progress from the most minute detail to more general solution strategies until a match can be formulated, then substitute domain-specific knowledge for the non-matching portions of the current problem and previous solution technique.

What’s The Difference Between CBR And KBS

Knowledge based systems use rules to guide their decision processes. Typically a knowledge engineer works with a domain ‘expert’ to derive the heuristics that the expert uses when solving a problem. Case based reasoning, in contrast, ‘looks’ for similarities between the current needs and previous examples of similar problems and their attendant solutions. Rule-based programming is currently very popular and well developed. Most ‘experts’ will expound on the rules they use to solve either everyday or very difficult and detailed problems. However, research into human problem solving has determined that in almost all cases the ‘rules’ used by experts have been derived, in part, from cause-and-effect relationships observed in previous experiences, that is, from cases.

In short, the most significant difference between CBR and KBS problem solving techniques is that in the KBS paradigm the rules are more concrete and tangible, whereas in CBR the solution methodology is a process of comparison and evaluation of current needs against existing situations. The ‘rules’ approach, however, is solidly grounded in cause-and-effect derivation of the reasons for doing specific tasks in a given situation.

Memory Organization Packages, MOPs

Knowledge within a CBR system is very dynamic in nature. Case-based reasoners use dynamic memory to model a changing situation as the problem solution develops. The problem posed to a CBR system is typically static in nature; however, as a solution develops by matching previous, similar cases to the new problem definition, the system must determine if the current match is ‘good enough’ to be used to derive a complete solution to the problem at hand. As the system reasons about the state of the partial match, it has to modify its solution strategy to determine how the current case fits into the required solution. Knowledge is represented in discrete and distinctive packets called Memory Organization Packages, or MOPs. Each of these follows the basic format and knowledge representation developed in the scripts morphology. In the scripts morphology, however, the knowledge and its representation were static; that is, they could not be reformulated and re-represented to better suit the existing problem, and no mechanism was provided for inter-linking scripts with other packets of information. MOPs, on the other hand, provide the necessary mechanisms for linking information and for dynamic representation.

MOPs are used to represent knowledge about events. An event can be a representation of a full event or, more typically, some significant portion of a complete case. The events within a MOP are useful in partial matching, in that the required information from a specific case can be used to further a solution state without utilizing the entire case, which may not be a perfect match.

The information contained within a MOP is organized and classified by norms. Norms contain the basic features of a MOP and are used to structure it. A MOP containing the event ‘going on a trip’ would have norms for planning and what is required to plan, getting tickets and where to go to purchase them, etc. Norms are a listing or classification of the expected information or actions associated with the event, stored within the MOP.

Finally, a MOP can have a specialization, which makes the given information more specific. To further the above example: suppose you were going on a trip. All of your previous experiences of ‘going on a trip’ would be stored within a MOP, or dynamically linked to this particular MOP. The norms in this event would be:

  1. Planning the trip, destination, mode of transport, etc.
  2. Ensuring enough money is available for the trip.
  3. Getting tickets, reservations, confirmations, etc.
  4. Obtaining information about things to be done, etc.

At this point there can be specialization within the overall MOP of `going on a trip’. The traveler knows ahead of time what type of trip this is to be; a business trip, with business contacts, meetings, etc., or a vacation with sight-seeing, entertainment, etc. From this point in the development of the solution to the problem of going on a trip, more specific information, regarding what type of trip it will be, is required. This more explicit information is considered a specialization of the MOP `going on a trip’.

To complete the example: if one were going on a vacation, he would need to know points of interest in the area and develop a plan to visit the interesting ones. This plan would require knowledge of location, times the attraction is open, cost of admittance, etc. This information, although different in specifics from that of a business trip (i.e., company location as opposed to attraction location, the time a specific person is available to meet the traveler as opposed to the time the attraction is open, etc.), has the same general structure (time, place, admittance, etc.), and thus can be modeled by the same MOP.
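
As an illustration (ours, not the article's), a MOP can be modelled as a small data structure in which norms are inherited from the general event and refined by a specialization:

  # A MOP sketch: norms hold the expected information for an event; a
  # specialization refines a more general MOP while reusing its structure.
  class MOP:
      def __init__(self, event, norms, parent=None):
          self.event = event        # e.g. "going on a trip"
          self.norms = dict(norms)  # expected information for the event
          self.parent = parent      # the more general MOP this specializes

      def all_norms(self):
          """Norms inherited from the general MOP plus local refinements."""
          inherited = self.parent.all_norms() if self.parent else {}
          return {**inherited, **self.norms}

  trip = MOP("going on a trip", {
      "planning":    "destination, mode of transport",
      "money":       "ensure enough money is available",
      "tickets":     "reservations, confirmations",
      "information": "things to be done at the destination",
  })

  vacation = MOP("vacation trip", {  # specialization of the general MOP
      "information": "attractions: location, opening times, admission cost",
  }, parent=trip)

  print(vacation.all_norms()["information"])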

A Simple Problem

Problem Statement

  1. You want to get away with your wife and child for a while
  2. You have saved $800.00
  3. You have five days off from work

Given Information

Dest.      Travel    Travel  Lodging  Attraction   Business
            Cost      Time    Cost       Cost      Contacts
           (for 3)   (days)  (for 3)    (for 3)

Tampa     air - $520  0.5     $55        $60         none
Florida   bus - $350  1.0     $80
          car - $220  1.5

St. Paul  air - $480  0.5     $45        $30         none
Minnesota bus - $300  1.0     $75
          car - $160  1.0

Columbus  air - $179  0.5     $40        $18         Bill
Ohio      bus - $190  1.0     $55
          car - $75   0.5

Dallas    air - $470  0.5     $75        none        Tom
Texas     car - $210  1.5     $110

New York  air - $850  1.0     $110       $33         Jim
New York  bus - $600  2.0     $150                   Kevin
          car - $330  2.5

Experience Information

Destination         Comments

Tampa      Had a great time at Disney and Sea World.
Florida    Good relaxation.

St. Paul   Mall of America was neat, but trip 
Minnesota  was rushed, hectic

Columbus   Sea World was wonderful, not much else to do - 
Ohio       good time.  Had to meet Bill at the airport - 
           difficult meeting - expensive

Dallas     Crowded and busy, hot and humid, lots to see
Texas      Tom is easy going and accomplished alot, 
           would like to go back

New York   Hot, humid, busy - Expensive
New York   Most interesting City I've ever been in

CLIPS could be used to represent and store the information in the two tables above. The most efficient way would be to create templates using the CLIPS deftemplate command and store the data (“cases”) from the tables as facts based on these templates.

CLIPS rules that search the above “cases” could then be developed, utilizing CLIPS’s ability to perform pattern matching. One would need to decide how to implement partial matches within these rules (e.g., which items to be matched are most important if a full match can’t be achieved). Such partial matches are generally specific to each problem and thus must be coded each time.
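
The original text points to CLIPS deftemplates and pattern-matching rules; as a language-neutral sketch of the same idea, the following Python fragment stores the table rows as records and ranks destinations by a weighted partial match against the problem statement (family of three, $800 budget, five days off). The choice of the cheapest travel option, the first lodging figure, and the scoring weights are all assumptions for illustration.

  from dataclasses import dataclass

  @dataclass
  class Case:
      destination: str
      travel_cost: int    # cheapest option, for 3, dollars
      travel_days: float  # one-way travel time of that option
      lodging: int        # per night, for 3
      attractions: int    # for 3
      comments: str

  cases = [
      Case("Tampa, Florida",      220, 1.5,  55, 60, "great time at Disney and Sea World"),
      Case("St. Paul, Minnesota", 160, 1.0,  45, 30, "Mall of America neat, trip rushed"),
      Case("Columbus, Ohio",       75, 0.5,  40, 18, "Sea World wonderful, good time"),
      Case("Dallas, Texas",       210, 1.5,  75,  0, "crowded and busy, would go back"),
      Case("New York, New York",  330, 2.5, 110, 33, "expensive, most interesting city"),
  ]

  BUDGET, DAYS_OFF = 800, 5

  def score(c):
      nights = DAYS_OFF - 2 * c.travel_days  # days actually at the destination
      total = c.travel_cost + nights * c.lodging + c.attractions
      if nights <= 0 or total > BUDGET:
          return float("-inf")               # hard constraints fail
      return (BUDGET - total) * 0.5 + nights * 40  # assumed weights

  best = max(cases, key=score)
  print(best.destination, "-", best.comments)  # Columbus, Ohio wins here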

 

Source :   https://engineering.purdue.edu/~engelb/abe565/cbr.htm

Article on AI in Maintenance – Artificial Intelligence Maintenance Management.

 

Over the past three decades, many attempts have been made to use AI techniques in maintenance modelling and management.

This paper is about the use of AI to replace human intelligence with machine intelligence in solving complex or large-scale problems, or indeed systems that change regularly. The ultimate objective is to achieve more effective maintenance management and, in some cases, to make achieving this goal a viable option. The AI techniques used are numerous, ranging from classic expert systems that utilize rule-based reasoning to the more cumbersome optimization techniques used in Genetic Algorithms.

 

 

Not all AI techniques are suitable to address issues in this area, and indeed not all techniques have been applied. Over the past decade there has been a shift towards developing hybrid systems that use two or more AI techniques in order to produce more powerful systems. The application areas extend widely to all industries, such as nuclear, chemical, petrochemical, building, etc.

This paper presents an overview of the applications of AI techniques in maintenance over the past decades identifying specific applications and extent of using the techniques.

 

 

All aspects of the maintenance decision-making process are covered in the study except the area of fault diagnosis; AI in fault diagnosis is a broad area with many applications published. The paper also presents recent trends in developing AI systems in maintenance that may help project future developments based on an understanding of the needs of the subject area.

 

Maintenance Management

 

 

Maintenance management is undertaken at three levels:

1) Strategic level, where issues such as business perspective and technical and commercial considerations are addressed.

2) Tactical level, at which aspects of maintenance planning and scheduling are addressed, usually involving advanced mathematical modelling.

3) Operational level, where aspects of maintenance work execution are decided. Despite its critical importance, this level has not to date received sufficient academic interest.

All three levels are part of the maintenance planning cycle.


AI Techniques:

 

There are many AI techniques that have been developed since the middle of the past century with the fast development of computers and computer software. The main AI techniques used in maintenance are:

 

Case-Based Reasoning (CBR): CBR utilises past experience to solve new problems. It uses case index schemes, similarity functions and adaptation, and provides intelligence through updating the case base. CBR provides learning capabilities to AI systems, and when combined with other AI techniques such as KBS it produces powerful tools.

 

Data Mining (DM): DM detects patterns in databases through the use of machine learning techniques. It helps develop predictive models to support decision making.

 

Genetic Algorithms (GAs): GA is an optimisation technique based on the principles of genetics and natural selection, i.e. solutions can be evolved through mutation while weaker solutions become extinct. GAs have superior and versatile features compared with classic optimisation and search techniques.

 

Knowledge Based Systems (KBS): KBSs use domain-specific rules of thumb, or production rules, to identify a potential outcome or suitable course of action. KBSs, which are closely linked to Expert Systems, witnessed the revival of AI in the 1980s. Initial interest in applying AI techniques in maintenance started with KBS.

 

Neural Networks (NN): NNs are based on the idea of emulating the human brain and typically use the back-propagation algorithm. They are often used in modelling and statistical analysis, classification and optimisation.

 

Fuzzy Logic (FL): In 1965 Zadeh introduced the concept of fuzzy sets to allow dealing with vague and imprecise concepts. Later he introduced FL, which allows the representation of information of an uncertain nature.
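
As a small illustration of the fuzzy-set idea (ours, not the paper's), membership is a degree in [0, 1] rather than a crisp yes/no; the "worn component" membership function below is invented:

  # Fuzzy membership: "the component is worn" as a degree in [0, 1].
  def worn(hours):
      """Invented membership function for 'worn' given operating hours."""
      if hours <= 2000:
          return 0.0
      if hours >= 8000:
          return 1.0
      return (hours - 2000) / 6000  # linear ramp between the crisp bounds

  for h in (1000, 4000, 7000, 9000):
      print(h, round(worn(h), 2))  # 0.0, 0.33, 0.83, 1.0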

 

 Case Based Reasoning (CBR)

 

CBR is moderately used in maintenance management applications. It is particularly useful when combined with other AI techniques, providing an extremely useful “learning” capability that utilizes past experience in supporting decision making.

Ruiz et al [2013] proposes a conceptual model to support multidisciplinary expert collaboration in decision making. The paper aims to achieve maintenance problem solving with a variant of the CBR mechanism, a process of solving new problems based on the solutions of similar past problems, while integrating the experts’ beliefs. Cheng et al (2008) presents an Intelligent Reliability Centred Maintenance Analysis (IRCMA) in which CBR is used to utilise historical records of RCM analysis on similar items to analyse a new item.

Motawa and Almarshad [2013] developed a knowledge-based Building Information Model (BIM) system for building maintenance which utilises the functions of information modelling techniques and knowledge systems to facilitate full retrieval of information and knowledge for maintenance work. In addition to the BIM module for capturing relevant information, a CBR module is used to capture knowledge. The system can help maintenance teams learn from previous experience and trace the full history of a building element, and of all elements affected by previous maintenance operations. Cai et al (2011) presents a framework for a command forces system for equipment maintenance support, based on CBR. The paper provides an analysis of the CBR-based method of representation and storage of equipment maintenance cases.

 

Data Mining

 

DM is currently under-used in maintenance management, though DM techniques are likely to be used, with explicit reference to them, in the future. Lee et al [2015] argue that aggregating all these data by different methods leads to “Big Data”. Data collection starts with data acquisition from the monitored assets using appropriate sensor installations; historical data can also be harvested for further data mining. The transforming agent consists of several components: an integrated platform, predictive analytics, and visualization tools. This process leads to acquiring health information, such as current condition, remaining useful life and failure mode, that can be effectively conveyed in terms of fault maps, risk charts, and health degradation curves.

 

 

 

Fuzzy Logic (FL)

 

There is a steady interest in applying FL in modelling maintenance problems due to its convenience in dealing with uncertainty. Alamaniotis et al [1] introduces a methodology called regression to fuzziness that estimates the remaining useful life (RUL) of power plant components. Mozami et al (2011) uses FL to prioritise maintenance activities based on pavement condition index, traffic volume, road width, and rehabilitation and maintenance cost. Sasmal and Ramanjaneyulu (2008) develops a systematic procedure and formulations for condition evaluation of existing bridges using the Analytic Hierarchy Process (AHP).

 

Risk-based maintenance/inspection is becoming a popular approach to ensure safe and economically viable operation. Sa’idi et al [2014] argues that risk-based maintenance (RBM) is a proper risk assessment methodology that minimizes the risk resulting from asset failures, but that a main engineering problem in risk modelling is uncertainty due to lack of information; the paper proposes a model for the risk of process operations in oil and gas refineries. Petrovic et al [2014] develops a risk assessment model of mining equipment failure, and Singh and Markeset [2009] presents a methodology for a risk-based inspection programme for pipes. Khan et al [2004] presents a structured risk-based inspection and maintenance methodology that uses FL in oil and gas operations.


Genetic Algorithms (GAs)

 

GA is an optimisation technique that was developed in the 1970s, based on the principles of genetics and natural selection, i.e. solutions can be evolved through mutation while weaker solutions become extinct. GAs have superior and versatile features compared with classic optimisation and search techniques. For example, a GA can work with a large number of continuous or discrete variables and optimise extremely complex cost functions, though it has the drawback of requiring high computing power, which may require the use of parallel processing. Because of these features, GAs are appealing in solving many complex maintenance management problems, and GA is the most popular AI technique applied in maintenance. There are many publications on applying GAs to a wide range of maintenance problems and applications, though mostly of an operational rather than strategic nature. The recent developments and applications of GAs in maintenance are too numerous to cover fully in this overview paper.
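
As a toy illustration of these GA mechanics (selection, mutation, extinction of weak solutions), the mutation-only sketch below evolves preventive-maintenance weeks for five machines over a ten-week horizon; the cost function and all parameters are invented, and crossover is omitted for brevity:

  import random

  MACHINES, HORIZON = 5, 10
  POP, GENERATIONS, MUT_RATE = 30, 100, 0.1

  def cost(schedule):
      risk = sum(schedule)                          # later PM -> more risk
      clashes = len(schedule) - len(set(schedule))  # same-week collisions
      return risk + 20 * clashes

  def mutate(schedule):
      return [random.randrange(HORIZON) if random.random() < MUT_RATE else w
              for w in schedule]

  pop = [[random.randrange(HORIZON) for _ in range(MACHINES)] for _ in range(POP)]
  for _ in range(GENERATIONS):
      pop.sort(key=cost)
      survivors = pop[: POP // 2]  # weaker half of the solutions goes extinct
      pop = survivors + [mutate(random.choice(survivors))
                         for _ in range(POP - len(survivors))]

  best = min(pop, key=cost)
  print(best, cost(best))  # typically five distinct early weeks, low cost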

 

 

Most popular maintenance problem situations and industrial applications of GAs

 

i. Maintenance scheduling problems represent large-scale and complex applications due to the large number of variables involved and their interdependence. Planning preventive maintenance is indeed one of the most complex activities due to the nature of the problem and the typical lack of data.

ii. Combined use of GAs with Monte Carlo simulation. For example, Yin et al [2015] develops an integrated model of statistical process control and maintenance decision. Chootinan et al [2006] introduces a pavement maintenance programming methodology that can account for uncertainty in pavement deterioration using a simulation-based GA approach.

iii. Optimising maintenance of parallel/series systems; there are a few GA attempts in this area. Gao et al [2015] proposes an optimal dynamic interval (i.e. changing from one interval to the next) preventive maintenance schedule for series systems. Nourelfath et al [2012] studies the joint redundancy and imperfect PM optimisation for a series-parallel multistate degraded system.

iv. The application areas of GAs in maintenance are numerous. Examples include maintenance of underground pipelines [Tee et al, 2014], bridge maintenance [Ok et al, 2013], nuclear power systems [Jiejuan et al, 2004] and PM scheduling in a process plant [Nguyen and Bagajewicz, 2008].


Knowledge Based Systems (KBS)

 

Over the years, interest in applying KBSs in maintenance has remained moderate. Applications have covered a wide range of areas. Kobbacy et al [1995] presents “IMOS”, an Intelligent Maintenance Optimisation System that uses KBS.

Ruiz et al [2014] suggests an original experience feedback process dedicated to maintenance, allowing practitioners to capitalize on past activities by formalizing the domain knowledge and experiences using a visual knowledge representation formalism with a logical foundation; extracting new knowledge using association rule mining algorithms; and interpreting and evaluating this new knowledge using the reasoning operations of Conceptual Graphs. The suggested method is illustrated on a case study, based on real data, dealing with the maintenance of overhead cranes.

 

 

Neural Networks (NNs)

NNs are based on the idea of emulating the human brain. They are often used in modelling and statistical analysis, and in classification and optimisation. There has been a steep increase in the number of papers on applications of NNs in maintenance in the past two years. Sbarufatti [2015] studies the application of sequential Monte-Carlo sampling to estimate the probabilistic residual life of a structural component subjected to fatigue crack propagation; real-time estimation of crack length is provided through artificial NNs trained with finite-element-simulated strain patterns. Wang and Gao [2012] carries out research on Risk and Condition Based Maintenance (RCBM) task optimization technology, building a Risk and Condition Based Indicator Decision-making System (RCBIDS).


Hybrid Systems

Hybrid systems employ two or more AI techniques. In maintenance applications, the AI technique most frequently used in a hybrid system is FL, combined with NNs and GAs. Trappey et al [2015] develops an intelligent engineering asset management system for power transformer maintenance. The system performs real-time monitoring of key parameters and uses data mining and fault prediction models to detect transformers’ potential failure under various operating conditions; principal component analysis (PCA) and a back-propagation artificial neural network (BP-ANN) are adopted for the prediction model. Ben Ali et al [2015] presents a method for accurate bearing remaining useful life prediction based on the Weibull distribution and ANNs.

Kobbacy et al (2000) presents the next generation of the IMOS system discussed under KBS above. The hybrid IMOS, HIMOS, uses a hybrid approach of KBS and Case Based Reasoning, which provides a much-needed learning capability to the system.


Trends

Table (1) shows the number of papers published on the use of AI in maintenance management during the period 2005-09, based on the study of Kobbacy and Vadera [18], and the findings of the current study for the period 2010-2015. Even though the period of the current study is marginally longer, the total number of publications in the area has more than doubled, indicating much stronger interest in AI applications in maintenance management. The technique with the largest increase is GAs, which increased by a staggering factor of five, followed by FL and NNs. Although the percentage of hybrid system publications remains around the same level of 17%, their number has more than doubled. This is a healthy indication of the development of more powerful approaches to solve maintenance management issues.


Table (1) Number of published papers during 2005-09[18] and 2010-June 2015

 

 

Application Areas

FL applications cover a range from more policy-type applications, e.g. assessment of maintenance approaches, to operational-type applications, e.g. condition evaluation of bridges. KBS shows a similar spread of applications, from the system suggesting maintenance policy of Batanov et al (2003) to the plant maintenance management system of Gabbar et al (2003). CBR is also applied to both policy- and operational-type applications. In contrast, GAs and NNs focus more on operational-level applications.


CONCLUSIONS

 

This study presents an overview of AI applications in planning and modelling in maintenance. The study revealed a strong increase in applications of AI in maintenance management in the past five years. GA is the most popular AI technique applied in maintenance due to its nature, which offers powerful optimisation tools that can deal with complex maintenance planning problems. Both NNs and FL attract significant interest, with fewer studies using CBR, DM and KBS. The percentage of hybrid system publications remains around the same level of 17%, but this reflects a strong increase in their number, a healthy indication of the development of more powerful approaches to solve maintenance management issues. As expected, FL, KBS and CBR applications cover both policy-type and operational-type maintenance decision problems, while GAs and NNs are mainly focused on operational problems such as PM planning and scheduling. The study of publication trends showed a large increase in the rate of GA publications, with both FL and hybrid system publications also increasing significantly. CBR sustained its moderate number of publications, while interest has started in NNs. Only one publication was found using KBS and DM.

 

Source : http://omaintec.com/app/webroot/js/ckfinder/userfiles/files/Omaintec2015/speeches/03-Khairy-A-H-Kobbacy.pdf


Article on AI in Design – Toyota to Finance $50 Million ‘Intelligent’ Car Project

The Toyota Motor Corporation announced on Friday an ambitious $50 million robotics and artificial intelligence research effort, in collaboration with Stanford University and the Massachusetts Institute of Technology, to develop “intelligent” rather than self-driving cars.

The distinction is a significant one, according to Gill Pratt, a prominent American roboticist, who has left his position at the Defense Advanced Research Projects Agency of the Pentagon to direct the new effort.

Rather than compete with companies like Google and Tesla, which are developing cars that drive without human intervention, Toyota will focus its effort on using advances in A.I. technologies to make humans better drivers.

Dr. Pratt described the two approaches as “parallel” and “serial” autonomy. In layman’s terms, parallel means the machine watches what you do, while serial means it replaces you.

Toyota, the world’s largest carmaker, envisions cars of the future that will act as “guardian angels,” watching the driving behavior of humans and intervening to correct mistakes or avoid collisions when needed.

Dr. Pratt said Toyota’s goal was to keep the “human in the loop” in the car of the future and to ensure that driving remained fun. “A worry we have is that the autonomy not take away the fun in driving,” he said. “If the autonomy can avoid a wreck, it can also make it more fun to drive.”

Driver assistance technologies — like pedestrian and bicyclist detection and avoidance systems, lane-departure warning and “lane keeping” systems, and software programs that alert drivers that they are becoming drowsy — have already become standard safety options from carmakers.

The Toyota program will focus on developing more artificial-intelligence-based monitoring systems. For example, in the future, an A.I. system might do more than warn drivers that they are leaving their lane; it might actively correct all kinds of driver errors. Another possibility might be to use A.I. technologies to permit aging drivers to continue to drive by offering driver assistance in areas like vision and reaction time.

“In parallel autonomy, there is a guardian angel or driver’s education teacher,” Dr. Pratt said. “It usually does nothing, unless you are about to do something dumb.”

Before joining Toyota, Dr. Pratt served as a Darpa program manager. Beginning in 2012, he oversaw a “Grand Challenge” contest there to design semiautonomous mobile robots capable of performing useful tasks in disaster areas where humans would be at risk, such as the Fukushima nuclear power plant disaster.

The contest was won earlier this year by a South Korean-designed robot that performed a series of tasks like driving, walking, opening doors, using power tools and climbing stairs. Twenty-three teams participated in the contest. However, it provided a striking contrast to science fiction movie portrayals of robots as superhuman machines that operate with agility, dexterity and speed.

During the contest, the robots exhibited little autonomy, moved glacially and often fell while doing tasks that are routinely performed by human toddlers.

Toyota will finance researchers at the Stanford Artificial Intelligence Laboratory and the M.I.T. Computer Science and Artificial Intelligence Laboratory to undertake a five-year project to make advances in both automotive transportation and indoor mobile robotics that might have applications in new markets like elder care.

Around the globe, human populations are aging in the more advanced countries, Dr. Pratt noted. This is raising what economists call the “dependency ratio,” a measure of the number of those in the work force compared to both young and old dependents. In the United States, the dependency ratio is expected to increase 52 percent in the next 15 years, while in Japan it is expected to increase 100 percent.

The development of intelligent transportation technologies and elder-care-related robots holds the promise of giving the aging more independence, he said.

“I had to take the car keys from my dad,” he said, arguing that losing independence is “also an awful way for a parent to live. Most retired people want independence in a human sense. Let’s use robotics to let people live in a more human way.”

By financing the Stanford and M.I.T. artificial intelligence laboratories, Toyota is tapping into talent and experience in technologies that have recently made significant progress in perception, dexterity and autonomous motion. The two laboratories were founded by the artificial intelligence pioneers John McCarthy and Marvin Minsky in the 1960s and have produced basic innovations in artificial intelligence and robotics, as well as educated multiple generations of researchers.

Currently, Fei-Fei Li, a computer scientist who is a specialist in machine vision, leads the Stanford laboratory, and the M.I.T. laboratory is led by Daniela Rus, a roboticist who has worked in new areas such as distributed and collaborative robotics.

“I see why Toyota wants to do this,” said Dr. Li. “It is the biggest carmaker in the world, and it wants to influence the next generation.”

The research will concentrate on keeping “the human in the loop,” a break from the direction of much A.I. research, which has focused on building systems and machines that replace humans.

“We see this as basic computer science, A.I. and robotics that will make a difference in transportation,” said Dr. Rus.

Dr. Pratt acknowledged that as a teenager, he had a passion for cars. “I had six cars, and Toyotas were the cars I loved to fix myself,” he said.

Source : http://www.nytimes.com/2015/09/05/science/toyota-artificial-intelligence-car-stanford-mit.html?smprod=nytcore-iphone&smid=nytcore-iphone-share&_r=0

Quick History of Machine Vision

Machine Vision is a branch of computer science that has really grown over the last 20 years to become an important feature of manufacturing. Today machine vision systems provide greater flexibility and further automation options to manufacturers, helping to find defects, sort products and complete a number of tasks faster and more efficiently than humans alone ever could. But how did this important and growing technology start? Here is a quick timeline of the key events leading to machine vision as we know it today:

1950’s – Two-dimensional imaging for statistical pattern recognition is developed: Gibson introduces optical flow, and based on his theory, mathematical models for optical flow computation on a pixel-by-pixel basis are developed.

1960’s – Roberts begins studying 3D machine vision: Larry Roberts wrote his PhD thesis at MIT on the possibility of extracting 3D geometric information from 2D views in 1960. This led to much research in MIT’s artificial intelligence lab and other research institutions looking at computer vision in the context of blocks and simple objects.

1970’s – MIT’s Artificial Intelligence Lab opens a “Machine Vision” course, and researchers begin tackling “real world” objects and “low-level” vision tasks (e.g. edge detection and segmentation): In 1978 a breakthrough was made by David Marr (at the MIT AI lab), who created a bottom-up approach to scene understanding through computer vision. This approach starts with a 2D sketch which is built upon by the computer to get a final 3D image.

1980’s – Machine vision starts to take off in the world of research, with new theories and concepts emerging: optical character recognition (OCR) systems were initially used in various industrial applications to read and verify letters, symbols, and numbers. Smart cameras were developed in the late 80’s, leading to more widespread use and more applications.

1990’s – Machine vision starts becoming more common in manufacturing environments, leading to the creation of a machine vision industry: over 100 companies begin selling machine vision systems. LED lights for the machine vision industry are developed, and advances are made in sensor function and control architecture, further advancing the abilities of machine vision systems. Costs of machine vision systems begin dropping.

MACHINE VISION TODAY

Today machine vision systems continue to move forward. 3D vision systems that scan products running at high speeds are becoming affordable, and systems that do everything from thermal imaging to slope measurement can be readily found. Machine vision continues to be a growing market, with many new advances driven by the wide array of possible applications and the other market drivers shown in Figure #1, from an article by Frost & Sullivan.

[Figure #1: Drivers of Modern Machine Vision Advancement]

Many challenges still exist in the development of machine vision systems. The commonly accepted “bottom-up” framework developed by Marr is being challenged, as it has limitations in speed, accuracy, and resolution. Many modern machine vision researchers advocate a more “top-down” and heterogeneous approach, due to the difficulties Marr’s framework exhibits. A new theory, called “Purposive Vision”, is exploring the idea that you do not need complete 3D object models in order to achieve many machine vision goals. Purposive vision calls for algorithms that are goal-driven and could be qualitative in nature.

Another area where machine vision is advancing, in conjunction with EEG sensors, is gesture-based interfaces. Gesture-based interfaces allow operators to control computers and machinery with thoughts and gestures rather than with keyboards and other input devices.

 

Source : https://www.epicsysinc.com/blog/machine-vision-history

 

Comments:

Important points:

  1. Machine vision provides greater flexibility and helps find defects.
  2. Machine vision systems continue to move forward; 3D vision systems that scan products running at high speeds are becoming affordable.
  3. A new theory, called “Purposive Vision”, is exploring the idea that you do not need complete 3D object models in order to achieve many machine vision goals.

MYCIN: A Quick Case Study

 

No course on Expert systems is complete without a discussion of Mycin. As mentioned above, Mycin was one of the earliest expert systems, and its design has strongly influenced the design of commercial expert systems and expert system shells.

Mycin was an expert system developed at Stanford in the 1970s. Its job was to diagnose and recommend treatment for certain blood infections. To do the diagnosis “properly” involves growing cultures of the infecting organism. Unfortunately this takes around 48 hours, and if doctors waited until this was complete their patient might be dead! So, doctors have to come up with quick guesses about likely problems from the available data, and use these guesses to provide a “covering” treatment where drugs are given which should deal with any possible problem.

Mycin was developed partly in order to explore how human experts make these rough (but important) guesses based on partial information. However, the problem is also a potentially important one in practical terms – there are lots of junior or non-specialised doctors who sometimes have to make such a rough diagnosis, and if there is an expert tool available to help them then this might allow more effective treatment to be given. In fact, Mycin was never actually used in practice. This wasn’t because of any weakness in its performance – in tests it outperformed members of the Stanford medical school. It was as much because of ethical and legal issues related to the use of computers in medicine – if it gives the wrong diagnosis, who do you sue?

Anyway, Mycin represented its knowledge as a set of IF-THEN rules with certainty factors. The following is an English version of one of Mycin’s rules:

IF the infection is primary-bacteremia
AND the site of the culture is one of the sterile sites
AND the suspected portal of entry is the gastrointestinal tract
THEN there is suggestive evidence (0.7) that infection is bacteroid.

The 0.7 is roughly the certainty that the conclusion will be true given the evidence. If the evidence is uncertain the certainties of the bits of evidence will be combined with the certainty of the rule to give the certainty of the conclusion.
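
As an illustration, here is a minimal sketch of this certainty-factor arithmetic in Python (MYCIN itself was a Lisp program, as noted below; the function names and the exact premise certainties are invented for the example): a conclusion inherits the rule's certainty scaled by the weakest piece of evidence, and two rules supporting the same conclusion combine so the total never exceeds 1.0.

# A minimal sketch of MYCIN-style certainty-factor (CF) arithmetic.
# Illustrative only; MYCIN's actual code was Lisp, and these names are invented.

def conclusion_cf(rule_cf, evidence_cfs):
    # A conclusion is only as certain as the weakest premise,
    # scaled by the certainty attached to the rule itself.
    return rule_cf * min(evidence_cfs)

def combine_cfs(cf1, cf2):
    # Two independent rules supporting the same (positive) conclusion
    # reinforce each other without the total ever exceeding 1.0.
    return cf1 + cf2 * (1.0 - cf1)

# The bacteroid rule above (rule CF = 0.7), fired with premise
# certainties 0.9, 0.8 and 1.0:
cf = conclusion_cf(0.7, [0.9, 0.8, 1.0])   # 0.7 * 0.8 = 0.56
# If a second rule also concluded "bacteroid", with CF 0.3:
print(combine_cfs(cf, 0.3))                # 0.56 + 0.3 * 0.44 = 0.692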

Mycin was written in Lisp, and its rules are formally represented as Lisp expressions. The action part of a rule could just be a conclusion about the problem being solved, or it could be an arbitrary Lisp expression. This allowed great flexibility, but removed some of the modularity and clarity of rule-based systems, so the facility had to be used with care.

Anyway, Mycin is a (primarily) goal-directed system, using the basic backward chaining reasoning strategy that we described above. However, Mycin used various heuristics to control the search for a solution (or proof of some hypothesis). These were needed both to make the reasoning efficient and to prevent the user being asked too many unnecessary questions.

One strategy is to first ask the user a number of more or less preset questions that are always required and which allow the system to rule out totally unlikely diagnoses. Once these questions have been asked the system can then focus on particular, more specific possible blood disorders, and go into full backward chaining mode to try and prove each one. This rules out a lot of unnecessary search, and also follows the pattern of human patient-doctor interviews.

The other strategies relate to the way in which rules are invoked. The first one is simple: given a possible rule to use, Mycin first checks all the premises of the rule to see if any are known to be false. If so, there is not much point using the rule. The other strategies relate more to the certainty factors: Mycin will first look at rules that have more certain conclusions, and will abandon a search once the certainties involved get below 0.2.
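
A toy version of this goal-directed search, with the pruning heuristics just described, might look like the following Python sketch (again an illustration under an assumed rule format and invented names, not MYCIN's actual Lisp): rules whose premises are already known to be false are skipped, rules with more certain conclusions are tried first, and a line of reasoning is abandoned once its certainty drops below 0.2.

# Toy backward chainer with MYCIN-like pruning heuristics.
# The rule format and all names are illustrative assumptions.

THRESHOLD = 0.2   # abandon lines of reasoning whose certainty falls below this

def prove(goal, rules, facts):
    """Return the certainty of `goal`, deriving it from rules if necessary."""
    if goal in facts:                        # already known (user answer or earlier work)
        return facts[goal]
    best = 0.0
    # Heuristic: try rules with the most certain conclusions first.
    for rule in sorted((r for r in rules if r["then"] == goal),
                       key=lambda r: -r["cf"]):
        # Heuristic: skip a rule if any premise is already known to be false.
        if any(facts.get(p) == 0.0 for p in rule["if"]):
            continue
        cf = rule["cf"] * min(prove(p, rules, facts) for p in rule["if"])
        if cf >= THRESHOLD:                  # otherwise abandon this line of reasoning
            best = max(best, cf)             # (real MYCIN combines CFs, as sketched above)
    facts[goal] = best
    return best

rules = [{"if": ["primary-bacteremia", "sterile-site", "gi-entry"],
          "then": "bacteroid", "cf": 0.7}]
facts = {"primary-bacteremia": 1.0, "sterile-site": 0.9, "gi-entry": 0.8}
print(prove("bacteroid", rules, facts))      # 0.7 * min(1.0, 0.9, 0.8) = 0.56

In the real system, unknown premises would prompt a question to the user rather than simply defaulting to false, and multiple rules supporting the same conclusion would have their certainties combined rather than taking the maximum.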

A dialogue with Mycin is somewhat like the mini dialogue we gave in section 5.3, but of course longer and somewhat more complex. There are three main stages to the dialogue. In the first stage, initial data about the case is gathered so the system can come up with a very broad diagnosis. In the second, more directed questions are asked to test specific hypotheses; at the end of this stage a diagnosis is proposed. In the third stage, questions are asked to determine an appropriate treatment, given the diagnosis and facts about the patient. This obviously concludes with a treatment recommendation. At any stage the user can ask why a question was asked or how a conclusion was reached, and when treatment is recommended the user can ask for alternative treatments if the first is not viewed as satisfactory.

Mycin, though pioneering much expert system research, also had a number of problems which were remedied in later, more sophisticated architectures. One of these was that the rules often mixed domain knowledge, problem solving knowledge and “screening conditions” (conditions to avoid asking the user silly or awkward questions – e.g., checking the patient is not a child before asking about alcoholism). A later version called NEOMYCIN attempted to deal with these by having an explicit disease taxonomy (represented as a frame system) to represent facts about different kinds of diseases. The basic problem solving strategy was to go down the disease tree, from general classes of diseases to very specific ones, gathering information to differentiate between two disease subclasses (i.e., if disease1 has subtypes disease2 and disease3, the patient is known to have disease1, and disease2 has symptom1 but disease3 does not, then ask about symptom1).
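
As a rough illustration of that differentiation step, here is a minimal Python sketch (NEOMYCIN's actual frame system was far richer; the taxonomy, symptom sets and ask function below are all hypothetical): walk down the taxonomy and ask about a symptom that one subtype has and its sibling lacks.

# Rough sketch of taxonomy-driven questioning in the NEOMYCIN style.
# The taxonomy, symptom lists and `ask` function are illustrative assumptions.

taxonomy = {"disease1": ["disease2", "disease3"]}        # parent -> subtypes
symptoms = {"disease2": {"symptom1", "symptom2"},
            "disease3": {"symptom2"}}

def differentiate(parent, ask):
    """Refine a diagnosis by asking about symptoms that split the subtypes."""
    candidates = taxonomy.get(parent, [])
    for d in candidates:
        for other in candidates:
            if other == d:
                continue
            # A symptom that d has but its sibling lacks discriminates between them.
            for s in symptoms[d] - symptoms[other]:
                if ask(s):       # e.g. symptom1 separates disease2 from disease3
                    return d
    return parent                # no discriminating answer: stay at the general class

# Usage: differentiate("disease1", ask=lambda s: s == "symptom1") returns "disease2"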

There were many other developments from the MYCIN project. For example, EMYCIN was really the first expert system shell, developed from Mycin. A new expert system called PUFF was developed using EMYCIN in the new domain of pulmonary (lung) function disorders. And a system called GUIDON was developed for training doctors, which would take them through various example cases, checking their conclusions and explaining where they went wrong.

We should make it clear at this point that not all expert systems are Mycin-like. Many use different approaches to both problem solving and knowledge representation. A full course on expert systems would consider the different approaches used, and when each is appropriate. Come to AI4 for more details!

 

 

Source : http://cinuresearch.tripod.com/ai/www-cee-hw-ac-uk/_alison/ai3notes/section2_5_5.html

 

Comments:

1. Mycin was one of the earliest expert systems, used to diagnose and recommend treatment for certain blood infections.

2. Mycin was never actually used in practice. This wasn’t because of any weakness in its performance – in tests it outperformed members of the Stanford medical school. It was as much because of ethical and legal issues related to the use of computers in medicine.
