Design Simplification by Analogical Reasoning

Marton E. Balazs

Design

What is our view of a designed device?

What aspects of a design are we going to consider and why?
What ontologies can be built for each of the aspects selected?
How does the choice of these aspects and ontologies influence what designs can be represented and what reasonings can be performed?

How do we represent designs?

What representations can we use for the different aspects of a design?
How do we represent the connections (dependencies) between the representations of the different aspects?
dependency of function on behavior
dependency of behavior on structure


Design Simplification

What is design simplification?

What distinguishes simplification from other kinds of improvements?
What examples of improvement are not simplifications and why?

Does simplification require/allow special problem solving methods?

What are the/some possible contexts, aspects and measures that define simplification?

Contexts seem to be given by processes such as

  1. describing
  2. understanding
  3. use
  4. manufacturing
  5. repairing

Aspects refer to the aspects of the design:

  1. structure
  2. behavior
  3. function


Measures:

  1. attribute measures (values, shapes, surfaces, relations)
  2. counting (components, nesting)
  3. information (information content, failure probability)
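To make the three measure families concrete, here is a minimal Python sketch over a toy design representation; the representation and all function names are hypothetical illustrations, not part of any described system:

```python
# Toy sketches of the three measure families over a minimal design
# representation (a dict of components and relations). All names and
# the representation itself are hypothetical.
import math

design = {
    "components": ["battery", "switch", "bulb", "wire", "wire"],
    "relations": [("battery", "switch"), ("switch", "bulb"),
                  ("bulb", "battery")],
}

def count_measure(d):
    """Counting measure: number of components."""
    return len(d["components"])

def attribute_measure(d):
    """Attribute measure: here, number of distinct component types."""
    return len(set(d["components"]))

def information_measure(d):
    """Information measure: entropy (in bits) of the component-type
    distribution, a rough proxy for information content."""
    counts = {}
    for c in d["components"]:
        counts[c] = counts.get(c, 0) + 1
    n = len(d["components"])
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

print(count_measure(design))      # 5
print(attribute_measure(design))  # 4
```

The same design scores differently under each family, which is exactly why the point of view (task/aspect/metric) has to accompany any "simpler" judgement.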

What is a "simpler" relation?

For what context, aspect, measure combination does simplification make sense?
Under what conditions can an operational definition of simplification be given, such that the relation can be calculated?
Besides the designs involved, what is important with respect to a simplification relation?

How do we represent "simpler" relations?

How can simplification be done?

How does design simplification propagate?

Design Simplification by Analogy

What is the analogical reasoning model for design simplification?

What are the phases of the analogical reasoning process?

What is the control flow of the analogical reasoning process?

How is the target problem specified?

  1. The target (simplification) problem has to specify a design and a simplification point of view (task/aspect/metric)
  2. It is also possible that constraints on the solution and/or the simplification process need to be specified

What is the content of a source analog?

  1. The source analogs are design simplifications
  2. Each design simplification is composed of a design, a simplification of that design (another design) and the simpler relation connecting the two
  3. Both of the designs are described in terms of their structure, behavior and function
  4. The simpler relation is a description of the process by which the simplification was obtained from the design
  5. The description of the simplification process will contain:
     - the simplification point of view (task/aspect/metric)
     - a description of the conditions under which the simplification was applicable
     - an explanation of why those conditions were needed
     - a description of the simplification process itself
     - a description of the results of the simplification which account for the simpler relation
     - a description of the propagation of the changes produced by the simplification to other aspects of the design
  6. Some of the elements of the above description may be missing from a simplification
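The source-analog record described above could be sketched as a data structure; the field names below are hypothetical, and every element of the process description is optional, since the notes allow parts to be missing:

```python
# A hypothetical sketch of a source analog: a design, its simplification,
# and the simpler relation (described as the simplification process).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Design:
    structure: dict
    behavior: dict
    function: dict

@dataclass
class SimplificationProcess:
    point_of_view: tuple                   # (task, aspect, metric)
    preconditions: Optional[str] = None    # when it was applicable
    explanation: Optional[str] = None      # why those conditions were needed
    process: Optional[str] = None          # the simplification itself
    results: Optional[str] = None          # what accounts for "simpler"
    propagation: Optional[str] = None      # changes to other aspects

@dataclass
class SourceAnalog:
    original: Design                       # the less simple design
    simplified: Design                     # its simplification
    simpler_relation: SimplificationProcess
```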

How is a source analog retrieved?

  1. A source analog is retrieved based on the target problem (design and point of view)
  2. The point of view allows certain simplifications to be selected for matching:
     - the simplest case would be to consider only simplifications from exactly the same point of view (task/aspect/metric)
     - this could be relaxed by considering simplifications from points of view which only differ by their metrics; this requires that there is some way of converting the different metrics to each other (for instance count to information content)
     - another way to relax the constraint on the simplifications considered is to allow aspects that could be mapped to each other (for instance different kinds of processes, such as behavior and function, can be mapped to each other using abstraction)
     - one could even think of relaxing the identical-task requirement in the case of tasks that can be mapped to each other
  3. The source analogs selected based on the simplification point of view are searched by comparing them to the design in the problem specification
  4. The search must be based on aspects of the designs which helped to produce the source (stored) simplifications
  5. These aspects can be attributes, attribute values, components (surface aspects), but more likely relations between those
  6. They can be identified/extracted from the description of the simplification process (e.g. from preconditions, explanations, the simplification process itself, and so on)
  7. The aspects considered most important for producing the source simplifications can be further abstracted and finally used for indexing the simplification
  8. Indexes built this way can then be used for searching for source analogs
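The retrieval step might be sketched as follows (strictest case: exact point-of-view match, then ranking by index overlap); the feature sets, tuples, and function names are illustrative assumptions:

```python
# A minimal sketch of retrieval: filter stored simplifications by point
# of view, then rank by overlap between the target design's features and
# each analog's index. All names and features are hypothetical.

def retrieve(analogs, target_features, point_of_view):
    """analogs: list of (point_of_view, index_features, record) tuples."""
    candidates = [(idx, rec) for pov, idx, rec in analogs
                  if pov == point_of_view]   # strictest case: exact pov match
    scored = [(len(idx & target_features), rec) for idx, rec in candidates]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [rec for score, rec in scored if score > 0]

analogs = [
    (("manufacture", "structure", "count"),
     {"serial-chain", "duplicate-part"}, "simpl-1"),
    (("manufacture", "structure", "count"), {"symmetry"}, "simpl-2"),
    (("use", "behavior", "count"), {"serial-chain"}, "simpl-3"),
]
hits = retrieve(analogs, {"serial-chain"},
                ("manufacture", "structure", "count"))
print(hits)  # ['simpl-1']
```

Relaxing the point of view (metric conversion, aspect mapping, task mapping) would replace the exact `pov == point_of_view` test with progressively looser comparisons.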

How is the retrieved source analog mapped onto the target problem?

  1. The less simple design component of a retrieved source simplification is mapped onto the design component of the target problem.
  2. One possibility for mapping is to use the principle of the structure mapping engine [Falkenhainer]. This would produce mappings that prefer higher-level relations to lower-level relations and attributes.
  3. The mapping process should be extended by adding a mechanism which would guide the mapping using the simplification goal (point of view).
  4. This should somehow weight the preferences of the mapping mechanism towards forming mappings between those attributes and relations which, based on the old simplification process, were important to (or played a role in) the simplification.
  5. Information about which aspects of the old, un-simplified design were important to the source simplification can be extracted from the description of the simplification process (e.g. from preconditions, explanations, the simplification process itself, and so on).
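A toy sketch of such goal-weighted mapping preferences (this is not the structure mapping engine itself; the scoring rule and all weights are purely illustrative):

```python
# Score candidate correspondences so that relations outrank attributes,
# higher-order relations outrank first-order ones, and anything flagged
# as important to the old simplification gets an extra boost.
# All weights are illustrative assumptions.

def score_correspondence(kind, order, important):
    base = {"attribute": 1.0, "relation": 2.0}[kind]
    base *= order          # higher-order relations preferred
    if important:          # role in the old simplification process
        base *= 2.0
    return base

candidates = [
    ("attribute", 1, False),  # e.g. color ~ color
    ("relation", 1, False),   # e.g. connected(a,b) ~ connected(x,y)
    ("relation", 2, True),    # e.g. a second-order causal relation
]
scores = [score_correspondence(*c) for c in candidates]
print(scores)  # [1.0, 2.0, 8.0]
```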

What measure of similarity to use to select the best match?

  1. The measure of similarity needs to rely on the mappings produced: the "better" the mapping is the "more similar" the source design should be to the target design.
  2. Since we are looking at three levels of simplification (structural, behavioral and functional), we need to define the quality of a mapping for each of those three aspects.
  3. More generally, we need to define what we understand by "similar" structures and "similar" processes, and how we measure the degree of similarity in both cases.
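One way such a mapping-based similarity could be computed, assuming per-aspect mapping statistics and illustrative aspect weights:

```python
# A sketch of a mapping-based similarity measure: per-aspect mapping
# quality is the fraction of the source aspect's elements that found a
# correspondent, and the overall score weights the three aspects.
# The weights and the scoring rule are illustrative assumptions.

def aspect_similarity(mapped, total):
    return mapped / total if total else 1.0

def design_similarity(mapping_stats, weights=(0.3, 0.3, 0.4)):
    """mapping_stats: (mapped, total) pairs for structure, behavior,
    function, in that order."""
    return sum(w * aspect_similarity(m, t)
               for w, (m, t) in zip(weights, mapping_stats))

stats = [(4, 5), (2, 4), (3, 3)]   # structure, behavior, function
print(round(design_similarity(stats), 2))  # 0.79
```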

What is (simplification) knowledge to be transferred?

  1. The simplification knowledge transferred will be obtained either from the description of the simplification process in the source simplification or from the computed differences between the two designs involved in the source simplification.
  2. Knowledge that can be transferred from the simplification process description in the source simplification could be elements of the preconditions, of some explanations and/or of the simplification process description.
  3. To transfer knowledge based on the differences from the source simplification, those differences must be computed.
  4. Computing differences must only refer to the simplification point of view currently considered. This may be "tricky" because, for instance, one may discover structural differences that were actually generated by the propagation of, say, a functional simplification. Thus transferring relevant simplification knowledge based only on differences between the designs involved in the source simplification may be non-trivial.
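Restricting the difference computation to one aspect could look like the following sketch, using a toy battery example; the dict-of-fact-sets representation is a hypothetical illustration:

```python
# Compute differences between the two designs of a source
# simplification, restricted to one aspect, so that changes propagated
# from another aspect's simplification are not picked up.

def aspect_difference(original, simplified, aspect):
    removed = original[aspect] - simplified[aspect]
    added = simplified[aspect] - original[aspect]
    return removed, added

original = {"structure": {"battery-1", "battery-2", "battery-3", "holder"},
            "function": {"power(6V)"}}
simplified = {"structure": {"battery-big", "holder"},
              "function": {"power(6V)"}}

removed, added = aspect_difference(original, simplified, "structure")
print(sorted(removed))  # ['battery-1', 'battery-2', 'battery-3']
print(sorted(added))    # ['battery-big']
```

Note the function aspect is unchanged here, so a difference restricted to "structure" really does capture only structural change; the tricky cases are exactly those where that separation fails.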

How is knowledge transferred from the source to the target? (directly or through a shared abstraction)

How does the transferred knowledge enable completing the solution?

How is the solution to the target problem evaluated?

  1. The solution to the target needs to be evaluated from at least two points of view:
     - Does the solution satisfy the original design requirements and constraints of the target design? This is a design evaluation problem and should probably not be of much concern for us.
     - Is the modified design indeed a simplification? There needs to be an operational definition of what it means for a design to be simpler than another design for each of the context, measure, aspect combinations that make sense and that we are handling. Such definitions may be based either on some complexity metrics, or on some (partial) ordering relations between designs.

    These operational definitions must be implemented by corresponding algorithms.
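One illustrative operational definition, sketched as an algorithm for the counting metric: design A is simpler than design B if A's component count is strictly lower while the functional description is unchanged. This is one toy combination, not a general definition:

```python
# A hypothetical operational "simpler" predicate for the counting
# metric: fewer components, same function.

def simpler_by_count(a, b):
    return (a["function"] == b["function"]
            and len(a["components"]) < len(b["components"]))

old = {"components": ["battery"] * 3 + ["holder"],
       "function": {"power(4.5V)"}}
new = {"components": ["battery-big", "holder"],
       "function": {"power(4.5V)"}}

print(simpler_by_count(new, old))  # True
```

Other context/aspect/metric combinations would get their own predicates, or a partial order over designs where no single metric applies.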

What may be learned from the target problem and how? (generalization)

Is the target analog and/or generalization worth storing and if yes how?

What knowledge to transfer from an old simplification to a new problem?

How can simplification as goal guide the phases of analogical reasoning?

Dave's questions

  1. Ashok's IEEE Expert paper on analogy & creativity suggests that producing abstractions of designs (for indexing or to guide transfer) might occur at 1) design storage time, 2) design retrieval time (at reminding time), or 3) problem-solving time (when knowledge transfer is actually done). Given our discussion about goals affecting indexing, how does this idea of eager or late abstraction get modified by the presence of a particular simplification request (i.e., in the late case only)?
  2. What if designs were stored annotated by what goals were under consideration at the time of each decision. e.g., it's symmetrical to improve handling times during assembly. How might that affect goal-based analogical simplification?
  3. The possibility of keeping information with a simplification (or a design) about how useful it has been and in what way, how easy the design has been to simplify, or how easy it has been to match. This accumulated knowledge should become useful and will affect the process.
  4. Altmeyer & Schurman's paper in AID'96: They talk about a whole lot of things -- with an attempt at formality -- including assessing adaptation costs for a retrieved case. That idea can be applied to simplifications of course -- e.g., costs might include how much change is required to simplify at that level, and possibly the difficulty of making them, as well as costs of propagating to other levels.


  1. Todd Griffith:
  2. Making an analogical match based on functional similarity (?) first, and then trying to reduce the structural difference between the two matches by structurally transforming one into the other. Each transformation preserves function and is, I think, actually simplifying. I think his preference when looking for a functionally similar thing is to pick one that has simpler structure. He then reasons about the transformed objects using some special reasoning methods (using deep knowledge about behavior) in order to try to figure out an answer to the original problem. His transformations might be of interest to you.

    Suggested the redesign of the Nintendo Power Glove as an example of simplification.

    Suggested that if the goal of all simplification is to preserve functionality, then perhaps the first pass in searching for potential simplifications (of any kind) would actually be used to retrieve a set of functionally similar designs. That set would then be used in the search for simplifications. I'm not sure if it works all the time, but it's worth considering carefully.

  3. Sam Bhatta's PhD work: his system "discovers" the concept of "cascading" -- i.e., that several instances of a component might be used instead of one in order to satisfy a requirement/constraint (e.g., 3 batteries instead of 1). This batteries example is an instance of that concept. If you think of that transformation to a design as being close to the inverse of simplification, then it becomes clear that those concepts are generalized "complexifiers" :-) and that you ought to see what you can borrow from Sam's mechanisms/ideas.
  4. Suppose you'd requested and carried out a functional simplification by analogy:

F of D(old) ===> F(simp) of D(new)

    and by propagation we get

B of D(old) ===> B' of D(new)

S of D(old) ===> S' of D(new)

    Now if S' happens to be simpler than S, then we can store S ==> S' due to the functional simplification as a simplification itself. Similarly for B ==> B' if that provides a simplification.
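The idea of harvesting propagated changes as simplifications in their own right could be sketched as follows; the "simpler" test here is a toy size comparison, and all names are illustrative:

```python
# After a simplification at one aspect is propagated, store each
# propagated aspect pair as a simplification in its own right whenever
# it passes that aspect's "simpler" test (a toy size comparison here).

def harvest_propagated(pairs):
    """pairs: {aspect: (old_description, new_description)} as fact sets."""
    stored = []
    for aspect, (old, new) in pairs.items():
        if len(new) < len(old):            # toy "simpler" test
            stored.append((aspect, old, new))
    return stored

pairs = {
    "behavior": ({"b1", "b2", "b3"}, {"b1", "b2"}),       # smaller: store
    "structure": ({"s1", "s2"}, {"s1", "s2-new"}),        # changed, not smaller
}
print([a for a, *_ in harvest_propagated(pairs)])  # ['behavior']
```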

Dave's discussion with Amaresh

  1. Important idea about abstraction of a given design. My idea, also suggested by Amaresh. I don't remember talking about it before. Abstraction needs to be guided in order to drive it towards possible analogical matches. Past matches are summarized in some way into meta-knowledge (perhaps heuristic) that suggests productive ways to abstract, e.g. a note that electricity can be abstracted towards "flow" to provoke a match.
  2. I have mentioned goal-driven abstraction before in my developed goal-tree and also mentioned some possible things that may guide the process. Nevertheless, I think that this is an important idea and I haven't seen anything written on it.

    The nature (and formation) of this meta-knowledge seems like a very interesting issue. It's clearly needed.

  3. Amaresh suggested post-analogical reflection might be done by a designer to see whether the simplification found might be applied to other similar things in order to establish its generality.
  4. This sounds like the generalization (for learning) as the last phase in an analogical reasoning process (as suggested by Goel and others).

    No, it doesn't have to be directly about generalization, but it could be. It could be used to discover applicability conditions for the simplification, or it could be used to place other copies of the simplification rule at other locations in the hierarchy. However, clearly both of these would allow generalization to occur more easily (to be more informed).

  1. Issue of whether to put the D ==> Dsimpl produced by simplification by analogy back into the design knowledge.
  2. Related to the previous paragraph.

    But different. 

  3. Need to seek out more of the Design Guidelines literature (including DFX) to see if there is more about simplifications. That may be one of your best sources of examples.
  1. Propagation is another analogy problem! If D(behav)==>D(behav)simpl is produced once a retrieved behavior-based simplification is applied to the given design, then the problem of producing D(str)==>D(str)simpl by propagating D(behav)simpl down a level is that of doing a sort of analogical reasoning:

D(behav) ............. D(str)
    |                    |
    V                    V
D(behav)simpl ........ ??????

    I would rather say that it is another problem that could be addressed by analogical reasoning. I think that the simplification in behavior may not be "implementable" by any structure resembling (or derived from) the behaviorally less simple design's structure. However, it is reasonable to try to use a modification (based on some similarity) first for propagating behavioral simplification to function.

    I think I agree with that.

  1. Amaresh questioned "cost" and its role in simplification. While I was able to convince him that it is normally derived from simplification with respect to some process, he felt that simplification wrt cost might occur, and gave as examples choosing a cheaper material (not a simplification in my mind) but also choosing a simpler shape -- e.g. from a catalog. I still don't see it as convincingly simplification.

    My argument against viewing cost reduction as simplification is that cost can only be computed in context (i.e. it is not an intrinsic property of the design, such as number of components, number of operations, ...). For instance, the cost of a design (structure) depends on the cost of the material used, but that cost is different in every country, region, and city. I like to compare the issue of cost in simplification to that of absolute (physical) time in (algorithm) complexity: the time for applying a method (algorithm) depends on the programmer's skills as well as on the machine on which it is run, but the useful measure for choosing the best algorithm, or for analyzing whether an algorithm can be run in a reasonable amount of time, is the number of (some) basic operations.

    So cost, time, weight, space and counts are the possible absolutes? But cost, weight and time might depend on the context. e.g. depends on gravity e.g. depends on assembler's skill. One has to be careful about numbers of operations too, as this may vary depending on many resources (e.g. machines) available.

    What about space?

    You distinguish between logical and actual time? (as per algorithm analysis) Can you make the same distinction for other measures?