information (information content, failure
probability)
What is a "simpler" relation?
a binary relation between two designs with
respect to a given context/aspect/measure
For what context/aspect/measure combinations
does simplification make sense?
Under what conditions can an operational
definition of simplification be given, such that the relation can be calculated?
Besides the designs involved what is important
with respect to a simplification relation?
an explanation of why the relation holds
a description of the process which resulted
in the relation
How do we represent "simpler" relations?
How can simplification be done?
by search using the simpler relation to
guide it
by heuristic search using simplification
rules
by adapting old simplifications
How does design simplification propagate?
Design Simplification by Analogy
What is the analogical reasoning model for
design simplification?
What are the phases of the analogical reasoning
process?
Retrieval of source analog
Mapping of the target problem to the source
analog
Transfer of knowledge from the source analog
to the target problem
Completion of the solution to the target
problem (directed by the simplification task - e.g. MOLGEN?)
Propagation of the effects of design simplification
(interlaced with the previous two phases?)
Evaluation of the result of simplification
is it a simplification
design requirement/constraints
simplification requirement/constraints
Generalization over the solutions to the
source problem and target problem
Storing the solution of the target problem
and generalization
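The phases above can be sketched as one control loop. This is a hypothetical sketch, not an implemented system; every phase function here is a trivial stand-in that only shows the order and data flow of the phases, and all data shapes (dicts with "parts", "pov", "rule" keys) are assumptions for illustration.

```python
# Hypothetical control loop for the analogical-reasoning phases listed
# above. Each phase function is a trivial stand-in, not a real method.

def retrieve(target, pov, memory):
    # Retrieval: pick a stored simplification matching the point of view.
    matches = [case for case in memory if case["pov"] == pov]
    return matches[0] if matches else None

def map_and_transfer(target, source):
    # Mapping + transfer, collapsed into one step for the sketch.
    return {"rule": source["rule"]}

def complete_and_propagate(target, knowledge):
    # Completion and propagation, here just applying the transferred rule.
    return knowledge["rule"](target)

def evaluate(candidate, target, pov):
    # Evaluation: is the result really simpler under the chosen metric?
    return candidate["parts"] < target["parts"]

def simplify_by_analogy(target, pov, memory):
    source = retrieve(target, pov, memory)
    if source is None:
        return None
    knowledge = map_and_transfer(target, source)
    candidate = complete_and_propagate(target, knowledge)
    if not evaluate(candidate, target, pov):
        return None
    memory.append({"pov": pov, "rule": knowledge["rule"]})  # storage/learning
    return candidate

memory = [{"pov": "structure/count",
           "rule": lambda d: {"parts": d["parts"] - 1}}]
result = simplify_by_analogy({"parts": 5}, "structure/count", memory)
```

The evaluation and storage phases act as the gate and the learning step of the loop, matching the last items in the phase list.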
What is the control flow of the analogical
reasoning process?
How is the target problem specified?
The target (simplification) problem has
to specify a design and a simplification point of view (task/aspect/metric)
It is also possible that constraints on
the solution and/or the simplification process need to be specified
What is the content of a source analog?
The source analogs are design simplifications
Each design simplification is composed of
a design, a simplification of that design (another design) and the simpler
relation connecting the two
Both of the designs are described in terms
of their structure, behavior and function
The simpler relation is a description of
the process by which the simplification was obtained from the design
The description of the simplification process
will contain:
the simplification point of view (task/aspect/metric)
a description of the conditions under which
the simplification was applicable
an explanation of why those conditions were
needed
a description of the simplification process
itself
a description of the results of the simplification
which account for the simpler relation
a description of the propagation of the
changes produced by the simplification to other aspects of the design
Some of the elements of the above description
may be missing from a simplification
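The record described above could be written down as a data structure, roughly as follows. All field names and types are illustrative assumptions, not from any existing system; optional fields reflect the note that some elements may be missing.

```python
# A possible record for a stored design simplification, following the
# description above. Field names are illustrative, not from any system.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Design:
    # Both designs are described by structure, behavior and function.
    structure: dict
    behavior: dict
    function: dict

@dataclass
class SimplificationProcess:
    point_of_view: tuple                      # (task, aspect, metric)
    preconditions: list = field(default_factory=list)
    explanation: Optional[str] = None         # why the preconditions hold
    steps: list = field(default_factory=list)
    results: list = field(default_factory=list)   # accounts for "simpler"
    propagation: list = field(default_factory=list)

@dataclass
class DesignSimplification:
    original: Design
    simplified: Design
    process: SimplificationProcess   # description of the simpler relation

example = DesignSimplification(
    original=Design(structure={"parts": 3}, behavior={}, function={}),
    simplified=Design(structure={"parts": 2}, behavior={}, function={}),
    process=SimplificationProcess(
        point_of_view=("assembly", "structure", "count"),
        preconditions=["parts are rigidly joined"],
        explanation="joined parts can be merged without changing function"))
```

Defaulting every process field except the point of view captures the remark that some elements of the description may be missing from a simplification.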
How is a source analog retrieved?
A source analog is retrieved based on the
target problem (design and point of view)
The point of view makes it possible to select certain
simplifications to be considered for matching:
the simplest case would be if only simplifications
from exactly the same point of view (task/aspect/metric) were considered
this could be relaxed by considering simplifications
from points of view which only differ by their metrics; this requires that
there is some way of converting the different metrics to each other (for
instance count to information content)
another way to relax the constraint on the
simplifications considered is to allow aspects that could be mapped to
each other (for instance different kinds of processes, such as behavior
and function can be mapped to each other using abstraction)
one could even think of relaxing the identical-task
requirement for considering simplifications in retrieval in the case
of tasks that can be mapped to each other
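The count-to-information-content conversion mentioned above can be illustrated with the standard Shannon formula over part types. The mapping itself is an assumption for illustration, not an established equivalence between the two metrics.

```python
# Converting a component count into an information-content measure, so
# that simplifications stored under one metric can be compared under
# another. Uses the standard Shannon formula; the conversion is only a
# sketch of the idea, not an established equivalence.
import math
from collections import Counter

def information_content(part_types):
    """Bits needed to describe the part list, given its mix of types."""
    counts = Counter(part_types)
    total = sum(counts.values())
    return -sum(n * math.log2(n / total) for n in counts.values())

# Five identical parts carry less information than five distinct ones,
# even though the plain count metric sees both designs as equal.
uniform = information_content(["bolt"] * 5)
mixed = information_content(["bolt", "nut", "plate", "spring", "clip"])
```

This also shows why the two metrics can rank designs differently, which is exactly the conversion problem the relaxation raises.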
The source analogs selected based on the
simplification point of view are searched by comparing them to the design
in the problem specification
The search must be based on aspects of the
designs which helped to produce the source (stored) simplifications
These aspects can be attributes, attribute
values, components (surface aspects), but more likely relations between
those
They can be identified/extracted from the
description of the simplification process (e.g. from preconditions, explanations,
the simplification process itself, and so on)
The aspects considered as most important
for producing the source simplifications can be further abstracted and
finally used for indexing the simplification
Indexes built this way can then be used
for searching for source analogs
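A minimal index-and-retrieve sketch of the scheme above: each stored simplification is indexed by the abstracted aspects that were important to producing it, and retrieval counts index hits against the target design's aspects, restricted to the given point of view. All data shapes are assumptions for illustration.

```python
# Index stored simplifications by the aspects that helped produce them
# (e.g. extracted from preconditions or explanations), then retrieve by
# counting index hits under a point-of-view filter. Shapes are assumed.

def build_index(cases):
    index = {}
    for case in cases:
        for aspect in case["important_aspects"]:
            index.setdefault(aspect, []).append(case)
    return index

def retrieve(index, target_aspects, pov):
    # Count index hits per candidate, restricted to the point of view.
    hits = {}
    for aspect in target_aspects:
        for case in index.get(aspect, []):
            if case["pov"] == pov:
                hits[id(case)] = hits.get(id(case), [0, case])
                hits[id(case)][0] += 1
    if not hits:
        return None
    return max(hits.values(), key=lambda h: h[0])[1]

cases = [
    {"pov": "count", "important_aspects": {"symmetry", "fastener"}},
    {"pov": "count", "important_aspects": {"fastener"}},
]
index = build_index(cases)
best = retrieve(index, {"symmetry", "fastener"}, "count")
```

A real matcher would score relations between aspects rather than bare hit counts, as the notes above suggest, but the filter-then-search structure is the same.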
How is the retrieved source analog mapped
onto the target problem?
The less simple design component of a retrieved
source simplification is mapped onto the design component of the target
problem.
One possibility for the mapping is to use the
principle of the structure mapping engine [Falkenhainer]. This would
produce mappings that prefer higher-level relations to lower-level
relations and attributes.
The mapping process should be extended by
adding a mechanism which would guide the mapping using the simplification
goal (point of view).
This should somehow weight the preference
of the mapping mechanism towards forming mappings between similar attributes
and relations which, based on the old simplification process, were important
to (or played a role in) the simplification.
Information about which aspects of the old,
un-simplified design were important to the source simplification can be
extracted from the description of the simplification process (e.g. from
preconditions, explanations, the simplification process itself, and so
on).
What measure of similarity to use to select
the best match?
The measure of similarity needs to rely
on the mappings produced: the "better" the mapping is the "more
similar" the source design should be to the target design.
Since we are looking at three levels of
simplification: structural, behavioral and functional, we need to define
the quality of mapping for each of those three aspects
More generally we need to define what we
understand by "similar" structures and "similar" processes,
and how we measure the degree of similarity in both cases.
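One toy way to combine the three levels is sketched below: relation correspondences are weighted above attribute correspondences (the systematicity preference mentioned above), and the per-level scores are combined with weights biased by the simplification point of view. All weights and data shapes are made-up assumptions.

```python
# A toy similarity measure over a computed mapping: relations count more
# than attributes, and the structural/behavioral/functional levels are
# combined with point-of-view weights. All numbers here are assumptions.

def mapping_score(mapping, relation_weight=2.0, attribute_weight=1.0):
    return (relation_weight * len(mapping["relations"])
            + attribute_weight * len(mapping["attributes"]))

def similarity(mappings, pov_weights):
    # Both dicts are keyed by "structure", "behavior", "function".
    return sum(pov_weights[level] * mapping_score(m)
               for level, m in mappings.items())

mappings = {
    "structure": {"relations": ["supports"], "attributes": ["material"]},
    "behavior":  {"relations": ["causes", "enables"], "attributes": []},
    "function":  {"relations": [], "attributes": ["purpose"]},
}
# A behavior-oriented point of view weights the behavioral mapping most.
score = similarity(mappings,
                   {"structure": 0.2, "behavior": 0.6, "function": 0.2})
```

Defining what the per-level mappings contain is exactly the open question posed above; the combination step is the easy part.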
What is the (simplification) knowledge to be
transferred?
The simplification knowledge transferred
will be obtained either from the description of the simplification process
in the source simplification or from the computed differences between the
two designs involved in the source simplification.
Knowledge that can be transferred from the
simplification process description in the source simplification could be
elements of the preconditions, of some explanations and/or of the simplification
process description.
To transfer knowledge based on the differences
from the source simplification, those differences must be computed.
Computing differences must refer only to
the simplification point of view currently considered. This may
be "tricky" because, for instance, one may discover structural
differences that were actually generated by the propagation of, say,
a functional simplification. Thus transferring relevant simplification
knowledge based only on differences between the designs involved in the
source simplification may be non-trivial.
How is knowledge transferred from the source
to the target? (directly or through a shared abstraction)
How does the transferred knowledge enable
completing the solution?
How is the solution to the target problem
evaluated?
The solution to the target needs to be evaluated
from at least two points of view:
Does the solution satisfy the original design
requirements and constraints of the target design?
This is a design evaluation problem and should
probably not be of much concern for us.
Is the modified design indeed a simplification?
There needs to be an operational definition
of what it means that a design is simpler than another design from each
of the context, measure, aspect combinations that make sense and that we
are handling. Such definitions may be based either on some complexity metrics,
or on some (partial) ordering relations between designs.
These operational definitions must be implemented
by corresponding algorithms.
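One possible shape for such an operational definition: register a complexity function per (task, aspect, metric) combination and declare a design simpler iff its complexity is strictly lower under that function. The registered metrics below are illustrative stand-ins, not a proposed catalog.

```python
# An operational "simpler than" check: each (task, aspect, metric)
# combination gets a complexity function; a candidate is simpler than
# the original iff its complexity is strictly lower. The two metrics
# registered here are illustrative stand-ins only.

COMPLEXITY_METRICS = {
    ("assembly", "structure", "count"): lambda d: len(d["components"]),
    ("use", "behavior", "count"): lambda d: len(d["operations"]),
}

def is_simpler(candidate, original, pov):
    metric = COMPLEXITY_METRICS[pov]
    return metric(candidate) < metric(original)

old = {"components": ["a", "b", "c"], "operations": ["press", "turn"]}
new = {"components": ["a", "bc"], "operations": ["press", "turn"]}
structurally_simpler = is_simpler(new, old, ("assembly", "structure", "count"))
```

A partial-ordering-based definition would replace the numeric comparison with a domain-specific relation, but the per-point-of-view dispatch stays the same.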
What may be learned from the target problem
and how? (generalization)
Is the target analog and/or generalization
worth storing and if yes how?
What knowledge to transfer from an old simplification
to a new problem?
How can simplification as goal guide the
phases of analogical reasoning?
Dave's questions
Ashok's IEEE Expert paper on analogy &
creativity suggests that producing abstractions of designs (for indexing
or to guide transfer) might occur at 1) design storage time, 2) design
retrieval time (at reminding time), or 3) at problem-solving time (when
knowledge transfer is actually done). Given our discussion about goals
affecting indexing, how does this idea of eager or late abstraction get
modified by the presence of a particular simplification request (i.e., in
the late case only)?
What if designs were stored annotated by
what goals were under consideration at the time of each decision. e.g.,
it's symmetrical to improve handling times during assembly. How might that
affect goal-based analogical simplification?
The possibility of keeping information with
a simplification (or a design) about how useful it's been and in what way
(or how easy) the design has been to simplify, or how easy it's been to
match. This accumulated knowledge should become useful and will affect
the process.
Altmeyer & Schurman's paper in AID'96:
They talk about a whole lot of things -- with an attempt at formality --
including assessing adaptation costs for a retrieved case. That idea can
be applied to simplifications of course -- e.g., costs might include how
much change is required to simplify at that level, and possibly the difficulty
of making them, as well as costs of propagating to other levels.
Todd Griffith:
Making an analogical match based on functional
similarity (?) 1st, and then trying to reduce the structural difference
between the two matches by structurally transforming one into the other.
Each transformation preserves function, and is, I think, actually simplifying.
I think his preference when looking for a functionally similar thing is
to pick one that has simpler structure. He reasons about the transformed
objects with some special methods (using deep knowledge about behavior)
in order to try to figure out an answer to the original problem. His
transformations might be of interest to you.
Suggested the redesign of the Nintendo Power
Glove as an example of simplification.
Suggested that if the goal of all simplification
is to preserve functionality, then perhaps the first pass in searching
for potential simplifications (of any kind) would actually be used to retrieve
a set of functionally similar designs. That set would then be used in the
search for simplifications. I'm not sure if it works all the time, but
it's worth considering carefully.
Sam Bhatta's PhD work: his system "discovers"
the concept of "cascading" -- i.e., that several of a component
might be used instead of one in order to satisfy a requirement/constraint.
(e.g., 3 batteries instead of 1). This batteries example is an instance
of that concept. If you think of that transformation to a design as being
close to the inverse of simplification then it becomes clear that those
concepts are generalized "complexifiers":-) and that you ought
to see what you can borrow from Sam's mechanisms/ideas.
Suppose you'd requested and carried out
a functional simplification by analogy:
F of D(old) ===> F(simp) of D(new)
and by propagation we get
B of D(old) ===> B' of D(new)
S of D(old) ===> S' of D(new)
Now if S' happens to be simpler than S then
we can store S ==> S', due to the functional simplification, as a
simplification itself. Similarly for B ==> B' if that provides a simplification.
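The harvesting idea above can be sketched as a post-propagation pass: after a simplification at one level propagates to the others, any level whose new description is itself simpler gets stored as a simplification case in its own right. The complexity stand-ins and shapes below are assumptions for illustration.

```python
# Harvest by-product simplifications from propagation: compare the old
# and new design descriptions level by level and store every level that
# came out simpler as a new case. Numbers stand in for complexity.

def harvest(old_design, new_design, is_simpler, memory):
    for level in ("function", "behavior", "structure"):
        if is_simpler(new_design[level], old_design[level]):
            memory.append((level, old_design[level], new_design[level]))

memory = []
old = {"function": 4, "behavior": 6, "structure": 5}   # complexity stand-ins
new = {"function": 3, "behavior": 6, "structure": 4}   # after propagation
harvest(old, new, lambda a, b: a < b, memory)
```

Here the requested functional simplification is stored, the unchanged behavior is skipped, and the structural improvement produced only by propagation is stored as a simplification in its own right.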
Dave's discussion with Amaresh
Important idea about abstraction of a given
design. My idea, also suggested by Amaresh. I don't remember talking about
it before. Abstraction needs to be guided in order to drive it towards
possible analogical matches. Past matches are summarized in some way into
meta-knowledge (perhaps heuristic) that suggests productive ways to abstract.
e.g. a note that electricity can be abstracted towards "flow"
to provoke a match.
I have mentioned goal-driven abstraction
before in my developed goal-tree and also mentioned some possible things
that may guide the process. Nevertheless I think that this is an important
idea and I haven't seen anything written on it.
The nature (and formation) of this meta-knowledge
seems like a very interesting issue. It's clearly needed.
Amaresh suggested post-analogical reflection
might be done by a designer to see whether the simplification found might
be applied to other similar things in order to establish its generality.
This sounds like the generalization (for
learning) as the last phase in an analogical reasoning process (as suggested
by Goel and others).
No, it doesn't have to be directly about
generalization, but it could be. It could be used to discover applicability
conditions for the simplification, or it could be used to place other copies
of the simplification rule at other locations in the hierarchy. However,
clearly both of these would allow generalization to occur more easily (to
be more informed).
Issue of whether to put the D ==>
Dsimpl produced by simplification by analogy back into the design knowledge.
Related to the previous paragraph.
But different.
Need to seek out more of the Design Guidelines
literature (including DFX) to see if there is more on simplifications.
That may be one of your best sources of examples.
Propagation is another analogy problem!
If D(behav) ==> D(behav)simpl is produced once a retrieved behavior-based
simplification is applied to the given design, then the problem of producing
D(str) ==> D(str)simpl by propagating D(behav)simpl down a level is that of
doing a sort of analogical reasoning:
D(behav) ............ D(str)
    |                    |
    V                    V
D(behav)simpl        ??????
I would rather say that it is another problem
that could be addressed by analogical reasoning. I think that the
simplification in behavior may not be "implementable" by any structure
resembling (or derived from) the behaviorally less simple design's
structure. However, it is reasonable to first try to use a modification
(based on some similarity) for propagating behavioral simplification to
function.
I think I agree with that.
Amaresh questioned "cost" and
its role in simplification. While I was able to convince him that it is
normally derived from simplification with respect to some process, he felt
that simplification wrt cost might occur, and gave choosing a cheaper
material as an example (not a simplification in my mind), but also choosing
a simpler shape -- e.g. from a catalog. I still don't see it as
convincingly a simplification.
My argument against viewing cost reduction
as simplification is that cost can only be computed in context (i.e. it
is not an intrinsic property of the design, such as number of components,
number of operations,...). For instance the cost of a design (structure)
viewed as structure depends on the cost of the material used, but that
cost is different in every country, region, city. I like to compare the
issue of cost in simplification to that of absolute (physical) time in
(algorithm) complexity: the time for applying a method (algorithm) depends
on the programmer's skills as well as on the machine on which it is run,
but the useful measure for choosing the best algorithm or for analyzing
whether an algorithm can be run in a reasonable amount of time is the number
of (some) basic operations.
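The algorithm-analysis analogy above can be made concrete: the operation count of a procedure is intrinsic to it, while its monetary cost depends on a context parameter, just as a design's component count is intrinsic while its material cost varies by region. The numbers and field names are made up for illustration.

```python
# Intrinsic vs context-dependent measures, mirroring the operation-count
# vs wall-clock-time distinction above. Numbers are illustrative only.

def assembly_operations(design):
    # Intrinsic: a context-free count, like an algorithm's operation count.
    return len(design["steps"])

def assembly_cost(design, wage_per_step):
    # Context-dependent: the same design costs differently per region,
    # like the same algorithm running on different machines.
    return len(design["steps"]) * wage_per_step

design = {"steps": ["insert", "fasten", "inspect"]}
ops = assembly_operations(design)
cost_region_a = assembly_cost(design, 2.0)
cost_region_b = assembly_cost(design, 5.0)
```

The count ranks designs the same way everywhere; the cost ordering can differ between contexts, which is the argument against treating cost reduction as simplification.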
So cost, time, weight, space and counts
are the possible absolutes? But cost, weight and time might depend on the
context: e.g. weight depends on gravity, time depends on the assembler's
skill. One has to be careful about numbers of operations too, as these may
vary depending on the resources (e.g. machines) available.
What about space?
You distinguish between logical and actual
time? (as per algorithm analysis) Can you make the same distinction for
other measures?