THOROUGHLY READ AND FOLLOW THE
PROJECT GUIDELINES.
These guidelines contain detailed information about how to structure your
project, and how to prepare your written and oral reports.
Your written report should be at most 12 pages long (including all the graphs, figures, and appendices). The font size should be no smaller than 11pts.
- Data Mining Technique(s):
Use the neural network methods implemented in Weka and in Matlab
(or implement your own code). You can find the Weka module implementing neural
nets under Classifiers, functions, MultilayerPerceptron; and the Matlab functions implementing neural nets in the Neural Network Toolbox.
- Dataset(s):
In this project, we will use two datasets:
- The Face Recognition dataset described in Section 4.7 of your textbook.
This dataset is also available at the UCI Data Repository.
You can use the same learning task (that is, learn the direction the person
is facing: left, right, straight, or upward) and the same design decisions
and parameters described in Section 4.7. For example, you can use the
one-quarter size images, if you wish. I encourage you to experiment with
other settings and design decisions, and even other learning tasks
(e.g., sunglasses recognizer or face recognizer) if time allows.
- See some potentially useful input files for Weka and Matlab,
courtesy of the CS539 students Spring 2011.
Note that these files do not contain the full set of pictures, but a smaller,
possibly incomplete version.
I encourage you to translate the original data files into appropriate input for
Weka and Matlab, but if you're unable to, you can use these files.
If you use any of them, please say so in your report.
- The Spambase Dataset, available from the
Univ. of California Irvine (UCI) Data Repository.
- Performance Metric(s):
- Use classification accuracy. If you wish, you can use other metrics
to evaluate the "goodness" of your models, IN ADDITION to accuracy.
- If possible, compare the accuracies you obtained against those of
benchmarking techniques or previously studied techniques such as
ZeroR, OneR, and J4.8 over the same (sub-)set
of data instances you used in each experiment.
- Report the training time needed to construct the model in each of
the experiments.
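As a sketch of what these two requirements involve, the following Python snippet (not Weka's implementation; the toy labels are hypothetical stand-ins for a class column such as Spambase's spam/not-spam attribute) builds a ZeroR-style majority-class baseline, measures its training time, and computes classification accuracy:

```python
import time
from collections import Counter

def zero_r(labels):
    """ZeroR baseline: always predict the most frequent class in the training labels."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical toy labels standing in for a class column (1 = spam, 0 = not spam).
train_labels = [0, 0, 0, 1, 1, 0, 1, 0]
test_labels = [0, 1, 0, 0]

start = time.perf_counter()            # time the "model construction" step
majority = zero_r(train_labels)
train_time = time.perf_counter() - start

accuracy = sum(1 for y in test_labels if y == majority) / len(test_labels)
print(f"ZeroR predicts {majority}; test accuracy = {accuracy:.2f}; "
      f"training time = {train_time:.6f}s")
```

Reporting the same accuracy and timing figures for the neural nets lets you judge whether the extra training cost buys a real improvement over the baseline.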
- Evaluation and Testing:
The training time of neural networks may be very high in some cases.
If n-fold cross-validation with n=10 takes too long, you can lower the
number of folds n to, say, 3, or you can choose another evaluation method
(e.g., percentage split, Weka's "%split") if necessary.
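The two evaluation options above can be sketched as index-partitioning routines (a minimal illustration, not Weka's code; the fraction 0.66 for the split is just an example value):

```python
import random

def k_fold_indices(n_instances, n_folds, seed=0):
    """Shuffle instance indices and split them into n_folds roughly equal folds."""
    idx = list(range(n_instances))
    random.Random(seed).shuffle(idx)
    return [idx[i::n_folds] for i in range(n_folds)]

def percentage_split(n_instances, train_fraction=0.66, seed=0):
    """Percentage-split evaluation: one shuffled train/test partition
    instead of k train/test rounds."""
    idx = list(range(n_instances))
    random.Random(seed).shuffle(idx)
    cut = int(n_instances * train_fraction)
    return idx[:cut], idx[cut:]

folds = k_fold_indices(10, 3)        # 3-fold CV: 3 training runs
train, test = percentage_split(10)   # %split: a single training run
```

Lowering the fold count from 10 to 3 cuts the number of training runs from ten to three, and a percentage split needs only one, which is why these are the natural fallbacks when each run is slow.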
- Design Decisions:
For experimentation different from that described in Section 4.7 of the
textbook, I offer the following guidelines:
- Topology of your Neural Net:
- I suggest that you use a 2-layer, feedforward architecture. More
specifically, a net consisting of (1 input layer,) 1 hidden layer,
and 1 output layer. Each node in a layer is connected to each
and every one of the nodes in the next layer, and no nodes in
the same layer are connected. However, you can experiment
with other architectures in addition to the one suggested here.
- In the case of non-numeric target attributes, decide on
a convention that you'll use to match output node values and
target attribute values.
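One common convention (a sketch, not the only valid choice) is a 1-of-n encoding: one output node per class, with the predicted class taken to be the node with the highest activation. For the face-direction task this looks like:

```python
def one_hot(label, classes):
    """Encode a nominal target as a 1-of-n output vector (one node per class)."""
    return [1.0 if c == label else 0.0 for c in classes]

def decode(outputs, classes):
    """Decode net outputs back to a class: pick the node with the highest activation."""
    best = max(range(len(outputs)), key=lambda i: outputs[i])
    return classes[best]

directions = ["left", "right", "straight", "upward"]
print(one_hot("right", directions))              # [0.0, 1.0, 0.0, 0.0]
print(decode([0.1, 0.2, 0.9, 0.3], directions))  # straight
```

Whatever convention you pick, state it explicitly in your report so your accuracy figures are reproducible.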
- Neural Net Parameters:
Besides experimenting with the topology of the neural net, see
how varying the learning rate, momentum, number of iterations
(training time), decay, size of validation set, and other
parameters affect the error backpropagation algorithm and the
quality of its results.
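To make these parameters concrete, here is a minimal from-scratch backpropagation sketch (pure Python on the XOR problem, not Weka's or Matlab's implementation; the particular parameter values and the decay schedule are illustrative assumptions) showing where the learning rate, momentum, and decay enter the weight updates:

```python
import math
import random

def train_xor_mlp(lr=0.5, momentum=0.9, decay=0.0, epochs=2000, hidden=3, seed=1):
    """Tiny 1-hidden-layer backprop net on XOR; returns (initial_SSE, final_SSE)."""
    rnd = random.Random(seed)
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]   # XOR task
    # Hidden weights: hidden rows of (2 inputs + bias); output: hidden weights + bias.
    w_h = [[rnd.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(hidden)]
    w_o = [rnd.uniform(-0.5, 0.5) for _ in range(hidden + 1)]
    v_h = [[0.0] * 3 for _ in range(hidden)]      # momentum terms (previous updates)
    v_o = [0.0] * (hidden + 1)
    sig = lambda z: 1 / (1 + math.exp(-z))

    def sse():
        total = 0.0
        for x, t in data:
            h = [sig(w[0]*x[0] + w[1]*x[1] + w[2]) for w in w_h]
            o = sig(sum(wi*hi for wi, hi in zip(w_o, h)) + w_o[-1])
            total += (t - o) ** 2
        return total

    first = sse()
    for epoch in range(epochs):
        rate = lr / (1 + decay * epoch)           # simple learning-rate decay schedule
        for x, t in data:
            h = [sig(w[0]*x[0] + w[1]*x[1] + w[2]) for w in w_h]
            o = sig(sum(wi*hi for wi, hi in zip(w_o, h)) + w_o[-1])
            d_o = (t - o) * o * (1 - o)           # output-node error term
            d_h = [h[j]*(1 - h[j])*d_o*w_o[j] for j in range(hidden)]
            for j in range(hidden):               # update: momentum * previous + rate * gradient
                v_o[j] = momentum*v_o[j] + rate*d_o*h[j]
                w_o[j] += v_o[j]
            v_o[hidden] = momentum*v_o[hidden] + rate*d_o
            w_o[hidden] += v_o[hidden]
            for j in range(hidden):
                for k, xk in enumerate(x + [1]):
                    v_h[j][k] = momentum*v_h[j][k] + rate*d_h[j]*xk
                    w_h[j][k] += v_h[j][k]
    return first, sse()

before, after = train_xor_mlp()
print(f"SSE before training: {before:.3f}, after: {after:.3f}")
```

Rerunning `train_xor_mlp` while varying one parameter at a time (e.g., `lr` in {0.1, 0.3, 0.5} or `momentum` in {0.0, 0.5, 0.9}) is exactly the kind of controlled experiment this section asks for, just on a toy scale.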