CS539 Machine Learning - Spring 2009
Project 3 - Neural Networks
Due Date:
Tuesday, Feb. 24th, 2009. Slides are due at 2:00 (by email),
and the written report is due at 3:30 pm (beginning of class).
- Read Chapter 4 of the textbook about neural nets in great detail.
- Homework Assignment:
  - Solve Exercises 4.6 and 4.7 of your textbook (page 125).
    Include your solutions in your written report (not in your oral report).
- Project Assignment:
THOROUGHLY READ AND FOLLOW THE
PROJECT GUIDELINES.
These guidelines contain detailed information about how to structure your
project, and how to prepare your written and oral reports.
- Data Mining Technique(s):
Use the neural network methods implemented in the Weka system, or
implement your own code. You can find the Weka module implementing neural
nets under Classifiers > functions > MultilayerPerceptron.
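If you implement your own code, the forward pass of such a network takes only a few lines. The following pure-Python sketch is illustrative only (the weights and function names are made up); it assumes sigmoid units, which is what Weka's MultilayerPerceptron uses by default.

```python
import math

def sigmoid(x):
    # Logistic (sigmoid) activation, the default unit in Weka's
    # MultilayerPerceptron.
    return 1.0 / (1.0 + math.exp(-x))

def layer_output(inputs, weights, biases):
    # One fully connected layer: each unit computes a weighted sum of
    # all inputs plus its bias, then applies the sigmoid.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, hidden_w, hidden_b, out_w, out_b):
    # Two-layer feedforward pass: input -> hidden -> output.
    h = layer_output(x, hidden_w, hidden_b)
    return layer_output(h, out_w, out_b)

# Tiny example: 2 inputs, 2 hidden units, 1 output unit (made-up weights).
y = forward([1.0, 0.0],
            hidden_w=[[0.5, -0.4], [0.3, 0.8]], hidden_b=[0.1, -0.2],
            out_w=[[1.0, -1.0]], out_b=[0.0])
```

Learning (covered further below under Design Decisions) then amounts to adjusting the weights and biases to reduce the output error.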
- Dataset(s):
In this project, we will use two datasets:
- The World Happiness dataset.
  Run experiments using SWL-index as the target attribute, and then
  separate experiments using continent as the target attribute.
- The Face Recognition dataset described in Section 4.7 of your textbook.
You can use the same learning task (that is, learn the direction the person
is facing: left, right, straight, or upward) and the same design decisions
and parameters described in Section 4.7. For example, you can use the
one-quarter size images, if you wish. I encourage you to experiment with
other settings and design decisions, and even other learning tasks
(e.g., sunglasses recognizer or face recognizer) if time allows.
- Performance Metric(s):
- Use classification accuracy. If you wish, you can use other metrics
to evaluate the "goodness" of your models, IN ADDITION to accuracy.
- If possible, compare the accuracies you obtained against those of
  benchmarking or previously studied techniques, such as ZeroR, OneR,
  ID3, and J4.8, over the same (sub-)set of data instances you used
  in each experiment.
- Report the training time needed to construct the model in each of
the experiments.
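If you compute these measurements yourself rather than reading them from Weka's output, both are straightforward; a minimal sketch, with a ZeroR-style majority-class baseline standing in for model construction (the toy labels are made up):

```python
import time

def accuracy(predicted, actual):
    # Classification accuracy: fraction of instances whose predicted
    # class matches the true class.
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual)

def zero_r(train_labels):
    # ZeroR baseline: always predict the most frequent training class.
    return max(set(train_labels), key=train_labels.count)

labels = ["left", "right", "left", "straight", "left"]

start = time.perf_counter()
majority = zero_r(labels)              # "model construction" step
train_time = time.perf_counter() - start

preds = [majority] * len(labels)
acc = accuracy(preds, labels)          # 3 of 5 correct -> 0.6
```

The same `perf_counter` bracketing around the training call is how you would time a real model's construction.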
- Evaluation and Testing:
The training time of neural networks may be very high in some cases.
If n-fold cross validation with n=10 takes too long, you can lower the
number of folds n, or you can choose another evaluation method (e.g.,
%split) if necessary.
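If you implement the evaluation yourself, both schemes reduce to partitioning instance indices; a minimal sketch (function names are my own, not Weka's):

```python
import random

def cross_val_folds(n_instances, n_folds, seed=0):
    # n-fold cross validation: shuffle the indices, then deal them into
    # n_folds disjoint test folds; each fold's complement is the
    # corresponding training set.
    idx = list(range(n_instances))
    random.Random(seed).shuffle(idx)
    return [idx[i::n_folds] for i in range(n_folds)]

def percentage_split(n_instances, train_pct, seed=0):
    # %split: a single shuffled train/test partition, much cheaper than
    # cross validation since the model is trained only once.
    idx = list(range(n_instances))
    random.Random(seed).shuffle(idx)
    cut = int(n_instances * train_pct / 100)
    return idx[:cut], idx[cut:]

folds = cross_val_folds(100, 10)       # 10 folds of 10 instances each
train, test = percentage_split(100, 66)  # 66% train / 34% test
```

Lowering the number of folds n divides the number of training runs, which is why it helps when training is slow.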
- Design Decisions:
For experimentation different from that described in Section 4.7 of the
textbook, I offer the following guidelines:
- Topology of your Neural Net:
- I suggest that you use a 2-layer, feedforward architecture. More
  specifically, a net consisting of (1 input layer,) 1 hidden layer,
  and 1 output layer. Each node in a layer is connected to every
  node in the next layer, and no nodes in the same layer are
  connected. However, you can experiment with other architectures
  in addition to the one suggested here.
- In the case of non-numeric target attributes, decide on
  a convention that you'll use to match output node values to
  target attribute values.
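One common convention, used by the face-recognition network of Section 4.7, is 1-of-n output encoding with 0.1/0.9 target values: one output node per class, with the true class's node driven high and the rest low. A minimal sketch (the class names follow the face-direction task):

```python
def encode_target(value, classes):
    # 1-of-n encoding: one output node per class. The node for the true
    # class gets target 0.9 and the others 0.1 (rather than 1.0/0.0,
    # since sigmoid outputs can only approach those extremes).
    return [0.9 if c == value else 0.1 for c in classes]

def decode_output(outputs, classes):
    # Predicted class = class of the output node with the highest value.
    return classes[outputs.index(max(outputs))]

classes = ["left", "right", "straight", "up"]
code = encode_target("right", classes)          # [0.1, 0.9, 0.1, 0.1]
pred = decode_output([0.2, 0.7, 0.1, 0.3], classes)  # "right"
```

Whatever convention you pick, state it in your report so your accuracy numbers are interpretable.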
- Neural Net Parameters:
Besides experimenting with the topology of the neural net, see
how varying the learning rate, momentum, number of iterations
(training time), decay, size of the validation set, and other
parameters affects the error backpropagation algorithm and the
quality of its results.
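To see where the learning rate and momentum enter the algorithm, here is a minimal sketch of the momentum weight-update rule for a single sigmoid unit (full backpropagation applies the same rule at every layer, with the error term propagated backwards). The toy data are made up; the defaults shown (learning rate 0.3, momentum 0.2) match Weka's MultilayerPerceptron defaults.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(xs, ys, lr=0.3, momentum=0.2, epochs=200, seed=1):
    # Stochastic gradient descent on one sigmoid unit. Each weight
    # update is lr * delta * input plus momentum times the previous
    # update, which smooths and accelerates the descent.
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(len(xs[0]))]
    b = 0.0
    dw_prev, db_prev = [0.0] * len(w), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            o = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            delta = (y - o) * o * (1 - o)   # error term for a sigmoid unit
            dw = [lr * delta * xi + momentum * dwi
                  for xi, dwi in zip(x, dw_prev)]
            db = lr * delta + momentum * db_prev
            w = [wi + d for wi, d in zip(w, dw)]
            b += db
            dw_prev, db_prev = dw, db
    return w, b

def sse(w, b, xs, ys):
    # Sum of squared errors over the dataset.
    return sum((y - sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)) ** 2
               for x, y in zip(xs, ys))

# Toy separable task: output should be high exactly when the first
# input is high.
xs = [[0, 0], [0, 1], [1, 0], [1, 1]]
ys = [0.1, 0.1, 0.9, 0.9]
w0, b0 = [0.0, 0.0], 0.0       # untrained reference weights
w, b = train(xs, ys)
```

Rerunning `train` with different `lr` and `momentum` values (and comparing `sse` and the number of epochs needed) is exactly the kind of parameter study this section asks for.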