DUE DATE: Thursday March 1st, 2012.
- Slides: Submit by email by 10:00 am.
- Written report: Hand in a hardcopy by 1:00 pm.
- Oral Presentation: during class that day.
Project Assignment:
- Read Chapter 4 and Appendix D of the textbook in great detail.
- THOROUGHLY READ AND FOLLOW THE PROJECT GUIDELINES.
These guidelines contain detailed information about how to structure your
project, and how to prepare your written and oral reports.
- Data Mining Technique(s):
We will run experiments using the following techniques:
- Pre-processing Techniques:
Feature selection, feature creation, dimensionality reduction, noise reduction, attribute discretization, ...
- Classification Techniques:
- Zero-R (majority class)
- One-R
- Decision trees:
J4.8 in Weka (given that J4.8 can handle numeric attributes and
missing values directly, make sure to run some experiments with no
pre-processing and some experiments with pre-processing, and compare
your results); or the decision tree functions in Matlab (see the
Matlab decision tree demo); or both.
- Regression Techniques:
- Linear Regression (under "functions" in Weka)
- Regression Trees: M5P (under "trees" in Weka)
- Model Trees: M5P (under "trees" in Weka)
- Dataset(s):
In this project, we will use the
Communities and Crime Unnormalized Data Set
available at the
UCI Machine Learning Repository.
Convert the dataset to the arff format; the arff header is provided
on the dataset webpage.
Use the murdPerPop attribute as the target.
- For classification, discretize murdPerPop in 3 equal-frequency bins, using unsupervised discretization.
- For regression, keep murdPerPop as a continuous attribute.
Run experiments with and without discretizing the predicting attributes; with and without removing attributes that are too "related" to the target (e.g., murders, pop, ...) or that make the trees long (e.g., states); and with any other pre-processing that produces useful and meaningful models.
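To see what the required unsupervised, equal-frequency discretization of murdPerPop does, here is a minimal Python sketch. (This is illustrative only; in practice, Weka's unsupervised Discretize filter with the equal-frequency option and 3 bins does this for you. The function and label names below are hypothetical.)

```python
# Sketch of unsupervised equal-frequency discretization into n_bins bins.
# Each bin receives (roughly) the same number of instances.
def equal_frequency_bins(values, n_bins=3):
    """Return the cut points that split the sorted values into n_bins
    groups of (roughly) equal size."""
    s = sorted(values)
    n = len(s)
    return [s[(i * n) // n_bins] for i in range(1, n_bins)]

def discretize(value, cuts, labels=("low", "medium", "high")):
    """Map a numeric value to the label of its bin."""
    for cut, label in zip(cuts, labels):
        if value < cut:
            return label
    return labels[len(cuts)]
```

For example, for nine values 0..8 and 3 bins, the cut points are [3, 6], so 2 maps to "low", 4 to "medium", and 8 to "high".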
- Performance Metric(s):
- Use (1) classification accuracy (in classification tasks) or prediction error (in regression tasks; see the note below), (2) the size of the tree, and (3) the readability of the tree as separate measures to evaluate the "goodness" of your models.
Note: For regression tasks, use any subset of the following error metrics that you find appropriate: mean-squared error, root mean-squared error,
mean absolute error, relative squared error, root relative squared error, relative absolute error, and correlation coefficient.
An important part of the data mining evaluation in this project is to try to make sense of these performance metrics and to become familiar with them.
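To help make sense of these metrics, the following Python sketch computes them directly from lists of actual and predicted values. The "relative" errors compare the model against the trivial baseline that always predicts the mean of the actual values. (The function name and dictionary layout are illustrative, not Weka's API.)

```python
import math

# Hypothetical sketch of the regression error metrics listed above.
def regression_metrics(actual, predicted):
    n = len(actual)
    mean_a = sum(actual) / n
    mean_p = sum(predicted) / n
    errs = [p - a for p, a in zip(predicted, actual)]
    base = [mean_a - a for a in actual]  # errors of always predicting the mean
    mse = sum(e * e for e in errs) / n
    metrics = {
        "MAE":  sum(abs(e) for e in errs) / n,        # mean absolute error
        "MSE":  mse,                                   # mean-squared error
        "RMSE": math.sqrt(mse),                        # root mean-squared error
        "RAE":  sum(abs(e) for e in errs)              # relative absolute error
                / sum(abs(b) for b in base),
        "RSE":  sum(e * e for e in errs)               # relative squared error
                / sum(b * b for b in base),
    }
    metrics["RRSE"] = math.sqrt(metrics["RSE"])        # root relative squared error
    # Pearson correlation between actual and predicted values
    cov = sum((a - mean_a) * (p - mean_p) for a, p in zip(actual, predicted))
    var_a = sum((a - mean_a) ** 2 for a in actual)
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    metrics["corr"] = cov / math.sqrt(var_a * var_p)
    return metrics
```

Note that a relative error below 1.0 means the model beats the predict-the-mean baseline, which is one easy sanity check on your models.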
- Compare each accuracy/error you obtain against those of benchmark techniques
such as ZeroR and OneR over the same (sub-)set of data instances used in
the corresponding experiment.
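For reference, both baselines are simple enough to sketch in a few lines of Python: ZeroR always predicts the majority class, and OneR picks the single attribute whose one-level rule makes the fewest training errors. (Hypothetical helper names; instances are represented as (attribute-dict, label) pairs rather than Weka Instances.)

```python
from collections import Counter

def zero_r(train):
    """Always predict the majority class of the training labels."""
    return Counter(y for _, y in train).most_common(1)[0][0]

def one_r(train, attributes):
    """Pick the attribute whose one-level rule (one branch per value,
    predicting the majority class in that branch) has the fewest
    training errors. Returns (attribute, rule, error count)."""
    best = None
    for attr in attributes:
        by_value = {}
        for x, y in train:
            by_value.setdefault(x[attr], []).append(y)
        rule = {v: Counter(ys).most_common(1)[0][0]
                for v, ys in by_value.items()}
        errors = sum(rule[x[attr]] != y for x, y in train)
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best
```

Any model worth reporting should beat ZeroR's accuracy; OneR is the next-harder bar.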
- Remember to experiment with pruning of your tree:
Experiment with pre- and/or
post-pruning of the tree in order to increase the classification
accuracy, reduce the prediction error, and/or reduce the size of the tree.
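In Weka, J4.8's pruning is controlled through its options (e.g., the confidence factor and minimum-instances-per-leaf parameters), but the core idea behind post-pruning can be sketched directly: replace a subtree with its majority-class leaf whenever that does not hurt accuracy on held-out data (reduced-error pruning). A minimal, hypothetical Python version on a toy tree representation (not Weka's algorithm; for simplicity, every subtree is scored on the full validation set rather than only on the instances routed to it):

```python
# A tree is either a class label (leaf) or a dict:
#   {"attr": attribute name,
#    "branches": {attribute value: subtree},
#    "majority": majority class at this node}

def classify(tree, x):
    """Route instance x (an attribute dict) down to a leaf label."""
    while isinstance(tree, dict):
        tree = tree["branches"].get(x.get(tree["attr"]), tree["majority"])
    return tree

def accuracy(tree, data):
    """Fraction of (instance, label) pairs the tree classifies correctly."""
    return sum(classify(tree, x) == y for x, y in data) / len(data)

def prune(tree, validation):
    """Reduced-error post-pruning: bottom-up, replace a subtree with its
    majority-class leaf whenever that does not lower validation accuracy."""
    if not isinstance(tree, dict):
        return tree  # already a leaf
    tree["branches"] = {v: prune(sub, validation)
                        for v, sub in tree["branches"].items()}
    if accuracy(tree["majority"], validation) >= accuracy(tree, validation):
        return tree["majority"]  # collapse this subtree to a leaf
    return tree
```

Pruning trades a little training-set fit for a smaller, more readable tree, which is exactly the tension between your three evaluation measures above.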
- Advanced Topic(s) (20 points):
Investigate in more depth (experimentally, theoretically, or both) a topic of your
choice that is related to decision trees
and that is not covered already in this project.
This decision tree-related topic might be something that was described or mentioned
in the textbook or in class, or that comes from your own research, or that is related
to your interests. Just a few ideas are: The prune function in Matlab; C4.5;
C4.5 pruning methods (for trees or for rules); any of the
additional tree classifiers in Weka: DecisionStump, LMT, RandomForest, RandomTree,
REPTree; meta-learning applied to decision trees (see Classifier -> Choose -> meta);
an idea from a research paper that you find intriguing; ...
- Project 2 Grading Sheet