Diffstat (limited to 'README.md')
-rw-r--r--  README.md  10
1 file changed, 5 insertions, 5 deletions
@@ -1,12 +1,12 @@
-Code of the winning entry to the Kaggle ECML/PKDD destination competition(https://www.kaggle.com/c/pkdd-15-predict-taxi-service-trajectory-i). Our approach is described there: http://arxiv.org/abs/1508.00021
+Code of the winning entry to the [Kaggle ECML/PKDD taxi destination competition](https://www.kaggle.com/c/pkdd-15-predict-taxi-service-trajectory-i). Our approach is described in [our paper](http://arxiv.org/abs/1508.00021).
 
 ## Dependencies
 
 We used the following packages developped at the MILA lab:
 
-* Theano. A general GPU-accelerated python math library, with an interface similar to numpy (see [3, 4]). http://deeplearning.net/software/theano/
-* Blocks. A deep-learning and neural network framework for Python based on Theano. As Blocks evolves very rapidly, we suggest you use commit `1e0aca9171611be4df404129d91a991354e67730`, which we had the code working on. https://github.com/mila-udem/blocks
-* Fuel. A data pipelining framework for Blocks. Same that for Blocks, we suggest you use commit `ed725a7ff9f3d080ef882d4ae7e4373c4984f35a`. https://github.com/mila-udem/fuel
+* Theano. A general GPU-accelerated python math library, with an interface similar to numpy (see [3, 4]). See <http://deeplearning.net/software/theano/>
+* Blocks. A deep-learning and neural network framework for Python based on Theano. As Blocks evolves very rapidly, we suggest you use commit `1e0aca9171611be4df404129d91a991354e67730`, which we had the code working on. See <https://github.com/mila-udem/blocks>
+* Fuel. A data pipelining framework for Blocks. Same that for Blocks, we suggest you use commit `ed725a7ff9f3d080ef882d4ae7e4373c4984f35a`. See <https://github.com/mila-udem/fuel>
 
 We also used the scikit-learn Python library for their mean-shift clustering algorithm. numpy, cPickle and h5py are also used at various places.
 
@@ -50,4 +50,4 @@ Note that some script expect the repository to be in your PYTHONPATH (go to the
 
 When running the training script, set the following Theano flags environment variable to exploit GPU parallelism: `THEANO_FLAGS=floatX=float32,device=gpu,optimizer=fast_run`
 
-*More information in this pdf: https://github.com/adbrebs/taxi/blob/master/doc/short_report.pdf*
+*More information in [this pdf](https://github.com/adbrebs/taxi/blob/master/doc/short_report.pdf)*
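The README text touched by this patch boils down to two setup steps: keep the repository on `PYTHONPATH` and set `THEANO_FLAGS` before launching the training script. The sketch below is not part of the patch; it is a minimal Python launcher illustrating both steps, assuming Theano is installed. The repository path is a placeholder, and `device=gpu` is the spelling used by the older Theano GPU backend this project targets (newer Theano releases use `device=cuda`).

```python
import os
import sys

# Flags taken from the README; they must be in the environment *before*
# Theano is imported. `device=gpu` matches the older Theano backend this
# project was written against.
os.environ.setdefault("THEANO_FLAGS",
                      "floatX=float32,device=gpu,optimizer=fast_run")

# The README notes that some scripts expect the repository on PYTHONPATH;
# "/path/to/taxi" is a placeholder for wherever the repository was cloned.
sys.path.insert(0, "/path/to/taxi")

import theano

# Quick sanity check that the flags were picked up.
print(theano.config.floatX, theano.config.device)
```

The equivalent shell form is simply to prefix the training command with the `THEANO_FLAGS=...` assignment, as the README itself shows.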