Working with the University of Toronto Data Science Team on Kaggle competitions, there was only so much we could do on our local computers. So when we had to analyze 100 GB of satellite images for the Kaggle DSTL challenge, we moved to cloud computing.
We chose AWS for its ubiquity and familiarity. To prepare the data pipeline, I downloaded the data from Kaggle onto an EC2 instance, unzipped it, and stored it on S3. Storing the data unzipped means you don't have to decompress it every time you use it, which saves considerable time. However, it also increases the size of the data substantially and, as a result, incurs higher storage costs.
Now that the data was stored on AWS, the question became: how do we programmatically access the S3 data and incorporate it into our workflow? The following details how to do so in Python.
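As a minimal sketch, the usual way to do this in Python is with boto3, the official AWS SDK. The bucket name and key layout below are placeholders for illustration, not our actual setup; boto3 picks up credentials from the environment or `~/.aws/credentials`.

```python
BUCKET = "dstl-unzipped"  # hypothetical bucket holding the unzipped data

def image_key(image_id, band="three_band"):
    """Build the S3 key for one satellite image, mirroring the unzipped layout."""
    return "{}/{}.tif".format(band, image_id)

def list_images(band="three_band"):
    """List all image keys stored under a band prefix."""
    import boto3  # imported lazily so the key helper works without AWS configured
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=band + "/")
    return [obj["Key"] for obj in resp.get("Contents", [])]

def download_image(image_id, dest_path, band="three_band"):
    """Fetch a single image from S3 to local disk."""
    import boto3
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, image_key(image_id, band), dest_path)
```

From a notebook or script you would then call `download_image("6120_2_2", "/tmp/6120_2_2.tif")` to pull one image down before feeding it to the analysis code. Downloading per-image like this keeps the EC2 instance's disk usage small compared to syncing the whole 100 GB at once.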
In high school, I led an AP Physics group for a year and a half, teaching my fellow students the AP Physics B curriculum in preparation for the AP examination. I found teaching to be a deeply fulfilling experience. It added a new dimension of meaning to my knowledge: I wasn't learning just for my own sake, but also so I could help others learn.
In my first year at the University of Toronto, I resolved to become a Teaching Assistant. At UofT, TAs mainly teach tutorials, small interactive classes that focus on the practical applications of concepts taught in lectures. I was motivated by my own experiences in tutorials: the usefulness of having a good TA and the frustration of having a bad one. I wanted to be that good TA.
For our third-place project description, click here. For the video of our presentation, click here.
Last September, I participated in HackOn(Data), a two-day data hackathon in Toronto. It was the event's first year, and it remains one of the few data science competitions in the city. I learned a lot, met like-minded data enthusiasts, and even ended up winning third place with my teammate Chris!