Python and R offer a good combination of powers: dozens of proven engineering, data science, and machine learning libraries, as well as a science-oriented approach to full reproducibility. As I have told you before, I started my coding journey with Python many years ago. I even wrote a large application for production optimization using OpenServer, Prosper, GAP and MBAL by Petroleum Experts while I was on my 3-year tour with Petronas in Kuala Lumpur, Malaysia.
I learned Python 10+ years ago. With R now in my toolbox, it is difficult to go back, but I still code in Python to maintain the old code. What I really dislike: the Jupyter notebooks, although I fell in love with them at first sight, and the acceptance of organized chaos, with multiple versions floating around. I guess you get used to it when you are part of the Py ecosystem.
I just made changes to Rob Hyndman's template to adapt it to my new static website.
Motivation
By nature, I am curious. I am not only interested in the why-of-things but also in the "how": being able to document it and reproduce it later. And that, most of the time, can be a time-consuming affair. Pleasurable, rewarding, but time consuming.
Add to that data science and deep learning and you get an exponential combination.
I own an 8-core, 32GB RAM, 3TB SSD, Quadro K2100M GPU laptop that I originally acquired with the intention of running several virtual machines with Windows, Linux and MacOS, as part of my work as an atypical petroleum engineer.
All of us at some point in our careers have had problems with big datasets: so big that our computers could not handle them. Let's approach the problem from a practical side. I have had similar challenges while working on data science projects with Python and R, with SQL databases, and with plain vanilla huge CSV files.
In one of the cases, the dataset was an Oracle database with well test data from producing oil and gas wells, offshore shallow waters.
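When a CSV file is too big to load at once, one practical workaround is to stream it row by row and keep only running aggregates in memory. The sketch below illustrates the idea with the standard library; the column names (`well`, `oil_rate`) are hypothetical placeholders, not the actual fields of the dataset described above.

```python
import csv
import io

def mean_by_well(csv_text):
    """Stream a CSV one row at a time and accumulate per-well averages,
    so the full file never needs to fit in memory. For a file on disk,
    pass an open file handle instead of io.StringIO."""
    sums, counts = {}, {}
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        # Hypothetical column names for illustration only
        well = row["well"]
        rate = float(row["oil_rate"])
        sums[well] = sums.get(well, 0.0) + rate
        counts[well] = counts.get(well, 0) + 1
    return {w: sums[w] / counts[w] for w in sums}

sample = "well,oil_rate\nA-1,100\nA-1,110\nB-2,200\n"
print(mean_by_well(sample))  # → {'A-1': 105.0, 'B-2': 200.0}
```

The same pattern scales to multi-gigabyte files because memory use grows with the number of distinct wells, not the number of rows.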
I have had my notes here and there: Evernote, network drives, LinkedIn, SPE forums. And I could never find the ideal way to put the data and info together until I found Hugo.
I borrowed the template ideas for my blog from Rob J. Hyndman's blog.
The source code for the site is now hosted on GitHub.
If you find any problem in this site, please feel free to let me know at.
Without really intending to, I got immersed this weekend in remote sensor monitoring.
As you know, these modern ages bring their fair share of sensors. Everything generates data in different ways, discrete and analog, and from there the challenge is not only monitoring in real time but evaluating the data, interpreting it and making sense of it. No pun intended.
I wanted to bring the results of oil well models to a web page so they could be shared with other engineers, and I ended up creating a Linux server with Graphite and StatsD in a couple of hours.
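Feeding a setup like that is simple because StatsD accepts plain-text metrics over UDP in the form `name:value|type`. Here is a minimal sketch of pushing a gauge from Python; the metric name and value are made up for illustration, and the host/port assume StatsD's conventional defaults (localhost, 8125).

```python
import socket

def format_gauge(name, value):
    """Build a StatsD gauge line: '<name>:<value>|g'."""
    return f"{name}:{value}|g"

def send_gauge(name, value, host="127.0.0.1", port=8125):
    """Fire-and-forget: send one gauge metric to StatsD over UDP.
    UDP does not block or error if no server is listening."""
    payload = format_gauge(name, value).encode("ascii")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# Hypothetical metric: the oil rate predicted by a well model
send_gauge("well.A1.oil_rate", 1250.5)
```

Because the protocol is just text over UDP, the same one-liner works from shell scripts or any other language, which is part of why standing up the whole pipeline took only a couple of hours.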