Aarschot – Lier – Mechelen

After clouds and showers, back to beautiful sunny weather! For this last stretch of unvisited area around Leuven, I took the N19 from the ring of Leuven through Wilsele, Putkapel and Wezemaal, reaching the outskirts of Aarschot, then turned right onto the circular road and left onto the N10, heading to Lier via Begijnendijk, Pijpelheide, Goor, Heikant and Koningshooikt. Near Lier, I took the circular road on the left, then another left onto the N14 to Mechelen via Duffel and Elzenstraat. At Mechelen, I reached the circular road and took the N26 back to Leuven.

Loop Leuven-Aarschot-Lier-Mechelen; 85km, 2 hours

Tienen

Nice weather this Sunday morning: I could enjoy the ride starting from Leuven and driving through Korbeek-Lo, Boutersem and Roosbeek on the N3. Near Tienen, I took the N29 and crossed Bunsbeek, Glabbeek, Kapellen and Molenbeek-Wersbeek. Just before the E314, two left turns and I switched to the N2, going back to Leuven through Bekkevoort, Sint-Joris-Winge, Sint-Bernard, Linden and finally Kessel-Lo.

This beautiful triangle has a perimeter of 81km! Around 60min

Bertem – Tervuren

I took the N3 from Leuven straight to Bertem and Tervuren, where the Royal Museum for Central Africa stands (unfortunately closed for renovation until 2018). I then took the N227 up to Nossegem, then switched to the N3 to go back to Leuven.

Leuven-Tervuren; 15km, 20min

Wavre – Gembloux – Jodoigne

After the stormy weather of the last few days, conditions finally improved, so I could leave Heverlee on the N253 towards Overijse and turn left to join the N4 to Wavre. Near Gembloux, I took the N29, turned left again at Jodoigne to join the N240 and the N91, and reached Hamme-Mille to close the loop.

Closing the loop from Heverlee to Gembloux; 40km, 45min

Hoegaarden

For my second road trip in Belgium, I went to Hoegaarden.

From Leuven to Hoegaarden via secondary roads, 25km, 35min

I started my trip today on the ring road on the south side of Leuven, turned right just before the Philipssite, and crossed the railway, the N25 and the E40 before reaching Bierbeek, Opvelp, Meldert and finally Hoegaarden, where the famous beer is produced. It is a very small town, and the brewery was not so hard to find!

Behind the restaurant Kouterhof, the brewery can be visited

I followed the road to Zétrud-Lumay, crossed a small river (the Grande Gette), went through Sainte-Marie-Geest, Jodoigne, Hamme-Mille and Blanden, and reached my home town, Heverlee.

From Heverlee to Waterloo

For my very first road trip in Belgium, I left Heverlee and followed the N253, a scenic road that crosses several villages and small towns in the countryside: Egenhoven, Neerijse, Loonbeek.

From Heverlee to Waterloo on the N253, 35km, 40min

From Huldenberg to Overijse, there is a nice little lake along the right-hand side of the road. I kept driving to Maleizen, then La Hulpe, Hannonsart, Ohain, Ransbèche and finally Waterloo, where I visited the battlefield.

The museum was an excellent opportunity to take a break. I learned about the French Revolution and the rise and fall of Napoleon, and watched a very good 3D movie about the Battle of Waterloo. Outside, I climbed the stairs up the hill (40m) to the statue of a lion, erected as a memorial and a symbol of the return of peace at the end of the Napoleonic era. From there, I could enjoy the panorama, since the weather was quite sunny with only a few clouds.

La Butte du Lion, erected in the 1820s

Python data structures

Yesterday, I completed another course of the specialisation cycle dedicated to Python for data analytics.

Python Data Structures by University of Michigan on Coursera. Certificate earned on October 1, 2016

While the first course dealt with the very basics of variables, conditionals, iterations and functions, this course builds further on data structures such as strings, files, lists, dictionaries and tuples. In general, there are multiple ways to perform a task on data, but only a few of them are simple and smart (“pythonic”). Selecting the right data structure is of utmost importance. Assimilating Python idioms takes a little time, but it is a fundamental step to build on, and it allows for very short and efficient code that performs complex tasks.
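To give a flavour of what this looks like in practice, here is a minimal sketch of my own (not taken from the course material) combining files, dictionaries and tuples: it counts word frequencies in a text file and ranks them with a sorted list of tuples. The file name romeo.txt is just a placeholder.

    # Count word frequencies in a text file, the "pythonic" way:
    # a dictionary accumulates the counts, tuples make the ranking easy.
    counts = dict()
    with open('romeo.txt') as handle:          # placeholder file name
        for line in handle:
            for word in line.split():
                counts[word] = counts.get(word, 0) + 1

    # Flip the dictionary into (count, word) tuples so sorting ranks by frequency
    ranking = sorted(((count, word) for word, count in counts.items()), reverse=True)

    for count, word in ranking[:10]:
        print(word, count)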

How to create a mind?

Kurzweil’s book, written in 2012

Our Universe exists because of its informational content. From pure physics to chemistry and biology, evolution started from simple structures (atoms, carbon molecules…) to create more complex ones (DNA, proteins…), and eventually life! This evolution yielded nervous systems and finally the human brain, which is capable of hierarchical thinking. The neocortex is the central piece: it works with patterns, associates symbols and links them together to give rise to the knowledge we have. Technology is nothing but applied knowledge, made possible by the human ability to manipulate objects and make tools. Reverse-engineering the brain to build thinking machines is probably the greatest project ever, one that can transcend humankind.

Even though this book is certainly not the expression of real scientific work, it is full of optimistic insights and bewildering intuitions about the future. It is amazing to see how technological progress has transformed our societies over the last few decades, and we are possibly witnessing a major transition, never seen before, that is going to change humankind forever. Thinking machines able to compete with humans should appear by the 2030s. A natural consequence of the LOAR (Law of Accelerating Returns, a postulate stating that evolution accelerates as it grows in complexity and capability) is that humans and machines will meld together, and the limits of computing will probably be reached by the end of the century, giving rise to a deeply transformed society, potentially able to colonise space and conquer new solar systems.

Getting started with Python

This week, I have completed the first course of a specialisation cycle dedicated to Python for data analytics.

Programming for Everybody (Getting Started with Python) by University of Michigan on Coursera. Certificate earned on August 25, 2016

Throughout my career as a process engineer, I have constantly faced issues that had to be investigated and understood, most often using whatever data was available. Unfortunately, my experience is that only a tiny fraction of it is actually uploaded into organised, well-structured databases ready to be queried, and raw data is in general not user-friendly at all, almost unreadable. Processes (especially those linked to metrology operations) generate a great deal of raw data, stored in multiple ways and formats. Hence, raw data treatment and preparation is a necessary step of data analysis and inference, but a tedious and low added-value one, unless one comes up with the right tools.
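As a small illustration of that preparation step, here is a sketch of the kind of parsing involved; the export format, field names and values below are invented for the example, not taken from any real metrology tool.

    # Parse a hypothetical semi-structured metrology export into records,
    # then compute a simple summary. Everything here is made up for illustration.
    raw_export = """# TOOL=XYZ-200  DATE=2016-08-20
    # UNITS=nm
    site;x;y;thickness
    1;0.0;0.0;102.4
    2;1.5;0.0;101.9
    3;0.0;1.5;103.1
    """

    records = []
    for line in raw_export.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):   # skip blanks and comment lines
            continue
        if line.startswith('site'):            # skip the header row
            continue
        site, x, y, thickness = line.split(';')
        records.append({'site': int(site), 'x': float(x),
                        'y': float(y), 'thickness': float(thickness)})

    mean_thickness = sum(r['thickness'] for r in records) / len(records)
    print(len(records), 'sites, mean thickness =', round(mean_thickness, 2), 'nm')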

The first tool that comes to mind is the traditional Excel sheet, where data can be imported, filtered and analysed. It is very popular, widespread and versatile. Excel comes with a scripting language, VBA (Visual Basic for Applications), in which macros can be designed to automate tasks. It is a very decent tool, which should be part of the data-analytics survival kit when nothing else is available.

For manipulating huge datasets, however, a much better choice is JMP, a licensed statistical discovery software offered by SAS with an intuitive, Excel-like interface. JMP provides unique features to transform and combine multiple datasets of hundreds of thousands of lines in the blink of an eye: summarising, concatenating, splitting, stacking, subsetting… A set of advanced modules allows complex analyses; my favourite is the profiler, where multi-variable and multi-response trends can be displayed immediately, together with the regression parameters of the underlying model, which is invaluable for experimental design. Repetitive tasks can be automated with an integrated scripting language (JSL), whose powerful macros can build fragments of code.

While JMP is a really great piece of software for data manipulation and on-the-fly analysis, its scripting language lacks portability, is restricted to the JMP environment (which is licensed), and basically accepts only datasets as inputs.

A programming language like Python can alleviate these shortcomings. It is a powerful high-level language: easy to learn, universal, open-source, free and portable. With Python, there is virtually no limitation other than hardware resources and programming skills. A very active community has been continuously designing advanced modules and libraries, allowing for high-productivity programming with endless potential applications. For all these reasons, I consider Python a smart choice as a natural extension of the professional software specialised in data analytics, and my plan is to go through the full specialisation cycle.
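As a rough, hypothetical sketch of where this is heading, here is how the kind of dataset operations mentioned above for JMP (concatenating, summarising, stacking) might look in Python with the pandas library, which is not covered in this first course; the column names and values are made up.

    # Two small, made-up measurement tables
    import pandas as pd

    lot_a = pd.DataFrame({'lot': 'A', 'wafer': [1, 2], 'cd': [45.1, 44.8]})
    lot_b = pd.DataFrame({'lot': 'B', 'wafer': [1, 2], 'cd': [45.6, 45.3]})

    # Concatenate the two tables into one dataset
    data = pd.concat([lot_a, lot_b], ignore_index=True)

    # Summarise: mean and standard deviation of 'cd' per lot
    print(data.groupby('lot')['cd'].agg(['mean', 'std']))

    # Stack: reshape from wide to long format (one row per lot, wafer and variable)
    print(data.melt(id_vars=['lot', 'wafer'], var_name='variable', value_name='value'))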