Hi Folks! It’s been 10 months since my last blog. 2017 has been a good year for me, although I don’t have much to show for it. I would go so far as to say that 2017 was the most productive year of my life. What follows is a summary of this year’s events.

2016 was not that bad. Throughout 2016, I met so many interesting people in the Free Software circle. I met activists (FSMK) and developers (BangPypers) from Bengaluru. I interacted with KariGLUG members who are actively working on establishing a decentralized resource-sharing network at Alagappa University.

I had so much free time (emphasis on free). I would wake up in the morning and decide how to spend my day: which course to learn that day. I enjoyed Melanie Mitchell’s course at Complexity Explorer, and Charles and Michael’s Machine Learning courses at Udacity. I loved experimenting with p5.js while watching the ever-so-charming Dan Shiffman’s YouTube videos on Creative Coding. I consumed every popular anime out there: Code Geass, NGE, Parasyte, AoT, Tokyo Ghoul, Boku dake ga Inai Machi, Mirai Nikki, Shiki, … I may have wasted a good part of 2016.

2017 challenged me to turn away from escapism and face reality. I did indulge in anime, gaming, TV, and movies from time to time. I was introduced to the fantastic world of surrealism through Lynchian films; I watched almost every one of David Lynch’s movies. Although Blue Velvet is considered Lynch’s greatest creation, I may have enjoyed Lost Highway more.

In January 2017, my application to the Google Brain Residency Program was rejected once again (not much of a surprise there). Towards the end of January, I was asked to conduct a beginner-friendly workshop on Machine Learning at 4ccon, organized by FSFTN (Free Software Foundation Tamil Nadu).

In February 2017, I moved to Hyderabad to work for voicy.ai as a research engineer. I worked on the Dialog State Tracking problem and, as part of this work, implemented a few interesting new neural networks for chatbots. To my surprise, my implementation of Hybrid Code Networks was used by Charles Akin-David and team at Stanford for their CS224 course project.

During this time, I met the Swecha team. In March, they conducted a workshop on Community Networks with FreedomBox. If you’ve read my previous blogs, you would know that I have a fascination with decentralized, community-owned networks. I was pleasantly surprised to learn that Swecha had successfully built such a network and managed to provide a low-cost internet connection to a tiny village named Gangadevipally. The workshop was intense; they shared every piece of their wisdom with us in detail. By the end of the day, I was confident that we could actually build a network jointly owned by the local community that uses it, with the ISPs acting merely as gateways. It is not just a proof of concept anymore. Later I read the story of Gangadevipally, which in itself is quite intriguing.

A few weeks later, I was invited to conduct a one-day introductory Machine Learning workshop for the Swecha team at “The Hub”. I choose to think it went pretty well. I feel bad for not spending more time with these people.

In May, I joined datalog.ai. This time I worked on Reading Comprehension. This was uncharted territory; I had to learn and implement so many new neural network constructs. The Attention Mechanism is one of them. I wish someone had explained its computational simplicity to me, instead of indulging in metaphors. There were so many failed attempts at implementing high-capacity networks, and nothing keeps you down like failures. Eventually I succeeded with the help of my friend Selva, who has always been there to support me in my quests. We spent a month and a half solving bAbI and Cloze-style QA tasks (CBT, CNN/Daily Mail) with Memory Networks and Attentive Readers.
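For the record, here is roughly what I wish someone had shown me back then: a minimal numpy sketch of dot-product attention. The names and shapes are my own, not from any particular paper; the point is that attention is just a matrix multiply, a softmax, and a weighted sum.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """Dot-product attention: score each key against the query,
    normalize the scores, and return the weighted sum of values."""
    scores = keys @ query      # (T,) one score per memory slot
    weights = softmax(scores)  # (T,) non-negative, sums to 1
    return weights @ values    # weighted average of the values

# toy example: 4 memory slots, dimension 8
T, d = 4, 8
q = np.random.randn(d)
K = np.random.randn(T, d)
V = np.random.randn(T, d)
context = attention(q, K, V)  # (d,)
```

That is all there is to it; the “soft lookup” and “focus” metaphors describe exactly this weighted average.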

In September, I joined the A.I. Research Lab of Saama, headed by my friend and mentor, Malaikannan Sankarasubbu. Among other things, we work on Clinical Text Analysis. Last month (November), Selva joined our team. Now we are working together on some interesting problems, just like we did in college (2009-2013).

In Puducherry, we celebrated this year’s Software Freedom Day (SFD) on 1st October. My team put up a stall on Data Visualization and Creative Coding with p5.js. My favorite is a representation of 10,000 digits of Pi as a colored web, an idea inspired by Numberphile’s YouTube video, Pi is beautiful. All the sketches are live.
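The idea behind the Pi web is simple: place the digits 0-9 around a circle and draw a chord from each digit of Pi to the next, colored by the source digit. The actual sketches were written in p5.js; what follows is only a rough Python/matplotlib analogue of the same idea, with digit placement and colors as my own choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpmath import mp

# first 1000 digits of Pi as a list of ints
mp.dps = 1001
digits = [int(c) for c in mp.nstr(mp.pi, 1000).replace('.', '')]

# place the ten digits 0-9 evenly around a circle
angles = np.linspace(0, 2 * np.pi, 10, endpoint=False)
points = np.stack([np.cos(angles), np.sin(angles)], axis=1)

fig, ax = plt.subplots(figsize=(6, 6))
for a, b in zip(digits, digits[1:]):
    # chord from each digit to the next, colored by the source digit
    (x0, y0), (x1, y1) = points[a], points[b]
    ax.plot([x0, x1], [y0, y1], color=plt.cm.tab10(a),
            alpha=0.2, linewidth=0.5)
ax.set_aspect('equal')
ax.axis('off')
plt.show()
```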

Recently I completed the first part of Daphne Koller’s Probabilistic Graphical Models course. Two more to go. I was completely lost the last time I took this course (2015). I must have gotten better at understanding the framework of probability. Perhaps I wasn’t desperate enough to learn Bayesian Inference back in 2015.

What else happened this year?

Ah yes. We are building a community of A.I. practitioners in Chennai. Our fearless leader, Murugesh, is bringing together A.I. researchers from academia and practitioners from industry. The plan right now is to meet on weekends and discuss something interesting. Last month, I gave a talk on NLU to a small group of people from industry at the Vakil Search office, organized by Murugesh and team.

So many interesting things have happened since. Last week, I listened to Dr. Ravindran’s talk on Reinforcement Learning at Nvidia Developers Connect (22 Nov). I gave a talk on Neural Question Answering to academics as part of the Faculty Development Programme on Deep Learning for Image and Text Analysis at SSN College of Engineering (24 Nov).

And last Sunday, I met a bunch of Computer Vision practitioners at Mad Street Den (thanks to my good friend, Syed) to discuss the architecture and impact of Hinton’s Capsule Net. Slides are available here. Normally I try to stay away from Computer Vision (I can be easily distracted by shiny things), but after reading about Capsule Net, I just couldn’t resist.
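To give a flavor of what we discussed: the heart of the paper is routing-by-agreement between capsule layers. Below is a minimal numpy sketch of the squash non-linearity and the routing loop from Sabour et al.’s paper, stripped of all the convolutional plumbing; the shapes and iteration count here are illustrative, not the paper’s.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, axis=-1, eps=1e-8):
    """Capsule non-linearity: shrink a vector's length into [0, 1)
    while preserving its direction."""
    norm2 = np.sum(s * s, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def route(u_hat, iterations=3):
    """Routing by agreement.
    u_hat: lower capsules' predictions for upper capsules,
    shape (n_lower, n_upper, dim)."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))            # routing logits
    for _ in range(iterations):
        c = softmax(b, axis=1)                  # coupling coefficients
        s = np.einsum('ij,ijd->jd', c, u_hat)   # weighted sum of predictions
        v = squash(s)                           # upper capsule outputs
        b += np.einsum('ijd,jd->ij', u_hat, v)  # agreement boosts the logits
    return v

# toy example: 6 lower capsules predicting 3 upper capsules of dim 4
u_hat = np.random.randn(6, 3, 4)
v = route(u_hat)  # (3, 4)
```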

Solving SQuAD (Stanford Question Answering Dataset) is still unchecked on my TODO list, among other things. I feel like this is something that I MUST do before moving on with my life. A few days ago, I started implementing the Dynamic Coattention Network. The architecture diagram would have scared me away had I seen it last year. It’s a good thing that I’m now equipped to implement and play with this network.
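The coattention encoder at the core of the network is less scary than the diagram suggests. Here is a stripped-down numpy sketch of it as I understand the paper: the sentinel vectors, projections, and the fusion BiLSTM are omitted, the shapes are row-major, and the variable names only loosely follow the paper.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coattention(D, Q):
    """D: (m, h) document encodings, Q: (n, h) question encodings.
    Returns an (m, 2h) coattention context, one vector per document word."""
    L = D @ Q.T               # (m, n) affinity between every word pair
    A_Q = softmax(L, axis=0)  # attend over the document, per question word
    A_D = softmax(L, axis=1)  # attend over the question, per document word
    C_Q = A_Q.T @ D           # (n, h) document summaries, per question word
    C_D = A_D @ np.concatenate([Q, C_Q], axis=1)  # (m, 2h) context
    return C_D

m, n, h = 30, 10, 16
D, Q = np.random.randn(m, h), np.random.randn(n, h)
C_D = coattention(D, Q)  # the full model feeds [D; C_D] to a BiLSTM
```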

I am writing a blog on Neural Question Answering based on my literature survey and experiments. In fact, as I started writing it, the events of this year bled into it, and it slowly turned into this blog entry. I do have a 6000-word draft of the Neural QA blog that I’ve been too scared to publish for a while now. Let me tweak it a bit and share it with you, hopefully tomorrow.

Right now I’m at home in Puducherry, wondering what adventures 2018 will bring. I hope it’ll challenge me as much as 2017 did.