I submitted the following essay to UBC as the introduction to a Master’s program letter of intent, so please forgive the self-engrossed tone. Haven’t heard back yet, hoping for good news! :crossed_fingers: :joy:

<essay>

There is a paradigm shift afoot in the software industry. The expectations of software users are growing more sophisticated, and the role of the software programmer must change accordingly. Specifically, users now expect to make large-scale decisions based on an ever-increasing, real-time inflow of data. They also expect machines to make lower-level, routine decisions on their behalf, with little or no personal involvement.

To respond to these demands, the programmer can no longer simply supply hand-crafted data analysis and processing algorithms tailored to a specific problem or a specific request. The turnaround time to hand-craft algorithms is prohibitive given the pace, scope and volume of demand for data analysis. To meet this new demand, the programmer must become proficient in a new class of computing tools: machine learning algorithms.

The machine learning paradigm frees the programmer from continuously reinventing and reimplementing data analysis and processing algorithms. It also lifts the level of abstraction at which the programmer works, from raw data to “features”, much closer to the decision-making level to which end users are accustomed. The programmer’s role shifts to that of a “machine learning engineer”: an engineer who understands the function, scope and tolerances of machine learning components, and is able to assemble architectures made of such components to meet problem specifications. Throughout this process, the programmer participates in enhancing existing machine learning components and creating new ones.

Therein lies my interest in studying machine learning. As a professional programmer, I need to prepare for (or indeed, catch up to) the machine learning paradigm shift to remain relevant in my industry and to contribute meaningfully to it. Beyond acquiring and practicing the core techniques of machine learning, my interests lie in the following topics:

  • Computational linguistics: I’ve been exposed to many aspects of multilingual text processing throughout my professional career, in addition to my innate interest in language as a human model of knowledge representation. Given the success of ML methods in NLP, I am looking forward to diving into this domain.
  • Music analysis: As a lifelong music player and learner, I am interested in developing tools that help the musician work faster, especially when it comes to semantic music analysis (e.g. tonal analysis, chord extraction, music indexing and retrieval).
  • Hybrid and collaborative learning: how to use feedback from both humans and machines to enhance the model learning process. I am especially interested in the ability to supply the learning algorithm not only with raw data, but also with rules born of human experience; and, conversely, in how a learned model can be presented to humans so that they can extract novel rules and guidelines pertaining to the topic of interest.
  • Ethics and security of computing: how to avoid misusing the enormous data processing power afforded by ML in ways that are detrimental to individuals and societies, including issues such as purposefully defeating ML algorithms (a.k.a. adversarial attacks).

</essay>

Interestingly enough, today I stumbled (via Hacker News, of course) on a Facebook post (!) by Yann LeCun that reads like a manifesto, renaming Deep Learning as “Differentiable Programming”. The term resonates with me because it emphasizes a) that what is happening is a new form of programming, and b) that functions (in the programming sense) need to be differentiable in order for the machine to learn patterns, instead of needing to be taught explicitly about all the cases. I am of course romanticizing gradient-based optimization algorithms :-) Here’s the post:

OK, Deep Learning has outlived its usefulness as a buzz-phrase. Deep Learning est mort. Vive Differentiable...

Posted by Yann LeCun on Friday, 5 January 2018
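To make the “differentiable programming” idea concrete, here is a minimal sketch (entirely my own illustration, not from LeCun’s post): a tiny differentiable “program” whose parameters are fit by gradient descent, so the machine recovers a rule it was never explicitly told. All names and numbers here are made up for the example.

```python
def predict(w, b, x):
    # A tiny differentiable "program": a linear function of x.
    return w * x + b

def grad(w, b, data):
    # Analytic gradients of the mean squared error with respect to w and b.
    n = len(data)
    dw = sum(2 * (predict(w, b, x) - y) * x for x, y in data) / n
    db = sum(2 * (predict(w, b, x) - y) for x, y in data) / n
    return dw, db

# Examples generated by a "hidden" rule, y = 3x + 1. The program is
# never told the rule; it only sees (x, y) pairs.
data = [(x, 3 * x + 1) for x in range(10)]

# Gradient descent: nudge the parameters downhill on the loss surface.
# This only works because predict() is differentiable in w and b.
w, b = 0.0, 0.0
for _ in range(2000):
    dw, db = grad(w, b, data)
    w -= 0.01 * dw
    b -= 0.01 * db

print(round(w, 2), round(b, 2))  # converges toward w = 3, b = 1
```

Because every step of `predict` is differentiable, composing many such functions (as deep networks do) still yields gradients end to end, which is the property the “Differentiable Programming” label highlights.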