Working AI: At the Office with MLE Ruben Sipos

Location: San Francisco, California

Age: 33

Education: B.Sc. in Computer Science and Mathematics, 2009, University of Ljubljana; PhD in Computer Science, 2014, Cornell University

Years in industry: 5

Favorite movie: Event Horizon (1997)

Favorite machine learning researcher: Thorsten Joachims

How did you first get started in AI?

My first taste was during one of my college courses. My professor suggested that we join an online ML competition. It wasn’t required for credit or part of the grade, just for fun and some hands-on experience. The class was split into two groups that competed against each other. The game was ON! It turned out we were even good enough to get some prizes…

That sparked my interest in the field, and it influenced my choice of PhD thesis topic and my work in industry afterwards.

What are you working on at Pinterest?

Pinterest is a great destination for ideas and inspiration, but we don’t just want people to find ideas online – we want them to take action on them offline, such as making a meal or buying a product they discover.

Toward this goal, I’m working on improving the relevance of products in search results. Is the user looking for inspiration, or is now a good time to show products to act on? How do we create unified ranking models for products and non-products? And how do we do all of this cost-effectively?

My focus right now is on leveraging metadata to better understand the relevance of search results. Instead of just matching keywords, we want to understand words that relate to a brand, fabric, or color. Finding a good solution requires experimentation, data analysis, and customizing ML approaches to work with the peculiarities of our data.
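To make that concrete, here is a minimal, hypothetical sketch in Java of what metadata-aware scoring might look like: a query term that matches a structured attribute (brand, fabric, or color) earns a larger boost than a plain keyword hit in the title. The class name, fields, and weights are illustrative assumptions, not Pinterest’s actual ranking code.

    // Hypothetical sketch of metadata-aware relevance scoring: besides plain
    // keyword overlap with the title, query terms that match structured product
    // attributes (brand, fabric, color) contribute an extra boost. Names and
    // weights are illustrative only.
    import java.util.Map;
    import java.util.Set;

    public class MetadataRelevance {

        /** Scores one product against a tokenized query. */
        static double score(Set<String> queryTerms,
                            Set<String> titleTerms,
                            Map<String, String> metadata) {
            double score = 0.0;
            for (String term : queryTerms) {
                if (titleTerms.contains(term)) {
                    score += 1.0;                      // plain keyword match
                }
                // Extra credit when the term matches a structured attribute,
                // e.g. brand, fabric, or color pulled from product metadata.
                for (Map.Entry<String, String> attr : metadata.entrySet()) {
                    if (attr.getValue().equalsIgnoreCase(term)) {
                        score += 2.0;                  // attribute matches weigh more
                    }
                }
            }
            return score;
        }

        public static void main(String[] args) {
            Set<String> query = Set.of("red", "linen", "dress");
            Set<String> title = Set.of("summer", "dress");
            Map<String, String> metadata = Map.of("color", "red", "fabric", "linen");
            System.out.println(score(query, title, metadata));  // 5.0
        }
    }

In practice the weights would be learned rather than hand-set, but the sketch shows why structured attributes can distinguish a query like “red linen dress” from a generic keyword match.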

Take us through your typical workday.

I try to be in the office early (at least compared to others). Because it’s still cold outside, I start my day with a cup of hot herbal tea. I spend some time catching up with emails and Slack messages. Meetings rarely get scheduled during this time, so I can work without interruption until lunch.

Lunchtime is a great opportunity to catch up with coworkers on the latest thing they are working on, or just to have a friendly chat: crazy ideas for the next offsite, or how the latest culinary adventure turned out.

Afternoons can get broken up with meetings, but having everyone on the same page at all times is essential for maintaining velocity. My work often spans at least one or two other teams, so I have to be aware of how their projects might interact with my work and vice versa. I try to leave early (to offset coming in earlier) but don’t always manage it due to late meetings or wanting to finish the task at hand. Sometimes I reply to messages or continue working for a bit when I get home, but I do try to maintain a relatively strict work-life balance.

What’s the most challenging thing about it? What do you like most?

Limited resources really up the challenge across the board. When I was earning my PhD, I could focus on a single well-defined problem and explore ideas with a simple, hacky implementation. Now I have to improve three or more components in tandem, experiments have to fit on production infrastructure, and off-the-shelf ML solutions will not beat the baseline. So it boils down to prioritizing user impact, picking the right solution, and coming up with clever tricks to squeeze more out of the data we have.

On the other hand, Pinterest is a great place to meet those challenges because we are big enough that I can find others who have encountered similar ML problems and are willing to help. At the same time, we are still like a startup in some respects. That makes it easier to make bigger design changes, because you know everyone involved and can stop by their desk and poke them to get things moving if necessary.

What tech stack do you use and why?

  • Java: Easy to learn and find developers — I guess that’s why most of the codebase is in Java. It’s fast enough (though some performance-critical parts are in C++). And it’s robust to crappy code (which we are cracking down on, but there’s still a long road ahead).
  • Thrift: I like Protocol Buffers more, but it does the job. I don’t see any real difference in functionality, but Thrift has rough edges that can bite you if you’re not careful.
  • IntelliJ: Working with Java, auto-complete is a must for me. Having a GUI also helps when dealing with convoluted code spread across many different places. My second go-to editor is Vim, which I still use for non-Java or non-code stuff.
  • Bazel: Most of the code is finally moving away from Maven, which is pure torture in my experience. I do miss farming compilation out to remote nodes, but Bazel’s ability to cache and track dependencies makes it decently fast.
  • Screen: Code changes are mostly done on my laptop, but everything else runs on remote machines, so using Screen is essential for keeping things running in the background and restoring terminals across disconnected SSH sessions.
  • AWS: Practically all of our services run on EC2, and S3 is used as universal data storage.

What did you do before working in AI? How does it factor into your work now?

For a while I worked on the semantic web and NLP. That quickly became tied to AI when I wanted to push the state of the art in document understanding. A lot of that experience remains relevant today because search relies heavily on text.

How do you keep learning?

We have a reading group that meets weekly and goes over one or more interesting papers. It’s a great way to have good discussions and also to block out time on my calendar for continued learning; otherwise something else always comes up! The other way I learn is by getting stuck and then looking at what’s out there: Did any past research deal with similar issues, and have there been any advances since the current approach was implemented?

What AI trend are you most excited about?

In the last few years I’ve seen quite a few very impressive results based on deep neural networks. They are powered by ever-increasing complexity, which brings painfully long training times that make it difficult to iterate and explore possibilities.

There has been some publicity around custom hardware accelerators for learning that haven’t yet trickled into the mainstream. This hardware might greatly increase the pace of new discoveries once it becomes a commodity. Some of it can already be leased in the cloud, but it’s currently still a bit clunky and expensive. The difference in turnaround time and ease of experimentation could be stunning, from what I’ve seen.

Why did you choose to work in industry vs. academia?

This was mostly a pragmatic choice. The only academic positions that appealed to me were at top universities, and those are really hard to get. So I looked at the kinds of offers I could get from industry and what I would be working on.

What advice do you have for people trying to break into AI?

The industry embellishes a bit what working in AI entails. There are still very interesting challenges and unique datasets to work with, but there’s also a lot of grungy work to be done unless you’re high up on the ladder. Good coding skills, an understanding of general serving systems, comfort with processing big data, and other back-end knowledge are great assets. So don’t bet everything on just having great ML skills. Make sure your expertise is well rounded.

Ruben Sipos is a Machine Learning Engineer at Pinterest. You can find him on LinkedIn and his Cornell web page.

Do you know someone who’s working in AI? Nominate your friend, coworker, or idol by sending us a note at [email protected]!