Image by Adrien Olichon

About Me

Deep neural network researcher (PhD) working on representation learning and multimodality, employing transformers and other neural network architectures, as well as a range of deep learning algorithms.

 

Having trained in both the theoretical and the computational domains, I am interested in (deep) neural networks, including the building blocks and fundamental aspects of deep learning, such as architecture design, learning algorithms, compression, and reasoning.


Have published in relevant conferences and journals (e.g., ICML'24 AI4MATH workshop, NeurIPS'24 Multimodal Algorithmic Reasoning workshop, European Conference on Information Retrieval). Open-source contributor of deep learning research code to TensorFlow, MXNet, and Hugging Face Transformers.


Drawn to interesting open problems in (deep) neural network architectures and the associated machine learning algorithms.


While I have worked exclusively on deep neural networks for the past several years (and plan to continue), my PhD research (probabilistic statistics, Virginia Tech) was in algorithms related to maximum likelihood estimation and Kullback-Leibler divergence. At VT, I also had the honor of briefly collaborating with I. J. Good, one of Alan Turing's colleagues.

 

Majored in computer science both in high school and for my undergraduate degree (where I was first introduced to neural networks). In that previous life, in a land far, far away: math and physics olympiads (silver medal, Romania nationals).

 

Lives and works in New York City.

Google Scholar

DBLP

GitHub

LinkedIn

Research highlights on X (Twitter):

* (deep) neural network architecture for representation learning in search



Contact
