1. About Me
In official documents, my name appears as Nguyen Van Ty; in English name order, it would be Ty Van Nguyen. Feel free to call me by my first name, Ty.
I am a PhD student in Computer and Information Science at the University of Pennsylvania and a VEF fellow, advised by Dr. Vijay Kumar and Dr. Daniel D. Lee. I received a Bachelor's degree in Engineering from Hanoi University of Science and Technology (2012) and a Master's degree in Computer Engineering from Ulsan National Institute of Science and Technology, under the supervision of Professor Tsz-Chiu Au. (He is currently looking for talented and enthusiastic students. If you are interested in his research, please do not hesitate to reach out to him at email@example.com.)
My long-term goal is to help robots and intelligent agents coexist with humans. Machines are still far from matching humans at tasks such as navigating, speaking, listening, and manipulating objects. However, I believe we will eventually reach the point where machines can perform these tasks and communicate naturally with people. Humans could then be relieved of heavy and dangerous work such as resource mining, infrastructure construction, and disaster rescue. Moreover, humanoid robots already on the market can offer companionship, and service robots can provide daily support to the elderly. My career vision is to contribute to this progress by studying critical problems in the related fields.
2. Recent Research Projects
1. Fast and Robust Deep Learning Models for Visual Inertial Odometry (April 2016 – Present)
Visual inertial odometry (VIO) estimates the change in a mobile platform's position and orientation over time from the measurements of on-board cameras and an inertial measurement unit (IMU). VIO has been a highly active research area thanks to the miniaturization and low cost of these two sensing modalities. However, it remains challenging when accuracy, real-time performance, robustness, and operating scale are all taken into consideration. For example, traditional feature-based estimation approaches can fail when good features cannot be identified, and can be slow. In this project, we aim to develop deep learning algorithms that run fast and perform well even where traditional methods fail.
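To make the feature-based baseline concrete, here is a minimal NumPy sketch (illustrative only, not this project's code) of the classic Direct Linear Transform: given four or more point correspondences between two views of a plane, the homography is the null vector of a linear system. A real pipeline would first obtain the correspondences from a feature detector and reject outliers with RANSAC, which is exactly the stage that can fail when good features are scarce.

```python
import numpy as np

def estimate_homography_dlt(src, dst):
    """Estimate a 3x3 planar homography from >= 4 correspondences
    using the Direct Linear Transform (DLT).

    src, dst: (N, 2) arrays of matching points with dst ~ H @ src.
    """
    assert src.shape == dst.shape and src.shape[0] >= 4
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on
        # the 9 entries of H (row-major), derived from u and v.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # H is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def apply_homography(H, pts):
    """Map (N, 2) points through H in homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Recover a known homography from noiseless correspondences.
H_true = np.array([[1.1, 0.02, 5.0],
                   [-0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
dst = apply_homography(H_true, src)
H_est = estimate_homography_dlt(src, dst)
print(np.abs(H_est - H_true).max())  # close to machine precision
```

With noisy matches, the same system is solved in a least-squares sense, which is why a few bad correspondences can ruin the estimate without robust outlier rejection.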
1. Ty Nguyen, Steven W. Chen, Shreyas S. Shivakumar, Camillo J. Taylor, and Vijay Kumar. “Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model.” arXiv preprint arXiv:1709.03966 (2017), to appear in ICRA 2018 (pdf)(github).
Abstract: Homography estimation between multiple aerial images can provide relative pose estimation for collaborative autonomous exploration and monitoring. The usage on a robotic system requires a fast and robust homography estimation algorithm. In this study, we propose an unsupervised learning algorithm that trains a Deep Convolutional Neural Network to estimate planar homographies. We compare the proposed algorithm to traditional feature-based and direct methods, as well as a corresponding supervised learning algorithm. Our empirical results demonstrate that compared to traditional approaches, the unsupervised algorithm achieves faster inference speed, while maintaining comparable or better accuracy and robustness to illumination variation. In addition, on both a synthetic dataset and representative real-world aerial dataset, our unsupervised method has superior adaptability and performance compared to the supervised deep learning method.
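The core of an unsupervised objective like the one described above is a photometric error: warp one image by the predicted homography and compare it pixel-wise to the other, so no ground-truth homographies are needed. The following NumPy sketch is only an illustration of that loss under assumed names; it uses a non-differentiable nearest-neighbour warp, whereas training a network requires a differentiable warping layer inside the optimization loop.

```python
import numpy as np

def warp_nearest(img, H):
    """Warp img by homography H using inverse mapping with
    nearest-neighbour sampling; out-of-bounds pixels become 0."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones_like(xs)
    # For each output pixel (x, y), look up its source H^-1 (x, y).
    pts = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3).T  # 3 x N
    src = np.linalg.inv(H) @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out.reshape(-1)[valid] = img[sy[valid], sx[valid]]
    return out

def photometric_l1(img_a, img_b):
    """Mean absolute pixel difference: the unsupervised training signal."""
    return np.abs(img_a - img_b).mean()

rng = np.random.default_rng(0)
img1 = rng.random((64, 64))
H = np.array([[1.0, 0.0, 4.0],   # a pure 4-pixel horizontal shift
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
img2 = warp_nearest(img1, H)     # simulated "second view"

loss_correct = photometric_l1(warp_nearest(img1, H), img2)
loss_identity = photometric_l1(img1, img2)
print(loss_correct, loss_identity)  # the correct H drives the loss to 0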
[Figure: Unsupervised model diagram]