Destiny is not a matter of chance; it is a matter of choice. It is not a thing to be waited for, it is a thing to be achieved.

― William Jennings Bryan

I’m a PhD candidate in the Mechanical Engineering Department at the University of Michigan. I am part of the Neurobionics Lab, which is affiliated with the Michigan Robotics Institute and led by Dr. Elliott Rouse. My research lies at the intersection of robotics, computer programming, and mechatronics. My PhD work aims to advance human mobility (i.e., locomotion) by designing control principles for wearable robotic systems. In particular, I'm interested in high-level, decision-making control strategies for classifying ambulation mode. I use the tools of system dynamics, system identification, machine learning, and control to develop wearable robotic technologies. My past research includes scene segmentation and localizing grasp affordances for mobile manipulation in real-world scenarios. Although my research areas may seem broad, they all follow from one question: how do humans and robots perceive the world and interact with each other? I believe pursuing this question will not only satisfy my curiosity but also improve the quality of life of individuals, which is the core motivation of my work.

I received a B.S. degree in Physics from Korea University in 2015. While in Korea, I worked on rehabilitation robotics research at the Center for Bionics at the Korea Institute of Science and Technology. Before joining the Neurobionics Lab, I worked in the Automotive Research Center at the University of Michigan from 2016 to 2017 and interned at Fetch Robotics, Inc. as a robotics software engineer during Summer 2017.

Aside from research, I enjoy making computer games, playing basketball, and watching sci-fi movies.

Current Projects

Intelligent and Autonomous Wearable Robotic Systems

Environment and User Aware Intelligent Exoskeleton

Recent advancements in wearable technologies have enabled the development of lightweight and powerful wearable robotic systems; however, deploying these systems in the real world remains challenging. To be useful in daily life, the robots need to support the multiple activities of the human wearer. To that end, we propose to build a user- and environment-aware wearable robotic system. In this project, we use a Dephy Exo Boot, a powerful but comfortable ankle exoskeleton. In the first phase of the project, we developed a high-performance CNN-based intent recognition system, which infers the wearer's activity intent to provide seamless transitions between different activities. In the second phase, we plan to develop a learning-based controller that minimizes the energetic cost while maximizing the assistance for the wearer. Finally, we will merge these strategies to assist the wearer across the diverse range of activities encountered in the real world.
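The actual network, sensor set, and training data are specific to the project; purely as an illustration, here is a minimal NumPy sketch (with made-up channel counts and mode labels) of how a small 1-D CNN could map a window of wearable-sensor signals to activity-mode probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ambulation modes for illustration only
MODES = ["level-walk", "ramp-ascent", "ramp-descent", "stair-ascent", "stair-descent"]

def conv1d(x, w, b):
    """Valid 1-D convolution with ReLU: x (C_in, T), w (C_out, C_in, K), b (C_out,)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for t in range(t_out):
        out[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return np.maximum(out, 0.0)

def classify(window, w, b, w_fc, b_fc):
    """Conv -> global average pool -> linear layer -> softmax over modes."""
    h = conv1d(window, w, b)
    pooled = h.mean(axis=1)
    logits = pooled @ w_fc + b_fc
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Untrained random weights, only to show the shapes and data flow
w = rng.normal(size=(8, 6, 5)) * 0.1      # 8 filters over 6 sensor channels, kernel 5
b = np.zeros(8)
w_fc = rng.normal(size=(8, len(MODES))) * 0.1
b_fc = np.zeros(len(MODES))

window = rng.normal(size=(6, 100))        # 100-sample window of 6 IMU/encoder channels
probs = classify(window, w, b, w_fc, b_fc)
print(MODES[int(np.argmax(probs))])
```

In practice the filters and classifier weights would be learned from labeled gait data; the random weights here only demonstrate the structure of the inference step.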

The Open Source Robotic Leg

Unified, Standardized, and Scalable Robotic Leg Platform

Researchers typically develop their own robotic legs to test their control strategies, which requires a substantial investment of money and time. Furthermore, the resulting control strategies are specific to each group's hardware, which hinders comparison across research groups. In this project, we seek to lower the barrier to conducting leg control research by providing a single, ubiquitous, and low-cost robotic leg system.

Past Projects

Reinforcement Learning with Musculoskeletal models

NeurIPS 2019 Competition: Learn to Move - Walk Around

Competition Result: Passed Round 1, discontinued at Round 2

Developing an intelligent controller for a physiologically plausible human model running in a physics-based simulation environment. Deep reinforcement learning is used to solve the task. By participating in the challenge, I was interested in exploring: 1) how closely physics-based simulation can represent real-life biomechanics, and 2) whether RL can provide insight into creating generalized controllers for powered prostheses. This task was part of the official NeurIPS 2019 competition track.
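Training on the actual musculoskeletal model requires the competition's simulation stack; as a structural sketch only, the snippet below swaps in a toy stand-in environment and naive random search over linear policies in place of deep RL, to show the rollout-and-improve loop that such controller training shares:

```python
import numpy as np

rng = np.random.default_rng(1)

class StubWalker:
    """Toy stand-in for the walking simulator (not the real competition API).
    State is a small vector; reward favors forward 'velocity' and small actions."""
    def __init__(self):
        self.state = np.zeros(4)

    def reset(self):
        self.state = rng.normal(size=4) * 0.1
        return self.state

    def step(self, action):
        self.state = 0.9 * self.state + 0.1 * action
        reward = self.state[0] - 0.01 * np.sum(action ** 2)
        return self.state, reward, False

def rollout(env, policy_w, steps=50):
    """Run one episode with a linear policy and return the total reward."""
    s = env.reset()
    total = 0.0
    for _ in range(steps):
        a = np.tanh(policy_w @ s)   # linear policy with tanh squashing
        s, r, done = env.step(a)
        total += r
        if done:
            break
    return total

# Naive random search: keep the best of several randomly sampled policies
env = StubWalker()
best_w, best_ret = None, -np.inf
for _ in range(20):
    w = rng.normal(size=(4, 4))
    ret = rollout(env, w)
    if ret > best_ret:
        best_w, best_ret = w, ret
print(best_ret)
```

A deep RL method replaces the linear policy with a neural network and the random search with gradient-based policy updates, but the interaction loop is the same.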

Scene Segmentation

Segmenting Unknown Objects

A model-free approach that segments unknown objects in a scene without prior knowledge. I created a ROS wrapper for the existing work of Richtsfeld et al. [1], which divides the point cloud into small surface patches and uses Support Vector Machines to evaluate the relationships between patches and group them into objects.

[1] A. Richtsfeld, T. Mörwald, J. Prankl, M. Zillich, and M. Vincze, “Segmentation of unknown objects in indoor environments,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012.
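The real pipeline runs Richtsfeld's method inside a ROS node; the toy sketch below, with hand-set weights standing in for a trained SVM and a simplified two-value feature (centroid distance and normal agreement), illustrates the core idea of scoring patch pairs and merging positively scored pairs into objects:

```python
import numpy as np

def patch_features(p_a, p_b):
    """Pairwise features between two surface patches (centroid + unit normal)."""
    d = np.linalg.norm(p_a["centroid"] - p_b["centroid"])
    cos_n = float(np.dot(p_a["normal"], p_b["normal"]))
    return np.array([d, cos_n])

def same_object(feat, w, b):
    """Linear decision function standing in for a trained SVM:
    positive score means 'these patches belong to the same object'."""
    return float(feat @ w + b) > 0.0

def group_patches(patches, w, b):
    """Union-find over pairwise 'same object' decisions."""
    parent = list(range(len(patches)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(patches)):
        for j in range(i + 1, len(patches)):
            if same_object(patch_features(patches[i], patches[j]), w, b):
                parent[find(i)] = find(j)
    labels = [find(i) for i in range(len(patches))]
    remap = {r: k for k, r in enumerate(dict.fromkeys(labels))}  # relabel 0..k-1
    return [remap[l] for l in labels]

# Two coplanar patches close together, plus one distant patch with a different normal
patches = [
    {"centroid": np.array([0.0, 0.0, 1.0]), "normal": np.array([0.0, 0.0, 1.0])},
    {"centroid": np.array([0.05, 0.0, 1.0]), "normal": np.array([0.0, 0.0, 1.0])},
    {"centroid": np.array([2.0, 0.0, 1.0]), "normal": np.array([1.0, 0.0, 0.0])},
]
# Hand-set weights: nearby pairs with parallel normals score positive
w_svm, b_svm = np.array([-5.0, 1.0]), 0.0
print(group_patches(patches, w_svm, b_svm))  # → [0, 0, 1]
```

In the published method the decision function is learned from annotated scenes and the features are richer (color, curvature, boundary relations), but the grouping logic follows this pattern.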

Grasp Localization

Localizing Handle-Like Grasp Affordance

Localizing handle-like grasp affordances in 3-D point clouds [2] using a ROS bridge. The main goal is to identify a sufficient geometric condition ("handle-like") for grasping, given the robot end-effector dimensions.

[2] A. ten Pas and R. Platt, “Localizing Handle-Like Grasp Affordances in 3D Point Clouds,” 2014.
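The full method in [2] searches the cloud for cylindrical shells; the snippet below (with hypothetical parameter names) sketches just the geometric test: a candidate axis is "handle-like" when the nearby points form a shaft thin enough to fit inside the gripper aperture, and a clearance gap around the shaft is free of points so the fingers can close around it:

```python
import numpy as np

def is_handle_like(points, axis_point, axis_dir, hand_aperture, clearance):
    """Check a simplified 'handle-like' condition for a candidate cylinder axis."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_point
    along = rel @ axis_dir
    # Radial distance of every point from the candidate axis
    radial = np.linalg.norm(rel - np.outer(along, axis_dir), axis=1)
    shaft = radial <= hand_aperture / 2.0                       # points on the handle shaft
    gap = (radial > hand_aperture / 2.0) & (radial <= hand_aperture / 2.0 + clearance)
    return bool(shaft.any() and not gap.any())                  # shaft exists, gap is empty

# Synthetic handle: a cylinder of radius 1 cm along the z-axis
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 200)
z = rng.uniform(0, 0.1, 200)
cyl = np.stack([0.01 * np.cos(theta), 0.01 * np.sin(theta), z], axis=1)
print(is_handle_like(cyl, np.zeros(3), np.array([0.0, 0.0, 1.0]),
                     hand_aperture=0.06, clearance=0.02))  # → True
```

The published algorithm additionally fits the cylinder radius and axis from the data and scans many candidate axes; this sketch only shows why the end-effector dimensions (aperture and clearance) define the sufficient condition.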