Clemson University
specialized in robotics, controls, and learning
Roboticist with a keen interest in mobile manipulator systems and human-robot co-manipulation
I received my Ph.D. in Mechanical Engineering from Clemson University, working in Dr. Yue Wang's I²R Lab, where I also received my M.S. in 2017.
My research focuses on trust-aware and passivity-based human-robot co-manipulation in semi-structured environments, developing variable impedance control based on Bayesian inference of human-robot trust, as well as passivity-based hierarchical task priority control frameworks.
Familiar with various machine learning algorithms, including deep learning and reinforcement learning, as well as their implementation in robotic controls.
Developing passivity-based hierarchical task priority control algorithms
1 journal paper in preparation
Zhanrui Liao, and Yue Wang
Robotics and Computer-Integrated Manufacturing (RCIM), 2024
Human–robot collaboration (HRC) systems integrate the strengths of humans and robots to improve joint system performance. In particular, human–robot cooperative manipulation (co-manipulation), a prominent area within HRC in which humans and robots manipulate the same object, has garnered significant attention. Trust in HRC is crucial in determining the level of human acceptance of robots and, hence, robot utilization. This paper develops a probabilistic dynamic Bayesian network (DBN)-based trust model and trust-aware variable impedance control for human–robot co-manipulation. Due to the continuous nature of trust evolution and the limitations of the classic parameter learning approach, a continuous and normalized version of the Baum–Welch (BW) algorithm is developed to learn the trust model. Since trust is vital to any HRC task, successful co-manipulation must take human–robot trust into account to facilitate seamless HRC. We therefore propose a comprehensive framework with trust-aware variable impedance control of robot behavior for human–robot co-manipulation, based on human trust in the robot and force-based human intention estimation, together with trust-based obstacle avoidance and a trust-based robot-level task hierarchy. An extensive case study on human–robot co-transportation is conducted, encompassing five distinct robot behaviors: compliant, autonomous, switching control-based, variable impedance control based only on human applied force, and our proposed trust-based variable impedance control. A rigorous statistical analysis tests their significance, and our trust-aware variable impedance control strategy demonstrates superiority in terms of efficiency, agreement, safety, pHRI, and sHRI factors compared with the benchmark behaviors.
The statistical findings also indicate that our proposed trust model has the ability to reflect human subjective surveyed trust qualitatively.
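The variable impedance idea can be sketched as a minimal 1-D admittance loop whose damping falls as estimated trust rises, so the robot yields more readily to a trusted partner. All names, gains, and the linear trust-to-damping map below are illustrative assumptions, not the parameters or control law used in the paper:

```python
def trust_adaptive_damping(trust, d_min=5.0, d_max=40.0):
    """Map trust in [0, 1] to a damping value: higher trust -> lower damping."""
    trust = min(max(trust, 0.0), 1.0)
    return d_max - trust * (d_max - d_min)

def admittance_step(x, v, f_human, trust, m=2.0, dt=0.01):
    """One Euler step of m*a + d(trust)*v = f_human along one axis."""
    d = trust_adaptive_damping(trust)
    a = (f_human - d * v) / m
    v = v + a * dt
    return x + v * dt, v

def displacement_after(trust, f=10.0, steps=200):
    """Displacement produced by a constant push of f newtons over 2 seconds."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        x, v = admittance_step(x, v, f, trust)
    return x
```

Under this sketch the same human push moves the arm farther when trust is high, which is the qualitative effect a trust-aware variable impedance controller is after.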
Yue Wang, Fangjian Li, Huanfei Zheng, Longsheng Jiang, Maziar Fooladi Mahani, and Zhanrui Liao
IEEE Open Journal of Control Systems (OJ-CSYS), 2023
Trust modeling is a topic that first gained interest in organizational studies and later in human factors in automation. Thanks to recent advances in human-robot interaction (HRI) and human-autonomy teaming, human trust in robots has gained growing interest among researchers and practitioners. This article presents a survey of computational models of human-robot trust and their applications in robotics and robot control. The motivation is to provide an overview of state-of-the-art computational methods to quantify trust so as to provide feedback and situational awareness in HRI. Different from other existing survey papers on human-robot trust models, we seek to provide in-depth coverage of trust model categorization, formulation, and analysis, with a focus on their utilization in robotics and robot control. The paper starts with a discussion of the differences between human-robot trust and general agent-agent trust, interpersonal trust, and human trust in automation and machines. A list of factors impacting human-robot trust, different trust measurement approaches, and their corresponding scales are summarized. We then review existing computational human-robot trust models and discuss the pros and cons of each category. These include performance-centric algebraic, time-series, Markov decision process (MDP)/partially observable MDP (POMDP)-based, Gaussian-based, and dynamic Bayesian network (DBN)-based trust models. Following the summary of each model, we examine its utilization in robot control applications, if any. We also enumerate the main limitations and open questions in this field and discuss potential future research directions.
Zhanrui Liao, and Yue Wang
American Control Conference (ACC). IEEE, 2021
In this paper, we extend the classic passivity theory for a class of nonlinear impulsive multi-dimensional switched systems with both (exponentially) passive and nonpassive subsystems, where the state changes (i.e. state dimensional variations and/or state jumps) may occur at the switching moments. The passivity conditions of such systems are studied by adopting the transition-dependent average dwell time (TDADT) and multiple Lyapunov functions (MLFs). The state changes of such systems are taken into account by introducing the switching time threshold and the switching rate conditions. The proposed methods prove that, with a relaxed quasi-alternative switching signal, nonlinear impulsive multi-dimensional switched systems with both exponentially passive and nonpassive subsystems are (weakly) exponentially passive, and systems with both passive and nonpassive subsystems are (weakly) passive. Furthermore, the proved passivity properties of such systems are useful to achieve system stabilization by output feedback. The main results are demonstrated by a numerical example.
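For reference, the notions involved can be written with a storage (Lyapunov-like) function per active subsystem; a generic sketch of the defining inequalities, not the paper's exact multi-dimensional formulation:

```latex
\dot{V}_{\sigma}(x) \le u^{\top} y \quad \text{(passive subsystem)}, \qquad
\dot{V}_{\sigma}(x) \le u^{\top} y - \lambda V_{\sigma}(x), \ \lambda > 0 \quad \text{(exponentially passive subsystem)}.
```

The TDADT and switching-rate conditions then bound how long and how often nonpassive subsystems may be active, so that the composite system still satisfies a (weak) passivity inequality despite state jumps at the switching moments.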
Yue Wang, Laura R. Humphrey, Zhanrui Liao, and Huanfei Zheng
ACM Transactions on Interactive Intelligent Systems (TiiS), 2018
Symbolic motion planning for robots is the process of specifying and planning robot tasks in a discrete space, then carrying them out in a continuous space in a manner that preserves the discrete-level task specifications. Despite progress in symbolic motion planning, many challenges remain, including addressing scalability for multi-robot systems and improving solutions by incorporating human intelligence. In this article, distributed symbolic motion planning for multi-robot systems is developed to address scalability. More specifically, compositional reasoning approaches are developed to decompose the global planning problem, and atomic propositions for observation, communication, and control are proposed to address inter-robot collision avoidance. To improve solution quality and adaptability, a hypothetical dynamic, quantitative, and probabilistic human-to-robot trust model is developed to aid this decomposition. Furthermore, a trust-based real-time switching framework is proposed to switch between autonomous and manual motion planning for tradeoffs between task safety and efficiency. Deadlock- and livelock-free algorithms are designed to guarantee reachability of goals with a human-in-the-loop. A set of nontrivial multi-robot simulations with direct human inputs and trust evaluation is provided, demonstrating the successful implementation of the trust-based multi-robot symbolic motion planning methods.
Behzad Sadrfaridpour, Maziar Fooladi Mahani, Zhanrui Liao, and Yue Wang
Dynamic Systems and Control Conference, ASME, 2018
A trust-based switching impedance control strategy for human-robot cooperative manipulation is proposed. The robot switches between a proactive and a reactive mode based on its estimate of the human's trust in the robot, obtained from a history-based probabilistic trust model formulated as a dynamic Bayesian network. In the proactive mode, the robot estimates the human's desired trajectory and plans accordingly, while in the reactive mode it only reacts to the human input. A simulation of the trust-based switching impedance control strategy is presented.
Zhanrui Liao, Longsheng Jiang, and Yue Wang
American Control Conference (ACC), IEEE, 2017
Student Travel Awards
Human-robot collaboration (HRC) can be used for object detection in domain search tasks, integrating human and computer vision to improve accuracy and efficiency. The Bayesian sequential decision-making (BSD) method has been used for task allocation of a robot in search tasks. In this paper, we first provide an explanation that reveals the nature of the BSD approach: it makes decisions based on the expected-value criterion, which is shown to be very different from human decision-making behavior. On the other hand, it has been shown that the joint performance of a team improves if all members share the same decision-making logic. In HRC, since forcing a human to act like a robot is not desirable, we propose to modify the BSD approach so that the robot imitates human logic. In particular, regret theory qualitatively models humans' rational decision-making behavior under uncertainty. We propose a holistic framework to measure regret quantitatively, an individual-based parametric model that fits the measurements, and the integration of regret into the BSD method. Furthermore, we design a human-in-the-loop experiment based on the framework to collect enough data points to elicit the requisite functions of regret theory. Our preliminary results match all the properties in regret theory, and the elicited parametric model shows a good fit to the experimental data.
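The contrast between the expected-value criterion and a regret-theoretic one can be illustrated with a toy two-state choice. The convex regret/rejoice function Q and all payoffs below are illustrative assumptions, not the elicited model from the experiment:

```python
def expected_value_choice(p, payoffs_a, payoffs_b):
    """Classic BSD-style rule: pick the action with the higher expected value."""
    ev_a = sum(pi * xa for pi, xa in zip(p, payoffs_a))
    ev_b = sum(pi * xb for pi, xb in zip(p, payoffs_b))
    return "A" if ev_a >= ev_b else "B"

def regret_choice(p, payoffs_a, payoffs_b, c=2.0):
    """Regret-theoretic rule (Loomes-Sugden pairwise form): weigh each
    state's payoff difference through a convex regret/rejoice function
    Q(d) = d + c*d**3, so large foregone payoffs count disproportionately."""
    q = lambda d: d + c * d ** 3
    score = sum(pi * q(xa - xb) for pi, xa, xb in zip(p, payoffs_a, payoffs_b))
    return "A" if score >= 0 else "B"

# A rare large loss creates strong anticipated regret: the regret-based agent
# keeps the safe action even though the risky one has a higher expected value.
p = [0.9, 0.1]
safe = [1.0, 1.0]      # EV = 1.00
risky = [2.2, -8.0]    # EV = 1.18
```

Here `expected_value_choice(p, safe, risky)` picks the risky action while `regret_choice(p, safe, risky)` picks the safe one, which is the qualitative divergence between robot and human logic that the paper exploits.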
SM Mizanoor Rahman, Zhanrui Liao, Longsheng Jiang, and Yue Wang
Trends in Control and Decision-Making for Human–Robot Collaboration Systems, 2017
SM Mizanoor Rahman, Zhanrui Liao, Longsheng Jiang, and Yue Wang
IEEE International Conference on Automation Science and Engineering (CASE), 2016
Newspaper: South Carolina Manufacturing
Dr. Yue “Sophie” Wang and I are conducting research aimed at helping robots and people work together more closely.
I programmed a mobile manipulator robot that can observe and mimic human actions to understand human intent. The goal is to research how robots and people can better work as a team.
Collaborator: Nephron Pharmaceuticals Corporation
Newspaper: Greenville Journal | GSA Business Report
A benchtop robot and automation system for the manufacturing of prefilled syringes using collaborative robots
Prototype system hardware and software control integration
Newspaper: Clemson News
Developed full online robotics course: kinematics, dynamics, motion planning, control, and manufacturing
The course "TIME for Robotics" was successfully launched online
I supervised 32 undergraduate and graduate students in developing 4 vision-based manufacturing pick-and-place robotic arm solutions with customized gripper designs, achieving a 95% overall pick-and-place success rate on blocks of 4 different colors.
Performed inverse kinematics for the KUKA KR210 in simulation and completed pick-and-place tasks 10 times.
Built and trained a fully convolutional network (FCN) to find a specific person in images from a simulated quadcopter
In this robotic inference project, an object classification problem with three object classes is posed and solved by fine-tuning a standard pre-trained network on a self-collected dataset. For each object, different colors, poses, and backgrounds are included in training and testing to further demonstrate the capability of the pre-trained inference model. The inference accuracy exceeds 90% on all three object classes. Future work includes comparing performance across different pre-trained networks, such as AlexNet or newer architectures, and using color images as input to further improve the inference model.
In this project, adaptive Monte Carlo localization (AMCL) is implemented in a simulated localization task. A benchmark robot and a customized robot perform the task on the jackal race map, where each robot is placed in the middle of the map and must navigate to a goal position and orientation. After tuning the parameters to appropriate values based on the ROS wiki, both robots reach the final goal along a reasonable path within a reasonable time in most scenarios. Future work includes creating a completely different robot with multiple sensors and tuning additional parameters for better performance. Another direction is to set up AMCL on a real mobile robot and test it in a lab environment.
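The particle-filter loop at the heart of AMCL can be shown with a deliberately tiny 1-D version — illustrative only, since the real ROS amcl package works on 2-D poses with laser scan models: predict with noisy motion, weight by measurement likelihood, resample.

```python
import math
import random

def mcl_step(particles, motion, z, landmark=10.0,
             motion_noise=0.2, sense_noise=0.5):
    """One predict / update / resample cycle of 1-D Monte Carlo localization."""
    # Predict: apply the commanded motion plus noise to each particle.
    moved = [p + motion + random.gauss(0.0, motion_noise) for p in particles]
    # Update: Gaussian likelihood of the measured range to a known landmark.
    weights = [math.exp(-0.5 * ((landmark - p - z) / sense_noise) ** 2) + 1e-12
               for p in moved]
    # Resample: draw particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_pos = 2.0
for _ in range(15):
    true_pos += 0.3
    z = 10.0 - true_pos               # noiseless range reading, for the demo
    particles = mcl_step(particles, 0.3, z)
estimate = sum(particles) / len(particles)  # converges near true_pos = 6.5
```

The same three steps, in SE(2) and with an adaptive particle count, are what the AMCL parameters on the ROS wiki tune.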
In this project, the goal is to implement a mapping algorithm on a robot to generate recognizable maps of two simulated worlds. A customized cafe world is created in Gazebo for testing. A two-wheel robot equipped with an RGB-D camera and a laser sensor performs the mapping task. A 2D occupancy grid and a 3D octomap of the two environments are created using ROS, Gazebo, and Real-Time Appearance-Based Mapping (RTAB-Map). After the robot explored both environments, the maps were generated accurately. Future work includes implementing RTAB-Map on a real Pioneer robot equipped with an RGB-D camera and LiDAR in a robotic lab environment.
The goal of this project is to create a DQN agent and define reward functions to teach a robotic arm to carry out two primary objectives:
1. Have any part of the robot arm touch the object of interest, with at least 90% accuracy
2. Have only the gripper base of the robot arm touch the object, with at least 80% accuracy
The agent is trained on an Nvidia Jetson TX2. The rewards and hyperparameters were refined over several trials.
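The reward design can be sketched as a smoothed-approach interim term plus a terminal win/loss, a common shaping pattern for this kind of DQN arm task; the constants and function names below are my illustrative assumptions, not the tuned values from the project:

```python
REWARD_WIN, REWARD_LOSS, TIME_PENALTY = 10.0, -10.0, -0.05

def interim_reward(prev_dist, dist, avg_delta, alpha=0.3):
    """Reward progress toward the object using an exponentially smoothed
    change in gripper-to-object distance, minus a small per-step penalty."""
    delta = prev_dist - dist                      # > 0 when moving closer
    avg_delta = alpha * delta + (1.0 - alpha) * avg_delta
    return avg_delta + TIME_PENALTY, avg_delta

def terminal_reward(touched_any, gripper_only_objective, touched_by_gripper):
    """Objective 1 wins on any arm contact; objective 2 wins only on
    gripper-base contact (any other contact counts as a loss)."""
    if gripper_only_objective:
        return REWARD_WIN if touched_by_gripper else REWARD_LOSS
    return REWARD_WIN if touched_any else REWARD_LOSS
```

The smoothing keeps single noisy steps from dominating the interim reward, and the small time penalty discourages the arm from stalling in place.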
Programmed a home service robot that can autonomously map an environment and navigate to pick up and deliver objects
In this project, I built a neural network to predict daily bike rental ridership.
In this project, I classified images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. I preprocessed the images, and then trained a convolutional neural network (CNN) on all the samples. The images were normalized, and the labels were one-hot encoded. I built convolutional, max pooling, dropout, and fully connected layers. My neural network classifies the sample images with a testing accuracy of 62.9%.
In this project, I generated my own Simpsons TV scripts using recurrent neural networks (RNNs). I used part of the Simpsons dataset of scripts from 27 seasons. The neural network I built generates a new TV script for a scene at Moe's Tavern.
In this project, I trained a sequence-to-sequence model on a dataset of English and French sentences that translates new sentences from English to French.
In this project, I used generative adversarial networks (GANs) to generate new images of faces.
Zhanrui Liao, and Yue Wang
Battelle Savannah River Alliance University Collaboration Exchange, Savannah River National Laboratory (SRNL), GA, 2022
South Carolina, Certification No.: 20315