Search results
1 – 10 of over 8000

Myagmarbayar Nergui, Yuki Yoshida, Nevrez Imamoglu, Jose Gonzalez, Masashi Sekine and Wenwei Yu
Abstract
Purpose
The aim of this paper is to develop autonomous mobile home healthcare robots capable of observing patients’ motions, recognizing the patients’ behaviours from the observation data, and automatically calling for medical personnel in emergency situations. The robots to be developed will bring cost‐effective, safe and easier at‐home rehabilitation to most motor‐function impaired patients (MIPs).
Design/methodology/approach
The paper develops the following control algorithms: algorithms for a mobile robot to track and follow human motions, to measure human joint trajectories and to calculate the angles of the lower-limb joints; and algorithms for recognizing human gait behaviours from the calculated joint-angle data.
Findings
A Hidden Markov Model (HMM)-based human gait behaviour recognition method taking lower-limb joint angles and body angle as input was proposed and compared with Nearest Neighbour (NN) classification. Experimental results showed that HMM-based recognition from the lower-limb joint trajectories achieves higher accuracy than the compared classification methods.
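As a sketch of how an HMM scores a quantised joint-angle sequence for gait-behaviour classification, the forward algorithm can be written in a few lines. All models, symbols and probabilities below are invented for illustration; they are not taken from the paper.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Forward algorithm: likelihood of an observation sequence under a
    discrete HMM (linear-space probabilities; fine for short sequences)."""
    states = range(len(start))
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states]
    return math.log(sum(alpha))

# Two toy 2-state gait models over quantised angle symbols {0, 1, 2}
# (hypothetical "walk" vs "limp" behaviours).
walk = dict(start=[0.6, 0.4],
            trans=[[0.7, 0.3], [0.3, 0.7]],
            emit=[[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
limp = dict(start=[0.5, 0.5],
            trans=[[0.9, 0.1], [0.1, 0.9]],
            emit=[[0.2, 0.6, 0.2], [0.2, 0.6, 0.2]])

def classify(obs):
    """Pick the behaviour model with the highest sequence likelihood."""
    scores = {name: forward_log_likelihood(obs, **model)
              for name, model in (("walk", walk), ("limp", limp))}
    return max(scores, key=scores.get)
```

A real system would train the transition and emission tables from labelled joint-angle sequences rather than hand-writing them.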
Originality/value
The research addresses human motion tracking and recognition by a mobile robot. Human gait behaviour recognition is HMM-based, using lower-limb joint and body-angle data extracted from a Kinect sensor mounted on the mobile robot.
Meiyin Liu, SangUk Han and SangHyun Lee
Abstract
Purpose
As a means of data acquisition for situation awareness, computer vision-based motion capture technologies have increased the potential to observe and assess manual activities for the prevention of accidents and injuries in construction. This study thus aims to present a computationally efficient and robust method of human motion data capture for on-site motion sensing and analysis.
Design/methodology/approach
This study investigated a tracking approach to three-dimensional (3D) human skeleton extraction from stereo video streams. Instead of detecting body joints on each image, the proposed method tracks the locations of the body joints over all successive frames by learning from the initialized body posture. The body joints corresponding to the tracked ones are then identified and matched on the image sequences from the other lens and reconstructed in a 3D space through triangulation to build 3D skeleton models. For validation, a lab test is conducted to evaluate the accuracy and working ranges of the proposed method.
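The triangulation step can be sketched with a standard linear (DLT) reconstruction from two projection matrices. The camera matrices and the 3D point below are hypothetical, and DLT is a generic method, not necessarily the exact reconstruction used in the study.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pixel
    observations x1, x2 and 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector of A = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical rectified stereo pair: identical intrinsics, second camera
# translated 0.1 m along x (all numbers illustrative).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Project a known joint position into both views, then reconstruct it.
X_true = np.array([0.2, -0.1, 2.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
```

With noiseless correspondences the reconstruction is exact; with real tracked joints, calibration and matching error dominate the 3D accuracy.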
Findings
Results of the test reveal that the tracking approach produces accurate outcomes at a distance, with nearly real-time computational processing, and can be potentially used for site data collection. Thus, the proposed approach has a potential for various field analyses for construction workers’ safety and ergonomics.
Originality/value
Recently, motion capture technologies have rapidly been developed and studied in construction. However, existing sensing technologies are not yet readily applicable to construction environments. This study explores two smartphones used as stereo cameras as a potentially suitable means of data collection in construction, owing to fewer operational constraints (e.g. no on-body sensor required, less sensitivity to sunlight and flexible operating ranges).
Mohsen Moradi Dalvand and Saeid Nahavandi
Abstract
Purpose
The purpose of this paper is to analyse teleoperation of an ABB industrial robot with an ABB IRC5 controller. A method to improve motion smoothness and decrease latency using the existing ABB IRC5 robot controller without access to any low-level interface is proposed.
Design/methodology/approach
The proposed control algorithm includes a high-level proportional–integral–derivative (PID) controller used to dynamically generate reference velocities for different travel ranges of the tool centre point (TCP) of the robot. Communication with the ABB IRC5 controller was performed utilising the ABB PC software development kit. The multitasking feature of the IRC5 controller was used to enhance the communication frequency between the controller and the remote application. Trajectory tracking experiments of a pre-defined three-dimensional trajectory were carried out and the benefits of the proposed algorithm were demonstrated. The robot was intentionally installed on a wobbly table and its vibrations were recorded using a six-degrees-of-freedom force/torque sensor fitted to the tool mounting interface of the robot. The robot vibrations were used as a measure of the smoothness of the tracking movements.
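The idea of a high-level PID loop turning TCP position error into a reference velocity can be illustrated as follows. The gains, velocity limit and 250 Hz loop rate are illustrative assumptions, not the paper's tuned values, and the "plant" is an idealised integrator rather than a real IRC5-controlled robot.

```python
class PID:
    """Minimal PID that maps TCP position error (mm) to a reference
    velocity (mm/s), clamped to a velocity limit."""
    def __init__(self, kp, ki, kd, v_max, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.v_max, self.dt = v_max, dt
        self.integral = 0.0
        self.prev_err = None

    def reference_velocity(self, err):
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        v = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.v_max, min(self.v_max, v))   # clamp to limit

# Close a hypothetical 10 mm gap at a 250 Hz control rate.
pid = PID(kp=5.0, ki=0.5, kd=0.1, v_max=50.0, dt=0.004)
pos, target = 0.0, 10.0
for _ in range(4000):                  # 16 s of simulated motion
    v_ref = pid.reference_velocity(target - pos)
    pos += v_ref * pid.dt              # ideal plant: position integrates velocity
```

Clamping the reference velocity is one simple way to keep commanded motion within the robot's limits while the error is still large.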
Findings
A communication rate of up to 250 Hz between the computer and the controller was established using C# .NET. Experimental results showing the robot TCP position, tracking errors and robot vibrations for different control approaches were provided and analysed. The proposed approach was demonstrated to produce the smoothest motion, with tracking errors of less than 0.2 mm.
Research limitations/implications
The proposed approach may be employed to produce smooth motion for a remotely operated ABB industrial robot with the existing ABB IRC5 controller. However, to achieve high-bandwidth path following, the inherent latency of the controller must be overcome, for example by utilising a low-level interface. It is particularly useful for applications including a large number of short manipulation segments, which is typical in teleoperation applications.
Social implications
Using the proposed technique, off-the-shelf industrial robots can be used for research and industrial applications where remote control is required.
Originality/value
Although a low-level control interface for industrial robots seems to be the ideal long-term solution for teleoperation applications, the proposed remote control technique allows out-of-the-box ABB industrial robots with IRC5 controllers to achieve high efficiency and manipulation smoothness without requiring any low-level programming interface.
Chuntao Leng, Qixin Cao and Charles Lo
Abstract
Purpose
The purpose of this paper is to propose a suitable motion control method for omni‐directional mobile robots (OMRs) based on anisotropy.
Design/methodology/approach
A dynamic modeling method for OMRs based on the theory of vehicle dynamics is proposed. By analyzing the driving torque acting on each axis while the robot moves in different directions, the dynamic anisotropy of OMRs is analyzed. The characteristics of dynamic anisotropies and kinematic anisotropies are introduced into the fuzzy sliding mode control (FSMC) system to coordinate the driving torque as a factor of influence.
Findings
A combination of anisotropy and the FSMC method produces coordinated motion for the multi‐axis system of OMRs, especially in the initial phase of motion. The proposed control system is insensitive to parameter variations and external disturbances, and chattering is markedly decreased. Simulations and experiments have shown that effective motion tracking can be achieved using the proposed motion control method.
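The chattering that boundary-layer and fuzzy sliding mode methods suppress can be illustrated on a toy 1-DOF plant: replacing the hard sign() switching with a saturation inside a thin boundary layer smooths the control. The plant, gains and surface slope below are invented; the paper's fuzzy tuning and OMR dynamics are not reproduced.

```python
def sat(s, phi):
    """Saturation replaces sign() inside a boundary layer of width phi,
    the standard trick for chattering reduction in sliding mode control."""
    return max(-1.0, min(1.0, s / phi))

# Toy double-integrator plant x'' = u tracking x_d = 1.
lam, K, phi, dt = 4.0, 20.0, 0.05, 0.001
x, v = 0.0, 0.0
for _ in range(5000):                 # 5 s of simulated motion
    e, de = x - 1.0, v                # tracking error and its rate
    s = de + lam * e                  # sliding surface s = e' + lam*e
    u = -K * sat(s, phi)              # smoothed switching control
    v += u * dt                       # Euler integration of the plant
    x += v * dt
```

Inside the boundary layer the control acts like a high-gain linear law, so the state slides to the target without the rapid sign flips that excite the actuators.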
Research limitations/implications
In order to obtain a clearer analysis of the anisotropy influence during the acceleration process, only the case of translation motion is discussed here. Future work could be done on cases where there are both translation and rotation motions.
Practical implications
The proposed motion control method is applied successfully to achieve effective motion control for OMRs, which is suitable for any kind of OMR.
Originality/value
The novel concept of the dynamic anisotropy of OMRs is proposed. By introducing this anisotropy as an influential factor into the FSMC system, a new motion control method suitable for OMRs is developed.
Ping Zhang, Xin Liu, Guanglong Du, Bin Liang and Xueqian Wang
Abstract
Purpose
The purpose of this paper is to present a markerless human–manipulator interface that maps the position and orientation of the human end-effector (EE, the centre of the palm) to those of the robot EE so that the robot can copy the movement of the operator’s hand.
Design/methodology/approach
The tracking system of this human–manipulator interface comprises five Leap Motion (LM) sensors, which not only compensates for the narrow workspace of a single LM but also provides redundancy to improve data precision. However, because of the inherent noise and tracking errors of the LMs, measurement errors increase over time. To address this problem, two filters are integrated to obtain a relatively accurate estimate of the human EE: a Particle Filter for position estimation and a Kalman Filter for orientation estimation. Because the operator has inherent perceptual limitations, the manipulator’s motions may fall out of sync with the hand motions, making high-performance manipulation difficult. Therefore, an over-damping method is adopted to improve reliability and accuracy.
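The Kalman filtering step for a slowly varying angle can be sketched in scalar form: a random-walk prediction followed by a measurement correction. The noise variances and the readings below are invented for illustration and are not the interface's actual parameters.

```python
class ScalarKalman:
    """1D Kalman filter for a slowly varying angle (random-walk model),
    fusing a stream of redundant, noisy sensor readings."""
    def __init__(self, q=1e-4, r=4e-2, x0=0.0, p0=1.0):
        self.q, self.r = q, r        # process / measurement noise variance
        self.x, self.p = x0, p0      # state estimate and its variance

    def update(self, z):
        self.p += self.q                     # predict (random walk)
        k = self.p / (self.p + self.r)       # Kalman gain
        self.x += k * (z - self.x)           # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman()
# Fixed, repeatable noisy readings of a true angle of 0.50 rad.
readings = [0.61, 0.43, 0.55, 0.47, 0.52, 0.49, 0.56, 0.44, 0.51, 0.50]
estimate = [kf.update(z) for z in readings][-1]
```

The gain shrinks as the estimate variance falls, so later readings perturb the fused angle less than early ones.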
Findings
A series of human–manipulator interaction experiments was carried out to verify the proposed system. Compared with other markerless and contactless methods (Kofman et al., 2007; Du and Zhang, 2015), the method described in this study is more accurate and efficient.
Originality/value
The proposed method does not hinder most natural human limb motion and allows the operator to concentrate on his/her own task and perform high-precision manipulations efficiently.
Laura Duarte, Mohammad Safeea and Pedro Neto
Abstract
Purpose
This paper proposes a novel method for tracking human hands using data from an event camera. The event camera detects changes in brightness, measuring motion with low latency, no motion blur, low power consumption and high dynamic range. Captured frames are analysed using lightweight algorithms that report three-dimensional (3D) hand position data. The chosen pick-and-place scenario serves as an example input for collaborative human–robot interactions and for obstacle avoidance in human–robot safety applications.
Design/methodology/approach
Event data are pre-processed into intensity frames. Regions of interest (ROI) are defined through object-edge event activity, reducing noise. ROI features are then extracted for use in depth perception.
Findings
Event-based tracking of human hands is shown to be feasible in real time and at low computational cost. The proposed ROI-finding method reduces noise in the intensity images, achieving up to 89% data reduction relative to the original while preserving the features. The depth estimation error relative to ground truth (measured with wearables), computed using dynamic time warping with a single event camera, ranges from 15 to 30 millimetres depending on the plane in which it is measured.
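Dynamic time warping, used here to compare the estimated depth trace against the wearable-measured ground truth, aligns two sequences that may run at different local speeds. A minimal textbook implementation (not the authors' code) looks like this:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1D sequences:
    minimum cumulative |a[i]-b[j]| cost over all monotone alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessors.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because the alignment absorbs timing offsets, DTW scores the shape agreement of the two depth traces rather than penalising small desynchronisations between camera and wearable clocks.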
Originality/value
Tracking of human hands in 3D space using data from a single event camera and lightweight algorithms to define ROI features.
Abstract
Purpose
This paper seeks to present an inertial motion tracking system for monitoring movements of human upper limbs in order to support a home‐based rehabilitation scheme in which the recovery of stroke patients' motor function through repetitive exercises needs to be continuously monitored and appropriately evaluated.
Design/methodology/approach
Two inertial sensors are placed on the upper and lower arms in order to obtain acceleration and turning rates. Then the position of the upper limbs can be deduced by using the kinematical model of the upper limbs that was designed in the previous paper. The tracking system starts from inertial data acquisition and pre‐filtering, followed by a number of processes such as transformation of coordinate systems of sensor data, and kinematical modelling and optimization of position estimation.
Findings
Using the proposed kinematic model alone, the motion detector exhibits drift in its measurements. Fusing acceleration and orientation data effectively solves the drift problem without involving a Kalman filter.
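A common Kalman-free way to fuse a drifting integrated signal with a drift-free but noisy one is a complementary filter, sketched below for a single joint angle. The filter coefficient, gyro bias and sample rate are invented for illustration; this is a generic technique, not necessarily the exact fusion rule of the paper.

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse integrated gyro rate (smooth but drifting) with an
    accelerometer-derived angle (noisy but drift-free)."""
    angle = accel_angles[0]
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        # High-pass the gyro path, low-pass the accelerometer path.
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        out.append(angle)
    return out

# Hypothetical stationary arm at 0.3 rad, gyro with a 0.05 rad/s bias.
dt = 0.01
fused = complementary_filter([0.05] * 1000, [0.3] * 1000, dt)
drift_only = 0.3 + sum(0.05 * dt for _ in range(1000))  # pure integration
```

Pure gyro integration accumulates the bias without bound, while the fused estimate stays anchored near the accelerometer-derived angle.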
Research limitations/implications
Image rendering is not undertaken while data sampling is performed; this deliberate non‐synchronization avoids breaks in the continuous sampling.
Originality/value
This new motion detector can work in different environments without significant drifts. Also, this system only deploys two inertial sensors but is able to estimate the position of the wrist, elbow and shoulder joints.
N. Parnian and M.F. Golnaraghi
Abstract
Purpose
This paper presents a hybrid Vision/INS system for tool-tracking applications. The proposed system incorporates low-cost MEMS inertial sensors and low-cost vision sensors for tracking industrial tools. A vision system alone has to deal with the problem of “line of sight”, while the INS sensor alone encounters an exponential drift; either limitation renders the stand-alone system useless for the proposed application.
Design/methodology/approach
Integrating an extended Kalman filter, the Vision/INS system calculates the 6D position and orientation of a tool during its operation within the accuracy tolerance required by the application at hand. In this paper, a tool-motion modelling approach is proposed to keep the error within an acceptable range over short periods of missing data. The motion of the tool is modelled and updated whenever the instrument is in the camera’s field of view; this model then drives the estimation algorithm whenever the camera loses line of sight and the optical data are missing.
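The pattern of coasting on a motion model while the vision measurement is absent can be sketched with a constant-velocity Kalman tracker that skips the update step when no measurement arrives. The 1D state, noise values and measurement sequence are illustrative assumptions, not the paper's 6D EKF.

```python
import numpy as np

def track(measurements, dt=0.1, q=1e-3, r=1e-2):
    """Constant-velocity Kalman tracker that coasts on its motion model
    when the camera measurement is missing (z is None)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition [pos, vel]
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                           # predict with the motion model
        P = F @ P @ F.T + Q
        if z is not None:                   # update only when visible
            y = np.array([[z]]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

# Target moving at constant velocity; the camera is occluded for 5 frames.
true = [0.1 * (k + 1) for k in range(30)]
zs = [p if not (10 <= k < 15) else None for k, p in enumerate(true)]
est = track(zs)
```

During the gap the estimate advances on the learned velocity, so the error stays bounded for short occlusions and collapses again once measurements resume.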
Findings
Experimental results show that, with motion modelling applied, the error caused by the absence of vision measurements remained bounded and was reduced.
Originality/value
In this paper, tool-motion modelling is proposed to bound the error within an acceptable range over short periods of missing data. The motion of the tool is modelled and updated whenever the instrument is in the camera’s field of view, and the model is applied to the estimation algorithm whenever the camera loses line of sight and the optical data are missing.
Matthew Field, Zengxi Pan, David Stirling and Fazel Naghdy
Abstract
Purpose
The purpose of this paper is to provide a review of various motion capture technologies and discuss the methods for handling the captured data in applications related to robotics.
Design/methodology/approach
The approach taken in the paper is to compare the features and limitations of motion trackers in common use. After introducing the technologies, a summary is given of robotics‐related work undertaken with the sensors, and the strengths of different approaches to handling the data are discussed. Each comparison is presented in a table. Results from the authors’ experimentation with an inertial motion capture system are discussed based on clustering and segmentation techniques.
Findings
The trend in methodology is towards stochastic machine learning techniques such as the hidden Markov model and the Gaussian mixture model, their hierarchical extensions, and non‐linear dimensionality reduction. The resulting empirical models tend to handle uncertainty well and are suitable for incremental updating. Today’s challenges in human‐robot interaction include generalising motions so as to understand motion planning and decision making, and ultimately building context-aware systems.
Originality/value
Reviews that describe motion trackers together with recent methodologies for analysing the data they capture are not very common. Some exist, as pointed out in the paper, but this review concentrates on applications in the robotics field. Given the rapid progress in sensors and especially in data modeling, there is value in regularly surveying the research areas considered in this paper.
Jinwei Zhao, Shuolei Feng, Xiaodong Cao and Haopei Zheng
Abstract
Purpose
This paper aims to concentrate on recent innovations in flexible wearable sensor technology tailored for monitoring vital signals within the contexts of wearable sensors and systems developed specifically for monitoring health and fitness metrics.
Design/methodology/approach
In recent decades, wearable sensors for monitoring vital signals in sports and health have advanced greatly. Vital signals include the electrocardiogram, electroencephalogram, electromyography, inertial data, body motion, cardiac rate and bodily fluids such as blood and sweat, making them good targets for sensing devices.
Findings
This report reviewed reputable journal articles on wearable sensors for vital-signal monitoring, focusing on multimode and integrated multi-dimensional capabilities, such as the structure, accuracy and nature of the devices, which may offer a more versatile and comprehensive solution.
Originality/value
The paper provides essential information on the present obstacles and challenges in this domain and offers a glimpse into future directions for wearable sensors detecting these crucial signals. Importantly, it is evident that the integration of modern fabrication techniques, stretchable electronic devices, the Internet of Things and artificial intelligence algorithms has significantly improved the capacity to efficiently monitor and leverage these signals for human health monitoring, including disease prediction.