

Search for: All records

Creators/Authors contains: "Karimoddini, Ali"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Fuzzy logic controllers can handle complex systems by incorporating expert knowledge in the absence of formal mathematical models. Further, fuzzy logic controllers can effectively capture and accommodate the uncertainties inherent in real-world controlled systems. Meanwhile, the Robot Operating System (ROS) has been widely used for many robotic applications due to its modular structure and efficient message-passing mechanisms for integrating a system's components, making it an ideal tool for developing software stacks for robotic applications. This paper develops a generic and configurable ROS package for the implementation of fuzzy logic controllers, particularly type-1 and interval type-2, based on either Mamdani or Takagi-Sugeno-Kang fuzzy inference mechanisms. This is achieved through a systematic object-oriented approach, using the Unified Modeling Language (UML), in which the fuzzy inference system is implemented as a single class composed of fuzzifier, inference, and defuzzifier classes. The deployment of the developed ROS package is demonstrated by implementing an interval type-2 fuzzy logic controller for an Unmanned Aerial Vehicle (UAV).
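The class composition described in this abstract can be sketched as follows. This is a minimal, hypothetical illustration only: the class and rule names are invented for this sketch, the membership functions are simple triangles, and the defuzzifier uses a zero-order Takagi-Sugeno-Kang weighted average; none of this is the actual API of the ROS package.

```python
# Hypothetical sketch: a fuzzy inference system built as a single class
# composed of fuzzifier, inference, and defuzzifier components.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

class Fuzzifier:
    def __init__(self, sets):                 # sets: {label: (a, b, c)}
        self.sets = sets
    def fuzzify(self, x):
        return {label: triangular(x, *p) for label, p in self.sets.items()}

class Inference:
    def __init__(self, rules):                # rules: {antecedent label: crisp output}
        self.rules = rules
    def infer(self, memberships):
        # Each rule fires with the membership degree of its antecedent.
        return [(memberships[lbl], out) for lbl, out in self.rules.items()]

class Defuzzifier:
    def defuzzify(self, fired):
        # Weighted average of rule outputs (zero-order TSK style).
        num = sum(w * out for w, out in fired)
        den = sum(w for w, _ in fired)
        return num / den if den else 0.0

class FuzzyInferenceSystem:
    """One class composed of fuzzifier, inference, and defuzzifier classes."""
    def __init__(self, sets, rules):
        self.fuzzifier = Fuzzifier(sets)
        self.inference = Inference(rules)
        self.defuzzifier = Defuzzifier()
    def compute(self, x):
        memberships = self.fuzzifier.fuzzify(x)
        return self.defuzzifier.defuzzify(self.inference.infer(memberships))

fis = FuzzyInferenceSystem(
    sets={"neg": (-2.0, -1.0, 0.0), "zero": (-1.0, 0.0, 1.0), "pos": (0.0, 1.0, 2.0)},
    rules={"neg": -1.0, "zero": 0.0, "pos": 1.0},
)
print(fis.compute(0.5))  # → 0.5
```

The composition mirrors the paper's design idea: each stage is its own class, so a type-1 fuzzifier or a Mamdani-style defuzzifier could be swapped in without touching the other components.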
  2. In this paper, an adjustable autonomy framework is proposed for Human-Robot Collaboration (HRC), in which a robot uses a reinforcement learning mechanism guided by a human operator's rewards in an initially unknown workspace. Within the proposed framework, the robot can adjust its autonomy level in an HRC setting represented by a Markov Decision Process. A novel Q-learning mechanism with an integrated greedy approach is implemented so that the robot learns correct actions and recognizes its mistakes in order to adjust its autonomy level. The proposed HRC framework can adapt to changes in the workspace and adjust the autonomy level, provided the human operator's rewards are consistent. The developed algorithm is applied to a realistic HRC setting involving a Baxter humanoid robot. The experimental results confirm the capability of the developed framework to successfully adjust the robot's autonomy level in response to changes in the human operator's commands or the workspace.
  3. State-of-the-art lane detection methods use a variety of deep learning techniques for lane feature extraction and prediction, demonstrating better performance than conventional lane detectors. However, deep learning approaches are computationally demanding and often fail to meet the real-time requirements of autonomous vehicles. This paper proposes a lane detection method that uses a lightweight convolutional neural network model as a feature extractor, exploiting the potential of deep learning while meeting real-time needs. The model is trained on a dataset of small image patches of 16 × 64 pixels, and a non-overlapping sliding-window approach is employed to achieve fast inference. The predictions are then clustered and fitted with a polynomial to model the lane boundaries. The proposed method was tested on the KITTI and Caltech datasets and demonstrated acceptable performance. We also integrated the detector into the localization and planning system of our autonomous vehicle, where it runs at 28 fps on a CPU at an image resolution of 768 × 1024, meeting the real-time requirements of self-driving cars.
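The non-overlapping sliding-window inference and polynomial fitting described in this abstract can be sketched as below. The 16 × 64 patch size follows the abstract, but everything else is assumed for illustration: `classify_patch` is a trivial brightness test standing in for the trained CNN, and the synthetic image replaces real road data.

```python
import numpy as np

# Illustrative sketch of non-overlapping sliding-window patch classification
# followed by polynomial lane fitting. The patch classifier is a stand-in
# for the paper's lightweight CNN.

PATCH_H, PATCH_W = 16, 64

def classify_patch(patch):
    # Placeholder for the CNN: call a patch "lane" if it is bright enough.
    return patch.mean() > 0.5

def detect_lane_points(image):
    """Slide a non-overlapping 16 x 64 window and collect hit centers."""
    points = []
    for y in range(0, image.shape[0] - PATCH_H + 1, PATCH_H):
        for x in range(0, image.shape[1] - PATCH_W + 1, PATCH_W):
            if classify_patch(image[y:y + PATCH_H, x:x + PATCH_W]):
                points.append((y + PATCH_H // 2, x + PATCH_W // 2))
    return points

def fit_lane(points, degree=2):
    """Fit x = f(y) with a polynomial to model a lane boundary."""
    ys, xs = zip(*points)
    return np.polyfit(ys, xs, degree)

# Synthetic image with a bright vertical stripe as a fake lane marking.
img = np.zeros((128, 256))
img[:, 64:128] = 1.0
pts = detect_lane_points(img)
coeffs = fit_lane(pts)
print(len(pts), round(float(np.polyval(coeffs, 64.0))))  # → 8 96
```

Because the windows do not overlap, each pixel is classified exactly once, which is what keeps per-frame inference cheap enough for CPU real-time operation.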
  4. This paper presents a novel method for pedestrian detection and tracking that fuses camera and LiDAR sensor data. To deal with the challenges of autonomous driving scenarios, an integrated tracking and detection framework is proposed. The detection phase is performed by converting LiDAR streams into computationally tractable depth images; a deep neural network is then developed to identify pedestrian candidates in both RGB and depth images. To provide accurate information, the detection phase is further enhanced by fusing the multi-modal sensor information using a Kalman filter. The tracking phase combines Kalman filter prediction with an optical flow algorithm to track multiple pedestrians in a scene. We evaluate our framework on a real public driving dataset. Experimental results demonstrate that the proposed method achieves a significant performance improvement over a baseline method that uses image-based pedestrian detection alone.
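The Kalman predict/update cycle at the core of the tracking framework in this abstract can be illustrated with a minimal 1-D constant-velocity filter. This is a generic textbook sketch, not the paper's implementation: the state model, noise values, and measurement sequence are all assumed for illustration, and the real system fuses 2-D detections from both RGB and depth images.

```python
# Minimal 1-D constant-velocity Kalman filter sketch, illustrating the
# predict/update cycle used for tracking. All matrices and noise values
# are illustrative placeholders.

class Kalman1D:
    def __init__(self, pos):
        self.x = [pos, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = 0.01, 0.25         # process / measurement noise

    def predict(self, dt=1.0):
        # State transition F = [[1, dt], [0, 1]]; P <- F P F^T + Q.
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        (p00, p01), (p10, p11) = self.P
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]

    def update(self, z):
        # Measurement model H = [1, 0]: we observe position only.
        s = self.P[0][0] + self.r                     # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain
        y = z - self.x[0]                             # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        (p00, p01), (p10, p11) = self.P
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]

kf = Kalman1D(pos=0.0)
for z in [1.1, 1.9, 3.2, 4.0]:   # noisy detections of a walking pedestrian
    kf.predict()
    est = kf.update(z)
print(round(est, 1))
```

In the paper's framework, the predict step carries each pedestrian track between frames while the update step corrects it with fused detections; the optical flow algorithm supplies additional image-space motion cues for associating detections with tracks.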