Design and Implementation of a Smart Agricultural Robot bullDOG (SARDOG)
- Description: Multifunctional agricultural robot integrating LiDAR SLAM, deep learning (YOLOv5 & MobileNetV2 SSD), IoT soil sensing, and solar energy harvesting for autonomous weeding, harvesting, and monitoring.
- Reference: H. Kulhandjian, Y. Yang, and N. Amely, “Design and Implementation of a Smart Agricultural Robot bullDOG (SARDOG),” Proc. IEEE International Conference on Computing, Networking and Communications (ICNC): AI and Machine Learning for Communications and Networking, pp. 767–771, Feb. 2024. doi:10.1109/ICNC59896.2024.10556345.
CrossBot: An AI-Driven Smart Robot for Enhancing Pedestrian Safety at Crosswalks
- Description: ROS2-based mobile robot with LiDAR and YOLO vision models for real-time detection of pedestrians/vehicles, improving crosswalk safety without fixed infrastructure.
- Reference: H. Kulhandjian and O. Iredele, “CrossBot: An AI-Driven Smart Robot for Enhancing Pedestrian Safety at Crosswalks,” Proc. IEEE International Conference on Computing, Networking and Communications (ICNC), pp. 411–415, 2025. doi:10.1109/ICNC64010.2025.10993680.
Low-Cost Tree Health Categorization and Localization Using Drones and Machine Learning
- Description: UAV pipeline using RGB camera + ResNet-50 for classifying tree health (healthy/slightly unhealthy/dying) with 92.7% accuracy.
- Reference: H. Kulhandjian, B. Irineo, J. Sales, and M. Kulhandjian, “Low-Cost Tree Health Categorization and Localization Using Drones and Machine Learning,” 2024 Workshop on Computing, Networking and Communications (CNC), pp. 296–300, 2024. doi:10.1109/ICNC59896.2024.10556023.
AI-Powered Fruit Harvesting System Using a Robotic Arm for Precision Agriculture
- Description: YOLOv8-based fruit-detection and 6-DOF robotic arm system for autonomous picking on the SARDOG platform (88% accuracy across 8 fruit types).
- Reference: H. Kulhandjian, N. Amely, and M. Kulhandjian, “AI-Powered Fruit Harvesting System Using a Robotic Arm for Precision Agriculture,” Proc. IEEE International Conference on Computing, Networking and Communications (ICNC): AI and Machine Learning for Communications and Networking, pp. 505–509, Feb. 2025. doi:10.1109/ICNC64010.2025.10993754.
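A common step in detector-guided picking is converting a 2D detection into a 3D arm target. The sketch below is illustrative only, assuming a standard pinhole camera model with made-up intrinsics (`fx`, `fy`, `cx`, `cy`) and a separately measured depth; it is not the paper's actual pipeline.

```python
# Back-project the center of a fruit bounding box (pixels) plus a depth
# reading into camera-frame coordinates for a robotic arm to reach.
# Intrinsics and the example box are invented values for illustration.

def pixel_to_camera_frame(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) at depth `depth_m` (meters)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Bounding box from a detector such as YOLOv8: (x1, y1, x2, y2) in pixels.
box = (310, 220, 370, 280)
u = (box[0] + box[2]) / 2          # box center, x
v = (box[1] + box[3]) / 2          # box center, y

target = pixel_to_camera_frame(u, v, depth_m=0.45,
                               fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

The camera-frame point would still need a hand-eye transform into the arm's base frame before motion planning.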
AI-based Road Inspection Framework Using Drones with GPS-less Navigation
- Description: Thermal + visible-light drone system using CNN and Faster R-CNN for defect detection (99.5% RGB accuracy) in roads and infrastructure.
- Reference: H. Kulhandjian, J. M. Torres, N. Amely, C. Nieves, C. Reeves, and M. Kulhandjian, “AI-based Road Inspection Framework Using Drones with GPS-less Navigation,” 2024 Workshop on Computing, Networking and Communications (CNC), pp. 301–305, 2024.
AI-based Human Detection and Localization in Heavy Smoke using Radar and IR Camera
- Description: Combines micro-Doppler radar and thermal imaging for search-and-rescue in zero-visibility environments (98% validation accuracy).
- Reference: H. Kulhandjian, A. Davis, L. Leong, M. Bendot, and M. Kulhandjian, “AI-based Human Detection and Localization in Heavy Smoke using Radar and IR Camera,” Proc. IEEE Radar Conference (RadarConf23), pp. 1–6, June 2023. doi:10.1109/RADARCONF2351548.2023.10149735.
- Patent: H. Kulhandjian, A. Davis, L. Leong, and M. Bendot, “System and Method for Human and Animal Detection in Low Visibility,” Patent Application 5124.002US1, filed May 12, 2021; accepted March 13, 2024.
AI-based RF-Fingerprinting Framework and Implementation using Software-Defined Radios
- Description: Deep-learning framework using spectrograms to identify transmitters for wireless security and device authentication (> 99% accuracy in low-end SDRs).
- Reference: H. Kulhandjian, E. Batz, E. Garcia, S. Vega, S. Velma, M. Kulhandjian, C. D’Amours, B. Kantarci, and T. Mukherjee, “AI-based RF-Fingerprinting Framework and Implementation using Software-Defined Radios,” Proc. IEEE International Conference on Computing, Networking and Communications (ICNC): AI and Machine Learning for Communications and Networking, pp. 143–147, Feb. 2023. doi:10.1109/ICNC57223.2023.10074023.
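The spectrogram front end named above can be sketched with a plain short-time FFT. This is a minimal NumPy illustration of turning a captured waveform into a time-frequency magnitude image (the kind of input a CNN fingerprinting model consumes); window and hop sizes are arbitrary example choices, and a pure sine stands in for a real SDR capture.

```python
import numpy as np

def stft_spectrogram(x, win=256, hop=128):
    """Magnitude spectrogram: Hann-windowed frames -> real FFT -> (freq, time)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

fs = 10_000                              # sample rate (Hz), example value
t = np.arange(fs) / fs                   # one second of samples
capture = np.sin(2 * np.pi * 1_000 * t)  # stand-in for a recorded burst
S = stft_spectrogram(capture)            # shape: (freq_bins, time_frames)
```

In a fingerprinting pipeline, `S` (usually log-scaled) would be fed to the classifier; transmitter-specific hardware impairments show up as subtle, consistent deviations in such images.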
Pedestrian Detection and Avoidance at Night Using Multiple Sensors and Machine Learning
- Description: Multi-sensor fusion (RGB, IR, Radar) system for autonomous vehicle safety; achieved 98% accuracy in day and night conditions.
- References: H. Kulhandjian, J. Barron, M. Tamiyasu, M. Thompson, and M. Kulhandjian, “AI-Based Pedestrian Detection and Avoidance at Night Using Multiple Sensors,” Journal of Sensor and Actuator Networks, vol. 13, no. 3, p. 34, 2024. doi:10.3390/jsan13030034.
  H. Kulhandjian, J. Barron, M. Tamiyasu, M. Thompson, and M. Kulhandjian, “Pedestrian Detection and Avoidance at Night Using Multiple Sensors and Machine Learning,” Proc. IEEE International Conference on Computing, Networking and Communications (ICNC): AI and Machine Learning for Communications and Networking, pp. 165–169, Feb. 2023. doi:10.1109/ICNC57223.2023.10074081.
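One simple way to combine RGB, IR, and radar classifiers is late fusion: run each sensor's model independently and average the per-class confidences. The sketch below uses invented scores and a plain average; the papers' actual fusion scheme may differ.

```python
# Hypothetical late-fusion step for a multi-sensor pedestrian detector.
# Each sensor contributes a per-class confidence vector; the fused
# decision is the argmax of the averaged vector.

def fuse(scores_per_sensor):
    """Average per-class scores across sensors; return (label, fused scores)."""
    n_classes = len(scores_per_sensor[0])
    fused = [sum(s[c] for s in scores_per_sensor) / len(scores_per_sensor)
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__), fused

rgb   = [0.20, 0.80]   # [no_pedestrian, pedestrian]; RGB weakest at night
ir    = [0.05, 0.95]   # thermal camera
radar = [0.10, 0.90]   # micro-Doppler radar
label, fused = fuse([rgb, ir, radar])
```

Late fusion is attractive here because any one sensor can degrade (e.g. RGB at night) while the averaged decision stays robust.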
Drowsy Driver Detection Using Deep Learning and Multi-Sensor Data Fusion
- Description: Fuses webcam video (eye blink, yawn) and micro-Doppler radar data for real-time driver alert systems (> 95% accuracy).
- Reference: H. Kulhandjian, N. Martinez, and M. Kulhandjian, “Drowsy Driver Detection Using Deep Learning and Multi-Sensor Data Fusion,” Proc. IEEE Vehicle Power and Propulsion Conference (VPPC), pp. 1–6, Sept. 2022. doi:10.1109/VPPC55846.2022.10003386.
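A widely used building block for camera-based blink detection is the eye aspect ratio (EAR) computed from six eye landmarks; it drops sharply when the eye closes. This is a generic illustration of that metric, not necessarily the feature the paper uses, and the landmark coordinates are invented.

```python
import math

def eye_aspect_ratio(eye):
    """EAR over six landmarks p1..p6: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).

    High EAR -> eye open; a sustained low EAR across frames suggests a
    prolonged closure, one cue a drowsiness detector can act on.
    """
    d = math.dist
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

# Invented landmark sets (x, y) for an open and a nearly closed eye.
open_eye   = [(0, 0), (1, 2.0), (2, 2.0), (3, 0), (2, -2.0), (1, -2.0)]
closed_eye = [(0, 0), (1, 0.3), (2, 0.3), (3, 0), (2, -0.3), (1, -0.3)]
```

In practice the EAR stream would be thresholded over a window of frames and fused with the radar cue before triggering an alert.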