Learn more about me
My journey started in the suburbs of Saida, a western town in Algeria. My childhood was much like that of any other kid in the area: playing and school. In high school, I was introduced to a computer for the first time. I loved it and decided to pursue my studies learning about this magical box! I earned a bachelor's and a master's degree in computer science, then received a scholarship to pursue a PhD at Beihang University in China.
My research focuses on the Semantic Web, knowledge graphs, machine learning, deep learning, and NLP for Arabic. Besides research, I enjoy coding in Python and Java, building predictive models, and learning new technologies.
On the side, I like reading religious books, playing football, recording recitations, and having fruitful debates.
Check My Resume
Detail-oriented professional with 2+ years of experience and proven knowledge of machine learning, data analysis, and predictive modeling.
Beihang University, Beijing, China
My research focused on the Arabic knowledge graph and led to four publications. This opportunity allowed me to strengthen my research skills and learn how to collaborate efficiently.
University of Liège, Liège, Belgium
This program allowed me to discover how higher education works in Europe. I took classes in advanced machine learning, robotics, and bioinformatics.
Beihang University, Beijing, China
I was introduced to Chinese culture through language learning. I took courses in reading, listening, speaking, and writing, and my overall grade was an A.
Djillali Liabes University, Sidi Bel Abbès, Algeria
I learned a great deal about the different aspects of security that touch on networks, databases, and operating systems. My research thesis was about extracting structured data from HTML pages and converting it to RDF.
Dr. Tahar Moulay University, Saida, Algeria
My first introduction to computers. I took many introductory and advanced courses in math and computer science, and ended up writing my first thesis, on packet sniffers in a local network.
ControlExpert, Beijing, China
Beihang University, Beijing, China
I worked at Prof. Zhoujun Li's lab as a researcher on the Semantic Web and knowledge graphs. My research project was the construction of an Arabic Knowledge Graph (AKG): I identified the challenges facing its creation and the opportunities it offers. We created a small knowledge graph and built two tools that allow practitioners and data owners to transform their data into RDF and share it with the world. The applications of the AKG are countless: it can be used to create semantics-based Arabic search engines, build smarter Arabic AI applications, and much more.
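As a rough illustration of the kind of transformation those tools perform, here is a minimal sketch using Python's rdflib; the namespace, properties, and data are hypothetical examples, not the AKG's actual schema.

```python
# Minimal sketch: turning a plain record into RDF triples with rdflib.
# The ex: namespace and its properties are hypothetical, not the AKG schema.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/akg/")

g = Graph()
g.bind("ex", EX)

scholar = URIRef(EX["Ibn_Khaldun"])
g.add((scholar, RDF.type, EX.Scholar))
g.add((scholar, RDFS.label, Literal("ابن خلدون", lang="ar")))
g.add((scholar, EX.bornIn, EX["Tunis"]))

print(g.serialize(format="turtle"))  # shareable RDF output
```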
Symbio, Beijing, China
Download my full resume
My Projects
A highly collaborative project that aims to increase the value CE gets from its gathered structured data. I work with the team to analyze the data and extract a data model for building a graph database with Neo4j. This database will let CE explore the hidden connections between claims, car drivers, reporters, investigators, and so on, and eventually uncover fraud.
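For illustration, a query over such hidden connections might look like the sketch below, using the official Neo4j Python driver; the node labels and properties are assumptions, not the actual CE data model.

```python
# A minimal sketch using the official Neo4j Python driver (pip install neo4j).
# Node labels and properties (Claim, Driver, Phone) are illustrative assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Find pairs of claims filed by different drivers who share a phone number --
# the kind of hidden connection that can hint at organized fraud.
QUERY = """
MATCH (c1:Claim)-[:FILED_BY]->(d1:Driver)-[:USES_PHONE]->(p:Phone)
      <-[:USES_PHONE]-(d2:Driver)<-[:FILED_BY]-(c2:Claim)
WHERE d1 <> d2
RETURN c1.id AS claim_a, c2.id AS claim_b, p.number AS shared_phone
"""

with driver.session() as session:
    for record in session.run(QUERY):
        print(record["claim_a"], record["claim_b"], record["shared_phone"])
driver.close()
```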
This project's goal is to solve a part-detection issue that prevents us from delivering accurate results on the damaged parts. We also aim to quantify the damage, so I started working on both car damage and car part segmentation. The initial results seem promising.
I lead the development of a brand-new model to detect the damaged parts after a car accident. This predictive model will be integrated into our risk-control service to improve our accuracy. The first step toward such a goal is data, so I collect it, check it for completeness, clean it, upload the images to our AWS cloud, and update the labeling MongoDB database.
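The ingestion step might look roughly like this sketch, assuming boto3 and pymongo; the bucket, database, and field names are hypothetical.

```python
# A minimal sketch of the ingestion step; bucket and collection names are
# hypothetical placeholders, not the real infrastructure.
import os
import boto3
from pymongo import MongoClient

s3 = boto3.client("s3")
images = MongoClient("mongodb://localhost:27017")["labeling"]["images"]

def ingest(image_path: str, bucket: str = "ce-damage-images") -> None:
    """Upload one cleaned image to S3 and register it for labeling."""
    key = os.path.basename(image_path)
    s3.upload_file(image_path, bucket, key)   # push to the AWS cloud
    images.update_one(                        # track it in MongoDB
        {"_id": key},
        {"$set": {"s3_key": key, "status": "unlabeled"}},
        upsert=True,
    )
```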
Technically, decisions hinge on the nuances of thresholds, so a well-chosen threshold can make a significant impact. In this project, I continuously work on optimizing our thresholds for better decision making. Since a decision based on several images is better than one based on a single image, I used TensorFlow and Optuna to build an aggregation model and run an extensive hyper-parameter search for the optimal values (a 10-15% accuracy boost).
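The Optuna side of that search might look like the sketch below; synthetic scores stand in for the real TensorFlow model outputs, and the mean aggregation is just one plausible choice.

```python
# A minimal sketch of the threshold search with Optuna. The scores and labels
# are synthetic stand-ins for the real per-image model outputs.
import numpy as np
import optuna

rng = np.random.default_rng(0)
scores = rng.random((500, 4))        # 4 images per case (assumed)
labels = rng.integers(0, 2, 500)     # ground-truth decision per case

def objective(trial: optuna.Trial) -> float:
    threshold = trial.suggest_float("threshold", 0.0, 1.0)
    # Aggregate over a case's images before deciding (here: mean score).
    decisions = scores.mean(axis=1) > threshold
    return float((decisions == labels).mean())  # accuracy to maximize

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params, study.best_value)
```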
At CE, we recognized that slow data labeling was a bottleneck, so I built a labeling tool that uses one of our internal object detection models to automatically pre-label our images; human labelers then check the results and amend them where necessary. The tool reached an accuracy of 80% and saved us, by our estimates, over 400 hours of human labor.
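The pre-labeling loop might look roughly like this; `detect` is a hypothetical stand-in for the internal model, and the schema fields are assumptions.

```python
# A rough sketch of the pre-labeling loop. `detect` is a hypothetical
# placeholder for one of CE's internal object detection models.
from pymongo import MongoClient

images = MongoClient("mongodb://localhost:27017")["labeling"]["images"]

def detect(image_path):
    # Placeholder: the real model returns (class, x, y, w, h, confidence) boxes.
    return [("bumper", 10, 20, 200, 80, 0.91)]

def prelabel(image_path, key):
    # Keep only confident predictions; humans review and amend the rest.
    boxes = [b for b in detect(image_path) if b[-1] > 0.5]
    images.update_one(
        {"_id": key},
        {"$set": {"boxes": boxes, "status": "needs_review"}},
    )
```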
Our labeling tool uses a MongoDB database to store its data. I am responsible for keeping the database up to date, adding and removing data, fixing any issues that arise, and automating the generation of labeling statistics for our human experts. I also write the scripts the team uses to generate data for model training. In addition, I was one of two members responsible for checking the quality of our human experts' work until August 2019.
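A stats script of that kind might be as simple as the sketch below, using pymongo; the field names are assumptions about the schema.

```python
# A minimal sketch of a labeling-stats script with pymongo; the `labeler`
# and `status` fields are assumed, not the tool's actual schema.
from pymongo import MongoClient

images = MongoClient("mongodb://localhost:27017")["labeling"]["images"]

# Count images per labeler and per status in one aggregation pass.
pipeline = [
    {"$group": {
        "_id": {"labeler": "$labeler", "status": "$status"},
        "count": {"$sum": 1},
    }},
    {"$sort": {"count": -1}},
]
for row in images.aggregate(pipeline):
    print(row["_id"], row["count"])
```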
My Publications
Authors: Abdillah Mohamed, Li Zhang, Jing Jiang, Ahmed Ktob
Published: Dec 2018 in 22nd Asia-Pacific Software Engineering Conference (APSEC)
Authors: Ahmed Ktob, Zhoujun Li
Published: Jul 2018 in International Journal on Semantic Web and Information Systems (IJSWIS)
Authors: Ahmed Ktob, Zhoujun Li
Published: 2017 in 11th IEEE International Conference on Semantic Computing (ICSC)
DOI: 10.1109/ICSC.2017.22
Authors: Ahmed Ktob, Zhoujun Li, Djelloul Bouchiha
Published: Oct 2017 in IEEE 3rd International Conference on Collaboration and Internet Computing (CIC)
Contact Me
92 El-Kadissia, Ain El-Hadjar, Saida 20001, Algeria
ktobah@gmail.com | ktobah@buaa.edu.cn
+213 667334948 | +86 13426054970