2025-2026 Projects

First-year scholars are tackling diverse challenges such as leveraging generative models to protect machine learning frameworks, enhancing security in AI-based healthcare systems, and developing human-AI teaming solutions for behavioral health. Guided by our expert faculty, these research projects aim to push the boundaries of computer science, ensuring technological advancements are both innovative and secure.

Explore the future of computing through research at KSU and its profound impact on today's fields!


Computer Science (Zongxing Xie)

Exploring COTS Wireless Sensors for Smart Health Applications

  • In this project, we will use commercial off-the-shelf wireless sensors to collect data about the physical environment in which human subjects perform daily activities. The idea is that wireless signals interact with and bounce off the physical environment, including human subjects, so different movements result in different signal patterns.

    We will analyze the collected data to discover data patterns and understand their correlations with human activities using AI tools, such as machine learning, deep learning, and data visualization. We will investigate the opportunities and challenges in leveraging the discovered data patterns and correlations for healthcare applications, such as sleep quality assessment, gait pattern analysis, fall detection and fall risk prediction.

  • Students will develop research skills, such as surveying literature, identifying problems, designing experiments, analyzing results, and presenting findings and insights.

    Students will gain hands-on experience with sensor systems.

    Students will gain hands-on experience with cutting-edge AI technologies.

    Students will develop a good understanding of methodologies for conducting scientific research.

    Students will develop teamwork and leadership skills from this project.

  • Students should meet with the advisor weekly, during which students should communicate their progress with the advisor.

    Students are expected to summarize their results and the status of the planned action items. 

    Students are expected to discuss and identify the planned action items (e.g., literature review, ideation, experiment design, algorithm implementation, evaluation and analysis) for the next week.

    Students are expected to reach milestones in a timely manner.

  • Hybrid
  • Dr. Zongxing Xie, zxie1@kennesaw.edu 
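
As a flavor of the analysis described above, here is a minimal sketch, in plain Python, of classifying an activity from wireless-signal features with a nearest-centroid rule. The feature values, centroids, and activity labels are invented for illustration; a real system would extract richer features (e.g., spectral energy) from CSI/RSSI streams and learn the centroids from labeled recordings.

```python
# Toy activity classification from wireless-signal features.
from statistics import mean

def extract_features(signal):
    """Crude features: mean signal level and average absolute change."""
    avg = mean(signal)
    motion = mean(abs(a - b) for a, b in zip(signal, signal[1:]))
    return (avg, motion)

def nearest_centroid(feats, centroids):
    """Assign the label whose centroid is closest in feature space."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(centroids, key=lambda label: dist(feats, centroids[label]))

# Hypothetical centroids, standing in for ones learned from data.
centroids = {"still": (0.0, 0.05), "walking": (0.0, 0.8)}

calm = [0.01, -0.02, 0.03, 0.0, -0.01, 0.02]    # low fluctuation
active = [0.5, -0.6, 0.7, -0.4, 0.6, -0.5]      # large swings

print(nearest_centroid(extract_features(calm), centroids))    # -> still
print(nearest_centroid(extract_features(active), centroids))  # -> walking
```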

Computer Science (Yong Shi)

Quantum Machine Learning for Cybersecurity and Science & Engineering Data

  • Machine Learning is known to provide solutions for data analysis and interpretation, and it is used in fields as varied as computer vision, malware detection, and drug discovery. However, traditional machine learning approaches struggle to extract useful information from large data sets, as they require tremendous time and resources when run on classical computers. Quantum computing is a qubit-based computing paradigm that exploits quantum properties such as superposition, interference, and entanglement for data processing and other tasks. It can be applied to problems that traditional supercomputers cannot handle efficiently. Quantum computing can be combined with machine learning for faster computation and more accurate data analysis, and Quantum Machine Learning (QML) has recently gained considerable attention from both academia and industry.

    Two of the most important applications of QML are cybersecurity and the analysis of data from various science and engineering fields, such as biology and industrial engineering. This project will begin with learning Machine Learning and Quantum Computing, followed by the development of a system in Python (built in Google Colab) that uses the TensorFlow Quantum package to apply QML algorithms to (1) security and malware data sets and (2) science and engineering data sets, comparing performance with classical Machine Learning (CML) algorithms. In this project, the students will also conduct research on various quantum platforms such as the Microsoft Quantum Development Kit, IBM Qiskit, Google AI Quantum Cirq, PennyLane, and more.

  • Major Work, Milestones and Expected Outcome:

    Stage 1. Basic understanding of Quantum Computing and Machine Learning 

    Stage 2. Source code and demo for a system that applies QML algorithms to (1) security and malware data sets and (2) science and engineering data sets and compares performance with CML algorithms, plus a Google site hosting the hands-on learning modules for your application.

    For the Google site, each module should have a pre-lab, lab, and post-lab.
    Please refer to https://sites.google.com/view/blockchainlabware for examples of such modules.

    Suggested modules (students can revise, after discussion with the professor) are:

    • Quantum Neural Network (QNN) / deep learning model
    • Quantum Random Forest Classifier
    • Quantum Support Vector Machine (QSVM)
    • Quantum Principal Component Analysis (QPCA)
    • Quantum Logistic Regression
    • Quantum Bayesian Network
    • Quantum Decision Tree Classifier
    • More

    The large security and engineering data sets used for those modules can be for:

    • DoS prevention
    • Malware detection
    • Financial fraud detection
    • Ransomware prevention and detection
    • User behavior anomaly detection
    • Spam email filtering
    • Website phishing prevention and detection
    • Predicting product backorders
    • Quality prediction and inspection
    • Patient flow analysis and optimization
    • Energy consumption estimation
    • Maintenance action recommendation
    • More

  • The students will perform:

    Task 1: Conduct a research survey on various computing environments for our approach
    Task 2: Gather real-world data sets to test our approach
    Task 3: Implement our algorithm and conduct experiments
    Task 4: Collaborate with the advisor on a research paper
    Task 5: Help the advisor prepare grant proposals for NSF programs

  • Hybrid
  • Dr. Yong Shi, yshi5@kennesaw.edu 
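
To make the quantum properties named above concrete, here is a minimal statevector sketch in plain Python: a Hadamard gate puts one qubit in superposition, and a CNOT then entangles it with a second qubit, producing the Bell state (|00> + |11>)/sqrt(2). This is a hand-rolled illustration only; the project itself would use platforms like Qiskit, Cirq, or TensorFlow Quantum.

```python
# Hand-rolled 2-qubit statevector demo of superposition + entanglement.
import math

def apply_hadamard_q0(state):
    """Hadamard on qubit 0 of a 2-qubit state [a00, a01, a10, a11]."""
    h = 1 / math.sqrt(2)
    a00, a01, a10, a11 = state
    return [h * (a00 + a10), h * (a01 + a11),
            h * (a00 - a10), h * (a01 - a11)]

def apply_cnot(state):
    """CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes."""
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]

state = [1.0, 0.0, 0.0, 0.0]          # start in |00>
state = apply_cnot(apply_hadamard_q0(state))

print([round(a, 3) for a in state])   # [0.707, 0.0, 0.0, 0.707]
```

Measuring either qubit of this state instantly determines the other, which is the entanglement property QML algorithms exploit.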

Computer Science (Asahi Tomitaka)

Nanoparticle Optimization Using AI for Therapy and Biosensors

  • Nanomedicine, which uses nanoparticles for therapeutic and diagnostic applications, has emerged with great potential to improve therapeutic and diagnostic efficacy. Among the various nanoparticles developed for nanomedicine, magnetic nanoparticles have attracted attention due to their unique magnetic properties, which can be applied to cancer thermal therapy and to disease diagnostics using imaging modalities and biosensors. Although magnetic nanoparticles have shown promising properties for these applications, the constraints imposed by complex biological systems hinder their performance and delay the translation of this promising technology. To overcome this challenge, this project proposes to develop artificial intelligence (AI) models that output optimal design spaces and properties of magnetic nanoparticles under those constraints. Furthermore, this project will build a prototype magnetic sensor to detect magnetic nanoparticles for diagnostic applications.
  • The student who performs the AI model task will learn AI model development and analysis using Python programming for biomedical applications.

    The student who performs the sensor task will learn sensor construction and the processing of sensor-generated signals.

  • The student will conduct literature review, work on AI model/sensor construction, and analysis.
  • Hybrid
  • Dr. Asahi Tomitaka, atomitak@kennesaw.edu 

Computer Science (Ahyoung Lee & Hoseon Lee)

AI/ML-based Low-Power Long-Range Wide-Area Network Management Systems for Water Quality Monitoring

  • This project focuses on developing energy-efficient and reliable communication frameworks to support continuous monitoring of water resources. Effective water quality management requires scalable networks capable of collecting and transmitting data from distributed sensors in both urban and remote environments. The project integrates artificial intelligence and machine learning (AI/ML) models to design adaptive algorithms that optimize energy use, enhance data transmission efficiency, and extend sensor network lifespan. To improve reliability, array antenna control and beamforming are applied, ensuring robust long-distance communication even under challenging conditions. A key innovation is the adoption of Low-Power Wide-Area Network (LPWAN) technologies, particularly Long Range Radio (LoRa), combined with edge cloud computing and software-defined networking (SDN). This integration supports rapid deployment, scalability, and Quality of Service (QoS) for monitoring applications while maintaining low energy consumption and cost-effectiveness. The system enables real-time monitoring of critical water quality indicators such as pH, turbidity, and dissolved oxygen. By providing continuous, reliable data, the proposed framework supports proactive decision-making, pollution detection, and sustainable water resource management, contributing to public health and environmental protection.
  • At the end of the project, students will be able to:

    1. Conduct high-impact research aimed at addressing critical problems in computer networks.
    2. Analyze and apply computer science and engineering approaches, with the ability to integrate software, hardware, communication systems, and network infrastructures into cohesive solutions.
    3. Critically read and evaluate academic papers in Computer Science to extract key insights, methodologies, and contributions.
    4. Prepare and deliver professional presentations that demonstrate a clear understanding of research papers, new tools, or emerging technologies.
    5. Assess and critique the results and limitations of published research, identifying gaps and opportunities for improvement.
    6. Design, implement, and document an original research project, culminating in the development of a scholarly paper suitable for academic or professional dissemination.
  • Students will conduct research in computer networks, with a particular focus on low-power, low-cost long-range radio networking and Internet-of-Energy technologies applied to water quality monitoring systems. Weekly activities may include developing algorithms, integrating sensors, and testing communication frameworks for measuring parameters such as pH, turbidity, and dissolved oxygen. Students are also expected to produce at least one manuscript submission targeting IEEE conferences or journals per semester, highlighting innovations in energy-efficient networking and environmental monitoring.
  • Hybrid
  • Dr. Ahyoung Lee, alee146@kennesaw.edu 

    Dr. Hoseon Lee, hoseon.lee@kennesaw.edu 
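
The energy/airtime trade-off behind LoRa's low-power design can be made concrete with the standard Semtech time-on-air formula. The sketch below computes packet airtime for two spreading factors; the parameter choices (SF7 vs. SF12, 125 kHz bandwidth, 20-byte payload) are illustrative defaults, not the project's actual configuration.

```python
# Back-of-envelope LoRa time-on-air, following the standard Semtech
# airtime formula (symbol time = 2^SF / BW).
import math

def lora_airtime_ms(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                    preamble_syms=8, crc=1, implicit_header=0,
                    low_dr_opt=0):
    t_sym = (2 ** sf) / bw_hz                     # symbol duration (s)
    t_preamble = (preamble_syms + 4.25) * t_sym
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * crc - 20 * implicit_header
    den = 4 * (sf - 2 * low_dr_opt)
    n_payload = 8 + max(math.ceil(num / den) * (cr + 4), 0)
    return 1000 * (t_preamble + n_payload * t_sym)

print(round(lora_airtime_ms(20, sf=7), 1))   # short airtime, fast rate
print(round(lora_airtime_ms(20, sf=12), 1))  # same packet, ~20x longer
```

Since transmit time dominates a sensor node's energy budget, adaptive algorithms like those planned in this project would pick the lowest spreading factor the link quality allows.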

Computer Science (Md Abdullah Al Hafiz Khan, Abdul Muntakim, Abm. Adnan Azmee, & Francis Nweke)

SymbioticRAG: Human-LLM Collaboration Framework for Trustworthy Retrieval

  • Artificial intelligence, particularly Large Language Models (LLMs), plays an important role in improving decision-making in real-world applications such as question answering and classification. An LLM embeds an extensive amount of knowledge and responds to human instructions/prompts. This human-AI teaming partnership yields better decision-making outcomes in complex information environments. A Retrieval-Augmented Generation (RAG) system uses LLM knowledge to retrieve requested information from external data. However, the quality and factual trustworthiness of the retrieved content remain challenging, requiring humans on the team for better alignment and validation. In this project, we plan to develop a framework that teams humans with the RAG system to enhance retrieval performance and improve the quality of the retrieved information. We plan to develop a two-tiered human-machine collaboration: (i) interactive human curation of retrieved data, and (ii) personalized retrieval models that adapt to user behavior. In this project, students will configure and experiment with pre-built RAG pipelines (e.g., LlamaIndex, Haystack) to investigate and analyze how human-in-the-loop review can improve retrieval quality and reduce hallucinations.
    • Develop skills in reading and synthesizing scientific literature on AI and RAG.
    • Gain real-world experience configuring and evaluating retrieval-based AI systems.
    • Learn experimental design, parameter tuning, and evaluation methodology.
    • Improve communication and scientific writing skills.
    • Present research findings at a poster session or technical seminar.
    • Read papers, technical articles, and framework documentation.
    • Run pre-configured RAG pipelines with the provided datasets.
    • Adjust retrieval and prompt parameters; run controlled experiments.
    • Annotate and evaluate retrieved results for accuracy and trustworthiness.
    • Meet weekly (in person or virtually) to report progress and discuss findings.
  • Hybrid
  • Dr. Md Abdullah Al Hafiz Khan, mkhan74@kennesaw.edu 

    Abdul Muntakim, amuntaki@students.kennesaw.edu 

    Abm. Adnan Azmee, aazmee@students.kennesaw.edu 

    Francis Nweke, fnweke@students.kennesaw.edu 
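
A minimal sketch of the retrieval stage students would experiment with: score documents by word overlap with the query and return the best match. The corpus and query are invented, and real pipelines (LlamaIndex, Haystack) use BM25 or dense embeddings rather than this toy scoring; the point is that each retrieved result is exactly the kind of artifact a human reviewer would then curate for accuracy.

```python
# Toy retrieval: rank documents by word overlap with the query.
def retrieve(query, docs, k=1):
    """Return the k docs sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "LoRa enables long range low power links",
    "Retrieval augmented generation grounds LLM answers in documents",
    "Adversarial patches fool image classifiers",
]
print(retrieve("how does retrieval help an LLM", docs))
```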

Computer Science (Arthur Choi)

Towards Bounding the Behavior of Neural Networks

  • Recent and rapid advances in Artificial Intelligence (AI), particularly in the form of deep neural networks, have opened many new possibilities, but they have also brought many new challenges. In particular, it has become increasingly apparent that while deep neural networks are highly performant, they can also be opaque and brittle. We do not have enough understanding of why and when they work well, and why they may fail completely when faced with new situations not seen in the training data.

    Over the past few years, our research group has developed a symbolic approach to explaining and formally verifying the behavior of machine learning models.  More recently, we have developed a compilation algorithm that can translate a neuron into a logical representation that facilitates the analysis of its behavior.  Our algorithm initially provides a loose bound on the behavior of a neuron, which incrementally tightens through continued compilation.  Our goal now is to propagate these bounds across a network of neurons, in order to bound the behavior of a neural network.  Our approach facilitates the analysis of a neural network, helping us to understand its behavior, and in turn, providing directions towards learning better models.

  • Real-world experience working with (and developing) AI/ML models and tools, in preparation either for research in a graduate program (PhD) or for research and practice in AI/ML fields in industry.
    • AI/ML coding
    • AI/ML modeling and development
    • Reading papers
  • Face-to-Face
  • Dr. Arthur Choi, achoi13@kennesaw.edu 
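
To illustrate the general idea of bounding a network's behavior, here is a textbook interval-bound-propagation sketch through one linear + ReLU layer: given a box of possible inputs, it computes a box guaranteed to contain every possible output. This is a generic simplification, not the compilation algorithm the group has developed; the weights and input box are invented.

```python
# Interval bound propagation (IBP) through a linear + ReLU layer.
def linear_bounds(lo, hi, weights, bias):
    """Interval image of x -> Wx + b over the input box [lo, hi]."""
    out_lo, out_hi = [], []
    for w_row, b in zip(weights, bias):
        # A positive weight pulls from the lower input bound for the
        # output's lower bound, and vice versa for negative weights.
        l = b + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(w_row))
        u = b + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(w_row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

W = [[1.0, -2.0], [0.5, 1.0]]
b = [0.0, -1.0]
lo, hi = linear_bounds([-1.0, -1.0], [1.0, 1.0], W, b)
lo, hi = relu_bounds(lo, hi)
print(lo, hi)   # every true output of this layer lies in this box
```

Repeating this layer by layer bounds the whole network, though the intervals loosen with depth, which is why tighter, incrementally refined bounds (as in the project above) matter.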

Computer Science (Kazi Aminul Islam & Md Abdullah Al Hafiz Khan)

Development of Trustworthy Machine Learning Models

  • With the advancement of Artificial Intelligence (AI) technology, AI is used in every part of our lives, e.g., remote sensing, healthcare, finance, security systems, autonomous vehicles, cybersecurity, transportation systems, text generation, and human-AI teaming. While AI has achieved state-of-the-art performance in these domains, its black-box nature remains a critical challenge. Malicious actors can use adversarial attacks to access and manipulate AI-based systems: attackers can craft adversarial examples at inference time to induce incorrect predictions, or mount model poisoning attacks to compromise the model and expose confidential data. To address these limitations, we must enhance trust in machine learning models.

    In this project, students will create trustworthy machine learning frameworks that make models secure, explainable, confidential, and private. Students will explore potential security vulnerabilities primarily in AI models, though the scope may extend as research findings warrant. Students will read research papers, government websites, newspapers, and white papers to support their hypotheses. Starting from a foundational level, students will learn and apply machine learning algorithms and adversarial attacks to demonstrate their hypotheses empirically.

    • Investigate cutting-edge AI technologies.
    • Learn machine learning algorithms and libraries, e.g., PyTorch and TensorFlow.
    • Develop scientific writing skills by reading, critically analyzing, and discussing key scientific conference papers with mentors.
    • Improve presentation skills by presenting research outcomes at conference/workshop venues.
    • Advance communication skills through regular meetings with mentors.
    • Collaborate with other graduate/undergraduate students.
    • Attend weekly scheduled meetings in person/online, present weekly outcomes, and receive the mentor's feedback.
    • Read research papers and summarize the findings in a scientific paper format.
    • Learn machine learning libraries, e.g., PyTorch and TensorFlow.
    • Execute and modify (if needed) sample ML/AI source code, then generate and analyze results.
    • Implement algorithms and execute experiments to get hands-on experience in cutting-edge research.
    • Assist the mentor in writing research papers or grant submissions.
  • Hybrid
  • Dr. Kazi Aminul Islam, kislam4@kennesaw.edu 

    Dr. Md Abdullah Al Hafiz Khan, mkhan74@kennesaw.edu 
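
As a first taste of the adversarial attacks mentioned above, here is a sketch of the classic fast gradient sign method (FGSM) applied to a hand-written logistic model. The weights, input, and epsilon are invented, and real experiments would attack a trained PyTorch/TensorFlow network; the mechanics (perturb the input in the sign of the loss gradient) are the same.

```python
# FGSM adversarial example against a tiny logistic model.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, w, b, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. x.

    For logistic loss, dL/dx_i = (p - y) * w_i.
    """
    p = predict(x, w, b)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1                  # true label is 1
x_adv = fgsm(x, y, w, b, eps=0.5)

# A small, targeted perturbation noticeably degrades the prediction.
print(round(predict(x, w, b), 3), round(predict(x_adv, w, b), 3))
```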

Computer Science (Xinyue Zhang)

Toward Private, Fair, and Efficient LLMs on Edge Devices

  • Over the past decade, artificial intelligence (AI) has moved from narrow pattern recognition to systems that can understand and generate natural language at a human-like level. Large language models (LLMs) now power search, writing assistants, accessibility tools, and voice interfaces, and are rapidly being embedded in phones, wearables, and home devices. This shift promises faster, more personal help. For example, LLMs can help us summarize notes on a phone, draft messages, or guide health and study routines. However, there are several critical concerns when applying LLMs. 1) The biggest concern is privacy. On-device assistants may see sensitive texts, locations, or health logs. If data are sent to the cloud or stored unnecessarily, users' privacy can be exposed (e.g., a student's journal or a caregiver's reminders). 2) Models can behave inconsistently across dialects, accents, reading levels, or cultural contexts, for example mishearing a regional accent or giving different-quality explanations to different user profiles. 3) LLMs are large; on smaller devices, they can introduce lag, memory pressure, and battery drain, especially on older hardware.

    This project will develop and evaluate practical methods that make on-device LLMs private, fair, and efficient. Under guidance, first-year scholars will (1) design privacy-by-default settings; (2) conduct structured fairness checks using representative prompts and voices (e.g., varied dialects and reading levels), then apply targeted mitigations (prompt refinements, small safety filters, curated examples) to reduce unequal behavior; and (3) improve efficiency through resource-aware choices, including compact models, model compression to lower latency, memory use, and energy consumption. Work will be carried out on affordable edge hardware (smartphones or small single-board computers such as Raspberry Pi). The goal is to investigate responsible edge LLM features that respect user data, behave consistently across users, and operate smoothly within device constraints.

    1. Apply fundamental and disciplinary concepts and methods that support the research project.
    2. Attain the ability to identify, analyze, and solve problems creatively.
    3. Investigate the cutting-edge LLM models.
    4. Propose solutions to protect users' privacy.
    5. Learn the principles of academic writing and research presentation skills.
    6. Collaborate with other graduate and undergraduate students with effective oral and written communication.
    1. Weekly meeting and updates.
    2. Implement the state-of-the-art LLM models.
    3. Develop the features to protect users' privacy, mitigate bias, and improve efficiency.
    4. Prepare presentations for the literature review or key findings in the project.
    5. Final research project reports and poster presentation in the annual symposium.
  • Hybrid
  • Dr. Xinyue Zhang, xzhang48@kennesaw.edu 
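
One of the resource-aware choices mentioned above, model compression, can be illustrated with symmetric 8-bit weight quantization, which shrinks weight storage roughly 4x versus float32. The values below are invented, and real on-device stacks use richer per-channel or grouped schemes; this is only a sketch of the core idea.

```python
# Symmetric int8 quantization of a weight vector.
def quantize_int8(weights):
    """Map floats to int8 codes plus one float scale per tensor."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

w = [0.42, -1.27, 0.003, 0.9]
codes, scale = quantize_int8(w)
restored = dequantize(codes, scale)

max_err = max(abs(a - b) for a, b in zip(w, restored))
print(codes)
print(round(max_err, 4))   # reconstruction error is bounded by scale/2
```

The trade-off students would measure is exactly this: smaller, faster weights versus the rounding error they introduce into model outputs.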

Computer Science (Michail Alexiou)

Attacking and Defending AI Models for Image Understanding on Small Devices

  • Deep learning (DL) models have become central to many vision-based applications, from facial recognition and autonomous driving to medical imaging. However, as these models are increasingly deployed on edge computing platforms such as IoT devices and microcontrollers, new challenges arise regarding both efficiency and security. Unlike large-scale cloud-based systems, edge devices are constrained by limited computing power, memory, and energy availability, which creates a need to deploy lightweight DL architectures for processing. At the same time, adversarial attacks, such as adversarial patches, pose a unique risk in such scenarios, as they can be applied to physical objects and remain effective across different viewpoints and lighting conditions. 

    This combination of resource constraints and physical-world vulnerabilities creates an urgent need for new methods to evaluate and improve the robustness of lightweight models operating in edge environments, a challenge that will be studied in the current First-Year Scholars project. To explore the challenges of this setup, the student(s) will develop and deploy lightweight Convolutional Neural Networks (CNNs) models on an Arduino device using TensorFlow Lite. The project will also focus on the generation of physical adversarial patches, produced from both lightweight and standard CNN models, and evaluate their transferability across different model architectures running on the Arduino device. Beyond attack analysis, the project will also investigate simple defenses that can improve resilience without exceeding the strict computational limitations of edge hardware. 

    1. Learn to train and evaluate lightweight deep learning models for edge devices.
    2. Gain hands-on experience with adversarial attack and defense techniques on ML and hardware.
    3. Gain practical skills in deploying and testing models on Arduino hardware.
    4. Develop presentation skills and scientific research skills.
    1. Review research papers on adversarial attacks, defenses, and lightweight models for edge computing.
    2. Train and evaluate lightweight and standard deep learning object detection models on resource-constrained edge hardware.
    3. Generate and test printed adversarial patches to study real-world attack transferability.
    4. Document results and prepare progress updates for weekly meetings.
    5. Prepare project poster presentation and demo for the annual symposium.
  • Hybrid
  • Dr. Michail Alexiou, malexiou@kennesaw.edu 
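
To give a feel for the patch attacks described above, here is a toy illustration of pasting a patch onto an image array and measuring its effect on a stand-in linear "model". The image, patch, and weights are all invented, and a real study would optimize the patch against a trained CNN and print it physically; the sketch only shows why a small localized region can dominate a model's score.

```python
# Toy adversarial-patch application on a 2-D image.
def apply_patch(image, patch, top, left):
    """Overwrite a rectangular region of a 2-D image with the patch."""
    out = [row[:] for row in image]
    for i, prow in enumerate(patch):
        for j, val in enumerate(prow):
            out[top + i][left + j] = val
    return out

def score(image, weights):
    """Linear score standing in for a classifier logit."""
    return sum(w * v for row_w, row_v in zip(weights, image)
               for w, v in zip(row_w, row_v))

image   = [[0.1] * 4 for _ in range(4)]
weights = [[1.0] * 4 for _ in range(4)]
patch   = [[5.0, 5.0], [5.0, 5.0]]

clean = score(image, weights)
patched = score(apply_patch(image, patch, 1, 1), weights)
print(clean, patched)   # the small patch shifts the score sharply
```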

Computer Science (M. Rasel Mahmud)

FallGuard: Predicting and Preventing Fall Risk in Immersive Environments

  • Our main goal is to develop a novel framework for real-time fall prediction and prevention in virtual reality (VR) environments using only headset-integrated sensors. Unlike traditional fall detection systems that rely on wearable devices or external sensors, our system will use body motion and position data from regular VR headsets and controllers to predict when a person is starting to lose balance and then give multimodal feedback (auditory, visual, and haptic) to help prevent a fall.
    1. Develop virtual reality projects using Unity and C# with the PhD students
    2. Work on developing advanced AI techniques with the PhD students
    1. Students will engage in developing a project in virtual reality
    2. Virtual reality headsets will be provided, and students will test their projects to see if they work in the virtual environments
    3. The advisor and PhD/MS students in the lab will help students with their work
  • Hybrid
  • Dr. M. Rasel Mahmud, mmahmud2@kennesaw.edu 
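
A stripped-down sketch of detecting loss of balance from headset motion alone: flag an alert when lateral head-sway velocity exceeds a threshold. The traces, sample rate, and threshold are invented, and the project itself plans learned AI models rather than a fixed rule; this only shows the shape of the signal-to-alert pipeline.

```python
# Toy balance alert from a headset's lateral-position trace.
def sway_velocities(positions, dt):
    """Speed of lateral head movement between consecutive samples."""
    return [abs(b - a) / dt for a, b in zip(positions, positions[1:])]

def balance_alert(positions, dt=0.1, threshold=0.5):
    """True if any instantaneous sway velocity exceeds the threshold."""
    return any(v > threshold for v in sway_velocities(positions, dt))

steady  = [0.00, 0.01, 0.00, -0.01, 0.00]    # metres; small natural sway
tipping = [0.00, 0.02, 0.10, 0.25, 0.45]     # accelerating lateral drift

print(balance_alert(steady))    # False
print(balance_alert(tipping))   # True
```

In the actual system, an alert like this would trigger the auditory, visual, and haptic feedback described above.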

Data Science and Analytics (Dhrubajyoti Ghosh)

Detectability & Linguistic Signatures of AI- vs Human-Generated Fake News

  • This project will provide the student with the opportunity to contribute to a research study examining the classification of misinformation originating from both human and artificial intelligence (AI) sources. With the increasing prevalence of AI-generated text, particularly in the context of news media, there is a growing need to understand the extent to which AI-generated fake news differs from human-generated fake news in terms of linguistic characteristics and detection difficulty.

    The student will utilize publicly available, pre-annotated datasets comprising true and fake news articles produced by both human authors and generative AI systems. The primary objectives of the project are twofold: (i) to systematically compare the classification performance of machine learning models in detecting fake news across AI and human sources, and (ii) to identify the most discriminative linguistic and semantic features contributing to accurate classification within each source type.

    The student will implement a range of natural language processing (NLP) methodologies to preprocess text data, extract salient features, and apply both classical machine learning algorithms (e.g., logistic regression, random forests) and advanced deep learning models (e.g., transformer-based architectures). Furthermore, the student will apply model interpretability techniques, including SHAP (SHapley Additive exPlanations), to quantify feature importance and elucidate the linguistic signatures most indicative of misinformation across distinct generative sources. This project will provide comprehensive training in applied machine learning, interpretable AI methodologies, and critical analysis of textual data in the context of contemporary misinformation challenges.

    • Learn how to work with text data, including basic cleaning, formatting, and preparation of datasets for analysis.
    • Gain experience using R / Python programming tools for natural language processing (NLP), such as tokenization, word frequency analysis, and simple text feature extraction.
    • Become familiar with running basic machine learning models for classification and evaluating model accuracy.
    • Acquire skills in scientific paper writing and conference presentations.
    • Attend regular weekly research meetings to discuss progress, receive feedback, and set goals for upcoming tasks.
    • Review relevant background literature on misinformation detection, AI-generated text, and natural language processing techniques.
    • Conduct data preprocessing tasks, including cleaning, formatting, and organizing text data for analysis.
    • Implement and evaluate machine learning models using R / Python.
    • Document all procedures and findings in well-organized progress reports.
    • Prepare summary figures, tables, and visualizations for inclusion in the final publication or conference presentation.
    • Contribute to the drafting of a research poster and present findings at the Symposium of Student Scholars.
  • Hybrid
  • Dr. Dhrubajyoti Ghosh, dghosh3@kennesaw.edu 
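
On a toy scale, the feature-importance idea above can be sketched by scoring words by how much more often they appear in fake than in real headlines. The corpus is invented; the project would use real annotated datasets, full NLP pipelines, and SHAP rather than smoothed raw counts.

```python
# Toy linguistic-signature scoring: per-word fake/real frequency ratio.
from collections import Counter

fake = ["shocking secret cure doctors hate",
        "you will not believe this shocking trick"]
real = ["city council approves budget",
        "study finds modest effect of diet on sleep"]

def word_counts(texts):
    return Counter(w for t in texts for w in t.split())

f, r = word_counts(fake), word_counts(real)
vocab = set(f) | set(r)

# Add-one smoothing so unseen words don't divide by zero.
ratio = {w: (f[w] + 1) / (r[w] + 1) for w in vocab}
top = sorted(ratio, key=ratio.get, reverse=True)[:3]
print(top)   # words most indicative of the fake class in this toy data
```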

Data Science and Analytics (Amir Karami)

Understanding Human-Like AI and How It's Changing Health and Money

  • Have you ever thanked Siri or felt like a chatbot really got you? You're not alone, and this project is your chance to find out why! We're diving into the fascinating world of artificial intelligence (AI) and how people interact with it as if it were human, a behavior called anthropomorphism. As AI becomes more lifelike in the way it talks, responds, and behaves, people are starting to treat it like a friend, coach, or even therapist. This research explores how those human-like features, called affordances, influence the way we use AI in everyday life. We'll look closely at how these behaviors show up in real-world tools, especially in areas that really matter, like mental health apps and financial services. Why do people open up to AI about their feelings? Would you take money advice from a chatbot? Students should have a desire to learn and a willingness to work with analytical tools throughout the research process.
    • Conducting real-world research on how people interact with human-like AI
    • Exploring topics such as anthropomorphism, mental health apps, or AI finance tools
    • Collecting and analyzing data from social media and real-world AI chatbots
    • Identifying themes and patterns in user language and behavior
    • Performing literature reviews and understanding research ethics
    • Learning basic tools like Excel and R or Python
    • Writing and communicating research findings for both academic and general audiences
    • Participating in weekly mentoring sessions and collaborative team meetings
    • Developing critical thinking and analytical skills
    • Preparing for future presentations, conference posters, or co-authored publications
    • Building a foundation for careers or studies in communication, psychology, business, health, or data science
    • Attend a team meeting (in-person or virtual) to discuss progress, ask questions, and receive mentorship
    • Review real-world content such as social media posts, app reviews, and user comments related to AI tools
    • Identify and code patterns in language and behavior to analyze how people interact with human-like AI
    • Help organize data, track themes, and take notes that contribute to the larger research project
    • Search for and summarize academic articles related to AI, anthropomorphism, mental health, or finance
    • Write brief reflections or discussion posts to practice sharing insights in clear, thoughtful ways
    • Collaborate with peers and contribute to shared documents using tools like Google Docs 
    • Assist in creating slides or writing short content for potential presentations or research reports
    • Receive ongoing feedback and support from the faculty mentor
    • Contribute approximately 5-7 hours per week with flexible scheduling
  • Hybrid
  • Dr. Amir Karami, akarami@kennesaw.edu 

Data Science and Analytics (Muhammad Imran)

Artificial Intelligence for Cancer Detection: Processing Medical Images and Generating Clinical Reports

  • Cancer remains one of the most pressing health challenges of our time, and early detection is often the key to saving lives. Pathologists traditionally diagnose cancer by carefully examining histopathological slides under a microscope, but this process is time-consuming, labor-intensive, and sometimes subjective. In today's era of Artificial Intelligence (AI), we have an extraordinary opportunity to use computers to assist doctors by analyzing medical images faster and more consistently.

    This project invites first-year students to participate in an exciting and meaningful research experience where they will work with histopathological images, which are high-resolution pictures of tissue samples used to detect cancer. Students will learn how to process these images using modern AI tools, focusing on computer vision methods that allow machines to "see" and recognize cancerous patterns. They will also explore Natural Language Processing (NLP), which is the field of AI that enables computers to generate understandable written reports from complex data. The ultimate goal is to build systems that can not only detect cancerous regions in medical images but also generate clear, human-readable summaries to support doctors and patients.

    To ground the research in real-world impact, students will study multiple cancer types, such as breast cancer, colorectal cancer, lung cancer, and prostate cancer. For example, the BreakHis dataset provides nearly 8,000 breast cancer images across different magnifications; the NCT-CRC-HE-100K dataset includes over 100,000 colorectal cancer images; and the Cancer Genome Atlas (TCGA) hosts whole-slide images for lung and prostate cancers along with clinical data. Students will work with these publicly available resources, ensuring they gain hands-on experience with authentic medical datasets used by researchers worldwide.

    Resources will include high-performance computing systems, Python-based AI libraries (PyTorch, TensorFlow, Scikit-learn), and specialized medical imaging tools (3D Slicer, SimpleITK, etc.).

    By the end of this project, students will not only contribute to an important area of healthcare research but also develop transferable skills in data science, programming, image analysis, and scientific communication. This research combines three of today's most impactful areas (AI, medical image processing, and NLP), giving students a truly interdisciplinary experience at the forefront of innovation.

  • Students participating in this project will gain valuable technical, analytical, and professional skills. Specifically, they will:

    • Programming & Data Science: Learn the fundamentals of Python programming and how to use AI libraries (e.g., PyTorch, TensorFlow, Scikit-learn) for data analysis.
    • Medical Image Processing: Understand how to handle high-resolution histopathological images, preprocess data, and apply computer vision techniques for cancer detection.
    • Artificial Intelligence & Machine Learning: Gain exposure to supervised learning, convolutional neural networks (CNNs), and methods for model evaluation (accuracy, sensitivity, specificity).
    • Natural Language Processing (NLP): Learn how AI systems generate clinical-style text reports, bridging raw computational output with human-friendly language.
    • Ethics & Interdisciplinary Thinking: Reflect on the ethical challenges of AI in medicine, including privacy, bias, and the importance of human oversight.
    • Research Communication: Develop the ability to explain findings clearly through written reports, oral presentations, and visual posters.

    These outcomes are directly transferable to careers in healthcare, data science, computer science, and biomedical research. More importantly, students will leave the program with the confidence that they can tackle complex, real-world problems using computational methods, which is an empowering outcome for first-year scholars beginning their academic journey.
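    As a small, self-contained illustration of the evaluation metrics listed above (accuracy, sensitivity, specificity), the sketch below computes them from binary predictions. The labels are invented and the helper names are ours, not part of the project materials.

```python
# Toy sketch: evaluation metrics for a binary cancer classifier.
# Labels are assumed binary: 1 = malignant, 0 = benign.

def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy    = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall on malignant cases
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # recall on benign cases
    return accuracy, sensitivity, specificity

# Example with made-up labels:
acc, sens, spec = metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

    In medical settings, sensitivity (catching true cancers) and specificity (avoiding false alarms) often matter more than raw accuracy, which is why all three are tracked.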

  • Students will engage in weekly, hands-on research activities under faculty guidance. Typical duties will include:

    1. Learning and Training: Completing guided tutorials on Python, AI libraries, and image processing methods.
    2. Data Handling: Downloading, organizing, and preprocessing histopathological image datasets (e.g., resizing, normalization, and annotation review).
    3. AI Model Development: Training and testing machine learning models for cancer detection (e.g., distinguishing benign vs. malignant breast tumors using BreakHis data).
    4. NLP Report Generation: Assisting in building simple systems that convert AI outputs into text-based summaries of cancer findings.
    5. Critical Analysis: Reviewing model results, identifying errors, and discussing strategies for improvement.
    6. Team Meetings: Participating in weekly research group discussions to share progress, troubleshoot challenges, and plan next steps.
    7. Communication: Preparing short reports or presentations summarizing weekly progress and results.

    By the end of each semester, students will contribute to a collective research report and potentially present findings at KSU's Undergraduate Research Showcase or similar venues.

  • Face-to-Face
  • Dr. Muhammad Imran, mimran2@kennesaw.edu 

Information Technology (Chloe Xie)

Understanding Alzheimer's Disease Through Protein Interactions

  • Alzheimer's disease is one of the most common forms of dementia, affecting millions of people and their families. Although scientists know some of the key proteins involved in the disease, such as tau and amyloid-beta, there is still much to learn about how these proteins interact and contribute to memory loss and brain degeneration.

    In this project, students will work with Dr. Chloe Yixin Xie and her research team in the X-Lab to explore how proteins in the brain behave and interact with each other, especially in the early stages of Alzheimer's disease. Using computer-based simulations and artificial intelligence (AI), we will model how these proteins move, connect, and change shape. The goal is to better understand which interactions may be harmful and contribute to the disease, and which might be targeted in future treatments.

    First-year students will gain hands-on experience in reading scientific literature, running simulations, analyzing data, and communicating their findings to broader audiences. No prior experience with programming or biology is required, just curiosity and a willingness to learn. This is a great opportunity for students interested in health, science, or technology to get involved in real-world research that could help improve people's lives.

  • Students in this project will gain hands-on experience in computational biology and AI-driven research. They will learn to read and interpret scientific articles, understand the basics of protein structure and function, and explore how these proteins contribute to Alzheimer's disease.

    Technical skills include:

    • Running molecular simulations to observe protein behavior
    • Using tools like ChimeraX to visualize molecular structures
    • Analyzing data with Python and other computational tools
    • Learning the basics of machine learning and how models are trained

    Students will also develop essential research skills such as:

    • Asking scientific questions and forming hypotheses
    • Interpreting results and identifying patterns
    • Communicating findings clearly through presentations and posters

    By the end of the program, students will be prepared to engage in more advanced research, apply for fellowships, and contribute meaningfully to ongoing projects in biomedical science and AI.

  • Each week, students will participate in a mix of hands-on research, guided learning, and team collaboration:

    • Literature Review (1–2 hours): Read scientific articles on protein structure, Alzheimer's disease, and AI methods.
    • Research Tasks (5–6 hours): Run simulations, visualize protein structures using ChimeraX, and analyze data using Python.
    • Team Meeting (1 hour): Attend weekly lab meetings to share progress, learn new tools, and receive feedback.
    • Mentorship (30 minutes): Meet with the faculty mentor or graduate assistant for support and guidance.
    • Documentation (1 hour): Maintain a digital lab notebook and reflect on research progress.

    Toward the end of the program, students will prepare a research poster or presentation for a campus event or undergraduate conference.

    No prior experience is required, just curiosity and a willingness to learn.

  • Hybrid
  • Dr. Chloe Yixin Xie, yxie11@kennesaw.edu

Information Technology (Liang Zhao)

Cybersecurity in Healthcare: Protecting Patients in the Digital Age

  • As healthcare systems become increasingly digitized, they face growing risks from cyberattacks that can compromise sensitive patient data, disrupt hospital operations, and even threaten lives. From ransomware attacks shutting down hospitals to medical device vulnerabilities, healthcare cybersecurity is one of the most urgent and high-impact areas in information technology today.

    In this project, students will explore how cyber threats affect electronic health records (EHRs), medical imaging systems, wearable health devices, and telemedicine platforms. We'll look at the common vulnerabilities in healthcare IT systems and the unique challenges that make the healthcare industry a prime target for attackers. Students will study real-world incidents, examine emerging threats, and review defense strategies such as encryption, access control, network segmentation, and zero-trust architecture.

    The main deliverable of this project is a survey paper written by the student that synthesizes existing research and provides insights into best practices for protecting healthcare data and systems. The paper will aim to educate peers and potentially contribute to a future publication or grant proposal. Students will be guided through the entire research process, from identifying relevant literature to organizing findings and improving scientific writing.

    • Gain an understanding of core cybersecurity principles in healthcare systems.
    • Learn to conduct a thorough literature review on technical topics.
    • Improve skills in research organization, academic writing, and citation.
    • Develop the ability to analyze real-world cyberattacks and defense strategies.
    • Enhance public speaking and presentation skills through weekly updates and symposium participation.
    • Explore interdisciplinary topics that combine IT, public health, and ethics.
    1. Read and summarize 2–3 academic or industry papers on healthcare cybersecurity each week.
    2. Maintain a running annotated bibliography of relevant sources.
    3. Meet weekly with the mentor to present findings and receive feedback.
    4. Draft sections of the survey paper and revise based on mentor comments.
    5. Attend occasional guest seminars or virtual talks related to healthcare IT or security (optional but encouraged).
  • Hybrid
  • Dr. Liang Zhao, lzhao10@kennesaw.edu 

Information Technology (Honghui Xu)

Building a Safe and Robust Multimodal LLM Using Dual-Differential Privacy

  • In recent years, AI has become capable of understanding both images and text, much like how humans look at a picture and describe what they see. These advanced systems are called Multimodal Large Language Models (MLLMs). They are increasingly used in real-world applications such as virtual assistants, medical diagnostic tools, and educational platforms. However, these models often learn from sensitive data, including private photos or personal documents. This raises an important question: how can we protect people's privacy while still allowing MLLMs to learn effectively?

    This research project introduces a new method for training MLLMs in a way that better safeguards personal information. The method is called Dual-Differential Privacy, or Dual-DP. It works by adding carefully controlled randomness, known as "noise," at two important stages of the training process. The first is when the model processes images or text, and the second is when the model adjusts its internal settings to improve. This two-layered approach significantly reduces the chance that anyone can trace specific data back to individual users. What makes this project particularly exciting is that it goes beyond privacy. We also focus on keeping the model smart, accurate, and stable. We test the method on well-known MLLMs and find that they continue to perform well, even with strong privacy protections in place.

    First-year students who join this project will take part in building more responsible and trustworthy AI systems. They will help test models, visualize experimental results, and explore how to balance privacy with performance. No prior experience in artificial intelligence is required. Students only need curiosity, motivation, and a willingness to learn.
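    As a rough, toy-scale sketch of the two-stage idea described above, the code below adds Gaussian noise once to the input features and once to the gradients, with a simple linear model standing in for an MLLM. All names and values are illustrative assumptions, not the project's actual Dual-DP implementation.

```python
import random

# Toy illustration of two-stage ("dual") noise injection:
# stage 1 perturbs the encoded inputs, stage 2 perturbs the gradients.

def noisy(values, sigma):
    """Add zero-mean Gaussian noise with standard deviation sigma."""
    return [v + random.gauss(0.0, sigma) for v in values]

def dual_dp_step(weights, features, target, lr=0.1,
                 input_sigma=0.05, grad_sigma=0.05):
    # Stage 1: perturb the encoded input features.
    features = noisy(features, input_sigma)
    # A linear model stands in for the much larger MLLM.
    prediction = sum(w * f for w, f in zip(weights, features))
    error = prediction - target
    grads = [error * f for f in features]
    # Stage 2: perturb the gradients before the weight update.
    grads = noisy(grads, grad_sigma)
    return [w - lr * g for w, g in zip(weights, grads)]

random.seed(0)
w = dual_dp_step([0.0, 0.0], [1.0, 2.0], target=1.0)
```

    Larger noise levels give stronger privacy but noisier learning, which is exactly the accuracy-versus-privacy trade-off students will measure.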

  • Students involved in this project will gain hands-on experience in several key areas of AI and data privacy. They will:

    1. Learn foundational concepts in AI, such as how large language models (LLMs) and multimodal systems (image and text together) work.
    2. Understand the importance of differential privacy, a mathematical approach that protects sensitive data in AI models.
    3. Gain basic proficiency in Python programming and LLM fine-tuning tools.
    4. Work with datasets and practice designing and running experiments, analyzing results, and communicating technical findings clearly.
    5. Explore the ethical implications of AI and data privacy in real-world applications.

    By the end of the project, students will have developed skills in research thinking, computational reasoning, and collaborative teamwork, all of which are valuable for future STEM careers and graduate study.

  • Each week, students will participate in the following activities:

    1. Attend a 1-hour research meeting to discuss goals, progress, and challenges.
    2. Complete guided reading or video modules to build background knowledge on AI, multimodal systems, and privacy.
    3. Run small-scale experiments using Jupyter notebooks or Google Colab to test simple AI models.
    4. Implement different privacy settings to see how the model's predictions change under privacy protection.
    5. Help plot results and analyze accuracy-versus-privacy trade-offs.
    6. Collaborate with mentors and peers to summarize findings or prepare slides for group discussions.
    7. Receive weekly mentorship on topics like coding and data handling practices.

    All tasks are scaffolded to support beginners, with an emphasis on exploration and curiosity. No prior coding or research experience is required. The PI's research team will teach students everything they need to know to contribute meaningfully to the project.

  • Hybrid
  • Dr. Honghui Xu, hxu10@kennesaw.edu 

Information Technology (Taeyeong Choi)

Smell-GPT: Enabling Language Models to Perceive Smell via Electronic Noses

  • This project, Smell-GPT: Enabling Language Models to Perceive Smell via Electronic Noses, explores a novel intersection between sensor-based olfaction and artificial intelligence. The goal is to introduce undergraduate students, particularly first-year students, to hands-on research that combines physical sensing, data collection, and large language model (LLM) reasoning in an accessible and engaging way.

    Electronic noses (e-noses) use sensor arrays to detect and characterize odors by capturing the chemical signatures of volatile compounds. While such sensors are increasingly used in food safety, environmental monitoring, and health diagnostics, interpreting their outputs remains a technical challenge. In parallel, large language models like GPT-4 have demonstrated remarkable capabilities in understanding structured prompts, performing classification tasks, and reasoning over unfamiliar data when given examples.

    In this project, students will collect e-nose data from a variety of common odor sources (e.g., vinegar, lemon, coffee, garlic) in a controlled setting. The focus will be on building a small, labeled dataset representing these odor categories. Rather than training complex machine learning models, students will explore how this raw or lightly preprocessed sensor data can be formatted and presented to language models for classification using prompt-based approaches.

    For example, students will design structured prompts that present the LLM with sensor readings from known samples and ask it to predict the class of an unknown sample. By experimenting with different prompt styles, sample orders, and data representations, students will gain hands-on experience in prompt engineering and understand how LLMs can be used for basic reasoning tasks, even in domains they were not explicitly trained for.
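    A minimal sketch of what such a few-shot structured prompt might look like, assuming a made-up reading format, sensor values, and labels (the project will define its own):

```python
# Hypothetical prompt builder: labeled e-nose readings as few-shot
# examples, followed by the unknown reading for the LLM to classify.

def format_reading(reading):
    return ", ".join(f"s{i}={v:.2f}" for i, v in enumerate(reading, 1))

def build_prompt(labeled_examples, unknown):
    lines = ["You are given e-nose sensor readings and their odor labels."]
    for label, reading in labeled_examples:
        lines.append(f"Reading: {format_reading(reading)} -> Label: {label}")
    # The trailing blank label invites the model to complete the pattern.
    lines.append(f"Reading: {format_reading(unknown)} -> Label:")
    return "\n".join(lines)

examples = [("vinegar", [0.91, 0.12, 0.40]),
            ("coffee",  [0.22, 0.80, 0.55])]
prompt = build_prompt(examples, [0.88, 0.15, 0.42])
```

    The resulting string would then be sent to an LLM API; varying the example order and number formatting is one of the experiments described above.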

    This project emphasizes accessibility, creativity, and foundational skill-building. Students will learn about sensor technology, data formatting, supervised classification, and natural language prompts, while also reflecting on the broader implications of sensory AI. Deliverables may include a small annotated dataset, a summary of prompt-based experiments, and a poster or demo showcasing how AI models can begin to "understand" smells.

    By focusing on a tangible and creative task with minimal technical barriers, this project serves as an ideal entry point into interdisciplinary AI research, equipping first-year students with both curiosity and confidence.

  • Students working on this project will develop technical and computational skills through hands-on research in sensor data processing and artificial intelligence. This project is ideal for students with an interest in programming, machine learning, or computational science.

    Participants will gain practical experience in the following areas:

    • Sensor Interfacing & Data Collection: Students will operate an electronic nose (e-nose) to collect time-series data from various odor sources. They will learn how to sample, store, and label sensor readings consistently for downstream analysis.
    • Python Programming & API Usage: Students will write Python scripts to interface with the e-nose and to interact with large language model (LLM) APIs (e.g., OpenAI's GPT-4). They will learn to format input data, send structured prompts, and process model responses.
    • Data Preprocessing & Feature Engineering: Students will perform basic data preprocessing such as normalization, smoothing, and dimensionality reduction (e.g., PCA) to extract meaningful features from sensor data.
    • Prompt Engineering & Model Evaluation: Students will design and test structured prompts for LLMs, evaluating classification performance using accuracy, precision, recall, and confusion matrices. They will conduct experiments with few-shot and zero-shot prompting techniques.
    • Scientific Computing & Documentation: Students will document their methods and results using Jupyter notebooks, version control (Git), and clear code commenting. They will summarize findings in technical posters or reports.

    By the end of the project, students will have developed a basic end-to-end AI pipeline, from physical sensing to computational inference, and will have quantitative evidence of their work through small datasets, classification metrics, and reproducible scripts. This experience will prepare students for future research in AI, data science, or engineering.

  • Students participating in this project will engage in a variety of hands-on and learning-based activities each week to support the research project.

    Each week, students will meet in person for a 1-hour research meeting with the faculty advisor or a graduate student research assistant. In these meetings, students will report their progress, discuss any challenges, and receive feedback and guidance for the next steps. These meetings are also opportunities to learn how research teams collaborate and solve problems.

    Throughout the week, students will work on the following tasks:

    • Data Collection and Processing
      Students will collect odor data using an electronic nose device and prepare it for use in the project by organizing and labeling it carefully.
    • Python Programming
      Students will write and modify simple Python scripts to help build the data pipeline. This includes formatting the data and sending it to an AI model for interpretation.
    • Model Evaluation and Experimentation
      Students will test how well the AI model performs in recognizing different smells. They will run experiments and track how accurate or consistent the results are.
    • Reading and Independent Learning
      Each week, students will be expected to read short articles, tutorials, or book chapters related to the project. These resources will help them build skills in areas like data handling, AI concepts, or sensor basics.
    • Documentation
      Students will keep records of their work, including code, results, and observations, to help with communication and reproducibility.

    This project provides a consistent and supportive weekly routine designed to grow students' skills in programming, data analysis, experimentation, and research communication.

  • Hybrid
  • Dr. Taeyeong Choi, tchoi3@kennesaw.edu 

Information Technology (Nazmus Sakib and Nursat Jahan)

AI and Machine Learning-Based Solutions for Identifying Early Cardiovascular Disease Risks

  • Cardiovascular disease (CVD) is a group of conditions affecting the heart and blood vessels that often lead to serious health problems such as heart attacks and strokes. It is a major cause of death worldwide, claiming millions of lives each year. Although CVD is largely preventable, early detection remains challenging because traditional diagnostic methods often identify risks only after symptoms appear, leaving limited time for intervention. Artificial intelligence (AI) and machine learning (ML) are opening new possibilities to change this. By analyzing large amounts of health and lifestyle data, AI can uncover patterns that are hard for humans to see, revealing early warning signs and predicting who might be at higher risk.

    In this project, we will use AI and ML to analyze information such as heart rate, blood pressure, cholesterol levels, and lifestyle habits. We will build systems that learn from these data to find connections between health indicators and heart disease risk. Our health informatics system will handle feature extraction, model training, and prediction to identify individuals at elevated risk of CVD. Both classical machine learning models and deep learning architectures will be explored to capture complex relationships within the data. Our aim is to build a smart, data-driven tool that can identify CVD risks earlier, guide personalized prevention plans, and improve overall heart health outcomes.
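    As a toy, self-contained illustration of the kind of model involved, the sketch below trains a logistic-regression risk classifier with plain gradient descent on synthetic, standardized health indicators. The data, thresholds, and function names are invented for illustration and are not the project's actual system.

```python
import math

# Toy risk model: logistic regression over three standardized (z-scored)
# indicators, e.g. resting heart rate, systolic blood pressure, cholesterol.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=500):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for features, label in zip(X, y):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b)
            err = p - label  # gradient of log loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, features)]
            b -= lr * err
    return w, b

# Synthetic patients: low-indicator cases labeled 0, high-indicator cases 1.
X = [[-1.0, -0.8, -0.5], [-0.6, -1.1, -0.9],
     [ 0.9,  1.2,  0.7], [ 1.1,  0.8,  1.3]]
y = [0, 0, 1, 1]
w, b = train(X, y)
risk = sigmoid(sum(wi * xi for wi, xi in zip(w, [1.0, 1.0, 1.0])) + b)
```

    Real datasets would require careful preprocessing, validation splits, and fairness checks before any such score could guide prevention plans.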

    • Develop the ability to write clear, well-structured academic papers by exploring and addressing common writing challenges
    • Build foundational skills in AI, ML, and deep learning through hands-on implementation
    • Strengthen problem-solving and critical thinking for real-world challenges
    • Gain hands-on experience in analyzing datasets, identifying patterns, and using these insights to support research and decision-making
    • Practice research communication through presentations and workshop-style discussions
  • Students will engage in reviewing and evaluating scientific literature, developing research writing skills, implementing algorithms and conducting experiments, gaining hands-on machine learning experience, and presenting their research findings.
  • Hybrid
  • Dr. Nazmus Sakib, nsakib1@kennesaw.edu 

    Nursat Jahan, njahan2@students.kennesaw.edu 

Information Technology (Maria Valero de Clemente and Valentina Nino)

Supporting Nurses with Wearable Exoskeletons: Reducing Injury Risk and Enhancing Well-Being

  • This research project explores the potential of wearable exoskeletons to support nurses in physically demanding tasks and reduce the risk of injury and burnout. Nurses often perform repetitive lifting, bending, and patient transfers that place significant strain on the body, contributing to high rates of musculoskeletal injuries and fatigue. As the healthcare system continues to face growing demands, innovative solutions are needed to protect the well-being of frontline workers. Exoskeletons are wearable devices designed to provide mechanical support to the body, particularly the back and arms, to assist with heavy lifting and prolonged physical effort.

    This study aims to evaluate the usability, comfort, and effectiveness of several exoskeleton models in realistic nursing scenarios. The focus will be on how these devices affect muscle activity, movement patterns, and perceived workload. First-year student assistants will play an essential role in supporting the data collection process. Responsibilities will include preparing the research lab, assisting with the setup and calibration of equipment such as motion sensors and video cameras, and supporting participants as they complete tasks with and without exoskeletons. Students will help attach sensors, monitor participant safety, record observations, and distribute surveys assessing comfort and ease of movement.

  • Technical Skills:

    • Setting up and calibrating advanced equipment such as motion capture systems, muscle sensors, and video cameras.
    • Safely attaching sensors and monitoring participants during simulated nursing tasks.
    • Collecting and organizing biomechanical data, as well as distributing and compiling participant surveys.
    • Following established research protocols to ensure accurate and reliable data collection.

    Professional and Transferable Skills:

    • Teamwork and collaboration by working closely with faculty, graduate students, and peers in the HOPE Lab.
    • Communication skills through interacting with study participants and sharing observations with the research team.
    • Problem-solving and adaptability while assisting with experimental sessions and equipment setup.
    • Data management and organization, skills that are useful in both academic and professional settings
  • Each week, student assistants will be actively involved in supporting the research process within the HOPE Lab. Their duties will include preparing the lab for study sessions by setting up equipment, organizing materials, and ensuring a safe environment for participants. Students will assist with attaching and calibrating motion sensors, muscle sensors, and video cameras, learning step-by-step how to handle advanced research tools.

    During study sessions, students will help guide participants through simulated nursing tasks, such as lifting weighted mannequins or transferring patients, both with and without exoskeletons. Responsibilities will include monitoring participant safety, recording observations, and distributing short surveys to collect feedback on comfort, usability, and ease of movement. Students will also learn how to troubleshoot basic issues with equipment under the supervision of faculty and graduate mentors.

    Outside of data collection sessions, students will assist with organizing, labeling, and cleaning datasets to ensure accuracy and completeness. They may also help with simple data entry and review of participant surveys. Regular team meetings will provide opportunities for students to share observations, ask questions, and contribute ideas for improving study procedures.

  • Face-to-Face
  • Dr. Maria Valero de Clemente, mvalero2@kennesaw.edu 

    Dr. Valentina Nino, lvallad1@kennesaw.edu 

Information Technology (Xu Tao)

Tiny AI for Healthy Crops: Detecting Plant Diseases with Smart Sensors

  • Feeding a growing world population is one of today's greatest challenges, and plant diseases are a serious threat to global food security. If left unchecked, they can spread quickly and cause major crop losses, reducing both food supply and farmer income. Detecting diseases early is essential to protect crops, but traditional methods can be slow, expensive, or hard to scale. This project explores how artificial intelligence (AI) and modern communication technologies can make farming smarter and more sustainable.

    The goal of this project is to design and develop a cost-effective, end-to-end system for automatic crop disease detection. The system combines four key technologies: 1) A low-power microcontroller board equipped with a simple color (RGB) camera captures images of crop leaves in the field. These devices are inexpensive, portable, and energy efficient, making them suitable for large-scale farming. 2) A tiny machine learning (TinyML) model runs directly on the microcontroller to analyze leaf images and detect possible signs of disease without needing internet access. 3) A more powerful, pre-trained AI model is hosted on a remote server to provide backup analysis whenever the microcontroller's prediction is uncertain. 4) The devices communicate with the server using LoRa, a wireless technology designed for long distances and low energy consumption, making it ideal for agricultural fields.

    Students in this project will explore model compression techniques to create TinyML models and develop optimization algorithms to achieve high crop disease detection accuracy while minimizing device energy use and data transmission. Through hands-on work with TinyML, microcontrollers, and wireless communication, students will gain practical experience in AI-powered smart farming and learn how technology can help protect crops, improve yields, and support sustainable agriculture.
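    One compression idea behind TinyML is weight quantization: storing model parameters as small integers instead of floating-point numbers. The toy sketch below shows 8-bit affine quantization of a short weight list; the function names and values are illustrative assumptions, not the project's code.

```python
# Toy sketch: 8-bit affine quantization, a common model-compression step.

def quantize(weights, bits=8):
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1  # 255 integer steps for 8 bits
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo  # integers plus the parameters to reconstruct

def dequantize(q, scale, lo):
    return [qi * scale + lo for qi in q]

w = [-0.31, 0.02, 0.47, -0.15, 0.23]
q, scale, lo = quantize(w)
w_hat = dequantize(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))  # at most scale / 2
```

    Each weight now fits in one byte instead of four or eight, which is what lets models run on memory-limited microcontrollers at a small cost in accuracy.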

    This project spans multiple fields, allowing students from different backgrounds to contribute their expertise:

    • Computing and Software Engineering Students: Develop TinyML models and design optimization algorithms for accurate, energy-efficient crop disease detection.
    • Engineering Students: Implement and integrate system components, including microcontrollers, RGB cameras, and LoRa communication modules.
    • Agriculture-Related Majors: Provide domain knowledge on crop diseases to improve model performance and ensure the system aligns with real-world farming needs.
  • As a first-year scholar in this project, you will gain practical experience at the intersection of embedded systems, artificial intelligence, and wireless communication. By the end of the project, you will be able to:

    Technical Skills:

    • Acquire model compression techniques that make it possible to run machine learning models on tiny, resource-limited devices.
    • Run TinyML models on microcontrollers and use them to analyze crop images captured by an RGB camera.
    • Integrate microcontrollers with energy-efficient wireless communication modules (such as LoRa) for long-range data transmission.
    • Formulate research problems and propose optimization algorithms to solve them. For example, develop methods that allow devices in the field to collaborate with a remote server, achieving higher crop disease detection accuracy while minimizing energy consumption and data transmission.

    Research & Professional Skills:

    • Apply critical thinking and problem-solving to address real-world challenges.
    • Communicate research findings effectively, both in writing and through oral presentations.
    • Collaborate with peers in discussing ideas, designing experiments, and interpreting results.
    • Develop confidence in independent learning by exploring new tools, coding frameworks, and hardware platforms.
  • Technical Work

    • Work on TinyML model development, including training, compressing, and testing lightweight AI models.
    • Deploy models onto microcontrollers with RGB cameras and run experiments with crop leaf images.
    • Collect, label, and analyze image data to improve model accuracy.
    • Test low-power wireless communication (LoRa) to send data to a remote server efficiently.

    Meetings and Collaboration

    • Attend a weekly project meeting with the faculty mentor and research team to discuss progress, challenges, and next steps.
    • Meet with peers or graduate student assistants as needed for guidance and troubleshooting.
    • Share updates, provide feedback, and collaborate on problem-solving during team discussions.

    Professional Development

    • Maintain a simple research log or lab notebook to track experiments, observations, and insights.
    • Prepare and deliver short mini-presentations or updates to practice communication skills.
    • Reflect on learning progress and identify areas for improvement.
  • Hybrid
  • Dr. Xu Tao, xtao@kennesaw.edu 

Information Technology (Xuechen Zhang)

Computational Storage Virtualization for Accelerating Data-Driven Applications

  • Data-driven scientific applications may suffer from significant overhead when loading huge amounts of scientific data from storage servers to compute nodes, leading to CPU/GPU stalls and longer workflow execution times. Emerging computational storage devices offer an opportunity to reduce this data-transfer overhead by offloading computation to the devices for in-storage computing. However, the existing IO stack achieves only sub-optimal performance with computational storage devices, for three reasons. (1) The random-access pattern of data-driven applications makes the workloads cache-unfriendly. (2) No effective IO interface exists for users to utilize computational storage devices. (3) There is a lack of coordination among IO layers on when and where to execute the offloaded computations. In this project, we propose a set of new designs focusing on an HDF5-compatible interface, a cross-layer runtime for executing offloaded kernels, and co-scheduling of IO and computation to address these issues. We will implement them in the open-source software λ-HDF5. Real-world scientific applications will be able to use λ-HDF5 to access computational storage devices with improved performance.
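    As a back-of-the-envelope illustration of the co-scheduling question (when and where to execute an offloaded kernel), the toy sketch below compares estimated host-side and in-storage execution times. The cost model, parameter names, and numbers are invented for illustration and are not part of λ-HDF5.

```python
# Hypothetical cost model: offload a kernel to the storage device only
# when moving the data to the host would cost more than the device's
# slower compute. All figures below are made up for illustration.

def should_offload(data_bytes, host_flops_per_s, dev_flops_per_s,
                   link_bytes_per_s, kernel_flops):
    # Host path pays the transfer over the interconnect plus fast compute.
    host_time = data_bytes / link_bytes_per_s + kernel_flops / host_flops_per_s
    # In-storage path skips the transfer but computes more slowly.
    dev_time = kernel_flops / dev_flops_per_s
    return dev_time < host_time

# A 1 GiB scan-style kernel: cheap compute, expensive transfer -> offload.
offload = should_offload(data_bytes=2**30, host_flops_per_s=1e12,
                         dev_flops_per_s=5e10, link_bytes_per_s=3e9,
                         kernel_flops=1e9)
```

    Transfer-bound kernels (filters, scans) favor in-storage execution, while compute-bound kernels favor the host, which is why the runtime must decide per kernel.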
    1. Learn the state-of-the-art techniques used in file and storage systems and systems for AI.
    2. Learn about the simulation, emulation, and hardware platforms for storage system evaluation.
    3. Learn the entire process of science from making observations and generating hypotheses through publishing scientific papers.
    4. Increase communication skills by presenting papers in lab meetings and increase the ability to work in a research team.
    5. Increase critical thinking and data interpretation skills.
    1. Students will help develop and evaluate the λ-HDF5 software.
    2. Students will meet faculty advisors to discuss the project progress.
    3. Students will attend weekly group meetings.
  • Hybrid
  • Dr. Xuechen Zhang, xzhang54@kennesaw.edu 

Software Engineering and Game Development (Sungchul Jung)

Beyond the Screen: Measuring Engagement and Impact in VR-Based Learning Environments 

  • This research project explores the future of online education through highly immersive Virtual Reality (VR) classroom environments. It specifically investigates how immersive VR learning settings influence learners' attention, emotional responses, and overall learning outcomes. By integrating advanced extended reality (XR) technologies with biometric data, including brain activity (e.g., EEG), physiological signals, and learning task performance, this study aims to uncover deep insights into the cognitive and emotional mechanisms that underlie learning in immersive environments. This study will employ a VR classroom system custom-built in our laboratory using the Unreal Engine, deployed with VR headsets to simulate realistic educational scenarios. Data will be collected through user interactions within these environments to analyze engagement, perception, and emotion during the learning process.

    The First-Year Scholar (FYS) student will work collaboratively with a graduate student to support this research, contributing to system testing, data analysis, and the interpretation of cognitive and perceptual trends observed during VR-based learning. Through this hands-on research experience, the FYS student will gain valuable exposure to immersive system design, human-subject research, and affective computing. The long-term vision of this project is to establish a foundational framework for dynamic, adaptive, and empathic immersive learning systems. Such systems could be widely applied across K-12, higher education, and remote learning platforms, supporting diverse learners and personalized instruction.

    • Engage with cutting-edge XR and affective technologies applied to real-world educational challenges.
    • Contribute to research aimed at journal or top-tier conference publication, with the student involved in all phases.
    • Design and develop interactive VR/MR prototypes using Unreal Engine.
    • Operate physiological sensors (e.g., EEG, GSR, heart rate) to collect and analyze biometric data related to attention and emotional state.
    • Critically review and present scholarly literature on immersive learning and cognitive-affective research.
    • Communicate findings through research posters, presentations, and technical documentation.
    • Build a foundation for future research opportunities (e.g., Sophomore Scholars Program, Summer Undergraduate Research Program) based on performance.
    • Participate in weekly lab meetings (in-person), and one-on-one mentoring sessions.
    • Submit regular progress reports on system development and study milestones.
    • Collaborate on the design, execution, and analysis of experimental studies.
    • Iteratively refine VR learning experiences based on user testing and feedback.
    • Demonstrate consistent engagement, curiosity, and initiative in all aspects of the project.
  • Face-to-Face
  • Dr. Sungchul Jung, sjung11@kennesaw.edu 

Software Engineering and Game Development (Brooke Zhao)

Taking Augmented Reality Beyond the Lab

  • Augmented Reality (AR) smartglasses let us see digital information directly in our line of sight. Imagine walking outdoors while navigation directions, maps, or helpful tips about your surroundings appear seamlessly in front of you. Recent advances are making AR headsets more practical than ever: lighter designs make them comfortable to wear for longer periods, Vision Positioning Systems (VPS) provide centimeter-level tracking accuracy so your headset knows exactly where you are, and powerful artificial intelligence (AI) acts like a "second brain," ready to assist with everyday tasks.

    This project takes AR beyond the lab, exploring how to use the latest AR headsets in outdoor, on-the-move, real-world scenarios. We will combine technologies like VPS, AI, and creative interaction techniques, along with elements of game design, to create engaging and useful applications. You might: 

    • Design AR navigation that gives hands-free, real-time directions to your destination
    • Build interactive field guides to explore historical landmarks, plants, or wildlife
    • Create safety systems that warn about hazards in construction zones or busy streets
    • Invent collaborative outdoor games where the entire campus becomes your game board

    As a first-year scholar on the project, you will have the opportunity to design, prototype, and test AR applications that go beyond the lab. You will work with cutting-edge AR devices, learn about spatial computing, experiment with interaction methods, and explore how AI can make AR experiences more personalized and intelligent. You will also test your prototypes with real users and collect data to evaluate your designs using psychology-based research methods. 

    The goal is to discover what today's AR technology can (and can't) do in real-world situations and to imagine new ways it could enhance daily life. Along the way, you'll gain hands-on experience in creative problem-solving, teamwork, user experience design, and emerging technologies that are shaping the future.

    • Design and build functional augmented reality prototypes.
    • Apply interdisciplinary methods from computer science, psychology, design, and engineering.
    • Engage in the full research process, from question formulation and literature review through prototyping, data collection, analysis, and interpretation.
    • Present original work at research symposia, conferences, or in publications.
    • Design, develop, and refine AR prototypes aligned with project goals and milestones.
    • Review and summarize relevant research literature to inform design and evaluation.
    • Conduct user testing and capture observations and feedback.
    • Submit biweekly reports documenting progress, challenges, lessons learned, and planned next steps.
    • Attend biweekly meetings with the faculty mentor and team to review progress and plan next steps.
  • Hybrid
  • Dr. Brooke Zhao, yzhao20@kennesaw.edu 

Software Engineering and Game Development (Chenyu Wang)

AOI-Meta: Augmented Object Intelligence for the Metaverse

  • The metaverse is often described as the next evolution of the internet, where people can interact with one another and with digital content in immersive three-dimensional spaces. Unlike traditional websites or apps, the metaverse combines elements of virtual reality (VR), augmented reality (AR), and extended reality (XR) to create environments where physical and digital worlds overlap. In this space, people are not just viewers but active participants who can play, learn, work, and create new digital objects. XR, in particular, provides tools to blend real-world surroundings with virtual content, making daily life more interactive and engaging.

    One of the challenges in the metaverse is how to make physical objects and digital assets work together in meaningful ways. Current XR systems often treat physical items as static backgrounds, while digital assets remain separate and difficult to create or manage without technical skills. Recent research on Augmented Object Intelligence (AOI) introduces the idea that everyday objects can act like digital portals. For example, a cooking pot could display recipe steps or start a timer simply by interacting with it in XR. This approach, called XR-Objects, shows how object recognition and artificial intelligence can make real-world items more interactive.

    Building on this concept, our project looks at extending AOI to support digital asset generation and interaction in the metaverse. Instead of just displaying information, interactions with physical objects could create digital objects (such as 3D models, virtual tools, or collectible items) that can be stored, shared, or traded across different metaverse platforms. By linking these assets with technologies like blockchain, we can also ensure that ownership and history of use are secure and transparent. This makes physical entities not only interactive but also valuable for education, commerce, and creative industries. The project will develop simple pipelines in which interacting with objects or using gestures automatically creates corresponding AOI, which come with contextual information and can be modified through easy-to-use XR interfaces, such as voice commands or visual menus. In doing so, this work will help lower the barriers to creation at the fusion of digital and physical space, making it accessible not only to experts but also to everyday users. It envisions a future where physical and digital realities co-exist and co-create, turning everyday interactions into opportunities for learning, creativity, and meaningful engagement in the metaverse.
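    The pipeline described above (recognize a physical object, generate a digital asset, record its ownership) can be sketched in miniature. Every class, function, and field below is an illustrative assumption for this sketch, not the project's actual implementation; a content hash stands in for a blockchain-backed ownership record:

```python
# Toy AOI pipeline sketch: object recognition -> asset creation ->
# ownership record. Names and structure are hypothetical.

from dataclasses import dataclass
import hashlib
import time

@dataclass
class DigitalAsset:
    name: str          # e.g. "cooking pot" recognized in the XR view
    creator: str       # user who triggered the interaction
    created_at: float  # creation timestamp
    asset_id: str = ""

    def __post_init__(self):
        # Content-derived id stands in for a blockchain-backed record
        # that would make ownership and history verifiable.
        seed = f"{self.name}:{self.creator}:{self.created_at}".encode()
        self.asset_id = hashlib.sha256(seed).hexdigest()[:16]

def recognize_object(frame_label: str) -> str:
    # Placeholder for a computer-vision model's output label.
    return frame_label.lower().strip()

def create_asset(frame_label: str, user: str) -> DigitalAsset:
    """Turn a recognized physical object into a tradable digital asset."""
    obj = recognize_object(frame_label)
    return DigitalAsset(name=obj, creator=user, created_at=time.time())
```

In a real system, `recognize_object` would be backed by a vision model and the id by an actual ledger entry; the sketch only shows how the stages connect.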

    1. Learn fundamental concepts of the metaverse, extended reality (XR), and digital asset generation.
    2. Apply computer vision and multimodal AI techniques for object recognition and asset creation in immersive environments.
    3. Gain hands-on experience with XR interaction design, 3D modeling workflows, and blockchain-based asset management.
    4. Develop skills in prototyping and testing interactive systems that connect physical and digital objects.
    5. Acquire experience in collaborative research, scientific writing, and presenting results at symposia or conferences.
    6. Build interdisciplinary knowledge that combines human-computer interaction, artificial intelligence, and creative design in the context of the metaverse.
    1. Conduct literature review on XR, metaverse, and digital asset interaction.
    2. Implement and test simple prototypes for object recognition and 3D asset generation.
    3. Document progress and maintain weekly research notes.
    4. Prepare updates for mentor meetings and contribute to group discussions.
  • Hybrid
  • Dr. Chenyu Wang, cwang38@kennesaw.edu 

Software Engineering and Game Development (Allison Garefino, Lei Zhang, & Melissa Osborne)

ParentSHIELD-MR-AI: Preventing Child Injuries through Intelligent and Adaptive Immersive Virtual Reality Training for Parents

  • Injury is the leading cause of death among children ages 0-3 in the U.S. (with more than 2,300 deaths and over 1.3 million non-fatal emergency department visits from unintentional injuries in 2023). This calls for an urgent need to address home hazards through effective parent-focused safety training programs. Current approaches, typically pamphlets and short videos, offer limited learning engagement and effectiveness. By contrast, embodied experiential learning through immersive virtual reality (IVR) and real-world scenario simulations can significantly enhance both engagement and training outcomes.

    During the 2024-2025 academic year, our faculty and student team successfully developed a proof-of-concept IVR training prototype. Based on feedback from our user studies, we now aim to expand the prototype with advanced features designed to improve user experience and training effectiveness.

    The goal of the proposed project is to develop a Phase 2 prototype, ParentSHIELD-MR-AI, by implementing the following features and evaluating their effectiveness:

    1. Designing a semi-realistic 3D interior environment for the VR mode of the prototype.
    2. Integrating machine learning (ML) algorithms, such as computer vision, and large language model (LLM) APIs into a mixed reality (MR) environment for hazardous object recognition, detection, and intelligent AI tutoring.
    3. Creating intuitive 3D user interfaces (3DUIs) to support diverse user tasks during training.
    4. Conducting a pilot user study to evaluate the system's design and effectiveness.
    1. Research skills, including literature review and user study data collection and analysis.
    2. Software development skills for creating immersive interactive experiences in Unity for educational and training purposes.
    3. Interaction design skills in 3D user interfaces.
    4. Game level design skills to create 3D virtual environments.
  • Students will work on assigned prototype development tasks and give weekly progress reports during co-PI meetings. The co-PIs will share supervision responsibilities equally.

  • Hybrid
  • Dr. Allison Garefino, agarefin@kennesaw.edu 

    Dr. Lei Zhang, lzhang24@kennesaw.edu 

    Dr. Melissa Osborne, mcowart3@kennesaw.edu 

Software Engineering and Game Development (Nasrin Dehbozorgi)

AI-Augmented Learning: Leveraging LLMs to Foster Computational Thinking and Problem-Solving in the Classroom

  • This project focuses on creating interactive learning modules that teach students how to use Artificial Intelligence (AI) and prompt engineering to improve their problem-solving and computational thinking skills. These modules will be designed as plugins for D2L, the online learning platform used at KSU, so that instructors can easily integrate them into their courses.

    As an undergraduate research assistant, you will:

    • Help design and develop engaging modules, activities, and practice exercises.
    • Explore how AI tools (like ChatGPT) can support students in class assignments and projects.
    • Work on real applications of AI in education by turning ideas into tools that students and faculty can actually use.
    • Gain hands-on experience with both AI in education and instructional design.

    No prior AI or programming experience is required. If you are curious about technology, enjoy problem-solving, and want to build something that can directly impact how students learn, this project is a great opportunity to get started in research.
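    As a small illustration of what "prompt engineering" means in practice, here is a minimal Python sketch of turning a vague request into a structured prompt. The template, role text, and function name are hypothetical, invented for this sketch rather than taken from the project's actual D2L modules:

```python
# Illustrative only: a tiny helper showing the idea behind prompt
# engineering -- stating a role, the task, and explicit constraints
# instead of asking a vague question. All wording is hypothetical.

def build_prompt(task, role="patient CS tutor", constraints=None):
    """Assemble a structured prompt from a role, a task, and constraints."""
    lines = [f"You are a {role}.", f"Task: {task}"]
    for c in constraints or []:
        lines.append(f"Constraint: {c}")
    # Asking for step-by-step reasoning tends to produce more useful,
    # checkable answers than a bare question.
    lines.append("Explain your reasoning step by step before the answer.")
    return "\n".join(lines)

# Example: scaffold a debugging exercise without giving the answer away.
print(build_prompt("Trace this loop and state the final value of i.",
                   constraints=["Do not reveal the answer immediately"]))
```

The learning modules would teach students to apply this kind of structuring when working with tools like ChatGPT on class assignments.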

  • Students working on this project will gain practical experience in both artificial intelligence and educational research. They will learn to use AI tools effectively in classroom settings and practice prompt engineering, developing strategies to design clear and purposeful questions for AI. These experiences will strengthen their computational thinking and problem-solving skills, which are valuable across many academic and career paths. They will also be trained in research methods, including conducting a literature review, designing and testing learning activities, and collecting and analyzing data. In addition, they will receive mentorship in scientific writing, preparing them to contribute to research papers and presentations. Finally, they will collaborate closely with graduate students and peers in the AIET Lab, building skills in teamwork, communication, and project management. By the end of the project, they will have developed a foundation in research, data analysis, and scientific communication to prepare them for future academic and professional opportunities.
  • Students will join a weekly meeting at the AIET Lab to check in with the supervisor and other team members. These meetings are a chance to share progress, ask questions, and set goals for the week. Outside of meetings, students will spend time on project tasks such as trying out AI tools, practicing prompt engineering, reviewing articles, or helping design learning activities. Depending on the project stage, they might also help with data collection, testing, or analysis. Students will keep track of what they've worked on and write a short weekly report about their progress and next steps. They'll also work closely with graduate students and peers in the lab, getting hands-on experience with research while learning how to collaborate as part of a team.
  • Hybrid
  • Dr. Nasrin Dehbozorgi, dnasrin@kennesaw.edu